<chapter id="internals">
<title>xine internals</title>
<sect1>
<title>Engine architecture and data flow</title>
<mediaobject>
<imageobject>
<imagedata fileref="architecture.png" format="PNG">
</imageobject>
<imageobject>
<imagedata fileref="architecture.eps" format="EPS">
</imageobject>
<caption>
<para>xine engine architecture</para>
</caption>
</mediaobject>
<para>
Media streams usually consist of audio and video data multiplexed
into one bitstream in the so-called system-layer (e.g. AVI, Quicktime or MPEG).
A demuxer plugin is used to parse the system layer and extract audio and video
packets. The demuxer uses an input plugin to read the data and stores it
in pre-allocated buffers from the global buffer pool.
The buffers are then added to the audio or video stream fifo.
</para>
<para>
From the other end of these fifos the audio and video decoder threads
consume the buffers and hand them over to the current audio or video
decoder plugin for decompression. These plugins then send the decoded
data to the output layer. The buffer holding the encoded
data is no longer needed and thus released to the global buffer pool.
</para>
<para>
In the output layer, the video frames and audio samples pass through a
post plugin tree, which can apply effects or other operations to the data.
When they reach the output loops, frames and samples are enqueued and
displayed once their presentation time has arrived.
</para>
<para>
A set of extra information travels with the data. Starting at the input and
demuxer level, where it is generated, this information is attached to
the buffers as they wait in the fifo. The decoder loops copy the data to
a storage of their own. From there, every frame and audio buffer leaving
the stream layer is tagged with the data the decoder loop storage currently
holds.
</para>
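<para>
 To make the buffer handling more concrete, here is a rough sketch of how a
 demuxer typically feeds the video fifo. The names follow xine's
 <filename>buffer.h</filename>, but treat the snippet as an illustration
 rather than a complete, working demuxer:
<programlisting>
/* grab a pre-allocated buffer from the global pool behind the video fifo */
buf_element_t *buf = this->video_fifo->buffer_pool_alloc(this->video_fifo);

buf->type = BUF_VIDEO_MPEG;   /* tell the decoder loop what is inside            */
buf->pts  = pts;              /* presentation time stamp, if the stream has one  */
buf->size = this->input->read(this->input, buf->content, buf->max_size);

/* enqueue it; the video decoder thread will consume the buffer and
   release it back to the global pool once it has been decoded */
this->video_fifo->put(this->video_fifo, buf);</programlisting>
</para>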
</sect1>
<sect1>
<title>Plugin system</title>
<para>
The plugin system enables some of xine's most valuable features:
<itemizedlist>
<listitem>
<para>
drop-in extensibility
</para>
</listitem>
<listitem>
<para>
support parallel installation of multiple (incompatible) libxine versions
</para>
</listitem>
<listitem>
<para>
support for multiple plugin directories
(<filename>$prefix/lib/xine/plugins</filename>,
<filename>$HOME/.xine/plugins</filename>, …)
</para>
</listitem>
<listitem>
<para>
support for recursive plugin directories
(plugins are found even in subdirectories of the plugin directories)
</para>
</listitem>
<listitem>
<para>
version management
(On start, xine finds all plugins in its plugin (sub)directories and
chooses an appropriate version (usually the newest) for each plugin.)
</para>
</listitem>
<listitem>
<para>
simplification
(Plugins don't have to follow any special naming convention,
and any plugin may contain an arbitrary subset of input, demuxer,
decoder or output plugins.)
</para>
</listitem>
</itemizedlist>
</para>
<para>
Essentially, plugins are just shared objects, i.e. dynamic libraries. In
contrast to normal dynamic libraries, they are stored outside of the
system's library search paths and libxine does its own bookkeeping, which
enables most of the advanced features mentioned above.
</para>
<sect2>
<title>Plugin location and filesystem layout</title>
<para>
The primary goal for this new plugin mechanism was the need to support
simultaneous installation of several (most likely incompatible)
libxine versions without them overwriting each other's
plugins. Therefore, we have this simple layout:
</para>
<para>
Plugins are installed below XINE_PLUGINDIR
(<filename>/usr/local/lib/xine/plugins</filename> by default).
Note that plugins are never directly installed into XINE_PLUGINDIR.
Instead, a separate subdirectory is created for each "plugin
provider". A plugin provider is equivalent with the exact version of
one source package. Typical examples include "xine-lib-0.9.11" or
"xine-vcdnav-1.0". Every source package is free to install an
arbitrary number of plugins in its own, private directory. If a
package installs several plugins, they may optionally be organized
further into subdirectories.
</para>
<para>
So you will finally end up with something like this:
<screen>
/usr/local/lib/xine/plugins
xine-lib-0.9.11
demux_mpeg_block.so
decode_mpeg.so
video_out_xv.so
…
xine-vcdnav-0.9.11
input_vcdnav.so
xine-lib-1.2
input
file.so
stdin_fifo.so
vcd.so
demuxers
fli.so
avi.so
…
decoders
ffmpeg.so
mpeg.so (may contain mpeg 1/2 audio and video decoders)
pcm.so
…
output
video_xv.so
audio_oss.so
…
xine-lib-3.0
avi.so (avi demuxer)
mpeg.so (contains mpeg demuxers and audio/video decoders)
video_out_xv.so (Xv video out)
…</screen>
</para>
<para>
As you can see, every package is free to organize plugins at will
below its own plugin provider directory.
Additionally, administrators may choose to put plugins directly into
XINE_PLUGINDIR, or in a "local" subdirectory.
Users may wish to put additional plugins in ~/.xine/plugins/.
Again, there may be subdirectories to help organize the plugins.
</para>
<para>
The default value for XINE_PLUGINDIR can be obtained using the
<command>xine-config --plugindir</command> command.
</para>
</sect2>
<sect2>
<title>Plugin Content: What's inside the .so?</title>
<para>
Each plugin library (.so file) contains an arbitrary number of (virtual)
plugins. Typically, it will contain exactly one plugin. However, it
may be useful to put a set of related plugins in one library, so they
can share common code.
</para>
<para>
First of all, what is a virtual plugin?
A virtual plugin is essentially a structure that is defined by the
xine engine. This structure typically contains lots of function
pointers to the actual API functions.
For each plugin API, there are several API versions, and each API
version may specify a new, incompatible structure. Therefore, it is
essential that only those plugins are loaded which support the current
libxine API, so the .so file needs a plugin list that
provides libxine with the version information, even before it tries to
load any of the plugins.
</para>
<para>
This plugin list is held in an array named <varname>xine_plugin_info</varname>:
<programlisting>
plugin_info_t xine_plugin_info[] = {
/* type, API, "name", version, special_info, init_function */
{ PLUGIN_DEMUX, 20, "flac", XINE_VERSION_CODE, NULL, demux_flac_init_class },
{ PLUGIN_AUDIO_DECODER, 13, "flacdec", XINE_VERSION_CODE, &dec_info_audio, init_plugin },
{ PLUGIN_NONE, 0, "", 0, NULL, NULL }
};</programlisting>
</para>
<para>
The structure of xine_plugin_info may <emphasis>never</emphasis> be changed.
If it ever needs to be changed, it must be renamed to avoid
erroneous loading of incompatible plugins.
</para>
<para>
<varname>xine_plugin_info</varname> can contain any number of plugins
and must be terminated with a <type>PLUGIN_NONE</type> entry. Available plugin
types are:
<programlisting>
#define PLUGIN_NONE 0
#define PLUGIN_INPUT 1
#define PLUGIN_DEMUX 2
#define PLUGIN_AUDIO_DECODER 3
#define PLUGIN_VIDEO_DECODER 4
#define PLUGIN_SPU_DECODER 5
#define PLUGIN_AUDIO_OUT 6
#define PLUGIN_VIDEO_OUT 7
#define PLUGIN_POST 8</programlisting>
</para>
<para>
The plugin version number is generated from xine-lib's version number
like this: MAJOR * 10000 + MINOR * 100 + SUBMINOR.
This is not required, but it's an easy way to ensure that the version
increases for every release.
</para>
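<para>
 For example, under this scheme a hypothetical xine-lib 1.2.13 would report
 1 * 10000 + 2 * 100 + 13 = 10213 as its version code. In code this is just:
<programlisting>
/* purely illustrative; the real definition lives in the xine headers */
#define XINE_MAJOR_VERSION  1
#define XINE_MINOR_VERSION  2
#define XINE_SUB_VERSION    13
#define XINE_VERSION_CODE   (XINE_MAJOR_VERSION * 10000 + \
                             XINE_MINOR_VERSION * 100 + \
                             XINE_SUB_VERSION)   /* = 10213 */</programlisting>
</para>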
<para>
Every entry in <varname>xine_plugin_info</varname> has an initialization
function for the plugin class context.
This function returns a pointer to a freshly allocated structure (typically
allocated via <function>malloc()</function>) containing mainly function
pointers; these are the "methods" of the plugin class.
</para>
<para>
The "plugin class" is not what we call to do the job yet (like decoding
a video or something), it must be instantiated. One reason for having the
class is to hold any global settings that must be accessed by every
instance. Remember that xine library is multistream capable: multible
videos can be decoded at the same time, thus several instances of the
same plugin are possible.
</para>
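<para>
 As a minimal sketch of such a class initialization function (the
 <varname>demux_foo</varname> names are made up and the exact class struct
 members depend on your libxine version, so consult
 <filename>demuxers/demux.h</filename> for the real layout):
<programlisting>
typedef struct {
  demux_class_t  demux_class;  /* "inherited" public part              */
  xine_t        *xine;         /* global state shared by all instances */
} demux_foo_class_t;

static void *demux_foo_init_class (xine_t *xine, void *data) {
  demux_foo_class_t *this = calloc(1, sizeof(demux_foo_class_t));

  this->xine                    = xine;
  this->demux_class.open_plugin = demux_foo_open_plugin;   /* factory method */
  this->demux_class.dispose     = demux_foo_class_dispose;

  return this;
}</programlisting>
</para>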
<para>
If you think this is pretty much an object-oriented approach,
then you're right.
</para>
<para>
A fictitious file input plugin that supports input plugin APIs 12 and
13, found in xine-lib 2.13.7, would then define this plugin list:
<programlisting>
#include <xine/plugin.h>
…
plugin_t *init_api12(void) {
input_plugin_t *this;
this = malloc(sizeof(input_plugin_t));
…
return (plugin_t *)this;
}
/* same thing, with different initialization for API 13 */
const plugin_info_t xine_plugin_info[] = {
  { PLUGIN_INPUT, 12, "file", 21307, NULL, init_api12 },
  { PLUGIN_INPUT, 13, "file", 21307, NULL, init_api13 },
  { PLUGIN_NONE, 0, "", 0, NULL, NULL }
};</programlisting>
This input plugin supports two APIs; other plugins might provide a
mixture of demuxer and decoder plugins that belong together somehow
(i.e. share common code).
</para>
<para>
You'll find exact definitions of public functions and plugin structs
in the appropriate header files for each plugin type:
<filename>input/input_plugin.h</filename> for input plugins,
<filename>demuxers/demux.h</filename> for demuxer plugins,
<filename>xine-engine/video_decoder.h</filename> for video decoder plugins,
<filename>xine-engine/audio_decoder.h</filename> for audio decoder plugins,
<filename>xine-engine/post.h</filename> for post plugins,
<filename>xine-engine/video_out.h</filename> for video out plugins,
<filename>xine-engine/audio_out.h</filename> for audio out plugins.
Additional information will also be given in the dedicated sections below.
</para>
<para>
Many plugins will need some additional "private" data fields.
These should simply be added at the end of the plugin structure.
For example, a demuxer plugin called "foo" with two private
fields "xine" and "count" may have a plugin structure declared in
the following way:
<programlisting>
typedef struct {
/* public fields "inherited" from demux.h */
demux_plugin_t demux_plugin;
xine_t *xine;
int count;
} demux_foo_t;</programlisting>
</para>
<para>
The plugin would then access public members via the
<varname>demux_plugin</varname> field and private fields directly.
</para>
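<para>
 A method of this hypothetical demuxer might then look like the following
 sketch; the downcast from the public type to the private one is the usual
 idiom throughout xine's plugins:
<programlisting>
static int demux_foo_get_status (demux_plugin_t *this_gen) {
  demux_foo_t *this = (demux_foo_t *) this_gen;  /* recover the private struct   */

  this->count++;                                 /* private field, direct access */
  return DEMUX_OK;                               /* public constant from demux.h */
}</programlisting>
</para>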
<para>
Summary: Plugins consist of two C-style classes, each representing a different context.
<itemizedlist>
<listitem>
<para>
The first is the so-called "plugin class" context. This is a singleton context,
which means it exists either not at all or at most once per xine context.
The plugin class context is a C-style class which subclasses the related
class from the xine plugin headers. It contains the functions which are
independent of any actual instance of the plugin. Most prominently, it contains
a factory method to instantiate the next context.
</para>
</listitem>
<listitem>
<para>
The second context is the instance context. This is another C-style class, which
is constructed and disposed of within the plugin class context. This one does
the actual work and subclasses the related plugin struct from the xine plugin
headers. It is instantiated for every separately running instance of the plugin.
</para>
</listitem>
</itemizedlist>
</para>
</sect2>
</sect1>
<sect1>
<title>What is this metronom thingy?</title>
<para>
Metronom serves two purposes:
<itemizedlist>
<listitem>
<para>
Generate vpts (virtual presentation time stamps) from pts (presentation time stamps)
for a/v output and synchronization.
</para>
</listitem>
<listitem>
<para>
Provide a master clock (system clock reference, scr), possibly provided
by external scr plugins (this can be used if some hardware decoder or network
server dictates the time).
</para>
</listitem>
</itemizedlist>
</para>
<para>
pts/vpts values are given in 1/90000 sec units. pts values in mpeg streams
may wrap (that is, return to zero or any other value without further notice),
can be missing on some frames or (for broken streams) may "dance" around
the correct values. Metronom therefore has some heuristics built-in to generate
clean vpts values which can then be used in the output layers to schedule audio/video
output.
</para>
<para>
The heuristics used in metronom have always been a field of research. The current
metronom implementation <emphasis>tries</emphasis> to stick to the pts values as reported by the demuxers,
that is, vpts is obtained by the simple operation vpts = pts + <varname>vpts_offset</varname>,
where <varname>vpts_offset</varname> takes any wraps into account. Whenever pts is zero,
metronom will estimate vpts based on previous values. If a difference is found between the
estimated vpts and the value calculated by the above formula, it will be smoothed by applying a
"drift correction".
</para>
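<para>
 In pseudo-C, the idea (greatly simplified compared to metronom's real code,
 with <varname>DRIFT_FACTOR</varname> and <varname>duration</varname> standing
 in for the real tuning values) looks like this:
<programlisting>
/* simplified illustration, not the actual metronom implementation */
int64_t vpts, diff;

if (pts) {
  vpts  = pts + this->vpts_offset;
  diff  = this->vpts_estimate - vpts;        /* deviation from the estimate       */
  this->vpts_offset += diff / DRIFT_FACTOR;  /* smooth it ("drift correction")    */
} else {
  vpts = this->vpts_estimate;                /* no pts: fall back to the estimate */
}
this->vpts_estimate = vpts + duration;       /* predict the next vpts */</programlisting>
</para>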
</sect1>
<sect1>
<title>How does xine synchronize audio and video?</title>
<para>
Every image frame or audio buffer leaving a decoder is tagged by metronom with
a vpts value. This tells the video_out and audio_out threads when that
data should be presented. Usually there isn't a significant delay associated
with the video driver, so we expect a frame to reach the screen at about the time it's
delivered for drawing. Unfortunately the same isn't true for audio: all sound
systems implement some amount of buffering (a fifo), so any data sent to them
<emphasis>now</emphasis> will only get played some time in the future. To achieve perfect A/V
sync, the audio_out thread must take this into account by asking the audio driver
for its current latency.
</para>
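<para>
 In terms of the driver API this amounts to something like the following
 sketch (see <filename>xine-engine/audio_out.h</filename> for the real
 interface; the surrounding variable names are illustrative):
<programlisting>
int64_t cur_time = this->clock->get_current_time(this->clock);
int     delay    = this->driver->delay(this->driver);  /* frames still queued */

/* vpts at which a sample written right now would actually be heard */
int64_t hw_vpts  = cur_time + (int64_t)delay * 90000 / this->output.rate;

/* positive gap: too early, wait; near zero: write the buffer; negative: late */
int64_t gap = buf->vpts - hw_vpts;</programlisting>
</para>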
<para>
Some audio drivers can't report the current delay introduced in playback. This is
especially true for most sound servers like ESD or aRts, and explains why the sync
is far from perfect in such cases.
</para>
<para>
Another problem xine must handle is sound card clock drift. vpts values are
compared to the system clock (or even to a different clock provided by an scr plugin)
for presentation, but the sound card samples audio using its own clocking
mechanism, so a small drift may occur. As playback goes on, this
error accumulates, possibly resulting in audio gaps or dropped audio buffers. To avoid that
annoying effect, two countermeasures are available (switchable with the xine config
option <parameter>audio.synchronization.av_sync_method</parameter>):
<itemizedlist>
<listitem>
<para>
The small sound card errors are fed back to metronom. The details
are given in the <filename>audio_out.c</filename> comments:
<programlisting>
/* By adding gap errors (difference between reported and expected
* sound card clock) into metronom's vpts_offset we can use its
* smoothing algorithms to correct sound card clock drifts.
* obs: previously this error was added to xine scr.
*
* audio buf ---> metronom --> audio fifo --> (buf->vpts - hw_vpts)
* (vpts_offset + error) gap
* <---------- control --------------|
*
* Unfortunately audio fifo adds a large delay to our closed loop.
*
* These are designed to avoid updating the metronom too fast.
* - it will only be updated 1 time per second (so it has a chance of
* distributing the error for several frames).
* - it will only be updated 2 times for the whole audio fifo size
* length (so the control will wait to see the feedback effect)
* - each update will be of gap/SYNC_GAP_RATE.
*
* Sound card clock correction can only provide smooth playback for
* errors < 1% nominal rate. For bigger errors (bad streams) audio
* buffers may be dropped or gaps filled with silence.
*/</programlisting>
</para>
</listitem>
<listitem>
<para>
The audio is slightly stretched or squeezed by resampling, thus compensating
for the drift. The next comment in <filename>audio_out.c</filename> explains:
<programlisting>
/* Alternative for metronom feedback: fix sound card clock drift
* by resampling all audio data, so that the sound card keeps in
* sync with the system clock. This may help, if one uses a DXR3/H+
* decoder board. Those have their own clock (which serves as xine's
* master clock) and can only operate at fixed frame rates (if you
* want smooth playback). Resampling then avoids A/V sync problems,
* gaps filled with 0-frames and jerky video playback due to different
* clock speeds of the sound card and DXR3/H+.
*/</programlisting>
</para>
</listitem>
</itemizedlist>
</para>
</sect1>
<sect1 id="osd">
<title>Overlays and OSD</title>
<para>
The roots of xine's overlay capabilities lie in the support for DVD subpictures and subtitles
(also known as 'spu'). DVD subtitles are encoded in an RLE (Run Length Encoding - the
simplest compression technique) format, with a palette of colors and transparency
levels. You probably thought that subtitles were just simple text saved onto DVDs, right?
Wrong, they are bitmaps.
</para>
<para>
In order to optimize for the most common case, xine's internal format for screen overlays
is a representation similar to the 'spu' data. This brings not only a performance
benefit (since blending functions may skip large image areas thanks to RLE) but also
compatibility: it's possible to re-encode any xine overlay to the original spu format
for display with mpeg hardware decoders like the DXR3.
</para>
<para>
Displaying subtitles requires the ability to sync them to the video stream. This
is done using the same pts/vpts mechanism as the A/V sync code. DVD subtitles,
for example, may request: show this spu at pts1 and hide it at pts2. This leads to the
concept of the 'video overlay manager', an event-driven module for managing
the showing and hiding of overlays.
</para>
<para>
The drawback of using an internal RLE format is the difficulty of manipulating it
as a graphic. To overcome that we created the 'OSD renderer', where OSD stands
for On Screen Display, just like in TV sets. The OSD renderer is a module
providing simple graphic primitives (lines, rectangles, text rendering etc.) over
a "virtual" bitmap area. Every time we want to show that bitmap, it will
be RLE encoded and sent to the overlay manager for display.
</para>
<mediaobject>
<imageobject>
<imagedata fileref="overlays.png" format="PNG">
</imageobject>
<imageobject>
<imagedata fileref="overlays.eps" format="EPS">
</imageobject>
<caption>
<para>overlays architecture</para>
</caption>
</mediaobject>
<sect2>
<title>Overlay Manager</title>
<para>
The overlay manager interface is available to any xine plugin. It's unlikely
to be used directly, but here's a code snippet for enqueueing an overlay for
display:
<programlisting>
video_overlay_event_t event;
event.object.handle = this->video_overlay->get_handle(this->video_overlay,0);
memset(event.object.overlay, 0, sizeof(*event.object.overlay));
/* set position and size for this overlay */
event.object.overlay->x = 0;
event.object.overlay->y = 0;
event.object.overlay->width = 100;
event.object.overlay->height = 100;
/* clipping region is mostly used by dvd menus for highlighting buttons */
event.object.overlay->clip_top = 0;
event.object.overlay->clip_bottom = image_height;
event.object.overlay->clip_left = 0;
event.object.overlay->clip_right = image_width;
/* the hard part: provide a RLE image */
event.object.overlay->rle = your_rle;
event.object.overlay->data_size = your_size;
event.object.overlay->num_rle = your_rle_count;
/* palette must contain YUV values for each color index */
memcpy(event.object.overlay->clip_color, color, sizeof(color));
/* this table contains transparency levels for each color index.
0 = completely transparent, 15 = completely opaque */
memcpy(event.object.overlay->clip_trans, trans, sizeof(trans));
/* set the event type and time for displaying */
event.event_type = EVENT_SHOW_SPU;
event.vpts = 0; /* zero is a special vpts value, it means 'now' */
video_overlay->add_event(video_overlay, &event);</programlisting>
</para>
</sect2>
<sect2>
<title>OSD Renderer</title>
<para>
OSD is a general API for rendering stuff over playing video. It's available both
to xine plugins and to frontends.
</para>
<para>
The first thing you need to do is allocate an OSD object for drawing from the
renderer. The code below allocates a 300x200 area. This size can't be changed
during the lifetime of an OSD object, but it's possible to place it anywhere
over the image.
</para>
<programlisting>
osd_object_t *osd;
osd = this->osd_renderer->new_object(this->osd_renderer, 300, 200);</programlisting>
<para>
Now we may want to set a font and colors for text rendering. Although we refer
to fonts throughout this document, the OSD can in fact be any kind of bitmap. Font
files are searched for and loaded during initialization from
<filename>$prefix/share/xine/fonts/</filename> and <filename>~/.xine/fonts</filename>.
There's a sample utility to convert truetype fonts at
<filename>xine-lib/misc/xine-fontconv.c</filename>. The palette may be manipulated directly,
however most of the time it's convenient to use the pre-defined text palettes.
</para>
<programlisting>
/* set sans serif 24 font */
osd_renderer->set_font(osd, "sans", 24);
/* copy the pre-defined colors for white text, black border and transparent
   background, starting at the index used by the first text palette */
osd_renderer->set_text_palette(osd, TEXTPALETTE_WHITE_BLACK_TRANSPARENT, OSD_TEXT1);
/* copy the pre-defined colors for white text, no border and translucent
   background, starting at the index used by the second text palette */
osd_renderer->set_text_palette(osd, TEXTPALETTE_WHITE_NONE_TRANSLUCID, OSD_TEXT2);</programlisting>
<para>
Now render the text and show it:
<programlisting>
osd_renderer->render_text(osd, 0, 0, "white text, black border", OSD_TEXT1);
osd_renderer->render_text(osd, 0, 30, "white text, no border", OSD_TEXT2);
osd_renderer->show(osd, 0); /* 0 stands for 'now' */</programlisting>
</para>
<para>
There's a 1:1 mapping between OSD objects and overlays, so the
second time you send an OSD object for display it will actually replace
the first image. By using the set_position() function we can move the overlay
over the video.
</para>
<programlisting>
for( i=0; i < 100; i+=10 ) {
osd_renderer->set_position(osd, i, i );
osd_renderer->show(osd, 0);
sleep(1);
}
osd_renderer->hide(osd, 0);</programlisting>
<para>
For additional functions please check osd.h or the public header.
</para>
<sect3>
<title>OSD palette notes</title>
<para>
The palette functions demand some additional explanation; skip this if you
just want to write text quickly without worrying about the details! :)
</para>
<para>
We have a 256-entry palette, each entry defining YUV and transparency values.
Although xine fonts are bitmaps and may use any index they want, we have
defined a small convention:
</para>
<programlisting>
/*
Palette entries as used by osd fonts:
0: not used by font, always transparent
1: font background, usually transparent, may be used to implement
translucid boxes where the font will be printed.
2-5: transition between background and border (usually only alpha
value changes).
6: font border. if the font is to be displayed without border this
will probably be adjusted to font background or near.
7-9: transition between border and foreground
10: font color (foreground)
*/</programlisting>
<para>
The so-called 'transitions' are used to implement font anti-aliasing. This
convention requires that any font file uses only the colors from 1 to 10.
When we use the set_text_palette() function we are just copying 11 palette
entries to the specified base index.
</para>
<para>
That base index is the same one we pass to the render_text() function to select the
text palette. With this scheme it is possible to have several different text
colors at the same time and also to draw fonts over a custom background.
</para>
<programlisting>
/* obtains size the text will occupy */
renderer->get_text_size(osd, text, &width, &height);
/* draws a box using font background color (translucid) */
renderer->filled_rect(osd, x1, y1, x1+width, y1+height, OSD_TEXT2 + 1);
/* render text */
renderer->render_text(osd, x1, y1, text, OSD_TEXT2);</programlisting>
</sect3>
<sect3>
<title>OSD text and palette FAQ</title>
<para>
Q: What is the format of the color palette entries?
</para>
<para>
A: It's the same as that used by the overlay blending code (YUV).
</para>
<para>
Q: What is the relation between a text palette and a palette
I set with xine_osd_set_palette?
</para>
<para>
A: xine_osd_set_palette will set the entire 256 color palette
to be used when we blend the osd image.
"text palette" is a sequence of 11 colors from palette to be
used to render text. that is, by calling osd_render_text()
with color_base=100 will render text using colors 100-110.
</para>
<para>
Q: Can I render text with colors in my own palette?
</para>
<para>
A: Sure. Just pass the color_base to osd_render_text().
</para>
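<para>
 For example, assuming you have already filled palette entries 64-74 with
 your own colors:
<programlisting>
osd_renderer->render_text(osd, 0, 0, "text in my own colors", 64);</programlisting>
</para>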
<para>
Q: Does a text palette change affect already drawn text?
</para>
<para>
A: osd_set_text_palette() will overwrite some colors in the palette
with pre-defined ones. So yes, it will change the colors
of already drawn text (if you do it before calling osd_show,
of course).
If you don't want to change the colors of drawn text, just
use different color_base values.
</para>
<para>
Q: What about the shadows of osd-objects? Can I turn them off
or are they hardcoded?
</para>
<para>
A: OSD objects have no shadows by themselves, but fonts use 11
colors to produce an anti-aliased effect.
If you set a "text palette" with entries 0-9 transparent
and 10 being the foreground, you will get rid of any borders or
anti-aliasing.
</para>
</sect3>
</sect2>
</sect1>
<sect1>
<title>MRLs</title>
<para>
This section defines a draft for a syntactic specification of MRLs as
used by xine-lib. The language of MRLs is designed to be a true subset
of the language of URIs as given in RFC2396. A type 2 grammar for the
language of MRLs is given in EBNF below.
</para>
<para>
Semantically, MRLs consist of two distinct parts that are evaluated by
different components of the xine architecture. The first part,
derivable from the symbol <input_source> in the given grammar, is
completely handed to the input plugins, with input plugins signaling
if they can handle the MRL.
</para>
<para>
The second part, derivable from <stream_setup> and delimited from the
first by a crosshatch ('#'), contains parameters that modify the
initialization and playback behaviour of the stream to which the MRL
is passed. The possible parameters are mentioned in the xine-ui
manpage.
</para>
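<para>
 For example, in the hypothetical MRL
 <filename>http://example.com/trailer.avi#novideo;noaudio</filename>
 everything up to the crosshatch forms the <input_source> and is handed to the
 http input plugin, while <filename>novideo;noaudio</filename> is the
 <stream_setup> part interpreted by the engine.
</para>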
<para>
The following definition should be regarded as a guideline only.
Of course, any given input plugin understands only a subset of all
possible MRLs. On the other hand, MRLs that are invalid according to this
definition might still be understood for convenience reasons.
Some user awareness is required at this point.
</para>
<para>
EBNF grammar for MRLs:
<programlisting>
<mrl> ::= <input_source>[#<stream_setup>]
<input_source> ::= (<absolute_mrl>|<relative_mrl>)
<absolute_mrl> ::= <input>:(<hierarch_part>|<opaque_part>)
<hierarch_part> ::= (<net_path>|<abs_path>)[?<query>]
<opaque_part> ::= (<unreserved>|<escaped>|;|?|:|@|&|=|+|$|,){<mrl_char>}
<relative_mrl> ::= (<abs_path>|<rel_path>)
<net_path> ::= //<authority>[<abs_path>]
<abs_path> ::= /<path_segments>
<rel_path> ::= <rel_segment>[<abs_path>]
<rel_segment> ::= <rel_char>{<rel_char>}
<rel_char> ::= (<unreserved>|<escaped>|;|@|&|=|+|$|,)
<input> ::= <alpha>{(<alpha>|<digit>|+|-|.)}
<authority> ::= (<server>|<reg_name>)
<server> ::= [[<userinfo>@]<host>[:<port>]]
<userinfo> ::= {(<unreserved>|<escaped>|;|:|&|=|+|$|,)}
<host> ::= (<hostname>|<ipv4_address>|<ipv6_reference>)
<hostname> ::= {<domainlabel>.}<toplabel>[.]
<domainlabel> ::= (<alphanum>|<alphanum>{(<alphanum>|-)}<alphanum>)
<toplabel> ::= (<alpha>|<alpha>{(<alphanum>|-)}<alphanum>)
<ipv4_address> ::= <digit>{<digit>}.<digit>{<digit>}.<digit>{<digit>}.<digit>{<digit>}
<port> ::= {<digit>}
<reg_name> ::= <reg_char>{<reg_char>}
<reg_char> ::= (<unreserved>|<escaped>|;|:|@|&|=|+|$|,)
<path_segments> ::= <segment>{/<segment>}
<segment> ::= {<path_char>}{;<param>}
<param> ::= {<path_char>}
<path_char> ::= (<unreserved>|<escaped>|:|@|&|=|+|$|,)
<query> ::= {<mrl_char>}
<stream_setup> ::= <stream_option>;{<stream_option>}
<stream_option> ::= (<configoption>|<engine_option>|novideo|noaudio|nospu)
<configoption> ::= <configentry>:<configvalue>
<configentry> ::= <unreserved>{<unreserved>}
<configvalue> ::= <stream_char>{<stream_char>}
<engine_option> ::= <unreserved>{<unreserved>}:<stream_char>{<stream_char>}
<stream_char> ::= (<unreserved>|<escaped>|:|@|&|=|+|$|,)
<mrl_char> ::= (<reserved>|<unreserved>|<escaped>)
<reserved> ::= (;|/|?|:|@|&|=|+|$|,|[|])
<unreserved> ::= (<alphanum>|<mark>)
<mark> ::= (-|_|.|!|~|*|'|(|))
<escaped> ::= %<hex><hex>
<hex> ::= (<digit>|A|B|C|D|E|F|a|b|c|d|e|f)
<alphanum> ::= (<alpha>|<digit>)
<alpha> ::= (<lowalpha>|<upalpha>)
<lowalpha> ::= (a|b|c|d|e|f|g|h|i|j|k|l|m|n|o|p|q|r|s|t|u|v|w|x|y|z)
<upalpha> ::= (A|B|C|D|E|F|G|H|I|J|K|L|M|N|O|P|Q|R|S|T|U|V|W|X|Y|Z)
<digit> ::= (0|1|2|3|4|5|6|7|8|9)</programlisting>
With <ipv6_reference> being an IPv6 address enclosed in [ and ] as defined in RFC2732.
</para>
</sect1>
</chapter>