Age | Commit message | Author |
|
|
|
hide/show. Hide overlay when there are no objects to display.
|
|
point, not at end of display set
|
|
|
|
--HG--
rename : src/libxineadec/xine_lpcm_decoder.c => src/audio_dec/xine_lpcm_decoder.c
|
|
This is a backport of the 1.2 code that was committed a while ago to utilize
the new API provided by FFmpeg. This is especially important because the old
API has been eliminated altogether from recent copies of FFmpeg.
|
|
Relatively recent copies of FFmpeg from before the major API cleanup have both
the old SHA1 API and the new SHA (1/2) API, so the recently added autoconf
check will reject perfectly valid copies of FFmpeg. Also tweak the
input_cdda code to make sure it uses the new API and does not include the compat
macros when both the old and the new API are present.
|
|
|
|
|
|
|
|
|
|
|
|
First of all, it improves the qt demuxer, ensuring that 24-bit audio is
marked appropriately, and detecting little vs. big endian audio. It also
adjusts the buffer size when audio is 24-bit, ensuring that samples aren't
chopped in half (8192 bytes does not divide evenly into 3-byte samples).
Secondly, in the lpcm decoder, the patch distinguishes between standard
24-bit lpcm (big and little endian) and special DVD-format 24-bit lpcm (see
http://wiki.multimedia.cx/index.php?title=PCM) and now handles both, instead
of only handling the DVD format.
The result is that xine now correctly plays all the 24-bit lpcm samples I
throw at it, whereas before only a few worked.
|
|
|
|
In contrast to the 'xine_get_current_frame' based snapshot function, this grabbing
feature allows continuous grabbing of the last or next displayed video frame.
Grabbed video frames are returned in a simple three-byte RGB format.
Depending on the capabilities of the video output driver in use, the video image data is
taken as close as possible to the end of the video processing chain. Thus a returned
video image may contain the blended OSD data, be deinterlaced, cropped and scaled,
and have video properties like hue and saturation applied.
With this patch such a grabbing feature is implemented for the vdpau video output driver.
If a video output driver does not provide its own grabbing implementation, there
is a generic fallback that grabs the video frames as they are taken from the video
display queue (like the 'xine_get_current_frame' function).
In this case, color-correct conversion to an RGB image, incorporating source cropping and
scaling to the requested grab size, is also supported.
A more detailed description can be found in the file "xine.h".
|
|
|
|
|
|
|
|
_x_get_current_frame_data() called get_last_frame() and locked the returned
frame afterwards. At the same time, video_out_loop() unlocked last_frame to
assign a different img afterwards. So in case the image got unlocked before
it got locked again, the image already resided on the free image queue. When
the image then got unlocked, it was put onto the queue a second time and
hence caused a loop in the list the queue is based on. Getting an image from
the queue would then run endlessly.
To fix this issue, a new mutex is introduced which protects write accesses to
last_frame and read accesses via get_last_frame() from other threads. Next,
the semantics of get_last_frame() had to be changed to return an already
locked image. Finally, functions calling get_last_frame() had to be adapted
to the new behavior (there was only a single such function in xine-lib:
_x_get_current_frame_data()).
|
|
|
|
|
|
demux_aac.c looks for two signatures in the given stream to detect whether it is an
AAC stream; however, only the absence of the second signature is used to rule
out a positive match. This may lead to false positives.
|
|
For now, it will only work when there are no video post plugins enabled.
In case there are video post plugins enabled the changeset has no effect.
--HG--
extra : rebase_source : cfb22e2ef12ac547ea24ebf59565fe8720ab3635
|
|
This is necessary as VDR expects its OSD flush call to return as quickly
as possible. Hence, we can no longer wait until all changes have appeared
on screen. As a result, a following OSD change might be written to the
ARGB buffer while the buffer is being transferred to screen, causing
visible distortions.
--HG--
extra : rebase_source : 19c4d5a1c73b5791e66f276d57fe62497d00fb7b
|
|
--HG--
extra : rebase_source : ce0547448abc3011feea54401c3e46702fbe6f11
|
|
|
|
actual video format (SD or HDTV) for the vdpau output driver.
With the new configuration parameter 'video.output.vdpau_sd_only_properties', enabling of these video properties can be restricted
to SD video only. Videos with a frame width < 800 are considered SD videos.
Often noise and sharpness corrections are only necessary for SD videos and are counterproductive for HDTV videos. With the restriction
enabled, the user does not have to correct these settings each time the format changes.
--HG--
extra : rebase_source : 874a98cd3e9fc64084c0b5c2e9a7eec34414db2a
|
|
According to the nvidia vdpau documentation, for the formats used in this function
there is no need for special alignment of the fetched bitmap images. Thus we
can fetch the image data directly into the supplied image buffer, avoiding heap memory allocation and additional copying of image data.
--HG--
extra : rebase_source : 0956eab8cd2f23ecc1452aa5ecc3db27fca635b9
|
|
For smooth frame display on platforms with less processing power
(and in preparation for a new frame grabbing extension), the vdpau display queue length,
which was previously fixed at 2, can now be increased via configuration. There is a new
configuration parameter 'video.output.vdpau_display_queue_length' with a default value of 2.
The maximum queue length is 8. Increasing this parameter by 1 should usually be enough.
--HG--
extra : rebase_source : b8d045053d744b6693c50e4a8d81199ba7f315e2
|
|
The new implementation has the following advantages over the existing one:
There is now unified processing of RLE-coded images and ARGB-based overlay images.
For both formats, scaled and unscaled images and a video window are supported.
Both formats are now rendered in the given order into the same output surfaces, no longer using
a dedicated output surface for ARGB images.
Processing of YCBCR overlay images now uses corresponding vdpau bitmap surfaces,
eliminating the existing (possibly slower) conversion to RGB images.
Processing of the first overlay from the stack is optimized, avoiding unnecessary surface initialization and rendering operations.
Currently the new implementation does not take the dirty rect information of an ARGB overlay into account for optimization (but is there actually an existing player implementation that provides this data?).
--HG--
extra : rebase_source : 037f67efdabb0b197e4d1ea2ce14d15f3eb3d8fe
|
|
decoder thread.
On modern unix/linux systems, raising nice priority is no longer limited to the root user,
so a log message about failure is helpful to everyone.
|
|
recalculation of displayed window.
This issue comes up when a post plugin only changes the cropping parameters of a frame without changing width, height or ratio of the frame.
--HG--
extra : rebase_source : 18832d5ec6acdb8ebe7a0d1d10ceaefb2aef663f
|
|
recalculation of displayed window.
This issue comes up when a post plugin only changes the cropping parameters of a frame without changing width, height or ratio of the frame.
|
|
The resulting left and right cropping parameters should be a multiple of 2.
Fixed the left cropping offset calculation for YUY2 frames.
|
|
Intercepted frames should never pass function vdpau_update_frame_format.
--HG--
extra : rebase_source : d1d05b67865eb6cc79225f96159184852f239262
|
|
--HG--
rename : src/libxineadec/gsm610/Makefile.am => contrib/gsm610/Makefile.am
rename : src/libxineadec/nosefart/diff_to_nosefart_cvs.patch => contrib/nosefart/diff_to_nosefart_cvs.patch
rename : src/libxineadec/nosefart/nes6502.c => contrib/nosefart/nes6502.c
rename : src/libxineadec/nosefart/nes6502.h => contrib/nosefart/nes6502.h
rename : src/libxineadec/nosefart/nes_apu.c => contrib/nosefart/nes_apu.c
rename : src/libxineadec/nosefart/nes_apu.h => contrib/nosefart/nes_apu.h
rename : src/libxineadec/nosefart/nsf.c => contrib/nosefart/nsf.c
rename : src/libxineadec/nosefart/nsf.h => contrib/nosefart/nsf.h
rename : src/libxineadec/nosefart/types.h => contrib/nosefart/types.h
rename : src/libxineadec/nosefart/version.h => contrib/nosefart/version.h
rename : doc/faq/faq.sgml => doc/faq/faq.docbook
rename : src/libsputext/demux_sputext.c => src/spu_dec/sputext_demuxer.c
rename : src/libxinevdec/image.c => src/video_dec/image.c
|
|
downwards within post
plugins, because this corrupted the receiving frame's acceleration data.
This issue typically occurs when a post plugin retrieves a new frame from the video out stage and
then does a _x_post_frame_copy_down from the frame that is delivered by the video decoder.
In this case the two frames are unrelated and the acceleration data gets messed up.
|
|
Frame duration calculation for repeated frames or fields is now correct,
but video_out_vdpau.c needs to be adjusted too, because for now it will
show for example 1.5x top field and 1.5x bottom field instead of 1x top
field, 1x bottom field and 1x top field again.
|
|
input_vdr.c cannot send buffers with preview flag set after a decoder
reset. Therefore, the decoder didn't get initialized anymore. So we
need to call ff_handle_preview_buffer() even with real data as long
as we are in decoder_init_mode to get a decoder initialized.
|
|
xine-lib's OSD stack is event driven and some memory blocks are not copied;
instead, the responsibility to free the memory moves between different layers
of the OSD stack.
When argb_layer was introduced, this behavior was not taken into account,
and as such it is likely that, for example, osd_free_object() frees the
argb_layer while vdpau_overlay_* functions still access the memory.
Passing responsibility for the argb_layer is not as easy as it seems, as
the design goal of the argb_layer was to not duplicate any memory of the
argb_buffer, which all other OSD functions usually do.
To solve this issue, argb_layer_t is turned into a managed data
structure by introducing a ref_count member. ref_count increases as more
layers of the OSD stack hold a reference to that memory block, and it
decreases when they are no longer interested in it. When ref_count reaches
zero, the memory block is freed automatically. To deal with ref counting,
set_argb_layer_ptr() has been introduced.
Some functions of the OSD layers had to be modified to deal with reference
tracking. For convenience, osd_free_object() clears the argb_buffer
pointer so that the buffer may be freed safely after returning.
|
|
Drawing can only be triggered when a backup image exists. Otherwise it is
useless. Therefore, honor it only when a backup image exists.
--HG--
extra : rebase_source : bffa00a9d9ed4e8994185655d20ba2b3fc9ad818
|
|
VDPAU applies different constraints to surface size and memory consumption
than xine. Hence, when decoding bad streams (due to reception issues, for
example) with broken image sizes, it is likely that segfaults happen due
to different memory size assumptions.
By adjusting the image size to meet both xine and VDPAU constraints, segfaults
can be avoided.
--HG--
extra : rebase_source : c493fac162bb34ab357783821bc72be85682c1eb
|
|
When using pitch 0 it is sufficient to allocate just a single row.
--HG--
extra : rebase_source : 47b554da704a5c6fc073fee587a968145d3fa230
|
|
--HG--
extra : rebase_source : 686c74f934f4f780d08909c24238114cc9f64f4c
|
|
As both types have the same signature, there was no error during build
nor at runtime.
--HG--
extra : rebase_source : 125cbc9417554303cc6a3c04dfedfedcdcc4710b
|
|
guarded_vdp_output_surface_destroy() always calls XLockDisplay() and
XUnlockDisplay(), regardless of the define LOCKDISPLAY.
This may cause a deadlock among threads using the VDPAU API due to
different resource allocation orders when guarding is not used
(i.e. #undef LOCKDISPLAY is in effect).
--HG--
extra : rebase_source : abafc9604d8cfb6efe07f665361c4e01e60adc37
|
|
This function is similar to _x_query_buffer_usage() but also retrieves
the total and available (= free) number of buffers besides the number
of buffers ready for processing. For example, if one wants to create a
buffering algorithm based on the number of frames ready, it's not that
easy to determine the maximum possible number of ready frames. In case
one configures engine.buffers.video_num_frames:50, it may happen that
only 30 frames can actually be provided by the video out driver. Next,
a video codec like H.264 may hold several frames in its display picture
buffer, so that you may end up with only 13 ready frames at maximum. At
the same time, the number of available (= free) frames will be 0 (or
almost zero in the case of vo). So it may be even easier to base the buffer
algorithm on the number of free buffers.
The reported numbers may also reveal that too few input buffers have
been provided to compensate for a large a/v offset at the input stage.
--HG--
extra : rebase_source : 255cb186891fbab5199a99031cf1b1e93ac19923
|
|
According to ITU-T Rec. H.264 (03/2005), Table D-1, there is no such
indicated display of a picture. As a result, succeeding enum constants
were mapped to the wrong values.
|
|
The image allocated in decoder_init can be freed immediately after getting
access to accel_data. Accessing the accel pointer afterwards is safe for
non-frame-related functions even though the frame has been freed, as freed
frames are not deallocated but put into a free frame queue.
|
|
Previously, a parsed picture was only decoded when the parser came across a
NAL unit which started a new picture (aka access unit). An end of sequence
NAL unit must be handled like the access unit delimiter NAL unit, as both
terminate the current access unit.
Handling the end of sequence NAL unit this way fixes displaying of still
images which end in an end of sequence NAL unit. Previously they were not
decoded at all (in the case of a still frame) or the second field was missing
(in the case of a still image coded as a pair of fields).
|