The video buffer took ownership of the producer frame (so that it could
swap frames quickly).
In order to support multiple sinks plugged into the decoder, the
decoded frame must not be consumed by the display video buffer.
Therefore, move the producer and consumer frames out of the video
buffer, and use FFmpeg AVFrame refcounting to share ownership while
avoiding copies.
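For illustration, a decoded frame can be shared with several sinks via
av_frame_ref(), which only adds a reference to the underlying buffers
instead of copying pixel data (a minimal sketch; the sink structure and
function name are hypothetical, not the actual scrcpy code):

    #include <stddef.h>
    #include <libavutil/frame.h>

    struct sink {
        AVFrame *frame; // allocated once with av_frame_alloc()
    };

    // Push a decoded frame to several sinks without copying pixel data.
    static int
    push_to_sinks(const AVFrame *decoded, struct sink *sinks, size_t count) {
        for (size_t i = 0; i < count; ++i) {
            av_frame_unref(sinks[i].frame); // drop the previous reference
            int r = av_frame_ref(sinks[i].frame, decoded);
            if (r < 0) {
                return r; // reference could not be created
            }
        }
        // The pixel data stays alive as long as at least one AVFrame
        // still references it.
        return 0;
    }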
This flag forced the decoder to wait for the previous frame to be
consumed by the display.
It was initially implemented as a compilation flag for testing, not
intended to be exposed at runtime. But to remove ifdefs and to allow
users to test the flag easily, it was eventually exposed by commit
ebccb9f6cc.
In practice, it turned out to be useless: it had no practical impact,
and it did not solve or mitigate any performance issues causing frame
skipping.
But it added some complexity to the codebase: it required an
additional condition variable, and made some video buffer calls
potentially blocking, which in turn required extra code to interrupt
them on exit.
To prepare support for multiple sinks plugged into the decoder
(display and v4l2, for example), the blocking call used to pace the
decoder output becomes unacceptable, so just remove this useless
"feature".
A skipped frame is detected when the producer offers a frame while the
current pending frame has not been consumed.
However, the producer (in practice the decoder) is not interested in the
fact that a frame has been skipped, only the consumer (the renderer) is.
Therefore, notify frame skips via a consumer callback. This makes it
possible to manage the skipped and rendered frame counts in the same
place, and to remove the fps_counter from the decoder.
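A possible shape for such a callback (a sketch with illustrative
names, not necessarily the actual scrcpy API):

    struct video_buffer_callbacks {
        // Called when a pending frame is overwritten before being
        // consumed.
        void (*on_frame_skipped)(struct video_buffer *vb, void *userdata);
    };

    // Consumer side: count the skip where rendered frames are already
    // counted.
    static void
    screen_on_frame_skipped(struct video_buffer *vb, void *userdata) {
        (void) vb;
        struct screen *screen = userdata;
        fps_counter_add_skipped_frame(&screen->fps_counter);
    }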
The video buffer is a tool between a frame producer and a frame
consumer.
For now, it is used between a decoder and a renderer, but in the future
another instance might be used to swscale decoded frames.
There were only two frames simultaneously:
- one used by the decoder;
- one used by the renderer.
When the decoder finished decoding a frame, it swapped it with the
rendering frame.
Adding a third frame provides several benefits (see the sketch below):
- the decoder does not have to wait for the renderer to release the
mutex;
- it simplifies the video_buffer API;
- it makes the rendering frame valid until the next call to
video_buffer_take_rendering_frame(), which will be useful for
swscaling on window resize.
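A sketch of the three-frame layout and the pointer swaps (simplified:
locking and AVFrame refcounting details are omitted, and the names are
illustrative):

    #include <stdbool.h>
    #include <libavutil/frame.h>

    struct video_buffer {
        AVFrame *producer_frame; // being filled by the decoder
        AVFrame *pending_frame;  // last complete frame, not yet consumed
        AVFrame *consumer_frame; // currently used by the renderer
        bool pending_frame_consumed;
    };

    static void
    swap_frames(AVFrame **lhs, AVFrame **rhs) {
        AVFrame *tmp = *lhs;
        *lhs = *rhs;
        *rhs = tmp;
    }

    // Producer side: publish the decoded frame by exchanging only the
    // producer and pending pointers; the consumer frame is not touched,
    // so the decoder never waits for the renderer.
    static void
    video_buffer_offer_frame(struct video_buffer *vb) {
        swap_frames(&vb->producer_frame, &vb->pending_frame);
        vb->pending_frame_consumed = false;
    }

    // Consumer side: take the pending frame; the returned frame remains
    // valid until the next call.
    static AVFrame *
    video_buffer_take_rendering_frame(struct video_buffer *vb) {
        vb->pending_frame_consumed = true;
        swap_frames(&vb->pending_frame, &vb->consumer_frame);
        return vb->consumer_frame;
    }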
The FPS counter was called only on new frames, so it could not print
values regularly, especially when the frame rate was very low (when
the device surface does not change).
In the extreme case, it was never able to display 0 fps.
Add a separate thread to print framerate every second.
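A minimal sketch of such a thread, assuming a POSIX environment (the
actual scrcpy fps_counter also counts skipped frames and supports
clean interruption via a condition variable):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    struct fps_counter {
        atomic_uint rendered; // incremented by the renderer on each frame
        atomic_bool stopped;
    };

    static void *
    run_fps_counter(void *data) {
        struct fps_counter *counter = data;
        while (!atomic_load(&counter->stopped)) {
            sleep(1); // tick every second, even if no new frame arrived
            unsigned rendered = atomic_exchange(&counter->rendered, 0);
            printf("%u fps\n", rendered); // may legitimately print 0 fps
        }
        return NULL;
    }

    // Usage:
    //     struct fps_counter counter = {0};
    //     pthread_t thread;
    //     pthread_create(&thread, NULL, run_fps_counter, &counter);
    //     ...
    //     atomic_store(&counter.stopped, true);
    //     pthread_join(thread, NULL);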
The function video_buffer_offer_decoded_frame() returned a bool to
indicate whether the previous frame had been consumed.
This was confusing, because the returned bool could be expected to
report whether the action succeeded.
Make the semantics explicit by using an output parameter.
Also invert the flag (report whether the frame has been skipped rather
than consumed) to avoid confusion for the first frame (there is no
previous frame, so it is neither skipped nor consumed).
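For illustration, the change looks roughly like this (simplified
signatures; the real ones may take additional parameters):

    // Before: the bool could be mistaken for a success/failure result.
    bool
    video_buffer_offer_decoded_frame(struct video_buffer *vb);

    // After: an explicit output parameter, with the meaning inverted
    // (a skip is reported instead of a consumption).
    void
    video_buffer_offer_decoded_frame(struct video_buffer *vb,
                                     bool *previous_frame_skipped);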
Limit source code to 80 chars, and declare function return types and
modifiers on a separate line.
This avoids very long lines, and keeps all function names aligned.
(We do this on VLC, and I like it.)
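For example (hypothetical declarations, just to show the layout):

    // The return type and modifiers get their own line, so all function
    // names start in the first column and lines stay under 80 chars.
    static inline bool
    video_buffer_has_pending_frame(const struct video_buffer *vb) {
        return !vb->pending_frame_consumed;
    }

    void
    video_buffer_destroy(struct video_buffer *vb);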