The video buffer is a tool between a frame producer and a frame consumer.
For now, it is used between a decoder and a renderer, but in the future
another instance might be used to swscale decoded frames.
Until now, only two frames were used simultaneously:
- one used by the decoder;
- one used by the renderer.
When the decoder finished decoding a frame, it swapped it with the
rendering frame.
Adding a third frame provides several benefits:
- the decoder does not have to wait for the renderer to release the
mutex;
- it simplifies the video_buffer API;
- it makes the rendering frame valid until the next call to
video_buffer_take_rendering_frame(), which will be useful for
swscaling on window resize.
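For illustration, here is a minimal sketch of what such a triple-buffered
video_buffer could look like, assuming SDL mutexes and FFmpeg AVFrames;
apart from video_buffer_take_rendering_frame(), the identifiers are
assumptions, not the actual implementation:

    #include <libavutil/frame.h>
    #include <SDL2/SDL_mutex.h>
    #include <stdbool.h>

    struct video_buffer {
        AVFrame *producer_frame;   // currently written by the decoder
        AVFrame *pending_frame;    // last complete frame, not rendered yet
        AVFrame *rendering_frame;  // owned by the renderer until next take
        bool pending_frame_ready;  // set by the decoder after a swap
        SDL_mutex *mutex;
    };

    // Called by the renderer. The returned frame remains valid until
    // the next call, so it can be swscaled again on window resize.
    static const AVFrame *
    video_buffer_take_rendering_frame(struct video_buffer *vb) {
        SDL_LockMutex(vb->mutex);
        if (vb->pending_frame_ready) {
            // The decoder only swaps producer_frame with pending_frame
            // under the same mutex, so neither side blocks for long.
            AVFrame *tmp = vb->rendering_frame;
            vb->rendering_frame = vb->pending_frame;
            vb->pending_frame = tmp;
            vb->pending_frame_ready = false;
        }
        SDL_UnlockMutex(vb->mutex);
        return vb->rendering_frame;
    }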
The FPS counter was called only on new frames, so it could not print
values at regular intervals, especially when the framerate was very low
(when the device surface does not change).
In the extreme case, it could never display 0 fps.
Add a separate thread to print the framerate every second.
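As a sketch, assuming SDL threads and atomics (the names here are
assumptions, not the actual implementation):

    #include <SDL2/SDL_atomic.h>
    #include <SDL2/SDL_thread.h>
    #include <SDL2/SDL_timer.h>
    #include <stdio.h>

    struct fps_counter {
        SDL_atomic_t nr_rendered;  // incremented by the renderer per frame
        SDL_atomic_t interrupted;  // set to request the thread to stop
    };

    static int
    run_fps_counter(void *data) {
        struct fps_counter *counter = data;
        while (!SDL_AtomicGet(&counter->interrupted)) {
            SDL_Delay(1000);  // wake up every second, even with 0 frames
            // SDL_AtomicSet() returns the previous value
            int rendered = SDL_AtomicSet(&counter->nr_rendered, 0);
            printf("%d fps\n", rendered);
        }
        return 0;
    }

    // The renderer calls SDL_AtomicAdd(&counter->nr_rendered, 1) on each
    // displayed frame; the thread is started with
    // SDL_CreateThread(run_fps_counter, "fps counter", counter).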
The function video_buffer_offer_decoded_frame() returned a bool to
indicate whether the previous frame had been consumed.
This was confusing, because the returned bool could be expected to
report whether the action succeeded.
Make the semantic explicit by using an output parameter.
Also invert the flag (report whether the frame has been skipped instead
of consumed) to avoid confusion for the first frame (the previous frame
is neither skipped nor consumed, because there is no previous frame).
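A minimal before/after sketch of that API change (the exact output
parameter name is an assumption):

    /* Before: the returned bool reported whether the previous frame had
     * been consumed, which reads like a success/failure status. */
    bool
    video_buffer_offer_decoded_frame(struct video_buffer *vb);

    /* After: an explicit output parameter with the inverted meaning, so
     * the first frame (which has no previous frame) naturally reports
     * "not skipped". */
    void
    video_buffer_offer_decoded_frame(struct video_buffer *vb,
                                     bool *previous_frame_skipped);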
Limit source code to 80 characters, and declare function return types
and modifiers on a separate line.
This avoids very long lines, and all function names are aligned.
(We do this in VLC, and I like it.)
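For illustration, a couple of hypothetical declarations in this style
(not actual functions from the project):

    // Return type and modifiers on their own line: all function names
    // start in the same column, and lines stay below 80 characters.
    static bool
    convert_input_event(const struct input_event *from,
                        struct control_msg *to);

    const char *
    device_get_serial(const struct device *device);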