The frame can be unreferenced immediately after it is pushed to the
frame sinks.
It was not really a memory leak, because the frame was unreferenced
every time by avcodec_receive_frame() (and freed on close), but a
reference was unnecessarily kept for too long.
The video buffer took ownership of the producer frame (so that it could
swap frames quickly).
In order to support multiple sinks plugged into the decoder, the
decoded frame must not be consumed by the display video buffer.
Therefore, move the producer and consumer frames out of the video
buffer, and use FFmpeg AVFrame refcounting to share ownership while
avoiding copies.
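For illustration, a minimal sketch of that refcounting pattern (the
function and variable names here are hypothetical, not the actual API):

    #include <stdbool.h>
    #include <stddef.h>
    #include <libavutil/frame.h>

    // Hypothetical sketch: push one decoded frame to several sinks by
    // reference. Each sink frame is assumed to be freshly allocated or
    // already unreferenced. No pixel data is copied.
    static bool
    push_to_sinks(AVFrame *decoded, AVFrame **sink_frames, size_t count) {
        for (size_t i = 0; i < count; ++i) {
            if (av_frame_ref(sink_frames[i], decoded)) {
                return false; // out of memory
            }
        }
        // The producer may drop its reference immediately: the buffers
        // stay alive as long as at least one sink references them.
        av_frame_unref(decoded);
        return true;
    }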
This flag forced the decoder to wait for the previous frame to be
consumed by the display.
It was initially implemented as a compilation flag for testing, not
intended to be exposed at runtime. But to remove ifdefs and to let
users test this flag easily, it was finally exposed by commit
ebccb9f6cc.
In practice, it turned out to be useless: it had no practical impact,
and it did not solve or mitigate any performance issues causing frame
skipping.
But it added some complexity to the codebase: it required an
additional condition variable, and made video buffer calls potentially
blocking, which in turn required code to interrupt them on exit.
To prepare support for multiple sinks plugged into the decoder (the
display and v4l2, for example), the blocking call used for pacing the
decoder output becomes unacceptable, so just remove this useless
"feature".
A skipped frame is detected when the producer offers a frame while the
current pending frame has not been consumed.
However, the producer (in practice the decoder) is not interested in the
fact that a frame has been skipped, only the consumer (the renderer) is.
Therefore, notify frame skips via a consumer callback. This makes it
possible to manage the skipped and rendered frame counts in the same
place, and to remove fps_counter from the decoder.
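As an illustration, such consumer callbacks could look roughly like
this (hypothetical names, not the actual interface):

    // Hypothetical consumer callbacks: the producer (decoder) only
    // offers frames; the consumer (renderer) is the one notified about
    // skips, so skipped and rendered frames can be counted in the same
    // place.
    struct frame_consumer_callbacks {
        // A new frame is available for rendering.
        void (*on_new_frame)(void *userdata);
        // The pending frame was replaced before being consumed.
        void (*on_frame_skipped)(void *userdata);
    };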
The video buffer is a tool between a frame producer and a frame
consumer.
For now, it is used between a decoder and a renderer, but in the future
another instance might be used to swscale decoded frames.
The function video_buffer_offer_decoded_frame() returned a bool to
indicate whether the previous frame had been consumed. This was
confusing, because one could expect the returned bool to report whether
the action succeeded.
Make the semantics explicit by using an output parameter. Also invert
the flag (report whether the frame has been skipped instead of
consumed) to avoid confusion for the first frame: since there is no
previous frame, it can be neither skipped nor consumed.
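A possible shape for the resulting signature, as a sketch only
(hypothetical declaration, the real parameters may differ):

    #include <stdbool.h>

    struct video_buffer; // opaque, hypothetical

    // Sketch: the "skipped" information is an explicit output
    // parameter, so it can no longer be mistaken for a success/failure
    // result. For the very first frame, the output is false: there is
    // no previous frame to skip.
    void
    video_buffer_offer_decoded_frame(struct video_buffer *vb,
                                     bool *previous_frame_skipped);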
Limit source code to 80 characters, and declare function return types
and modifiers on a separate line.
This avoids very long lines, and all function names are aligned.
(We do this in VLC, and I like it.)
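For example, with a hypothetical declaration:

    // Before: the whole declaration on one line easily exceeds 80
    // columns.
    static struct video_buffer *screen_get_video_buffer(struct screen *screen, bool wait);

    // After: return type and modifiers on their own line, so lines
    // stay short and all function names start in the same column.
    static struct video_buffer *
    screen_get_video_buffer(struct screen *screen, bool wait);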
The decoder initially read from the socket, decoded the video and sent
the decoded frames to the screen:
              +---------+      +----------+
  socket ---> | decoder | ---> |  screen  |
              +---------+      +----------+
The design was simple, but the decoder had several responsibilities.
Then we added the recording feature, so we added a recorder, which
reused the packets received from the socket managed by the decoder:
                               +----------+
                          ---> |  screen  |
              +---------+ /    +----------+
  socket ---> | decoder | ----
              +---------+ \    +----------+
                          ---> | recorder |
                               +----------+
This lack of separation of concerns now has concrete implications: we
could not (properly) disable the decoder/display to only record the
video.
Therefore, split the decoder to extract the stream:
                               +----------+      +----------+
                          ---> | decoder  | ---> |  screen  |
              +---------+ /    +----------+      +----------+
  socket ---> | stream  | ----
              +---------+ \    +----------+
                          ---> | recorder |
                               +----------+
This will make it possible to record the stream without decoding the
video.
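For illustration only, the stream could forward each packet to every
plugged sink along these lines (hypothetical interface, not the actual
code):

    #include <stdbool.h>
    #include <stddef.h>
    #include <libavcodec/avcodec.h>

    // Hypothetical packet sink interface: both the decoder and the
    // recorder could implement it and be plugged into the stream.
    struct packet_sink {
        bool (*push)(struct packet_sink *sink, const AVPacket *packet);
    };

    // The stream forwards each packet read from the socket to all
    // registered sinks, without knowing what they do with it.
    static bool
    stream_push_packet(struct packet_sink **sinks, size_t count,
                       const AVPacket *packet) {
        for (size_t i = 0; i < count; ++i) {
            if (!sinks[i]->push(sinks[i], packet)) {
                return false;
            }
        }
        return true;
    }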
The deprecated avcodec_decode_video2() should always consume the whole
packet, so there is no need to loop (cf. doc/examples/demuxing_decoding.c
in FFmpeg).
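For reference, a single-call decode with that old API looks roughly
like this (sketch with minimal error handling):

    #include <libavcodec/avcodec.h>

    // Sketch: with avcodec_decode_video2(), a video packet is decoded
    // in a single call, so there is no need to loop over the packet
    // data.
    static int
    decode_packet(AVCodecContext *ctx, AVFrame *frame,
                  const AVPacket *packet) {
        int got_picture = 0;
        int len = avcodec_decode_video2(ctx, frame, &got_picture, packet);
        if (len < 0) {
            return len; // decoding error
        }
        return got_picture; // 1 if a frame was produced
    }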
This hack changed the packet size and data pointer, which broke
recording, since the recorder used the same packet.
Configuration packets produced by MediaCodec have no valid PTS and do
not produce a frame. Do not queue their (invalid) PTS, so as not to
break the matching between frames and their PTS.
Several frames may be read by read_packet() before they are consumed
(returned by av_read_frame()), so their PTS values must be stored in
order, so that the right PTS is assigned to the right frame.
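A minimal FIFO sketch of that idea (fixed capacity and hypothetical
names, for illustration only):

    #include <assert.h>
    #include <stdint.h>

    // Hypothetical sketch: PTS values read by read_packet() are queued
    // in order, and dequeued when av_read_frame() returns the matching
    // frame.
    struct pts_queue {
        int64_t values[16];
        unsigned head;
        unsigned count;
    };

    static void
    pts_queue_push(struct pts_queue *q, int64_t pts) {
        assert(q->count < 16);
        q->values[(q->head + q->count) % 16] = pts;
        ++q->count;
    }

    static int64_t
    pts_queue_pop(struct pts_queue *q) {
        assert(q->count);
        int64_t pts = q->values[q->head];
        q->head = (q->head + 1) % 16;
        --q->count;
        return pts;
    }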
Since PTS handling has been fixed, the recorder no longer associates a
PTS with the wrong frame, so the PTS of "configuration packets" (which
never produce a frame) are never read by the recorder. Therefore, there
is no need to ignore them explicitly, so we can remove the MediaCodec
flags completely.
The PTS was read from the socket and set as the current one even before
the frame was consumed, so it could be assigned to the previous frame
"in advance".
Store the PTS for the current frame and the last PTS read from the
packet header of the next frame in separate fields.
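For illustration, the state could be split along these lines
(hypothetical names):

    #include <stdint.h>

    // Hypothetical sketch: keep the PTS of the frame currently being
    // decoded separate from the PTS just read from the next packet
    // header, so a freshly read value can no longer be assigned to the
    // previous frame.
    struct pts_state {
        int64_t current; // PTS of the current frame
        int64_t last;    // last PTS read from a packet header
    };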
As a side-effect, this fixes the warning on quit:
> Application provided invalid, non monotonically increasing dts to
> muxer in stream 0: 17164020 >= 17164020