I was wondering if it's possible to do the same thing. To start live streaming with FFmpeg, you first have to download and install the software on your computer. Reading input at its native frame rate (what the -re option does) is mainly used to simulate a capture device or live input stream (e.g. when reading from a file).

To watch the motion vectors live:

ffplay -flags2 +export_mvs input.mp4 -vf codecview=mv=pf+bf+bb

or to burn the overlay into an output file:

ffmpeg -flags2 +export_mvs -i input.mp4 -vf codecview=mv=pf+bf+bb output.mp4

Here pf, bf and bb select the forward motion vectors of P-frames and the forward and backward motion vectors of B-frames, respectively.
To extract the raw H.264 elementary stream (Annex B format, without audio) from an MP4 container, with no re-encoding:

ffmpeg -i h264.mp4 -c:v copy -bsf:v h264_mp4toannexb -an out.h264
So I found this page that explains this filter and how to use it in ffmpeg/ffplay; what I need help with is using the debug motion vectors filter of ffmpeg/ffplay in mpv.

The first confusion is between Source & Sink. The live555 FAQ briefly describes the workflow:

'source1' -> 'source2' (a filter) -> 'source3' (a filter) -> 'sink'

The Sink does not depend on the codec: the startPlaying method on the MediaSink registers the callback afterGettingFrame, to be called when data is received from the source. In DummySink::afterGettingFrame, the buffer contains the H.264 elementary-stream frames extracted from the RTP buffer. Once this callback has executed, you should call continuePlaying to register it again for the next incoming data. In the case of the RTSP client sample testRTSPClient.cpp, the source (which does depend on the codec) is created while processing the DESCRIBE answer, by calling MediaSession::createNew. Note that the class H264VideoRTPSink is made to publish data through RTP, not to consume data.
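To make that concrete, here is a minimal sketch of a consuming Sink, modeled on DummySink from testRTSPClient.cpp; the class name, the buffer size and the comments are my own, and error handling and per-subsession bookkeeping are omitted:

// Minimal consuming Sink sketch, modeled on DummySink in testRTSPClient.cpp.
// Class name and buffer size are assumptions, not part of the live555 API.
#include "liveMedia.hh"

#define SINK_RECEIVE_BUFFER_SIZE 100000 // assumed: big enough for one frame

class MySink: public MediaSink {
public:
  static MySink* createNew(UsageEnvironment& env) {
    return new MySink(env);
  }

private:
  MySink(UsageEnvironment& env): MediaSink(env) {
    fReceiveBuffer = new u_int8_t[SINK_RECEIVE_BUFFER_SIZE];
  }
  virtual ~MySink() { delete[] fReceiveBuffer; }

  // Static trampoline passed to getNextFrame(); forwards to the instance.
  static void afterGettingFrame(void* clientData, unsigned frameSize,
                                unsigned numTruncatedBytes,
                                struct timeval presentationTime,
                                unsigned durationInMicroseconds) {
    ((MySink*)clientData)->afterGettingFrame(frameSize, presentationTime);
  }

  void afterGettingFrame(unsigned frameSize, struct timeval presentationTime) {
    // fReceiveBuffer now holds one H.264 elementary-stream frame extracted
    // from the RTP payload; consume it here (write to file, feed a decoder...).

    // Register the callback again so the next frame is delivered too:
    continuePlaying();
  }

  // Called by startPlaying() and after each frame: ask the (codec-specific)
  // source for the next frame and name the callback to invoke with it.
  virtual Boolean continuePlaying() {
    if (fSource == NULL) return False;
    fSource->getNextFrame(fReceiveBuffer, SINK_RECEIVE_BUFFER_SIZE,
                          afterGettingFrame, this,
                          onSourceClosure, this);
    return True;
  }

  u_int8_t* fReceiveBuffer;
};

You would then attach it to the codec-specific source created from the DESCRIBE answer with something like sink->startPlaying(*subsession->readSource(), afterPlaying, sink), as testRTSPClient.cpp does.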
In the command line above, you can supply encoding parameters to FFmpeg and encode the scaled video using those parameters. For example, you could tell FFmpeg to encode it with -crf 18 for pretty high-quality H.264/AVC encoding, or choose something else. Personally, I use a CRF from 17 to 18 for SD content and 20 to 21.5 for HD content.
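As a sketch of such an encode (the file names and the libx264 encoder choice are assumptions here; re-add your own scaling filter as needed):

ffmpeg -i input.mp4 -c:v libx264 -crf 18 output.mp4

Lower CRF values mean higher quality and larger files; libx264's default is 23.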