As per the title, I'm getting very different framerates between runs. I have a simple C++ OpenCV app that just reads a frame from the camera and then encodes it as JPG. In about 50% of the runs, the read takes 20 ms and the encode 19 ms. In the other 50%, the read takes 40-45 ms and the encode 33 ms.
The following ffmpeg command, which encodes with libx264 and streams to YouTube, does everything in ~25 ms per frame (consistently reaching 40 FPS):
```
ffmpeg -s 1280x720 -i /dev/video2 -f lavfi -i anullsrc -f flv -c:v libx264 -c:a aac -preset ultrafast rtmp://a.rtmp.youtube.com/live2/<key>
```
Both the OpenCV app and ffmpeg run on an i.MX8-based board. What I've checked so far:
- Both OpenCV and ffmpeg read from the webcam through V4L2. Maybe they're negotiating different settings (pixel format, resolution, frame interval)? I'm not sure how to check that, though.
- The inconsistency between runs is what bugs me. If it were a settings issue, I'd expect the OpenCV operations to always run at the same speed. Instead, it's almost as if there are two modes of operation: a normal one, where data can be read at 50 FPS, and a slow one at 20 FPS.
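One way to compare what the two programs negotiate with the driver might be `v4l2-ctl` (from the `v4l-utils` package), queried while each program is running. The device node is an assumption taken from the ffmpeg command above:

```shell
DEV=/dev/video2                        # device node assumed from the ffmpeg command
v4l2-ctl -d "$DEV" --get-fmt-video     # currently negotiated pixel format and resolution
v4l2-ctl -d "$DEV" --get-parm          # frame interval / FPS the driver reports
v4l2-ctl -d "$DEV" --list-formats-ext  # all formats and frame rates the sensor offers
```

If OpenCV ends up with a raw format (e.g. YUYV) while ffmpeg gets MJPG, or vice versa, that alone could explain a large per-frame read-time difference over USB.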