I am having problems encoding an HEVC video from a series of in-memory YUV420 frames (NumPy arrays) via FFmpeg.
- I am using Python on Ubuntu 20.04;
- I am retrieving frame data from a hardware-triggered Basler camera, using pypylon;
- I want to write the frames from that camera into a video with the HEVC codec, using my GPU via NVENC;
- I guess I have to use FFmpeg to achieve this;
What I have tried:
- I found that FFmpeg can capture and encode directly from a camera, but it seems to only support webcams (e.g. V4L2 devices), not the camera I use (a hardware-triggered Basler camera accessed through the pypylon API);
- I found that FFmpeg can transcode a video from one codec to another, but that is not my case;
- I found that FFmpeg can encode a video from a series of JPEG images, but in my case it would be inefficient to first save every frame to disk as an image and then encode those images into a video;
- The frame data retrieved from the camera can be converted to YUV420 directly by pypylon, which is suitable for HEVC encoding;
- I learnt that the basic unit FFmpeg encodes is the AVFrame, so I guess I have to first turn my YUV420 data into AVFrames and then encode those AVFrames into HEVC;
- But I do not know how to achieve that in Python (a rough sketch of what I am imagining is shown right after this list).
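
What I imagine might work, though I have not managed to verify it, is piping the raw YUV420 bytes into an ffmpeg subprocess that encodes with hevc_nvenc. The resolution, frame rate, and output name below are placeholders for my setup:

import subprocess

width, height, fps = 1920, 1080, 30  # placeholders for my camera settings

ffmpeg_cmd = [
    "ffmpeg", "-y",
    "-f", "rawvideo",           # raw frames, no container
    "-pix_fmt", "yuv420p",      # planar YUV 4:2:0, one frame = width*height*3/2 bytes
    "-s", f"{width}x{height}",
    "-r", str(fps),
    "-i", "-",                  # read the frames from stdin
    "-c:v", "hevc_nvenc",       # NVIDIA GPU HEVC encoder
    "output.mp4",
]
proc = subprocess.Popen(ffmpeg_cmd, stdin=subprocess.PIPE)

# inside the grab loop I would then write each converted frame:
#     proc.stdin.write(frame.tobytes())
# and when the camera stops grabbing:
#     proc.stdin.close()
#     proc.wait()

Would this be the right way to feed FFmpeg, or is there a cleaner path?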
My simplified, expected code:
from pypylon import pylon

tlf = pylon.TlFactory.GetInstance()
camera = pylon.InstantCamera(tlf.CreateFirstDevice())
camera.StartGrabbing(pylon.GrabStrategy_OneByOne)
converter = pylon.ImageFormatConverter()
converter.OutputPixelFormat = pylon.PixelType_YUV420
video_handle = xxxxxx  # the missing piece: an HEVC (NVENC) encoder fed through FFmpeg
timeout = 5000  # ms
while camera.IsGrabbing():
    grabResult = camera.RetrieveResult(timeout, pylon.TimeoutHandling_ThrowException)
    frame = converter.Convert(grabResult).GetArray()
    video_handle.write(frame)  # encode into an HEVC video via FFmpeg with NVENC
    grabResult.Release()
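
For reference, this is what I imagine the AVFrame route could look like using the PyAV bindings (pip install av). It is untested on my side: I am not sure the array returned by pypylon's converter has the (height * 3 // 2, width) planar layout that PyAV expects, hevc_nvenc is only available if the underlying FFmpeg build was compiled with NVENC support, and the dummy frames below just stand in for the Basler grab loop:

import av
import numpy as np

width, height, fps = 1920, 1080, 30   # placeholders for my camera settings
n_frames = 100                        # stands in for the camera grab loop

container = av.open("output.mp4", mode="w")
stream = container.add_stream("hevc_nvenc", rate=fps)  # GPU HEVC encoder
stream.width = width
stream.height = height
stream.pix_fmt = "yuv420p"

for _ in range(n_frames):
    # stand-in for converter.Convert(grabResult).GetArray():
    # a planar YUV 4:2:0 buffer laid out as (height * 3 // 2, width) uint8
    yuv = np.zeros((height * 3 // 2, width), dtype=np.uint8)
    frame = av.VideoFrame.from_ndarray(yuv, format="yuv420p")
    for packet in stream.encode(frame):  # AVFrame -> HEVC packets
        container.mux(packet)

# flush the encoder and finalize the file
for packet in stream.encode():
    container.mux(packet)
container.close()

Is one of these two approaches the recommended way, or is there a better way to go from pypylon YUV420 arrays to an NVENC-encoded HEVC file?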