Length of video created with PyAV


When creating a video from a list of frames at an assigned framerate using PyAV, the videos end up being shorter than expected by a few seconds, while the framerate in the video metadata is correct. Here is a sample of the function used to save the videos:

def save_video(video_name, imlist, framerate, bitrate):
    container = av.open(video_name, mode='w')
    stream = container.add_stream('h264', framerate)
    stream.codec_context.bit_rate = bitrate*1e3  # Bitrate in kbps
    stream.width = image_width
    stream.height = image_height
    stream.pix_fmt = 'yuv420p'
    for image in imlist:
        image = av.VideoFrame.from_ndarray(image)
        packet = stream.encode(image)
        container.mux(packet)
    container.close()

For example, when making a video from 600 frames at 30 fps, the result is an 18-second video instead of a 20-second one. Loading the video again, it looks like some frames were dropped.
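One way to confirm the drop is to decode the written file and count the frames that actually made it into the video (a minimal sketch; the file name and frame counts are just placeholders):

import av

# Count the frames that survived encoding and compare with the metadata rate.
with av.open("output.mp4") as container:
    video = container.streams.video[0]
    n_frames = sum(1 for _ in container.decode(video))
    print("decoded frames:", n_frames)                 # fewer frames than were encoded
    print("reported framerate:", video.average_rate)   # metadata still reports 30 fps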

I would like to stick with PyAV because it lets me set the bitrate easily. I have tried OpenCV, which works but produces very large video files.
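For reference, the OpenCV approach looks roughly like the sketch below; cv2.VideoWriter exposes no direct bitrate parameter, so the file size is left to the codec defaults (file name and fourcc are placeholders):

import cv2

h, w = imlist[0].shape[:2]
writer = cv2.VideoWriter('output_cv.mp4', cv2.VideoWriter_fourcc(*'mp4v'), 30, (w, h))
for image in imlist:
    writer.write(image)  # expects BGR uint8 frames
writer.release()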

What am I missing?

Accepted answer (Rotem):

We have to flush the remaining packets from the encoder before closing the container. The encoder buffers a number of frames internally (e.g. for lookahead and B-frames), and whatever is still in that buffer is lost unless it is drained, which is why the last frames go missing.

Flushing the encoder is done by encoding None and muxing the output packets.
Add the following code before container.close():

remain_packets = stream.encode(None)
container.mux(remain_packets)

Code sample for testing:

import av
import numpy as np
import cv2

def save_video(video_name, imlist, framerate, bitrate):
    image_height, image_width, ch = imlist[0].shape
    container = av.open(video_name, mode='w')
    stream = container.add_stream('h264', framerate)
    stream.codec_context.bit_rate = int(bitrate*1e3)  # bitrate argument is in kbps, codec expects bps (as an integer)
    stream.width = image_width
    stream.height = image_height
    stream.pix_fmt = 'yuv420p'
    for image in imlist:
        image = av.VideoFrame.from_ndarray(image)
        packet = stream.encode(image)
        container.mux(packet)

    # Flush the encoder
    remain_packets = stream.encode(None)
    container.mux(remain_packets)

    container.close()


def make_sample_image(i, width, height):
    """ Build synthetic "raw RGB" image for testing """
    p = width//60
    img = np.full((height, width, 3), 60, np.uint8)
    cv2.putText(img, str(i+1), (width//2-p*10*len(str(i+1)), height//2+p*10), cv2.FONT_HERSHEY_DUPLEX, p, (30, 30, 255), p*2)  # Blue number (going to be blue when treated as RGB instead of BGR).
    return img


# Build a list of 100 images with sequential frame counter for testing:
imlist = []
for n in range(100):
    img = make_sample_image(n, 192, 108)
    imlist.append(img)

save_video("output_vid.mp4", imlist, 1, 1000)  # 100 frames at 1 fps (expected duration: 100 seconds), 1000 kbps
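As an optional sanity check, appending the following to the sample above should confirm that every frame in imlist made it into the flushed file:

# Optional check: all frames should now be present in the written file.
with av.open("output_vid.mp4") as vid:
    assert sum(1 for _ in vid.decode(video=0)) == len(imlist)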