I'm using FFmpeg to extract PNG frames from videos.
I'm extracting frames at between 1 and 3 fps, depending on some video metadata. I can see that when the subjects in the video are moving fast, or the camera is not steady, I get blurred frames. I tried to research how to solve this (the quality of these frames is my main goal) and I came across the minterpolate option.
I think that if I use the blend option, which blends 3 frames into 1, the "noise" from the blurred subjects will be reduced.
So my current command is like this:
./ffmpeg -i "/home/dev/ffmpeg/test/input/@3.mp4" -vf minterpolate=fps=1:mi_mode=blend,mpdecimate=hi=11456:lo=6720:frac=0.5 -vsync 0 "/home/dev/ffmpeg/test/output/3/(#%04d).png"
Am I right? Can you think of a better way to use FFmpeg to solve my problem?
You can create a better interpolation result if you use the mci method in ffmpeg, rather than the blend method. There are also more advanced techniques available.

If I understand you correctly you have a blurred image (let's call it B, in the middle) between two non-blurred images (let's call them A on the left and C on the right). Now you want to replace the middle frame B by a non-blurred version of B.
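If you want to reproduce the examples below you need a short numbered PNG sequence (01.png, 02.png, ...). As a rough sketch, you could generate a few frames from ffmpeg's built-in testsrc2 source; any clip with a clearly moving subject works just as well, the exact test pattern is not important:

ffmpeg -f lavfi -i testsrc2=size=320x240:rate=10 -frames:v 3 %02d.png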
The minterpolate filter is used for interpolating in ffmpeg. It has two different approaches: a blend mode and a motion-compensated (mci) mode. The blend mode fades out A and then fades in C to generate a new image for B.

Blend

If I run it with

ffmpeg -framerate 10 -i %02d.png -vf minterpolate=fps=20:mi_mode=blend test-%02d.png

I get the following image.

Motion estimation
You can also use the motion estimation or mci mode, which allows you to do motion-based interpolation. You can call it by changing the mode:

ffmpeg -framerate 10 -i %02d.png -vf minterpolate=fps=20:mi_mode=mci test-%02d.png

That generates a circle in the middle.
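Applied to the command from your question, a variant that uses mci instead of blend could look like the line below. Take it as an untested sketch: the extra minterpolate parameters (mc_mode=aobmc, me_mode=bidir, vsbmc=1) are optional tuning choices that can help with fast motion, not something you are required to set:

./ffmpeg -i "/home/dev/ffmpeg/test/input/@3.mp4" -vf "minterpolate=fps=1:mi_mode=mci:mc_mode=aobmc:me_mode=bidir:vsbmc=1,mpdecimate=hi=11456:lo=6720:frac=0.5" -vsync 0 "/home/dev/ffmpeg/test/output/3/(#%04d).png"

Note that mci is considerably slower than blend, especially at higher resolutions.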
Going further

The ffmpeg mci mode uses a classic algorithm. There are some more advanced optical flow and neural network based approaches available, which can give better results with more complex images and more complex motion fields.