Adding motion blur to the object only, not the entire image


I'm new to image analysis, so please bear with me.

I have an image of a stationary projectile. I have the contour of the projectile as an array on the form

contour = [[x1, x2, x3, ..., xn],
           [y1, y2, y3, ..., yn]]

where (x1,y1), (x2,y2),..., (xn,yn) are the pixel positions of the projectile's contour.
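For concreteness, a toy contour in this format (here a small square, with made-up coordinates) would look like:

```python
import numpy as np

# Toy 2xN contour: x-coordinates in row 0, y-coordinates in row 1.
contour = np.array([[10, 20, 20, 10],   # x1, ..., xn
                    [10, 10, 20, 20]])  # y1, ..., yn
```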

I want to apply a motion blur such that it appears that the projectile is moving horizontally relative to the background. My first idea was to simply apply the kernel

# Create the horizontal kernel.
kernel_h = np.zeros((kernel_size, kernel_size))

# Fill the middle row with ones.
kernel_h[int((kernel_size - 1) / 2), :] = np.ones(kernel_size)

# Normalize.
kernel_h /= kernel_size

to the entire image. If I apply the kernel by writing

horizontal_blurred_image = cv2.filter2D(src=img, ddepth=-1, kernel=kernel_h)

I end up with the following image. While I do get the desired motion blur effect, I notice that my background also changes. This occurs because I'm horizontally averaging over the entire image instead of just at the points lying on the edge (and inside) of the projectile's contour. I guess this means that by doing the above I'm implicitly assuming that it is the camera that is moving, and not the projectile relative to the background.

To fix this, so that the projectile moves relative to the background, I have tried to use the cv2.pointPolygonTest function to determine which pixels lie inside (and on the edge of) the contour. To find these points, I can loop over the image pixels and call cv2.pointPolygonTest(contour_points, (x, y), False), where contour_points is my contour array reshaped into the point format OpenCV expects.

My idea was then that if I apply the identity kernel to all the points outside the contour, and the horizontal kernel to the points inside (and on the edge of) the contour, I am back in business. I therefore wrote the following function:

def Motion_Blur_Kernel(img, kernel_size, x, y, contour):
    # img: the original, unblurred image.
    # kernel_size: size of the kernel in pixels; the greater the size, the more the motion.
    # contour: the 2xN array [[x1, ..., xn], [y1, ..., yn]] of contour points.

    # Reshape the contour into the point format cv2.pointPolygonTest expects.
    pts = np.column_stack((contour[0], contour[1])).astype(np.int32)
    result = cv2.pointPolygonTest(pts, (float(x), float(y)), False)
    if result >= 0:  # inside (+1) or on the edge (0)
        # Create the horizontal kernel.
        kernel_h = np.zeros((kernel_size, kernel_size))
        # Fill the middle row with ones.
        kernel_h[int((kernel_size - 1) / 2), :] = np.ones(kernel_size)
        # Normalize.
        kernel_h /= kernel_size
    else:
        # Identity kernel: leaves the pixel unchanged.
        kernel_h = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]])

    return kernel_h

However, how can I now convolve this kernel with the original image in an efficient manner? I'm asking about efficiency, because I'm planning to do this for 100 images.
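For reference, the naive per-pixel application I would like to avoid (pad the image, then average a horizontal window at every pixel inside the contour mask) would look roughly like this:

```python
import numpy as np

def naive_horizontal_blur(img, mask, kernel_size):
    # Average a horizontal window of width kernel_size at every pixel
    # where mask is True; leave all other pixels unchanged.
    pad = kernel_size // 2
    padded = np.pad(img, ((0, 0), (pad, pad), (0, 0)), mode="edge")
    out = img.copy()
    ys, xs = np.nonzero(mask)
    for y, x in zip(ys, xs):
        # padded[y, x : x + kernel_size] covers original columns x-pad .. x+pad.
        out[y, x] = padded[y, x:x + kernel_size].mean(axis=0)
    return out
```

This is a Python-level loop over every masked pixel, which is what I expect to be too slow over 100 images.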
