Why is contour detection not working properly in image cropping?


I currently have a database of images, where each image is a vertical concatenation of 3 smaller images (varying in width and height). See the example below:

Example image

As you can see, the 3 images have different heights and widths, and I want to crop the 3 images out automatically using OpenCV.

What I have done:

import cv2

image = cv2.imread(r'application_data\input_image\1706004061083.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Binarize with adaptive mean thresholding (inverted so content becomes white)
binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 11, 10)
# Find only the outermost contours
contours, hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Keep contours with area > 400 and a height/width ratio between 0.5 and 2.4
filtered_contours = [contour for contour in contours if cv2.contourArea(contour) > 400 and 0.5 < cv2.boundingRect(contour)[3] / cv2.boundingRect(contour)[2] < 2.4]
# Sort the remaining contours from top to bottom
filtered_contours = sorted(filtered_contours, key=lambda x: cv2.boundingRect(x)[1])

for index, contour in enumerate(filtered_contours):
    # Get the bounding box of the contour
    x, y, w, h = cv2.boundingRect(contour)

    # Crop the region 
    roi = image[y:y+h, x:x+w]

    # Display 
    cv2.imshow(f'Individual Image {index + 1}', roi)
    cv2.waitKey(0)

cv2.destroyAllWindows()

This code crops out the last 2 images and shows them on screen, but it does not work for the first one. What is the problem here?
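To make it easier to see whether the first image is not detected at all or is simply rejected by the area / aspect-ratio filter, here is a small diagnostic sketch (assuming the same input image as above) that draws the bounding box of every external contour before filtering and prints its measurements:

import cv2

# Assumption: same input image as in the code above
image = cv2.imread(r'application_data\input_image\1706004061083.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 11, 10)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

debug = image.copy()
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    area = cv2.contourArea(contour)
    ratio = h / w if w else 0
    # Green box: would pass the filters; red box: rejected by area or aspect ratio
    kept = area > 400 and 0.5 < ratio < 2.4
    color = (0, 255, 0) if kept else (0, 0, 255)
    cv2.rectangle(debug, (x, y), (x + w, y + h), color, 2)
    print(f'x={x} y={y} w={w} h={h} area={area:.0f} ratio={ratio:.2f} kept={kept}')

cv2.imshow('Contours (green = kept, red = rejected)', debug)
cv2.waitKey(0)
cv2.destroyAllWindows()

With this, the printed area and ratio values for the topmost boxes show which filter condition the first image fails, or whether it is detected as several smaller contours instead of one.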
