How to use OpenCV to black out everything in an image except for license plate letters?


I'm trying to create a program that turns everything in an image black, except for the letters in a license plate area. I'm not sure where to start with this. I know I can use OpenCV in Python to load the image and access its pixel values, but how do I identify the bounding box of the license plate area in the image and set its pixel values to a specific color, like white? Additionally, how can I set the rest of the pixels outside the license plate area to black? Here's a starting point for my code:


import cv2 as opencv
import numpy as np

# Step 1: Load the image containing the license plate area
license_plate_area = opencv.imread('image.png')

# Step 2: Sharpen the license plate area using a kernel
sharpening_kernel = np.array([[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]])
sharpened_license_plate = opencv.filter2D(license_plate_area, -1, sharpening_kernel)

# Step 3: Increase contrast using CLAHE
lab = opencv.cvtColor(sharpened_license_plate, opencv.COLOR_BGR2LAB)
l, a, b = opencv.split(lab)
clahe = opencv.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
limg = opencv.merge([clahe.apply(l), a, b])
enhanced_license_plate = opencv.cvtColor(limg, opencv.COLOR_LAB2BGR)

# Step 4: Preprocess for OCR - grayscale conversion, Gaussian blur, and Otsu thresholding
grayscale_license_plate = opencv.cvtColor(enhanced_license_plate, opencv.COLOR_BGR2GRAY)
blurred_license_plate = opencv.GaussianBlur(grayscale_license_plate, (3, 3), 0)
_, thresholded_license_plate = opencv.threshold(blurred_license_plate, 0, 255,
                                                opencv.THRESH_BINARY + opencv.THRESH_OTSU)

# Step 5: Morphological operations to clean up the image
morph_kernel = opencv.getStructuringElement(opencv.MORPH_RECT, (3, 3))
opened_license_plate = opencv.morphologyEx(thresholded_license_plate, opencv.MORPH_OPEN, morph_kernel,
                                           iterations=1)
# Dilate the black region to include pixels outside the margin
dilate_kernel = opencv.getStructuringElement(opencv.MORPH_RECT,
                                             (5, 5))
dilated_license_plate = opencv.dilate(opened_license_plate, dilate_kernel, iterations=1)
# Invert the image so that the big words are black and the background is white
inverted_license_plate = opencv.bitwise_not(dilated_license_plate)
# Set any white pixels inside the black margin to black
for y in range(dilated_license_plate.shape[0]):
    for x in range(dilated_license_plate.shape[1]):
        if dilated_license_plate[y, x] == 0:  # Black pixel encountered
            inverted_license_plate[y, x] = 255  # Invert to white
        elif inverted_license_plate[y, x] != 0:  # White pixel encountered
            inverted_license_plate[y, x] = 0  # Keep it black

opencv.imshow('Window', inverted_license_plate)
opencv.waitKey(0)
opencv.destroyAllWindows()


2 Answers

Accepted answer by fmw42:

Here is one way to make the outside of the license area all white in Python/OpenCV.

Read the input. Then threshold on black. Then get the largest external contour. Use the contour to draw a white filled region on a black background as a mask. Use the mask to white out the area outside the license in the input. Save the results.

Input:


import cv2
import numpy as np

# read the image
img = cv2.imread('license_plate.png')
hh, ww = img.shape[:2]

# threshold on black
lower = (0,0,0)
upper = (0,0,0)
thresh = cv2.inRange(img, lower, upper)

# get largest external contour
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
big_contour = max(contours, key=cv2.contourArea)

# draw white filled contour on black background as mask
mask = np.zeros_like(img, dtype=np.uint8)
mask = cv2.drawContours(mask, [big_contour], 0, (255,255,255), -1)
#mask = 255 - mask

# apply mask to input
scale = 1/255
result = 255-cv2.multiply(255-img, mask, scale=scale)

# save results
cv2.imwrite('license_plate_thresh.jpg', thresh)
cv2.imwrite('license_plate_mask.jpg', mask)
cv2.imwrite('license_plate_result.png', result)

# show the results
cv2.imshow('thresh', thresh)
cv2.imshow('mask', mask)
cv2.imshow('result', result)
cv2.waitKey(0)

Threshold image:


Mask image:


Result image:

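The result above whites out everything outside the plate. If you want the outside black instead, as asked in the question, one minimal variation is to AND the input with the same mask built above (this continues the script; the output filename is just an example):

# keep the plate region and turn everything outside the mask black
result_black = cv2.bitwise_and(img, mask)
cv2.imwrite('license_plate_black_outside.png', result_black)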

Answer by yoosh:

I am assuming you want to read the number plate. In that case, you can use Tesseract OCR through its Python wrapper, pytesseract.

Then it becomes simple:

import cv2
import pytesseract
from pytesseract import Output

image = cv2.imread(image_path)
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
results = pytesseract.image_to_data(rgb, output_type=Output.DICT)

The result will then include the bounding boxes of each localized text "group". If you want the bounding boxes of the text (i.e., the number plate), you can simply use:

for i in range(0, len(results["text"])):
    # bounding box of the i-th detected text group
    x = results["left"][i]
    y = results["top"][i]
    w = results["width"][i]
    h = results["height"][i]

Then you can darken all regions except the text area.
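For example, here is a minimal sketch of that darkening step, reusing image and results from the snippet above (the confidence threshold of 50 and the skipping of empty detections are arbitrary choices, not requirements of pytesseract):

import cv2
import numpy as np

# start from an all-black canvas the same size as the input image
masked = np.zeros_like(image)

for i in range(0, len(results["text"])):
    # skip empty and low-confidence detections (threshold is an arbitrary choice)
    if float(results["conf"][i]) < 50 or not results["text"][i].strip():
        continue
    x = results["left"][i]
    y = results["top"][i]
    w = results["width"][i]
    h = results["height"][i]
    # copy only the detected text box from the original image onto the black canvas
    masked[y:y + h, x:x + w] = image[y:y + h, x:x + w]

cv2.imshow('masked', masked)
cv2.waitKey(0)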

If you want the text itself along with its confidence, simply use text = results["text"][i] and results["conf"][i].