I have a grain structure, generated from phase-field simulations, in which different random grain orientations are represented by different colors in the image. I need to identify every grain and get its pixel values, and I also need the pixel coordinates at the junctions (triple junctions and higher-order junctions) and along the grain boundaries.
[Image: Grain structure with different orientations]
[Image: Grain structure highlighting boundaries]
I tried several image-processing methods, and also implemented an algorithm that detects color changes and connectivity and classifies the grains, but I was not able to extract the pixel values and junction points. Can anyone help? Here is one of the approaches I tried.
import cv2
import numpy as np
import matplotlib.pyplot as plt
# Read the image
image = cv2.imread('image_path', cv2.IMREAD_GRAYSCALE)
# Apply threshold to create a binary image
_, binary_image = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY_INV)
# Apply morphological operations to remove small noise and text
kernel = np.ones((3, 3), np.uint8)
clean_image = cv2.morphologyEx(binary_image, cv2.MORPH_OPEN, kernel)
# Find connected components
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(clean_image, 8, cv2.CV_32S)
# Size threshold to filter out small connected components
size_threshold = 30
# Create a color map for visualization
colors = [np.array([np.random.randint(256), np.random.randint(256), np.random.randint(256)], dtype=np.uint8) for _ in range(num_labels)]
colors[0] = [255, 255, 255] # Setting background color to white
# Create a color image for visualization
output_image = np.zeros((image.shape[0], image.shape[1], 3), dtype=np.uint8)
# Variable to track the new labels for the larger connected components
new_label = 1
desired_label = list(range(50))
for i in range(1, num_labels):
    coordinates = np.column_stack(np.where(labels == i))
    x = stats[i, cv2.CC_STAT_LEFT]
    y = stats[i, cv2.CC_STAT_TOP]
    w = stats[i, cv2.CC_STAT_WIDTH]
    h = stats[i, cv2.CC_STAT_HEIGHT]
    area = stats[i, cv2.CC_STAT_AREA]
    (cX, cY) = centroids[i]
    if coordinates.shape[0] > size_threshold:
        if new_label in desired_label:
            output_image[labels == i] = colors[new_label]
            cv2.rectangle(output_image, (x, y), (x + w, y + h), (0, 255, 0), 3)
            cv2.circle(output_image, (int(cX), int(cY)), 4, (0, 0, 255), -1)
        else:
            output_image[labels == i] = [255, 255, 255]
        new_label += 1
plt.figure(2)
print(f"Number of connected components (labels): {new_label - 1}")
plt.imshow(cv2.cvtColor(output_image, cv2.COLOR_BGR2RGB))
plt.axis('off')
# Define a smaller kernel for the morphological operation
kernel_dilate = np.ones((2, 2), np.uint8)
# Apply dilation to expand the white areas with only one iteration
dilated_image = cv2.dilate(output_image, kernel_dilate, iterations=1)
plt.figure(3)
plt.imshow(cv2.cvtColor(dilated_image, cv2.COLOR_BGR2RGB))
plt.axis('off')
# Convert the white areas of the dilated image to black and the black areas to white
intersection_image = cv2.bitwise_not(dilated_image)
# Convert the image to grayscale
intersection_image_gray = cv2.cvtColor(intersection_image, cv2.COLOR_BGR2GRAY)
# Apply binary thresholding to isolate the lines
_, intersection_image_binary = cv2.threshold(intersection_image_gray, 127, 255, cv2.THRESH_BINARY)
# Find the skeleton of the black lines
skeleton = cv2.ximgproc.thinning(intersection_image_binary)
# Apply a cross-shaped kernel to find intersection points
cross_kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
intersection_points = cv2.morphologyEx(skeleton, cv2.MORPH_HITMISS, cross_kernel)
# Draw the intersection points on the original image
intersection_image_with_points = intersection_image.copy()
intersection_image_with_points[intersection_points > 0] = [0, 0, 255] # Marking the intersection points in red
plt.figure(4)
plt.imshow(cv2.cvtColor(intersection_image_with_points, cv2.COLOR_BGR2RGB))
plt.axis('off')
y_coords, x_coords = np.where(intersection_points != 0)
# Draw the intersection points on the output_image
for x, y in zip(x_coords, y_coords):
    cv2.circle(output_image, (int(x), int(y)), 3, (0, 0, 255), -1)  # Marking the intersection points in red
plt.figure(5)
plt.imshow(cv2.cvtColor(output_image, cv2.COLOR_BGR2RGB))
plt.axis('off')
plt.show()


Count number of unique neighbouring pixels...
I trimmed all the extraneous axes, borders and annotations off your image, so I started with just the clean grain map.
I think the 3 channels of colour are a nice visualisation, but pesky for processing, so I converted each BGR888 pixel into a simple 24-bit greyscale value using np.dot().

Then I ran SciPy's generic_filter to pass a 3x3 filter over the image. I put all the pixels from each 3x3 neighbourhood into a set (which cannot hold duplicates) and then took the length of the set, which is the number of unique neighbours each cell has. I saved that and then contrast-stretched it so that the pixels with the most different neighbours show up as the brightest pixels.

If you want to find the coordinates of pixels having 4 different-coloured neighbours, you can use:
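The answer's code blocks were lost in formatting, so here is a self-contained sketch of the steps it describes. The synthetic three-grain image, the channel weights passed to np.dot(), and the threshold of 3 unique values are my own assumptions (my toy image has no drawn boundary lines, so a triple junction shows 3 colours rather than the 4 mentioned above):

```python
import numpy as np
from scipy.ndimage import generic_filter

# Tiny synthetic stand-in for the real grain map: three flat colour
# regions meeting at one triple junction. Replace with cv2.imread(...)
# on the actual image.
im = np.zeros((6, 6, 3), dtype=np.uint8)
im[:3, :3] = (255, 0, 0)   # grain 1
im[:3, 3:] = (0, 255, 0)   # grain 2
im[3:, :]  = (0, 0, 255)   # grain 3

# Pack the three 8-bit channels into a single 24-bit value per pixel,
# so each distinct colour becomes one distinct scalar
grey24 = np.dot(im.astype(np.uint32), np.array([1, 256, 65536], dtype=np.uint32))

# Slide a 3x3 window over the image and count unique values in it:
# 1 inside a grain, 2 along a boundary, 3+ at a junction
counts = generic_filter(grey24, lambda P: len(set(P)), size=3, mode='nearest')

# Contrast-stretch so the junction pixels show up brightest
stretched = ((counts - counts.min()) * 255 // max(counts.max() - counts.min(), 1)).astype(np.uint8)

# Coordinates of junction pixels (use >= 4 instead if boundary lines
# are drawn in their own colour, as in the answer's image)
ys, xs = np.where(counts >= 3)
print(sorted(zip(ys.tolist(), xs.tolist())))  # -> [(2, 2), (2, 3), (3, 2), (3, 3)]
```

The threshold is the only knob: counts == 2 gives the boundary pixels, counts >= 3 (or 4, with drawn boundaries) gives the junctions.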
Once you have the coordinates of the interesting points, you can obviously use them on your original image, rather than on my weird 24-bit greyscale image.
The processing time seems pretty reasonable, but if you wanted to speed it up, it would probably benefit from a numba version. I didn't run any timing/performance tests, but it occurred to me that len(np.unique(P)) may be faster than using len(set(P)).
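As a quick sanity check of that idea (my own toy array; neither variant is benchmarked here), the callback can be swapped without changing the result:

```python
import numpy as np
from scipy.ndimage import generic_filter

# Toy label array with three distinct values
a = np.array([[1, 1, 2],
              [1, 3, 2],
              [3, 3, 2]])

via_set = generic_filter(a, lambda P: len(set(P)), size=3, mode='nearest')
via_unique = generic_filter(a, lambda P: len(np.unique(P)), size=3, mode='nearest')

# Both callbacks count the unique neighbourhood values identically
assert (via_set == via_unique).all()
print(via_set[1, 1])  # centre window sees values {1, 2, 3} -> 3
```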