Inconsistencies with calculating mean brightness per pixel of different images


My code reads an image and calculates its mean brightness and standard deviation. When I run it on one data set of .TIF files, which is generally quite dark, the values match the histogram in ImageJ, which I trust as a reliable reference. When I run it on a new data set of .tiff files, the mean brightness is very different from ImageJ's, but the standard deviation still matches.

I attached a link to a Google Drive folder with the 2 images from each sample along with their values side by side (the white background is ImageJ).

This is how I calculate the mean brightness and standard deviation:

import cv2
import numpy as np


def calc_xray_count(image_path):
    original_image = cv2.imread(image_path, cv2.IMREAD_ANYDEPTH)
    if original_image is None:
        raise FileNotFoundError(f"could not read {image_path}")
    median_filtered_image = cv2.medianBlur(original_image, 5)
    median_filtered_image += 1  # Temporary shift so black (zero) pixels are counted
    pixel_count = np.prod(median_filtered_image.shape)
    img_brightness_sum = np.sum(median_filtered_image)
    img_var = np.var(median_filtered_image)

    if pixel_count > 0:
        img_avg_brightness = (img_brightness_sum / pixel_count) - 1  # Subtract the shift back
    else:
        img_avg_brightness = 0

    print(f"mean brightness: {img_avg_brightness}")
    print(f"mean std: {np.sqrt(img_var)}")
    return img_avg_brightness, img_var

Please note that I apply a median filter here; the images in the comparison were saved after the median filter was applied. The original images differ in DPI (72 vs. 96), but after the median filter both are 96 DPI. Either way, both images are 16 bit.

Since I also want to count the black pixels in the image, I temporarily add +1 to every pixel and subtract it back at the end.
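For what it's worth, np.sum and np.mean already include zero-valued pixels, so the +1/-1 shift doesn't change the mean; the one thing to watch on 16-bit data is that the in-place += 1 wraps saturated pixels around to 0. A quick numpy check:

```python
import numpy as np

# The +1/-1 shift is a no-op for the mean: mean(img + 1) - 1 == mean(img).
img = np.array([[0, 100], [200, 300]], dtype=np.uint16)
print((img + 1).mean() - 1 == img.mean())  # True

# Caveat: uint16 arithmetic wraps around, so a saturated pixel (65535)
# silently becomes 0 after += 1 and drags the mean down.
saturated = np.array([65535, 65535], dtype=np.uint16)
saturated += 1
print(saturated)  # [0 0]
```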

I tried using different libraries instead of cv2, but the issue still persists.

Any help would be appreciated. Many thanks.


There are 2 best solutions below

Niandra Lades

After testing with my own images shot at 8 bit, the results were correct. That suggested the problem was in reading the 16-bit images.

To make sure the image is read at full bit depth, I made the following change to 'original_image':

original_image = cv2.imread(image_path, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_UNCHANGED | cv2.IMREAD_ANYCOLOR)  # IMREAD_UNCHANGED is -1, so this combination is equivalent to IMREAD_UNCHANGED alone
Niandra Lades

It seems this wasn't the final fix. What ultimately helped was rewriting calc_xray_count() to:

def calc_xray_count(image_path):
    original_image = cv2.imread(image_path, cv2.IMREAD_UNCHANGED | cv2.IMREAD_ANYDEPTH)
    median_filtered_image = cv2.medianBlur(original_image, 5)
    img_mean_count = median_filtered_image.mean()  # mean() already includes zero (black) pixels

    return img_mean_count