Difference between a PNG image saved after imshow and a PNG from a VideoCapture frame


I am trying to detect yellow colors in my USB camera image. I am using a yellow mask to filter objects and draw rectangles around the found contours. If I save a frame obtained from the USB camera as a PNG and then apply the detection, it works fine. But when I apply the detection directly to frames obtained from VideoCapture, I don't get good results; I get many small contours. Can anyone point out the error?

The code below gives good results when I read the camera frame saved as a PNG:

    import cv2
    import numpy as np
    from google.colab.patches import cv2_imshow  # Colab replacement for cv2.imshow

    # Read the saved camera frame (note: imread returns BGR channel order)
    frame = cv2.imread('/content/drive/MyDrive/droplets.png')

    # Convert the image to HSV color space
    # (COLOR_RGB2HSV on a BGR image swaps R and B; the yellow
    # thresholds below were tuned against that swapped space)
    hsv = cv2.cvtColor(frame, cv2.COLOR_RGB2HSV)

    # Define the range of yellow color in HSV
    lower_yellow = np.array([0, 38, 57], dtype="uint8")
    upper_yellow = np.array([70, 255, 255], dtype="uint8")

    # Threshold the image to get only yellow colors
    mask = cv2.inRange(hsv, lower_yellow, upper_yellow)

    # Find contours in the binary image
    contours, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)

    QttyOfContours = 0
    for c in contours:
        QttyOfContours += 1

        # Draw a rectangle around the object
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

        # Find the object's centroid
        CoordXCentroid = (x + x + w) / 2
        CoordYCentroid = (y + y + h) / 2
        ObjectCentroid = (CoordXCentroid, CoordYCentroid)
        cv2.circle(frame, (int(CoordXCentroid), int(CoordYCentroid)),
                   1, (0, 0, 0), 5)

    # Draw all contours once, after the loop
    cv2.drawContours(frame, contours, -1, (0, 255, 0), 2)

    cv2_imshow(frame)
    cv2.waitKey(100)


The same code gives bad results when I use:

    cap = cv2.VideoCapture(1, cv2.CAP_DSHOW)
    ret, frame = cap.read()

This is the image saved directly from cv2.VideoCapture. Any help appreciated.

1 Answer

Martin Brown:

I don't think there is an error in your code. The problem lies with the data that is going into your algorithm - MPEG video capture frames are somewhat noisy.

A video capture stream will probably default to lossy MJPEG encoding, so the captured frames will have JPEG artefacts, which look a lot like smoke in the 8x8 blocks containing a sharp edge transition.

Quantisation error makes the high frequency noise rather bad in JPEG images near sharp edges. The human eye doesn't really see it at all but edge detecting algorithms do!

To make it behave better when you apply thresholding and edge detecting your options are:

  1. Set the camera to capture uncompressed or lossless compressed video images if you can (nothing can beat having good raw data to start with).
  2. Low-pass filter the JPEG-captured still with a simple kernel (e.g. [1/4, 1/2, 1/4]) applied vertically and horizontally.

You may need to experiment to find a compromise low-pass filter that still allows decent edge detection whilst adequately suppressing the JPEG artefacts. A kernel with 5 taps may be needed for sufficient noise suppression, but a 1-D kernel applied in both the horizontal and vertical directions should be good enough (and is relatively fast).
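The separable filtering described above can be sketched as follows; in OpenCV you would typically reach for `cv2.sepFilter2D`, but a plain NumPy version (the function name `lowpass_separable` is mine) shows the idea on a single-channel image:

```python
import numpy as np

def lowpass_separable(img, kernel=(0.25, 0.5, 0.25)):
    """Apply a 1-D low-pass kernel horizontally, then vertically."""
    k = np.asarray(kernel, dtype=np.float64)
    out = img.astype(np.float64)
    # Horizontal pass: convolve every row with the kernel.
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, out)
    # Vertical pass: convolve every column with the same kernel.
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)
    return out

# An isolated bright pixel (like a JPEG ringing speckle) is spread out
# and attenuated: its peak drops from 1.0 to 0.25 after filtering.
impulse = np.zeros((5, 5))
impulse[2, 2] = 1.0
smoothed = lowpass_separable(impulse)
```

For a colour frame, apply it per channel, or use `cv2.sepFilter2D(frame, -1, k, k)`, which handles multi-channel images directly. Because the kernel sums to 1, flat regions pass through unchanged while single-pixel speckle is strongly attenuated.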