Computer Vision: Locate ground image in a big 2D top-down map


I have an image taken at ground level by a robot facing forward, and a 2D top-down map of size n×n that shows the layout of the rooms. I want to apply the image to the 2D map to generate a probability distribution over map positions, indicating which positions are probable. For example, if I receive an image of a corner, then positions in the map that are close to a corner should have a higher likelihood. The same goes for an image of a door.

How exactly should I apply the image to the map? I am thinking of something like: generate an n×n×k feature descriptor for the 2D map and a 1×1×k descriptor from the image, then compute the similarity between these descriptors at each pixel of the n×n map. But how exactly?
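The descriptor-matching idea above can be sketched in NumPy. This is a hypothetical illustration, not an established method: `position_likelihood` is an invented name, and cosine similarity followed by a softmax is just one reasonable way to turn per-cell similarity scores into a probability distribution.

```python
import numpy as np

def position_likelihood(map_desc, img_desc):
    """map_desc: (n, n, k) per-cell descriptors of the 2D map.
    img_desc: (k,) descriptor of the camera image.
    Returns an (n, n) probability map over positions."""
    # Cosine similarity between the image descriptor and every map cell.
    map_norm = map_desc / (np.linalg.norm(map_desc, axis=-1, keepdims=True) + 1e-8)
    img_norm = img_desc / (np.linalg.norm(img_desc) + 1e-8)
    sim = map_norm @ img_norm              # (n, n) similarity scores
    # Softmax turns the scores into a probability distribution.
    e = np.exp(sim - sim.max())
    return e / e.sum()
```

If the image descriptor exactly matches one cell's descriptor, that cell gets the highest probability; the softmax temperature (here implicitly 1) controls how peaked the distribution is.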

1 Answer

Answered by Ali

There can be more than one solution to your problem, but the first that comes to mind is "template matching". In template matching we have a reference image:

[reference image]

and a query image:

[query image]

There are six different methods for template matching, which you can find here with an applied example.

The result image of the matching can be used as a probability map: brighter pixels indicate a better match (with the cv.TM_CCOEFF method). Normalizing the scores to sum to 1 turns them into a proper probability distribution over positions.

[matching result image]