I am working on a task for an upcoming project and don't know which algorithms to use to solve this problem.
Given: let's say I have a 256x256 point cloud from a lidar and a 1280x720 RGB image from a camera. The lidar points arrive asynchronously relative to the image. The FOVs of the lidar and the camera are very different, and the lidar points do not line up with the pixels in the image.
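To make the projection step concrete, this is roughly what I mean (a minimal sketch; the intrinsics K, the distortion coefficients, and the rvec/tvec lidar-to-camera extrinsics are assumed to come from a prior lidar-camera calibration):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

// Project 3D lidar points (in the lidar frame) into camera pixel coordinates.
// rvec/tvec (lidar-to-camera extrinsics), K, and distCoeffs are assumed to
// come from a prior lidar-camera calibration.
std::vector<cv::Point2f> projectLidarToImage(
    const std::vector<cv::Point3f>& lidarPoints,
    const cv::Mat& K, const cv::Mat& distCoeffs,
    const cv::Mat& rvec, const cv::Mat& tvec)
{
    std::vector<cv::Point2f> imagePoints;
    cv::projectPoints(lidarPoints, rvec, tvec, K, distCoeffs, imagePoints);
    return imagePoints;
}
```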
- For a better understanding, I'm trying to show this in simple simulated images: [camera image]
So if we just overlay the points on the image, we get a bad result.
- After some alignment and scaling, we can observe an acceptable result (color encodes estimated distance): [mixed image - dots with additions]
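For the depth coloring in step 2, something like the following sketch is what I have in mind (maxRange = 50 is just an assumed normalization constant):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Draw each projected lidar point onto the RGB image, colored by distance.
// maxRange is an assumed maximum lidar range, used only for normalization.
void drawDepthColoredPoints(cv::Mat& image,
                            const std::vector<cv::Point2f>& pts,
                            const std::vector<float>& depths,
                            float maxRange = 50.0f)
{
    // Normalize depths to 0..255 and map them through a colormap.
    cv::Mat depthRow(1, static_cast<int>(depths.size()), CV_8U);
    for (size_t i = 0; i < depths.size(); ++i)
        depthRow.at<uchar>(0, static_cast<int>(i)) =
            cv::saturate_cast<uchar>(255.0f * depths[i] / maxRange);

    cv::Mat colors;
    cv::applyColorMap(depthRow, colors, cv::COLORMAP_JET);

    for (size_t i = 0; i < pts.size(); ++i) {
        cv::Vec3b c = colors.at<cv::Vec3b>(0, static_cast<int>(i));
        cv::circle(image, pts[i], 2, cv::Scalar(c[0], c[1], c[2]), cv::FILLED);
    }
}
```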
But for the end result, I need to fill each object's area with a solid color and highlight the boundaries of the objects.
- something like this: [result image]
So my question is: what kind of math and algorithms do I need to go from step 2 to step 3?
P.S. I am working in OpenCV with C++.
I think I need some region-growing/watershed algorithm, but for point clouds rather than images, to solve the filling problem. Once such an algorithm has filled the spaces between the points and formed solid objects, the edges can simply be found as the outermost points of each filled region.
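To illustrate the effect I am after in 2D (I suspect a proper solution also needs the depth values to separate objects), here is a rough sketch: merge the projected points into blobs with morphological closing, then extract the blob contours as boundaries. The kernel size of 15 px is a guess that depends on the point density after projection.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Turn sparse projected points into filled blobs and outline their boundaries.
// kernelSize controls how far apart points may be and still merge into one
// region; it is an assumption that depends on the projected point density.
void fillAndOutline(cv::Mat& image, const std::vector<cv::Point2f>& pts,
                    int kernelSize = 15)
{
    // Rasterize the sparse points into a binary mask.
    cv::Mat mask = cv::Mat::zeros(image.size(), CV_8U);
    for (const auto& p : pts)
        cv::circle(mask, p, 1, cv::Scalar(255), cv::FILLED);

    // Morphological closing merges nearby points into solid regions.
    cv::Mat kernel = cv::getStructuringElement(
        cv::MORPH_ELLIPSE, cv::Size(kernelSize, kernelSize));
    cv::morphologyEx(mask, mask, cv::MORPH_CLOSE, kernel);

    // Contours of the filled regions are the object boundaries.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    cv::drawContours(image, contours, -1, cv::Scalar(0, 255, 0), cv::FILLED); // fill
    cv::drawContours(image, contours, -1, cv::Scalar(0, 0, 255), 2);          // edges
}
```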