I am computing bounding boxes for objects in an image taken with a 180° fisheye lens. These bounding boxes will sometimes trigger an event that requires a rectified image. To rectify the image, I am currently using OpenCV's remap with maps precomputed by initUndistortRectifyMap:
map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), P, DIM, cv2.CV_16SC2)
undistorted_img = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
The following are the parameters used:
K = [[169.53432726 0. 322.5714669 ]
[ 0. 169.39413982 289.12455088]
[ 0. 0. 1. ]]
D = [[ 0.11919361]
[ 0.29904975]
[-0.20295709]
[ 0.05526085]]
P = [[121.41853905 0. 322.57040373]
[ 0. 121.31813842 289.11177491]
[ 0. 0. 1. ]]
DIM = (640, 640)
Is there a simple, efficient way to compute the coordinates of the bounding box corners in the rectified image (given precomputed maps for x and y)?
- I believe this unanswered post may have been asking the same thing.
- I also think this post was trying to ask a similar question, but it's not clear how to apply the answer to my question (Caveat #1: if the "inverse" map can be used to map a point in the distorted image to a point in the undistorted image, then great--but not clear how to do that from the answer; Caveat #2: it's not clear the inverse map is available for the fisheye module).
- This post is also relevant, but again it's not clear to me that it actually answers the question.
- I have attempted to invert map1 using the iterative algorithm proposed in this post, but the algorithm does not seem to converge on a useful inverse map. Instead, it returns enormous and negative coordinate values, which are self-evidently incorrect.