I want to use the camera calibration tools in OpenCV in a very specific setting, but I seem to be making one or more logic mistakes. I don't have a real camera, only a simulated one.
I followed the official calibration tutorial.
I have a distorted image (distorted by the simulation) and the original image. Furthermore, I have a "map", by which I mean that I know how points are mapped from the original onto the distorted image.
What I tried is replacing cv2.findChessboardCorners with "my map":
objpoints are points from my undistorted image in 3D (with a fake z-coordinate of 0).
imgpoints are the same points in the distorted image in 2D.
To make this reproducible, I hardcoded the points into the code below, so anyone who runs it can see my problem. I think something is very wrong with my map, but I am not sure.
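For reference, cv.calibrateCamera expects objpoints and imgpoints as lists of float32 arrays with one 2D image point per 3D object point. A minimal shape check with hypothetical placeholder arrays (not my real data) looks like this:

```python
import numpy as np

# Hypothetical placeholder data illustrating the layout calibrateCamera wants:
# objpoints: list of (N, 3) float32 arrays (planar target, z = 0),
# imgpoints: list of (N, 2) float32 arrays (matching pixel coordinates).
objpoints = [np.zeros((15, 3), dtype=np.float32)]
imgpoints = [np.zeros((15, 2), dtype=np.float32)]

for obj, img in zip(objpoints, imgpoints):
    assert obj.dtype == np.float32 and img.dtype == np.float32
    assert obj.shape[0] == img.shape[0]  # same point count per view
    assert obj.shape[1] == 3 and img.shape[1] == 2
```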
#%%
import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt
# Read Image
img = cv.imread("std.jpg")
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
width = gray.shape[1]
height = gray.shape[0]
# Helpers
def plotImage(im):
    plt.imshow(im, cmap="gray")
    plt.xlim(0, width)
    plt.ylim(0, height)
    plt.show()

def plotImageAndPoints(im, x, y):
    plt.imshow(im, cmap="gray")
    plt.scatter(x, y, c="red", s=16)
    plt.xlim(0, width)
    plt.ylim(0, height)
    plt.show()
ob=np.array([[ 20., 600., 0.],
[200., 570., 0.],
[380., 540., 0.],
[140., 480., 0.],
[320., 450., 0.],
[ 80., 390., 0.],
[260., 360., 0.],
[ 20., 300., 0.],
[200., 270., 0.],
[380., 240., 0.],
[140., 180., 0.],
[320., 150., 0.],
[ 80., 90., 0.],
[260., 60., 0.],
[ 20., 0., 0.]])
# for OpenCV
objp=[ob.astype(np.float32)]
ip =np.array([[ 54.0018, 543.3303],
[200. , 586.7835],
[358.1802, 510.9069],
[125.5172, 523.4484],
[338.0666, 472.5831],
[ 51.0344, 411.7242],
[291.6206, 391.6206],
[ 8.811 , 300. ],
[200. , 248.7282],
[388.4496, 237.1836],
[116.213 , 132.426 ],
[338.0666, 127.4169],
[ 74.367 , 80.1423],
[265.691 , 37.2363],
[ 54.0018, 56.6697]])
#for OpenCV
imgp =[ip.astype(np.float32)]
# Calibration (same call as in the tutorial, but with my mapped points
# instead of the cv2.findChessboardCorners output)
ret, mtx, dist, rvecs, tvecs = cv.calibrateCamera(objp, imgp, gray.shape[::-1], None, None)
# Displaying required output
print(" Camera matrix:")
print(mtx)
print("\n Distortion coefficient:")
print(dist)
print("\n Rotation Vectors:")
print(rvecs)
print("\n Translation Vectors:")
print(tvecs)
# undistort
imgdist = cv.imread("dist.jpg")
plotImageAndPoints(gray, ob[:,1], ob[:, 0])
plotImageAndPoints(imgdist, ip[:,1], ip[:, 0])
dst = cv.undistort(imgdist, mtx, dist, None)
cv.imwrite('calibresult.png', dst)
(image: undistorted with mapping points)

The result I get is basically the distorted image again, not an undistorted one.