I have used Feature Matching and Homography to find objects in a scene, following the tutorial in the OpenCV docs. I will post the code and the exceptions first, and the images at the end.
Code:
String filenameObject = args.length > 1 ? args[0] : "./test/test1.png";
String filenameScene = args.length > 1 ? args[1] : "./screenshots/test2.jpg";
Mat imgObject = Imgcodecs.imread(filenameObject, Imgcodecs.IMREAD_GRAYSCALE);
Mat imgScene = Imgcodecs.imread(filenameScene, Imgcodecs.IMREAD_GRAYSCALE);
if (imgObject.empty() || imgScene.empty()) {
    System.err.println("Cannot read images!");
    System.exit(0);
}
//-- Step 1: Detect the keypoints using SURF Detector, compute the descriptors
double hessianThreshold = 400;
int nOctaves = 4, nOctaveLayers = 3;
boolean extended = false, upright = false;
SURF detector = SURF.create(hessianThreshold, nOctaves, nOctaveLayers, extended, upright);
MatOfKeyPoint keypointsObject = new MatOfKeyPoint(), keypointsScene = new MatOfKeyPoint();
Mat descriptorsObject = new Mat(), descriptorsScene = new Mat();
detector.detectAndCompute(imgObject, new Mat(), keypointsObject, descriptorsObject);
detector.detectAndCompute(imgScene, new Mat(), keypointsScene, descriptorsScene);
//-- Step 2: Matching descriptor vectors with a FLANN based matcher
// Since SURF is a floating-point descriptor, NORM_L2 is used
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.FLANNBASED);
List<MatOfDMatch> knnMatches = new ArrayList<>();
matcher.knnMatch(descriptorsObject, descriptorsScene, knnMatches, 2);
//-- Filter matches using Lowe's ratio test
float ratioThresh = 0.75f;
List<DMatch> listOfGoodMatches = new ArrayList<>();
for (int i = 0; i < knnMatches.size(); i++) {
    if (knnMatches.get(i).rows() > 1) {
        DMatch[] matches = knnMatches.get(i).toArray();
        if (matches[0].distance < ratioThresh * matches[1].distance) {
            listOfGoodMatches.add(matches[0]);
        }
    }
}
MatOfDMatch goodMatches = new MatOfDMatch();
goodMatches.fromList(listOfGoodMatches);
//-- Draw matches
Mat imgMatches = new Mat();
Features2d.drawMatches(imgObject, keypointsObject, imgScene, keypointsScene, goodMatches, imgMatches, Scalar.all(-1),
        Scalar.all(-1), new MatOfByte(), Features2d.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS);
//-- Localize the object
List<Point> obj = new ArrayList<>();
List<Point> scene = new ArrayList<>();
List<KeyPoint> listOfKeypointsObject = keypointsObject.toList();
List<KeyPoint> listOfKeypointsScene = keypointsScene.toList();
for (int i = 0; i < listOfGoodMatches.size(); i++) {
    //-- Get the keypoints from the good matches
    obj.add(listOfKeypointsObject.get(listOfGoodMatches.get(i).queryIdx).pt);
    scene.add(listOfKeypointsScene.get(listOfGoodMatches.get(i).trainIdx).pt);
}
MatOfPoint2f objMat = new MatOfPoint2f(), sceneMat = new MatOfPoint2f();
objMat.fromList(obj);
sceneMat.fromList(scene);
double ransacReprojThreshold = 3.0;
Mat H = Calib3d.findHomography(objMat, sceneMat, Calib3d.RANSAC, ransacReprojThreshold);
//-- Get the corners from the image_1 ( the object to be "detected" )
Mat objCorners = new Mat(4, 1, CvType.CV_32FC2), sceneCorners = new Mat();
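// The 4 object corners are packed as interleaved x,y floats: top-left, top-right, bottom-right, bottom-left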
float[] objCornersData = new float[(int) (objCorners.total() * objCorners.channels())];
objCorners.get(0, 0, objCornersData);
objCornersData[0] = 0;
objCornersData[1] = 0;
objCornersData[2] = imgObject.cols();
objCornersData[3] = 0;
objCornersData[4] = imgObject.cols();
objCornersData[5] = imgObject.rows();
objCornersData[6] = 0;
objCornersData[7] = imgObject.rows();
objCorners.put(0, 0, objCornersData);
Core.perspectiveTransform(objCorners, sceneCorners, H);
float[] sceneCornersData = new float[(int) (sceneCorners.total() * sceneCorners.channels())];
sceneCorners.get(0, 0, sceneCornersData);
//-- Draw lines between the corners (the mapped object in the scene - image_2 )
Imgproc.line(imgMatches, new Point(sceneCornersData[0] + imgObject.cols(), sceneCornersData[1]),
        new Point(sceneCornersData[2] + imgObject.cols(), sceneCornersData[3]), new Scalar(0, 255, 0), 4);
Imgproc.line(imgMatches, new Point(sceneCornersData[2] + imgObject.cols(), sceneCornersData[3]),
        new Point(sceneCornersData[4] + imgObject.cols(), sceneCornersData[5]), new Scalar(0, 255, 0), 4);
Imgproc.line(imgMatches, new Point(sceneCornersData[4] + imgObject.cols(), sceneCornersData[5]),
        new Point(sceneCornersData[6] + imgObject.cols(), sceneCornersData[7]), new Scalar(0, 255, 0), 4);
Imgproc.line(imgMatches, new Point(sceneCornersData[6] + imgObject.cols(), sceneCornersData[7]),
        new Point(sceneCornersData[0] + imgObject.cols(), sceneCornersData[1]), new Scalar(0, 255, 0), 4);
//-- Show detected matches
HighGui.imshow("Good Matches & Object detection", imgMatches);
Imgcodecs.imwrite("result/SURFFLANNMatchingHomography.jpg", imgMatches);
HighGui.waitKey(0);
System.exit(0);
I get one of the two following exceptions:
Exception in thread "main" CvException [org.opencv.core.CvException: cv::Exception: OpenCV(4.7.0) /tmp/opencv-20230625-23894-1o8sdu5/opencv-4.7.0/modules/calib3d/src/fundam.cpp:378: error: (-5:Bad argument) The input arrays should be 2D or 3D point sets in function 'findHomography'
or
Exception in thread "main" CvException [org.opencv.core.CvException: cv::Exception: OpenCV(4.7.0) /tmp/opencv-20230625-23894-1o8sdu5/opencv-4.7.0/modules/calib3d/src/fundam.cpp:385: error: (-28:Unknown error code -28) The input arrays should have at least 4 corresponding point sets to calculate Homography in function 'findHomography'
The objects I need to detect aren't very complex: pokestops.
Some images of the object I am trying to detect:
Screenshots of where I want to detect that type of object: