Drawing an oriented coordinate frame on a 2D RGB image using OpenCV in C++?


For the table object shown in the image below [paste image here],

I am trying to find its orientation. I take the object coordinate frame to be anchored at the object's center and oriented as shown below:

[image: the object coordinate frame drawn at the table's center]

where X is red, Y is green, and Z is blue.

This should produce the rotation depicted below:

[image: visualization of the desired rotated axes overlaid on the table]

My question is: how can I draw this coordinate frame on the 2D image, so that it appears exactly as in the visualization above, using the C++ API of OpenCV 4? That is, something like the following

// TODO using inkscape generate a usable image
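I am aware that OpenCV 4 ships cv::drawFrameAxes in the calib3d module, which projects and draws the three axes of a frame given a camera matrix, a Rodrigues rotation vector, and a translation vector. A minimal sketch of how it is called (on a blank canvas, with placeholder pose values rather than my actual data):

#include <opencv2/opencv.hpp>

int main() {
    // Blank canvas standing in for the RGB image (placeholder size 730x530)
    cv::Mat canvas(530, 730, CV_8UC3, cv::Scalar(40, 40, 40));

    cv::Mat K = (cv::Mat_<double>(3, 3) << 529.5, 0, 365,
                                           0, 529.5, 265,
                                           0, 0, 1);

    // Placeholder pose: a small rotation and a translation ~2 m in front of the camera
    cv::Mat rvec = (cv::Mat_<double>(3, 1) << 0.2, -0.3, 0.1);
    cv::Mat tvec = (cv::Mat_<double>(3, 1) << 0.0, 0.0, 2.0);

    // Draws the axes: X in red, Y in green, Z in blue, length 0.5 (same units as tvec)
    cv::drawFrameAxes(canvas, K, cv::noArray(), rvec, tvec, 0.5f, 3);

    cv::imshow("drawFrameAxes sketch", canvas);
    cv::waitKey(0);
    return 0;
}

However, I do not know how to obtain a sensible translation vector from the information I actually have (the rotation matrix and the 2D bounding-box center).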

My code so far is shown below [sample generated by ChatGPT 4.0 based on my implementation in the complete system], and the output I currently get is shown after the code. The problem is that the coordinate frame is drawn at the wrong position, oriented incorrectly, and the axes are scaled out of the image, as shown below:

[image: current output, with the axes drawn away from the object and extending out of the image]

Here is the code:

#include <Eigen/Core>
#include <opencv2/opencv.hpp>
#include <opencv2/core/eigen.hpp> // for cv::eigen2cv
#include <iostream>
#include <vector>

int main() {
    // Google download link to this image: https://drive.google.com/file/d/1z4ZtJBDq1MZow-lBygtFw-FQI9kFp7Pc/view?usp=sharing
    cv::Mat rgb_img = cv::imread("path_to_000254.jpg", cv::IMREAD_COLOR);

    cv::Mat wrk_img = rgb_img.clone();
    
    // Camera intrinsic matrix (fx, fy, cx, cy)
    cv::Mat K = (cv::Mat_<double>(3, 3) << 529.5, 0, 365,
                                           0, 529.5, 265,
                                           0, 0, 1);

    Eigen::Matrix3d table_rot;
    table_rot << 0.69561,    0.713416,  0.0833915,
                 -0.718204,  0.689377,  0.0945636,
                 0.00997506, -0.125686, 0.99202;

    Eigen::Vector2i center_cords(239, 114); // 2D pixel coordinates of the center of the bounding box around the table object

    // Convert the Eigen rotation matrix to cv::Mat, then to a Rodrigues rotation vector
    cv::Mat rot_mat;
    cv::eigen2cv(table_rot, rot_mat);
    cv::Mat rvec;
    cv::Rodrigues(rot_mat, rvec);

    cv::Point2f center_point(center_cords[0], center_cords[1]);

    Eigen::Vector3d tvec(0, 0, 0); // This information is not available to me; is it required?

    cv::Mat t = (cv::Mat_<double>(3,1) << tvec(0), tvec(1), tvec(2));

    double ff = 1.0; // Not sure what this scalar term is for
    std::vector<cv::Point3f> points = {
        cv::Point3f(ff, 0, 0),  // X-axis end point
        cv::Point3f(0, ff, 0),  // Y-axis end point
        cv::Point3f(0, 0, ff),  // Z-axis end point
    };

    std::vector<cv::Point2f> axisPoints;
    cv::projectPoints(points, rvec, t, K, cv::noArray(), axisPoints);

    cv::line(wrk_img, center_point, axisPoints[0], cv::Scalar(0, 0, 255), 3);  // X axis in red (BGR)
    cv::line(wrk_img, center_point, axisPoints[1], cv::Scalar(0, 255, 0), 3);  // Y axis in green
    cv::line(wrk_img, center_point, axisPoints[2], cv::Scalar(255, 0, 0), 3);  // Z axis in blue

    cv::imshow("RGB with oriented coordinate frame", wrk_img);
    cv::waitKey(-1);

    return 0;
}
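
For reference, here is a variant I have been experimenting with. It projects the frame origin together with the three axis endpoints (instead of drawing from the raw 2D box center), and it fakes a translation by back-projecting the 2D box center through K at an assumed depth of 2 m. The depth and the blank canvas are assumptions on my part, not values from my system, and I am not sure whether guessing the depth like this is a legitimate way to obtain tvec:

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Blank canvas standing in for the RGB image (placeholder size 730x530)
    cv::Mat img(530, 730, CV_8UC3, cv::Scalar(40, 40, 40));

    cv::Matx33d K(529.5, 0, 365,
                  0, 529.5, 265,
                  0, 0, 1);

    // Rotation matrix from above, converted to a Rodrigues rotation vector
    cv::Matx33d R(0.69561,    0.713416,  0.0833915,
                  -0.718204,  0.689377,  0.0945636,
                  0.00997506, -0.125686, 0.99202);
    cv::Mat rvec;
    cv::Rodrigues(R, rvec);

    // ASSUMPTION: the table is roughly 2 m in front of the camera. Back-project
    // the 2D box center (239, 114) through K at that depth to fake a translation.
    double depth = 2.0;
    cv::Point2d c(239, 114);
    cv::Mat tvec = (cv::Mat_<double>(3, 1) << (c.x - K(0, 2)) / K(0, 0) * depth,
                                              (c.y - K(1, 2)) / K(1, 1) * depth,
                                              depth);

    // Project the frame origin and the three axis endpoints (axis length 0.5 in object units)
    double len = 0.5;
    std::vector<cv::Point3d> obj = { {0, 0, 0}, {len, 0, 0}, {0, len, 0}, {0, 0, len} };
    std::vector<cv::Point2d> pix;
    cv::projectPoints(obj, rvec, tvec, K, cv::noArray(), pix);

    // Draw each axis from the projected origin, not from the raw 2D box center
    cv::line(img, pix[0], pix[1], cv::Scalar(0, 0, 255), 3);  // X in red (BGR)
    cv::line(img, pix[0], pix[2], cv::Scalar(0, 255, 0), 3);  // Y in green
    cv::line(img, pix[0], pix[3], cv::Scalar(255, 0, 0), 3);  // Z in blue

    cv::imshow("projectPoints sketch", img);
    cv::waitKey(0);
    return 0;
}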

