I am working on a machine learning model for detecting hand keypoints from depth images. The datasets I have seen so far (e.g. SHREC'17 or DHG) include keypoint/skeleton labels in both world and image coordinates. I have also seen a couple of papers, and their implementations, that learn the world coordinates for keypoint detection. I want to understand how to map the 3D world coordinates onto the depth image so I can check the visualization on the data, and possibly extend the trained model to live predictions/visualization on an Azure Kinect.
How to convert world coordinates to image coordinates for datasets like SHREC'17 and DHG?
1 Answer
You have to know the camera's calibration matrices. The pipeline is the following:
3D world coordinates --> 3D camera coordinates --> 2D image (pixel) coordinates.
The first step uses the extrinsic calibration and the second step uses the intrinsic calibration, which you need in any case.
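For reference, a minimal sketch of the whole chain in NumPy, assuming a simple pinhole model with no lens distortion; `R`, `t` and `K` are placeholders that would come from the dataset's calibration files, not values I know for SHREC'17/DHG:

```python
import numpy as np

def world_to_pixel(X_world, R, t, K):
    """Project one 3D world point to 2D pixel coordinates (pinhole model)."""
    X_cam = R @ X_world + t                        # step 1: extrinsic (world -> camera)
    u = K[0, 0] * X_cam[0] / X_cam[2] + K[0, 2]    # step 2: intrinsic (camera -> pixel)
    v = K[1, 1] * X_cam[1] / X_cam[2] + K[1, 2]
    return u, v
```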
Example: let's say you have a LIDAR for detecting 3D points. The coordinates you get are expressed with respect to the LIDAR's origin. If your camera is not at the very same place as your LIDAR (which is physically impossible, although if they are very close you might ignore the offset), you first have to transform these 3D coordinates so that they are represented with respect to the camera's origin. You can do this with rotation and translation matrices, provided you know the relative position and orientation of the camera and the LIDAR.
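A minimal sketch of that first (extrinsic) step, with made-up `R` and `t`; in practice they come from the extrinsic calibration of the sensor setup:

```python
import numpy as np

# Hypothetical extrinsics: rotation (3x3) and translation (3,) that map
# points from the world/LIDAR frame into the camera frame.
R = np.eye(3)                            # placeholder rotation
t = np.array([0.0, 0.0, 0.05])           # placeholder translation (metres)

X_world = np.array([0.10, -0.20, 0.60])  # one 3D keypoint in world coordinates
X_cam = R @ X_world + t                  # the same keypoint in camera coordinates
```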
The second step is again a matrix transform, but it requires the intrinsic parameters of the camera in use (e.g. focal length, principal point, skew). These can be estimated experimentally if you have the camera, but in your case the calibration matrices should be provided together with the data, so ask for them.
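And a sketch of the second (intrinsic) step, with placeholder `fx`, `fy`, `cx`, `cy` rather than the real SHREC'17/DHG intrinsics:

```python
import numpy as np

# Hypothetical intrinsics: focal lengths and principal point, in pixels.
fx, fy = 475.0, 475.0                   # placeholder focal lengths
cx, cy = 320.0, 240.0                   # placeholder principal point

X_cam = np.array([0.05, -0.10, 0.60])   # a keypoint already in camera coordinates
u = fx * X_cam[0] / X_cam[2] + cx       # column (x) in the depth image
v = fy * X_cam[1] / X_cam[2] + cy       # row (y) in the depth image
```

Once you have (u, v) for every joint you can draw them on the depth frame (e.g. with `cv2.circle`) to check that the projected skeleton lines up with the hand. For live Azure Kinect frames, the sensor SDK can do this projection for you (`k4a_calibration_3d_to_2d`) using the factory calibration read from the device.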
You can read about all of this here: https://www.mathworks.com/help/vision/ug/camera-calibration.html