How is point cloud data acquired from structured-light 3D scanning?

I am trying to understand 3D reconstruction of an object using a structured-light 3D scanner, and I am stuck at the point where a set of decoded camera-projector correspondences is used to reconstruct a 3D point cloud. How exactly is the 3D point cloud computed from those correspondences? I want to understand the mathematical implementation, not the code implementation.
Assuming you used a structured-light method that projects some sort of stripes (vertical or horizontal, e.g. binary/Gray coding or De Bruijn patterns), the idea is as follows:

A light plane passes through the projector's perspective center and a stripe of the pattern.

The light plane's normal has to be rotated by the projector's rotation matrix relative to the camera (or to the world, depending on the calibration). The rotation of the plane can be avoided if we treat the projector's perspective center as the system origin.
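A minimal sketch of this step in formulas, with my own notation (none of these symbols come from the question): K_p is the projector intrinsic matrix, R_p and c_p are the projector's rotation and perspective center relative to the camera. Here I take the camera as the system origin, so the plane is rotated instead of the ray:

```
% Light plane for a vertical stripe at projector column u_p.
% Notation is assumed, not from the question:
% K_p : projector intrinsics; R_p, c_p : projector pose w.r.t. camera.
d_1 = K_p^{-1}\begin{pmatrix} u_p \\ 0 \\ 1 \end{pmatrix}, \quad
d_2 = K_p^{-1}\begin{pmatrix} u_p \\ 1 \\ 1 \end{pmatrix}
\qquad\text{(two rays through the stripe)}

n = R_p \, (d_1 \times d_2)
\qquad\text{(plane normal, rotated into camera coordinates)}

\Pi:\; n^{\top}(X - c_p) = 0
\qquad\text{(the plane passes through the projector center)}
```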
Using the correspondences, you find a pixel in the image that matches the light plane. Now define a ray that goes from the camera's perspective center through that pixel on the image plane, and rotate this ray by the camera's rotation (relative to the projector or to the world, again depending on the calibration).
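The same sketch for the camera ray, again in my assumed notation (K_c is the camera intrinsic matrix). With the camera as the system origin its center is c_c = 0 and no rotation is needed; otherwise apply the camera rotation R_c:

```
% Viewing ray through the matched pixel (u, v).
% K_c : camera intrinsics (assumed notation). With the camera as
% origin, c_c = 0 and the rotation is the identity; otherwise
% replace d with R_c d.
d = K_c^{-1}\begin{pmatrix} u \\ v \\ 1 \end{pmatrix},
\qquad
X(t) = c_c + t\, d, \quad t > 0
```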
Intersect the light plane with that ray; the intersection is the reconstructed 3D point. How to compute it is explained on Wikipedia: https://en.wikipedia.org/wiki/Line%E2%80%93plane_intersection
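Plugging the ray into the plane equation gives the closed form from that Wikipedia page, in the same assumed notation as above:

```
% Solve n^T (X(t) - c_p) = 0 for t, with X(t) = c_c + t d:
t^{\ast} = \frac{n^{\top}(c_p - c_c)}{n^{\top} d},
\qquad
X^{\ast} = c_c + t^{\ast}\, d
% If n^T d = 0 the ray is parallel to the light plane and there
% is no unique intersection; otherwise X* is the 3D point.
```

Repeating this for every decoded correspondence yields one 3D point per matched pixel, which is exactly the point cloud.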
As you can see, the mathematical problem (the 3D reconstruction itself) is very simple. The hard part is recognizing the projected pattern in the image (easier than regular stereo matching, but still hard) and calibrating, i.e. finding the relative orientation between the camera and the projector.