3D Scanning with Kinect v2

Angle of Depth Pixels on Kinect V2

Posted on May 9, 2015

What are depth pixel angles?

Each depth pixel has a distance value, but that distance alone does not provide an angle. An angle for the pixel being examined can be determined by building a box around it from the adjacent pixels; from that box, an angle relative to the camera can be computed.

How to get accurate angles

To get an accurate angle, we need to do some 3-dimensional averaging. The error range of the Kinect V2 can wreak havoc on an individual pixel's angle, so to smooth out that interference we average 9 pixels down to 4. In the above image: green is the pixel we are trying to get the average of, black marks the adjacent pixels, and blue marks the new points created by averaging each group of 4 pixels together.

Handling bad pixels

The averaging needs to take bad pixels into account. First we try to grab sets of opposite corners and average those, then we average the pairs of points if a pair actually exists. If a point is invalid, it is not included in any of the averaging. The center point can participate in the averaging 4 times, the adjacent points 2 times, and the corner points just once.

Angles relative to camera

Once the points for averaging are determined, a vector is created at the middle pointing towards the camera, which resides at coordinate (0, 0, 0). The 2 points of the line that the vector crosses over are then used to calculate the depth angle of the vector relative to the camera.

Angle Summary

The pixel angle is the angle of the depth difference in front of and behind the pixel while the pixel is facing the camera.

What is this used for?

A common problem when scanning with the Kinect V2 is that certain angles can actually be bad data. These are angles that could have good data, but if you are going to be doing multiple captures and want high quality, you should exclude angles that might be...
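The averaging and angle steps above can be sketched in Python. This is a minimal illustration, not our actual implementation: the 3x3 patch layout, the use of `None` for invalid points, and taking the normal from the cross product of the two diagonals of the averaged quad are all illustrative assumptions.

```python
import numpy as np

def quadrant_averages(patch):
    """Average a 3x3 patch of 3D camera-space points down to 4 points.

    patch: 3x3 nested list of np.array([x, y, z]) points, or None for an
    invalid pixel.  Each 2x2 quadrant shares the centre pixel, so the
    centre participates 4 times, edge neighbours twice, corners once.
    Invalid points are simply left out of their quadrant's average.
    """
    quads = [((0, 0), (0, 1), (1, 0), (1, 1)),   # top-left
             ((0, 1), (0, 2), (1, 1), (1, 2)),   # top-right
             ((1, 0), (1, 1), (2, 0), (2, 1)),   # bottom-left
             ((1, 1), (1, 2), (2, 1), (2, 2))]   # bottom-right
    out = []
    for quad in quads:
        pts = [patch[r][c] for r, c in quad if patch[r][c] is not None]
        out.append(np.mean(pts, axis=0) if pts else None)
    return out

def pixel_angle(patch):
    """Angle (degrees) between the averaged surface normal and the view
    ray towards the camera, which sits at the origin (0, 0, 0).

    Returns None when a whole quadrant has no valid points.
    """
    tl, tr, bl, br = quadrant_averages(patch)
    if any(p is None for p in (tl, tr, bl, br)):
        return None
    normal = np.cross(br - tl, bl - tr)   # cross of the two diagonals
    centre = (tl + tr + bl + br) / 4.0
    to_cam = -centre                      # vector from centre to origin
    cosang = np.dot(normal, to_cam) / (np.linalg.norm(normal) *
                                       np.linalg.norm(to_cam))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```

For a patch lying on a plane that squarely faces the camera, the angle comes out to 0 degrees, and dropping one corner point barely perturbs it, which is the point of the redundancy in the averaging.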

Bad depth data from the Kinect V2

Posted on May 4, 2015

The current problem we're working on this week is finding the optimal way to remove false data around edges.

How the Kinect V2 sensor reads data

The Kinect V2 is a Time of Flight sensor, meaning it measures how much time it takes infrared light to travel from the sensor to the scene and back. From reading various articles on the subject, it appears the sensor captures several pulses and then computes the depth frame from them.

Types of bad data

So far we've found 4 types of bad data:

Edge to False Slope – If you scan one side of a box (see above picture), the box, which should be square, will tend to show a slope where none exists. The false slopes lie along angles that could exist but in actuality do not.

Edge to Distortion Edge – If the distance to the background is great enough, the false slope usually turns into a distorted edge that looks like massive interference.

Infrared Absorption – Some materials take longer to reflect the infrared light, which can cause that material to appear further back than it actually is. This is most easily seen when a dark-colored object appears more indented than a light-colored one.

General Distortion – Because the Time of Flight sensor has a range of error, the depth frame fluctuates on every frame; this is most easily seen on flat surfaces that appear...
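The time-of-flight relationship itself is simple to state: the light covers the sensor-to-object distance twice, so the distance is c * t / 2. A minimal sketch (not Kinect SDK code):

```python
# Speed of light in a vacuum, metres per second.
C = 299_792_458.0

def tof_distance_m(round_trip_s):
    """One-way distance from a round-trip time-of-flight measurement.

    The pulse travels out to the object and back, so the object
    distance is half the round-trip path: c * t / 2.
    """
    return C * round_trip_s / 2.0
```

At typical Kinect ranges of a few metres, the round trip is on the order of tens of nanoseconds, which is consistent with the sensor integrating several pulses rather than timing a single one.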
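One hedged starting point for the edge problem: since the false slopes and distorted edges cluster around sharp depth discontinuities, invalidate any pixel whose depth jumps too far from a 4-neighbour. This sketch assumes a depth frame in millimetres with 0 meaning "no reading"; the 50 mm threshold is an assumption to tune per scene, not a value from our pipeline.

```python
import numpy as np

def mask_edge_pixels(depth, jump_mm=50.0):
    """Flag depth pixels that sit on a depth discontinuity.

    depth: 2D array of depths in millimetres, 0 = no reading.
    jump_mm: assumed discontinuity threshold (tune per scene).
    Returns a boolean mask that is True where the pixel should be
    discarded.  Note np.roll wraps at the image border; a real
    implementation would handle the border rows/columns explicitly.
    """
    bad = depth == 0
    d = depth.astype(float)
    for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
        diff = np.abs(d - np.roll(d, shift, axis=axis))
        bad |= diff > jump_mm
    return bad

# Usage: zero out suspect pixels before meshing or merging captures.
# cleaned = np.where(mask_edge_pixels(frame), 0, frame)
```

This throws away good pixels right next to a genuine edge along with the bad ones, which is an acceptable trade when you plan to take multiple captures and merge them.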

Note: Scan from Life is in no way affiliated with Microsoft. We are an entirely separate company that has created a product dependent on a Microsoft-owned product.