3D Scanning with Kinect v2

Posts made in May, 2015

Project Folder Structure

May 30, 2015

Automatic folder creation

Within a project folder are six sub-folders, each containing a different type of data. These folders are created automatically when a new project is started.

01_Devices
Files containing information about the different scanning devices (currently only the Kinect V2). These help with color/depth alignment and determine the default orientation of raw files.

02_Sync
Files used to coordinate tasks across different devices. This holds the current frame number being used, as well as temporary files that signal commands to devices on separate machines.

03_Raw
Data from the Kinect V2 prior to filtering. Files are tagged so the correct device settings can be applied when processing. Color images in this folder can be touched up before rendering.

04_Meshes
The PLY mesh files and MeshLab project files; after raw files are filtered and rendered, the resulting meshes are saved here.

05_Completed
Once a model has been completed and confirmed to be solid and printable, you can name it, and it is stored in this folder as a PLY file.

06_Orders
Any orders you create in the program are stored in this folder for later...

Buffering 3D Scans

May 27, 2015

Without the buffer

Previously, when something was scanned with the Kinect V2, it immediately went through several filters to clean up the data and was then saved to the computer. This could add several seconds of waiting between captures. Since a scan involves processing ~45 MB of data in different ways, multiple times, even a fast machine adds some delay between scans.

Option added to buffer scans

Instead of processing each scan right after it is captured, scans can now remain in the computer's memory. Even in a worst-case scenario, a computer can hold at least a dozen scans. It has been much easier to scan someone without waiting for the laptop to catch up. A couple of seconds may seem trivial, but 10 scans with a 3-second delay means the person had to hold still for an extra half minute.

Another added benefit

Since capturing a scan and processing it are now separate steps, a different computer could be used to process the scans. In other words, we'll eventually have it so multiple systems can scan...
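The capture/process split described above can be sketched as a simple in-memory buffer. This is a hedged illustration only: the class name, method names, and the dozen-scan limit as a default are assumptions, not the project's actual API.

```python
from collections import deque

class ScanBuffer:
    """Hold raw scans in memory so capture is never blocked by filtering."""

    def __init__(self, max_scans=12):
        # The post estimates at least a dozen ~45 MB scans fit in memory.
        self.max_scans = max_scans
        self._scans = deque()

    def capture(self, raw_frame):
        """Store a raw scan immediately; no filtering happens here."""
        if len(self._scans) >= self.max_scans:
            raise RuntimeError("buffer full - process some scans first")
        self._scans.append(raw_frame)

    def process_all(self, filters):
        """Later (possibly on another machine), run the filter chain."""
        results = []
        while self._scans:
            frame = self._scans.popleft()
            for f in filters:
                frame = f(frame)  # each filter transforms the frame
            results.append(frame)
        return results
```

Because `process_all` only touches buffered frames, it could just as well run in a separate process or on a second computer that shares the sync folder.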

Bad Edges from Kinect V2 Scan Data

May 20, 2015

Where bad edges happen

When there is a large distance between the background and the object being scanned, the data along the edges indicates that something is closer to the scanner than it really is. Let's look at the bad data in a few different ways. Because the bad data shifts around the edge, I've circled an example in purple in each image.

The bad edge data is read as part of the person, so it appears attached to the actual object rather than to the background or floating on its own.

The change in distance between frames is shown in red; the bad data is not stable or consistent. It mostly changes from pixel to pixel, but is sometimes somewhat stable, so excluding data that changes from frame to frame would not be enough on its own.

Here you can see the change in distance around each pixel; orange is a higher change than green. This means the bad pixels are not smooth, but jump up and down next to each other.

Here blue is angled away from the camera and green is angled toward the camera; where the bad edge is, the surface is actually being read as angled toward the camera.

Current solution

The current solution was to add a filter called 'pixel distance': the total distance between a pixel and all adjacent pixels. The image on the left was taken without the filter, and the one on the right has the filter applied. This filter effectively removes areas where pixels vary greatly from each other. You can see there is still some incorrect color on the edges, but this can be resolved by removing a layer from all edges (better less data and it be correct than more data and it be...
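The 'pixel distance' filter described above can be sketched for a depth map. This is an illustrative version under assumptions: the post does not give a threshold value or neighborhood, so 4-connected neighbors, `None` for dropped pixels, and the function name are all mine.

```python
def pixel_distance_filter(depth, threshold):
    """Drop depth pixels whose total distance to adjacent pixels is too large.

    depth: 2-D list of depth values (None = no data); returns a filtered copy.
    """
    rows, cols = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for r in range(rows):
        for c in range(cols):
            if depth[r][c] is None:
                continue
            # Sum the depth difference to each in-bounds 4-connected neighbor.
            total = 0.0
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and depth[nr][nc] is not None:
                    total += abs(depth[r][c] - depth[nr][nc])
            if total > threshold:
                out[r][c] = None  # edge noise: neighbors disagree too much
    return out
```

A smooth surface has small neighbor differences, so it passes through, while the flickering edge pixels (which jump up and down relative to their neighbors) accumulate a large total and are removed.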

Initial Scans with Angle Filtering

May 15, 2015

Angle filtering seems to be working

The important thing to notice in these scans is that the arms remain whole. Previously, the bad data would interfere and cause the arms to be cut up when the surface was recreated. The person being scanned moved their head, so their nose and the left side of their lip became misaligned. The problem of the color not matching will be handled with later tweaks; for now our focus is on getting a good 3D mesh first. The images on the left are 5 combined 3D snapshots; the images on the right are the 3D model after merging.

Front

Back

Another run in one try

I did another run to make sure the first wasn't a fluke. The images merged without any fine-tuning, so it was a very quick take. Here is the finished model (no head or legs), just the arms, which seemed to have the most...

Getting an accurate surface scan with KinectV2

May 11, 2015

Video with brief overview

What the data can usually look like

The Kinect V2 measures the time it takes infrared light to leave and return to the sensor. This process is not 100% accurate, so pixels vary within a certain range; the bumps you see in the picture above will be different in each frame from the Kinect V2. Also, closer to the edge of the sensor's range, less infrared light is available, which causes an even greater error range, as can be seen by the increased bumpiness in the picture of a flat wall above.

Just averaging won't work

Sometimes pixels from the Kinect V2 are bad data, either dead pixels (no data) or wild pixels far off from the actual point. If you average multiple frames together, excluding only the dead pixels, you'll still include the wild pixels, which can throw the average outside the error range.

Staying within the error range

If a pixel stays within the error range of the actual position, it is good data and can be used in the calculations that merge different meshes together. But if a pixel is changed so that it no longer falls within the error range of the actual position, the scan will be seen as having a different real-world structure when compared with another scan whose same pixel lies on the far side of the error range.

Averaging within the trend

To handle dead and wild pixels, each pixel is examined across several frames. Dead pixels are excluded from the comparison, and each remaining pixel is measured against all of the others. The total distance from everything else is used to determine where the 'cluster' of pixels is located: the lower the total, the closer the pixel is to the group. The pixels are sorted, and the one in the middle of the group provides the middle distance. If a pixel's total distance from every other pixel is more than 1.5 times the middle distance, it is excluded from...
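The 'averaging within the trend' steps above can be sketched for a single pixel observed across several frames. This is a hedged, one-dimensional illustration: depth samples are plain numbers, `None` marks a dead pixel, and the function name is an assumption. The 1.5x middle-distance cutoff comes from the post.

```python
def trend_average(samples):
    """Average depth samples for one pixel, rejecting dead and wild readings.

    samples: depth readings across frames (None = dead pixel, no data).
    """
    live = [s for s in samples if s is not None]  # step 1: drop dead pixels
    if not live:
        return None
    # Step 2: each sample's total distance to every other sample; a low
    # total means the sample sits near the cluster of readings.
    totals = [sum(abs(a - b) for b in live) for a in live]
    # Step 3: sort the totals and take the middle one as the middle distance.
    middle = sorted(totals)[len(totals) // 2]
    # Step 4: exclude samples more than 1.5x the middle distance from the
    # rest, then average what remains.
    kept = [s for s, t in zip(live, totals) if t <= 1.5 * middle]
    return sum(kept) / len(kept)
```

A wild reading accumulates a large total distance to the cluster and is cut, so the average stays inside the sensor's error range instead of being dragged toward the outlier.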

Note: Scan from Life is in no way affiliated with Microsoft. We are an entirely separate company whose product depends on a Microsoft-owned product.