How to build a holodeck, Week 38 - Kinect textured mesh

This week, I’ve been texturing up the mesh from the Kinect texture feed. I’ve also resolved a load of issues with SharpChisel’s marching cubes.

TL;DR

[Sketchfab embed: Week 38 - Kinect Texture Scan]

Meshing issues

Meshing performance - memory and optimisation

There are two issues with SharpChisel’s meshing that I’ve yet to resolve - both have solutions, both are expensive. The first is that the performance leaves something to be desired, especially when scanning at 1cm resolution. I wrote most of the code using generic collections, which makes it lovely to read and reason about, but the sheer number of allocations I was making was killing the GC. I’ve spent a day making some of the storage persistent arrays and switching back to old-school for loops, which has really helped.

Scanning at 2cm or above now runs in real time with very few hitches. Scanning at 1cm still gives me a huge GC spike due to the sheer number of things in the scene - with the Kinect camera setup I’m testing on, the view frustum holds 2000 or so chunks (each containing 4K voxels), so there are 8 million voxels to scan and process to generate the surface. I’ve still got a lot of improvements to make here, both to the size of the resulting mesh and the runtime performance on PC - it’s good enough to prototype with though, for now.
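As a rough illustration of the kind of change involved - a minimal C# sketch, where VoxelChunk, ChunkProcessor and the 16³ chunk size are stand-ins rather than SharpChisel’s real types - the trick is simply to allocate scratch storage once and iterate with plain for loops, so the GC never sees per-frame garbage:

```csharp
// Hypothetical sketch of the persistent-buffer rewrite. Names here are
// illustrative, not SharpChisel's actual API.
public class VoxelChunk
{
    public const int VoxelCount = 16 * 16 * 16; // 4K voxels per chunk
    public float[] Weights = new float[VoxelCount];
}

public class ChunkProcessor
{
    // Persistent scratch buffer, allocated once and reused for every chunk,
    // instead of building a new List<float> per chunk per frame.
    private readonly float[] _scratch = new float[VoxelChunk.VoxelCount];

    public void ProcessChunk(VoxelChunk chunk)
    {
        // Old-school for loop over a flat array: no enumerator allocations,
        // no LINQ, no per-iteration garbage for the GC to collect.
        for (int i = 0; i < VoxelChunk.VoxelCount; i++)
        {
            _scratch[i] = chunk.Weights[i]; // ... real per-voxel work goes here
        }
    }
}
```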

I don’t need this to run happily on Android, as the main benefit of this prototype is being able to send the Android depth and texture feeds to the PC for processing, but the more I improve it on PC, the better local tests on Android will be, so it’s a win-win.

Mesh quality

The other issue is the quality of the mesh - missing polygons, spurious depth feed data and the like. The Kinect’s generated texture is lovely, and the camera deals very well with the light levels in my test environment, which is a bonus. Most of the remaining issues with the mesh surface are down to missing data in the voxels: when I try to march a cube that has fewer than eight valid weights on its corners, I have to guess at any missing weights. I’ve put in a bit of code that will always generate cubes for one missing weight, but my attempts at guessing two or more missing weights occasionally give bad results, so for now I’ve turned that back off. Once I start capturing depth feeds from multiple angles, the quality of the resulting surface should improve massively, as I’ll have more valid data on the problem surfaces.
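Here’s a minimal sketch of that one-missing-corner repair, with illustrative names (TryRepair, the flat weight/valid arrays) rather than the actual SharpChisel code: estimate the single missing weight from the seven valid corners, and refuse to march anything with more than one gap:

```csharp
// A sketch of the "guess one missing corner weight" idea for marching cubes.
// weights[i] is the TSDF value at corner i of the cube; valid[i] says whether
// the depth feed actually gave us a sample there.
public static class CornerRepair
{
    public static bool TryRepair(float[] weights, bool[] valid)
    {
        int missing = -1, missingCount = 0;
        float sum = 0f;

        for (int i = 0; i < 8; i++)
        {
            if (valid[i]) { sum += weights[i]; }
            else { missing = i; missingCount++; }
        }

        // Guessing two or more corners proved too unreliable, so those cubes
        // are skipped entirely (matching the behaviour described above).
        if (missingCount > 1) return false;

        if (missingCount == 1)
            weights[missing] = sum / 7f; // crude estimate from the 7 valid corners

        return true;
    }
}
```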

The 1cm scan looks much nicer than the 2cm scan above, but unfortunately I’m not able to upload it to Sketchfab via the Unity exporter - probably because something is too big (the zip archive of the Collada file is about 90MB). I’m still not able to upload the Collada file to Sketchfab directly either, so I’ll need to look at fixes for that workflow at some point.

The way I currently project the textures onto the mesh means I can’t triangle-strip it, and that leaves tiny seam lines all over the place when rendering. I’m not certain I can easily fix this, but it’s visually annoying to me. There’s already a bit of hackery in there to address it, but it’s not quite up to production quality yet.
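For anyone unfamiliar with projective texturing, this is roughly the idea (a simplified sketch, not my exact code - ProjectUVs and the use of a Unity Camera to stand in for the Kinect are illustrative): each vertex gets its UV by being projected through the capturing camera, so UVs are tied to a particular camera pose rather than to the surface, which is the sort of thing that gets in the way of sharing vertices between triangles:

```csharp
using UnityEngine;

// Sketch of planar texture projection: project each world-space vertex through
// the capturing camera to get a UV into that camera's colour frame.
public static class CameraProjectionUV
{
    public static Vector2[] ProjectUVs(Vector3[] worldVertices, Camera kinectCamera)
    {
        var uvs = new Vector2[worldVertices.Length];
        for (int i = 0; i < worldVertices.Length; i++)
        {
            // Viewport space already runs 0..1 across the camera image,
            // which is exactly the range we need for texture coordinates.
            Vector3 vp = kinectCamera.WorldToViewportPoint(worldVertices[i]);
            uvs[i] = new Vector2(vp.x, vp.y);
        }
        return uvs;
    }
}
```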

Long term, running a different meshing solution over the end result of the full TSDF will probably be the correct way to approach this, so making the marching cubes output much better isn’t top priority right now - ensuring that the data set is correct in the first place is probably more important (so, removing bad weights and the like).

I also need to put a bit of a plan in place for working with non-live datasets - so buffering the feeds, persisting the raw data, loading it back in - that sort of thing. It’s not massively important right now, but it would really help debugging in the near term, and it also means I can take a raw dataset and improve the end result later - a bit like storing the negatives from your camera and making a better print. This will definitely become more important once I start thinking seriously about non-static data.
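Something like this would do for the “negatives” - a minimal sketch of a frame recorder, where the file layout (timestamp, length, raw millimetre depths) is just an assumption I’ve made for illustration, not an existing format:

```csharp
using System.IO;

// Sketch of persisting raw depth frames so a capture session can be
// replayed offline later, independent of the live Kinect feed.
public class DepthFrameRecorder
{
    private readonly BinaryWriter _writer;

    public DepthFrameRecorder(string path)
    {
        _writer = new BinaryWriter(File.Open(path, FileMode.Create));
    }

    public void WriteFrame(double timestamp, ushort[] depthMillimetres)
    {
        _writer.Write(timestamp);
        _writer.Write(depthMillimetres.Length);
        for (int i = 0; i < depthMillimetres.Length; i++)
            _writer.Write(depthMillimetres[i]);
    }

    public void Close() => _writer.Close();
}
```

Reading it back is the mirror image with a BinaryReader, which would also make it trivial to step through a capture frame by frame in the debugger.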

Working with data directly on the PC

Oh man, this has been so much easier to debug than it was when I created the original Tango scanner. Back then, half of my issues turned into day-long debugging sessions, because the turnaround on capturing and diagnosing anything took minutes. I can now debug single chunks in the data set, look at their SDF visually, and breakpoint the code when I think something dubious is happening. All of this means I’m only back to where I was, but progress should be hugely faster on this platform. The performance of SharpChisel may be rubbish compared to the Chisel implementation on Android, but I can worry about that later. I also have a much better understanding now of some of the artifacts I was seeing in the Tango Chisel implementation, and hopefully I have fixes for a few of them.
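The visual SDF inspection is roughly this sort of thing - a minimal Unity sketch where ChunkSize, VoxelSize and the flat Sdf array are illustrative stand-ins for the real chunk data:

```csharp
using UnityEngine;

// Sketch of in-editor SDF inspection: draw each voxel of a selected chunk as a
// gizmo cube, coloured by the sign of its SDF value, fading with distance.
public class SdfChunkDebugView : MonoBehaviour
{
    public const int ChunkSize = 16;
    public float VoxelSize = 0.01f; // 1cm voxels
    public float[] Sdf = new float[ChunkSize * ChunkSize * ChunkSize];

    private void OnDrawGizmosSelected()
    {
        for (int z = 0; z < ChunkSize; z++)
        for (int y = 0; y < ChunkSize; y++)
        for (int x = 0; x < ChunkSize; x++)
        {
            float d = Sdf[(z * ChunkSize + y) * ChunkSize + x];
            // Red outside the surface, green inside; alpha fades with distance.
            Gizmos.color = d >= 0f
                ? new Color(1f, 0f, 0f, Mathf.Clamp01(1f - d))
                : new Color(0f, 1f, 0f, Mathf.Clamp01(1f + d));
            Vector3 p = transform.position + new Vector3(x, y, z) * VoxelSize;
            Gizmos.DrawCube(p, Vector3.one * VoxelSize * 0.9f);
        }
    }
}
```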

Next week …

Sending the Kinect texture feed over the network, and then the Tango texture feed. It’s going to be fun!

Written on November 11, 2016