How to build a holodeck, Week 31

This week - Pointclouds and preparation for networking. I’ve also been building a new PC.

Pointclouds

My meshing process so far has used the depth image generated by the Kinect SDK. This image is 512 by 424 pixels, with each pixel holding either an invalid value (outside the view frustum, for example) or a depth somewhere between 0.5m and 4.5m (give or take).

This gives about 217K potential depth values to scan through. On average, about 50-60K of these are invalid simply due to how the Kinect camera works - anything off the main camera centre axis starts to give dodgy values, which you can see in the following gfy:

The centre of the image is pretty stable, but the corners are very ropey, fading off to virtually no data at all in the bottom corners. There’s also a lot of randomness in the image, for a whole host of reasons.

This depth image is, in effect, a very dense point cloud. And (as you can see above) it’s very noisy and full of errors. How much of this data can I simply throw away, and still get relevant results?

The first day or two this week were spent finding out. I converted the image into a point cloud by sampling a random set of points from the depth image and converting them into points in camera space, then used those points to generate the voxelised TSDF.
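For illustration, here's roughly what that sampling step looks like. This is a minimal Python/NumPy sketch rather than my actual implementation; the pinhole intrinsics are placeholder numbers (the real mapping comes from the SDK's coordinate mapper), and the function name is made up:

```python
import numpy as np

# Placeholder pinhole intrinsics for the 512x424 depth camera; the real
# mapping comes from the SDK's coordinate mapper, these are only illustrative.
FX, FY = 365.0, 365.0
CX, CY = 256.0, 212.0

def sample_point_cloud(depth_mm, n_samples=4096, rng=None):
    """Pick a random subset of valid depth pixels and lift them into camera space.

    depth_mm: (424, 512) array of depths in millimetres, 0 where the sensor
    returned nothing. Returns an (n, 3) array of camera-space points in metres.
    """
    rng = rng or np.random.default_rng()
    # Keep only pixels inside the usable range (~0.5m to 4.5m).
    valid_v, valid_u = np.nonzero((depth_mm >= 500) & (depth_mm <= 4500))
    if valid_v.size == 0:
        return np.empty((0, 3))

    pick = rng.choice(valid_v.size, size=min(n_samples, valid_v.size), replace=False)
    u, v = valid_u[pick], valid_v[pick]
    z = depth_mm[v, u].astype(np.float32) / 1000.0   # mm -> metres

    # Simple pinhole back-projection into camera space.
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.stack([x, y, z], axis=1)
```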

Week 31 - Kinect depth image to 4096 point cloud

This video shows the results of taking 4096 points per pass (instead of the original 217K pixels). You can see that it converges pretty quickly on a good result due to how the weighting works. In a few seconds you can clearly see the shape of the scene, and in under 10 seconds it’s basically as good as the original “every pixel in the depth image” version was after a few seconds.
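The "weighting" here is the standard running weighted average used in TSDF fusion; assuming that's what's happening under the hood, each voxel update looks roughly like this sketch (names are mine, not the actual code):

```python
def integrate_sample(voxel, new_dist, new_weight=1.0, max_weight=64.0):
    """Fold one truncated-distance observation into a voxel.

    voxel is a (distance, weight) pair. Every pass only ever adds weight, so
    sparse 4K-point subsets from successive frames keep averaging into the
    same voxels, and the sparse version converges towards the dense result.
    """
    dist, weight = voxel
    merged_dist = (dist * weight + new_dist * new_weight) / (weight + new_weight)
    merged_weight = min(weight + new_weight, max_weight)
    return merged_dist, merged_weight
```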

I’ve also spent a few days this week optimising (mostly batching things into small timeslices rather than making major algo changes). I’m running a depth-to-voxel pass every 100ms or so, and a voxel-to-mesh pass every 110ms or so (staggered just so they don’t clash too much). The profiler shows that this is running mostly under 16ms now, with the occasional spike up to 33ms (normally when both passes hit on the same frame). This is much closer to the kind of ballpark I want to be playing in, although the side effect of only processing small chunks of the data is obviously the latency in getting accurate results. If you are doing some kind of scan of static geometry, this is probably a perfectly fine way to do it, but it’s not going to work so well for dynamic geometry.
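In the actual project this is presumably driven by the engine's own update loop; the standalone sketch below just shows the shape of the staggering, plus a simple budget guard so the mesh pass skips a frame rather than spiking when both timers land together. The pass functions are stand-ins for the real batched work:

```python
import time

DEPTH_PERIOD = 0.100   # depth-to-voxel pass roughly every 100ms
MESH_PERIOD = 0.110    # voxel-to-mesh pass roughly every 110ms, deliberately staggered
FRAME_BUDGET = 0.016   # target: stay under ~16ms of work per frame

def run(depth_to_voxel_pass, voxel_to_mesh_pass):
    """Tick both passes on offset periods so they rarely land on the same frame."""
    now = time.monotonic()
    next_depth = now
    next_mesh = now + MESH_PERIOD / 2   # offset the second timer from the first
    while True:
        frame_start = time.monotonic()
        if frame_start >= next_depth:
            depth_to_voxel_pass()
            next_depth += DEPTH_PERIOD
        # Only run the mesh pass if there's still budget left this frame.
        if time.monotonic() >= next_mesh and time.monotonic() - frame_start < FRAME_BUDGET:
            voxel_to_mesh_pass()
            next_mesh += MESH_PERIOD
        # Sleep off whatever is left of the frame.
        remaining = FRAME_BUDGET - (time.monotonic() - frame_start)
        if remaining > 0:
            time.sleep(remaining)
```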

The pointcloud scanning process is going to be much more useful for Tango, as that’s the format the data comes in (rather than a depth image buffer like the Kinect’s). I’ve got this working live on device now, but it’s so slow and expensive (both performance- and memory-wise) that I can’t get it running long enough to get a decent video! Hopefully that will improve radically next week.
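One nice side effect of switching to points is that both devices can feed the same fusion path: the Kinect depth image gets randomly sampled into camera-space points (as in the earlier sketch), while Tango already hands over a point buffer. A rough sketch of that shared path; the pose handling and the `tsdf` interface here are hypothetical, purely for illustration:

```python
import numpy as np

def to_world(points_cam, pose):
    """Transform an (n, 3) block of camera-space points by a 4x4 camera-to-world pose."""
    homog = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (homog @ pose.T)[:, :3]

def fuse_points(points_cam, pose, tsdf):
    """Shared fusion step: Kinect-sampled points and Tango-native points both land here.

    tsdf.integrate_point is a made-up stand-in for the voxel update (the
    weighted average shown earlier), not a real API.
    """
    for point in to_world(points_cam, pose):
        tsdf.integrate_point(point)
```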

New PC

On Wednesday, my new case and motherboard arrived, which were the final parts of my new PC. Time to get building!

The spec:

  • Intel i5 6600K
  • ASRock Z170 Extreme4 motherboard (which appears to be borked, replaced with an Asus Z170 Pro)
  • 32GB Corsair Vengeance LPX 3000MHz DDR4 (2 * 16GB sticks)
  • 1TB Samsung 850 Evo SSD
  • EVGA Supernova 750W PSU
  • Zotac GeForce GTX 1070 FE GPU
  • Fractal Design Define R5 case
  • Cooler Master Hyper 212 Evo CPU cooler and fan

I have built a lot of PCs in my time (certainly in the hundreds now) so, in theory, I know what I’m doing, and I have a good idea of what I want out of a build - great performance (but not so bleeding edge that I’m paying a massive premium for it), quiet and stable.

I spent all of Wednesday afternoon and a big chunk of the evening trying to get the damn thing to work, and I just could not get the machine to successfully pass POST. Normally this is indicative of human error somewhere, so I stripped it back to components and rebuilt the whole thing three times, each time with the same result. I made the typical mistake of doing a full tidy build before POSTing the first time, which cost me an hour or two that I didn’t need to spend - I should know better by now.

Finally I had to admit defeat, and come to the conclusion that something wasn’t playing nicely. One should be able to get a machine to POST with just a CPU and some memory sticks in the powered mobo, and I wasn’t even able to do that, which normally points the finger at one of two culprits - the motherboard, or the memory.

As the motherboard wasn’t my original choice (which was the Asus Z170 Pro), I decided to bite the bullet and put in a next-day order for the Asus motherboard, which was back in stock. I’m very glad I did: when Thursday evening rolled around, I built the machine back up again, and it passed POST first time.

The Fractal Design case is lovely. There are lots of ducting and cable layout tools, which means the motherboard can be cleanly accessed. It’s quiet, and with the 212 Evo the CPU is nice and cool. I’d take a photo, but it’s basically a big black box, which is just how I like it. No bright lights, no pimpage - just a quiet, unassuming monolith.

The new box is hitting a Geekbench score of 5083 single core, 14986 multicore. This is using Geekbench 4, and all the numbers I’m used to are Geekbench 3 numbers, so I don’t know if it’s valid to directly compare them, but they are certainly the best numbers I’ve seen in a while! For reference, my old box is running 3944 / 12433, and that’s a few years old - an i7 3770K on a Z77 Asus motherboard.

By the looks of it, the new machine isn’t streets ahead of my old box, but it came in a lot cheaper (under 1.5K GBP all in, which for a bespoke build ain’t too bad). It’s got a decent set of USB 3.1 and 3.0 ports, and loads of space for upgrades and sticking things in it if I need them later - I’m expecting to use this as a backup development box, so it needs to be VR-spec and a good number cruncher.

Monitor woes

I almost forgot to mention this, but reading my first draft, I remembered spending a good chunk of Thursday trying to get my monitor to work on my original box. During testing, I’d swapped over to using my main monitor (a BenQ 3200) with a DisplayPort cable, as I wasn’t getting any video signal on my test monitor (a crappy old monitor with DVI and no HDMI or DP). At which point, the BenQ stopped displaying anything via DisplayPort (or at least, nothing that I could see at that point). It worked (connected to my old box) via HDMI and DVI, but no DisplayPort output worked.

This led to pulling out all display connections (including all the various headsets) to try and diagnose what the problem was, and a lot of googling to see if anyone else had seen similar issues. Windows was showing two connections, and that the BenQ was identified as a device, but I couldn’t select it as an output in the display settings, and I couldn’t get anything on the screen (the monitor itself said no input, rather than no cable connected, which was an interesting data point).

Finally, during one of my many reboot cycles, I noticed that I did get stuff on screen during the POST / BIOS time window - the screen went black only once Windows started loading. I’m still not sure why, but after trying everything I could find via search results (disabling DDC/CI, disabling monitor auto-rotation detection, ensuring I had DP 1.2 selected, and more things I can’t recall) I finally got results by selecting “mirror displays” and then separating them back out again.

Sometimes, computers are a massive pain in the arse for no good reason. This week has had two days full of that, two days I’ll never get back.

Tango pointclouds

Once I finally got my new box up and running, I switched back over to getting the Tango device feeding point cloud data into the meshing process. This is (just) working now, but has loads of problems and issues, mostly memory-related. My previous pointcloud tests were using an API in the Tango SDK that is now deprecated (like so much of my “old” code - here I am, 6 months later, and my code is old enough to no longer work on device). This meant a few hours re-learning how to talk to the device to get the point cloud data back again, and lots more builds and logging until I got some form of visible result.

The Depthfeed project I’m working in has no UI to speak of, so I’ll need to put in some on-device buttons for turning things on and off (like the scanning process itself, connecting to a server, etc.). All of this is work around the edges of the actual project I’m trying to complete, but they’re all steps in the right direction, I guess.

As the goal of the next few weeks is to pipe the feeds over the network (rather than do all the heavy lifting directly on device) I’m not going to worry too much about performance or on-device meshing stability - I just need the feed itself to be stable, the meshing can come later.

The Tango device just had another SDK upgrade, which I’m hoping to avoid. Part of the upgrade was an over-the-air update to the device, though, and I just know I’ll accidentally accept that at some point on one of my devices, which will undoubtedly mean I need to forge ahead with the upgrade. Fingers crossed that this one works without breaking anything important along the way.

Next week …

Next week I’ll hopefully be getting some depth and pointcloud feed data coming in over the network, at which point I can start to aggregate depth feed info from multiple Kinect and Tango devices. That’s going to be interesting to me, and hopefully to you too!

Written on September 24, 2016