How to build a holodeck, week 35

This week I’ve got the networking issues mostly resolved, and point clouds coming over the network from the Tango device to the PC.

TL;DR

[Video: Week 35 - Tango point clouds over the network]

Networking and serialization, redux

Another few days' work on the networking and serialization systems, and I've managed to get a self-describing object serialized via FlatBuffers and sent over the LAN, using Unity's transport layer, to my main PC for meshing and display.
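
For the curious, the sending side boils down to something like this - the PointCloudFrame schema and class names here are illustrative stand-ins, not my actual code:

```csharp
// Hypothetical FlatBuffers schema (pointcloud.fbs) - names are illustrative:
//
//   table PointCloudFrame {
//     timestamp: double;
//     points:    [float];   // x,y,z triples
//   }
//   root_type PointCloudFrame;

using FlatBuffers;
using UnityEngine.Networking;

public static class FrameSender
{
    // Serialize a point cloud into a FlatBuffer and push it over the
    // transport layer. hostId/connectionId/channelId come from the
    // NetworkTransport setup elsewhere.
    public static void SendFrame(float[] xyz, double timestamp,
                                 int hostId, int connectionId, int channelId)
    {
        var builder = new FlatBufferBuilder(1024);
        var points = PointCloudFrame.CreatePointsVector(builder, xyz);

        PointCloudFrame.StartPointCloudFrame(builder);
        PointCloudFrame.AddTimestamp(builder, timestamp);
        PointCloudFrame.AddPoints(builder, points);
        builder.Finish(PointCloudFrame.EndPointCloudFrame(builder).Value);

        byte[] payload = builder.SizedByteArray();
        byte error;
        NetworkTransport.Send(hostId, connectionId, channelId,
                              payload, payload.Length, out error);
    }
}
```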

The video above basically describes the results of the last couple of days. There are a few more things I'd like to do before I move on to the hard work - currently I'm only opening one socket, so I can't have more than one remote camera working at once. I'd like to fix that before I do anything else major; it would be awesome to have two Kinects and two Tango devices feeding into the same TSDF at the same time.
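
For what it's worth, the transport layer itself supports this on one host - it's just my setup code that doesn't. A sketch of roughly what the fix looks like (port, channel and connection counts are placeholders):

```csharp
using UnityEngine.Networking;

public static class ReceiverSetup
{
    // One host, many connections: the maxConnections argument on the
    // topology is what lets several remote cameras feed in at once.
    public static int Start()
    {
        NetworkTransport.Init();

        var config = new ConnectionConfig();
        byte channelId = config.AddChannel(QosType.ReliableFragmented);

        var topology = new HostTopology(config, 4); // up to 4 devices
        int hostId = NetworkTransport.AddHost(topology, 7777);

        // Each connecting camera then gets its own connectionId via a
        // ConnectEvent from NetworkTransport.Receive, so frames can be
        // routed per-device into the shared TSDF.
        return hostId;
    }
}
```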

There's also a variety of niggles with how much data I'm currently sending - I'm getting a slew of errors from Unity's NetworkTransport.Send, and there's not much on the googles about how to fix them. It's not stopping most of the packets from being sent, though, so I can live with it for now.
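
One thing that does help with diagnosing them: the error byte that Send hands back maps onto the NetworkError enum, so it's worth at least decoding and logging it. Something like:

```csharp
using UnityEngine;
using UnityEngine.Networking;

public static class SafeSend
{
    // Wraps NetworkTransport.Send and decodes the error byte into the
    // NetworkError enum, so failures at least say *why* they failed
    // (NoResources, MessageToLong, ...).
    public static bool Send(int hostId, int connectionId, int channelId, byte[] payload)
    {
        byte error;
        bool ok = NetworkTransport.Send(hostId, connectionId, channelId,
                                        payload, payload.Length, out error);
        if ((NetworkError)error != NetworkError.Ok)
            Debug.LogWarning("NetworkTransport.Send: " + (NetworkError)error);
        return ok;
    }
}
```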

I need to get some more compact representations for the depth and texture buffers - I'm looking at a max of 60K per "packet" currently, which isn't going to be enough for an uncompressed depth or texture frame. I might fix the networking so I can send larger buffers, or I might look at compressing or chunking up the depth and texture frames, or both. None of the solutions is going to be super-quick to do, but they're all pretty trivial to think about.
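
If I go the chunking route, the shape of it is simple enough - something like this hypothetical splitter, with a small header so the receiver can reassemble frames (sizes and layout are just a sketch):

```csharp
using System;
using System.Collections.Generic;

public static class Chunker
{
    const int MaxChunkPayload = 60 * 1024 - 12; // leave room for a 12-byte header

    // Split one large depth/texture frame into packets tagged with
    // (frameId, index, count) so the receiver can reassemble in order.
    public static List<byte[]> Split(byte[] frame, int frameId)
    {
        int count = (frame.Length + MaxChunkPayload - 1) / MaxChunkPayload;
        var chunks = new List<byte[]>(count);
        for (int i = 0; i < count; i++)
        {
            int offset = i * MaxChunkPayload;
            int size = Math.Min(MaxChunkPayload, frame.Length - offset);
            var packet = new byte[12 + size];
            BitConverter.GetBytes(frameId).CopyTo(packet, 0);
            BitConverter.GetBytes(i).CopyTo(packet, 4);
            BitConverter.GetBytes(count).CopyTo(packet, 8);
            Buffer.BlockCopy(frame, offset, packet, 12, size);
            chunks.Add(packet);
        }
        return chunks;
    }
}
```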

FlatBuffers seems pretty decent from a performance perspective, even if I've had to write even more boilerplate than the PUN Streams to get things sending correctly. The plus side is that I now have explicit descriptions of how to send and receive everything, and serialization and dispatch take a couple of milliseconds tops, instead of the tens or hundreds I was seeing previously with JSON or BinaryFormatter. And it actually works!
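
The receiving side is similarly mechanical - roughly this, again using the illustrative PointCloudFrame names from above:

```csharp
using FlatBuffers;
using UnityEngine.Networking;

public static class FrameReceiver
{
    static readonly byte[] recvBuffer = new byte[64 * 1024];

    // Pump the transport layer once; reading a FlatBuffer is zero-copy,
    // GetRootAs* just wraps the received bytes in accessors.
    public static void Pump()
    {
        int hostId, connectionId, channelId, receivedSize;
        byte error;
        NetworkEventType evt = NetworkTransport.Receive(
            out hostId, out connectionId, out channelId,
            recvBuffer, recvBuffer.Length, out receivedSize, out error);

        if (evt == NetworkEventType.DataEvent)
        {
            var frame = PointCloudFrame.GetRootAsPointCloudFrame(new ByteBuffer(recvBuffer));
            // frame.Timestamp, frame.Points(i) and frame.PointsLength read
            // straight out of the buffer - no intermediate objects, which
            // is where the couple-of-milliseconds figure comes from.
        }
    }
}
```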

Multiple cameras

Once I've fixed the networking to allow more than one device feed at the same time, I need to fix the issues with the cameras not being aligned. I've been thinking about using the point clouds themselves to do alignment, or trying to match against an existing surface, but that all seems pretty complex (and isn't going to work if the cameras can't see much of the same scene).

My current line of thinking is that I should probably manufacture a doohickey that can be seen from both the main camera and the camera I'm trying to calibrate (assuming it hasn't got Tango tracking), and then do the math to convert from doohickey space into a shared camera space. Expect more woodwork in the near future, and possibly some painting. I'm thinking a three-rod contraption with three orthogonal extensions of a known size, painted red/green/blue, that I can measure in the colour buffer and hit against in the depth buffer. The combination of that data should hopefully give me a fairly close positional matchup and a good rotational one.
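
The math at the end is the easy bit. Assuming I can recover the rod directions and the junction point for each camera, it's roughly this (Unity types; the function names are hypothetical):

```csharp
using UnityEngine;

public static class RigCalibration
{
    // Build the doohickey-to-camera transform from one camera's view of
    // the rig: 'origin' is the rod junction hit in the depth buffer, and
    // red/green/blue are the rod directions recovered from the colour
    // buffer (mutually orthogonal by construction).
    public static Matrix4x4 DoohickeyToCamera(Vector3 origin,
                                              Vector3 red, Vector3 green, Vector3 blue)
    {
        var m = Matrix4x4.identity;
        m.SetColumn(0, red.normalized);
        m.SetColumn(1, green.normalized);
        m.SetColumn(2, blue.normalized);
        m.SetColumn(3, new Vector4(origin.x, origin.y, origin.z, 1f));
        return m;
    }

    // If both cameras can see the doohickey, the transform taking points
    // in camera B's space into camera A's space falls out directly.
    public static Matrix4x4 CameraBToCameraA(Matrix4x4 doohickeyToA,
                                             Matrix4x4 doohickeyToB)
    {
        return doohickeyToA * doohickeyToB.inverse;
    }
}
```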

Texturing

Once I've got that working, it'll be on to texturing, hopefully using my existing backprojection and atlasing code. That's one for a few weeks away yet, but it will make a huge difference to how cool this looks.

Motion and persistence

Something I've been thinking a lot about, and still have no great solution for, is moving entities in the scene. Options include some kind of persistence measure in the TSDF voxels (sketched below); sharding space into a static area (the known environment) and a dynamic area (the bit you walk around in), and direct-meshing the dynamic bit instead of integrating it into a TSDF; and (going way into what-if space) object recognition and feature extraction, like the Fusion4D/Holoportation tech.
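
To make the first of those concrete, here's roughly what a persistence measure in the voxels might look like - all the names and decay constants here are speculative:

```csharp
using UnityEngine;

// One way the persistence idea might look: each TSDF voxel carries a
// confidence that decays when new observations disagree with the stored
// surface, so moving objects fade out instead of smearing.
public struct TsdfVoxel
{
    public float Sdf;         // truncated signed distance
    public float Weight;      // standard integration weight
    public float Persistence; // how long this surface has been stable

    public void Integrate(float observedSdf, float truncation)
    {
        bool agrees = Mathf.Abs(observedSdf - Sdf) < 0.5f * truncation;
        Persistence = agrees ? Persistence + 1f : Persistence * 0.5f;

        if (Persistence < 1f)
        {
            // Surface has moved: reset rather than blend.
            Sdf = observedSdf;
            Weight = 1f;
        }
        else
        {
            // Standard weighted-average TSDF update.
            Sdf = (Sdf * Weight + observedSdf) / (Weight + 1f);
            Weight = Mathf.Min(Weight + 1f, 64f);
        }
    }
}
```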

This is probably the area where the most research is required, so I’m not going to rush into it … I think there’s a potential set of products here even without dynamic content, and there’s still a huge amount of work before I get to this point, but it’s nice to be thinking about it.

Next week …

More networking and hopefully the beginnings of calibration. And maybe some half-term holiday time with the family ;)

Written on October 21, 2016