Week 48 - A Reproducible Case

This week I have been mostly replaying and occluding things.

TL;DR: Week 48 - replaying feeds

Recording and playback for streams

When I was writing games, one of my favourite things to ask the QA folks was “What’s the repro case?”. Trying to fix bugs without knowing how to make the bug happen is just a fool’s errand. Even finding a simple list of steps to follow can be pretty tricky sometimes. Last week’s quandary was “how can I reproduce the same problems reliably?”

It’s all well and good taking a scan, at which point you find the inevitable issues - but how do you get the same issues to show up in the next scan? Sometimes they will, sometimes they won’t. There’s only one guarantee: I will need to get out of my comfy chair and walk around the room staring at the walls again for the 100th time.

Well, not any more! I can now record the scan raw streams (pointcloud and texture) while I’m creating the model, and play them back on device. I can also chuck them over to my PC and play them back there, which means I can do things like step through frame by frame to see what’s changed, re-order the frames, speed up the replay to get back to a good state really quickly, that sort of thing. I’ve found and fixed a few issues already (unrelated to the occlusion issue) which have bumped up the quality slightly.
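For the curious, here’s a minimal sketch of the record/replay idea - not the actual Evryway code. The Frame layout and serialisation below are hypothetical stand-ins for the real pointcloud and texture stream types; the point is that once the frames are back in memory as a plain vector, stepping, re-ordering and faster-than-realtime playback all come for free.

```cpp
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Hypothetical stand-in for one raw frame off the sensors.
struct Frame {
    uint64_t timestampUs;          // capture time, microseconds
    std::vector<uint8_t> depth;    // raw depth/pointcloud frame bytes
    std::vector<uint8_t> colour;   // raw camera image bytes
};

// Append one frame: [timestamp][depth size][depth bytes][colour size][colour bytes].
void recordFrame(std::ofstream& out, const Frame& f) {
    auto write32 = [&](uint32_t v) { out.write(reinterpret_cast<const char*>(&v), 4); };
    out.write(reinterpret_cast<const char*>(&f.timestampUs), 8);
    write32(static_cast<uint32_t>(f.depth.size()));
    out.write(reinterpret_cast<const char*>(f.depth.data()), f.depth.size());
    write32(static_cast<uint32_t>(f.colour.size()));
    out.write(reinterpret_cast<const char*>(f.colour.data()), f.colour.size());
}

// Read the whole recording back. Once the frames are in a vector, stepping
// frame by frame, re-ordering, or replaying faster than realtime is trivial.
std::vector<Frame> loadRecording(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    std::vector<Frame> frames;
    Frame f;
    while (in.read(reinterpret_cast<char*>(&f.timestampUs), 8)) {
        uint32_t n = 0;
        in.read(reinterpret_cast<char*>(&n), 4);
        f.depth.resize(n);
        in.read(reinterpret_cast<char*>(f.depth.data()), n);
        if (!in.read(reinterpret_cast<char*>(&n), 4)) break;
        f.colour.resize(n);
        if (!in.read(reinterpret_cast<char*>(f.colour.data()), n)) break;
        frames.push_back(f);
    }
    return frames;
}
```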

The downside to being able to replay scans is that I can see all of the problems I need to fix, and there’s a very big list. Here’s a gif of one of the most obvious problems I’m up against, which is the device drift.

I spent most of yesterday jumping into the code, trying to fix all the performance problems and bugs - but that’s a distraction I need to put off until next week. Focus, man!

Fixing a hole

Carrying on from where I left off last week, I’ve now got a working occlusion mask in place for each projected texture. The occlusion mask is effectively the depth frame coming from the depth sensor, but viewed from the camera image’s point of view (and those are not quite the same thing). Given that I want to be able to project an image from any old camera onto a mesh created from any old depth sensor, the colour image may not be remotely aligned with the depth data, so I need to build the mask from the scene geometry instead.
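In other words, the mask is just scene depth rendered from the colour camera’s pose. Here’s a minimal sketch of that idea, assuming simple pinhole intrinsics and a world-to-camera pose (both hypothetical stand-ins for the real types); for brevity it splats mesh vertices into a z-buffer, where the real thing would rasterise whole triangles.

```cpp
#include <limits>
#include <vector>

struct Vec3 { float x, y, z; };

// World-to-camera pose: row-major 3x3 rotation plus translation.
struct CameraPose { float r[9]; Vec3 t; };

// Simple pinhole intrinsics for the colour camera.
struct Intrinsics { float fx, fy, cx, cy; int width, height; };

Vec3 worldToCamera(const CameraPose& p, const Vec3& v) {
    return { p.r[0]*v.x + p.r[1]*v.y + p.r[2]*v.z + p.t.x,
             p.r[3]*v.x + p.r[4]*v.y + p.r[5]*v.z + p.t.y,
             p.r[6]*v.x + p.r[7]*v.y + p.r[8]*v.z + p.t.z };
}

// Build a per-pixel "nearest scene depth" buffer as seen from the colour
// camera. Pixels that nothing projects to stay at infinity (the holes).
std::vector<float> buildOcclusionMask(const std::vector<Vec3>& meshVerts,
                                      const CameraPose& pose,
                                      const Intrinsics& k) {
    std::vector<float> mask(k.width * k.height,
                            std::numeric_limits<float>::infinity());
    for (const Vec3& v : meshVerts) {
        Vec3 c = worldToCamera(pose, v);
        if (c.z <= 0.0f) continue;                 // behind the camera
        int px = static_cast<int>(k.fx * c.x / c.z + k.cx);
        int py = static_cast<int>(k.fy * c.y / c.z + k.cy);
        if (px < 0 || px >= k.width || py < 0 || py >= k.height) continue;
        float& d = mask[py * k.width + px];
        if (c.z < d) d = c.z;                      // keep the nearest surface
    }
    return mask;
}
```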

My previous texturing process didn’t create the mesh until it was ready to be textured. That’s a great optimisation, right up to the point where I need to see the mesh in the occlusion mask before I texture it! If the mesh isn’t present, then there’s nothing to occlude against, so the occlusion mask is full of holes.

My new method allows the mesh to be created with no texture. This can then be used to create the occlusion mask, which is then used when projecting the texture back onto the mesh. I’ve still got some timing issues to fix, and it’s still not remotely close to perfect, but it’s much improved over no occlusion at all.
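The projection-side test is then straightforward. Reusing the types from the sketch above, a mesh point only receives colour if it is (within some tolerance) the nearest surface the camera can see along that ray. The epsilon here is an assumed tolerance, and the real pipeline would run this per-fragment on the GPU rather than per-point on the CPU:

```cpp
// Reuses Vec3, CameraPose, Intrinsics and worldToCamera from the sketch
// above. A point passes if nothing in the mask sits nearer along its ray.
bool visibleFromCamera(const Vec3& worldPoint,
                       const CameraPose& pose,
                       const Intrinsics& k,
                       const std::vector<float>& occlusionMask,
                       float epsilon = 0.01f) {   // assumed tolerance, metres
    Vec3 c = worldToCamera(pose, worldPoint);
    if (c.z <= 0.0f) return false;                // behind the camera
    int px = static_cast<int>(k.fx * c.x / c.z + k.cx);
    int py = static_cast<int>(k.fy * c.y / c.z + k.cy);
    if (px < 0 || px >= k.width || py < 0 || py >= k.height) return false;
    // Occluded if the mask records a nearer surface at this pixel.
    return c.z <= occlusionMask[py * k.width + px] + epsilon;
}
```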

Here’s a scan prior to putting in the occlusion pass:

And here’s a scan with occlusion enabled. You can see that the area around the box (on the floor) is much improved, as is the area behind the red chair. Occurrences of randomly projected garbage are much reduced!

And to show off someone else’s work, here’s a scan of the Buddha by Travis Cheek.

Travis has done a great comparison of results from the various scanner apps available right now. I might be a bit biased, but I think Evryway Scanner is holding up pretty well so far.

Just imagine how awesome it’s going to look once I fix the colour balancing, drift and noise! And all the other things that are broken.

Next week …

Not sure - I’d like to get saving a bit more stable, which will require making some big blocking chunks of work run gracefully. I’ll probably need to fix up my work queue process for that.
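If it helps picture it, here’s a minimal sketch of the “big blocking work in small chunks” idea - a hypothetical work queue, not my actual one. Each job does a slice of work and yields, and the queue runs jobs against a per-frame time budget:

```cpp
#include <chrono>
#include <deque>
#include <functional>

class WorkQueue {
public:
    // A job does a small slice of work and returns true when finished.
    void push(std::function<bool()> job) { jobs_.push_back(std::move(job)); }

    // Call once per frame: run jobs until the time budget is spent.
    void tick(std::chrono::microseconds budget) {
        auto start = std::chrono::steady_clock::now();
        while (!jobs_.empty() &&
               std::chrono::steady_clock::now() - start < budget) {
            auto job = std::move(jobs_.front());
            jobs_.pop_front();
            if (!job()) jobs_.push_back(std::move(job));  // not done - re-queue
        }
    }

private:
    std::deque<std::function<bool()>> jobs_;
};
```

Ticked every frame with a millisecond or two of budget, a multi-second save spreads itself across frames instead of freezing the app.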

Or, there are the ever-present problems with the meshing. When is a box not a box? Whenever I reconstruct a pointcloud of a box, apparently.

Or a whole host of other things that I could fix. Fun times!

Written on January 20, 2017