Don't be fooled by the hype.
In the same way that some have taken hi-res scans of the Mona Lisa in every spectrum (visible, UV, etc.), there are companies taking these laser scanners and doing just the same - without the voxel bollocks.
At no point is that engine rendering "hundreds" of voxels in between every point that the laser scanner scanned. What they've done is taken several laser scans, merged them together to get an almost-complete 3D representation (filling in the backs of objects that a single scan can't see, etc.) and then found a method (dozens of "I'd do it this way"s spring to mind as I write) to merge them into a set of points, with colouration, that a modern graphics workstation can render a static scene from. There are ALREADY people doing this with laser scanners and converting the point data into vector geometry that you can plug straight into a conventional 3D engine.
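For the curious, the merging step is no great mystery. Here's a minimal Python sketch of how I'd do it, assuming the scanner poses are already known (every name in it is mine, not theirs):

```
import numpy as np

def merge_scans(scans, poses, cell=0.005):
    """Merge laser scans (each an Nx6 array: x, y, z, r, g, b) taken
    from different positions into one de-duplicated point cloud.
    `poses` are the known 4x4 rigid transforms of each scanner head;
    in practice you'd recover these with an alignment step like ICP."""
    merged = []
    for pts, pose in zip(scans, poses):
        # Transform each scan into a common world frame.
        world = pts[:, :3] @ pose[:3, :3].T + pose[:3, 3]
        merged.append(np.hstack([world, pts[:, 3:]]))
    cloud = np.vstack(merged)
    # Snap points to a grid and keep one per cell: de-duplicates the
    # overlap between scans and caps the memory footprint.
    keys = np.floor(cloud[:, :3] / cell).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return cloud[idx]
```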
They've just hyped up their way of doing it with some voxel ("3D pixel") bollocks. Watch the demos - you can't manipulate or see a single 3D pixel - because it's not there. The 3D pixel data no doubt existed in the merged laser scanner data, but it's just TOO LARGE to store, and they mention that themselves. All they did was do that, then cut out the hidden points (hidden surface removal - where have I heard that before?) and combine them with colour data from the laser scanners to give each point some kind of "colour" (i.e. a texture).
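And that "cut out the hidden pixels" step is nothing exotic either. The crudest version - my own sketch, not their method - just drops every voxel that's boxed in on all six sides:

```
def cull_hidden(voxels):
    """Keep only voxels with at least one exposed face.
    `voxels` is a set of (i, j, k) integer grid coordinates."""
    neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    visible = set()
    for i, j, k in voxels:
        # A voxel surrounded on all six sides can never be seen.
        if any((i + di, j + dj, k + dk) not in voxels
               for di, dj, dk in neighbours):
            visible.add((i, j, k))
    return visible
```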
To then get that into a streamable-from-a-hard-disk format, there's either an immense amount of cheating or an immense amount of bullshit. My guess is that they just put it into a compact format, stripped of the unnecessary information, and then plug that through a very high-end OpenGL workstation to render those shots. Because, at the end of the day, they haven't made their own graphics cards - they are still rendering data the same as everyone else. And if they are "cheating", they may well be unable to do this in anywhere near real time, and every single pixel change in the scene would require whole new data to be recompressed, optimised, polygonned, stored and sent to the card.
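As for the "compact format", my money would be on something like a sparse voxel octree: one occupancy byte per node, nothing at all stored for empty space, and it streams from disk in breadth-first order. A guess at what that looks like, not their actual code:

```
def encode_octree(voxels, depth):
    """Serialise a set of (x, y, z) voxel coords, each in [0, 2**depth),
    as a breadth-first stream of 8-bit child-occupancy masks."""
    stream = bytearray()
    nodes = [list(voxels)]              # the root node holds every voxel
    for level in range(depth):
        shift = depth - level - 1
        next_nodes = []
        for pts in nodes:
            children = [[] for _ in range(8)]
            for x, y, z in pts:
                # Which octant does this voxel fall into at this level?
                octant = ((x >> shift) & 1) \
                    | ((y >> shift) & 1) << 1 \
                    | ((z >> shift) & 1) << 2
                children[octant].append((x, y, z))
            # One byte of occupancy per node; empty children cost nothing.
            stream.append(sum(1 << i for i, c in enumerate(children) if c))
            next_nodes.extend(c for c in children if c)
        nodes = next_nodes
    return bytes(stream)
```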
There's more than a whiff of bullshit - even beyond the presenter's silly voiceover - about what they are claiming versus what they are doing. They couldn't lie. Not legally. But they aren't telling you the whole truth.
And, whenever I saw the "infinite detail" demos, I always wondered why they stopped at about the resolution a normal game stops at. At that point, even when they show it to you zoomed in, it looks blocky and you can see individual pixels - I suspect those are individual pixels of a texture on a vectorised surface generated from their data, but nobody but them can prove otherwise. And if that's the case, people have been doing this for decades. Almost any 3D scanner project has something like this - every computer vision student has knocked something similar up at some point in their career. How to get a 3D vector interpretation from 2D pixel data taken from multiple angles... it's a classic.
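If you've never seen that classic, it boils down to textbook linear triangulation: given matching pixels in two calibrated views, recover the 3D point they both project to. A minimal version (standard textbook notation, nothing to do with this engine):

```
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Recover a 3D point from matching pixel coordinates in two views.
    P1, P2 are the 3x4 camera projection matrices; uv1, uv2 are the
    matching (u, v) pixel coordinates of the same scene point."""
    (u1, v1), (u2, v2) = uv1, uv2
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.array([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    # The point is A's null vector, read off the SVD.
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]
```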
The proof of the pudding, as with all these things, is in the eating. If this is going to revolutionise games, check the reviews of the first game that uses it before you buy. If you're right to be sceptical, you've lost nothing. If you're wrong, all you've lost is a few days of pre-order on a game and a bit of pride.
You can't buy this. You can't use this. You (probably) can't write a game in this engine. So why hype it? And, more to the point, why believe the hype while all of that is still true?
Too much fancy posturing and hype and not enough actually getting stuff done. A handful of static scenes isn't impressive - have you ever seen ray-traced Quake or similar evolutions of existing game engines? It looked stunning. Nothing ever came of it, because it wasn't what you thought it was. By the time PCs were powerful enough, simpler 3D graphics techniques were wiping the floor with it.