Disclaimer: I'm the guy in the video.
The big difference between what we're doing and what's been done before is that we use one-to-many communication between emitters and sensors. Earlier systems use matched emitter/sensor pairs on opposite sides of the display to generate a series of parallel beams in both the x and y directions that a touch can interrupt.
By reading a large number of sensors for each infrared emitter, we generate a dense mesh of infrared light beams, which is what enables the sensor to detect multiple touches. Prior infrared systems using parallel beams suffer from ghost-touch ambiguities when multiple fingers are on the display; ours does not. This is the big differentiator between what's been done before and what we've done.
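To make the ghost-touch problem concrete, here's a toy Python sketch (not our actual code; the coordinates and grid are made up for illustration). In a parallel-beam grid, two fingers interrupt two x-beams and two y-beams, and the beam crossings yield four equally plausible touch points:

```python
# Toy illustration of ghost-touch ambiguity in a parallel-beam grid
# (invented coordinates, not the real sensor geometry).

def parallel_beam_candidates(touches):
    """All (x, y) points a parallel-beam grid cannot tell apart."""
    blocked_x = {x for x, _ in touches}   # interrupted vertical beams
    blocked_y = {y for _, y in touches}   # interrupted horizontal beams
    return sorted((x, y) for x in blocked_x for y in blocked_y)

real = [(1, 1), (3, 3)]
candidates = parallel_beam_candidates(real)
ghosts = [p for p in candidates if p not in real]
print(candidates)  # [(1, 1), (1, 3), (3, 1), (3, 3)]
print(ghosts)      # [(1, 3), (3, 1)] -- the two ghost touches
```

The grid sees which rows and columns are interrupted, but not which row pairs with which column, so it can't reject the two ghost points.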
Most SMART Boards and other commercial multi-touch sensors use two cameras in the corners of the screen (some use four), along with computer vision algorithms, to identify and track touches on the display. Our approach is different in that it generates a more complete visual hull of the interactive area than these systems can. With only two cameras, you can reliably track just two touches because of occlusion, whereas we can detect 20+ touch points with high reliability.
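To sketch why a dense one-to-many beam mesh gives a more complete visual hull, here's a hedged toy model (the edge geometry, beam count, and touch radius are all invented for illustration, not taken from our hardware). Each emitter reaches many sensors, so real touches are crossed by several interrupted beams, while ghost locations get almost no support:

```python
import math

def blocked(beam, point, r=0.3):
    """True if the emitter->sensor segment passes within r of point."""
    (x1, y1), (x2, y2) = beam
    px, py = point
    dx, dy = x2 - x1, y2 - y1
    # project the point onto the segment, clamped to its endpoints
    t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (x1 + t * dx), py - (y1 + t * dy)) <= r

# One-to-many: every emitter on the left edge reaches every sensor on
# the right edge, producing a dense mesh of crossing beams.
emitters = [(0.0, float(y)) for y in range(5)]
sensors  = [(4.0, float(y)) for y in range(5)]
beams = [(e, s) for e in emitters for s in sensors]

real = [(1.0, 1.0), (3.0, 3.0)]
shadow = [b for b in beams if any(blocked(b, t) for t in real)]

def support(p):
    """Number of interrupted beams that pass through point p."""
    return sum(1 for b in shadow if blocked(b, p))

# Real touch points are crossed by several interrupted beams; the ghost
# positions a parallel grid cannot reject get far less support.
for p in [(1.0, 1.0), (3.0, 3.0), (1.0, 3.0), (3.0, 1.0)]:
    print(p, support(p))
```

Because the beams cross at many angles rather than running in two parallel families, counting how many interrupted beams pass through each candidate point separates the real touches from the ghosts.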
More info can be found on our website: http://ecologylab.net/zerotouch/
The publications at the bottom of the page should help Slashdot readers understand the technical innovations a little better.