Was that a question?
I have to correct myself.
This video shows that it uses pattern-based triangulation, with no time-of-flight involved: http://www.youtube.com/watch?v=r7nRKU0nFxA
You're right, it uses pattern-based triangulation: http://www.youtube.com/watch?v=r7nRKU0nFxA
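For what it's worth, the principle behind such pattern-based (structured-light) systems is plain depth-from-disparity triangulation: a projected dot appears shifted in the camera image by an amount inversely proportional to its distance. A minimal sketch, using made-up numbers (the focal length and baseline here are illustrative assumptions, not actual Kinect parameters):

```python
# Depth-from-disparity triangulation, the principle behind
# pattern-based (structured-light) depth cameras.
# FOCAL_LENGTH_PX and BASELINE_M are illustrative assumptions.

FOCAL_LENGTH_PX = 580.0   # assumed IR camera focal length, in pixels
BASELINE_M = 0.075        # assumed projector-to-camera baseline, in meters

def depth_from_disparity(disparity_px: float) -> float:
    """Depth in meters from the pixel shift of a projected pattern dot
    relative to its position in a stored reference pattern."""
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

# A dot shifted by 20 pixels against the reference pattern:
print(depth_from_disparity(20.0))  # ~2.18 m
```

The key point is that depth resolution degrades with distance, since disparity shrinks hyperbolically, which is one way to tell a triangulation system from a TOF one in practice.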
Nonsense. Microsoft bought 3DV in 2008, who had demonstrated the ZCam at CES that year. The ZCam is indeed a true IR-based time-of-flight (TOF) 3D camera, as demonstrated in http://www.youtube.com/watch?v=6hDKaMvAFzA and many other videos. The Kinect is the direct descendant of the ZCam.
Look at the iFixit teardown. The Kinect has an IR projector, a TOF 3D camera and a color webcam.
I don't have a Kinect yet and haven't seen an in-depth examination by a researcher. But these things normally work not by projecting a pattern, but by emitting infrared light modulated at a long wavelength; each pixel then detects the phase offset of the incoming light.
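To make the phase-offset idea concrete: with continuous-wave TOF, distance follows directly from the measured phase shift and the modulation frequency. A minimal sketch, assuming a 20 MHz modulation frequency (an illustrative value, not a claim about any specific device):

```python
import math

C = 299_792_458.0        # speed of light, m/s
MOD_FREQ_HZ = 20e6       # assumed modulation frequency (20 MHz)

def distance_from_phase(phase_rad: float) -> float:
    """Distance in meters from the phase offset of the returning
    modulated IR light. The light travels out and back, hence the
    factor of 2 (folded into the 4*pi in the denominator)."""
    return C * phase_rad / (4 * math.pi * MOD_FREQ_HZ)

# The unambiguous range is half the modulation wavelength:
max_range = C / (2 * MOD_FREQ_HZ)        # ~7.49 m at 20 MHz
print(distance_from_phase(math.pi / 2))  # quarter-cycle shift, ~1.87 m
```

This also shows the classic trade-off of such systems: a lower modulation frequency extends the unambiguous range but reduces depth precision, and vice versa.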
I don't see any innovation here. Kinect and iPad are both just evolutionary steps. None of the concepts of these devices are in any form new.
I won't dwell on the iPad, that's not the point here. But where in the world have you been able to buy a 3D camera with skeletal pose estimation that works reliably enough to play video games with, let alone at this price? The Kinect doesn't have new technology... for cutting-edge researchers working in motion capture, robotics and the automotive industry. For the mass market, its technology is entirely new and absolutely revolutionary.
Heck, just the TOF infrared camera at that resolution alone is something that would have cost you a sweet $10,000 before Monday of this week.
What's the point of using simulated robots in a simulated environment? What's the point of having thousands of DOF to "play with"? Currently, most robots are not application platforms but toys. This is one of the very few robots that can actually help in developing working, robust autonomous robotic applications, and they're giving it away for free. That's not to be knocked.