Researchers Teach Computers To Perceive 3D from 2D
hamilton76 writes to tell us that researchers at Carnegie Mellon have found a way to allow computers to extrapolate three-dimensional models from two-dimensional pictures. From the article: "Using machine learning techniques, Robotics Institute researchers Alexei Efros and Martial Hebert, along with graduate student Derek Hoiem, have taught computers how to spot the visual cues that differentiate between vertical surfaces and horizontal surfaces in photographs of outdoor scenes. They've even developed a program that allows the computer to automatically generate 3-D reconstructions of scenes based on a single image. [...] Identifying vertical and horizontal surfaces and the orientation of those surfaces provides much of the information necessary for understanding the geometric context of an entire scene. Only about three percent of surfaces in a typical photo are at an angle, they have found."
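To give a feel for the idea (this is my own toy sketch, not the CMU system, and the thresholds are made up): even a single cue, a patch's vertical position in the image, already separates "ground" from "sky" from "wall" surprisingly often in outdoor photos. The actual research combines many such cues with trained classifiers.

```python
# Toy geometric-context guess from one hand-picked cue (illustrative only;
# the real system learns from many cues, not a hard-coded threshold).

def classify_patch(y_center, image_height):
    """Guess a patch's surface class from its normalized vertical position."""
    v = y_center / image_height  # 0.0 = top of image, 1.0 = bottom
    if v > 0.7:
        return "horizontal"  # support surface near the bottom, e.g. ground
    if v < 0.3:
        return "sky"
    return "vertical"        # mid-image, e.g. wall or tree trunk
```

A patch centered near the bottom of the frame gets tagged `horizontal`, one near the top `sky`, and the band in between `vertical`, which is roughly the prior the learned classifiers refine with color and texture cues.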
leaning tower (Score:3, Interesting)
Directly applicable to the car racing AI grand.... (Score:4, Interesting)
Imagine the Possibilities (Score:2, Interesting)
Typical photos? (Score:3, Interesting)
What typical photos are those? No faces, people, trees or any organic thing?
No cars? No roofs?
That's been possible for years... (Score:4, Interesting)
(MetaCreations also produced Poser, Bryce, and Carrara, all three of which are still alive and in use by the 3D hobbyist market.)
Using multiple camera angles... (Score:3, Interesting)
It uses a super-neat concept called "Geometric Hashing" which can be used to recognize an object regardless of size, rotation, or even partial occlusion.
Play with it yourself! (Score:4, Interesting)
Looks like some of the software they wrote to do this has been GPL'ed.
Re:Well... (Score:3, Interesting)
as I said, nontrivial.