Graphics Software

Stanford's New Website Converts Your Photos to 3D

An anonymous reader writes to tell us that Stanford has a new website that not only shows you how cool their new 3-d modeling system is, but actually allows you to give it a try with your own photos. The system can take a 2-d still image and estimate a detailed 3-d structure which you can navigate. "For each small homogeneous patch in the image, we use a Markov Random Field (MRF) to infer a set of "plane parameters" that capture both the 3-d location and 3-d orientation of the patch. The MRF, trained via supervised learning, models both image depth cues and the relationships between different parts of the image. Other than assuming that the environment is made up of a number of small planes, our model makes no explicit assumptions about the structure of the scene; this enables the algorithm to capture much more detailed 3-d structure than does prior art (such as Saxena et al., 2005, Delage et al., 2005, and Hoiem et al., 2005), and also gives a much richer experience in the 3-d flythroughs created using image-based rendering, even for scenes with significant non-vertical structure."
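For the curious, here is a minimal sketch of what a per-patch "plane parameter" can look like. This is hypothetical illustration code, not the authors' implementation; it assumes the common parameterization in which a single 3-vector alpha per patch encodes both depth and surface orientation, with depth along a unit viewing ray r given by d = 1 / (r . alpha).

```python
import numpy as np

# Hypothetical sketch of the "plane parameter" idea, not the authors' code.
# A patch's plane is encoded by a vector alpha in R^3 such that a pixel whose
# unit viewing ray is r lies at depth d = 1 / (r . alpha). The same vector
# encodes both things the summary mentions:
#   - 3-d location: the depth d of every pixel in the patch
#   - 3-d orientation: the patch normal, alpha / |alpha|

def patch_depth(alpha, ray):
    """Depth along the unit viewing ray `ray` for a plane with parameters `alpha`."""
    return 1.0 / np.dot(ray, alpha)

def patch_normal(alpha):
    """Unit normal of the patch encoded by `alpha`."""
    return alpha / np.linalg.norm(alpha)

# Example: a fronto-parallel patch 5 m away (normal pointing along +z).
alpha = np.array([0.0, 0.0, 1.0 / 5.0])
ray = np.array([0.0, 0.0, 1.0])      # pixel at the image center
print(patch_depth(alpha, ray))       # 5.0
print(patch_normal(alpha))           # [0. 0. 1.]
```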
  • Slashdotted (Score:3, Informative)

    by d3ac0n ( 715594 ) on Monday January 28, 2008 @05:53PM (#22213652)
    Aaaaaand it's already slashdotted.

    Wow. That was fast.
  • Games? (Score:5, Interesting)

    by webword ( 82711 ) on Monday January 28, 2008 @05:54PM (#22213670) Homepage
    Wow, can you imagine how cool this would be with respect to video games? Drop in some photos, crank up the customized first person shooter, and zoooom! You could even take photos or shots from movies and do the same thing (e.g., using Star Wars stills).
    • Re:Games? (Score:4, Insightful)

      by Stripe7 ( 571267 ) on Monday January 28, 2008 @06:25PM (#22214096)
About 20 years ago, when they colorized Casablanca, an office mate of mine was complaining about their ruining a perfectly good movie. I told him that he would be complaining even more when they used technology to make it in 3D. Seems that won't be too far away anymore.
Or better, make it a golf game... where people could upload their faces onto the players!!!
    • Re:Games? (Score:4, Insightful)

      by _KiTA_ ( 241027 ) on Monday January 28, 2008 @06:33PM (#22214196) Homepage

      Wow, can you imagine how cool this would be with respect to video games? Drop in some photos, crank up the customized first person shooter, and zoooom! You could even take photos or shots from movies and do the same thing (e.g., using Star Wars stills).


      There can be NO END to the verys to describe how much of a very, very, VERY bad idea making a CounterStrike map of your school/mall/town/etc would be.

      • While the above comment may have been flagged as "Funny," I must say that there's something to what he says... I'm too lazy to dig up the specific articles, but folks have gotten in trouble in the past for doing just that (generally their school).
      • Now that you mention it, it occurs to me how seriously cool that would be.

        I mean, you've always wanted to demolish that eyesore dump across the street? Now with the magic of 3d tech, you can!

        And think of other, non-violent applications- the next Tony Hawk Pro Skater could be in my home town, the next Amped could be on my home mountain (though I'll never forgive them for Amped 3)...
      • Re: (Score:3, Insightful)

        by argent ( 18001 )
        Right, convert it into a set of sculpties in SL instead, and add prim parodies of neighbors that bug you.
      • Re:Games? (Score:4, Insightful)

        by Scrameustache ( 459504 ) on Tuesday January 29, 2008 @12:48AM (#22217962) Homepage Journal


        Wow, can you imagine how cool this would be with respect to video games? Drop in some photos, crank up the customized first person shooter, and zoooom! You could even take photos or shots from movies and do the same thing (e.g., using Star Wars stills).


        There can be NO END to the verys to describe how much of a very, very, VERY bad idea making a CounterStrike map of your school/mall/town/etc would be.
The crazy old men from Florida have won :(
        • by _KiTA_ ( 241027 )
          >> There can be NO END to the verys to describe how much of a very, very, VERY bad idea making a CounterStrike map of your school/mall/town/etc would be.

The crazy old men from Florida have won :(

Perhaps, but remember: the children whose intelligence they're currently insulting on a near-daily basis will be tomorrow's crazy old men from Florida. Taking the long view, society trends towards liberation -- of ideas, of people, of religions, of morals.

          I wanted to say "society trends towards liberalism" bu
      • There can be NO END to the verys to describe how much of a very, very, VERY bad idea making a CounterStrike map of your school/mall/town/etc would be.

Good point. Although personally, I find that using terms like "very" actually detracts from the meaning of the idea in a sentence. If you simply state that it's a bad idea, there are no other "padding" words to get in the way of that meaning. He was angry. -- short, and to the point. Stated simply as a matter of fact. He was very, very, very, very, very angry



    • Wow, can you imagine how cool this would be with respect to video games?

It's getting there to an extent. The newest game using an id engine, Enemy Territory: Quake Wars, has an SDK where map-makers can load data from Google Earth to create terrain for their map [etqwmapping.com].

I'm excited because I design skate parks and I frequently try to mimic popular real-world skate spots. A tool like this could allow me to import a photo of a plaza in Barcelona and get it into my CAD application without everything being guesstimated.
  • by pwnies ( 1034518 ) * <j@jjcm.org> on Monday January 28, 2008 @05:55PM (#22213678) Homepage Journal
    Could this type of technology be used for robots to allow them to identify what the 3d layout of the world around them is? Seems like a pretty powerful tool in that area.
    • Re: (Score:2, Informative)

      by grub ( 11606 )

      Could this type of technology be used for robots to allow them to identify what the 3d layout of the world around them is?

      Some (most?) robots already use dual cameras for true depth perception.

      • Re: (Score:3, Informative)

The problem is that binocular vision gets less accurate at longer distances. Also, for whatever reason, the robot might not be able to use two "eyes". Either way, another method of approximating distance would come in useful for anything that gets a lot of everyday use.
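A back-of-the-envelope sketch of that accuracy falloff (Python; the focal length, baseline, and matching error below are assumed numbers, not from any real rig): with depth recovered from disparity as Z = f*B/d, a fixed matching error of half a pixel produces a depth error that grows roughly with Z squared.

```python
# Toy illustration of why stereo depth gets worse with distance.
# With focal length f (pixels), baseline B (meters), and disparity d (pixels),
# depth is Z = f * B / d, so a fixed +/- 0.5 pixel matching error causes a
# depth error of roughly dZ ~ (Z^2 / (f * B)) * dd.

f_px, baseline_m, dd = 800.0, 0.12, 0.5   # assumed camera parameters

for depth_m in (1.0, 5.0, 20.0, 50.0):
    err_m = (depth_m ** 2 / (f_px * baseline_m)) * dd
    print(f"Z = {depth_m:5.1f} m -> depth error ~ +/- {err_m:.2f} m")
# Z =   1.0 m -> depth error ~ +/- 0.01 m
# Z =   5.0 m -> depth error ~ +/- 0.13 m
# Z =  20.0 m -> depth error ~ +/- 2.08 m
# Z =  50.0 m -> depth error ~ +/- 13.02 m
```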
        • by Anonymous Coward
Granted, radar doesn't work so great for transparent surfaces to get the depth cue from -behind- that surface, while lidar gets a little iffy if it's -too- transparent to get the depth cue of that very surface. A combination of both -- voila.
          • by Hays ( 409837 ) on Monday January 28, 2008 @06:46PM (#22214454)
Radar and lidar are good for some applications, but they're fundamentally quite different. They're both active sensing technologies -- they send out energy in part of the electromagnetic spectrum and then look in that narrow range of the spectrum and see what bounces back. This means that you have trouble seeing things farther away, since you'd have to throw more and more energy to keep your samples uniformly bright or uniformly spaced. And it means your power requirements are much higher.

            I think the most interesting part of computer vision is that which deals with passive sensing, such as this work. It senses the electromagnetic radiation that comes from our sun, or moon, or man-made sources. By using the same spectrum that our eyes use it should be able to get a qualitative understanding of the world similar to what humans can achieve.

            Also, as humans we've built the world to be visually interpreted at the EM frequencies that we sense. This means our signs are readable in those frequencies, our indoor lighting works in those frequencies, etc... By sensing in those frequencies you make sure you don't miss anything that humans can see.
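To put rough numbers on that energy argument, here is a toy calculation using the simplified 1/R^4 point-target radar model; the reference power and range are made-up values for illustration only.

```python
# Rough illustration of the active-sensing energy falloff: for a point target,
# the radar equation has received power falling off as 1/R^4 (1/R^2 on the way
# out, 1/R^2 on the way back), so matching the return strength at 10x the
# range takes ~10,000x the transmit power.

def required_transmit_power(range_m, p_ref_w=1.0, range_ref_m=10.0):
    """Transmit power needed to match the return strength seen at range_ref_m
    with p_ref_w of transmit power (simplified point-target 1/R^4 model)."""
    return p_ref_w * (range_m / range_ref_m) ** 4

for r in (10, 50, 100, 1000):
    print(f"{r:5d} m -> {required_transmit_power(r):,.0f} W")
#    10 m -> 1 W
#    50 m -> 625 W
#   100 m -> 10,000 W
#  1000 m -> 100,000,000 W
```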
        • Re: (Score:3, Insightful)

          I can think of a very good use for this: laparoscopic "keyhole" surgery. One of the difficult things about that sort of operation is depth perception: until you've tried to do it, it isn't at all obvious just how difficult it can be to get a 5mm scissor blade over e.g. a blood vessel at the right angle. If a computer could analyse the image and add some depth perception cues it could really speed up the surgery and make a difference when something's going wrong and needs to be sorted out fast.
There are now robots that show two images (left and right) for stereoscopic keyhole surgery. The surgeons are still there doing all of the work. I know somebody who does this in the US as a urologist. The machines cost about $5 million a pop. One dude from Singapore needed an op on his prostate and didn't want to use a machine that had been used before. He bought the surgery company a new machine for his op and then donated it to them afterwards.
    • Re: (Score:1, Insightful)

      by Anonymous Coward
      Judging from how fast that website is melting, I seriously doubt this is realtime-capable. Also, just using binocular vision is *way* easier and more accurate. Compare two pictures taken from two known angles and you get a faster, more accurate picture.
    • Re: (Score:3, Informative)

      by disckitty ( 681847 )
Judging from the Google cached pages, it looks like that's precisely what his research is for. Google cached pages: here [64.233.169.104], and here [64.233.169.104], and here [64.233.169.104]
    • by blair1q ( 305137 )
      yeah, if you wanted your robot's 3-d system to suck

      much cheaper and faster to use stereoscopic vision
This sort of thing would work, though. I navigate quite ably without binocular vision -- I was born with eyes that, for some reason, are slightly out of whack, so I can't fuse the images. As someone else mentioned, stereoscopic vision gets less accurate at greater distances (assuming you don't up the resolution). Combining the two could be very useful.
    • The ability to simply perceive a 3-D world is less useful to a robot than it may seem. Raw sensory data doesn't provide any means to plan action. The real task is how to make sense of the data that is coming in. Having data is useless unless you can interpret it.
    • by Nysem ( 1226462 )
This may or may not be the same kind of technology that was mentioned in another Slashdot article, about a robot that could create a 3d environment of what it sees and identify certain things (bombs, survivors in a building, etc.). I'm quite impressed. This kind of technology eliminates the danger associated with a lot of tasks that would normally be dangerous if not unmanned.
    • by jellie ( 949898 )
      Yes, that was one of their demos. I heard their talk back in September. I work in eye research, so depth perception and visual cues are important to many of the researchers here. They mounted a camera and part of the computer system on an RC car and allowed it to travel randomly through a wooded area. The RC car traveled at a decent speed, but could determine which trees were closest and would avoid them. From the video, it looked quite promising.

      The other demo involved an actual robot that they trained to
    • Could this type of technology be used for robots to allow them to identify what the 3d layout of the world around them is?

      Maybe it can be used to finally settle the question of whether Earth is round or flat! Just feed it a picture of Earth from space and see what it comes up with...

  • by eclectro ( 227083 ) on Monday January 28, 2008 @05:57PM (#22213722)
    I tried it - it converts your face into a Mars flyby.
  • by virgil_disgr4ce ( 909068 ) on Monday January 28, 2008 @06:00PM (#22213776) Homepage
    Dammit, and all this time I've been decrying the impossible magical 3-d photo processing in Blade Runner! Curse my skepticism!

    --Tedb0t
    • be raster anyway. Spatial data would simply make more sense even if you were creating a flat print (as long as you have the sensor/processing/memory power).
    • by johneee ( 626549 )
I always thought that too, but once I looked at that scene closely on the DVD, I could see that it doesn't do that.

He's looking at a reflected image in a mirror with several panes on it, each of which shows a slightly different angle of the scene.

      The only impossible thing in there is the enhancement. Which I agree is impossible, but the (apparent) 3d thing isn't.

      I think. I could just be making this up because I like the movie.
  • Photosynth (Score:4, Informative)

    by blankinthefill ( 665181 ) <blachancNO@SPAMgmail.com> on Monday January 28, 2008 @06:11PM (#22213920) Journal
    While I know you're all Microsoft haters, bear with me for a minute. This sounds a lot like this Photosynth [youtube.com] demonstration. The relevant part of the video starts at about 3:50, but the whole video is really interesting and I would suggest watching it.
    • Re:Photosynth (Score:5, Informative)

      by nguy ( 1207026 ) on Monday January 28, 2008 @06:22PM (#22214066)
Photosynth takes multiple shots; this apparently takes a single shot. And although Photosynth is some nice engineering, (1) it wasn't all developed at Microsoft, and (2) it relies on decades of research work done elsewhere.

      Microsoft does invest a lot of money in research. But what they are spending pales in comparison to all the work by other people that they are building on.
    • Photosynth needs multiple photographs taken from different angles.
    • by pembo13 ( 770295 )
      It's a myth that the majority of Slashdotters are Microsoft haters. Maybe it just used to be that way.
No matter my hatred for Microsoft, I cannot bring myself to feel anything but love for a video originating from the TED Conference! [ted.com]
    • Re: (Score:3, Informative)

      by pcgabe ( 712924 )
      Photosynth doesn't make anything 3D. It combines flat photos, and while you can move around and see photos attached at different angles, each of those views MUST be a photo on its own. The more pictures you add, the more angles you can look at, but Photosynth isn't making anything 3D.

      These two packages are quite, QUITE different.
  • by pauldy ( 100083 ) on Monday January 28, 2008 @06:14PM (#22213964) Homepage
This would be sweet if they took all the imagery from Google Maps/Street View and built out little virtual cities with headless pedestrians and five-legged dogs.
  • Not so new (Score:5, Interesting)

    by dfunked ( 1049576 ) on Monday January 28, 2008 @06:18PM (#22214022)
Several years ago I worked at a German university where recognition of human faces was researched. We also did 3D reconstruction of faces, which was useful for training some algorithms. Although the technique is very different, 3D reconstruction from 2D images is not that new. Some examples can still be seen here: link [archive.org]
    • Re: (Score:2, Funny)

      by yuriyg ( 926419 )

      3D reconstruction from 2D images is not that new
      That technology is of course superseded by 1D-to-3D reconstruction

      Sorry, couldn't resist... uh-oh there goes my karma!
    • Warning, link partly hangs my Firefox (works okay in Opera). There are probably some finer details as to when it does/doesn't have issues.
Escher must be laughing in his grave.
  • by Kozz ( 7764 ) on Monday January 28, 2008 @06:29PM (#22214140)
    Since both the processing engine and the article are hosted on the same server, I can't even read about it. Anyone got a mirror to some sample input/output?

    (No goatse renderings, please)
    • They've taken the image processing section down "for maintenance". You can now read the article and look at the pictures, just not convert photos to 3d.
A bit more DIY [acvt.com.au] but cool.

  • by Anonymous Coward on Monday January 28, 2008 @06:45PM (#22214430)

    Other than assuming that the environment is made up of a number of small planes, our model makes no explicit assumptions about the structure of the scene;

    Darn. My photos tend to be mostly of helicopters and boats.

Someone try uploading a Möbius strip. I want to see what's on the other side.
The 3D site is still down. In the meantime you can use: http://simpsonizeme.com/ [simpsonizeme.com]
A quick search on YouTube revealed this video [youtube.com] which seems to be of the software in question.

The summary mentions prior work by Hoiem at CMU (slashdotted here [slashdot.org]), a video of which can also be seen on YouTube. [youtube.com]

    I'm not sure I'm very impressed by the Stanford videos. In the few examples of non-vertical surfaces, you can see quite a few artifacts.
    • While they aren't perfect, I think saying that it's not impressive is a bit harsh.

One thing I noticed on one of them: it would be terribly frustrating for me to see one of these objects and not be able to look around the hidden corners.

      There's one image where you come to the beginning of a wall and you're forced to go only on one side of it.
  • Cue Manga artists invasion in 3... 2... banzai!
  • Since the conversion tool itself has been taken offline (the rest of the site seems to work fine now), I can't check, but...


What would happen if you were to feed some hand-drawn images to this thing? I don't mean paintings as such -- line art, anime, that sort of stuff.

  • How can I get the 3D data generated from my uploaded 2D image into KML format, so I can upload that into Google Earth? Some VRML to KML converter?
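Probably not via VRML directly: as far as I know, Google Earth's KML doesn't embed VRML geometry; it references 3-d models as COLLADA (.dae) files through a <Model> element inside a <Placemark>. Below is a minimal sketch of the KML wrapper (Python; the file names and coordinates are hypothetical).

```python
# Hypothetical sketch, not an existing converter. KML places 3-d models by
# linking to a COLLADA (.dae) file from a <Model> element, so the pipeline
# would be: convert the VRML mesh to COLLADA with a separate tool, then
# generate a KML wrapper like this one.

KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <Model>
      <Location>
        <longitude>{lon}</longitude>
        <latitude>{lat}</latitude>
        <altitude>{alt}</altitude>
      </Location>
      <Link><href>{dae_path}</href></Link>
    </Model>
  </Placemark>
</kml>
"""

def write_kml(name, lon, lat, alt, dae_path, out_path):
    """Write a KML file placing the COLLADA model `dae_path` at a location."""
    with open(out_path, "w") as f:
        f.write(KML_TEMPLATE.format(name=name, lon=lon, lat=lat,
                                    alt=alt, dae_path=dae_path))

# Example with made-up values:
write_kml("my-photo-scene", -122.17, 37.43, 0, "scene.dae", "scene.kml")
```

The VRML-to-COLLADA conversion step would still be needed up front; tools for that vary, so it's left out here.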
