Toys

Minolta 3D Camera

Bookwyrm writes "This was just an interesting technology toy/tidbit I ran across. Metacreations and Minolta have teamed together to develop what appears to be a modified digital camera that allows you to take '3D' images. The camera stores/digitizes the image data in such a way that Metacreations' software can (re)construct a 3D model of objects in the picture along with their textures. While mildly neat in itself, it would be interesting to consider how far you could develop this technology. Could you do real-time 3D capture using a video camera with these techniques (and sufficient computer power)?"
  • by Anonymous Coward
    I used to take a standard 35mm camera on my mountain backpacking trips, but was always disappointed in the lack of depth in the resulting slides. The spectacular sense of space, the exhilaration from standing near a ledge with a 3000' drop, all this was impossible to convey in a normal photograph. I knew I wanted to take stereoscopic (3D) photographs but I was dissatisfied with the quality one could obtain with commercial 35mm stereo cameras, such as the famous Stereo Realist from the 1950s. I realized that to get the image quality I desired, I would have to use 120 roll film. I also wanted a horizontal rectangular format, so I couldn't merely link two Rolleis or Hasselblads (with their square formats) together. I finally decided to build my own camera for taking pairs of "ideal format" (60 x 70 mm) transparencies. I have used this camera successfully for more than twenty-five years. I look at the pairs of transparencies in a custom-built viewer, and I have also built a full projection system.

    This is my camera, built into a 10x5x3" electronic chassis box. I used a matched pair of 65mm f8 Schneider Super Angulon lenses with focusing mounts. To advance the film, I cannibalized a view camera roll film holder for the winding mechanism. For previewing in 3D, I mounted a pair of optical viewfinders designed for 28mm wide-angle lenses on a 35mm rangefinder camera (approximately the same angle of coverage as my 65mm lenses). Simultaneous exposures of the left and right images were achieved with a dual cable release.

    My viewer is a modified Wheatstone design, incorporating mirrors at 45 degrees to accommodate the large transparencies, which are mounted in "ideal format" Perrot-Color glass/metal mounts. The large diameter viewing lenses allow any person to view comfortably, without the need for an interpupillary adjustment. Focus is fixed and set for distance corrected vision. Eyeglass wearers can view the entire field comfortably. Inside the viewer are front surface mirrors and small incandescent lamps.

    The dual projector employs polarizing filters with crossed axes for image selection (independent presentation of the left and right images). An aluminized Kodak Ektalite screen is used to achieve high brightness while preserving polarization.
  • by Anonymous Coward
    Just check the webpage if you want your questions answered..

    The camera costs $4,495.00 and appears to only have software for Windows (I'd expect MetaCreations to have ported it to MacOS by now, but oh well).

    The MetaFlash software that goes with the camera looks neat, http://www.metacreations.com/products/metaflash/
  • This was my first thought when I saw the post on the 3D camera. I think it's cool that others are on this wavelength (post #8 was on this topic as well). But I wonder about the export formats and their interoperability with various 3d game engines. I really don't know too much about those engines, so maybe someone could fill me in: is there a conversion util for VRML (or any of the other formats the camera outputs) to any of the 3D engines? I think that this would be a great tool even outside of the game setting. Imagine IRCing in a virtual version of your bedroom, or the playboy mansion, or an asteroid floating around near a nebula.
  • There is a 3d lens attachment for camcorders available at http://www.i-glasses.com/ [i-glasses.com].
  • For you investors out there, check out the details on their stock; it is just rocking. Projected to be 20 shortly, and around 80-100 by end of year. I bought some shares with my LNUX proceeds. Jeff Knox. Anyone heard of DTEK?
  • I'm sorry, I assumed such a high-tech crowd knew how to use search engines or simply take a stab and try http://www.eyetronics.com [eyetronics.com]...

    I apologize if I seem snotty, I've got 12 hours of work to do before tomorrow morning, and I'm stuck in a meeting destined to last until at LEAST then :)

    Cheers,

  • It is interesting that Metacreations is in this deal with Minolta. It seems like a perfectly good fit of a great graphics software company and a great camera company. But Metacreations has had its own difficulties lately. Metacreations is a company with really neat products and what seems to me almost no good marketing skills. This has led to a sustained loss [newsalert.com] every quarter last year. They are often overshadowed by the big boys, Microsoft and Adobe. Well, it seems obvious to me that they should consider opening up their software, at least in the graphics genre. If they had the gimp on their side...

    Even nicer would be if an open source company bought them flat out and opened them up. This would be chump change for RedHat but also very possible for Corel.

    I know you all have heard it a thousand times before. Open it up! In fact I will probably be moderated down because it is so off topic. But first read what I have to write. Why did Netscape open it up? Because they couldn't support a free browser like Microsoft could indefinitely. What compelling reason would one have to buy Metacreations software over Adobe's products? Features? Certainly not because of that; very few people buy software for the features. Cost? Well, it is a concern, but only to people not making money at doing graphic work. What else then is going to buoy MetaCreations? Their e-commerce? Maybe, but even then this product line has nothing to do with their graphic software. So spin it off!

  • This seems like something that 3d gaming people would jump to have - imagine being able to take a picture of a real-life room, and then plugging this camera into your computer and using a converter. Quake 1/2/3 levels of real buildings in seconds! I wonder how well this camera works with large objects like rooms or groups of people, though. Anyway, you hear a lot about people from game companies going out and visiting dumps to get pictures of stressed metal and such.. This could improve the realistic quality of games a lot.

  • Okay, if that thing scans depth then the software would actually be simpler than what I described, but it would probably also involve a lot less manual work.

    I still don't understand how this depth scanner actually works though. If they're using ordinary visible light for the depth scanning process, wouldn't the colour of the scanned surface affect the perceived distance? (heck, even non-visible light would be affected by "colour")
  • Don't they usually have two lenses for capture, and a third lens, between the other two, for a view-finder?
  • You could make real time 3D models of any size. It is a factor of camera population and hardware resources.

    You can do anything, if you really want to.
  • Yes they have. But they needed special film and you had to send them away to a special place for processing. The quality was so-so; it was a nice idea and neat for the first few pictures, but it got annoying very quickly.

    Being able to process the pictures on the computer looks like it has some nice potential, but not for your average joe.

  • ... would be making models+skins of yourself for Quake! (and your gf naked :)
  • Haven't 3D cameras been around for years? I could've sworn I read about them a good long while ago.

    --neil

  • Only if you wanted an "actor" with an effective vertical resolution of 90cm or less with a margin of error of 1mm or greater.

  • I don't see any reason that (given a fast enough computer) you couldn't have real time interpolation of a video image. In video, since each frame is a separate picture, what is to stop you from stitching the pictures together as they come out... you may need a couple of pictures from around the object initially, but once you have the initial object, you could move the 3D image as the image in the video moves just by tacking some prominent points on the original 3D model to the video.

    For example: you create the 3D image (say of a head), then you video the subject... your incredibly fast computer (say an Octium 6000MHz) looks at the image each frame and updates the movement of the 3D object relative to the change in the video... as each eye moves relative to the other, you move/rotate the 3D object relative to those two points; the same thing might apply to a joint, except you would need at least two points on each joint.
    Sorry I can't explain it better, but given enough speed, I see no problem creating a virtual image...

    Edward
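    A rough sketch of the frame-to-frame point tracking this idea relies on, using a modern toolkit purely for illustration (OpenCV and a webcam feed are assumptions here, not anything the poster or Minolta actually used); turning the tracked 2D motion into a pose update for the 3D model is left as a comment:

    import cv2
    import numpy as np

    # Track a handful of prominent points (eye corners, nose tip, ...) from
    # frame to frame; their motion would drive the pose of the 3D head model.
    cap = cv2.VideoCapture(0)                      # any video source
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=20,
                                     qualityLevel=0.01, minDistance=10)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Pyramidal Lucas-Kanade optical flow: where did each point move to?
        new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
        good = status.ravel() == 1
        motion = (new_pts[good] - points[good]).reshape(-1, 2)
        # In the scheme described above, the relative motion of e.g. the two
        # eye points would be turned into a rotation/translation of the
        # pre-built 3D model at this step.
        prev_gray = gray
        points = new_pts[good].reshape(-1, 1, 2)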
  • http://www.cs.cmu.edu/afs/cs.cmu.edu/user/webb/html/iccv-stereo.html

    describes a system that I saw demoed in, um, 1993. The system computed an image at 10-15 frames per second, and each pixel's brightness encoded its distance. I stood in front of the camera, and a human-shaped outline appeared on the screen. I put my hand out, and that part of the image got brighter, with my hand brighter than my arm.

    The algorithm took several hundred megaflops. Lots of today's chips should be able to run that at a useful frame rate.
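    The kind of window-based stereo matching behind a display like that fits in a few lines; this toy version (plain NumPy, and not the CMU group's actual algorithm) computes a disparity map in which nearer surfaces get larger values, hence show up brighter:

    import numpy as np

    def disparity_map(left, right, max_disp=32, win=5):
        # Brute-force SSD block matching between a rectified left/right pair.
        # Larger disparity = closer surface, so displaying this map directly
        # gives the "hand brighter than arm" effect described above.
        h, w = left.shape
        half = win // 2
        disp = np.zeros((h, w), dtype=np.float32)
        for y in range(half, h - half):
            for x in range(half + max_disp, w - half):
                patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
                best_cost, best_d = np.inf, 0
                for d in range(max_disp):
                    cand = right[y - half:y + half + 1,
                                 x - d - half:x - d + half + 1].astype(np.float32)
                    cost = np.sum((patch - cand) ** 2)
                    if cost < best_cost:
                        best_cost, best_d = cost, d
                disp[y, x] = best_d
        return disp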
  • by cr0sh ( 43134 )
    I didn't think so, either (that it used photogrammetry), but I wanted to present the info on it so that people would know about it.

    The software developed for this device may be new, but getting depth information from a greyscale image isn't - such techniques are discussed in cartography textbooks, and were also used in the 3D imaging of the "face" on Mars (along with photogrammetry, I believe, since they had two different images as well - however, I think greyscale extraction was more useful, since the two images were taken at different times of "day").

    Regarding using beams of light and special software - I wonder if you could build your own rig using a scanned laser, a high quality video camera, and some digitizing software? Set the object on a turntable, set it rotating, and scan the laser in a vertical "stripe" across the surface. Do a real-time video capture, strip out the individual frames, then do the greyscale analysis on those frames, along the line of the laser path. The various values could then be used to calculate "depth" values along the surface of a cylinder, say (or, have some way to keep track of the angle of the object on the turntable - maybe a potentiometer hooked to a joystick port, or a mouse wheel sensor, something like that - then use that to calculate the radian offset around the Y axis).

    This device seems way overpriced for what it does - it should be possible to do it much cheaper. Maybe I should break out a Quickcam, an old record player, a key chain laser pointer, and some duct tape - and see what happens!
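    For anyone tempted by that Quickcam/record-player idea, the per-frame computation is roughly the following. This is only a sketch: the pixel-to-radius calibration and the laser/camera geometry are hand-waved, and the function is made up for illustration.

    import numpy as np

    def stripe_to_points(frame_gray, angle_rad, radius_scale=1.0, cx=None):
        # In each video frame, assume the laser stripe is the brightest pixel
        # in every row; its horizontal offset from the image centre stands in
        # for a (very crude) radius, placed on a cylinder at the turntable's
        # current angle.
        h, w = frame_gray.shape
        cx = w / 2.0 if cx is None else cx
        pts = []
        for row in range(h):
            col = int(np.argmax(frame_gray[row]))      # laser hit in this row
            r = (col - cx) * radius_scale              # pixel offset -> radius
            pts.append((r * np.cos(angle_rad),         # x
                        float(h - row),                # y: image row -> height
                        r * np.sin(angle_rad)))        # z
        return np.array(pts, dtype=np.float32)

    # Grab frames while the turntable rotates, track the angle (the joystick-port
    # potentiometer suggested above), and concatenate the point sets into a cloud.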
  • https://merchant.metacreations.com/store/product.asp?registered=0&dept_id=9&pf_id=46 [metacreations.com]

    Camera for Minolta 3D 1500 Windows - $4,495.00 - In Stock

    Ouch.
  • I imagine the developers of this are spending most of their interviews/press releases repeating variations on the following: "No, damn it, p0rn had nothing to do with it!"

    [Yes, it was immature to say that. Very immature, thank you.]

  • You can go out and buy a handheld laser scanner from Polhemus, that will let you scan 3D objects to a high level of precision in point-cloud form, as well as capture a colour texture map, a bump-map for extra realism, and have all your scans automatically aligned in 3D for the fastest possible processing into a mesh.

    You can then either triangulate the point cloud for a dense triangular mesh, or fit splines or maybe a subdivision surface cage to it.

    Apply the bump/colour maps, and voila, an excellent, truly 3D representation of the object being scanned.

    This technology, of course, costs a lot more than the camera from Minolta, but gives results that are probably about 100x better over a wide variety of objects.

    Someone should come up with a cheap-ass laser scanner.. what's involved in making one of these?

  • This camera seems to do what researchers at Berkeley have been doing [berkeley.edu] for a while now, but it's getting to consumer level. The culmination of this technology is to be able to give the computer a regular 35mm photograph and let it reconstruct perfect models and textures of everything it sees.

    The Berkeley researchers have this working, only it takes more steps and more time. A lot of these techniques were used in The Matrix for stuff like the bullet time shot and the scene in the beginning where the girl does the levitating crane-kick. See this picture [berkeley.edu] for another example if you have no idea what I'm talking about. And it's all done in BSD so, yeah, Linux could probably do it too, even if this Minolta camera doesn't let you (they won't be the only ones with a cool-ass-3d-camera-you've-gotta-have).

    These techniques are the future, not just for games, but for anything 3D. It's still polygon meshes too, so all of our other techniques for working with polygons (clipping, collision detection, transformations, etc..) still work fine, with little to no modelling time. (But then of course, how do you do the really cool stuff like alien worlds or evil monsters?)

    Kinda reminds me of an old Doom map modelled after the Bethesda movie theater close to my house. =]

    Dave
  • If you think about it.. the price isn't really that bad. If some rich kids were to go out and buy these and start developing maps with objects created with it, I think that the price would come down to about $1K once people in the "home use" market are using it. Who knows.. I'm just rambling.
  • We reported about this a long time ago on geeknews (http://geeknews.net/cgi-bin/fooboard.pl?944436957), but it's still cool nonetheless. The bigger version of this is really cool. I can see this camera really being used in new Quake maps. The only problem would be that you would need to tone down the poly count.. Just read the link above that I added and you'll see what we had to say.
  • where's the web page for it???
  • People keep talking about how great this would be for creating Quake levels and such, and how easy it would be to digitize whole rooms. THE CAMERA CANNOT DO THIS! It can ONLY create 3D models of relatively small objects. Something the size of a human would be pushing its limits. The website explicitly states the camera cannot create models of rooms or buildings. The thing this will be really big for is merchants wanting to put 3D models of the products they want to sell on the web for consumers to look at.

  • And that's exactly what sucks about this thing. It's a combination of a camera that already exists and software that already exists (but this version only works on Windows).

    I just downloaded it this morning, so I haven't made anything fabulous with it yet, but there's seemingly decent GPL software out there that's supposed to do this with your photos already. It's called "Panorama Tools," and it includes a "stitcher" and plug-ins for Photoshop and all the other stuff you need to make VRML-type environments. I didn't take down the address of its site, but you can find a link at jmac.org, in the open source Mac software section. Supposed to work on Linux and GIMP too, but to a lesser extent (of course). I was just testing it when I saw this story, so I'm not sure if it's a perfect implementation, but within five minutes I was (mis-)using it to make some otherwise-time-consuming backgrounds that look like Cocteau Twins album covers (which is cool, but obviously not what it's made for). Looks like they're in need of development help and/or cash, too.

  • Yes there have been 3d cameras that produce 3d images. But to my knowledge, this is the first that can be used to provide a 3d model of the subject (using the software) which then can be used in 3d Apps (depending on the format). So there is a difference.
  • I can't find a link, but a team from Louvain in Belgium developed software that would take a video feed, work out depth cues (moving around helped), create a mesh and overlay the texture from the video feed. (not in real time though) If you wanted a complete object you had to walk around it of course, but it worked well for pan shots too. You ended up with something like the 3D views of the Mars Pathfinder project. They had another system for stills - you would take one regular photo of e.g. someone's face and then another while shining a grid pattern onto their face using an overhead projector. Not as well defined as using a laser to measure distances or anything, but pretty good for knockabout use.
  • I saw in a recent (January) Discover magazine that someone had come up with software to generate 3d images from multiple flat images. The only thing they had used it for was pictures of themselves for IRC. No chance of realtime that way, but MUCH cheaper and still pretty cool.
  • Actually it's not. Paul Debevec's software is a semi-automatic (ie. lots of human intervention) way of building *efficient* (low polygon count) CG from photographs. The Minolta camera generates high vertex count height fields completely automatically. The rendering idea is the same - projective texture mapping.
  • This is not intended to create 3d pictures, or stereographic photos.

    By using this camera and the associated software, you "stitch" together multiple views, from different angles, to develop a 3d model of the object.

    This is not about making pretty pictures, it is about taking existing real world items into a digital format that can then be used to do further development.

    An example: an architect is hired to add to an existing building, or to develop the grounds surrounding it. The building was designed and built in the 1920's, so all of the designs are ink on vellum or linen.

    That makes it hard to easily make a 3d model of the place. So he takes a few shots of the building from ground level, maybe a couple from surrounding buildings, stitches points that are common from one photo to another, and voila, there is a model he can take into 3D Studio or similar, and do his planning.

  • The file format for MetaFlash Studio, which is the software used by the camera, is called MetaStream and is an open file format; you can get it here [metacreations.com]. You need to fill out a registration form, but only name, e-mail, platform, and what type of content you develop are mandatory.

    Also on that page is some sample source for a reader/viewer. The biggest problem with the sample code is that it does not include any vertex ordering, so it doesn't know what order to draw the vertices.

    But even at that, there is nothing keeping someone with some knowledge of Netscape plug-ins from creating a viewer for Linux, or even a reader/writer to convert VRML to MTS (their file format).

  • I work in television and work with Virtual Sets on occasion. In case anyone doesn't know what that is, it's a computer generated set that real objects are chroma-keyed into. The systems track camera moves and zooms to match the background set with the real people (and sometimes objects). All I can say is: this is cool. I can think of dozens of cool applications for our CG department already.
    ----------
  • I seem to recall that our depth-perception abilities result from the fact that we have two eyes. So you can render something as prettily as you want, but to truly reconstruct the 3D image, you need two different shots. Mind you, if taken at the same time, action will be preserved, which could be very cool for sports and suchlike.
    ===
    -Ravagin
  • Another example: it might well be excellent for capturing human faces, which is a tricky £$%£$^%$ to make by hand.

    Hmm, i don't know how well this would work, though... because the dips(? depressions?) in the face may not be correctly captured by the camera. Especially if the bottom of the "dip" isn't visible except in one of the camera views. Esp. the part around the eyes and nose -- there are a lot of dips/depressions that are obscured by the cheeks, etc., when viewed from the side, so these features may be mis-reconstructed as flat features since the camera only captures them in the front view.

    In fact, if you photograph an opened tin can (with the bottom still there), chances are the model will produce an unopened tin can shape, because only one camera view actually sees into the can; the other views are obscured. Well, this is a contrived example, but still...

    Well, maybe there's a way to do it. I'm not sure...

  • Metacreations has essentially dropped their graphics business to become an "e-commerce visualization internet company". They've laid off a good portion of their staff and have not given any statements about where the future lies for their major graphics apps. User beware.

    BTW, there's plenty of "Eyecandy over functionality" in unix. Remember what enlightenment started out like?
  • If you took two shots from two distinct points which have the same position with respect to each other as human eyes, you would be able to preserve the depth information in the picture. There would be one picture for each eye to look at. This type of photography has been around probably since the 19th century. However, the image is not 3D in that you can't move a bit to one side and see what is behind a person, for example. For this you need at least 3 cameras and some smart software, I think.
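    For the two-camera setup described here, the depth of a matched point follows from similar triangles; a minimal sketch (the focal length in pixels and baseline in metres are illustrative numbers, not anything specific to this camera):

    def depth_from_disparity(focal_px, baseline_m, disparity_px):
        # Two identical, parallel cameras a baseline apart see the same point
        # shifted horizontally by `disparity_px`; its depth is Z = f * B / d.
        if disparity_px <= 0:
            raise ValueError("zero disparity: point at infinity or bad match")
        return focal_px * baseline_m / disparity_px

    # e.g. 1000 px focal length, 6.5 cm baseline (about eye spacing),
    # 10 px disparity -> the point is 6.5 m away.
    print(depth_from_disparity(1000.0, 0.065, 10.0))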
  • Marsokhod was the name of the Russian rover on board the Russian Mars 96 mission, which, er, landed in the Pacific shortly after takeoff.
  • using three video cameras from different angles (they should not be collinear). There is software that can generate a full 3D scene, which you can move through. The French Canal+ did it a lot during the Atlanta Olympics to show how athletes moved, etc. I am not sure if it can be done in real time (they had delay before showing the 3D) with high image quality though. But I assume the software for it would be fairly standard, even though computationally intensive, once you have established the exact positions of the cameras.
  • I have been working with such a camera for about 5 months now and I can tell you that it's not as useless as some have said. Unfortunately it has a major disadvantage: it uses a laser beam to "see" the object, and scanning a fairly normal object takes .6 seconds, which does not allow real-time image capture.
    However I can give anyone a hint about it utility. Imagine an intelligent system that looks for/identifies/retrieves objects in an environment. Yes, like a robot that is asked to fetch an object (say a book). You can do it in several ways, but one of them implies storing a database of book features. These features constitute a description of the object you need.
    Well, as you may have guessed, these features are obtained from such a camera.
    There are other possible usages, like CAM (I guess no-one noticed that the software that comes with it generates a mesh of the object).
    Everything I said seems to be very nice - don't worry. Just look at the price and you won't be excited any longer.
  • Probably not. At least not in the near future. They support only Windows and SGI. And the support for SGI will probably be suppressed.
  • Combine the 3d real time camera with Jerry's Blind Eye (see BBC news articles on the Blind-man-can see) for some REAL FUN!
  • I seem to remember reading on /. a while back about a company in New York that was developing a multi-layer 3D "monitor", combine this with a true 3D imaging system, which this camera and others like it are a precursor to, and things should get interesting. (btw, if anyone remembers who it is that is making the monitors, let me know)
  • A friend and I were sitting around watching the NFL playoffs a few weeks ago, and we started talking about how we could use multiple cameras and some interferometry to calculate exact positions of objects for instant replay. Obviously, a 3D camera could aid massively in doing this. Has anyone tried real-time 3D rendering using multiple cameras on a particular event? It sure won't help the Tampa Bay Buccaneers, but it might help in later years . . .
  • Well, beat me with a trout! I really should have checked out their site before blathering.
    Why aren't they making software anymore, when their software is some of the best available (read: Poser 4, Bryce, Kai's tools, etc) for graphics and 3D imagery? I can't imagine how this would make them any more money or give them any better reputation than the great one they have.

    As for the other replies which talk about the eyecandy interface, it doesn't actually slow down that much. KPT6 on a Pentium 60 is smooth and efficient (read: realtime KPT Gel and transparency/lensing effects. There must be some ASM in there :).
    Because all widgets on X are done through external libraries (e.g. QT, GTK, Motif), the MetaCreations programs would run just as fast, because they would simply have their own widget library which made cool curvy semi-transparent toolbars with shadow effects.

    With respect to your reply, in the open letter MetaCreations says that they will continue to support the products they have.
    If they aren't making any more new products, where did this camera come from?


    --
    Talon Karrde
  • The stereoscopic effects of taping two cameras together, however, are vastly less impressive than rotatable, scalable, importable, 360 degree, 3D models.
  • After seeing what the new Avids do with this capability, I would really like to have a camera that could take a 3D still. Talk about making it easier to get those impossible camera angles ..
  • All I could think about when I looked at that coconut monkey was a similar piece of junk I bought as a 5 year old for a dime. (Yes I still have it.) On the practical side it might be useful for "enhancing" the Ebay experience.
  • Get small people!
  • This is a web-pasting from http://www.ghouse.com/daniel/stereoscopy/equipment/index.html [ghouse.com]. Unfortunately this URL does not cover digital stereophotography.
  • Am I the only one who thinks this thing is rather reminiscent of the "Esper" machine in Blade Runner, which reconstructed 3D environments from pictures and helped Deckard find things that weren't even visible in the original photos? Obviously in a very embryonic stage here though. I always wondered, however, where the Esper got its information, since it seemed to use regular 2D photos, even really old ones (which obviously had no possibility of a "holographic" information layer of some kind).
  • Anyone remember that 80s movie, WarGames? Hire gamers to pilot real war vehicles. Is this perhaps the method in which the information can be given from the battlefield? 3D gaming that isn't gaming. You're not playing Quake, you're playing Kuwait.
  • ...Metacreation's stuff is already very stable and useable...

    Stable? Sure. Usable? Sure. Efficient? No. Imagine the memory requirements for their programs with their 100% eyecandy interfaces... probably (wild guess) 10mb at the least just for the interface and the little buttons. Just give me Lightwave's interface and I'm happy...

    It'd be kind of funny seeing what their programs would look like in Linux though, I'm not a Linux user myself but I've used it before, and MetaCreations software is the EXACT opposite of Linux: Eyecandy over functionality.

  • I bet you are right. Yet another way to lure more lamers onto the net and hog up bandwidth.
  • Some forms of 3D video are already possible. One commercially available package can be found at this link:

    http://www.eyetronics.com/main.html [eyetronics.com]

    Check out "Applications", then go to the bottom of the page and follow the link for more details at the bottom of "Professional 3D services" (easy, no?).

    There you can find examples of the acquisition and results of their technique for 3D video. It's just a start, but 3D video is definitely being developed.

  • misha__ wrote:

    >Some forms of 3D video are already possible.
    >One commercially available package can be
    >found at this link:

    >http://www.eyetronics.com/main.html [eyetronics.com]

    >Check out "Applications", then go to the bottom
    >of the page and follow the link for more details
    >at the bottom of "Professional 3D services" (easy, no?).

    I'm sorry. I meant "Products", not "Applications."

  • there are several research efforts in this direction, most notably from CMU's vision group - "Virtualized Reality" - where up to 50 video streams are captured from different positions. Stereo techniques can then be used to build complete 3D surface models, and create a virtual world, which can be freely navigated. The major problem, right now, is doing this in real-time. Last I heard, they recorded the video on 50 different processors and did the reconstruction offline. Here's a link: http://www.cs.cmu.edu/~virtualized-reality/
  • This looks like really cool stuff. I'm just curious as to how well it actually turns out, and how useful this technology is. Are there any really worthwhile endeavors (besides recreation) that this would apply to directly, like in a research or lab setting?
  • That Company is DMA (Dimensional Media Associates). www.3dmedia.com (i work there).
  • Lots of misconceptions floating about here. The software is new. It has nothing to do with what you were talking about, and this is not photogrammetry either. The camera takes two pictures: one for the texture and one as a greyscale depth field. The depth is created by scanning over the surface with beams of light (hence the max distance of 90cm) and reading the depth offset in the scanlines as they travel over the surface. This creates "facing" surfaces which are modified and stitched together into one model. The polygonal resolution can be optimized through software to any appropriate detail level.
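    If the camera really does deliver a texture image plus a greyscale depth field, turning that pair into one "facing" surface is conceptually simple. A sketch under an assumed pinhole model (the actual Minolta/MetaFlash calibration isn't public in this thread, so the focal length and principal point here are placeholders):

    import numpy as np

    def depth_image_to_points(depth, texture, focal_px, cx, cy):
        # Back-project every pixel of the depth image through a pinhole model;
        # the texture image supplies each point's colour. Neighbouring pixels
        # can then be connected into triangles, and several such "facing"
        # surfaces stitched together into one model, as described above.
        h, w = depth.shape
        us, vs = np.meshgrid(np.arange(w), np.arange(h))
        z = depth.astype(np.float32)
        x = (us - cx) * z / focal_px
        y = (vs - cy) * z / focal_px
        xyz = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        rgb = texture.reshape(-1, texture.shape[-1])
        return xyz, rgb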
  • That sounds like an impressive set-up. If you "already did this" do you have any of the resulting 3D Models to show us?
  • Well, not really - I was an intern at the MetaCreations office where this technology was developed.

    Let me tell you: This is some awesome shit.

    DISCLAIMER: I no longer work for the company, and the following information is gathered from what I have found over the web and from asking people at the company.

    History

    The technology was initially pondered by a Russian physicist - Alexander "Sasha" Migdal. He came to the United States a long time ago and did work at Princeton University in various fields (mostly in physics, I believe). After a while, he formed a company with his friends from Russia called "Real Time Geometry." Sasha is an insanely smart man. A little eccentric, but smart :)

    RTG pioneered the technique of being able to dynamically set the number of polygons you want to render a model with. For instance, you could have a massive model of a helicopter render with full detail when it's close to the camera, and have it render with less detail when it's far away from the camera. This technology is now part of MetaCreations' MetaStream [metastream.com]

    The company was bought out by "MetaCreations" in (I think) 1997 (or thereabouts). MetaCreations was the merger of MetaTools and Fractal Design.

    After this was when the technology that we're discussing now was beginning to be implemented.

    Process

    Although I have not performed the procedure myself, I have seen it done on many types of objects, from pottery to toys to PEOPLE'S FACES.

    The object is placed in front of a black background with several lights around it with the aim at neutral lighting. The black background prevents a shadow from being interpreted as part of the object. The camera is usually placed about 3 meters away (not precise, just average or so...). For "in studio" objects, a laser was used to accurately calculate the distance to the subject. The technology has been refined a lot (obviously) and just when I was about to leave, they introduced this deal with Minolta in an All-Hands meeting.

    Now, I see a post of 5, Interesting that states that one of the shots is a piece of pottery - how simple is that!

    Well, it's not. The reason?? TEXTURES. The 3D imaging RECREATES the model so as to preserve not *only* the size/shape of the object, but ALSO the *look* of the object under certain circumstances - for instance, certain lighting environments.

    That's why a pot ain't so easy. While the shape might be "easy" (you try extrapolating 3D data from 2D data), the texturing is even more difficult. I can remember seeing models where everything was great, except maybe when you look into the pot and you see a hole at the bottom and you think "Hurm, we hadn't thought of that, had we?"

    In any case, that's the process. Now how does dynamic resolution and 3D imaging come together? Simple: The fact is that many objects (people for instance) have *curved surfaces*. Within the realm of polygonal 3D modelling, you *have* to throw out data, it's just not gonna all fit. While the camera/software figures out the 3D models, it is very difficult to render them in real time... MetaStream does a wonderful job of rendering huge objects in real time, even on a shitty computer.

    Now, in this wonderful time of the web and stuff, MetaCreations (I think) is positioning this software/hardware for two things:

    1. Family Fun (share 3D images with your friends!)
    2. and E-Commerce (see what you want to buy in full 3D)


    Of course, that means you need small files - full 3D models and textures the size of a GIF or two? Yep. It's pretty cool stuff. From what I know, it's a wavelet compression technique that compresses both the textures and the model data. Most models (of people's faces, toys, pots, whatever) are in between 50 and 200 K, which is pretty remarkable for the quality that you get from MetaStream.

    Several web sites have already implemented this technology, and make quite good use of it. Here's a sampling:

    • LEGO's MindStorms Site [legomindstorms.com] uses it to display what kind of robots you can make
    • AutoByTel [autobytel.com] uses it to display cars
    • And more... they are listed on MetaCreations' site


    Sorry for the long post, but I hope I cleared up some information.

    PS - Hi to Sasha, Victoria, Dmitry, Victor, Baga and everyone else! :)
  • I forgot to say Hi to Miguel :)
  • I have worked with this package as well as other competing products and am quite impressed with the level of quality vs. effort required for this form of 3D imaging. The funny thing is I am sitting in a meeting as I type presenting the various options for this form of image capture as my company's technology lead, and I just was doing my rounds (on my new WaveLAN 11MB Silver wireless card!) on the Web when I saw this article!

    If you want Linux compatibility in fact, plus a much better, less restrictive product overall, checkout Eyetronics... in fact, last time I spoke to a developer there he said that Linux was his PREFERRED platform for his software. The main benefits of Eyetronic's technology are the following:

    1. It's format agnostic... choose the 3D format YOU want to deliver (or archive) in.
    2. It's camera-agnostic... you choose the camera YOU want to capture with. We have used everything from a Digital 8 consumer video camera to high-end stationary Sony cameras with 100 MB+ image output.
    3. The only requirement is their software, ShapeSnatcher and ShapeMatcher (I know, bad names!), a standard slide that works in any slide projector (I'd recommend something with a quality glass lens for low distortion), and a block of wood with a printed pattern on it (you can make yourself from included PostScript files) for calibration of the system.
    4. The software is DAMN impressive: well designed, rock-solid, and highly trainable... you could teach a monkey to build high-polygon models with gorgeous, perfectly aligned textures.
    5. The only downside is cost (and for you guys, if you haven't guessed yet, it is NOT Open Source and probably won't be anytime soon)... total cost of software is about $7,000 I believe, though you don't necessarily need both components.

    If you want more info feel free to ask.. I've demoed and used most of the available 3D capture technologies, and for non-critical work (engineering, etc.) this new breed of photographic solutions seems the best. And there aren't as many kinks or hitches as you might think; you'd be surprised what these guys have done with image- and contour-analysis and a lot of intelligence on their part.

    I may be wrong, but I believe Eyetronics started as a university project in Sweden or Denmark... probably Denmark.

    Btw, before I get flamed for being a fraud, I work for a market-leader in ecommerce-oriented 3D imaging, but this is as close to my real identity as I can post under. If you can figure out who I work for, bully on you, but it isn't "0110".

    :)

  • Link 3D photo technology with this [biomodel.com] and those Natalie Portman statues could become a reality.

    Add a trouser-full of grits (don't forget to tie-off the cuffs first) and you've got yourself a party!

    ; )

  • Robert Curwin pointed out Vanguard [ox.ac.uk] in another comment [slashdot.org].

    Also, I read about 3DBuilder [3dcafestore.com] a long time ago; it looks semi-automated.

    Some more random digging uncovered an index of VR research [culture.com.au]. A month or two ago I was looking for information on panoramic photography and I read a summary of someone's thesis (IIRC); he automated the compilation of the best affine transformations on frame-to-frame video, then statistically analyzed those transformations to yield great detail. I can't find that right now. :(

  • I dunno, probably interferometry techniques applied to diffuse reflection. Unlikely to ever be as accurate as in the movie, however. (accuracy in "unseen" regions would also degrade exponentially with distance)

    Probably be a matter of iteratively refining a volumetric model from initial heuristic "guesses", I suppose. Most wrong guesses would be detectable as the results would visibly lose self-consistency after the first few iterations.

    (also a good way to detect doctored images, although I daresay there are easier and more efficient ways of detecting those)
  • I've heard of software that works much like they describe on their site. The software I've heard of has been around for a while, and the way it works is you have to specify "tacks" on the same point of the object on each photograph. The software can then solve for a 3D coordinate for each of those tacks.

    Essentially, you have a vector go from the "eye point" in each photo through each tack in that photo. You then solve for where the vectors for each tack come as close as possible to intersecting at the same point (by finding a least squares solution to a system of linear equations). This is a bit of an oversimplification, because the position of the "eye" in each photo is a variable as well.

    Textures are generated by actually taking pieces of each photo between the tacks, scaling and stretching them appropriately, and then blending them together.

    It's all a pretty neat process, but to use it in a real-time setting you'd need multiple cameras, and some sort of AI that would place the tacks. As it is, the process has a fairly large manual component. Doing that with every frame of a video would be extremely tedious. (but could probably be simplified by the fact that each "tack" probably doesn't move very much from frame to frame)

    I can't figure out what that extra piece of hardware is for though. This type of software normally works with ordinary photos. Even scanned polaroids or hand-drawn artwork (if reasonably accurate) would work. Does anyone know what that hardware does? Does it actually somehow scan "depth" information? If so, how?
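    The "tack" solve itself is small; if the camera positions and ray directions are treated as known (the real software also solves for those), each tack's 3D position is just the point closest to all the rays in the least-squares sense. A sketch:

    import numpy as np

    def triangulate_tack(origins, directions):
        # Each photo contributes a ray from its eye point through the tacked
        # image point; minimise the summed squared distance to all rays.
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, directions):
            d = np.asarray(d, float)
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to the ray
            A += P
            b += P @ np.asarray(o, float)
        return np.linalg.solve(A, b)

    # Two rays that should meet at (1, 1, 0):
    print(triangulate_tack([(0, 0, 0), (2, 0, 0)], [(1, 1, 0), (-1, 1, 0)]))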
  • Marsokhod is also the name of the rover we bought from the Russians, which I worked on two summers ago at NASA/AMES. this one hasn't gone anywhere except for field testing -- except for in the nicely realistic 3d models of Mars on the nice SGI boxen.

    in russian, Marsokhod just means "Mars rover," just like their lunar rovers, which were named "lunokhod" 1 and 2, so I'm not all that surprised it's not unique.

    Lea

  • very simple -- more pictures... there are systems that are pretty good at inferring what they can't see, but for a high level of detail in spots you can't see from one angle, you really need to get them from another...

    it's something you don't complain about in 2d... can't really complain about it in 3d either!

    :)

    Lea
  • If you spend the $5000 for the camera, you can probably afford to spring another $1500 for an additional machine to run the software.

    There are a lot of applications that are orders of magnitude more expensive than the hardware and OS that they run on.

  • It appears that you need to take multiple pictures for the effect. That seems to kill any "action" images right there.
  • my GAF viewmaster, gosh, with the Grand Canyon and Aircraft carrier reels I love it!

    Agent 32
  • Numerous techniques for 3D shape recovery have been developed. Some are based on finding correspondences between points in different views of the object; you can think of it as a sophisticated form of triangulation (and originally, it was done by hand). Some use video sequences and extract shape from motion. Some use multiple cameras for stereo. You can get 3D shape from shading, from texture, from focus, and from lots of other image properties. Those are all passive techniques. Passive autofocus systems, as found in many AF SLR cameras, can be thought of as a very simple form of passive 3D shape recovery.

    There is also a wide range of active techniques. In those techniques, you don't just use a camera, but you also use some kind of light source. Structured light-based 3D recovery can be done in real time and there are lots of approaches to that as well. You can think of active autofocus systems, found on many P/S cameras, as structure light systems.

    Both software-only and software/hardware combinations for 3D shape recovery from images are commercially available, and some are also available for free as research code. Still, don't expect this to be easy or completely automated.

  • is a neat toy but is basically a rework of the engine inside Canoma (produced by MetaCreations). It takes what would be a 2D image and interpolates the dimensions by calculating shadows and lightpaths and such. You can use Canoma to make a 3D rendering of your living room from a photograph. I think they made it a little cooler by taking multiple pictures of something and combining all of them to do real good modeling. I'm sure a printer could be developed to print images holographically but I'm not sure if this is really viable for 3D apps until a good 3D display is developed.
  • Check out www.stereovision.net for more info.

    Better yet, check out www.stereovision.net [stereovision.net] for more info.

  • This technology doesn't sound applicable to video techniques. Minolta's FAQ indicates that "at least" six shots of an object are typically necessary to build a 3D image from it. It sounds like the camera takes 2D photographs from different angles, and Metastream's software interpolates from those photographs to determine the object's solid structure.

    It sounds unlikely to be a useful technique to apply to video; you'd have to have six videocameras recording the same scene from different angles. I'm not even sure that the state of the art begins to touch the problems of recording video in three dimensions, storing the data, and playing it back.

    I wouldn't hold my breath waiting for Quake environments built from this technology either. They're building a 3D model of an object based on external photographs; doing the same thing with internal photographs is a very different ballgame.
  • The method usually used to generate 3D models from multiple photos is called photogrammetry - and is used in aerial imaging to extract elevation from multiple shots of terrain. It is explained in just about any good cartography textbook.

    Essentially, for consumer use, the camera is flipped horizontal with the subject in front of it. Everything proceeds according to Zagadka's description, pretty much.

    Incidentally, there is an old copy of Byte magazine, from the late 1970's describing how to extract the 3D information from multiple shots, with included BASIC code to calculate the 3D vertices from the 2D inputs. Pretty cool - crazy though that only NOW are we actually using this at the consumer level, even though an article in a well known computer magazine has languished for nigh 20 years!
  • Maybe you could have two simultaneous camera feeds to achieve the same effect.
  • For 3D reconstruction from video sequences, no special hardware is required. See Vanguard [ox.ac.uk] at Oxford University for example.

    Rupert.

  • Um... what the hell?

    First there's that post that gets moderated up to 6... now this one's at -5. What's going on?
    --
  • I guess this camera and software combo is the commercialization of this research [berkeley.edu]. Go there if you want to know how it all works and how cool it can be.

    Dave
  • Imagine the potential for videogames. If this is capable of producing a textured 3d model, just think how realistic the caves in Tomb Raider MCMLI could look. Or how easy it would be to create a 3d model of a room, complete with textured furniture, walls, etc.. Take a few from a couple of different angles and you have a photorealistic model. I wonder.. Could I virtually paste myself into the video feed from my office and look present?
  • Well, true 3D video is going to take a while to come by. But for right now you can watch movies in a 360 degree interactive environment. See it at http://www.behere.com/ [behere.com]. If only someone could figure out how to do this with IPIX (http://www.ipix.com [ipix.com]) technology.
  • It would take some big schpense to develop and manufacture, but it seems possible...

    Take a camera, and give it a bit of sonar-like ability to determine the distance between the camera and whatever scene you have it aimed at. The camera builds a wire mesh of the objects in front of it, then breaks the image data down into textures, which it then wraps the mesh with.

    Sonar is of course out of the question, but I'm sure there's better technology out there. I mean, I just come up with 'em, I don't implement 'em.
  • We reported about this along time ago on geeknews (http://geeknews.net/cgi-bin/fooboard.pl?944436957 ), but it's still cool non-the-less. The bigger version of this is really cool. I can see this camera really being used in new Quake
    maps. The only problem would be that would need to tone down the poly count.. Just read the link above that I added and you'll see what we had to say.


    I seriously doubt that the quake market could sustain a company's entire line of digital cameras. I would like to know a few things.

    1. The cost? I don't want to have to mortgage my house just to pay for one.

    2. Interface? I would like this to just plug into a standard serial or parallel port. Failing that perhaps something like just taking the film and allowing for floppy film based things.

    3. Linux compatibility? I could always use a machine that actually worked with Linux and with Linux apps. I don't want to buy either an expensive commercial 3d app or have to upgrade my PC just to use this.
  • Yeah, more useless tech. Like when the first PC's came out. They were useless then for the average user. (Some would argue they still are useless to the average user...) This is simply technology which needs to mature. There are many areas where this could have a big impact once they have it up to speed.
  • Several companies already produce shutterglasses that give "true 3d" to any 3d accelerated software out there. Heck, Elsa sticks a pair in with almost every card they sell.

    Check out www.stereovision.net for more info.
  • Except this appears to allow you to create a digitized 3-D surface, rather than just a stereoscopic image.

    Theoretically, the output of this camera would allow you to use the image in a rendering application to produce an "actor" in a setting. You can't do that with a disposable camera stereoscopic image without additional work, information, and calculation.

  • System Requirements: [minolta.com] Windows 95 OSR2 (Ver. 4.00.950b) or later, Windows 98, Windows NT 4.0.

    I don't use windows much, and not at all at home. So this new "technology" isn't of much use to me.
  • I think perhaps you're being too critical. The camera is obviously not meant to be the end-all be-all of 3d modeling. It's meant to provide a relatively cheap, very simple method of creating 3d models of real world objects. If someone really wants high quality, there are plenty of other (far more expensive) options. But for a small business, this is a great way of setting their products apart from the rest.

    I worked for a company that made sensors and parts for many research and engineering corporations. They wanted to be able to put 3d models of their products on the CD version of their catalog. With hundreds of thousands of items to be modeled, however, they couldn't afford the cost of either having it professionally scanned or hiring a computer modeler. They would love a camera like this.
  • Well, the site say 95, 98, or NT. Windows, that is.

    Not linux...

    Why? Because MetaCreations does not develop Linux software yet. And the camera works with MetaCreations software.
    So, let's get some of you people out there telling MetaCreations that you would buy their software if they released a Linux version.

    BUY?
    Yep. MetaCreations is not going to open source their software, because they make lots of money off of it. But they might just make a Linux version. MetaCreations' stuff is already very stable and usable, and a Linux port would probably inherit those features. A Linux version would be great...

    In other related news, somewhere on the BeOS site it says that MetaCreations is porting some of their stuff (no specifics and this may just be rumour) to BeOS. Once that hurdle is jumped, a port to Linux shouldn't be too hard at all.
    --
    Talon Karrde
  • Last month's Stereo3D newsletter [stereo3d.com] mentions a digital 3D camera that has been around since 1997. There's a cool picture too.
  • But what you would get would be a 3d view of one side of the object. For example, if you took a picture of a soccer ball, it would tell you that the particular angle you were looking at it from was rounded, but it could well stretch off behind like a cylinder or pipe.

    The camera could well be used from different sides (e.g. take a pic from the top and bottom, then the front, back and left sides), but since software exists that lets you do this already (including Metacreations' own Canoma software, but it's not too good) it seems a bit gimmicky. Then again, I'm sure it would help novice 3D modellers get decent-looking objects if the detection mechanism was of a high enough resolution to capture a lot of detail.

    Another example: it might well be excellent for capturing human faces, which is a tricky £$%£$^%$ to make by hand. So all in all, it could be useful for some people some of the time. Bit like most things :o)
  • by SIGFPE ( 97527 ) on Thursday February 03, 2000 @09:19AM (#1308212) Homepage
    You underestimate how hard this is! The camera will just give you a heightfield relative to the camera - with plenty of error. A table top will appear as a jagged surface. Apply CG lighting to that and it will look like a mess. A table will in fact look like a cuboid if you photograph it from above and don't get the occluded legs. To get something that works from other viewpoints means combining different viewpoints together - not an easy task. The data you get will be a high res cloud of points. Far too much data to work with efficiently. And it doesn't tell you which vertex is connected to which. How do you tell whether the step from the table top to the floor is a continuous surface or a discontinuity when looked at from above? For furniture it's probably much easier to build your own CG table by more traditional methods and use ordinary photos for texture (for example, Trinity's bullet-time 'kick' scene in "The Matrix" was built this way). For caves it's probably much easier to procedurally synthesise the rock texture and displacement.
  • NASA does it too. The Mars rover Marsokhod has stereoscopic vision, and some very nice SGIs make 3d models out of it that you can move Sojourner and other rovers around in...

    Lea
  • by Rombuu ( 22914 ) on Thursday February 03, 2000 @10:14AM (#1308214)
    <i>I don't use windows much, and not at all at home. So this new "technology" isn't of much use to me. </i>

    Who cares what operating system you use? You think /. should not have any stories about stuff that won't run on your damn computer?

    Get over yourself.
  • by Animats ( 122034 ) on Thursday February 03, 2000 @09:29AM (#1308215) Homepage
    3D depth extraction can be done in real time. See Point Grey [ptgrey.com]. [Their site's down today; I hope they didn't go out of business.] They build a nice hardware/software system with three cameras arranged in a triangle. Three-camera stereo works much better than two-camera; most of the ambiguous cases go away. Their hardware is overpriced and their software is closed-source, but maybe somebody will deal with that. The algorithm isn't that complicated, but it's really expensive computationally. Their first implementation used a DSP, a hardware convolver chip, and a Transputer, but they've since moved to more standard hardware.

    Canoma [metacreations.com] is a re-implementation of some work done at U.C. Berkeley in the mid-90s. The Berkeley group liked to do big things like buildings, and modelled the central part of the Berkeley campus. They got their aerial photographs using a camera on a kite; there's an architecture prof at Berkeley who's developed good techniques for doing this. Much cheaper than a helicopter.

    Both Canoma and Metaflash are semi-automatic systems. The user has to manually identify corresponding points and edges between multiple images. This can be a lot of work. One more generation and somebody will have this fully automated.

  • by Kon ( 134742 ) on Thursday February 03, 2000 @08:59AM (#1308216)
    It has a range of 90cm. At 20cm it has an accuracy discrepancy of 1mm. At 90cm it is probably close to 1cm. It can't capture anything farther than 90cm away.

    The screenshots neatly show reconstruction of a simple piece of pottery. Jesus, but if that isn't the simplest 3d object then I must be smoking something.

    You'll get better stereoscopic results taping two $14 disposable cameras together! (I've done it, it works, just get the focal distance right).

    Another example of useless technology. And I cringe at all the thousands of useless vertices this solution will create in 3d models. No thanks!

    Oh, and note the accuracy discrepancy of 1mm is from a photo of a ping pong ball. Like we all need pictures of perfect round circles :P
