
Minolta 3D Camera 150
Bookwyrm writes "This was just an interesting technology toy/tidbit I ran across.
Metacreations and Minolta have teamed together to develop
what appears to be a modified digital camera that allows you to take
'3D' images. The camera stores/digitizes the image data in such
a way that Metacreations' software can (re)construct a 3D model
of objects in the picture along with their textures. While mildly neat in itself, it would be interesting to consider
how far you could develop this technology. Could you do real-time
3D capture using a video camera with these techniques (and sufficient
computer power)?"
I did this already (Score:1)
Re:Requirements (Score:1)
The camera costs $4,495.00 and appears to only have software for windows (I'd expect MetaCreations to have ported it to MacOS by now, but oh well)
The MetaFlash software that goes with the camera looks neat, http://www.metacreations.com/products/metaflash/
Re:Old, but still cool. (Score:1)
3d camcorder. (Score:1)
And their stock kicks ass (Score:1)
Re:But.. (Score:1)
I'm sorry, I assumed such a high-tech crowd knew how to use search engines or simply take a stab and try http://www.eyetronics.com [eyetronics.com]...
I apologize if I seem snotty, I've got 12 hours of work to do before tomorrow morning, and I'm stuck in a meeting destined to last until at LEAST then :)
Cheers,
off topic: metacreations going opensource (Score:1)
It is interesting that MetaCreations is in this deal with Minolta. It seems like a perfectly good fit of a great graphics software company and a great camera company. But MetaCreations has had its own difficulties lately. MetaCreations is a company with really neat products and what seems to me almost no good marketing skills. This has led to a sustained loss [newsalert.com] every quarter last year. They are often overshadowed by the big boys, Microsoft and Adobe. Well, it seems obvious to me that they should consider opening up their software, at least in the graphics genre. If they had the GIMP on their side...
Even nicer would be if an open source company bought them flat out and opened them up. This would be chump change for RedHat but also very possible for Corel.
I know you all have heard it a thousand times before. Open it up! In fact I will probably be moderated down because it is so off topic. But first read what I have to write. Why did Netscape open it up? Because they couldn't support a free browser indefinitely like Microsoft could. What compelling reason would one have to buy MetaCreations software over Adobe's products? Features? Certainly not; very few people buy software for the features. Cost? Well, it is a concern, but only to people not making money doing graphic work. What else, then, is going to buoy MetaCreations? Their e-commerce? Maybe, but even then this product line has nothing to do with their graphic software. So spin it off!
Applications to Gaming? (Score:1)
would jump to have - Imagine being able to take a picture of a real-life room, then plugging this camera into your computer and using a converter. Quake 1/2/3 levels of real buildings in seconds! I wonder how well this camera works with large objects like rooms or groups of people, though. Anyway, you hear a lot about people from game companies going out to dumps to get pictures of stressed metal and such. This could improve the realistic quality of games a lot.
Re:More Info - Photogrammetry (Score:1)
I still don't understand how this depth scanner actually works though. If they're using ordinary visible light for the depth scanning process, wouldn't the colour of the scanned surface affect the perceived distance? (heck, even non-visible light would be affected by "colour")
Re:First Use (Score:1)
Yes (Score:1)
You can do anything, if you really want to.
Re:Maybe I'm mistaken, but.. (Score:1)
Being able to process the pictures on the computer looks like it has some nice potential, but not for your average joe.
the most useful application for this... (Score:1)
Maybe I'm mistaken, but.. (Score:1)
--neil
Re:Read the small print guys (Score:1)
Re:3D video unlikely (Score:1)
For example: you create the 3D image (say, of a head), then you video the subject. Your incredibly fast computer (say, an Octium 6000 MHz) looks at the image each frame and updates the movement of the 3D object relative to the change in the video. As each eye moves relative to the other, you move/rotate the 3D object relative to those two points; the same thing might apply to a joint, except you would need at least two points on each joint.
Sorry I can't explain it better, but given enough speed, I see no problem creating a virtual image...
Edward
Re:3D extraction from video (Score:1)
describes a system that I saw demoed in, um, 1993. The system computed an image at 10-15 frames per second, and each pixel of the image had a brightness proportional to the distance. I stood in front of the camera, and a human-shaped outline appeared on the screen. I put my hand out, and that part of the image got brighter, with my hand brighter than my arm.
The algorithm took several hundred megaflops. Lots of today's chips should be able to run that at a useful frame rate.
OK... (Score:1)
The software developed for this device may be new, but getting depth information from a greyscale image isn't - such techniques are also discussed in cartography textbooks, and were used in the 3D imaging of the "face" on Mars (along with photogrammetry, I believe, since they had two different images as well - however, I think greyscale extraction was more useful, since the two images were taken at different times of "day").
Regarding using beams of light and special software - I wonder if you could build your own rig using a scanned laser, a high quality video camera, and some digitizing software? Set the object on a turntable, set it rotating, and scan the laser in a vertical "stripe" across the surface. Do a real-time video capture, strip out the individual frames, then do the greyscale analysis on those frames, along the line of the laser path. The various values could then be used to calculate "depth" values along the surface of a cylinder, say (or, have some way to keep track of the angle of the object on the turntable - maybe a potentiometer hooked to a joystick port, or a mouse wheel sensor, something like that - then use that to calculate the radian offset around the Y axis).
This device seems way overpriced for what it does - it should be possible to do it much cheaper. Maybe I should break out a Quickcam, an old record player, a keychain laser pointer, and some duct tape - and see what happens!
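For what it's worth, the triangulation behind a rig like that is simple enough to sketch. Here's a minimal Python version; the camera geometry, field of view, baseline and laser angle are all assumed numbers for illustration, not anything from the post:

```python
import math

def stripe_depth(pixel_x, image_width, fov_deg, baseline, laser_angle_deg):
    """Depth of a laser-stripe hit seen at image column pixel_x.

    Hypothetical setup: pinhole camera at the origin looking down +Z,
    laser offset by `baseline` along X, its vertical stripe tilted by
    `laser_angle_deg` from the Z axis. The camera ray and the laser ray
    intersect at the object's surface (plain triangulation).
    """
    half_fov = math.radians(fov_deg) / 2.0
    # Tangent of the viewing angle for this pixel column (pinhole model).
    cam_tan = ((2.0 * pixel_x / image_width) - 1.0) * math.tan(half_fov)
    laser_tan = math.tan(math.radians(laser_angle_deg))
    # Camera ray: x = z * cam_tan; laser ray: x = baseline + z * laser_tan.
    # Setting them equal and solving for z:
    return baseline / (cam_tan - laser_tan)
```

The greyscale analysis then reduces to finding the brightest column in each scanline and feeding it through a function like this, one frame at a time.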
Price (Score:1)
Camera for Minolta 3D 1500 Windows - $4,495.00 - In Stock
Ouch.
*snicker* (Score:1)
I imagine the developers of this are spending most of their interviews/press releases repeating variations on the following: "No, damn it, p0rn had nothing to do with it!"
[Yes, it was immature to say that. Very immature, thank you.]
Laser scanners do this, just much more accurately (Score:1)
You can then either triangulate the point cloud for a dense triangular mesh, or fit splines or maybe a subdivision surface cage to it.
Appply the bump/colour maps, and voila, an excellent, truly 3D representation of the object being scanned.
This technology, of course, costs a lot more than the camera from minolta, but gives results that are probably about 100x better over a wide variety of objects.
Someone should come up with a cheap-ass laser scanner.. whats involved in making one of these?
Cooler than you think.. (Score:1)
The Berkeley researchers have this working, only it takes more steps and more time. A lot of these techniques were used in The Matrix for stuff like the bullet-time shot and the scene in the beginning where the girl does the levitating crane kick. See this picture [berkeley.edu] for another example if you have no idea what I'm talking about. And it's all done in BSD, so, yeah, Linux could probably do it too, even if this Minolta camera doesn't let you (they won't be the only ones with a cool-ass-3d-camera-you've-gotta-have).
These techniques are the future, not just for games, but for anything 3D. It's still polygon meshes too, so all of our other techniques for working with polygons (clipping, collision detection, transformations, etc.) still work fine, just with little to no modelling time. (But then, of course, how do you do the really cool stuff like alien worlds or evil monsters?)
Kinda reminds me of an old Doom map modelled after the bethesda movie theater close to my house. =]
Dave
Re:Requirements (Score:1)
Old, but still cool. (Score:1)
But.. (Score:1)
Argggh - Read the Info! (Score:1)
Re:This is not to make a neat photo album (Score:1)
I just downloaded it this morning, so I haven't made anything fabulous with it yet, but there's seemingly decent GPL software out there that's supposed to do this with your photos already. It's called "Panorama Tools," and it includes a "stitcher" and plug-ins for Photoshop and all the other stuff you need to make VRML-type environments. I didn't take down the address of its site, but you can find a link at jmac.org, in the open source Mac software section. Supposed to work on Linux and GIMP too, but to a lesser extent (of course). I was just testing it when I saw this story, so I'm not sure if it's a perfect implementation, but within five minutes I was (mis-)using it to make some otherwise-time-consuming backgrounds that look like Cocteau Twins album covers (which is cool, but obviously not what it's made for). Looks like they're in need of development help and/or cash, too.
Re:Maybe I'm mistaken, but.. (Score:1)
Yes, 3D capture from 1 camera video has been done (Score:1)
3d with normal digital camera (Score:1)
Re:Image Based Rendering (Score:1)
This is not to make a neat photo album (Score:1)
This is not intended to create 3d pictures, or stereographic photos.
By using this camera, and the assosciated software you "stitch" together multiple views, from different angles to develop a 3d model of the object.
This is not about making pretty pictures, it is about taking existing real world items into a digital format that can then be used to do further development.
An example might be an architect is hired to add to an existing building, or develop the grounds surrounding it, the building was designed and built in the 1920's so all of the designs are ink on vellum or linen.
That makes it hard to easily make a 3d model of the place. So he takes a few shots of the building from ground level maybe a couple from surrounding buildings, stitches points that are common from one photo to another, and voila there is a model he can take into 3d Studio or similar, and do his planning.
Re:Why it won't work in Linux for a while yet (Score:1)
The file format for MetaFlash Studio, which is the software used by the camera, is called MetaStream and is an open file format; you can get it here [metacreations.com]. You need to fill out a registration form, but only name, e-mail, platform, and what type of content you develop are mandatory.
Also on that page is some sample source for a reader/viewer. The biggest problem with the sample code is that it does not include any vertex ordering, so it doesn't know what order to draw the vertices.
But even at that, there is nothing keeping someone with some knowledge of Netscape plug-ins from creating a viewer for Linux. Or even a reader/writer to convert VRML to MTS (their file format).
Virtual Sets (Score:1)
----------
2x perspective (Score:1)
===
-Ravagin
Re:Hmm, sounds ok... (Score:1)
Hmm, i don't know how well this would work, though... because the dips(? depressions?) in the face may not be correctly captured by the camera. Especially if the bottom of the "dip" isn't visible except in one of the camera views. Esp. the part around the eyes and nose -- there are a lot of dips/depressions that are obscured by the cheeks, etc., when viewed from the side, so these features may be mis-reconstructed as flat features since the camera only captures them in the front view.
In fact, if you photograph an opened tin can (with the bottom still there), chances are the model will produce an unopened tin can shape, because only one camera view actually sees into the can; the other views are obscured. Well, this is a contrived example, but still...
Well, maybe there's a way to do it. I'm not sure...
Re:Why it won't work in Linux for a while yet (Score:1)
BTW, there's plenty of "Eyecandy over functionality" in unix. Remember what enlightenment started out like?
Re:2x perspective (Score:1)
Re:3D extraction from video (Score:1)
I've seen similar 3D done ... (Score:1)
Some info about it (Score:1)
However I can give anyone a hint about it utility. Imagine an intelligent system that looks for/identifies/retrieves objects in an environment. Yes, like a robot that is asked to fetch an object (say a book). You can do it in several ways, but one of them implies storing a database of book features. These features constitute a description of the object you need.
Well, as you may have guessed, these features are obtained from such a camera.
There are other possible usages, like CAM (I guess no-one noticed that the software that comes with it generates a mesh of the object).
Everything I said seems to be very nice - don't worry. Just look at the price and you won't be excited any longer.
Re:Is there gonna be any Linux support? (Score:1)
Neuromancer (Score:1)
3D output?? (Score:1)
What about sporting events? (Score:1)
Re:Why it won't work in Linux for a while yet (Score:1)
Why aren't they making software anymore, when their software is some of the best available (read: Poser 4, Bryce, Kai's tools, etc.) for graphics and 3D imagery? I can't imagine how this would make them any more money or give them any better reputation than the great one they have.
As for the other replies which talk about the eyecandy interface, it doesn't actually slow down that much. KPT6 on a Pentium 60 is smooth and efficient (read: realtime KPT Gel and transparency/lensing effects). There must be some ASM in there.
Because all widgets on X are done through external libraries (e.g. Qt, GTK, Motif), the MetaCreations programs would run just as fast, because they would simply have their own widget library which made cool curvy semi-transparent toolbars with shadow effects.
With respect to your reply, in the open letter MetaCreations says that they will continue to support the products they have.
If they aren't making any more new products, where did this camera come from?
--
Talon Karrde
Stereoscopy vs. 3d models (Score:1)
Mildly neat? I WANT this! ;-) (Score:1)
EBAY - Goofy coconut monkey now in 3D (Score:1)
Re:Argggh - Read the Info! (Score:1)
Re:I did this already (Score:1)
"He say you Blade Runner!" (Score:1)
ANOTHER APPLICATION -- WarGames? (Score:1)
Re:Why it won't work in Linux for a while yet (Score:1)
Stable? Sure. Usable? Sure. Efficient? No. Imagine the memory requirements for their programs with their 100% eyecandy interfaces... probably (wild guess) 10mb at the least just for the interface and the little buttons. Just give me Lightwave's interface and I'm happy...
It'd be kind of funny seeing what their programs would look like in Linux though, I'm not a Linux user myself but I've used it before, and MetaCreations software is the EXACT opposite of Linux: Eyecandy over functionality.
Re:First Use (Score:1)
3D video (Score:1)
Some forms of 3D video are already possible. One commercially available package can be found at this link: http://www.eyetronics.com/main.html [eyetronics.com]. Check out "Applications", then go to the bottom of the page and follow the link for more details at the bottom of "Professional 3D services" (easy, no?). There you can find examples of the acquisition and results of their technique for 3D video. It's just a start, but 3D video is definitely being developed.
Re:3D video (Score:1)
>Some forms of 3D video are already possible. One commercially available package can be found at this link: http://www.eyetronics.com/main.html [eyetronics.com]. Check out "Applications", then go to the bottom of the page and follow the link for more details at the bottom of "Professional 3D services" (easy, no?).
I'm sorry. I meant "Products", not "Applications."
Re:What about sporting events? (Score:1)
Cool technology...but how good is it actually (Score:1)
Re:3D output?? (Score:1)
Re:More Info - Photogrammetry (Score:1)
Re:I did this already (Score:1)
I worked on this (Score:2)
Let me tell you: This is some awesome shit.
DISCLAIMER: I no longer work for the company, and the following information is gathered from what I have found over the web and from asking people at the company.
History
The technology was initially pondered by a Russian physicist - Alexander "Sasha" Migdal. He came to the United States a long time ago and did work at Princeton University in various fields (mostly in physics, I believe). After a while, he formed a company with his friends from Russia called "Real Time Geometry." Sasha is an insanely smart man. A little eccentric, but smart.
RTG pioneered the technique of being able to dynamically set the number of polygons you want to render a model with. For instance, you could have a massive model of a helicopter render with full detail when it's close to the camera, and have it render with less detail when it's far away from the camera. This technology is now part of MetaCreations' MetaStream [metastream.com].
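The idea can be sketched in a few lines. To be clear, this is just a distance-based polygon budget with made-up near/far thresholds, a stand-in for whatever RTG's actual (far more sophisticated) progressive-mesh scheme does:

```python
def pick_lod(distance, full_polys, near=1.0, far=50.0, min_polys=64):
    """Choose a polygon budget that falls off with distance.

    Hypothetical parameters: full detail inside `near`, a floor of
    `min_polys` beyond `far`, and a linear falloff in between. A real
    progressive mesh would pick which polygons to drop, not just how many.
    """
    if distance <= near:
        return full_polys
    if distance >= far:
        return min_polys
    # Linear interpolation between the near and far planes.
    t = (distance - near) / (far - near)
    return max(min_polys, int(full_polys * (1.0 - t)))
```

So the helicopter might render with its full 10,000 polygons up close but only a few dozen when it's a speck in the distance.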
The company was bought out by "MetaCreations" in (I think) 1997 (or thereabouts). MetaCreations was the merger of MetaTools and Fractal Design.
After this was when the technology that we're discussing now was beginning to be implemented.
Process
Although I have not performed the procedure myself, I have seen it done on many types of objects, from pottery to toys to PEOPLE'S FACES.
The object is placed in front of a black background with several lights around it, aiming for neutral lighting. The black background prevents a shadow from being interpreted as part of the object. The camera is usually placed about 3 meters away (not precise, just an average). For "in studio" objects, a laser was used to accurately calculate the distance to the subject. The technology has been refined a lot (obviously), and just when I was about to leave, they introduced this deal with Minolta in an all-hands meeting.
Now, I see a post at (Score:5, Interesting) that states that one of the shots is a piece of pottery - how simple is that!
Well, it's not. The reason?? TEXTURES. The 3D imaging RECREATES the model so as to preserve not *only* the size/shape of the object, but ALSO the *look* of the object under certain circumstances - for instance, certain lighting environments.
That's why a pot ain't so easy. While the shape might be "easy" (you try extrapolating 3D data from 2D data), the texturing is even more difficult. I can remember seeing models where everything was great, except maybe when you look into the pot and you see a hole at the bottom and you think "Hurm, we hadn't thought of that, had we?"
In any case, that's the process. Now how does dynamic resolution and 3D imaging come together? Simple: The fact is that many objects (people for instance) have *curved surfaces*. Within the realm of polygonal 3D modelling, you *have* to throw out data, it's just not gonna all fit. While the camera/software figures out the 3D models, it is very difficult to render them in real time... MetaStream does a wonderful job of rendering huge objects in real time, even on a shitty computer.
Now, in this wonderful time of the web and stuff, MetaCreations (I think) is positioning this software/hardware for two things:
Of course, that means you need small files - full 3D models and textures the size of a GIF or two? Yep. It's pretty cool stuff. From what I know, it's a wavelet compression technique that compresses both the textures and the model data. Most models (of people's faces, toys, pots, whatever) are in between 50 and 200 K, which is pretty remarkable for the quality that you get from MetaStream.
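As an illustration of the wavelet idea (the actual MetaStream codec isn't documented here, so this is only the generic building block), one level of the 1D Haar transform looks like this; compression comes from quantizing or dropping the small detail coefficients:

```python
def haar_step(signal):
    """One level of the 1D Haar wavelet transform.

    Pairwise averages capture the coarse shape; pairwise differences
    capture the detail. Assumes an even-length input. This is only the
    basic building block, not MetaCreations' actual codec.
    """
    avgs = [(signal[i] + signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    diffs = [(signal[i] - signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    return avgs, diffs

def haar_unstep(avgs, diffs):
    """Exact inverse of haar_step: rebuild the original samples."""
    out = []
    for a, d in zip(avgs, diffs):
        out.extend([a + d, a - d])
    return out
```

Run recursively on the averages, the transform concentrates most of a smooth texture's or mesh's energy in a few coefficients, which is what makes 50-200 K models plausible.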
Several web sites have already implemented this technology, and make quite good use of it. Here's a sampling:
Sorry for the long post, but I hope I cleared up some information.
PS - Hi to Sasha, Victoria, Dmitry, Victor, Baga and everyone else!
Re:I worked on this (Score:2)
But the competition is even cooler! (Score:2)
I have worked with this package as well as other competing products and am quite impressed with the level of quality vs. effort required for this form of 3D imaging. The funny thing is I am sitting in a meeting as I type presenting the various options for this form of image capture as my company's technology lead, and I just was doing my rounds (on my new WaveLAN 11MB Silver wireless card!) on the Web when I saw this article!
If you want Linux compatibility, in fact, plus a much better, less restrictive product overall, check out Eyetronics... in fact, last time I spoke to a developer there he said that Linux was his PREFERRED platform for his software. The main benefits of Eyetronics' technology are the following:
If you want more info, feel free to ask... I've demoed and used most of the available 3D capture technologies, and for non-critical work (engineering, etc.) this new breed of photographic solutions seems the best. And there aren't as many kinks or hitches as you might think; you'd be surprised what these guys have done with image- and contour-analysis and a lot of intelligence on their part.
I may be wrong, but I believe Eyetronics started as a university project in Sweden or Denmark... probably Denmark.
Btw, before I get flamed for being a fraud, I work for a market-leader in ecommerce-oriented 3D imaging, but this is as close to my real identity as I can post under. If you can figure out who I work for, bully on you, but it isn't "0110".
:)
Nirvana for Portman-o-philes (Score:2)
Add a trouser-full of grits (don't forget to tie-off the cuffs first) and you've got yourself a party!
; )
Re: 3D extraction from video (Score:2)
Also, I read about 3DBuilder [3dcafestore.com] a long time ago; it looks semi-automated.
Some more random digging uncovered an index of VR research [culture.com.au]. A month or two ago I was looking for information on panoramic photography and I read a summary of someone's thesis (IIRC); he automated the compilation of the best affine transformations on frame-to-frame video, then statistically analyzed those transformations to yield great detail. I can't find that right now. :(
Hmm... (Score:2)
Probably be a matter of iteratively refining a volumetric model from initial heuristic "guesses", I suppose. Most wrong guesses would be detectable as the results would visibly lose self-consistency after the first few iterations.
(also a good way to detect doctored images, although I daresay there are easier and more efficient ways of detecting those)
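The "iterative refinement from guesses" idea is in the spirit of silhouette carving. A toy sketch, with made-up projection functions and masks: keep only voxels whose projections land inside every view's silhouette (a real system would add photo-consistency checks on top of this, which is where the iteration comes in):

```python
def carve(voxels, silhouettes):
    """Visual-hull carving sketch (illustrative, not any real system).

    `voxels` is a list of candidate 3D cells; `silhouettes` is a list of
    (project, mask) pairs, where `project` maps a voxel to a 2D cell and
    `mask` is the set of 2D cells covered by the object in that view.
    A voxel survives only if every view agrees it belongs to the object.
    """
    return [v for v in voxels
            if all(project(v) in mask for project, mask in silhouettes)]
```

For example, with two orthographic views (top-down and front-on) a voxel is kept only when both its (x, y) and its (x, z) projections fall inside the respective silhouettes.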
The software part isn't new... (Score:2)
Essentially, you have a vector go from the "eye point" in each photo through each tack in that photo. You then solve for where the vectors for each tack come as close as possible to intersecting at the same point (by finding a least-squares solution to a system of linear equations). This is a bit of an oversimplification, because the position of the "eye" in each photo is a variable as well.
Textures are generated by actually taking pieces of each photo between the tacks, scaling and stretching them appropriately, and then blending them together.
It's all a pretty neat process, but to use it in a real-time setting you'd need multiple cameras, and some sort of AI that would place the tacks. As it is, the process has a fairly large manual component. Doing that with every frame of a video would be extremely tedious. (But it could probably be simplified by the fact that each "tack" probably doesn't move very much from frame to frame.)
I can't figure out what that extra piece of hardware is for though. This type of software normally works with ordinary photos. Even scanned polaroids or hand-drawn artwork (if reasonably accurate) would work. Does anyone know what that hardware does? Does it actually somehow scan "depth" information? If so, how?
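The least-squares "tack" step described above can be sketched as follows. The rays (origin, unit direction) are hypothetical inputs, and unlike the real software this ignores solving for the eye positions themselves:

```python
def solve3(A, b):
    """Solve a 3x3 linear system with Gaussian elimination (partial pivoting)."""
    n = 3
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def nearest_point_to_rays(rays):
    """Least-squares 3D point closest to a set of rays.

    Each ray is (origin p, unit direction d). Minimizing the summed
    squared distance to all rays gives the normal equations
    (sum of (I - d d^T)) x = sum of (I - d d^T) p.
    """
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for p, d in rays:
        for i in range(3):
            for j in range(3):
                m = (1.0 if i == j else 0.0) - d[i] * d[j]
                A[i][j] += m
                b[i] += m * p[j]
    return solve3(A, b)
```

With two rays that actually intersect, this recovers the intersection exactly; with noisy tacks it lands at the point of closest mutual approach.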
Re:3D extraction from video (Score:2)
In Russian, "Marsokhod" just means "Mars rover," just like their lunar rovers, which were named "Lunokhod" 1 and 2, so I'm not all that surprised it's not unique.
Lea
Re:Hmm, sounds ok... (Score:2)
it's something you don't complain about in 2d... can't really complain about it in 3d either!
:)
Lea
Re:Is there gonna be any Linux support? (Score:2)
There are a lot of applications that are orders of magnitude more expensive than the hardware and OS that they run on.
is this really all that useful? (Score:2)
This'll never replace (Score:2)
Agent 32
numerous techniques for 3D shape recovery (Score:2)
There is also a wide range of active techniques. In those techniques, you don't just use a camera, but you also use some kind of light source. Structured light-based 3D recovery can be done in real time and there are lots of approaches to that as well. You can think of active autofocus systems, found on many P/S cameras, as structure light systems.
Both software-only and software/hardware combinations for 3D shape recovery from images are commercially available, and some are also available for free as research code. Still, don't expect this to be easy or completely automated.
MetaFlash... (Score:2)
Re:d3 modelling techs could get screwed.. (Score:2)
Better yet, check out www.stereovision.net [stereovision.net] for more info.
3D video unlikely (Score:2)
It sounds unlikely to be a useful technique to apply to video; you'd have to have six video cameras recording the same scene from different angles. I'm not even sure that the state of the art begins to touch the problems of recording video in three dimensions, storing the data, and playing it back.
I wouldn't hold my breath waiting for Quake environments built from this technology either. They're building a 3D model of an object based on external photographs; doing the same thing with internal photographs is a very different ballgame.
More Info - Photogrammetry (Score:2)
Essentially, for consumer use, the camera is flipped horizontal with the subject in front of it. Everything proceeds according to Zagadka's description, pretty much.
Incidentally, there is an old copy of Byte magazine from the late 1970s describing how to extract the 3D information from multiple shots, with included BASIC code to calculate the 3D vertices from the 2D inputs. Pretty cool - crazy, though, that only NOW are we actually using this at the consumer level, even though an article in a well-known computer magazine has languished for nigh 20 years!
Re:is this really all that useful? (Score:2)
For video, see Vanguard (Score:2)
Rupert.
Re:*Bad thing* *capitalistic* *unforgiving* PUFF!! (Score:2)
First there's that post that gets moderated up to 6... now this one's at -5. What's going on?
--
Image Based Rendering (Score:2)
Dave
d3 modelling techs could get screwed.. (Score:2)
Re:3D video unlikely (Score:2)
more is possible, I believe. (Score:2)
Take a camera, and give it a bit of sonar-like ability to determine the distance between the camera and whatever scene you have it aimed at. The camera builds a wire mesh of the objects in front of it, then breaks the image data down into textures, which it then wraps the mesh with.
Sonar is of course out of the question, but I'm sure there's better technology out there. I mean, I just come up with 'em, I don't implement 'em.
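The "build a wire mesh, then wrap it with textures" step is straightforward to sketch once you have a grid of depth samples, however they were measured. A minimal version (the grid layout and winding order are my assumptions); texture coordinates would just be the (x, y) grid positions:

```python
def depth_grid_to_mesh(depths):
    """Turn a rectangular grid of depth samples into a triangle mesh.

    `depths[y][x]` is the measured depth at grid cell (x, y). Returns
    (vertices, triangles): vertices as (x, y, depth) tuples, triangles
    as index triples splitting each grid quad into two.
    """
    rows, cols = len(depths), len(depths[0])
    verts = [(x, y, depths[y][x]) for y in range(rows) for x in range(cols)]
    tris = []
    for y in range(rows - 1):
        for x in range(cols - 1):
            i = y * cols + x
            tris.append((i, i + 1, i + cols))             # upper-left triangle
            tris.append((i + 1, i + cols + 1, i + cols))  # lower-right triangle
    return verts, tris
```

Wrapping the camera image over this mesh is then just a matter of assigning each vertex its own pixel coordinates as a texture coordinate.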
Requirements (Score:2)
maps. The only problem would be that you would need to tone down the poly count. Just read the link above that I added and you'll see what we had to say.
I seriously doubt that the quake market could sustain a company's entire line of digital cameras. I would like to know a few things.
1. The cost? I don't want to have to mortgage my house just to pay for one.
2. Interface? I would like this to just plug into a standard serial or parallel port. Failing that, perhaps something like just taking the film and allowing for floppy-film based things.
3. Linux compatibility? I could always use a machine that actually worked with Linux and with Linux apps. I don't want to buy either an expensive commercial 3D app or have to upgrade my PC just to use this.
Re:Read the small print guys (Score:2)
Re:d3 modelling techs could get screwed.. (Score:2)
Check out www.stereovision.net for more info.
Re:Read the small print guys (Score:2)
Theoretically, the output of this camera would allow you to use the image in a rendering application to produce an "actor" in a setting. You can't do that with a disposable camera stereoscopic image without additional work, information, and calculation.
Is there gonna be any Linux support? (Score:2)
System Requirements: [minolta.com] Windows 95 OSR2 (Ver. 4.00.950b) or later, Windows 98, Windows NT 4.0.
I don't use windows much, and not at all at home. So this new "technology" isn't of much use to me.
Re:Read the small print guys (Score:2)
I worked for a company that made sensors and parts for many research and engineering corporations. They wanted to be able to put 3d models of their products on the CD version of their catalog. With hundreds of thousands of items to be modeled, however, they couldn't afford the cost of either having it professionally scanned or hiring a computer modeler. They would love a camera like this.
Why it won't work in Linux for a while yet (Score:2)
Not linux...
Why? Because MetaCreations does not develop Linux software yet. And the camera works with MetaCreations software.
So, let's get some of you people out there telling MetaCreations that you would buy their software if they released a Linux version.
BUY?
Yep. MetaCreations is not going to open source their software, because they make lots of money off of it. But they might just make a Linux version. MetaCreations' stuff is already very stable and usable, and a Linux port would probably inherit those features. A Linux version would be great...
In other related news, somewhere on the BeOS site it says that MetaCreations is porting some of their stuff (no specifics and this may just be rumour) to BeOS. Once that hurdle is jumped, a port to Linux shouldn't be too hard at all.
--
Talon Karrde
since 1997 (Score:2)
Hmm, sounds ok... (Score:2)
The camera could well be used from different sides (e.g. take a pic from the top and bottom, then the front, back and left sides), but since software exists that lets you do this already (including MetaCreations' own Canoma software, but it's not too good) it seems a bit gimmicky. Then again, I'm sure it would help novice 3D modellers get decent-looking objects if the detection mechanism was of a high enough resolution to capture a lot of detail.
Another example: it might well be excellent for capturing human faces, which are a tricky £$%£$^%$ to make by hand. So all in all, it could be useful for some people some of the time. Bit like most things.
Re:d3 modelling techs could get screwed.. (Score:3)
Re:3D extraction from video (Score:4)
Lea
Re:Is there gonna be any Linux support? (Score:4)
Who cares what operating system you use? You think
Get over yourself.
3D extraction from video (Score:5)
Canoma [metacreations.com] is a re-implementation of some work done at U.C. Berkeley in the mid-90s. The Berkeley group liked to do big things like buildings, and modelled the central part of the Berkeley campus. They got their aerial photographs using a camera on a kite; there's an architecture prof at Berkeley who's developed good techniques for doing this. Much cheaper than a helicopter.
Both Canoma and Metaflash are semi-automatic systems. The user has to manually identify corresponding points and edges between multiple images. This can be a lot of work. One more generation and somebody will have this fully automated.
Read the small print guys (Score:5)
The screenshots neatly show reconstruction of a simple piece of pottery. Jesus, but if that isn't the simplest 3d object then I must be smoking something.
You'll get better stereoscopic results taping two $14 disposable cameras together! (I've done it, it works, just get the focal distance right).
Another example of useless technology. And I cringe at all the thousands of useless vertices this solution will create in 3d models. No thanks!
Oh, and note the accuracy discrepancy of 1mm is from a photo of a ping pong ball. Like we all need pictures of perfectly round spheres.