Oh, I knew they could extract (very limited) parallax information from the plenoptic image data, I just didn't know they had coded that into their software (they didn't have it the last time I checked, they were only doing refocusing).
You could... if your lens was about the size of a galaxy.
I stand corrected. Last time I'd checked out their software all it could do was refocus. Once they finally support simultaneous refocusing and wiggling (which is technically possible, by limiting the amount of each)... their cameras will still be just as useless.
No, it isn't. The only information you can get is what's carried by the light actually hitting the lens. That's effectively limited to parallax information between the edges of the lens (in reality, less than that, but let's pretend). In other words, as I wrote above, "unless the lens is wider than the distance between two eyes, you can't really use this to create realistic stereoscopic images at a macroscopic scale".
Having two lenses is not a requirement to capture stereoscopic images. It can be done with a single (big) lens, and two slightly different sensor locations. But you're limited by the distance between those two sensors, and a single large lens isn't necessarily cheaper or easier to use than two smaller ones.
What this system does is use the out-of-focus areas as a sort of "displaced" sensor - like moving the sensor within a small circle, still inside the projection cone of the lens - and therefore simulating two (or more) images captured at the edges of the lens.
But, unless the lens is wider than the distance between two eyes, you can't really use this to create realistic stereoscopic images at a macroscopic scale. The information is simply not there. Even if you can extract accurate depth information, that is not quite the same as 3D. A Z-buffer is not a 3D scene; it's not sufficient for functional stereoscopy.
Microscopy is a different matter. In fact, there are already several stereoscopic microscopes and endoscopes that use a single lens to capture two images (with offset sensors). Since the subject is very small, the parallax difference between the two images can be narrower than the width of the lens and still produce a good 3D effect. Scaling that up to macroscopic photography would require lenses wider than a human head.
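A back-of-the-envelope way to see the baseline problem (a sketch under a simple pinhole model; the focal length, aperture and pixel pitch here are illustrative numbers, not any particular camera's specs):

```python
def disparity_px(baseline_m, focal_mm, depth_m, pixel_um=5.0):
    """Horizontal disparity in pixels for a point at depth_m,
    under a pinhole model with the given stereo baseline."""
    focal_m = focal_mm / 1000.0
    pixel_m = pixel_um / 1e6
    return (focal_m * baseline_m / depth_m) / pixel_m

# Human eyes (~65 mm apart) vs. the widest "virtual" baseline a
# single-lens system can fake (the lens aperture, here ~30 mm):
eyes = disparity_px(0.065, 50, 3.0)
lens = disparity_px(0.030, 50, 3.0)
# The lens-limited parallax is less than half of what real
# stereoscopy at this viewing distance would produce.
```

The ratio between the two disparities is just the ratio of the baselines, which is why shrinking the subject (microscopy) works but scaling up to head-sized baselines would need head-sized lenses.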
No. Lytro's software allows refocusing in post (at a huge cost in terms of resolution). It does not try to extract any parallax information from the image.
> Being a Kernel Developer is a lot like being a Navy Seal [...]
Being a Kernel Developer is a lot like making love to a beautiful woman. First you PEEK, then you POKE. You think you're doing great, but suddenly she tells you that you're too BASIC, and gives you a C. Treating her like an object can be a plus (or two), but if you get linked to her publicly you might have to commit. And if you fail an interrupt and some of your bugs make it into the kernel, you'll end up supporting that mistake for the rest of your life.
No, they probably aren't.
Most people have 60 fps display equipment in their homes. Many even have 100 or 120 fps display equipment.
Not sure if you're trolling or just very ignorant.
Any good 35 mm film camera on the market can do up to 120 FPS, usually 240 (and these aren't even specialized slow-motion cameras). Slow motion is far easier and cheaper to do with film than with digital sensors. All you need to do is speed up the camera motor and compensate for the shorter exposure by using higher-sensitivity film.
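As a rough illustration of the exposure compensation (a sketch, assuming a fixed shutter angle so that exposure time scales with 1/fps; the ISO figures are just examples):

```python
import math

def extra_stops(base_fps, target_fps):
    """Stops of extra film sensitivity needed when overcranking,
    assuming the shutter angle stays fixed (exposure ~ 1/fps)."""
    return math.log2(target_fps / base_fps)

# Going from standard 24 fps to 120 fps costs about 2.3 stops,
# e.g. ISO 100 stock would need to be swapped for roughly ISO 500.
print(round(extra_stops(24, 120), 2))  # 2.32
```

Each doubling of the frame rate costs exactly one stop, which is why a faster film stock (or opening the iris) is all the compensation needed.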
Because some idiots think the stuttering look of lower FPS gives it a more "film-like" look, which looks more intellectual.
It's even worse when the original animator rendered it at 60 fps and someone decides to change it later, because then they make the 30 fps version by deinterlacing, which means they don't just lose fluidity, they also lose vertical resolution, and you end up with something that stutters and looks pixellated or blurry.
If you remove the polarizing filters both eyes will see both images and you lose the 3D effect (you just get ghosting). The polarizing filters (on the projectors and glasses) are what makes sure each eye only sees images from the correct projector, they're not related to the projection speed.
Alternating frames requires active shutter glasses, which are more expensive. And, indeed, that's how active shutter 3D works, but, until now, one eye was seeing the film 1/48th of a second behind the other, since the two cameras were typically in sync to make post-production easier. With 48 fps cameras, active shutter systems will finally be able to feed each eye 24 "correct" frames per second (i.e., one eye will see frame 1L, then the other eye gets frame 2R, then 3L, 4R, etc.). Of course, if they just speed up the current system, they'll be doing 96 updates per second and one eye will still be slightly behind the other (but now just 1/96th of a second), but my point is that 48 fps cameras have an advantage for active shutter stereo 3D even if the final movie is played at 24 fps.
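To make the timing argument concrete (a toy model of the slot accounting, not any actual projector's pipeline):

```python
def max_display_lag(synced_pairs, slots=8):
    """Worst-case gap between capture time and display time for
    alternating-eye (active shutter) playback at 48 slots/s.
    synced_pairs=True:  24 fps stereo rig, L and R captured together.
    synced_pairs=False: 48 fps camera, each slot's frame is fresh."""
    slot = 1.0 / 48
    lags = []
    for i in range(slots):
        display_t = i * slot
        capture_t = (i // 2) * 2 * slot if synced_pairs else display_t
        lags.append(display_t - capture_t)
    return max(lags)

print(max_display_lag(True))   # ~1/48 s: one eye is always a slot stale
print(max_display_lag(False))  # 0.0: every displayed frame is current
```

With a synced 24 fps rig, whichever eye is shown second is always looking 1/48 s into the past relative to the other; a true 48 fps camera capturing the eyes alternately eliminates that offset entirely.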
Here's the problem with Teller's claim:
"As a direct and proximate result of such unfair competition, Plaintiff [Teller] has suffered, and will continue to suffer, monetary loss and irreparable injury to his business, reputation, and goodwill."
I'll grant him the last one; his goodwill definitely comes out of this injured. But how exactly has Teller suffered "monetary loss and irreparable injury to his business [and] reputation"? Teller wasn't selling a kit with the trick, so he's not going to lose any "potential sales". No one seriously believes that people planning to go see or hire Penn & Teller will change their minds and hire Gerard Bakardy instead, and Bakardy made it perfectly clear that his trick was inspired by Teller's, so there's no damage done to Teller's "reputation", either (i.e., there's no suggestion that Teller may have copied Bakardy).
In other words, this smells like a pure "copyright troll", trying to deny someone else a chance to do something similar even though that person is in no way a competitor or a threat.
Can you please show us, on this doll, where the hardware engineer touched you?
Apple's Final Cut Studio costs $1.2k (and includes not only video editing but also DVD / BD authoring, sound mixing, compositing and multi-format compression).
Adobe's CS Production Premium costs $1.4k (and includes all the above plus Photoshop, Illustrator, and a few other well-established applications).
Avid's Media Composer costs $2.3k (that's about $2.2k for the Avid logo and $100 for the software - still slightly overpriced).
All three packages above are production-proven, well-established in the professional market, supported by most relevant equipment manufacturers, and have hundreds of high quality plug-ins available from 3rd parties. And you say you're trying to sell (unknown) "video editing software" for $10k? Good luck with that.
Even assuming you're including some high-end compositing software (not that you'd need to; After Effects has come a long way), you can get Production Premium + Nuke (or Fusion) for $6.3k, and that would give you access to both AFX and OFX plug-ins. You could even throw in 3DS Max or Maya ($3.5k) and still be under $10k.
Did this article somehow get lost in the depths of the Slashdot queue for 20 years?