Woaaaaaah (Score 4, Insightful)
What a nice, pointless ad!
You could also avoid doing it at all and use an EDOF system (such as the one shown in this demo). It's just not a software solution and has to be designed from the start with the lens and the camera: you voluntarily insert aberrations that make the system blurry "the same way" over some larger range, and this blur is then easily invertible by a simple Wiener deconvolution of the image.
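A minimal sketch of that inversion step, assuming the blur kernel (PSF) is known in advance; the function name, the toy ramp image, and the SNR parameter are all my own illustration, not from any actual EDOF product:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    """Invert a known blur with a Wiener filter in the frequency domain."""
    H = np.fft.fft2(psf, s=blurred.shape)       # transfer function of the blur
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + 1/SNR); the 1/SNR term keeps
    # frequencies where H is small from blowing up noise.
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(W * G))

# Tiny demo: blur a ramp image with a 3x3 box PSF (circular convolution
# via FFT), then invert the blur.
img = np.outer(np.arange(32.0), np.ones(32))
psf = np.zeros((32, 32))
psf[:3, :3] = 1.0 / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf, snr=1e6)
```

In the noise-free case a large SNR essentially divides out the blur; with real sensor noise you would lower `snr` to trade sharpness for noise amplification.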
(and through a turbulent atmosphere)
This is at the low end of the TOP500 in terms of compute power.
I mean in the image compression literature... You need to know where to look for defects, so it's good to have a standard set of images. Papers will usually show several images, and one of the usual ones is Lena.
Yes, but we are not using it for the colors. We are using it for the textures.
I can construct a 2D image that has proper DOF cues.
Yes, you can construct it, but your eye is still focusing on a plane, and the depth cues you are feeding it do not match this. If you "force" your vision onto something blurry, the device had better have some way to detect this and tell the rendering engine. Retinal tracking lets the rendering engine know what part of the scene is being observed (in the center of the FoV), which helps it find the actual depth and focusing parameters.
Yet, whatever tricks you use, the crystalline lens will always return to the same position to get the in-focus image, while you perceive a change in DoF. This is another mismatch some people will perceive, and there is no way to correct for it in stereo vision. LFDs might be slightly better at this, but holography is the ultimate solution here.
I don't think that a stereo-based device you can use, with discomfort, for maybe one hour before getting really nauseous will have good commercialization potential.
Each retina collects photons on a surface and with a single eye you get a 2D image*. Your brain combines the images from your eyes in very complex ways to create a 3D internal model, but as far as what needs to get shined into your eyes, it's just the 2D image constructed on your retina that matters.
That is incorrect. There are numerous 3D depth perception cues, among which are stereo vision, depth of field (things far from what you are looking at appear blurry) and prior knowledge of object sizes (knowing the average size of a car, you know that if you see it "small", it must be far away). With only one eye, the last two are perfectly valid. The very last one is very simple to reproduce, but depth of field is far from trivial to implement. For a VR headset such as the Oculus, you would need retinal tracking, a mapping from gaze to the depth of the observed object, and adaptation of the rendering of the whole scene to that depth of field (with, of course, very small latency), see http://3dvis.optics.arizona.edu/research/research.html. Conflicting cues in a system can cause serious discomfort to a large portion of the population.
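As a rough illustration of what such gaze-contingent rendering would compute (everything here is a toy assumption of mine, not any headset's actual pipeline): look up the depth under the tracked gaze point, then derive each pixel's blur from a thin-lens circle-of-confusion model.

```python
import numpy as np

def circle_of_confusion(depth, focus_depth, aperture=0.5, focal_len=0.05):
    """Thin-lens circle-of-confusion diameter per pixel depth
    (arbitrary units; aperture and focal length are toy parameters)."""
    return (aperture * focal_len * np.abs(depth - focus_depth)
            / (depth * np.maximum(focus_depth - focal_len, 1e-6)))

# Toy depth map plus a gaze point from (hypothetical) retinal tracking.
depth = np.linspace(1.0, 10.0, 100).reshape(10, 10)
gaze = (2, 3)                  # pixel the viewer is looking at
focus = depth[gaze]            # depth the eye is accommodated to
coc = circle_of_confusion(depth, focus)   # blur size per pixel
```

The pixel under the gaze gets zero blur, and the blur grows as the scene depth departs from the focus depth; a real renderer would then apply a spatially varying blur of that size, every frame, with very low latency.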
but the chances are it's not actually holographic.
You can be certain about it. There is no real-time holographic display as of today. For LFDs (Light Field Displays), NVidia had a prototype a few years back, and it is reasonable to think that Magic Leap is pursuing something similar. Yet, I don't think the technology is mature enough to generate the dense light fields needed for high-quality scene rendering.
Exactly. I am wondering if this qualifies as false advertising: using the name of an advanced technology that would enable the primary functions of the product but is not actually present.
Nope, you need the reference phase to still be coherent (temporally and spatially) with the light from the observed object, so interference is only possible between two parts of the same wave of light, separated in space (think two pinholes in a plane through which you collect the light) and/or in time (think delay line: let one part of the light you collected run a longer distance). The first is the famous Young's double-slit experiment; the second is the Michelson interferometer.
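For a concrete feel of the first case, here is the textbook small-angle fringe pattern of Young's double slit; the slit separation, wavelength, and screen distance are illustrative numbers I picked, not from the post:

```python
import numpy as np

# Young's double-slit fringes on a distant screen (small-angle limit):
# intensity I(x) = cos^2(pi * d * x / (lambda * L))
wavelength = 500e-9    # 500 nm, green light
d = 0.1e-3             # slit separation, 0.1 mm
L = 1.0                # slit-to-screen distance, 1 m

x = np.linspace(-0.02, 0.02, 2001)   # screen coordinate, meters
intensity = np.cos(np.pi * d * x / (wavelength * L)) ** 2

fringe_spacing = wavelength * L / d  # distance between bright fringes
```

With these numbers the bright fringes sit 5 mm apart; lose the mutual coherence between the two paths and the cos^2 term averages away, leaving a uniform screen.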
Also, for reference: the frequency of visible EM fields is on the order of hundreds of THz (hundreds of thousands of GHz).
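That figure comes straight from f = c/λ; a quick sketch, with band-edge wavelengths I picked for illustration:

```python
c = 299_792_458.0   # speed of light, m/s

def freq_thz(wavelength_nm):
    """EM frequency in THz for a wavelength given in nm."""
    return c / (wavelength_nm * 1e-9) / 1e12

red = freq_thz(750)      # deep red, ~400 THz
violet = freq_thz(400)   # violet, ~750 THz
```

Nothing electronic can sample an oscillation that fast directly, which is the whole point of the interference argument above.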
This telescope operates in the radio bands (sub-millimeter), not in the visible. That's why it is easy to do interferometry over a very long baseline. In the visible domain this is very tricky to realize over even a couple of hundred meters (as with the VLTI).
You can think of it as filling in, piece by piece, the Fourier transform of the image you want to observe. Every pair of telescopes gives you a measurement in the so-called UV plane (spatial frequencies). The farther apart the observation points (the telescopes) are, the smaller the details you can resolve. Except this is only valid if you can measure the amplitude and phase of the electromagnetic radiation (or find a way to reconstruct it somehow). This is easy in the radio bands. But the oscillation is just too fast at visible wavelengths, so we cannot record it and adjust offline; we have to interfere the waves right away...
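To put numbers on "farther apart means smaller details": the finest resolvable angle goes as θ ≈ λ/B. A quick sketch with values I chose to resemble an Earth-sized sub-millimeter baseline (not taken from the article):

```python
import math

# Each telescope pair samples spatial frequency u = B / wavelength in the
# UV plane; the angular resolution of the array is roughly wavelength / B.
wavelength = 1.3e-3    # 1.3 mm, a typical sub-mm observing band
baseline = 1.0e7       # 10,000 km, an Earth-scale baseline

theta_rad = wavelength / baseline
theta_uas = math.degrees(theta_rad) * 3600 * 1e6   # micro-arcseconds
```

That lands in the tens of micro-arcseconds; doing the same in the visible with a few-hundred-meter baseline gives far coarser UV coverage per pair, which is why those arrays stay short and fight for every fringe.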
It is easier to write an incorrect program than understand a correct one.