(and through a turbulent atmosphere)
This is the low-end of the TOP-500 in terms of compute power.
I mean in the image compression literature... You need to know where to look for defects, so it's good to have a standard set of images. Papers will usually include several images, one of the usual ones being Lena.
Yes, but we are not using it for the colors. We are using it for the textures.
I can construct a 2D image that has proper DOF cues.
Yes, you can construct it, but your eye is still focusing on a plane, and the depth cues you are feeding it do not match this. If you "force" your vision on something blurry, the device had better have some way to detect this and tell the rendering engine. Retinal tracking lets the rendering engine know what part of the scene is being observed (at the center of the FoV), which helps it find the actual depth and focusing parameters.
Yet, whatever tricks you use, the crystalline lens will always come back to the same position to get the in-focus image while you perceive a change in DoF. This is another mismatch some people will notice, and there is no way to correct for it in stereo vision. LFDs might be slightly better at this, but holography is the ultimate solution here.
I don't think that a stereo-based device you can use, with discomfort, for maybe one hour before getting really nauseous will have good commercialization potential.
Each retina collects photons on a surface and with a single eye you get a 2D image*. Your brain combines the images from your eyes in very complex ways to create a 3D internal model, but as far as what needs to get shined into your eyes, it's just the 2D image constructed on your retina that matters.
That is incorrect. There are numerous 3D depth-perception cues, among which are stereo vision, depth of field (things far from what you are looking at appear blurry) and prior knowledge of object sizes (knowing the average size of a car, you know that if you see it "small", it must be far away). With only one eye, the last two remain perfectly valid. The very last one is simple to reproduce, but depth of field is far from trivial to implement. For a VR headset such as the Oculus, you would need retinal tracking, a mapping to the depth of the observed object, and a way to adapt the rendering of the whole scene to that depth of field (with, of course, very low latency); see http://3dvis.optics.arizona.edu/research/research.html. Having conflicting cues in a system can cause serious discomfort to a large portion of the population.
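To give an idea of what "adapt the rendering to this depth" involves, here is a rough sketch using the standard thin-lens circle-of-confusion formula. The eye parameters, function name, and gaze distances are my own illustrative assumptions, not anything from an actual headset:

    # Sketch only: blur-spot size from the thin-lens circle-of-confusion
    # formula. Eye parameters (~17mm focal length, ~4mm pupil) and all
    # names here are illustrative assumptions.

    def circle_of_confusion(obj_dist, focus_dist, focal_len=0.017, aperture=0.004):
        """Blur-spot diameter (m) for an object at obj_dist (m) when the
        eye, modelled as a thin lens, accommodates to focus_dist (m)."""
        return aperture * focal_len * abs(obj_dist - focus_dist) / (
            obj_dist * (focus_dist - focal_len))

    # Eye tracking says the user is looking 2m away; an object at 0.5m
    # should then be drawn with roughly this blur diameter (~0.1mm):
    print(circle_of_confusion(obj_dist=0.5, focus_dist=2.0))

The renderer would recompute this per frame as the tracked gaze depth changes, which is where the latency constraint comes from.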
but the chances are it's not actually holographic.
You can be certain about it. There is no real-time holographic display as of today. For LFDs (Light Field Displays), NVidia had a prototype a few years back, and it is reasonable to think that Magic Leap is pursuing something similar. Yet I don't think the technology is mature enough to generate the dense light fields needed for high-quality scene rendering.
Exactly. I am wondering if this qualifies as false advertising: using the name of an advanced technology that would enable primary functions of the product but is not actually present.
Nope, you need the reference phase to still be coherent with the observed object (temporally and spatially), so interference is only possible between two parts of the same wave (of light), separated in space (think of two pinholes in a plane through which you collect the light) and/or in time (think of a delay line: let one part of the light you collected run a longer distance). The first is the famous Young's double-slit experiment, the second is the Michelson interferometer.
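As an illustration of the double-slit case, here is a minimal numeric sketch of the fringe pattern; the slit separation and screen distance are made-up values, and the single-slit envelope is ignored:

    import numpy as np

    lam = 656e-9   # wavelength (m)
    d = 50e-6      # slit separation (m)
    L = 1.0        # slits-to-screen distance (m)

    x = np.linspace(-0.05, 0.05, 1001)                  # screen position (m)
    intensity = np.cos(np.pi * d * x / (lam * L)) ** 2  # I ~ cos^2 fringes

    print("fringe spacing:", lam * L / d, "m")          # ~13mm here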
Also, for reference: the frequency of visible light is on the order of 500THz (500,000GHz).
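To get a feel for those numbers, a quick back-of-the-envelope check (just f = c/lambda, nothing device-specific):

    c = 3.0e8                                     # speed of light (m/s)
    for lam in (400e-9, 656e-9, 700e-9):          # visible wavelengths (m)
        print(lam, "->", c / lam / 1e12, "THz")   # ~750, ~457, ~430 THz

No electronics can sample an oscillation that fast directly, which is why the waves must be made to interfere physically.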
This telescope operates in the radio bands (sub-millimeter), not in the visible. That is why it is easy to do interferometry over very long baselines. In the visible domain this is very tricky to realize over even a few hundred meters (as with the VLTI).
You can think of it as completing, piece by piece, the Fourier transform of the image you want to observe. Every pair of telescopes gives you a measurement in the so-called UV plane (spatial frequencies). The farther apart the observation points (the telescopes) are, the smaller the details you can resolve. Except this is only valid if you can measure the amplitude and phase of the electromagnetic radiation (or find a way to reconstruct them somehow). This is easy in the radio bands. But the oscillation is just too fast at visible wavelengths, so we cannot record the signal and combine it offline; we have to interfere the waves right away...
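If you want to play with that UV-plane picture, here is a toy sketch assuming the idealized case where both amplitude and phase are recorded (the easy radio case); the image size and baseline cutoff are arbitrary:

    import numpy as np

    sky = np.zeros((64, 64))
    sky[30:34, 30:34] = 1.0                  # a small "source" on the sky

    vis = np.fft.fftshift(np.fft.fft2(sky))  # the full visibility (UV) plane

    # Pretend our baselines only sample low spatial frequencies
    # (short baselines -> coarse detail only):
    u, v = np.meshgrid(np.arange(64) - 32, np.arange(64) - 32)
    mask = (u**2 + v**2) < 10**2

    dirty = np.fft.ifft2(np.fft.ifftshift(vis * mask)).real
    print("peak of the blurred reconstruction:", dirty.max())

Enlarging the mask radius (i.e., adding longer baselines) sharpens the reconstruction, which is the whole point of spreading the telescopes out.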
They are more art than science, providing an illusion of reality.
Nope, they are coded with the relation color = abundance of an atomic component. Color is a stimulus; it does not exist outside of our brains. What is real is the wavelength, and the fact that, for instance, the transition of an electron from the 3rd to the 2nd energy level of the hydrogen atom emits a photon at 656nm, which we call red.
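You can check that 656nm figure yourself with the Rydberg formula (standard physics, nothing assumed beyond the constant):

    R = 1.097e7                          # Rydberg constant (1/m)
    lam = 1 / (R * (1/2**2 - 1/3**2))    # 1/lambda = R (1/n_f^2 - 1/n_i^2)
    print(lam)                           # ~6.56e-7 m, i.e. 656nm (H-alpha)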
One disadvantage of the FITS format is that raw images typically need to be manipulated to show anything.
Nothing to do with the FITS format. That is the same type of information all RAW formats hold: unprocessed data, as close as possible to the signal coming off the sensor after quantization, ideally with no processing, offset, or other adjustments applied.
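For the curious, here is roughly the kind of minimal manipulation involved, sketched with astropy and numpy; the file name and the percentile choices are hypothetical:

    import numpy as np
    from astropy.io import fits

    data = fits.getdata("observation.fits").astype(float)  # hypothetical file

    # Linear percentile stretch: clip the outliers, scale to 0..255 for display.
    lo, hi = np.percentile(data, [1.0, 99.5])
    display = np.clip((data - lo) / (hi - lo), 0.0, 1.0) * 255

Without some stretch like this, an astronomical frame usually looks almost entirely black, since most of the dynamic range sits in a few bright pixels.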
It made for great imagery, but wasn’t a true representation of how Jupiter looks.
Our vision is also subjective; it constantly adapts to lighting and ambient color conditions. There is no such thing as a true image representation. Especially in the case mentioned (a magazine), where it is desirable to have an image that pops rather than a blob of washed-out colors.
So what's the news here?