Yes, why don't many more devices use the Pixel Qi display? You know, the one that's a normal color LCD when backlit, or a monochrome very-low-power LCD when front-lit (ie, by ambient lighting). Seems like it would be ideal for phones and smart watches.
With these somewhat asymmetric FOVs, a single number doesn't provide enough information to understand what you're getting.
What's needed now is the "inside angle" and the "outside angle", where:
- inside angle = how much either eye can see toward the other eye
- outside angle = how much either eye can see away from the other eye
(in either case, measure the angle from "straight ahead" over to the cut-off point where you can no longer see anything)
In a symmetric system, both of these numbers are the same (or pretty close, anyway); you'd just add the two to get regular FOV.
You don't want the inside angle too small, or else you'll feel like you've got a huge nose (or your hand between your eyes).
Making the inside angle large is complicated by the fact that the displays will run into each other.
Making the outside angle large is easy by comparison.
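The bookkeeping above can be sketched in a few lines. The specific angle values below are made-up illustrative numbers, not measurements of any real headset; the functions are hypothetical helpers.

```python
# Sketch of the inside/outside-angle bookkeeping described above.
# All angles are in degrees, measured from "straight ahead" for each eye;
# the numbers are illustrative guesses, not specs of any device.

def monocular_fov(inside, outside):
    """Total horizontal FOV of a single eye."""
    return inside + outside

def total_fov(inside, outside):
    """Combined horizontal FOV of both eyes (assuming mirror symmetry):
    the outside edges of the left and right eyes set the overall extent."""
    return outside + outside

inside, outside = 45, 55                # hypothetical asymmetric design
print(monocular_fov(inside, outside))   # per-eye FOV: 100
print(total_fov(inside, outside))       # combined FOV: 110

# In a symmetric system (inside == outside), per-eye and combined FOV agree,
# which is why a single "FOV" number suffices there:
print(monocular_fov(50, 50), total_fov(50, 50))  # 100 100
```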
You can learn lots of common sense from the internet:
- Whatever you see there, learn *not* to do!
Since we mostly post the epic failures of others, this technique will increase your survival skills dramatically.
Please use only the oven specifically designated for this purpose. Thank you.
I'm sure you could tell the difference between high-bitrate content made to make 4K look good vs. ordinary compressed HD content.
However, if you were to watch the same content with appropriately-high bitrates for 4K and HD, you probably wouldn't see the difference.
Why would they try to make both sets look as good as possible if the point is to sell the more expensive one?
I think he meant 6K bits, not bytes, since 16 lines x 64 columns x 6 pixels/character = 6K binary pixels.
You didn't catch the part where you can attach a "screen" directly to the goggles to achieve VR.
Let's see if we can clear up a few things. Imagine looking at your monitor.
The pixel in the upper left corner is emitting a hemisphere of light. Or rather, it's emitting a bunch of rays of light that spread out in a hemisphere. Under ideal circumstances, it's the same color and intensity for any of those rays, though we know from experience that it tapers off and sometimes changes color as you see it from greater angles. But for most of the "straight on" angles, they're about the same.
A subset of that hemisphere of rays enters one of your pupils. If you consider the shape of that subset, it forms a cone, with its apex at that pixel on the monitor and its base formed by the circle of your pupil. All those rays of light will (assuming your eye is focused on the monitor) focus to a point on the associated retina.
The individual rays in that cone are close to, but not quite, parallel to each other. The farther away your monitor is, the more parallel they are, and the closer the monitor is to you, the more the rays are spreading out. Each eye's lens takes care of focusing the parallel or spreading out rays back to a point on its retina. Note that if the rays are spreading out too much (ie, the monitor is too close to your face), you cannot refocus the rays back to a point. You'd need additional optics to help achieve this. (This is why Oculus needs a big fat lens in front of each screen.)
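A rough way to quantify "too close to refocus" is in diopters (vergence = 1/distance). The ~25 cm near point (~4 diopters of accommodation) below is a typical textbook ballpark, not a spec of any particular headset:

```python
# Rough sketch of why a too-close screen can't be focused without extra optics.
# Light from a point at distance d (in meters) arrives with vergence 1/d
# diopters; the eye can only "undo" so much of that spreading.

def vergence_diopters(distance_m):
    return 1.0 / distance_m

NEAR_POINT_M = 0.25  # assumed closest distance a typical eye can focus

for d in (0.60, 0.25, 0.05):  # desk monitor, near point, HMD screen
    v = vergence_diopters(d)
    ok = d >= NEAR_POINT_M
    print(f"{d:0.2f} m -> {v:4.1f} D  focusable without help: {ok}")

# A screen ~5 cm away demands ~20 D -- far beyond normal accommodation,
# which is why a big lens has to sit between the screen and the eye.
```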
For the purposes of this explanation, we'll simplify a bit and consider a bundle of rays that are parallel. Given this simplification, the only distinction between the pixels on the monitor (aside from their color and intensity) is that they arrive at your pupil from different directions.
In fact, you can replace the monitor with physical objects that are reflecting light, and the same principles apply. Going a step further, you can see that it doesn't really matter how those bundles of rays are generated; the only thing that matters is how they enter the pupil. The direction (ie, angle) that they enter from determines the location, and the color and intensity determine what you see there.
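The "direction determines location" point can be made concrete with a toy eye model. The 17 mm focal length is a common reduced-eye textbook figure, used here purely as an illustrative assumption:

```python
import math

# Toy model: for a relaxed eye, a parallel ray bundle arriving at angle
# theta lands roughly f * tan(theta) from the fovea. F_EYE_MM ~= 17 mm is
# a standard reduced-eye approximation, not a measured value.

F_EYE_MM = 17.0

def retinal_offset_mm(theta_deg):
    return F_EYE_MM * math.tan(math.radians(theta_deg))

for theta in (0, 1, 5, 10):
    print(f"{theta:2d} deg -> {retinal_offset_mm(theta):.2f} mm from fovea")
```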
So let's take away the monitor, and instead imagine other ways that you can generate different parallel ray bundles directed at your pupil. The original "virtual retinal display" from the University of Washington was based on the following principle:
1) Generate a single collimated beam of light rays. Collimated means that all the rays within the beam are parallel (or close to it). Beam, in this case, does not mean a tiny dot, but rather a beam with some girth to it (on the order of a centimeter).
2) Use one or more tiltable mirrors to shine this beam at different angles at your pupil. By redirecting the beam in a raster-scan fashion, you can trace out a complete image.
3) For each different direction scanned (ie, each pixel), you also need to change the color and intensity of the beam appropriately (to correspond to the pixel you see from that direction).
Note that the beam has to be spread out significantly (rather than coming from a single point), so that when it is redirected from one extreme to the other it will still hit your pupil. Light that doesn't enter your pupil is wasted.
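Steps 1-3 can be sketched as a raster-scan loop. Everything here is a hypothetical stand-in: the image, the angle ranges, and the set_mirror_angles/set_beam functions are not a real device API, just an illustration of the control flow:

```python
# Illustrative sketch of the raster-scan principle from steps 1-3 above.

def render_frame(image, set_mirror_angles, set_beam):
    """image[row][col] -> (r, g, b, intensity); one full raster scan."""
    rows, cols = len(image), len(image[0])
    for row in range(rows):
        for col in range(cols):
            # Map the pixel grid onto the scan angles (degrees, made-up range).
            h_angle = -20 + 40 * col / (cols - 1)  # horizontal tilt
            v_angle = -15 + 30 * row / (rows - 1)  # vertical tilt
            set_mirror_angles(h_angle, v_angle)    # step 2: redirect the beam
            set_beam(*image[row][col])             # step 3: color/intensity

# Example with stub "hardware" functions that just record the commands:
commands = []
render_frame(
    [[(255, 0, 0, 1.0)] * 3] * 2,                 # tiny 2x3 red "image"
    lambda h, v: commands.append(("mirror", h, v)),
    lambda r, g, b, i: commands.append(("beam", r, g, b, i)),
)
print(len(commands))  # 12: one mirror move + one beam update per pixel
```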
This is just one method. The subject of today's article appears to use a DMD array instead of one or two scanning mirrors. Assuming that the DMD mirrors can scan in a 2D fashion, it's really the exact same principle.
Note that there are many other ways to achieve the same ends. If you have a point light source, you could use a parabolic mirror to generate a large collimated beam. Provide some way to scan that beam, and voila. You might also note that spherical mirrors approximate a parabola near the axis, but unlike a parabola they have no single preferred axis, so they work for arbitrary directions. Provide a way to scan the light source, and voila.
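The parabolic-mirror claim is easy to verify numerically. This is a minimal geometric check (not any device's optics): for the parabola y = x²/(4f), whose focus is at (0, f), every ray from the focus should reflect parallel to the axis:

```python
import math

# Numerical check that a parabolic mirror collimates light from a point
# source at its focus: reflect rays from (0, f) off y = x^2 / (4f).

def reflect_from_focus(x, f=1.0):
    """Direction of a ray from the focus (0, f) after reflecting at (x, y)."""
    y = x * x / (4 * f)
    # Unit incident direction from the focus to the surface point.
    dx, dy = x - 0.0, y - f
    n = math.hypot(dx, dy)
    dx, dy = dx / n, dy / n
    # Unit normal to the parabola at (x, y); the tangent slope is x / (2f).
    nx, ny = -x / (2 * f), 1.0
    m = math.hypot(nx, ny)
    nx, ny = nx / m, ny / m
    # Mirror reflection: r = d - 2 (d . n) n
    dot = dx * nx + dy * ny
    return dx - 2 * dot * nx, dy - 2 * dot * ny

for x in (0.5, 1.0, 2.0, 3.0):
    rx, ry = reflect_from_focus(x)
    print(f"x = {x}: reflected direction ({rx:+.6f}, {ry:+.6f})")
# Every reflected ray comes out as ~(0, 1): parallel to the axis, collimated.
```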
As you can see, the trick is mainly in the scanning, since all the rest is "easy".
While the video offers lots of BS, the possibility of retinal burn is probably zero:
1) They use an LED, not a laser diode.
2) The light from the LED is spread over a DMD (digital micro-mirror device); it is not a line/dot.
I'd imagine the worst that you'd see if something locked up is a solid color virtual screen.
Surely you don't want any *artificial* light that those other HMDs offer.
(My BS meter was pegging out while watching that video.)
Just think of the revenue potential! This could be hotter than tobacco!
Notice how the latest Xbox 360 has only 1 big chip in it. That's because it uses integrated graphics.
I thought "A Clockwork Orange" was translated amazingly well into a movie.
Wet from water is a different matter than wet from an acidic liquid like coffee or soda.
The Oculus has a ~100% overlap factor, meaning that the same arc of FOV is presented to both eyes. Put another way, the left and right edges of the two eye views coincide.
This device has less than 100% overlap. I'm guessing it's around 60% from looking at the monitor images. When the overlap decreases too much, it gives you the impression of having a very large nose that blocks each eye from seeing part of the other eye view. This can be annoying.
The overlap factor for real people varies, of course, due to facial structure. But you don't really want a device that has a much smaller overlap than your actual body has.
It is extremely difficult to maintain a large overlap factor as FOV is increased. The right side of the left display will encroach into the space needed for the left side of the right display. Avoiding this requires making the displays smaller and closer to the eyes, which increases the demands on the optical system to refocus the image. In addition, there is less room for eyeglasses, and other usability parameters may also suffer (although weight can decrease). At some point, you can no longer use lens-based optics and have to take a different approach altogether.
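This connects back to the inside/outside angles discussed elsewhere in the thread. Assuming mirror-symmetric eyes with inside angle i and outside angle o (and i ≤ o), the overlap region spans 2i degrees while each eye alone spans i + o; the numbers below are illustrative guesses, not measurements:

```python
# Sketch of the overlap bookkeeping under the symmetric-eye assumption above.

def overlap_factor(inside, outside):
    """Fraction of one eye's FOV that the other eye also sees."""
    return 2 * inside / (inside + outside)

print(overlap_factor(50, 50))  # 1.0 -> ~100% overlap (Oculus-like)
print(overlap_factor(30, 70))  # 0.6 -> ~60% overlap (the guess above)
```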
Note also that increasing the FOV tends to make the rendering a more difficult job as well. Fortunately, this isn't as big an issue these days.
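A back-of-the-envelope calculation shows why: to hold angular resolution fixed, pixel count grows linearly with FOV per axis, so roughly quadratically in total. The 1 arcminute/pixel figure is a common "retinal resolution" ballpark, used here as an assumption:

```python
# Rough estimate of horizontal pixel count needed at fixed angular resolution.

ARCMIN_PER_PIXEL = 1.0  # assumed target angular resolution

def pixels_needed(fov_deg):
    return int(fov_deg * 60 / ARCMIN_PER_PIXEL)

for fov in (40, 100, 180):
    print(f"{fov:3d} deg FOV -> {pixels_needed(fov)} pixels across")
# 40 -> 2400, 100 -> 6000, 180 -> 10800; total pixels scale with the square.
```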