Comment Re:Sensory deprivation tanks (Score 1) 332

I remember reading about this in Feynman's autobiography. IIRC he wanted to experience some hallucinations without subjecting his brain to any chemicals. I've always wanted to try it, but have never had access to a sensory deprivation tank. Fortunately there were plenty of chemicals.

You remember correctly. In "Surely You're Joking, Mr. Feynman!" [fsu.edu], on page 128, in the chapter titled "Altered States," he recounts his experiments with sensory deprivation. There was some chemical usage, too:

I must have gone about a dozen times, each time spending about two and a half hours in the tank. The first time I didn't get any hallucinations, but after I had been in the tank, the Lillys introduced me to a man billed as a medical doctor, who told me about a drug called ketamine, which was used as an anesthetic. I've always been interested in questions related to what happens when you go to sleep, or what happens when you get conked out, so they showed me the papers that came with the medicine and gave me one tenth of the normal dose.

(It is unclear to me whether that was a one-time thing, or whether he used the ketamine for all his subsequent visits.)

I *highly* recommend reading the entire work!

Comment Supplementary information (Score 3, Interesting) 60

What they have demonstrated is how a graphene structure can be made into a tunable oscillator by constructing a rather crude but working FM 'radio-transmitter' using one.

You are correct. And "crude" is an apt choice of wording... The supplementary information (scroll to the bottom) links to a PDF with data on the setup, testing, and characterization, as well as a .wav file (confusingly labeled "movie"). It appears to be a sample of the transmitted audio: "Gangnam Style!"

The sound quality of this sample is more on the order of a noisy AM radio broadcast, but given the technology being used, it is quite impressive nonetheless.

FWIW, there is a (somewhat) better write-up at redorbit.

And, yes, the 100MHz in TFS refers to the carrier frequency, which is but one of several that they tested. But it also happens to be in the FM radio band, hence the (attention-grabbing) title.

Comment Daily Canary Counts? (Score 1) 93

Section 215 includes the lovely clause that you are not allowed to mention that you have received one. The fact that Apple is saying they haven't is interesting, because if they stop saying so there is a very clear inference that can be drawn. Think of it as a canary - when you see that line dropped in subsequent reports you can assume Apple has received one, even though they won't be able to say so.

The canary approach, yes. I've heard of libraries doing something along these lines, too. I was wondering: "Can this be taken one step further?" From TFA:

So Apple's report shows that although it received 1,000-2,000 requests for user data so far in 2013, the number that it complied with is listed as 0-1,000.

What if they issued such a report every day? On the date(s) that the reported range changes, one can gain some finer granularity as to just how many were received. If they report "0-999" up until the day before yesterday, and then (yesterday) report "1000-2000", then there's a pretty good chance that the actual number is a lot closer to 1000 than 2000. Similarly, if they report "0-1000" for the first 90 days, and then "1000-2000" on days 91-180, etc., then one could see they were likely to receive on the order of 4000 by year's end; a last-day-of-the-year reported count is very likely close to 4000, whether it is reported as "3000-4000" or "4000-5000".

Yes, that assumes a linear distribution, i.e. that each day marks the receipt of the same number of requests. There are certainly going to be days with more requests than others. Still, as a first approximation, it does seem to me to provide additional information.
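To make the back-of-the-envelope math concrete, here's a rough sketch of that extrapolation in Python (hypothetical code of my own; the function name and the 1000-wide reporting band are my assumptions, not anything Apple publishes):

from datetime import date

def estimate_yearly_requests(range_changes, band=1000):
    # Rough linear extrapolation from the dates on which a daily
    # transparency report's 0-999 / 1000-1999 / ... band ticks upward.
    # Assumes requests arrive at a roughly constant daily rate.
    if not range_changes:
        return 0
    first = date(range_changes[0].year, 1, 1)
    latest = range_changes[-1]
    days_elapsed = (latest - first).days + 1
    # Each band change means roughly another `band` requests came in.
    received_so_far = band * len(range_changes)
    daily_rate = received_so_far / days_elapsed
    return round(daily_rate * 365)

# e.g. if the band ticked up around day 90 and again around day 180:
print(estimate_yearly_requests([date(2013, 3, 31), date(2013, 6, 29)]))  # ~4000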

Comment Further reading: A Tour of the Worm (Score 1) 51

Thanks for posting that synopsis of what happened. I'd not seen it before!

For further reading, I highly recommend: A Tour of the Worm by Donn Seeley, Department of Computer Science, University of Utah. The Chronology section reads like something out of a crime thriller and ably recounts what was observed, when, where, and the steps taken to identify, isolate, and repair affected systems. From the introduction:

November 3, 1988 is already coming to be known as Black Thursday. System administrators around the country came to work on that day and discovered that their networks of computers were laboring under a huge load. If they were able to log in and generate a system status listing, they saw what appeared to be dozens or hundreds of "shell" (command interpreter) processes. If they tried to kill the processes, they found that new processes appeared faster than they could kill them. Rebooting the computer seemed to have no effect--within minutes after starting up again, the machine was overloaded by these mysterious processes.

To put this in context: Windows 2.1 was released on May 27, 1988; current PCs ran on 80386 processors (originally released in 1985) as the 80486 was not released until 1989 and the first (stable) systems started appearing in 1990. IIRC, mainstream desktop PCs ran at 20-25MHz and had 1-2MB of RAM.

I was working at Pr1me at the time and witnessed some of the upheaval first-hand. Fortunately for us, our systems were not infected, but we were impacted when our systems were initially disconnected from the net as a precaution. Even once it was learned that our systems were safe from infection, things were still slow as the net recovered from the tremendous load the infected systems placed on it.

Comment Flowers for Algernon (Score 2) 251

Slightly off topic, maybe, but I was immediately reminded of the book: Flowers for Algernon.

It was required reading in one of my classes back in high school. I found the story to be quite thought-provoking; it made me realize how ephemeral intelligence could be. It was humbling to realize how much one accident could dramatically change my life. Yet I cannot live in constant fear of its happening, but instead just try to do the best I can with what I have this day. To try and help others. To hope that, in the end, the world might be a little bit better for my having been a part of it.

Comment Re:Corrective lenses adaptation? (Score 1) 55

Thanks for the reply! No cataracts (yet).

As a consumer, I generally avoid release 1.0 of anything. I realize you mentioned CrystaLens and not Lasik (spelling?) laser surgery. A relative who had Lasik done reported nighttime halos from oncoming car headlamps. I'll let others get the bugs out of the process and find out what long[er]-term consequences may arise. My eyes are well-corrected with conventional glasses, so I can afford to wait.

Had not heard of this, but will keep it in mind for when the need arises. Your experience is helpful and I appreciate your passing it along!

Comment Re:Corrective lenses adaptation? (Score 1) 55

I don't think that kind of correction can be done in software. Correction for eyes pointing at the wrong place, yes; focus, no.

Yes, I can see that now. I was kind of hoping that *I* was misunderstanding something and that it was, indeed, possible.

Anyhow, the Oculus comes with a couple of lenses for people with different eyesight, and I'd imagine the consumer model will as well.

I didn't know that! Thanks!

I didn't think lens quality was the problem with the Oculus, though. The problem with the Rift right now, rather than the blue/red shift from the lens, is that the resolution is rather small and stretched over a large area (that the FOV is large is a plus, though), so you see the pixels (it's like playing at 320x200 all over again).

We HAVE come a long way from the old CGA graphics! But it seems like it's still a long way until we have totally immersive displays. Time will tell. Thanks for the reply!

Comment Re:The point of Rift is COTS (Score 1) 55

The idea behind the Rift is to produce an HMD that does NOT cost $1,300 to build. It does this by using cheap off-the-shelf parts. [brevity snip]

Thanks! Early adopters are willing to pay more for something, but it helps to get the price down as soon as possible so as to build sales volume. They've put their money where they get the best bang for the buck, and lenses are not it.

The bad news is that your problem can't be fixed in software. You're near-sighted + astigmatism, meaning that your eye fails to focus on the picture (and can't focus on a single point at all, actually). Software fixes are for distortion, meaning that the eye is capable of focusing on a pixel, but it gets the wrong pixel in that position. The type of eyesight problem that *could* be fixed in software is an eye-mobility problem, where one of the eyes isn't able to point in the correct direction and thus gets a "shifted" view, giving a doubled picture. That kind of problem is fixed with a "prismatic" type of lens, and could be fixed by shifting the image on the Rift in the opposite direction.

Failure to focus correctly... got it, thanks!
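As an aside, if I'm reading the prismatic part right, that "shift the image in the opposite direction" fix really would just be a horizontal translation of one eye's image. A toy sketch of my own (nothing from the actual Rift SDK; the function and parameter names are made up):

import numpy as np

def prismatic_shift(eye_image, shift_px):
    # Shift one eye's view horizontally, mimicking what a prismatic lens
    # does optically. Positive shift_px moves the image to the right;
    # pixels that scroll off the edge are dropped and the exposed strip
    # is left black.
    out = np.zeros_like(eye_image)
    if shift_px >= 0:
        out[:, shift_px:] = eye_image[:, :eye_image.shape[1] - shift_px]
    else:
        out[:, :shift_px] = eye_image[:, -shift_px:]
    return out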

The good news is: well, read the first paragraph again: the Rift uses plain, simple, cheap lenses. Just swap the lenses for another set of cheap lenses adapted to your near-sightedness and voilà, glasses-free 3D.

That makes sense. I'd assumed that it would be difficult to fit the OR over my glasses. Your idea of using a different lens gives me hope. On the other hand, since I am near-sighted and the astigmatism isn't that bad, I could probably wear it without my glasses. Those who are far-sighted, though, have no such luck, and the idea of replacement lenses makes perfect sense!

Comment Re:Corrective lenses adaptation? (Score 1) 55

Sadly, you can't apply that much correction in software. Warping can be done to a certain extent, but you cannot fix chromatic aberrations, which are inherent to any wide-angle lens, and other such optical effects. Even quality lenses would not eliminate everything and will still cause uneven pixel density across your field of view.

My bad. I focused on the word "cheap" in "relatively cheap lenses", and assumed that was the cause of (at least some) of the problems they were trying to fix in software.

The Oculus Rift, like most VR head gear, is based off two small screens, one per eye. Those screens are rectangular and that's it. If you output a rectangular image, the image, once warped through lenses, will not have the same discretization: some pixels will be much larger than others. This cannot change even with higher quality lenses, and not distorting the image would completely eliminate the immersion and appeal. What happens instead is that the software outputs pre-warped images to mostly correct the problem, giving more pixels to certain areas of the image so that once warped everything is more or less equally dense. This comes at the cost of software preprocessing and not fully utilizing the available pixels on the screen (the OR's projected images aren't rectangular, so there are "wasted" pixels in the corners).

Thanks for that clear explanation! I get it, now. Pixels in the corners are further away from the eye than pixels in the center, so they look smaller, and the image looks distorted. They need to pre-distort the sizes of the pixels on the display so that they all look the same size when they get to the eye.
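For anyone curious what that pre-warping amounts to, here's a minimal sketch of a radial (barrel) remap in Python/NumPy. It's only illustrative and of my own making: the actual Oculus SDK does this in a pixel shader with bilinear sampling and its own calibrated coefficients, whereas the k1/k2 values below are made up and the sampling is nearest-neighbor for brevity.

import numpy as np

def barrel_prewarp(image, k1=0.22, k2=0.24):
    # Pre-warp a rendered frame by sampling it at radially scaled
    # coordinates, so the lens's pincushion distortion later undoes it.
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Normalized coordinates in [-1, 1], centered on the image.
    x = 2.0 * xs / (w - 1) - 1.0
    y = 2.0 * ys / (h - 1) - 1.0
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2   # radial distortion polynomial
    # Each output pixel looks up a source pixel further from the center,
    # which squeezes the rendered frame toward the middle (barrel shape)
    # and leaves unused (black) pixels in the corners.
    src_x = ((x * scale + 1.0) * 0.5 * (w - 1)).round().astype(int)
    src_y = ((y * scale + 1.0) * 0.5 * (h - 1)).round().astype(int)
    valid = (src_x >= 0) & (src_x < w) & (src_y >= 0) & (src_y < h)
    out = np.zeros_like(image)
    out[valid] = image[src_y[valid], src_x[valid]]
    return out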

There was an article on another technology which holds a lot of promise, I think. It was demoed this summer by NVIDIA and is based off the principle of a lightfield. Instead of outputting two flat planes, the system outputs a higher dimensional image (can be 4D or 5D depending on the tech, I forget) which is used by a series of layered OLED displays to reproduce not only binocular vision, but also different depths. This allows for all sorts of nice new things, such as correcting vision in software from your prescription, or giving the eyes the ability to focus on different depths of the image. This is different from current technology, which only uses the brain's ability to interpret two images rendered from slightly different points as a 3D space. It should also help with the headaches and sickness people are getting from current 3D glasses and VR. The disadvantages are numerous, though: it takes a lot more computing power (we're talking about adding extra dimensions and fundamentally changing the rendering pipeline), it takes a lot more pixels to produce a small resolution image (even 1080p screens don't actually produce 1080p, a lot of the resolution is used to provide the depth) and it's a lot more expensive overall. Yet, I think it holds a lot of promise and I hope to see it in the future.

I had not heard of that. Given what I've read about the tremendous problems with getting displays to refresh quickly enough to avoid artifacts when one's head moves, I can imagine that this is, indeed, a long way off. Still, it's something to look forward to. Thanks for the look into the future!

Despite having been tinkered with for decades, VR is still very much in its infancy. I think we'll see rapid evolution in the next decade or two as technology catches up with the dreams of people regarding virtual reality.

I share your hope to someday see this. I remember playing Lunar Lander on an ASR33 teletype dialed into a PDP-8 back in the time of Star Trek's original series. So much of what I thought of as science fiction, then, has come to reality. We've come so far and I can't wait to see what else the future has in store!

Comment Re:Corrective lenses adaptation? (Score 1) 55

Because the corrections are needed no matter how good your lenses are - it's a remapping of pixels so those pixels appear in the correct place in your plane of vision, and doesn't have anything to do with compensating for low-quality lenses.

Yes, I get that, now. The corners of a flat display are further away than the center, and the pixels there subtend a smaller arc on the eye than those at the center would. Duh!

I'd imagine a learning session where certain scenes (e.g. grids) are displayed and the system would apply software corrections under my control until it looked good to me.

Not possible, I'm afraid. Whatever the screen puts out is going to get blurred by your astigmatism and short-sightedness, and only a physical lens can pre-correct it for your eyes.

Duh. There is no "pre-blurring" or correction that would result in the right image appearing on my eye. My eye's focus is off and no matter what comes in, it will still land on the wrong place to make a clear image. My bad. Thanks for the explanation!

Comment Re:Corrective lenses adaptation? (Score 1) 55

It's been decades since my last physics course that dealt at all with optics, so a course that dealt specifically with optics is not a bad idea. We only touched upon ideal lenses, reflection, and refraction. Never touched on aberrations or distortions.

Liquid lenses would be nice. Bifocals are a pain. Full-lens near and far vision would be wonderful!

Comment Re:Corrective lenses adaptation? (Score 1) 55

Thank-you for your thoughtful reply! It's been decades since my last physics class that dealt at all with optics, and we never got into the various distortions and aberrations. It also dealt only with a theoretically perfect lens (or two), plus a bit about reflection and refraction. Your explanation of the wavefront delays made perfect sense - I think I see it now (pun intended!).

Comment Corrective lenses adaptation? (Score 1) 55

From TFA (emphasis added):

Consumer, wide-angle HMDs use relatively cheap lenses to bring the screen into focus for the human eyes and enable the wide field of view. The drawback is that there are spatial distortions and chromatic aberrations. Luckily, those can be compensated for in software, and this is, e.g. in the Oculus SDK, done as a post-processing pixel shader. The default implementations warp the image in a barrel-distorted way and use bilinear texture access to do so. The result is that the processed image gets blurred.

My first thought was "Why not use better quality lenses?" Sure, they'd be more expensive, but there is an expense involved in the software having to correct Every Single Frame. Why not fix it once, at the source, and obviate the need for continuous real-time updates?

But my next thought was more positive. I wear glasses (near-sighted and have astigmatism, too). It would be *so* nice if there were a way to correct for that in software so I could wear VR goggles without my glasses. I'd imagine a learning session where certain scenes (e.g. grids) are displayed and the system would apply software corrections under my control until it looked good to me. That could be saved as a profile and then be loaded whenever I used their VR display.

The next step, of course, would be to find a way to leverage this "training" into my next eye exam!
