Amica. They've been pretty solid.
My homeowners insurance charges something like $10 per YEAR for computer insurance that also includes... smartphones. With a $50 deductible and $1000 per incident. My son dropped my wife's Samsung Somethingorother in the pool and the insurance paid out ~$500 for a new phone. Way cheaper than any other plan I have ever seen for phones. It also covers laptops, and all devices in the house are covered under the single $10 payment.
It's hard not to have a few hacked servers when you make up roughly 1/256 of IPv4 space with everything sitting on an enormous pipe. Plus there's such a high flux of students coming, setting up servers (sometimes in closets), and leaving that the place is a nightmare of unpatched everything. Plus school is a place where you are supposed to learn, and a lot of learning comes from making mistakes.
You can download the article from Arxiv for free here: http://arxiv.org/abs/1103.3643
Basically, the imaging resolution of a lens (typically) has to do with its numerical aperture (NA). A small lens far away has terrible resolution, and vice-versa. The trouble with really high NA lenses is that they are hard to make without distortions. It's easy to make spherical shapes, but aptly named spherical aberration starts to ruin your image once the NA gets high. So what they've done is taken a ground glass surface and put it really close to the object, so that the "scattering lens" subtends close to 2pi steradians. Then they use a spatial light modulator (transmissive LCD screen) to control the phase of their laser beam across many domains to sort of pick out the random scattering elements on the frosted screen that give them the best image. Sort of. There is much additional trickery, but I think that's the gist of it.
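For reference, the NA-to-resolution link is the textbook Abbe diffraction limit, d = lambda / (2 * NA). A quick numerical sketch (illustrative wavelength and NA values of my own choosing, not numbers from the paper):

```python
def abbe_limit(wavelength_nm, na):
    """Abbe diffraction limit: smallest resolvable feature, in nm."""
    return wavelength_nm / (2.0 * na)

# A modest lens (NA 0.25) vs. a high-NA immersion objective (NA 1.4),
# both at a green 561 nm laser line:
print(abbe_limit(561, 0.25))  # ~1122 nm
print(abbe_limit(561, 1.4))   # ~200 nm
```

Hence the push toward huge effective NA: the scattering-lens trick buys resolution that a conventionally ground high-NA lens can't deliver without aberrations.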
While CPU power seems to double every 18 months or so, for the past (almost) 20 years hard drive size has doubled every 14 months*. Eventually hard drives will be so large that CPUs will never be able to access all the information. I guess then the key is being able to find the information you want to access, which is why I suppose it would be good to buy GOOG even now.
* 40 MB in 1991, 3 TB in 2010. This trend has held true at many points in between.
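The footnote's numbers do work out to roughly a 14-month doubling time:

```python
import math

# 40 MB in 1991 -> 3 TB in 2010 (19 years = 228 months)
start_mb = 40
end_mb = 3e6  # 3 TB expressed in MB (decimal units)

doublings = math.log2(end_mb / start_mb)  # ~16.2 doublings
months = 19 * 12
print(months / doublings)  # ~14.1 months per doubling
```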
Well I didn't say my code was *well* written. Apparently there's a lot of trickery with copying global memory to shared (on-chip) memory to speed up operations. Shared memory takes (IIRC) one clock cycle to read or write, and global GPU memory takes six hundred cycles. And there's all this whatnot and nonsense about aligning your threads with memory locations (coalescing) that I don't even bother with.
The Tesla C1060 is a video card with no video output (strictly for processing) that has something like 240 processor cores and 4 GB of GDDR3 RAM. Just doing math on large arrays (1k x 1k) I get a performance boost of about a factor of forty over a dual core 3.0 GHz Xeon.
The CUDA extension set has FFT functionality built in as well, so it's excellent for signal processing. The SDK and programming paradigm are super easy to learn. I only know C (and not C++) and I can't even make a proper GUI, but I can make my array functions run massively in parallel.
The trick is to minimize memory moving between the CPU and the GPU, because that kills performance. Only the newest cards support "simultaneous copy and execute," where one thread can be copying new data to the card, another can be processing, and a third can be moving the results off the card.
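That copy-in / compute / copy-out overlap is just a three-stage pipeline. Here's a toy host-side sketch in plain Python, with threads and queues standing in for CUDA streams (an analogy only, not how you'd actually drive the card):

```python
import queue
import threading

def pipeline(chunks, process):
    """Overlap 'copy in', 'compute', and 'copy out' as three stages.
    Bounded queues mimic the limited staging buffers on the card."""
    in_q, out_q = queue.Queue(maxsize=2), queue.Queue(maxsize=2)
    results = []

    def copy_in():                    # stand-in: host -> device transfer
        for c in chunks:
            in_q.put(c)
        in_q.put(None)                # sentinel: no more work

    def compute():                    # stand-in: kernel execution
        while (c := in_q.get()) is not None:
            out_q.put(process(c))
        out_q.put(None)

    def copy_out():                   # stand-in: device -> host transfer
        while (r := out_q.get()) is not None:
            results.append(r)

    threads = [threading.Thread(target=f) for f in (copy_in, compute, copy_out)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(pipeline([[1, 2], [3, 4]], sum))  # [3, 7]
```

While one chunk is computing, the next is already being staged in and the previous result is being drained out, which is the whole point of simultaneous copy and execute.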
One way that the video people can maybe speed up their processing (disclaimer: I don't know anything about this) is to do a quick sweep for keyframes, and then send the video streams between keyframes to individual processor cores. So instead of each core getting a piece of the frame, maybe each core gets a piece of the movie.
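The idea amounts to splitting the frame sequence at keyframes and handing each chunk to a core, since each chunk decodes independently of the others. A minimal sketch (the keyframe test itself is faked here with a marker; real detection is its own problem):

```python
def split_at_keyframes(frames, is_keyframe):
    """Partition a frame list into independent chunks, each starting
    at a keyframe, so chunks can be processed in parallel."""
    chunks, current = [], []
    for f in frames:
        if is_keyframe(f) and current:
            chunks.append(current)
            current = []
        current.append(f)
    if current:
        chunks.append(current)
    return chunks

# Toy stream: "K" prefix marks a keyframe, "d" is a delta frame
frames = ["K1", "d", "d", "K2", "d", "K3", "d", "d"]
print(split_at_keyframes(frames, lambda f: f.startswith("K")))
# [['K1', 'd', 'd'], ['K2', 'd'], ['K3', 'd', 'd']]
```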
The days of the math coprocessor card have returned!
Wow, you are correct. When I downloaded the paper and scanned the references, I somehow missed that one.
I guess I have to change my point to "how is this new now?" But it's a nice experiment in any case.
n = (index of refraction)
h = (well, hbar, the reduced Planck constant)
k = (photon wavenumber)
This paper from MIT showed conclusively through experiment (almost 4 years ago) that in a refractive material the medium temporarily gives up its momentum to the photon, so that the momentum of the photon in the medium is nhk.
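In the symbols defined above (and adding, for contrast, the other standard candidate from the long-running Abraham-Minkowski debate; that context is my addition, not the paper's):

```latex
% Photon momentum, with k the vacuum wavenumber:
p_{\mathrm{vacuum}} = \hbar k
\qquad
p_{\mathrm{medium}} = n\hbar k \quad \text{(Minkowski form, favored by the experiment)}
\qquad
p_{\mathrm{medium}} = \hbar k / n \quad \text{(Abraham form, the competing answer)}
```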
It's too bad that this new experiment didn't cite the prior art.
When the bosses talk about improving productivity, they are never talking about themselves.