Comment Not transparent... (Score 4, Informative) 191
Not transparent... but "augmented".
(misleading title, sloppy journalistic work... as always)
On a side note: IMO, you should have started indexing your kids at 0...
An Algorithm To Prevent Slashdot's Bennett-Haselton-Degeneration...
Yeah, we need one...
They acquire only for a very, very short window of time (on the order of a picosecond) and perform compression before the acquisition (compressed sensing).
They cannot record longer than this because of how slow the sensor at the back of the streak camera is.
OK, let's say you want to build a 1 "mega-pixel" camera (1000x1000 pixels, for instance). You have the optics but not the sensor array. Instead, you only have a single photo-diode, which is basically a single pixel.
First approach: you decide to scan the image plane with this photo-diode, trading time for spatial resolution. You move the photo-diode to where the first pixel in the top-left corner of the sensor should be, integrate (collect photons) for some time, then move to the second pixel position. After making 1 million such moves/integrations, you have fully sampled the image plane and have a complete 1 "mega-pixel" image.
Problem: this is slow as hell, you need to position the photo-diode to within some accuracy, etc.
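As a toy sketch of this first approach (a made-up 4x4 scene standing in for the optical image plane), the scan is just a slow double loop, one mechanical move and one integration per pixel:

```python
import numpy as np

# Hypothetical 4x4 "scene" standing in for the optical image plane.
scene = np.arange(16, dtype=float).reshape(4, 4)

# Move the single photo-diode to each pixel position and integrate there.
image = np.zeros_like(scene)
for r in range(scene.shape[0]):
    for c in range(scene.shape[1]):
        # one mechanical move + one integration per pixel
        image[r, c] = scene[r, c]

# After rows * cols slow moves, the image plane is fully sampled.
```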
Second approach: instead of moving the photo-diode, you modulate the incoming signal (the photons) and integrate everything onto this single detector. You take a small video projector and open it up to find a component called a DMD, which is an array of controllable bistable micro-mirrors. Displaying an image on the projector basically turns this surface into a controllable gray-scale mask (note that it does not actually transmit light, it just reflects it). You put the DMD in the image plane (where the sensor array would be) and use a lens to focus all of the light coming off the DMD surface onto the photo-diode.
Now, instead of scanning, you just display a pattern consisting of a "black" frame (fully "blocking") except for a single "white" pixel ("transparent") and integrate as usual. Since you know which pattern was used for each integration, you can, as before, rebuild the image.
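The same idea in code (a made-up flattened scene; each DMD pattern is modelled as a 0/1 mask and the photo-diode reading as the mask-weighted sum):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16                       # a 4x4 image plane, flattened
x = rng.random(n)            # the unknown scene hitting the DMD

# One pattern per pixel: all "black" except a single "white" pixel.
readings = np.empty(n)
for i in range(n):
    mask = np.zeros(n)
    mask[i] = 1.0            # the one transparent pixel
    readings[i] = mask @ x   # photo-diode integrates mask * scene

# Since we know which pattern produced each reading,
# the readings ARE the pixels: the image is rebuilt directly.
```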
Second approach, first improvement: instead of lighting up one pixel at a time, you can use specific patterns. The basic idea is to integrate photons coming from multiple pixels at the same time and reconstruct the image with a dedicated algorithm. Express the problem as a linear equation A x = y, where x is the input image, A is the measurement operator (a matrix representing the system) and y is the measured vector. In the previous case you were measuring pixel by pixel, which is equivalent to modelling A as the identity matrix (ones on the main diagonal, zeros everywhere else, so y = x). Now imagine that you use another matrix, i.e. another way of combining multiple pixels, such that each row of A is a pattern you display on the DMD and the matrix is still square and full-rank (a well-defined system). In the end you can still reconstruct x from y with x = A' y (where A' is the inverse of A) and get your image back.
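A minimal numerical sketch of the linear model above (I use gray-scale random masks for simplicity; a real DMD pattern is binary, but any square full-rank A works the same way):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
x = rng.random(n)                 # the scene, flattened

# One DMD pattern per row of A; random gray-scale masks are
# full-rank with probability 1, so the system is well defined.
A = rng.random((n, n))
y = A @ x                         # n photo-diode integrations

x_rec = np.linalg.solve(A, y)     # x = A' y, without forming A' explicitly
```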
Why would you do this? Well, instead of collecting a handful of photons through a tiny opening, you are measuring many more photons at once, which is a good thing since our real-world detector is noisy. You thus increase the signal-to-noise ratio.
Second approach, second improvement: the main problem with the previous system is that, to obtain a 1 mega-pixel image, you still need to do 1 million projections/measurements, which is a lot and makes the whole process slow. But you know for a fact that images are compressible signals (JPEG is proof of that), which means that you can represent any 1 mega-pixel image with a much smaller vector. This is because natural images are not random structures; they possess some level of coherency, i.e. redundancy between pixels. So instead of making as many projections as there are pixels (a square matrix), you make fewer, say by a factor of 4 to 10. The matrix A becomes rectangular and you have to use a more complex reconstruction algorithm (non-linear, or based on convex optimization) which takes into account prior knowledge you have about natural images (think of it as external constraints that make the system sufficiently well behaved).
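As a toy illustration of the compressive step (not the actual algorithms used in these papers: here the scene is made sparse by hand, and recovered with Orthogonal Matching Pursuit, a simple greedy stand-in for the convex solvers mentioned above):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 64, 32, 3     # 64 "pixels", only 32 measurements, 3-sparse scene

# Natural images are not sparse in the pixel basis (they are sparse in
# e.g. a wavelet basis); for the demo we just make x itself k-sparse.
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # rectangular: m < n patterns
y = A @ x                                  # fewer readings than pixels

# Orthogonal Matching Pursuit: greedily pick the pattern column that
# best explains the residual, then re-fit on the selected columns.
residual, sel = y.copy(), []
for _ in range(k):
    sel.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, sel], y, rcond=None)
    residual = y - A[:, sel] @ coef

x_rec = np.zeros(n)
x_rec[sel] = coef                          # recovered despite m < n
```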
This is basically how single-pixel cameras work (with compressive sensing)...
I'll pass on the bonus point.
When they are released on an intervention, they start flashing their Blue Screen of Death and Red Ring of Death...
Especially considering a 1 mega-pixel image in 8-bit gray-scale: that's 1 MB worth of information. Assuming 8 characters per word on average (including punctuation) and 250 words per page in some 16-bit character encoding, the image weighs the same as a book of about 250 pages.
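A quick back-of-the-envelope check of those figures:

```python
image_bytes = 1000 * 1000 * 1      # 1 mega-pixel, 8 bits (1 byte) per pixel
bytes_per_page = 250 * 8 * 2       # 250 words/page, 8 chars/word, 16-bit chars
pages = image_bytes / bytes_per_page
print(pages)   # -> 250.0
```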
To the systems programmer, users and applications serve only to provide a test load.