Fly Eyes for Spying Cameras 47
Roland Piquepaille writes "Even with sophisticated cameras, we can sometimes get poor pictures. This usually happens because cameras use an average light setting to control brightness. When parts of a scene are much brighter than others, the result is that you don't capture all parts of the scene accurately. According to National Geographic News, by mimicking how flies see, Australian researchers can now produce digital videos in which you can see every detail. This technique could be used to develop better video cameras, military target-detection systems and surveillance equipment. Read more for additional pictures and references about these future surveillance cameras."
articles missing lots of details. (Score:5, Informative)
We'll address dynamic range [wikipedia.org], since I know more about this aspect. The first page of the (first) article says he used "off-the-shelf components such as resistors, capacitors, and light sensors to build an electronic model". And then a sentence or so later says, "This would allow the camera to capture more complete images--such as, for instance, both the face of a person standing in front of a sunlit window and the scene outside." If you don't know much about digital imaging, let me just say that this is roughly the equivalent of "I used wheels and spark plugs to build a car and I now hope to win the Indy 500." The article is SORELY lacking in any real information about how he intends to extend dynamic range by using technology gleaned from flies.
There are several very real and working principles by which dynamic range can be extended, both unique to chip architecture (such as dual slope sampling) and implementable on a variety of chips (such as dual electronic shuttering). These are the types of things that it would have been cool for the article to discuss (imo). The second article at least includes a quote from him stating that fly eyes can adjust exposure independently... this is a beneficial thing, and several CMOS imagers already exist that do this as well (i.e. dual slope operation, etc). You can also individually shutter pixels, or expose multiple frames per $interval (each with a different electronic shutter length) and then composite them... however this last technique creates smear, which can be less than ideal, depending on your needs. I also know of a couple of patents for Bayer masks that adjust individual pixel exposure in real time (similar to those sunglasses that get darker or lighter) in order to compress dynamic range before it hits the CMOS/CCD.
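For the curious, the dual-slope idea is easy to sketch. This is a toy transfer curve of my own, not any particular vendor's chip; the knee point, gains, and full-well numbers are all made up for illustration:

```python
# Toy dual-slope response: below the knee, charge accumulates at full
# gain; above it, a second, shallower slope compresses highlights
# instead of letting them clip immediately. All constants are invented.

def dual_slope(photons, knee=1000, gain_low=1.0, gain_high=0.05, full_well=6000):
    """Map incident light to a sensor code with a two-segment response."""
    if photons <= knee:
        signal = photons * gain_low
    else:
        signal = knee * gain_low + (photons - knee) * gain_high
    return min(signal, full_well)  # hard clip at the full well

# A scene spanning several decades of brightness no longer saturates
# right above the knee:
for p in (10, 1000, 10000, 100000):
    print(p, dual_slope(p))
```

The point is that the highlight slope trades precision for reach: you keep distinguishing bright values long after a single-slope sensor would have clipped.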
One of the issues the articles really didn't get into at all is storage of data. Higher dynamic range images require more storage space (their bit depths increase), and right now the major limitation in digital cinema and other similar realms is not imaging so much as writing all of the data to disk: storage media speed (or cost/speed ratio, if you like) needs to do some catching up.
Re:articles missing lots of details. (Score:4, Interesting)
That's likely what could be nicely improved with the right electronics: the smear would be at worst equal to the smear of the longest exposure.
You'd need a readout that doesn't reset the sensor's accumulated exposure between samples. Instead of:
start, wait 1/120 s, stop, save, reset
start, wait 1/60 s, stop, save, reset
start, wait 1/30 s, stop, save, reset
but one which does:
start, wait 1/120 s, save,
wait 1/120 s (total 1/60 from start), save
wait 1/60 s (total 1/30 from start), save
stop, reset.
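That scheme can be sketched in a few lines. This is just my interpretation of the non-destructive readout idea above, not the article's design; the flux, sample times, and full-well value are illustrative:

```python
# The sensor keeps integrating and is merely *sampled* at 1/120, 1/60
# and 1/30 s, so all three "exposures" share one start time and the
# worst-case smear equals that of the longest one.

def simulate_readouts(flux, times=(1/120, 1/60, 1/30), full_well=3000.0):
    """Return the accumulated (clipped) signal at each sample time."""
    return [min(flux * t, full_well) for t in times]

def best_exposure(samples, times=(1/120, 1/60, 1/30), full_well=3000.0):
    """Pick the longest unsaturated sample per pixel and rescale it."""
    for s, t in zip(reversed(samples), reversed(times)):
        if s < full_well:
            return s / t  # radiance estimate from the longest clean sample
    return samples[0] / times[0]  # all clipped: best we can do

readouts = simulate_readouts(120000)   # the 1/30 s sample clips
estimate = best_exposure(readouts)     # falls back to the 1/60 s sample
```

In this toy, the 1/30 s sample hits the full well, so the estimator quietly falls back to the 1/60 s sample and still recovers the incident flux.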
Still, displaying the result remains a problem. The real world spans an incredibly wide range of luminances. Screens, paper, plasma TVs all have a much smaller dynamic range. You can squeeze the range of the data you gathered into the range of the device (and get horrible contrast), you can vary the ranges of displayed areas (which creates bloom effects, looks cool, but for data processing: can't see shit, captain), extract variable info from the image (good for image processing but looks like shit to people), split it into several images of various luminances (so why composite it into one in the first place?) or... wait for a better display medium. Yeah, sucks.
High Dynamic Range display (Score:1)
or... wait for a better display medium
I feel compelled to share a fairly old link to BrightSide (http://brightsidetech.com/ [brightsidetech.com]). They manufacture High Dynamic Range (HDR) TVs (and other related stuff), so here's the display medium.
Only really relevant if you have a lot of cash to spend, but it'll be interesting to see how long it will take for the big vendors to catch on and make this technology mainstream...
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I don't know how it works, but Sony has been advertising the ability to do dynamic range compression in their studio/EFP/ENG cameras to bring out details of people's faces in front of brightly lit backgrounds since at least the late 1990s. I have one of their promotional videotapes back in TN.
For their example, they showed a race car driver inside a car with tinted windows, lit by available light, with the window behind the guy rolled down on a bright sunny day.
The problem is not that this can't be don
Re: (Score:2, Informative)
Multi-contrast zone recording (Score:4, Interesting)
So what do you do? Well, since it's digital, take more pictures! Expose the frame for a certain set of contrast zones and then repeatedly take the same shot with different contrast settings. Digitally combine the pics in Photoshop to render a frame with full contrast from the blackest black to the whitest white. The pictures look a little weird because we usually aren't able to see that much contrast in nature due to limitations of our eyes, but the results are pretty astounding.
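A bare-bones version of that merge looks something like this. This is what Photoshop's HDR merge does in spirit, not its actual algorithm; the tent-shaped weighting is my own choice, and the pixel values are assumed to be linear:

```python
# Merge bracketed exposures into one radiance map: each frame votes for
# a radiance estimate (value / exposure time), weighted so that values
# near black or near clipping count for little.

def merge_exposures(frames, exposure_times, white=255):
    """frames: list of equal-length pixel lists, one per exposure."""
    merged = []
    for pixel_stack in zip(*frames):
        num = den = 0.0
        for v, t in zip(pixel_stack, exposure_times):
            w = min(v, white - v) + 1e-6  # tent weight: trust midtones most
            num += w * (v / t)            # radiance estimate from this frame
            den += w
        merged.append(num / den)
    return merged

# Two one-pixel frames at 1x and 2x exposure agree on the radiance:
print(merge_exposures([[100], [200]], [1, 2]))
```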
Re: (Score:2, Informative)
Re: (Score:3, Informative)
Normal LCDs are low dynamic range. You need $$$ to get a real HDR LCD.
http://www.brightsidetech.com/ [brightsidetech.com]
And the demos are all simulated, since you can't view real HDR without an HDR monitor, AFAIK.
Re:Multi-contrast zone recording (Score:4, Insightful)
Must be working for CIA (Score:2)
literally?.. (Score:1)
And Brinkworth plans "to shrink the prototype and place it on a microchip that could go between a camera's sensor and its digital converter."
next problem: how to shrink a fly
A question: how many times has the housefly inspired us? Can I still kill them?
My solution (Score:3, Interesting)
Seriously. With bracketing you simply take multiple shots at different exposures in quick succession. Most modern cameras with computer controls offer automated bracketing functions. And for compositing afterwards there's a nifty program called Photoshop...
Re:My solution (Score:5, Insightful)
After capturing the image, you need to display it somehow, or else there's not much point to the exercise.
Current screens and prints have a tiny dynamic range, on the order of 1000:1
So, once you've captured that image, where the brightest pixel is a million times brighter than the darkest pixel, how are you going to show it?
There's only one answer: compress the range, that is, map your numbers (in range 1 - 1000000) to much smaller numbers.
Problem is, now you've got terrible contrast in the midtones, because compressing the range compresses that part of it too. So, assuming the monitor can display 1000 different brightnesses, you end up with a picture where the brightest pixel in a face is, say, 404 and the darkest pixel is, say, 397. Which makes the face essentially monotone.
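You can see the effect with the toy numbers above. This is just an illustration of the mapping problem, with a made-up "face" band of pixel values; a logarithmic curve is one common fix, though real tone mappers are fancier:

```python
import math

# Scene range 1..1,000,000 mapped onto a display with 1000 levels.
# A naive linear map crushes a face spanning 3500..4200 into a couple
# of output codes; a logarithmic map keeps the steps distinguishable.

def linear_map(v, lo=1, hi=1_000_000, levels=1000):
    return round((v - lo) / (hi - lo) * (levels - 1))

def log_map(v, lo=1, hi=1_000_000, levels=1000):
    return round(math.log(v / lo) / math.log(hi / lo) * (levels - 1))

face = (3500, 4200)
print([linear_map(v) for v in face])  # → [3, 4]: essentially monotone
print([log_map(v) for v in face])     # → [590, 603]: visible detail
```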
Display solution has a name: Tone Mapping (Score:4, Informative)
A good place to start looking into this field is the Wikipedia entries http://en.wikipedia.org/wiki/Tone_Mapping [wikipedia.org] and http://en.wikipedia.org/wiki/High_dynamic_range_imaging [wikipedia.org].
Re: (Score:3, Informative)
Re: (Score:2)
Despite print (and display) having only on the order of 1000:1 contrast ratios, it'd be a tremendous advantage to have a digital camera that could capture a *LOT* more than that.
This would allow you to select your exposure later -- before *printing* (or displaying) rather than at the moment you take the photo. Bracketing can do this, sort of, but it's definitely a hack.
In essence, it'd allow you to first take a photo. And then *afterwards* experiment
Re: (Score:2)
Of course, you would have a hard time calculating the range, if the brightest tone of the face is Not Found (ducks).
Aside from that, the 1000 different brightness values could be selected according to a logarithmic scale, more suited to the human eye, and adapted to its peculiarities. A digital sensor just uses a linear scale, and just
Re: (Score:2)
Re: (Score:1, Informative)
The term you're looking for is "plenoptic camera".
Re: (Score:2)
A tad bit off... (Score:3, Funny)
they have a million fucking eyes!
(Try not to take this post too seriously.)
Re: (Score:1)
Compound Eyes:
apposition eye: An eye type consisting of multiple ommatidia (lenses) which are each separated by pigment cells, which surround them individually. Having the lenses separated in this way creates the problem of poor photon reception, which led to the development of the superposition eye. The apposition eye is found in diurnal
Re: (Score:1)
Note that this is only how I imagine the thing could work; I'm as ignorant as anybody here about WTF TFA is poorly trying to say. I'm not sure how a fly's eyes actually
Re: (Score:1, Interesting)
Re: (Score:1)
That's incorrect. The optic nerve only transmits the images to the brain. The human eye's retina is most likely better than film or CCD about this*, but it's your iris [wikipedia.org] that adjusts the light entering your eye, just like a camera's f-stop.
The difference is that with your eye you only look at what you're focusing
FujiFilm SuperCCD (Score:2, Informative)
Re: (Score:2)
Sounds like a recipe for bleed over if ever I heard one.
Re: (Score:2)
Dodge and burn? (Score:1)
All the comments I read so far seem to think this is about increasing the dynamic range of the sensors - I didn't get that from the article at all. It seems to be something that goes after a conventional sensor.
It sounds to me like the software equivalent of the old (analog) darkroom technique of "dodge and burn" where, when you printed a negative, you manually exposed dark bits of the image more and light bits less by selectively covering parts of it.
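If you want to see what that might look like digitally, here's one way to fake it. This is purely my guess at such an algorithm, a 1-D toy with made-up numbers, not whatever the researchers actually built:

```python
# Software "dodge and burn": estimate local brightness with a blur,
# then apply the inverse as a per-pixel gain, so bright regions are
# held back and shadows are lifted toward the global mean.

def box_blur(signal, radius=2):
    """Simple moving average, clamped at the edges."""
    n = len(signal)
    return [
        sum(signal[max(0, i - radius):min(n, i + radius + 1)])
        / len(signal[max(0, i - radius):min(n, i + radius + 1)])
        for i in range(n)
    ]

def dodge_and_burn(signal, strength=0.7):
    local = box_blur(signal)
    mean = sum(signal) / len(signal)
    # The gain pulls each region toward the global mean; strength=0
    # is a no-op, strength=1 flattens everything except local detail.
    return [v * (mean / l) ** strength for v, l in zip(signal, local)]

# A 100:1 step in the input comes out far less extreme:
out = dodge_and_burn([10.0] * 5 + [1000.0] * 5)
```

Because the gain comes from a *blurred* version of the image, fine detail inside each region survives; only the large-scale brightness differences get squashed, which is exactly what a darkroom dodge does.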
Not sure what the motion detection bit is about though, t
Re: Increasing dynamic range at sensor (Score:2)
Other equally non-technical press articles say that they're going to try to put an ASIC "between the camera lens and the image sensor." Now, I'm assuming they're not actually gonna block the light path, but that what they mean is that they're going to use some circuitry to control t
It's a difference movie! (Score:3, Interesting)
(you look at changes from one frame to the next, and make a movie of those changes).
There's nothing new about this -- scientists have been using it for years (if not decades) for instruments that they don't have enough data to fully calibrate (eg, those on spacecraft, where they might not be able to focus on fixed targets to calibrate it in its environment). It's also useful to tell when only small portions of the image are changing, or it's changing very slightly in relation to the whole image.
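For anyone who hasn't seen one, a difference movie is trivial to compute; this little sketch (illustrative values only) is the whole idea:

```python
# Each output frame is the pixel-wise change since the previous frame,
# so static background drops to zero and only motion/change survives.

def difference_movie(frames):
    """frames: list of equal-length pixel lists; returns frame deltas."""
    return [
        [b - a for a, b in zip(prev, cur)]
        for prev, cur in zip(frames, frames[1:])
    ]

frames = [[5, 5, 5], [5, 9, 5], [5, 9, 6]]
print(difference_movie(frames))  # → [[0, 4, 0], [0, 0, 1]]
```

Note that uncalibrated fixed-pattern errors cancel out in the subtraction, which is why it's handy for instruments you can't fully calibrate.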
Here are some examples:
Roland fan club (Score:2)
More Roland stories here [slashdot.org] and here [slashdot.org].
Progress (Score:3, Funny)
Using fly eyes on cameras (Score:2)
compulsory lyrics (Score:1)
you just can't slow things down, baby..."
"...with all those eyes, they're crowdin up my human face
and all those eyes, TAKE AN OVERLOAD"
Uses (Score:2)
Uses are listed in reverse chronological order, of course.
-f
How can a fly's eye lens be news?!? (Score:1)
Nikon was the manufacturer of the steppers I worked on.
I would think fly's eye lenses would have made it into all sorts of other imaging equipment by now. Did it just take this long for someone to figure out that it would help in oth
It really works! (Score:2)