Fly Eyes for Spying Cameras

Roland Piquepaille writes "Even with sophisticated cameras, we can sometimes get poor pictures. This usually happens because cameras use an average light setting to control brightness. When parts of a scene are much brighter than others, the result is that not all parts of the scene are captured accurately. According to National Geographic News, by mimicking how flies see, Australian researchers can now produce digital videos in which you can see every detail. This technique could be used to develop better video cameras, military target-detection systems and surveillance equipment. Read more for additional pictures and references about these future surveillance cameras."
This discussion has been archived. No new comments can be posted.

  • by adam ( 1231 ) * on Monday September 11, 2006 @05:39AM (#16079662)
    I find the first FA to be poorly written. It jumps between focusing [pun not intended] on two completely different concepts: dynamic range and motion detection. The second article is slightly better.

    We'll address dynamic range [wikipedia.org], since I know more about this aspect. The first page of the (first) article says he used "off-the-shelf components such as resistors, capacitors, and light sensors to build an electronic model". And then a sentence or so later says, "This would allow the camera to capture more complete images--such as, for instance, both the face of a person standing in front of a sunlit window and the scene outside." If you don't know much about digital imaging, let me just say that this is roughly the equivalent of "I used wheels and spark plugs to build a car and I now hope to win the Indy 500." The article is SORELY lacking in any real information about how he intends to extend dynamic range by using technology gleaned from flies.

    There are several very real and working principles by which dynamic range can be extended, both unique to chip architecture (such as dual slope sampling) and implementable on a variety of chips (such as dual electronic shuttering). These are the types of things that it would have been cool for the article to discuss (imo). The second article at least includes a quote from him stating that fly eyes can adjust exposure independently.. this is a beneficial thing, and several CMOS imagers already exist that do this as well (i.e. dual slope operation, etc). You can also individually shutter pixels, or expose multiple frames per $interval (each with a different electronic shutter length) and then composite them.. however this last technique creates smear, which can be less than ideal, depending on your needs. I also know of a couple of patents for Bayer masks that adjust individual pixel exposure in real time (similar to those sunglasses that get darker or lighter) in order to compress dynamic range before it hits the CMOS/CCD.

    One of the issues the articles really didn't get into at all is storage of data. Higher dynamic range images require more storage space (as their bit depth increases), and right now the major limitation in digital cinema and other similar realms is not imaging so much as writing all of the data to disk.. storage media speed (or cost/speed ratio, if you like) needs to do some catching up.
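
    For the curious, here is a toy numpy sketch of the "expose multiple frames per interval, then composite" approach mentioned above. It's purely illustrative (my own guess at one way to do it, not anything from TFA or a real imager pipeline):

      import numpy as np

      def merge_exposures(frames, shutter_times):
          """Toy HDR merge: combine radiance estimates from differently exposed
          frames, weighting pixels that are neither clipped nor buried in noise."""
          frames = [f.astype(np.float64) / 255.0 for f in frames]  # assume 8-bit input
          num = np.zeros_like(frames[0])
          den = np.zeros_like(frames[0])
          for img, t in zip(frames, shutter_times):
              w = 1.0 - np.abs(2.0 * img - 1.0)   # favor mid-tones, downweight clipped pixels
              num += w * img / t                  # per-frame radiance estimate = value / exposure time
              den += w
          return num / np.maximum(den, 1e-6)      # weighted average radiance map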

    • by SharpFang ( 651121 ) on Monday September 11, 2006 @05:56AM (#16079702) Homepage Journal
      or expose multiple frames per $interval (each with a different electronic shutter length) and then composite them.. however this last technique creates smear,

      That's likely something that could be nicely improved with the right electronics: the smear would at worst equal the smear of the longest-exposure shot.

      You'd need a readout mode that doesn't reset the sensor's accumulated exposure between readouts. Instead of:
      start, wait 1/120 s, stop, save, reset
      start, wait 1/60 s, stop, save, reset
      start, wait 1/30 s, stop, save, reset

      you'd want one that does:
      start, wait 1/120 s, save,
      wait 1/120 s (total 1/60 from start), save
      wait 1/60 s (total 1/30 from start), save
      stop, reset.
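
      Here's a quick Python sketch of that non-destructive readout schedule (just my own toy model of the idea, not real sensor code): the photosite keeps integrating and we snapshot it at each save, so the shorter "virtual" exposures can be recovered by differencing, and the smear is bounded by the longest exposure.

        import numpy as np

        def cumulative_readout(scene_radiance, read_times=(1/120, 1/60, 1/30)):
            """Simulate save-without-reset: charge accumulates across all readouts."""
            scene = np.asarray(scene_radiance, dtype=np.float64)
            snapshots = [scene * t for t in read_times]   # charge at each save point
            # Difference consecutive snapshots to recover the intermediate exposures.
            deltas = [snapshots[0]] + [b - a for a, b in zip(snapshots, snapshots[1:])]
            return snapshots, deltas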

      Still, displaying the result remains a problem. The real world is a medium with an incredibly wide range of luminances. Screens, paper, and plasma TVs all have a much smaller dynamic range. You can squeeze the range of data you gathered into the range of the device (and get horrible contrast), vary the ranges of displayed areas (which creates bloom effects; looks cool, but for data processing - can't see shit, captain), extract variable info from the image (good for image processing, but looks like shit to people), splice it into several images of various luminances (so why compound it into one in the first place?), or... wait for a better display medium. Yeah, sucks.
      • or... wait for a better display medium

        I feel compelled to share a fairly old link to BrightSide (http://brightsidetech.com/ [brightsidetech.com]). They manufacture High Dynamic Range (HDR) TVs (and other related stuff), so here's the display medium.

        Only really relevant if you have a lot of cash to spend, but it'll be interesting to see how long it will take for the big vendors to catch on and make this technology mainstream...

        • Very interesting idea (though I doubt it would be of much use in daylight, with sun shining on the screen surface...) - I just wonder what media devices (gfx cards, video players) and software/media support these devices.
      • Um.. that is impossible to do in a CCD. The sensor is read by advancing the charges across the sensor surface itself, so reading the sensor is a destructive process. I do not know enough about CMOS imagers to comment on whether those could accomplish your goal.
        • by dgatwood ( 11270 )

          I don't know how it works, but Sony has been advertising the ability to do dynamic range compression in their studio/EFP/ENG cameras to bring out details of people's faces in front of brightly lit backgrounds since at least the late 1990s. I have one of their promotional videotapes back in TN.

          For their example, they showed a race car driver inside a car with tinted windows, lit by available light, with the window behind the guy rolled down on a bright sunny day.

          The problem is not that this can't be done...

    • Re: (Score:2, Informative)

      by sm62704 ( 957197 )
      A better FA can be found here. [newscientisttech.com] This article [livescience.com] is sort of related, and interesting.
  • by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Monday September 11, 2006 @05:41AM (#16079666)
    The problem, in short, is that digital sensors have pretty terrible contrast limitations. Film does too, to some extent, but with many years of experience these problems have been dealt with. You can only capture meaningful data within certain contrast zones. A good sensor may have 4 usable zones of contrast while your consumer digicam can probably only handle 2 and a half or three stops worth of contrast.

    So what do you do? Well, since it's digital, take more pictures! Expose the frame for a certain set of contrast zones and then repeatedly take the same shot with different contrast settings. Digitally combine the pics in Photoshop to render a frame with full contrast from the blackest black to the whitest white. The pictures look a little weird because we usually aren't able to see that much contrast rendered in Nature due to limitations of our eyes, but the results are pretty astounding.
    • by Anonymous Coward on Monday September 11, 2006 @06:07AM (#16079736)
      Almost everything you wrote there is incorrect. The dynamic range of digital cameras isn't quite that bad and actually not significantly worse than film. The difference is mostly how film and CMOS sensors degrade when they're overexposed. You could use Photoshop to create HDR pictures, but there are better tools for the job. These pictures, or rather the low dynamic range pictures that are created from them, look odd due to limitations of the display systems, not our eyes. The algorithms, which compress the dynamic range into the range that a typical monitor or, even worse, a print can handle, mimic the way we adapt to high dynamic ranges in reality, but since a picture has no time dimension, they have to do spatially what we do over time, which creates the weirdness.
  • Anyone else notice the BUG EYE commercial for CIA.GOV employment during MythBusters? (When you're not skipping over the commercials with a DVR!)

  • If I read the title right, I found this in the FA.

    And Brinkworth plans "to shrink the prototype and place it on a microchip that could go between a camera's sensor and its digital converter."

    next problem: how to shrink a fly

    A question: how many times has the house fly inspired us? Can I still kill them?
  • My solution (Score:3, Interesting)

    by Bombula ( 670389 ) on Monday September 11, 2006 @06:05AM (#16079726)
    Crazy as it sounds, I solve this problem with both my film and digital cameras using an amazingly sophisticated trick called bracketing.

    Seriously. With bracketing you simply take multiple shots at different exposures in quick succession. Most modern cameras with computer controls offer automated bracketing functions. And for compositing afterwards there's a nifty program called Photoshop...

    • Re:My solution (Score:5, Insightful)

      by Eivind ( 15695 ) <eivindorama@gmail.com> on Monday September 11, 2006 @06:18AM (#16079753) Homepage
      This solves only half the problem.

      After capturing the image, you need to display it somehow, or else there's not much point to the exercise.

      Current screens and prints have a tiny dynamic range, on the order of 1000:1

      So, once you've captured that image, where the brightest pixel is a million times brighter than the darkest pixel, how are you going to show it?

      There's only one answer: compress the range, that is, map your numbers (in range 1 - 1000000) to much smaller numbers.

      Problem is, now you've got terrible contrast in the midtones, because compressing the range compresses that part of it too. So, assuming the monitor can display 1000 different brightnesses, you end up with a picture where the brightest pixel in a face is say 404 and the darkest pixel is say 397. Which makes the face essentially monotone.
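
      The arithmetic behind that point, spelled out (the specific luminance numbers are made up purely to match the 404/397 example above):

        scene_max   = 1_000_000   # brightest pixel in the captured scene, relative units
        display_max = 1_000       # distinct brightness levels the monitor can show
        scale = display_max / scene_max

        face_bright = 404_000     # hypothetical brightest face pixel
        face_dark   = 397_000     # hypothetical darkest face pixel
        print(face_bright * scale, face_dark * scale)   # 404.0 397.0 -- about 7 levels for a whole face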

      • by hparker ( 41819 ) on Monday September 11, 2006 @10:07AM (#16080593)
        Actually, many people have studied the problem of displaying high dynamic range (HDR) images on lower dynamic range devices. In fact, it's a whole field of study: tone mapping. Many PhDs in computer graphics have been awarded for solutions to this problem. Modern computer-generated movie effects are made indistinguishable from reality based on these solutions. The solutions all rest on a characteristic of human vision: eyes are great detectors of local differences, but poor detectors of differences separated in either space or time.

        A good place to start looking into this field is the Wikipedia entries http://en.wikipedia.org/wiki/Tone_Mapping [wikipedia.org] and http://en.wikipedia.org/wiki/High_dynamic_range_image [wikipedia.org].
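
        To make the idea concrete, here is about the simplest possible global tone-mapping operator, just in the spirit of the field (a one-liner of my own; the operators the research and the Wikipedia articles describe are local and far more sophisticated):

          import numpy as np

          def tone_map_global(luminance):
              """Compress arbitrarily large scene luminances into [0, 1) for display."""
              L = np.asarray(luminance, dtype=np.float64)
              return L / (1.0 + L)   # bright values roll off gently instead of clipping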

      • Re: (Score:3, Informative)

        by raynet ( 51803 )
        But you don't have to compress the full dynamic range, just part of it (if you want that sort of picture), or you can use "smart" compression that preserves contrast between objects in the image while extending the visible dynamic range (see http://www.cs.huji.ac.il/~danix/hdr/results.html [huji.ac.il]). Also, having the "same" image with different exposures allows you to render an image that has as little under and/or over exposure as possible (see http://www.openexr.com/samples.html [openexr.com]).
        • by Eivind ( 15695 )
          The latter point especially is a very, very good one.

          Despite print (and displays) having contrast ratios only on the order of 1000:1, it'd be a tremendous advantage to have a digital camera that could capture a *LOT* more than that.

          This would allow you to select your exposure later -- before *printing* (or displaying) rather than at the moment you take the photo. Bracketing can do this, sort of, but it's definitely a hack.

          In essence, it'd allow you to first take a photo, and then *afterwards* experiment...

      • by orasio ( 188021 )
        So, assuming the monitor can display 1000 different brightnesses, you end up with a picture where the brightest pixel in a face is say 404 and the darkest pixel is say 397.


        Of course, you would have a hard time calculating the range, if the brightest tone of the face is Not Found (ducks).

        Aside from that, the 1000 different brightness values could be selected according to a logarithmic scale, more suited to the human eye and adapted to its peculiarities. A digital sensor just uses a linear scale, and just...
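
        A toy comparison of linear vs. logarithmic quantization into 1000 levels (my own illustration of that point, not how any particular sensor actually encodes):

          import numpy as np

          def quantize_linear(L, L_max=1_000_000, levels=1000):
              return np.clip((np.asarray(L, float) / L_max * levels).astype(int), 0, levels - 1)

          def quantize_log(L, L_max=1_000_000, levels=1000):
              return np.clip((np.log1p(np.asarray(L, float)) / np.log1p(L_max) * levels).astype(int), 0, levels - 1)

          # A shadow region spanning, say, 50 to 500 units collapses to a single
          # linear level, but spreads across well over a hundred logarithmic levels.
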
        • by Lehk228 ( 705449 )
          You could also leave the data unprocessed during recording and use a large range-adjusting jog wheel during playback to bring emphasis to whatever part the viewer needs to see.
    • That's the first step; next we'll take three cameras and set them up to automatically bracket. That really doesn't work all that well, because there will be situations that are still outside the brackets' range, so let's substitute cameras that auto-expose instead, but put narrower lenses on them, then splice the sub-pictures back into the complete picture. That's what I'd think of when they say a bug's eye: imagine an array of 128x128 cameras taking one picture; very little would ever fall outside the setup's dynamic range.
  • by Lord Aurora ( 969557 ) on Monday September 11, 2006 @06:07AM (#16079732)
    Interestingly enough, the writers of TFA missed the entire idea behind flies' eyes. They talk about motion detection and whatnot, when the real issue is, flies see so well because

    they have a million fucking eyes!

    (Try not to take this post too seriously.)

    • by arun_s ( 877518 )
      I was actually under the impression that compound eyes were poorer than mammalian eyes in spite of being more numerous, but you are correct. A page from everything2 [everything2.com] says:
      Compound Eyes:
      apposition eye: An eye type consisting of multiple ommatidia (lenses) which are each separated by pigment cells, which surround them individually. Having the lenses separate in this way creates the problem of poor photon reception, which led to the development of the superposition eye. The apposition eye is found in diurnal
      • by sm62704 ( 957197 )
        TFA (and another FA I linked earlier) is short on details, but what I suspect they're saying (or rather, not saying; at least not very well) is that essentially there are multiple lenses with each lens set at a different f-stop [wikipedia.org]. Each image could then have parts too dark or light removed, and the resulting multiple pictures combined into one.

        Note that this is only how I imagine the thing could work; I'm as ignorant as anybody here about WTF TFA is poorly trying to say. I'm not sure how a fly's eyes actually work...
    • Re: (Score:1, Interesting)

      by Anonymous Coward
      Perhaps they should try mimicking the human eye's optic nerve instead. For those billions of you out there who haven't paid attention during your lives, our optic nerve is also versatile in exposure - we don't quite suffer from the monotonous exposure syndrome that film and CCDs do. We can actually adjust exposure on several separate areas of our visual field. We don't need a million different eyes to do it, and we don't need to do multiple exposure passes like certain digital cameras have begun to incorporate.
      • by sm62704 ( 957197 )
        Perhaps they should try mimicking the human eye's optic nerve instead. For those billions of you out there who haven't paid attention during your lives, our optic nerve is also versatile in exposure

        That's incorrect. The optic nerve only transmits the images to the brain. The human eye's retina is most likely better than film or CCDs at this*, but it's your iris [wikipedia.org] that adjusts the light entering your eye, just like a camera's f-stop.

        The difference is that with your eye you only look at what you're focusing on...
  • FujiFilm SuperCCD (Score:2, Informative)

    As it was explained to me by a FujiFilm rep (YMMV), this is kinda how their 5th+ generation SuperCCD works. Near-instantaneously, every cell of it adjusts to its own lighting situation by communicating with other cells in the CCD: "borrowing" light from other cells when underexposed and "sending" light to other cells when overexposed.
    • by Viol8 ( 599362 )
      >Borrowing" light from other cells when underexposed and "sending" light to other cells when overexposed.

      Sounds like a recipe for bleed over if ever I heard one.
    • by raynet ( 51803 )
      Dunno if it is the same generation of SuperCCD, but the one I saw just has a hexagonal pixel array with 2 different pixel sizes. The smaller pixels sit between the grid of normal-size pixels and, as they are smaller, they get less light. So the SuperCCD basically sees the image with 2 different exposures, and the camera then processes this information, using the small pixels for information in areas of overexposure and the bigger pixels for other areas. So basically it gives more dynamic range from which to pick the image...
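
      A toy sketch of how that two-photosite trick could be combined in software (my own guess at the idea, not Fuji's actual processing): the big pixels saturate first, so fall back to the small, less sensitive pixels wherever the big ones clip.

        import numpy as np

        def combine_dual_pixels(big, small, sensitivity_ratio=4.0, clip=1.0):
            """Use the big photosite unless it clipped; then scale up the small one."""
            big = np.asarray(big, dtype=np.float64)
            small = np.asarray(small, dtype=np.float64)
            return np.where(big < clip, big, small * sensitivity_ratio)
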
  • All the comments I've read so far seem to think this is about increasing the dynamic range of the sensors - I didn't get that from the article at all. It seems to be something that goes after a conventional sensor.

    It sounds to me like the software equivalent of the old (analog) darkroom technique of "dodge and burn", where, when you printed a negative, you manually exposed dark bits of the image more and light bits less by selectively covering parts of it.
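
    For what it's worth, here's a rough software analogue of dodge and burn (my own sketch, not what the researchers built): estimate the large-scale illumination with a heavy blur, then divide it out, so dark regions get "dodged" up and bright regions get "burned" down.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def dodge_and_burn(luminance, sigma=25, strength=0.7):
          """Flatten large-scale illumination while keeping local detail."""
          L = np.asarray(luminance, dtype=np.float64)
          illumination = gaussian_filter(L, sigma=sigma)          # blurred = local exposure estimate
          correction = (illumination / (illumination.mean() + 1e-9)) ** (-strength)
          return L * correction                                   # brighten shadows, darken highlights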

    Not sure what the motion detection bit is about though...

    • Based on the article saying "individually to adjust to various parts of an image", and some other related media press releases, their idea is to adaptively control the exposure of the pixel sensors so that they don't saturate.

      Other equally non-technical press articles say that they're going to try to put an ASIC "between the camera lens and the image sensor." Now, I'm assuming they're not actually gonna block the light path, but that what they mean is that they're going to use some circuitry to control the...
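
      If that reading is right, the control loop might look something like this in spirit (pure guesswork on my part, not the researchers' circuit): lower the next frame's exposure for pixels that came out near saturation, and raise it for dark ones.

        import numpy as np

        def update_exposure_map(prev_values, exposure_map, target=0.5, gain=0.5):
            """Per-pixel exposure update: nudge each pixel toward a mid-tone target."""
            prev = np.maximum(np.asarray(prev_values, dtype=np.float64), 1e-3)
            new_exposure = exposure_map * (target / prev) ** gain
            return np.clip(new_exposure, 1e-3, 1.0)
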
  • by oneiros27 ( 46144 ) on Monday September 11, 2006 @08:04AM (#16080015) Homepage
    Unless there's a whole lot more going on than the article says, based on what it's talking about, and the example images, it's nothing but a difference movie.

    (you look at changes from one frame to the next, and make a movie of those changes).

    There's nothing new about this -- scientists have been using it for years (if not decades) for instruments they don't have enough data to fully calibrate (e.g. those on spacecraft, where they might not be able to focus on fixed targets to calibrate them in their environment). It's also useful for telling when only small portions of the image are changing, or when it's changing only very slightly relative to the whole image.
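
    A bare-bones version of a difference movie, for illustration (my own two-liner, obviously not the researchers' pipeline):

      import numpy as np

      def difference_movie(frames):
          """Each output frame is just the change from the previous input frame."""
          frames = [np.asarray(f, dtype=np.float64) for f in frames]
          return [b - a for a, b in zip(frames, frames[1:])]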

    Here are some examples:
  • And remember to visit his blog for additional pictures and references [zdnet.com].

    More Roland stories here [slashdot.org] and here [slashdot.org].

  • Progress (Score:3, Funny)

    by Rob T Firefly ( 844560 ) on Monday September 11, 2006 @09:33AM (#16080410) Homepage Journal
    Tomorrow's fly-based digital cameras will be so complex, they'll need more than a standard help file. They'll have a "help meeeeee!" file.
  • I don't know if this is a good idea; it bugs me for some reason.
  • " ... when you're seeing twenty things at a time

    you just can'st slow things down, baby..."

    "...with all those eyes, they're crowdin up my human face

    and all those eyes, TAKE AN OVERLOAD"

  • This technique could be used to develop better video cameras, military target-detection systems and surveillance equipment.

    Uses are listed in reverse chronological order, of course.
    -f
  • I'm not into photography, but semiconductor photolithography equipment (at least the steppers I worked on, which utilized G-line and I-line Hg arc lamps as a light source) used fly's eye lenses. I started working on them 14 years ago and I know they were not brand new then.

    Nikon was the manufacturer of the steppers I worked on.

    I would think fly's eye lenses would have made it into all sorts of other imaging equipment by now. Did it just take this long for someone to figure out that it would help in other...
  • These cameras based on fly's-eye lenses work really well for surveillance tasks. They've resulted in some really excellent photos of the insides of Osama Bin Laden's outside dunny.
