
DARPA's Cortically-Coupled Computer Vision System

BluePariah writes "Wired News has an article on a 'cortically coupled computer vision' system being developed at Columbia University and funded by the ever-curious folks at DARPA. Essentially, it uses the extremely powerful visual recognition ability of the human brain and couples it with a computer's raw processing power to allow a user wearing an EEG cap to filter through scores of digital images at high-speed and pick out something of interest. This has applications in military intelligence, face-recognition, anti-terrorism, and hunting down replicants."
This discussion has been archived. No new comments can be posted.


  • by hanshotfirst ( 851936 ) on Thursday July 13, 2006 @09:06AM (#15712281)
    The TV networks will love this!
    • So, I'm sitting in front of my World of Warcraft game. Does this device have a USB plug?
    • 1) put one of these on a suspect you are interrogating. Hold up the photos: "Ever seen the victim?" Watch whether the EEG shows recognition.

      2) grab a focus group, play them your jingle, TV commercial, or sound bite. Assess subliminal recognition.

      3) video game: good guy or bad guy that just moved too quickly to see.

      4) soldiers on guard duty.

      5) people looking through intel data for links, trying to process more info than their brains can recall.

    • The first commercialization of any new medium is always porn. Therefore, we can look forward to visions of computers coupling with our cortex. Yay!
  • pr0n (Score:5, Funny)

    by kv9 ( 697238 ) on Thursday July 13, 2006 @09:08AM (#15712302) Homepage

    [...] allow a user wearing an EEG cap to filter through scores of digital images at high-speed and pick out something of interest.

    hi-speed pr0n!

    • Re:pr0n (Score:3, Interesting)

      Funny that you say that, but this could be quite interesting for psychological testing. An array of images could be shown: some with great shock value, loving images, landscapes, techno-graphix, an assortment of a few images from as many categories as possible, totaling let's say 6000. Showing each one for a second, that's 100 minutes (a little over an hour and a half), and you can see where interest peaks and see someone's true, unprocessed, and unmasked interests. I betcha most men (and probably wome
      • Re:pr0n (Score:2, Funny)

        The gender of chosen interest in less clothing than would be accepted in public
        Some kind of crazy technological advancement
        An explosion of sorts

        and yes, in that order

        Then I wonder what happens when they are shown an image of a scantily clad female whose backside is exploding with some new technologically advanced orange juice delivery system?

  • by Anonymous Coward on Thursday July 13, 2006 @09:09AM (#15712303)
    So DARPA's invented something else now. How long before Al Gore goes on CNN to claim he invented this all by himself as well?
    • So DARPA's invented something else now. How long before Al Gore goes on CNN to claim he invented this all by himself as well?

      [*Sigh* -- not this again.]

      Al Gore never claimed that he "invented" the internet. In a March 1999 interview, Wolf Blitzer asked Gore what distinguished him from one of his opponents (Bill Bradley) for the Democratic presidential nomination. Gore responded by describing how he "took the initiative" on a number of issues, including "creating the internet". In context, he was talking
  • Next stop... (Score:5, Interesting)

    by Billosaur ( 927319 ) * <(ten.enilnotpo) (ta) (rehtorgw)> on Thursday July 13, 2006 @09:09AM (#15712308) Journal
    Essentially, it uses the extremely powerful visual recognition ability of the human brain and couples it with a computer's raw processing power to allow a user wearing an EEG cap to filter through scores of digital images at high-speed and pick out something of interest.

    Say it with me now... Porn!

    • Great! So now you can watch scads of porn without actually seeing any of it!

      Wait. How is this better than when I used to sneak down into the living room as a kid to watch still-scrambled skin flicks on cable?
      • Re:Next stop... (Score:3, Interesting)

        by Billosaur ( 927319 ) *

        Great! So now you can watch scads of porn without actually seeing any of it!

        No, now you can scan through your entire porn collection and pick out the things you want to see, much faster than doing a search. Imagine hooking this up to your browser and surfing through porn websites brought up by a Google search -- you could find what you're looking for instantly!

        Mind you, I'm sure DARPA didn't have this in mind when they thought it up. They probably want to hunt for troops, missiles, terrorists, etc. Bu

    • Re:Next stop... (Score:5, Interesting)

      by rfischer ( 95276 ) on Thursday July 13, 2006 @10:00AM (#15712630)
      How telling is it that this is moderated "Interesting" rather than "Funny"?
  • by gasmonso ( 929871 ) on Thursday July 13, 2006 @09:10AM (#15712312) Homepage

    Now, I'd imagine that the results would vary from user to user. So would this system require the "right" person for testing and calibration? Very interesting indeed.
  • by dr_dank ( 472072 ) on Thursday July 13, 2006 @09:11AM (#15712321) Homepage Journal
    *Banana clip for your face sold separately
  • by UR30 ( 603039 ) on Thursday July 13, 2006 @09:14AM (#15712332) Homepage
    Surprise: I thought that the human visual system was way superior to existing computational image-processing systems. But I guess this technology switches the roles as well, using human brains as co-processors in surveillance and security applications. Any volunteers for this?
  • I may be missing something here. But isn't this just like the security guard pressing a button when he sees something suspicious? Except you don't have to press a button here.
  • Jumpy, spazzed, feds from a nightmarish barrage of images 6 hours a day.
  • Seems like a first case of positive usage of subliminal messages. I wonder, though, who would accept having his brain fried in such a way.
  • This has applications in military intelligence, face-recognition, anti-terrorism, and hunting down replicants

    I don't know what replicants are, other than this Replicant (but who would want that).

    This has HUGE applications in finding that perfect pr0n pic.
    • Bladerunner
    • by tinkerghost ( 944862 ) on Thursday July 13, 2006 @10:02AM (#15712639) Homepage
      Having publicly admitted to having never watched Bladerunner, I motion that AviLazar's geek license be revoked ...
      Sheesh, next it's going to be 2001, The Time Machine, and Ice Pirates ... where will it end ... think of the children ...
      Wait, wrong argument ...
      • where will it end - check
      • think of the children - skip
      • work of terrorists - skip
      • violation of civil rights - check ^H^H - skip
      • end of civilization - change civ to cult - check
      OK, back....
      To allow this effrontery to continue will undoubtedly lead to the end of Western Culture as we know it, for without due veneration of our classical arts, we shall indeed be doomed to an eternity of Jerry Springer and Teletubbies. Oh, the humanity of it all.
  • by krell ( 896769 ) on Thursday July 13, 2006 @09:20AM (#15712380) Journal
    "and hunting down replicants."

    Hey, they are only guilty of DNA copyright infringement! It's not like it's an actual crime, bud!
  • I've already seen like 20 films where they had this fully working! Pfff, talk about tardy.
  • So who's seen the new Doctor Who series? The news station where people's brains moderate the news feeds and television shows, and the "controller" put in place at birth to moderate all video streams...

    Furthermore, can you say porn?
  • Replicants (Score:4, Informative)

    by Nick Fury ( 624480 ) <> on Thursday July 13, 2006 @09:25AM (#15712419)
    Replicants is a reference to Blade Runner, a movie by Ridley Scott.


    The movie is based on the work of Philip K. Dick. It also stars Harrison Ford in his least favorite role.
  • by tygerstripes ( 832644 ) on Thursday July 13, 2006 @09:29AM (#15712443)
    Researchers at Columbia University are combining the processing power of the human brain with computer vision to develop a novel device that will allow people to search through images ten times faster than they can on their own.

    So, basically the complete opposite of the /. description, to wit:

    it uses the extremely powerful visual recognition ability of the human brain and couples it with a computer's raw processing power

    I picked that up within 5 seconds of clicking the link. Sort it out, editors.

    • Actually, I think they got it right. The point is the ability of the human brain to recognize patterns at a glance. For example, I can look at a thousand different ways to represent the face of a celebrity, and hit a 'yes' or 'no' button almost instantaneously to identify matches. These images would include crisp color photographs, blurry black and whites, caricatures, sketches, silhouettes, etc...maybe even ASCII. Currently the human brain can do this much faster and more reliably than a computer. The
      • I think this is a bad job for someone like me. I have a hard time remembering the names of ex-girlfriends I run into on the street.

        Seriously though, go to and look at how they categorize images. It's called keywords, and it's amazing. People sort the images by looking at them, and then type words to describe the image.

        Interesting research, though; but this sounds more like UI land than new-kind-of-processing land.

  • "This has applications in military intelligence, face-recognition, anti-terrorism, and hunting down replicants"

    Yeah, I've seen this one before ... where the Coyote tried using it to pick out the Road Runner behind a moving train with a slingshot. It didn't work for him.

    But maybe, Coyotes are just funny like that!
  • I hope someone will bring this kind of science to court. God didn't make humans to become slaves of machines.
    Computer hardware that uses your brain, that sounds dangerous. Some people think the radio signals of mobile phones are bad, but this is much worse. What would this do to the human spirit?

    Humans are not a set of tools to be used in computer hardware; this is dangerous technology. It should be the other way around: we should use computers to do our things.

    Don't let someone else use your brain for them in this way.
    • "Computer hardware that uses your brain, thats sound dangerous.....Don't let someone else use your brain for them in this way."

      If paranoid luddites who have delusions of this technology making us into Borg aren't using their brains, why not let someone else make some good use of wasted grey matter?

    • God didn't make humans to become slaves of machines.

      You are right.
      God didn't make humans. It was the FSM with its noodly appendage.
      And we are supposed to be pirates, not slaves, arrrr.
    • I hate to break it to you, but you're using your brain to type a Slashdot post. What does it matter if you use a keyboard or an EEG (or whatever the acronym is)?

      How is it slavery? It's completely voluntary.
    • Comments like this amaze me; they're using what at a basic level is just an interface device, like your keyboard, mouse, trackball, clickwheel, touchpad, or whatever. I'm sure you're using punchcards or DIP switches to enter your post here, because otherwise you might become a slave to the keyboard and mouse!

      Guess what: every interface is just a way to get impulses from your brain to the computer (OMFG, run for the hills!). Whether my fingers happen to be in between doesn't really matter to me. If I co
  • sounds frustrating (Score:3, Insightful)

    by Cycon ( 11899 ) <steve [at] thePr ... com ['lAm' in ga> on Thursday July 13, 2006 @09:40AM (#15712512) Homepage

    Sounds to me a bit frustrating for the user.

    Imagine sitting there for an hour or more, looking at endless streams of boring security footage. Every time something interesting flashed by, the machine would record the brain activity, but the stream would just continue. Say you saw the image of a known terrorist flash by: human nature would make you want to take a closer look, the natural reaction being to pay a little more attention. Unless the stream of images slows down a little when a "hit" is registered, the whole process would be a bit of a tease.

    • If I understand TFA correctly, it captures the moments your brain spots something "interesting", so imagine this scenario:

      1. shopping mall frequented by young women, perhaps near to a beach
      2. security cameras placed a little above head height pointing down a bit
      3. hot day

      90% of the footage will be flagged as "interesting" because of all the cleavage on show
      • You know this touches on an insight of mine during the past year.

        Women don't want you to look at their chest.
        So they put on a shirt with a deep "V" cut showing bare flesh.
        Then they deride you for looking at their chest.

        I think if men wore shirts cut like women's, women would look at men's chests too. It is hard not to look at this big "flesh arrow".
        • Maxo-Texas - I think if men wore shirts cut like women's, women would look at men's chests too. It is hard not to look at this big "flesh arrow".

          Better analogy: If men wore pants cut like women's shirts, with a big v-shaped exposed area just above, well, you know...

          I don't know if women generally find men's chests as interesting as men generally find women's chests.
          • I guess we won't know for sure until we start wearing shirts cut to the bottom of our chests right along the edge of our nipples. B)

            I did get a joke card for a friend once that talked about a guy's face, but he was really buff and shirtless, and inside it had a punchline like *his FACE ... his FACE*, and yup, she'd been looking at his chest.
    • That's just it. The beauty(?) of this system is that it happens prior to consciousness, so you won't want to take a closer look, because you won't 'know' that you saw anything. In fact, it would almost make more sense for the "viewer" to be a different person from the one who reviews the flagged images. I wonder what would happen if you had a few of these hooked up so that the first viewer filtered a bunch of images, then someone else filtered the images flagged by that person, and so forth...
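The chained-viewer idea above amounts to a cascade of cheap binary filters. A minimal sketch, with each "viewer" stood in for by a plain predicate (the real system's EEG-derived interest signal is not public, so the stage functions here are purely hypothetical):

```python
def triage(images, stages):
    """Run items through a chain of fast binary filters, keeping only
    those flagged at every stage -- the 'first viewer filters, the next
    person re-filters the flags' cascade described above.
    Each stage is just a predicate here; in a real system it would be
    some EEG-derived 'interest' score (hypothetical)."""
    survivors = list(images)
    for flagged in stages:
        survivors = [img for img in survivors if flagged(img)]
    return survivors

# Toy example: two 'viewers' with different (made-up) criteria.
images = list(range(10))
stage1 = lambda x: x % 2 == 0     # first viewer flags half the stream
stage2 = lambda x: x > 4          # second viewer narrows the flags
print(triage(images, [stage1, stage2]))   # → [6, 8]
```

Each stage only sees what the previous one passed, so later (slower, more careful) reviewers handle far fewer images than the first.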
  • SETI@brain? (Score:5, Funny)

    by Rob T Firefly ( 844560 ) on Thursday July 13, 2006 @09:45AM (#15712544) Homepage Journal
    How soon until we get distributed-image-glancing teams together, racking up spare brain cycles for high scores and bragging rights?
    • I don't think that having a large amount of unengaged, "idle" brain capacity to waste on such things is worth bragging about. That's what cable television is for, after all.
  • If this technology feeds a series of digital images directly into the brain, what would happen when connected to the brain of a blind person? Would the person, with time, be able to interpret the new information?

    I imagine if this is the case, then connecting it to a camera worn by the user can possibly allow the person to see again.
  • Well, it looks similar but it's got more tin foil.
  • by blackcoot ( 124938 ) on Thursday July 13, 2006 @10:00AM (#15712629)
    IXO (DARPA's Information eXploitation Office) just made awards for their VACE (Video Analysis and Content Extraction) BAA, and this sounds a lot like some of the technologies they were trying to develop through that program. I'll have to do more digging; the article itself is somewhat suspect (some jackass with a Ph.D. in *transport systems* flaunting his ignorance of computer vision isn't exactly a good source to quote). I particularly like the bit about "They are limited in their ability to recognize suspicious activities or events." Turns out that he hasn't read Grimson and Stauffer's (fellow MIT alums) papers. Or, you know, about 20-30% of the computer vision literature.
  • Sometimes we spend hours looking through Getty/Veer/Corbis/iStock stock photo collections looking for 'just the right image'.

    This could cut it down to minutes. I'd just sketch out the image I was looking for and stare at it for a few... then turn on and tune out for a few, voila... instant banner ad ;-p I mean communications platforms and revenue enhancement strategies...
  • They have already hooked a device up to sex offenders' penises to measure arousal, a penile plethysmograph (PPG), and shown the subjects various pr0n images to determine interest in particular photographs. Switching this to an EEG is only a minor change, the results bypassing the subject's ability to control a physical response. Unfortunately for researchers, using an EEG, the subject must confirm each individual EEG result as positive or negative, because the subject's EEG responses are not universal eno
  • by Comboman ( 895500 ) on Thursday July 13, 2006 @10:25AM (#15712763)
    Wired News has an article on a 'cortically coupled computer vision' system being developed at Columbia University and funded by the ever-curious folks at DARPA.

    Don't tell the MPAA! By feeding digital images directly into the brain of the viewer, they've finally managed to get rid of that nasty analog hole [] that pirates are always exploiting.

  • Now all we need is the technology from Spaceballs where videos can be released before the movie is finished. Have people watch security videos from the future and just have Tom Cruise round up the bad guys. Voila, no more crime.
  • It seems that they are using a variant of an EEG BCI which relies on what's called the P300 or oddball response. I've seen it implemented for things like a word speller: an EEG cap is placed on a subject, a series of letters is flashed up on a screen, and when the subject sees the letter they want, the P300 spikes and the system picks that letter. The problem I see in using this for what DARPA wants is that the P300 will spike when you notice a picture of someone who looks sorta like your target (no real problem
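The P300 speller described above boils down to averaging stimulus-locked EEG epochs and scoring the post-stimulus window where the P300 is expected (roughly 250-400 ms). A rough sketch with synthetic data; the sampling rate, window, and the simulated "target bump" are illustrative assumptions, not values from the article:

```python
import numpy as np

def p300_score(epochs, fs=250):
    """Average EEG epochs time-locked to a stimulus, then measure the
    mean amplitude in the ~250-400 ms post-stimulus window where the
    P300 component is expected.

    epochs: array of shape (n_trials, n_samples), one row per flash
    fs: sampling rate in Hz (assumed)
    """
    avg = epochs.mean(axis=0)                    # averaging suppresses noise
    lo, hi = int(0.25 * fs), int(0.40 * fs)      # 250-400 ms window
    return avg[lo:hi].mean()

# Toy speller: flashes of the "wanted" letter carry a simulated P300 bump.
rng = np.random.default_rng(0)
fs, n_samples = 250, 125                         # 500 ms epochs
scores = {}
for letter in ["A", "B", "C"]:
    epochs = rng.normal(0, 1, size=(20, n_samples))
    if letter == "B":                            # pretend the user wants "B"
        epochs[:, int(0.25 * fs):int(0.40 * fs)] += 2.0
    scores[letter] = p300_score(epochs, fs)

picked = max(scores, key=scores.get)
print(picked)   # → B
```

Averaging over many flashes is what makes this workable: the P300 is tiny relative to background EEG on any single trial.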
  • by Doc Ruby ( 173196 ) on Thursday July 13, 2006 @11:31AM (#15713203) Homepage Journal
    This device is the Matrix. I never believed that movie's BS about needing humans' electrical energy. It needs our software library.

    To hack through captchas for porn. It really is the hive mind.
  • We are not aware of all of the activity that occurs in our brains, but the EEG can read this activity. It is hard enough to eliminate human bias in our conscious mind. How do we know that our subconscious, assuming that is what this EEG is reading, is not as biased as, or even more biased than, our conscious mind?

    What else about a person can this EEG cap measure? Can one correlate what one is looking at with what one is thinking? Will there be a measurable response if the person looks at another individual and find
  • At what point do we say enough is enough? Who the hell is stupid enough to allow their employer to put a data feed directly in their brain?
  • by wanax ( 46819 ) on Thursday July 13, 2006 @02:05PM (#15714102)
    One of the basic tasks our visual system is much, much better at executing than computers is visual search. The basic 'experiment' is that you are asked either a question like "Is there a red car in this picture?" (natural images) or "Are all the lines the same orientation?" (more traditional psychophysics). Then images are displayed, and our response time is recorded. Early experiments in the visual search paradigm appeared to show that there were two classes of search stimuli: those that 'pop out' and those that require incremental search. The difference is that in pop-out conditions, increasing the number of elements in the image does not increase search time, while in incremental search it does, at XX ms/element... and generally it takes about twice as long for us to respond if there is no positive element.

    One main theory of how our brain does this, Feature Integration Theory by Anne Treisman (or the similar but more recent Guided Search by Jeremy Wolfe), which many computer vision algorithms try to copy, asserts that there are various feature maps for certain quantities like color, orientation, depth, spatial scale, etc. These are combined into a saliency map, which is a weighted average of the feature maps. Things pop out when the target has high salience compared to the background: for example, it's easy to find the red T in a background of blue T's, but not so easy to find the red L in a background of red T's and blue L's.
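The saliency-map idea can be caricatured in a few lines: build one conspicuity map per feature, take a weighted average, and a pop-out target is the peak of the combined map. The feature values and weights below are invented purely for illustration:

```python
import numpy as np

def saliency(feature_maps, weights):
    """Combine per-feature conspicuity maps (color, orientation, ...)
    into a single saliency map as a weighted average, in the spirit of
    Treisman-style feature integration."""
    total = sum(w * m for w, m in zip(weights, feature_maps))
    return total / sum(weights)

# 1-D toy display: 8 items, item 5 is a red T among blue T's.
color       = np.array([0, 0, 0, 0, 0, 1, 0, 0], dtype=float)  # red stands out in color
orientation = np.zeros(8)                                       # all items share one shape
smap = saliency([color, orientation], weights=[1.0, 1.0])

target = int(np.argmax(smap))
print(target)   # → 5: the red item wins the saliency competition
```

In the red-L-among-red-T's-and-blue-L's case, neither the color map nor the shape map alone singles out the target, so no single peak emerges, which is exactly why that search is slow.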

    Now, it appears from the article, and from what little they say on the lab webpage, that they are trying to measure EEG responses (which are quite crude) during rapid serial search tasks in order to prime a computer vision object recognition system, which is then only run on those images humans appear to find sufficiently salient when they see them. This saves the time of a person actually having to search and make a decision about an image, while utilizing the visual system's incredibly powerful early 'pre-attentive' form and object binding resources.

    If there is a sufficiently high signal from the EEG to do that after, say, 100 ms display times, then I think this could be useful for certain types of search task. However, due to the time courses present in most visual search experiments, and the fact that it's not totally apparent how efficient certain parts of our saliency system actually are (check out Jeremy Wolfe's reviews for more data), I'm totally unconvinced that this type of system will give you a sufficient signal-to-noise ratio to be worth using for anything. This is especially true because of another perceptual phenomenon in search, which is that your error rate basically shoots up exponentially as the probability of a positive goes down. That is to say, in an experiment where a normal observer would have a 99% accuracy rate with 50% of the images containing the target, this drops to 60% accuracy at 10% target-positive, and only 30% accuracy at 1% target-positive (numbers fudged, but ballpark, since I'm too lazy to look them up). If this has its roots in insufficient priming in early vision, for example, then this entire scheme flops just as badly as using a human for tasks like finding the bomb in the X-ray image of the suitcase... and we haven't even started to get into issues of the person not actually looking at the image because they're bored, etc.

    As it is, DARPA is spending a mere 758k, which is chump change for them, and there's a decent chance that it'll work in certain specific but useful circumstances which may warrant the research.
  • I've always thought we'd find that the network guards on the boat in Ghost in the Shell 2: Innocence were actually fairly close to what network attack and defense would be like in the future. Those helmets are actually EEG machines with displays for the users. They are fed a view based on the dynamics of the network, and the EEG lets the system know when they see something important.
