
Video Surveillance System That Reasons Like a Human 143

An anonymous reader writes "BRS Labs has created a technology it calls Behavioral Analytics which uses cognitive reasoning, much like the human brain, to process visual data and to identify criminal and terroristic activities. Built on a framework of cognitive learning engines and computer vision, AISight provides an automated and scalable surveillance solution that analyzes behavioral patterns, activities and scene content without the need for human training, setup, or programming."
  • Of course (Score:5, Insightful)

    by sopssa ( 1498795 ) * <sopssa@email.com> on Monday September 21, 2009 @06:08PM (#29497305) Journal

    Nothing can go wrong!

    • Re: (Score:2, Funny)

      by Anonymous Coward

      Nothing can go wrong!

      Monday September 21, 6:08 PM > System Pawn, ID:1498795, "sopssa" making sarcastic joke regarding system. Execute Order 66. Will be a huge success.

    • Wow! It sounds almost too good to be true!

      Wait, what's that you say?

    • Re:Of course (Score:5, Insightful)

      by bugi ( 8479 ) on Monday September 21, 2009 @08:10PM (#29498411)

      The best of both worlds! Human stupidity plus the compassion of a machine.

      • by bugi ( 8479 )

        Having thought about it for a day, perhaps I should've said: thinks like a human, feels like a machine.

    • Fire all the cops and judges, convert them all to prison guards, and we'll make the city a jail.
    • Nothing can go wrong!

      Isn't that what they said about Skynet?

  • Proof? (Score:3, Interesting)

    by FlyingBishop ( 1293238 ) on Monday September 21, 2009 @06:10PM (#29497313)

    Source or it doesn't work.

    • Re: (Score:1, Funny)

      by Anonymous Coward

      Source? Come on man, CAMERAS! Tits or doesn't work.

    • Re:Proof? (Score:5, Insightful)

      by Jurily ( 900488 ) <jurily&gmail,com> on Monday September 21, 2009 @09:06PM (#29498925)

      Mod parent up. Said AI first needs to distinguish between "activity" and "the wind blew a leaf across the screen". Then you need to distinguish between "lights a cigarette" and "lights the fuse on dynamite".

      So, if it already does all that, just one more question: how do you define "criminal and terrorist activities" programmatically when not even the law is clear? Even shooting people can be a non-criminal act.

      • Re: (Score:3, Insightful)

        It must first differentiate between "time flies like an arrow" and "fruit flies like a banana". Then, and only then, can the system be trusted.

      • Re:Proof? (Score:4, Funny)

        by beav007 ( 746004 ) on Monday September 21, 2009 @09:32PM (#29499145) Journal
        What I want to know is: whose cognitive reasoning is it based on, exactly?

        Male?

        Ooh, low cut top! Zoom zoom zoom!
        Wait, the wind is picking up! Initiate scan for pleated skirts!

        Or female?

        Ooh, there's a sale over there! *zoom* Do they have my colour?
        Wait, that handbag's a knockoff! *Dials DHS*

      • Agreed!

        TFA and the company's website don't mention anything at all about what the cognitive algorithms learn or how they do it.

        At my lab we use "cognitive algorithms" to do things like adjust for variable lighting conditions, learn new visual patterns, and estimate position/orientation of unknown objects. I'm told these algorithms are cutting edge, but at present they are way too fragile/clunky to be used in the real world. I fail to see how a cognitive algorithm can be taught what is a "criminal and ter

  • by xmas2003 ( 739875 ) * on Monday September 21, 2009 @06:10PM (#29497317) Homepage
    A little more info from the BRS Labs website: [brslabs.com]
    "The system takes the input from existing video security cameras (no need to change equipment); recognizes and identifies the objects in each frame and passes that data to its Machine Learning Engine. There, the system 'learns' what activity is normal for each unique area viewed by each camera. It then stores these LEARNED memories, much the same way the human brain does, and refers back to them with any and all future activities observed by the camera. If any behavior falls outside of the norm, alerts are generated."

    Sounds impressive, but will the algorithms be sophisticated enough to watch grass grow [watching-grass-grow.com] and realize that it's normal behavior for the garbage truck to come by weekly [watching-grass-grow.com] ... but still send an alarm when a burglar steals your stuff! [grisby.org]
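    A minimal sketch of the kind of per-camera "learned normal" model the quoted description implies: keep running statistics of a simple activity feature and alert on large deviations. The feature used here (fraction of changed pixels per frame) and the alert threshold are assumptions; BRS Labs does not document what AISight actually measures.

```python
import numpy as np

class NormalityModel:
    """Running mean/variance of one activity feature (Welford's method)."""

    def __init__(self, threshold_sigma=4.0, warmup=100):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0                      # sum of squared deviations
        self.threshold_sigma = threshold_sigma
        self.warmup = warmup

    def update(self, value):
        """Fold one observation into the running statistics."""
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)

    def is_anomalous(self, value):
        """Flag values far outside what this camera has seen so far."""
        if self.n < self.warmup:           # still learning; never alert early
            return False
        std = np.sqrt(self.m2 / (self.n - 1))
        return abs(value - self.mean) > self.threshold_sigma * max(std, 1e-9)

def activity_level(prev_frame, frame, diff_threshold=25):
    """Fraction of pixels that changed noticeably between two gray frames."""
    return np.mean(np.abs(frame.astype(int) - prev_frame.astype(int)) > diff_threshold)
```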
    • by RightSaidFred99 ( 874576 ) on Monday September 21, 2009 @06:16PM (#29497375)

      My guess is it applies a few simple heuristics to analyze the behavior and the real trick is identifying the behavior.

      Example: In an alley behind a hotel people frequently walk out a door, put something in a container, and walk back in. This becomes "normal". Then someone goes out back and starts smoking. Whoops, wtf is this! Alert, alert. OK, so this gets flagged as OK a few times. The system decides it's OK. However, when two people hold a third at gunpoint and linger in an area of the alley not usually used for smoking, this would now trigger as abnormal.

      Another thing it might notice is the same person coming back to the front of a convenience store, waiting a minute, then leaving, then coming back again. Most people only walk in, walk out - this is abnormal.

      So it won't tell you someone is burglarizing you, but it might focus your attention on a camera where something could be happening. I'd assume it would get better over time as things were flagged "ok" or "not ok", but at best it would provide some simple pre-filtering to focus human attention on scenes that are slightly more likely to be "interesting".
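      The "flagged as OK a few times" feedback loop described above can be sketched in a few lines; the pattern keys and the clear-after count are invented for illustration.

```python
from collections import defaultdict

class FeedbackFilter:
    """Mute anomaly patterns that operators have repeatedly cleared."""

    def __init__(self, clear_after=3):
        self.ok_counts = defaultdict(int)
        self.clear_after = clear_after

    def should_alert(self, pattern_key):
        """Keep raising a pattern until it has been cleared enough times."""
        return self.ok_counts[pattern_key] < self.clear_after

    def mark_ok(self, pattern_key):
        """Operator reviewed the alert and found it harmless."""
        self.ok_counts[pattern_key] += 1

# The smoker in the alley gets flagged, cleared three times, then muted.
f = FeedbackFilter()
for _ in range(3):
    if f.should_alert("alley:loitering"):
        f.mark_ok("alley:loitering")
print(f.should_alert("alley:loitering"))   # False: learned as normal
```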

      • Re: (Score:2, Insightful)

        by mhajicek ( 1582795 )
        So it's a video ZoneAlarm. I imagine the first stretch of operation would be rather labor-intensive.
      • Re: (Score:3, Interesting)

        No way that it's as complex as that. My guess is that it gets used to linear motion like cars driving by and develops a tolerance for humans walking by on the way to work, but when there's lots of irregular motion in different directions (i.e. not just from one side of the frame to the other) there's a good chance something unusual is happening.

        Your system lacks the element of "no human training" mentioned in the summary.
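        The direction-based heuristic guessed at here is easy to sketch: bin observed motion directions per camera and score a window of motion by how much of it falls in rarely-seen bins. The bin count and the Laplace prior are assumptions.

```python
import numpy as np

class DirectionModel:
    """Histogram of motion directions; unusual directions score high."""

    def __init__(self, bins=8):
        self.counts = np.ones(bins)        # Laplace prior: no bin impossible

    def _bin(self, angle):
        return int((angle % (2 * np.pi)) / (2 * np.pi) * len(self.counts))

    def update(self, angle):
        """Record one observed motion direction (radians)."""
        self.counts[self._bin(angle)] += 1

    def surprise(self, angles):
        """Mean negative log-probability of a window of directions."""
        probs = self.counts / self.counts.sum()
        return -np.mean([np.log(probs[self._bin(a)]) for a in angles])
```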

        • Not needing human training to function, and functioning much better with human training are two separate things. Just like speech recognition. It will work without training, but there are still cases where it needs training.

          I didn't think what I described was that crazily complex. If the camera is stationary and you line everything up on a grid line and do edge detection to find outlines of people you can probably implement something like this. I'm just pulling stuff out of my ass, though, it's certainl

      • by Jurily ( 900488 )

        Another thing it might notice is the same person coming back to the front of a convenience store, waiting a minute, then leaving, then coming back again. Most people only walk in, walk out - this is abnormal.

        So now I'm verboten to look at the lottery numbers on the door each Sunday morning? Either you flag every alarm as OK or people will get pissed off that you question them about perfectly legal activities.

        This might just be the thing needed to finally get the cameras off the streets.

      • by Memroid ( 898199 )

        Alert, alert. OK, so this gets flagged as OK a few times. The system decides it's OK.

        Doesn't this contradict what the summary says? "without the need for human training"

      • Re: (Score:1, Informative)

        by Anonymous Coward

        "interesting" is context-sensitive. So filtering will not work: it either triggers false positives (normal behaviour that is unknown or misinterpreted) until the threshold is raised enough. Then it will create false negatives, e.g. flag an actual criminal act as 'normal' and hide it from the supervisors eye for workload reasons.
        Human behaviour is more complex and situation depended than actual AI could handle. This whole article reads like bullshit bingo or a pr-campaign for some insider stock trading scam

    • So if it watches Gary, Indiana, murder and mayhem will be programmed in as normal?

    • Does that which we call pattern recognition, by any other name, stink as badly?
    • by droopycom ( 470921 ) on Monday September 21, 2009 @07:08PM (#29497845)

      If it really thinks like a human, the main feature will be automatically uploading videos of people having sex in elevators to the web.

    • by CAIMLAS ( 41445 )

      That sounds kinda useless in an area where terrorist-like activity might be taking place on a regular basis. For instance, somewhere where there are a lot of guns, people milling about and doing drills, and the like. Or rousing motivational speeches w/formations. Sounds like most police departments, shooting ranges, military training grounds, or for that matter police hangouts.

      I would think targeting specific known-terrorist activities (e.g. we know a lot of them are Muslims, so wearing their head gear and/o

  • by Anonymous Coward

    That was a press release for the company's product. It has no reliable or interesting information whatsoever.

  • So I guess this means that the camera is going to harass people taking photos now?
    • Re: (Score:3, Insightful)

      So I guess this means that the camera is going to harass people taking photos now?

      Even better. It will call some rentacops and tell them that there's "suspected terroristic activity" taking place, and suddenly a tourist will get a taser up some orifice because "the computer" already labeled him a terrorist and therefore Osama's second in command.

  • by Jason Pollock ( 45537 ) on Monday September 21, 2009 @06:13PM (#29497337) Homepage

    It's a press release pretending to be journalism.

    If it doesn't need training, how does it define "terroristic activity"? Is it the "I'll know it when I see it" definition?

    The article seems to indicate it works like a Bayesian filter on the video - pointing out things that aren't typical for the camera.

    Much like any automated system that is supposed to filter out false positives, it is probably pretty easy to train either the operators or the system itself to throttle back the sensitivity to a point where it ignores everything.
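    The spam-filter analogy can be made concrete: score discretized event features by how rarely this camera has seen them, the way a Bayesian spam filter scores unfamiliar tokens. The feature names below are purely illustrative.

```python
import math
from collections import Counter

class CameraFilter:
    """Naive per-feature rarity scoring for one camera."""

    def __init__(self):
        self.seen = Counter()
        self.total = 0

    def observe(self, features):
        """features: e.g. ("zone:door", "hour:14", "object:person")."""
        self.seen.update(features)
        self.total += 1

    def atypicality(self, features):
        """Sum of -log P(feature), add-one smoothed; high means unusual."""
        if self.total == 0:
            return 0.0
        return sum(-math.log((self.seen[f] + 1) / (self.total + 1))
                   for f in features)
```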

    • Can we please see some examples of pornography detected by this system?

      That seems to be something people can identify on an "I'll know it when I see it" basis. At least Supreme Court justices can...

    • Re: (Score:1, Informative)

      by Anonymous Coward

      With good heuristics, some Bayesian analysis (is there an object or not?) or neural nets... I can actually imagine a lot of possibilities here. Suppose I've got a gate at an airport--traffic should all be going in one direction. Anything going the other way--anomaly. I could imagine ML systems picking that up fairly easily.

      Similarly if I've got a physically secured compound with double fences (the first fence is for screening--the second is your secure perimeter), foot traffic in the secure area should be

      • The trick is to train it to ignore you.

        So, if you want to enter an area with a backpack, you start walking in with a hump, and make the hump bigger with every entry. Better yet, give out a bunch of free backpacks (in increasing numbers) over a period.

        A human operator would go "WTF", a machine would simply recognise it as normal and increase the threshold.

        You want to walk back through a door? Start by looking over your shoulder as you walk through it normally.

        That fence? Add an automated fence wiggler. I

    • Re: (Score:3, Insightful)

      Comment removed based on user account deletion
  • by BJ_Covert_Action ( 1499847 ) on Monday September 21, 2009 @06:13PM (#29497341) Homepage Journal
    ...that somewhere else in the world, there is a young, badass mother fighting off robots from the future that were designed to look like my Governor in a heroic attempt to destroy this new technology along with her scrappy, but as-of-yet slightly immature son....

    At least, I think that's where we are in the timeline, right?
  • It's a lie (Score:4, Insightful)

    by blhack ( 921171 ) on Monday September 21, 2009 @06:14PM (#29497349)

    The "machine learning engine" is a "datacenter" (warehouse) full of cheap African laborers who are all watching the cameras.

    (this is a joke, it just isn't funny, and it is meant to illustrate a point. See the next line):
    God/nature/FSM/evolution/al gore/$deity has done a pretty damn good job at building our brains, why are we trying to reinvent that wheel in a computer?

    • Re: (Score:1, Informative)

      by Anonymous Coward

      So it's a subsidiary of Spinvox [bbc.co.uk] then?

    • by Ponga ( 934481 )

      God/nature/FSM/evolution/al gore/$deity has done a pretty damn good job at building our brains, why are we trying to reinvent that wheel in a computer?

      We're lazy.

      Next!

    • by evanbd ( 210358 ) on Monday September 21, 2009 @07:26PM (#29498007)

      The "machine learning engine" is a "datacenter" (warehouse) full of cheap African laborers who are all watching the cameras.

      (this is a joke, it just isn't funny, and it is meant to illustrate a point. See the next line): God/nature/FSM/evolution/al gore/$deity has done a pretty damn good job at building our brains, why are we trying to reinvent that wheel in a computer?

      Because the owners of those brains get all whiny when you try to stick them in jars and make them solve the problems you want to solve, rather than sitting around watching porn? Really, sticking a bunch of brains in a 19" rack is harder than you'd think.

    • by dissy ( 172727 )

      God/nature/FSM/evolution/al gore/$deity has done a pretty damn good job at building our brains, why are we trying to reinvent that wheel in a computer?

      Because one could imagine that if we actually do have the ability to reinvent a mind, we might also be able to improve upon it. And we will not know whether we can create a mind until we try to do so.

      If that is possible, then that better mind could arguably invent a mind better than itself, that much better again than ours.
      Humankind would no longer be the bottleneck of technological achievement.

      Now if we could also then just manage to not be stupid like usual and piss those minds off, they migh

      • by Dog-Cow ( 21281 )

        I would love to create a mind. Unfortunately I have not yet found a girl willing to indulge me.

    • by izomiac ( 815208 )
      Presumably because there simply aren't enough eyes to watch everything that people want watched. And even if there were, who would be left to watch the watchmen?

      Best case scenario: this technology is used to draw the security guard's attention away from late night TV when something actually happens.
      Worst case scenario: pick your nose in an alley and have the police automatically alerted.
  • Eagle Eye is not a blueprint for your surveillance computers. Thanks.
    • I thought it was a pretty awesome AI; we just need to make sure that when we build it, we don't give it an ancient document written by revolutionaries, and instead have programmers write it some "no murdering your admins" rules.

  • yes, but... (Score:4, Funny)

    by gandhi_2 ( 1108023 ) on Monday September 21, 2009 @06:17PM (#29497379) Homepage
    ...does it run racial profiling?
    • Haha, I wonder if a young "person of color", acting like a typical inner-city person of color, would get flagged on cameras in a white neighborhood because he "wasn't acting normally"?

      I bet that in fact he would.
      • As long as detection is based on behavior and not skin color, there's nothing wrong with it.

        If "acting like a young person of color" involves trespassing or loitering the system should flag it just as readily as anything else. Assuming that you're talking about legal behaviors, again, I don't see a problem. This system doesn't know anything about race, and is perfectly ignorant of it. This is an example of a "color-blind" system. There are people who claim to want a "color-blind" society, yet they alwa
        • See, that's where you are wrong. Or almost certainly wrong, anyway.

          If the system uses heuristics or anything of the sort to determine whether something is outside "normal" behavior in an area, then something that does not appear normal would -- by definition -- get flagged. Objectively, the system has no more idea what crime is than a newborn baby.

          I have been in areas where, as I mentioned, "young inner-city people of color" tend to behave in ways that are significantly different from their caucasian
  • Ocean's Thirteen had a system like that.

  • Human Intelligence (Score:5, Insightful)

    by Reason58 ( 775044 ) on Monday September 21, 2009 @06:21PM (#29497421)
    What a great way to absolve any personal responsibility. Detained wrongfully? Not our fault, the machine said you were moving like a terrorist.
    • Also, I wonder how well these systems will handle contextual clues that people pick up on automatically? Is that person moving in a suspicious manner because they are a terrorist, or because they are just carrying some heavy bags? Are they going to blow the place up, or are they just rushing to catch up with someone?
      • by radtea ( 464814 ) on Monday September 21, 2009 @07:46PM (#29498189)

        Also, I wonder how well these systems will handle contextual clues that people pick up on automatically?

        "Contextual clues" like a dark-skinned guy in London rushing to catch the Tube wearing a ski jacket on a warmish day?

        Those are the kind of "contextual clues" that people use all the time to make lethal misjudgements, and in the case at hand resulted in a completely innocent Brazilian who was legally in Britain going legally about his legal business being murdered by police.

        Given how badly humans are known empirically to suck at making these kinds of judgments only an arrogant idiot would think of programming a machine to emulate us. But of course, arrogant idiots are incapable of adjusting their beliefs in response to empirical data, so they probably aren't even aware of how badly they suck at this.

        • Re: (Score:3, Insightful)

          by mcrbids ( 148650 )

          Those are the kind of "contextual clues" that people use all the time to make lethal misjudgements, and in the case at hand resulted in a completely innocent Brazilian who was legally in Britain going legally about his legal business being murdered by police.

          No system lacking full disclosure of all information is perfect. People, by definition, *have* to make judgments without enough information to be sure. Yet we *have* to be sure.

          Sometimes this results in mistakes. And sometimes, those mistakes add up to

    • Re: (Score:1, Insightful)

      by Anonymous Coward

      Well, how's "our software" different from "our training", "our briefing", etc? I mean police are supposed to follow detailed regulations, not act as judges. It's only an officer's 'fault' right now if they don't follow regs.

    • Re: (Score:3, Funny)

      by LifesABeach ( 234436 )
    So while facing away from the camera, I see what looks like a quarter, and while bending over, I have this irresistible urge to scratch myself where my back pocket is. At least that's what the camera will show, on CNN, and if I have my way, YouTube as well. Convincing my wife that's all it was is going to be rough.
  • Scary (Score:2, Insightful)

    Human judgment isn't accurate enough to distinguish between an actual terrorist and someone who merely looks like one. Why would anyone expect good results from a machine emulating a judgment that isn't reliable in the first place?
  • by mjensen ( 118105 ) on Monday September 21, 2009 @06:26PM (#29497473) Journal

    Much like detecting terrorists by facial recognition, this is vaporware until they publish some numbers.

    I once got a misdirected sales call from someone proud that his facial recognition system was 70% accurate. He had no idea how much of a pain in the ass his system is when it's wrong, and for the airport security business he was trying to win, even 90% accuracy is considered terrible.

  • Maybe they shouldn't have used human cops as their behavior model after all.

  • Sick and tired (Score:3, Insightful)

    by WillRobinson ( 159226 ) on Monday September 21, 2009 @06:41PM (#29497613) Journal

    Really, I am sick and tired of the surveillance realm. If anybody really wants to do something nefarious they will make sure the cameras don't work: simply pull them down, spray them with paint, or whatever. The authorities will not come running. After-the-fact usage is good, but really it doesn't stop any crime, even random ones. We are the ones funding this and do not even have a say in it.

    • The ones finding all the cameras and covering them up are not necessarily the intended target of said cameras. They have value as a deterrent too. The chance of an organized robbery is hopefully small, but any business is likely to have small-time shoplifters and people defacing property. Failing all of that, the cameras should pay for themselves through incentives from the insurance company.

      NB: I'm just saying what sounds reasonable to me- I don't claim to know how true my assumptions are.
      • Re: (Score:3, Informative)

        I agree that in business it's OK, but really, every intersection in Texas has four cameras now, even in small towns, paid for by grants from DHS.

        In the Dallas Metroplex, they have also installed networking which is supposed to be used by emergency workers (this makes me laugh, as when there is a real disaster they will not have power), but I am sure it will also be used by the cities to read power and water meters in the future.

        Taking even a small failure rate, the cost of maintenance will be beyond what t

  • This is just one more step along the path that I and many others have predicted. Please reference the thread on "crowdsourcing" video analysis: http://slashdot.org/comments.pl?sid=1277749&cid=28429931 [slashdot.org]
  • The central processor in this thing isn't a computer at all... it's a dead salmon!

  • Boobies! (Score:3, Funny)

    by pavon ( 30274 ) on Monday September 21, 2009 @06:52PM (#29497725)

    So, it instinctively directs the cameras towards the hot women all the time, distracted from important things it should look at?

  • by D4C5CE ( 578304 ) on Monday September 21, 2009 @06:54PM (#29497741)
    Who, under video surveillance, tend to act rather irresponsibly:
    • Feeling safe(r) when and where they are not, because of the false promise of BB to be watching (over) them.
    • Mostly turning a blind eye on crime (and its victims), as the all-seeing eye of BB and/or "someone (else)" will surely take care of it.
    • Having learned from an early age, out of preference falsification, to show only herd mentality in their desperate attempts to please the watchmen and be seen to obey "like every other good citizen".
    • In the rare instances of courage, not fleeing insurmountable dangers out of the feeling that someone has got to be watching and will send backup any moment now.

    Interestingly, in Europe, after a series of dreadful incidents caught on live video, this is finally being debated on the eve of general elections: http://www.piratenpartei.de/node/920/29268#comment-29268 [piratenpartei.de] - as at the other end of the line, in a situation room (that may be on the next floor or a station away, and yet too far), officers will have to watch events unfold and wish in vain to finally be out there with a gun again (or to have sufficient forces to dispatch), e.g. to stop that attacker they can only videotape and helplessly watch wreak havoc on screen.

  • I can just see the kids in the UK figuring out what kind of innocent activity triggers police reactions. When the flood of false-positives starts, the cameras will be back to being as use[ful|less] as they are today.

  • What are the odds these cameras won't be able to distinguish between people fighting and people shagging?

  • "That Reasons Like a Human"

    Spends most of its time ogling - wait, there's a honky in this black neighborhood, must be up to no good.

  • by netsharc ( 195805 ) on Monday September 21, 2009 @07:12PM (#29497867)

    to comply!

    No one has talked about ED-209 [3dblasphemy.com] yet?

  • Seriously, I would love to see some of this surveillance tech turned back on the government. Install this thing in congress and train it to watch for corruption. It would probably fill up a massive disk array in a couple hours with positive hits.
    • Install this thing in congress and train it to watch for corruption. It would probably fill up a massive disk array in a couple hours with positive hits.

      I understand it can only detect abnormal events. Now, in many parliaments, wouldn't that mean alerting on the exceptionally rare cases of non-corruption? You know Professor Lessig changed his focus of research for a reason [wikipedia.org].
      Whatever the scene may look like to an "intelligent" camera, "Lobbyist walks into lawmakers' offices and leaves without the two black suitcases s/he brought" is probably not an instance of someone planting bombs (at least unless the pictures make it to the pages of the Post).

      To quote Ambrose Bi

  • by taustin ( 171655 ) on Monday September 21, 2009 @07:27PM (#29498019) Homepage Journal

    you give us another billion dollars to finish it.

    Yeah, right.

    Since real incidents are rare, a 1% error rate will produce a hundred times as many false positives - all innocent people accused of a crime - as real positives. And a 20% error rate is far, far more likely.
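    The arithmetic behind that claim, made explicit; the one-in-10,000 incident rate is an assumed figure, and the roughly 100:1 ratio follows from it plus the 1% error rate.

```python
events = 1_000_000
incident_rate = 1 / 10_000      # assumption: one real incident per 10k events
false_positive_rate = 0.01      # the 1% error rate from the comment

real = events * incident_rate                   # 100 true incidents
false = (events - real) * false_positive_rate   # ~10,000 false alarms
print(f"{false / real:.0f} false alarms per real incident")   # ~100
```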

    Scams like this are the reason why you have to show up at airports three hours early now.

    Is it smart enough to know that "terroristic" isn't a real word, at least?

    • "all innocent people accused of a crime"

      Why? Even if the system flags the people as criminals, the operators will still be able to see the recordings, and then decide if it was a crime or not, no?

      • Re: (Score:2, Insightful)

        "all innocent people accused of a crime"

        Why? Even if the system flags the people as criminals, the operators will still be able to see the recordings, and then decide if it was a crime or not, no?

        If only I had a nickel for every time someone took something off the computer as gospel (figuratively) and could not be swayed because "the computer doesn't make mistakes"... Think PHBs and customer service reps. Or maybe I missed your sarcasm tags... if so, mea culpa.

  • This system does not operate like a human at all. A human operator does not look for signs of terrorist activities. A human operator looks at boobs.

  • So, it makes wild guesses, allows others to tell it how it should be thinking and/or bases vital decisions on obviously false beliefs?

  • I didn't RTFA or This one [slashdot.org], but it looks similar.
  • The entire film crew, actors, and craft people are fried into plasma because they were Acting Like Terrorists.

    It's just another bloated Pentagon pork project of no real value or merit. We see this all the time. It reminds me of avant-garde art, only 10,000x more expensive and twice as pointless. But only half as ugly.

    RS

  • Two things (Score:1, Insightful)

    by Anonymous Coward

    a) Terroristic? Not just terrorist activities?
    b) Terrorism /is/ usually a crime, nothing special.

  • ...uses cognitive reasoning, much like the human brain...

    So how long before this thing figures out how to pork a co-worker on lunch break, record the act on one of the cameras it's supposed to be monitoring, and piss in the boss's coffee?

    I'm betting about three weeks.

  • So basically a "red light camera" for people.

    And like the red light cameras, there's no way to appeal to human judgement: if the camera says you're guilty, you must be guilty unless you can prove you are innocent (for red light cameras, at least in California, that means proving the amber light lasted less than 4.8 seconds).

    I love the presumption of guilt they're slowly building into the system in the name of revenue generation. "The war on terror" has been going on for 8 years now, and they finally arreste

  • To do what AISight does one needs:

    • video software that can track 3-dimensional objects using a 2-dimensional video image. This is a known solved problem.
    • A second layer of software (that uses the first as input) that distinguishes static and moving objects. Static objects form a "background" which can largely be ignored, except for collisions with moving objects and specific human-input exceptions (see the sketch after this list).
    • A rule database. The initial rule database must have many rules about default object behavior and inter
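    A sketch of the second item, separating moving objects from the static background, using OpenCV's stock MOG2 background subtractor; the minimum-area cutoff for ignoring noise (blowing leaves, small shadows) is an arbitrary assumption.

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

def moving_objects(frame, min_area=400):
    """Return bounding boxes of foreground blobs large enough to matter."""
    mask = subtractor.apply(frame)                    # 0 bg, 127 shadow, 255 fg
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadows
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) > min_area]
```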
  • ... because even surveillance is not purely objective. Think about selling it to Islamic countries, for example. Suddenly drinking alcoholic beverages should raise an alert, while marrying a second, third or fourth wife must not. There is a whole new market for detecting hijab head-scarves, or scantily clad women. But even in Western society, a lot needs to be learned. Finns carry their wives one day a year, while a woman carried around by a man is "offensive" in almost any other society.
    Yes, man, there are absolutely ze

  • Now they just need to merge this with a federally required RFID marker to identify you, and the system can simply flag you as a threat automatically whenever you act outside a set threshold determined by a collectively gathered log of your previous day-to-day activities.

  • This is much the same concept as recently presented at ICDSC-2009 in the paper "Abnormal Motion Detection in a Real-time Smart Camera System", based on real-time video analytics and artificial neural networks to autonomously build a model of what is considered normal behavior, subsequently flagging outliers as possible events that require further (human) scrutiny. Thus it acts as an intelligent self-learning classifier/filter that greatly reduces the information stream per camera for human operators.
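    The cited paper uses artificial neural networks to model normality; the same idea can be sketched with PCA reconstruction error standing in for the network (a deliberate simplification): learn a low-dimensional model of normal motion-feature vectors, then flag samples the model reconstructs poorly.

```python
import numpy as np

def fit_normal_model(X, k=3):
    """Learn a k-dimensional subspace from rows of normal feature vectors."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]

def reconstruction_error(x, mean, components):
    """Distance between x and its projection onto the learned subspace."""
    centered = x - mean
    projected = components.T @ (components @ centered)
    return float(np.linalg.norm(centered - projected))

# Train on "normal" feature vectors, then score new ones; big error = outlier.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                 # stand-in for normal traffic
mean, comps = fit_normal_model(X)
print(reconstruction_error(rng.normal(size=10) * 8, mean, comps))  # large
```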
  • "without the need for human training, setup, or programming"

    That's funny, because even young human beings need to learn "good" from "bad" under adult supervision; it's not an automatic process.

  • It can figure out that Little Tiffany [youtube.com] is dangerous and deserved to die?
