Video Surveillance System That Reasons Like a Human 143
An anonymous reader writes "BRS Labs has created a technology it calls Behavioral Analytics which uses cognitive reasoning, much like the human brain, to process visual data and to identify criminal and terroristic activities. Built on a framework of cognitive learning engines and computer vision, AISight provides an automated and scalable surveillance solution that analyzes behavioral patterns, activities and scene content without the need for human training, setup, or programming."
Of course (Score:5, Insightful)
Nothing can go wrong!
Re: (Score:2, Funny)
Nothing can go wrong!
Monday September 21, 6:08 PM > System Pawn, ID:1498795, "sopssa" making sarcastic joke regarding system. Execute Order 66. Will be a huge success.
Re: (Score:2)
Wow! It sounds almost too good to be true!
Wait, what's that you say?
Re:Of course (Score:5, Insightful)
The best of both worlds! Human stupidity plus the compassion of a machine.
Re: (Score:2)
Having thought about it for a day, perhaps I should've said: thinks like a human, feels like a machine.
Re:Of course.. (Score:1)
Re: (Score:1)
Nothing can go wrong!
Isn't that what they said about Skynet?
Re: (Score:1)
I say "bring it on."
More and better surveillance. Rampant surveillance. Ubiquitous surveillance.
I am personally rooting for the day when miniature swarming nanobots capture everything done by everybody, all the time, 24/7. I am a firm believer in the maxim "if you have nothing to hide, what are you afraid of?" ... but the catch is that everybody must be subject to the same rules. That means I can go look at the permanent archive belonging to my local politician same as he or she can look at mine. Business l
Re: (Score:1)
Re: (Score:1)
I am a firm believer in the maxim "if you have nothing to hide, what are you afraid of?"
If this is true then it must also be true that "if you do not intend to judge a person, then why are you surveilling him/her?"
And if you're judging a person, what standard are you applying and what gives you the right to do the judging?
Proof? (Score:3, Interesting)
Source or it doesn't work.
Re: (Score:1, Funny)
Source? Come on man, CAMERAS! Tits or doesn't work.
Re:Proof? (Score:5, Insightful)
Mod parent up. Said AI first needs to distinguish between "activity" and "the wind blew a leaf across the screen". Then you need to distinguish between "lights a cigarette" and "lights the fuse on dynamite".
So, if it already does all that, just one more question: how do you define "criminal and terrorist activities" programmatically when not even the law is clear? Even shooting people can be a non-criminal act.
Re: (Score:3, Insightful)
It must first differentiate between "time flies like an arrow" and "fruit flies like a banana". Then, and only then, can the system be trusted.
Re:Proof? (Score:4, Funny)
Male?
Ooh, low cut top! Zoom zoom zoom!
Wait, the wind is picking up! Initiate scan for pleated skirts!
Or female?
Ooh, there's a sale over there! *zoom* Do they have my colour?
Wait, that handbag's a knockoff! *Dials DHS*
Re: (Score:1)
Agreed!
TFA and the company's website don't mention anything at all about what the cognitive algorithms learn or how they do it.
At my lab we use "cognitive algorithms" to do things like adjust for variable lighting conditions, learn new visual patterns, and estimate position/orientation of unknown objects. I'm told these algorithms are cutting edge, but at present they are way too fragile/clunky to be used in the real world. I fail to see how a cognitive algorithm can be taught what is a "criminal and ter
Bit more info - can it be as good as humans? (Score:5, Interesting)
"The system takes the input from existing video security cameras (no need to change equipment); recognizes and identifies the objects in each frame and passes that data to its Machine Learning Engine. There, the system 'learns' what activity is normal for each unique area viewed by each camera. It then stores these LEARNED memories, much the same way the human brain does, and refers back to them with any and all future activities observed by the camera. If any behavior falls outside of the norm, alerts are generated."
Sounds impressive, but will the algorithms be sophisticated enough to watch grass grow [watching-grass-grow.com] and realize that it's normal behavior for the garbage truck to come by weekly [watching-grass-grow.com]?
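For what it's worth, the "learn what's normal for each camera, alert on deviations" idea in that blurb can be sketched very crudely. None of this reflects whatever BRS Labs actually ships; the grid size, smoothing factor, and alert threshold below are invented purely for illustration:

```python
import numpy as np

class ActivityBaseline:
    # Toy per-camera "normality" model: running mean/variance of motion
    # energy in each grid cell, with an alert when a cell deviates.
    def __init__(self, grid=(8, 8), threshold=4.0, alpha=0.01):
        self.grid = grid
        self.threshold = threshold   # alert at this many standard deviations
        self.alpha = alpha           # learning rate for the running stats
        self.mean = np.zeros(grid)
        self.var = np.ones(grid)
        self.frames_seen = 0
        self.prev = None

    def _cell_energy(self, frame):
        # Motion energy = mean absolute frame difference per grid cell.
        if self.prev is None:
            self.prev = frame
        diff = np.abs(frame.astype(float) - self.prev.astype(float))
        self.prev = frame
        gh, gw = self.grid
        h, w = diff.shape
        diff = diff[:h - h % gh, :w - w % gw]
        return diff.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))

    def update(self, frame):
        # Returns grid cells whose activity is far outside the learned norm.
        energy = self._cell_energy(frame)
        self.frames_seen += 1
        z = (energy - self.mean) / np.sqrt(self.var + 1e-6)
        alerts = np.argwhere(z > self.threshold) if self.frames_seen > 100 else []
        # Update the "learned memories" with an exponential moving average.
        self.mean = (1 - self.alpha) * self.mean + self.alpha * energy
        self.var = (1 - self.alpha) * self.var + self.alpha * (energy - self.mean) ** 2
        return alerts
```

A real system would need far more than per-cell motion energy, but this is the general shape of "store learned memories and compare new activity against them".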
Re:Bit more info - can it be as good as humans? (Score:4, Insightful)
My guess is it applies a few simple heuristics to analyze the behavior and the real trick is identifying the behavior.
Example: In an alley behind a hotel people frequently walk out a door, put something in a container, and walk back in. This becomes "normal". Then someone goes out back and starts smoking. Whoops, wtf is this! Alert, alert. OK, so this gets flagged as OK a few times. The system decides it's OK. However, when two people hold a third at gunpoint and linger in an area of the alley not usually used for smoking, this would now trigger as abnormal.
Another thing it might notice is the same person coming back to the front of a convenience store, waiting a minute, then leaving, then coming back again. Most people only walk in, walk out - this is abnormal.
So it won't tell you someone is burglarizing you, but it might focus your attention on a camera where something could be happening. I'd assume it would get better over time as things were flagged "ok" or "not ok", but at best it would provide some simple pre-filtering to focus human attention on scenes that are slightly more likely to be "interesting".
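A toy version of that "seen it before, operator says it's OK" loop, assuming some upstream component has already reduced the video to discrete event tuples (the camera and event names here are made up):

```python
from collections import Counter

class EventNoveltyFilter:
    # Sketch of the "have we seen this pattern before, and did an operator
    # bless it?" loop described above.
    def __init__(self, min_count=5):
        self.seen = Counter()
        self.whitelisted = set()
        self.min_count = min_count

    def observe(self, event):
        # "event" is any hashable summary, e.g.
        # ("alley_cam", "person_lingering", "zone_3") - names are invented.
        self.seen[event] += 1
        if event in self.whitelisted:
            return False
        return self.seen[event] <= self.min_count   # still novel -> alert

    def mark_ok(self, event):
        # Operator feedback: stop alerting on this pattern.
        self.whitelisted.add(event)
```

The smoker trips the alert a few times until someone calls mark_ok and it goes quiet; the gunpoint loitering maps to a tuple that has never been seen before and still fires.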
Re: (Score:2, Insightful)
Re: (Score:2)
Re: (Score:3, Interesting)
No way that it's as complex as that. My guess is that it gets used to linear motion like cars driving by and develops a tolerance for humans walking by on the way to work, but when there's lots of irregular motion in different directions (ie not just from one side of the frame to the other) there's a good chance something unusual is happening.
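A rough sketch of that intuition, assuming an upstream tracker already hands you per-object motion vectors for each frame; the bin count is arbitrary:

```python
import math
from collections import Counter

def direction_irregularity(motion_vectors, bins=8):
    # Entropy of motion directions in one frame: near zero when everything
    # moves the same way (traffic, commuters), high when motion is scattered
    # in many directions. motion_vectors is a list of (dx, dy) pairs assumed
    # to come from whatever tracker sits upstream.
    if not motion_vectors:
        return 0.0
    counts = Counter(
        int((math.atan2(dy, dx) + math.pi) / (2 * math.pi) * bins) % bins
        for dx, dy in motion_vectors
    )
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

Commuter traffic flowing one way gives a low score; a brawl or a panicked crowd scatters the direction histogram and pushes the entropy up.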
Your system lacks the element of "no human training" mentioned in the summary
Re: (Score:2)
Not needing human training to function, and functioning much better with human training are two separate things. Just like speech recognition. It will work without training, but there are still cases where it needs training.
I didn't think what I described was that crazily complex. If the camera is stationary and you line everything up on a grid line and do edge detection to find outlines of people you can probably implement something like this. I'm just pulling stuff out of my ass, though, it's certainl
Re: (Score:2)
Another thing it might notice is the same person coming back to the front of a convenience store, waiting a minute, then leaving, then coming back again. Most people only walk in, walk out - this is abnormal.
So now I'm verboten to look at the lottery numbers on the door each Sunday morning? Either you flag every alarm as OK or people will get pissed off that you question them about perfectly legal activities.
This might just be the thing needed to finally get the cameras off the streets.
Re: (Score:1)
Alert, alert. OK, so this gets flagged as OK a few times. The system decides it's OK.
Doesn't this contradict what the summary says? "without the need for human training"
Re: (Score:1, Informative)
"interesting" is context-sensitive. So filtering will not work: it either triggers false positives (normal behaviour that is unknown or misinterpreted) until the threshold is raised enough. Then it will create false negatives, e.g. flag an actual criminal act as 'normal' and hide it from the supervisors eye for workload reasons.
Human behaviour is more complex and situation depended than actual AI could handle. This whole article reads like bullshit bingo or a pr-campaign for some insider stock trading scam
what activity is normal for each unique area (Score:2)
So if it watches Gary Indiana, murder and mayhem will be programmed in as normal?
Re: (Score:2)
Re: (Score:2)
Re:Bit more info - can it be as good as humans? (Score:5, Funny)
If it really thinks like a human, the main feature will be automatically uploading videos of people having sex in elevators to the web.
Re: (Score:2)
That sounds kinda useless in an area where terrorist-like activity might be taking place on a regular basis. For instance, somewhere where there are a lot of guns, people milling about and doing drills, and the like. Or rousing motivational speeches w/formations. Sounds like most police departments, shooting ranges, military training grounds, or for that matter police hangouts.
I would think targeting specific known-terrorist activities (eg. we know a lot of them are Muslims, so wearing their head gear and/o
that was just a press release (Score:1, Informative)
that was a press release for the company's product. It has no reliable or interesting information whatsoever.
Photos (Score:2)
Re: (Score:3, Insightful)
So I guess this means that the camera is going to Harass people taking Photos now?
Even better. It will call some rentacops and tell them that there's "suspected terroristic activity" taking place, and suddenly a tourist will get a taser up some orifice because "the computer" already labeled him a terrorist and therefore Osama's second in command.
I'll know it when I see it. (Score:5, Insightful)
It's a press release pretending to be journalism.
If it doesn't need training, how does it define "terroristic activity"? Is it the "I'll know it when I see it" definition?
The article seems to indicate it works like a Bayesian filter on the video - pointing out things that aren't typical for the camera.
Much like any automated system that is supposed to filter out false positives, it is probably pretty easy to train either the operators or the system itself to throttle back the sensitivity to a point where it ignores everything.
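The throttling failure mode is easy to see with made-up numbers. The anomaly-score distributions below are invented; the only point is that once ordinary scenes and real incidents overlap in score, every threshold trades false alarms against misses:

```python
import random

random.seed(0)
# Invented anomaly-score distributions: ordinary scenes vs. real incidents.
normal_scores = [random.gauss(0.2, 0.10) for _ in range(10_000)]
incident_scores = [random.gauss(0.5, 0.15) for _ in range(10)]

for threshold in (0.3, 0.5, 0.7):
    false_alarms = sum(s > threshold for s in normal_scores)
    caught = sum(s > threshold for s in incident_scores)
    print(f"threshold {threshold}: {false_alarms} false alarms, "
          f"{caught}/10 real incidents caught")
```

Set the threshold low and the operators drown in alarms; set it high enough to keep them sane and most of the real incidents slide through.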
Re: (Score:1)
That seems to be something people can only identify on an "I'll know it when I see it" basis. At least Supreme Court justices can...
Re: (Score:1, Informative)
With good heuristics, some Bayesian analysis (is there an object or not?) or neural nets... I can actually imagine a lot of possibilities here. Suppose I've got a gate at an airport--traffic should all be going in one direction. Anything going the other way--anomaly. I could imagine ML systems picking that up fairly easily.
Similarly if I've got a physically secured compound with double fences (the first fence is for screening--the second is your secure perimeter), foot traffic in the secure area should be
Re: (Score:2)
The trick is to train it to ignore you.
So, if you want to enter an area with a backpack, you start walking in with a hump, and make the hump bigger with every entry. Better yet, give out a bunch of free backpacks (in increasing numbers) over a period.
A human operator would go "WTF", a machine would simply recognise it as normal and increase the threshold.
You want to walk back through a door? Start by looking over your shoulder as you walk through it normally.
That fence? Add an automated fence wiggler. I
Re: (Score:3, Insightful)
It Comforts Me To Know.... (Score:5, Funny)
At least, I think that's where we are in the timeline, right?
Re: (Score:1, Offtopic)
It's a lie (Score:4, Insightful)
The "machine learning engine" is a "datacenter" (warehouse) full of cheap African laborers who are all watching the cameras.
(this is a joke, it just isn't funny, and it is meant to illustrate a point. See the next line):
God/nature/FSM/evolution/al gore/$deity has done a pretty damn good job at building our brains, why are we trying to reinvent that wheel in a computer?
Re: (Score:1, Informative)
So it's a subsidiary of Spinvox [bbc.co.uk] then?
Re: (Score:2)
God/nature/FSM/evolution/al gore/$deity has done a pretty damn good job at building our brains, why are we trying to reinvent that wheel in a computer?
We're lazy.
Next!
Re:It's a lie (Score:4, Funny)
The "machine learning engine" is a "datacenter" (warehouse) full of cheap African laborers who are all watching the cameras.
(this is a joke, it just isn't funny, and it is meant to illustrate a point. See the next line): God/nature/FSM/evolution/al gore/$deity has done a pretty damn good job at building our brains, why are we trying to reinvent that wheel in a computer?
Because the owners of those brains get all whiny when you try to stick them in jars and make them solve the problems you want to solve, rather than sitting around watching porn? Really, sticking a bunch of brains in a 19" rack is harder than you'd think.
Re: (Score:1)
Actually, it's a lot easier than you think, it's just hard to get them to do anything.
Re: (Score:2)
God/nature/FSM/evolution/al gore/$deity has done a pretty damn good job at building our brains, why are we trying to reinvent that wheel in a computer?
Because one could imagine that if we actually do have the ability to reinvent a mind, that we might also be able to improve upon it. And we will not know if we can or can not create a mind until we try to do so.
If that is possible, then that better mind could arguably invent a mind better than itself, and that much better again than ours.
Humankind would no longer be the bottleneck of technological achievement.
Now if we could also then just manage to not be stupid like usual and piss those minds off, they migh
Re: (Score:2)
I would love to create a mind. Unfortunately I have not yet found a girl willing to indulge me.
Re: (Score:1)
Best case scenario: this technology is used to draw the security guard's attention away from late night TV when something actually happens.
Worst case scenario: pick your nose in an alley and have the police automatically alerted.
Panopticon, you're doing it wrong. (Score:2)
Re: (Score:2)
I thought it was a pretty awesome AI; we just need to make sure that when we build it, we don't give it an ancient document written by revolutionaries, and instead have programmers write it some "no murdering your admins" rules.
yes, but... (Score:4, Funny)
Re: (Score:2)
I bet that in fact he would.
Re: (Score:2)
If "acting like a young person of color" involves trespassing or loitering the system should flag it just as readily as anything else. Assuming that you're talking about legal behaviors, again, I don't see a problem. This system doesn't know anything about race, and is perfectly ignorant of it. This is an example of a "color-blind" system. There are people who claim to want a "color-blind" society, yet they alwa
Re: (Score:2)
If the system uses heuristics or anything of the sort to determine whether something is outside "normal" behavior in an area, then something that does not appear normal would -- by definition -- get flagged. Objectively, the system has no more idea what crime is than a newborn baby.
I have been in areas where, as I mentioned, "young inner-city people of color" tend to behave in ways that are significantly different from their caucasian
Ocean's Thirteen that a system like that. (Score:1)
Ocean's Thirteen that a system like that.
Re: (Score:1)
Ocean's Thirteen that a system like that.
Uuuuum, yes?
Was that a question or a statement?
Re: (Score:2, Funny)
Ocean's Thirteen that a system like that.
Brad Pitt that an actor in that.
Re: (Score:2)
so you can a black level player?
Re: (Score:2)
so you can a black level player?
In oil or spring water? I always liked my black level players canned in oil. The people who like them in red sauce are just weird.
Human Intelligence (Score:5, Insightful)
Re: (Score:2)
Re:Human Intelligence (Score:4, Insightful)
Also, I wonder how well these systems will handle contextual clues that people pick up on automatically?
"Contextual clues" like a dark-skinned guy in London rushing to catch the Tube wearing a ski jacket on a warmish day?
Those are the kind of "contextual clues" that people use all the time to make lethal misjudgements, and in the case at hand resulted in a completely innocent Brazilian who was legally in Britain going legally about his legal business being murdered by police.
Given how badly humans are known empirically to suck at making these kinds of judgments only an arrogant idiot would think of programming a machine to emulate us. But of course, arrogant idiots are incapable of adjusting their beliefs in response to empirical data, so they probably aren't even aware of how badly they suck at this.
Re: (Score:3, Insightful)
Those are the kind of "contextual clues" that people use all the time to make lethal misjudgements, and in the case at hand resulted in a completely innocent Brazilian who was legally in Britain going legally about his legal business being murdered by police.
No system lacking full disclosure of all information is perfect. People, by definition, *have* to make judgments without enough information to be sure. Yet we *have* to be sure.
Sometimes this results in mistakes. And sometimes, those mistakes add up to
Re: (Score:1, Insightful)
Well, how's "our software" different from "our training", "our briefing", etc? I mean police are supposed to follow detailed regulations, not act as judges. It's only an officer's 'fault' right now if they don't follow regs.
Re: (Score:3, Funny)
Scary (Score:2, Insightful)
False positive and false negative readings (Score:4, Insightful)
Much like detecting terrorists by facial recognition, this is vaporware until they publish some numbers.
I once had someone misdirect a sales call to me, proudly claiming that his facial recognition system was 70% accurate. He had no idea how much of a pain in the ass his system is when it's wrong, and for the airport security business he was trying to win, 90% accuracy is considered terrible.
Security Cameras that shake you down for donuts (Score:2)
Maybe they shouldn't have used human cops as their behavior model after all.
Sick and tired (Score:3, Insightful)
Really, I am sick and tired of the surveillance realm. If anybody really wants to do something nefarious they will make sure the cameras don't work: simply pull them down, spray them with paint, or whatever. The authorities will not come running. After-the-fact usage is good, but really it doesn't stop any crime, even random ones. We are the ones funding this and do not even have a say in it.
Re: (Score:2)
NB: I'm just saying what sounds reasonable to me- I don't claim to know how true my assumptions are.
Re: (Score:3, Informative)
I agree that in business it's OK, but really every intersection in Texas has 4 cameras now. Even ones in small towns, paid for by grants from DHS.
In the Dallas Metroplex, they have also installed networking that is supposed to be used by emergency workers (this makes me laugh, as in a real disaster they will not have power), but I am sure it will also be used by the cities to read power and water meters in the future.
Taking even a small failure rate, the cost of maintenance will be beyond what t
One more step. (Score:1)
The secret ingredient (Score:2)
The central processor in this thing isn't a computer at all... it's a dead salmon!
Boobies! (Score:3, Funny)
So, it instinctively directs the cameras towards the hot women all the time, distracted from important things it should look at?
Hopefully not like humans (Score:4, Interesting)
Interestingly, in Europe, after a series of dreadful incidents caught on live video, this is finally being debated on the eve of general elections: http://www.piratenpartei.de/node/920/29268#comment-29268 [piratenpartei.de] - because at the other end of the line, in a situation room (which may be on the next floor, at the next station, or far away), officers will have to watch events unfold and wish in vain that they were out there with a gun again (or had sufficient forces to dispatch), e.g. to stop an attacker they can only videotape and helplessly watch wreak havoc on screen.
Fun in the UK (Score:2)
I can just see the kids in the UK figuring out what kind of innocent activity triggers police reactions. When the flood of false-positives starts, the cameras will be back to being as use[ful|less] as they are today.
porno, Porno, PORNO! (Score:2)
What are the odds these cameras won't be able to distinguish between people fighting and people shagging?
Like a human? (Score:2)
"That Reasons Like a Human"
Spends most of its time ogling - wait, there's a honky in this black neighborhood, must be up to no good.
In other words (Score:1)
It will lie.
Please put down your weapon. You have 20 seconds.. (Score:3, Funny)
to comply!
No one has talked about ED-209 [3dblasphemy.com] yet?
Can we install this in congress? (Score:1)
Unlikely to spot corruption among politicians (Score:2)
Install this thing in congress and train it to watch for corruption. It would probably fill up a massive disk array in a couple hours with positive hits.
I understand it can only detect abnormal events. Now in many parliaments wouldn't that mean: alert to exceptionally rare cases of non-corruption? You know Professor Lessig changed his focus of research for a reason [wikipedia.org].
Whatever the scene may look like to an "intelligent" camera, "Lobbyist walks into lawmakers' offices and leaves without the two black suitcases s/he brought" is probably not an instance of someone planting bombs (at least unless the pictures make it to the pages of the Post).
To quote Ambrose Bi
And it will work as soon as . . . (Score:3, Insightful)
you give us another billion dollars to finish it.
Yeah, right.
A 1% error rate will produce a hundred times as many false positives - all innocent people accused of a crime - as real positives. And a 20% error rate is far, far more likely.
Scams like this are the reason why you have to show up at airports three hours early now.
Is it smart enough to know that "terroristic" isn't a real word, at least?
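For the record, the hundred-to-one figure follows from straight base-rate arithmetic once you assume genuine incidents are rare, say one in ten thousand observations (an assumed rate, purely for illustration):

```python
observations = 1_000_000
incident_rate = 1 / 10_000        # assumed for illustration
false_positive_rate = 0.01        # the "1% error rate" above

real_incidents = observations * incident_rate                    # 100
false_alarms = (observations - real_incidents) * false_positive_rate
print(real_incidents, false_alarms, false_alarms / real_incidents)
# -> 100 real incidents vs. ~10,000 false alarms: roughly a hundred
#    innocent people flagged for every real positive, before counting misses.
```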
Re: (Score:2)
"all innocent people accused of a crime"
Why? Even if the system flags the people as criminals, the operators will still be able to see the recordings, and then decide if it was a crime or not, no?
Re: (Score:2, Insightful)
"all innocent people accused of a crime"
Why? Even if the system flags the people as criminals, the operators will still be able to see the recordings, and then decide if it was a crime or not, no?
If only I had a nickel for every time someone took something off the computer as gospel (figuratively) and could not be swayed because "the computer doesn't make mistakes"... Think PHBs and customer service reps. Or maybe I missed your sarcasm tags... if so, mea culpa.
Not Like a Human (Score:2)
This system does not operate like a human at all. A human operator does not look for signs of terrorist activities. A human operator looks at boobs.
Reasons like a human... (Score:1)
So, it makes wild guesses, allows others to tell it how it should be thinking and/or bases vital decisions on obviously false beliefs?
Sounds a lot like.... (Score:1)
And so when they're filming Terror 2010 (Score:2)
It's just another bloated Pentagon pork project of no real value or merit. We see this all the time. It reminds me of avant-garde art, only 10,000x more expensive and twice as pointless. But only half as ugly.
RS
Two things (Score:1, Insightful)
a) Terroristic? Not just terrorist activities?
b) Terrorism /is/ usually a crime, nothing special.
I can't wait to see this in action (Score:2)
So how long before this thing figures out how to pork a co-worker on lunch break, record the act on one of the cameras it's supposed to be monitoring, and piss in the boss's coffee?
I'm betting about three weeks.
So basically a "red light camera" for people. (Score:2)
So basically a "red light camera" for people.
And like the red light cameras, there's no way to appeal to human judgement: if the camera says you're guilty, you must be guilty unless you can prove you are innocent (for red light cameras, at least in California, that means proving the amber light lasted less than 4.8 seconds).
I love the presumption of guilt they're slowly building into the system in the name of revenue generation. "The war on terror" has been going on for 8 years now, and they finally arreste
Not a Difficult Problem? (Score:1)
To do what AISight does one needs:
A Series of Already-solved Problems? (Score:2, Informative)
To do what AISight does one needs:
Something is fishy here (Score:2)
... because even surveillance is not purely objective. Think about selling it to Islamic countries, for example. Suddenly drinking alcoholic beverages should raise an alert, while marrying a second, third or fourth wife must not. There's a whole new market there for flagging hijab head-scarves, or scantily clad women. But even within Western society, a lot needs to be learned. Finns carry their wives one day a year, while a woman carried around by a man is 'offensive' in most other societies.
Yes, man, there are absolutely ze
Nice... (Score:2)
Now they just need to merge this with a federally required RFID marker to identify you, and the system can simply flag you as a threat automatically whenever you act outside a set threshold determined by a collectively gathered log of your previous day-to-day activities.
Smart cameras with neural networks (Score:1)
right.... (Score:2)
That's funny, because even young human beings need to learn "good" from "bad" under adult supervision; it's not an automatic process.
so... (Score:2)
It can figure out that Little Tiffany [youtube.com] is dangerous and deserved to die?
Re: (Score:1)
1. Sell to customers who blindly trust in it.
2. Fail to detect anything on many an occasion because it most likely isn't perfect.
3. ???
4. PROFIT!!!