

EU Poised To Set AI Rules That Would Ban Surveillance and Social Behavior Ranking (bloomberg.com)
The European Union is poised to ban artificial intelligence systems used for mass surveillance or for ranking social behavior, while companies developing AI could face fines as high as 4% of global revenue if they fail to comply with new rules governing the software applications. From a report: The rules are part of legislation set to be proposed by the European Commission, the bloc's executive body, according to a draft of the proposal obtained by Bloomberg. The details could change before the commission unveils the measure, which is expected to be as soon as next week. The EU proposal is expected to include the following rules:
* AI systems used to manipulate human behavior, exploit information about individuals or groups of individuals, used to carry out social scoring or for indiscriminate surveillance would all be banned in the EU. Some public security exceptions would apply.
* Remote biometric identification systems used in public places, like facial recognition, would need special authorization from authorities.
* AI applications considered to be 'high-risk' would have to undergo inspections before deployment to ensure systems are trained on unbiased data sets, in a traceable way and with human oversight.
* High-risk AI would pertain to systems that could endanger people's safety, lives or fundamental rights, as well as the EU's democratic processes -- such as self-driving cars and remote surgery, among others.
* Some companies will be allowed to undertake assessments themselves, whereas others will be subject to checks by third-parties. Compliance certificates issued by assessment bodies will be valid for up to five years.
* Rules would apply equally to companies based in the EU or abroad.
Whats the problem? (Score:2, Funny)
Re:Whats the problem? (Score:5, Insightful)
Re: (Score:1)
Re: (Score:3)
Since the alternative is arresting people for things they didn't do, with no actual evidence they intended to do so other than an AI algorithm's "hunch"?
I'm going to go with yes, we need to wait for people to actually commit a crime before we arrest them for it.
Re: (Score:2)
There's an entire show [myanimelist.net] based on this.
I'm going to go with yes, we need to wait for people to actually commit a crime before we arrest them for it.
I don't think there's anything wrong with sending a police car to follow high risk people around though.
Re: (Score:2)
I don't think there's anything wrong with sending a police car to follow high risk people around though.
Would you think it wrong if most of the people on the high risk list had certain immutable characteristics?
If not, how might following around certain people affect the correlation between criminal convictions and that particular characteristic? Might it reach the point where, statistically speaking, it is safest to assume someone with said characteristic has, or will commit a crime?
Re: (Score:2)
So then we create victimless "gateway" crimes that we can arrest people for, hoping it prevents worse crimes. You know, like sodomy, or flag burning, or the catch-all "disorderly conduct". That makes it all better, right?
If the AI predicts a crime, there ought to be a way for a therapist to double-check the signs and intervene if necessary, until the danger has passed. Sort of a last line of defense.
Re: (Score:2)
*Tom Cruise crashes through the window*
You are under arrest for a crime you are going to commit!
Re: (Score:2)
Is this how he recruits for his shitty cult?
Re: (Score:2)
Likewise, why don't we just wait until you've bought the thing to decide you want to buy it? See how it's the same?
Re: Whats the problem? (Score:2)
Re: (Score:2)
Jesus, I get that most people did not live under an oppressive regime that used mass surveillance to keep its citizens under the thumb (I'm from Romania), for super rational reasons like not cheering for the living god.
But there's been an ample amount of popular fictional work that addresses these same issues.
Re: (Score:2)
Re:Whats the problem? (Score:4, Funny)
If an AI ranks my behaviour and says that I'm likely to be interested in killing X...
I'm not sure who this X fellow is but just knowing that smug bastard is out there just sticks in my craw. I'll do it, I'll help you kill X.
Re: Whats the problem? (Score:2)
AI into earpiece of cops: "That's exactly what a guilty person would say! Get him!"
Re: (Score:2)
This is more about cases where the AI ranks your behavior and then decides if you are allowed to do things... buy tickets to an outdoor concert, take public transit, renew or obtain a fishing license, go on the internet, whatever.
see China's social credit system
As for YOUR example, the application would never be so mundane as to detect things you'd be likely to be interested in buying. It would be applied to determine what things it could persuade you to think you are interested in buying.
A fool and his m
Re: (Score:2)
Re: (Score:3)
When the fool is broke and desperate, it's time for the gambling adverts. Offer the fool the prospect of salvation, however remote, and see how much debt they can accumulate.
Re: (Score:2)
Then package and sell that debt to other fools, until the system collapses and the public has to bailout the failed economy; the multinational corporation that is actually at fault keeps all the money it made selling the debt, and pockets an additional commission to distribute the bailout money since they have the logistics and infrastructure in place to do it.
Re: (Score:2)
Re: (Score:2)
Of course not. It'll present them with terrible rent-to-own plans that work out to 50% or higher interest, and point them to payday lenders. The kind that employ a toe cutter.
Re: (Score:2)
Re: (Score:3)
It won't try to figure out what you want. It will try to figure out how to make you buy whatever shit the person paying for the ad is selling.
It will exploit every psychological trick, every weakness.
Re: (Score:2)
Because AI is nothing more than a computer program filled with conditions, branches, functions, and algorithms that are written by flawed, error-prone, subjective, biased, and prejudiced humans.
Re: (Score:2)
The problem is that we all make poor financial decisions and what you describe is creating a surveillance state and massive supercomputer cranking on the task of getting you and everyone else to do exactly that, 24/7.
It isn't bringing the things you need to you, it is finding a suggestion you might be vulnerable to in the moment you'd be vulnerable.
Re: (Score:2)
If an AI ranks my behavior and says that I'm likely to be interested in buying X, and then I get an ad, and I am in fact interested and I buy X, then what's the problem?
The problem is that you didn't install an adblocker.
Throws a wrench in the works (Score:5, Insightful)
Re: (Score:3)
Re:Throws a wrench in the works (Score:5, Insightful)
You have a deeply flawed understanding of market economies.
Money is a tally of how well you can game the system, not the value you contribute to it. The entire basis of market economies is that goods are priced based on the marginal value of the last unit they can sell, not the actual value of all the units sold beforehand. E.g. water has radically higher inherent value than gold - gold is practically useless, while water is vital for continued survival. However, water is plentiful, and you're going to be willing to pay far less for the last gallon of water used to water your lawn, than for the first gallon that was vital to your survival.
As applied to wages - you get paid based not on how much value you contribute to the company, but on how easily you can be replaced. Unless you're a member of the executive class, in which case your wages are set by the board of directors, who are themselves mostly all executives, many of them in companies you may be on the board of. Then it's just a mutual admiration society ratcheting up their own wages with no outside feedback. A similar issue exists around legislative bodies, who also set their own pay rate, though they at least have some feedback from the voters. Theoretically the executives have some feedback from the stockholders, but almost all stock is itself owned by members of either the executive or higher classes.
Re: (Score:3)
I'd love to know who taught you economics.
A free market economy is a system by which every exchange of value is voluntary. It is an ideal that may not be practical, and certainly not what we have today. That being said, if you're going to build a cynical view of capitalism, I suggest you consider the ideal it is based upon and more importantly why and how we stray from them.
Since compulsion is not allowed within the market, it is relegated to the state through force of law. Hopefully via democratic processes.
Re: (Score:2)
Since compulsion is not allowed within the market, it is relegated to the state through force of law. Hopefully via democratic processes. As such, the free market cannot take property, effort or value out of anyone's pockets who doesn't want it taken; only the state can do this. If you're complaining that evil capitalists are making everyone else poor, start looking at the laws that force money out of some people's pockets and into others'. You will find no capitalism there, just criminals and useful idiots willing to legislate themselves and their friends unfair access to the state's monopoly on force.
While it's true there isn't a free market in many situations, that doesn't even address the fact that the free market is among the worst possible choices of an economic system for many of the services we require. For example, if someone drunk-drives into my car, it's not like I'm going to price-check various hospitals and procedure costs while I'm unconscious en route. You can't even dig this information out, and what and which providers in that hospital are even
Re: (Score:2)
But those are essential core components of my highly profitable pre-crime detection system. This dystopian nightmare isn't going to build itself people!
I have China for you on line two, they want to hear more about this pre-crime system.
Privacy for me but not for thee (Score:1)
This sounds like a great way to prevent people from tracking the authorities!
This is stupid (Score:2)
If they are going to ban A.I. from doing it, then they might as well ban it entirely.
But of course, they aren't going to do that.
A.I. is just intelligence that happens to be artificial. Full stop. There is no consistent logic to prohibiting artificial intelligence from doing something that natural intelligence is allowed to do with impunity.
But at what point is intelligence artificial?
For that matter, what constitutes intelligence in the first place?
Re: (Score:2)
The part about traceability made me laugh-snort.
Until self-driving cars are successfully mass-deployed, I'm not going to lose any sleep over Sky Net.
Re: (Score:2)
If you understand how the program works, it's not AI any more.
Re: (Score:2)
Are you alleging that we cannot understand what makes behavior intelligent in the first place?
If not, then what difference does understanding it make to whether or not it is artificial?
If so, then the question remains open... why should natural intelligence be permitted to do what artificial intelligence is not?
Matters of scale (Score:3)
> There is no consistent logic to prohibiting artificial intelligence from doing something that natural intelligence is allowed to do with impunity.
Ah, but there is. What artificial intelligence excels at is handling issues of scale. Feed all the surveillance cameras in a city into a central hub, and if you want to track the movements of millions of people "just in case", you'll need thousands of human snoops, maybe tens of thousands. Paying all those salaries gets *extremely* expensive very quickly. Ha
Re: (Score:2)
Some humans have *FAR* better recall than average... should they be restricted by law in what they are allowed to do just because they can remember faces better than most?
Looking to the future, what if someone had their memory augmented by computer chips to stave off dementia or Alzheimer's? What if a side effect of using such wetware was that a person had perfect recall?
What about some successor to neuralink being used to provide a person with a two-way interface to the internet, and have immediate access to essentially all knowledge? Would we outlaw procedures from altering people in this way?
Re: (Score:2)
What about some successor to neuralink being used to provide a person with a two-way interface to the internet, and have immediate access to essentially all knowledge? Would we outlaw procedures from altering people in this way?
If we do, then those people will just have to move to Mars.
Re: (Score:2)
Exactly. You can't just ban doing such things, because humans do them. There's not even any clear line as to what level of capability a human definitely falls below.
But you might want to ban artificial "intelligences" because we already know they are not subject to some of the limitations of a human mind. A human may have perfect recall, artificial or not, but they do not have the effectively infinite amount of "attention" that an AI can easily bring to bear. For example, no matter how good a memory, a
Re: (Score:2)
Oh, but they can, and do, try.
Replace “arson” with “unjustifiable shootings” and “fire” with “useful, normal, modern guns”...
Re: (Score:1)
This is a "silly monkey" law that will have no effect.
Just as people and companies are calling things "A.I." now that have nothing to do with artificial intelligence, people and companies will implement these data processing systems and call them "not A.I." just to skirt around the law. It will be up to the government to prove that they are, in fact, A.I., and I don't see how that's feasible without sufficient access to the source code of the systems involved and actual legal definitions of what constitutes A
Re: (Score:2)
To prove that something is A.I., one needs to prove two things: one, that it is artificial, and two, that it is intelligent.
Unfortunately, neither of the definitions for each of these is particularly clear.
Does this apply to FB & Google? (Score:3)
Does it require EU companies to stop doing business with entities(China, NSA, GCHQ, etc) that do use AI for surveillance?
Hard to enforce. (Score:2)
Cue Google and BookFace forced out of the EU (Score:2)
Because this seems to be the basis of their business there.
Re: (Score:2)
You can do advertising and targeting without AI. You can do it a bit better with AI, because machine learning lets you identify patterns in data sets that are too big and complicated to process through conventional statistics, but it's not essential. Besides, Google and Facebook will just outsource any illegal processing to a wholly-owned subsidiary located outside the EU.
Aaag! (Score:3)
Re: (Score:2)
There is far less lobbying and lawyering going on in Europe than in the US. It has nothing to do with that.
Instead it has everything to do with low worker protections from straight-up exploitation, and with being monopoly-friendly, creating huge companies that swallow startups. China and the US are practically the same in that regard -- yep.
You only need to look at your "choice" for telephony or internet. There is practically none because they're fucking monopolies that just lobby the government to keep it that way.
The one possible good result (Score:3)
AI systems used to manipulate human behavior, exploit information about individuals or groups of individuals, used to carry out social scoring
Please tell me this means Twitter is forced to shut down the normal feed, and offer only "Latest Tweets" which is 100% of the time what I want to see.
Vaccine Status (Score:1)
Re: (Score:2)
Governments *always* exempt themselves; the only surprise anymore would be if they were to explicitly *not* exempt themselves.
Freedom's just another word for total surveillance (Score:1)
These horrible communist EU big government totalitarians restraining the freedom of corporations to earn a simple profit.
Next they'll put limits on the discharge of deadly pollutants. Perhaps even ban slavery!
This is why Europeans are so unfree.
I'll take my liberty with an expensive health insurance plan and a big gun. No MASKS! Also, no vax jabs. And I want a greasy burger with that.
Re: (Score:1)
Without a constitution limiting what this pseudo-government can do, the precedent can just as easily go the other way, to help the corporations instead. All it takes is a simple vote by a few unelected persons, and the law has been reversed.
They'll just change what they call it (Score:1)
Well, this is problematic. (Score:1)
> Some public security exceptions would apply. ...
> won't apply to AI systems used exclusively for military purposes
These are probably the most problematic. These sorts of technologies concern me far more in government hands than in corporate. A corporation, or any business really, is at its heart a very simple beast. All it wants is money. It could be my money; if it's trying to sell me something. Or, it could be someone else's money; if it's taking theirs in exchange for advertising at
Good Thing There's Virus Behaviour Ranking (Score:2)
which is not social behaviour ranking because it's intentionally antisocial.
Human mechanization (Score:1)
Is not "A.I".
Not by a long shot.
Alors ! (Score:3)
The EU is trying to set specific rules for its member countries.
Please note that those member countries will basically ignore those rules if it suits them [France being a fantastic example of this].
In the meantime the EU will royally shaft the companies involved UNLESS they are heavily involved in Defence/Security Technologies [which, by an amazing coincidence, tend to be French again].
Luckily us plucky Brits got out so we are free to sell hyper-invasive systems and technology to every tinpot dictator and their dog, sorry, strategic overseas partners.
p.s. I'm not having a specific pop at the French, it's always been well understood in security circles that the only priority for France is France first, EU second.
serious question (Score:2)
Going to be a challenge... (Score:2)
"used to carry out social scoring"
Failing to apply social scoring to disproportionately advantage some people and disadvantage others is considered racist by a large influx of people who have invaded technology.
Credit ratings (Score:2)
But "mass surveillance and ranking social behavior" is already a multi-billion-dollar industry, even if we don't count online tracking and advertising.
Perhaps the new laws will have Credit Rating Agencies grandfathered in?
Repaying debts is modern society's version of the most fundamental social behaviour: reciprocation.
Its not AI when it works (Score:1)
Legislating a buzzword has to be the funniest thing I've heard in ages.
I love this. I want this in Canada! (Score:2)
Most important rule for any AI system. (Score:2)
"You are responsible for all crimes committed by your AI".
Re: Most important rule for any AI system. (Score:2)
Same rule as parents for their children. I like it.
Though more correctly: Same rule as: "You are responsible for everyone killed by your gun!"
Training it is the equivalent to aiming your gun. You choose the target. You launch the process too.
(Implying that owner == keeper, of course)
Watch them all call it "not AI" from now on. (Score:2)
E.g. by calling it "machine learning", or "tensor multiplication" or "universal functions" if all else fails. So everything I called it before.
Which would even work, since it has never been AI.
But hey, at least the direction of this is right.