EU Poised To Set AI Rules That Would Ban Surveillance and Social Behavior Ranking (bloomberg.com)

The European Union is poised to ban artificial intelligence systems used for mass surveillance or for ranking social behavior, while companies developing AI could face fines as high as 4% of global revenue if they fail to comply with new rules governing the software applications. From a report: The rules are part of legislation set to be proposed by the European Commission, the bloc's executive body, according to a draft of the proposal obtained by Bloomberg. The details could change before the commission unveils the measure, which is expected as soon as next week. The EU proposal is expected to include the following rules:

* AI systems used to manipulate human behavior, exploit information about individuals or groups of individuals, used to carry out social scoring or for indiscriminate surveillance would all be banned in the EU. Some public security exceptions would apply.
* Remote biometric identification systems used in public places, like facial recognition, would need special authorization from authorities.
* AI applications considered to be 'high-risk' would have to undergo inspections before deployment to ensure systems are trained on unbiased data sets, in a traceable way and with human oversight.
* High-risk AI would pertain to systems that could endanger people's safety, lives or fundamental rights, as well as the EU's democratic processes -- such as self-driving cars and remote surgery, among others.
* Some companies will be allowed to undertake assessments themselves, whereas others will be subject to checks by third parties. Compliance certificates issued by assessment bodies will be valid for up to five years.
* Rules would apply equally to companies based in the EU or abroad.

  • If an AI ranks my behavior and says that I'm likely to be interested in buying X, and then I get an ad, and I am in fact interested and I buy X, then what's the problem?
    • by YuppieScum ( 1096 ) on Tuesday April 13, 2021 @05:10PM (#61270058) Journal
      If an AI ranks my behaviour and says that I'm likely to be interested in killing X, and then I get the police kicking down my door and arresting me, and I am in fact not interested in killing X, then that's the problem...
      • Should we wait until you've killed someone?
        • Since the alternative is arresting people for things they didn't do, with no actual evidence they intended to do so other than an AI algorithm's "hunch"?

          I'm going to go with yes, we need to wait for people to actually commit a crime before we arrest them for it.

          • by djinn6 ( 1868030 )

            There's an entire show [myanimelist.net] based on this.

            I'm going to go with yes, we need to wait for people to actually commit a crime before we arrest them for it.

            I don't think there's anything wrong with sending a police car to follow high risk people around though.

            • I don't think there's anything wrong with sending a police car to follow high risk people around though.

              Would you think it wrong if most of the people on the high risk list had certain immutable characteristics?

              If not, how might following around certain people affect the correlation between criminal convictions and that particular characteristic? Might it reach the point where, statistically speaking, it is safest to assume someone with said characteristic has committed, or will commit, a crime?

          • by Ichijo ( 607641 )

            I'm going to go with yes, we need to wait for people to actually commit a crime before we arrest them for it.

            So then we create victimless "gateway" crimes that we can arrest people for, hoping it prevents worse crimes. You know, like sodomy, or flag burning, or the catch-all "disorderly conduct". That makes it all better, right?

            If the AI predicts a crime, there ought to be a way for a therapist to double-check the signs and intervene if necessary, until the danger has passed. Sort of a last line of defense.

        • *Tom Cruise crashes through the window*

          You are under arrest for a crime you are going to commit!

        • Likewise, why don't we just wait until you've bought the thing to decide you want to buy it? See how that's the same?

        • by fazig ( 2909523 )
          No of course not. Arrest anyone who commits a thought crime right away!

          Jesus, I get that most people did not live under an oppressive regime that used mass surveillance to keep its citizens under its thumb (I'm from Romania) for super-rational reasons like not cheering for the living god.
          But there's been an ample amount of popular fiction that addresses these same issues.
        • Are you suggesting that people who haven't committed a crime should be arrested?
      • by Gravis Zero ( 934156 ) on Tuesday April 13, 2021 @08:00PM (#61270572)

        If an AI ranks my behaviour and says that I'm likely to be interested in killing X...

        I'm not sure who this X fellow is, but knowing that smug bastard is out there just sticks in my craw. I'll do it, I'll help you kill X.

      • Suspect of future crime: "but I'm not planning to do that."
        AI into earpiece of cops: "That's exactly what a guilty person would say! Get him!"
    • by vux984 ( 928602 )

      This is more about cases where the AI ranks your behavior and then decides if you are allowed to do things... buy tickets to an outdoor concert, take public transit, renew or obtain a fishing license, go on the internet, whatever.

      see China's social credit system

      As for YOUR example, the application would never be so mundane as to detect things you'd be likely to be interested in buying. It would be applied to determine what things it could persuade you to think you are interested in buying.

      A fool and his money are soon parted.

      • Once the AI knows the fool is broke (bank records, credit card records), will it stop presenting the fool with what the fool wants? Or are we all destined to become paperclips?
        • When the fool is broke and desperate, it's time for the gambling adverts. Offer the fool the prospect of salvation, however remote, and see how much debt they can accumulate.

          • by vux984 ( 928602 )

            Then package and sell that debt to other fools, until the system collapses and the public has to bailout the failed economy; the multinational corporation that is actually at fault keeps all the money it made selling the debt, and pockets an additional commission to distribute the bailout money since they have the logistics and infrastructure in place to do it.

        • we are all already paperclips
        • by sjames ( 1099 )

          Of course not. It'll present them with terrible rent-to-own plans that work out to 50% or higher interest and point them to payday lenders. The kind that employ a toe cutter.

      • Are you kidding? The primary point of AI is to make money. Do you think people are spending billions on it for the public good?
    • by AmiMoJo ( 196126 )

      It won't try to figure out what you want. It will try to figure out how to make you buy whatever shit the person paying for the ad is selling.

      It will exploit every psychological trick, every weakness.

    • Because AI is nothing more than a computer program filled with conditions, branches, functions, and algorithms that are written by flawed, error-prone, subjective, biased, and prejudiced humans.

    • by Shaitan ( 22585 )

      The problem is that we all make poor financial decisions and what you describe is creating a surveillance state and massive supercomputer cranking on the task of getting you and everyone else to do exactly that, 24/7.

      It isn't bringing the things you need to you, it is finding a suggestion you might be vulnerable to in the moment you'd be vulnerable.

    If an AI ranks my behavior and says that I'm likely to be interested in buying X, and then I get an ad, and I am in fact interested and I buy X, then what's the problem?

      The problem is that you didn't install an adblocker.

  • by burtosis ( 1124179 ) on Tuesday April 13, 2021 @04:56PM (#61270028)
    Ban mass surveillance and behavioral ratings? But those are essential core components of my highly profitable pre-crime detection system. This dystopian nightmare isn’t going to build itself, people!
    • What they are core components of is - capitalism. (No, seriously.) Money itself is a tally of how much you have contributed minus how much you have taken back. (Hold your laughter, that is the basic point of it). Debt (driven by appraisals of your creditworthiness) is the same thing with a negative balance. Money, or really the economy writ large, is a system for compelling people to do productive things so we don't starve, and then beyond that, serving the ego by proving who's best and will be last to
      • by Immerman ( 2627577 ) on Tuesday April 13, 2021 @06:51PM (#61270340)

        You have a deeply flawed understanding of market economies.

        Money is a tally of how well you can game the system, not the value you contribute to it. The entire basis of market economies is that goods are priced based on the marginal value of the last unit they can sell, not the actual value of all the units sold beforehand. E.g. water has radically higher inherent value than gold - gold is practically useless, while water is vital for continued survival. However, water is plentiful, and you're going to be willing to pay far less for the last gallon of water used to water your lawn, than for the first gallon that was vital to your survival.

        As applied to wages - you get paid based not on how much value you contribute to the company, but on how easily you can be replaced. Unless you're a member of the executive class, in which case your wages are set by the board of directors, who are themselves mostly all executives, many of them in companies you may be on the board of. Then it's just a mutual admiration society ratcheting up their own wages with no outside feedback. A similar issue exists around legislative bodies, who also set their own pay rate, though they at least have some feedback from the voters. Theoretically the executives have some feedback from the stockholders, but almost all stock is itself owned by members of either the executive or higher classes.

        • by Shark ( 78448 )

          I'd love to know who taught you economics.

          A free market economy is a system by which every exchange of value is voluntary. It is an ideal that may not be practical, and certainly not what we have today. That being said, if you're going to build a cynical view of capitalism, I suggest you consider the ideal it is based upon and, more importantly, why and how we stray from it.

          Since compulsion is not allowed within the market, it is relegated to the state through force of law. Hopefully via democratic process

          • Since compulsion is not allowed within the market, it is relegated to the state through force of law. Hopefully via democratic processes. As such, the free market cannot take property, effort or value out of anyone's pockets who doesn't want it taken, only the state can do this. If you're complaining that evil capitalists are making everyone else poor, start looking at the laws that force money out of some people's pockets and into others. You will find no capitalism there, just criminals and useful idiots willing to legislate themselves and their friends unfair access to the state's monopoly on force.

            While it’s true there isn’t a free market in many situations, that argument doesn’t even address the fact that the free market is among the worst possible choices of an economic system for many of the services we require. For example, if someone drunk drives into my car, it’s not like I’m going to price check various hospitals and procedure costs while I’m unconscious en route. You can’t even dig this information out, and what and which providers in that hospital are even

    • But those are essential core components of my highly profitable pre-crime detection system. This dystopian nightmare isn't going to build itself people!

      I have China for you on line two; they want to hear more about this pre-crime system.

  • by Anonymous Coward

    Remote biometric identification systems used in public places, like facial recognition, would need special authorization from authorities.

    This sounds like a great way to prevent people from tracking the authorities!

  • If they are going to ban A.I. from doing it, then they might as well ban it entirely.

    But of course, they aren't going to do that.

    A.I. is just intelligence that happens to be artificial. Full stop. There is no consistent logic to prohibiting artificial intelligence from doing something that natural intelligence is allowed to do with impunity.

    But at what point is intelligence artificial?

    For that matter, what constitutes intelligence in the first place?

    • As I read the OP, I thought it was mostly about which tasks/powers/judgements we are reluctant to hand off to a computer program, no matter how much the programmers have obfuscated the actual flow control.

      The part about traceability made me laugh-snort.

      Until self-driving cars are successfully mass-deployed, I'm not going to lose any sleep over Sky Net.
    • If you understand how the program works, it's not AI any more.

      • by mark-t ( 151149 )

        Are you alleging that we cannot understand what makes behavior intelligent in the first place?

        If not, then what difference does understanding it make to whether or not it is artificial?

        If so, then the question remains open... why should natural intelligence be permitted to do what artificial intelligence is not?

    • > There is no consistent logic to prohibiting artificial intelligence from doing something that natural intelligence is allowed to do with impunity.

      Ah, but there is. What artificial intelligence excels at is handling issues of scale. Feed all the surveillance cameras in a city into a central hub, and if you want to track the movements of millions of people "just in case", you'll need thousands of human snoops, maybe tens of thousands. Paying all those salaries gets *extremely* expensive very quickly. Ha

      • by mark-t ( 151149 )

        Some humans can have *FAR* better recall than average... should they be restricted by law in what they are allowed to do just because they can remember faces better than most?

        Looking to the future, what if someone had their memory augmented by computer chips to stave off dementia or Alzheimer's? What if a side effect of using such wetware was that a person had perfect recall?

        What about some successor to neuralink being used to provide a person with a two-way interface to the internet, and have immediate access to essentially all knowledge? Would we outlaw procedures from altering people in this way?

        • What about some successor to neuralink being used to provide a person with a two-way interface to the internet, and have immediate access to essentially all knowledge? Would we outlaw procedures from altering people in this way?

          If we do, then those people will just have to move to Mars.

        • Exactly. You can't just ban doing such things, because humans do them. There's not even any clear line as to what level of capability a human definitely falls below.

          But you might want to ban artificial "intelligences" because we already know they are not subject to some of the limitations of a human mind. A human may have perfect recall, artificial or not, but they do not have the effectively infinite amount of "attention" that an AI can easily bring to bear. For example, no matter how good a memory, a

          • Oh, but they can, and do, try.

            Replace “arson” with “unjustifiable shootings” and “fire” with “useful, normal, modern guns”...

    • by Anonymous Coward

      This is a "silly monkey" law that will have no effect.

      Just as people and companies are calling things "A.I." now that have nothing to do with artificial intelligence, people and companies will implement these data processing systems and call them "not A.I." just to skirt the law. It will be up to the government to prove that they are, in fact, A.I., and I don't see how that's feasible without sufficient access to the source code of the systems involved and actual legal definitions of what constitutes A

      • by mark-t ( 151149 )

        It will be up to the government to prove that they are, in fact, A.I.

        To prove that something is A.I., it needs to prove two things: one, that it is artificial, and two, that it is intelligent.

        Unfortunately, neither definition is particularly clear.

  • by schwit1 ( 797399 ) on Tuesday April 13, 2021 @05:16PM (#61270074)

    Does it require EU companies to stop doing business with entities (China, NSA, GCHQ, etc.) that do use AI for surveillance?

  • It's not easy to regulate what types of software a business or person can or cannot run on their internal computers. Basically this law seeks to regulate how businesses use publicly available information, or at least information that was legally available to the company. These laws are always very difficult to enforce in practice.
  • Because this seems to be the basis of their business there.

    • You can do advertising and targeting without AI. You can do it a bit better with AI, because machine learning lets you identify patterns in data sets that are too big and complicated to process through conventional statistics, but it's not essential. Besides, Google and Facebook will just outsource any illegal processing to a wholly-owned subsidiary located outside the EU.
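      The parent's point about machine learning finding patterns in unlabeled data can be shown with a toy sketch. All data and numbers below are invented for illustration: even a bare-bones k-means, given no labels at all, will discover that these made-up "user behavior" vectors fall into two segments.

      ```python
      def kmeans(points, k, iters=10):
          # Deterministic init: spread the initial centers across the data set.
          centers = [points[i * len(points) // k] for i in range(k)]
          for _ in range(iters):
              # Assignment step: each point joins its nearest center's cluster.
              clusters = [[] for _ in range(k)]
              for p in points:
                  nearest = min(range(k),
                                key=lambda c: sum((a - b) ** 2
                                                  for a, b in zip(p, centers[c])))
                  clusters[nearest].append(p)
              # Update step: move each center to the mean of its cluster.
              for i, cl in enumerate(clusters):
                  if cl:
                      centers[i] = tuple(sum(dim) / len(cl) for dim in zip(*cl))
          return centers, clusters

      # Invented feature vectors: two obvious segments emerge, unsupervised.
      data = [(1.0, 1.2), (0.9, 1.1), (1.1, 0.8),
              (9.0, 9.2), (8.8, 9.1), (9.2, 8.9)]
      centers, clusters = kmeans(data, k=2)
      print(sorted(len(c) for c in clusters))  # two groups of three
      ```

      Real targeting systems run this kind of segmentation over millions of users and dimensions rather than six points, but the mechanics are the same.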

  • by timeOday ( 582209 ) on Tuesday April 13, 2021 @05:23PM (#61270100)
    My Karma! The Germans are coming for my precious Karma!!
  • by SuperKendall ( 25149 ) on Tuesday April 13, 2021 @05:43PM (#61270156)

    AI systems used to manipulate human behavior, exploit information about individuals or groups of individuals, used to carry out social scoring

    Please tell me this means Twitter is forced to shut down the normal feed, and offer only "Latest Tweets" which is 100% of the time what I want to see.

  • Because of the "Some public security exceptions would apply" clause, governments would exempt themselves from this ban. Would this ban apply to the vaccine passports being implemented by private companies?
    • Governments *always* exempt themselves; the only surprise anymore would be if they were to explicitly *not* exempt themselves.

  • These horrible communist EU big government totalitarians restraining the freedom of corporations to earn a simple profit.

    Next they'll put limits on the discharge of deadly pollutants. Perhaps even ban slavery!

    This is why Europeans are so unfree.

    I'll take my liberty with an expensive health insurance plan and a big gun. No MASKS! Also, no vax jabs. And I want a greasy burger with that.

    • Without a constitution limiting what this pseudo-government can do, the precedent can just as easily go the other way, to help the corporations instead. All it takes is a simple vote by a few unelected persons, and the law has been reversed.

  • All so-called 'AI' misuses the term anyway; they'll just replace all references to 'AI' in their documentation with something else.
  • > Some public security exceptions would apply.
    > ...
    > won't apply to AI systems used exclusively for
    > military purposes

    These are probably the most problematic. These sorts of technologies concern me far more in government hands than in corporate. A corporation, or any business really, is at its heart a very simple beast. All it wants is money. It could be my money, if it's trying to sell me something. Or it could be someone else's money, if it's taking theirs in exchange for advertising at

  • which is not social behaviour ranking because it's intentionally antisocial.

  • Is not "A.I".

    Not by a long shot.

  • by hoofie ( 201045 ) <mickey@MOSCOWmouse.com minus city> on Tuesday April 13, 2021 @09:18PM (#61270790)

    The EU is trying to set specific rules for its member countries.

    Please note that those member countries will basically ignore those rules if it suits them [France being a fantastic example of this].

    In the meantime the EU will royally shaft the companies involved UNLESS they are heavily involved in Defence/Security Technologies [which by an amazing coincidence tend to be French again].

    Luckily us plucky Brits got out so we are free to sell hyper-invasive systems and technology to every tinpot dictator and their dog, sorry, strategic overseas partners.

    p.s. I'm not having a specific pop at the French, it's always been well understood in security circles that the only priority for France is France first, EU second.

  • This may seem like a snide question, but would this apply to online games? Quite a bit of online gaming revolves around rankings. Would this make esports impossible in Europe? Rankings drive engagement with others. They don't *just* control standings. They control how others choose to interact with you.
  • "used to carry out social scoring"

    Failing to apply social scoring to disproportionately advantage some people and disadvantage others is considered racist by a large influx of people who have invaded technology.

  • But "mass surveillance and ranking social behavior" is already a multi-billion-dollar industry, even if we don't count online tracking and advertising.

    Perhaps under the new laws, Credit Rating Agencies will be grandfathered in?
    Repaying debts is modern society's version of the most fundamental social behaviour: reciprocation.

  • Legislating a buzzword has to be the funniest thing I've heard in ages.

  • I can see these companies screaming that they will anonymize the data. Don't even let them do this. This is one of those laws that needs to go way beyond into basically, "If you track any aspect of public behaviour or data, then you get punished far beyond any potential gains."
  • "You are responsible for all crimes committed by your AI".

    • Same rule as parents for their children. I like it.

      Though more correctly: Same rule as: "You are responsible for everyone killed by your gun!"
      Training it is the equivalent of aiming your gun. You choose the target. You launch the process too.
      (Implying that owner == keeper, of course)

  • E.g. by calling it "machine learning", or "tensor multiplication" or "universal functions" if all else fails. So everything I called it before.

    Which would even work, since it has never been AI.

    But hey, at least the direction of this is right.
