

AI Systems With 'Unacceptable Risk' Are Now Banned In the EU
AI systems that pose "unacceptable risk" or harm can now be banned in the European Union. Some of the unacceptable AI activities include social scoring, deceptive manipulation, exploiting personal vulnerabilities, predictive policing based on appearance, biometric-based profiling, real-time biometric surveillance, emotion inference in workplaces or schools, and unauthorized facial recognition database expansion. TechCrunch reports: Under the bloc's approach, there are four broad risk levels: (1) Minimal risk (e.g., email spam filters) will face no regulatory oversight; (2) limited risk, which includes customer service chatbots, will have a light-touch regulatory oversight; (3) high risk -- AI for healthcare recommendations is one example -- will face heavy regulatory oversight; and (4) unacceptable risk applications -- the focus of this month's compliance requirements -- will be prohibited entirely.
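The four-tier scheme in the summary can be sketched as a simple lookup. This is an illustrative mapping only; the example applications and their tiers come from the summary above, not from the Act's own annexes, and real classification is far more involved than a dictionary.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = 1       # e.g. email spam filters: no regulatory oversight
    LIMITED = 2       # e.g. customer service chatbots: light-touch oversight
    HIGH = 3          # e.g. healthcare recommendations: heavy oversight
    UNACCEPTABLE = 4  # e.g. social scoring: prohibited entirely

# Hypothetical examples drawn from the summary; actual classification
# depends on the Act's annexes, not a lookup table like this.
EXAMPLES = {
    "email spam filter": RiskTier.MINIMAL,
    "customer service chatbot": RiskTier.LIMITED,
    "healthcare recommender": RiskTier.HIGH,
    "social scoring system": RiskTier.UNACCEPTABLE,
}

def is_prohibited(application: str) -> bool:
    """True only for the unacceptable-risk tier."""
    return EXAMPLES.get(application) is RiskTier.UNACCEPTABLE
```

The key structural point is that only tier 4 is a ban; tiers 1 through 3 are graduated oversight.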
Companies that are found to be using any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered. They could be on the hook for up to ~$36 million, or 7% of their annual revenue from the prior fiscal year, whichever is greater. The fines won't kick in for some time, noted Rob Sumroy, head of technology at the British law firm Slaughter and May, in an interview with TechCrunch. "Organizations are expected to be fully compliant by February 2, but ... the next big deadline that companies need to be aware of is in August," Sumroy said. "By then, we'll know who the competent authorities are, and the fines and enforcement provisions will take effect."
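The "whichever is greater" fine cap described above reduces to a one-line formula. This is a sketch using the summary's ~$36 million figure; the Act itself states the cap in euros, so treat the constant as an approximation.

```python
def max_fine_usd(prior_year_revenue_usd: float) -> float:
    """Upper bound on the fine for prohibited AI practices:
    the greater of ~$36M or 7% of the prior fiscal year's revenue.
    The fixed cap is the summary's approximate USD figure."""
    FIXED_CAP_USD = 36_000_000
    return max(FIXED_CAP_USD, 0.07 * prior_year_revenue_usd)

# For a company with $2B in prior-year revenue, the 7% prong dominates:
print(max_fine_usd(2_000_000_000))  # 140000000.0
```

Note that the revenue-based prong means the ceiling scales with company size, so the fixed cap only binds for smaller firms.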
LOL (Score:3)
Bye, Tesla "FSD".
Not that you were really legal to use in much of the EU anyway.
Re: (Score:2)
The only things worse than actual rsilvergun posts are acs making fake rsilvergun posts.
Re: (Score:2)
Bye, Tesla "FSD".
Not that you were really legal to use in much of the EU anyway.
Why would FSD be affected by this? It doesn't fit any category that you cover. As a side note, there have been no FSD-related fatalities in the EU. I personally welcome the idea of idiots beta-testing non-production-ready lethal tools elsewhere. And it is a beta test; even Tesla uses that word.
FSD (Score:2)
Why would FSD be affected by this? It doesn't fit any category that you cover.
FSD can have fatal consequences (crashes into cars, bikes, and pedestrians, or crashes into the environment that are fatal to the person at the wheel).
Thus in Europe we will likely consider this category 3, "high risk," from the summary, and regulate it heavily.
This will very probably demand more than Tesla's cavalier "move fast and break things" method can guarantee.
(Elon wanting to only use a couple of cameras to save a few pennies on what is to him basically a marketing bullet point to extract even more r
Re: (Score:2)
Even if FSD falls into this category, it won't be banned (what the OP postulated with this comment). In fact the requirements for high risk AI systems seem to be no more (if not less) stringent than those already in place for self driving systems under existing legislation for cars.
So again, not impacted. Everyone likes to talk about Tesla's move fast and break things method, but they don't have (and never had) the option of ignoring regulatory requirements. The requirements for high risk systems basically
Re: (Score:2)
The rankings are basically how humanity benefits versus the downsides.
Levels 1, 2 and 3 offer some benefit but take the risks into account. Level 4 is basically "AI offers no benefits." Things like social scoring are really just code for "discrimination against the poor," or other such algorithm washing, which leads to people having their lives altered simply because a computer said so, with no room to appeal.
So pre-crime prediction, or cheater detection, or even things like facial recognition basically fal
Re: (Score:2)
Yeah, because putting an (illegal to activate in some EU countries) beta test AI driver onto motorways doesn't sound like a "high risk" thing whatsoever under this legislation, right? Or are you trying to claim it's not AI or that it's not a high risk to put an AI in charge of a car at motorway speeds?
FYI:
https://www.shop4tesla.com/en/... [shop4tesla.com]
Basically some parts of Europe are letting you use it like a limited cruise control in some circumstances, but it's basically NOT like the US version at all and requires a
Re: (Score:2)
I don't care if you want FSD or not. I think the idea of getting a "taxi" to anywhere with no human contact is pretty cool, in fact.
But Tesla FSD is not FSD. It's still in Beta. It's not compliant with EU regulations (and can't be in its current form, much like the CyberTruck), it's being missold (which is its most dangerous aspect of all because people then USE it like it's FSD when it's not), etc. etc. etc.
Dozens of other manufacturers have properly-monikered devices approaching various levels of vehic
Re: (Score:2)
I've used FSD. It seems to work as well as virtually all driver-assist systems, except that it steers around corners as well. It seems to work more poorly than every actual attempt at autonomous driving (which is why it's not certified as such). It is classified as Level 2 self-driving, despite what Musk promised.
There is one street legal level 3 self driving system, but it's not in a Tesla car (look up a certain German manufacturer if you're interested).
But by all means you do you. I'm glad to live in a country w
Re: (Score:2)
Indeed it's high risk. High risk is permitted under this legislation, not banned. High-risk systems don't get classified as unacceptable risk, and on top of that the actual documentation requirements for them are likely less stringent than those already imposed on assistive driving under automotive regulations in Europe.
Except that's still you guessing. Why not look at the list instead: Since a car isn't infrastructure, where do you think it would appear in this official list: https://artificialin [artificial...enceact.eu]
One please! (Score:3, Funny)
Re: (Score:1)
Can somebody point me at one such unacceptable risk model? Now that they banned this, i definitely need to have one
Well from TFS:
Re: (Score:2)
Can somebody point me at one such unacceptable risk model? Now that they banned this, i definitely need to have one
Well from TFS:
By the looks of it, it's not AI per se but applications of AI. Yeah, healthcare recommendations need to be very heavily regulated for people, so of course they need to be for AI too.
What's worse? I dunno cueing nukes from AI? Do you really want a nuclear armed AI just because the EU thinks that's a bad idea? OK yes you want one, but do you want your neighbour to have one?
Looks like something targeted at industry, not individuals. I suspect industries like defence would be heavily restricted in using commercial AI, no one really wants ChatGPT integrated into a CIWS. This isn't a new thing for Defence, there's a lot of restrictions on using COTS already.
Profiling (Score:3)
Can somebody point me at one such unacceptable risk model?
Giving a dead serious answer to a joke question (I want to have the banned thing-joke):
The kind of AI application for predicting crime that the US has experimented with will most likely be completely banned here in the EU.
These have turned out to devolve into racial profiling (just indirectly, inferring the profile from other markers).
AI based surveillance for remote work (ends up discriminating against the poor).
AI based anti-cheat (among other things: ends up discriminating against non-native speakers).
etc.
Basically a
Re: (Score:1)
When is the EU going to ban your comments as "shitty tech"? For example, what does "AI based anti-cheat" have to do with language skills?
It's also funny that you blame the fact that race is an extremely good predictor of criminal recidivism, to the point that non-race surrogates end up looking like proxies for race, on racism rather than reality. You sound like you want to impose your view of race dynamics on the world in spite of actual facts. That's what we usually call racist.
Re: (Score:2)
what does "AI based anti-cheat" have to do with language skills?
That's an easy one. Training data is mostly based on native-speaker writing. Even if a small portion comes from second-language English speakers, that's not enough to balance things out. A non-native writer will write things that look "non-human" from the perspective of a model trained that way.
Re: (Score:2)
what does "AI based anti-cheat" have to do with language skills?
LMGTFY [google.com]
First result: AI-Detectors Biased Against Non-Native English Writers [stanford.edu]
Quoting:
"While the detectors were 'near-perfect' in evaluating essays written by U.S.-born eighth-graders, they classified more than half of TOEFL essays (61.22%) written by non-native English students as AI-generated (...). According to the study, all seven AI detectors unanimously identified 18 of the 91 TOEFL student essays (19%) as AI-generated and a remarkable 89 of the 91 TOEFL essays (97%) were flagged by at least one of the d
Biases (Score:2)
You sound like you want to impose your view of race dynamics on the world in spite of actual facts.
^---: This behaviour is exactly what the EU wants to avoid -- people who mistake the output of an AI model for truth just because it came out of a computer. That it was spat out by a model doesn't necessarily mean it's a fact; it could also be reflecting bias in the input data. That's where the expression "algorithm washing" comes from: you feed heavily biased data to train a model, the model learns the bias and reproduces it, but now that "it's AI," suddenly people start to trust it.
It's also funny that you blame the fact that race is an extremely good predictor of criminal recidivism
Race isn't a good predictor of
Re: (Score:1)
Also, Tesla "full self driving" which is unsafe. And AI to automate weapons outside of limited military applications.
Addictive AIs might also fall into this category. An AI designed to encourage gambling, or a sexbot that manipulates the victim into becoming a cash cow.
Potentially financial AIs could qualify as well, if they are able to cause the market to crash or stock prices to become too volatile.
Re: (Score:2)
Google Search. AI is regurgitating the commonly accepted answers from the internet. Anything it says can be found with a search engine.
Re: (Score:2, Flamebait)
Can somebody point me at one such unacceptable risk model? Now that they banned this, i definitely need to have one
I think the risk is based on how much money the EU can extract.
Making something punishable by fines does not make it illegal. It is pecuniary extraction. If they wanted to do more than simply use others as an income stream, they would hold full-blown criminal trials and put the offending 'Murricans in prison, or maybe send them on some of the one-way camping trips the Euro-elites are fond of.
Re: EU wants WOKE, anti-male, anti-white AI (Score:2)
Re: (Score:3, Insightful)
Re: Overregulation (Score:2)
And I thought it was just a movie.
How naive of me.
The EU loves regulations (Score:2, Informative)
Strangle a new technology with regulations. Then wonder why EU tech companies can't compete.
It's a mystery...
Re: (Score:3, Informative)
Re: (Score:2)
Not in any way covered by these new regulations.
Re: (Score:1)
Re: (Score:2)
Sure, none are large like Microsoft, Alphabet, Meta, but then in Europe they start treating cancer before the growth reaches intercontinental proportions.
Also, you speak as if it's something to be proud of that such large, influential companies are American. Quite the contrary, if you ask me.
Re:The EU loves regulations (Score:5, Insightful)
remind me again which company from the EU has any relevance whatsoever
All those Trump wants to tariff because their products are better and cheaper than their US counterparts, and US companies cannot compete fairly with them, hence their utter need for the nanny US State protecting them.
What is it you say? That this isn't the case? That American products are better and cheaper already? Oh. Well, then nothing EU-made should sell in the US market to begin with, at all. So why the tariffs?
Re: (Score:2)
remind me again which company from the EU has any relevance whatsoever ... I'll wait.
Airbus
Re: (Score:2)
AI can't perform the most basic of tasks. AI is strangling itself.
Re: (Score:2)
That's just so wrong it's silly.
There are things that we think of as basic tasks that AI can't do, or at least can't do well. There are lots of others where it does the basic tasks better than any human can, and it's only the advanced level that it can't do.
For example, ChatGPT can write a sentence faster than you can. But writing a sentence that makes sense in context is something it might fail at.
Re: (Score:2)
Don't worry.
We will innovate these technologies in the USA and China without restrictions*, giving non-EU businesses and governments the advantage over their EU counterparts. We will sell sanitized/restricted versions to EU businesses. We will sell tools based on non-compliant versions to EU governments (who will exempt themselves from compliance on "national security" basis.)
Imagine if Europe had opted out of semiconductor development because they could be misused...
*USA will try to restrict development
Re: (Score:2)
(3) high risk — AI for healthcare recommendations is one example — will face heavy regulatory oversight; and
(4) unacceptable risk applications — the focus of this month’s compliance requirements — will be prohibited entirely.
Fuck off, this is exactly what everyone has been wanting. Risky applications of AI to be regulated, not AI research.
From head in the sand AI skeptics to moonshot general intelligence researchers, nobody wants unregulated AI healthcare services. Period.
Nobody wants AI-directed municipal retirement funds either. That's easily something for the fourth category.
DLSS (Score:2)
I hope this also means no more fake generated frames for video games.
Of course! (Score:2, Informative)
Electricity too expensive to train AI models? Ban them, citing yet another crop of woke nonsense.
Europe is a soft totalitarian mega state now. The unelected Brussels sprouts who stagnated the economy for almost 20 years are doubling down. But ask yourselves, you smug elitists: why does anyone care what the EU thinks and does? Ah, because it is economically powerful... for NOW! Do these morons realize that their smug circle jerk of "global importance" is fast disappearing as Europe paints itself into irrelevance? That the only reason they matter is because of what was built and created by those before them?
Re: (Score:2)
Ah, yes, right-wing US half-wits circle-jerking to their own exceptionalism while the US president puts a not-domestic-goods tax on most of the goods brought into the country: Brilliant.
Your president's staring contest worked with China, Mexico, Canada and won't work with the entire EU. You've just revealed why the EU is important.
I don't think that staring contest worked with China either. What it might do is cause the EU to reconsider making some form of trade agreement with China in order to de-risk from the US due to how politically unstable it has become.
Re: (Score:3)
Electricity too expensive to train AI models? Ban them, citing yet another crop of woke nonsense.
Europe is a soft totalitarian mega state now. The unelected Brussels sprouts who stagnated the economy for almost 20 years are doubling down. But ask yourselves, you smug elitists: why does anyone care what the EU thinks and does? Ah, because it is economically powerful... for NOW! Do these morons realize that their smug circle jerk of "global importance" is fast disappearing as Europe paints itself into irrelevance? That the only reason they matter is because of what was built and created by those before them?
Any politician in power in the entire EU in the last 25 years should be in jail.
I don't know that it is a woke action. More like the EU is regulating every second of its citizens' lives -- or, to be more accurate, is well on its way there, with that as a goal.
And while I don't care if they desire to turn themselves into EuroAmish, it's annoying AF when they demand the rest of the world osculate their anus.
Re:Of course! (Score:5, Informative)
Europe is a soft totalitarian mega state now.
You might want to read the news. https://www.nbcnews.com/politi... [nbcnews.com]
Trump floats foreign imprisonment of American criminals who are 'repeat offenders'
The unelected Brussels sprouts
Speaking of the unelected, https://abcnews.go.com/US/trea... [go.com]
Treasury Dept. gives Elon Musk's team access to federal payment system: Sources
This is terrifying on every level.
The EU is..... (Score:2)
Re: (Score:2)
Re: (Score:2)
Better than "move fast and break things" - for the majority of the population at least. America is on the forefront of innovation for a lot of things, can't deny that, but a lot of the time it feels like doing constant software updates on tools that you use to work and meet deadlines at the societal level, with a success handling policy of privatise profits and socialize losses.
To my way of thinking, though: imagine a dangerous man was trying to date your daughter. Is the proper procedure to intervene and tell him to slag off, or do you want him to pay you money?
I am perhaps not very smart -- and there are plenty here to make that claim -- but if something doesn't meet my standards, or is harmful, I get rid of it or do not use it; I do not demand money. The EU apparently prefers to profit, which to my way of thinking - they hand out a lot of fines that makes the money they collect a ver
Re: (Score:2)
Re: (Score:3)
No they haven't. In fact, the reason you think they have just shows you haven't even looked at what this is about. Most of the classifications for risk in AI systems relate to government use, including policing, migration, border control, and justice administration.
American AI (Score:2)
We're already hooking it up to sentry guns. Because nobody seems to give a fuck here.
Re: (Score:2)
I think that's probably an extreme legal risk. Even those against regulating people using guns are likely to be against that. And juries have been known to vote their biases.
Re: (Score:2)
I wish long-term planning to avoid legal liability was part of business culture here.
But the entrepreneurial culture here sets those kinds of concerns aside when making important inventions like a ChatGPT-powered sentry gun. It mostly shoots the color of balloon [indianexpress.com] you ask, occasionally popping innocent bystanders.
Great idea in theory (Score:2)
In practice, it won't stop the bad guys, it will just make life harder for the good guys
Top researchers will find workarounds or leave the area
Bans don't work. See the drug war for an example
Re: (Score:2)
Which bad guys would use social credit scores? And do you think people operating at the same level as a drug gang would build AI systems to collect facial recognition databases?
Thank god they don't apply those standards... (Score:2)
If they did, human life would become illegal in the EU.