Consumer Group Calls On EU To Urgently Investigate 'The Risks of Generative AI' (techcrunch.com)
An anonymous reader quotes a report from TechCrunch: European regulators are at a crossroads over how AI will be regulated -- and ultimately used commercially and non-commercially -- in the region, and today the EU's largest consumer group, the BEUC, weighed in with its own position: stop dragging your feet, and "launch urgent investigations into the risks of generative AI" now, it said. "Generative AI such as ChatGPT has opened up all kinds of possibilities for consumers, but there are serious concerns about how these systems might deceive, manipulate and harm people. They can also be used to spread disinformation, perpetuate existing biases which amplify discrimination, or be used for fraud," said Ursula Pachl, Deputy Director General of BEUC, in a statement. "We call on safety, data and consumer protection authorities to start investigations now and not wait idly for all kinds of consumer harm to have happened before they take action. These laws apply to all products and services, be they AI-powered or not and authorities must enforce them."
The BEUC, which represents consumer organizations in 13 countries in the EU, issued the call to coincide with a report out today (PDF) from one of its members, Forbrukerradet in Norway. That Norwegian report is unequivocal in its position: AI poses consumer harms (the title of the report says it all: "Ghost in the Machine: addressing the consumer harms of generative AI") and raises numerous problematic issues. It highlights, for example, how "certain AI developers including Big Tech companies" have closed off systems from external scrutiny, making it difficult to see how data is collected or how algorithms work; the fact that some systems produce incorrect information as blithely as they do correct results, with users often none the wiser about which it might be; AI that's built to mislead or manipulate users; bias rooted in the information that is fed into a particular AI model; and security, specifically how AI could be weaponized to scam people or breach systems. [...]
The AI Act, when implemented, will be the world's first attempt to codify some kind of understanding and legal enforcement around how AI is used commercially and non-commercially. The next step in the process is for the EU to engage with individual countries in the EU to hammer out what final form the law will take -- specifically to identify what (and who) would fit into its categories, and what will not. The question will be how readily the different countries agree with one another. The EU wants to finalize this process by the end of this year, it said. "It is crucial that the EU makes this law as watertight as possible to protect consumers," said Pachl in her statement. "All AI systems, including generative AI, need public scrutiny, and public authorities must reassert control over them. Lawmakers must require that the output from any generative AI system is safe, fair and transparent for consumers."
Consider the benefits of generative AI (Score:2)
Before you ban a technology, consider its benefits.
ChatGPT provides a personal tutor for every student, if they know how to use it.
Banning this amazing aide that makes me 3x more effective as a teacher is a BAD IDEA.
Re:Consider the benefits of generative AI (Score:4, Insightful)
No one is suggesting banning ChatGPT, and using it as a "personal tutor" is a nightmare scenario. Why would we teach our students only what an ML system is already capable of learning? It's like limiting the mathematics we teach to only what calculators can do.
Especially delightful is the "if they know how to use it" part. What in the world does that even mean? The student is responsible for his own tutoring? Do you "know how to use it"? You think ChatGPT is "3x more effective as a teacher"? More than your teachers, certainly.
Stay far away from our kids, please.
Re: (Score:2)
No one is suggesting banning ChatGPT, and using it as a "personal tutor" is a nightmare scenario. Why would we teach our students only what an ML system is already capable of learning?
A nightmare? It's world-changing. Every child with internet access will soon have their own personal tutor for basic education, with infinite patience and understanding of where the child needs help. Teaching basic language and math skills, money management, etc. No more illiteracy. Every child fluent in not only their own language but any other languages they choose.
It will be a tremendous boon. Eventually of course, the computers take over, but that's going to happen no matter what.
Re:Consider the benefits of generative AI (Score:4, Insightful)
Every child with internet access will soon have their own personal tutor for basic education, with infinite patience and understanding of where the child needs help.
That's fantasy, not reality. ChatGPT has no mechanism by which it could evaluate a child's understanding. Hell, it doesn't 'understand' anything itself. [educationnext.org] It cannot review, analyze, evaluate, or adapt. It will confidently tell lies and tell more lies in their defense.
It is simply not suitable for use as a personal tutor, infinite patience or not.
Re: Consider the benefits of generative AI (Score:2)
Sorry, but your definition of "understanding" is inadequate and poorly thought-out. It will have to change.
Re: (Score:2)
You're going to have to do a lot better than that. Tell me, what is 'my' definition of understanding and how is it inadequate and poorly thought-out? What do you believe would be more appropriate? This ought to be good for a laugh.
Re: (Score:2)
Hell, it doesn't 'understand' anything itself. [educationnext.org] It cannot review, analyze, evaluate, or adapt.
A crappy open-source model recently wrote an epilogue [twitter.com] to The Great Gatsby that Fitzgerald's publisher would probably have accepted without raising an eyebrow.
What's exhibited here is understanding, by any meaningful definition of the word. The God-of-the-gaps argument always fails in the end.
Re: (Score:2)
Did you forget about this?
"Tell me, what is 'my' definition of understanding and how is it inadequate and poorly thought-out?"
You have nothing. You just can't stand that reality doesn't agree with your silly fantasy.
What's exhibited here is understanding, by any meaningful definition of the word.
That's complete and total nonsense.
You won't find the limitations of models like this from cherry-picked successes, but from their many, many failures. No reasonable person looking at the output would conclude that there is anything like understanding there. Anyone who knows how these models actually work would reach the same conclusion.
Re: (Score:2)
Translation: "LOL, your talking dog is a dumbass. The C++ code it wrote is full of security holes."
Expect pain.
Re: (Score:2)
Still can't answer then? Pathetic.
Re: (Score:1)
Every child with internet access will soon have their own personal tutor for basic education, with infinite patience and understanding of where the child needs help.
That's fantasy, not reality. ChatGPT has no mechanism by which it could evaluate a child's understanding.
It will. Not necessarily GPT, but something like it. Khan Academy is already building some tools.
Thinking it won't happen is the fantasy.
Re: (Score:2)
It will.
Pure fantasy. There is no rational basis for that belief. Quite the opposite, in fact.
Not necessarily GPT,
Definitely not.
but something like it.
Obviously not.
Khan Academy is already building some tools.
Computerized assessments are nothing new, if that's what you mean. That capability, however, is far beyond what can be done with something like ChatGPT. Models like that will never be able to identify a student's deficiencies and provide remedial instruction. That's just not how they work.
Re: (Score:2)
Every child with internet access will soon have their own personal tutor for basic education
Correction, every child will receive an inadequate, homogeneous, impersonal education, geared towards the needs of the state (starting with reducing taxes) and not the child. Children with parents wealthy enough to afford human teachers will receive a good education.
Re: (Score:2)
A personal tutor knows the material. ChatGPT knows nothing. It produces sequences of text that sound like a response to things that sound like its prompt. If you provide a prompt to regurgitate relatively common information, it will usually produce a response that's mostly factual. Usually. If you give it a prompt to regurgitate rare information, it will produce a response that sounds like it's factual but is actually bullshit.
For some subjects, this is okay up to the undergrad level. For others, it's going to break down well before that.
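You can see this for yourself in a few lines of Python. Rough sketch only, assuming the official openai client package and an OPENAI_API_KEY in your environment; the model name is just an example and the second book title is deliberately made up:

# Illustrative sketch: the same model answers a common question and an obscure one.
# Requires: pip install openai, and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

def ask(question: str) -> str:
    # Send one user message and return the model's text reply.
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # example model name
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

# Well-covered information: the answer is usually factual.
print(ask("Who wrote The Great Gatsby?"))

# Obscure (here, invented) information: the reply often sounds just as confident,
# but a real tutor would say the book doesn't exist. Either way you have to check
# the output against an actual source before trusting it.
print(ask("Summarize the 1923 novel 'The Glass Harbor' by E. R. Whitcombe."))

The point isn't that the answers are always wrong; it's that nothing in the output tells you which kind of prompt you just gave it, so a student can't treat it as ground truth.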
Re: (Score:2)
A personal tutor knows the material. ChatGPT knows nothing. It produces sequences of text that sound like a response to things that sound like its prompt. If you provide a prompt to regurgitate relatively common information, it will usually produce a response that's mostly factual. Usually. If you give it a prompt to regurgitate rare information, it will produce a response that sounds like it's factual but is actually bullshit.
Sal thinks the know-nothing next-word predictor is good enough for Khan Academy.
Re: Consider the benefits of generative AI (Score:2)
Gee. If only things like this got better with time and further development.
Your ilk would have cancelled the Commodore 64 because it couldn't run Crysis.
Re:Consider the benefits of generative AI (Score:4, Interesting)
ChatGPT provides a personal tutor for every student, if they know how to use it.
I really hope you're kidding. Not only will it lie, it will confidently argue that its lies are really true. It also has no way to evaluate a student's understanding, identify deficiencies, or adapt materials. If educational malpractice was a thing, using ChatGPT as a tutor would be just that.
Banning this amazing aide that makes me 3x more effective as a teacher
... no one could possibly be that bad ... You'd make ultra-conservative third-generation home school moms look like Maria Montessori.
Take a look at this article [educationnext.org] and please rethink your use of ChatGPT as a tutor. This is one area where it really is dangerous.
"... filling the screen with text that is fluent, persuasive, and sometimes accurate -- but it isn’t reliable at all. ChatGPT is often wrong but never in doubt."
Re: (Score:1)
I really hope you're kidding. Not only will it lie, it will confidently argue that its lies are really true. It also has no way to evaluate a student's understanding, identify deficiencies, or adapt materials. If educational malpractice was a thing, using ChatGPT as a tutor would be just that.
So do humans. My son had several teachers who taught him crap and argued it was true until it was pointed out to them, with evidence, that they were speaking bollocks.
Jeeze (Score:4, Interesting)
Jesus Christ, man, AI is nowhere close to being able to even mimic human reasoning ability. Squashing it this early makes no sense. Not when we need AI to solve specific tasks like operating a kitchen or picking and placing loose items. We still can't even replace a lot of repetitive manual labor with robotics and AI. When robotics has replaced jobs like restaurant cook, Amazon sorter, or Uber driver, THEN we can talk about AI dangers. Right now AI is dumber than a bacterium.
Re:Jeeze (Score:5, Insightful)
Squashing it this early makes no sense. [...] When robotics has replaced jobs like restaurant cook or amazon sorter or Uber driver
I see Facebook, Amazon, and Uber, and I think we had better regulate them early. If you let them grow, they become too big to fail. Try to regulate them later, once they're big, and they'll complain you're destroying jobs and the innovations they struggled for a decade to bring you. They'll appeal to the public to explain how Uber has created such a wonderful world, and what a pity it would be to regulate it now that it has perfected Western society, and other such hypocrisies. Amazon did not invent online sales, and our lives would be the same without Amazon. They apparently did invent the one-click purchase, and our lives would actually be better without that invention. I'd also go back in time and happily squash Uber in the egg. It's not like taxi companies would never have thought of an app of sorts to book their services.
These systems? (Score:3)
"...but there are serious concerns about how these systems might deceive, manipulate and harm people."
We already have enormous propaganda generators and infinite pipelines for delivering damaging messaging. The ability to deceive and harm people is already a massive problem; generative AI doesn't change that at all. If governments were interested in fixing it, they would be doing so already. The threat is not generative AI, which is merely a new way to generate damaging messaging. The threat is the people who would exploit it and the means they would use to deliver it.
Re: (Score:2)
Guaranteed they'll make it worse.
Re: (Score:3)
Generative AI makes building a fake news factory much easier and lower cost.
In the past if you wanted to create a fake news org, you had to have humans write a lot of stories, or steal them and hope the copyright lawyers don't notice. Maybe photoshop some fake photos for social media.
Now you just ask AI to generate you a story and photo of Donald Trump being piled on by cops, and hit publish.
Social media companies will have a hard time blocking it, because you can produce new posts and images in seconds to replace whatever gets taken down.
Re: (Score:2)
AI is taking the already difficult job of debunking bullshit generated by humans and making the problem several orders of magnitude worse. It has never been so convenient and easy to make up plausible-sounding bullshit to poison the information space and make sure nobody knows the truth (the Russian propaganda style).
soo... (Score:2)
>has opened up all kinds of possibilities for consumers, but there are serious concerns about how these systems might deceive, manipulate and harm people. They can also be used to spread disinformation, perpetuate existing biases which amplify discrimination, or be used for fraud," said Ursula Pachl, Deputy Director General of BEUC, in a statement.
So... exactly the same as twitter, facebook, instagram, tiktok, and any other "social media" or "news site" these days?
What scares me more? (Score:2)
Regulating software / speech because of what hypothetically someone could possibly do with it or bad robots / ChaosGPT?
I'm going to go with the regulation.
Will the EU regulate predictive text? (Score:2)
Potentially people could use an AI-controlled predictive text keyboard to send out a misleading tweet. Heaven forbid.
Maybe the EU will regulate predictive text keyboards so that they can only type truthful, unbiased, hate-free sentences ...
Re: (Score:3)
How do you think predictive text works?
Re: (Score:2)
How do you think predictive text works?
It predicts the next word in the sequence, using an algorithm trained on lots of data. How do you think large language models work?
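For anyone who hasn't looked under the hood, here's a toy sketch of that idea in Python: a bigram "next word" predictor trained on a tiny made-up corpus. Real keyboards and LLMs use far larger models and neural networks, so treat this as an illustration of the principle, not of either product:

# Toy bigram "next word" predictor -- illustration only.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which words follow which word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Most frequent follower seen in training, like a keyboard suggestion.
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

def sample_next(word):
    # Sample a follower in proportion to its frequency, like generative sampling.
    counts = following.get(word)
    if not counts:
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights, k=1)[0]

print(predict_next("sat"))  # "on"
print(sample_next("the"))   # "cat", "mat", "dog", or "rug"

An LLM replaces the lookup table with a neural network trained on vastly more text, but the training objective is the same: predict the next token.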
Too late .... (Score:2)
There are several open-source GPTs already out there ... it's far, far too late to try to regulate this now. But regulation would not work anyway, and is probably not needed.
I urge the EU to investigate the risks of "consume (Score:1)
I urge the EU to investigate the risks of "consumer groups."
These unregulated, dangerous, possibly terrorist-supporting groups have to be stopped.