
Humanity At Risk From AI 'Race To the Bottom,' Says MIT Tech Expert (theguardian.com) 78
An anonymous reader quotes a report from The Guardian: Max Tegmark, a professor of physics and AI researcher at the Massachusetts Institute of Technology, organized an open letter published in April, signed by thousands of tech industry figures including Elon Musk and the Apple co-founder Steve Wozniak, that called for a six-month hiatus on giant AI experiments. "We're witnessing a race to the bottom that must be stopped," Tegmark told the Guardian. "We urgently need AI safety standards, so that this transforms into a race to the top. AI promises many incredible benefits, but the reckless and unchecked development of increasingly powerful systems, with no oversight, puts our economy, our society, and our lives at risk. Regulation is critical to safe innovation, so that a handful of AI corporations don't jeopardize our shared future."
In a policy document published this week, 23 AI experts, including two modern "godfathers" of the technology, said governments must be allowed to halt development of exceptionally powerful models. Gillian Hadfield, a co-author of the paper and the director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto, said AI models were being built over the next 18 months that would be many times more powerful than those already in operation. "There are companies planning to train models with 100x more computation than today's state of the art, within 18 months," she said. "No one knows how powerful they will be. And there's essentially no regulation on what they'll be able to do with these models."
The paper, whose authors include Geoffrey Hinton and Yoshua Bengio -- two winners of the ACM Turing award, the "Nobel prize for computing" -- argues that powerful models must be licensed by governments and, if necessary, have their development halted. "For exceptionally capable future models, e.g. models that could circumvent human control, governments must be prepared to license their development, pause development in response to worrying capabilities, mandate access controls, and require information security measures robust to state-level hackers, until adequate protections are ready." The unrestrained development of artificial general intelligence, the term for a system that can carry out a wide range of tasks at or above human levels of intelligence, is a key concern among those calling for tighter regulation. Further reading: AI Risk Must Be Treated As Seriously As Climate Crisis, Says Google DeepMind Chief
Well now (Score:4, Insightful)
If all it takes for AI systems to become a marvelous plague upon their human designers is an underwhelming amount of insightful government attention, well, we are already doomed.
Our fearless leaders have already won their race to the bottom.
Re: (Score:2)
Maybe that's the point. Decisive action now to prevent us, all of us, from getting into that position. Something along the lines of the nuclear non-proliferation treaty.
Read Neuromancer (Score:3, Insightful)
Re: (Score:3)
You're thinking short term. After we "win", everyone loses.
Re: (Score:3)
Re: (Score:3)
In other words, everyone loses either way.
Re: (Score:2)
Re: (Score:2)
I'd love to, but I think I'm not invited to play; the best we may hope for is a spectator seat in the nosebleeds.
Re:Read Neuromancer (Score:4, Insightful)
Re: (Score:2)
Re: (Score:2, Insightful)
Believing with certainty that everyone loses is like believing Roko's basilisk or The Terminator is real.
Re:Read Neuromancer (Score:5, Interesting)
Re: (Score:3)
Re: (Score:2)
The issue with most AI solutions today is the lack of ability to demonstrate how the AI arrived at its conclusion. That makes bad information exceedingly difficult to find and filter out of datasets, especially the longer it is present, as AI builds off existing datasets to create new ones.
This, I believe, is the goal of this misguided albeit well-intended hiatus. The main issue is that I don't believe there is going to be any change in this status in 6 months, so effectively it is to all
Re:Read Neuromancer (Score:4, Insightful)
It's pissing into the wind. Who is going to comply with a six month hiatus? They might move AI work to a "black ops" status or site, but it's not simply going to go away. Then the six months will expire, and all accumulated changes that couldn't be implemented during the hiatus will be applied simultaneously, and that sounds like an invitation to disaster.
Re: (Score:2)
Re: Read Neuromancer (Score:1)
Once AGI arrives, we'll all be force-fed robot pron!
Re: (Score:2)
I have yet to see evidence of capabiility (Score:3, Insightful)
When it comes to reasoned decision making, the chatbots seem completely mindless. Sentence construction is fine but the logic is always busted. They're still Eliza bots.
To me, the only risk is stupidity of expectations.
AI is old news (Score:1)
If he thinks AI is bad, he should try using Alexa some time.
But seriously, we know they've been using these types of AI systems internally with zero regulation for decades. Their problem is that normal people will have access to this kind of automated power. Yet individuals will be held responsible while corporations are given months to come up with an alibi when their expert systems decide to poison or screw millions of customers.
Re: (Score:2, Insightful)
It's easy to make an Eliza bot. 500 lines of javascript [njit.edu] for a fun but dumb pattern matching script.
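You don't even need 500 lines; a toy version of the trick fits in a couple dozen lines of Python. This is a from-scratch sketch with made-up rules of my own (not the script linked above), just to show how shallow the pattern matching is:

import random
import re

# Eliza-style rules: a regex plus canned replies; {0} is filled with whatever
# the wildcard captured. That's the whole "intelligence."
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would getting {0} really help you?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"(.*) mother (.*)", ["Tell me more about your mother."]),
    (r"(.*)", ["Please go on.", "I see.", "What does that suggest to you?"]),
]

def respond(text):
    """Return a reply from the first rule whose pattern matches."""
    cleaned = text.lower().strip().rstrip(".!?")
    for pattern, replies in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return random.choice(replies).format(*match.groups())

print(respond("I need a vacation"))     # e.g. "Why do you need a vacation?"
print(respond("I am sick of AI hype"))  # e.g. "How long have you been sick of ai hype?"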
Comparing Eliza to cutting-edge models like GPT or BERT is akin to comparing a basic pocket calculator to a supercomputer. Eliza's rudimentary pattern matching is reminiscent of using an abacus, a far cry from the advanced algorithms we see today. Modern AI transformer architectures, built on deep learning foundations, possess the precision and intricacy of a Swiss watch. They're anchored in layers of neural
Re: (Score:2)
The assertion is that ChatGPT is just as mindless as Eliza.
Re: (Score:2)
The assertion is that ChatGPT is just as mindless as Eliza.
Let's call it "the fact of the matter", because it is.
Re: (Score:2)
LLMs are stupid in the sense that they are good at working with language but terrible at understanding the world. Which only makes sense, since they were trained on language. It's frankly amazing that being good at language yields correct results outside of the language domain as often as it does. But if your job is threatened by an algorithm that gets it right 70% of the time, your job wasn't very secure to begin with.
The comparison to Eliza is unfair though: Eliza only works with the information in the co
Re: (Score:2)
Same here. Having followed AI research for 35 years now, I also _know_ these things are completely mindless. LLMs have no reasoning ability at all.
Is there anything specific they have in mind? (Score:2, Interesting)
I'm out of the loop. I understand there are many people calling for regulations and for giving governments that regulatory power. What regulations do they mean? I don't mean "they want government licensing"; I mean, are there any specific, concrete things AI is being used for that must be stopped?
"Halts" are not going to happen (Score:4, Insightful)
THE SKY IS FALLING!!!! (Score:4, Insightful)
THE SKY IS FALLING!!!! THE SKY IS FALLING!!!! WE'RE ALL DOOMED!!! DOGS AND CATS WILL BE LIVING TOGETHER, HELLFIRE AND BRIMSTONE WILL RAIN DOWN FROM THE HEAVENS!!! SOMEONE WILL KICK YOUR CHILDREN AND GIVE YOU A WEDGIE!!!
And the only possible way to save yourselves . . . give us money, and complete control over this new industry so that we can make certain nobody else can get a piece of the action.
Yawn.
What we need to do is stop believing the bullshit in the press releases, and stop taking AI seriously until it's worth taking seriously. For what passes as AI now - fancy autofill algorithms - if we ignore it, it really will go away. And there's no reason to believe that will change any time soon.
Remember NFTs? Remember cryptocurrency? Remember cold fusion? Remember flying cars? Or any of a hundred other fads that were going to change/destroy/improve the world with the snap of a finger?
If any of these morons actually believed what they're shoveling, they'd be taking a different approach. They're not afraid of AI; they just want to control the industry. Same as every other huge, market-dominating company.
Re: (Score:2, Insightful)
It was taken out behind the barn of the left, and shot in the head.
Re: (Score:2)
For what passes as AI now - fancy autofill algorithms - if we ignore it, it really will go away.
If you think that Large-Language Models like ChatGPT are going to "go away", you are very seriously deluded.
And yes, LLMs do function like a type of "fancy autofill", but that does not prevent them from being powerful.
Re: (Score:1)
They're the next step in autofill for search engines. That's all they are, and the results are, so far, not all that accurate.
There's certainly a market for that. A big market. That's why Google and Microsoft want to (continue to) control it with monopoly power. Because it's the search engine market they already make billions from, and they do not like competition.
Anybody calling it "artificial intelligence" should be sued for false advertising, except there's case law that says an ad claim so ridiculous th
Re:THE SKY IS FALLING!!!! (Score:5, Interesting)
You are clearly someone who hasn't used ChatGPT. It is far more than fancy autofill. Even my wife uses it to draft emails when she needs to broach emotionally charged subjects with an employee who is misbehaving. She of course reviews the final outcome, but it saves a whole lot of time and typing.
Flash forward to me: I needed to write a PowerShell script to lock down some specific IIS extensions across many servers. ChatGPT wrote my script in 30 seconds; I tweaked it with environment-specific info on my own computer, because we aren't going to give OpenAI any sensitive info.
Another project: taking an arp table and telling me how many IPs are in which subnets. I have over 200k IPs; it took an hour to write with GPT and then another 30 minutes to tweak. It saves a tremendous amount of time. As always, it's trust but verify: as with a script I wrote myself, I test it out in my lab first and make sure it doesn't do anything unexpected.
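For flavor, the core of that kind of subnet tally is only a dozen lines of Python. This is a from-memory sketch under my own assumptions (arp -a style text on stdin, grouping into /24s), not the actual script GPT wrote:

import ipaddress
import re
import sys
from collections import Counter

IP_RE = re.compile(r"\b(\d{1,3}(?:\.\d{1,3}){3})\b")

def tally_subnets(lines, prefix=24):
    """Count how many distinct IPs from an arp dump fall into each /prefix net."""
    ips = set()
    for line in lines:
        for candidate in IP_RE.findall(line):
            try:
                ips.add(ipaddress.IPv4Address(candidate))
            except ipaddress.AddressValueError:
                continue  # the regex also matches junk like 999.1.1.1
    return Counter(ipaddress.ip_network(f"{ip}/{prefix}", strict=False) for ip in ips)

# Usage: arp -a > arp.txt ; python tally.py < arp.txt
for net, count in sorted(tally_subnets(sys.stdin).items()):
    print(f"{net}: {count} IPs")

The trust-but-verify part is exactly as described: run it against a lab dump where you already know the answer before pointing it at 200k production IPs.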
Lawyers have even used it to draft depositions; some are stupid and don't double-check to make sure they are citing actual precedents.
There are no confidence meters for any answers, and because these models can't cite or otherwise tell you how they arrived at a conclusion, their uses are largely limited to providing a pretty good starting point.
Do yourself a favor, try out these solutions.
Re: (Score:1)
Re: THE SKY IS FALLING!!!! (Score:1)
I'm reading a Slashdot discussion about AI. One of the posts is the following - please critique the reasoning displayed within and provide a response which I will post as a reply. The response should be in the style of someone on Slashdot (slightly offensive with heavy use of analogies - perhaps pizza-analogies):
<text of your post>
AI Skepticism vs. Reality
Default (GPT-3.5)
User
I'm reading a Slashdot discussion about AI. One of the posts is the following - please critique the reasoning displayed wi
FUD (Score:2)
This is fearmongering.
The AI we have developed is the equivalent of inventing the abacus: an amazing accomplishment that may someday lead to changing our world. But it is still just an abacus, nothing more.
Re: (Score:2)
Re: FUD (Score:3)
Re: (Score:2)
The people making these "panicked" calls are the very people developing this tech.
I think they are fearmongering so that the government imposes so many restrictions that the competition can't catch up.
It's very cynical.
Two kinds of safety standards (Score:3)
There is one kind that is imagined preemptively by academics or lawyers. This is where bureaucratic red tape comes from. People are terrible at assessing risks before they happen. The result is a crazy set of rules, many of which address problems that never actually occur but that we imagine would be terrible.
The other kind is the result of people getting hurt. Though tragic, this kind of safety standard is based on empirical data and addresses risks that are real. Things like roads and the safety features of cars are designed using this kind of analysis.
The sad truth is, you can't anticipate the real risks until they happen. On that basis, we should *not* pause; we should instead let AI run its course and stay alert for problems.
Re: (Score:2)
the jail has free room and board at a much higher (Score:2)
The jail has free room and board, at a much higher cost than welfare.
How observant is he really? (Score:3)
In Max's best imagination of how AI could harm humanity, how does it compare to the mundane ways humans consciously choose to harm humanity to satisfy an optimization algorithm? What if an AI made freight trains 4 miles long so that roads are always blocked, to the extreme that timely medical care is fatally denied? Maybe the AI "decided" that occasionally derailing and contaminating a city is more optimal than allocating cheap resources to balance load in response to braking requirements.
People did, can, and will beat machines to such bad outcomes easily.
The answer is easy: When one or more humans notices the defective behavior, choose better.
The implementation of that easy answer is the tricky part.
Re: (Score:2)
Max also thinks the world is literally made of mathematics -- not even just figuratively, literally. I hesitate to take his opinions as any more than him driveling on as he usually does. AI may be a threat, but I do not value his opinion on the matter.
will we get more laws like can't pump your own gas (Score:2)
Will we get more laws like NJ's ban on pumping your own gas, to save jobs?
Maybe laws that ban 100% self-checkout, or that require at least X cashiers per Y self-checkout stations?
and so on
The faces of tomorrow (Score:3)
James P. Hogan wrote a book (The Two Faces of Tomorrow) that starts with an AI being asked for the most expedient way to build a tunnel on the moon. The AI's solution was to use a railgun cargo shipment system as a kinetic delivery device, to the chagrin of the operators and the hazard of anyone near the site.
Cory Doctorow wrote a short story in which automobile self-driving systems developed emergent behavior that included, effectively, flocking -- to the hazard of anyone in the cars, and possibly anyone nearby.
We aren't going to know the dangers until they happen. But we already have people (even lawyers) relying on large language models to answer questions, and ruing the results. We can't stop this flavor of AI -- your pause for caution is his squandered financial advantage -- but we can still think about it, and plot out hazards.
And as my examples show, we have been doing just that.
Buy stamps (Score:3, Funny)
Every time I read one of these stories about AI, I think back to every conversation I've ever heard involving automated telephone systems, where you say, "I want to speak to an agent" and it says, "I think you said you want to buy stamps. Is that correct?" and you say "No," and then curse at it and it says, "I think you wanted to go to the main menu. Is that correct?" and I breathe a sigh of relief, because I figure the massive job losses that people fear AI will cause probably won't happen in our great-grandchildren's lifetime at this rate. :-D
Re:Buy stamps (Score:4, Insightful)
Re: (Score:2)
Yet those automated telephone systems eliminated 1000s of jobs. It doesn't have to be better as long as it's cheaper.
I doubt the supposed AI systems have actually eliminated any jobs. If anything, they likely resulted in having to hire more retention employees trying to make up for the reputational damage caused by the phone systems. Now the really early systems where you push 1 to do whatever and push 2 to do something else — *those* cost people jobs, but that was just simple automation replacing people, not AI per se.
Race to the bottom. (Score:2)
Will that be humans or AI? 'Cause I'm guessing AI will end up the "top" in this relationship. :-)
Link to the actual paper (Score:2)
The useless Guardian article doesn't even link to the actual paper. I managed to find it here:
https://managing-ai-risks.com/ [managing-ai-risks.com]
Re: (Score:2)
Thanks for digging that up. It's rather pointless to discuss the risks without actually naming those risks.
It seems to me that all of the risks they mention are things that humans are already doing but might be boosted by AI. Is AI really the problem here?
Economic inequality is a problem, but in our current economic system it's going to get worse over time whether there is AI or not. The cynic in me wonders whether slowing down AI is just a way to stretch the status quo a bit longer, slow-boiling us frogs.
Re: (Score:2)
It seems to me that all of the risks they mention are things that humans are already doing but might be boosted by AI. Is AI really the problem here?
Yes, it is. As the paper says, "Without sufficient caution, we may irreversibly lose control of autonomous AI systems, rendering human intervention ineffective."
Combine that with a system sufficiently skilled in AI research and you'll get something that can recursively increase its own intelligence to far-superhuman levels.
Combine that with any utility function we can currently specify (none of which are existentially safe in the limit of optimization pressure) and you'll get the extinction of all Earth-born life.
Re: (Score:2)
None of that has any chance of happening in the near future.
Machine learning takes a huge amount of computation. In particular, while larger-capacity networks become more powerful, each gain in capability requires an exponentially larger network. For example, Microsoft already admitted that while GPT-4 performs well, it is too computationally expensive to deploy at a large scale. Any AI with superhuman levels of intelligence would require so much compute power that it would be easy to detect and shut down: you could literally pull the plug.
Re: (Score:2)
The risk is not AI (Score:4, Interesting)
The risk is people who use AI as a weapon
What's the real danger (Score:5, Insightful)
If you look at the way AI "learns", you'll notice that it gets worse and worse as time goes on. The original models were trained on human products. And let's face it, we have standards. Not high ones, mind you, but we, in general, know what we're talking about. If a human talks about a car, we all have a pretty good idea what is required for a car to be recognized as a car. Same if we talk about a house, a human, a cow, even abstract things like an idea, a dream, hope, desire.
We do understand that these words mean something specific. We may not all attribute the same meaning to them, or they may not have the same value to us, but they represent something that we can all, at least mostly, understand.
And AI does not understand anything. It can correlate and deduce, but it does not understand. But at least the first generation of AI actually had a pretty good run... and this is where the problem starts.
The following generations will be training on diluted input material, because AI already generates content itself now. And we all know that AI is far from perfect when generating. Even if nobody messes with the input material, AI often draws horribly wrong deductions and conclusions, and the sheer amount of content created makes it impossible to vet and audit. Since it is also quite hard to tell human from AI-generated content, and since AI generates content faster than humans can, following generations of AI will be trained on more and more AI-generated content.
Garbage in, garbage out.
The danger now is that AI will learn from quite heavily damaged, if not outright false, source material. Since AI has even less capability to tell reality from bullshit than any conspiracy loony out there, what you get, given enough time, is an AI model with such a completely fucked up image of reality that the average flat-earth reptiloid hunter sounds sane in comparison.
And then the real problem starts: as we already know, there are way, way too many people who know SO little that they can't even detect bullshit when it is blatantly obvious. And now all that increasingly crappy content generated by AI is being dumped onto exactly these people. If you think that we're currently living in "postfactual times", you ain't seen nothing yet: wait until these bullshit generators meet the gullible masses.
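This feedback loop is easy to sketch. Here is a deliberately crude Python toy, under the assumption that each "model" is just a Gaussian fitted to the previous generation's output; real model collapse is messier, but the flavor is the same: each generation only ever sees a finite sample of the last one, tails get clipped, and diversity decays:

import random
import statistics

def next_generation(data, n_samples=20):
    """Fit a Gaussian to the previous generation's output, then sample from it."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n_samples)]

random.seed(42)
data = [random.gauss(0.0, 1.0) for _ in range(20)]  # generation 0: "human" data
for gen in range(1, 51):
    data = next_generation(data)
    if gen % 10 == 0:
        print(f"generation {gen}: stdev = {statistics.stdev(data):.3f}")

Because each fit is made from a small finite sample, the spread does a multiplicative random walk with a downward drift; run it a few times and watch the stdev head toward zero.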
Re: (Score:2)
So the enshittification begins even before it's built? I guess that represents the advancing pace of tech to a T.
Re: (Score:2)
We'll eventually wake up and find out, AI is shit all the way down.
Re: (Score:2)
If you look at the way AI "learns", you'll notice that it gets worse and worse as time goes on.
The ability to learn only improves with time. Things are getting better faster, not the other way around.
AI is able to reflect on its own knowledge to improve itself.
https://openreview.net/pdf?id=... [openreview.net]
RAG and similar schemes help ground models to some extent, yet there is a long road ahead.
Present-day LLMs are structurally severely limited. Models have no senses and no real-world experiences of their own, lacking even the ability to form their own memories or to learn by trial and error. Modes of thought are
Which governments? (Score:4, Funny)
So which governments are going to halt development within their countries? China? Iran? North Korea? India? Russia? Oh wait, not those countries. Just the Western countries, so that they totally fall behind on this technological curve.
Regulation is critical to safe innovation (Score:2)
Fine. But what do you regulate? All the brain-dead, hard-coded chat bots? Or only the LLMs? And how do you know what is behind a company's web site? Ask them and they'll tell you: "No, it's just a simple Python script. Exempt from regulations.* But it's all trade secrets anyway, so you don't get to look."
The AI we will have to worry about is not the stuff that's advertised. It's the stuff we'll never know about.
*Like how broadband providers classify themselves as "not common carriers". Because they d
The Elephant In The Room (Score:2, Funny)
Can I be an AI expert? (Score:2)
Max Tegmark is a guy I remember reading about 20+ years ago, when he wrote an article in Scientific American about how, if you pick any direction in space and go far enough, eventually you'll have to find an arrangement of atoms identical to the one right here.
As a kid I lapped it up.
As an adult, I'm less impressed with the ascription of Meaning with a capital M to something by definition so far outside our light cone as to be irrelevant. By a guy calling himself a physicist.
A "6 month moratorium" is an equally meani
Humanity will be fine (Score:2)
The current economic status quo, however, is going to implode. Whether or not the aftermath will lead to utopia or dystopia is going to be a coin flip.
Funny (Score:2, Interesting)
Now, it is pretty well known that technical people, at least in California, tend to range from remarkably non-religious to actively anti-religious. A subset of these people seem to be hell (?) bent on creating an AGI, essentially an emergent God, that they can find themselves worshiping. I find this highly amusing, interesting, and alarming all at once. I think this should be thought through far more carefully -- a decade ago, if you have a time machine. Note that if you bias the data to have it come out the way you think un
Re: (Score:3)
At least this god will actually exist.
Racing each other to the bottom (Score:2)
What with all the other races to the bottom -- environmental degradation, climate change, political shortsightedness, money-grubbing, food/water scarcity, cultural upheaval, wars, and general psychological aimlessness, amongst others, primarily in the West -- it surely is an exciting race to spectate, to see what brings us to the bottom first.
From where I sit, my money is not on AI though.
Max Tegmark fear mongering for his own benefit (Score:2)
Max Tegmark's most recent paper is about:
https://www.researchgate.net/s... [researchgate.net]
So I guess his raising the panic level makes him more relevant.
The main "danger" LLMs present is the dumbing down of public discourse with all this end of the world talk.
\o/ (Score:1)
Anything which promises to make conversations with human customer-service agents a thing of the past is worth the risk in my book.
What about resource use and waste? (Score:2)
I never see any specifics in these articles (Score:2)
Now, I think AI systems (e.g. LLMs and other advanced automation) *are* a problem. They're going to destroy jobs at a rate we can't possibly keep up with in an "if you don't work, you don't eat" society. But nobody
How would this have played out with the A-Bomb? (Score:1)
Pay them well and enable them to do something more ethics based, maybe more aligned with the vision of cybernetics tha
Competition for thee, not for me (Score:2)
Title. (Score:2)