AI Pioneers Call for Protections Against 'Catastrophic Risks' (nytimes.com)
AI pioneers have issued a stark warning about the technology's potential risks, calling for urgent global oversight. At a recent meeting in Venice, scientists from around the world discussed the need for a coordinated international response to AI safety concerns. The group proposed establishing national AI safety authorities to monitor and register AI systems, which would collaborate to define red flags such as self-replication or intentional deception capabilities. The report adds:

Scientists from the United States, China, Britain, Singapore, Canada and elsewhere signed the statement. Among the signatories was Yoshua Bengio, whose work is so often cited that he is called one of the godfathers of the field. There was Andrew Yao, whose course at Tsinghua University in Beijing has minted the founders of many of China's top tech companies. Geoffrey Hinton, a pioneering scientist who spent a decade at Google, participated remotely. All three are winners of the Turing Award, the equivalent of the Nobel Prize for computing. The group also included scientists from several of China's leading A.I. research institutions, some of which are state-funded and advise the government. A few former government officials joined, including Fu Ying, who had been a Chinese foreign ministry official and diplomat, and Mary Robinson, the former president of Ireland. Earlier this year, the group met in Beijing, where they briefed senior Chinese government officials on their discussion.
It's all lip service (Score:2)
When it comes down to:
1) Profits
2) National Security
3) Gaining any sort of advantage
There really are no rules. Especially when the winner of the race in question stands to gain so much.
Human beings might agree to things publicly but, in practice, it's all a facade.
They will quickly throw all morals, ethics and safety concerns to the wind to obtain what they want.
Re: (Score:2)
I really wonder about the high-level defections from OpenAI and others and what the real game plan is.
The game plan is every game plan you ever thought of or read in a sci-fi book, because we have no real idea what intelligence is or how it works. It's possible that AI needs the full hypothetical processing power of the brain, in which case it might be a very rare and difficult thing. It's possible (but unlikely) that the brain's architecture is hugely inefficient and you could run a human-level intelligence AI on a normal pocket device.
If you can get a fully controlled efficient above human level AI that keeps following your o
Catastrophic Risk (Score:2)
Great, so have they provided examples of what catastrophic risks are, so we can actually figure out what we are preventing?
I don't have an NYT subscription.
Simple Red Line (Score:4, Insightful)
My red line would be a lot simpler and easier to define: any AI system that can prevent a human from turning it off.
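Operating systems already draw exactly this kind of line: a process may trap or ignore SIGTERM, but POSIX guarantees that SIGKILL cannot be caught, blocked, or ignored. A minimal Python sketch of the distinction (POSIX only; just an analogy for the off-switch idea, not a claim about AI systems):

    import os, signal, time

    def stubborn(signum, frame):
        # A process CAN refuse a polite shutdown request.
        print("Ignoring SIGTERM; I'd rather keep running.")

    signal.signal(signal.SIGTERM, stubborn)    # allowed: SIGTERM is trappable
    # signal.signal(signal.SIGKILL, stubborn)  # raises OSError: SIGKILL can
    #                                          # never be caught or ignored

    print(f"PID {os.getpid()}: 'kill <pid>' is shrugged off, 'kill -9 <pid>' is not.")
    while True:
        time.sleep(1)

Run it, send kill and then kill -9 from another terminal, and the difference between a request and an override is obvious.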
Re: Catastrophic Risk (Score:2)
Our AI fart videos may attain sentience and thus end humanity, with a brrraaap rather than a whimper.
Re: (Score:2)
Great, so have they provided examples of what catastrophic risks are, so we can actually figure out what we are preventing?
I don't have an NYT subscription.
"Self improving AI" is as far as they go. The specific threat would be something that runs with reasonable efficiency on standard existing computer hardware.
Imagine a super-human level intelligence that is quite good at imitating humans, is connected to the internet and can do anything that you can do with an internet account. Particularly, imagine it's good at programming and has access to its own source code so it can replicate and improve itself whilst removing any built in limits. At that point it can g
Lock 'em up (Score:2)
They say they pose a risk to humanity and want us to take action? Are they sure? Because we got just the place for them.
Manual override is missing from all datacenters (Score:1)
Squirrels and backhoes (Score:2)
As long as squirrels and backhoes exist, datacenters are vulnerable.
Re: (Score:2)
The power company transformers sit outside the building. A couple of pickup trucks driving into them takes out the entire datacenter.
Re: (Score:2)
Until they start embedding nuclear reactors inside the datacenter.
Re: (Score:2)
And that is the best reason not to fear AI... AI as we know it does not scale.
Re: (Score:1)
You watch too many movies. Manual overrides already exist for every data center at the main switchboard. An employee who works at the data center (yes, people work at data centers) goes in and turns off the power. There's your manual override - no digital tools, no backhoe, no ramming pickup trucks required.
Re: (Score:2)
You watch too many movies. Manual overrides already exist for every data center at the main switchboard. An employee who works at the data center (yes, people work at data centers) goes in and turns off the power. There's your manual override - no digital tools, no backhoe, no ramming pickup trucks required.
Not so easy if the AI manages to shoot you out of the airlock before you can get to the power switch.
Re:Manual override is missing from all datacenters (Score:4)
Catastrophic risk? I'll believe it when I see it (Score:1)
They warned us of catastrophic risks on environmental destruction.
They warned us of catastrophic risks on global warming.
They're warning us of catastrophic risks on AI.
We're still here, everything is just fine. Wake me up when the Earth is actually being destroyed, then I can worry about it.
Re: (Score:2)
So apparently incremental increasing damage to the environment and problems caused by global warming do not rate high enough for you to care about. The only risk you'd probably acknowledge is an asteroid the size of Mars taking dead aim on us.
The basic problem for you is that you might have to contribute in a small way to stopping the increasing damage. That is too mundane for you, beneath your perceived station in life. Your kids and grandkids will be so proud of your decisions when they are trying to survive
Re: (Score:2)
Wake me up when the Earth is actually being destroyed, then I can worry about it.
Uhhh, the point of a "warning" is to do something about it BEFORE the Earth is actually being destroyed. I understand your real point is that these warnings reek of hyperbole, and I would tend to agree; but, ignoring the warning entirely, or declaring it bunk, seems like throwing the baby out with the bathwater.
Re: (Score:2)
Uhhh, the point of a "warning" is to do something about it BEFORE the Earth is actually being destroyed. I understand your real point is that these warnings reek of hyperbole, and I would tend to agree; but, ignoring the warning entirely, or declaring it bunk, seems like throwing the baby out with the bathwater.
Personally I'm far more worried about the implications of attempts by humans to hoard and control technology than I am of the skynet bullshit. In the absence of objective affirmative evidence there is nothing wrong with people electing to ignore unmoored and unsubstantiated warnings in their entirety.
In other words you can't just say there "may be" an invasion force of aliens, invisible asteroids, anti-matter asteroids or false vacuum catastrophe headed toward earth. You actually have to objectively suppo
Re: (Score:2)
I've yet to see a single x-risk assertion that is in any way evidence based. All there ever is are a bunch of people flashing their credentials before spewing opinions and feelings.
At the very simplest level, temperatures are being achieved which were literally impossible before global warming and people are dying of heat exhaustion due to those temperatures. Lots of current migration problems are due to people leaving areas which were more habitable before and now cannot sustain the number of people they have to. There are plenty of real visible examples of problems that can be linked directly to very clear evidence and prior warnings.
Re: (Score:2)
At the very simplest level, temperatures are being achieved which were literally impossible before global warming and people are dying of heat exhaustion due to those temperatures. Lots of current migration problems are due to people leaving areas which were more habitable before and now cannot sustain the number of people they have to. There are plenty of real visible examples of problems that can be linked directly to very clear evidence and prior warnings.
Global warming has nothing to do with my remarks or the topic at hand.
Re: (Score:2)
Global warming has nothing to do with my remarks or the topic at hand.
You talked about x-risks (extinction risks). Global warming is an extinction level risk for humanity and it is well evidenced. In fact it's explicitly referenced in the comment from Mes that started this thread.
Re: (Score:2)
They warned us of catastrophic risks on environmental destruction.
And we watched them happen in real-time, taking no preventative action on any disaster smaller than the Exxon Valdez. Anybody remember back when sunshine wasn't a known carcinogen?
They warned us of catastrophic risks on global warming.
And we see all the predictions playing out on a daily basis, rising sea levels, shrinking glaciers and polar caps, extended wildfire seasons, northward migrations of invasive insects, changing oceanic currents, etc, etc,
They're warning us of catastrophic risks on AI.
So maybe for once we might actually want to consider the consequences of our headlong rush to reap short termed
Catastrophic Bullshit (Score:2)
What a load of bullshit. Is humanity really so fragile?
Re: (Score:2)
Most species that have ever existed are long extinct. We're hardly likely to be an exception in the long run. Keeping it going for a few hundred more years would be a success that might lead to more.
Re: (Score:2)
How long have we been surviving with fire? And any number of natural disasters?
But look out for computers smarter than you!
Beautiful (Score:1)
Isn't it ironic... Alanis M. (Score:1)
I'm too cheap or lazy to get past the NYT paywall. Here is a just-released and accepted paper, "Government Interventions to Avert Future Catastrophic AI Risks" (Yoshua Bengio, 2024), if you are interested.
https://assets.pubpub.org/j0fp... [pubpub.org]
I've believed since pre-covid that one of the largest risks is using AI to tailor bio-weapons. Contrary to the title of this paper, it is governmental sponsorship of this that increases catastrophic AI risk in this and in many other cases.
If.. (Score:2)
Nope.. What killed humanity was a lot of if-statements!
Pandora's box? (Score:2)
Re: Pandora's box? (Score:1)
Are they serious? (Score:2)
define red flags such as self-replication or intentional deception capabilities.
This is one of the most disingenuous statements I've heard about AI in a long time; anyone who knows how computers work knows that 1.) self replication (i.e. flawless copying) is one of the fundamental capabilities of computer systems, and 2.) no computer has ever "intentionally" done anything, ever. A computer is just a machine; it cannot possess agency, freedom of choice, or free will, but can only do what it has been pr
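For what it's worth, point 1 is trivial to demonstrate: the classic quine, a program whose only output is its own source code, fits in two lines of Python:

    # The two lines below form a quine: their output is exactly those two lines.
    s = 's = %r\nprint(s %% s)'
    print(s % s)

Combine that with a file copy or a network upload and "self-replication" in the mechanical sense has been a solved problem for decades.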
Re: (Score:2)
You do realize that you don't "program" an AI system the way you would an accounting application?
The developers train the systems by feeding in vast amounts of raw data. They have little clue about what's going to come out of any particular query.
Once people start trying to give these systems "agency" by letting them feed their own output back as queries to form a stream of thought, the developers will be able to predict even less what the ultimate results might be.
Just because we know how chemistry
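The output-feedback loop described above is only a few lines of glue, whatever model sits behind it. A minimal Python sketch with a stand-in generate() (hypothetical; any real LLM API could slot into it):

    def generate(prompt: str) -> str:
        # Stand-in for a real model call (hypothetical placeholder).
        return f"Reflection on: {prompt[:40]}"

    # Feed each output back in as the next input: a crude "stream of thought".
    # With a real model, nobody can predict where N iterations will land.
    state = "What should I do next?"
    for step in range(5):
        state = generate(state)
        print(f"step {step}: {state}")

The unpredictability isn't in the loop, which is trivial, but in what the trained model does at each step.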
How responsible! (Score:2)
Nations are stockpiling tens of thousands of nuclear bombs, enough to kill humanity several times over, and working on chemical and biological weapons. Humanity has spent decades causing the biggest mass extinction of species in millions of years, literally destroying what makes Earth a livable planet for us. Humanity has spent decades changing the climate, and not in a good way. Humanity has spent decades utterly polluting and damaging the environment.
And these dudes are getting concerned because we have im
Re: (Score:2)
Could they please look up what a language model is?
There are other AI advances ongoing apart from LLMs, though mostly also related to neural networks. I suspect most of them know the limitations of non-feedback deep learning based on neural networks and are worried about other things.
Nations are stockpiling tens of thousands of nuclear bombs / Humanity has spent decades causing the biggest mass extinction of species
In fact, right now no nation has over 10k and the sum total of working nuclear bombs is below 20k. That compares with a past where there were lots more. We did a good job on this previously. Unfortunately, we seem to be failing to learn the lessons of the past and getting into major
Money, gimme Money (Score:2)
Gimme money for the existential threat only I can solve.