
AI Pioneers Call for Protections Against 'Catastrophic Risks' (nytimes.com) 37

AI pioneers have issued a stark warning about the technology's potential risks, calling for urgent global oversight. At a recent meeting in Venice, scientists from around the world discussed the need for a coordinated international response to AI safety concerns. The group proposed establishing national AI safety authorities to monitor and register AI systems, which would collaborate to define red flags such as self-replication or intentional deception capabilities. The report adds: Scientists from the United States, China, Britain, Singapore, Canada and elsewhere signed the statement. Among the signatories was Yoshua Bengio, whose work is so often cited that he is called one of the godfathers of the field. There was Andrew Yao, whose course at Tsinghua University in Beijing has minted the founders of many of China's top tech companies. Geoffrey Hinton, a pioneering scientist who spent a decade at Google, participated remotely. All three are winners of the Turing Award, the equivalent of the Nobel Prize for computing. The group also included scientists from several of China's leading A.I. research institutions, some of which are state-funded and advise the government. A few former government officials joined, including Fu Ying, who had been a Chinese foreign ministry official and diplomat, and Mary Robinson, the former president of Ireland. Earlier this year, the group met in Beijing, where they briefed senior Chinese government officials on their discussion.
  • When it comes down to:

    1) Profits
    2) National Security
    3) Gaining any sort of advantage

    There really are no rules. Especially when the winner of the race in question stands to gain so much.

    Human beings might agree to things publicly, but in practice it's all a facade.
    They will quickly throw all morals, ethics and safety concerns to the wind to obtain what they want.

    • I've seen some interviews with AI CEOs and other "luminaries" from various AI companies. When they talk about the first person to get to AGI, they've used analogies like "the first fisherman to haul in a genie." They think that whoever gets there first is going to be able to have huge material advantages over the rest of humanity. They are all Ivy League grads; maybe they are right. I really wonder about the high-level defections from OpenAI and others and what the real game plan is. Some have ove
      • I really wonder about the high-level defections from OpenAI and others and what the real game plan is.

        The game plan is every game plan you ever thought of or read in a sci-fi book, because we have no real idea what intelligence is or how it works. It's possible that AI needs the full hypothetical processing power of the brain, in which case it might be a very rare, difficult thing. It's possible (but unlikely) that the brain's architecture is hugely inefficient and you could run a human-level AI on an ordinary pocket device.

        If you can get a fully controlled efficient above human level AI that keeps following your o

  • Great, so have they provided examples of what catastrophic risks are, so we can actually figure out what we are preventing?

    I don't have an NYT subscription.

    • Simple Red Line (Score:4, Insightful)

      by Roger W Moore ( 538166 ) on Monday September 16, 2024 @04:06PM (#64791257) Journal
      I don't have an NYT subscription either, but apparently a JavaScript blocker is just as good. The article makes no mention of what those risks are, but it does suggest some "red lines" that governments should at least be notified about. The specific examples were an AI that can copy itself (which seems a very low bar given how easy that is to achieve) and an AI that can deliberately deceive its creators, which seems a very vaguely defined bar, since "deliberately" presupposes free will, which itself is not well defined given that there is some debate about whether even we have it.

      My red line would be a lot simpler and easier to define: any AI system that can prevent a human from turning it off.
    • Our AI fart videos may attain sentience and thus end humanity, with a brrraaap rather than a whimper.

    • Great, so have they provided examples of what catastrophic risks are, so we can actually figure out what we are preventing?

      I don't have an NYT subscription.

      "Self improving AI" is as far as they go. The specific threat would be something that runs with reasonable efficiency on standard existing computer hardware.

      Imagine a superhuman-level intelligence that is quite good at imitating humans, is connected to the internet, and can do anything that you can do with an internet account. In particular, imagine it's good at programming and has access to its own source code, so it can replicate and improve itself whilst removing any built-in limits. At that point it can g

  • They say they pose a risk to humanity and want us to take action? Are they sure? Because we've got just the place for them.

  • The real existential threat is that all datacenters are hard for humans to get into and very easy for a super-intelligent AI with zero-day intrusion capabilities to take over. It would be very easy to create manual overrides to turn the redundant power and redundant connectivity off manually, without any digital tools, if we wanted to. But nobody seems to realise the importance of this manual override.
    • As long as squirrels and backhoes exist, datacenters are vulnerable.

    • The power company transformers sit outside the building. A couple of pickup trucks driving into them takes out the entire datacenter.

    • by Anonymous Coward

      You watch too many movies. Manual overrides already exist for every data center at the main switchboard. An employee who works at the data center (yes, people work at data centers) goes in and turns off the power. There's your manual override - no digital tools, no backhoe, no ramming pickup trucks required.

      • You watch too many movies. Manual overrides already exist for every data center at the main switchboard. An employee who works at the data center (yes, people work at data centers) goes in and turns off the power. There's your manual override - no digital tools, no backhoe, no ramming pickup trucks required.

        Not so easy if the AI manages to shoot you out of the airlock before you can get to the power switch.

    • Why do you think there are no manual overrides at data centers?
  • They warned us of catastrophic risks on environmental destruction.

    They warned us of catastrophic risks on global warming.

    They're warning us of catastrophic risks on AI.

    We're still here, everything is just fine. Wake me up when the Earth is actually being destroyed, then I can worry about it.

    • by gtall ( 79522 )

      So apparently incrementally increasing damage to the environment and the problems caused by global warming do not rate high enough for you to care about. The only risk you'd probably acknowledge is an asteroid the size of Mars taking dead aim at us.

      The basic problem for you is that you might have to contribute in a small way to stopping the increasing damage. That is too mundane for you, beneath your perceived station in life. Your kids and grandkids will be so proud of your decisions when they are trying to survive

    • by Rinnon ( 1474161 )

      Wake me up when the Earth is actually being destroyed, then I can worry about it.

      Uhhh, the point of a "warning" is to do something about it BEFORE the Earth is actually being destroyed. I understand your real point is that these warnings reek of hyperbole, and I would tend to agree, but ignoring the warning entirely, or declaring it bunk, seems like throwing the baby out with the bathwater.

      • Uhhh, the point of a "warning" is to do something about it BEFORE the Earth is actually being destroyed. I understand your real point is that these warnings reek of hyperbole, and I would tend to agree, but ignoring the warning entirely, or declaring it bunk, seems like throwing the baby out with the bathwater.

        Personally I'm far more worried about the implications of attempts by humans to hoard and control technology than I am about the Skynet bullshit. In the absence of objective affirmative evidence, there is nothing wrong with people electing to ignore unmoored and unsubstantiated warnings in their entirety.

        In other words, you can't just say there "may be" an invasion force of aliens, invisible asteroids, anti-matter asteroids, or a false vacuum catastrophe headed toward Earth. You actually have to objectively suppo

        • I've yet to see a single x-risk assertion that is in any way evidence-based. All we ever get is a bunch of people flashing their credentials before spewing opinions and feelings.

          At the very simplest level, temperatures are being reached that were literally impossible before global warming, and people are dying of heat exhaustion due to those temperatures. Lots of current migration problems are due to people leaving areas that were more habitable before and can no longer sustain the number of people living there. There are plenty of real, visible examples of problems that can be linked directly to very clear evidence and prior warnings.

          • At the very simplest level, temperatures are being reached that were literally impossible before global warming, and people are dying of heat exhaustion due to those temperatures. Lots of current migration problems are due to people leaving areas that were more habitable before and can no longer sustain the number of people living there. There are plenty of real, visible examples of problems that can be linked directly to very clear evidence and prior warnings.

            Global warming has nothing to do with my remarks or the topic at hand.

            • Global warming has nothing to do with my remarks or the topic at hand.

              You talked about x-risks (existential risks). Global warming is an extinction-level risk for humanity, and it is well evidenced. In fact, it's explicitly referenced in the comment from Mes that started this thread.

    • They warned us of catastrophic risks on environmental destruction.

      And we watched them happen in real-time, taking no preventative action on any disaster smaller than the Exxon Valdez. Anybody remember back when sunshine wasn't a known carcinogen?

      They warned us of catastrophic risks on global warming.

      And we see all the predictions playing out on a daily basis: rising sea levels, shrinking glaciers and polar caps, extended wildfire seasons, northward migrations of invasive insects, changing oceanic currents, etc.

      They're warning us of catastrophic risks on AI.

      So maybe for once we might actually want to consider the consequences of our headlong rush to reap short termed

  • What a load of bullshit. Is humanity really so fragile?

    • Most species that have ever existed are long extinct. We're hardly likely to be an exception in the long run. Keeping it going for a few hundred more years would be a success that might lead to more.

      • How long have we been surviving with fire? And any number of natural disasters?

        But look out for computers smarter than you!

  • A beautiful ending for the human race: to be devoured by AI. Remember, we were always just a bootloader.
  • I'm too cheap or lazy to get past the NYT paywall. Here is a just-released and accepted paper, "Government Interventions to Avert Future Catastrophic AI Risks" (Yoshua Bengio, 2024), if you are interested.

    https://assets.pubpub.org/j0fp... [pubpub.org]

    I've believed since pre-COVID that one of the largest risks is using AI to tailor bio-weapons. Contrary to the title of this paper, it is governmental sponsorship of this that increases catastrophic AI risk in this and in many other cases.

  • by muffen ( 321442 )
    Global warming, threat of nuclear weapons detonating, third world war, asteroid hitting the planet... Killer robots?

    Nope.. What killed humanity was a lot of if-statements!
  • They found this pretty box lying around with the words "Don't Open Me". They did, and now they're warning us that inside that box was a threat to humanity. Maybe they'll be the first to disappear.
  • define red flags such as self-replication or intentional deception capabilities.

    This is one of the most disingenuous statements I've heard about AI in a long time; anyone who knows how computers work knows that 1) self-replication (i.e. flawless copying) is one of the fundamental capabilities of computer systems, and 2) no computer has ever "intentionally" done anything, ever. A computer is just a machine; it cannot possess agency, freedom of choice, or free will, but can only do what it has been pr

    • You do realize that you don't "program" an AI system the way you would an accounting application?

      The developers train the systems by feeding in vast amounts of raw data. They have little clue about what's going to come out of any particular query.

      Once people start trying to give these systems "agency" by letting them feed their own output back as queries to form a stream of thought, the developers will be able to predict even less what the ultimate results might be (a rough sketch of such a loop follows below).

      Just because we know how chemistry
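      A minimal sketch of the feedback loop described above, assuming a hypothetical query_model() helper in place of any particular model API; it only illustrates the shape of such a loop, not any real agent framework:

          def query_model(prompt: str) -> str:
              # Placeholder: in practice this would call an actual language model.
              return "model output for: " + prompt

          def agency_loop(goal: str, steps: int = 5) -> list[str]:
              # Feed the model's own output back in as the next prompt,
              # forming a running "stream of thought" whose course is hard to predict.
              history = []
              prompt = goal
              for _ in range(steps):
                  output = query_model(prompt)  # model responds to the current prompt
                  history.append(output)
                  prompt = output               # the output becomes the next input
              return history

          print(agency_loop("Summarize today's news and decide what to do next."))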

  • Nations are stockpiling tens of thousands of nuclear bombs, enough to kill humanity several times over, and are working on chemical and biological weapons. Humanity has spent decades causing the biggest mass extinction of species in millions of years, literally destroying what makes Earth a livable planet for us. Humanity has spent decades changing the climate, and not in a good way. Humanity has spent decades utterly polluting and damaging the environment.

    And these dudes are getting concerned because we have im

    • Could they please look up what a language model is?

      There are other AI advances ongoing apart from LLMs, though mostly also related to neural networks. I suspect most of the signatories know the limitations of non-feedback deep learning based on neural networks and are worried about other things.

      Nations are stockpiling tens of thousands of nuclear bombs / Humanity has spent decades to cause the biggest mass extinction of species

      In fact, right now no nation has over 10k, and the sum total of working nuclear bombs is below 20k. That compares with a past when there were lots more. We did a good job on this previously. Unfortunately, we seem to be failing to learn the lessons of the past and getting into major

  • Gimme money for the existential threat only I can solve.
