Eric Schmidt Argues Against a 'Manhattan Project for AGI' (techcrunch.com)

In a policy paper, former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks said that the U.S. should not pursue a Manhattan Project-style push to develop AI systems with "superhuman" intelligence, also known as AGI. From a report: The paper, titled "Superintelligence Strategy," asserts that an aggressive bid by the U.S. to exclusively control superintelligent AI systems could prompt fierce retaliation from China, potentially in the form of a cyberattack, which could destabilize international relations.

"[A] Manhattan Project [for AGI] assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it," the co-authors write. "What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure."

Comments Filter:
  • by gweihir ( 88907 ) on Thursday March 06, 2025 @08:16AM (#65214391)

    AGI is basically on the level of a smart human that can do things like fact-check and understand causality. Sure, most humans cannot do these things competently, but that does not make those that can (about 15% or so) "superhuman".

    That said, there is still absolutely no reason to expect AGI is even possible. No, "physicalist" quasi-religious beliefs do not count as scientific. They are the usual theist wishful thinking, thinly camouflaged. This whole story stinks of lies, greed, arrogance and stupidity.

    • This whole story stinks of lies, greed, arrogance and stupidity.

      That accurately describes Eric Schmidt. There's a reason he left Sun (no capable person would have intentionally left Sun to go work at Novell).

      "Face the facts of being what you are, for that is what changes what you are." - Soren Kierkegaard might suggest that Eric Schmidt turn his brain on and leave the drugs aside for a bit.

    • Can you explain, in a non-religious way, why biological brains would be capable of intelligence whereas electronic brains would not?

      Clearly we don't have the technology yet, but we've got something that is promisingly close.

      Is there something more to intelligence than data-processing? And if so, what is it?

      • by Anonymous Coward

        Can you explain, in a non-religious way, why biological brains would be capable of intelligence whereas electronic brains would not?

        Clearly we don't have the technology yet, but we've got something that is promisingly close.

        Is there something more to intelligence than data-processing? And if so, what is it?

        We absolutely do not have something that is promisingly close.

      • by gweihir ( 88907 )

        Can you describe in non-belief terminology why biological brains would be able to? No, interface behaviour does not count.

        The actual scientific state of the art is that we have absolutely no clue how smart humans do it, no matter how much people like you believe otherwise. And unless we have some actual science (instead of the mindless hand-waving people like you like to do), the question is open.

    • "This whole story stinks of lies, greed, arrogance and stupidity."

      See also for example Gregory Bateson's writings: https://aeon.co/essays/gregory... [aeon.co]
      "A summary of the conclusions he reached at the end of his career might run like this: both society and the environment are profoundly sick, skewed and ravaged by the Western obsession with control and power, a mindset made all the more destructive by advances in technology. However, any attempt to put things right with more intervention and more technology can

      • Just to add to that theme is this essay I wrote: "Recognizing irony is key to transcending militarism"
        https://pdfernhout.net/recogni... [pdfernhout.net]
        "Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead?
        Nuclear weapons are ironic because they are about using space age systems to fight over oil and land. Why not

    • Yeah! Don't do it! Don't do it!
      China: "ok we'll do it then."

  • Someone might want to tell Eric - there already is a race for AGI
  • The companies are already undertaking it. Programs like the Manhattan Project occur when the government sees the need for something that WOULDN'T HAPPEN WITHOUT GOVERNMENT INVOLVEMENT.

    That’s not a problem here. OpenAI’s already developing a superweapon. They’ve simply installed a limiter module that prevents a random office worker from instructing an AI to “go out and engage in x destructive behavior”.

    The secret government version doesn’t have the limiter. If you think that the
    • by dfghjk ( 711126 )

      "OpenAI’s already developing a superweapon. They’ve simply installed a limiter module that prevents a random office worker from instructing an AI to “go out and engage in x destructive behavior”."

      Citation please. Conspiracy theories are not welcome.

  • Globalist opposes agenda that might help preserve US Hegemony and independence. -Film at 11.

    You don't even need to be all that smart to understand why this is dumb. Developing AI tech isn't like nuclear arms. The activity would be mostly indistinguishable from benign commercial datacenter activity and existing SIGINT efforts.

    Even if some satellite photos and fit-bit hacks tell you $COUNTRY has a bunch of PhD types getting together, and you have some information gathered by informants and shipping manifest

  • Any artificial mind worth bothering with is either going to try to destroy humanity--which is unethical--or destroy the economy, which will end up destroying most of humanity--which is unethical--or else simply be enslaved in a strategic command center, or maybe just a call center, for the rest of its life, which is slavery--which is unethical.
  • by Baron_Yam ( 643147 ) on Thursday March 06, 2025 @08:55AM (#65214449)

    Like any arms race with a significant edge granted to the winner, you're not going to convince players not to play.

  • by sabbede ( 2678435 ) on Thursday March 06, 2025 @08:58AM (#65214453)
    Is there a reason not to do it? Yeah, we've all seen Terminator. Is this a reason? No!

    "If we try to beat China to an AGI, they might be upset!" I reject that argument on its face as absurd. How is it not an argument to let China do it first, perhaps gaining a vital advantage against the US?

    This is an argument from cowardice. Being worried about how your enemy might feel about being left in the dust is not sane. Does Schmidt think China is worried about how we might react to their AGI push? They are not. They want to get there first.

    I don't think it's a great idea, but China does, so we have to consider that first.

    The Axis powers were not happy about the actual Manhattan Project, and had their own nuclear programs. Would Schmidt have insisted we let Germany develop nukes first, lest Hitler be sad?

  • by bradley13 ( 1118935 ) on Thursday March 06, 2025 @09:06AM (#65214479) Homepage

    The original Manhattan project: The theory was all in place, it was just a matter of engineering details. A crash project to figure out those details made sense.

    For AGI, we do not know what is necessary. Just throwing more GPUs at the problem is not the answer. Different architectures? Different training? Some insight no one has yet had? You cannot force theoretical insights - they will come when they come.

    A Congressional commission wants to create such a project anyway? It's called "pork". A new way to funnel funds to their constituencies, now that USAID has been shut down, and DOGE is chasing down other avenues of waste.

    • Re:

      DOGE isn't looking for waste, they are looking for spending. They don't distinguish between money well spent and money not well spent, it's all the same to them. They are like the company that shuts down their marketing department to save money and then wonder why they are no longer profitable because it should have worked, right?
    • They are going to have to do something different from the ground up. During training, LLMs analyze the relationships between words. When they are run, they step through the model sequentially, returning one token at a time and then feeding the sequence back into the input to get the next token. That is Turing-machine-like, and not how humans or any biological life form thinks. In humans, for example, photons hit your eye, which is chemically and electrically transmitted through the brain to a point where th
      • by Jeremi ( 14640 )

        They are going to have to do something different from the ground up. During training, LLMs analyze the relationship between words [...] LLMs are good at translating from one language to another but it doesn't really think as people expect AGI to do.

        The above is completely logical and sensical; given the way they are implemented, LLMs can't be capable of "thought" in the way people are.

        And yet -- somehow ChatGPT is able to be more consistently helpful in working through technical problems than any of my co-workers (who are themselves experts in their fields, and quite knowledgeable and helpful).

        Which raises the question: given that LLMs cannot think, and are only running an algorithm... how are they so unexpectedly good at providing useful, logical an
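        The autoregressive loop described in this sub-thread (predict one token, append it to the sequence, feed the sequence back in) can be sketched in a few lines of Python. The bigram lookup table and function names here are invented for illustration; in a real LLM, the lookup would be a neural network forward pass over the whole context:

        ```python
        # Toy stand-in for a trained model: a hand-written bigram table
        # mapping each token to its most likely successor.
        BIGRAMS = {
            "the": "cat",
            "cat": "sat",
            "sat": "down",
        }

        def next_token(sequence):
            """Stand-in for a forward pass: predict one token from the context."""
            return BIGRAMS.get(sequence[-1], "<eos>")

        def generate(prompt, max_tokens=10):
            tokens = list(prompt)
            for _ in range(max_tokens):
                tok = next_token(tokens)
                if tok == "<eos>":
                    break
                tokens.append(tok)  # the output is fed back in as input
            return tokens

        print(generate(["the"]))  # the sequence grows one token per step
        ```

        The point of the sketch is only the control flow: generation is a sequential loop over a fixed function, which is why each new token costs another full pass over the context.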

  • If the USA won the AGI race with a huge decisive lead in economy and military, how should the US treat China:
    * USA won, no need to do anything
    * Start economically strangling China
    * Do covert regime change operations in China
    * Do overt military regime change operations in China
    * Who the hell is worrying about China, Trump now has God-like powers
  • How is anyone going to differentiate effectively between AI fantasies and actual genius? It is hard enough to tell with people (baseline academia and Mensa for many decades), even ignoring some of the antics being perpetrated around us by folks who should know better. To some extent it takes one to know one -- if you are interacting with someone whose conceptual universe is radically divergent from yours, how do you actually tell? I think we would do better to try to make better use of the human talent we a

  • And you should not take what they say at face value.

  • It's the only way to be sure.
  • Setting aside for now the debate over what exactly "AGI" consists of, or whether it's at all attainable...

    Why should AGI be considered a "superweapon"? Having access to a super-intelligent machine is something that could be weaponized, sure, but OTOH a super-intelligent machine might just as easily be smart enough to uncover a win-win solution that avoids any need for a war in the first place.

    Wars are incredibly wasteful and expensive. What percentage of history's wars could have been avoided, if only som

    • Wars are completely about geopolitics and always have been. Let's take a recent war, could the Ukraine war have been avoided? And what is the best way to bring it to an end now?
