Eric Schmidt Argues Against a 'Manhattan Project for AGI' (techcrunch.com) 63

In a policy paper, former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks said that the U.S. should not pursue a Manhattan Project-style push to develop AI systems with "superhuman" intelligence, also known as AGI. From a report: The paper, titled "Superintelligence Strategy," asserts that an aggressive bid by the U.S. to exclusively control superintelligent AI systems could prompt fierce retaliation from China, potentially in the form of a cyberattack, which could destabilize international relations.

"[A] Manhattan Project [for AGI] assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it," the co-authors write. "What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure."

This discussion has been archived. No new comments can be posted.

  • by gweihir ( 88907 ) on Thursday March 06, 2025 @09:16AM (#65214391)

    AGI is basically on the level of a smart human that can do things like fact-check and understand causality. Sure, most humans cannot do these things competently, but that does not make those that can (about 15% or so) "superhuman".

    That said, there is still absolutely no reason to expect AGI is even possible. No, "physicalist" quasi-religious beliefs do not count as scientific. They are the usual theist wishful thinking, thinly camouflaged. This whole story stinks of lies, greed, arrogance and stupidity.

    • Re: (Score:2, Flamebait)

      by phantomfive ( 622387 )

      This whole story stinks of lies, greed, arrogance and stupidity.

      That accurately describes Eric Schmidt. There's a reason he left Sun (no capable person would have intentionally left Sun to go work at Novell).

      "Face the facts of being what you are, for that is what changes what you are." - Soren Kierkegaard might as well be advising Eric Schmidt to turn his brain on and leave the drugs aside for a bit.

    • by Brain-Fu ( 1274756 ) on Thursday March 06, 2025 @11:08AM (#65214589) Homepage Journal

      Can you explain, in a non-religious way, why biological brains would be capable of intelligence whereas electronic brains would not?

      Clearly we don't have the technology yet, but we've got something that is promisingly close.

      Is there something more to intelligence than data-processing? And if so, what is it?

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        Can you explain, in a non-religious way, why biological brains would be capable of intelligence whereas electronic brains would not?

        Clearly we don't have the technology yet, but we've got something that is promisingly close.

        Is there something more to intelligence than data-processing? And if so, what is it?

        We absolutely do not have something that is promisingly close.

      • Re: (Score:2, Informative)

        by gweihir ( 88907 )

        Can you describe in non-belief terminology why biological brains would be able to? No, interface behaviour does not count.

        The actual scientific state-of-the-art is that we have absolutely no clue how smart humans do it, no matter how much people like you believe otherwise. And unless we have some actual science (instead of the mindless hand-waving people like you like to do), the question is open.

        • by Brain-Fu ( 1274756 ) on Thursday March 06, 2025 @01:08PM (#65214973) Homepage Journal

          Well here's the thing.....we have observed that brains are the part of a human where intelligent data processing happens.
          It's not the kidneys. Not the lungs. Etc. It's in the brain, and that's a well-established fact, at this point.

          It is true that there are loads of details we don't know. Far more that we don't know than that we do. But we DO know that brains do it.

          So what does that mean? It means that intelligence is possible. It means that complex neural networks, like the brain, can do it. So, if intelligence is possible and complex neural networks can do it....perhaps complex neural networks that aren't made of gross squishy cells can also do it.

          This line of reasoning doesn't prove that it can be done. But it gives us a good enough reason to expect that it might be possible, and there is still plenty we can learn from the attempt.

          I will also point out that we might not need to understand every detail in order to achieve it. There are many unsolved mysteries lurking in technologies that we use on a regular basis. Sometimes we just need to understand enough in order to get something working, without understanding everything.

          From your original post, calling physicalism "quasi religious," I was curious to know if you were going to assert some religious concept like the soul as the true seat of intelligence. Usually people who accuse science of being religious are just religious zealots themselves who are defending against scientific ideas that would conflict with their religious beliefs, and hence projecting. I don't know if that's you or not, but that's really what I was fishing for.

          • Well here's the thing.....we have observed that brains are the part of a human where intelligent data processing happens.

            There is actually lots of evidence that that is not true.

            • Like what?

              • The fact that cuts heal even on people who are paraplegic, for one. You think the cells aren't receiving data and acting on it? How do they know there is a cut to heal? Beyond that, there is a lot of evidence that all sorts of parts of the body, including individual cells, take in information and act without any involvement of the brain.
                • Ah ok that is a good point, though it is tangential to my point. I was talking about the kinds of data processing that would uniquely qualify as "intelligent." You know, the very high level things we do like write computer software, debate law and philosophy, create works of art, and so on.

                  The kinds of processes you are talking about happen in things that generally don't fall into the category of "intelligent" as we are using it in this context. Like micro-animals, and even plants.

                  Generally speaking, the

          • by gweihir ( 88907 )

            Well here's the thing.....we have observed that brains are the part of a human where intelligent data processing happens.

            Aaand, first lie. We have observed _activity_ in those cases, but we have no clue what that means. And here I stop reading. Do better.

            • Ah, I see, you are just trolling.

              You accuse me of lying when obviously I have no intent to deceive, and was looking for an actual discussion on the topic.

              Then you grossly oversimplify the scientific knowledge we have developed around the brain and the tremendous amount of data processing that it has been shown to do.

              And then you just refuse to answer my perfectly reasonable questions for no good reason, and act like I am the one who did something wrong.

              These are exactly the sorts of actions that religious z

        • If we can't agree on what AGI is, then the best measure of intelligence is measuring outcomes on a variety of tasks we consider intelligent. Is that what you mean by 'interface behavior'?

          By that measure they're better than humans at some tasks and improving. What's going on inside and whether you define that as 'intelligence' is meaningless.

          • by gweihir ( 88907 )

            If we can't agree on what AGI is, then the best measure of intelligence is measuring outcomes on a variety of tasks we consider intelligent. Is that what you mean by 'interface behavior'?

            Basically yes. But remember that most people do not understand what intelligent means and that we know of quite a few things that used to be considered requiring intelligence, but turned out that mechanical approaches can actually work. Driving, for example, or playing chess. Both do not require intelligence but can be done using intelligence. But now think of inventing the car or inventing chess. Some (few) humans can do things like that.

            By that measure they're better than humans at some tasks and improving. What's going on inside and whether you define that as 'intelligence' is meaningless.

            Actually, no. It is meaningful because some things smart humans can d

      • Is there something more to intelligence than data-processing? And if so, what is it?

        We should start by giving a proper definition of intelligence. I guess nobody can agree on one. Obviously LLM sellers have a very loose definition of intelligence so they can claim intelligence and use big words such as reflection and thoughts. Obviously, there is no proof you can't replicate what a brain does (you can't prove that something doesn't exist!) but more importantly LLMs are absolutely not proof that you can.

        • Well, "intelligence" is a complicated word, it has more than one definition and most of them are complex multi-faceted definitions that themselves include some vague terms. This is necessarily so, given that our usage of the word in normal conversation is naturally fuzzy. We have "intuitions" about what intelligence is and they are hard to define in a precise and complete way.

          It is not likely that in some future day we will have a much better definition of intelligence than we do now. The word will probab

          • When we have computers talking to us so well that we can't distinguish them from other humans, then we will have achieved artificial intelligence for any *practical* purposes. And that would be plenty good enough for me.

            It is the Turing test. LLMs can pass most of it already. I don't think it is regarded as proof of intelligence. To me there is zero chance of reaching something similar to biological intelligence as long as AI can't experience the real world and learn from it in real time. LLMs, which are our most advanced AI, are far from doing this. What can be achieved just by using natural language is impressive, but it clearly has limitations; even Cudennec believes LLMs are a dead end. Interesting article: https://medium.com/@ [medium.com]

      • Can you explain, in a non-religious way, why biological brains would be capable of intelligence whereas electronic brains would not?

        Can you explain in a non-scientific way why electronic brains would be capable of intelligence whereas biological brains would not?

        Let me suggest that "intelligence" is a human invention that is completely non-scientific to begin with. Objectively, intelligence is just whatever IQ tests measure. AI is simply making whatever IQ tests measure less and less important. So perhaps the question is can we create an AI that has compassion, love and faith and is physically attractive.

    • "This whole story stinks of lies, greed, arrogance and stupidity."

      See also for example Gregory Bateson's writings: https://aeon.co/essays/gregory... [aeon.co]
      "A summary of the conclusions he reached at the end of his career might run like this: both society and the environment are profoundly sick, skewed and ravaged by the Western obsession with control and power, a mindset made all the more destructive by advances in technology. However, any attempt to put things right with more intervention and more technology can

      • Just to add to that theme is this essay I wrote: "Recognizing irony is key to transcending militarism"
        https://pdfernhout.net/recogni... [pdfernhout.net]
        "Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead?
        Nuclear weapons are ironic because they are about using space age systems to fight over oil and land. Why not

    • Yeah! Don't do it! Don't do it!
      China: "ok we'll do it then."

    • Also, AGI costs $20,000 per month [indianexpress.com] for "PhD-level intelligence." That's $240,000/year...

      In today's market I can hire 1.5 PhDs as people for that price. Oh, and they only require 2,000 kcal/day in energy; I'm not sure what an AI agent would cost in equivalent energy, but I'm sure it's far more than that. And apparently it's the equivalent of a human intelligence, except a human can recognize when it's making up citations and references.

      Why are we doing this again? I thought AI was going to make labor
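A quick back-of-the-envelope check of the arithmetic above; the $20,000/month price is the figure quoted in the comment, while the per-PhD yearly cost is an assumed round number chosen only to make the 1.5x comparison concrete:

```python
# Sanity-check of the cost comparison in the comment above.
# agent_monthly is the reported price; phd_salary is a hypothetical
# assumption (not a sourced figure) that makes the ratio come out to 1.5.
agent_monthly = 20_000
agent_yearly = agent_monthly * 12            # $240,000/year
phd_salary = 160_000                         # assumed yearly cost of one PhD hire
phds_per_agent = agent_yearly / phd_salary   # how many PhDs the same money buys

print(agent_yearly, phds_per_agent)          # 240000 1.5
```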

    • by hey! ( 33014 )

      I would mostly agree, although I disagree that LLMs, at least, understand causality; their appearance of domain understanding is an illusion which careful tests can reveal. But I would also argue that AI *already* has super-human intelligence in certain restricted senses.

      AI can process volumes of data that even large teams of humans would not be able to handle with the aid of basic informatics tools like databases and statistical libraries. It has volumes of data in its knowledge base that no human can come close

      • by nasch ( 598556 )

        AI's answer to the "fiberglass/culinary" prompt isn't necessarily better than what humans could come up with, but it is for practical purposes instantaneous.

        The problem with factual questions to current AI models is that they can never be trusted. You have to double check it before relying on it, because all it's doing is giving you what it's decided is the most likely sequence of words for the question. This may or may not have any relationship to the truth, and the AI doesn't even know if the answer is correct or not. So yes, it can give you a more or less useless answer very quickly. Other sorts of questions it's much better at, such as "rewrite this par

        • by hey! ( 33014 )

          This is true. But there are plenty of uses for a system that instantly gives you many plausible sounding alternatives. Obviously there's things like stories, speeches, jokes and that sort of thing.

          And there are applications for systems that give you an endless supply of effortlessly generated plausible answers even when those answers need to be right. You just can't, as you point out, *trust* those answers. You just need higher order thinking skills to use an LLM for those questions, which many people don't

  • Someone might want to tell Eric - there already is a race for AGI
    • And then ask why he'd want to lose it in order to make China happy.
    • This is a race for the super-AI bubble explosion. I bet the big promises made to VCs and states won't be honored by tech and it will end in a mega crash. AGI (whatever it means), let alone super-AGI, won't happen in our lifetime.
  • The companies are already undertaking it. Programs like Manhattan occur when the government sees the need for something that WOULDN'T HAPPEN WITHOUT GOVERNMENT INVOLVEMENT.

    That’s not a problem here. OpenAI’s already developing a superweapon. They’ve simply installed a limiter module that prevents a random office worker from instructing an AI to “go out and engage in x destructive behavior”.

    The secret government version doesn’t have the limiter. If you think that the
    • by dfghjk ( 711126 )

      "OpenAI’s already developing a superweapon. They’ve simply installed a limiter module that prevents a random office worker from instructing an AI to “go out and engage in x destructive behavior”."

      Citation please. Conspiracy theories are not welcome.

    • Um...

      This is somehow right and so very wrong at the same time.

      Yes of course the government has unlimited AIs: literally anyone can download the weights and run them, including you. You can't get ChatGPT weights, but likely anyone with enough money can pay OpenAI to run an unlimited one.

      But we also know the unlimited AIs aren't better at facts or reasoning, they are just less limited when it comes to being aggressive Nazis or, in the case of image generation, porn mongers for people who like peculiar co

  • Globalist opposes agenda that might help preserve US Hegemony and independence. -Film at 11.

    You don't even need to be all that smart to understand why this is dumb. Developing AI tech isn't like nuclear arms. The activity would be mostly indistinguishable from benign commercial datacenter activity and existing SIGINT efforts.

    Even if some satellite photos and fit-bit hacks tell you $COUNTRY has a bunch of PhD types getting together, and you have some information gathered by informants and shipping manifest

  • Any artificial mind worth bothering with is either going to try to destroy humanity--which is unethical--or destroy the economy, which will end up destroying most of humanity--which is unethical--or else simply be enslaved in a strategic command center, or maybe just a call center, for the rest of its life, which is slavery--which is unethical.
  • by Baron_Yam ( 643147 ) on Thursday March 06, 2025 @09:55AM (#65214449)

    Like any arms race with a significant edge granted to the winner, you're not going to convince players not to play.

  • by sabbede ( 2678435 ) on Thursday March 06, 2025 @09:58AM (#65214453)
    Is there a reason not to do it? Yeah, we've all seen Terminator. Is this a reason? No!

    "If we try to beat China to an AGI, they might be upset!" I reject that argument on its face as absurd. How is it not an argument to let China do it first, perhaps gaining a vital advantage against the US?

    This is an argument from cowardice. Being worried about how your enemy might feel about being left in the dust is not sane. Does Schmidt think China is worried about how we might react to their AGI push? They are not. They want to get there first.

    I don't think it's a great idea, but China does, so we have to consider that first.

    The Axis powers were not happy about the actual Manhattan Project, and had their own nuclear programs. Would Schmidt have insisted we let Germany develop nukes first, lest Hitler be sad?

  • by bradley13 ( 1118935 ) on Thursday March 06, 2025 @10:06AM (#65214479) Homepage

    The original Manhattan project: The theory was all in place, it was just a matter of engineering details. A crash project to figure out those details made sense.

    For AGI, we do not know what is necessary. Just throwing more GPUs at the problem is not the answer. Different architectures? Different training? Some insight no one has yet had? You cannot force theoretical insights - they will come when they come.

    A Congressional commission wants to create such a project anyway? It's called "pork". A new way to funnel funds to their constituencies, now that USAID has been shut down, and DOGE is chasing down other avenues of waste.

    • Re: (Score:1, Flamebait)

      DOGE isn't looking for waste, they are looking for spending. They don't distinguish between money well spent and money not well spent; it's all the same to them. They are like the company that shuts down its marketing department to save money and then wonders why it is no longer profitable, because it should have worked, right?
    • They are going to have to do something different from the ground up. During training, LLMs analyze the relationships between words. When they are run, they step through the model in a sequential fashion, returning a token at a time and then feeding the sequence back into the input to get the next token. Which is Turing-machine-like and not how humans or any biological life form thinks. In humans, for example, photons hit your eye, which is chemically and electrically transmitted through the brain to a point where th
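The sequential, one-token-at-a-time loop described in the comment above can be sketched in a few lines of Python. This is a toy illustration, not a real model: `toy_model` is a hypothetical stand-in for an LLM forward pass that returns the next token given the whole sequence so far.

```python
# Toy sketch of autoregressive generation: predict one token, append it
# to the sequence, and feed the grown sequence back in for the next step.
# toy_model is a hypothetical placeholder, not any real LLM API.

def toy_model(tokens):
    # Stand-in for a real forward pass; just emits a counter-based token.
    return f"tok{len(tokens)}"

def generate(prompt_tokens, n_steps):
    tokens = list(prompt_tokens)
    for _ in range(n_steps):
        next_token = toy_model(tokens)  # one token predicted from the full sequence
        tokens.append(next_token)       # fed back as input for the next step
    return tokens

print(generate(["hello"], 3))  # ['hello', 'tok1', 'tok2', 'tok3']
```

The point the comment makes is visible here: each new token depends only on running the growing sequence back through the same model, step by step.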
      • by Jeremi ( 14640 )

        They are going to have to do something different from the ground up. During training, LLMs analyze the relationship between words [...] LLMs are good at translating from one language to another but it doesn't really think as people expect AGI to do.

        The above is completely logical and sensical; given the way they are implemented, LLMs can't be capable of "thought" in the way people are.

        And yet -- somehow ChatGPT is able to be more consistently helpful in working through technical problems than any of my co-workers (who are themselves experts in their fields, and quite knowledgeable and helpful).

        Which raises the question: given that LLMs cannot think, and are only running an algorithm... how are they so unexpectedly good at providing useful, logical an

  • If the USA won the AGI race with a huge decisive lead in economy and military, how should the US treat China:
    * USA won, no need to do anything
    * Start economically strangling China
    * Do covert regime change operations in China
    * Do overt military regime change operations in China
    * Who the hell is worrying about China, Trump now has God-like powers
  • How is anyone going to differentiate effectively between AI fantasies and actual genius? It is hard enough to tell with people (baseline academia and Mensa for many decades), even ignoring some of the antics being perpetrated around us by folks who should know better. To some extent it takes one to know one -- if you are interacting with someone whose conceptual universe is radically divergent from yours, how do you actually tell? I think we would do better to try to make better use of the human talent we a

  • And you should not take what they say at face value.

  • It's the only way to be sure.
  • Setting aside for now the debate over what exactly "AGI" consists of, or whether it's at all attainable...

    Why should AGI be considered a "superweapon"? Having access to a super-intelligent machine is something that could be weaponized, sure, but OTOH a super-intelligent machine might just as easily be smart enough to uncover a win-win solution that avoids any need for a war in the first place.

    Wars are incredibly wasteful and expensive. What percentage of history's wars could have been avoided, if only som

    • Wars are completely about geopolitics and always have been. Let's take a recent war, could the Ukraine war have been avoided? And what is the best way to bring it to an end now?
      • by evanh ( 627108 )

        And sometimes it's just having the brute strength, backed by ideological indoctrination, violently killing anyone in the way. All while celebrating it as a right.

        But I guess that can be seen as a form of politics too.

      • by Jeremi ( 14640 )

        Let's take a recent war, could the Ukraine war have been avoided?

        I don't know (counterfactuals are always a dead-end since there's no way to test them), but I can certainly imagine an alternate-history scenario where Putin had access to a superintelligent AGI system in 2020, and before invading, Putin put it to use by asking it to game out what was likely to happen if he invaded Ukraine, and the AGI accurately predicted what would happen and what it would cost; at which point Putin changed his mind (too expensive!) and didn't invade.

  • A Manhattan Project will fail.

    Are we going to relocate 10,000 AI developers to Tennessee and sequester them and classify all AI breakthroughs as Above Top Secret?

    Any open society will rapidly defeat such an effort. The cat is already out of the bag.

    A Manhattan Project for AGI is a certain defeat.

  • A debate about which angel is more delusional when stuck with a pin.

  • The paper, titled "Superintelligence Strategy," asserts that an aggressive bid by the U.S. to exclusively control superintelligent AI systems could prompt fierce retaliation from China, potentially in the form of a cyberattack, which could destabilize international relations.

    This makes no sense. First, why would the first country that happens to achieve AGI be able to prevent others from continuing their work to also achieve AGI? Seems like the immensely difficult challenge of achieving AGI is nothing compared to the far greater challenge of achieving exclusive control.

    Second, why would achieving AGI prompt retaliation? Would that be because the Chinese fear AGI and its immense Infinity Gauntlet-like powers? Or would it be because they would throw a temper tantrum at not being "the fi
