Anthropic Chief Says AI Could Surpass 'Almost All Humans At Almost Everything' Shortly After 2027

An anonymous reader quotes a report from Ars Technica: On Tuesday, Anthropic CEO Dario Amodei predicted that AI models may surpass human capabilities "in almost everything" within two to three years, according to a Wall Street Journal interview at the World Economic Forum in Davos, Switzerland. Speaking at Journal House in Davos, Amodei said, "I don't know exactly when it'll come, I don't know if it'll be 2027. I think it's plausible it could be longer than that. I don't think it will be a whole bunch longer than that when AI systems are better than humans at almost everything. Better than almost all humans at almost everything. And then eventually better than all humans at everything, even robotics."

Amodei co-founded Anthropic in 2021 with his sister Daniela Amodei and five other former OpenAI employees. Not long after, Anthropic emerged as a strong technological competitor to OpenAI's AI products (such as GPT-4 and ChatGPT). Most recently, its Claude 3.5 Sonnet model has remained highly regarded among some AI users and highly ranked on AI benchmarks. During the WSJ interview, Amodei also spoke about the potential implications of highly intelligent AI systems once these AI models can control advanced robotics. "[If] we make good enough AI systems, they'll enable us to make better robots. And so when that happens, we will need to have a conversation... at places like this event, about how do we organize our economy, right? How do humans find meaning?"

He then shared his concerns about how human-level AI models and robotics that are capable of replacing all human labor may require a complete re-think of how humans value both labor and themselves. "We've recognized that we've reached the point as a technological civilization where the idea, there's huge abundance and huge economic value, but the idea that the way to distribute that value is for humans to produce economic labor, and this is where they feel their sense of self worth," he added. "Once that idea gets invalidated, we're all going to have to sit down and figure it out." The eye-catching comments, similar to comments about AGI made recently by OpenAI CEO Sam Altman, come as Anthropic negotiates a $2 billion funding round that would value the company at $60 billion. Amodei disclosed that Anthropic's revenue multiplied tenfold in 2024.
Further reading: Salesforce Chief Predicts Today's CEOs Will Be the Last With All-Human Workforces


Comments Filter:
  • by taustin ( 171655 ) on Wednesday January 22, 2025 @04:31PM (#65110659) Homepage Journal

    If I were a conspiracy theory sort, I'd wonder if they're deliberately overhyping the capabilities with predictions of doom and gloom specifically to provoke laws banning the technology - before they have to admit it's all BS and they can't deliver anything they've promised, including the return to investors.

    • by gweihir ( 88907 )

      It is a possibility, but these people do not strike me as smart enough to actually come up with this strategy or implement it competently.

    • With saying for years his cars could do things they couldn't. Normally doing that is the CEO would get you a visit from the SEC and jail time. Instead he's the richest man on earth and spoke at the presidential inauguration (well speaking is one of the things you did...)

      So no this is just hype. The products though are real and there's likely to be billions of dollars of government money going straight into them in the very near future. Instead of fixing our roads and bridges and transportation networks
      • With saying for years his cars could do things they couldn't. Normally doing that is the CEO would get you a visit from the SEC and jail time. Instead he's the richest man on earth and spoke at the presidential inauguration (well speaking is one of the things you did...)

        Who the fuck are you even muttering to? And what the fuck do cars have to do with the topic? You can't even keep your pronouns consistent in the same sentence.

But the parallel... sorry, that's a big word; I don't think it's going to work here. The thing that is similar between Elon Musk and the AI companies is that both hyped up products, saying they could do things they patently cannot do yet. And it remains to be seen whether they can do those things or not.

In the context of some forms of advertisement that's not a problem. But when you are talking to investors the way Elon and these CEOs are doing, it is very, very much a crime.

          However as long as you onl
That is an interesting parallel. Because Teslas lack the sensors required for FSD, he honestly has been lying the whole time. I actually think that dishonesty is a strategy for success used by Trump and Leon, both financially and for power, as well as to prop up their egos. I can't imagine being so weak.
      • Look at it from the bright side, your Tesla SS may not self-drive so well, but it will look to almost all people almost like a W-150.

    • by hey! ( 33014 )

Well, look. Any predictions you hear out of a startup are bound to make what they're doing sound more revolutionary than it is. If AI technology is about to surpass and supplant humanity, then really the only viable position as a human being is to own a piece of a company that owns a piece of that technology, right?

      On the face of it, an artificial general intelligence that outperforms humans is a capitalist's dream and a worker's nightmare, but if you think about it, it's almost the other way around.

      • Hard hitting comment. Not much to argue with... but... there is a loophole there, some "hedge fund" will take that private for Mr Burns, from the looks of it.

        I'm encouraged reading there are open source models because the best stuff will be hogged up by insanely rich people. Saudi money is looking for places to go. But there seem to be diminishing returns on that size of money... money in never equals what a company is worth ... results count and I'm guessing that true progress is going to be a lot harder t
  • by rbrander ( 73222 ) on Wednesday January 22, 2025 @04:31PM (#65110663) Homepage

    ...seems like 3-4 years now, while we continue to laugh at the attempts to go for 30 minutes unsupervised.

    I don't denigrate ML, I've filled in some probability tables with massive data summarizations myself, and it helped automate my own job. But I've been replaced by two people, nonetheless. Better tools often get you into more complexity and you have to navigate that to prevent the great tool from making greater mistakes.

    Backhoes really helped speed construction, but not unsupervised.

    • by m00sh ( 2538182 )

      ...seems like 3-4 years now, while we continue to laugh at the attempts to go for 30 minutes unsupervised.

      I don't denigrate ML, I've filled in some probability tables with massive data summarizations myself, and it helped automate my own job. But I've been replaced by two people, nonetheless. Better tools often get you into more complexity and you have to navigate that to prevent the great tool from making greater mistakes.

      Backhoes really helped speed construction, but not unsupervised.

That backhoe mention really exemplifies how people have no frame of reference for AI. It must be like a backhoe. It must be like blockchain. It must be like Clippy.

      While we talk and talk and waste time, the AI is calculating the gradient and getting better.
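For the curious, "calculating the gradient" is a concrete thing; here is a toy one-variable gradient-descent sketch (purely illustrative, nothing to do with any particular model):

```python
# Toy illustration of "calculating the gradient": gradient descent
# minimizing f(w) = (w - 3)^2. Illustrative only, not any real model.
def grad(w):
    # derivative of (w - 3)^2 with respect to w
    return 2 * (w - 3)

w = 0.0               # start far from the minimum
lr = 0.1              # learning rate (step size)
for _ in range(100):  # each step moves w downhill along the gradient
    w -= lr * grad(w)

print(round(w, 4))    # prints 3.0 (converged to the minimum at w = 3)
```

Each iteration shrinks the distance to the minimum by a constant factor (here 0.8), which is the "getting better while we talk" part.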

    • He isn't so precise, you know. "Shortly after 2027" may be in 2077, or 2177.

It will still be short compared to the "shortly" of nuclear fusion power by then.

  • Google Search is already quite good at finding what you want, even if you don't enter a completely clear question.
    • Google Search is already quite good at finding what you want, even if you don't enter a completely clear question.

      It's actually really bad at anything specific. I was going back to the beginning of the Marvelverse and made the mistake of reading the Google AI responses about what order the movies were released in. Google's AI claimed a release order that wasn't anywhere near correct. And asking specifically, "Which marvel movie was released after $movie" would get you the wrong answer more often than the right answer. And the right answer can be found on DOZENS of websites, or the spreadsheet I keep on my laptop that I

    • For all I know the GS algo (I don't think it's "AI" in the bubble sense though, as it predates the LLM hype) may do a very good job of figuring out what I want, but it's hard to tell because its prime directive is now to shovel ads and the maximum possible number results in my face instead of what I asked for. Thus a search with multiple words comes back with the entire first page of supposedly real results having one of the words I took the time to type in crossed out, because huh if you look for what I a
  • by david.emery ( 127135 ) on Wednesday January 22, 2025 @04:33PM (#65110669)

    Soon, I hope, so we don't have to listen to these clowns, their self-serving public relations campaigns, and self-serving profiteering...

  • Naturally (Score:5, Insightful)

    by mkosmo ( 768069 ) <mkosmo@gm[ ].com ['ail' in gap]> on Wednesday January 22, 2025 @04:34PM (#65110675) Homepage
    Of course he does. His income depends on it, so he's going to sell it as if it's the next sliced bread, here to solve all your problems. Is he entirely wrong? Probably not. Probably just (very) optimistic on scope, scale, timing, and value.
    • Of course he does. His income depends on it, so he's going to sell it as if it's the next sliced bread, here to solve all your problems. Is he entirely wrong? Probably not. Probably just (very) optimistic on scope, scale, timing, and value.

      Elon promised we'd have boots on Mars by now too. All these hype guys have grandiose expectations that just never quite materialize as fast as they say.

      • by mkosmo ( 768069 )
I have more faith we'll be on Mars in the near future than AI taking over human jobs. That said, I think some flavor of AI will be critical to the Mars mission - at least in a "here, bold-faced checklist items for you while we wait for signal delay to get us a message from Earth" kind of way. As it is today, these technologies are great enablers and force multipliers, but they're not human-replacements.
I have more faith we'll be on Mars in the near future than AI taking over human jobs. That said, I think some flavor of AI will be critical to the Mars mission - at least in a "here, bold-faced checklist items for you while we wait for signal delay to get us a message from Earth" kind of way. As it is today, these technologies are great enablers and force multipliers, but they're not human-replacements.

          Some current-gen AI is good for automating simple tasks. And I'm certain there will be some of that involved in the Mars missions, when they come. I agree we will get to Mars before the AIs reach human-level at any particular tasks other than very specific automations. We won't have general intelligence in AIs anytime soon. Not from LLM systems.

          • by mkosmo ( 768069 )
Hah - the sooner we can get people to understand that LLMs aren't actual intelligence, the better off we'll be. OpenAI has done some wonderful work and marketing with ChatGPT to make people believe otherwise, though. Once they understand what LLMs are (both their real value and their limitations), people can start actually leveraging them in their lives instead of thinking the models can answer all of the open questions of the universe.
Hah - the sooner we can get people to understand that LLMs aren't actual intelligence, the better off we'll be. OpenAI has done some wonderful work and marketing with ChatGPT to make people believe otherwise, though.

When I Google "intelligence" it says "the ability to acquire and apply knowledge and skills." Clearly ICL (in-context learning) is able to do that.

              Once they understand what LLMs are (both their real value and their limitations), people can start actually leveraging them in their lives instead of thinking the models can answer all of the open questions of the universe.

              Quite strange to see "aren't actual intelligence" morph into "thinking the models can answer all of the open questions of the universe". Who thinks they can do that?

    • by HiThere ( 15173 )

      Yeah, I think he's greatly over-hyping it.

      OTOH, it's coming for your job, soon. Not, perhaps, in 3 years, perhaps not in 5. But don't bet on a decade.

Let's see the AI bots diagnose and fix code issues at 3am when your corporate site is down and no human knows how the code works. I am sure the robots are not going to comment the code any better than their human overlords do.
    • By 2027 AI will write the entire codebase and will also update and maintain it. 3am is irrelevant since AI's on duty 24/7. It's highly preferable to unreliable software that is one human issue away from being broken with no recourse.
When a site goes down, there are just so many things that could be causing it - not limited to your web app's code. Could be an issue with a database, network device (like a firewall or load balancer), a server (hardware or OS layer), a DNS issue, an SSL cert expiration, a DDOS attack, some OTHER piece of software on a web server (like antivirus software, a dependent runtime, or a recent patch). Maybe the hosting provider has an outage. I just don't see AI fixing "website is down" issues anytime soon. It woul
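A few of those layers are at least scriptable today. Here is a minimal, hypothetical triage sketch covering the DNS, TCP, and TLS-certificate checks from the list above (host name is just an example; everything past these basics - databases, load balancers, hosting outages - is exactly the hard part the comment is about):

```python
# Hypothetical first-pass "is the site down?" triage script.
# Checks three of the layers mentioned above: DNS resolution,
# TCP reachability, and TLS certificate expiry.
import datetime
import socket
import ssl

def triage(host: str, port: int = 443) -> list[str]:
    findings = []
    # 1. DNS: does the name resolve at all?
    try:
        addr = socket.gethostbyname(host)
        findings.append(f"DNS ok: {host} -> {addr}")
    except socket.gaierror as e:
        # Nothing else can work without DNS, so stop here.
        return [f"DNS failure: {e}"]
    try:
        # 2. TCP: can we open a connection?
        with socket.create_connection((host, port), timeout=5) as sock:
            findings.append(f"TCP ok: connected to {host}:{port}")
            # 3. TLS: handshake succeeds and cert is still valid?
            ctx = ssl.create_default_context()
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
                expires = datetime.datetime.fromtimestamp(
                    ssl.cert_time_to_seconds(cert["notAfter"]),
                    tz=datetime.timezone.utc)
                findings.append(f"TLS ok: cert valid until {expires}")
    except ssl.SSLCertVerificationError as e:
        findings.append(f"TLS failure (expired or bad cert?): {e}")
    except OSError as e:
        findings.append(f"TCP failure: {e}")
    return findings

for line in triage("example.com"):
    print(line)
```

If the DNS check fails the script stops, since every later check depends on resolution; that ordering is the whole point of layer-by-layer triage.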

        • by HiThere ( 15173 )

I don't think you understand the process. First it picks off the easy jobs. Then it picks off the next layer. No more new hires needed. And year by year it handles more and more of "basic tasks". After a decade, it's better than anyone else. (The real experts retired.)

Now whether it will actually work that way is another question, but that's the "plausible scenario". It's probably only a couple of years until it's better at COBOL than anyone you can hire. (If it's not there already...I've no idea of

    • by m00sh ( 2538182 )

Let's see the AI bots diagnose and fix code issues at 3am when your corporate site is down and no human knows how the code works. I am sure the robots are not going to comment the code any better than their human overlords do.

      Check out the latest coding agents.

      They write excellent comments and have written good ones for the last 2 years or so.

      The new ones are even better and write excellent unit tests in a matter of seconds.

Most AI can read through the code in 5-10 minutes for medium sized repos, and probably 20-30 minutes for gigantic repos. If you give it the bug report, it can probably figure out what the bug is and propose a patch. It can probably also test it out and make a robust unit test for it.

      Though it still needs sup

  • cough *bullshit* cough
    Market hype to pump stock price. When regular managers who are actually using this stuff are discussing implementation and benefits, then I'll believe it. I don't buy vaporware.
  • The last great technology claim.... thud.
  • AI might, even sometime next month, get good enough to eliminate all need for humans to do simple, mostly repetitive things where you are not looking for anything other than a mix and regurgitation of things humans have done previously.

    We still have no clue how to make it intelligent and rational. Anything novel requiring those to solve will remain in the human domain for the foreseeable future.

Of course, given the drastically lower price point we probably won't care and just go with the AI as our economic system crashes due to all the humans it no longer needs.

Jury nullification may be the only thing to fight back with.

    • >>Of course, given the drastically lower price point we probably won't care and just go with the AI as our economic system crashes due to all the humans it no longer needs.

      Thank you very much. Squeeze the last drops of profit before the crash. Seems to be a likely outcome.
    • by HiThere ( 15173 )

      Actually, creative approaches are easy. The problem is *good* creative approaches. You need some way to filter out the stuff that won't work. This requires a decent model of the world.

      We've got a pretty good idea of how to do that, but doing it at all efficiently is a different matter. When an AI draws a figure with 7 fingers on one hand, that's a creative approach. And it didn't decide to do that because it was copying lots of examples of that being done. So it needs a model of the world that it can

  • Like that Anthropic Chief...

    For smart ones? Not this century and maybe never.

  • so when that happens, we will need to have a conversation... at places like this event, about how do we organize our economy, right? How do humans find meaning?"

Nah, we'll just ask AI. They are better at answering those questions than we are.

  • Human imagination is still far beyond the realm of silicon and digital data.
    Current AI is nothing more than a glorified auto-assumption search engine when it comes to that. Just stick to letting it sort out vetted data which it's actually good at.

    • Human imagination is still far beyond the realm of silicon and digital data.
      Current AI is nothing more than a glorified auto-assumption search engine when it comes to that. Just stick to letting it sort out vetted data which it's actually good at.

      Imagination is overrated. If human imagination was so great simultaneous invention would be the exception rather than the norm.

As it currently stands, mindless complex systems executing simple algorithms have demonstrated far greater feats of "imagination" than the sum of all human endeavour.

  • by nightflameauto ( 6607976 ) on Wednesday January 22, 2025 @05:13PM (#65110803)

    Just for the sake of entertainment, let's pretend that any of this is actually going to come to pass. That AI will replace everything, and be so good at robotics that it can even replace manual labor. OK, say it happens. In our society, how do we see that ending?

    Our governments work cooperatively with all the people disrupted by this technological leap forward and try to find a way to subsidize their lives so that they can continue to have enough to eat, a roof over their heads, a bit of something for entertainment, and for those without the imagination to come up with their own list of things to do, something to prevent them from slipping into a disaster level depression from lack of motivation.

    Or...

    Our governments continue to take handouts from the corporatists who own all these AI systems that are slowly subsuming the job market to ignore the growing number of people who lose their livelihoods, then lose their ability to feed themselves and their families, then lose their homes. The growing "homelessness problem" intensifies into outright hatred, until the few left with jobs demand something be done, then the AI develops robotics to round up the homeless and put them in subsistence level shelters. Or, perhaps they'll just skip that step and go right to the killbots.

    Which scenario seems more believable in our current world?

    Granted, that's only if you believe these hype-men. Or if enough of the business world believes them to buy into the hype at scale.

    Shit, we're doomed.

  • PT Barnum would be proud. (and would promptly come up with an even more hyperbolic and ridiculous claim, no doubt.)

  • Remind me, can AI draw text or a human hand yet?

I have given some of these AI coding assist tools a red hot go, and I spend more time deleting the absolute specious garbage they produce than anything else. The only time it is actually useful is when it is behaving like traditional content assist. Outside of that it's mostly producing crap. Even with perfect knowledge of the rest of the code base it still suggests stuff that is blatantly wrong. They also seem monumentally shit at test creation.

If we humans don't have jobs, we don't make money, and we don't buy stuff. The companies that are all AI-driven don't have customers, so they cannot operate for free.

Now do humans basically get (more) token jobs, where we are kept busy? Or will we just basically live in a post-capitalist utopia where everything we need and want is provided to us?

But 2027 is much too close for such social changes, especially with such a strong regressive attitude towards changes we are facing today. With the little people who have little power in this world being blamed for all of our problems. Saying everyone will be able to live with whatever they want without working for it will not be tolerated for quite a while. Probably not within my life span.

    • by m00sh ( 2538182 )

If we humans don't have jobs, we don't make money, and we don't buy stuff. The companies that are all AI-driven don't have customers, so they cannot operate for free.

Now do humans basically get (more) token jobs, where we are kept busy? Or will we just basically live in a post-capitalist utopia where everything we need and want is provided to us?

But 2027 is much too close for such social changes, especially with such a strong regressive attitude towards changes we are facing today. With the little people who have little power in this world being blamed for all of our problems. Saying everyone will be able to live with whatever they want without working for it will not be tolerated for quite a while. Probably not within my life span.

We're talking about AI better than humans at everything, and you're worried about jobs? Are you in heavy student debt, with a big mortgage, and a Cybertruck on a 60-month lease?

I think if we have AI better than humans at everything, there will be a mass culling of the human population. I'm pretty sure it will get down to 500 million or so people at most.

I'd put the US at 50/50 in the culling, but some poor countries at 98-99% culled. I expect the middle-aged and older, and men, to be culled the most.

  • says thing will be great

  • "come as Anthropic negotiates a $2 billion funding round that would value the company at $60 billion. Amodei disclosed that Anthropic's revenue multiplied tenfold in 2024"

    "How do we organize our economy, right? How do humans find meaning?"

    1: Taking care of electric sheep, hoping one day to acquire a real high-status live animal
    2: Linking into empathy boxes for collective suffering

  • Wait, what???

    Of course he would say that his company's product will be the best thing ever.

    And yet, every single AI out there will still have to be trained and built by...humans.

  • All Watched Over by Machines of Loving Grace [imdb.com]

    “A series of films about how humans have been colonized by the machines we have built. Although we don't realize it, the way we see everything in the world today is through the eyes of the computers.”
    --

    All Watched Over By Machines Of Loving Grace [allpoetry.com]

    A poem by Richard Brautigan as rewritten by a computer:

    I lyk 2 th!nk (4nd th3 s00n3r, th3 b3tt3r!)
    of a cyb3rnet!c m34d0w
    wh3r3 m4mm4ls and c0mput3rs
    l!v3 t0g3th3r in 3pic mUtU4lly pr0gr4mm!ng h
  • Clearly, he will be of no use when the machines are that smart.
Like Neanderthals before us, if next-level intelligence arrives, humans at this level will be obsolete. It won't really matter 'how do humans find meaning' as it simply won't be relevant.

The only meaningful question will be how long this superior intelligence keeps us around, in the face of the destruction we cause to the planet and the purpose it finds for itself.

    As religion demonstrates as an example, humans self-assign importance to themselves that simply doesn't exist in cosmic scale. The only relevance we ha

  • ...story at 11.

    Seriously though, the level of progress is stagnating for all the AI makers, but their grand promises show no signs of slowing.
AI is not better than humans at anything. What they are is FASTER than humans. They train quicker and do their job quicker, but they are not in any way better than humans. They achieve this speed by spending a ton of money. Effectively the AI companies buy training time.

The first AIs cost millions to train; now they cost hundreds of millions, if only because they want them more generalized.

    Worse, it costs this money every time you want to train them. Want to be able to recognize a cat? $10 million. Also

  • AI ... LLMs are handy. They are just a tool and given enough training on decent data are a great aide memoire. They are the old school secretary (female - obvs) that predicts the woefully sexist boss's thoughts and fixes things up. ... without the casual abuse.

The hysteria with regard to replacing programmers and so on is absolute bollocks.

    A LLM does not know when it is talking bollocks, so for disciplines such as maths, it should be restricted to, say, a chat about theorems, axioms etc - all very useful b
