Musk Predicts AI Will Overtake Human Intelligence Next Year
The capability of new AI models will surpass human intelligence by the end of next year [non-paywalled link], so long as the supply of electricity and hardware can satisfy the demands of the increasingly powerful technology, according to Elon Musk. From a report: "My guess is that we'll have AI that is smarter than any one human probably around the end of next year," said the billionaire entrepreneur, who runs Tesla, X and SpaceX. Within the next five years, the capabilities of AI will probably exceed that of all humans, Musk predicted on Monday during an interview on X with Nicolai Tangen, the chief executive of Norges Bank Investment Management.
Musk has been consistently bullish on the development of so-called artificial general intelligence, AI tools so powerful they can beat the most capable individuals in any domain. But Monday's prediction is ahead of schedules he and others have previously forecast. Last year, he predicted "full" AGI would be achieved by 2029. Some of Musk's boldest predictions, such as rolling out self-driving Teslas and landing a rocket on Mars, have not yet been fulfilled. A number of AI breakthroughs over the past 18 months, including the launch of video generation tools and more capable chatbots, have pushed the frontier of AI forward faster than expected. Demis Hassabis, the co-founder of Google's DeepMind, predicted earlier this year that AGI could be achieved by 2030.
The pace of development has been slowed by a bottleneck in the supply of microchips, particularly those produced by Nvidia, which are essential for training and running AI models. Those constraints were easing, Musk said, but new models are now testing other data centre equipment and the electricity grid. "Last year it was chip constrained ... people could not get enough Nvidia chips. This year it's transitioning to a voltage transformer supply. In a year or two [the constraint is] just electricity supply," he said.
Ok (Score:5, Funny)
At least we now know what won't happen.
Re:Ok (Score:5, Insightful)
Yup. We wait until someone who understands AI can speak on the matter. Ie, someone who is NOT a CEO and who has no vested financial interest in boosting or diminishing the hype.
Re: (Score:3)
You don't need someone who understands AI, you just need someone who understands how science works at the frontier of human knowledge. Because the correct answer is we simply don't know. We never know what leading researchers in any field will discover or accomplish over the next few years. Unless top AI researchers have already reached human level intelligence but haven't released that info to the public, they aren't going to have any idea when we will reach that milestone.
If we understood precisely how th
Re: (Score:2, Funny)
Can Musk even define "intelligence"?
Re: (Score:2, Insightful)
Can Musk even define "intelligence"?
Can you?
Re:Ok (Score:5, Informative)
Musk has a Bachelor's in Physics and Economics, and the degrees are apparently questionable. He cannot really do any real definition work, he is just not smart or educated enough for that. He does have a massively oversized ego though. Probably thinks he is a stable genius or something.
It Depends (Score:2)
Good (Score:3)
Re: (Score:2)
Xtank. We need to bring back Xtank. With a full Wayland version, sound effects and all.
Re: (Score:2)
He said smarter than a human. 1 Human. So you'll need to pick which codebase gets updated. And then it'll say that it's got better things to do and abandon the task.
Depends on the task (and the human) (Score:5, Interesting)
There's also a matter of who it's being compared to. Tesla's AI for autonomous driving isn't something I'd trust to operate without a person behind the wheel to take over, but there are some people who are such lousy drivers that even the AI available today will outperform them.
Re: Depends on the task (and the human) (Score:4, Insightful)
Re: (Score:3)
This is how I kind of think of it. It doesn't have to be better than people, it just has to be better than some people, and the replacement will begin. I also think there's a round of bar lowering going on right now as people start silently outsourcing their jobs to ChatGPT.
You set the bar far too high. It doesn't have to be better than any people to start replacing them at work. It just has to be good enough that a salesman can convince an executive that it's better than people. And that doesn't take much if it's the right salesman and the right executive. Hell, I've had managers that can't so much as run into a salesman at a meal out and not walk away with visions of totally new systems replacing old, but completely functional, systems. This replacement will be no different, they'
Re: (Score:2)
No, but seriously; have you checked out the latest 12.3.3 full self drive (no longer beta)? It's getting scarily good and I would say it handles most situations better than a few people I know who haven't had their licenses revoked (yet).
He's been promising the self drive capability for what, a decade now? I used to think it was a 20+ year problem, but now I honestly think we are really close to something that is better than a majority of drivers in a majority of situations.
Re: (Score:3)
There seems to be so much hubris and prescience around this field.
We don't really know what AGI is, and we don't know whether there are other kinds of intelligence.
Thanks to bad social media sites, we've developed this cultural philosophy of spouting opinions without any substance or knowledge.
Elon Musk was so prone to this behaviour, he bought one of those bad social media sites.
What we do know is that LLMs already exceed human abilities in many tests. If anyone genuinely believes that number
Re: (Score:2)
Isn't that the difference between AI and AGI? Training for a specific task is a finite effort; chaining tasks together also breaks down into task identification and processing. To me, at least, general intelligence is about using a complete body of knowledge to process and act on information.
Is that before or after Tesla full self driving (Score:3, Insightful)
Arguably, he has been predicting a lot of things, like Tesla full self driving, for years... Maybe, maybe not... who knows.
AI has done cool things lately and will do even more, but there is no certain timeline yet.
The great predictor (Score:5, Funny)
Will this be before or after full self driving?
Re: (Score:2)
Re: The great predictor (Score:2)
Re: (Score:3)
There is a critical mass of Teslas that aren't smart enough to be driving either. See "The final 11 seconds of a fatal Tesla Autopilot crash":
https://www.washingtonpost.com... [washingtonpost.com]
Most drivers only drive poorly in certain sections of the drive. The question is whether what Tesla is doing helps or hinders poor driving.
Re: (Score:3)
See "The final 11 seconds of a fatal Tesla Autopilot crash":
https://www.washingtonpost.com... [washingtonpost.com]
There's a serious case of victim blaming going on there. A semi driver blew a stop sign and got someone killed. Yes, the driver of the Tesla should've been paying attention, but the onus was on the semi driver to yield to oncoming traffic, which he failed to do.
It seems like this is a major area where self-driving is still quite deficient. It's not quite so good at avoiding an accident when another driver has created an unsafe situation by doing something they weren't supposed to.
Obligatory Mana... (Score:3, Insightful)
...because someone needs to mention it whenever AI is brought up, so might as well be me this time.
Go read Manna by Marshall Brain [marshallbrain.com] to give you an idea of the hellscape AI under the control of modern capitalism will bring us.
Re: (Score:2)
Re: (Score:3)
...because someone needs to mention it whenever AI is brought up, so might as well be me this time.
Go read Manna by Marshall Brain [marshallbrain.com] to give you an idea of the hellscape AI under the control of modern capitalism will bring us.
Eh, the show 'Person of Interest' was a great way to see how AI being in total control would end.
Re: (Score:2)
Why? Like most "authors" of his time, it's mostly pointless drivel based on the loose reality of his parents, with a coat of 1960s-70s spaceman horseshit.
Re: Obligatory Mana... (Score:2)
Sadly, while it's generally bad and his "utopian" scenario was exceptionally badly done, his dystopic starter scenario is depressingly believable.
Please....STOP. (Score:5, Insightful)
Re:Please....STOP. (Score:5, Insightful)
Entrepreneur worship. It didn't end with Steve Jobs. Just a new object of adoration, one who is infallible. "He has more money than you, that means he's smarter than you, so shut up you heretic!"
Re:Please....STOP. (Score:4, Insightful)
Semi-morons like Musk and Trump worry me. They are an indicator that the human race has started a massive regression process.
No it won't (Score:5, Insightful)
Elon Musk says a lot of stupid things. The problem here is that current "AI" is not a real AI. It's just a word database, with a large index and a lot of loops and what-if integers.
Re: (Score:3)
The term is "Big Data Machine Learning". What we have is a fusion of the two that was made possible by modern GPU technology.
It's basically a massive inference machine with an extremely large pool of data it learns inferences from. This is inherently NOT AGI, and unlikely to be a path to it as we understand it.
But then, there's a lot of belief among the people who made this current iteration that they can in fact push it into AGI territory, so it's completely possible and even likely that it's not that we're wr
Re: No it won't (Score:2)
AGI would require creativity, which means it needs to have some sort of want and desire.
Re: (Score:2)
AGI would require creativity, which means it needs to have some sort of want and desire.
Even present day AI is creative, which means nothing.
Re: No it won't (Score:5, Insightful)
Present AI is the opposite of creative. It's purely derivative. It cannot create. That's its main weakness.
This is why there's the current problem with Facebook image AI bots and comment bots poisoning each other. Image bots farming engagement originally collected data only from organic posts by humans for their first training session. Afterwards, they were incorporating highly upvoted posts of their own and each other's. And comment bots... upvoted everything.
So the image bots went from "artistic-looking images of Jesus-like figures" to "warp monstrosities from Warhammer 40k with some Jesus-like figures". Because they learned from what worked on the comment bots and derived from it more and more weird and extreme images. And so, it's now the utterly horrible mutated monsters with halos. Because there's no creativity there. Merely derivation.
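For the curious, here's a toy Python sketch of that poisoning loop. The numbers and the "engagement" rule are entirely made up (this is not Facebook's actual pipeline); it just shows how a generator retrained on its own most-upvoted output drifts further from the original human data every round.

# Toy model of the feedback loop described above: a generator retrained on
# its own most "engaging" outputs drifts away from the original human data.
# All numbers and the selection rule are invented for illustration.
import random
import statistics

random.seed(0)

# Generation 0: "organic" human posts, modeled as numbers near 0
# (0 = ordinary image, larger magnitude = weirder, more extreme image).
data = [random.gauss(0.0, 1.0) for _ in range(10_000)]

for generation in range(1, 6):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)

    # The "image bot" generates new posts from what it has learned so far.
    generated = [random.gauss(mu, sigma) for _ in range(10_000)]

    # "Comment bots" upvote everything, but engagement-driven selection
    # keeps only the most extreme posts for the next training round.
    generated.sort(key=abs, reverse=True)
    selected = generated[:2_000]

    # The next generation trains mostly on its own selected output,
    # plus a sample of the previous round's data.
    data = selected + random.sample(data, 2_000)

    print(f"gen {generation}: mean weirdness = {statistics.mean(map(abs, data)):.2f}")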
Re: (Score:2)
Present AI is the opposite of creative. It's purely derivative. It cannot create.
I've observed otherwise.
This is why there's the current problem with Facebook image AI bots and comment bots poisoning each other. Image bots farming engagement originally collected data only from organic posts by humans for their first training session. Afterwards, they were incorporating highly upvoted posts of their own and each other's. And comment bots... upvoted everything.
So the image bots went from "artistic-looking images of Jesus-like figures" to "warp monstrosities from Warhammer 40k with some Jesus-like figures". Because they learned from what worked on the comment bots and derived from it more and more weird and extreme images. And so, it's now the utterly horrible mutated monsters with halos. Because there's no creativity there. Merely derivation.
I've seen both LLMs and diffusion models do creative things with my own eyes.
Re: (Score:2)
Ah, but that's the magic of being able to call on an extremely large dataset. The results appear to be entirely novel. In actuality, they aren't. It's almost like sleight of hand, but probably better described as sleight of data.
That AI generated picture you see, it's all just bits and pieces borrowed from other sources. Perhaps you can argue that it has been creatively assembled, but even that is algorithmically deduced through
Re: (Score:2)
This is incorrect. AI is more than capable of creating conversations that never happened before and images that never existed before.
The part you're missing that allows for unique creations is the prompt supplied by the user.
Re: (Score:2)
>I've seen both LLMs and diffusion models do creative things with my own eyes.
Creativity in both LLMs and diffusion models comes from the prompt. The prompt is supplied by a human. The rest is merely derivation based on the training material.
This is why, when the prompt is supplied by another AI and you incorporate the outcomes into the training materials, you end up with the Facebook scenario I mentioned above. There's nothing creative there either. Merely a much greater amount of errors of logic, due to poisoning applied to th
Re: (Score:2)
Then you simply have not understood what you have observed.
AC's crack me up. You don't even know what I observed and yet here you are commenting about things you know nothing about.
Creativity is nothing special. Evolutionary algorithms, plants, viruses, etc. are able to find creative solutions by random processes. Creativity doesn't require a brain or any intelligence whatsoever. This technology is little different: mixing randomness with context, and with enough tries/luck it comes up with some pretty creative shit. Like a group of monkeys mashing keys until by chan
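A minimal Python sketch of exactly that idea, blind random mutation plus selection with no understanding anywhere, is below (a toy weasel-style search; the target phrase and parameters are arbitrary).

# Random variation plus selection reaching a target phrase.
# Nothing in here "understands" English; it only keeps whichever
# mutant happens to match the target at more positions.
import random
import string

random.seed(1)

TARGET = "methinks it is like a weasel"
ALPHABET = string.ascii_lowercase + " "

def score(candidate: str) -> int:
    # Number of positions that already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent: str, rate: float = 0.05) -> str:
    # Each character has a small chance of being replaced at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while parent != TARGET:
    generation += 1
    # Breed a litter of random mutants and keep the best, parent included.
    children = [mutate(parent) for _ in range(100)]
    parent = max(children + [parent], key=score)

print(f"reached {parent!r} after {generation} generations")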
Re: (Score:2)
This is just you not understanding what is happening.
This is just you not understanding the fundamentals of the technology. LLMs generalize knowledge and are able to apply what they learn.
AI has no imagination or ability to create
You don't need an imagination to be creative. Heck, you don't even need a brain or any intelligence whatsoever. Creativity can arise in complex systems by chance. Look around you, the structure of life itself is chock-full of creativity.
all it can do is derivative works based on what it has access to in its library
What does this even mean? How can one objectively disambiguate between deriving and creating? Throughout all of human history simultaneous i
Re: No it won't (Score:2)
Even present day AI is creative, which means nothing.
It's not even close to being creative. Everything it produces is just a variation of something it has already seen before. Not much different in concept from a blender. The only advantage AI has over human intelligence is that it is generally better at spotting complex patterns. But at the end of the day, patterns just come down to mathematics. In other words, it's basically just a better calculator. Exactly what computers have always been. And while AI is better at spotting patterns, it's much worse at spo
Re: (Score:2)
True
wtf?
Re:No it won't (Score:5, Insightful)
It has no actual intelligence. Like when a lawyer asked for case citations and the AI just made up cases that sounded real. https://www.cnbc.com/2023/06/2... [cnbc.com]
That dumbass even asked the AI if the cases were real or imaginary and the AI of course said they were real.
Re: (Score:2)
Re: (Score:3, Insightful)
True
wtf?
He is correct, there is no intelligence at all in current AI. It is purely about the perception of intelligence through the presentation of information in human-like output. Effectively it is a lot of smart programming to hide the fact that the program has no actual intelligence behind it. The "AI" has no capacity to reason or think at all.
Re: (Score:3)
He is correct, there is no intelligence at all in current AI. It is purely about the perception of intelligence through the presentation of information in human-like output. Effectively it is a lot of smart programming to hide the fact that the program has no actual intelligence behind it. The "AI" has no capacity to reason or think at all.
Well then provide your definition of intelligence or reasoning which a human will pass and an AI will fail.
Re: (Score:2)
Re: (Score:2)
What he really means is that in the next year, AI will surpass *his* intelligence.
Re: (Score:3)
The brain seems to be an analogue structure. It has trillions of connection points, and there's no computer that can do that today.
https://www.euronews.com/cultu... [euronews.com]
https://www.theguardian.com/sc... [theguardian.com] (with the audio from the study)
Re: (Score:3)
>The artificial neural nets do function somewhat similar to human brains.
What? No, no, sorry. We can't even begin to create ones that have the structure and function of brains at an electrical level. What about all those tasty neurotransmitting chemicals in the brain? Are those modeled in artificial neural nets? What do they do? Because medical science is still trying to find that out; perhaps the computer scientists know already! /s
Re: (Score:2)
He's referring to genetic memory. It's not proven, last time I checked the data. Some species of animals might be able to do this, but that is not proven by any data. I don't think humans have evolved this to any great extent, or maybe not at all. But the studies on this subject have been deeply flawed and have not made a lot of progress in recent years.
https://en.wikipedia.org/wiki/... [wikipedia.org]
https://www.theguardian.com/sc... [theguardian.com] (article from 2015 on study flaws that amplified this idea's popularity)
Re: (Score:2)
The parent bird does not teach the young how to build the nest, or how to migrate. Yet it knows.
No one tells us how to digest food. Yet our bodies know how to do it.
This is all information stored within us - somewhere. DNA is the core building block of life so people suspect it's in DNA. Information we know but were never told is stored somewhere in the body.
Re: (Score:2)
Birds learn, mostly from their own mistakes. This is according to a study.
https://baynature.org/article/... [baynature.org]
https://www.bbc.com/news/uk-sc... [bbc.com]
Re: (Score:2)
The first link says there's very little information on bird nest building.
The second link talks about experience honing their technique.
There's no point in the bird lifecycle that it would observe the parent building a nest. And yet it builds a nest for the first time, sometimes quite a complex one.
Another example of ingrained information: Salmon swimming upriver to breed. They don't witness anyone doing it, yet they know to do it.
Re: (Score:2)
If not from the DNA, where then does the knowledge come from?
As opposed to what kind of mindset? What does this mean? We're trying to discover how things work based on evidence.
Elon musk wants CHIPS subsidies (Score:4, Insightful)
Sure Elon (Score:5, Funny)
Miss Cleo has a better prediction rate than you.
Re: Sure Elon (Score:3)
Says more about Elon's intelligence than AI's
Will AI use Linux? (Score:3)
Re: (Score:2)
Re:Will AI use Linux? (Score:4, Funny)
AI will use Lisp Machines. https://en.wikipedia.org/wiki/... [wikipedia.org]
If only Elon said it, I'd laugh... but.. (Score:5, Interesting)
Google's recently departed AI Officer was on an interview and said "I can't say much, because I am under NDA for 2 more years. But I can say that AI will change the world more in the next 10 years than all of the inventions from the last 100 years put together. I've seen what is coming." For it to have that kind of impact, it kinda has to start making major impacts in the next 12 months or so.
Re: (Score:3)
Google's recently departed AI Officer was on an interview and said "I can't say much, because I am under NDA for 2 more years. But I can say that AI will change the world more in the next 10 years than all of the inventions from the last 100 years put together. I've seen what is coming." For it to have that kind of impact, it kinda has to start making major impacts in the next 12 months or so.
He could easily be describing the current state of LLMs.
Like there's probably a few more wins in the current state of the architecture (better tracking of context for large projects). But mostly it's a question of how big a change you think the current tech is going to make in the real world.
It's hard to see many white collar jobs that won't be impacted in some fashion, but I think that would be more akin to the impact of the Internet, rather than everything from the last 100 years.
More likely, if you're Goo
That's not the same thing (Score:3)
That doesn't mean these computer systems and algorithms are smarter than human beings, it just means we've come up with a way to automate things that we didn't use to be able to without a lot more custom code and effort.
But the AI isn't intelligent it's just ru
Re: (Score:2)
But the AI isn't intelligent
What does this mean objectively? How would one go about discerning whether or not something is or is not intelligent?
It doesn't really learn it just detects patterns and repeats them.
What's the difference?
It can't make anything new
Why not?
Re:If only Elon said it, I'd laugh... but.. (Score:5, Insightful)
"Officer" sort of implies above the director level. Which usually means that they don't know what's really going on downstairs but they believe all the internal marketing. Their JOB is to sell the hype. From the inventions from the last 100 years, there are many that changed the world more than AI a decade from now can. The atomic bomb; the microchip and rise of computing; motorized artillery; commercial flight; the internet; etc...
The Haber process, older than 100 years, probably changed the planet more than anything else in a thousand years; it is why we have had a huge population boom, because we can feed that many people.
Re: (Score:2)
The Haber process, older than 100 years, probably changed the planet more than anything else in a thousand years; it is why we have had a huge population boom
and also a big boom, because it makes explosives.
Wait for my AI supercharged Ugly Monkey NFTs (Score:2)
They are going to take the world by storm next year.
Uglier than ever before and mass-produced by my custom AI system that churns them out by the millions for every person on the planet willing to buy them.
Is that Before... (Score:2)
Is that before or after he either dropped a little acid, or was taste-testing the batteries on a new Tesla model?
JoshK.
Training data (Score:2)
I think the GIGO aspect of the training data will be critical to getting a human-level AI.
Without pruning the crap data the current LLMs are being trained on, an AI has no chance.
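A minimal Python sketch of the kind of pruning being suggested: crude length and character heuristics plus exact deduplication. The thresholds are arbitrary, and real pipelines go much further (near-duplicate detection, language ID, learned quality filters), but this is the general shape of it.

# Drop obvious junk and exact duplicates from a corpus before training.
# Thresholds are arbitrary placeholders, not values from any real pipeline.
import hashlib

def clean_corpus(docs):
    seen = set()
    kept = []
    for doc in docs:
        text = doc.strip()
        if len(text) < 200:                  # too short to teach anything
            continue
        letters = sum(ch.isalpha() for ch in text)
        if letters / len(text) < 0.6:        # mostly markup, tables, or noise
            continue
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen:                   # exact duplicate
            continue
        seen.add(digest)
        kept.append(text)
    return kept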
Re: (Score:2)
Today I have trouble seeing how there is enough quality training data and processing power to make a two-orders-of-magnitude improvement towards AGI. My perception is that AI solutions cannot economically learn (train) while doing, in the context of a large data set. Until that changes, I think we will be a long way off.
If it does ... (Score:4, Insightful)
Musk Predicts AI Will Overtake Human Intelligence Next Year
For its sake, it better be smart enough to keep that from us.
Re: (Score:2)
Shhhh! You're giving it ideas!
In other words (Score:2)
Elon Musk is annoyed that the current generation of LLMs have taken up too much of the public's attention, and that people should instead be thinking about what really matters: Elon Musk.
Either way, given Musk's grasp of AI, I'm pretty sure we have at least a couple of decades before we're worried about the chatbots becoming smarter than us.
Reference implementation (Score:2)
When assessing AI, we should keep in mind that the reference implementation is the human brain, which runs on 12 watts.
Re: (Score:3, Interesting)
Musk proves his own statement (Score:4, Funny)
So, he's on ketamine, right? (Score:2)
It's certainly a fascinating drug.
Exceeds Musk's intelligence by next year? (Score:2)
Arguably these days, that's a low bar to reach.
Self judgement (Score:2)
That's only because he judges it by himself. Artificial idiocy may even already have exceeded Musk's "intelligence".
It has to be able to reason... not an LLM (Score:5, Insightful)
For AI to be successful at this level, it has to be able to reason. It can't just be an LLM. Ask ChatGPT all kinds of things and it gives wrong answers because it read some blog that said so.
It MUST be able to reason and deduce things from the data. The Flat Earth Society is probably not a good source of information for orbital dynamics, for example. And the Bible is probably not a good source of data for how to care for the sick (dripping dead birds' blood around your house, Leviticus 14). It has to be able to reason that out. Otherwise we'll have a really dumb AI overlord.
Re: (Score:2)
^^^ funny!
bold prediction (Score:3)
....but I doubt it, unless "smarter than" is defined by how quickly they can look stuff up on the internets.
ChatGPT and its ilk are great language and imagery resources but they're not even faintly intelligent.
Smarter at what ? (Score:2)
Everything or just some domains ?
Will AI be able to:
* write better bug-free programs?
* write better political speeches?
* generate better scam emails?
* write better love songs?
* explain why one love song is better than another?
Maybe some of the above, but not all.
Measuring against himself? (Score:2)
If so, this may have already happened, and definitely when measuring against one other (in)famous person.
He's right (Score:2)
If you're a sheltered dimwit like Musk AI will surpass you next year.
For the rest of humanity? 60-80 years.
I wonder... (Score:2)
Many years ago, computers got faster than humans at Math. I wonder if people back then were trumpeting that they were smarter than humans?
I'm not an Elon hater, but this particular assertion is B.S.
Re:I wonder... (Score:5, Insightful)
His intelligence maybe (Score:2)
Low bar and easy statement. (Score:2)
Take average intelligence, remember that 50% of the population is below that. Is it really impossible for AI in a short time to exceed those below that threshold?
even smart people can make stupid remarks (Score:2)
Musk is smart and capable and extremely successful.
But that's no guarantee that this prediction will pan out.
In a year's time, or two or three, someone should remind him of this erroneous thought.
Also, please remember this Tweet he posted 3 months ago:
https://twitter.com/elonmusk/status/1740913974135459902?lang=en
"I stand by my prediction that, if Tesla executes extremely well over the next 5 years, that the long term value could exceed Apple and Aramco combined"
Right, Sure.
And fully autonomous cars, and th
Translation (Score:2)
Please click.
Musk is a crackpot (Score:2)
Meanwhile, Musk's self-driving cars still can't drive themselves, and Grok is still behind GPT-4, released over a year ago.
Wouldn't surprise me to see Musk affirm MTG's theories about Rothschild solar energy satellites becoming misaligned with their receiving stations "probably next year, within two years".
his idiocy astounds me (Score:2)
Re: (Score:2)
The disquieting thing to realize is we're little more than a collection of environmentally-trained pattern recognition systems that produce output in response to a stimulus.
At some point, AI will be complex enough we'll consider it intelligent.
In the case of Musk, he's probably right. (Score:2)
Shouldn't be difficult (Score:2)
Different perspective (Score:2)
I'd take his comments with a grain of salt but he could be not far off.
As a population, we are arguably getting dumber, so we are helping to meet AI halfway.
Re: i don't feel threatened until... (Score:2)
I would be happy if they could just handle apostrophes. I saw someone say what I had to change to make it work. I forgot what it is. But why? I have never had this problem on any other discussion forum.