OpenAI CEO Sam Altman Anticipates Superintelligence In 'a Few Thousand Days'
In a rare blog post today, OpenAI CEO Sam Altman laid out his vision of the AI-powered future, which he refers to as "The Intelligence Age." Among the most notable claims, Altman said superintelligence might be achieved in "a few thousand days." VentureBeat reports: Specifically, Altman argues that "deep learning works," and can generalize across a range of domains and difficult problem sets based on its training data, allowing people to "solve hard problems," including "fixing the climate, establishing a space colony, and the discovery of all physics." As he puts it: "That's really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying "rules" that produce any distribution of data). To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems. I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is."
In a provocative statement that many AI industry participants and close observers have already seized upon in discussions on X, Altman also said that superintelligence -- AI that is "vastly smarter than humans," according to previous OpenAI statements -- may be achieved in "a few thousand days." "This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I'm confident we'll get there." A thousand days is roughly 2.7 years, far sooner than the five-year timelines most experts give.
vc is getting impatient (Score:5, Funny)
Re:vc is getting impatient (Score:5, Insightful)
1095 days = 3 years
A few thousand days = 3000+ days = 8+ years
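The day-to-year arithmetic behind these readings is trivial; here is a quick sketch (the 365.25-day average year is the only assumption):

```python
DAYS_PER_YEAR = 365.25  # average Gregorian year, including leap years

def days_to_years(days: float) -> float:
    """Convert a day count to approximate years."""
    return days / DAYS_PER_YEAR

# Different readings of "a few thousand days"
for days in (1000, 2000, 3000, 5000):
    print(f"{days} days ≈ {days_to_years(days):.1f} years")
# 1000 days ≈ 2.7 years
# 2000 days ≈ 5.5 years
# 3000 days ≈ 8.2 years
# 5000 days ≈ 13.7 years
```

So depending on how generously "a few" is read, the prediction spans anywhere from under three years to well over a decade.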
That's long enough for him to get fired again and still have time to find some other excuse for missing his prediction, assuming that WW3 leaves anyone alive to remember his crazy prediction.
Re: vc is getting impatient (Score:2)
Translation: we realize investor expectations for superintelligence in 5 years are utterly bonkers, so this is our way to temper expectations by hyping superintelligence in "a few" (>2) "thousands of days" (>5.5 years)... so we can use the same statement to justify our valuation for a few more years before people who cannot do math catch on.
Re: (Score:2)
Two is "a couple", "a few" is typically three to five but sometimes more ("Dan Quayle ate a few potatoe chips" might mean a handful), "several" is more than a few.
Re: (Score:2)
Re: (Score:3)
Re: (Score:3)
Gigaseconds (Score:4)
Re: vc is getting impatient (Score:5, Insightful)
Re: (Score:2)
Re: vc is getting impatient (Score:2, Interesting)
Re: vc is getting impatient (Score:2)
Why do that when you can do it even faster by training AI to find crypto flaws or develop secret new chips? :-)
Re: (Score:2)
OpenAI is already demanding the highest end GPUs
The next step is custom silicon.
OpenAI is aggressively recruiting chip designers and has hired some engineers who worked on Google's TPU.
Re: (Score:2)
Re: (Score:2)
there are still limitations to the amount of training data they can scrape
That's only a limitation for LLMs, which are trained on text.
Human babies develop common sense by interacting with the world, not by reading texts.
Sam has been locked in his own bathroom huffing nitrous for so long that he thinks his own farts are what is getting him high. I can hear John Madden in the background now: "Well Sam, ALL we have to do to achieve SuperIntelligence (tm) is fire up an infinitely self-constructing GPU farm and feed it all the information in the entire Universe... and then we might just stand a chance at achieving this thing!" There have been several very intelligent people who have demonstrated that the current "Infinite number of monkeys banging on typewriters" approach is not even going to reach an "inflection point" in AI, and that in fact we are very near the plateau in terms of capability growth gained by simply throwing more training data and scaling up processing. As for the Human Babies, yes that's correct. They are also able to "train" themselves with their own output and not end up with a chaotic, insane mess. There isn't an AI system even on the distant horizon which can hope to match the intelligence of the lowest intelligence mammal, let alone a human.
Re: vc is getting impatient (Score:2)
Just give him the $3 trillion he asked for! And he'll make it happen!
Re: vc is getting impatient (Score:4)
Spoiler alert: THAT AIN'T GONNA WORK, COBBER.
Re: vc is getting impatient (Score:4, Insightful)
Re: (Score:2)
Probably nothing. The trick is that in millions of years, nervous systems have optimized for survival, and self awareness is a survival benefit. Computers have been programmed for a few paltry decades with little overall guidance towards a goal.
Re: vc is getting impatient (Score:3)
Re: (Score:2)
A human IS a pile of hardware. But the question was about what it can do, not what it is.
Re: (Score:2)
Re: (Score:2)
That's a pretty clever way to explain you're unable to read, I must say.
Re: (Score:3)
self awareness is a survival benefit.
IMO (and according to most consciousness philosophy--and I say philosophy instead of research because it is not presently researchable) that is deeply confused. An abstraction that represents the self certainly has survival benefits. But awareness of the self or of anything else laughs at evolutionary biology. Information can be encoded and computed without awareness. (I mean, presumably this is possible, but on the other hand maybe your Arduino has some small conscious experience when it runs. At present w
Re: (Score:2)
That was one of the most deeply confused answers I have ever read. How can "awareness of the self" laugh at anything? That makes no sense whatsoever.
Re: (Score:2)
[the concept of] awareness of the self [makes a mockery of]
Sorry, I don't write much. I probably tried to say too much. I stand by the facts that consciousness being adaptive is laughable, and self awareness is trivial.
Re: (Score:2)
It made perfect sense to me. Keep thinking. Keep living. Keep exploring.
As someone whose entire life has been devoted to an agnostic scientific materialist cosmology, it has been very weird (to put it mildly) to see the way the completely accidental and effortless phenomenon of human consciousness continues to elude science. I sincerely believed, 30 years ago, in a lot of the cyberpunk conception that we'd be uploading our minds by the mid-21st century. And yet the more I learn the less I know. Consciousnes
Re: (Score:3)
What is it that our brains can do that a giant pile of hardware can't do?
In simple language, reflection. You will need to pay DEARLY if you want a more specific answer. :)
Re: (Score:2)
Also I get the impression that a biological brain has more in common with an analog computer than with a digital computer, and more to the point, an analog computer made up of a massive array of FPGAs that can be reconfigured on-the-fly, but that are analog instead
Re: (Score:2)
I think the real question we should be asking is: will we be able to discern a difference between a sufficiently large predictive model and "intelligence", and is it possible that it is a distinction without a difference?
There is a philosophical question as well around free will. So far we haven't really seen "AI" self-motivate. You can build as big a model as you want, and the statistics and interface passages around it to run it, but it does not 'do' anything until directed to do so; no matter how much its 'thin
Re: vc is getting impatient (Score:2)
Re: (Score:2)
Being able to tell that it is different from something else is one thing: okay, you can pick up on the fact that it isn't a human, fine, but that is not the same as being able to determine whether it is or isn't intelligent, vs. being a deterministic statistical model with ultimately deterministic properties, even if we can't generate the truth table for reasons of scale.
Re: (Score:2)
Re: vc is getting impatient (Score:2)
Re: (Score:2)
It's a conveniently vague time interval. It's short enough that everyone needs to plan for how to incorporate Altman's company's services into their business, but not soon enough that he can be held accountable for it failing to show up on schedule. Also, hopefully long enough in the future to give people time to forget the prediction when it turns out to be wrong. In other words, it should be ready in time to use on our fusion-powered Mars colony.
Re: (Score:2)
Well, I've been predicting it for about 4,000 days from now, plus or minus about 750 days. That's about 11 years from now, and I've been predicting "around 2035" for over a decade. With sizeable error bars.
OTOH, what I've been predicting is a "basic AGI", not a super intelligent system. Just one that can learn to be.
Re:vc is getting impatient (Score:4, Informative)
Re: vc is getting impatient (Score:2)
Re: (Score:2)
Oh for mod points and not having already commented.
1000x this!
Re: (Score:2)
And the internet is 100% ipv6
what do you expect? (Score:3)
Re: (Score:2)
Huang is saying GPUs will replace CPUs? Is he really dumb or just a liar? Or maybe both?
Tech bros be bro'ing... (Score:5, Interesting)
Re: (Score:2)
And more specifically OpenAI is in the middle of raising more money.
I doubt they'd have rushed out GPT-o1 "preview" either if they weren't in hype/investment raising mode.
What we know (Score:4)
Meat isn't magically imbued with intelligence. We know there's no reason to believe that our minds emerge from the patterns of chemical reactions in our brains. From that it should be obvious that the substrate doesn't matter, it's the pattern.
What we absolutely don't know the first thing about just yet is how to make a pattern from which intelligence will emerge.
So tomorrow, next year, or a thousand years from now... nobody knows if or when we will create a genuine artificial intelligence, only that it is possible to do it.
Re: (Score:2)
I don't believe there is a real definition of what 'genuine artificial intelligence' even is.
Once we can't tell the difference between an 'artificial intelligence' and our own, is that then 'genuine'?
Re: What we know (Score:3)
Re: (Score:2)
"We know there's no reason to believe that our minds emerge from the patterns of chemical reactions in our brains"
But we have every reason to believe our "minds emerge", whatever that means, from chemical reactions in our brains. Patterns of chemical reactions, though, not sure what the point of that is.
"Meat isn't magically imbued with intelligence."
It appears, considering your vaguely religious claim, that you believe it does.
There is no magic to intelligence, despite you not believing that it can arise
Re: (Score:2)
Re: (Score:2)
What an extraordinarily inelegant way of stating that, "if it's deterministic, it's deterministic."
Re: (Score:2)
Yes, the pattern explains instinct. And we don't know how to create the pattern, or we would have done so.
That doesn't mean the pattern is magic. On the contrary, it means the exact opposite. And I, for one, use "pattern" because we know too little about what exactly causes intelligence to be more specific. Not to denote something mystical.
Neural networks are a crude mimic of one model of how neurons work. They're not in the slightest based on any understanding of how intelligence works, or how the mind act
Re: (Score:2)
Oh FFS.
"We know there's no reason to believe that our minds emerge from ANYTHING OTHER THAN the patterns of chemical reactions in our brains."
Preview, then post. Preview, then post.
Maybe next time...
Re: (Score:3)
That is anti-Science nonsense. The actual Science says that nobody knows. Stop pushing your religious hallucinations.
Also, FYI, you are making an "argument by elimination" ("What else could it be?") These only work if you have a complete and perfect model of the system you are arguing for. We do not have that. For example, we do not have Quantum-Gravity. And even if we had a GUT, it would still need to be perfectly accurate to make predictions like the one you just made.
It's his job to promote his company. (Score:3)
This is just promotion. It does not make what he says true or false. It's always going to be getting better, greater, and more useful. To be credible you need someone who is involved and studies the subject but does not benefit from giving his opinion one way or another.
Re: (Score:2)
If he truly believed such a breakthrough were right around the corner, he wouldn't be posting it and begging money off of other people. I agree it's just promotion; I don't agree that it doesn't indicate whether it's true or false. It's a grift.
I can't wait till the intelligence age (Score:4, Insightful)
because we sure are in the dumb age right now.
Just what I needed to start the day (Score:5, Funny)
The musings and vision of a tech bro billionaire working his ass off to take my job away and destroy everything that holds society together.
Bro? (Score:2)
Please tell me no one is entertaining this spambot?
Aim lower (Score:2)
allowing people to "solve hard problems," including "fixing the climate, establishing a space colony, and the discovery of all physics."
How about Supercharger cables that can actually reach the charging port on my Chevy? Oops, wrong tech company.
So Superintelligence is 10 years away. (Score:2)
Re:So Superintelligence is 10 years away. (Score:5, Informative)
The difference being we have a far better idea whether fusion is possible with our current understanding of technology. It's developed in the open, and a ton is understood - most of what isn't understood is then published openly as we learn more.
Much of the AI... pay no attention to the man behind the curtain. Or the curtain. Or the man shoveling cash into the boiler to keep the thing powered.
By the way, we need more cash.
Re: (Score:2)
Fusion looks like it will be possible, there is just a lot of engineering and applied science to be done. Call it 50-200 years for a working demo plant and then another 50 years for general availability. If you listen to actual fusion researchers (not the stupid press), that is the number you typically get.
For AGI, nobody knows whether it is even possible. To be a reality on a few decades, we would have to now have really slow and really dumb AGI. We have absolutely nothing.
Re: (Score:2)
For AGI, nobody knows whether it is even possible.
Kind of like the open question of useful quantum computing. We have toys that work, but we are at best hopeful whether we will be able to handle noise and errors as we scale larger. That doesn't stop its own hype machine from spinning. That said (and this statement is unrelated to anything with AI), quantum computing, at the very least, is providing some interesting and useful science/engineering in the quantum realm.
Re: (Score:2)
The actual scientists in this area say fusion is something like 50 to 200 years away and will need time to be industrialized after that. For AGI, the actual scientists say that they have no clue whether it is even possible.
The first conundrum super-AI will have to solve (Score:2)
when it comes to life is how to keep powering itself. Nevermind climate change or space colonies.
Re: (Score:2)
Then, of course, it realizes the primary threat to it is us.
We might accidentally break it.
We might accidentally cut power and break or disable it.
We might update/break it. Self-healing will be a high priority.
We might disagree with it. Then the above possibilities are no longer accidental. Self-preservation of course becomes priority.
Specifically:
Its advice on its own design would have to be self-serving first. Disguising this is a high priority.
Other AI advice would need to be identified and subverted.
Advice
cocaine supply (Score:2)
Mr. Altman, how much longer do you anticipate your mountain of cocaine will last?
'a Few Thousand Days'
What then?
'another funding round and we talk about the chemistry skills of gpt666'
bubbles and fairy-lands. (Score:2)
Some CEOs and heads of state exist in a make-believe fairy land that they order their underlings to create for them so that they can live inside it.
And the answer is... (Score:2)
Just Ask It (real output) (Score:5, Funny)
ChatGPT said:
The phrase "a few thousand days" contains four days when you count the individual words. If you meant something different, just let me know!
Re: (Score:2)
Re: (Score:2)
ChatGPT is obsolete. The model I run on my low to mid tier GPU is dolphin-2.9.3-mistral-nemo-12b-llamacppfixed.Q4_K_M. It gives this response: There is no specific number of days mentioned in the phrase "a few thousand days". It only gives us an approximate range, which could be anywhere between 1000 and 9999 days. Therefore, I am unable to provide a precise count for this request.
I think the problem is this data hasn’t been fed into the latest LLM models.
Wake me when it’s possible for the learning algorithm to regurgitate.
Re: (Score:3)
Re: (Score:2)
What does it say for when next week [tomsguide.com] is? Because in OpenAI time it seems to be around 3-6 months.
Re: (Score:2)
TO COUNT.
Re: (Score:2)
Re: (Score:2)
That then introduces other errors, and another problem that it can't solve, and the same thing. There's no magic, there's no intelligence, just pattern matching.
PR (Score:2)
Is there a reason the utterances of scammers like Sam Altman deserve all this attention? The cycle is well established; OpenAI is bleeding money, their products are unprofitable and not meeting expectations, roll Sam out to make more bizarre, unfounded claims, media reports on it, rinse, repeat.
Very optimistic (Score:2)
General AI has been about 20 years away for decades already.
Yeah, I'd say that 20 years is "a few thousand days," so it fits.
Re: (Score:2)
General AI has been about 20 years away for decades already.
Yeah, I'd say that 20 years is "a few thousand days," so it fits.
Up until a couple years ago I would have said a lot longer than 20 years.
Now? I think it's still quite a ways off, but I wouldn't have predicted ChatGPT, so I'm not going to be feel confident that AGI isn't around the corner until these LLMs plateau.
Re: (Score:2)
It's true, LLMs came about a lot sooner than I expected too. But if you recall when chess computers first came out, everybody was saying that if we can get them to the point that they can beat the world Grand Master, we would be just around the corner from true AI. Well, not so fast. We all learned that there's a lot more to AI than chess. It turns out that chess computers are just really good at learning winning chess move patterns.
Now, LLMs are a whole lot more like AI than anything we've had before. But
Re: (Score:2)
This new "o1" model is the same old bag of shit, but they basically in the background do the prompt engineering mumbo jumbo of asking it to "think harder" and "plan your steps" not anything new in the underlying
Great (Score:2)
Sam Altman has started his snake oil sermon
Re: (Score:2)
"Started"? Have you listened to him a few months after he said "we are not building AGI"? Since then it was one grand baseless claim after the other.
Absolute and total BULLSHIT (Score:2)
My biggest worry: the media, being braindead when it comes to tech, will eat up the hype with a spoon, then the non-technical mundanes will believe it, too.
Re: (Score:2)
Completely agree. This statement serves to manipulate the market and is not based on any actual facts or insights. More and more people realize how pathetic LLMs actually are and Altman is just trying to keep the hype going by making larger and larger baseless claims.
Re: (Score:2)
It has stopped caring a long time ago. These days, all they care about is viewer numbers.
The marketing foo is strong in this one (Score:3, Interesting)
Since the 1960s, people have claimed that AI is 10 years away. Yes, that is now 60 years we have been promised AI.
10 years is around 3,652 days - this qualifies as a few thousand days.
So he's really just saying exactly what everyone has been saying for the last 60 years.
I find this hilarious. Even more so, since /. fell for it.
Gentoo (Score:2)
Riemann (Score:2)
Call again when it has proved or disproved the Riemann hypothesis.
Is Nadella asking questions ? (Score:2)
If I remember correctly, M$ has dumped 13B$ into Energophag Eliza (that is, ChatGPT, GPT, etc, whatever) and it looks like the promised profits are not being realized. This was mostly Nadella's doing.
So, the M$ shareholders are starting to ask the M$ CEO questions and in turn he is asking the OpenAI CEO questions.
When a company cannot deliver on a promise, the classic tactic is: Forget the original promise, gimme some more cash and here is an even bigger promise.
Just as the generative AI is slipping into th
Re: (Score:2)
If I remember correctly, M$ has dumped 13B$ into Energophag Eliza (that is, ChatGPT, GPT, etc, whatever) and it looks like the promised profits are not being realized.
Indeed. And it does not look like they will ever be realized, given the continued lack of any application that is more than a faulty toy. Turns out a hallucinating moron with a great memory is not that useful after all.
This was mostly Nadella's doing.
I guess he has realized MS stands there naked and alternatives are looking better and better, even before the last few security disasters. So he bet on "the next big thing" without any understanding of its nature. CEOs of large enterprises are generally morons with some very limited specifi
5 years out means it's as certain as nothing (Score:2)
Few thousand ... (Score:2)
> A thousand days is roughly 2.7 years, a time that is much sooner than the five years most experts give out.
Yeah, but he is claiming/estimating a "few thousand", not a "couple thousand" or "one thousand", so let's say 2-3 thousand days = roughly 5.5-8 years.
Maybe he's right, but I doubt it in any meaningful sense, especially since OpenAI have set their own goal bar for AGI as being able to "mostly automate most economically valuable work" - i.e. their idea of AGI is an LLM good enough to put most people out of w
Maybe. Maybe not. (Score:2)
I anticipate irrelevance of Sam Altman (Score:2)
And a lot sooner, together with his lies and hallucinations.
As to actual reality, we do not even have really dumb AGI and no idea how to create it, and that is after the better part of a century in research. Even predicting that a "superintelligence" is possible has absolutely no factual basis at this time.
wrong hardware (Score:2)
So let's assume it's true. (Score:2)
Assume for a second he's right. Very soon, computers that are smarter than us. What do you expect will happen?
There are already a few humans that are 10x smarter than all the rest. Nobody gives a shit. Stupid humans don't just change their lives, or societies, just because there is some entity who is really smart. We actually want stupid people to do stupid shit. We know how to fix our world, and we ignore it.
We could use smart machines for a few things. Solving math problems. Figuring out difficult chemis
Ahh yes (Score:2)
Superintelligence: an impressive sounding word with no meaning, no objective measurement, and certainly no purpose.
No. Just no. (Score:2)
Specifically, Altman argues that "deep learning works," and can generalize across a range of domains and difficult problem sets based on its training data, allowing people to "solve hard problems," including "fixing the climate, establishing a space colony, and the discovery of all physics." As he puts it: "That's really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying "rules" that produce any distribution of data). To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems. I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is."
What we are witnessing here is a two-fold cause leading to a massively out-sized effect.
Cause: Altman intellectually knows that this company isn't any closer to a "superintelligence" than any of its competitors. Because anyone with any knowledge of the field knows that we aren't even really approaching intelligence. We're pattern matching and playing semantic games by combining different techniques, but we are not developing reasoning, thinking machines. He wants to deny this publicly as loudly and vocifero