Microsoft-OpenAI Deal Defines AGI as $100 Billion Profit Milestone (theinformation.com)
OpenAI CEO Sam Altman is negotiating major changes to the company's $14 billion partnership with Microsoft. The companies have defined artificial general intelligence (AGI) as systems generating $100 billion in profits [non-paywalled source] -- the point at which OpenAI could end certain Microsoft agreements, The Information reports.
According to their contract, AGI means AI that surpasses humans at "most economically valuable work." The talks focus on Microsoft's equity stake, cloud exclusivity, and 20% revenue share as OpenAI aims to convert from nonprofit to for-profit status. The AI developer projects $4 billion in 2024 revenue.
It's not AGI if it needs an expert for handholding (Score:3)
Until it can take an actual project from beginning to end with only some managerial and customer input, it's not AGI.
Of course at that point it could take a project like world domination from beginning to end too.
Re: (Score:3, Interesting)
Did you miss the point? Did they just redefine AGI in terms of sales, so it's just a matter of how much lying they can do to persuade you they have AGI?
Re:It's not AGI if it needs an expert for handhold (Score:5, Insightful)
I think AGI is going to be defined even more narrowly than that. The first guy to own an AI that can win all the games in the stock markets and futures markets (and derivatives, etc.) will wind up with all the imaginary money there is. And I think it will happen within a few hours, before any of us humans can even figure out what is going on. Have a nice day.
No, I don't know the algorithm. I just suspect it will involve tricks of manipulating share prices and futures in ways that profit on both sides of the wagers. The game will probably involve losing lots of money, but making new money faster than it's being lost? Imaginary and virtual money that somehow gets collapsed into reality before anyone can stop it?
Re: (Score:2)
Fortunately, it was proven about 30 years ago that such an algorithm cannot be made. Yes, you can do some level of prediction, but it stops working as soon as you trade based on it. The best we have today is to create hype (like LLMs) and then use that to profiteer.
Re: (Score:2)
It does not need to predict the stock market; it may find a way to manipulate human responses through a series of transactions and PR. Whether legally and openly or illegally and clandestinely, it's not beyond reasonable speculation.
Re: (Score:2)
This AGI has already been invented and has a name - the Congress of the United States - the only financial institution that has shown a consistent, long-term ability to "beat the market".
The algorithm's name is also well known: it is called "insider trading" or "insider information". It also passed that puny mark in profits long, long ago.
With the purchase of the last election from the "free and the brave", the first lady, Elona Trump and her friends will now try to remove most of the few and weak brakes befor
Re: (Score:2)
Did you miss the point? Did they just redefine AGI in terms of sales, so it's just a matter of how much lying they can do to persuade you they have AGI?
No, you missed the point. They can develop something that should be declared AGI and delay determining it as such until they have $100 billion in profits, and profits can be prevented simply by investing more money into hardware and acquisitions. So they can delay declaring AGI indefinitely.
The for-profit subsidiary is redefining terms so that the non-profit violates their charter.
Re: (Score:2)
The whole idea of "declaring" something AGI is a lie. What you need to do is find something to be AGI when you analyze it, and then give extraordinary evidence for that extraordinary claim. Anybody can declare anything to be AGI. That does not make it true.
Re: (Score:2)
Yep. Obviously the whole thing is a blatant, bald-faced lie, nothing else. It serves to obscure that they cannot deliver AGI and instead only have a pretty dumb thing that used to be called "automation", several language-corruption steps back. We now probably have to call the real thing "True AGI" or maybe "non-OpenAI, non-Microsoft AGI". Assholes at work.
Re: (Score:2)
They are just lying by misdirection and corrupting the language in order to make more money. Obviously, whether it is AGI or not has absolutely no relation to how much money it makes. Next we have to use "True AGI" or something like it when we mean machines with actual insight and understanding. Crappy people doing crappy things, all for a buck.
Re: (Score:2)
Until it can take an actual project from beginning to end with only some managerial and customer input, it's not AGI.
You say that as if humans are particularly good at it.
Article paywalled. Why have Slashdot editors? (Score:5, Insightful)
Here's a link to an article you can actually read, with a better summary than Slashdot's too: https://www.msn.com/en-us/tech... [msn.com]
Editors, could you please stop posting articles that are paywalled? Or do the bare minimum of effort to look up the non-paywalled source? Unless you've already been replaced by AI bots, which is unlikely, as the AI would probably do the job better at this point.
Re: (Score:1)
How Dare You?!
The Editors here ARE doing the bare minimum effort.
And getting away with it.
That's lowballing AGI, but it doesn't matter (Score:2)
But that doesn't matter, because if "Open"AI and Microsoft are sticking with just throwing vast GPU resources at glorified predictive text as their route to AGI, they will never get to AGI.
AGI might be reached one day, but not the "Open"AI way.
Which is a good thing, because Musk and Zuckerberg are right: Sam Altman is a dirty, thieving cunt.
Re: (Score:3)
100 billion is more than one order of magnitude out for what AGI will bring in. But that doesn't matter, because if "Open"AI and Microsoft are sticking with just throwing vast GPU resources at glorified predictive text as their route to AGI, they will never get to AGI. AGI might be reached one day, but not the "Open"AI way. Which is a good thing, because Musk and Zuckerberg are right: Sam Altman is a dirty, thieving cunt.
They're redefining what AGI means because they *KNOW* they can't achieve AGI doing what they're doing. So? They'll declare victory at a made up money line, and tell humanity that they won, created AGI, and we're saving the universe by creating more profits. And what's really sad? Based on the number of people who believe all the AI hype, most people will believe them when they declare they've done it. It's sorta funny they're putting this out into the public awareness now, by saying how they're redefining A
Re: (Score:2)
They're redefining what AGI means because they *KNOW* they can't achieve AGI doing what they're doing. So? They'll declare victory at a made up money line, and tell humanity that they won, created AGI, and we're saving the universe by creating more profits. And what's really sad? Based on the number of people who believe all the AI hype, most people will believe them when they declare they've done it.
Alas, this is the kind of Newspeak we have seen before from Microsoft. I thought they had learned their lesson.
Re: (Score:2)
They're redefining what AGI means because they *KNOW* they can't achieve AGI doing what they're doing. So? They'll declare victory at a made up money line, and tell humanity that they won, created AGI, and we're saving the universe by creating more profits. And what's really sad? Based on the number of people who believe all the AI hype, most people will believe them when they declare they've done it.
Alas, this is the kind of Newspeak we have seen before from Microsoft. I thought they had learned their lesson.
They did. They learned that they can do whatever they want, whenever they want to, and nobody will reprimand them for it in a way that actually impacts their profit margins enough to matter. Since the profits continue to come in? They've learned the only lesson they needed to. Capitalism refined down to its purest essence.
Re: (Score:2)
They redefined AGI because they needed to agree on an actual definition in a legal contract. OpenAI obviously wants to have contract-peeping media write stories about how they "achieved AGI", and Microsoft wants $100 billion of revenue, so they don't care what it's called.
Re: (Score:2)
They're redefining what AGI means because they *KNOW* they can't achieve AGI doing what they're doing.
Yep, really crappy people doing really crappy things in order to mislead and then defraud others. How repulsive.
Re: (Score:2)
They're redefining what AGI means because they *KNOW* they can't achieve AGI doing what they're doing.
Yep, really crappy people doing really crappy things in order to mislead and then defraud others. How repulsive.
But there's money to be scammed! That makes it all A-OK based on our society's outlook.
We really need to find some moral compass other than profit potential. Having profit as our only moral framework seems to be leading us into some weird circle of hell that Dante missed, where words don't really mean anything and information is changed based on corporate need.
Re: (Score:2)
But there's money to be scammed! That makes it all A-OK based on our society's outlook.
We really need to find some moral compass other than profit potential. Having profit as our only moral framework seems to be leading us into some weird circle of hell that Dante missed, where words don't really mean anything and information is changed based on corporate need.
Wasn't there some circle where people got fitted with a second anus in their mouth, and whenever they talked they caused the level of shit they were in to rise? Or maybe that was some more modern retelling.
Re: (Score:2)
There is no AGI. Nobody competent knows whether it is even possible. No, Physicalism is not Science. Nobody knows how smart humans do it and until that changes it cannot be used as scientifically valid evidence that AGI is possible.
Actual AGI, if cheap to make and run, would bring in a lot more than $100B initially, and may well then completely crash the economy and lead to global war because nothing works anymore. It would need really slow and careful introduction and probably a generous UBI and other measu
That's not AGI at all (Score:3, Interesting)
A business/marketing milestone is not the definition of general intelligence. Unless you're actually working in AI, where you know AGI is bullshit for the foreseeable future, so you need to entirely redefine a well-understood term to suit your business needs.
What a bunch of scammy crap. AI was my field of study and research. I'm quite certain none of my professors ever said that if my crappy LLM could replace $X worth of people's jobs, I would have achieved the golden AGI milestone.
Re: (Score:2)
A business/marketing milestone is not the definition of general intelligence. Unless you're actually working in AI, where you know AGI is bullshit for the foreseeable future, so you need to entirely redefine a well-understood term to suit your business needs.
What a bunch of scammy crap. AI was my field of study and research. I'm quite certain none of my professors ever said that if my crappy LLM could replace $X worth of people's jobs, I would have achieved the golden AGI milestone.
We redefined what AI means to make it seem more in-reach. Now we're doing the same thing with AGI. I can't wait until we redefine what putting humans on Mars means based on some profit goal! PROGRESS IS AWESOME!
Re: (Score:2)
We already put humans on Mars. Didn't you see the movie "The Martian"? It made $630 million worldwide, so it achieved the profit goal, which is the only thing that really seems to matter to some people.
Re: (Score:2)
What a bunch of scammy crap. AI was my field of study and research. I'm quite certain none of my professors ever said that if my crappy LLM could replace $X worth of people's jobs, I would have achieved the golden AGI milestone.
I thought about it back when, some 30 years ago. But I decided to stay away. Irrational predictions, money-over-truth attitudes, and "researchers" that were completely delusional in their public statements and seemed to actually believe the crap they claimed turned me off completely. I still remember claims like the one by Marvin "the moron" Minsky, where he claimed that once computers had more transistors than humans had brain cells, they would magically turn conscious and intelligent. What utter crap. (Yes, I am aware Min
AGI = number of jobs killed (Score:2)
Hey, I'll redefine flying cars as cars (Score:2)
Then Tether is an AGI (Score:2)
OpenAI and Microsoft's AGI Dream (Score:2)
Martin Luther King, Jr.: "I have a dream that my four little children will one day live in a nation where they will not be judged by the color of their skin but by the content of their character."
OpenAI, Microsoft Execs: "I have a dream that my four little children will one day live in a nation where AGI generates Saudi Aramco and Apple-sized profits [axiomalpha.com] for us."
OMFG. Please tell me that Marx was wrong. (Score:4)
Okay, so we're now considering stealing a database of Internet knowledge, built from the sheer goodwill of the public, and producing a statistically generated result from said database to be general AI?
This confirms that corporate marketers still do a lot of cocaine. This is a hype bubble being amped up to 11 for a broken product that is dependent on the theft of work.
Please remember: This product is fundamentally broken without the labor of people using it. If you use LLMs, you should be getting paid for the training data you provide. They know that the product is broken and are unwilling to pay the nanny to raise it. Childcare is low-cost labor after all. Now it's free!
I used to chuckle about Marxists' comments on "end-stage capitalism." This is end-stage. Steal labor. Generate lower-quality labor from it, to replace it. Pay next to nothing for the labor once you have your machines established and demand low-cost energy. Charge everyone for the low-quality labor.
That is a formula for a massive uprising.
Forget AI killing a bunch of people, people are going to be killing a bunch of people over this. If there ever was a justification for "International Revolution," this is it.
Re: (Score:2)
Indeed. The real problem is that the scam is getting large enough to produce massive economic damage when it collapses. Well, maybe Microsoft and Google will not survive. That would be at least one positive result.
Revenue? (Score:2)
Where is OpenAI making money now? I thought they were losing money per query and that the only thing keeping them afloat was fresh investment?
Re: (Score:3)
They keep afloat by promising even bigger things, based on hot air. They have now pushed the Big Lie (https://en.wikipedia.org/wiki/Big_lie) approach to basically the maximum it can go. I give it maybe 2-3 years before investors finally realize they have been scammed all along and it all comes crashing down.
So, yes, OpenAI does not and never has made a profit, and it very much does not look like they ever will.
One key factor for selling $100bn worth of AI shit (Score:3)
is to not make the entire world population jobless and unable to buy anything.
Re: (Score:3)
No, you see [waves hands], new jobs will magically appear. We didn't have influencers before the industrial revolution, did we?
The truth is, there has always been something we wanted - or would want - given the resources to get it. Every time in the past that technology made a job obsolete, the economy adapted and we got a new level of 'stuff'.
The problem is that we are really close to being able to provide all the 'stuff' anybody could possibly want, and it's only going to take a tiny percentage of the p
Re: (Score:2)
No, you see [waves hands], new jobs will magically appear. We didn't have influencers before the industrial revolution, did we?
Haha, yes, that lie. Personally, I have stopped expecting a massive job loss to happen, though. LLMs have proven much more incapable than I expected, and it seems they are already starting to get worse.
My initial take was that LLMs could possibly be used to automate bureaucracy (something that produces nothing) and hence could have cost a lot of jobs. Obviously, since bureaucracy is not something we need more of and adds no value, there would not have been any replacement jobs. But even that s
And if you scroll down two articles (Score:4, Informative)
Re: (Score:2)
Later stage enshittification at work. We really need to get rid of Microsoft if we want actual advances.
Great, more shameless lies (Score:2)
Obviously, whether something is AGI or not has zero connection to how much profit it generates. As these assholes pretty much know they cannot create AGI (because nobody competent has even the slightest idea how it could be done at this time), they now start lying by redefining the term. As has happened before. Remember, it used to be "automation". Then it became "AI" and now the same, dumb, no insight, no understanding thing gets redefined as AGI.
What is next? Do we now need to use the term "True AGI" beca
Re: (Score:2)
The term 'AGI' is so disconnected from the current state of the art because we can't even agree on what consciousness is, let alone intelligence. Thus the proliferation of 'benchmarks,' so that developers can tout their scores without actually addressing the elephant in the room.
Re: (Score:2)
Indeed. The classical terminology is machines do "automation", humans (well, smart ones) do intelligence. If machines can do what smart humans can do, then that is called artificial intelligence. But since the AI field cannot deliver that, they have turned, time and again, to lies. That gave us "general intelligence" and we probably now need to use "true general intelligence" or maybe "non-OpenAI general intelligence" to mean the real thing.
But how will they spend it? (Score:2)
AGI means AI that surpasses humans at "most economically valuable work."
So they get paid once the world economy collapses as capitalism ends. Brilliant.