Microsoft-OpenAI Deal Defines AGI as $100 Billion Profit Milestone (theinformation.com)

OpenAI CEO Sam Altman is negotiating major changes to the company's $14 billion partnership with Microsoft. The companies have defined artificial general intelligence (AGI) as systems generating $100 billion in profits [non-paywalled source] -- the point at which OpenAI could end certain Microsoft agreements, The Information reports.

According to their contract, AGI means AI that surpasses humans at "most economically valuable work." The talks focus on Microsoft's equity stake, cloud exclusivity, and 20% revenue share as OpenAI aims to convert from nonprofit to for-profit status. The AI developer projects $4 billion in 2024 revenue.

  • by Pinky's Brain ( 1158667 ) on Thursday December 26, 2024 @11:22AM (#65040665)

    Until it can take an actual project from beginning to end with only some managerial and customer input, it's not AGI.

    Of course at that point it could take a project like world domination from beginning to end too.

    • by blue trane ( 110704 ) on Thursday December 26, 2024 @11:44AM (#65040739) Homepage Journal

      Did you miss the point? Did they just redefine AGI in terms of sales, so it's just a matter of how much lying they can do to persuade you they have AGI?

      • by shanen ( 462549 ) on Thursday December 26, 2024 @12:19PM (#65040831) Homepage Journal

        I think AGI is going to be defined even more narrowly than that. First guy to own an AI that can win all the games in stock markets and futures markets (and derivatives, etc.) will wind up with all the imaginary money there is. And I think it will happen in a few hours before any of us humans can even figure out what is going on. Have a nice day.

        No, I don't know the algorithm. I just suspect it will involve tricks of manipulating share prices and futures in ways that profit on both sides of the wagers. The game will probably involve losing lots of money, but making new money faster than it's being lost? Imaginary and virtual money that somehow gets collapsed into reality before anyone can stop it?

        • by gweihir ( 88907 )

          Fortunately, it was proven about 30 years ago that such an algorithm cannot be made. Yes, you can do some level of prediction, but it stops working once you trade based on it. The best we have today is to create hype (like LLMs) and then use that to profiteer.

          • It does not need to predict the stock market; it may find a way to manipulate human responses through a series of transactions and PR. Legally and openly, or illegally and clandestinely, it's not beyond reasonable speculation.

            • by shanen ( 462549 )

              With regard to gweihir's reply, he mostly reminded me of the famous quote about the old expert saying it's impossible... Let me web search a moment...

              Ah, I was quickly led to the first of Arthur C. Clarke's "Three Laws" and an associated webpage. https://en.wikipedia.org/wiki/... [wikipedia.org] Apparently published in 1962, when I was just a wee lad.

              The third law is also quite relevant. Whatever happens next year, I am sure I can safely wager that it's going to look like magic to a lot of people. That could actually be an i

        • This AGI has already been invented and has a name - the Congress of the United States - the only financial institution that has shown a consistent, long-term ability to "beat the market".

          The algorithm's name is also well known; it is called "insider trading" or "insider information". It also passed that puny mark in profits long, long ago.

          With the purchase of the last election from the "free and the brave", the first lady, Elona Trump and her friends will now try to remove most of the few and weak brakes befor

          • by shanen ( 462549 )

            Basically concurrence except that I don't think there's anything artificial about human greed.

        • First guy to own an AI that can win all the games in stock markets and futures markets (and derivatives, etc.) will wind up with all the imaginary money there is.

          I remember someone quipping that every time there's a pattern or reason in the stock market, someone will exploit it, essentially ending that pattern, so as a result the stock market is optimized for unpredictable irrationality.

          Anyway, there are a few tricks more or less guaranteed to make money. One is to shove it all into index trackers, which are es

          • by shanen ( 462549 )

            Partial concurrence, but I should try to clarify that I'm sure there are various other factors. If I were seriously interested in money, then I would know much more about how it is going to work... But the essence is "the scam to end all scams".

            One important factor that wasn't mentioned clearly is price manipulation to control the timing of the trades. The attacking AGI/superintelligence has to be driving prices up and down, but by understanding the oscillations more precisely and in advance of any other mark

      • Did you miss the point? Did they just redefine AGI in terms of sales, so it's just a matter of how much lying they can do to persuade you they have AGI?

        No, you missed the point. They can discover something that should be declared AGI and delay making that determination until they have $100 billion in profits, and profits can be held off simply by investing more money into hardware and acquisitions. So they can delay declaring AGI indefinitely.

        The for-profit subsidiary is redefining terms so that the non-profit violates its charter.

        • by gweihir ( 88907 )

          The whole idea of "declaring" something AGI is a lie. What you need to do is "find" something to be AGI when you analyze it, and then give extraordinary evidence for that extraordinary claim. Anybody can declare anything to be AGI. That does not make it true.

      • by gweihir ( 88907 )

        Yep. Obviously the whole thing is a blatant, bald-faced lie, nothing else. It serves to obscure that they cannot deliver AGI and instead only have a pretty dumb thing that used to be called "automation", several language-corruption steps back. We now probably have to call the real thing "True AGI" or maybe "non-OpenAI, non-Microsoft AGI". Assholes at work.

    • by gweihir ( 88907 )

      They are just lying by misdirection and corrupting the language in order to make more money. Obviously, whether it is AGI or not has absolutely no relation to how much money it makes. Next we have to use "True AGI" or something like it when we mean machines with actual insight and understanding. Crappy people doing crappy things, all for a buck.

    • Until it can take an actual project from beginning to end with only some managerial and customer input, it's not AGI.

      You say that as if humans are particularly good at it.

  • by blahbooboo2 ( 602610 ) on Thursday December 26, 2024 @11:37AM (#65040709)

    Here's a link to an article you can actually read with a better summary than Slashdot too: https://www.msn.com/en-us/tech... [msn.com]

    Editors, could you please stop posting articles that are paywalled? Or do the bare minimum effort to look up the non-paywalled source? Unless you've already been replaced by AI bots, which is unlikely, as the AI would probably do the job better at this point.

    • "Editors, could you please stop posting articles that are paywalled? Or, do the bare minimum effort to look up the non-paywalled source?"

      How Dare You?!

      The Editors here ARE doing the bare minimum effort.
      And getting away with it.
  • 100 billion is more than an order of magnitude off from what AGI would bring in.

    But that doesn't matter, because if "Open"AI and Microsoft are sticking with just throwing vast GPU resources at glorified predictive text as their route to AGI, they will never get to AGI.

    AGI might be reached one day, but not the "Open"AI way.
    Which is a good thing, because Musk and Zuckerberg are right: Sam Altman is a dirty, thieving cunt.
    • 100 billion is more than an order of magnitude off from what AGI would bring in. But that doesn't matter, because if "Open"AI and Microsoft are sticking with just throwing vast GPU resources at glorified predictive text as their route to AGI, they will never get to AGI. AGI might be reached one day, but not the "Open"AI way. Which is a good thing, because Musk and Zuckerberg are right: Sam Altman is a dirty, thieving cunt.

      They're redefining what AGI means because they *KNOW* they can't achieve AGI doing what they're doing. So? They'll declare victory at a made up money line, and tell humanity that they won, created AGI, and we're saving the universe by creating more profits. And what's really sad? Based on the number of people who believe all the AI hype, most people will believe them when they declare they've done it. It's sorta funny they're putting this out into the public awareness now, by saying how they're redefining A

      • They're redefining what AGI means because they *KNOW* they can't achieve AGI doing what they're doing. So? They'll declare victory at a made up money line, and tell humanity that they won, created AGI, and we're saving the universe by creating more profits. And what's really sad? Based on the number of people who believe all the AI hype, most people will believe them when they declare they've done it.

        Alas, this is the kind of Newspeak we have seen before from Microsoft. I thought they had learned their lesson.

        • They're redefining what AGI means because they *KNOW* they can't achieve AGI doing what they're doing. So? They'll declare victory at a made up money line, and tell humanity that they won, created AGI, and we're saving the universe by creating more profits. And what's really sad? Based on the number of people who believe all the AI hype, most people will believe them when they declare they've done it.

          Alas, this is the kind of Newspeak we have seen before from Microsoft. I thought they had learned their lesson.

          They did. They learned that they can do whatever they want, whenever they want to, and nobody will reprimand them for it in a way that actually impacts their profit margins enough to matter. Since the profits continue to come in? They've learned the only lesson they needed to. Capitalism refined down to its purest essence.

          • by Rujiel ( 1632063 )
            Agreed, but the profits actually aren't rolling in yet for their biggest investors, like Microsoft.
      • by ceoyoyo ( 59147 )

        They redefined AGI because they needed to agree on an actual definition in a legal contract. OpenAI obviously wants contract-peeping media to write stories about how they "achieved AGI", and Microsoft wants $100 billion of revenue, so they don't care what it's called.

      • by gweihir ( 88907 )

        They're redefining what AGI means because they *KNOW* they can't achieve AGI doing what they're doing.

        Yep, really crappy people doing really crappy things in order to mislead and then defraud others. How repulsive.

        • They're redefining what AGI means because they *KNOW* they can't achieve AGI doing what they're doing.

          Yep, really crappy people doing really crappy things in order to mislead and then defraud others. How repulsive.

          But there's money to be scammed! That makes it all A-OK based on our society's outlook.

          We really need to find some moral compass other than profit potential. It seems that having profit as our only moral framework is leading us into some weird circle of hell that Dante missed, where words don't really mean anything and information is changed based on corporate need.

          • by gweihir ( 88907 )

            But there's money to be scammed! That makes it all A-OK based on our society's outlook.

            We really need to find some moral compass other than profit potential. It seems that having profit as our only moral framework is leading us into some weird circle of hell that Dante missed, where words don't really mean anything and information is changed based on corporate need.

            Wasn't there some circle where people got fitted with a second anus in their mouth, and whenever they talked they caused the level of shit they were in to rise? Or maybe that was some more modern retelling.

    • by gweihir ( 88907 )

      There is no AGI. Nobody competent knows whether it is even possible. No, Physicalism is not Science. Nobody knows how smart humans do it, and until that changes, human intelligence cannot be used as scientifically valid evidence that AGI is possible.

      Actual AGI, if cheap to make and run, would initially bring in a lot more than $100B and may well then completely crash the economy and lead to global war because nothing works anymore. It would need a really slow and careful introduction and probably a generous UBI and other measu

  • by iAmWaySmarterThanYou ( 10095012 ) on Thursday December 26, 2024 @11:46AM (#65040747)

    A business/marketing milestone is not the definition of general intelligence. Unless you're actually working in AI where you know AGI is bullshit for the foreseeable future so you need to entirely redefine a well understood term to suit your business needs.

    What a bunch of scammy crap. AI was my field of study and research. I'm quite certain none of my professors ever said that if my crappy LLM can replace $X worth of people's jobs that I will have achieved the golden AGI milestone.

    • A business/marketing milestone is not the definition of general intelligence. Unless you're actually working in AI where you know AGI is bullshit for the foreseeable future so you need to entirely redefine a well understood term to suit your business needs.

      What a bunch of scammy crap. AI was my field of study and research. I'm quite certain none of my professors ever said that if my crappy LLM can replace $X worth of people's jobs that I will have achieved the golden AGI milestone.

      We redefined what AI means to make it seem more in-reach. Now we're doing the same thing with AGI. I can't wait until we redefine what putting humans on Mars means based on some profit goal! PROGRESS IS AWESOME!

      • by leptons ( 891340 )
        >I can't wait until we redefine what putting humans on Mars means based on some profit goal!

        We already put humans on Mars. Didn't you see the movie "The Martian"? It made $630 million worldwide, so it achieved the profit goal, which is the only thing that really seems to matter to some people.
    • by gweihir ( 88907 )

      What a bunch of scammy crap. AI was my field of study and research. I'm quite certain none of my professors ever said that if my crappy LLM can replace $X worth of people's jobs that I will have achieved the golden AGI milestone.

      I thought about it back when, some 30 years ago. But I decided to stay away. Irrational predictions, money-over-truth, and "researchers" who were completely delusional in their public statements and seemed to actually believe the crap they claimed turned me off completely. I still remember claims like the one by Marvin "the moron" Minsky, where he claimed that once computers had more transistors than humans had brain cells, they would magically turn conscious and intelligent. What utter crap. (Yes, I am aware Min

    • This definition of AGI actually makes some sense; it's the number of people replaced by the software, which does indeed seem to be a measure of how "intelligent" the software is. As a goal it is horrendous though, encouraging taking jobs away from people. If the ultimate goal is achieved, there will be no one left who can afford to buy products, and the whole capitalist system collapses because there will be no consumers. Can AGI replace consumers/customers? Maybe we need some AI that filters out anything
  • ...as cars that make more than 50% of the industry cap. See how easy that was! We'll have flying cars by 2030 if you just invest in my company so we can hit that mark!
  • It has over $130 billion in made up "profits", all of it hallucinated in some blockchain.
  • Martin Luther King, Jr.: "I have a dream that my four little children will one day live in a nation where they will not be judged by the color of their skin but by the content of their character."
     
    OpenAI, Microsoft Execs: "I have a dream that my four little children will one day live in a nation where AGI generates Saudi Aramco and Apple-sized profits [axiomalpha.com] for us."

  • by Torodung ( 31985 ) on Thursday December 26, 2024 @12:54PM (#65040899) Journal

    Okay, so stealing a database of Internet knowledge from the sheer good will of the public and producing a statistically generated result from said database is now considered general AI?

    This confirms that corporate marketers still do a lot of cocaine. This is a hype bubble being amped up to 11 for a broken product that is dependent on the theft of work.

    Please remember: This product is fundamentally broken without the labor of people using it. If you use LLMs, you should be getting paid for the training data you provide. They know that the product is broken and are unwilling to pay the nanny to raise it. Childcare is low-cost labor after all. Now it's free!

    I used to chuckle about Marxists' comments on "end-stage capitalism." This is end-stage. Steal labor. Generate lower-quality labor from it to replace it. Pay next to nothing for that labor once you have your machines established, and demand low-cost energy. Charge everyone for the low-quality labor.

    That is a formula for a massive uprising.

    Forget AI killing a bunch of people, people are going to be killing a bunch of people over this. If there ever was a justification for "International Revolution," this is it.

    • by gweihir ( 88907 )

      Indeed. The real problem is that the scam is getting large enough to produce massive economic damage when it collapses. Well, maybe Microsoft and Google will not survive. That would be at least one positive result.

    • Never mind, I'm sure the 1%ers will give their security teams insurance.
    • They will tell you that complaints are for woke people who like DEI, which will magically make AGI the New Deal. Any objections? Fine, you're a communist traitor and will be shot.

      I'm happy I already have a purple clearance. And keep my laser handy.

  • Where is OpenAI making money now? I thought they were losing money per query and the only thing keeping them afloat was fresh investment?

    • by gweihir ( 88907 )

      They keep afloat by promising even bigger things, based on hot air. They have now pushed the Big Lie (https://en.wikipedia.org/wiki/Big_lie) approach about as far as it can go. I give it maybe 2-3 years before investors finally realize they have been scammed all along and it all comes crashing down.

      So, yes, OpenAI does not and never has made a profit, and it very much does not look like they ever will.

  • is to not make the entire world population jobless and unable to buy anything.

    • No, you see [waves hands], new jobs will magically appear. We didn't have influencers before the industrial revolution, did we?

      The truth is, there has always been something we wanted - or would want - given the resources to get it. Every time in the past that technology made a job obsolete, the economy adapted and we got a new level of 'stuff'.

      The problem is that we are really close to being able to provide all the 'stuff' anybody could possibly want, and it's only going to take a tiny percentage of the p

      • by gweihir ( 88907 )

        No, you see [waves hands], new jobs will magically appear. We didn't have influencers before the industrial revolution, did we?

        Haha, yes, that lie. Personally, I have stopped expecting a massive job loss to happen, though. LLMs have proven much more incapable than I expected, and it seems they are already starting to get worse.

        My initial take was that LLMs could possibly be used to automate bureaucracy (something that produces nothing) and hence could have cost a lot of jobs. Obviously, since bureaucracy is not something we need more of and adds no value, there would not have been any replacement jobs. But even that s

  • by JoshuaZ ( 1134087 ) on Thursday December 26, 2024 @01:17PM (#65040995) Homepage
    And if you scroll down two articles more on the front page, there's an article about how Microsoft is forcing some people to have AI bundled with software and raising their fees. So apparently part of that $100 billion is using AI as an excuse to extort more from people.
    • by gweihir ( 88907 )

      Later stage enshittification at work. We really need to get rid of Microsoft if we want actual advances.

  • Obviously, whether something is AGI or not has zero connection to how much profit it generates. As these assholes pretty much know they cannot create AGI (because nobody competent has even the slightest idea how it could be done at this time), they now start lying by redefining the term. As has happened before. Remember, it used to be "automation". Then it became "AI", and now the same dumb, no-insight, no-understanding thing gets redefined as AGI.

    What is next? Do we now need to use the term "True AGI" beca

    • by marcle ( 1575627 )

      The term 'AGI' is so disconnected from the current state of the art because we can't even agree on what consciousness is, let alone intelligence. Thus the proliferation of 'benchmarks,' so that developers can tout their scores without actually addressing the elephant in the room.

      • by gweihir ( 88907 )

        Indeed. The classical terminology is machines do "automation", humans (well, smart ones) do intelligence. If machines can do what smart humans can do, then that is called artificial intelligence. But since the AI field cannot deliver that, they have turned, time and again, to lies. That gave us "general intelligence" and we probably now need to use "true general intelligence" or maybe "non-OpenAI general intelligence" to mean the real thing.

  • AGI means AI that surpasses humans at "most economically valuable work."

    So they get paid once the world economy collapses as capitalism ends. Brilliant.
