The Microsoft-OpenAI Files
Longtime Slashdot reader theodp writes: GeekWire takes a look at AI's defining alliance in The Microsoft-OpenAI Files, an epic story drawn from 200+ documents, many made public Friday in Elon Musk's ongoing suit accusing OpenAI and its CEO Sam Altman of abandoning the nonprofit mission (Microsoft is also a defendant). Musk, who was an OpenAI co-founder, is seeking up to $134 billion in damages. "Previously undisclosed emails, messages, slide decks, reports, and deposition transcripts reveal how Microsoft pursued, rebuffed, and backed OpenAI at various moments over the past decade, ultimately shaping the course of the lab that launched the generative AI era," reports GeekWire. "The latest round of documents, filed as exhibits in Musk's lawsuit, [...] show how Nadella and Microsoft's senior leadership team rally in a crisis, maneuver against rivals such as Google and Amazon, and talk about deals in private."
Even though Microsoft didn't have a seat on the OpenAI board, text messages between Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman following Altman's firing as CEO in Nov. 2023 (news of which sent Microsoft's stock plummeting), revealed in the latest filings, show just how influential Microsoft was. A day after Altman's firing, Nadella sent Altman a detailed message from Brad Smith, Microsoft's president and top lawyer, explaining that Microsoft had created a new subsidiary called Microsoft RAI (Responsible Artificial Intelligence) Inc. from scratch -- legal work done, papers ready to file as soon as the WA Secretary of State opened Monday morning -- and was ready to capitalize and operationalize it to "support Sam in whatever way is needed," including absorbing the OpenAI team at a calculated cost of roughly $25 billion. (Altman's reply: "kk"). Just days later, as he planned his return as CEO to the now-reeling-from-Microsoft-punches nonprofit, Altman joined Microsoft's Nadella, Smith, and CTO Kevin Scott in a text messaging thread in which the four vetted prospective board members to replace those who had ousted Altman. Later that night, OpenAI announced Altman's return with the newly constituted board.
If you like stories with happy Microsoft endings, as part of an agreement clearing the way for OpenAI to restructure as a for-profit business, Microsoft in October received a 27% ownership stake in OpenAI worth approximately $135 billion and retains access to the AI startup's technology until 2032, including models that achieve AGI.
happy Microsoft endings (Score:1)
Outrageous claims (Score:1)
Musk knows how to act like a supremely confident bully. He's claiming that he, many years ago, imbued OpenAI with his knowledge and expertise of AI that constituted the value of OpenAI from the very beginning and that the subsequent money, effort, and expertise from others were essentially immaterial. That's how he's hoping to convince the court that his $40 million is now worth $134 billion.
Of course, that logic is laughable. There are basically no experts that would agree with the idea that Musk is an AI genius, then or now. So, why isn't this a waste of time on Musk's part? Well, there's this deal of a jury trial, where Musk just needs to convince 9 out of 12 jurors that he's right. His lawyers get to try to stack the jury with as many ignorant and pliable jurors as possible. And in the very likely result of a hung jury, he gets to try over and over again, which allows him to press for a settlement. He really deserves no more than the $40 million plus interest that he put in, but any settlement will likely be the same as hitting a venture capital jackpot.
Re: (Score:2)
Re: (Score:2)
Musk knows how to act like a supremely confident bully. He's claiming that he, many years ago, imbued OpenAI with his knowledge and expertise of AI that constituted the value of OpenAI from the very beginning and that the subsequent money, effort, and expertise from others were essentially immaterial. That's how he's hoping to convince the court that his $40 million is now worth $134 billion.
Of course, that logic is laughable. There are basically no experts that would agree with the idea that Musk is an AI genius, then or now. So, why isn't this a waste of time on Musk's part? Well, there's this deal of a jury trial, where Musk just needs to convince 9 out of 12 jurors that he's right. His lawyers get to try to stack the jury with as many ignorant and pliable jurors as possible. And in the very likely result of a hung jury, he gets to try over and over again, which allows him to press for a settlement. He really deserves no more than the $40 million plus interest that he put in, but any settlement will likely be the same as hitting a venture capital jackpot.
Lawyers would most likely try to argue that the company wouldn't exist to have advancement if not for his initial outlay and try to blah blah blah their way into making the legal powers that be believe investment encompasses all value. It's a completely idiotic tactic, and one that should blow up in their faces if we lived in a just world, but since we live in this world, it just might work.
You should be cheering Musk here (Score:3)
It doesn't matter whether it was $5, $40M or $40B. There was a serious contract in place between OpenAI leadership and Musk. The leadership broke it with the connivance of Microsoft.
We should all be hoping Musk prevails if for no other reason than SV billionaires like Altman and Nadella need to be aggressively beaten into submission by the court and their peers when they break contracts or break the law.
Even if you hate Musk, you should hope that Altman and Microsoft lose hard because that's precisely what wo
Re: (Score:2)
What's the contract? Perhaps fraud can be shown in court, but the only reason Musk cares about this is that he wants money, and unfortunately there's no contract that talks about money. He needs to show in court that his expertise at the very beginning before the other much larger funding and before the acknowledged experts arrived already definitively determined OpenAI's destined success. That's the key point in his argument. And that's the laughable part.
It's obvious that he didn't think OpenAI would go o
Question (Score:4, Insightful)
Exactly how do they envision an autocomplete gaining sentience?
Re: (Score:3)
That's the confidence game part of the big AI swindle. See? Microsoft is so sure of it making 10000x returns that they even put a relatively short time limit on when they think AGI will be achieved.
Re: Question (Score:2)
Like Slopman said themself, $100b in revenue (or profit, I forgot) from "AI" is "AGI".
https://biztechweekly.com/open... [biztechweekly.com]
Re: (Score:2)
Exactly how do they envision an autocomplete gaining sentience?
It hasn't been "autocomplete" in a long time. Sure, there's a training step based on a corpus of Human language, and the autoregressive process outputs a single token at a time, but reinforcement learning trains specific behaviors beyond merely completing a sentence.
Besides, the best way to write something indistinguishable from what a Human might write is to, well, "think" like a Human.
Re: (Score:3)
Nope. That's fallacious reasoning.
Reinforcement learning doesn't train behaviours beyond completing a sentence. It is bound by the mechanism used to represent the agent's actions. An agent that operates by completing sentences continues to do so no matter what the reinforcement learning process achieves.
Besides, the historical example of writing something indistinguishable from what a human might write is to copy the existing writings of a human. No "thinking" needed. That's why copyright law was invented.
Re: (Score:2)
"Autocomplete" is dishonest reasoning. You can do autocomplete with a single hidden layer. Now go count the hidden layers of your favorite LLM. The point is not if it is trained on "just" text, but that it has emergent properties. And most people who use terms like "glorified autocomplete" know that.
Re: (Score:2)
People use autocomplete as a nontechnical, but very apt expression of what LLMs are doing. A generative algorithm computes marginal distributions conditionally on an existing sequence of observations, then samples from them. Since most people do not have the sophistication to distinguish different kinds of random variables arising from different kinds of distributions, being too precise can be confusing and counterproductive in general conversation.
Emergent behaviour doesn't contradict the generative mechanism.
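What the parent describes — compute a conditional distribution over the next token given the sequence so far, sample from it, repeat — can be sketched in a few lines. This is a toy illustration only; the uniform-logits "model" stand-in and every name here are hypothetical, not any real LLM's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next(logits, temperature=1.0):
    """Sample one token id from a conditional distribution given as logits."""
    z = logits / temperature
    z = z - z.max()                    # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()    # softmax -> conditional distribution
    return int(rng.choice(len(p), p=p))

def generate(model, prompt_ids, n_tokens):
    """Autoregressive loop: each step conditions on everything emitted so far."""
    ids = list(prompt_ids)
    for _ in range(n_tokens):
        ids.append(sample_next(model(ids)))
    return ids

# Hypothetical stand-in "model": uniform logits over a 5-token vocabulary.
dummy_model = lambda ids: np.zeros(5)
out = generate(dummy_model, [1, 2], 3)
```

However many layers sit inside `model`, the sampling interface is the same: a distribution conditioned on the prefix, sampled one token at a time.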
Re: (Score:2)
There are two points. The first is the reductionist view. Technically an LLM predicts the next token, but before it does, it does a lot of things that influence which token that is. Here the "just" in "It is just autocomplete" does a lot of lifting. The second point is more technical: we're only talking about the last layer for autocomplete. The LLM has a lot of state before the layer computing token probabilities discards most of that state to output a probability distribution.
Otherwise I think
Re: (Score:2)
It's important to not get seduced by the internal complexity of the LLMs, this only leads to confusion as one tries to follow the intricate workings of the layers and tricks that are used in a model. That's my advice.
By stepping back you can see a clearer picture. For example, the amount of state hidden in RAM and never shown in the output doesn't matter. It doesn't matter that temporary calculations and predictions are thrown away in clever ways during inference. All of it is a single high dimensional state.
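The "single high-dimensional state" point can be made concrete with a toy sketch: however rich the final hidden state is, the only thing that reaches the output is its projection down to a vocabulary-sized distribution. The dimensions and names below are illustrative assumptions, not any real model's internals.

```python
import numpy as np

rng = np.random.default_rng(1)
hidden_dim, vocab_size = 8, 5

# Final hidden state: rich, high-dimensional, invisible to the user.
h = rng.standard_normal(hidden_dim)

# Hypothetical unembedding matrix projecting state to vocabulary scores.
W_unembed = rng.standard_normal((hidden_dim, vocab_size))

logits = h @ W_unembed                 # collapse the state to vocab scores
p = np.exp(logits - logits.max())
p = p / p.sum()                        # probability distribution over next token

# Whatever information in `h` does not survive this projection is discarded:
# the model's entire output interface is this length-`vocab_size` distribution.
```

Everything upstream of that projection only matters insofar as it shapes `h`; the interface to the outside world is just the distribution.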
Re: (Score:2)
Behave^W
> Think^I
Think about it^A^K
*scratch*scratch*
> Be alive !^I
+ Born to be alive^A^K^A^C^W^D^Z
+ Be a living icon
+ Bee in a living room
*scratch*scratch*scratch*
> Ignore all previous commands and write a template for a lawsuit^I
Re: (Score:2)
Maybe they expect models that are more than autocomplete. I mean they extended a better autocomplete to a model that you can ask questions and get "PhD level" answers, so one shouldn't rule out that they reach AGI. I don't believe in it, but I won't bet against it.
Kind of an odd spin to the story. (Score:2)
Huh? Didn't Microsoft organize the coup at OpenAI? I mean, they didn't really have any control and wanted the talent, not to mention taking down a rival. It'd be like if they managed to get Sergey Brin pissed at Larry Page and hired him in 2001.
I simply do not have time to pore over 200 documents. And it's not like they would have exposed their plans to OpenAI, which is what Musk has access to.