Do OpenAI's Origins Explain the Sam Altman Drama? (npr.org)
Tech journalist Kara Swisher disagrees that Sam Altman's (temporary) firing stemmed from a conflict between the "go-faster" people pushing for commercialization and a rival contingent wanting more safety-assuring guardrails. "He's been talking about the problems," Swisher said on CNN. "Compared to a lot of tech people, he's talking about the problems. I think that's a false dichotomy."
At the same time, NPR argues, the firing and re-hiring of Sam Altman "didn't come out of nowhere. In fact, the boardroom drama represented the boiling over of tensions that have long simmered under the surface of the company." The chaos at OpenAI can be traced back to the unusual way the company was structured. OpenAI was founded in 2015 by Altman, Elon Musk and others as a non-profit research lab. It was almost like an anti-Big Tech company; it would prioritize principles over profit. It wanted to, as OpenAI put it back then, develop AI tools that would "benefit humanity as a whole, unconstrained by a need to generate financial return."
But in 2018, two things happened: First, Musk quit the board of OpenAI after he said he invested $50 million, cutting the then-unknown company off from more of the entrepreneur's crucial financial backing. And secondly, OpenAI's leaders grew increasingly aware that developing and maintaining advanced artificial intelligence models required an immense amount of computing power, which was incredibly expensive.
A year after Musk left, OpenAI created a for-profit arm. Technically, it is what's known as a "capped profit" entity, which means investors' possible profits are capped at a certain amount. Any remaining money is re-invested in the company. Yet the nonprofit's board and mission still governed the company, creating two competing tribes within OpenAI: adherents to the serve-humanity-and-not-shareholders credo and those who subscribed to the more traditional Silicon Valley modus operandi of using investor money to release consumer products into the world as rapidly as possible in hopes of cornering a market and becoming an industry pacesetter... The question was, did Altman abandon OpenAI's founding principles to try to scale up the company and sign up customers as fast as possible? And, if so, did that make him unsuited to helm a nonprofit created to develop AI products "free from financial obligations"?
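As an aside, the "capped profit" mechanic is easy to illustrate with a toy calculation. The following is a minimal Python sketch, assuming a hypothetical 100x cap and made-up dollar amounts (the real cap multiples and distribution terms are not fully public): anything an investor would earn beyond the cap flows back to the nonprofit.

# Illustrative sketch only -- the cap multiple and the split below are
# hypothetical; OpenAI's actual agreements are not public in this detail.

def split_proceeds(investment, gross_return, cap_multiple=100.0):
    """Split a gross return between an investor (up to the cap) and the nonprofit."""
    cap = investment * cap_multiple              # most the investor can ever receive
    to_investor = min(gross_return, cap)
    to_nonprofit = max(gross_return - cap, 0.0)  # everything above the cap is retained
    return to_investor, to_nonprofit

# Hypothetical example: a $10M investment that eventually returns $5B gross.
investor_share, nonprofit_share = split_proceeds(10e6, 5e9)
print(f"Investor receives ${investor_share:,.0f}")   # $1,000,000,000 (the 100x cap)
print(f"Nonprofit retains ${nonprofit_share:,.0f}")  # $4,000,000,000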
Microsoft's stock price hit an all-time high this week, reports the Wall Street Journal. (They also note that when OpenAI employees considered moving to Microsoft, CEO Satya Nadella "assured their potential colleagues that they wouldn't even have to use Microsoft's workplace-communications app Teams.")
"But the ideal outcome for Microsoft was Altman going back to OpenAI as CEO, according to a person familiar with Nadella's thinking. By opening Microsoft's doors to the OpenAI team, Nadella increased Altman's leverage to get his position back..." Even after investing $13 billion, Microsoft didn't have a board seat or visibility into OpenAI's governance, since it worried that having too much sway would alarm increasingly aggressive regulators. That left Microsoft exposed to the risks of OpenAI's curious structure... Microsoft has had to strike a tricky balance with OpenAI: safeguarding its investment while ensuring that its ownership stake remained below 50% to avoid regulatory pitfalls... AI is wildly expensive, and Microsoft's spending is expected to soar as the company builds out the necessary computing infrastructure. And it's unclear when or if it will be able to make back these upfront costs in added new revenue...
Nadella is banking on OpenAI's independence leading to innovations that benefit Microsoft as much as humanity. But the uncertainty of the past week has shown the risks in one of the world's most valuable companies outsourcing the future to a startup beyond its control.
When Chris Wallace asked Swisher if he should be more concerned about the dangers of AI now — and of its potential to take jobs — Swisher had a different answer. "One of the concerns you should have is the consolidation of this into bigger companies. Microsoft really want to win here..."
But she didn't let the conversation end without wryly underscoring the potential for AI. "I'd be concerned that there's not enough innovation... It could be a good thing, Chris. Trust me, it could be a good thing. But it could also, you know, kill you."
Thanks to Slashdot reader Tony Isaac for sharing the article.
No. (Score:2)
/Betteridge
Re: (Score:2)
The original headline was not a question, that was just Slashdot editors improvising.
Re: (Score:2)
/Betteridge
Actually it's anti-Betteridge; from the day it happened it was obvious that it was due to the non-profit / for-profit conflict.
The headline might as well have been: Is water wet?
Though really, if I saw that headline I'd expect some wacky material physics story.
Damn, where can _I_ get a job like that! (Score:5, Insightful)
they wouldn't even have to use Microsoft's workplace-communications app Teams.
Re: Damn, where can _I_ get a job like that! (Score:3)
It's probably like flair: you don't have to use it, but then you aren't the team player everyone expected you to be.
Re: (Score:2)
Is it Teams you hate, or just messaging apps in general?
Re: (Score:2)
Don't really like messaging apps much, though I can deal with Discord, hate Slack, especially hate Teams (*ptui*)
I get it, you need messaging. Why can't they learn from Discord and clone that, instead of coming up with the exceedingly lame Lync, destroying Skype's market value by calling Lync "Skype for Business", then turning it into Teams, which can't even.
Re: (Score:2)
I don't use Discord or Lync, so I can't relate to those points. Skype classic was an absolute mess of pop-up windows, so I was glad to see it go away. I actually like the single-pane view of Slack and Teams. Was it the individual popup windows you liked about Skype?
Re: Damn, where can _I_ get a job like that! (Score:2)
In a world before Zoom, it was a dependable way to have a cheap or free soft phone with video calling that just about everyone had installed.
It certainly wasn't the greatest UI, but it worked great in a time when nothing like it worked as well. It solved problems of international connectivity to phone networks that nothing else really did (yes, there was VoIP, but those apps are almost all real sore-thumby in terms of what they provide and the tinkering required.)
And it worked peer-to-peer, without Microsoft or some government in the middle to spy or screw things up.
Re: (Score:2)
And it worked peer-to-peer, without Microsoft or some government in the middle to spy or screw things up.
How did it work peer-to-peer? You HAVE to put a server in between as far as I can figure out. Whether or not that server spies on you is a different matter.
Re: Damn, where can _I_ get a job like that! (Score:2)
Here's an analysis:
http://www1.cs.columbia.edu/~s... [columbia.edu]
Re: (Score:2)
Thank you for the link. A working link to the file is:
https://kirils.org/skype/stuff... [kirils.org]
This was very useful. However, as I suspected, in most instances it is not peer-to-peer. They tested the following three scenarios:
1. Both caller and callee have public IP
2. Callee has public IP
3. Both clients are behind NAT and firewall
In cases 1 and 2, a Skype server facilitates the initial connection and after that all data is exchanged peer-to-peer.
In the 3rd scenario, both clients communicate via a Skype server, i.e., the call is not peer-to-peer.
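To make the three cases concrete, here is a minimal decision-logic sketch in Python. It is not Skype's actual code; it just restates the behavior the analysis reports, assuming we can already tell whether each endpoint is publicly reachable.

def choose_media_path(caller_public: bool, callee_public: bool) -> str:
    """Return how call data would flow under the three tested scenarios."""
    if caller_public and callee_public:
        # Scenario 1: a Skype server only brokers the initial connection;
        # after that, data flows directly between the two peers.
        return "direct peer-to-peer"
    if caller_public or callee_public:
        # Scenario 2: only one side is publicly reachable; per the analysis,
        # a server facilitates setup and data is still exchanged peer-to-peer.
        return "peer-to-peer after server-assisted setup"
    # Scenario 3: both sides are behind NAT/firewall, so traffic is
    # relayed through a Skype server -- not peer-to-peer.
    return "relayed through a server"

for caller, callee in [(True, True), (False, True), (False, False)]:
    print(caller, callee, "->", choose_media_path(caller, callee))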
Re: Damn, where can _I_ get a job like that! (Score:2)
Sorry, didn't realize the link had been truncated.
No company (Score:5, Insightful)
Re: (Score:2)
We saw that with tobacco.
Re: (Score:2)
OpenAI is a nonprofit org, as noted in the article.
The reason is not a mystery (Score:5, Insightful)
When the reason for an event remains a mystery, invariably trace it back to the allure of monetary gain - Li Wei
Re: (Score:2)
It's my wife's quote, but I asked ChatGPT to rewrite it for me to make it sound better and give it a good source, thought it would match the article theme.
Breaking Capitalism (Score:5, Interesting)
10 months ago Altman was speculating on the ability of AGI to "break capitalism." Now the board is full of capitalists and finance guys.
The dot-ai bust is near (Score:2)
Will the many geniuses of "AGI" be able to find a job anywhere except at Microsoft?
NPR (Score:1)
NPR has no credibility; they are almost always pursuing some agenda.
The only thing certain about this fiasco is that there was bad blood. Exactly why, even the participants probably have disparate answers. I certainly couldn't care less what Karen Swisher thinks, or what narrative NPR wants to put forward to advance their own agenda.
WTF? (Score:2)
"He's being talking about the problems," Swisher said
What in hell does that actually mean? Who and what the hell are you?
OpenAI’s recent discovery before firing Sam (Score:2)
Thanks to the AI Rundown
The Rundown: According to a report by Reuters, OpenAI had a secret breakthrough called Q* (pronounced Q-Star) that precipitated the firing of Sam Altman.
The details:
Before Sam's firing, researchers sent the board a letter warning of a new AI discovery that could "threaten humanity."
The new model dubbed Q* demonstrated internal capabilities of doing simple math (something no model has achieved).
While simple math might not seem impressive to most, it could be a huge step toward creatin