AI Businesses

Sam Altman To Return as OpenAI CEO (reuters.com) 88

OpenAI said today it reached an agreement for Sam Altman to return as CEO days after his ouster, capping a marathon discussion about the future of the startup at the center of the artificial intelligence boom. From a report: In addition to Altman's return, the company agreed in principle to partly reconstitute the board of directors that had dismissed him. Former Salesforce co-CEO Bret Taylor and former U.S. Treasury Secretary Larry Summers will join Quora CEO and current director Adam D'Angelo, OpenAI said. Under an "agreement in principle," Altman will serve under the supervision of a new board of directors.

"I love OpenAI, and everything I've done over the past few days has been in service of keeping this team and its mission together," Altman wrote on the social media site X in response to the announcement. "When I decided to join Microsoft on Sunday evening, it was clear that was the best path for me and the team." Microsoft chief Satya Nadella hired Altman after he was sacked.

With the "support" of the new OpenAI board and Nadella, Altman said, he looked forward to "returning to OpenAI, and building on our strong partnership with Microsoft." Nadella said he was "encouraged by the changes to the OpenAI board" and believed that the decision was the "first essential step on a path to more stable, well-informed, and effective governance."
  • by DrMrLordX ( 559371 ) on Wednesday November 22, 2023 @02:58AM (#64023329)

    MS nearly absorbed all of OpenAI's talent without even trying. Assuming everyone who left with Altman or threatened to leave has returned.

    • by bramez ( 190835 )

      Times have changed. What took Apple a decade to fix, i.e. firing the wrong guy, OpenAI did over a weekend by re-hiring the guy with a vision.

      • "re-hiring the guy with a vision" - "the vision" in this case appears to be just plain old MMF, as in "make money fast".

        • Re:Jobs (Score:4, Funny)

          by war4peace ( 1628283 ) on Wednesday November 22, 2023 @04:13AM (#64023427)

          Incorrect. The proper acronym is "LLM" ("Loads and Loads of Money").

        • Re:Jobs (Score:4, Insightful)

          by gweihir ( 88907 ) on Wednesday November 22, 2023 @04:32AM (#64023459)

          Indeed. As somebody else here pointed out, the whole thing is a slow pump & dump scam. At some point most people will figure out that LLMs are not actually useful for most of the things they now "envision" they will "revolutionize", but this may still take a while.

          • by Viol8 ( 599362 )

            LLMs right now are little more than improved search engines with a few mashup ideas and solver abilities thrown into the mix. Technically extremely impressive, but hardly something that'll replace most human workers.

            • by gweihir ( 88907 )

              And LLMs are basically _already_ at the limits of what they can do. They may even have peaked, because model-collapse will soon prevent LLM training on public data.

              • Re:Jobs (Score:4, Interesting)

                by iAmWaySmarterThanYou ( 10095012 ) on Wednesday November 22, 2023 @07:24AM (#64023689)

                IMHO, based on just gut feeling, there will be a next major version ("ChatGPT 5", I guess) with so much new shit in it that it will never stabilize - and they already barely understand how the current versions come up with the output they generate now.

                V5 will be so much more complex and overloaded with money making feature warts it will ultimately end up even less reliable than v3 and v4.

                LLMs have value, but they're definitely not the change-everything future these guys claim them to be. I don't know what vision they're possibly talking about.

                OpenAI was founded with the idea of creating AGI. We don't hear much about any work they might be doing there. Their AI people certainly know LLM won't ever become an AGI so it looks like the standard Silicon Valley startup scam to me.

                I assume they flushed the board members who wanted AGI in favor of the usual money guys.

                • by gweihir ( 88907 )

                  OpenAI was founded with the idea of creating AGI. We don't hear much about any work they might be doing there. Their AI people certainly know LLM won't ever become an AGI so it looks like the standard Silicon Valley startup scam to me.

                  I assume they flushed the board members who wanted AGI in favor of the usual money guys.

                  Yep, pretty much.

              • This "LLMs are unlikely to improve further" idea seems mistaken to me and is inconsistent with what our ML people here are saying. The rate of improvement appears to be rapid and accelerating. A bit too rapid, actually.
                • by jythie ( 914043 )
                  Eh, I've mostly been seeing ML based AI hit a wall of diminishing returns for the last decade or so. The ML community also has a bit of an inferiority complex after being the redheaded stepchild of AI for so long.. so combine that with all the money flowing into the field and you get a lot of hype, hope, and 'everyone knows the line goes up to the moon' thinking.
              • And LLMs are basically _already_ at the limits of what they can do.

                You should tell G**gle, ClosedAI, F****b**k, Mistral et al. so they don't waste any more time and money training new models.

                They may even have peaked, because model-collapse will soon prevent LLM training on public data.

                Yeeaaa any day now...

            • Re:Jobs (Score:5, Insightful)

              by Kokuyo ( 549451 ) on Wednesday November 22, 2023 @05:06AM (#64023499) Journal

              Considering the state of google, Bing and just about any application search (windows, confluence etc what have you), a better search engine could be a multi-dozen-billion market...

              • by gweihir ( 88907 )

                You have a point.

              • by sinij ( 911942 )

                Considering the state of google, Bing and just about any application search (windows, confluence etc what have you), a better search engine could be a multi-dozen-billion market...

                Issues with the sorry state of search providers today - SEO and the monetization of searches - would also be present in LLM-based search. It is only doing better now because it has yet to be monetized and SEOed. If anything, I expect it to do worse, as it doesn't provide a ranked output list that the user can cognitively filter for relevance.

              • Thank you. Google is of limited use anymore, I use a search engine primarily to search for data - not find products made in sweatshop countries to buy, for the most part.
              • Considering the state of google, Bing and just about any application search (windows, confluence etc what have you), a better search engine could be a multi-dozen-billion market...

                Well, just wait till the LLM platforms start getting taken over by paid interests. (Which is what I believe this Altman clown-show is largely about.) I expect that's already going on beneath the covers of Bard and Bing Chat - er, Microsoft Copilot. It's the reason search engines are such crap.

                Originally they were nothing more than neutral search engines. They were a necessity for finding what you needed within the population explosion on what we called the "World Wide Web", long before we could carry it in

              • by KlomDark ( 6370 )
                Confluence? Fuck, I can't even find the documents I wrote myself with that shitty search.
            • Re: (Score:3, Interesting)

              We have already cancelled plans at work to hire a technical writer, and one more DevOps engineer due to our AI use. I call that a form of replacement.
            • You need to study this more. LLMs have been shown to reason and create new insight beyond what they have been fed as training examples. Read the Wikipedia article on LLMs to get started understanding that something spooky happens when you feed a neural network more than 50 billion parameters.
              • by gweihir ( 88907 )

                Bullshit. Nothing spooky happens. The whole is not more than the sum of its parts. The only thing that can create "spooky" in this physical universe is quantum effects and LLMs certainly do not have those. There _are_ always a lot of people with weak or no fact-checking skills that want to believe any random crap, though.

                • While I agree that there is considerable debate on artificial cognition, I think you are jumping to the conclusion that I have not been immersed in this for years.

                  Here is a good primer in layman's terms for what I'm trying to say: https://www.quantamagazine.org... [quantamagazine.org]
                  • by gweihir ( 88907 )

                    You may be "immersed", but you are bereft of actual insight. That reference you give is just another hype piece. Everything an LLM gives you (or really any statistical AI tool) was in the training data. The mathematics used does not allow anything else. Sure, the training data size has grown enough that it is probably infeasible now to identify where exactly that information was, but that does not change the fact that it is in there.

                • There _are_ always a lot of people with weak or no fact-checking skills that want to believe any random crap, though.

                  If you are too jaded to be utterly blown away by some of the stuff we've seen LLMs do [youtube.com], you'll be among the first to be replaced by them.

                  • by gweihir ( 88907 )

                    Your attempts at manipulation (creating "fear of missing out") are both unsophisticated and tiresome. They obviously have zero factual value. And no, I am not "blown away" by simplistic stuff, and that is all that LLMs have observably produced so far. Sure, that is "simplistic" on _my_ level, which is a bit higher than average. I am also not easily impressed; there needs to be some real substance to do it. LLMs have consistently failed to deliver that. It just takes somebody with real skills to se

            • by jythie ( 914043 )
              Yeah.. I have yet to see anything to suggest they are good for much beyond what ML-based AI is already good for: recommendation systems, search, and marketing.. any place where it does not matter if the answer is right or wrong, only if it keeps people engaged.
              • I disagree. I'm a retired IT guy and have no need to code or learn systems to the degree that I can support them. The new-fangled rollouts are not interesting to me. My hobby is researching astrophysics and cosmology along with quantum physics. I write about special and general relativity and I desperately need assistance in proposing hypotheses and fact-checking what I think I know.

                Traditional search is laborious and frustrating. I cannot ask a simple question and get an extended answer. I am faced with the

          • I find it shocking how many people on /. are in denial about what LLMs could do. I see opinions daily from people who have not yet implemented the technology in successful applications. Those who have understand its promise.

            It reminds me of the state of the internet in 1994, when few knew anything about the internet. People laughed at me when I said it was going to be bigger than Gutenberg and movable type. The naysayers then said that it would never be good for things like mass media, shopping or do what Com
            • I totally agree. People who dismiss ChatGPT and equivalent LLMs have simply not spent the time to use it.

              Faulting LLM output as being unreliable or hallucination-prone ignores the human operating the tool as an editor. The tool replaces the writer(s), not the editor, who ensures proper output per the intended context.

              All kinds of businesses are going to have to ramp up to incorporate LLM output or be at a disadvantage with their competitors in the marketplace. Take Christmas cards, for example. Families a
              • by gweihir ( 88907 )

                Next year, Shutterfly better have an option to generate a custom Christmas poem on the back about your family, because their competitors' card designer UI will have that.

                And the year after nobody will care because they will know this is not something personal but just something faked by AI....

                • Consider it an iterative improvement over the canned sentiments like "Seasons Greetings!" or just "Merry Christmas" that goes out on most peoples' cards. None of those are particularly 'personal'.
            • by gweihir ( 88907 )

              I find it shocking how many people apparently are not smart enough to see how very limited LLMs actually are. These people are probably so incredibly shallow in their insight level and work results that a no-insight LLM may actually be able to replace them. But here is news for you, buddy: Not everybody is a nitwit. And not everybody is conned by the "fear of missing out" which you so blatantly try to push in an unsophisticated attempt at manipulation.

                • I apologize to anybody who thinks I am making any attempt at manipulation. I have no agenda or veiled intentions. I also am not suggesting that anybody here is a nitwit. I'm simply offering a counter argument to the idea that LLMs are a fad that should be ignored and that it's some useless parlor trick.

                  I will however counter the argument that LLMs have no insight, as they do provide many surprising, difficult-to-explain insights, including some that are false.

                IMHO people who dismiss the coming imp
                • by gweihir ( 88907 )

                  I will however counter the argument that LLMs have no insight, as they do provide many surprising, difficult-to-explain insights, including some that are false.

                  LLMs have no insight. The mathematics they use do not allow insights. Sorry to burst your bubble but you are wrong, no matter what impression you got.

                  You may be fooled, though, by actual insights in the training data that the LLM regurgitates, possibly with some reformulation (which does not require insight and could have been done by automation for, say, the last 20 years or so). The "insight" level of an LLM is similar to (but worse than) what you get when browsing a good encyclopedia: There are many surprising and di

          • by ceoyoyo ( 59147 )

            I don't know, the quality of the scam e-mails I receive has gone up immensely.

            • by gweihir ( 88907 )

              Oh? I still get the same crap. Maybe all those "higher quality" ones get caught in my filters? I do admit I look only at the subject lines of those that get through, because there is never any doubt from that alone what they are.

        • by jythie ( 914043 )
          I was more thinking Cult of Personality.
      • And the 500 people who quit aren't getting their jobs back.
    • MS nearly absorbed all of OpenAI's talent without even trying. Assuming everyone who left with Altman or threatened to leave has returned.

      I'm thinking the conspiracy loons will have an absolute blast with this story. It'd be easy to twist this into OpenAI/Altman not wanting to do layoffs and staging all this instead to get their entire workforce to leave. I mean, it would make about as much sense as anything else happening in the world right now.

    • No, this is better for Microsoft. They've still got the company, still got the CEO that they like, and now also have a compliant board who will rubber-stamp whatever they and Altman want. Plus, Microsoft can continue to pretend that OpenAI is an independent and responsible nonprofit which is focused on safety.

      The other poster who was speculating that conspiracy theorists were going to have a field day with this was right, but I think the nature of those conspiracies are going to be about what OpenAI is g
  • I don't know that guy, but don't y'all think that unpleasant business type from the Apple TV series "Invasion" is modelled on him, including the looks?

  • I think the technical term for what has taken place here is: shit show.
  • Irony? (Score:5, Interesting)

    by fahrbot-bot ( 874524 ) on Wednesday November 22, 2023 @03:22AM (#64023369)

    As noted in several articles, about 725 out of OpenAI’s around 770 employees signed a letter demanding Altman be re-instated otherwise they would all quit. As Ronny Chieng noted [youtube.com] on The Daily Show this evening, almost all the employees at the company developing the AI tech that will eventually replace everyone, are working hard to save this one guy's job.

    • Even the "architect" of his removal signed the letter.

      https://www.businessinsider.com/openai-cofounder-ilya-sutskever-deeply-regrets-participating-ousting-sam-altman-2023-11
      • Re:Irony? (Score:4, Interesting)

        by monkeyxpress ( 4016725 ) on Wednesday November 22, 2023 @06:31AM (#64023615)

        It's pretty obvious to me that they want to remove the weird non-corporate-saving-humanity stuff from their corporate structure. It's just like how Google had committees on AI safety etc. right up until they realised their search nest egg was under threat from LLMs. Everyone at OpenAI is a multi-millionaire now, and the only thing standing between them and a huge payday is their enlightened corporate structure.

      • Better to jump on the train than stand in its path?

        He will still get fired over his role in this, but he will go out rich. The alternative was to see the company crash and burn as all the employees left - a Pyrrhic victory.

  • In 3... 2... 1: payday for all signatories of the letter, and for Altman ofc.
    • Non-profits don't do IPOs.

      • Re:IPO incoming (Score:4, Informative)

        by Entrope ( 68843 ) on Wednesday November 22, 2023 @06:18AM (#64023589) Homepage

        The company has been "capped profit" since 2019 and has a for-profit subsidiary, specifically allowing employees to get shares: https://openai.com/our-structu... [openai.com]

      • Re:IPO incoming (Score:5, Interesting)

        by monkeyxpress ( 4016725 ) on Wednesday November 22, 2023 @06:27AM (#64023607)

        Non-profits don't do IPOs.

        It's not a non-profit anymore. I don't believe it has been since 2019. It has a weird corporate arrangement, but it is able to attract investor funds and reward employees with equity.

        If anything, it looks like this whole thing is a power play to get rid of the remnants of its alternative corporate structure, and transition to a standard corporate model. This would make sense since there is now billions of equity value at play. It looks like they've been quite successful, and I assure you the next thing will be a rewriting of the corporate rules to remove all the hippie love stuff, though it will all be dressed up as 'assuring safe development of AI for the good of humanity' or whatever.

        • "Do no evil". Haven't heard that one in a couple of decades....

        • I bet you are correct. A shame they had to vampirize the identity of OpenAI to do it. If Altman and supporters went to MS, or started a new company, OpenAI could continue to have the watchdog role it was designed to have.

          • by ceoyoyo ( 59147 )

            "continue" lol. OpenAI turned into what it was designed to oppose quite a while ago. Certainly by the time they announced GPT 3, said it was "too dangerous" to release openly, and they'd be making deals with big companies to commercialize it, their transformation was complete.

            Fortunately they were replaced in their role by Facebook.

            Yeah, it sounds weird.

        • Exactly. The controlling nonprofit structure is getting too much in the way of money. I don't know what it takes to change a corporate charter, but what I expect is the board's balance will shift in favor of investors and profit.

          This quote from a Reuters story hit the nail on the head: '"The return of Altman consolidates his influence over the direction of OpenAI, and probably means it will be more bold and profit focused, but also potentially less risk averse," said Kyle Rodda, analyst at Capital.com.'

          And

        • Non-profits don't do IPOs.

          It's not a non-profit anymore. I don't believe it has been since 2019. It has a weird corporate arrangement, but it is able to attract investor funds and reward employees with equity.

          If anything, it looks like this whole thing is a power play to get rid of the remnants of its alternative corporate structure, and transition to a standard corporate model. This would make sense since there is now billions of equity value at play. It looks like they've been quite successful, and I assure you the next thing will be a rewriting of the corporate rules to remove all the hippie love stuff, though it will all be dressed up as 'assuring safe development of AI for the good of humanity' or whatever.

          Well the non-profit is still there and I don't think you can get rid of it.

          Basically OpenAI, the non-profit, launched OpenAI Global, a for-profit offshoot to fund their research.

          Investors can get 100x return and anything above that goes to the non-profit.

          I don't see the non-profit dissolving or losing its equity. But they'll probably give up some of their oversight, and the structure could allow OpenAI Global to break free and do an IPO.

          • It's not written in stone though - they can change all that with an agreement. And that's what Altman is doing - he's negotiating, and he used a big lever to do so, which will now result in him getting everything he wants. If the non-profit doesn't want to agree, they're faced with the same problem again - losing all their employees and basically becoming irrelevant. They have zero options, and nobody will want to touch the board positions with a barge pole, which means Altman and MS will be able to put in

  • What did he do to get dismissed in the first place? A board does not just fire a CEO, there have to be good reasons.

    • by ShanghaiBill ( 739463 ) on Wednesday November 22, 2023 @05:30AM (#64023537)

      What did he do to get dismissed in the first place?

      That is not public knowledge.

      A board does not just fire a CEO, there have to be good reasons.

      Not true. I was on two BODs. Plenty of decisions are made for stupid reasons, bruised egos, turf wars, spite, etc.

      • What did he do to get dismissed in the first place?

        That is not public knowledge.

        A board does not just fire a CEO, there have to be good reasons.

        Not true. I was on two BODs. Plenty of decisions are made for stupid reasons, bruised egos, turf wars, spite, etc.

        No specifics, but the board's mandate was safe AGI, and that didn't seem to be in good alignment with Altman's growing monetization push.

        I mean, he was raising money from the Saudis for a chip startup [bnnbloomberg.ca] (a separate company). It's hard to see that as being in alignment with safe AGI.

        The other thing that's been bugging me. It sounds like Altman didn't have shares in OpenAI Global, which is a weird position for the CEO and co-founder of the latest unicorn. Was there some kind of plan/negotiations under way to

        • None of the board members had shares, nor the CEO, that was supposed to be part of the check against recklessness. The idea was to prevent them from having a profit motive.

          It seems likely that that's all out the window now.
      • by gweihir ( 88907 )

        Well, correction: there have to be real reasons which may be good or bad.

    • Re: (Score:2, Insightful)

      by narcc ( 412956 )

      I'm guessing it was all just a publicity stunt to keep OpenAI in the headlines. Nothing else makes any sense. As you rightly point out, you don't boot your celebrity CEO without a damn good reason. WSJ is reporting that a 'board coup' led by one of the other founders led to his dismissal, but that doesn't tell us anything meaningful; it just gives the story a villain.

      • by gweihir ( 88907 )

        In a messed-up way that makes a lot of sense. Especially when this is a slow-motion pump and dump scam and they saw it not going quite as they like or saw the scam slowly becoming too obvious. LLMs are a nice stunt but hardly fit for any real use and model collapse may make them unusable in a few years.

    • They just needed him to be out of sight for a few days so they could replace him with the AI version.
    • What did he do to get dismissed in the first place? A board does not just fire a CEO, there have to be good reasons.

      The little bit I've been able to put together, with a bit of research, was that Altman was focused on growth and profit potential, while the board was still in non-profit mode and hoping for "the best for humanity" or some such marketing-speak for "we want to do good things." Pretty much just profit-seekers slamming up against do-gooders, and guess who ultimately wins that clash every time?

      • by gweihir ( 88907 )

        Indeed. Reminds me of a certain company that had a company motto about not doing evil, until they got large enough and could stop pretending.

  • by Bobknobber ( 10314401 ) on Wednesday November 22, 2023 @06:12AM (#64023579)

    Whether the board likes it or not, OpenAI is now conjoined to the MS corporate amalgamation. Whatever drama happened that made Altman temporarily lose his job, Nadella likely had a hand in re-instating him. And we can expect to have at least one MS representative as a board member to ensure compliance with their new bosses.

    For all that OpenAI spiel about AGI alignment, it was OpenAI that got re-aligned by MS. The irony just wills itself into existence with news like this.

    • by gtall ( 79522 )

      I think that is an apt assessment. Whether MS can keep the flock at OpenAI is another matter. They promised to quit if they didn't get Altman reinstated, but they may not like MS yanking their chains from now on. MS's quest for profit and dominance will determine OpenAI's future direction, not Altman. It isn't like the flock won't have plenty of other opportunities for employment in that sector.

      • If the employees are truly devoted to Altman then I can see them staying. The likely increased pay raises and stock options from MS only sweeten the deal.

        It should also be noted that Altman is the face of OpenAI. Neither Brockman nor Sutskever have the public appeal and mysticism that Altman brings like some new age techno prophet. Even if he is really just a glorified manager with no actual tech skills.

        • But but but AI will change EVERYTHING!!!!11111

          We had an article this week on how it will even change human evolution!

          It's a bunch of Silicon Valley pie in the sky crazy hype shit to make a few people rich in a giant pump n dump.

          Suckers are already using it to publish fake news articles and other stuff online which has generated super fail nonsense.

          • Oh, it'll change quite a bit. Ignore all of the futurist-I-fantasize-about-The-Matrix types and the singularity cultists, and what you have here is the development of something akin to the electronic calculator. When it came around, it rendered some jobs obsolete but mostly just massively increased the productivity of other jobs. And people STILL needed to learn the math. ChatGPTv15 will be very much the same
    • You are of course correct; this is how Microsoft will stay relevant. They own 49% of OpenAI, and it runs on Azure for a reason.

      Few companies have the resources to train AGI. Google, Apple, Meta and maybe Tesla & Amazon have what it takes. China will likely have something built for their domestic market. Microsoft is now part of that club with Azure and OpenAI, but without a viable phone, they don't have the end-user hardware needed to exploit the next logical stage, which is agents that empower LLMs
  • ObTrivia (Score:4, Funny)

    by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Wednesday November 22, 2023 @07:45AM (#64023703) Homepage Journal

    There are reports in the tech press that suggest that there's a religious cult within OpenAI that devoutly worships AGI, apparently with the chief scientist acting as priest. However, one should bear in mind that there's a lot of hostility towards said chief scientist right now, so the truth of the claim is hard to ascertain. Still, there have been tech geniuses who have gone completely insane before, so it's not impossible.

    The good news is that virtually everyone studying intelligence has come to the conclusion that LLMs are fundamentally incapable of producing actual intelligence.

    • I see this statement a lot, but the truth is, we can’t see the implications of LLMs in the future. There was no way for the first person who figured out what a wheel is to understand it would inform levers, gears, generators, and every tech built on them.

      LLMs are powerful in their infancy already, just like the wheel was. We have to see where they take us later.

    • The good news is that virtually everyone studying intelligence has come to the conclusion that LLMs are fundamentally incapable of producing actual intelligence.

      Of course, because our definition of "intelligence" is abysmally imprecise to the point of being useless.

  • What we found out was that it was actually OpenAI's *board* that had been replaced by AI. I mean, what other explanation is there for all this nonsensical drama!

  • So rumor has it the board wanted to do a merger with another AI company. Altman said no, so he was ousted. The next CEO quit. The former Twitch guy wanted proof that Altman was terminated for cause before taking the job. And that other company also said no.
