AI Businesses Microsoft

OpenAI Expects 'To Raise a Lot More Over Time' From Microsoft, Others To Build 'Superintelligence' (slashdot.org) 73

OpenAI plans to secure further financial backing from its biggest investor Microsoft as the ChatGPT maker's chief executive Sam Altman pushes ahead with his vision to create artificial general intelligence (AGI) -- computer software as intelligent as humans. From a report: In an interview with the Financial Times, Altman said his company's partnership with Microsoft's chief executive Satya Nadella was "working really well" and that he expected "to raise a lot more over time" from the tech giant among other investors, to keep up with the punishing costs of building more sophisticated AI models.

Microsoft earlier this year invested $10bn in OpenAI as part of a "multiyear" agreement that valued the San Francisco-based company at $29bn, according to people familiar with the talks. Asked if Microsoft would keep investing further, Altman said: "I'd hope so." He added: "There's a long way to go, and a lot of compute to build out between here and AGI... training expenses are just huge." Altman said "revenue growth had been good this year," without providing financial details, and that the company remained unprofitable due to training costs. But he said the Microsoft partnership would ensure "that we both make money on each other's success, and everybody is happy."

This discussion has been archived. No new comments can be posted.

  • Remind me (Score:5, Insightful)

    by a5y ( 938871 ) on Monday November 13, 2023 @02:50PM (#64002791)

    Which emperor's new clothes grifter is OpenAI?

    Is that the one selling receipts for monkey jpegs on servers the buyer doesn't own, the one where data is stored in an immutable state so fuckups are permanent and that's why everyone should invest in it, or is it the one where artists get plagiarised by corporations and factual authority is decided by a popularity contest where "a lie repeated is treated as the truth"?

    I'm ashamed to admit it doesn't take many different kinds of internet thought leader to make me forget which is which.

    • Re: (Score:1, Troll)

      by christoban ( 3028573 )

      Which emperor's new clothes grifter is OpenAI?

      I think most people were all pretty shocked when they released ChatGPT a year ago. They weren't very "emperor's new clothes grifter" then, were they? Do YOU want to make the mistake of underestimating the progress of AI?

      What I do know is it had better be one of our companies that reaches superintelligence/general learning/whatever before China or Russia.

      Or you can just continue sniping from the sidelines, if that makes you feel better.

      • by Anonymous Coward
        "I agree that my tower is a grand achievement", remarked Gustave Eiffel, "but you've seen nothing yet. Next up: a space elevator!"
      • by a5y ( 938871 )

        I think most people were all pretty shocked when they released ChatGPT a year ago. They weren't very "emperor's new clothes grifter" then, were they?

        Which most people? The most people who can't program? The most people who use Windows as their OS? ChatGPT impressed those people? You want to use "most people" as a measure of authority? What happened to "a person is clever, people are stupid", or does that not apply to "most people"? Your argument from authority is nakedly baseless nonsense. "Most people" can be suckered.

        They weren't very "emperor's new clothes grifter" then, were they?

        Did you feel personally attacked by what I wrote because you paid for NFTs and screamed about how blockchain is the future? Because those were emperor's new clothes grifters. And there's plenty of similarities between their actions and LLM hypebeasts.

        • I think most people were all pretty shocked when they released ChatGPT a year ago. They weren't very "emperor's new clothes grifter" then, were they?

          Which most people? The most people who can't program? The most people who use Windows as their OS?

          The answer is nearly everyone was pretty shocked by ChatGPT. I was very surprised, and I've been a developer since the 1980s (though I wasn't following the industry closely). Why do you think you need to try to belittle people? Is it personal insecurity?

          They weren't very "emperor's new clothes grifter" then, were they?

          Did you feel personally attacked by what I wrote because you paid for NFTs and screamed about how blockchain is the future? Because those were emperor's new clothes grifters. And there's plenty of similarities between their actions and LLM hypebeasts.

          Why the fuck are you personally attacking ME, you child??

    • Re:Remind me (Score:4, Interesting)

      by Rosco P. Coltrane ( 209368 ) on Monday November 13, 2023 @03:53PM (#64002981)

      You have to admire Altman: he managed to con Microsoft out of $10bn, and he seems sure enough that he'll be able to con them some more that he even announces it in advance.

      Until Microsoft realizes they bought a mediocre chatbot that repeats the data it's been fed, keeps degrading as it feeds on the output of other bots like it, and doesn't really make the Microsoft products it's been integrated into any better or more desirable. I mean, when was the last time you desperately wanted to ask the Bing AI a question?

      • by gweihir ( 88907 )

        To be fair, Microsoft hasn't made their products "any better or more desirable" for a while now. I think they are slowly circling the drain now and are either unwilling or (more likely) incapable of improving their products. All they can apparently do is throw irrelevant new features at the problem, while the pile of unfixed old problems keeps getting higher and makes their stuff crappier and crappier. It is quite possible that MS knows LLMs are generally trash and will probably always be trash.

        • I think they are slowly circling the drain now and are either unwilling or (more likely) incapable of improving their products.

          Monopoly power has a way of doing that. Like that old SNL skit says: "we're the Phone Company. We don't care, because we don't have to!"

          • by gweihir ( 88907 )

            Indeed. And having this incredible number of eggs in this one bad-quality basket is also an exceedingly bad idea for other reasons.

  • Ok (Score:2, Interesting)

    by The Cat ( 19816 )

    Announce it by showing how the system can find an average dad a good job.

    Should be simple enough for a superintelligence. Right?

    • Announce it by showing how the system can find an average dad a good job.

      Should be simple enough for a superintelligence. Right?

      Why would a superintelligent AI be interested in answering such questions?

  • by Press2ToContinue ( 2424598 ) on Monday November 13, 2023 @03:00PM (#64002831)
    Looks like OpenAI is setting a new high score in the 'Funding from Microsoft' game. They're leveling up faster than a teenager in a Mountain Dew-fueled coding marathon! With all this talk about 'superintelligence', I can't help but wonder if they're secretly building Skynet or just a really fancy Clippy. Either way, as long as they keep the AI from going full HAL 9000 on us, we should be fine. Right, guys? Right?
    • Looks like OpenAI is setting a new high score in the 'Funding from Microsoft' game. They're leveling up faster than a teenager in a Mountain Dew-fueled coding marathon!

      You mean they're the natural successors to Sam Bankman-Fried?

  • by King_TJ ( 85913 ) on Monday November 13, 2023 @03:17PM (#64002881) Journal

    I'm absolutely not a "luddite" against real progress in tech. But this AI push right now reeks of a big "fad to failure" we'll see unfolding over the next few years.

    I mean, maybe they'll prove me wrong? But so far, we've seen these AI chat systems doing a great job of giving detailed answers to queries that turn out to be false information and the whole thing stirring up a hornet's nest of I.P. infringement type lawsuits and accusations. (You can't really build an A.I. that hoovers up all the existing writing and art hosted online and NOT expect that outcome.) And these issues aren't even touching some of the stickier ones people proposed as challenges it would bring. (Ethical/moral dilemmas and so on.)

    Some projects prove to be SO costly to undertake, you really have to achieve success with them within a relatively short time-frame, because otherwise the investors will pull out and it's impossible to finish what you started. Personally, I'm ok with that because what that REALLY means is you were trying to accomplish it before the underlying tech was sufficiently ready/advanced. What takes massive farms of expensive servers today should be processing that can be done with exponentially fewer resources down the road, as computing itself advances. (Look at the size of "mass storage" devices from the likes of IBM back in the early days of their development, vs how much data can be saved on a USB thumb drive today.)

    • But this AI push right now reeks of a big "fad to failure"

      That's the "Slashdot consensus" on all new technology.

      I'll get off your lawn now.

    • Re: (Score:2, Interesting)

      by christoban ( 3028573 )

      Sounds like you're in step two.

      Shock, Denial, Anger, Bargaining, Depression, Testing, Acceptance.

    • They will need to solve the hallucination problem for sure in order to achieve anything like AGI. And the legal issues too, of course, but there are still plenty of technical issues beyond the hallucination problem that they will need to solve.
      Maybe they have a plan for them. I am certainly not in a position to know. But if their only trick is the large language model, then I think they will not succeed. This represents only one component of human intelligence. It doesn't include things like, say, spatial reasoning.

      • Maybe they have a plan for them. I am certainly not in a position to know. But if their only trick is the large language model, then I think they will not succeed. This represents only one component of human intelligence. It doesn't include things like, say, spatial reasoning.

        Something that has persistently freaked me out from the very beginning about the better LLMs is their spatial reasoning abilities, not even speaking of multimodal models.

        • by narcc ( 412956 )

          About that. You'll find that those 'abilities' tend to vanish with just a little bit of probing. See, it's not actually reasoning any more than Joe Weizenbaum's Eliza was empathizing with her clients. Such a thing is well-beyond the capability of these kinds of models. It's certainly a convincing illusion, but an illusion none the less. Though, just like Eliza, you need to be willing to play along.

          Try giving it slightly modified versions of a puzzle that "freaked you out". Try making it easier, but keep the same 'structure' as before, by swapping or inverting some of the actions, or make a small change that makes the puzzle obviously illogical. I doubt you'll be 'freaked out' by those 'abilities' after a few minutes of probing.
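
          To make the Eliza comparison concrete, here is a minimal, hypothetical sketch in the spirit of Weizenbaum's program, not his actual code: the patterns, templates and example input below are invented for illustration. Keyword matching plus pronoun reflection and canned templates are enough to produce replies that feel attentive, with no understanding anywhere in the loop:

          import random
          import re

          # Swap first/second person so a matched fragment reads back naturally.
          REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
                         "you": "I", "your": "my", "mine": "yours"}

          # (pattern, canned response templates), checked in order; the last rule catches everything.
          RULES = [
              (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
              (r"i am (.*)", ["Why do you say you are {0}?", "Does being {0} trouble you?"]),
              (r"because (.*)", ["Is that the real reason?", "Could there be another explanation for {0}?"]),
              (r".*", ["Please tell me more.", "How does that make you feel?"]),
          ]

          def reflect(fragment):
              return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

          def respond(utterance):
              text = utterance.lower().strip(" .!?")
              for pattern, templates in RULES:
                  match = re.match(pattern, text)
                  if match:
                      fragment = reflect(match.group(1)) if match.groups() else ""
                      return random.choice(templates).format(fragment)

          print(respond("I feel nobody takes my work seriously"))
          # e.g. "Why do you feel nobody takes your work seriously?"

          The replies can read as empathy, but they are produced by string substitution, and inputs the rules do not anticipate break the spell quickly; a few minutes of that kind of probing is exactly what the comment above recommends trying against LLM "reasoning".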

          • About that. You'll find that those 'abilities' tend to vanish with just a little bit of probing. See, it's not actually reasoning any more than Joe Weizenbaum's Eliza was empathizing with her clients. Such a thing is well-beyond the capability of these kinds of models. It's certainly a convincing illusion, but an illusion none the less. Though, just like Eliza, you need to be willing to play along.

            Try giving it slightly modified versions of a puzzle that "freaked you out". Try making it easier, but keep the same 'structure' as before, by swapping or inverting some of the actions, or make a small change that makes the puzzle obviously illogical. I doubt you'll be 'freaked out' by those 'abilities' after a few minutes of probing.

            I've spent hundreds of hours with dozens of models from shitty 7B models to GPT-4. Not chatting up storms and reinforcing personal biases but attempting to falsify working assumptions. I'm also well aware of inherent structural limitations of the technology as currently deployed.

            My high level description of LLMs is like asking a person to say the first thing to pop into their head without much thinking about it. In some domains LLMs are like the person who has been on the job for years and can do it in t

            • by narcc ( 412956 )

              I'm calling what you've experienced an illusion because that's exactly what it is. These models simply can not do what you believe they can do and are not doing what you think they're doing. That's not my opinion, that's an objective fact.

              Spatial reasoning in larger models seems particularly crazy to me given the lack of eyeballs.

              Oh, it's crazy alright. See, models of this type are simply not capable of reasoning. We know this because they lack a sufficient mechanism. This is why the appearance of reasoning must be ... say it with me ... an illusion!

              I've spent hundreds of hours with dozens of models [...] Not chatting up storms and reinforcing personal biases but attempting to falsify working assumptions.

              Then you've failed miserably. I guess empirical investigation just isn't your forte. Maybe your time would have been better spent in a classroom instead of a chatroom.

              • I'm calling what you've experienced an illusion because that's exactly what it is. These models simply can not do what you believe they can do and are not doing what you think they're doing. That's not my opinion, that's an objective fact.

                Where are your receipts? No evidence to support any of your claims as usual.

                Oh, it's crazy alright. See, models of this type are simply not capable of reasoning. We know this because they lack a sufficient mechanism. This is why the appearance of reasoning must be ... say it with me ... an illusion!

                A sufficient mechanism? Is it possible to be any more vague?

                Then you've failed miserably. I guess empirical investigation just isn't your forte. Maybe your time would have been better spent in a classroom instead of a chatroom.

                In other words narcc is just an illusion. Any contradictory evidence for the existence of narcc is merely evidence of miserable fail. Anyone who believes narcc exists should have spent more time in a classroom instead of a chatroom.

                Then you either believe in magic or you don't know nearly as much as you think you do. Well, I guess it could also be both.

                Still at it with even more empty claims supported by "magic" I see.

                What do you think is happening, anyway? Some robot soul haunting the data center flipping bits? Some imagined "emergence" that somehow makes math irrelevant? Maybe you think it's aliens ... or a wizard?

                Does it matter if I know? Does my knowledge or ignorance have any impact

                • by narcc ( 412956 )

                  Is it possible to be any more vague?

                  LOL! What happened to you being "well aware of inherent structural limitations of the technology"?

                  Your problem is that you don't actually know anything, but fancy yourself an expert because you played with a chatbot for a few hours and refuse to accept anything that doesn't conform to your baseless assumptions. So, no, I'm not going to put a lot of effort into explaining things to someone who isn't interested in learning.

                  By all means don't let me stand in your way. Please do go ahead and prove it.

                  Wait ... is this a joke or are you really this incompetent? This is CS 101, for goodness sake. Go learn about automata theory.

                  • LOL! What happened to you being "well aware of inherent structural limitations of the technology"?

                    No need for riddles. If you have objective evidence to support your claims speak up.

                    Your problem is that you don't actually know anything,

                    Your problem seems to be you don't actually have the receipts to objectively support your assertions.

                    but fancy yourself an expert

                    Me fancy myself an expert? What on earth did I say that would lead any reasonable person to any such conclusion?

                    Wait ... is this a joke or are you really this incompetent? This is CS 101, for goodness sake. Go learn about automata theory.

                    If you have a specific claim to make then make it. I suspect you won't do it because you simply don't have the receipts. All you can do is spew derisive commentary and speak in riddles.

                    because you played with a chatbot for a few hours and refuse to accept anything that doesn't conform to your baseless assumptions. So, no, I'm not going to put a lot of effort into explaining things to someone who isn't interested in learning.

                    I keep repeatedly asking y

                    • by narcc ( 412956 )

                      LOL! The only person making claims they can't back up here is you. I've given you more than you deserve already. It's not my responsibility to educate you and, quite frankly, I doubt you could handle the content.

                      If you have a specific claim to make then make it.

                      I did. I also gave you all that you need to validate those claims. Your lack of basic literacy isn't my problem.

                    • LOL! The only person making claims they can't back up here is you.

                      You speak of proof yet thus far you have provided evidence of absolutely nothing. All you have done is make excuses for your repeated failure to support your assertions.

                      I've given you more than you deserve already. It's not my responsibility to educate you and, quite frankly, I doubt you could handle the content.

                      Thus far you've said nothing because you have nothing. All you can do is resort to derisive commentary because you don't have the receipts.

                      I did. I also gave you all that you need to validate those claims. Your lack of basic literacy isn't my problem.

                      You've repeatedly said nothing. Your inability to communicate any objective evidence is your failing and your failing alone. Others cannot be expected to understand that which has not been stated.

                    • by narcc ( 412956 )

                      Pathetic. If you're having trouble understanding my post, get an adult to help you. What a joke you are.

                    • Pathetic. If you're having trouble understanding my post, get an adult to help you. What a joke you are.

                      There is nothing to understand because nothing was said. Derisive commentary and unsubstantiated assertions communicate nothing and have no value.

                      If you are unable or unwilling to articulate relevant evidence and how you believe it supports the proposition that the spatial reasoning demonstrated by LLMs is provably an "illusion", that's YOUR problem, nobody else's.

      • by dvice ( 6309704 )

        OpenAI won't solve the hallucination problem. But Deepmind will.

        Shane Legg (DeepMind founder) says that there is a 50-50 chance that DeepMind will have AGI by 2028. Predictions like this are nothing new; he actually predicted the same year back in 2011 and has not changed his mind about it. But what is remarkable in this interview is that he says he is not aware of any problems they could not solve within the next 5 years. The 50% comes from the fact that new problems can always appear. But this prett

    • and the whole thing stirring up a hornet's nest of I.P. infringement type lawsuits and accusations. (You can't really build an A.I. that hoovers up all the existing writing and art hosted online and NOT expect that outcome.)

      Sure you can. I think it's silly. If you post information that is freely available for anyone to read or consume, you cannot be mad when they do.

      If I watch your FREE YouTube video or read your FREE article on your blog about building sofa tables, and then make a business selling sofa tables, what exactly should I owe you? Answer: Nothing. The same applies to ChatGPT. It's a transformative work.

      • You don't understand copyright law.

        Publishing something online doesn't make it public domain.

        "But I found it on the internet!" is not a free pass around existing laws.

    • by gweihir ( 88907 )

      Well, I think this is just about getting more money for as long as many people have not figured out that LLMs are fucking dumb, hallucinate frequently, and cannot be fixed.

      • by narcc ( 412956 )

        and cannot be fixed.

        That's the one hardest for the true believers to accept. The way these models work, so-called 'hallucinations' aren't something that can be 'fixed' because they're not caused by the model being broken. 'Hallucinations' are exactly the kind of output you should expect even from a model built with perfect training data that was functioning exactly as designed.
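
        To make that concrete, here is a toy sketch; not any real model, and the prompt and probabilities are invented for illustration. An autoregressive generator only ever samples a plausible next token from a learned probability distribution, and nothing in that sampling step checks whether the chosen continuation is true:

        import random

        # Hypothetical next-token distribution a model might learn for the prompt
        # "The largest desert on Earth is the ...", where the training text mentions
        # the Sahara far more often than the (correct) Antarctic desert.
        NEXT_TOKEN_PROBS = {
            "Antarctic": 0.25,  # factually correct, but less common in the text
            "Sahara": 0.65,     # fluent, popular, and wrong
            "Gobi": 0.10,       # fluent and wrong
        }

        def sample_next_token(probs):
            """Pick one token in proportion to its probability mass."""
            tokens, weights = zip(*probs.items())
            return random.choices(tokens, weights=weights, k=1)[0]

        random.seed(0)
        print([sample_next_token(NEXT_TOKEN_PROBS) for _ in range(10)])
        # Most runs confidently answer "Sahara": the sampler works exactly as designed
        # on the distribution it learned, yet that output is what gets labelled a
        # 'hallucination'. Nothing here is broken, so there is nothing to 'fix'.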

        • by gweihir ( 88907 )

          and cannot be fixed.

          That's the one hardest for the true believers to accept.

          Indeed. No idea why they insist AI is smart and must eventually outperform humans because it must always get better and better. Probably some attempt to find a surrogate "God" in technology, or something.

          • by narcc ( 412956 )

            There's a lot of that sort of thing from the singularity nuts and the lesswrong cultists looking for something like salvation and immortality, which they believe AGI could grant them once they figure out how to "upload" their brains to the cloud, or whatever. Not content with just a heaven and a God, they even have their own version of hell complete with a devil to torture them in the form of Roko's basilisk [lesswrong.com].

            The obvious crackpots aside, for a lot of people AGI represents a validation of many of their core

            • by gweihir ( 88907 )

              All this from people who should know better, inspired by a chat bot that can't do basic math.

              Indeed. There _is_ a very religious feel to it. Of course it is utterly hilarious that on the subject of AGI, people go thoroughly irrational.

    • by ljw1004 ( 764174 )

      I mean, maybe they'll prove me wrong? But so far, we've seen these AI chat systems doing a great job of giving detailed answers to queries that turn out to be false information and the whole thing stirring up a hornet's nest of I.P. infringement type lawsuits and accusations.

      Meanwhile, most of my software development peers are using ML-powered tools to augment their coding (both chat-style where they ask a question, and built-in where it powers autocomplete). They moreover claim to be more productive from it. Your thing about "false information" misses the mark for them. If their task is to fix a bug in a codebase/language that's unfamiliar to them, then getting an approximate answer from ChatGPT (even if incorrect) gets them faster to a solution.

      Personally I tried it a bit and

      • LLMs are a useful tool. No denying that.

        They are not intelligent and the underlying technology is completely wrong to create an AGI.

        My car has round parts. My pots and pans have round parts. I can't cook on my tires or mount my pans on my axles.

    • I'm absolutely not a "luddite" against real progress in tech. But this AI push right now reeks of a big "fad to failure" we'll see unfolding over the next few years.

      I hope to hell that you're right, because that's a lot better outcome than if they actually succeed at creating superintelligent AGI.

  • we both make money on each other's success

    and on everybody else's back.

  • Perhaps one ought not try to create an intelligence so far greater than one's own. Especially an advanced intelligence with an unknowable nature that might have it kicking one's ass into submission, or even subservience.

    I think having that realization, and acting according to it, might be a defining characteristic of even basic intelligence. Or am I conflating intelligence and sense? ;-)

    • by gweihir ( 88907 )

      Don't worry about that. They have zero chance of getting there. When they finally have an "AI" that actually _understands_ that water is wet (and not just repeats what others have said about water), I will be suitably impressed. But they will not reach that point anytime soon, if at all.

      • +5, truth

        • by gweihir ( 88907 )

          Thanks.

          • I was going to post pretty much the same thing because it's absolutely true and then saw your reply as I reached for the reply button. And here we are :-)

            Way too many Hollywood sci-fi fans seem to honestly believe we are only a few short years away from Skynet or Star Trek's Data, both of which are built on magic, not technology.

            • Oh ha ha I just noticed I got modded troll for agreeing with you! Hilarious!

              • by gweihir ( 88907 )

                On that down-mod: Some people. Words fail me.

                What people do not realize is how incredibly slow technological progress is. From the first demonstrated steam engine to safe, reliable steam engines took 100...300 years. For "AGI" we do not even have a theory at this time, and certainly no demonstration, and it is a far more difficult question. They just assume we live in this age of wonders and everything will be discovered, refined and brought to market much faster. That is not happening. Even the current artificial

                • Exactly. They seem to think AI research started with OpenAI. Nothing happened anywhere before that. Hell, by now my professors in the field have all either retired or died (or in one case was pushed out for fucking his grad students). A couple are still doing the emeritus thing where they don't teach or do much of anything but still show up on campus to do whatever it is old professors do when they're too old to do much anymore. Those guys were out before OpenAI was even conceived and without them and

      • Don't worry about that. They have zero chance of getting there. When they finally have an "AI" that actually _understands_ that water is wet (and not just repeats what others have said about water), I will be suitably impressed. But they will not reach that point anytime soon, if at all.

        It takes a cheap sensor to detect that water is wet. Even my sprinkler system has a rain sensor! But please, keep on debating the word "understands".

      • Don't worry about that. They have zero chance of getting there. When they finally have an "AI" that actually _understands_ that water is wet (and not just repeats what others have said about water), I will be suitably impressed. But they will not reach that point anytime soon, if at all.

        I take what you're saying to heart. But AI experimentation, and the complexity of it, are increasing exponentially. And I'm sure there's lots of creative, interesting, off-the-wall stuff being done that we don't hear about.

        What if some freak combination in the midst of all that activity results in something akin to whatever caused the intelligence gap between us and other primates? What if true AI just 'shows up'? Sure, the chances of that happening are small, but I'm cautious by nature. I wouldn't delibera

  • Do they mean an even more fucking dumb system? Or one that is fucking dumb faster? Or maybe one that is fucking dumb about more topics?

    Because if what OpenAI offers has "intelligence", I just do not see it.

  • Chatbot / LLM technology is NOT the way to proper AGIs. The current systems produce only sparkly simulations that look good but have no validation below the surface skin. And the hallucinations in some results are a result of this deluded approach. Sure, it looks good, but there are some real fundamental flaws in what the AI dev hordes are mindlessly pursuing.

    The people rah-rahing that we will have full AGIs in ten years are spouting nonsense based on ambitious extrapolation. Yes, we will have powerful tools bu

  • And then ultraintelligence. Have we run out of superlatives yet?

"Don't try to outweird me, three-eyes. I get stranger things than you free with my breakfast cereal." - Zaphod Beeblebrox in "Hithiker's Guide to the Galaxy"

Working...