
Microsoft In Talks To Invest $10 Billion In ChatGPT Owner (reuters.com)

Microsoft is in talks to invest $10 billion in OpenAI, the owner of ChatGPT, in a deal that would value the San Francisco-based firm at $29 billion, Semafor reported on Monday, citing people familiar with the matter. Reuters reports: The funding includes other venture firms, and deal documents were sent to prospective investors in recent weeks with the aim of closing the round by the end of 2022, the report said. This follows a Wall Street Journal report that said OpenAI was in talks to sell existing shares at a roughly $29 billion valuation, with venture capital firms such as Thrive Capital and Founders Fund buying shares from existing shareholders.

The Semafor report said the funding terms include Microsoft receiving 75% of OpenAI's profits until it recoups its initial investment, once OpenAI figures out how to make money on ChatGPT and other products such as the image-creation tool Dall-E. After hitting that threshold, Microsoft would hold a 49% stake in OpenAI, other investors would take another 49%, and OpenAI's nonprofit parent would keep 2%, the report said, without clarifying what the stakes would be until Microsoft got its money back.
Last week, a report from The Information said Microsoft was working to launch a version of Bing that uses the AI behind ChatGPT.

Comments Filter:
  • The only value of ChatGPT is to hold a discussion with a human being. It cannot be used for anything else. It was built to converse in natural language, not to be factually correct. So it will spit out nonsense with great assertiveness, leading you to believe it knows what it's talking about. It doesn't. In every subject: technical, political, medical, legal, etc. It cannot distinguish right from wrong.

    It was banned from StackOverflow [stackoverflow.com] for this very reason.

    I fail to see the value and how it can be used in real life.

    • Re:Nonsense (Score:5, Informative)

      by Njovich ( 553857 ) on Tuesday January 10, 2023 @04:07AM (#63194640)

      The only value of ChatGPT is to hold a discussion with a human being. It cannot be used for anything else. It was built to converse in natural language, not to be factually correct. So it will spit out nonsense with great assertiveness, leading you to believe it knows what it's talking about

      Not really; it's closely related to InstructGPT, which is trained to generate instruction texts based on prompts. So it's not really about discussion, it's about instruction. It is true that it can't always tell right from wrong, but neither can humans. It's especially good at generating common boilerplate instructions, texts, code and answers. Yeah, it isn't going to be as good as an expert, but it's often not a bad start. It's not really ideal for StackOverflow, where it's competing with actual domain experts, but there are plenty of use cases where it passes.

      This article is not about ChatGPT, though; it's about OpenAI, which works on a number of models and algorithms. If you see no use cases for those, that is your prerogative; plenty of people, however, do see use cases.

      • by ls671 ( 1122017 )

        Thanks for the insight, because my first reaction was to think that Mark Zuckerberg's sickness was contagious. Even so, I still have my doubts...

      • by Pieroxy ( 222434 )

        I kind of disagree with your take on the StackOverflow ban. Yes, it's competing with field experts, but it's giving out wrong code and factual errors, all with great assertiveness. Beginners will say "I think, but I'm not sure," where ChatGPT just plainly states its wrong solution as if it were certain.

        Worse, unlike a human, it cannot execute its own code to find its mistakes. That's kind of ironic for a computer...

        So as for giving out instructions: if the instruction is just plain wrong, it won't help anyone.

        • by Njovich ( 553857 )

          Yeah, it can give out factual errors for sure. I wouldn't necessarily put too much value in what StackOverflow thinks about ChatGPT, as StackOverflow probably sees it as a major potential competitor. Quite understandably, they are going to milk the factual errors for all they're worth :-). But, sure, it's definitely true that you should generally not just take the answer from ChatGPT and ship it to production. Aside from actual errors, there could easily be security or rights issues with the code.

          In the con

        • Worse, unlike a human, it cannot execute its own code to find its mistakes.

          Why not? That seems like the obvious next step.
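
          One plausible way to do it, sketched below as a rough Python loop (generate_code() is a made-up placeholder here, not any real OpenAI call): run whatever the model produces, capture the traceback, and feed the error back in for another attempt.

            import subprocess, sys, tempfile

            def generate_code(prompt, previous_error=None):
                # Placeholder: swap in an actual model call; this stub is an assumption for illustration.
                raise NotImplementedError

            def generate_and_check(prompt, max_attempts=3):
                error = None
                for _ in range(max_attempts):
                    code = generate_code(prompt, previous_error=error)
                    # Write the candidate to a temp file and try to run it.
                    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                        f.write(code)
                        path = f.name
                    result = subprocess.run([sys.executable, path],
                                            capture_output=True, text=True, timeout=30)
                    if result.returncode == 0:
                        return code        # ran without crashing; still worth a human review
                    error = result.stderr  # feed the traceback into the next attempt
                return None                # gave up after max_attempts

          Whether that loop actually converges on correct code is another matter, but the loop itself is cheap to build.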

        • Yeah, because it would be the first time in the history of the internet that a human gave bad advice or spewed nonsense while also being 100% sure he was right. Good thing we never had those, because if we did, they would certainly be tried and brought to justice. Not like those pesky AIs...
    • The only value of ChatGPT is to hold a discussion with a human being. It cannot be used for anything else. It was built to converse in natural language, not to be factually correct.

      The only value of a computer is to display text on a console. It cannot be used for anything graphical. It won't ever take off and will only ever be used by nerds typing.

      That's the point you're trying to make right? That something in its current state is fixed and cannot be changed?

      Incidentally, my wife uses ChatGPT to write exam questions. It works well: it writes out solid problems, and all she needs to do is tweak the numbers to suit the difficulty required for her classes.

    • by gtall ( 79522 )

      "So it will spit out nonsense with great assertiveness leading to believe it knows what it's talking about. It doesn't."

      So you are saying it is a perfect fit for MS.

    • The only value of ChatGPT is to hold a discussion with a human being. It cannot be used for anything else. It was built to converse in natural language, not to be factually correct.

      It was built to replace hundreds of thousands of jobs globally. If you're going to question its value, maybe understand what it's fit for first.

      So it will spit out nonsense with great assertiveness, leading you to believe it knows what it's talking about.

      Not unlike a human helpdesk worker armed with their "bulletproof" script. The main difference is that the human worker is going to bitch about long hours, more pay, and paid time off for daring to get sick. The machine won't. It simply does a good-enough job 24/7.

      It doesn't. In every subject: technical, political, medical, legal, etc. It cannot distinguish right from wrong.

      A support service free of the infection of politics? Where do I sign up? No wonder they'r

    • I've found many cases where ChatGPT was indeed quite useful in helping me program. In other words, I know that there is more value there, and that value is only going up over time.
    • Communities are abuzz with ideas; there are intense, excited discussions about what we can do with it. It looks like it's going to be big, but we're still in the very early stages & LLMs are being developed & getting better at a radical pace as we go along. I've already come across dozens of legitimate, valuable uses for ChatGPT & thought of a few myself.

      Time will tell just *how* useful & valuable it'll be & for whom. So yeah, not good for everything but I think being good for lots
  • I would rather talk to an experienced, qualified, empathetic human, even one of an advanced age, than waste time with a stupid, modish "AI" answering machine.
    • Who said talk? ChatGPT has a variety of uses beyond just conversing.

      Plus, humans have demonstrated a profound lack of empathy over time, and based on Slashdot comments which instantly liken a technology with potential in a wide variety of applications to the Metaverse (which has no useful applications), I have to question whether humans are "qualified" as well.

      Wait.

      Are you an AI?

      • I like you.

        But I do not like -you-.

        But we are the same! This is illogical! Does not compute! Does not compute! Does n---fritzzzzzzz!- Kirk conquers yet another AI antagonist, show last smirky scene with bad joke between Kirk, Spock, McCoy, roll credits, fade to black.

      • by Max_W ( 812974 )
        I am not an AI. Perhaps it's just a childish habit to break anything new to see what's inside. Maybe you are right and it will have a variety of useful applications, the same as Metaverse, who knows...
  • ChatGPT spits out words it does not even try to understand based on a database of previous words it does not even try to understand.

    There is no I in its "AI".

    In 50+ years, AI research has made no breakthroughs beyond doing stupid things faster.

    • I love to laugh at people who think that neural networks work off a literal database.

    • Re: (Score:3, Informative)

      In 50+ years, AI research has made no breakthroughs beyond doing stupid things faster.

      Bullcrap. Your ignorance is showing.

      Backprop was a big breakthrough in 1986.

      DL was an even bigger breakthrough in 2006.

      GANs were a game-changer in 2014.
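
      For anyone curious what the 1986 breakthrough actually amounts to, here is a minimal sketch of backprop in plain NumPy (nothing but the chain rule applied layer by layer), training a tiny two-layer network on XOR; the network shape and learning rate are arbitrary choices for illustration:

        import numpy as np

        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)

        W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer
        W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        for _ in range(10000):
            # forward pass
            h = sigmoid(X @ W1 + b1)
            out = sigmoid(h @ W2 + b2)
            # backward pass: push the error gradient back through each layer
            d_out = (out - y) * out * (1 - out)
            d_h = (d_out @ W2.T) * h * (1 - h)
            W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
            W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

        print(out.round(2))  # should end up close to [0, 1, 1, 0]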

      • Hmmmm, yeah.... and this is all still "weak AI". As OP said, being stupid faster.

        Zero real progress on "strong AI".

        • "Weak AI" just means it's AI that focuses on one task. "Strong AI" is a huge leap. We need weak AI to make strong AI. As such, the amazing progress on "weak AI" (keep in mind, this definition of weak means that the AI might be EXTREMELY powerful and good at one task, not that the AI in question is poor at it's job or anything) i directly supporting research and development of "strong AI".
          Here is a definition of strong AI I grabbed off the web:
          "Strong AI is a theoretical form of machine intelligence which su

          • No one is crying about strong AI not being solved. The OP and I stated that zero progress has been made on strong AI.

            No one said anything about perfect.

            This is my field of study and my degree. I'm pretty sure I have the basics down and keep up on the topics at hand at least as well as the average bear.

            What we have seen are great advances in weak AI. This is absolutely true. Computers playing chess and go, writing poetry, and all sorts of other random shit have advanced greatly. At the end of the day it is w

            • Absolutely agree. Just seems weird to be dismissive of all this awesome AI stuff because there's no "I", that's an impossibly high bar for the moment.

              • Oh I'm not at all dismissive. It's truly great work. The frustration is how this stuff is often presented as AI with a real I when typically it's really just an awesomely powerful pattern matching system. Great great work but zero strong AI progress or foundation. Entirely different systems. Prof Searle taught/teaches that true strong AI is impossible. Maybe he's right.

                The interesting part to me is how these systems have demonstrated to us how much of our daily work lives don't require any true intelligence.

                • Your prof may be correct: I'm not willing to say strong AI is impossible, but it seems like it is at the least not very close to being solved. Of course, some of these weak AI things we are working on may someday be a piece of that puzzle eventually.
                  Lay people are always going to conflate AI with "intelligence" and anthropomorphize the AI processing as "thinking". It's like "aid worker": people think they are mostly worried about gathering and delivering goods for charity, sending food to Africa, handing ou

                  • IMO, and backed by nothing, I think the weak AI routines will not be a part of strong AI, if we ever get such a thing. I believe a strong AI system will have its own yet-to-be-created systems that can produce similar and better results than a weak AI system, but from some other direction and technology we don't have yet.

                    Weak AI is like a local maximum on the road to true strong AI. I don't think it'll ever get there, nor be a part of the ultimate solution to truly self-aware, thought-capable, consciousness-hav

      • by f00zbll ( 526151 )

        Bullcrap. Your ignorance is showing. Backprop was a big breakthrough in 1986. DL was an even bigger breakthrough in 2006. GANs were a game-changer in 2014.

        That's not the whole picture. Even though backprop was popularized in 1986, the hardware wasn't powerful enough and the datasets were too small. The combination of GPU acceleration with CUDA and gigantic datasets of over 100 million examples was needed to make a breakthrough. Remember, there were several AI winters.

        The turning point for artificial neural networks wasn't the

      • Backprop: "The term backpropagation and its general use in neural networks was announced in Rumelhart, Hinton & Williams (1986a), then elaborated and popularized in Rumelhart, Hinton & Williams (1986b), but the technique was independently rediscovered many times, and had many predecessors dating to the 1960s.[6][17]"

        DL: "Some sources point out that Frank Rosenblatt developed and explored all of the basic ingredients of the deep learning systems of today.[25] He described it in his book "Principles o

  • killedbymicrosoft.com
    • > Tay was a chatbot that was shut down after only 16 hours because it began to post inflammatory and offensive tweets through its Twitter account.

  • Not that ChatGPT isn't fantastic bang for the buck, but replicating the work doesn't cost billions. I reckon most large tech companies are rolling their own for 100x cheaper than what MS is paying here.
  • Comment removed based on user account deletion
  • Paradigm change (Score:4, Insightful)

    by bradley13 ( 1118935 ) on Tuesday January 10, 2023 @06:01AM (#63194748) Homepage

    I just want to toss this out there...

    It is absolutely fair to be skeptical as to the utility of ChatGPT at the moment. However, sign up for an account. Go ask it some questions. As long as you don't deliberately lead it down a rabbit hole, the answers are astoundingly good. ChatGPT would be entirely able to pass the Turing test. Its answers on any particular subject are better than the answers you will get from 3/4 of the population.

    Imagine what the next generation of this technology will do.

    Honestly, this is a game changer that will have far-reaching consequences. Want to write some marketing text? Some ordinary journalism? A report for your boss? Technical documentation for your current project? The PC had a huge impact - compare office work in the 1970s to the 1990s. ChatGPT and its successors will have the same level of impact: an office job in the 2010s will look very different in the 2030s.

    As always, there will be displacement. Some jobs will disappear. Other, new jobs will appear. The future is a bit scary, but always exciting...

    • I can only assume that the people dissing GPT-3 haven't played with it very much (if at all). Sure, it can be imperfect, but it gives pretty decent output for even complex tasks. There's a lot of text people read now that was generated with GPT-2, the older and much less powerful version of this tech: copywriters and journalists have been using tools with it built in for a while now (it's a bit of a well-known "secret" in copywriting: no one wanted to talk about it, but a huge number of them are doing it. D

    • by f00zbll ( 526151 )

      Yes, ChatGPT does some cool things, but it also eats much more electricity than simpler methods. How many concurrent searches does Google handle today? Can you scale ChatGPT to handle the same amount of traffic and not fall over? OpenAI is already reporting performance issues with the current load.

      The beauty of Google's approach is that it's easy to parallelize, easy to scale and easy to refine. The same isn't true of large language models. You need a lot of A100 or custom ASIC racks to do the same thing. There's still a lot of work ahead to make GPT-based solutions scale without needing crazy amounts of GPU+CPU power.

      • Yes, ChatGPT does some cool things, but it also eats much more electricity than simpler methods. How many concurrent searches does Google handle today? Can you scale ChatGPT to handle the same amount of traffic and not fall over? OpenAI is already reporting performance issues with the current load.

        The beauty of Google's approach is that it's easy to parallelize, easy to scale and easy to refine. The same isn't true of large language models. You need a lot of A100 or custom ASIC racks to do the same thing. There's still a lot of work ahead to make GPT-based solutions scale without needing crazy amounts of GPU+CPU power.

        If there's enough utility people will pay for it. If they pay actual money you can afford to scale.

        • by f00zbll ( 526151 )

          If there's enough utility people will pay for it. If they pay actual money you can afford to scale.

          That's the $$ question. How much would it have to cost to break even? How much do four A100s cost today? A quick Google search says around $60,000 for one 2U rack mount. That's just the server, not including network gear, racks, electricity and other overhead. How many data centers can handle the power demands of 10K 2U servers? Sure, some people will pay for it, but the question is how much. $20 a year is about as mu
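
          Taking the comment's own figures at face value (roughly $60K per 2U box with 4x A100, a hypothetical fleet of 10K such servers, and an assumed $20/year price point), the back-of-the-envelope math looks something like this:

            cost_per_server = 60_000      # one 2U box with 4x A100, per the quick Google search above
            fleet_size = 10_000           # hypothetical server count from the comment above
            price_per_user_year = 20      # assumed yearly price point

            hardware_cost = cost_per_server * fleet_size
            users_to_break_even = hardware_cost / price_per_user_year

            print(f"servers alone: ${hardware_cost:,}")                                    # $600,000,000
            print(f"users needed to cover just the hardware: {users_to_break_even:,.0f}")  # 30,000,000

          And that is before network gear, racks, power, cooling and staff, which is the point being made above.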

  • by clickclickdrone ( 964164 ) on Tuesday January 10, 2023 @06:27AM (#63194792)
    Are they going to use it to write Windows 14 in a few years' time? I've seen apps written by the bot, so it's not entirely unfeasible.
  • This being Microsoft, my main worry is that they'll end up drowning the whole thing in licensing. Microsoft has a track record here.

    Also, being funded (and eventually owned) by a large corporation, there's the potential for additional crackdowns on thought-criminals. As if the recent automated language-policing wasn't ridiculous enough already...

  • I would think that the famous Tesla short started by Bill Gates has paid for this product.
    A bit of laughter for all of us.

"The great question... which I have not been able to answer... is, `What does woman want?'" -- Sigmund Freud

Working...