Microsoft AI

Microsoft CEO Says AI Is a Tidal Wave as Big as the Internet (bloomberg.com) 111

An anonymous reader shares a report: In 1995, Microsoft co-founder Bill Gates sent a memo calling the internet a "tidal wave" that would be crucial to every part of the company's business. Nearly three decades later, Microsoft's current leader, Satya Nadella, said he believes the impact of artificial intelligence will be just as profound. "The Bill memo in 1995, it does feel like that to me," Nadella said on this week's episode of The Circuit With Emily Chang. "I think it's as big." Central to the latest attempt to transform Microsoft is OpenAI, a startup whose generative AI technology has created so much buzz that it snagged a $13 billion commitment from the software giant.

"We have a great relationship," OpenAI Chief Executive Officer Sam Altman said on The Circuit. "These big, major partnerships between tech companies usually don't work. This is an example of it working really well. We're super grateful for it." The alliance has plenty of critics. The loudest is Elon Musk, who co-founded OpenAI with Altman and then split from the company, citing disagreements over its direction and the addition of a for-profit arm. He has said OpenAI is now "effectively controlled by Microsoft." In response to a question about Musk's critiques and the prospect that Microsoft could acquire OpenAI, Altman said, "Company is not for sale. I don't know how to be more clear than that."

  • by mukundajohnson ( 10427278 ) on Thursday August 17, 2023 @04:50PM (#63775616)
    How's trying to get people to use Bing going?
  • "Salesman who's... (Score:5, Informative)

    by VeryFluffyBunny ( 5037285 ) on Thursday August 17, 2023 @04:51PM (#63775618)
    ...selling a product/service says it's big. Blah, blah, blah." FTFY.
    • CEOs get their position by being good at politics, not by understanding technology. These CEOs are all falling for, and spreading, the hype, based on FOMO.
    • Dude this is bigger than Blockchain!
      • I've got an idea. How about we create a marketplace for some kind of tokens that only AI can generate & every one is unique & special & only for rich, stupid* people?

        * stupid = Puts great value on things they don't understand & don't consult with independent, trustworthy 3rd parties for advice (AKA due diligence).
  • It's just a linear regression. With a shitton of parameters and costing a ton of energy. But it's a linear regression. And the results are meh for anything of importance.
    • It's just a linear regression.

      Yep. And aren't you upset you didn't think of it 5 years ago which would now have netted you stupid amounts of money. In other news, advanced semiconductors are just physics, so we can dismiss the transistor as insignificant, right?

      • Yep. And aren't you upset you didn't think of it 5 years ago which would now have netted you stupid amounts of money.

        Who has earned money from AI? (And I mean, actually earned it, as opposed to selling their product to another company who loses money from it.)

        • AI? A lot of people. AI has been used for a while now, not in how you see it, but making decisions based on unstructured data. AI is what we’ve been training with Captchas all these years. Companies are selling visual analysis products using AI.

          LLMs? Not yet. I think $30/mo for Microsoft 365 will be one of the first to obtain revenue.
          Call centers are waiting, however; this is right around the corner for them.

          • The issue though, and one raised by other posters is that LLM tech is largely open-sourced. Which makes direct monetization difficult.
            Bundling it with a pre-existing product like MS365 is low hanging fruit, and not even guaranteed to meaningfully increase sales of the product. Most of the people who would have gotten 365 have likely already gotten it one way or another.
            Startups selling services that are effectively just piggybacking off of a pre-existing model like GPT-4 have also been facing seriously high

          • AI has been used for a while now, not in how you see it, but making decisions based on unstructured data

            That's data science.

        • Who has earned money from AI?

          Everyone working in the AI field. Literally they have been the top paid IT jobs for the past year.

          And I mean, actually earned it, as opposed to selling their product to another company who loses money from it.

          Well, if you discount the creation of something which you sell to others as earning, then no one has. But then no one has ever earned anything other than a salary, by your definition. And the highest IT salaries right now are in AI.

    • It's not linear regression; AI is inherently non-linear (more specifically, the sigmoid in neural nets is nonlinear). That is why it makes better, more detailed fits and classifications. The training is similar though.
      • by narcc ( 412956 )

        That's not accurate. AI is a very broad term that covers everything from decision trees to neural networks, linear regression included. There is nothing 'inherently non-linear' about AI.

        Neural networks can be non-linear, but that's not an absolute requirement. While non-linear activation functions, like sigmoid, are very common, they can also be linear. As it happens, a neural network that uses a linear activation function is a linear regression. That's where all that "it's just a linear regression
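
        To make that concrete, here's a tiny NumPy sketch (mine, purely illustrative, not anything from the thread): with an identity (linear) activation, two stacked layers collapse into a single linear map, i.e. ordinary linear regression; a sigmoid in between is what breaks the collapse.

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.normal(size=(5, 3))           # 5 samples, 3 features
        W1 = rng.normal(size=(3, 4))          # "layer 1" weights
        W2 = rng.normal(size=(4, 2))          # "layer 2" weights

        # Identity activation: two layers are just one matrix product
        two_layer = x @ W1 @ W2
        one_layer = x @ (W1 @ W2)
        print(np.allclose(two_layer, one_layer))              # True -> still a linear model

        # A non-linear activation between the layers breaks the collapse
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        print(np.allclose(sigmoid(x @ W1) @ W2, one_layer))   # False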

        • I don't think so; machine learning is usually the term to refer to linear methods, neural networks are referred to as AI. The 'black box' of neural networks is non-linear, and you can use any method you like to train it. https://medium.com/swlh/why-are-neural-nets-non-linear-a46756c2d67f
          • by narcc ( 412956 )

            I don't think so

            That's why I corrected you. It appears, however, that you'd rather stick with your uninformed assumptions. What a shame.

            machine learning is usually the term to refer to linear methods, neural networks are referred to as AI.

            Complete nonsense!

            The 'black box' of neural networks is non-linear,

            It's hardly a 'black box' and it's only non-linear if your activation function is non-linear. I've explained this to you already.

    • by timeOday ( 582209 ) on Thursday August 17, 2023 @05:49PM (#63775820)
      You're... a bit behind

      In 1969, a famous book entitled Perceptrons by Marvin Minsky and Seymour Papert showed that it was impossible for these classes of network to learn an XOR function. It is often believed (incorrectly) that they also conjectured that a similar result would hold for a multi-layer perceptron network. However, this is not true, as both Minsky and Papert already knew that multi-layer perceptrons were capable of producing an XOR function. (See the page on Perceptrons (book) for more information.) Nevertheless, the often-miscited Minsky/Papert text caused a significant decline in interest and funding of neural network research

      https://en.wikipedia.org/wiki/... [wikipedia.org]
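
      As a toy illustration of that distinction (my own hand-wired example, not from the book or the article): a single threshold unit can't compute XOR, but a fixed two-layer perceptron can.

      import numpy as np

      step = lambda z: (z > 0).astype(int)            # threshold activation

      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # all XOR inputs

      # Hidden layer: unit 0 computes OR, unit 1 computes AND
      W_h = np.array([[1.0, 1.0],
                      [1.0, 1.0]])
      b_h = np.array([-0.5, -1.5])

      # Output unit: OR AND NOT(AND) == XOR
      w_o = np.array([1.0, -1.0])
      b_o = -0.5

      hidden = step(X @ W_h + b_h)
      print(step(hidden @ w_o + b_o))                 # [0 1 1 0] -> XOR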

      • Exactly. The math is ancient. I've coded the perceptron before.
        • I'm 99% sure you're being coy, but you know deep nets are universal function approximators (not linear) because they use nonlinear activation functions between each of the several stacked linear layers, right?
    • by LordofWinterfell ( 90845 ) on Thursday August 17, 2023 @06:17PM (#63775870)

      Ok, i have to bite.

      The magic is in the interface. Who cares about the hype machine. Look at the reality of what this can do - hold a conversation.

      That is huge, not that it’s an amazing emoting machine, more that people can talk to it, and it can derive what they want, and provide a response. This is why voice portals suck so bad - they could only understand pre-defined responses. Key words, no context. LLMs can utilize the context and previous questions to refine their guess of what you want.

      For the masses, this means asking a question to an agent and getting a response in context. It means not having to keyword, or abstract your intentions.
      With specific db training, and not using the Internet, these LLMs can replace most telephone support, especially tier I.
      It can replace simple programming, and offer programming on the fly.

      It gives casual users extraordinary power. A creative could define loose parameters, and have an AI develop a basic application based on their idea, which can be further refined - but the initial app can be created by someone with no programming experience. For example, an LLM trained on specific programming could create a new instrument plugin for Logic by being asked to create a plugin based on a specific sound or parameter.
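
      Rough sketch of what that "context, not keywords" difference looks like in practice (call_llm here is a hypothetical stand-in for whatever hosted or local model you'd actually wire up; the message format just mirrors how chat-style APIs pass history):

      def call_llm(messages):
          # Hypothetical stand-in: a real version would send the whole message
          # list to a hosted or locally-run model and return its text reply.
          return "(model reply)"

      history = [
          {"role": "system", "content": "You are a tier-1 support agent for a phone company."},
          {"role": "user", "content": "What plans do you offer with unlimited data?"},
      ]
      history.append({"role": "assistant", "content": call_llm(history)})

      # A keyword-driven voice portal has no idea what "the cheaper one" means;
      # an LLM can resolve it because the earlier turns ride along with the request.
      history.append({"role": "user", "content": "Which of those is the cheaper one?"})
      print(call_llm(history))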

      • Yeah on that front you're right. The achievement is that it can fool someone into thinking you're talking to a human. Sort of. The killer app is the chat itself.
      • For the masses, this means asking a question to an agent and getting a response in context. It means not having to keyword, or abstract your intentions.

        The masses are looking for an answer to the question. LLMs can't give an accurate answer.

      • by vyvepe ( 809573 )

        With specific db training, and not using the Internet, these LLMs can replace most telephone support, especially tier I.

        Well, considering that current LLMs mislead you about 30% of the time, they are about as good as tier I support :)
        But we can hope LLMs will get better.

      • by mjwx ( 966435 )

        Ok, i have to bite.

        The magic is in the interface. Who cares about the hype machine. Look at the reality of what this can do - hold a conversation.

        That is huge, not that it’s an amazing emoting machine, more that people can talk to it, and it can derive what they want, and provide a response. This is why voice portals suck so bad - they could only understand pre-defined responses. Key words, no context. LLMs can utilize the context and previous questions to refine their guess of what you want.

        For the masses, this means asking a question to an agent and getting a response in context. It means not having to keyword, or abstract your intentions.
        With specific db training, and not using the Internet, these LLMs can replace most telephone support, especially tier I.
        It can replace simple programming, and offer programming on the fly.

        It gives casual users extraordinary power. A creative could define loose parameters, and have an AI develop a basic application based on their idea, which can be further refined - but the initial app can be created by someone with no programming experience. For example, an LLM trained on specific programming could create a new instrument plugin for Logic by being asked to create a plugin based on a specific sound or parameter.

        Erm, "AI" has already replaced T1 phone positions in a lot of places... Hence we have the "in a few words, please tell me what you're calling about". It really struggles with regional accents in the UK (we have over 40 compared to the US's 6-ish).

        AI has already made a lot of careers dead or endangered. If you're looking for flights do you go to a travel agent or to Google Flights/SkyScanner(if you like getting ripped off)?

        When was the last time a door to door salesman popped round? You want something,

    • Maybe you're holding it wrong.

    • by vyvepe ( 809573 )

      It's just a linear regression. With a shitton of parameters and costing a ton of energy. But it's a linear regression. And the results are meh for anything of importance.

      That is not true. Large Language Models (LLMs) use the Transformer architecture. They are Recurrent Neural Networks (RNNs). RNNs are Turing Complete. RNNs can evolve into General Artificial Intelligence (GAI). I'm sceptical about it; I think it is very unlikely to happen anytime soon. But it is possible that an RNN will turn into a GAI.

  • by Opportunist ( 166417 ) on Thursday August 17, 2023 @05:00PM (#63775656)

    In other words, try to create their own version, fail miserably to get people to buy into their more expensive and inferior version, hastily cobble together something that kinda-sorta-maybe works in the framework of the thing they tried to supplant, fail at that too, pretend to run their own version of the most popular application while actually running the competition's version (but selling their shoddy product to unsuspecting corporations), desperately buy out some startups to have at least something to show for, use their market monopoly in the OS market to push their own crappy version of an interface as the de-facto standard even though everyone has to create their content with a specific "if people come with the crappy MS version, run this code" passage in their server application...

    Thanks. But no thanks.

    • They do seem to already be following the classic Microsoft "Embrace, Extend, Extinguish" playbook with AI.

      Instead of making a free version of Netscape with Internet Explorer, this time they're making a free version of ChatGPT and calling it "New Bing".

      Hell... they're even forcing users to use Microsoft Edge to use it. Some things never change over there.

      • New Bing... is this like New Coke, i.e. an even crappier version so when they bring back the old one people think it's the best thing since sliced bread?

    • In other words, try to create their own version,

      Nah, Nadella knows their limitations. They just bought the tech from OpenAI.

      • So they do learn and take the shortcut this time? I.e. don't try to roll their own, fail, get egg on their face, pretend they use their own dog shit while actually using the working product (but selling the shit to their customers), but instead just omit that step and buy out someone who knows what they're doing right at the get-go?

        Aww. That's cheating.

        • but selling the shit to their customers

          They haven't gotten to that point yet. No one is buying so far.

          • Sure they are. Customers are buying OpenAI's API, and as OpenAI expands so does its footprint on Azure. OpenAI is basically just a friendly looking front for Microsoft.

            Microsoft doesn't have to worry about selling shit to customers and they don't have to worry about spearheading the technology. They've set it up so they're the landlord that owns the infrastructure.

            • I don't understand what you are saying. How is OpenAI a friendly looking front for Microsoft?
              • Because they are just propped up by Microsoft's money and cloud infrastructure. The Microsoft deals essentially make them owned by Microsoft in every sense except the 50% + 1 share sense. In political terms, they're a puppet state.

  • by e065c8515d206cb0e190 ( 1785896 ) on Thursday August 17, 2023 @05:01PM (#63775660)
    Microsoft CEO has something to sell you that's AMAZING and is gonna MAKE YOU TONS OF MONEY if you could only PAY HIM FIRST
  • by crunchy_one ( 1047426 ) on Thursday August 17, 2023 @05:02PM (#63775666)

    The hyperbole machine going into overdrive is a sure sign that the AI bubble is about to burst!

    "I think it's as big" says Nadella, comparing AI to the internet. Sure it is.

    Case in point: I've been sampling GitHub Copilot for a couple of months, and I can truly say it's a questionable product. Fully half its code suggestions are syntactically incorrect, and those that pass the syntax check had best be read very carefully due to serious semantic issues. It's almost as if Copilot was trained with code copied from random GitHub repositories without any regard for suitability.

    • by Petersko ( 564140 ) on Thursday August 17, 2023 @05:18PM (#63775728)

      I think the AI bubble bursting is a good metaphor, like the dot.com bubble of 2000. By that I mean that the change is coming, and it's coming very, very fast... the hype is just cresting prematurely. The avalanche will clear the decks, but it's not going to halt the change. I don't disagree with the premise.

    • There's no doubt that the bubble will burst at some point. However, I'm sort of with him on this one. I differ quite a bit on timing though - AI is at the "300 baud modem and expensive phone line to talk to a BBS" stage of the Internet. That is, it's got a long way to go before anyone's even going to smell the sea air, let alone get wet from the tidal wave he's describing.

      I suppose from a CEO perspective, you need to be "in AI" to capitalise on whatever it becomes. It's a long, long game though - and the

  • by Okian Warrior ( 537106 ) on Thursday August 17, 2023 @05:05PM (#63775684) Homepage Journal

    Here's a recap from a previous post:

    In the beginning of May, the trained LLaMA model weights (from Meta) were leaked online. The open source community jumped on it and made several tremendous breakthroughs over the course of 2 months (!) to the point where anyone can have an unrestricted AI running on their home computer right now.

    As proof of concept, I downloaded and installed the "Stable Diffusion" text-to-image suite yesterday. With a couple of minor hiccups, the whole thing went seamlessly and now I have a text-image generator running on my home computer. Looking around for instructions I found a lot of "I'm a complete newbie, how do I install Stable Diffusion" posts from people who are wannabe artists, not engineers. The engineer community has made complete install scripts to do this - it just works.
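
    (For reference, the bare programmatic route - skipping the packaged install scripts - is roughly this; a sketch using Hugging Face's diffusers package, with the model ID and the CUDA GPU as assumptions:)

    from diffusers import StableDiffusionPipeline

    # Downloads the weights on first use; wants a GPU with a few GB of VRAM
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    pipe = pipe.to("cuda")

    image = pipe("a desert canyon at sunset, matte painting").images[0]
    image.save("wallpaper.png")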

    And the AI quality is nearly as good as the walled-garden versions. There was a breakthrough in training, where the open source people figured out how to get extremely good training data, and use LoRA differential training, which reduces the size of the trainable matrices by a factor of 10,000 or so.
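
    (For anyone wondering what the LoRA trick actually does: instead of updating a layer's big frozen weight matrix W, you train a small low-rank pair A and B and use W + A @ B. A back-of-the-envelope sketch - the layer width and rank below are made up, and the real factor depends on the shapes and rank you pick:)

    import numpy as np

    d, r = 4096, 8                        # layer width and LoRA rank (illustrative)

    W = np.zeros((d, d))                  # frozen pre-trained weight (stand-in)
    A = np.random.randn(d, r) * 0.01      # trainable low-rank factor
    B = np.zeros((r, d))                  # trainable low-rank factor

    W_finetuned = W + A @ B               # effective weight after fine-tuning

    print(W.size)                         # 16,777,216 params if trained directly
    print(A.size + B.size)                # 65,536 trainable params instead (~256x fewer)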

    You can now customize your AI for specific purposes, and retraining takes an hour (on a beefy laptop, or an evening on an older PC). There are communities online who specialize in designing targeted AI training sets to generate specific output topics. I grabbed the "star wars environment" and added it to my system and... yep, it generates some pretty good wallpapers in the manner of a Star Wars screenshot on command.

    All of this happened over the course of 2 months!

    Open source is eating everyone's lunch. The only hope the big corps have is to invent something new and not tell people about it, then release it to the public and gain market share for a couple of months until open source catches up.

    One could argue that in those two (or three) months earlier this year, open source advanced AI by 5 years' worth of innovation and improvement.

    There's no way the big companies can keep up with this pace of innovation, especially since anyone who makes an improvement can gain recognition by publishing a paper detailing it.

    Big companies doing AI is a distraction. If you're really into AI, look at what the open source community is doing. (This one [nvidia.com], for example.)

  • The problem is only a select few know how to use it to its potential beyond "write a school paper on mitochondria."
  • by Yo,dog! ( 1819436 ) on Thursday August 17, 2023 @05:18PM (#63775726)
    1995 is nearly 3 decades ago, yet Bill Gates was already late to the game. The Mac supported TCP/IP long before Windows.
    • That's true. Unix and Linux were there long before Windows. But today, 95%+ of internet traffic is not from Linux desktops, but Windows, Mac, Android, and iOS. So while they were late, they were quite successful.

      In this case, Microsoft beat everyone else to the punch. They invested heavily in OpenAI before LLMs were widely known even in the tech community.

  • Not of his own volition, he was kind of coerced into it. It would have been awkward to say no. Watch the context around how he says it.

  • by kalieaire ( 586092 ) on Thursday August 17, 2023 @05:26PM (#63775744)

    with the internet, regardless of how you look at it, it's simply (an oversimplification, admittedly) a platform which you build everything on top of.

    ai, on the other hand, will be a collaborative scenario where it will work with us every step of the way.

    https://www.ted.com/talks/shya... [ted.com]

    eventually ai will evolve to the point that it can work independently of networks, as a companion, it'll take us to the stars.

    the internet might be mighty, but it's only a platform to connect everything together, with ai, it'll always be with you.

    • That's optimistic.
    • Agreed that it's bigger, but I doubt it will ever work independently of networks. It will always take a lot of people to train AI, and it will always need to be served from large-scale server farms. It's just not economically feasible to shrink-wrap it and put it on a chip.

      • by narcc ( 412956 )

        Predicting the future is hard. Predicting the past, however, should be trivial. Still, you've somehow managed to fail. People have been running these things locally for several months now.
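
        For what it's worth, "running these things locally" is down to a handful of lines these days - a sketch using the llama-cpp-python bindings; the package, model file name, and options are assumptions and move fast:

        # pip install llama-cpp-python, plus a quantized model file downloaded separately
        from llama_cpp import Llama

        llm = Llama(model_path="./models/llama-7b-q4_0.bin")   # hypothetical local file

        out = llm("Q: Is the AI hype justified? A:", max_tokens=64, stop=["Q:"])
        print(out["choices"][0]["text"])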

        • People have been running these things locally for several months now.

          Perhaps, in the sense that they are executing the software on a small, pre-trained data set. That's like saying you can run a search engine on a desktop. Sure, you can do that, but you cannot scale to do what Google does, on a desktop.

          Also, there is the aspect of training. Any AI that you can train, by yourself, in one location, is a small project indeed.

          • by narcc ( 412956 )

            You really don't have a clue, do you? Well, it's not like I actually expected your opinion to change in light of evidence to the contrary.

            • I'd love to see your evidence, if you have some! I'd a lot rather debate the evidence, than disparage each other. Maybe since I don't have a clue, you can help educate me!

              Do you have evidence that these "local" LLMs don't need training by large numbers of distributed people?

              • by narcc ( 412956 )

                Well, you could continue to wallow in your own ignorance, or you could do a quick search. Hell, you could just read the comments here, where other people have discussed the amazing work that hobbyists have managed.

                • Yes, I love to wallow in ignorance, thank you for your amazing insights! If it's so easy to come up with evidence, how about you copy and paste some links here...I sent you mine.

                  • by narcc ( 412956 )

                    Yes, I love to wallow in ignorance,

                    That's what I thought. I'm glad that you're finally able to admit this.

                    I should have known better than to give a known willfully ignorant troll the benefit of the doubt...

            • Here's my evidence:

              OpenAI hired at least 1,000 people to train their AI. https://siliconangle.com/2023/... [siliconangle.com]

              Do you have evidence that says your "local" AIs don't need this kind of distributed training?

              • by narcc ( 412956 )

                Does it hurt to be as stupid and willfully ignorant as you?

                • Yes, it hurts a great deal to be so stupid and willfully ignorant at the same time.

                  I take it that since you have nothing but insults, you have no actual sources to cite.

                  • by narcc ( 412956 )

                    Your ignorance is not my responsibility. Still, just in case you're actually special needs: a tutorial [letmegooglethat.com]

                    • Very good, a link! Yeah, I've used LMGTFY before, very cool site!

                      So from the GitHub repo: https://github.com/tloen/alpac... [github.com] I find this little gem under Notes:

                      We can likely improve our model performance significantly if we had a better dataset. Consider supporting the LAION Open Assistant effort to produce a high-quality dataset for supervised fine-tuning (or bugging them to release their data).

                      And that link in turn leads to this page: https://projects.laion.ai/Open... [laion.ai]

                      which states:

                      Open Assistant is a project organized by LAION and developed by a team of volunteers worldwide. You can see an incomplete list of developers on our website.

                      The project would not be possible without the many volunteers who have spent time contributing both to data collection and to the development process. Thank you to everyone who has taken part!

                      So there you have it, your example "local" AI required training by a whole lot of people. That thing you install "locally" is just the output of all that labor, using distributed systems.

                      THAT was my point all along, these things require an entire ecosystem, it's no

    • You're describing general AI, which is several orders of magnitude beyond the stuff called "ai" today.
    • you haven't grown hair on your chest, have you?
      human beings are best-in-class sentient intelligence and we still need to keep learning after college.
      You think the A.I. trained to run the space shuttle is going to be able to handle a starship? c'mon bro.
    • the internet might be mighty, but it's only a platform to connect everything together, with ai, it'll always be with you.

      No commercial AI system will be allowed to function without an Internet connection. These days you can't even play a game of Solitaire without an Internet connection.

      Internet > AI

  • So many people say it will be big that it won't be. The crowd is all hyped up and is rarely right until it is already obvious that something is big.

    Rarely does anyone accurately predict that something is going to be big; no one sees the thing that ends up changing everything until it has already changed everything. If you have to advertise/say it is going to be big, it has already failed.

    And right now, with the development of AI (Eliza on steroids), everything seems to be built and hyped up on simple pa

    • Most of the AI stuff is already in place. However, it is the end applications that make or break things. For example, will the latest AI innovations finally let McDonald's run a 100% automated kitchen? Will we see cars with level 5 autonomy, where we can tell the vehicle to go from the parking garage to the cabin in the woods, with the vehicle not just handling urban, highway, and pavement driving, but going off-road in Moab-gnarly areas without flipping, snapping differentials or axles, or getting

  • He left out the part about being a tidal wave of bullshit
  • by sinij ( 911942 ) on Thursday August 17, 2023 @05:32PM (#63775766)
    Someone should come up with AI to complete forms for medical professionals. I heard from friends who are doctors that significant time is spent on admin and not on treating patients. Seems like a waste.
    • Simple solution: have the doctors enter their own data and skip the forms. Most other professional occupations seem to be able to handle it. Chemist, Pharmacist, Physicist, Engineer... are the doctors' fingers broken or what?
      • by sinij ( 911942 )
        Data entry does not seem like a good use of a doctor's time to me.
        • Tough titties. Everyone else has to do it. Doctors need to get over themselves and just get the fucking work done. Probably could be done in less time than they spend complaining about it.
          • by troff ( 529250 )

            The fucking work is being a doctor, diagnosing and treating people. The paperwork they REALLY need to do is fill out the patient history as they go. The rest of the forms and admin can be done by either practice administrators or software connected to the practice database like Primary Sense or Cubiko. Congratulations on saying the stupid shit only the ignorant morons say, tits for brains.

    • Or perhaps they could hire secretaries for doctors to do the admin.

    • the lawyers will love that one!
    • It is a waste, and that's the whole point. You can't create a technical solution to a political problem.

  • Buy my products in the next decade, they are important.
  • ... calling the internet a "tidal wave" ... Yet he apparently trivialized the Internet in the first edition of his book, "The Road Ahead":

    ... "I see little commercial potential for the internet for the next 10 years," Gates allegedly said at one Comdex trade event in 1994, as quoted in the 2005 book "Kommunikation erstatter transport." .

    Indeed, in his 1995 book "The Road Ahead," Gates would make one of his most well-known blunders: he wrote that the internet was a novelty that would eventually give way to something much better. "Today's Internet is not the information highway I imagine, although you can think of it as the beginning of the highway," Gates wrote. ...

    https://www.businessinsider.co... [businessinsider.com]

    • Technically, he may have been correct: it took 10+ years until the Internet came to be owned by a few commercial giants. And he probably imagined the information highway of today, where the gatekeepers squeeze out a lot of money.

      • Not really. If he were correct, why did he change the book for the second edition? Face it, he missed the Internet.
  • To keep the attention off all the Musk cock you're gobblin'. Mmm. Mm.

  • It's easy to identify a tidal wave when it has already washed over you. Bill was a few years late in his "prediction".

  • You work for a tech website and are this stupid?
  • Somebody is desperate to stay relevant after they demonstrated they are incapable of running a cloud securely.

  • I think people will find out that, like all new technologies in history, there will turn out to be problems that are well suited to AI, and problems that are not.
