Salesforce CEO Benioff Says Microsoft's Copilot Doesn't Work, Doesn't Offer 'Any Level of Accuracy' And Customers Are 'Left Cleaning Up the Mess' (x.com)

Salesforce founder and chief executive Marc Benioff has doubled down on his criticism of Microsoft's Copilot, the AI-powered tool that can write Word documents, create PowerPoint presentations, analyze Excel spreadsheets and even reply to emails through Outlook. In a post on X, he writes: When you look at how Copilot has been delivered to customers, it's disappointing. It just doesn't work, and it doesn't deliver any level of accuracy. Gartner says it's spilling data everywhere, and customers are left cleaning up the mess.

To add insult to injury, customers are then told to build their own custom LLMs. I have yet to find anyone who's had a transformational experience with Microsoft Copilot or the pursuit of training and retraining custom LLMs. Copilot is more like Clippy 2.0.


Comments Filter:
  • ...Copilot. Would you like help with:

    Throwing your hands up in despair?
    Googling alternatives?
    Creating a meme comparing Copilot to Clippy? (Spoiler: I was way cuter, and I didn’t spill data everywhere!)

    And hey, at least with me, you knew what you were getting—a friendly paperclip, not a half-baked AI with a cleanup crew!
  • I don't follow this bubble very much, so I really don't know if they have a competing product. Having said that, I will worry about AI when it congeals into something proven.
    • They either have one or have announced intent to introduce one, I forget which.

      If it's as good as Salesforce, it will be a million times shittier than Copilot.

      • by Rosco P. Coltrane ( 209368 ) on Friday October 18, 2024 @09:56AM (#64874385)

        If it's as good as Salesforce, it will be a million times shittier than Copilot.

        As someone who was forced to clean up one of our git repos at work in which a junior programmer committed dozens of Copilot-generated changes, that's a scary thought.

        (And yes, the dude was told he'd be summarily fired if he used that shit ever again without asking permission)

        • Nobody was ever fired for choosing Microsoft (Copilot).

          • Well that's about to change with AI, because you should have seen the mess. We were supposed to ship our new firmware update yesterday and the release date has been pushed back one week just to make sure the cleaned-up code is sane and passes production tests again.

            Copilot is not welcome at our company, and we're a Microsoft cloud customer through and through sadly - mostly because our IT guy is utterly incompetent. That should tell you how bad Copilot is.

            • ...we're a Microsoft cloud customer through and through sadly - mostly because our IT guy is utterly incompetent.

              I have found that those two things almost always go together. The deeper a shop is into Microsoft, the more incompetent its leaders tend to be. I witness it personally time after time.

    • They made a lot of fuss about theirs a few weeks ago during their annual cult/networking/expense-tickets-to-vegas thing. Theirs is called "Einstein" so you know it has to be smart; and they are super all in on paying them to use 'agents' to chatbot customers, along with 'generative' to increase your sales team's spamming efficiency.

      In one sense I can understand why he's rubbishing Microsoft so vigorously: not only does 'copilot' deserve it; MS has a fairly obvious interest and fairly obvious ability(at l
    • Having said that, I will worry about AI when it congeals into something proven.

      It works relatively well for doing web searches on Bing. Mostly gets past all the SEO.

  • You should be using "Einstein AI" [salesforce.com], it's so much better, it says right there this chatbot is so much better than my old one!

    What? No, I don't stand to make billions of dollars from this. How preposterous.

  • by caseih ( 160668 ) on Friday October 18, 2024 @09:26AM (#64874315)

    How do I disable copilot? And it gives a pretty accurate answer to that one. It's literally the only thing I've ever used copilot for.

  • He's not wrong - but there's apparently still an upside to copilot. I've personally not really seen it, but LLMs in general can be helpful for filling out "puff" in documents. Take this comment for example, I could write all of it out, or I could use an LLM:

    You know, folks, Copilot is a tool that can be really, really helpful—tremendous, actually. But let me tell you, it doesn’t always get it right. Sometimes it misunderstands the context and gives suggestions that are just plain wrong. You

    • by Anonymous Coward

      Sometimes it misunderstands the context and gives suggestions that are just plain wrong.

      Indeed. Some of us have access to Copilot licenses in Teams. I asked it to summarize yesterday's standup meeting (which, coincidentally, nobody remembered to record) and it hallucinated a whole legitimate-sounding summary of key points, actions and blockers... most of which had nothing whatsoever to do with the actual meeting.

    • Take this comment for example, I could write all of it out, or I could use an LLM:

      Just write the prompt into your comment, and save us the time from reading the filler nonsense.

    • Did you really have to use the Trump LLM for this?

      Ugh.. Make AI Great Again..

  • Says the guy that owns a service that attaches HTML files with a rerouter in the head instead of putting the link in the email like a normal person. Apparently he's never heard of Kryptix.
  • by bradley13 ( 1118935 ) on Friday October 18, 2024 @09:44AM (#64874359) Homepage

    He's a clueless dweeb, who listened to sales pitches from clueless dweebs at Microsoft. He probably hoped he could fire half of his developers. That would really boost his bonus! Turns out that's not the case, so he's disappointed.

    • Many CEOs are clueless, but Marc Benioff probably isn't.
    • It's fun to say it this way, but the marketing hype around AI has been huge and has affected corporate stock market valuations to the tune of billions or trillions of dollars. Any CEO not playing along with this takes the risk of being dumped by the board of directors.
  • by Baron_Yam ( 643147 ) on Friday October 18, 2024 @10:05AM (#64874405)

    Not the political kind... Just "hey, maybe let us not jump blindly into this trend without careful consideration and some reasonable testing".

    "AI" is supposed to be doing all sorts of things that it clearly cannot, and people are losing their jobs to it while we're being told it's magically creating new ones.

    This economic disruption is bad for the average person in the short term, and in the long run it's not great for companies. Of course, it's a lot easier for the companies to change course after a few years, and executives' bonuses won't be affected at all...

  • by Murdoch5 ( 1563847 ) on Friday October 18, 2024 @10:10AM (#64874413) Homepage
    His point is that gen AI is a gimmick that doesn't really work, is mashed-together patches, and is a total letdown. Can anyone call him out as wrong? Honestly, I don't know, because I've never used gen AI that's good enough to warrant an endorsement.

    I have used it to generate scaffolding / boilerplate policy wording, which I then fill in / tailor to my needs, and it does that fairly decently. I've also used it in my IDEs to help with basic boilerplate code generation, and it's maybe 40% accurate at the simple stuff, enough that it saves me time. Would I ever use it in a professional, unguided, unwatched, and unverified capacity? Absolutely not, not even accidentally; gen AI is not ready for professional use cases that a human can't do better or more accurately.
  • Now that's how you burn Microsoft's ass.
  • I can't count the times I've seen people say in posts and articles, "I asked an LLM something and it was wrong!" Oh no.

    We're talking to computers here.

    I'll ask you all, 1. How accurate are humans? 2. Do humans accidentally leak data or cause security problems? 3. Do humans understand how to interact effectively with LLMs? 4. Let's check driving safety comparing humans and auto driving AIs. I think you already know the answer.

    1. Humans are not accurate in general and praise their own ac
    • 3. I took a course through a university regarding Prompt Engineering.

      It's kind of hard to take you seriously after that.

        Really? I find it very interesting that a university would offer this, presumably to the wider public and not just students working toward a degree. It's a difficult concept to wrap your head around without taking the time to learn exactly what an LLM does. And the emergent behavior it does have is already counterintuitive without a lot of mental gymnastics.

        I see very smart people write very bad Google search queries. And some of them even know how modern search engines work. This is at least a few

    • The simple answer is that this is what the marketing says it can do.

      These LLMs do exactly what marketers should be saying they can do. But the moon was promised.

  • OK, I'm a dummy. I never got around to disabling automatic updates for the one Win 10 Pro computer I have for work purposes. I recently had the unalloyed pleasure of finding out I had an icon for Co-pilot squatting on my task bar like a turd on the carpet. Fortunately, it seemed easy enough to get rid of. I have a fresh disk image I will revert to if I find out "Uninstall" actually means "Hide and Continue 'Recall' Functionality".

    It's hard to describe the sinking feeling in my stomach when I found that
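
    For what it's worth, the taskbar icon is driven by a Group Policy setting that just writes one registry value. A minimal sketch of the usual removal route, assuming the documented "Turn off Windows Copilot" policy value (verify the key against your build before trusting that "Uninstall" stuck):

```shell
:: Disable Windows Copilot for the current user by setting the Group Policy
:: registry value (the same one gpedit.msc's "Turn off Windows Copilot" writes).
reg add "HKCU\Software\Policies\Microsoft\Windows\WindowsCopilot" /v TurnOffWindowsCopilot /t REG_DWORD /d 1 /f

:: Sign out and back in (or restart Explorer) for the taskbar icon to go away.
```

    Whether "off" also means any background indexing is truly gone is exactly what the fresh disk image is for.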

  • If you are depending on AI blindly then that is your mistake. These companies shoving AI in your face are as annoying as sites such as FB constantly shoving autocompletion links at you.

  • why does it need to be transformational to add value?

    Nothing Salesforce has brought to market has been transformational, but some of it suits a purpose and thus is adopted.

    Someone who says it's crap because it's not "transformational" is in need of a mirror.

    It's a tool... it's not a replacement for an expert in every field... it's there to assist... if it can make your team slightly more productive, it's a success. And it looks to be a hell of a lot more effective than anything the leadership team a

  • The trouble with AI output is that it needs to be checked by a competent person, and often that checking, if done to a suitable standard, will take longer than doing things the old-fashioned way. For coding, AI can be useful in suggesting things to try, but the programmer must understand the output and correct any problems. Now if, for example, I'm writing something in Python or Rust, then AI can be a great source of suggestions as to what packages I should look for, and also possible search terms to go

  • While it's likely that future AI will be a very useful tool, and early versions like AlphaFold are already producing results, today's consumer focused AI offerings are just crap generators that produce stuff that appears to be well written, but is in fact, crap. It's kinda like a BS artist, who confidently claims expertise while spewing nonsense
