Goldman Research Head Skeptical on AI Returns Despite Massive Spend

Goldman Sachs' head of global equity research Jim Covello has expressed skepticism about the potential returns from AI technology, despite an estimated $1 trillion in planned industry investment over the coming years. In a recent report [PDF], Covello argued that AI applications must solve complex, high-value problems to justify their substantial costs, which he believes the technology is not currently designed to do.

"AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn't designed to do," Covello said. Unlike previous technological revolutions like e-commerce, which provided low-cost solutions from the start, AI remains prohibitively expensive even for basic tasks, he said. Covello also questioned whether AI costs would decline sufficiently over time, citing potential lack of competition in critical components like GPU chips.

The Goldman executive also expressed doubt about AI's ability to boost company valuations, arguing that efficiency gains would likely be competed away and that the path to revenue growth remains unclear. Despite the skepticism, Covello acknowledged that substantial AI infrastructure spending will continue in the near term due to competitive pressures and investor expectations.
  • by drnb ( 2434720 ) on Monday July 08, 2024 @03:39PM (#64610597)
    It'll work, it'll just take 10x to 20x longer than originally claimed. So it's OK for the HODL investors.
  • "AI technology is exceptionally expensive..."

    No, present brute force scaling of neural networks is expensive. But let's pretend that's all that can be.

    • and the current "AI revolution" has some other Secret Sauce apart from brute force scaling of neural networks?

    • by Junta ( 36770 ) on Monday July 08, 2024 @04:11PM (#64610691)

      OK, but that is the approach that has been demonstrated, and the success of brute-force scaling does not necessarily mean that a different approach is any more imminent than it was before the current rush started in 2020. So it is reasonable to judge the current generation of AI by the breakthroughs to date rather than presuming a breakthrough yet to come. It's difficult to predict the nature of a breakthrough no one can guess at yet, so it's kind of pointless to bring up in this sort of context.

      • by narcc ( 412956 )

        You say "breakthrough" like that's inevitable. The "breakthrough" behind the current gold rush is the transformer, which allows for highly parallel operations, enabling the massive scaling we've seen over the past few years. Of course, we can't just continue to scale, as performance improvements are known to drop off logarithmically. We're already well past the point where doubling the size of the model brings noticeable improvements. Hell, we're well past the point where increasing the size of the model 1
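
As an illustrative aside (not part of the comment above): the diminishing returns described here can be sketched with a toy power-law curve in Python. The constants are arbitrary placeholders, not fitted scaling-law values.

# Toy sketch of diminishing returns from scaling alone. The constants a and
# alpha are arbitrary placeholders, not real scaling-law fits.
a, alpha = 10.0, 0.05

def toy_loss(params: float) -> float:
    """Hypothetical loss modelled as a power law in parameter count."""
    return a * params ** -alpha

n = 1e9  # start at a notional 1B parameters
for _ in range(6):
    gain = toy_loss(n) - toy_loss(2 * n)
    print(f"{n:.0e} -> {2 * n:.0e} params: loss improves by {gain:.4f}")
    n *= 2
# Each doubling buys a smaller absolute improvement than the one before,
# which is the drop-off the comment describes.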

        • The "breakthrough" behind the current gold rush is the transformer,

          Also the massive expansion of GPUs. NVidia did a ton of work to push them really far. They are huge and hot, with vast numbers of transistors, very wide memory buses, and incredibly fast interconnects. They've gone from being GPUs to accelerators to complete systems, and that work is now all done. All that's left is what's left of Moore's law.

      • A new approach like this one? https://news.ucsc.edu/2024/06/... [ucsc.edu]

    • by Anonymous Coward

      The writer quoted in the Goldman Sachs report can't tell the future. Today, it is expensive.

      • Perhaps you should read the article instead? Firstly, the writer is merely an interviewer; secondly, the actual interviewee is making a very cogent and well-reasoned argument.
    • by m00sh ( 2538182 )

      "AI technology is exceptionally expensive..."

      No, present brute force scaling of neural networks is expensive. But let's pretend that's all that can be.

      nVidia GPUs are exceptionally expensive.

      Gamers cry about a $1600 4090 GPU. nVidia turns around and sells them for $30K to companies and pockets eye-watering margins. That's why they are the most valuable company in the world. The sheer margins are mind-boggling. An AI server is 95-98% nVidia cost. This is why AI is so expensive.
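
A quick back-of-the-envelope sketch of that cost split (only the $30K per-GPU figure comes from the comment; the GPU count and non-GPU costs are assumptions for illustration):

# Rough sketch of the GPU share of an AI server's hardware cost. Only the
# $30K per-GPU price is taken from the comment above; the GPU count and the
# non-GPU costs are illustrative assumptions.
gpu_price = 30_000   # per accelerator, as quoted above
gpu_count = 8        # assumed: a typical 8-GPU training node
gpu_total = gpu_price * gpu_count

for other_cost in (5_000, 15_000, 40_000):  # assumed CPU/RAM/chassis/networking cost
    share = gpu_total / (gpu_total + other_cost)
    print(f"non-GPU hardware at ${other_cost:,}: GPUs are {share:.0%} of the server")
# Depending on the assumed non-GPU cost, the GPU share lands roughly in the
# 86-98% range, the same ballpark the comment claims.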

      • No matter who makes the gear, it sucks a lot of power, and also requires cooling. An "AI server" as you call it is much more power dense than your average datacenter gear. Datacenters and power ain't cheap.

    • There has been a belief for literally decades that something other than brute-force scaling will finally bring about a revolution in AI. Time and time again, AI research tries to apply human knowledge to constrain a problem space and make solutions more efficient, but ultimately statistical methods based on just throwing more compute at the problem win out. This has repeated so many times that one of the recognized leading figures in the field, Rich Sutton, wrote an article five years ago called "The Bitter Lesson" [incompleteideas.net], where he lays out the history of the field failing to learn this pattern time and time again.
      • by phantomfive ( 622387 ) on Monday July 08, 2024 @07:27PM (#64611235) Journal
        There are plenty of examples where improved algorithms have made AI more efficient. For example, in computer Go [wikipedia.org], the introduction of Monte Carlo algorithms brought a dramatic improvement in skill level. Google's TPU [wikipedia.org] was made possible by the realization that the algorithms could be modified to use 8-bit integer math instead of floating point, with little loss of quality. In the 90s, neural networks gave us pretty good self-driving cars [cmu.edu]. Presumably we'll need another algorithmic improvement to reach fully self-driving cars. It's true that LLMs and CNNs are basically possible because of the availability of big data, but there were also algorithmic improvements along the way. It wasn't only big data.
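
To illustrate the 8-bit integer point in a hedged way (a generic sketch, not Google's actual TPU pipeline): a minimal symmetric int8 quantization of random weights shows why the precision loss can be small.

# Minimal sketch of symmetric int8 quantization, showing why swapping float
# math for 8-bit integers can cost little accuracy. Generic illustration only,
# not the actual TPU implementation.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=1000).astype(np.float32)

scale = np.abs(weights).max() / 127.0                      # map floats onto [-127, 127]
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = q.astype(np.float32) * scale                 # back to float to compare

rel_error = np.abs(weights - dequantized).mean() / np.abs(weights).mean()
print(f"mean relative quantization error: {rel_error:.2%}")  # small, on the order of 1% here
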
      • For literally decades, developments in the mathematics of numerical methods are what have allowed many intractable problems to become tractable.

        Brute force is what you do when you're not smart enough to do anything else.

    • As long as someone is selling processors, rack space, and electricity, why gamble on developing a different way of doing AI? Especially when the market requirements are vague and mostly driven by hype?

  • adult in the room (Score:5, Insightful)

    by sdinfoserv ( 1793266 ) on Monday July 08, 2024 @03:47PM (#64610619)
    When it comes to VC exuberance, nobody listens to the adult in the room. Especially when CEOs are being sold on the idea that they can fire 10%-20% of the workforce and replace them with "AI". On a positive note, I don't have to read about idiots pontificating about blockchain anymore.
    • It will be "Blockchain AI" when someone chains them together, in the ultimate waste of power.

    • by m00sh ( 2538182 ) on Monday July 08, 2024 @05:06PM (#64610893)

      When it comes to VC exuberance, nobody listens to the adult in the room. Especially when CEOs are being sold on the idea that they can fire 10%-20% of the workforce and replace them with "AI". On a positive note, I don't have to read about idiots pontificating about blockchain anymore.

      Hate to say it, but GitHub Copilot and ChatGPT cut my time to code in half.

      I think they can reduce the workforce by 50%.

      • Hate to say it, but GitHub Copilot and ChatGPT cut my time to code in half.

        Why are you writing so much boilerplate code? Do that less.

      • Good for you (and I mean that sincerely).

        As to cutting their workforce by 50%, perhaps you'll be interested to know that they were unimpressed with the AI-generated slop they got when they tried. Instead, they found human analysts to be much cheaper to run, with much better output than the AI systems. Hence the argument that AI systems are much too expensive and much too trivial for tackling real decision-making problems.

        • by m00sh ( 2538182 )

          It is not for decision making. It is for clearing out the mundane stuff so you can get to the decision-making parts faster and get things done sooner.

    • by khchung ( 462899 )

      Especially when CEOs are being sold on the idea that they can fire 10%-20% of the workforce and replace them with "AI".

      The problem for the CEOs is that if they don't buy the lie and start some project to use AI to replace their workforce, they won't be CEOs for much longer once other would-be CEOs pitch such a plan to their board.

      Short-sighted CEOs simply have short-sighted board members, who represent short-sighted shareholders.

    • You think this guy is the "adult in the room"? He clearly doesn't understand the technology and can only think of it in terms of dollars, and he even fucks that up.

      "While the question of whether AI technology will ever deliver on the promise many people are excited about today is certainly debatable, the less debatable point is that AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do."
      Utter drivel. We

  • by BishopBerkeley ( 734647 ) on Monday July 08, 2024 @04:16PM (#64610707) Journal
    All AI companies are desperate for the killer app. Nothing suggests that AI will be economical beyond a few computationally thorny applications. This is why they are hurling AI at every single problem, hoping to find the one that sticks. Mainstream applications have been so streamlined already that it is an open question whether AI can squeeze more efficiency from them.
  • Meanwhile . . . (Score:5, Insightful)

    by PeeAitchPee ( 712652 ) on Monday July 08, 2024 @04:34PM (#64610769)
    . . . it's been a phenomenal year for the average investor, with the S&P 500 up almost 23% in the last year and the NASDAQ 100 up a whopping 35.65% over the same period. You don't have to be a VC; there are low-fee index funds that literally anyone can buy with just a tiny bit of money to get started. Even better, if you think it's all hype and the bubble is gonna blow, you could ride the indexes up, then simply sell at wherever you think the peak is (maybe now, or next month?), lock in your gains, and move them to T-bills or a money market and still clear 5%. Lots of money to be made without a lot of risk.
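
For illustration only (not investment advice): plugging the quoted percentages into a quick calculation with an arbitrary $10,000 starting amount.

# Back-of-the-envelope version of the comment's math. Only the percentages are
# from the comment; the $10,000 starting amount is an arbitrary placeholder.
start = 10_000.0
sp500 = start * 1.23      # ~23% one-year gain quoted for the S&P 500
nasdaq = start * 1.3565   # 35.65% one-year gain quoted for the NASDAQ 100

# Hypothetically selling near the peak and parking the proceeds in T-bills or a
# money market yielding ~5% for another year:
print(f"S&P 500 index fund: ${sp500:,.0f}, then ${sp500 * 1.05:,.0f} after a year at 5%")
print(f"NASDAQ 100 fund:    ${nasdaq:,.0f}, then ${nasdaq * 1.05:,.0f} after a year at 5%")
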
  • ...surprised their inventors with unexpected emergent properties
    This caused excitement among researchers who believed that rapid progress would continue
    It also caused excitement among investors who wanted to get in early on the "next big thing"
    Problem is, current LLMs are kinda limited and useless and nobody knows if rapid progress will really continue or if major new problems will be discovered
    Meanwhile, executives who want to be seen as futurists will release a metric buttload of half-baked, useless, annoying AI crap
    My prediction is that eventually, useful stuff will be invented, but it will take a while

    • by narcc ( 412956 )

      surprised their inventors with unexpected emergent properties

      Fewer and fewer people are buying into the early claims of 'emergent properties', researchers and investors included.

      My prediction is that eventually, useful stuff will be invented, but it will take a while

      Coward! Take a risk! Be bold!

      My prediction? The trend in commercial AI over the next few years will be towards smaller models that make heavy use of external resources and traditional algorithms, assuming funding holds out. LLMs will ultimately have no significant social impact over the next decade.

    • ...surprised their inventors with unexpected emergent properties
      This caused excitement among researchers who believed that rapid progress would continue
      It also caused excitement among investors who wanted to get in early on the "next big thing"
      Problem is, current LLMs are kinda limited and useless and nobody knows if rapid progress will really continue or if major new problems will be discovered
      Meanwhile, executives who want to be seen as futurists will release a metric buttload of half-baked, useless, annoying AI crap
      My prediction is that eventually, useful stuff will be invented, but it will take a while

      There's a whole plethora of stories out there about small signs of emergent properties leaping forward on their own to become AI and slamming all of humanity into the stratosphere, or down into oblivion. Of course the culture expects that the tiniest sign of emergence will lead directly to "OUR AI BECAME GOD!" Because if there's anything us humans are good at? It's mowing down our own dogma rather than face reality. ESPECIALLY in the United States.

      Granted, living in the United States does make us crave fant

  • If Goldman Sachs is wrong, it can just ask the US government to bail it out. Again. I am sure they have experts working at their company; however, what they are experts in might be how to avoid responsibility.
  • I keep telling people these advances are about the same as those that heralded the arrival of search in the 90s and early 00s. It's a very modest advancement in data structures and algorithms.
  • It seems these guys only see money and do not see the global picture. Saying that AI should be profitable right now is like saying that the $9B LHC or the $28B fusion reactor was built in vain. People like him would happily shut down any development. Fortunately, when I was writing my research proposal, I was able to find the words. Well, not really me personally; https://edubirdie.com/research-proposal-writing-service [edubirdie.com] helped me with this. Scientific research must be carried out, even if at first glance it is not n
