AI Businesses

'What Kind of Bubble Is AI?' (locusmag.com) 100

"Of course AI is a bubble," argues tech activist/blogger/science fiction author Cory Doctorow.

The real question is: what happens when it bursts?

Doctorow examines history — the "irrational exuberance" of the dotcom bubble, 2008's financial derivatives, NFTs, and even cryptocurrency. ("A few programmers were trained in Rust... but otherwise, the residue from crypto is a lot of bad digital art and worse Austrian economics.") So would an AI bubble leave anything useful behind? The largest of these models are incredibly expensive. They're expensive to make, with billions spent acquiring training data, labelling it, and running it through massive computing arrays to turn it into models. Even more important, these models are expensive to run.... Do the potential paying customers for these large models add up to enough money to keep the servers on? That's the 13 trillion dollar question, and the answer is the difference between WorldCom and Enron, or dotcoms and cryptocurrency. Though I don't have a certain answer to this question, I am skeptical.

AI decision support is potentially valuable to practitioners. Accountants might value an AI tool's ability to draft a tax return. Radiologists might value the AI's guess about whether an X-ray suggests a cancerous mass. But with AIs' tendency to "hallucinate" and confabulate, there's an increasing recognition that these AI judgments require a "human in the loop" to carefully review their judgments... There just aren't that many customers for a product that makes their own high-stakes projects better, but more expensive. There are many low-stakes applications — say, selling kids access to a cheap subscription that generates pictures of their RPG characters in action — but they don't pay much. The universe of low-stakes, high-dollar applications for AI is so small that I can't think of anything that belongs in it.

There are some promising avenues, like "federated learning," that hypothetically combine a lot of commodity consumer hardware to replicate some of the features of those big, capital-intensive models from the bubble's beneficiaries. It may be that — as with the interregnum after the dotcom bust — AI practitioners will use their all-expenses-paid education in PyTorch and TensorFlow (AI's answer to Perl and Python) to push the limits on federated learning and small-scale AI models to new places, driven by playfulness, scientific curiosity, and a desire to solve real problems. There will also be a lot more people who understand statistical analysis at scale and how to wrangle large amounts of data. There will be a lot of people who know PyTorch and TensorFlow, too — both of these are "open source" projects, but are effectively controlled by Meta and Google, respectively. Perhaps they'll be wrestled away from their corporate owners, forked and made more broadly applicable, after those corporate behemoths move on from their money-losing Big AI bets.

Our policymakers are putting a lot of energy into thinking about what they'll do if the AI bubble doesn't pop — wrangling about "AI ethics" and "AI safety." But — as with all the previous tech bubbles — very few people are talking about what we'll be able to salvage when the bubble is over.
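The "federated learning" Doctorow mentions above trains one shared model across many devices that never upload their raw data: each device trains locally, and only its parameter updates are sent back and averaged. Below is a minimal numpy sketch of federated averaging (FedAvg), the canonical algorithm; the toy linear-regression task and all names are illustrative, not from the article.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few steps of least-squares gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(global_w, clients):
    """FedAvg round: every client trains on its private shard, then the
    server averages the resulting weights, weighted by shard size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=sizes)

# Toy run: three "consumer devices", each holding a private data shard.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w = np.zeros(2)
for _ in range(20):            # communication rounds
    w = federated_average(w, clients)
print(w)  # approaches [2, -1] although no raw data ever left a "device"
```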

Thanks to long-time Slashdot reader mspohr for sharing the article.

Comments Filter:
  • by klipclop ( 6724090 ) on Sunday December 24, 2023 @05:08PM (#64103707)
    "AI is the kind of bubble that doesn't pop â" it just keeps expanding your mind!"
    • by Kisai ( 213879 )

      AI is a bubble, but I think there is far too much conflation of one kind of AI with another.

      What we have right now is not General AI; it's three and a half kinds of AI that are "kinda rubbish":

      - CryptoCoins/NFT/Ethereum = Garbage AI, wastes shit loads of energy and produces nothing useful
      - Generative AI = Aims to replace a creative worker, wastes a shit load of energy, and due to how the datasets were obtained, legally dubious
      - Assistive AI = Aims to "auto complete" a work, like generative AI, but for text a

      • by gweihir ( 88907 )

        Well, there is a lot of actually useful AI, just not in the current hype. No idea why you classify crapto and NFTs as "AI". They are not.

        • by Rei ( 128717 )

          Well, there is a lot of actually useful AI

          Yet Doctorow seems unaware of basically all of it. I mean, for example:

          Even more important, these models are expensive to run....

          Like, this is demonstrably not true? A $1.4k (when new!) 300W-underclocked RTX 3090 can generate ~140 characters per second on a Mixtral GGUF. Maybe, what, 2 seconds for your average reply? 43.2k responses per day, 15.8 million per year? 0.00017 kWh per reply? A hundred thousand replies for a dollar with servers located in a place wi
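          (For what it's worth, those figures are internally consistent. A quick check in Python; the $0.06/kWh electricity price is an assumption chosen to make the last figure come out, everything else is from the comment above:)

          ```python
          watts = 300                # underclocked RTX 3090, per the comment
          chars_per_sec = 140        # claimed Mixtral GGUF throughput
          reply_chars = 280          # ~2 seconds for an average reply

          secs_per_reply = reply_chars / chars_per_sec          # 2.0 s
          replies_per_day = 86_400 / secs_per_reply             # 43,200
          replies_per_year = replies_per_day * 365              # ~15.8 million
          kwh_per_reply = watts * secs_per_reply / 3_600_000    # ~0.00017 kWh

          price_per_kwh = 0.06       # assumed: USD/kWh in a cheap-power location
          replies_per_dollar = 1 / (kwh_per_reply * price_per_kwh)
          print(f"{replies_per_day:,.0f}/day, {replies_per_year:,.0f}/year, "
                f"{kwh_per_reply:.5f} kWh/reply, "
                f"{replies_per_dollar:,.0f} replies per dollar of electricity")
          ```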

          • by gweihir ( 88907 )

            I was not referring to the AI in the current AI hype when pointing out that there is actually useful AI and I said that explicitly. Regarding the AI in the current hype, I agree with Doctorow and his arguments. The main problem with the current hype is that hallucinations cannot be fixed and that limits the use of these models rather severely.

          • Except that for a huge and growing number of medical tasks, AI performs better than humans,

            You have to be careful with this one. In some studies they perform better than humans, but it's not always a fair competition.

            • by gweihir ( 88907 )

              There are cases where it is a fair comparison?

              • I think of the case of protein folding as one that is probably fair (without looking into it too deeply). Computer vs Human chess is probably fair (although the original Kasparov vs Deep Blue test was not).
          • (go read the full text on Cory's blog [pluralistic.net])

            Even more important, these models are expensive to run....

            Like, this is demonstrably not true?

            The subject that Cory discusses is the commercial AI-as-a-service companies, such as OpenAI: there seems to be an arms race in that field to build the biggest model ever (see GPT-4, etc.)

            These companies' business model isn't workable in the long term. ChatGPT *is* very costly to run in the long term.
            And tons of start-ups rely directly on the APIs of ChatGPT and similar.
            The price of such APIs is kept artificially low by burning investors' money.

            The day the commerc

          • ...and further on (sorry for the split posting)

            Except that for a huge and growing number of medical tasks, AI performs better than humans

            Medical doctor speaking. In short: Nope.

            More precisely: there's a growing number of big public announcements which are picked up by magazines, blogs, etc. (and here on /.).
            It's basically start-ups which need to drum up whatever slightest sign of promise of success they've been lucky to hit
            (and remember that almost no one is interested in reporting failure),
            and academic groups pushing the currently popular buzzwords to attract a bit more funding (I work in rese

            • by gweihir ( 88907 )

              Yep. People trying to get grant money, people trying to keep their jobs and some assholes trying to get rich on a lie.

              Here is an anecdote: I teach a security course. In one exercise (a very, very simple firewall config), one group tried to use ChatGPT to find the answer. Their conclusion was "completely worthless". (The exercises are not graded, so no issue. And they were completely open about it.)

            • by Rei ( 128717 )

              More precisely: there's a growing number of big public announcements which are picked-up by magazines, blogs, etc.

              I don't care about "big public announcements", I care about peer-reviewed research.

              And if you don't, then don't pretend to speak on behalf of science.

            • by Rei ( 128717 )

              Some fall into the "leaky training material" pitfall. See the recent claims about AI being able to pass the bar exam. (Then followed by the utterly catastrophic results of attempts at using AI in real court.) What most likely happened is that the exam questions were part of the training material.

              Yeah, nah. One, you can't look up bar exams a year in advance on the internet. Two, something appearing once on the internet doesn't mean it can be memorized; things have to be widespread to be memorized. Three, you're en

        • by Rei ( 128717 )

          And yeah, crypto "proof of work" algos are mainly just a giant game of "Guess The Magic Number!", over and over again. Not even remotely related to AI.

          Also, their "categories of AI" aren't actual categories.

    • Generative AI is great for jobs that make shit up. That includes writing short stories, novels, music, movies as well as advertising, marketing, primary school teaching, politics, etc. It is also good for learning and then freezing the model to classify stuff for driving cars and implements, harvesting fruit and vegetables. It is not so great for generating technical manuals. So there is indeed a large market for it.
  • If the Fed had bailed out Lehman Brothers like it bailed out Silicon Valley Bank, would the housing "bubble" ever have popped (and didn't we go right back into another housing bubble a few years after the 2008 "crisis")?

    What is with this bubble-phobia?

    • by sopwith ( 5659 ) on Sunday December 24, 2023 @05:16PM (#64103731)

      What is wrong with bubbles? They cause a lot of resources to be allocated to things that don't really deserve it. Wikipedia [wikipedia.org] has a fuller explanation of the negative impacts of bubbles.

      • by jhecht ( 143058 )
        Or in other words, bubbles evaporate money invested in schemes unable to deliver on their promises. The more greater fools, the bigger the bubble.
      • What is wrong with bubbles? They cause a lot of resources to be allocated to things that don't really deserve it. Wikipedia [wikipedia.org] has a fuller explanation of the negative impacts of bubbles.

        Have no fear. The government is always near. So says great investor Bill Ackman.

        Ackman, who runs Pershing Square Capital Management, and is not averse to an apocalyptical outburst, said the banking sector needed a temporary deposit guarantee immediately until an expanded government insurance scheme is widely available.

        “We need to stop this now. We are beyond the point where the private sector can solve the problem and are in the hands of our government and regulators. Tick-tock.”

        So long as the government continues to force us taxpayers to hand over our money to protect these people, bubbles will keep being exploited.

    • by gweihir ( 88907 )

      Is this a serious question?

      • Why not, since all the answers ignore that resources are always being allocated inefficiently, even now (wars destroy production ...), and that no taxpayer was debited anything for Fed bailouts?

  • AI bubble-learning (Score:4, Interesting)

    by ElitistWhiner ( 79961 ) on Sunday December 24, 2023 @05:19PM (#64103739) Journal

    After a degree in ComSci, the self-taught learning knothole that APL required taught a lifetime of skills. Didn't know how then, but it was glaringly obvious this augmented, computer-enhanced learning feedback loop was education's answer to classroom drudgery.

    APL -> R -> matrix programming, which stole its scalability to symbolic learning away. Now parallels with AI and ChatGPT point to its future following that of APL. AI will specialize into obscurity, with only the WallSt wealthy able to profit, given the resources it commands.

    The general application to learning is AI's sacrificial lamb, as it's arguably no more useful at teaching than an electronic calculator.

  • how does that make them any different than economists or republicians?
    • by korgitser ( 1809018 ) on Sunday December 24, 2023 @07:53PM (#64103963)

      You have gotten your politics, and humans for that matter, wrong. The reality is that the human being is a rationalizing animal, not a rational animal, and politics abuses this.

      Standard playbook of the human being: 1) Think of what you want to do. 2) Think of a plausible argument in support of what you want to do, and convince yourself it is indeed the rational thing to do. 3) Do what you want to do, and congratulate yourself for being so smart.

      Do notice that the argumentation comes after, not before, the decision; do take the time to realize that this is also what I, you, and everyone else do; and do take the time to think and wonder about it. 99.999% of the time nobody is rational; decisions are made on emotions, and mostly this is the correct way, because our feelings, not our rationality, are what make us feel good day to day. But this also means we sleepwalk into rationality-requiring decisions thinking that we are being oh-so-smart-and-rational while, in reality, we are being emotional as usual. Funny, that.

      Now in any case, you are fine and okay to say that the shit Republicans make up is, well, shit, but that is beside the point. The same goes for any other shit too, be it Democrats or just your stupid neighbour. If you are a politician, what you want to do is sell off your decisions to finance your campaign, lifestyle, and retirement fund. We all know this is true, and this is why the govt institutions poll worse every year. Yet we convince ourselves that somehow this or that guy of ours is the one epitome of honesty and integrity in all of the heap of shit, and that this guy of ours will be our saviour, until he once again proves us wrong like he was always going to, and we flock to yet another one...

      But I digress. So whatever your party and whoever your voters, we all know what buttons you have to push to make a particular voter bloc jump. So you push the buttons, and the voters will associate their preferred feelings with your policy proposal. And this will be your licence to go and do what you were going to do: sell off the country and the people, one piece at a time. And once again, you are right to say that one party's bullshit is stupider than the other party's, but in the end both of them relate to shit like two cheeks of the same ass, and both of them are out for the same thing. But as long as they have us hooked on the emotions they provide, they are both free to do whatever they want.

  • All I know.... (Score:4, Insightful)

    by Berkyjay ( 1225604 ) on Sunday December 24, 2023 @06:10PM (#64103803)

    Is that Copilot has been a huge boost to my programming productivity. While it's not popping out code that is 100% usable, it just takes a bit of debugging and I'm good to go. I've been on a free trial, but I plan on paying for the subscription once that's done.

  • Like all bubbles, it will be spherical. Duh.
    The hype cycle is a well known and well studied thing. I don't think we have to ask what a tech bubble looks like.

  • When that newfangled inter-webs thing first came out some people were also skeptical. They were wrong.
    Consider that AI use of GPUs was second priority to gaming. Not anymore.
    Consider that the Turing Test has been passed but everyone just shrugged.
    Consider the huge money now flowing into every aspect of AI, with OpenAI recently valued at $100 billion.
    Consider that AIs can beat humans in EVERY single game out there.
    Consider that new tougher benchmarks need to be invented to score the new models.
    Consider
    • Eh, AI still isn't "smarter" than anyone - it is still literally just a tool that can do some types of pattern matching faster than humans (same as any computer can do math operations faster than humans) without getting tired or emotional, and with eidetic memory (the occasional SEU notwithstanding).

      Of course, if "matching patterns" is what you mean by intelligence then yes AI is intelligent.

      Matching patterns, or even finding new patterns in existing data, to me isn't really "intelligence" - it's just apply

      • You are asking for AGI capability; don't worry, it is coming soon. Conservation of Energy, did you say? https://www.youtube.com/watch?... [youtube.com]
      • Matching patterns, or even finding new patterns in existing data, to me isn't really "intelligence" -

        It is something that intelligence can be used to do, but intelligence isn't the only way to do that.

        Humans use their intellect to count, and that is intelligence; but odometers count without using intelligence. There's more than one way to do things.

    • by HiThere ( 15173 )

      The Turing test, AKA "the imitation game", has not been passed. It hasn't even been approached. And if you're going to count anyone being fooled into thinking a computer is intelligent as passing the Turing test, that was done by the original Eliza program. (The guy who called in tried to get her fired for being a smart-ass. [Over a teletype, of course.])

    • by phantomfive ( 622387 ) on Monday December 25, 2023 @01:41AM (#64104305) Journal

      Consider that the Turing Test has been passed but everyone just shrugged.

      It didn't pass the Turing test [independent.co.uk]. That's why people shrugged. People with understanding also rolled their eyes.

  • That Cory Doctorow must be new to capitalism or something. When the AI bubble bursts there will be another bubble to jump on. I hope that bubble is robotics.

  • There certainly is a bubble going on right now, with companies dumping ridiculous amounts of money into training models. But don't think that bubble is the same thing as AI. It's just a bubble. AI was progressing fast, solving real world problems, and growing in popularity for years before the current bubble got started. The bubble is speeding it up by getting investors to dump lots of money into it, but it doesn't need them. When the bubble bursts and those investors lose money, AI will keep right on

  • It promises everything, but once it's been thrown against the wall only a few things will stick. When I was a kid it was fractals and cellular automata.
  • by MpVpRb ( 1423381 ) on Sunday December 24, 2023 @07:03PM (#64103891)

    ...real progress is being made.
    While today's chatbots are close to useless for serious work, their emergence was kinda unexpected and has forced researchers to change their assumptions.
    Will progress continue? Accelerate? Or will the work hit a dead end? Nobody knows.
    What makes it a different kind of bubble is that there is real, serious research going on at the same time as the financial shenanigans and skullduggery.
    I'm optimistic, but also realize that the hype vastly exceeds the actual progress.
    I once worked on a very well-funded VR project for a major corporation and clearly remember the VR hypemongers writing vastly overoptimistic fantasy fiction. I see the same thing happening with AI. VR still hasn't come close to the fantasies, and I suspect that it will take a while for AI to become truly useful.

  • I like his novels, but he's fallen for a classic AI fallacy in TFA.

    AI/ML models don't need to be perfect, they just need to be better than us in price/performance. Big difference. If an AI diagnosis is 5% worse than the best expert on a given condition, I'll still use AI if it's only as far away as my phone and the humans in my vicinity are mediocre doctors.

    There sure is AI hype, and many companies won't get rich by AI, but that's because AI will replace them with no revenue left for them, not that AI is a

    • If there is one thing I like about AI, it is that it does not get emotional or upset or vindictive, unless it's explicitly designed to allow that. Nor will it suffer a mental breakdown and come into the office to shoot up my work chums, or me. If they can get the "hallucination" problem solved, then we don't have to worry about it experiencing mental disabilities either.
      • by caseih ( 160668 )

        At least Bing's AI does a pretty good job of faking being upset. I asked it for a certain programming example of something in a particular language, and it spit out some code with an obvious syntactical error in it, which I pointed out, and it said, "Oh sorry, I'll correct that for you," and spit out the exact same code again, same error. Again I pointed out the error, and it again said sorry and tried again. After the third time telling it it was still wrong, it got mad and said it wasn't going to talk to me

        • I see ChatGPT as being good at creating a very basic framework for code, and maybe some simple working examples. But the programmer still has to fill in all of the details. It can help, but I don't see it replacing a skilled programmer anytime soon.
  • by Tony Isaac ( 1301187 ) on Sunday December 24, 2023 @07:29PM (#64103937) Homepage

    The internet, and the web, didn't go away. It was just the silliest concepts that failed. That bubble was froth around real advances in technology that we still enjoy and use today.

    Crypto never solved a real problem, so that bubble is bursting in a big way and isn't likely to come back.

    AI is in the first category. Yes, there will be froth; there will be hare-brained solutions that don't do anything useful. But at its core, AI solves real problems. It's not going away.

  • Right now, it's really expensive to run LLMs. Every new technology is expensive at first. Over time, I expect the price of running LLMs will come down significantly.

    • by Ken_g6 ( 775014 )

      It's really expensive to train LLMs. Running an LLM after it's trained can be done on hardware as light as a smartphone processor. Once an LLM is trained and in hand, its training is a sunk cost. I'd expect a trained LLM to be in use for a very long time and not retrained for quite a while, years to decades. I really don't know why OpenAI keeps retraining their models.

      • From the summary:

        Even more important, these models are expensive to run.... Do the potential paying customers for these large models add up to enough money to keep the servers on?

        So yeah, training is expensive. But according to the author, running the models is also expensive.

        • From the summary:

          Even more important, these models are expensive to run.... Do the potential paying customers for these large models add up to enough money to keep the servers on?

          So yeah, training is expensive. But according to the author, running the models is also expensive.

          Running them is too expensive to use? Currently the GenAI world might be running at a deficit, using investor-supplied money, but we can expect architectural and algorithmic improvements in the very near future to drive these costs down, with all the money flowing into it.

          But over the next several years this will definitely stop being true, if it is true now. The cost of computation has been dropping since 2000 by about an order of magnitude a decade. So if a GPT response costs X now, with no other improvement
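          (Taking the parent's order-of-magnitude-per-decade figure at face value, the projection works out like this; the rate itself is the comment's claim, not an established constant:)

          ```python
          def projected_cost(x_now, years, decade_factor=10):
              """Cost of one response after `years`, if compute cost falls
              by `decade_factor` every ten years."""
              return x_now / decade_factor ** (years / 10)

          x = 1.0  # whatever a GPT response costs today, in arbitrary units
          for years in (5, 10, 20):
              print(f"in {years:2d} years: {projected_cost(x, years):.3f} x today's cost")
          # in  5 years: 0.316 x today's cost
          # in 10 years: 0.100 x today's cost
          # in 20 years: 0.010 x today's cost
          ```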

        • by Draeven ( 166561 )

          Long story short, the author is wrong, and will become even more wrong over time. Cloud-based LLMs like ChatGPT are not the cheapest to run right now, but it's not like they aren't economical either. With each passing week, advances are made in these models to make them available for local use, provide more powerful customization features, and make them run on less and less hardware.

        • by fyngyrz ( 762201 )

          But according to the author, running the models is also expensive.

          Mr. Doctorow is entirely wrong; running pre-trained GPT/LLMs is only expensive if you do it (very) poorly. You can put a quite capable GPT/LLM on your desktop and generate results against your own queries for a tiny fraction of a cent. See GPT4All [gpt4all.io], for instance. Try the Hermes ( nous-hermes-llama2-13b.Q4_0.gguf ) model; uncensored, local, private, no network interaction unless you opt for reporting back.

          Running current technology GPT/LLM syst
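          (For anyone who wants to try the setup described above, it looks roughly like this with the gpt4all Python bindings; exact API details vary by version, and the prompt is just an example:)

          ```python
          # pip install gpt4all  -- the first run downloads the model file (several GB)
          from gpt4all import GPT4All

          # The local, uncensored model named above; inference runs entirely on
          # your own machine, with no network calls required.
          model = GPT4All("nous-hermes-llama2-13b.Q4_0.gguf")

          with model.chat_session():
              reply = model.generate("Explain what a GGUF model file is.", max_tokens=200)
              print(reply)
          ```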

          • The standard of "it can run on your desktop" is not the bar, when it comes to the cost of running an LLM. The question is, does it take more horsepower than traditional search? That's a different question, and matters a lot, at scale.

            Also, all that extra "unnecessary" stuff you mentioned...it all counts towards the cost.

            • by fyngyrz ( 762201 )

              The standard of "it can run on your desktop" is not the bar, when it comes to the cost of running an LLM.

              The standard is what the actual achievable, practical cost is, and it is minimal. Trying to point to an inefficient implementation and claiming it's "the cost" is absurd. And in fact, the actual single-PC low cost currently is the bar. I suspect the bar will soon be a smartphone. We're not quite there, but it very much looks like the next milestone.

              Also, all that extra "unnecessary" stuff you mentioned..

              • I don't doubt that performance will improve. But the desktop PC "standard" is irrelevant. I can run a full instance of SQL Server on my desktop, and it performs as well as it does on most actual servers. That proves nothing. If we had 500 users connecting to my desktop SQL Server all at once, that's when we would start to see the differences between a desktop and a real server. OpenAI and Microsoft and Google et al. have to support millions of concurrent users. It's very much not the same as "running an LLM

  • First, let's separate AI from the bubble. AI technology exists now. Even if nobody can ever train another one, models like LLaMa are out there and will be shared, on the black market if necessary.

    So, the bubble is about who, if anyone, actually makes money from AI and LLMs. I think this mostly depends on the legal system and legislation. I see four possibilities:

    1. AI is made illegal. It goes underground. It makes less money than drugs; maybe close to as much as selling pirated movies.

    2. AI is a copyr

  • When they start calling it Web 4.0, that's when you know the generative AI goose is cooked.

  • First a story quite critical of Quantum Computing (which I have been critical of for 30 years, and I see nothing that would make me change my negative assessment), and now one very critical of the current "AI" hype?

    What is the world coming to? I feel my status as high-tech Pandora threatened!

  • Yes, there is a bubble. Yes, things get hyped.

    At the same time, the quick-growing bubbles also pop quickly this time. Remember all those startups that built their entire existence around passing PDFs to ChatGPT? Guess what: ChatGPT implemented native PDF import. What about Amazon product page helpers? Amazon is now offering the same service.

    On the hardware side, manufacturers are more cautious. The AI cores are designed to serve more than one purpose. They might be accelerating the new fancy model, yes.

    • On the hardware side, manufacturers are more cautious. The AI cores are designed to serve more than one purpose. They might be accelerating the new fancy model, yes. But they also help improve your native camera application, or make Adobe export several times faster. They are more of a continuation of standard SIMD operations, like Intel SSE or ARM Neon instructions.

      I recall MMX stood for "multimedia extensions" in marketing speak, while it was really more about "matrix mathematix". I wouldn't waste my money on "AI cores" but if they're really just wider SIMD units, I'm much more interested — but even then only if they are freely programmable without some closed SDK. It doesn't need to become a SSE-like CPU extension, I'm fine with something like OpenCL support, where it might actually make more sense.

      It's interesting, though, how linear algebra with large arr
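      (To make that concrete: the kernel shared by SSE/Neon-era SIMD and today's "AI cores" is the multiply-accumulate, just executed at ever wider widths. A toy illustration, with numpy standing in for whatever vector hardware is present:)

      ```python
      import numpy as np

      x = np.random.rand(8).astype(np.float32)  # activations
      w = np.random.rand(8).astype(np.float32)  # weights

      # Scalar multiply-accumulate: the operation SSE/Neon widened to 4-8 lanes,
      # and NPUs/tensor cores widen to whole matrix tiles.
      acc = 0.0
      for xi, wi in zip(x, w):
          acc += xi * wi

      # The vectorized equivalent: identical arithmetic, one call that dispatches
      # to whatever wide execution units the hardware provides.
      assert np.isclose(acc, np.dot(x, w))
      print(acc)
      ```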

  • Weren't WorldCom and Enron both disasters?

    What is the difference between them that Doctorow is trying to use to make his point?
  • AIs trained with The Law will scour the Web looking for anything and everything that can be useful to their controllers.
    For example, corporations will scan for Patent/Copyright/Trademark infringements. AI esquires will generate new lawsuits by the boat-load, a tanker-sized boat-load.
    Of course there will be AIs looking for blackmail material and *anything* useful. No person is too small or immune to a Web scan. Happy Future everyone!

    • And that's not all the future has in store...

      Ya know how you cannot pump your own gas in New Jersey?
      https://www.cnn.com/2022/06/18... [cnn.com]

      In a future where robots can do a job better than a human, you are going to be legally required to use a human. People need jobs!

      Robot taxis? Banned. You get the smelly human taxi. People need jobs.
      "Can I have the AI do my taxes? It would take like 2 seconds." No. Illegal! You get the smelly human CPA. People need jobs!

  • Saw some publisher's book chart earlier - their top ten had nine books with 'AI' shoehorned into the title. What a load of rubbish. This is what happens when you get an industry combining very technical subjects and almost no professional accreditation: so many chancers chasing shiny new apps and The Next Big Thing... at least they've stopped talking about NFTs and blockchain for a bit.
  • I pay no attention to opinions like these unless the author is willing to bet real money on their view. If you really believe AI is in an irrational boom phase, then short the stocks. Some made fortunes shorting Enron, for example.
  • Of course it's a bubble (because some of the current AI hype is rather ridiculous; how could it not be a bubble!), but it's not all bubble the way tulip bulbs or Cabbage Patch Kids were; there is plenty of substance. AI systems that can reason roughly as well as people are new and interesting, and useful. http://www.cs.toronto.edu/~jdd... [toronto.edu] As for the huge energy use, that's real for training the models, but using them is not outrageously expensive, as others have pointed out already.
