'What Kind of Bubble Is AI?' (locusmag.com)
"Of course AI is a bubble," argues tech activist/blogger/science fiction author Cory Doctorow.
The real question is what happens when it bursts?
Doctorow examines history — the "irrational exuberance" of the dotcom bubble, 2008's financial derivatives, NFTs, and even cryptocurrency. ("A few programmers were trained in Rust... but otherwise, the residue from crypto is a lot of bad digital art and worse Austrian economics.") So would an AI bubble leave anything useful behind? The largest of these models are incredibly expensive. They're expensive to make, with billions spent acquiring training data, labelling it, and running it through massive computing arrays to turn it into models. Even more important, these models are expensive to run.... Do the potential paying customers for these large models add up to enough money to keep the servers on? That's the 13 trillion dollar question, and the answer is the difference between WorldCom and Enron, or dotcoms and cryptocurrency. Though I don't have a certain answer to this question, I am skeptical.
AI decision support is potentially valuable to practitioners. Accountants might value an AI tool's ability to draft a tax return. Radiologists might value the AI's guess about whether an X-ray suggests a cancerous mass. But with AIs' tendency to "hallucinate" and confabulate, there's an increasing recognition that these AI judgments require a "human in the loop" to carefully review their judgments... There just aren't that many customers for a product that makes their own high-stakes projects better, but more expensive. There are many low-stakes applications — say, selling kids access to a cheap subscription that generates pictures of their RPG characters in action — but they don't pay much. The universe of low-stakes, high-dollar applications for AI is so small that I can't think of anything that belongs in it.
There are some promising avenues, like "federated learning," that hypothetically combine a lot of commodity consumer hardware to replicate some of the features of those big, capital-intensive models from the bubble's beneficiaries. It may be that — as with the interregnum after the dotcom bust — AI practitioners will use their all-expenses-paid education in PyTorch and TensorFlow (AI's answer to Perl and Python) to push the limits on federated learning and small-scale AI models to new places, driven by playfulness, scientific curiosity, and a desire to solve real problems. There will also be a lot more people who understand statistical analysis at scale and how to wrangle large amounts of data. There will be a lot of people who know PyTorch and TensorFlow, too — both of these are "open source" projects, but are effectively controlled by Meta and Google, respectively. Perhaps they'll be wrestled away from their corporate owners, forked and made more broadly applicable, after those corporate behemoths move on from their money-losing Big AI bets.
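To make the federated-learning idea concrete: below is a minimal sketch of federated averaging (FedAvg) in PyTorch, one standard way commodity machines can jointly train a model without pooling their data. The model, data, and hyperparameters are illustrative assumptions, not anything from the article.

```python
# Minimal FedAvg sketch: several "clients" train copies of a shared model
# on their own private data; only weights travel back to be averaged.
import copy
import torch
import torch.nn as nn

def local_update(global_model, data, targets, lr=0.01, steps=5):
    """One client's round: train a private copy on private data."""
    local = copy.deepcopy(global_model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(local(data), targets).backward()
        opt.step()
    return local.state_dict()

def fed_avg(global_model, client_states):
    """Server's round: average the clients' weights into the shared model."""
    avg = {k: torch.stack([s[k].float() for s in client_states]).mean(dim=0)
           for k in client_states[0]}
    global_model.load_state_dict(avg)
    return global_model

model = nn.Linear(4, 1)  # toy "global" model
clients = [local_update(model, torch.randn(8, 4), torch.randn(8, 1))
           for _ in range(3)]  # three clients, three private datasets
model = fed_avg(model, clients)
```

The salient property is that no client's raw data ever leaves its machine, which is why the technique maps naturally onto scattered consumer hardware.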
Our policymakers are putting a lot of energy into thinking about what they'll do if the AI bubble doesn't pop — wrangling about "AI ethics" and "AI safety." But — as with all the previous tech bubbles — very few people are talking about what we'll be able to salvage when the bubble is over.
Thanks to long-time Slashdot reader mspohr for sharing the article.
I asked chatgpt (Score:3)
Re: I asked chatgpt (Score:5, Insightful)
That's because some people think AI is a bigger deal than it really is. For example, rsilvergun thinks it will replace every job on the planet
I think so too. But it's not because AI is capable of performing the jobs it will displace, it's because greedy CEOs and "decision-makers" will think it's capable enough.
AI workers will do a terrible job compared to human workers, but AI workers cost almost nothing, don't take sick leaves or holidays and don't unionize. That's reason enough for capitalism to decide to ruin everybody's livelihoods on a massive scale.
AI will be a societal disaster, but it won't be AI's fault: it'll be ultra-capitalists' and greedy tech-bros.
Re: I asked chatgpt (Score:5, Insightful)
You're a certified idiot.
There's no better way to disqualify anything you say next than using ad-hominems. So right off the bat, you lose. And fuck you too.
AI tools are not "free."
They don't need to be free. They just need to be cheaper than humans doing the same job.
Re: I asked chatgpt (Score:1)
Not only do they need to be cheaper, but they need to be able to do the job to an acceptable standard. If we go back to the rsilvergun example, it may very well be adequate, because in his case it might be an upgrade over his Antonio Banderas blow-up doll with lifelike testicles. But if we go back to the car dealership example from a few days ago, it obviously didn't meet their standards, so they got rid of it.
Re: I asked chatgpt (Score:2, Insightful)
Acceptable standards are pretty low nowadays
Re: (Score:2)
More accurately, they need to be BELIEVED to be able to do the job to an acceptable standard.
If the people making the decisions can't tell at the time they make the decision, then all that's important is what they believe. It may kill the company...but that's next year.
Re: I asked chatgpt (Score:1)
Re: (Score:2)
We are however still quite a way from cheap machines outright replacing people one for one.
Cheaper? Nope. (Score:2)
The "cheaper" part is the part Cory is pointing out.
In practice, most of the commercial AI models from companies such as OpenAI cost insane amounts of resources.
Not only does training them cost mind-bogglingly large amounts of both energy and labor, but even running them requires vast data centers (energy hungry, for the servers and for cooling them) and an army of low-paid workers (whose job is to make sure, e.g., the chat bot doesn't start spewing racist nazi propaganda, or that the image generator doesn't
Re: I asked chatgpt (Score:3)
I hear you. But based on my interaction with offshore call centers, I think LLMs will generally outperform them. This is not a troll.
Re: (Score:1)
Re: (Score:3)
Probably. But "completely useless" to "less useless but still completely useless" is not actually an improvement. Maybe people will get a bit less enraged, but that hardly makes all the effort worthwhile.
Re: (Score:2)
Maybe people will get a bit less enraged, but that hardly makes all the effort worthwhile.
Did you forget the "also cheaper" part?
Re: (Score:3)
I hear you. But based on my interaction with offshore call centers, I think LLMs will generally outperform them.
But that's the problem; they'll use them to make "better" call centres, whatever that means, which is usually better for them, not for us. What would make our lives "better" would be LLMs that interact with the corporations on our behalf so that we can cancel that subscription, challenge those additional unjustifiable & possibly illegal charges on our bills, etc., & generally not waste endless hours on their enshittified platforms & call centres & legal departments while they try to wear us
Re: (Score:2)
Compare your opinion to the reality of outsourcing; sure, a lot of jobs are lost, but then crisis happens and they come back. The crises for AI will come as the providers fail, as better solutions come up, or as the limitations become obvious.
I look forward, though, to specialized small language models that require fewer resources and are faster to train for a specialized task.
Re: I asked chatgpt (Score:3)
I think so too. But it's not because AI is capable of performing the jobs it will displace, it's because greedy CEOs and "decision-makers" will think it's capable enough.
They can think that all they want, but remember: Their business lives or dies based on their decisionmaking. If they make poor decisions...
Re: I asked chatgpt (Score:4, Insightful)
Their business lives or dies based on their decisionmaking. If they make poor decisions...
That's only true when a company making bad decisions has meaningful competition: customers can vote with their wallets.
However, it doesn't hold true when the company is a monopoly, or if the entire industry they're in does exactly the same thing.
Case in point: all car manufacturers today fill their vehicles to the brim with privacy-invading technologies and subscription-based features. If you don't like it, tough cookie: nobody makes privacy-respecting cars that you fully own anymore.
This is exactly what's about to happen with AI. Everybody will do AI, so that everybody's formerly manned products and services will massively enshittify, and you won't have an alternative choice as a consumer.
Re: (Score:2)
Re: (Score:2)
Nitpick: capitalism does not "decide" to do things. Capitalism is just an economic system in which private citizens can own the means of production. Greedy people are present in every economic system, and wind up ruining it for everyone else in every case.
AI won't replace all jobs by itself. For example, AI alone cannot replace electricians. The manual labor component of those jobs requires on-site agents with more physical abilities than robots currently possess.
I say "currently" because the tech conti
Re: (Score:3)
AI workers will do a terrible job compared to human workers, but AI workers cost almost nothing, don't take sick leaves or holidays and don't unionize.
What? Haven't you ever visited a fully automated factory where robots are doing all the work, from assembly to paint work to packaging? Those robots ARE AI workers, with computerized vision and other technologies to drive them. I would argue that most of those are far better (in speed and quality) than humans doing the same work. In this latest shift, the new AI capabilities have just been extended to new areas. It's not revolution, it is evolution. Gradually making AI workers more capable.
Re: (Score:2)
AI is a bubble, but I think there is far too much conflation of one kind of AI with another.
What we have right now is not General AI; it's three and a half kinds of AI that are "kinda rubbish":
- CryptoCoins/NFT/Ethereum = Garbage AI, wastes shit loads of energy and produces nothing useful
- Generative AI = Aims to replace a creative worker, wastes a shit load of energy, and due to how the datasets were obtained, legally dubious
- Assistive AI = Aims to "auto complete" a work, like generative AI, but for text a
Re: (Score:2)
Well, there is a lot of actually useful AI, just not in the current hype. No idea why you classify crapto and NFTs as "AI". They are not.
Re: (Score:2)
Yet Doctorow seems unaware of basically all of it. I mean, for example:
Like, this is demonstrably not true? A $1.4k (when new!) 300W-underclocked RTX 3090 can generate ~140 characters per second on a Mixtral GGUF. Maybe, what, 2 seconds for your average reply? 43.2k responses per day, 15.8 million per year? 0.00017 kWh per reply? A hundred thousand replies for a dollar with servers located in a place wi
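For what it's worth, the parent's arithmetic checks out. Here is the same back-of-envelope calculation spelled out, with the electricity price as an assumed figure, since the comment is cut off before naming one:

```python
# Reproducing the comment's numbers for local inference on a
# 300 W-underclocked RTX 3090. The $0.06/kWh price is an assumption.
power_w        = 300                                      # GPU draw
secs_per_reply = 2                                        # ~140 chars/s
replies_per_day  = 24 * 3600 / secs_per_reply             # 43,200
replies_per_year = replies_per_day * 365                  # ~15.8 million
kwh_per_reply    = power_w * secs_per_reply / 3.6e6       # ~0.00017 kWh
price_per_kwh    = 0.06                                   # assumed, USD
replies_per_dollar = 1 / (kwh_per_reply * price_per_kwh)  # ~100,000

print(f"{replies_per_day:,.0f} replies/day, {replies_per_year / 1e6:.1f} M/yr")
print(f"{kwh_per_reply:.5f} kWh/reply, ~{replies_per_dollar:,.0f} replies/$")
```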
Re: (Score:2)
I was not referring to the AI in the current AI hype when pointing out that there is actually useful AI and I said that explicitly. Regarding the AI in the current hype, I agree with Doctorow and his arguments. The main problem with the current hype is that hallucinations cannot be fixed and that limits the use of these models rather severely.
Re: (Score:2)
Except that for a huge and growing number of medical tasks, AI performs better than humans,
You have to be careful with this one. In some studies they perform better than humans, but it's not always a fair competition.
Re: (Score:2)
There are cases where it is a fair comparison?
Re: (Score:2)
Re: (Score:2)
Protein folding is not something humans would normally do, though.
*Commercial* Ai models (Score:2)
(go read the full text on Cory's blog [pluralistic.net])
Even more important, these models are expensive to run....
Like, this is demonstrably not true?
The subject that Cory discusses is the commercial AI-as-a-service companies, such as OpenAI: there seems to be an arms race in that field to build the biggest model ever (see GPT-4, etc.)
These companies' business model isn't workable in the long term. ChatGPT *is* very costly to run in the long term.
And tons of start-ups directly rely on the APIs of ChatGPT and similar.
The low price of such API is artificially kept low by burning investors' money.
The day the commerc
Medical AI ; licensing (Score:2)
...and further on (sorry for the split posting)
Except that for a huge and growing number of medical tasks, AI performs better than humans
Medical doctor speaking. In short: Nope.
More precisely: there's a growing number of big public announcements which are picked up by magazines, blogs, etc. (and here on /.)
It's basically start-ups which need to drum up whatever slightest sign of promise of success they've been lucky to hit.
(and remember that almost no one is interested in reporting failure).
and academic groups pushing the currently popular buzzwords to attract a bit more funding (I work in rese
Re: (Score:2)
Yep. People trying to get grant money, people trying to keep their jobs and some assholes trying to get rich on a lie.
Here is an anecdote: I teach a security lecture course. In one exercise (a very, very simple firewall config), one group tried to use ChatGPT to find the answer. Their conclusion was "completely worthless". (The exercises are not graded, so no issue. And they were completely open about it.)
Re: (Score:2)
I don't care about "big public announcements", I care about peer-reviewed research.
And if you don't, then don't pretend to speak on behalf of science.
Re: (Score:2)
Yeah, nah. One, you can't look up bar exams a year in advance on the internet. Two, something appearing once on the internet doesn't mean it can be memorized, things have to be widespread to be memorized. Three, you're en
Re: (Score:2)
And yeah, crypto "proof of work" algos are mainly just a giant game of "Guess The Magic Number!", over and over again. Not even remotely related to AI.
Also, their "categories of AI" aren't actual categories.
Jobs that make shit up (Score:2)
What is wrong with bubbles? (Score:2)
If the Fed had bailed out Lehman Brothers like it bailed out Silicon Valley Bank, would the housing "bubble" ever have popped (and didn't we go right back into another housing bubble a few years after the 2008 "crisis")?
What is with this bubble-phobia?
Re:What is wrong with bubbles? (Score:5, Informative)
What is wrong with bubbles? They cause a lot of resources to be allocated to things that don't really deserve it. Wikipedia [wikipedia.org] has a fuller explanation of the negative impacts of bubbles.
Re: (Score:2)
Re: (Score:2)
Has Quantitative Easing proved the Fed can make investors whole without needing taxes?
Re: What is wrong with bubbles? (Score:2)
Re: (Score:2)
What is wrong with bubbles? They cause a lot of resources to be allocated to things that don't really deserve it. Wikipedia [wikipedia.org] has a fuller explanation of the negative impacts of bubbles.
Have no fear. The government is always near. So says great investor Bill Ackman.
Ackman, who runs Pershing Square Capital Management, and is not averse to an apocalyptical outburst, said the banking sector needed a temporary deposit guarantee immediately until an expanded government insurance scheme is widely available.
“We need to stop this now. We are beyond the point where the private sector can solve the problem and are in the hands of our government and regulators. Tick-tock.”
So long as the government continues to force us taxpayers to hand over our money to protect these people, bubbles will keep being exploited.
Re: (Score:2)
Is this a serious question?
Re: (Score:2)
Why not, since all the answers ignore that resources are always being allocated inefficiently, even now (wars destroy production ...), and that no taxpayer was debited anything for Fed bailouts?
The real question is what happens when it bursts? (Score:1)
Jesus Comes.
AI bubble-learning (Score:4, Interesting)
After getting a degree in CompSci, teaching myself through APL's learning knothole gave me a lifetime of skills. I didn't know how at the time, but it was glaringly obvious this augmented, computer-enhanced learning feedback loop was education's answer to classroom drudgery.
APL gave way to R and then to matrix programming, which stole away its scalability to symbolic learning. The parallels now suggest AI and ChatGPT will follow APL's path: AI will specialize into obscurity, with only the Wall St. wealthy able to profit from the resources it commands.
The general application to learning is AI's sacrificial lamb, as it's arguably no more useful at teaching than an electronic calculator.
Re: AI bubble-learning (Score:2)
I mod up references to APL. I have to program Java for a living and it sucks. Then again, I only like to read code I wrote. If I have to read other people's code, I'd rather read their Java than their APL.
Re:AI bubble-learning (Score:4)
Re: (Score:2)
so they make shit up and base decisons on that, (Score:2)
Re:so they make shit up and base decisons on that, (Score:5, Insightful)
You have gotten your politics, and humans for that matter, wrong. The reality is that the human being is a rationalizing animal, not a rational animal, and politics abuses this.
Standard playbook of the human being: 1) Think of what you want to do. 2) Think of a plausible argument in support of what you want to do, and convince yourself it is indeed the rational thing to do. 3) Do what you want to do, and congratulate yourself for being so smart.
Do notice that the argumentation comes after, not before, the decision; do take the time to realize that this is also what I, you, and everyone else do; and do take the time to think and wonder about it. 99.999% of the time nobody is rational: decisions are made on emotions, and mostly this is the correct way, because our feelings, not our rationality, are what make us feel good day to day. But this also means we sleepwalk into rationality-requiring decisions thinking that we are being oh-so-smart-and-rational, while in reality we are being emotional as usual. Funny, that.
Now in any case you are all fine and okay to say that the shit republicans make up is, well, shit, but that is beside the point. The same goes for any other shit too, be it democrats or just your stupid neighbour. If you are a politician, what you want to do is sell off your decisions to finance your campaign, lifestyle and retirement fund. We all know this is true, and this is why the govt institutions poll worse every year. Yet we convince ourselves that somehow this or that guy of ours is the one epitome of honesty and integrity in all of the heap of shit, and that this guy will be our saviour, until he once again proves us wrong like he was going to, and we flock to yet another one...
But I digress. So whatever your party and whoever your voters, we all know what buttons you have to push to make a particular voter bloc jump. So you push the buttons and the voters will associate their preferred feelings with your policy proposal. And this will be your licence to go and do what you were going to do, sell off the country and the people, one piece at a time. And once again, you are right to say that one party's bullshit is stupider than the other party's, but in the end both of them relate to shit like two cheeks of the ass, and both of them are out for the same thing. But as long as they have us hooked to the emotions they provide, they are both free to do whatever they want.
All I know.... (Score:4, Insightful)
Is that Copilot has been a huge boost to my programming productivity. While it's not popping out code that is 100% usable, it just takes a bit of debugging and I'm good to go. I've been on a free trial, but I plan on paying the subscription once that's done.
Spherical (Score:2)
Like all bubbles, it will be spherical. Duh.
The hype cycle is a well known and well studied thing. I don't think we have to ask what a tech bubble looks like.
it's the kind of bubble that does not pop (Score:2)
Consider that AI use of GPUs was second priority to gaming. Not anymore.
Consider that the Turing Test has been passed but everyone just shrugged.
Consider the huge money now flowing into every aspect of AI, with OpenAI recently valued at $100 billion.
Consider that AIs can beat humans in EVERY single game out there.
Consider that new tougher benchmarks need to be invented to score the new models.
Consider
Re: (Score:2)
Eh, AI still isn't "smarter" than anyone - it is still literally just a tool that can do some types of pattern matching faster than humans (same as any computer can do math operations faster than humans) without getting tired or emotional, and with eidetic memory (the occasional SEU notwithstanding).
Of course, if "matching patterns" is what you mean by intelligence then yes AI is intelligent.
Matching patterns, or even finding new patterns in existing data, to me isn't really "intelligence" - it's just apply
Re: (Score:2)
Re: (Score:2)
Matching patterns, or even finding new patterns in existing data, to me isn't really "intelligence" -
It is something that intelligence can be used to do, but intelligence isn't the only way to do that.
Humans use their intellect to count, and that is intelligence; but odometers count without using intelligence. There's more than one way to do things.
Re: (Score:3)
The Turing test, AKA "the imitation game", has not been passed. It hasn't even been approached. And if you're going to call anyone being fooled into thinking a computer is intelligent passing the Turing test, that was done by the original Eliza program. (The guy who called in tried to get her fired for being a smart-ass. [over a teletype, of course])
Re:it's the kind of bubble that does not pop (Score:4, Informative)
Consider that the Turing Test has been passed but everyone just shrugged.
It didn't pass the Turing test [independent.co.uk]. That's why people shrugged. People with understanding also rolled their eyes.
New here? (Score:2)
That Cory Doctorow must be new to capitalism or something. When the AI bubble bursts there will be another bubble to jump on. I hope that bubble is robotics.
The bubble != AI (Score:2)
There certainly is a bubble going on right now, with companies dumping ridiculous amounts of money into training models. But don't think that bubble is the same thing as AI. It's just a bubble. AI was progressing fast, solving real world problems, and growing in popularity for years before the current bubble got started. The bubble is speeding it up by getting investors to dump lots of money into it, but it doesn't need them. When the bubble bursts and those investors lose money, AI will keep right on
we've all seen it before (Score:2)
Yes, the hype is strong, but... (Score:5, Insightful)
...real progress is being made.
While today's chatbots are close to useless for serious work, their emergence was kinda unexpected and has forced researchers to change their assumptions.
Will progress continue? Accelerate? Or will the work hit a dead end? Nobody knows.
What makes it a different kind of bubble is that there is real, serious research going on at the same time as the financial shenanigans and skullduggery.
I'm optimistic, but also realize that the hype vastly exceeds the actual progress.
I once worked on a very well funded VR project for a major corporation and clearly remember the VR hypemongers writing vastly overoptimistic fantasy fiction. I see the same thing happening with AI. VR still hasn't come close to the fantasies, and I suspect that it will take a while for AI to become truly useful.
Cory Doctorow falls for a classic AI analysis fallacy (Score:2)
I like his novels, but he's fallen for a classic AI fallacy in TFA.
AI/ML models don't need to be perfect, they just need to be better than us in price/performance. Big difference. If an AI diagnosis is 5% worse than the best expert on a given condition, I'll still use AI if it's only as far away as my phone and the humans in my vicinity are mediocre doctors.
There sure is AI hype and many companies won't get rich by AI, but that's because AI will replace them with no revenue left for them, not that AI is a
Re: (Score:2)
Re: (Score:3)
At least Bing's AI does a pretty good job of faking being upset. I asked it for a certain programming example of something in a particular language and it spit out some code with an obvious syntactical error in it, which I pointed out and it said, "oh sorry i'll correct that for you." And spit out the exact same code again, same error. Again I pointed out the error and it again said sorry and tried again. After the third time telling it it was still wrong, it got mad and said it wasn't going to talk to me
Re: (Score:2)
After the dot-com bubble (Score:3)
The internet, and the web, didn't go away. It was just the silliest concepts that failed. That bubble was froth around real advances in technology, that we still enjoy and use today.
Crypto never solved a real problem, so that bubble is bursting in a big way and isn't likely to come back.
AI is in the first category. Yes, there will be froth, there will be hare-brained solutions that don't do anything useful. But at its core, AI solves real problems. It's not going away.
The cost of running LLMs (Score:2)
Right now, it's really expensive to run LLMs. Every new technology is expensive at first. Over time, I expect the price of running LLMs will come down significantly.
Re: (Score:3)
It's really expensive to train LLMs. Running an LLM after it's trained can be done on hardware as light as a smartphone processor. Once an LLM is trained and in-hand, its training is a sunk cost. I'd expect a trained LLM to be in use for a very long time and not retrained for quite a while, years to decades. I really don't know why OpenAI keeps retraining their models.
Re: (Score:2)
From the summary:
Even more important, these models are expensive to run.... Do the potential paying customers for these large models add up to enough money to keep the servers on?
So yeah, training is expensive. But according to the author, running the models is also expensive.
Re: (Score:2)
From the summary:
Even more important, these models are expensive to run.... Do the potential paying customers for these large models add up to enough money to keep the servers on?
So yeah, training is expensive. But according to the author, running the models is also expensive.
Running them is too expensive to use? Currently the GenAI world might be running at a deficit, using investor-supplied money, but we can expect architectural and algorithmic improvements in the very near future to drive these costs down, with all the money flowing into it.
But over the next several years this will definitely stop being true, if it is true now. The cost of computation has been dropping since 2000 by about an order of magnitude a decade. So if a GPT response costs X now, with no other improvement
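Spelling that projection out (purely illustrative numbers, and assuming the ~10x-per-decade cost trend the parent cites actually continues to hold):

```python
# If a response costs x_now today and compute costs fall 10x per decade,
# the projected cost after t years is x_now * 10 ** (-t / 10).
def projected_cost(x_now: float, years: float) -> float:
    return x_now * 10 ** (-years / 10)

for years in (5, 10, 20):
    print(years, "yrs:", projected_cost(1.0, years))  # ~0.32, 0.1, 0.01
```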
Re: (Score:1)
Long story short, the author is wrong, and will become even more wrong over time. Cloud based LLMs like ChatGPT are not the cheapest to run right now, but it's not like they aren't economical either. With each passing week, advances are made in these models to make them available for local use, provide more powerful customization features, and make them run on less and less hardware.
Re: (Score:2)
Mr. Doctorow is entirely wrong; running pre-trained GPT/LLMs is only expensive if you do it (very) poorly. You can put a quite capable GPT/LLM on your desktop and generate results against your own queries for a tiny fraction of a cent. See GPT4All [gpt4all.io], for instance. Try the Hermes ( nous-hermes-llama2-13b.Q4_0.gguf ) model; uncensored, local, private, no network interaction unless you opt for reporting back.
Running current technology GPT/LLM syst
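A minimal sketch of the local setup the parent describes, using the gpt4all Python bindings and the model file named above; treat the exact API as an assumption to verify against current gpt4all documentation:

```python
# Local, offline inference as described above (pip install gpt4all).
from gpt4all import GPT4All

model = GPT4All("nous-hermes-llama2-13b.Q4_0.gguf")  # downloads if absent
with model.chat_session():
    reply = model.generate("Explain what a tax deduction is, briefly.",
                           max_tokens=128)
    print(reply)
```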
Re: (Score:2)
The standard of "it can run on your desktop" is not the bar, when it comes to the cost of running an LLM. The question is, does it take more horsepower than traditional search? That's a different question, and matters a lot, at scale.
Also, all that extra "unnecessary" stuff you mentioned...it all counts towards the cost.
Re: (Score:2)
The standard is what the actual achievable, practical cost is, and it is minimal. Trying to point to an inefficient implementation and claiming it's "the cost" is absurd. And in fact, the actual single-PC low cost currently is the bar. I suspect the bar will soon be a smartphone. We're not quite there, but it very much looks like the next milestone.
Re: (Score:2)
I don't doubt that performance will improve. But the desktop PC "standard" is irrelevant. I can run a full instance of SQL Server on my desktop, and it performs as well as it does on most actual servers. That proves nothing. If we had 500 users connecting to my desktop SQL Server all at once, that's when we would start to see the differences between a desktop and a real server. OpenAI and Microsoft and Google et al. have to support millions of concurrent users. It's very much not the same as "running an LLM
Ask the lawyers (Score:2)
First, let's separate AI from the bubble. AI technology exists now. Even if nobody can ever train another one, models like LLaMa are out there and will be shared, on the black market if necessary.
So, the bubble is about who, if anyone, actually makes money from AI and LLMs. I think this mostly depends on the legal system and legislation. I see four possibilities:
1. AI is made illegal. It goes underground. It makes less money than drugs; maybe close to as much as selling pirated movies.
2. AI is a copyr
When they start calling it Web 4.0 (Score:2)
When they start calling it Web 4.0, that's when you know the generative AI goose is cooked.
What, a _second_ story agreeing with me? (Score:2)
First one quite critical of Quantum Computing (which I have been for 30 years and I see nothing that would make me change my negative assessment) and now one very critical of the current "AI" hype?
What is the world coming to? I feel my status as high-tech Pandora threatened!
There are already "micro-pops" (Score:2)
Yes, there is a bubble. Yes, things get hyped.
At the same time, the quick-growing bubbles also pop quickly this time. Remember all those startups that built their entire existence around passing PDFs to ChatGPT? Guess what, ChatGPT implemented native PDF import. What about Amazon product page helpers? Amazon is now offering the same service.
On the hardware side, manufacturers are more cautious. The AI cores are designed to serve more than one purpose. They might be accelerating the new fancy model, yes. But they also help improve your native camera application, or make Adobe export several times faster. They are more of a continuation of standard SIMD operations, like Intel SSE or ARM Neon instructions.
Re: (Score:2)
On the hardware side, manufacturers are more cautious. The AI cores are designed to serve more than one purpose. They might be accelerating the new fancy model, yes. But they also help improve your native camera application, or make Adobe export several times faster. They are more of a continuation of standard SIMD operations, like Intel SSE or ARM Neon instructions.
I recall MMX stood for "multimedia extensions" in marketing speak, while it was really more about "matrix mathematix". I wouldn't waste my money on "AI cores" but if they're really just wider SIMD units, I'm much more interested — but even then only if they are freely programmable without some closed SDK. It doesn't need to become a SSE-like CPU extension, I'm fine with something like OpenCL support, where it might actually make more sense.
It's interesting, though, how linear algebra with large arr
"The difference between Worldcom and Enron" ? (Score:2)
What is the difference between them that Doctorow is trying to use to make his point?
Infringements (Score:1)
AIs trained with The Law will scour the Web looking for anything and everything that can be useful to their controllers.
For example, corporations will scan for Patent/Copyright/Trademark infringements. AI esquires will generate new lawsuits by the boat-load, a tanker sized boat-load.
Of course there will be AIs looking for blackmail material and *anything* useful. No person is too small or immune to a Web scan. Happy Future everyone!
Re: (Score:2)
And that's not all the future has in store...
Ya know how you cannot pump your own gas in New Jersey?
https://www.cnn.com/2022/06/18... [cnn.com]
In a future where robots can do a job better than a human, you are going to be legally required to use a human. People need jobs!
Robot taxis? Banned. You get the smelly human taxi. People need jobs.
"Can I have the AI do my taxes? It would take like 2 seconds." No. Illegal! You get the smelly human CPA. People need jobs!
"Prompt engineering" (Score:2)
money talks (Score:2)
Bubble, yes, but LLMs quite significant. (Score:2)