Comment Re:I remember a time when... (Score 1) 75
AFAIK, the computing power of multiple LLMs combined is not greater than that of a single one. So the proof would carry over.
I did several replacements by hand (soldering iron) to get that. Totally worth it, but obviously socketed switches that anybody can replace are even better.
I agree. There are perverse incentives at work, and they make the product objectively worse.
Essentially: Yes, if you are reeeeally dumb, you can hurt yourself with it.
They all have a cut-off for that: unless they detect voltage and frequency from the external grid, they have nothing to sync to and will not inject power. This really is a pseudo-concern.
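The cut-off logic described above can be sketched roughly like this. The thresholds are illustrative guesses (loosely based on European grid norms), and real inverters implement this in certified firmware, not application code:

```python
# Sketch of an anti-islanding check: a grid-tie inverter only injects
# power when it measures a live external grid to sync to.
# All thresholds below are assumed for illustration.

NOMINAL_V, V_TOL = 230.0, 0.10    # volts, +/-10% window (assumed)
NOMINAL_HZ, HZ_TOL = 50.0, 0.5    # hertz, +/-0.5 Hz window (assumed)

def may_inject(measured_v: float, measured_hz: float) -> bool:
    """Return True only if a valid external grid signal is present."""
    v_ok = abs(measured_v - NOMINAL_V) <= NOMINAL_V * V_TOL
    hz_ok = abs(measured_hz - NOMINAL_HZ) <= HZ_TOL
    return v_ok and hz_ok

print(may_inject(230.0, 50.0))   # healthy grid -> True
print(may_inject(0.0, 0.0))      # dead line, nothing to sync to -> False
```

With a dead line there is neither voltage nor frequency to lock onto, so the check fails and no power is injected.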
And if not, you should be able to replace the whole keyboard for moderate cost and effort. Then everybody can get what they prefer.
Same. This may result in the loss of a major part of a whole generation of senior people, who cannot be replaced easily, and cannot be replaced at all if you lose too many.
There is, presumably, an amount of time savings where this could be justified(at least for things that you, ultimately, only do because they pay the bills; not ones of some intrinsic value); but it seems particularly grim to deal with the changed nature of the work for such paltry savings.
Indeed. And also remember that the LLM companies are all losing money like crazy at this time. There has been enough time now to do the obvious optimizations to bring the cost down. It has not happened. The easy stuff does not do it. And the problem with the harder stuff is that you cannot force it and have no clue when it will happen. It may be 10 years, 100 years or even 1000 years before somebody figures out a way to make LLMs the needed 3 orders of magnitude cheaper to run and to train. It may even happen that somebody proves this is impossible.
Hence the odds are that LLM use will get exceptionally expensive pretty soon. And then these "gains" turn into spectacular losses.
Yep. The evidence is mounting that reviewing LLM output is a hellish job and not good for you. The problem that the business grads in charge seem to have completely overlooked is that they are messing with the work experience of senior (!) people and making it much worse. Senior people have skills and can leave. Senior people will leave when mistreated. There is no known way to compensate for or recover from losing most or all of your senior people. They take a lot of institutional knowledge and specific experience with them, and that cannot be replaced. You may be able to survive for a while with expensive consulting, but you are essentially dead if you were stupid enough to let it happen.
There are indicators that it is so bad that people are leaving well-paid jobs, see e.g. https://medium.com/@Reiki32/wh...
Somebody here called AI code "review resistant" and that seems to be exactly what it is. Bloated, inefficient, insecure, misleading comments, but all gussied up to the max to look like a rockstar (well, a wannabe rockstar) coded it.
Even lower-quality ISP and cloud services generally need 3 nines to be commercially viable to use. AWS recently dropped below that due to use of AI slop code in production and may be in real trouble as a result. For higher-reliability services, 5 nines are the generally accepted minimum, and some things need to go higher still.
Thinking that one 9 is enough is just utterly disconnected and clueless nonsense.
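To make the "nines" comparison concrete, here is the downtime budget each availability level implies per year. This is illustrative arithmetic only; real SLAs define their measurement windows and exclusions differently:

```python
# Downtime allowed per year for an availability of (1 - 10**-nines),
# i.e. 1 nine = 90%, 3 nines = 99.9%, 5 nines = 99.999%.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

def downtime_minutes(nines: int) -> float:
    """Allowed downtime per year at the given number of nines."""
    return 10 ** -nines * MINUTES_PER_YEAR

for n in range(1, 6):
    pct = 100 * (1 - 10 ** -n)
    print(f"{n} nines ({pct:.3f}%): {downtime_minutes(n):,.1f} min/year")
```

One nine allows over 52,000 minutes (about 36 days) of outage per year, while 3 nines allow under 9 hours and 5 nines barely 5 minutes, which is why "one 9" is not a serious reliability target.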
You need to be in a _really_ low quality field if 10% error rates are acceptable. In most areas, error rates of experts making decisions need to be much, much lower to not completely kill profits.
You need to be really incompetent for LLM-type AI to be able to do things you cannot do in your professional capacity. And how do you propose to review these results if you were not competent to obtain them? Review is harder (!) than doing.
Yes, I am familiar with the work you cite. It is not an LLM doing science. It is an LLM dealing with bad data management in a scenario where hallucinations are acceptable. Such scenarios are rare and are caused by human incompetence.
There are even some mathematical proofs for that by now. For example, "creativity" of gen-AI seems to be limited to below the level of a professional by the very approach used: https://www.psypost.org/a-math...
This is a pretty devastating result, because in a larger context, it means that AI-slop is not a temporary problem, but the only thing the tech can really produce.
For code, we already have results that things get slower (https://mikelovesrobots.substack.com/p/wheres-the-shovelware-why-ai-coding) and evidence is mounting that AI code is insecure, inefficient, hard to review, violates KISS and generally a really bad idea except, maybe, for use-once code.
For chat-bot use, we know there are serious psychological risks ("AI Psychosis") that may be more general than currently observed.
On top of that, add that LLMs are still not earning remotely as much money as they cost to train and operate, and it looks like the time of fast gains ("low hanging fruit") is over.
The whole thing is just an excessively bad idea from start to end. Some people saw that right from the start. I guess others need to get caught up in a few catastrophes. And still others will not be able to accept it even when it kicks them in the face. The really hilarious thing is that if LLMs had worked well, putting them in everywhere would still have been an excessively bad idea. A classical lose-lose situation made possible by a ton of wishful thinking and denial.
You are confusing two issues here. Obviously, there is always some resistance to changes in tooling and procedures, because some people fear being left behind. But that is not what this study is about at all.
If what they've been doing hasn't solved the problem, tell them to do something else. -- Gerald Weinberg, "The Secrets of Consulting"