Comment Re: I predict this will be short-lived (Score 1) 29
Indeed. In most production scenarios, unreliability is an absolute killer.
From the known numbers, software development gets slower (!) with LLM "help". At the same time, software is now often so abysmally bad that using it becomes unsustainable, and that is already without LLM "help". The ones who actually benefit from LLM help when creating software are attackers, as they do not need reliability, security, or maintainability in attack code.
Hence I predict that in the software space, a really big shakeup, probably including liability and regulation, is not very far ahead.
Where are we seeing actual job growth?
It is right there in the title of the story. Seriously.
Attacking the messenger is most people's default for the whole process.
Yes. Abject stupidity all around.
This is not comparable. Automation generally removes some flexibility (more recently, it may actually do the opposite), but automation does not remove reliability. In fact, it generally improves it through more standardized products. LLM-type AI kills reliability. And that is not really fixable, as it is baked into the math.
For special purposes, it will probably be possible to create small dedicated LLMs and fit them with a result-checker (one that uses deduction and hence is reliable; see the sketch below), but these will look very different from the current crop of general LLMs trained on a massive Internet piracy campaign.
Hence, no. Not comparable. The need for more "slop" is low, and slop was pretty cheap to produce before already.
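For concreteness, here is a minimal sketch of that generate-and-verify pattern in Python. Everything in it is hypothetical: llm_generate() stands in for whatever small dedicated model is used, and the "deduction" step is approximated by running a test suite, so only output that passes a deterministic, repeatable check is ever accepted.

    import subprocess
    from typing import Optional

    def llm_generate(prompt: str, attempt: int) -> str:
        # Hypothetical stand-in for a small, dedicated model.
        raise NotImplementedError("plug a model in here")

    def deterministic_check(candidate: str) -> bool:
        # The reliable part: write the candidate out and run the test
        # suite. Unlike the generator, this step is deductive and
        # repeatable, so a pass actually means something.
        with open("candidate.py", "w") as f:
            f.write(candidate)
        result = subprocess.run(["python", "-m", "pytest", "tests/"],
                                capture_output=True)
        return result.returncode == 0

    def generate_verified(prompt: str, max_attempts: int = 5) -> Optional[str]:
        # Only ever return output that cleared the deterministic check.
        for attempt in range(max_attempts):
            candidate = llm_generate(prompt, attempt)
            if deterministic_check(candidate):
                return candidate
        return None  # the unreliable generator never cleared the bar

Note this only makes the output reliable for properties the checker can actually verify; it does nothing for the generator itself.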
The question is whether it is actually a beneficial amplifier. It amplifies errors, bad style, genericity, and outright fabrications as well. That may kill its usefulness or even make it a net negative. One example: https://mikelovesrobots.substa...
It is quite possible other areas will be affected similarly. I mean, yes, you can probably replace an incompetent and unreliable employee with LLM-type AI, but why were you employing that person in the first place?
Having experienced some AI "translations", "voice acting", and "graphics", I think those jobs are not at risk to any significant degree in the longer term. Replacing competent work with slop takes a while to be reflected in revenue, but it will be.
That argument is only 50% of the cost calculation. And for some fields (software) we already have numbers saying that not only does it make people slower, the results are also worse. One problem is that LLMs produce a lot of security bugs, and chances are that more will slip through than code by a competent person would have contained in the first place. Another problem is that LLMs cannot do software architecture. And another is that beyond a pretty low complexity level, they get confused.
For the lawyer, things are open at this time. Say they are 30% faster. But say that on top of that they have 200% of the previous malpractice liability cost, and on top of that a 1% risk per year of getting disbarred. That may still pan out, pan out partially, or it may completely kill the idea.
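A quick back-of-the-envelope version of that calculation in Python. All figures are purely illustrative assumptions (the revenue, liability, and career-value numbers are made up to match the percentages above, not data):

    # Illustrative only: every number here is an assumption.
    base_billings  = 100_000.0    # hypothetical annual revenue without AI
    speedup        = 1.30         # 30% faster -> 30% more billable output
    base_liability = 10_000.0     # hypothetical annual malpractice cost
    liability_mult = 2.0          # 200% of the previous liability cost
    p_disbarred    = 0.01         # 1% chance per year of disbarment
    career_value   = 2_000_000.0  # hypothetical value of the remaining career

    gain = base_billings * (speedup - 1.0)
    extra_liability = base_liability * (liability_mult - 1.0)
    expected_disbarment_loss = p_disbarred * career_value

    net = gain - extra_liability - expected_disbarment_loss
    print(f"Expected net effect per year: {net:+,.0f}")
    # With these numbers: +30,000 - 10,000 - 20,000 = 0. It barely breaks
    # even, and slightly different assumptions flip the sign either way.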
To me it sounds more like "the numbers do not pan out" and "AI is not performing at the level people think it performs". AI replacing workers is only a bad thing if AI actually works for that to a significant degree, and it does not seem that it does. There are other bad aspects to LLM-type AI, no argument. But this time, the grand promises will (again) probably not pan out.
Indeed. While spoofing is hard, jamming is very easy. By its very nature, a GPS signal is not very strong and cannot be made very strong. On the other hand, I would expect that for a powerful GPS jammer, the batteries are actually more expensive than the rest of the electronics. You will probably have to build it yourself, but tons of people have the skills.
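To put numbers on how weak the signal is: received GPS power at ground level is commonly quoted around -130 dBm. A rough free-space sketch in Python (the -130 dBm figure and the isotropic 1 W jammer are textbook ballpark assumptions, not measurements):

    import math

    F_L1_HZ = 1575.42e6   # GPS L1 carrier frequency, Hz
    C       = 3e8         # speed of light, m/s
    GPS_DBM = -130.0      # approximate received GPS signal power

    def fspl_db(distance_m: float, freq_hz: float) -> float:
        # Free-space path loss in dB.
        return (20 * math.log10(distance_m) + 20 * math.log10(freq_hz)
                + 20 * math.log10(4 * math.pi / C))

    def jammer_power_at(distance_m: float, tx_watts: float) -> float:
        # Received jammer power in dBm at a given distance (isotropic antenna).
        tx_dbm = 10 * math.log10(tx_watts * 1000)
        return tx_dbm - fspl_db(distance_m, F_L1_HZ)

    for d in (100, 1_000, 10_000):
        margin = jammer_power_at(d, 1.0) - GPS_DBM
        print(f"1 W jammer at {d:>6} m: {margin:+.0f} dB above the GPS signal")

Even at 10 km, a mere 1 W transmitter comes in tens of dB above the satellite signal, which is why the batteries can indeed end up costing more than the electronics.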
You are confused as to what requirements information production has. As soon as there is a significant level of false statements, it becomes totally useless in most scenarios. You are just too clueless to understand that.
So, when you are semi-competent or incompetent, LLMs are more competent than you? No question about that. But LLMs are incapable of producing reasonable production code without somebody who can actually do it manually spending a lot of time fixing what the LLM produced. And that process is _slower_ than having that competent somebody do it directly. Add systematic errors, bad code maintainability, and the fact that fixing code is more stressful than writing it and results in less reliable and less secure code, and the whole thing is just an expensive way to produce bad code.
So, no, it will not happen. The only ones finding AI slop-level code profitable are attackers as they do not need reliability or security in exploit code.
Reference: https://mikelovesrobots.substa...
In other words, you have nothing and do not understand relevance. Typical MAGA cretin.
Good luck with that. If they have to prefix every statement with "IANAL", they might not see much use.
AI tools are fantastic, I use them extensively too, but right now it's so early that it needs a lot of hand-holding
AI tools are not "fantastic". The need for "hand-holding" will not go away. LLMs are decades-old tech; there are no easy improvements or easy wins, which is why, 3 years into the hype, they are basically still as incapable as they were at the start.
"This is lemma 1.1. We start a new chapter so the numbers all go back to one." -- Prof. Seager, C&O 351