Comment Re:At a lower cost. (Score 1) 36

From the known numbers, software development gets slower (!) with LLM "help". At the same time, software is now often so abysmally bad that it is becoming unsustainable to use, and that was the case even before LLM "help". The ones who actually benefit from LLMs when creating software are attackers, since they need neither reliability, security, nor maintainability in attack code.

Hence I predict that a really big shakeup in the software space, probably including liability and regulation, is not very far off.

Comment Re:They are still working on the build out (Score 1) 36

This is not comparable. Automation generally removes some flexibility (though more recently it may actually do the opposite), but it does not remove reliability; in fact, it generally improves reliability through more standardized products. LLM-type AI kills reliability. And that is not really fixable, because it is baked into the math.

For special purposes, it will probably be possible to create small dedicated LLMs and pair them with a result-checker (one that uses deduction and is hence reliable), but these will look very different from the current crop of general LLMs trained via a massive Internet piracy campaign.
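To make that concrete, here is a minimal sketch of the propose-and-verify pattern, with the "small LLM" stubbed out by an unreliable placeholder. All names here are hypothetical illustrations, not any real API:

    import random

    def propose(task: str) -> list[int]:
        """Stands in for a small, dedicated LLM: fast but unreliable.
        The task string is ignored by this stub."""
        data = [3, 1, 4, 1, 5, 9, 2, 6]
        out = sorted(data)
        if random.random() < 0.3:        # simulate an occasional wrong answer
            out[0], out[-1] = out[-1], out[0]
        return out

    def verify(result: list[int]) -> bool:
        """Deductive check: sortedness is decidable, so this part is reliable."""
        return all(a <= b for a, b in zip(result, result[1:]))

    def solve(task: str, max_tries: int = 10) -> list[int]:
        for _ in range(max_tries):
            candidate = propose(task)
            if verify(candidate):        # reliability comes from the checker,
                return candidate         # not from the generator
        raise RuntimeError("no verified answer within budget")

    print(solve("sort the sample data"))

The point of the design: the overall system is only as reliable as the checker, which is why it has to be deductive rather than another LLM.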

Hence, no. Not comparable. The need for more "slop" is low, and slop was pretty cheap to produce already.

Comment Re:Fishy (Score 1) 36

The question is whether it is actually a beneficial amplifier. It amplifies errors, bad style, genericness, and outright fabrications as well. That may kill the benefit or even make the net usefulness negative. One example: https://mikelovesrobots.substa...

It is quite possible that other areas will be affected similarly. I mean, yes, you can probably replace an incompetent and unreliable employee with LLM-type AI, but why were you employing that person in the first place?

Comment Re:Fishy (Score 1) 36

That argument covers only 50% of the cost calculation. And for some fields (software) we already have numbers saying that not only does it make people slower, the results are also worse. One problem is that LLMs introduce a lot of security bugs, and chances are more of them slip through than code written by a competent person would have contained in the first place. Another problem is that LLMs cannot do software architecture. And another is that beyond a pretty low complexity level, they get confused.
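As an illustration of the security-bug point, the classic failure mode is string-built SQL, the kind of bug LLM-generated code is often reported to contain. A minimal, self-contained sketch; the table and names are invented for the example:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

    def lookup_unsafe(name: str):
        # Typical generated code: works on the happy path, injectable.
        return conn.execute(
            f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

    def lookup_safe(name: str):
        # Parameterized query: the driver handles escaping.
        return conn.execute(
            "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

    payload = "x' OR '1'='1"
    print(lookup_unsafe(payload))   # leaks every secret in the table
    print(lookup_safe(payload))     # returns nothing

Both versions pass a casual happy-path test, which is exactly why this class of bug slips through review.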

For the lawyer, things are open at this time. Say they are 30% faster, but on top of that they carry 200% of the malpractice liability cost and a 1% risk of getting disbarred per year. That may still pan out, pan out partially, or it may completely kill the idea.

Comment Re:Fishy (Score 1) 36

To me it sounds more like "the numbers do not pan out" and "AI is not performing at the level people think it performs". AI replacing workers is only a bad thing if it actually works to a significant degree, and it does not seem to. There are other bad aspects to LLM-type AI, no argument. But this time, the grand promises will (again) probably not pan out.

Comment Re:Opposing military powers (Score 1) 24

Indeed. While spoofing is hard, jamming is very easy: by its very nature, the GPS signal is not very strong and cannot be made very strong. A jammer, on the other hand, is cheap; I would expect that in a powerful GPS jammer, the batteries cost more than the rest of the electronics. You would probably have to build it yourself, but tons of people have the skills.
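A back-of-the-envelope link budget shows why: the GPS L1 C/A signal is specified to arrive at roughly -128.5 dBm, so even a modest transmitter swamps it from a distance. The 1 W and 10 km figures below are illustrative assumptions:

    import math

    GPS_RX_DBM = -128.5      # approx. specified minimum received power, L1 C/A
    FREQ_MHZ = 1575.42       # GPS L1 carrier

    def fspl_db(dist_km: float, freq_mhz: float) -> float:
        """Free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
        return 20 * math.log10(dist_km) + 20 * math.log10(freq_mhz) + 32.44

    jammer_tx_dbm = 30.0     # assumed 1 W transmitter
    dist_km = 10.0           # assumed distance to the receiver
    jammer_rx_dbm = jammer_tx_dbm - fspl_db(dist_km, FREQ_MHZ)

    print(f"jammer at receiver: {jammer_rx_dbm:.1f} dBm")             # ~ -86 dBm
    print(f"margin over GPS:    {jammer_rx_dbm - GPS_RX_DBM:.1f} dB")  # ~ 42 dB

About 42 dB of margin means the jammer arrives more than 10,000 times stronger than the satellite signal, from 10 km away, on roughly a watt.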

Comment Re:Yes (Score 1) 63

So, when you are semi-competent or incompetent, LLMs are more competent than you? No question about that. But LLMs are incapable of producing reasonable production code without somebody who can actually do it manually spending a lot of time fixing what the LLM produced, and that process is _slower_ than having that competent somebody write the code directly. Add systematic errors and bad code maintainability, plus the fact that fixing code is more stressful than writing it and results in less reliable and less secure code, and the whole thing is just an expensive way to produce bad code.

So, no, it will not happen. The only ones finding AI slop-level code profitable are attackers as they do not need reliability or security in exploit code.

Reference: https://mikelovesrobots.substa...

Comment Re:Yes (Score 1) 63

"AI tools are fantastic, I use them extensively too, but right now it's so early that it needs a lot of hand-holding"

AI tools are not "fantastic", and the need for "hand-holding" will not go away. LLMs are decades-old tech; there are no easy improvements or easy wins left, which is why, three years into the hype, they are basically still as incapable as they were at the start.
