
Comment Re:on this topic... (Score 1) 141

To me, this is half of the problem right now: standard auto high beams. Someone smashed the lights on my co-worker's Tesla, and I feel for him because they are crazy expensive... but I fantasize about smashing them every single day I go outside in the city during the daytime.

I live in Seattle, where it is overcast a lot, so it's auto high beams all day long. I get up and go for a walk every morning in a nice suburb with wide, winding roads and huge bike lanes, and I get blinded by high beams about every 5 minutes. It's daytime, not even nighttime. It's wrong.

Comment Re: Seems reasonable (Score 1) 85

This is exactly what I was saying... up until I invested more time using the agent-based ones.

So I've dealt with badly written code for most of my professional SW career (15 years). These days I work at a company that makes test instruments... we have many legacy embedded devices, and LLMs have transformed how I work. It's like having a new bit driver that augments your old screwdriver set.

LLMs are chat bots. They get wound up by whatever you get juicing through them. If you're trying to figure something out, you juice it up on that problem: describe it concisely, point it in the right direction as well as you can, and then let it unwind. It sometimes takes some iteration, but it can definitely help you understand things. You have to learn how to manage its context, otherwise the output may be garbage. And you have to know how far you can trust it; it's not always obvious when it is off track, though often it is quite obvious. Learning to use it to produce code also takes practice, so if you aren't getting good results, check out how other people do it and see if that works for you. Definitely experiment. For stuff that matters, I end up personally modifying most of the code before I deliver it.

Comment Re:asking for screwups (Score 1) 118

A more accurate analogy would be "Comparing AlphaFold to an LLM is like comparing a race car to a cargo truck" - they're both AI systems built on similar engines, but designed for completely different purposes.

There are also many other differences, such as the fact that predicting folds is a problem with verifiably correct answers. Also, AlphaFold produces its entire prediction as the output, whereas LLMs are autoregressive: they feed on their own output, one token at a time, to generate the rest of it.
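That autoregressive feedback loop can be shown with a minimal sketch. The "model" below is just a hypothetical lookup table standing in for a real network; the point is only the loop structure, where each predicted token is appended to the input before the next prediction:

```python
# Toy autoregressive generation: each new token is predicted from the
# sequence so far and then fed back in as part of the next input.
# TOY_MODEL is a stand-in lookup table, not a real language model.

TOY_MODEL = {
    ("the",): "cat",
    ("the", "cat"): "sat",
    ("the", "cat", "sat"): "<eos>",
}

def generate(prompt, model, max_tokens=10):
    """Run the loop: output so far becomes the next input."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        nxt = model.get(tuple(tokens), "<eos>")  # predict next token
        if nxt == "<eos>":                       # stop symbol reached
            break
        tokens.append(nxt)                       # feed prediction back in
    return tokens

print(generate(["the"], TOY_MODEL))  # ['the', 'cat', 'sat']
```

A single-pass predictor like AlphaFold has no such loop: the whole output comes from one forward computation, so an early mistake can't compound token by token the way it can here.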
