Comment Re:So ChatGPT is a magnificent cut-and-paste machi (Score 1) 73

This is vacuous nonsense. LLMs have the ability to generalize. They can actually do shit. They can, for example, translate languages, decode base64, solve simple ciphers, double recipes, and apply knowledge learned via ICL, all with varying degrees of success. While their behavior is generally rather rote, blanket dismissal as a glorified random number generator or a next-character predictor fails to speak in any useful way to demonstrated capabilities.

No, they can't generalize. They can produce random characters based on training data. If you're talking about 'summarize this paper', it is just producing random characters that fit the training data. If you think it 'thinks' or 'does shit', beyond producing random characters that fit the training data (or context data, if you're loading it that way for MCP or tooling shit), then you have a fundamental misunderstanding of how LLMs are both 'trained' and how they function. (We use the word 'training', but what we really mean is n-gram/token probability extraction from a given set of data.)
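
To make that concrete, here is a toy sketch of the idea (heavily hedged: a real LLM encodes these statistics in neural-network weights rather than a literal count table, but the "predict the next character from training-data statistics" framing is the same):

from collections import Counter, defaultdict
import random

def train_bigram(text):
    """'Training': count, for each character, how often each character follows it."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, ch):
    """'Inference': sample the next character in proportion to training frequency."""
    followers = counts.get(ch)
    if not followers:
        return " "  # never saw this character in training; fall back to a space
    chars, weights = zip(*followers.items())
    return random.choices(chars, weights=weights)[0]

corpus = "the cat sat on the mat and the cat ate the rat"
model = train_bigram(corpus)
out = "t"
for _ in range(30):
    out += predict_next(model, out[-1])
print(out)  # plausible-looking output stitched entirely from training statistics

Run it a few times and you get different strings each time, all of which 'fit the training data'. That's the whole point.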

Comment Re:So ChatGPT is a magnificent cut-and-paste machi (Score 1) 73

An LLM is not a "cut and paste machine" unless you're talking about your own usage of it, and your discussion devolved into nonsense about an LLM pruning paths. If that's not what you meant, perhaps you should re-read the post *you* responded to. You said:

Probably not with the old school programs that were more brute force with limited pruning, maybe more so with ML based series pruning?

Explain to me, with your knowledge of machine learning, how an ML model 'prunes paths'. Or, if you don't know, name one model that does this for chess and isn't a dedicated chess engine.

Comment Re:So ChatGPT is a magnificent cut-and-paste machi (Score 1) 73

Yes, but are they really "playing"? Like a human they are analyzing a series of moves and countermoves, perhaps a longer series than a human, but is it "instinctively" pruning those paths to explore like a human? Probably not with the old school programs that were more brute force with limited pruning, maybe more so with ML based series pruning?

You do not understand how LLMs work. It's not playing or analyzing anything at all. It's a glorified random number generator that is literally predicting characters, in this case chess move sequences. It's not judging those sequences by rules, or using lookup tables or probability matrices for movement. It's literally predicting, character by character, what it's been trained on.

The fact that it plays a 'real' game and tells a master he played a good game is meaningless; it would just as happily tell a child that its ideas about why blue is the best color are brilliant.

It 'plays' a 'good' game simply because it's not trained on invalid chess moves (or they are so outnumbered by 'correct' chess moves as to be meaningless), so the prediction engine won't predict invalid moves, and there's no 'pruning' involved, whatever that means (reducing tree-branch choice, I suppose?).
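
For the record, 'pruning' in a classical engine means provably skipping branches of the search tree that cannot change the final move choice. A minimal alpha-beta sketch, where children() and evaluate() are hypothetical stand-ins for a real engine's move generator and position scorer:

def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    # Classic alpha-beta search: explore the game tree, but skip ("prune")
    # branches that provably cannot affect the result.
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: remaining siblings are pruned unexamined
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True, children, evaluate))
            beta = min(beta, value)
            if beta <= alpha:
                break  # alpha cutoff: remaining siblings are pruned unexamined
        return value

An LLM predicting move text does nothing like this. There is no tree, no evaluation, no cutoff, just the next character.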

Comment Re:Where does the salt go? (Score 1) 11

Even if they were pumping just the water and leaving the salt behind at a rate that could create a cloud (ignoring that the sun does this *all day, every day, and has for a billion years*, and that anyone who found a way to do it at that scale would be getting rich off their desalination invention), I think you underestimate just how much salt and water there is, and how many billions of litres of water they'd have to pump into the air to increase the salinity by even 1%, and that's assuming those clouds didn't immediately rain back down into the ocean as water.
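
Back-of-envelope, for scale (ballpark figures only: roughly 1.335 billion cubic kilometres of ocean, and salinity only rises if the salt stays behind when the water leaves):

# How much water would have to leave the ocean (salt left behind) to
# raise average salinity by 1% relative? Rough textbook numbers only.
OCEAN_VOLUME_KM3 = 1.335e9        # total ocean volume, ~1.335 billion km^3
LITRES_PER_KM3 = 1e12             # one cubic kilometre is 10^12 litres

ocean_litres = OCEAN_VOLUME_KM3 * LITRES_PER_KM3   # ~1.3e21 litres

# Salinity = salt mass / total mass, so a 1% relative rise means the
# total water mass must fall to 1/1.01 of itself, i.e. remove ~0.99%.
fraction_removed = 1 - 1 / 1.01
litres_removed = ocean_litres * fraction_removed
print(f"{litres_removed:.1e} litres")   # ~1.3e19 litres

That's on the order of ten quintillion litres, which is a bit beyond 'some pumps on a barge'.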

Of course, if you'd thought about this for 5 seconds you'd know this, but judging by your comment history, you like your conspiracies, so I'm hoping to educate others who stumble across this post.

Comment Re:Forget it (Score 1) 11

Did you even read the summary of the paper? To quote: "Coral cover remains high while impacts of mass coral bleaching yet to be determined". Later, even with pictures for those who don't like to read, it shows that coral cover in the northern and central Great Barrier Reef is at an *all-time high* (since measurements started in the 1980s).

However:

The high coral cover reported this year is good news but does not mean all is fine on the GBR as it continues to face cumulative stressors. 2024 saw the fifth mass coral bleaching event since 2016 with the largest spatial footprint of coral bleaching yet recorded, coupled with the impact of two tropical cyclones.

And not everyone is as pessimistic as you. Some scientists hope that governments can be convinced to play a bigger role in reducing the pollution that causes climate change, and so they want to find ways to 'buy time' for the things that are more susceptible than others. Nature does its own damage with cyclones, which are also increasing due to the effects of climate change, and the barrier reefs are important to the health of the greater ocean too, with many species migrating to and from them. Nature will find a way, but humans might not, so doing what we can to maintain the planet's natural cycles will benefit the longevity of our species.

Comment Re:Hah! (Score 1) 41

I'm sure he doesn't mind "AI" supporting a good developer as a tool, but not an "AI" to actually "contribute" "code".

Again, go and read the LKML and see the recent discussions on this from the last year. Based on your previous comments here, I highly doubt you are having personal conversations with Linus Torvalds, but if you are, I'd love to hear about it. Mr. Torvalds has talked publicly in interviews about AI and has never said no to contributions from LLMs.

Anyway, not one person in the LKML discussion said it was a bad idea. In fact, everyone knows that some contributions were already written by an LLM; at least this way you know who to blame and who signed off on it, people get the 'right' path to do this (rather than hiding it, or doing it anyway and not telling you), and maintainers can find patterns of bad behavior. (There's a summation of the discussion for you.)

Comment Re:No. Stop. Just no. (Score 1) 41

You can disagree, but your viewpoint is the view of 'right now, this moment'. You cannot stop the fact that people are seeing an increase in personal productivity from using an LLM, even if you do not and all you see are the problems (of which there are many; I am not an apologist for LLMs). You cannot stop the billions of dollars and euros being thrown into LLM research.

You can stop using it in your own world, in your own life, and I completely sympathize with that point of view, but you cannot kill LLMs' viability for 'real work'. You can, at most, delay or slow it by exposing the problems, until someone else comes along and fixes those problems.

The real problem is how we redistribute the wealth that the top AI companies will make by putting white-collar workers and artists out of work. I'm 100% OK with AI taking my job one day, but I would like to share in the wealth and have a path for my children other than manual labor.

Comment Re:No. Stop. Just no. (Score 2, Insightful) 41

it's that people are submitting patches partly made using AI and not telling anyone.

ABSOLUTELY THIS. An LLM can write good code, or it can hallucinate and fall down its own rabbit hole, generating garbage to fit garbage. It needs *extra* review, and this ensures a path to that review.

The general reaction here these days is old-man-syndrome 'AI IS THE DEVIL', but Pandora's box has been opened. You can't get rid of coding models at this point.

Comment Re:Hah! (Score 1) 41

Linus would sooner allow C++ to touch the kernel than AI.

Did he tell you this? Because recent statements by Linus Torvalds are pretty much, in a nutshell, 'AI is going to be cool, the hype sucks'. (You can find the interview easily enough on Google.) I mean, this whole process exists to make it crystal clear which code was written by an LLM, so that it gets *extra attention* during peer review.

It's not like any LLM today has enough "intelligence" (whatever that means for an LLM) to create a new VMM or IRQ manager or get rid of some locking scheme; it would most definitely create a buggy mess if you tried. But I could see it finding bugs and fixing grammar issues, as the example shows, and what is wrong with that?

Comment Re:Bad Idea (Score 2) 41

This is a bad, bad idea; perhaps with the sole exception of knowing which lines of code need to be scrutinized by a competent developer, and to know which developers need to be banned from making kernel submissions.

If you think that 'kernel developers' are all 100% competent, making good decisions and writing good code, you must never have read or lurked on the LKML.

I'm not saying ML is going to write code as good as even the worst kernel developer's, but ML is not allowed to sign off on a peer review, which means a competent developer *must* scrutinize this code. No one with any knowledge of current limitations (not even a hard-core LLM proponent, not even Gemini 2.5 Pro itself) wants, in the year 2025, any 'smart' machine writing kernel code, at least not without extreme peer review, which is *EXACTLY* what this provides.

Also, MS claims over half their code is already written by machines. Do you think the quality of Microsoft's code is somehow worse, or more 'buggy/poor security', than it was a couple of years ago? I have to imagine it's no better or worse than the mountains of insecure spaghetti that were there before.

Comment Re:Maybe programmers aren't quite obsolete (Score 1) 151

If you have visions, you should get your head examined.

Nice, you're obviously an observant and intelligent person. Also, how would having my head examined help with whatever you think visions are?

Incidentally, humans write sorting algorithms in Python when the ones available (!) in the library (!) do not cut it.

I have done it, because a 2-dimensional sort with intricate border conditions and heuristic elements is not available in the library (also far outside of what an LLM could deliver).

Nice contradiction. *SOMEONE* wrote those libraries; they weren't written by AI 10 years ago. It's *still* a common question in coding interviews, and if you think that's a problem, then consider that question a red flag and end the interview.

Incidentally, a 2-dimensional sort with intricate border conditions can easily be handled with Python's various built-in sorts, without resorting (lol) to custom sort algorithms. I can think of specific data types or objects that might require a custom sorting algorithm (sort pictures by amount of hair, or something), but not basic types like ints, floats, or strings.
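
A minimal sketch of what I mean (the 'border condition' here, pushing boundary points to the end of the ordering, is made up for illustration):

# Sort 2-D points with Python's built-in sorted(): primary key row,
# secondary key column, with a hypothetical "border condition" that
# sends boundary points to the end. No custom algorithm needed.
WIDTH, HEIGHT = 10, 10

def on_border(p):
    x, y = p
    return x in (0, WIDTH - 1) or y in (0, HEIGHT - 1)

points = [(0, 3), (4, 4), (9, 1), (2, 7), (5, 0)]
ordered = sorted(points, key=lambda p: (on_border(p), p[1], p[0]))
print(ordered)  # interior points first (row-major), border points last

Whatever the actual heuristic is, it almost always fits in the key function; Timsort does the rest.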

Comment Re:Maybe programmers aren't quite obsolete (Score 1) 151

Programmers are like cowboys. The tools and methods of modern ranching have greatly reduced the number of cattle herders needed per ranch, even as the number of cows has increased.

I'm using cattle because it maps well to the 'cattle vs. pets' analogy in a data center, and code will soon move in that direction as well. As tools like Puppet, Ansible, and Terraform came along, it's not like we needed fewer humans; in fact, we had an exponential increase in servers, and though headcount doesn't correlate linearly with that, it is increasing.

That said, I envision a day in the near future when the vast majority of code is cattle, not pets: easily regenerated or recreated, requiring some herders to ensure QA, deployments, and objectives work as expected, tuning a bit as needed. It's a rare day that you see cowboys sleeping on the range and eating beans, and it will be a rare day to see a human writing sorting algorithms in Python.

I personally think "human" programmers, as in 'I code for() loops and write unit tests and make hundreds of thousands of dollars for it', probably won't exist in a few years. Once we get to 10M tokens of context, it will be game over for 95% of programmers' day jobs. And don't think you can just transition over: you'll be too expensive for a QA role, since it doesn't take a whole lot of programming skill to 'talk' to the ML and tell it what went wrong, and that will still be faster than getting Joe to recode the function.

Comment Re:NO (Score 1) 67

Why not use a hypothetical (gravitic/magnetic/magic) field that could both compress a very small star down to a manageable size (maybe down to the level just before it becomes a singularity) and harvest the resulting resistance, heat, and radiation to power your alien government boondoggles?

I mean, it's about as likely as a Dyson sphere powering a civilization.

Comment Re:Nice (Score 2) 62

OpenBSD has its strengths, but screaming fast compared to most Linux distros? That's laughable. I would say OpenBSD is more secure by default, and in general, than the average Linux desktop, but I'm not even sure of that in the year 2024...

This benchmark is a few years old, but the picture has not changed, except maybe to skew a bit more in Linux's favor: https://www.phoronix.com/revie...

Again, there are places where OpenBSD has its strengths, but I'm not even sure it's faster as a firewall at this point.
