
Comment And what will we lose? (Score 1) 12

Anyone who wanted to know something went to Google, asked a question, clicked through some of the pages, weighed the information, and came to an answer. Now, the chatbot genie does that for you, spitting the answer out in a few neat paragraphs ...

When I use a search engine, I'm often not looking to answer a question. I may simply be exploring. But even when I'm looking for a specific answer or fact, the search for it usually gives me additional information. It may just teach me additional facts, or it may lead me down sideroads to new knowledge and a new way of looking at things. The search seldom results in just an answer to a question - it broadens my horizons.

Maybe I'm a rarity, and the vast majority of people just get an answer and get on with their day. But I suspect that I'm in at least a sizable minority. So what will happen when everyone is allowing AI to pre-chew and pre-digest their informational meals? I see search engines as up-to-date encyclopedias on steroids with a killer random search feature. By contrast, AI seems more like an inscrutable oracle, giving an answer to a question instead of pointing to a bit of real estate in a vast field of knowledge.

We may lose something of profound value when AI replaces search engines, even given the ad-ridden, algorithm-ridden swamp that search has become over the last couple of decades. When I consider all the downsides of AI - and even when I ignore the truly dystopian aspects of it that are becoming increasingly evident - I fear that we're going to regret going all-in on LLMs and whatever they evolve into.

Comment Re: What? how long can that possibly take? (Score 1) 142

Apparently even 2 step solutions are hard for you.

1. Green the demand / EVs

2. Green the Grid and magically the transport sector gets massively greener.

Since deniers and oil companies have delayed things for decades, we're going to have to do both at the same time. They knew about this in the 70s and buried the studies because it would hurt their profits.

Comment Re:"Science" has the same problem, thank you RFKjr (Score 1) 74

LLMs are completely unable to verify.

That's an exaggeration. You can give an LLM access to real sources, and it can use those sources to verify its output. I just flatly do not understand why the vendors are not doing this. It wouldn't make them infallible, but it would go a long way towards improving the situation, and they are clearly not doing it. They could also use non-AI software tools to check up on the AI output. I'd bet that you could even use a plagiarism detection tool for this purpose with little to no modification, but I'd also bet this kind of tool already exists anyway.

Comment Re:Idiotic statement (Score 1) 74

All research shows that increased penalties have no positive effect, but make the problem worse.

It also shows that if the penalty is insufficient then it has no positive effect. A fine that people with a lot of money can easily afford isn't a deterrent; it's a prohibition that applies only to the poor, and a license fee for everyone else. Look to speeding tickets which scale with income for a fairer model.

Comment Re: Remains to be seen... (Score 1) 31

Generally it's not too hard to hijack old hardware and add your own op-amps and whatever to the existing bias and drive circuits. Once you can get some signal in, even if you're just using your digital storage scope and some decently set up triggers, you can crank through just about any old data set. Phase encoded, MFM, etc. are all pretty easy to decode in software with a sufficiently fast microcontroller.
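To illustrate how simple the software side can be, here's a minimal sketch of MFM in Python (function names are my own, and it assumes you've already recovered a clean, sync-aligned bit stream from the flux transitions - the genuinely fiddly part):

```python
def mfm_encode(data_bits):
    """Interleave MFM clock bits with data bits.

    MFM rule: a clock bit is 1 only when it sits between two
    consecutive 0 data bits (assume a 0 precedes the stream).
    """
    out, prev = [], 0
    for d in data_bits:
        out.append(1 if (prev == 0 and d == 0) else 0)  # clock bit
        out.append(d)                                   # data bit
        prev = d
    return out

def mfm_decode(raw_bits):
    """Recover the data: once aligned on a sync mark, the data
    bits simply sit at the odd indices of the raw stream."""
    return raw_bits[1::2]
```

Phase (Manchester) encoding decodes just as mechanically - each data bit is a transition direction mid-cell - which is why even a modest microcontroller keeps up once the analog front end hands it clean edges.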
