
Comment Re:What's old is new again (Score 1) 41

That wasn't *all* I said, but it is apparently as far as you read. But let's stay there for now. You apparently disagree with this, which means that you think that LLMs are the only kind of AI that there is, and that language models can be trained to do things like design rocket engines.

Comment Re:Decentralized services (Score 2) 201

Looked up details on the wording, and it may not be just a logistical nightmare but a legal impossibility. The law appears to only apply to specific platforms, and no Mastodon servers appear on the list. New instances wouldn't either, so there'd be no legal basis for trying to force them to ban teens.

Comment Re:What's old is new again (Score 5, Informative) 41

Here's where the summary goes wrong:

Artificial intelligence is one type of technology that has begun to provide some of these necessary breakthroughs.

Artificial Intelligence is in fact many kinds of technologies. People conflate LLMs with the whole thing because it's the first kind of AI that an average person with no technical knowledge could use, after a fashion.

But nobody is going to design a new rocket engine in ChatGPT. They're going to use some other kind of AI that works on problems and processes the average person can't even conceive of -- like design optimization where there are potentially hundreds of parameters to tweak. Some of the underlying technology may have similarities -- like "neural nets", which are just collections of mathematical matrices that encode likelihoods underneath, not realistic models of biological neural systems. It shouldn't be surprising that a collection of matrices containing parameters describing weighted relations between features should have a wide variety of applications. That's just math; it's just sexier to call it "AI".
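To make the "it's just matrices" point concrete, here's a toy sketch (random weights, made-up sizes, no particular real system): a forward pass through a two-layer "neural net" is nothing but two matrix multiplications and an elementwise max.

```python
import numpy as np

# A "neural net" layer is just a weight matrix: rows index input
# features, columns index output features, and each entry is a learned
# weighted relation between the two. Sizes here are arbitrary toy values.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # 4 input features -> 3 hidden features
W2 = rng.standard_normal((3, 1))   # 3 hidden features -> 1 output

def forward(x):
    # Matrix multiply, clamp negatives to zero (ReLU), matrix multiply.
    # That's the entire "neural" computation -- plain linear algebra.
    hidden = np.maximum(0.0, x @ W1)
    return hidden @ W2

x = np.array([1.0, 0.5, -0.2, 0.3])   # one 4-feature input
y = forward(x)                         # one scalar output, shape (1,)
```

Whether those matrices end up encoding next-word likelihoods or rocket-nozzle design parameters depends entirely on what data they were fit to; the math is the same.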

Comment Decentralized services (Score 2) 201

I bet enough of those kids know about Fediverse-based services like Mastodon to start spreading the word. Instead of a dozen large social media platforms, the government will be faced with thousands of bulletin-board-sized "services" networked together into a platform that has no single place you can go to deactivate accounts. Controlling that would be a logistical nightmare.

Comment Re:Anyone still using IPv4 (Score 2) 40

Most consumers today aren't using IPv4 by choice, but by necessity. Every OS out there supports IPv6, as does every router made in the last 10 years, and supports it pretty much automatically if it's available. The main reason they still use IPv4 is that their ISP hasn't deployed IPv6 support on their residential network, so IPv6 isn't available unless you're a techie and recognize the name Hurricane Electric. The next most common reason is that the site they're accessing only has IPv4 addresses assigned so connections are automatically done via IPv4. Consumers have control over neither of those reasons.
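That silent fallback is visible from any scripting language. As a small illustration (hostname and port are placeholders, not from the comment above), the OS resolver hands back candidate addresses in preference order, and a connection simply uses whatever families appear:

```python
import socket

def candidate_addresses(host, port=443):
    """Return (family, address) pairs in the order the OS would try them.

    getaddrinfo() applies the system's address-selection policy, so on a
    network with working IPv6 the AF_INET6 results are listed first; on an
    IPv4-only residential network, only A records resolve and every
    connection quietly uses IPv4 -- no user action involved either way.
    """
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    return [(family.name, sockaddr[0]) for family, _, _, _, sockaddr in infos]

# A dual-stacked site yields both AF_INET6 and AF_INET entries;
# an IPv4-only site yields only AF_INET entries.
for family, addr in candidate_addresses("localhost"):
    print(family, addr)
```

The consumer never sees any of this, which is the commenter's point: the choice is made by the ISP's deployment and the site's DNS records, not by the user.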

Comment What interests me ... (Score 1) 81

is if our civilisation will survive the next few hundred years and, if it does not, what will be the causes of our decline:
* climate change (the effects will not be evenly felt)
* nuclear (or other) war
* rise of AI that takes control
* grey goo (molecular nanotechnology)
* asteroid strike from deep space

Feel free to reply with other possible causes.

Comment Re:YAFS (Yet Another Financial System) (Score 1) 69

Like I've said before, this is just yet another financial system being created to have a minority of people manage the majority of the wealth, to their own advantage. This is just a new competing system with less regulation created by the crypto bros to wrestle the current system away from the Wall St. bros.

I think this view gives the crypto bros too much credit. They might now be thinking about taking advantage of the opportunity to wrestle the system away from the Wall Street bros, but there was no such plan.

Comment Re:Very difficult to defend (Score 2) 39

too much hassle. build a shadow fleet of well-armed fast interceptors with untraceable munitions and sink the saboteurs.

To intercept them you still have to identify them, which you can't do until after they perform the sabotage. Given that, what's the benefit in sinking them rather than seizing them? Sinking them gains you nothing, seizing them gains you the sabotage vessel. It probably won't be worth much, but more than nothing. I guess sinking them saves the cost of imprisoning the crew, but I'd rather imprison them for a few years than murder them.

Comment Re:If.. (Score 4, Interesting) 72

Comment Re:What is thinking? (Score 1) 289

You ignored his core point, which is that "rocks don't think" is useless for extrapolating unless you can define some procedure or model for evaluating whether X can think, a procedure that you can apply both to a rock and to a human and get the expected answers, and then apply also to ChatGPT.

Comment Re:PR article (Score 1, Interesting) 289

For anyone who cares about the (single, cherry-picked, old) Fedorenko paper

Heh. It says a lot about the pace of AI research and discussion that a paper from last year is "old".

This is a common thread I notice in AI criticism, at least the criticism of the "AI isn't really thinking" or "AI can't really do much" sorts... it all references the state of the art from a year or two ago. In most fields that's entirely reasonable. I can read and reference physics or math or biology or computer science papers from last year and be pretty confident that I'm reading the current thinking. If I'm going to depend on it I should probably double-check, but that's just due diligence, I don't actually expect it to have been superseded. But in the AI field, right now, a year old is old. Three years old is ancient history, of historical interest only.

Even the criticism I see that doesn't make the mistake of looking at last year's state of the (public) art tends to make another mistake, which is to assume that you can predict what AI will be able to do a few years from now by looking at what it does now. Actually, most such criticism pretty much ignores the possibility that what AI will do in a few years will even be different from what it can do now. People seem to implicitly assume that the incredibly-rapid rate of change we've seen over the last five years will suddenly stop, right now.

For example, I recently attended the industry advisory board meeting for my local university's computer science department. The professors there, trying desperately to figure out what to teach CS students today, put together a very well thought-out plan for how to use AI as a teaching tool for freshmen, gradually ramping up to using it as a coding assistant/partner for seniors. The plan was detailed and showed great insight and a tremendous amount of thought.

I pointed out that however great a piece of work it was, it was based on the tools that exist today. If it had been presented as recently as 12 months ago, much of it wouldn't have made sense because agentic coding assistants didn't really exist in the same form and with the same capabilities as they do now. What are the odds that the tools won't change as much in the next 12 months as they have in the last 12 months? Much less the next four years, during the course of study of a newly-entering freshman.

The professors who did this work are smart, thoughtful people, of course, and they immediately agreed with my point and said that they had considered it while doing their work... but had done what they had anyway because prediction is futile and they couldn't do any better than making a plan for today, based on the tools of today, fully expecting to revise their plan or even throw it out.

What they didn't say, and I think were shying away from even thinking about, is that their whole course of study could soon become irrelevant. Or it might not. No one knows.
