
Comment Re:Really? (Score 1) 249

That's a good point. Here on /. I can assume people know what open-world games are. Out in the real world, movies are probably the better analogy.

Comment Microsoft said "Windows 10 is the last version" (Score 1) 54

“Right now we’re releasing Windows 10, and because Windows 10 is the last version of Windows, we’re all still working on Windows 10.”
Jerry Nixon, Microsoft developer evangelist speaking at the company’s Ignite conference this week.

Why Microsoft is calling Windows 10 'the last version of Windows' (May 7, 2015)
Why Microsoft Announced Windows 10 Is 'The Last Version Of Windows' (May 8, 2015)
Windows 10 will be 'the last version of Windows' (May 11, 2015)

Comment Re:YAFS (Yet Another Financial System) (Score 1) 67

Like I've said before, this is just yet another financial system being created to have a minority of people manage the majority of the wealth, to their own advantage. This is just a new competing system with less regulation created by the crypto bros to wrestle the current system away from the Wall St. bros.

I think this view gives the crypto bros too much credit. They might now be thinking about taking advantage of the opportunity to wrest the system away from the Wall Street bros, but there was never any such plan from the start.

Comment Re:Very difficult to defend (Score 2) 38

too much hassle. build a shadow fleet of well-armed fast interceptors with untraceable munitions and sink the saboteurs.

To intercept them you still have to identify them, which you can't do until after they perform the sabotage. Given that, what's the benefit of sinking them rather than seizing them? Sinking them gains you nothing; seizing them gains you the sabotage vessel. It probably won't be worth much, but it's more than nothing. I guess sinking them saves the cost of imprisoning the crew, but I'd rather imprison them for a few years than murder them.

Comment Re:Really? (Score 1) 249

The movie analogy is old and outdated.

I'd compare it to a computer game. In any open-world game, it seems that there are people living a life - going to work, doing chores, going home, etc. - but it's a carefully crafted illusion. "Carefully crafted" insofar as the developers have put into the game exactly what is needed to suspend your disbelief and let you think, at least while playing, that these are real people. But behind the facade, they are not. They simply disappear when entering their homes; they have no actual desires, just a few numbers and conditional statements that switch between different pre-programmed behaviour patterns.
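To make that concrete, here's a minimal sketch of the kind of clock-driven conditionals that might stand behind such an NPC (all names made up for illustration, not from any actual game engine):

    # A "daily routine" as a few numbers and conditional statements,
    # standing in for a whole inner life.
    def npc_state(hour: int) -> str:
        # Switch between canned behaviour patterns based on the in-game clock.
        if 9 <= hour < 17:
            return "at work"
        if 17 <= hour < 20:
            return "doing chores"
        return "home (effectively despawned indoors)"

    for hour in range(24):
        print(f"{hour:02d}:00  {npc_state(hour)}")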

If done well, it can be a very, very convincing illusion. I'm sure that someone who hasn't seen a computer game before might think that they are actual people, but anyone with a bit of background knowledge knows they are not.

For AI, most people simply don't (yet?) have that bit of background knowledge.

Comment Re:What is thinking? (Score 1) 249

You ignored his core point, which is that "rocks don't think" is useless for extrapolating unless you can define some procedure or model for evaluating whether X can think - a procedure that you can apply both to a rock and to a human and get the expected answers, and then apply to ChatGPT as well.

Comment Re:PR article (Score 0) 249

And yet, when asked if the world is flat, they correctly say that it's not.

Despite hundreds of flat-earthers who are quite active online.

And they don't even budge on the point if you argue with them. So for whatever it's worth, they have learned more from scraping the Internet than at least some humans have.

Comment Re:PR article (Score 1, Interesting) 249

For anyone who cares about the (single, cherry-picked, old) Fedorenko paper

Heh. It says a lot about the pace of AI research and discussion that a paper from last year is "old".

This is a common thread I notice in AI criticism, at least criticism of the "AI isn't really thinking" or "AI can't really do much" sorts... it all references the state of the art from a year or two ago. In most fields that's entirely reasonable. I can read and reference physics or math or biology or computer science papers from last year and be pretty confident that I'm reading the current thinking. If I'm going to depend on it I should probably double-check, but that's just due diligence; I don't actually expect it to have been superseded. But in the AI field, right now, a year old is old. Three years old is ancient history, of historical interest only.

Even the criticism I see that doesn't make the mistake of looking at last year's state of the (public) art tends to make another mistake, which is to assume that you can predict what AI will be able to do a few years from now by looking at what it does now. Actually, most such criticism pretty much ignores the possibility that what AI will do in a few years will even be different from what it can do now. People seem to implicitly assume that the incredibly rapid rate of change we've seen over the last five years will suddenly stop, right now.

For example, I recently attended the industry advisory board meeting for my local university's computer science department. The professors there, trying desperately to figure out what to teach CS students today, put together a very well thought-out plan for how to use AI as a teaching tool for freshmen, gradually ramping up to using it as a coding assistant/partner for seniors. The plan was detailed and showed great insight and a tremendous amount of thought.

I pointed out that however great a piece of work it was, it was based on the tools that exist today. If it had been presented as recently as 12 months ago, much of it wouldn't have made sense, because agentic coding assistants didn't really exist in the same form and with the same capabilities as they do now. What are the odds that the tools won't change as much in the next 12 months as they have in the last 12? Much less over the next four years, during the course of study of a newly entering freshman.

The professors who did this work are smart, thoughtful people, of course, and they immediately agreed with my point and said that they had considered it while doing their work... but had gone ahead anyway, because prediction is futile and they couldn't do any better than making a plan for today, based on the tools of today, fully expecting to revise their plan or even throw it out.

What they didn't say, and I think were shying away from even thinking about, is that their whole course of study could soon become irrelevant. Or it might not. No one knows.

Comment Re:Wrong Name (Score 2) 249

It's almost as if we shouldn't have included "intelligence" in the actual fucking name.

We didn't. The media and the PR departments did. In the tech and academic worlds that seriously work with it, the terms are LLMs, machine learning, etc. - the actual terms describing what the thing does. "AI" is the marketing term used by marketing people. You know, the people who professionally lie about everything in order to sell things.

Comment Re:What is thinking? (Score 1) 249

professions that most certainly require a lot of critical thinking. While I would say that that is ludicrous

It is not just ludicrous, it is irrationally dangerous.

For any (current) LLM, whenever you interact with it you need to remember one rule of thumb (not my invention; I read it somewhere and agree): the LLM was trained to generate "expected output". So always assume that your prompt implicitly starts with "give me the answer you think I want to read on the following question".

Giving an EXPECTED answer instead of the answer most likely to be true is literally life-threatening in a medical context.
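To keep that rule of thumb front of mind, here's a trivial sketch (pure illustration; the preamble is just the quoted rule of thumb as a string, and the function name is made up, not any real API):

    # The rule of thumb made explicit: mentally prepend this preamble to
    # every prompt before trusting the answer.
    IMPLICIT_PREAMBLE = ("Give me the answer you think I want to read "
                         "on the following question:\n")

    def as_the_model_sees_it(prompt: str) -> str:
        return IMPLICIT_PREAMBLE + prompt

    print(as_the_model_sees_it("These symptoms are nothing serious, right?"))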
