Comment Re:Unleashed animal runs into street? (Score 1) 143

The AI is significantly more aware of other cars around it. Unlike a human, the self-driving system has continuous 360-degree visibility.

While I agree that it *should* always be safe to hit the brakes, the truth is that on busy roads it's usually not safe to brake hard. People follow too closely more often than they maintain proper separation.

I also agree that drivers should always have sufficient situational awareness to know whether or not it's safe to brake, but they often don't, and they often react without considering the consequences. This isn't a "man vs. woman" thing, it's a human thing.

Comment Re:study confirms expectations (Score 1) 164

it is a royal BITCH to try and remove them

It's worth noting that the way you remove them is by making the ink stop just sitting there, to the degree it does. The various approaches ultimately just break the ink up into smaller pieces that can be absorbed into the bloodstream and carried throughout the body... hopefully to be filtered out by the kidneys and liver and then excreted, but who knows? It seems likely to me that tattoo removal may create exactly the same effects as tattoo application, only more so.

Comment Re:It's not Waymo's fault (Score 1) 143

You shouldn't worry about getting rear ended. That's the worry of the person behind you. It's their fault if you get rear ended.

Have you ever been rear-ended? I have, twice. Both times while I was stopped at a red light, so fault was absolutely incontestable. It's their fault, but you end up without a car. Sure, their insurance has to pay, but they only have to give you what it's worth, not what it will cost you to replace it, and the difference is significant. Not to mention that you could be injured. Your hospital bills will be covered, but you were still injured and have to deal with pain, the recovery, and maybe even some amount of permanent damage. My neck has never been quite right after the second time I was rear-ended.

Comment Re:It's not Waymo's fault (Score 1) 143

I can tell you exactly how most human drivers would respond in a situation like this, because I've seen it happen and have heard about it enough times: the driver would have accelerated away from the incident at high speed.

They would have done that after slamming on the brakes in a vain attempt to avoid hitting the dog, possibly losing control of their vehicle, and possibly causing a collision with other cars or objects. If their reaction failed to cause a serious accident, then maybe they'd have sped away.

Comment Re:One dog and one cat... (Score 1) 143

Many millions of those miles are on roads that never have animals on them.

Until last month, Waymo only allowed its cars to drive on city streets, with no freeway driving. Even now, freeway use is limited to selected riders (I'm not sure what the selection criteria are).

So, basically all of Waymo's millions of miles are on streets that often have animals on them.

Comment Re:Shuld the sue Waymo? (Score 1) 143

if it were a medical study on, for example, a robotic surgical system with 10% of the mortality rate of a human surgeon, there would still be concern if, every now and then the system removed a patient's appendix at random during heart surgery.

Sure, there would be concern, but unless you're dumb you'll still pick the option with the 90% lower mortality/harm rate. Yes, it's good to investigate and fix the problem (assuming the fix doesn't increase the mortality rate), but you should still use the provably better option.

Comment Re:Unleashed animal runs into street? (Score 1) 143

The real question is if it simply failed to notice the dog or if it noticed the dog and didn't even attempt to stop.

Also, why it didn't attempt to stop (if it didn't). If it didn't attempt to stop because it correctly determined that attempting to stop would risk causing a more serious accident with other vehicles on the road, that's not only good, it's better than the vast majority of human drivers.

Comment Re:YAFS (Yet Another Financial System) (Score 1) 69

Like I've said before, this is just yet another financial system being created to have a minority of people manage the majority of the wealth, to their own advantage. This is just a new competing system with less regulation created by the crypto bros to wrestle the current system away from the Wall St. bros.

I think this view gives the crypto bros too much credit. They may now be thinking about taking advantage of the opportunity to wrestle the system away from the Wall Street bros, but there was no such plan from the start.

Comment Re:Very difficult to defend (Score 2) 39

too much hassle. build a shadow fleet of well-armed fast interceptors with untraceable munitions and sink the saboteurs.

To intercept them you still have to identify them, which you can't do until after they perform the sabotage. Given that, what's the benefit of sinking them rather than seizing them? Sinking them gains you nothing; seizing them gains you the sabotage vessel. It probably won't be worth much, but it's more than nothing. I suppose sinking them saves the cost of imprisoning the crew, but I'd rather imprison them for a few years than murder them.

Comment Re:What is thinking? (Score 1) 289

You ignored his core point, which is that "rocks don't think" is useless for extrapolating unless you can define some procedure or model for evaluating whether X can think, a procedure that you can apply both to a rock and to a human and get the expected answers, and then apply also to ChatGPT.

Comment Re:PR article (Score 1, Interesting) 289

For anyone who cares about the (single, cherry-picked, old) Fedorenko paper

Heh. It says a lot about the pace of AI research and discussion that a paper from last year is "old".

This is a common thread I notice in AI criticism, at least the criticism of the "AI isn't really thinking" or "AI can't really do much" sorts... it all references the state of the art from a year or two ago. In most fields that's entirely reasonable. I can read and reference physics or math or biology or computer science papers from last year and be pretty confident that I'm reading the current thinking. If I'm going to depend on it I should probably double-check, but that's just due diligence, I don't actually expect it to have been superseded. But in the AI field, right now, a year old is old. Three years old is ancient history, of historical interest only.

Even the criticism I see that doesn't make the mistake of looking at last year's state of the (public) art tends to make another mistake, which is to assume that you can predict what AI will be able to do a few years from now by looking at what it does now. Actually, most such criticism pretty much ignores the possibility that what AI will do in a few years will even be different from what it can do now. People seem to implicitly assume that the incredibly-rapid rate of change we've seen over the last five years will suddenly stop, right now.

For example, I recently attended the industry advisory board meeting for my local university's computer science department. The professors there, trying desperately to figure out what to teach CS students today, put together a very well thought-out plan for how to use AI as a teaching tool for freshmen, gradually ramping up to using it as a coding assistant/partner for seniors. The plan was detailed and showed great insight and a tremendous amount of thought.

I pointed out that however great a piece of work it was, it was based on the tools that exist today. If it had been presented as recently as 12 months ago, much of it wouldn't have made sense because agentic coding assistants didn't really exist in the same form and with the same capabilities as they do now. What are the odds that the tools won't change as much in the next 12 months as they have in the last 12 months? Much less the next four years, during the course of study of a newly-entering freshman.

The professors who did this work are smart, thoughtful people, of course, and they immediately agreed with my point and said that they had considered it while doing their work... but had done what they had anyway because prediction is futile and they couldn't do any better than making a plan for today, based on the tools of today, fully expecting to revise their plan or even throw it out.

What they didn't say, and I think were shying away from even thinking about, is that their whole course of study could soon become irrelevant. Or it might not. No one knows.
