Comment Re:Morality Framework UNNEEDED (Score 1) 177

Ahhh, but you are looking at the one situation in isolation. The moral thing to do is for everyone to hand over the driving to the machines, as that will save the greatest number of lives in the long run. By being unwilling to hand the decision to a machine, you are choosing to kill a greater number of humans on average in practice, just so you can exercise the moral decision in some outlier case. If self-driving cars were only as good as us at driving, or even just a little better, I might side with you, but they will likely be orders of magnitude better.
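
As a rough back-of-the-envelope illustration of the expected-value point, here's a sketch in Python. Every number is an assumption made up for the example (the 10x factor echoes the figure from my other post), not a sourced statistic:

    # All numbers are illustrative assumptions, not sourced statistics.
    human_rate = 12.0        # assumed fatalities per billion vehicle miles
    improvement = 10.0       # assumed: machines crash 10x less often
    machine_rate = human_rate / improvement
    miles = 3.0e12           # assumed annual vehicle miles (order of magnitude)

    human_deaths = human_rate * miles / 1e9
    machine_deaths = machine_rate * miles / 1e9
    print(f"human-driven: ~{human_deaths:,.0f} deaths/yr; "
          f"self-driving: ~{machine_deaths:,.0f} deaths/yr")

Under these assumed numbers, refusing the handover costs tens of thousands of lives a year in exchange for keeping moral agency in a handful of edge cases.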

BTW I meant “former” not “latter” in my first post.

Comment Re:Poor Linux support (Score 1) 199

FYI, I managed to get my last mouse working better by hosing down the microswitches with CRC Mass Airflow Sensor Cleaner that I had lying around. So far so good.

Any thoughts on the relative merits of MAF cleaner vs. brake cleaner, canned air, or other things like that?

Comment Morality Framework UNNEEDED (Score 1) 177

Why this obsession with moral reasoning on the part of the car? If self-driving cars are involved in 10x fewer accidents than human-driven cars, why the requirement to act morally in the few accidents they do have? And it isn't as if the morality is completely missing; it is implicit in not trying to hit objects, be they human or otherwise. Sure, try to detect which objects are human and avoid them at great cost, but deciding which human to hit in highly unlikely situations seems unneeded and perhaps even unethical in a fashion. As it is now, who gets hit in these unlikely scenarios is random, akin to an act of God.

Once you start programming in morality, you're open to criticism of why you chose the priorities you did. Selfishly, I would have my car hit the pedestrian instead of another car if the latter were more likely to kill me. No need to ascertain the number of occupants in the other car. Instinctively this is what we humans do already -- try not to hit anything, but save ourselves as a first priority. In the few near misses (near hits) I've had, I never found myself counting the occupants of the other car as I made my driving decisions.
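
To make the implicit priority concrete, here is a minimal sketch of the heuristic I'm describing -- avoid everything if you can, otherwise minimize risk to the occupant first and overall harm second. The function name, the options, and the risk numbers are all hypothetical:

    # Hypothetical sketch; names and risk values are invented for illustration.
    def choose_maneuver(maneuvers):
        """maneuvers: list of (name, occupant_risk, external_harm) tuples,
        with risks as probabilities in [0, 1]."""
        safe = [m for m in maneuvers if m[1] == 0 and m[2] == 0]
        if safe:
            return safe[0][0]  # a collision-free option exists: take it
        # No bookkeeping about who gets hit: rank by occupant risk,
        # breaking ties by total external harm.
        return min(maneuvers, key=lambda m: (m[1], m[2]))[0]

    options = [("brake hard",               0.3, 0.0),
               ("swerve into oncoming car", 0.6, 0.4),
               ("veer toward pedestrian",   0.1, 0.9)]
    print(choose_maneuver(options))  # "veer toward pedestrian" wins here

Note there is no occupant-counting anywhere; the machine does mechanically what I claim we already do instinctively.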

Comment Re:The cost of anti-terrorism (Score 1) 737

But of ALL the crummy things that have come out of post-9/11 security policy, reinforced cockpit doors are not a mistake.

Really? They may have been involved in two airliner crashes (this one and EgyptAir 990 -- not Helios 522, and there's no evidence on MH370). How many other security measures can boast such a death toll?

Comment Re:Boorish (Score 2) 662

Actually, there are quite a few American cars that he has out and out loved on the show... He drove the Lexus LFA across Nevada and loved it.

In what way is the Lexus LFA an American car? It's made by a Japanese company, designed by Japanese engineers and manufactured in Japan.

Comment Re:Risk Management (Score 1) 737

Or perhaps some means to allow a pilot back in.

Any method of getting in from the passenger compartment would be vulnerable to coercion.

I think maybe they should just have the senior flight attendant enter the cockpit whenever the pilot or copilot leaves, so that there are always two people in the cockpit (in theory, they probably won't *both* be suicidal...)

Either that, or implement better vetting of pilots.

Or maybe they should give the pilots their own bathroom, so there's no reason to leave the cockpit in the first place.

Comment No anger, just thought exercises (Score 1) 129

I'm not angry, far from it. This is a fun and thought-provoking thread, and I hope I haven't ruffled your feathers. My last post was a little dark. I am merely suggesting that we must look past mankind's interests as the final arbiter of what is best in the universe. Perhaps what comes after us will be a better world, even if we have a diminished (if any) place in it.

If robots become truly sentient (and not mere automatons), then what we can ethically do to or with them becomes questionable. Likely there will be castes of robots: those that are self-aware and should be considered full citizens, and those that (from their inception) are not self-aware and can be treated as automatons without ethical dilemma. Likely the self-aware robots will employ non-self-aware robots to do their bidding as well.

If mankind wishes to stay in control and maintain the moral high ground, then we probably should not incorporate self-awareness into AI (if we would only then treat such beings as slaves). Of course, failing to create self-aware intellects when we have the power to do so may itself be morally questionable.

I'm not sure what to make of the golden retriever comment. Was it moral to breed dogs that look to us as their masters? It is a thought worth considering. Or will we be the golden retrievers to our new robot overlords? We have a pet dog, and it seems a good bargain for both him and us. Certainly he would not be able to make his way in the world without us, so our demands on him are probably a fair exchange.
