Comment Re:The problem is "beneficial" (Score 1) 197

Think of it this way: Robot A can never harm or kill you, nor would it choose to based on some calculation that "might" save others; Robot B might, and you have to determine whether it calculates the "greater good" with an accurate enough prediction model. Which one will you allow in your house? Which one would cause you to keep an EMP around for a rainy day?

Either is actually a potential moral monster in the making. The first might let you all die in a fire because it cannot predict with certainty what would happen if it walked on weakened floorboards, and so might accidentally take a life. Indeed, a robot truly programmed to never take any action that could possibly lead to the loss of life would probably never be able to take any action at all. I can't predict the long-reaching consequences of taking a shower in the morning. Maybe if I didn't create the additional demand for water, the water company wouldn't hire a new employee, who would then be stuck at home unemployed instead of getting in the car, driving to work, and getting into a fatal accident.

Sure, it is a ridiculous example. My point is just that we all make millions of unconscious risk evaluations every day and any AI would have to do the same. It requires some kind of set of principles beyond hard-coded rules like "don't kill somebody." AIs are all about emergent behavior, just as is the case with people. You can't approach programming them the way you'd design a spreadsheet.

Comment Re:The problem is "beneficial" (Score 1) 197

Sure, but under some utilitarian models of morality some Nazi activities actually come out fine (well, their implementations were poor, but you can justify some things that most of us would consider horrific).

There is no moral justification for throwing babies alive into fires.

And that would be the reason I said "some" Nazi activities and not "all" Nazi activities. I certainly find them abhorrent, but if you take a strictly utilitarian view then things like involuntary experimentation on people could be justified.

Suppose human experimentation were reasonably likely to yield a medical advance. Would it be ethical? If you treated 1000 people like lab rats (vivisections and all), and it helped advance medicine to the benefit of millions of others, would that be wrong?

Do those humans get a say in it? Depending on the situation, you might get people to volunteer. The problem is when you do it against their will.

Both situations present quandaries, actually. Is it ethical to allow a person to cause permanent harm to themselves voluntarily if it will achieve a greater good? If so, is there a limit to this?

How about when it is involuntary? If I can save 1000 people by killing one person, is that ethical, even if they explicitly tell me they do not wish to take part? You could actually make an argument for either position, and you'll find many people who would agree with each.

As I said elsewhere, I have no doubt that you can give an answer to these questions. The problem is that you can't get everybody else to give the same answer to these questions. That creates a dilemma for anybody creating an AI where moral behavior is desirable. Even with a defined moral code it would be difficult to implement, and the fact that we can't even agree on a defined moral code makes it all the more difficult.

Comment Re:It's not surprising (Score 1) 129

It doesn't have to be this way, and it has little to do with standards. Netflix streaming still works fine on first-generation devices from many years ago. This is despite all of the new functions and features they have come out with since then -- heck, they even changed their whole DRM scheme for many players.

The main difference is YouTube has little incentive to keep supporting these old devices since they don't generate much, if any, ad revenue (heck, they might not even support ads), whereas Netflix needs to support their subscribers as long as possible.

Standards don't do anything to help with this problem; it has more to do with an advertising-driven business model.

Comment Re:Statistics (Score 1) 73

They could maintain a list of third-party library versions and identify versions of apps that link with them. But then what? As a user, I might not want Apple to shut off some random app I depend on -- just because they think it might be hackable doesn't mean my device is actually being hacked, and I might really need that app today for some important client presentation.

They could contact impacted developers and request they repair the damage, but what can they do if nobody responds?

Apple focuses on end user experience first. They won't want to inconvenience their users that much.

Comment Re:The problem is "beneficial" (Score 1) 197

Read up on the murder of babies by the Nazis in the concentration camps; it is evil.

Sure, but under some utilitarian models of morality some Nazi activities actually come out fine (well, their implementations were poor, but you can justify some things that most of us would consider horrific). Suppose human experimentation were reasonably likely to yield a medical advance. Would it be ethical? If you treated 1000 people like lab rats (vivisections and all), and it helped advance medicine to the benefit of millions of others, would that be wrong?

I'm not saying it is right. I'm saying that it is actually hard to come up with models of morality that cover situations like this in a way most would accept.

Comment Re:80% through tunnels? (Score 1) 189

Accelerating at 1 g you could probably make the NYC-LAX trip in 30-60 minutes.

More like 15.

Of course, you might have a bit of a bumpy landing as you leapt for the platform from the speeding train, which would at that point be closing on 9 km/s. So, yeah, a bit over 20 minutes if you wanted to walk off at the other end.

You'll want to factor in a bit more time for everybody to reverse their chairs or whatever so that they're not thrown out of their seats when you switch to deceleration. :) But yes, it has been a while since I ran the numbers, but it comes out somewhere just over 20 minutes, which is pretty impressive. It wouldn't even require all that much energy to make the trip.
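For the curious, here's a quick back-of-the-envelope check of those figures. The ~3,940 km NYC-LA great-circle distance is my own assumption, so treat this as a sketch rather than a timetable:

```python
import math

g = 9.81       # m/s^2, 1 g
d = 3.94e6     # m, assumed NYC-LA great-circle distance

# Case 1: accelerate at 1 g the whole way (and leap off at the far end)
t_full = math.sqrt(2 * d / g)    # from d = (1/2) g t^2
v_full = g * t_full              # arrival speed

# Case 2: accelerate for half the distance, then decelerate at 1 g
t_flip = 2 * math.sqrt(d / g)    # two symmetric legs of d/2 each
v_peak = g * t_flip / 2          # speed at the midpoint flip

print(f"Full burn:      {t_full/60:4.1f} min, arriving at {v_full/1000:.1f} km/s")
print(f"Flip at midway: {t_flip/60:4.1f} min, peaking at {v_peak/1000:.1f} km/s")
```

This confirms the "more like 15" figure for the full burn (arriving at roughly 8.8 km/s), while flipping to deceleration at the midpoint comes out around 21 minutes with a peak speed near 6.2 km/s.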

Comment Re:The problem is "beneficial" (Score 1) 197

The person being tortured will tell you whatever you want to hear to get you to stop torturing them. Torture rarely works and is always immoral.

Sure, but this is a thought problem. The question is: if inflicting pain on some people can bring benefit to many more, is it ethical to inflict it? Maybe torture is a bad example. Or maybe not. Suppose you need a recording of somebody's voice in order to play it over a phone and deceive their partners in a terrorist action. Would it be ethical to torture somebody to obtain their cooperation, given that in this situation you can actually know whether they've cooperated or not (you're just asking them to read a script)?

I have no doubts that you can answer the question. The problem is that many will disagree with your answer, whatever it might be.

Comment Re:80% through tunnels? (Score 1) 189

Then every car (and the tunnel itself!) needs to be a pressure vessel and you need oxygen masks if there is a leak. Plus you have to turn every station into an airlock. Depressurizing the tunnel is a lot of extra work.

It would certainly need to be a pressure vessel. If there were a leak you could use supplemental oxygen or you could just repressurize the tunnel. Agree that the stations would need locks.

Comment Re:80% through tunnels? (Score 2) 189

I'd wonder if it would almost make sense to make it 100% tunnels and have it in a vacuum.

Probably tripling the cost.

Agree that it would only make sense over large distances. I could see it for a NYC-LAX maglev, maybe with a stop in the Midwest somewhere. Maybe have the stops at airports for easy connections.

Accelerating at 1 g you could probably make the NYC-LAX trip in 30-60 minutes.

Comment Re:IPv6's day will come, but... (Score 1) 390

So, the designers of IPv6 could not conceive that somebody could have less than 2^64 devices and still want to put them in separate networks?

Networks are allocated as /64 chunks because it makes autoconfiguration easy. It is often argued by newcomers that this is a huge waste, but really, 128 bits gives you so many addresses that you can stand to do a bit of wasting in order to make things simple. Generally the "what a waste" crowd severely underestimate just how big 128 bits is.

So now my ISP will have a say in how many internal networks I have?

Yes and no. You _can_ allocate networks smaller than a /64, but you can't use SLAAC on such networks. That means you're stuck manually configuring devices or using DHCPv6. I believe Android has no support for DHCPv6, so you're probably very restricted if you choose to use a nonstandard network size.
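To put the sizes in perspective, a quick bit of arithmetic (the /56 and /48 delegation sizes below reflect common ISP practice, not anything stated above):

```python
host_bits = 64                       # a /64 leaves 64 bits of interface ID, which is what SLAAC needs
addresses_per_64 = 2 ** host_bits    # addresses inside ONE /64 subnet
total_64_subnets = 2 ** (128 - 64)   # how many /64 subnets the whole space holds

subnets_in_a_56 = 2 ** (64 - 56)     # internal networks from a /56 delegation
subnets_in_a_48 = 2 ** (64 - 48)     # internal networks from a /48 delegation

print(f"Addresses in one /64:     {addresses_per_64:.2e}")   # ~1.8e19
print(f"/64 subnets available:    {total_64_subnets:.2e}")   # ~1.8e19
print(f"/64s in a /56 delegation: {subnets_in_a_56}")        # 256
print(f"/64s in a /48 delegation: {subnets_in_a_48}")        # 65536
```

Even the smaller /56 delegation gives a home network 256 SLAAC-capable subnets, which is why fighting the /64 convention rarely pays off.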

And this is supposed to be better than IPv4 with NAT?

Oddly enough, yes - ISPs really shouldn't be restricting your internal infrastructure. If your ISP is being a dick about this then the answer is pretty obvious - switch to another ISP; it isn't as if ISPs are thin on the ground.

Comment Re:Of course AI will try to kill us all (Score 1) 197

I don't think an AI would qualify as intelligent unless it can realize that human beings are the entire problem and the world would be better off without them. So it's obvious that any sufficiently advanced AI will try to kill us all.

One thing that I don't understand about this type of self-hate: if a person is so convinced that he is a member of an absolutely bad species, why doesn't he do the honorable thing and end his existence on this planet? Or is he perhaps the only exception to the stereotype?

Well, for somebody who is part of a wholly dishonorable species to do something honorable would require them to be the exception in the first place. So anybody capable of doing what you ask wouldn't have to. :)
