
Comment Re:The problem is "beneficial" (Score 2) 197

I absolutely hate ethical thought problems. They're always presented with a limited number of actions, with no provision for doing anything different or in addition or anything like that. Give me an actual situation, and let me ask questions about it that aren't answered with "no, you can't do that".

They're done that way to distill a matter down to the essence. The same issues apply to complicated situations, but they are far more convoluted.

Is it OK to force people to pay taxes so that others can have free health insurance? If they refuse to pay their taxes is it OK to imprison them, again so that others can have free health insurance? Is it ethical to pay $200k to extend the life of somebody on their deathbed by a week when that same sum could allow a homeless person to live in a half-decent home for a decade? Does it make a difference if the person who will live a week longer is happy and healthy for that week? Is the lottery ethical?

Every one of these issues is controversial, and ethical thought problems try to distill them down to elementary values problems, in the hope of shedding light on how to handle real-world ones where there are many more possible outcomes.

Comment Re:The problem is "beneficial" (Score 1) 197

Perhaps, but I think we could get close for 90% of the world's population.

"Thall shall not kill" is a pretty common one.

"Thall shall not steal" is another, and so on.

Most humans seem to agree on the basics: "be nice to people, don't take things that aren't yours, help your fellow humans when you can," etc.

http://www.goodreads.com/work/...

Well, the army and Robin Hood might take issue with your statements. :) Nobody actually follows those rules rigidly in practice.

It isn't really enough to just have a list of rules like "don't kill." You need some kind of underlying principle that unifies them. Any intelligent being has to make millions of decisions in a day, and almost none of them will be covered by a lookup table of rules. For example, you probably chose to get out of bed in the morning. How did you decide that doing this wasn't likely to result in somebody getting killed that day, or did you not care? I didn't give it much thought, because if I worried about causal relationships that far out I'd never do anything. But when should an AI worry about such matters?

We actually make moral decisions all the time; we just don't think about them. When we want to design an AI, suddenly we have to.

Comment Re:The problem is "beneficial" (Score 1) 197

We are not machines, it would be sad if we lost the humanity that makes us special.

Well, the fact that not everybody would agree with everything you just said means that none of it follows simply from being human. You have a bunch of opinions on these topics, as do I and everybody else.

And that was really my whole point. When we can't all agree on what is right and wrong, how do you create some kind of AI convention to ensure that AIs only act for the public benefit?

Comment Re:Legitimate question (Score 1) 310

I'm suggesting competition between markets. In the end the providers who best serve the people who actually make and lose real money, not the pure speculators, will determine what rules they want to follow, and it won't be the crap rules that Goldman and such want.

That would work if:

1. People could actually control what markets their money got invested in. Last time I checked I had no control over where my pension is invested. If my company's pension fund is wiped out, that is a problem for me.

2. Governments actually let people lose their money when they make bad choices. However, because of #1 they really can't do this. Plus, so many people make bad choices that governments couldn't really let it happen even if participation were completely voluntary.

When markets are "too big to fail" then they need to be regulated so that they don't fail.

I'm definitely a fan of minimal intrusion. I'd have taken much more market-based solutions to most of the financial crisis problems, such as splitting up large banks (more competition, and nobody is too big to fail on their own), or making bailouts much more powerful (the government takes the company by eminent domain, reorganizes it with only the national economy in mind and no regard for shareholders, does everything it can to find a basis for suing every previous executive, and in the end IPOs the company to get it out of the government's hands, recouping all of its costs first and passing anything left over to previous debtholders and then shareholders). Because of collective idiocy, I still think that markets are going to need to be regulated.

Comment Re:The problem is "beneficial" (Score 1) 197

And that would be the reason I said "some" Nazi activities and not "all" Nazi activities. I certainly find them abhorrent, but if you take a strictly utilitarian view then things like involuntary experimentation on people could be justified.

Logically, yes...

Morally, no...

We are not Vulcans...

That was my whole point. We can't agree on a logical definition of morality. Thus, it is really hard to design an AI that everybody would agree is moral.

As far as whether we are Vulcans - we actually have no idea how our brains make moral decisions. Sure, we understand aspects of it, but what makes you look at a kid and think about helping them become a better adult, while a psychopath looks at the same kid and thinks about how much money a kidnapping could bring, is a complete mystery.

Comment Re:Legitimate question (Score 1) 310

What I would suggest is that the barrier to entry for establishing different markets should be kept as low as feasible, and it should be relatively easy for order flow to move between one and another.

I'd argue the opposite. I'd encourage one set of rules for markets to operate under, and make it illegal to trade securities in any other way. Flash crashes and such are a threat to the national economy (just look back at 2008). The markets certainly should be regulated in a way that helps to stabilize them. It would also make things like transaction taxation easy to implement, and eliminate a lot of forms of fraud and tax evasion. Any security would be traded in exactly one market, and you own however many shares of it the exchange says you own.

Comment Re:So? (Score 1) 310

There are lots of problems with this:

Arbitrage between different markets for one.

Just require that anything traded in the market be traded there exclusively. For additional effect, require by law that all trades within the country use a market that operates under these rules.

There is a lack of transparency ...

I do agree with some of these concerns. This model does require trusting the market administrators, and they would be in a position to give tremendous advantage to any party with inside connections.

Keeping the book secret is another requirement you have, but it is impractical, and is difficult to audit or enforce.

I have mixed feelings on whether it should be secret. If it weren't, then those particular issues go away but there are others that then come up. However, I don't have a problem with anybody voluntarily sharing information. They just shouldn't be required to do so. I'd also allow anybody to directly place trades - it shouldn't cost anything to directly submit orders to the exchange other than perhaps having a deposit or bond.

Comment Re:So? (Score 1) 310

The solution to this kind of problem is to have trades executed hourly or even daily, but at a random time which is not disclosed in advance. There might be only general guarantees, such as it being at least 6 hours after the last trade. So, sometime on Tuesday, between midnight and midnight, every trade will be executed. The computer will freeze the book at a random moment, then run through and execute every trade it can. The book is secret until then and published afterward. I could see arguments both for and against allowing orders to be changed before execution, so I'm not sure which is better, though obviously all executed trades are final.
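To make that concrete, here is a minimal Python sketch of this kind of sealed-book batch execution, assuming a single security. The Order class, the random freeze time, and the midpoint fill price are all hypothetical choices of mine for illustration, not a description of how any real exchange works.

import random
from dataclasses import dataclass

@dataclass
class Order:
    side: str    # "buy" or "sell"
    price: float # limit price
    qty: int     # shares

def pick_freeze_time(window_start_hr, window_end_hr):
    """The random, undisclosed moment (in hours) at which the book is frozen."""
    return random.uniform(window_start_hr, window_end_hr)

def run_batch(book):
    """Freeze the sealed book and cross every order that can trade.

    Buys match sells whenever the buy limit is at or above the sell limit;
    each fill in this toy version prices at the midpoint of the two limits.
    """
    buys = sorted((o for o in book if o.side == "buy"), key=lambda o: -o.price)
    sells = sorted((o for o in book if o.side == "sell"), key=lambda o: o.price)
    fills = []
    bi = si = 0
    while bi < len(buys) and si < len(sells) and buys[bi].price >= sells[si].price:
        qty = min(buys[bi].qty, sells[si].qty)
        price = (buys[bi].price + sells[si].price) / 2
        fills.append((qty, price))
        buys[bi].qty -= qty
        sells[si].qty -= qty
        if buys[bi].qty == 0:
            bi += 1
        if sells[si].qty == 0:
            si += 1
    return fills

if __name__ == "__main__":
    freeze_at = pick_freeze_time(0.0, 24.0)  # some moment between midnight and midnight
    book = [Order("buy", 14.02, 100), Order("sell", 14.00, 60), Order("sell", 14.01, 80)]
    print(f"Book frozen at hour {freeze_at:.2f}")
    for qty, price in run_batch(book):
        print(f"filled {qty} shares at {price:.3f}")

The point is simply that with a sealed book and a random, undisclosed execution moment, there is nothing for a high-frequency trader to race against.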

Comment Re:Loss of liquidity (Score 2) 310

Then the cost of price discovery will go up significantly.

Define "significantly."

Do we really need nanosecond resolution on stock price changes?

Do fluctuations at those levels REALLY reflect changes in the actual value of a company? At 4:01.000000001 PM is GM really worth 14.01, but at 4:01.000000002 something changed and it is now worth 14.02? And what is the cost of having this "extra resolution"?

Comment Re:The problem is "beneficial" (Score 1) 197

Think of it this way: Robot A and Robot B. The first one can never harm or kill you, nor would it choose to based on some calculation that "might" save others; the second one might, and you have to determine whether it calculates the "greater good" with an accurate enough prediction model. Which one will you allow in your house? Which one would cause you to keep an EMP around for a rainy day?

Either is actually a potential moral monster in the making. The first one might allow you all to die in a fire because it cannot predict with certainty what would happen if it walked on weakened floorboards, which might cause it to accidentally take a life. Indeed, a robot truly programmed to never take any action that could possibly lead to the loss of life would probably never be able to take any action at all. I can't predict what the long-reaching consequences of my taking a shower in the morning might be. Maybe if I didn't create the additional demand for water, the water company wouldn't hire a new employee, which means they'd be stuck at home unemployed instead of getting in the car, driving to work, and getting into a fatal accident.

Sure, it is a ridiculous example. My point is just that we all make millions of unconscious risk evaluations every day and any AI would have to do the same. It requires some kind of set of principles beyond hard-coded rules like "don't kill somebody." AIs are all about emergent behavior, just as is the case with people. You can't approach programming them the way you'd design a spreadsheet.

Comment Re:The problem is "beneficial" (Score 1) 197

Sure, but under some utilitarian models of morality some Nazi activities actually come out fine (well, their implementations were poor, but you can justify some things that most of us would consider horrific).

There is no moral justification for throwing babies alive into fires.

And that would be the reason I said "some" Nazi activities and not "all" Nazi activities. I certainly find them abhorrent, but if you take a strictly utilitarian view then things like involuntary experimentation on people could be justified.

Suppose human experimentation were reasonably likely to yield a medical advance. Would it be ethical? If you treated 1000 people like lab rats (vivisections and all), and it helped advance medicine to the benefit of millions of others, would that be wrong?

Do those humans get a say in it? Depending on the situation, you might get people to volunteer. The problem is when you do it against their will.

Both situations present quandaries, actually. Is it ethical to allow a person to cause permanent harm to themselves voluntarily if it will achieve a greater good? If so, is there a limit to this?

How about when it is involuntary? If I can save 1000 people by killing one person, is that ethical, even if that person explicitly tells me they do not wish to take part? You could actually make an argument for either position, and you'll find many people who would agree with each.

As I said elsewhere, I have no doubt that you can give an answer to these questions. The problem is that you can't get everybody else to give the same answer to these questions. That creates a dilemma for anybody creating an AI where moral behavior is desirable. Even with a defined moral code it would be difficult to implement, and the fact that we can't even agree on a defined moral code makes it all the more difficult.

Comment Re:The problem is "beneficial" (Score 1) 197

Read up on the murder of babies by the Nazis in the concentration camps; it is evil.

Sure, but under some utilitarian models of morality some Nazi activities actually come out fine (well, their implementations were poor, but you can justify some things that most of us would consider horrific). Suppose human experimentation were reasonably likely to yield a medical advance. Would it be ethical? If you treated 1000 people like lab rats (vivisections and all), and it helped advance medicine to the benefit of millions of others, would that be wrong?

I'm not saying it is right. I'm saying that it is actually hard to come up with models of morality that cover situations like this in a way most would accept.

Comment Re:80% through tunnels? (Score 1) 189

Accelerating at 1G you could probably make the NYC-LAX trip in 30-60 minutes.

More like 15.

Of course, you might have a bit of a bumpy landing as you leapt for the platform from the speeding train, which would at that point be closing on 9 km/s. So, yeah, 30 minutes if you wanted to walk off at the other end.

You'll want to factor in a bit more time for everybody to reverse their chairs or whatever so that they're not thrown out of their seats when you switch to deceleration. :) But yes, it has been a while since I ran the numbers, but it is somewhere around 30 minutes, which is pretty impressive. It wouldn't even require all that much energy to make the trip.
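For anyone who wants to rerun the numbers, here is a quick back-of-envelope sketch in Python. The straight ~3,950 km route is my assumption, and it ignores curves, the flip-around, comfort margins, and everything else; under those idealized conditions you get roughly 15 minutes with an ~8.8 km/s arrival if you never slow down, and roughly 21 minutes stop-to-stop, which lands in the same ballpark as the figures above once real-world padding is added.

import math

G = 9.81            # m/s^2, 1 g
DISTANCE = 3.95e6   # m, rough NYC-LA distance (my assumption)

# Case 1: accelerate the whole way and "leap for the platform" at the far end.
t_full = math.sqrt(2 * DISTANCE / G)
v_exit = G * t_full
print(f"continuous 1 g: {t_full / 60:.0f} min, arriving at {v_exit / 1000:.1f} km/s")

# Case 2: accelerate to the midpoint, flip, and decelerate to a stop.
t_half = math.sqrt(DISTANCE / G)   # time to cover half the distance from rest
v_peak = G * t_half
print(f"accel/decel:    {2 * t_half / 60:.0f} min, peaking at {v_peak / 1000:.1f} km/s")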

Comment Re:The problem is "beneficial" (Score 1) 197

The person being tortured will tell you whatever you want to hear to get you to stop torturing them. Torture rarely works and is always immoral.

Sure, but this is a thought problem. The question is: if inflicting pain on some people can bring benefit to many more, is it ethical to inflict that pain? Maybe torture is a bad example. Or maybe not. Suppose you need a recording of somebody's voice in order to play it over a phone and deceive their partners in a terrorist action. Would it be ethical to torture somebody to obtain their cooperation, given that in this situation you can actually know whether they've cooperated or not (you're just asking them to read a script)?

I have no doubts that you can answer the question. The problem is that many will disagree with your answer, whatever it might be.

Comment Re:80% through tunnels? (Score 1) 189

Then every car (and the tunnel itself!) needs to be a pressure vessel and you need oxygen masks if there is a leak. Plus you have to turn every station into an airlock. Depressurizing the tunnel is a lot of extra work.

It would certainly need to be a pressure vessel. If there were a leak you could use supplemental oxygen, or you could just repressurize the tunnel. Agreed that the stations would need airlocks.
