


Comment: Re:systemd, eh? (Score 2, Informative) 465

by Rich0 (#49545041) Attached to: Ubuntu 15.04 Released, First Version To Feature systemd

Requiring a restart is a Windows trait. I was hoping that my Linux installations would be better than that.

Er, quite, though I was specifically referring to restarting PulseAudio, which takes a second, not to restarting the entire computer. If the base underlying init process needs a restart, well, that's a different kettle of fish.

FWIW, the only times I restart systemd are to update the kernel, or I guess systemd itself (though the kernel changes more often, so I can usually lump the latter in with the former). If you do live-patch your kernel, then you can do the same with systemd: it has a command, systemctl daemon-reexec, that re-execs the daemon while preserving state.

I'm sure it isn't perfect, but it's as robust as anything else I've used on Linux. Very few daemons have managed to go the last 10 years without ever needing a restart at some point.

Comment: Re:Legitimate question (Score 1) 307

by Rich0 (#49540053) Attached to: Futures Trader Arrested For Causing 2010 'Flash Crash'

As for #2, it doesn't really work that way. The govt didn't bail out ANY retirement funds (at least not private-sector ones, nor mutual funds, money markets, and the like). Some people were made whole for certain things out of FDIC or other insurance, but presumably they were paying for that via the premiums coming out of their returns, so it's not QUITE a bailout, though perhaps the premiums are subsidized. So in the final analysis, the problem isn't that the investors are too big to fail; it's the firms themselves that get the bailouts.

Of course the retirement funds weren't bailed out. They didn't have to be, because the companies they invested in were bailed out instead. If the various investment banks had been allowed to fail, they'd probably all have crashed, and so would everything those retirement funds were invested in. THAT is why those companies were too big to fail in the first place.

If investments were just a toy for the wealthy then we could let them play their games and take their haircuts. The problem is that the investment sector affects everybody, so we have no choice but to intervene when things go wrong. That gives us the right to prevent things from going wrong in the first place, even if it means the rich can't play their games any longer...

Comment: Re:Big Data stupidity (Score 1) 65

by Rich0 (#49538853) Attached to: New Privacy Concerns About US Program That Can Track Snail Mail

Just because you have everything recorded, doesn't mean it's useful, though.

While I agree with many of your points, often these records become important after the fact.

Suppose I have a record of every letter sent from anywhere to anywhere. Then somebody blows up a building or whatever and is now known as a terrorist. The database lets you obtain a list of every letter that had his address somewhere on it, or any letter sent to a suspicious address that originated in his vicinity even if it had no return address (such as one dropped in a mailbox). That kind of information can be useful for expanding a network of suspects.

It is like having a record of every phone call for the last 30 years. It is hard to look at call patterns and tell who is a threat. However, if somebody blows up a building you can figure out who their college roommate was, or who they dated in middle school. All kinds of relationships that would not be obvious if you just talked to somebody's neighbors or looked at their recent credit card / phone history become apparent. Maybe their former girlfriend works for the TSA and was on duty when a terrorist slipped past security, but there weren't any phone calls between them in the last 10 years. That is a lead that might become apparent with long-term record retention that would be missed without it. Of course, such techniques inevitably involve looking into the cases of people who are almost certainly innocent. If the girlfriend wasn't involved, pursuing her might mean neglecting other leads that are real threats.

Data is just data. However, there is a lot you can do with a targeted search once you know what you're looking for.
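As a toy illustration of that kind of after-the-fact targeted search, here is a sketch in Python. Everything in it -- the record format, the addresses, the hop count -- is invented for illustration; a real system would run the same idea over billions of scanned mail pieces.

```python
# Hypothetical mail-metadata records as (sender_address, recipient_address).
# All addresses are made up for illustration.
records = [
    ("12 Oak St", "34 Elm St"),
    ("34 Elm St", "99 Pine Ave"),
    ("56 Maple Rd", "12 Oak St"),
    ("77 Birch Ln", "88 Cedar Ct"),
]

def expand_network(records, seeds, hops=2):
    """Return every address within `hops` mail exchanges of any seed address."""
    known = set(seeds)
    for _ in range(hops):
        # Anyone who sent mail to, or received mail from, a known address.
        new = {b for a, b in records if a in known}
        new |= {a for a, b in records if b in known}
        known |= new
    return known

# Once "12 Oak St" becomes a suspect address, pull in everyone connected to it.
suspects = expand_network(records, {"12 Oak St"})
```

The point of the sketch is that the data is useless for spotting a threat in advance, but trivially useful for walking outward from a known suspect once you have one.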

Comment: Re:ostensibly for sorting purposes (Score 1) 65

by Rich0 (#49538289) Attached to: New Privacy Concerns About US Program That Can Track Snail Mail

Whether all the pictures are also retained is a completely different story. 10 years ago, I'd have said, "No; too expensive." But storage costs have plummeted, so nowadays, maybe so.

They've been doing it for well over 10 years:


Relevant quote:

Last month, The New York Times reported on the practice, which is called the Mail Isolation and Tracking system. The program was created by the Postal Service after the anthrax attacks in late 2001 killed five people, including two postal workers.

Comment: Re:The problem is "beneficial" (Score 2) 196

by Rich0 (#49533319) Attached to: Concerns of an Artificial Intelligence Pioneer

I absolutely hate ethical thought problems. They're always presented with a limited menu of actions, with no provision for doing anything different or anything in addition. Give me an actual situation, and let me ask questions about it that aren't answered with "no, you can't do that".

They're done that way to distill a matter down to its essence. The same issues apply to complicated situations; they're just far more convoluted there.

Is it OK to force people to pay taxes so that others can have free health insurance? If they refuse to pay their taxes is it OK to imprison them, again so that others can have free health insurance? Is it ethical to pay $200k to extend the life of somebody on their deathbed by a week when that same sum could allow a homeless person to live in a half-decent home for a decade? Does it make a difference if the person who will live a week longer is happy and healthy for that week? Is the lottery ethical?

Every one of these issues is controversial, and ethical thought problems try to distill them down to elementary values problems, in the hope of shedding light on how to handle real-world ones where there are many more possible outcomes.

Comment: Re:The problem is "beneficial" (Score 1) 196

by Rich0 (#49531189) Attached to: Concerns of an Artificial Intelligence Pioneer

Perhaps, but I think we could get close for 90% of the world's population.

"Thall shall not kill" is a pretty common one.

"Thall shall not steal" is another, and so on.

Most humans seem to agree on the basics: "be nice to people, don't take things that aren't yours, help your fellow humans when you can," etc.


Well, the army and Robin Hood might take issue with your statements. :) Nobody actually follows those rules rigidly in practice.

It isn't really enough to just have a list of rules like "don't kill." You need some kind of underlying principle that unifies them. Any intelligent being has to make millions of decisions in a day, and almost none of them are going to be in the lookup table. For example, you probably chose to get out of bed this morning. How did you decide that doing so wasn't likely to result in somebody getting killed today, or did you not care? I don't give it much thought, because if I worried about causal relationships that far out I'd never do anything. But when should an AI worry about such matters?

We actually make moral decisions all the time, we just don't think about them. When we want to design an AI, suddenly we have to.

Comment: Re:The problem is "beneficial" (Score 1) 196

by Rich0 (#49531117) Attached to: Concerns of an Artificial Intelligence Pioneer

We are not machines, it would be sad if we lost the humanity that makes us special.

Well, the fact that not everybody would agree with everything you just said means that none of this has anything to do with the fact that you're human. You have a bunch of opinions on these topics, as do I and everybody else.

And that was really all my point was. When we can't all agree together on what is right and wrong, how do you create some kind of AI convention to ensure that AIs only act for the public benefit?

Comment: Re:Legitimate question (Score 1) 307

by Rich0 (#49531099) Attached to: Futures Trader Arrested For Causing 2010 'Flash Crash'

I'm suggesting competition between markets. In the end the providers who best serve the people who actually make and lose real money, not the pure speculators, will determine what rules they want to follow, and it won't be the crap rules that Goldman and such want.

That would work if:

1. People could actually control what markets their money got invested in. Last time I checked I had no control over where my pension is invested. If my company's pension fund is wiped out, that is a problem for me.

2. Governments actually let people lose their money when they make bad choices. Because of #1 they really can't do this. Plus, so many people make bad choices that governments couldn't let the losses happen even if participation were completely voluntary.

When markets are "too big to fail" then they need to be regulated so that they don't fail.

I'm definitely a fan of minimal intrusion. I'd have preferred much more market-based solutions to most of the financial crisis problems, such as splitting up large banks (more competition, and nobody is too big to fail on their own), or making bailouts much more powerful: the government takes the company by eminent domain, reorganizes it with an eye only to the national economy and no care for shareholders, does everything it can to find a basis for suing every previous executive, and in the end IPOs the company to get it out of the government's hands; the government recoups all its costs first, and anything left over goes to previous debtholders, followed by shareholders. Because of collective idiocy, I still think markets are going to need to be regulated.
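The payout order in that reorganization idea (government recoups its costs first, then debtholders, then shareholders) is just a priority waterfall. A minimal sketch, with every name and number invented for illustration:

```python
def distribute_proceeds(ipo_proceeds, govt_costs, debt_owed):
    """Hypothetical waterfall: government is repaid first, then debtholders;
    shareholders only see whatever is left after both are made whole."""
    to_govt = min(ipo_proceeds, govt_costs)
    remaining = ipo_proceeds - to_govt
    to_debt = min(remaining, debt_owed)
    to_shareholders = remaining - to_debt
    return to_govt, to_debt, to_shareholders

# If the IPO raises less than the government's costs, everyone else gets nothing.
payout = distribute_proceeds(ipo_proceeds=100, govt_costs=30, debt_owed=50)
```

The design choice is the usual one in insolvency: each tier is paid in full before the next tier sees a cent.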

Comment: Re:The problem is "beneficial" (Score 1) 196

by Rich0 (#49530421) Attached to: Concerns of an Artificial Intelligence Pioneer

And that would be the reason I said "some" Nazi activities and not "all" Nazi activities. I certainly find them abhorrent, but if you take a strictly utilitarian view, then things like involuntary experimentation on people could be justified.

Logically, yes...

Morally, no...

We are not Vulcans...

That was my whole point. We can't agree on a logical definition of morality. Thus, it is really hard to design an AI that everybody would agree is moral.

As far as whether we are Vulcans: we actually have no idea how our brains make moral decisions. Sure, we understand aspects of it, but what makes you look at a kid and think about helping him become a better adult, while a psychopath looks at the same kid and thinks about how much money a kidnapping could bring, is a complete mystery.

Comment: Re:Legitimate question (Score 1) 307

by Rich0 (#49530357) Attached to: Futures Trader Arrested For Causing 2010 'Flash Crash'

What I would suggest is that the barrier to entry for establishing different markets should be kept as low as feasible, and it should be relatively easy for order flow to move between one and another.

I'd argue the opposite. I'd encourage one set of rules for markets to operate under, and make it illegal to trade securities in any other way. Flash crashes and such are a threat to the national economy (just look back at 2008). The markets certainly should be regulated in a way that helps to stabilize them. It would also make things like transaction taxation easy to implement, and eliminate a lot of forms of fraud and tax evasion. Any security would be traded in exactly one market, and you own however many shares of it the exchange says you own.

Comment: Re:So? (Score 1) 307

by Rich0 (#49530325) Attached to: Futures Trader Arrested For Causing 2010 'Flash Crash'

There are lots of problems with this:

Arbitrage between different markets for one.

Just require that anything traded in the market be traded there exclusively. For additional effect, require by law that all trades within the country use a market that operates under these rules.

There is a lack of transparency ...

I do agree with some of these concerns. This model does require trusting the market administrators, and they would be in a position to give tremendous advantage to any party with inside connections.

Keeping the book secret is another requirement you have, but it is impractical and difficult to audit or enforce.

I have mixed feelings on whether it should be secret. If it weren't, those particular issues would go away, but others would come up. However, I don't have a problem with anybody voluntarily sharing information; they just shouldn't be required to do so. I'd also allow anybody to place trades directly - it shouldn't cost anything to submit orders to the exchange yourself, other than perhaps a deposit or bond.

Comment: Re:So? (Score 1) 307

by Rich0 (#49527911) Attached to: Futures Trader Arrested For Causing 2010 'Flash Crash'

The solution to this kind of problem is to execute trades hourly or even daily, but at a random time that is not disclosed in advance. There might be only general guarantees, such as execution happening at least 6 hours after the last one. So sometime on Tuesday, between midnight and midnight, every trade will be executed: the computer freezes the book at a random moment, then runs through and executes every trade it can. The book is secret until that time and published afterward. I could see arguments both for and against allowing changes before execution, so I'm not sure which is better, though obviously all sales are final.
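The matching step described above can be sketched in a few lines of Python. Note the assumptions: the order book, the prices, and the midpoint pricing rule are all invented for illustration; a real exchange would use a proper uniform-price clearing rule and far more bookkeeping.

```python
import random

# Hypothetical frozen order book as (price, quantity) pairs; numbers are invented.
bids = [(14.02, 100), (14.01, 50), (13.99, 200)]   # buy orders
asks = [(13.98, 80), (14.00, 120), (14.05, 300)]   # sell orders

def run_batch_auction(bids, asks):
    """Cross overlapping bids and asks in one pass, filling each trade
    at the midpoint of the crossing prices."""
    bids = sorted(bids, key=lambda o: -o[0])   # best (highest) bid first
    asks = sorted(asks, key=lambda o: o[0])    # best (lowest) ask first
    fills = []
    while bids and asks and bids[0][0] >= asks[0][0]:
        (bid_px, bid_qty), (ask_px, ask_qty) = bids[0], asks[0]
        qty = min(bid_qty, ask_qty)
        fills.append((round((bid_px + ask_px) / 2, 3), qty))
        bids[0] = (bid_px, bid_qty - qty)
        asks[0] = (ask_px, ask_qty - qty)
        if bids[0][1] == 0:
            bids.pop(0)
        if asks[0][1] == 0:
            asks.pop(0)
    return fills

# Freeze the book at a random, undisclosed second of the day, then match.
freeze_second = random.randrange(24 * 60 * 60)
fills = run_batch_auction(bids, asks)
```

Because nobody can know the freeze moment in advance, there is nothing for a speed-based strategy to race: being a nanosecond faster than everyone else buys you nothing.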

Comment: Re:Loss of liquidity (Score 2) 307

by Rich0 (#49527875) Attached to: Futures Trader Arrested For Causing 2010 'Flash Crash'

Then the cost of price discovery will go up significantly.

Define "significantly."

Do we really need nanosecond resolution on stock price changes?

Do fluctuations at those levels REALLY reflect changes in the actual value of a company? At 4:01.000000001 PM is GM really worth 14.01, but at 4:01.000000002 something changed and it is now worth 14.02? And what is the cost of having this "extra resolution"?

Comment: Re:The problem is "beneficial" (Score 1) 196

by Rich0 (#49527745) Attached to: Concerns of an Artificial Intelligence Pioneer

Think of it this way: Robots A and B -- the first one can never harm or kill you, nor would it choose to based on some calculation that "might" save others; the second one might, and you have to determine whether it calculates the "greater good" with an accurate enough prediction model. Which one will you allow in your house? Which one would cause you to keep an EMP around for a rainy day?

Either is actually a potential moral monster in the making. The first might let you all die in a fire because it cannot predict with certainty what would happen if it walked on weakened floorboards, which might lead it to accidentally take a life. Indeed, a robot truly programmed never to take any action that could possibly lead to the loss of life would probably never be able to act at all. I can't predict the long-reaching consequences of my taking a shower in the morning. Maybe if I didn't create the additional demand for water, the water company wouldn't hire a new employee, who would then be stuck at home unemployed instead of getting in the car, driving to work, and getting into a fatal accident.

Sure, it is a ridiculous example. My point is just that we all make millions of unconscious risk evaluations every day and any AI would have to do the same. It requires some kind of set of principles beyond hard-coded rules like "don't kill somebody." AIs are all about emergent behavior, just as is the case with people. You can't approach programming them the way you'd design a spreadsheet.

One can't proceed from the informal to the formal by formal means.