Comment: Re:The answer is 42, er...I mean, encryption. (Score 1) 239

by scruffy (#49028813) Attached to: Ask Slashdot: What Will It Take To End Mass Surveillance?

Nice in theory. Not so much in practice. With crypto, the devil's in the details. Here are just a few of the hard problems:

...

"The perfect is the enemy of the good" -- Voltaire.

Yes, those are all hard problems, but even a widespread partial solution would make mass surveillance at least an order of magnitude more difficult and push TLAs to be more focused in their data gathering.

Also, a partial solution has the chance to be improved into better solutions. This would be a much better situation than what we have now. The fact that we can't solve all those hard problems now should not be an excuse to do nothing.

Comment: Re:I no longer think this is an issue (Score 1) 258

by scruffy (#48797497) Attached to: AI Experts Sign Open Letter Pledging To Protect Mankind From Machines
You misunderstand how AIs are built.

The AI is designed to improve/maximize its performance measure. An AI will "desire" self-preservation (or any other goal) to the extent that self-preservation is part of its performance measure, directly or indirectly, and to the extent of the AI's capabilities. For example, it doesn't sound too hard for an AI to figure out that if it dies, then it will be difficult to do well on its other goals.
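The point about self-preservation emerging indirectly can be sketched with a toy example (all names and numbers here are invented for illustration): an agent that simply picks the action maximizing its performance measure, with no explicit "stay alive" goal, still prefers survival once dying cuts off future reward.

```python
# Toy sketch (all names and numbers hypothetical): an agent that picks the
# action maximizing its performance measure. Self-preservation is never an
# explicit goal, but an action that destroys the agent scores nothing on
# anything afterward.

def expected_score(action, horizon):
    """Score an (immediate_reward, agent_survives) pair over a planning horizon."""
    reward, survives = action
    if not survives:
        return reward          # no future reward once the agent is gone
    return reward * horizon    # a surviving agent keeps accruing reward

actions = {
    "risky_shortcut": (10, False),   # big one-shot payoff, agent destroyed
    "safe_routine":   (3,  True),    # modest payoff, agent persists
}

best = max(actions, key=lambda a: expected_score(actions[a], horizon=5))
# With a horizon of 5, safe_routine (15) beats risky_shortcut (10):
# "desiring" survival falls out of plain score maximization.
```

The longer the planning horizon, the more strongly the survival-preserving action dominates, which matches the comment's point that the AI "figures out" self-preservation from its other goals.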

Emotion in us is a large part of how we implement a value system for deciding whether actions are good/bad. Avoid actions that make me feel bad; do actions that make me feel good. For an AI, it's very similar. Avoid actions that decrease its performance measure; do actions that increase its performance measure.

The first big question is implementing a moral performance measure (no biggie, just a 2000+-year old philosophy problem). The second big question is keeping that from being hacked, e.g., by giving the AI erroneous information/beliefs. Judging by current events, we don't do very well at this, so I can't imagine much better success with AIs.

Comment: Open Source Tradeoff (Score 1) 265

by scruffy (#48144337) Attached to: Confidence Shaken In Open Source Security Idealism
Yes, the advantage of open source is that good actors can read the code and find and fix security flaws. The disadvantage is that bad actors can also read the code and find and exploit security flaws. One would hope the good actors would outweigh the bad ones, but my fear is that governments and organized crime have become bad (and worse) actors in a big way. Even when a particular flaw is fixed, we all know that there are still flaws to be found and exploited in any big software project, and nowadays the big-time software exploiters have the budgets and the manpower to take advantage.

That said, that doesn't mean closed-source is any better (a different tradeoff), but it would be foolish to think that open-source software is not being exploited for its open-source properties.

Comment: Re:But was it really unethical ? (Score 1) 619

by scruffy (#47510165) Attached to: Experiment Shows People Exposed To East German Socialism Cheat More

I can't speak for Kilobug, but my answers would be:

1. It depends on your values. E.g., how much do you value your own welfare compared to family, friends, co-workers, fellow citizens, and those other people? If you want to be conscious about it, you need to think about what you value and how you might have done things differently in that light.

2. I probably thought I was a deontologist, but if you carefully study your own and other people's decisions, the vast majority of people are consequentialists with values that tend toward selfishness. Witness how many Americans are angry about the Central American children/teenagers trying to get into the US.

3. As others have commented, doing a full analysis is time-consuming and uncertain (hence "maximum expected utility"). Most of the time, one has to follow rules that (so one believes) generally have good consequences. And generally, virtue and duty are good rules. But people make up all sorts of rules with little sense behind them. My grandmother thought opening an umbrella indoors was bad luck, but I am a little skeptical about that one.
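The "maximum expected utility" point can be made concrete with a tiny sketch (all probabilities and utilities invented): each action has uncertain outcomes, and a full analysis means weighting every outcome's utility by its probability, while rule-following trades peak payoff for predictability.

```python
# Minimal illustration (numbers invented) of maximum expected utility:
# each action maps to a list of (probability, utility) outcomes.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

actions = {
    "follow_rule":   [(1.0, 5)],              # predictable, decent outcome
    "full_analysis": [(0.6, 9), (0.4, -2)],   # better if it pans out, costly if not
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
# follow_rule: 5.0 vs. full_analysis: 0.6*9 + 0.4*(-2) = 4.6,
# so here the simple rule wins despite the analysis's higher upside.
```

With these made-up numbers the reliable rule beats the risky full analysis, which is exactly why rules with generally good consequences are worth following when a full analysis is uncertain.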

Comment: Rattiest Books on My Shelves (Score 1) 247

by scruffy (#46840349) Attached to: Ask Slashdot: Books for a Comp Sci Graduate Student?
Knuth's books are very good, but they don't get much use from me. Instead:

Introduction to Algorithms by Cormen et al.

A good statistics book. Mine is an old thing: Mathematical Statistics with Applications by Mendenhall and Scheaffer.

A good operations research book (linear programming, queueing theory, Markov models/decision processes, and the like). Another old thing: Operations Research by Hillier and Lieberman.

Other than that, it's books that are/were used often for programming reference: Common Lisp: The Language by Steele and LaTeX: A Document Preparation System by Lamport look the most worn.

Hopefully, someone will come up with something a little more recent than the "old things" I mentioned above.

Comment: Tax Corps Based on the Citizenship of Their Owners (Score 1) 288

by scruffy (#46421951) Attached to: How Ireland Got Apple's $9 Billion Australian Profit
Really, the "location" of these mega-corporations is a sham.

Instead, figure out (or estimate) what percentage of the shares are owned by US residents. Multiply that percentage times the corporation's profit times the corporate tax rate and that is what they should pay.

Note: Any public corporation knows who its immediate owners are, so that it can send out shareholder info. However, a shareholder might be another corporation which is owned by other corporations, etc. Hence the need to estimate (along with following the money as much as possible).
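The arithmetic of the proposal is a one-liner; here is a back-of-the-envelope version (all figures below are invented for illustration):

```python
# Back-of-the-envelope version of the proposal (all figures invented):
# tax owed = (fraction of shares held by US residents) x profit x corporate rate.

def us_tax_owed(profit, us_ownership_fraction, corporate_rate):
    return profit * us_ownership_fraction * corporate_rate

# e.g., $10B profit, 40% estimated US-resident ownership, 21% corporate rate
owed = us_tax_owed(10_000_000_000, 0.40, 0.21)
# roughly $840 million
```

The hard part, as noted above, is not the multiplication but estimating the ownership fraction through layers of corporate shareholders.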

+ - Helping Snowden Spill His Secrets->

Submitted by mspohr
mspohr writes: Great article in the NYTimes Magazine section by Peter Maass. http://www.nytimes.com/2013/08/18/magazine/laura-poitras-snowden.html

It goes into a lot of detail on how Snowden first attempted to contact Glenn Greenwald (who couldn't use secure communication at first) and then contacted Laura Poitras who was making a documentary about security. Lots of detail about their getting together, vetting each other, and personal threats to Greenwald and Poitras (as well as Snowden) as well as a good timeline of how events unfolded.
After reading this article I am more concerned than ever about the extent of US surveillance and the extent to which the USG will go to suppress information and intimidate whistle-blowers. Good to see that the NYTimes finally published some real journalism on this subject.
Also... accompanying transcript of "Q&A — Edward Snowden" http://www.nytimes.com/2013/08/18/magazine/snowden-maass-transcript.html


Comment: The Reasons for "Herculean effort" (Score 3, Informative) 95

by scruffy (#43246801) Attached to: DARPA Tackles Machine Learning
Raw data need to be cleaned up and organized to feed into the ML algorithm.

The results of the ML algorithm need to be cleaned up and organized so that they can be used by the rest of the system.

No one (currently) can tell you which ML algorithm will work best on your problem and how its parameters should be chosen without a lot of study. Preconceived bias (e.g., that it should be biologically based, blah, blah) can be a killer here.

The best results typically come from combinations of ML algorithms through some kind of ensemble learning, so now you have the problem of choosing a good combination and choosing a lot more parameters.

All of the above need to work together in concert.
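The ensemble step mentioned above can be sketched in a few lines; the simplest form is a majority vote over several classifiers (the "models" here are trivial stand-ins, not real learned models):

```python
# Sketch of the ensemble idea: combine several weak classifiers by
# majority vote. The classifiers here are trivial threshold stand-ins.

from collections import Counter

def majority_vote(classifiers, x):
    """Return the most common prediction among the classifiers for input x."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Three toy "models" that disagree on some inputs:
clf_a = lambda x: x > 0          # threshold at 0
clf_b = lambda x: x > 1          # stricter threshold
clf_c = lambda x: x > -1         # looser threshold

pred = majority_vote([clf_a, clf_b, clf_c], 0.5)
# clf_a: True, clf_b: False, clf_c: True -> majority says True
```

Even this toy version hints at the parameter-explosion problem: each member classifier brings its own knobs, and now you must also choose which members to combine and how to weight them.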

Certainly, it's not a bad idea to try to make this process better, but I wouldn't be expecting miracles too soon.

