Comment Re:The answer is 42, er...I mean, encryption. (Score 1) 239

Nice in theory. Not so much in practice. With crypto, the devil's in the details. Here are just a few of the hard problems:


"The perfect is the enemy of the good" -- Voltaire.

Yes, those are all hard problems, but even a widespread partial solution would make mass surveillance at least an order of magnitude more difficult and push TLAs toward more focused data gathering.

Also, a partial solution can be improved into better solutions over time, which would be a much better situation than what we have now. The fact that we can't solve all those hard problems today should not be an excuse to do nothing.
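
To put rough numbers behind "an order of magnitude more difficult", here is a back-of-the-envelope cost model in Python; every figure in it is invented for illustration:

    # Hypothetical cost model -- every number below is invented.
    # Passive dragnet surveillance reads plaintext at near-zero
    # marginal cost per target; encryption, even imperfect, forces
    # an active per-target attack with a real unit cost.

    targets = 1_000_000

    plaintext_cost = 0.001     # assumed $/target for a bulk tap
    active_attack_cost = 10.0  # assumed $/target for exploit/MITM

    print(f"passive dragnet:    ${targets * plaintext_cost:,.0f}")
    print(f"per-target attacks: ${targets * active_attack_cost:,.0f}")
    # passive dragnet:    $1,000
    # per-target attacks: $10,000,000 -- a cost increase that forces
    # the attacker to pick targets instead of grabbing everything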

Comment Re:I no longer think this is an issue (Score 1) 258

You misunderstand how AIs are built.

The AI is designed to improve/maximize its performance measure. An AI will "desire" self-preservation (or any other goal) to the extent that self-preservation is part of its performance measure, directly or indirectly, and to the extent of the AI's capabilities. For example, it doesn't sound too hard for an AI to figure out that if it dies, then it will be difficult to do well on its other goals.
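
Here is a toy sketch of that point (a made-up model, not any real AI system): the performance measure below has no "stay alive" term, yet self-preservation still falls out of maximizing it:

    # Toy agent, all actions and numbers invented. The performance
    # measure has no "stay alive" term, yet the maximizing agent
    # avoids the action that kills it, because death forfeits all
    # future reward: self-preservation emerges instrumentally.

    ACTIONS = {
        # action:         (immediate_reward, survival_probability)
        "work_on_goal":   (1.0, 1.0),
        "risky_shortcut": (3.0, 0.5),  # bigger payoff, may destroy the agent
        "allow_shutdown": (0.0, 0.0),  # note: no explicit penalty for dying
    }

    FUTURE_VALUE = 10.0  # expected reward still obtainable if it survives

    def expected_performance(action):
        reward, p_survive = ACTIONS[action]
        return reward + p_survive * FUTURE_VALUE

    print(max(ACTIONS, key=expected_performance))  # -> work_on_goal

And the same logic scales with the AI's capabilities: the better it can predict which actions end in its destruction, the more strongly it will avoid them.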

Emotion in us is a large part of how we implement a value system for deciding whether actions are good/bad. Avoid actions that make me feel bad; do actions that make me feel good. For an AI, it's very similar. Avoid actions that decrease its performance measure; do actions that increase its performance measure.

The first big question is implementing a moral performance measure (no biggie, just a 2,000-year-old philosophy problem). The second big question is keeping that measure from being hacked, e.g., by feeding the AI erroneous information/beliefs. Judging by current events, we don't do very well at this with humans, so I can't imagine much better success with AIs.

Comment Open Source Tradeoff (Score 1) 265

Yes, the advantage of open source is that good actors can read the code and find and fix security flaws. The disadvantage is that bad actors can also read the code and find and exploit those flaws. One would hope the good actors would outweigh the bad ones, but my fear is that governments and organized crime have become bad actors in a big way. Even when a particular flaw is fixed, we all know there are more flaws waiting to be found and exploited in any big software project, and nowadays the big-time exploiters have the budgets and the manpower to take advantage.

That said, this doesn't mean closed source is any better (it's a different tradeoff), but it would be foolish to think that open-source software is not being exploited precisely because its source is open.

Comment Re:But was it really unethical ? (Score 1) 619

I can't speak for Kilobug, but my answers would be:

1. It depends on your values. E.g., how much do you value your own welfare compared to that of family, friends, co-workers, fellow citizens, and everyone else? If you want to be deliberate about it, you need to think about what you value and how you might have done things differently in that light.

2. I probably thought I was a deontologist, but if you carefully study your own and other people's decisions, the vast majority of us are consequentialists with values that tend toward selfishness. Witness how many Americans are angry about the Central American children/teenagers trying to get into the US.

3. As others have commented, doing a full analysis is time-consuming and uncertain (hence "maximum expected utility"). Most of the time, one has to follow rules that (so one believes) generally have good consequences. And generally, virtue and duty are good rules. But people make up all sorts of rules with little sense behind them. My grandmother thought opening an umbrella indoors was bad luck, but I am a little skeptical about that one.
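
For the curious, here is what that expected-utility calculation looks like in miniature, with invented probabilities and utilities (Python):

    # Expected-utility arithmetic with made-up numbers. The formula,
    # EU(a) = sum_o P(o|a) * U(o), is trivial to state; the hard,
    # time-consuming part is estimating P and U for real decisions.

    utilities = {"good": 10.0, "bad": -20.0}

    probs = {  # P(outcome | action), hypothetical estimates
        "follow_rule": {"good": 0.9, "bad": 0.1},
        "break_rule":  {"good": 0.6, "bad": 0.4},
    }

    def expected_utility(action):
        return sum(p * utilities[o] for o, p in probs[action].items())

    for action in probs:
        print(action, expected_utility(action))
    # follow_rule 7.0
    # break_rule -2.0

A rule like "keep promises" is essentially a cached approximation of that computation, which is why we fall back on rules most of the time.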
