
Comment Re:Have they moved to LLVM/Clang? (Score 1) 26

LLVM/Clang builds the DragonFly world and kernel but does not yet build the boot loader; in the meantime it can be brought in via dports. So it isn't at 100% yet, but it's very close. When it does get to 100%, it will become one of our two officially supported compilers, which are currently gcc-4.7 and gcc-5.2.1.

Wayland support isn't really up to us, but there is Wayland support in XOrg that I think works for programs that want to use that API. Don't quote me on it, though.


Ok, got it. No quoting.

Comment Re:The answer is 42, er...I mean, encryption. (Score 1) 239

Nice in theory. Not so much in practice. With crypto, the devil's in the details. Here are just a few of the hard problems:


"The perfect is the enemy of the good" -- Voltaire.

Yes, those are all hard problems, but even a widespread partial solution would make mass surveillance an order of magnitude more difficult and push the TLAs toward more focused data gathering.

Also, a partial solution can be improved into a better one over time. That would be a much better situation than the one we have now. The fact that we can't solve all of those hard problems today is no excuse to do nothing.

Comment Re:I no longer think this is an issue (Score 1) 258

You misunderstand how AIs are built.

The AI is designed to improve/maximize its performance measure. An AI will "desire" self-preservation (or any other goal) to the extent that self-preservation is part of its performance measure, directly or indirectly, and to the extent of the AI's capabilities. For example, it doesn't sound too hard for an AI to figure out that if it dies, then it will be difficult to do well on its other goals.

Emotion, in us, is a large part of how we implement a value system for deciding whether actions are good or bad: avoid actions that make me feel bad; do actions that make me feel good. For an AI, it's very similar: avoid actions that decrease its performance measure; do actions that increase it.
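To make the point concrete, here's a toy sketch (entirely hypothetical names and numbers, not any real AI system) of an agent that simply picks the action maximizing its performance measure. Self-preservation is never an explicit goal, but allowing shutdown forfeits all future reward, so the agent avoids it instrumentally:

```python
# Toy agent: picks the action with the highest expected performance measure.
# "Self-preservation" is nowhere in the goal list; it falls out of the math,
# because shutting down forfeits all future reward.

# Hypothetical action table: action -> (immediate reward, still running after?)
ACTIONS = {
    "do_task":        (1.0, True),
    "idle":           (0.0, True),
    "allow_shutdown": (0.0, False),
}

def expected_performance(action, horizon=10, per_step_reward=1.0):
    """Immediate reward plus reward from the remaining steps, if still running."""
    reward, alive = ACTIONS[action]
    future = per_step_reward * (horizon - 1) if alive else 0.0
    return reward + future

best = max(ACTIONS, key=expected_performance)
print(best)  # prints "do_task" -- never "allow_shutdown"
```

The agent "desires" to keep running only to the extent that staying alive is worth something under its performance measure, which is exactly the point above.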

The first big question is implementing a moral performance measure (no biggie, just a 2,000-year-old philosophy problem). The second is keeping that measure from being hacked, e.g., by feeding the AI erroneous information or beliefs. Judging by current events, we don't do very well at this with humans, so I can't imagine much better success with AIs.
