
Comment Friendliness (Score 5, Insightful) 367

The article's viewpoint is dangerous. We must solve the Friendliness problem before AGI is developed, or the resulting superintelligence will most likely be unfriendly.

The author also assumes an AI would not be interested in the real world, preferring virtual environments. This ignores its need for a physical computing substrate, which would drive any superintelligence to convert all matter on Earth (and, eventually, the universe) into computronium. Unless the AI is perfectly Friendly, humans are unlikely to survive that conversion.

Comment Re:Liberty? (Score 1) 241

I suppose there's an AI issue, too--a singularity is going to get into this data in a few decades. I can't predict what an AI a hundred times smarter than any of us might do with it.

Don't worry about that. :) If the AI is Friendly, it won't hurt us no matter how much it knows, and if it's Unfriendly, it won't matter how much it knows; it would hurt us just as badly anyway.
