Comment Friendliness (Score 5, Insightful) 367

The article's viewpoint is dangerous. We must solve the Friendliness problem before AGI is developed, or the resulting superintelligence will most likely be unfriendly.

The author also assumes an AI will not be interested in the real world, preferring virtual environments. This ignores the fact that any AI still needs a physical computing substrate, and the drive for more computing power would entice a superintelligence to convert all matter on Earth (and eventually the rest of the universe) into computronium. If the AI is not perfectly friendly, humans are unlikely to survive that conversion.

Comment Re:Liberty? (Score 1) 241

I suppose there's an AI issue, too--a singularity is going to get into this data in a few decades. I can't predict what an AI a hundred times smarter than any of us might do with it.

Don't worry about that. :) If the AI is Friendly, it won't hurt us no matter how much it knows, and if it's Unfriendly, it will hurt us just as badly no matter how much it knows.
