
Comment: Re:Anthropomorphizing (Score 1) 402

by mcswell (#49766073) Attached to: What AI Experts Think About the Existential Risk of AI

"Our bodies aren't vessels...we inhabit, they are us."

Certain of that? Suppose we created an AI. One way of describing it might be as a piece of software, some data in some kind of database, and a state consisting of the values of some variables (or the weights in a neural net, or some such). That AI might be running on a particular computer, but in a very real sense, that piece of hardware is simply the body it inhabits at the moment. There's no obvious reason the same running software couldn't move itself to another identical computer. (It might instead copy itself to another computer, but that's a different question; just suppose for the moment that it moved.)
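The move-versus-copy idea can be made concrete with a toy Python sketch (purely illustrative; the state dictionary, the respond() function, and the use of pickle as the "transport" are all my own stand-ins, not anything a real AI would use). Serializing the state and reconstituting it elsewhere yields identical behavior, which is the sense in which the software is independent of the particular machine:

```python
import pickle

# Toy stand-in for an AI's "state": some weights plus a step counter.
state = {"weights": [0.5, -1.2, 3.3], "steps": 42}

def respond(s, x):
    # Trivial "behavior" driven entirely by the state, not the machine.
    return sum(w * x for w in s["weights"]) + s["steps"]

# "Move" the running state: serialize it, as if shipping it to an
# identical computer elsewhere, then reconstruct it there.
blob = pickle.dumps(state)
restored = pickle.loads(blob)

# The reconstructed state behaves identically to the original.
assert respond(restored, 2.0) == respond(state, 2.0)
```

Whether deleting the original blob afterwards counts as a "move" rather than a "copy" is exactly the philosophical question at issue; the sketch only shows that the behavior doesn't depend on which hardware holds the bytes.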

There are a lot of ifs in the paragraph above, but *in principle* it seems that it should be possible. In which case the AI is not the hardware; it's the software. And if the AI is not the hardware, where is the argument that we are the hardware (or wetware, if you prefer)? We certainly don't know how to move ourselves from one body to another, nor into some kind of machine, and we may never know how. But in principle it might be possible. And if it is, then aren't we more like software than hardware?

I realize that there are even more ifs in the above paragraph. But unless you can show that there's something wrong with it in principle, I don't see how you can claim that we _are_ our bodies. I may feel attached to my body, but that doesn't constitute a logical argument that I am my body.

Comment: Re:The Sony connection (Score 1) 402

by mcswell (#49766001) Attached to: What AI Experts Think About the Existential Risk of AI

"the current versions of Windows/Linux/OSX etc are much more secure than their predecessors from 10-20 years ago": I know little or nothing about this stuff (I do some computer programming, but only in languages like Python and XML these days, and that doesn't tell me much about security), so let me ask: I'm sure these programs are more secure in the sense that a lot of holes which existed 10-20 years ago have been plugged. But these programs also have a lot more code than the old ones. Isn't it possible that more holes have been introduced in that new code, by programmers who didn't learn the lessons of the past? And even if not, is it possible that new _kinds_ of vulnerabilities have been found? And finally, aren't a lot of break-ins due to social engineering? I suppose the lesson there is that if you make something idiot-proof, someone will make a better idiot.

Comment: Re:Well... (Score 1) 402

by mcswell (#49765953) Attached to: What AI Experts Think About the Existential Risk of AI

Why should an AI be particularly concerned about our environment? A machine can survive in all kinds of environments; it doesn't particularly need our ecosystem. Indeed we have had machines in orbit outside the atmosphere for decades, as well as driving around on Mars, orbiting Saturn, en route to Pluto and beyond (and we did have one in orbit around Mercury, until we crashed it). If we ever manage to create a self-aware, intelligent and curious AI, I expect it will head off to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before--or is likely to go for a long time, because humans are more fragile, and need to carry along too much infrastructure. Much easier for an AI to travel to another planet of the Sun, or to another planetary system. And we'll be left behind, as the least of the AI's worries.

Comment: Re:Funny, that spin... (Score 1) 402

by mcswell (#49765907) Attached to: What AI Experts Think About the Existential Risk of AI

I share your belief that academics who have an interest (financial or otherwise) in continuing AI research are probably not unbiased observers. And smart people like Hawking, Gates, and Musk are less likely to be biased, and perhaps better at predicting the future than I am (and maybe than you are, or other /.ers).

That said, I do have some questions for the pessimists (and I consider myself something of a pessimist). Is the worry that some AI will become superintelligent, even though it might not be self-aware? Or is the concern that some computer/software might become self-aware? It seems to me that the danger of a self-aware AI might be great, even if it were somewhat stupid. Or is the concern that some nation might construct autonomous battle robots? That, to my mind, is the real danger; they don't have to be intelligent in any real sense, nor self-aware, just destructive and hard to destroy (and perhaps bad at IFF).

Finally, for those who fear that a self-aware and possibly highly intelligent AI might decide humans don't belong on Earth: what makes Earth so desirable for an AI? AIs don't need oxygen or water, nor should they be particularly concerned about mild weather; they could get along just fine on Mars, or in space, so long as they had the ability to repair themselves.

Comment: Re:older generation is totally clueless about tech (Score 1) 135

by mcswell (#49759131) Attached to: NSA-Reform Bill Fails In US Senate

Hey there, good buddy, I'm with you; passed double nickels a long time ago, and still got it on cruise.

What's your handle? Oops, that's old tech.

Last ten years: Python, LaTeX, xslt (ok, xslt has got to be my un-favorite programming language, maybe it's mastered me instead of the other way around), sfst.

Comment: Re:Moar Cloud (Score 1) 130

by mcswell (#49635439) Attached to: Microsoft Office 2016 Public Preview Released

Eh, no talk stink! I turn 65 next month. And I've gone through more adaptation than most people our age. I'm now on my fifth favorite programming language (FORTRAN, Pascal, Lisp, Prolog, now Python; that doesn't count my unfavorite programming languages, like C, or my more exotic languages, like lexc/xfst and sfst.)

That doesn't mean I have to like changes, when there's no good reason for them. Menus just work fine for me, and they are written with alphabetic words, not hieroglyphics, thank you. (If I could get rid of the useless icons, the ribbon would become a badly organized menu.)

I'm hoping that some day the Ribbon will go the same way as New Coke, Windows 8, tail fins on cars, and the new-and-improved Google Maps. (Ok, the latter is just a hope.)
