
Comment Re:Advancement overcloked! (Score 1) 265

The fact that the apparent diameter of the Moon and the Sun are virtually the same, resulting in nearly perfect solar eclipses, is also rather surprising, although I don't know that any science breakthrough comes out of that. (Lots of beauty, for those who can fly their Learjet up to Nova Scotia to see the total eclipse of the sun. Also nice if you want to avoid being burned at the stake by King Arthur. And I suppose that's how the Sun's corona was discovered.)
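The near-coincidence is easy to check with rough mean figures (the constants below are standard textbook approximations, nothing from the thread itself):

```python
import math

# Rough mean figures in kilometers (standard approximations, assumed
# for illustration).
MOON_DIAMETER_KM = 3_474
MOON_DISTANCE_KM = 384_400
SUN_DIAMETER_KM = 1_391_400
SUN_DISTANCE_KM = 149_600_000

def angular_diameter_deg(diameter_km: float, distance_km: float) -> float:
    """Apparent (angular) diameter of a distant sphere, in degrees."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

moon_deg = angular_diameter_deg(MOON_DIAMETER_KM, MOON_DISTANCE_KM)
sun_deg = angular_diameter_deg(SUN_DIAMETER_KM, SUN_DISTANCE_KM)

# Both come out close to half a degree, differing by only a few percent,
# which is why a total eclipse is such a near-perfect fit.
```

Since the Moon's orbit is elliptical, its apparent size actually swings a bit above and below the Sun's, which is why some eclipses are annular rather than total.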

Comment Re:Simplistic (Score 1) 385

Like arithmetic?

OK, that's a bad example, but 80 or 90 years ago it wouldn't have been thought of as something likely to get automated, because it required "thinking." (Some forward thinkers like Turing knew better, but most people would not have, I suspect.) Since then, many things that were thought to require thinking have succumbed to some kind of automation: chess playing, solving word problems, integral calculus, and now even Jeopardy. Along the way, even a certain kind of psychoanalysis was easily mimicked.
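That psychoanalysis mimicry (Weizenbaum's ELIZA) amounted to little more than pattern matching and reflection. A minimal sketch of the idea, with made-up rules of my own rather than Weizenbaum's actual script:

```python
import re

# Illustrative ELIZA-style rules: match a keyword pattern in the
# patient's utterance and reflect part of it back as a question.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(m.group(1).rstrip(".!?"))
    # No keyword matched: fall back to a content-free prompt.
    return "Please go on."
```

No model of mind anywhere, yet people famously attributed understanding to it, which is exactly the point: the behavior looked like thinking without requiring any.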

So: how do we know what *really* requires a mind? That's really the question.

Comment # of users (Score 1) 102

The downside of this is that they can afford to be totally unresponsive to users. Google has recently replaced their classic Google Maps with a piece of junk. Don't take my word for it; go to the Google Maps forum, this link for example: https://productforums.google.c.... While every single one of the close to 1000 posts on that thread (except for the Google representative's initial post) is negative, Google can afford to ignore them (and in fact, not even respond to them), because the complainers constitute a tiny fraction of the number of users. (And it's not clear how many they represent, i.e. how many other users hate the new version but haven't taken the time to post their displeasure--or may not know how to do so.)

Comment Re:Anthropomorphizing (Score 1) 421

"Our bodies aren't vessels...we inhabit, they are us."

Certain of that? Suppose we created an AI. One way of describing it might be as a piece of software, some data in some kind of database, and a state consisting of the values of some variables (or the weights in a neural net, or some such). That AI might be running on a particular computer, but in a very real sense, that piece of hardware is simply the body it is inhabiting at the moment. There's no obvious reason that same running software couldn't move itself to another identical computer. (It might instead copy itself to another computer, but that's a different question; just suppose for the moment that it moved.)

There are a lot of if's in the paragraph above, but it seems *in principle* that it should be possible. In which case the AI is not the hardware, it's the software. And if the AI is not the hardware, where is the argument that we are the hardware (or wetware, if you prefer)? We certainly don't know how to move ourselves from one body to another, nor to some kind of machine, and we may never know how. But in principle, it might be possible. And if it is, then aren't we more like software than hardware?

I realize that there are even more if's in the above paragraph. But unless you can show that there's something wrong with it in principle, then I don't see how you can claim that we _are_ our bodies. I may feel attached to my body, but that doesn't constitute a logical argument that I am.
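For what it's worth, the "software, not hardware" intuition can be sketched in a few lines (a toy illustration, with a made-up Agent class standing in for any real AI):

```python
import pickle

# A toy stand-in for an AI's state: some weights plus a bit of scratch
# memory. Purely illustrative; no real AI framework is implied.
class Agent:
    def __init__(self, weights, memory):
        self.weights = weights
        self.memory = memory

    def act(self, x):
        # Trivial "behavior": a weighted sum plus memory size, so we can
        # compare behavior before and after the move.
        return sum(w * x for w in self.weights) + len(self.memory)

original = Agent(weights=[0.5, 1.5, -0.25], memory=["hello"])
behavior_before = original.act(2.0)

# "Moving" the agent: serialize its entire state, destroy the original,
# and revive it on what stands in for different hardware.
snapshot = pickle.dumps(original)
del original

revived = pickle.loads(snapshot)
assert revived.act(2.0) == behavior_before  # same software, different "body"
```

The identity of the agent survives the move because everything that matters was in the serialized state, not in the particular machine running it. (Whether a mind's state is serializable in this way is, of course, one of the big if's.)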

Comment Re:The Sony connection (Score 1) 421

"the current versions of Windows/Linux/OSX etc are much more secure than their predecessors from 10-20 years ago": I know little or nothing about this stuff (I do some computer programming, but only in languages like Python and XML these days, and that doesn't tell me much about security), so let me ask: I'm sure these programs are more secure in the sense that a lot of holes which existed 10-20 years ago have been plugged. But these programs also have a lot more code than the old ones. Isn't it possible that more holes have been introduced in that new code, by programmers who didn't learn the lessons of the past? And even if not, is it possible that new _kinds_ of vulnerabilities have been found? And finally, aren't a lot of break-ins due to social engineering? Where I suppose the lesson is that if you make something idiot-proof, someone will make a better idiot.

Comment Re:Well... (Score 1) 421

Why should an AI be particularly concerned about our environment? A machine can survive in all kinds of environments; it doesn't particularly need our ecosystem. Indeed we have had machines in orbit outside the atmosphere for decades, as well as driving around on Mars, orbiting Saturn, en route to Pluto and beyond (and we did have one in orbit around Mercury, until we crashed it). If we ever manage to create a self-aware, intelligent and curious AI, I expect it will head off to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before--or is likely to go for a long time, because humans are more fragile, and need to carry along too much infrastructure. Much easier for an AI to travel to another planet of the Sun, or to another planetary system. And we'll be left behind, as the least of the AI's worries.

Comment Re:Funny, that spin... (Score 1) 421

I share your belief that academics who have an interest (financial or otherwise) in continuing AI research are probably not unbiased observers. And smart people like Hawking, Gates, and Musk are less likely to be biased, and perhaps better at predicting the future than I am (and maybe than you are, or other /.ers).

That said, I do have some questions for the pessimists (and I consider myself something of a pessimist). Is the worry that some AI will become super intelligent, even though it might not be self-aware? Or is the concern that some computer/software might become self-aware? It seems to me that the danger of a self-aware AI might be great, even if it were somewhat stupid. Or is the concern that some nation might construct autonomous battle robots? That, to my mind, is the real danger; they don't have to be intelligent in any real sense, nor self-aware, just destructive and hard to destroy (and perhaps bad at IFF).

Finally, for those who fear that a self-aware and possibly highly intelligent AI might decide humans don't belong on Earth: what makes Earth so desirable for an AI? They don't need oxygen or water, nor should they be particularly concerned about mild weather; they could get along just fine on Mars, or in space, so long as they had the ability to repair themselves.
