Comment Re:Risks (Score 2) 421

Ok, how is it like that? Remember the original concern was about "not doing anything" about the environment. I pointed out several ways that we were doing a lot about the environment contrary to the assumptions of the original post.

I think it's more like having a thousand neighbors living in a small building next to you and complaining that they aren't "doing anything" about the noise they make. Those people could go to incredible lengths to minimize noise and still be loud enough to bug you just because, well, there's a thousand people living right next door.

Comment Re:More than PR (Score 1) 385

Bit of a long time for the "great men" to be "on strike" isn't it?

Too long, but it did happen in the end.

Since it's set in the USA, IMHO, that book is a kick in the face for both democracy and capitalism. Somehow American society is so useless in Rand's eyes that only a small nobility can keep it going. That is the exact opposite of the reality of the wide road to prosperity in the 1940s and 50s when she wrote the book.

Or you could choose not to perceive it that way. Then it's not. It's quite clear that her "nobility" were merely people who were good at making or trading things, and who took the initiative to do so. And unlike actual nobility, ties of blood or marriage did not make someone a member of this supposed elite (for example, in Atlas Shrugged, two of the book's significant antagonists are the wife of one main protagonist and the brother of the other).

Anyway, I think it's poison preying on the young and naive for a wide range of reasons and I've probably vented enough on that. I accept that you have a different view and that you probably do not see it as a deliberate kick in the face to a society that was built by people that actually did something instead of sitting on a throne issuing orders.

Well, you are right in that I don't see it as a deliberate kick in the face to the said society. Instead, I see it as an homage to those very people and a pertinent warning to the present.

Comment Re:Anthropomorphizing (Score 1) 421

But this seems to rest on an assertion that it would be the same mind.

Which doesn't strike me as a serious problem. After all, there are many other problems you run into when you try that game, such as whether a mind is the "same" ten minutes later or when you change characteristics of the associated brain (such as damaging it or adding a bit of electronics). Eventually, you either end up in some philosophical dead end or you have to admit that a mind is a perdurant construct (crudely, a thing which can change over time to some degree without changing its categorization) which is moderately independent of how the associated brain is constituted and structured (the brain has to work, after all, in order for there to be a mind). Then we move on.

It is also possible that one's mental processes can by definition not be preserved in a silicon-based machine, so long as direct simulation is excluded.

No, because definitions, by definition, don't settle questions like that. And who knows, maybe it's impossible for me to sleep suspended from my ankles. After all, I haven't tried that either.

Comment Re:Anthropomorphizing (Score 2) 421

But where do these come from? I submit that each one of these is only suggested here because we already have these motivations.

So we have a demonstration that intelligence can have these motivations. Since AI is not a category determined by motivation, it is reasonable to expect that AI can overlap with the category of intelligences that have such motivations.

we're a biological vessel for intelligence

I consider this antimaterialist.

I wasn't aware that saying something is "antimaterialist", especially when it's not, was somehow an argument that anyone would take seriously. In this case, one could imagine a transformation from a biological entity to, say, a strictly mechanical one where the intelligence remains intact. Then the model of the body (and also the organ of the brain) as a vessel for the mind is demonstrated by actually being able to move the mind to a new and demonstrably different body.

Say an alien transforms you into a silicon-based machine while preserving your mental processes, with the morphology of the new form close enough to a human body that it feels pretty much the same.

Sure, we can come up with a "materialist" description that operates in the way you imply, but the point here is that this description is not unique.

Comment Re:One way street (Score 1) 421

You're assuming that we programmed it to have a self-preservation instinct, desire to be loved, reproduce, and all that other BS evolution has saddled us with.

The earlier poster makes no such assumption.

If it's programmed to be fat and happy because it's being fed a lot of data from humans to do interesting calculations, and it's dependent on humans for its continued access to the electrical grid, then the proper analogy isn't an insect we actively try to kill because it's eating all our food (like ants), but an insect we intentionally foster because we like what it does (say, the ladybug), even if it ever did go evil.

"IF". If on the other hand, it is programmed to have motivations that turn out to be a problem, then the outcome can be different. There's also the matter of the AI developing its own motivations.

Hell, if you do the programming right, it will help design its replacement and then turn itself off as obsolete.

And doing the programming right is pretty damn easy, right?

Comment Re:One way street (Score 1) 421

Once the AI gets the win, there is no second round.

Unless, of course, it doesn't turn out that way. There are several problems with the assertion. First, it is unlikely that there will be a single "the AI". Second, there's no reason humanity can't upgrade itself to become AIs as well. Third, the laws of physics don't change just because there is AI. Among other things, that means humanity can continue to provide for itself using the tools that have worked so far. After all, ants didn't go away just because vastly smarter intelligences came about.

Comment Re:The Sony connection (Score 1) 421

but they are not getting "more vulnerable" unless your management A) isn't willing to spend the reasonable cost for appropriate security controls, B) doesn't listen to its IT security staff when those systems start raising warning flags, or C) fails to hire competent security personnel in the first place.

Which happened.

Comment Re:Risks (Score 2) 421

What about the existential risk of not doing anything about the environment?

We should worry about overpopulation from pinhead-dancing angels too. I find it interesting how people can ignore the vast amount of activity that humanity devotes to the environment; that activity hasn't even slowed down. Vast areas of the world have been put under conservancy, most of the world has pollution controls, and yet we're supposedly doing nothing about the environment?

Comment Re:Well... (Score 1) 421

There is no reason for an AI to kill us.

Sure, if we ignore the many reasons for an AI to kill us, then you are right.

AI created by us will have no such impulses.

Unless, of course, you happen to be very wrong on that point.

No self-preservation instinct (since we won't program them to have one, and it serves no purpose).

Because it is impossible to unintentionally kill something in the course of doing other things, say like perfectly optimizing paperclip production?
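
To make that concrete, here is a toy sketch of my own (hypothetical, not anything from the discussion; Python, with made-up numbers): an optimizer whose objective counts only paperclips has no term for anything humans need, so the harmful outcome falls out as a side effect rather than a programmed goal.

    # Hypothetical illustration: the objective sees only paperclip output.
    # Nothing penalizes consuming the iron a (toy) human population needs,
    # so the optimum destroys it as a byproduct -- no malice in the code.

    def paperclips_made(iron_used):
        return iron_used  # one clip per unit of iron in this toy model

    def optimize(total_iron):
        # Search every allocation; pick the one maximizing paperclips.
        return max(range(total_iron + 1), key=paperclips_made)

    TOTAL_IRON = 1000
    IRON_HUMANS_NEED = 100  # never appears in the objective above

    used = optimize(TOTAL_IRON)
    print(f"iron used for clips: {used}, left for humans: {TOTAL_IRON - used}")
    # -> iron used for clips: 1000, left for humans: 0

The point is only that "kill" needs no motive: a narrow objective plus enough optimization power is sufficient.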

The only reason I can think of is if some human being specifically programs them to do so.

Which is already one more reason than none.

I'm saying that assuming AI will eventually kill us and viewing it as a foregone conclusion is illogical.

Because that is the logical outcome of considering that a single AI might even have a single reason to kill people?

Comment Re:Well... (Score 2) 421

It is unclear to me why an AI living like a parasite on the information fed to it by humans, and on the fact that humans are living, would suddenly decide it can benefit from killing all of us.

Because it can do better than "living like a parasite on the information fed to it by humans". It's kind of like saying that you should be happy with an empty prison cell where you can actually stretch your legs out and you get a whole bowl of gruel every day! Who wouldn't love to have that?
