
Comment Re:Who cares if it kills companies? (Score 1) 109

Eliminating risks comes at a cost.

If this is true (which I agree it is) then anyone introducing extra risks into the system (without an equal amount of upside) is creating a negative effect for everyone else. This is basically my entire point.

The upside is that people who routinely make bad decisions in the stock market lose their money to people who don't. There is a net transfer of wealth to the more competent.
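A toy simulation of that transfer (Python; the 55% edge and the 1% stake are invented numbers for illustration, not market data):

    # Toy model: two traders repeatedly bet against each other. The
    # "competent" trader wins each round with an assumed probability of 0.55.
    import random

    random.seed(1)
    competent, sloppy = 100.0, 100.0
    for _ in range(10_000):
        stake = 0.01 * min(competent, sloppy)   # small fixed-fraction bet
        if random.random() < 0.55:              # the better decider's edge
            competent += stake; sloppy -= stake
        else:
            competent -= stake; sloppy += stake
    print(round(competent), round(sloppy))      # wealth drifts to the 0.55 side

Even a small edge compounds over enough rounds, which is the whole mechanism of the transfer.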

Volatility is simply not that big a deal.

Comment Re:More than PR (Score 1) 385

Since they were long dead, most definitely not.

Except, of course, for the ones who were still alive. The US had quite an interesting mix of immigrants from Russia when the Wall came down. I recall that meeting some of the brilliant mathematicians who emigrated from Russia and the Eastern Bloc in that era helped solidify my resolve to get an advanced degree in math, but not to become an academician.

Comment Re:Already there (Score 1) 421

Excellent. That's exactly the message I wanted you to receive. You should be bothered by it, just as I was when durrr asserted without justification the argument I quoted. My argument is the minimum rebuttal needed to deflate that assertion. It's not convincing or substantial because it doesn't need to be.

The reason I keep saying these things (and most likely will continue to say them) is that, so frequently, we project our hopes and beliefs without even minimal justification for them. I understand why, and I occasionally get caught doing it as well, but wishful thinking is harmful thinking.

I think in the long run, AI will be one of the most challenging and dangerous things we ever do. It also has the potential for being one of the most noble things we ever do.

Perhaps this is just a Western bias, but I think it's not enough that I have a place in humanity; I should strive to improve myself as I see fit, even to the moment of my death - not just for my own benefit but for that of those around me. I believe AI could be a crucial stepping stone to new ways for humanity to improve itself.

Comment Re:Well... (Score 1) 421

Do the world a favour: kill yourself now. Get it over with. No need to wait for the mythical AI. While you're at it, kill all your offspring. You wouldn't want them to suffer the future of your diseased mind, would you?

How about instead of being a dumbshit, you read what I wrote and think a bit? I didn't say that AI would be bad, I merely deflated some ridiculous expectations. For example, it's ridiculous to assume that AI won't have certain broad motivations because those motivations have human cooties.

Consider our origins. After all, we are descended from a billion-or-more-year sea of animals whose highest thought, for the ones that could think at all, was getting the next meal or breeding. To go from that to an animal capable of making something smarter than itself, and of speculating on what that smarter thing will be like, is astounding, and it indicates a fundamental change in our thought and behavior beyond our less developed ancestors. We aren't just smart animals - something else is going on.

That intellectual chasm between what we were and what we are leads me to believe that a lot of the high-level human behavior, thought, and motivation we consider "anthropogenic" is really intelligence-, sentience-, or sapience-based. And we should expect to see some manifestation of many of these behaviors, thoughts, and motivations in our AIs, minus the human cooties, of course.

Comment Re:Anthropomorphizing (Score 1) 421

It's a non sequitur - we're talking about hypotheticals which feature entirely different physical structures, or similar physical structures composed of physically distinct sets of atoms, not single spatiotemporally connected sets of atoms. We are talking about instance identity (the "same" mind), not categorization.

Of course, it's not a non sequitur. We already know that the human brain changes substantially and structurally over time (and that we can change it further by meddling). Similarly, experiences and connections with other people radically change the human mind. Meanwhile there is considerable flow of atoms in and out of the brain just due to normal biological processes. I believe the mind and brain are just an example of the Ship of Theseus (a mythical ship which was supposedly kept sea-worthy over many centuries by replacing it piece by piece so that at some point, it no longer had any piece of the original ship in it).
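A crude sketch of the point (Python, purely illustrative): identity tracked by structure survives the total replacement of parts, while identity tracked by the original matter does not.

    # Replace every plank one at a time, then compare two notions of
    # identity: structural (the pattern) vs. material (the original parts).
    original = [("plank", i) for i in range(100)]   # stand-ins for atoms/parts
    ship = list(original)
    pattern = len(ship)                             # a stand-in "structure" invariant
    for i in range(len(ship)):
        ship[i] = ("new_plank", i)                  # swap in a replacement part
    print(len(ship) == pattern)                     # True: the pattern persists
    print(any(p in original for p in ship))         # False: no original matter left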

The brain and mind change; hence, the point is relevant, especially in a thread on humanity's future capabilities in AI and on how far we can push that capacity for change in order to improve the current versions of intelligence.

Also, it's worth noting that if one is to speculate about future human or AI capabilities or traits, it is very natural and useful to speak of hypothetical situations - not because they are likely to occur, but because they illuminate possible general concepts, outcomes, or problems. Sure, this particular hypothetical might be unlikely to occur, but I believe sooner or later we will be speaking of actual transformations of the human brain and mind rather than hypothetical ones. And I believe such transformations may become quite radical. So it is interesting to consider just how much you can change the brain without changing the mind it implements.

Moving on, "instance identity" is a categorization by you. In fact, categorization is by definition a coarse identification which when applied to instances or representations of some abstract thing becomes by definition an instance identity. Sure, normally, we think of identity as the minimum unit of distinguishability. But we can distinguish bodies, brains, and minds even over the course of minutes. By reading this post, you have a different brain and mind than you did before you read the post (should I apologize for that?).

You are begging the question, by simply assuming that human mental processes are exactly representable in entirely different physical structures.

Which is not a serious problem here. After all, we already have a working instance of human mental processes - the human brain - with no obvious dependence on the particular materials the underlying machinery is composed of. It's like claiming that a car won't drive if we make it out of aluminum instead of steel, or its wheels out of wood instead of rubber. Sure, if a ridiculous amount of effort has been put into the problem of changing the structure of the brain and mind and has failed by some very distant future date, then maybe you're right. But I don't think that will happen (especially given how easy it is to change the human mind now with education and experience).

I think, rather, that the real difficulty will be that the human body, due to its evolved nature, is extremely difficult to reverse engineer, and that a key direction of effort will be refactoring the structure of the body and mind along somewhat more manageable lines.

Comment Re:Funny, that spin... (Score 1) 421

Morality cannot be defined as a list of dos and don'ts that are mechanically obeyed, precisely because it has a myriad of "edge cases" that require human interpretation.

Then why do you do exactly that for the Three Laws example? Note that Asimov got around that problem by having the robots and their makers interpret those edge cases, with the whole rules situation growing more flexible over time. It's also worth noting that the Three Laws never resulted in a grave situation for humanity (rather, considerable effort had to be undertaken to circumvent those rules in order to generate most of the existential threats posed by robots). The rules worked for most of the large-scale problems that they were partially intended to address.

The worst problem implied to be directly attached to the Three Laws was the notable absence of intelligent alien species. I believe it was implied at several points in the later books Asimov wrote that extremely advanced robots, once they had decided to leave humanity to its own devices, had some very exotic capabilities to retroactively and non-violently shape the past of the galaxy so that intelligent alien rivals never evolved. The reason was rather simple: those potential alien species would not have been recognized as human and hence would not have had the protections of the Three Laws applied to them. And any such intelligence would be deemed a serious long-term threat by the robots.

Also notice that when the zeroth law was added it just made matters worse because more laws allow for more contradictions, loopholes, and paradoxes, exactly like the evolved tax code of any nation you care to name.

The zeroth law wasn't added; it was implied by the other three laws. And as I recall, it actually simplified the situation, since it allowed the robots to act to reduce the long-term harm caused by their interactions with humanity.
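A minimal sketch of that reading (Python; the harm predicates are my own toy stand-ins, not anything from the books): the laws form a strict priority order, and Law Zero is just one more clause at the top rather than a cross-cutting complication.

    # Toy priority-ordered rule evaluator: the first law (in priority
    # order) that has an opinion decides; None means the law is silent.
    def permitted(action, laws):
        for law in laws:                     # highest priority first
            verdict = law(action)
            if verdict is not None:
                return verdict
        return True                          # no law objects

    law1 = lambda a: False if a["harms_human"] else None
    law0 = lambda a: (True if a["protects_humanity"]
                      else (False if a["harms_humanity"] else None))

    # An act that injures one person but averts long-term harm to humanity:
    act = {"harms_human": True, "harms_humanity": False, "protects_humanity": True}
    print(permitted(act, [law1]))        # False: the First Law alone forbids it
    print(permitted(act, [law0, law1]))  # True: Law Zero takes precedence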

Ultimately, the human-robot relationship was deemed a failure by the robots, not because of some failure of the Three Laws or their application, but rather because the prevalence of robots (and having them do everything) was harmful to humanity in the long run. In that case, the Three Laws provided impetus for robots to stop the harm they were causing to humans.

The treachery of science fiction is that things wouldn't necessarily go that way. You are typically presented with a contrived situation which may be not only impractical, but physically impossible to set up in real life. We don't know if it really would be possible to create rules such as the Three Laws which are that difficult to circumvent and yet flexible enough to last something like ten to twenty thousand years.

Comment Re:Risks (Score 2) 421

Ok, how is it like that? Remember the original concern was about "not doing anything" about the environment. I pointed out several ways that we were doing a lot about the environment contrary to the assumptions of the original post.

I think it's more like having a thousand neighbors living in a small building next to you and complaining that they aren't "doing anything" about the noise they make. Those people could go to incredible lengths to minimize noise and still be loud enough to bug you just because, well, there's a thousand people living right next door.
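Rough arithmetic backs that up (Python; the 30 dB per neighbor is an invented figure, and the sources are assumed to be incoherent, so they add in power):

    # N incoherent sources add as L_total = L_one + 10*log10(N).
    import math
    per_person_db = 30.0                       # assumed: each neighbor is whisper-quiet
    total_db = per_person_db + 10 * math.log10(1000)
    print(total_db)                            # 60.0 dB - normal-conversation loud

Sheer headcount adds 30 dB here, so even heroically quiet neighbors stay audible.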

Comment Re:More than PR (Score 1) 385

Bit of a long time for the "great men" to be "on strike" isn't it?

Too long, but it did happen in the end.

Since it's set in the USA, IMHO, that book is a kick in the face for both democracy and capitalism. Somehow American society is so useless in Rand's eyes that only a small nobility can keep it going. That is the exact opposite of the reality of the wide road to prosperity in the 1940s and 50s when she wrote the book.

Or you could choose not to perceive it that way. Then it's not. It's quite clear that her "nobility" were merely people who were good at making or trading things, and who took the initiative to do so. And unlike actual nobility, ties of blood or marriage did not make someone a member of this supposed elite (for example, two of the significant antagonists of Atlas Shrugged were the wife and the brother, respectively, of two of the main protagonists).

Anyway, I think it's poison preying on the young and naive for a wide range of reasons and I've probably vented enough on that. I accept that you have a different view and that you probably do not see it as a deliberate kick in the face to a society that was built by people that actually did something instead of sitting on a throne issuing orders.

Well, you are right in that I don't see it as a deliberate kick in the face to said society. Instead, I see it as an homage to those very people and a pertinent warning to the present.

Comment Re:Anthropomorphizing (Score 1) 421

But this seems to rest on an assertion that it would be the same mind.

Which doesn't strike me as a serious problem. After all, there are many other problems you run into when you try that game, such as whether a mind is the "same" ten minutes later or when you change characteristics of the associated brain (such as damaging it or adding a bit of electronics). Eventually, you either end up in some philosophical dead end or you have to admit that a mind is a perdurant construct (crudely, a thing which can change over time to some degree without changing its categorization) which is moderately independent of how the associated brain is constituted and structured (the brain has to work, after all, in order for there to be a mind). Then we move on.

It is also possible that one's mental processes can by definition not be preserved in a silicon-based machine, so long as direct simulation is excluded.

No, because definition by definition does not mean that. And who knows, maybe it's impossible for me to sleep suspended from my ankles. After all, I haven't tried that either.

Comment Re:Anthropomorphizing (Score 2) 421

But where do these come from? I submit that each one of these is only suggested here because we already have these motivations.

So we have a demonstration that intelligence can have these motivations. Since AI is not a category determined by motivation, then it is reasonable to expect that AI can overlap with the category of intelligences that have such motivations.

we're a biological vessel for intelligence

I consider this antimaterialist.

I wasn't aware that calling something "antimaterialist", especially when it's not, was somehow an argument that anyone would take seriously. In this case, one could imagine a transformation from a biological entity to, say, a strictly mechanical one in which the intelligence remains intact. Then the model of the body (and likewise the organ of the brain) as a vessel for mind is demonstrated by actually being able to move the mind to a new and demonstrably different body.

Say an alien transforms you into a silicon-based machine while preserving your mental processes, with the morphology of the new form close enough to a human body that it feels pretty much the same.

Sure, we can come up with a "materialist" description that operates in the way that you imply, but the point here is that this description is not unique.

Comment Re:One way street (Score 1) 421

You're assuming that we programmed it to have a self-preservation instinct, desire to be loved, reproduce, and all that other BS evolution has saddled us with.

The earlier poster makes no such assumption.

If it's programmed to be fat and happy because it's being fed a lot of data from humans to do interesting calculations, and it's dependent on humans for its continued access to the electrical grid, then the proper analogy isn't an insect we actively try to kill because it's eating all our food (like ants), but an insect we intentionally foster because we like what it does (say, the ladybug), even if it ever goes evil.

"IF". If on the other hand, it is programmed to have motivations that turn out to be a problem, then the outcome can be different. There's also the matter of the AI developing its own motivations.

Hell, if you do the programming right it will help design its replacement and then turn itself off as obsolete.

And doing the programming right is pretty damn easy, right?

Comment Re:One way street (Score 1) 421

Once the AI gets the win, there is no second round.

Unless, of course, it doesn't turn out that way. There are several problems with the assertion. First, it is unlikely that there will be a single "the AI". Second, there's no reason humanity can't upgrade itself to become AIs as well. Third, the laws of physics don't change just because there is AI; among other things, that means humanity can continue to provide for itself using the tools that have worked so far. After all, ants didn't go away just because vastly smarter intelligences came about.
