Comment Re:Anthropomorphizing (Score 1) 421

It's an interesting topic. What are we? Are we our physical bodies? Are we the information that we've associated? Are we souls/spirits within a shell?

I think so too, but I also think that at this point there is overwhelming evidence that we are our physical bodies - our whole organism - and zero evidence for any rival proposition.

Personally, I don't feel like my physical body (fond of it though I am) is "me." If I woke up tomorrow in a different form, I would still be the core person that I am.

That's really just a guess, at best. It appears highly likely that the "core person that you are" just is your form. To put it another way, the statement "if I woke up as a different person, I would still be the same person" is simply contradictory.

But I'm weird like that, in that my identity exists independent of it. I don't think that's the case for most people.

Again, this is hard to read as anything other than assertion, in this case apparently motivated by a feeling of superiority over other people. (I don't mean that as an insult, but to me the statement as written sounds like bluster.)

Comment Re:Anthropomorphizing (Score 1) 421

We already know that the human brain changes substantially and structurally over time (and that we can change it further by meddling).

Critically, the structure is spatiotemporally contiguous throughout these changes - which is totally unlike the transfer hypotheticals.

the human brain with no obvious connection to what materials the underlying machinery is composed of

Again, this is just asserting the conclusion that the physical structure of the brain is unimportant, and then reasoning backwards from that conclusion.

It's like claiming that a car won't drive if we make it out of aluminum instead of steel, or make the wheels of wood instead of rubber.

I think what you're saying is akin to claiming that something without wheels, differentials or a steering column is still a "car" which "drives." It may be a highly efficient vehicle, but it's not going to "feel the same."

That's the point here - the mind isn't a homunculus inhabiting your head, which can simply get a new job managing a different theater. All evidence to date supports the materialist proposition that to radically alter the physical structure of the mind/brain would be to radically alter its subjective character as well.

Comment Re:Anthropomorphizing (Score 1) 421

After all, there are many other problems you run into when you try that game, such as whether a mind is the "same" ten minutes later...a mind can change over time to some degree without changing its categorization.

It's a non sequitur - we're talking about hypotheticals which feature entirely different physical structures, or similar physical structures composed of physically distinct sets of atoms, not a single spatiotemporally contiguous set of atoms. We are talking about instance identity (the "same" mind), not categorization.

No, because definition by definition does not mean that.

You are begging the question, by simply assuming that human mental processes are exactly representable in entirely different physical structures.

And who knows, maybe it's impossible for me to sleep suspended from my ankles. After all, I haven't tried that either.

Right - you're just guessing.

Comment Re:Anthropomorphizing (Score 1) 421

The main reason AI might kill us all is that it is not anthropomorphic...You only get one wish, you have to make that wish in machine language...

Isn't this just as true of any programmatically automated solution? Control-software bugs and misfeatures have already caused plenty of unintended consequences without any learning/AI component - e.g., the big 2003 Northeastern blackout.

a large proportion of wishes that could be made would result in a planet covered in solar panels and computer factories.

And isn't this just a general argument for not hooking up any shiny new control software (again, regardless of learning/AI capacity) to the nukes until after it's been thoroughly tested?

Comment Re:Anthropomorphizing (Score 1) 421

The issue of internal biology in neurons is a big unknown though. We know that cell biology has a big effect on cognitive ability. The real question, though, is how much of an effect it has on the actual processing capabilities.

Very true, but it's common to see sensationalized explanations of ANNs, so it's a limitation I like to point out.

There is also the issue of fixed-size inputs (not just for ANNs but for essentially all current statistical learning methods). There are workarounds (like tiling with convolutional networks - see the sketch below), but in general it's a severe limitation that will require big advances in data-reduction algorithms to overcome. Active Appearance Models have an ingenious solution, the "shape-free patch," which could perhaps serve as a template for other approaches. We'll also need to develop robust methods for handling real semantics, which is a prerequisite for most tasks we'd consider truly "intelligent."
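To make the fixed-size point concrete, here's a minimal numpy sketch (my own toy illustration, not from any particular paper or library): a plain feed-forward layer bakes the input width into its weight matrix, while a 1-D convolution slides the same small filter over inputs of any length.

    import numpy as np

    rng = np.random.default_rng(0)

    # A plain feed-forward layer: the weight matrix pins the input width.
    W = rng.normal(size=(16, 100))    # expects exactly 100 input features

    def dense(x):
        return np.tanh(W @ x)         # breaks unless len(x) == 100

    # A 1-D convolution: the same 5-tap filter slides over any input length.
    k = rng.normal(size=5)

    def conv1d(x):
        return np.tanh(np.convolve(x, k, mode="valid"))

    dense(rng.normal(size=100))       # OK: matches the fixed width
    conv1d(rng.normal(size=100))      # OK
    conv1d(rng.normal(size=1000))     # still OK: the output just gets longer
    # dense(rng.normal(size=1000))    # ValueError: shape mismatch

Of course, the convolution only dodges the problem along the dimensions it tiles; you still need pooling or some other reduction to get a fixed-size summary for a classifier, which is exactly where the hard data-reduction work comes in.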

Comment Re:Anthropomorphizing (Score 2) 421

it is reasonable to expect that AI can overlap with the category of intelligences that have such motivations.

Fair enough, but we aren't dealing with the belief that AI can in principle have such motivations, but with the belief that any intelligence will have such motivations.

I wasn't aware that saying something is "antimaterialist", especially when it's not, was somehow an argument that anyone would take seriously.

That wasn't supposed to be an argument, in and of itself. I think this "vessel" viewpoint is a kind of closet dualism often exhibited by self-proclaimed materialists when pop psychological notions aren't closely examined.

Then the model of body (and also, the organ of the brain) as vessel for mind is demonstrated by actually being able to move the mind to a new and demonstrably different body.

But this seems to rest on an assertion that it would be the same mind. Set aside whether or not it's possible in principle to "transfer" the intelligence to a new type of "vessel" and just consider the old teleportation problem: is the new copy "you"?

Say, an alien transforms you into a silicon-based machine while preserving your mental processes

Again, that this is possible in principle is just an assertion. It is also possible that one's mental processes cannot, by definition, be preserved in a silicon-based machine, so long as direct simulation is excluded. (I do think that if you simulated every atom in a brain with a physically correct simulation, the simulated brain would feel like a physical one "on the inside," although this still leaves us with the "is it you" problem.)
