Comment Re:So define your terms (Score 1) 600
So, let's look at your "nothing else!". What does it mean to say that the mind/brain is machine plus software plus "else"? It means that, in reducing the mind to its constituents, you end up with a list of elements we already know: particles, their interactions, plus that "else". Can this "else" in turn be reduced to its own constituents? If yes, then said else is a machine in its own right, built from those "else-parts". If not, then your quest to find stuff ends there.
Now suppose we find that consciousness is an irreducible. That in some way or another there are consciousnesses floating around that get linked to particles in the forming of brains. That being the case, actually understanding consciousness, how and why it works, developing new consciousnesses, improving them, even improving our own, all become unfortunately impossible. They would be givens, to be harvested, so to speak, from the source of consciousnesses as atomic units, forever locked in the state in which they came, unchanging, outside the domain of our technology, intelligence, hopes and wisdom.
That's an extremely sad outcome, which is why I sincerely hope our minds are indeed reducible to machine and software. If they aren't, we'll hit an insurmountable brick wall, and that'll be it.
Thanks for the thoughtful reply. I don't think I disagree with you on anything important there. There are a few minor issues: is it reasonable to assume that anything that can be understood is necessarily amenable to being modelled in software, for instance? And I'm not sure I'd share your sorrow if some element of our minds does turn out to be irreducible. We'd still be left with an awful lot we can potentially hack, and I quite like to think that there's an element of mystery to the human condition.
However, I can be very hopeful that our minds are indeed machinery and software, to the point of sounding almost certain, due to the research and advances made in biology, neurology, cognitive science etc. in the last few decades. They all point strongly in this direction, so there's indeed great expectation that the mind will be understood in a few more decades and thus opened up for betterment, and strong betterment at that.
I don't think there's much doubt that our minds are largely (perhaps very largely) mechanistic. And I'll accept that there's strong emerging evidence in support of that notion. I'm just not sure that the evidence is also evidence of "nothing else".
As for the matter of souls, I don't criticize technical versions of the concept, only naive religious ones that think of it as some kind of "non-matter matter". What I said above is all compatible with Platonic, Aristotelian and similar advanced concepts of the soul as truly immaterial.
I think I'd better take your word for it in this instance.
Reality is most probably composed, as the sages of yesteryear figured, of matter (ordinary matter, energy, space and time) and form (immaterial math).
Interesting way of looking at it. Personally, I'd have put space and time under "form" since they are basically the shape of the universe rather than the substance thereof. I also find myself uncomfortable with the idea that every non-material entity in the universe can be reduced to mathematics. Unless "math" is a philosophical term of art that I've not encountered, of course. Both minor quibbles in any event.
The soul of a thing is its mathematical structure, which doesn't depend on the specific particles that are following that structure. Back in the day it was thought this referred to the human shape, but nowadays science has advanced enough to translate that fuzzy concept of "shape" into the far more specific notion of DNA, the structure of which (not the actual molecule in your cells) fits Aristotle's concept of soul, as well as that of algorithms and software, the running of which on hardware fits Plato's concept of soul.
That reminds me of Rudy Rucker's Ware Tetralogy. He makes a good case for souls as software. I'm not sure it doesn't miss the really interesting question, though. Let me tell you how I see it.
The thing that interests me is consciousness. I know from personal experience that my existence has a strong subjective component. The question that really interests me is how we can get an AI to share that experience, and if we can, how we can possibly know.
Now, an engineer would probably approach the problem through functional equivalence. If the behaviours are the same under all circumstances then we can call that "good enough" and assume that equivalent behaviour indicates equivalent processes. From what you've written so far in this discussion, I'm guessing that you're broadly of that opinion yourself. For my part, I think that's just avoiding the issue.
This is what I meant by "magically self aware clockwork". Any combination of hardware and software can be modelled entirely in software by writing a software emulator for the hardware. It works both ways: any piece of software can be executed entirely in hardware. So whatever the program, there's no reason it can't be recreated in clockwork, along the lines of pre-Babbage calculating machines.
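To make that emulation claim concrete, here's a toy sketch: a tiny imaginary machine (three registers, four opcodes, all invented for the example) interpreted entirely in software. It's only an illustration of the principle that software can stand in for hardware, not a model of any real machine.

```python
# Minimal emulator for an invented register machine.
# Registers a, b, c; opcodes: set, add, jnz (relative jump), halt.

def emulate(program):
    """Run a program on the toy machine and return its final registers."""
    regs = {"a": 0, "b": 0, "c": 0}
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "set":            # set <reg> <constant>
            regs[args[0]] = args[1]
        elif op == "add":          # add <dst reg> <src reg>
            regs[args[0]] += regs[args[1]]
        elif op == "jnz":          # jump by <offset> if <reg> != 0
            if regs[args[0]] != 0:
                pc += args[1]
                continue
        elif op == "halt":
            break
        pc += 1
    return regs

# Example program: sum 5 + 4 + 3 + 2 + 1 into register c.
program = [
    ("set", "b", 5),
    ("set", "c", 0),
    ("add", "c", "b"),   # c += b
    ("set", "a", -1),
    ("add", "b", "a"),   # b -= 1
    ("jnz", "b", -3),    # loop back while b != 0
    ("halt",),
]
print(emulate(program)["c"])  # prints 15
```

And the equivalence runs the other way too: that same little loop could just as well be realised in gears and cams, which is exactly the clockwork point.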
So let's try a thought experiment: suppose you reduce a human mind to software, and then re-implement that software as a clockwork calculator, albeit an extremely large one. Is that collection of cogs conscious in the same way that I know myself to be conscious? And if it is, by what process did that consciousness come to reside in all that moving metal? Is there a critical number of cogs after which the machinery becomes capable of apprehending the beauty of a sunset (as opposed to merely recognising it as something humans would find beautiful)? Or is that self awareness intrinsic to all clockwork to some lesser degree, such that a clock on a mantelpiece is actively enumerating seconds as they pass, rather than just mechanically moving over time?
And if you're happy with the notion of conscious wristwatches, how do you feel about simpler machines still? Do pulleys and levers also have some dim awareness of their condition? And more to the point, where do you draw the line, short of adopting Animism and declaring that all matter is alive and aware? Because I think that's a notion that many in the AI field would find deeply disturbing.
And then of course you might consider that, at a particle level, the distinction between the clockwork and the building that houses it is more or less arbitrary. Does that mean the building is conscious while the clockwork is running? Maybe we should extend that to the planet, or further still. Is there a line we can draw before we run into some sort of universal consciousness and start debating whether or not it should be labelled "God"?
On the other hand, if that internal subjective experience is not a component of all clockwork, then why does it suddenly arise? And by what mechanism? I think a lot of AI researchers would very much like to draw a black box at this point, label the issue MAGIC and tell us all to stop asking awkward questions. Personally, I'm no happier with black boxes than you are.
The trouble is that Science is founded on the rigorous elimination of the Subjective. That, to my way of thinking, makes it a poor tool for investigating Subjectivity. Science can find the footprints of the subjective world (electrical activity in the brain, behavioural statistics, etc.) but it can't address the actual subjectivity directly. This leads a lot of scientists to dismiss the subjective as unimportant, or worse, to deny aspects of its existence. I think that's a mistake - Black Boxery of the worst kind, if you will.
There's a lot more I could write on that subject, but I think I've rambled on for long enough. I suppose it all boils down to two questions: if you emulate me perfectly in software, does my emulation have the same subjective experience as I have in my daily life - or indeed any subjective experience at all? And whether it does or not, how can we possibly be sure? I don't have any answers, only questions.
Thanks again for the thought-provoking response.