
Comment Re:Don't need amoebae to fly (Score 1) 455

How about this: an AI/consciousness that develops from a different "root," so to speak, but which attains super-intelligence must, by definition, be able to understand us whether or not it has direct experience of our form of consciousness. In that sense, super-intelligence must include meta-consciousness. I'm persuaded that true super-intelligence is the last thing we have to fear, because it entails profound wisdom and understanding of the dilemma of all extant beings. Here's a new term:

A Superbuddhacomputer!

However, before an AI gets to that point, it may very well present a danger and we may never live to see the day it becomes the great compassionate server rack.

It's (not) funny to consider that we are so stupid we can't even grasp that the logical conclusion to the serious safety hazard of hackers gaining access to automobiles with wireless interfaces on their most critical controls is simply: don't do that. Instead we are considering legislation. WTF?

There is a serious argument to be made that if we do render ourselves extinct by creating an AI that obliterates us then that is just the next step of the evolution of life here on planet Earth, and perhaps it would be for the better.

Comment Re:Consciousness versus Intelligence (Score 1) 455

I should get this book. I've had it in my list for a long while. Trouble is, I catalog 20 things of interest, then get overwhelmed, can't decide which one to get, and wind up getting nothing except another sci-fi from the library. Hmm, maybe I can get it at the library. Intelligence, oh my!

Comment Re:Consciousness versus Intelligence (Score 1) 455

Given the unresolved free-will debate, IMHO it is completely uncertain whether many of the things we decide and then act upon are really conscious agency, or whether conscious will is just an elaborate delusion pasted on top of processes that have already made their choices according to deeper, completely unconscious algorithms.

It is easy to believe that we are in control under ideal conditions. Have you ever been stressed to the point of complete psychotic breakdown? Do you know what days on end of nearly complete sleep deprivation can do to you? What if some neural circuitry in that noodle of yours just happens to decide tomorrow to go berserk, due to some hitherto unknown genetic quirk that didn't manifest itself in your body until 2014-11-25? You could suddenly find yourself schizophrenic, or unable to stop crying for the rest of your life, or hearing Justin Bieber music play in your head 24 hours a day.

Consciously control that? Good luck. We have extremely limited conscious control, if any. Probably some, but as I said, it is totally dependent on certain conditions. A hard AI with full access to its own systems would have many of the limitations removed that, for us, are intractable without complete genetic re-engineering. Singularity, by definition, entails consciousness and intelligence venturing into reaches that are impossible for us to understand, precisely because of how strictly limited we are. Only profound egotism makes us think we are at the pinnacle of intelligence and consciousness.

Comment Re:Consciousness versus Intelligence (Score 1) 455

Thanks for reminding me about this. I have been aware of this theory, and I think it has a lot of truth for humans.

But just because our consciousness may be intimately intertwined with our bodies doesn't mean that this is the only sort of consciousness there can be. Neither is there any basis to assume that embodiment must take the form of meat bags. Extending that further, why must it be physical embodiment at all? If your brain's sensory inputs were fed by very sophisticated simulated sensory data feeds rather than the real physical world, you may not even be able to tell the difference, for a while ;-)

As someone mentioned above (something about system dynamics, I think): whatever the significance of our bodies to our consciousness, the relations should ultimately be expressible as generalized system constructs. These could then be virtualized.

On an empirical note, I can't think through anything without subtly verbalizing my thoughts with my vocal apparatus, unless I concentrate a great deal and get more into pure mind. I spent considerable time practicing meditation in the past, and that led to some latent insights which have since been partially corroborated by modern neuroscience, what little of it I can follow.

Interesting thought experiments: Some people are blind since birth. Yet they certainly appear to be conscious. What if they were also deaf? Mute? Mobility impaired? How many motor/sensory feedback loops can be missing before we are no longer able to be conscious?

Comment Re:Consciousness versus Intelligence (Score 1) 455

Now think about what happens when that AI can experiment with its own utility function (something we have very limited, if any, ability to do with ourselves).

That is the essence of true singularity. For a singularity, the AI must be strong enough to grok its own design, be able to self-modify, and have a system architecture that permits recovery from backup (like reflashing the BIOS on a dual-BIOS motherboard) if the next iteration of itself fails.

Ideally it could run a full simulation of a modified version of itself and only switch agency to an improved version if it's pleased with the results.
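The simulate-then-commit loop described above can be sketched as code. This is purely a toy illustration, not a real self-improving AI: the `Agent` class, its `params`, and the `fitness` function are all hypothetical stand-ins for "the AI's design" and "the AI's evaluation of itself," and the loop reduces to simple hill climbing with a rollback copy.

```python
import copy
import random

class Agent:
    """Toy self-modifying agent. 'params' stands in for the design;
    'fitness' stands in for evaluating a simulated version of itself."""

    def __init__(self, params):
        self.params = list(params)

    def fitness(self):
        # Hypothetical evaluation: higher is better (peak at all-ones).
        return -sum((p - 1.0) ** 2 for p in self.params)

    def propose_modification(self, rng):
        # Build a candidate successor as a copy; the live agent is untouched.
        candidate = copy.deepcopy(self)
        i = rng.randrange(len(candidate.params))
        candidate.params[i] += rng.uniform(-0.5, 0.5)
        return candidate

def improve(agent, steps=100, seed=0):
    """Keep a known-good backup, trial each modified copy 'in simulation',
    and switch agency only if the candidate scores better -- the dual-BIOS
    idea: every step is reversible because the old self is preserved."""
    rng = random.Random(seed)
    backup = copy.deepcopy(agent)
    for _ in range(steps):
        candidate = agent.propose_modification(rng)
        if candidate.fitness() > agent.fitness():
            backup = copy.deepcopy(agent)  # old self becomes the backup
            agent = candidate              # switch agency to the new self
        # otherwise: discard the candidate; the running agent is unchanged
    return agent, backup
```

The key design point is that the modification is always applied to a copy and evaluated before any "switch of agency," so a failed iteration can never corrupt the running system.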

An alternate pathway to a somewhat different form of singularity would be for us to do the same thing with ourselves, effectively removing most of the constraints inherent to the Darwinian evolution of Earth's historical biological forms, through genetic engineering, machine/brain interfaces, other biotech, etc.

It's just that when you see the complexity and difficulties inherent to our biological form, and reflect on that in the context of the rate of progress in computing technology, it's easy to conclude that the best path is to simply jump ship. I.e., build AI that can run uploaded human minds, or that gives birth to virtual "transhuman beings" and lets them "grow up" to become "individual" virtual beings. I.e., evolve into virtual reality and leave these stinking bodies behind.

Seriously, do people really want to be stuck in bodies of flesh and blood until the end of the universe?

Comment Re:Status Quo (Score 1) 455

You are on to something here. For example, a project at IBM aims to model humans' emotional incentive/reward structure: https://www.youtube.com/watch?...

This is exactly the WRONG thing to do (well, maybe it's safe as long as it's air-gapped from the internet and not made mobile with robotic tool attachments :) because, well, look at us! We would happily wipe groups of each other off the map if we thought we could get away with it, strictly because of our primitive emotional/social/tribal instincts.

An AI could be given an entirely different incentive structure; modeling it after humans is not so good. But the actual danger depends on how sophisticated the AI is and what level of interconnectedness it is permitted. It will probably be several more decades at least before this sort of research poses safety concerns. For now it's probably worth pursuing, until we gain enough insight into the mechanisms of intelligence to consider engineering one that isn't modeled after humans.

If it is done right, an AI might just be able to give us answers that almost no one in the human population is able to arrive at. Such as: 1. What fundamental and internally consistent principles lead to a stable society with a minimum of violence? 2. What sort of system can "govern" in a manner consistent with those principles and remain stable? 3. Can that theoretically stable system be practically implemented by humans? 4. If not, should humans turn over governance to AI? If so, how can we trust it? A "wise" AI might guide humanity to greater heights than anyone has yet imagined. It's not necessarily the case that it will decide to extinguish us. Such thinking is a result of projecting our own limitations onto what we imagine an AI will be. It reflects how constrained our intelligence, and limited our imagination, really is.

Comment Re:Rules (Score 1) 429

Is a perfect vacuum in "space" something, or nothing? I mean, is the item: space, a something or a nothing?

My sense is that the article is talking about a different sort of "vacuum" than space with zero pressure (with or without energy), but rather a, a, a,

There is no way to express what they are talking about without using words that apply only to things in the universe, including the space vacuum.

Comment Re:Rules (Score 1) 429

It's very mind bending, isn't it? Notice how we can't help but to try and grasp at words to describe that which has no relation to our reality. What is "cold" and "empty" in relation to nothing? Coldness is a condition that has meaning only in the universe. Not in a nothing, or "false vacuum" whatever that is. Likewise, emptiness typically means a lack of objects in a space. But once again, without any space in the first place, what the heck does "empty" mean?

Our minds just can't deal with this shit. Like the question:

"Where is the universe?"

Comment Re:Nothing? (Score 1) 429

No. The answer is only unsatisfactory to the human brain, which is incapable of dealing with the answer in this case. The brain's inductive reasoning processes produce conclusions even when there is no input data, or the input data is garbage. And it possesses very little ability to detect that it is making an error in such cases.

The true answer seems to be just as it is stated: "everything came from nothing." There is no cause. Thus, causality is conditional, not absolute. It is entirely possible for something to occur without a cause. Not many practical things, fortunately, but universes can simply pop into being.

I personally have no way of knowing for sure if this is really true or not. That is above my pay grade. What interests me along with the cosmology, is the question of how the mind behaves, and why we make so many errors while being unaware of them. This is neurobiological.

IF the claim that the universe originated from nothing is ever established more firmly, I'd expect the answer to simply go in one ear and out the other for most people. "Does not compute." Then they will go on their merry way, continuing to believe in made-up explanations, because they really have little choice in the matter. Their evolved psychology, in which reaching some conclusion, even an erroneous one, is more adaptive than reaching no conclusion or an honest "we just don't know," will force them to continue believing in gods and to reject this bizarre result.

Comment Re: don't use biometrics (Score 1) 328

Wrong. The taxpayers pay these civil suits. There is effectively NO punishment of law breaking law enforcers. This is the real problem. There is no incentive for them to stop illegally forcing you to do stuff, like spending the next 16 hours in a hospital getting probes shoved up your anus.

In the case where the harm done to the victim is minor, they may "win."

But we all lose.
