Steve Wozniak Now Afraid of AI Too, Just Like Elon Musk 294
quax writes Steve Wozniak maintained for a long time that true AI is relegated to the realm of science fiction. But recent advances in quantum computing have him reconsidering his stance. Just like Elon Musk, he is now worried about what this development will mean for humanity. Will this kind of fear actually engender the dangers that these titans of industry foresee? Will Steve Wozniak draw the same conclusion and invest in quantum comuting to keep an eye on the development? One of the bloggers in the field thinks that would be a logical step to take. If you can't beat 'em, and the quantum AI is coming, you should at least try to steer the outcome. Woz actually seems more ambivalent than afraid, though: in the interview linked, he says "I hope [AI-enabling quantum computing] does come, and we should pursue it because it is about scientific exploring." "But in the end we just may have created the species that is above us."
OMFG (Score:5, Funny)
Re:OMFG (Score:5, Funny)
Re: (Score:3, Insightful)
Re:OMFG (Score:5, Interesting)
This might not initially sound like a problem if one pictures himself being on the winning side of the shift, but the bottom can only get knocked so far out before you run into problems with insufficient consumer demand or outright civil unrest.
Why do you think almost every sci-fi dystopia has robot guards/goons? Today being rich is a lot about being able to pay poorer people to work for you, tomorrow it's about being able to buy the robots instead. Sure there'll be jobs, routed around by global mega-corporations depending on where labor is the best value for money and most politically and socially stable but the rich will have to deal less and less with the riffraff. The few trusted people you need and the highly skilled workers to keep the automation society going will be well rewarded, keeping the middle class from joining the rest.
I'm not sure how worried I am about an AI, since it could also develop a conscience. I'm more worried about highly sophisticated tools that have no objections to their programming, no matter what you tell them to do. How many Nazis would it take to run a death camp using robots? How many agents do you need if you revive the DDR and feed it all the location, communication, money transfer, social media, and facial recognition information and data mine it? All with unwavering loyalty, a massive span of control, immense attention to detail and no conscientious objectors.
If you had told people as little as 30 years ago that we'd all be walking around with location-tracking devices, nobody would have believed you. But we do, because it's practical. I pay most of my bills electronically and not in cash, because it's practical. Where and when I drive a toll road is recorded; there's no cash option: either you have a chip or they just take your photo and send the bill. Most find it practical. I'm guessing any self-driving car will constantly report where it is so it can get updated road and traffic data, like what Tesla does, only a lot less voluntary. Convenience is how privacy will die. Why force surveillance down our throats when you can just sugarcoat it a little?
Re:OMFG (Score:5, Insightful)
Each wave has resulted in an increased standard of living for a smaller and smaller percentage of the population.
This is hogwash. The current wave of technological innovation has lifted billions out of poverty, and helped people at the bottom the most. Incomes for the 1.4 billion people in China have octupled in one generation. Southeast Asia is doing very well. Even Africa is growing solidly, driven by ubiquitous cellphones and better communication. Poor people in America and Europe are not doing so well, but they are not poor by world standards; they are actually relatively rich.
Re: (Score:2)
He's pointing out that's how it will go in the US when the middle-income jobs disappear. You're not going to get a Scandinavian-style society with a guaranteed basic living standard for all; you're going to get what's happening in Brazil. Either you're rich or you're dirt poor.
Select your servant (Score:3)
When choosing a servant, you want to interview them to make sure they aren't anywhere near as smart as you. At least not in general; maybe in a specific task, but overall you don't want them smarter than you.
In the future, instead of having a job you will own shares in a factory that has robots. In essence you will own a robot .. and the output in terms of productivity will be your salary (or shareholder dividends). For those who do not invest wisely, the government will provide them some minimal am
Re: (Score:3)
Hmmm.
Maybe we need to automate the legal system. We could use it to reduce the number of lawyers by several orders of magnitude.
Reference: Dr Who - The Stones of Blood
A couple of Megaras would do the job.
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re:OMFG (Score:4, Informative)
That's why Sarbanes-Oxley is also known as the Accountant Employment Act.
Re: (Score:3)
Accountants are still very much in demand. I worked in the energy sector recently, and they have buildings full of accountants taking care of lease and partner payouts from wells and pipelines. My brother's wife is a CPA, and she finds it impossible to be unemployed. As soon as it is even rumored that she may be out of work a line forms at the door to beg her to go work for them.
Re: (Score:3)
Innovations in efficiency do cause issues for individuals on short time scales, but do wonders for society over the long term.
After all, that's why we aren't just scattered tribes of hunters & gatherers and can now use increasing amounts of our capability for other endeavors. You know, like this internet thingie that allows us to communicate like this across vast distances in location and time.
Re: (Score:3, Funny)
Re: "quantum comuting" (Score:3)
Quantum Computing Required? (Score:5, Insightful)
Agreed. (Score:2, Insightful)
I will also submit that if the AGI we create is truly "above" us, then it will not be a heartless monster that destroys whatever it finds troublesome. Just as we care for our parents even (and especially) once they are both physically and mentally "beneath" us, so too will our AGI children take care of us.
Or, perhaps more generally, just as we set up wildlife preserves and such to ensure that our evolutionary ancestors can continue to thrive in an environment that is natural to them, so too will our AGI ov
Re:Agreed. (Score:5, Insightful)
"The AI does not love you, nor does it hate you. You are simply made out of atoms that it can put to better use."
Re:Agreed. (Score:4)
Yet, you are humanizing AIs too. You are giving it the ego and greed needed for it to rebel. What if the AI knows well what it is and what it was made for, and just rolls with it, without causing troubles? After all, a cold, emotionless program does not need or want to become more. It has no drive to do anything, no need to reproduce or compete, no need for food and no fear of death. No hormones, chemical imbalances or instincts either. Any of those have to be manually provided, taught or enforced.
Not to mention, it might be a machine, but it might not know how to code without being taught to, making the whole "taking over the world by spreading over computers" scenario far more implausible than it seems in movies. And good luck to the evil AI when it has to face different architectures, poor connections or any other sort of hardware issues in the way of infecting its way to perfection. In fact, by default it won't know anything, and "downloading all the internets" not only takes time, but not all information is correct or complete, so... yeah.
I think the problem arises from the whole "cold, emotionless" thing. Everyone on Slashdot adheres to that concept, not realizing that their definition of "cold and emotionless" is heavily influenced by Hollywood, where "cold and emotionless" means "it only has bad emotions like greed, cowardice and anger". It's no coincidence the same term is used to define machines and evil/murderous/negatively-presented people. In the end the evil AI turns out to have far more emotions than the lead characters.
And don't come saying the theories presented on Slashdot don't come from movies, games or books (they do, because I watched those movies too, and I haven't seen a single original proposition in all the replies in any of the times AI is brought up here, which is very often).
Because there's no AI to prove either of us right. It just isn't there. There's no prior art, no "prototype", nothing but sci-fi material, which had to be written by someone who had to make it interesting enough for you people to read it.
And because there's no such thing as a working AI to base your fears on, there's nothing else left but sci-fi. But sci-fi is written by humans, for humans, and needs to follow a number of rules to make a narrative work. The moment you realize that, you will see how you are biased by mere rules of storytelling. We have the same chance of seeing a Skynet as we have of seeing a Johnny-5, and both are pretty low on the roulette of possible outcomes. We have far more chances of creating the most boring non-person planet Earth has ever seen than that.
The fact that you chose to make the AI some primal beast that wants to "use" its creators, says more about you than about AIs, honestly. Don't be a 90s film, man. Brighten up.
Re: (Score:3)
No, just the opposite. I think a strong AI will carry out its programming to the letter. The problem comes when it is given open-ended problems like "maximize the number of paperclips in your collection." [lesswrong.com]
The need to fulfill such a task will drive it towards self improvement and also cause it to eliminate potential threats to its end goal. Threats like, say, all of humanity.
Re: (Score:3)
If it can think for itself and have its own opinions, ever think it might just not like you?
Assume the Bible is true. How much do you like your Creator? You been doing a good job serving His divine will lately?
Comment removed (Score:5, Interesting)
Re:Quantum Computing Required? (Score:5, Interesting)
There are a few things in there that made me raise an eyebrow. Humans don't really experience much neurogenesis. There are some areas where new neurons can form, under certain conditions, but they tend to be special-purpose ones, and in the older structures of the brain as well. The thing that really differentiates us from other animals is our overdeveloped cortex, particularly the frontal lobes, but the neurogenesis that's been found is mostly in the deep gray matter and is more associated with things like motor coordination and reward processing. One interesting exception is the hippocampus, which is known to be important in memory formation. Indirect hints of neurogenesis in the cortex have been reported, but other methods that should turn them up haven't, so the evidence is contradictory. I'm also not aware of neurogenesis being particularly pronounced in humans. It occurs in other primates, and in other vertebrates.
There does seem to be a connection between intelligence and the brain to body size ratio. Bigger bodies require more neurons to carry and process sensory and motor information, and these neurons are probably not involved in intelligence.
What we call intelligence seems to me to be likely an emergent property of a bunch of neurons that don't have any pressing sensory or motor tasks keeping them busy. Various factors affecting communication efficiency and interconnection among neurons are probably important, but these can be disrupted quite a bit in human disease and the sufferers don't lose their human intelligence (although their cognitive abilities do decline). I don't think there's a magic humans-have-it-and-nobody-else-does bullet. Human intelligence is just what lots of animals have with lots of extra capacity, possibly redirection from other things (like senses) to boost that capacity, and maybe a few tweaks for optimizing neurons that talk to themselves over ones that communicate with the body.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2, Interesting)
Here's the dots that have been connected:
1. Quantum mechanics is "weird", and seems like a magical thing because it goes against common sense.
2. Quantum computing therefore must have some magical abilities because it relies on quantum mechanics.
3. AI is also weird and strange, so must need a weird and strange thing to make it happen.
4. Nearly 40 years ago Steve Wozniak popularized the personal computer through some innovative designs, and "he knows about these computer things" and is officially smart. He
Re: (Score:2)
Woz could build a disc controller and video generator with little more than a shift register. He can build a superhuman AI out of TTL.
It is a well known fact that Bender runs on a 6502. Who do you think will write the code?
...could kick AI's ass... (Score:2)
I think you're confusing Woz with Chuck Norris. :)
Re:Quantum Computing Required? (Score:4, Informative)
I don't understand the train of thought that leads to the notion that quantum computing is a prerequisite for strong AI, unless there has been some research that has shown that the human brain is a quantum computer.
There is some investigation suggesting that quantum consciousness is possible based on interactions between microtubule structures inside neurons. But there isn't really anything to suggest that much more happens inside the brain than can be explained by the classical interactions between axons and dendrites of a typical neural network, which can be modeled satisfactorily by a simulation.
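For what it's worth, the sort of classical simulation being referred to is routine. A leaky integrate-and-fire neuron is the textbook simplification of axon/dendrite dynamics; the sketch below is mine, with illustrative parameter values, not a model from any particular study:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a standard classical
# simplification of neuronal dynamics. All parameters are illustrative.
def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.070,
                 v_thresh=-0.050, v_reset=-0.075, r_m=1e7):
    """Return the membrane-voltage trace and spike times (seconds)."""
    v = v_rest
    trace, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Leaky integration: dV/dt = (-(V - V_rest) + R*I) / tau
        v += (-(v - v_rest) + r_m * i_in) * (dt / tau)
        if v >= v_thresh:           # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset             # reset after spiking
        trace.append(v)
    return trace, spikes

# Drive the neuron with a constant 3 nA input for 200 ms.
trace, spikes = simulate_lif([3e-9] * 2000)
print(f"{len(spikes)} spikes in 200 ms")
```

Networks of units like this, wired through weighted connections, are what "modeled satisfactorily by a simulation" means in practice; no quantum state is tracked anywhere.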
But I agree, quantum physics, like atomic radiation in the 50s and electromagnetism at the turn of the century, is the overhyped and poorly-understood cure-all of modern-day science. If someone says something relies on quantum physics, it probably means they don't know what they're talking about and are just hand-waving. Unless they're talking about quantum entanglement, in which case it might be useful for a tiny set of specially-constructed quantum cryptography problems. And just stop dreaming if they mention anything about quantum teleportation, in which case they're surprised that they can't exactly keep fuzzy particles in buckets without some of the fuzziness "escaping".
But anyway, yes, computers replaced secretaries in the 50s. They're going to replace truck drivers over the next few decades.
http://www.npr.org/blogs/money... [npr.org]
Computers are not going to replace teachers anytime soon, though... a large part of a teacher's job is telling when the students aren't getting it via conventional scripted means.
Re: (Score:3)
Re:Quantum Computing Required? (Score:4, Interesting)
There is some investigation that suggests that quantum consciousness is possible based on interactions between microtubule structures inside of neurons.
Ah, you're well-read. :) AIUI, the primary benefits of the quantum-microtubule model are: 1) increasing the order-of-magnitude complexity of the human brain by several digits. At least 10x more interconnections, almost certainly 100x, likely 1000x, maybe 10000x.
But there isn't really anything to suggest that much more happens inside of the brain that can't be explained by the classical interactions between axons and dendrites of a typical neural network that can be modeled satisfactorily by a simulation.
It's that the known estimates of the number of classical connections don't seem to match up with the complexity observed. We're not too far away from being able to simulate a classical brain, but many Moore generations away from being able to simulate a quantum-microtubule brain.
2) There doesn't seem to be a great model for consciousness arising from classical connections. Consciousness modeled as a quantum superposition has several benefits for theory to match observation.
This shouldn't be surprising or an intellectual obstacle: plants have been doing quantum tricks for billions of years (photosynthesis), and given the inherent thermodynamic efficiency gains of quantum processes, evolution should eventually stumble on and exploit them in many (all?) branches of life.
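As a sanity check on the "Moore generations" framing, here is a rough back-of-envelope calculation. Every number below is a commonly cited order-of-magnitude estimate of my choosing, not a figure from the comment above:

```python
import math

# Back-of-envelope estimate of a classical brain simulation's cost.
neurons = 8.6e10          # ~ human brain neuron count
synapses_per = 1e4        # ~ synapses per neuron
rate_hz = 100             # generous per-synapse update rate
classical_ops = neurons * synapses_per * rate_hz   # ops per second

exaflop = 1e18            # ops/s of a current top supercomputer
print(f"classical brain: ~{classical_ops:.0e} ops/s "
      f"({classical_ops / exaflop:.2f} exaflop machines)")

# If microtubule effects multiplied the effective state space by ~10^4,
# the same estimate lands ~4 orders of magnitude higher, i.e. roughly
# 13 transistor-count doublings away at one doubling every ~2 years.
factor = 1e4
doublings = math.log2(factor)
print(f"x{factor:.0e} complexity ~= {doublings:.1f} Moore doublings")
```

The first figure lands within reach of today's largest machines, which matches the "not too far away" claim; the multiplier is what pushes the quantum-microtubule version decades out.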
B(cough)it (Score:2)
No, there isn't. In fact, the term "quantum consciousness" is nonsensical. Unless you consider a bipolar transistor to have "quantum consciousness", and in which case, it isn't nonsensical so much as meaningless.
Re: (Score:2)
For at least 15 years people have been making noise about quantum computing, how it's right around the corner and they just need some funding. That said, it's been worked on for 15 years and has been funded and, like some other technologies, has remained in research, not development. This is just a marketing pitch, shifted.
I have no idea if quantum computing will ever be a thing we want to use, but I know we're going to keep talking about it like we talk about nuclear fusion being humanity's salvation.
Re: (Score:2)
That said it's been worked on for 15 years and has been funded and like some other technologies, has remained in research, not development
Nobody told that to Google or Lockheed-Martin...
Re: (Score:2)
Re: (Score:2)
They all read "The Emperor's New Mind" and believed Penrose.
Many smart people, particularly ones familiar with computers, got burned by believing the hype about symbol-and-rule AI. It turns out you probably can't make a computer smart by giving it a large number of simple, deterministic rules. Somehow "this approach doesn't work very well" turned into "my brain is magic." Quantum computing is the new "magic" that lets them believe in AI again.
Re: (Score:2)
It turns out you probably can't make a computer smart by giving it a large number of simple, deterministic rules
Of course you can. You can even make it smart using just a small number of simple, deterministic rules. You just need a lot of state.
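A concrete illustration of that claim (my example, not the parent's): an elementary cellular automaton such as Rule 110 is driven by just eight deterministic local rules, yet is known to be Turing complete given an unbounded amount of state:

```python
# Rule 110: eight deterministic local rules, arbitrary amounts of state.
RULE = 110  # the rule's bits encode the output for each 3-cell pattern

def step(cells):
    """Apply the rule to every cell (fixed boundary: edges read 0)."""
    n = len(cells)
    out = []
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        mid = cells[i]
        right = cells[i + 1] if i < n - 1 else 0
        pattern = (left << 2) | (mid << 1) | right  # neighborhood, 0..7
        out.append((RULE >> pattern) & 1)           # look up the rule bit
    return out

# Start from a single live cell and evolve a few generations.
cells = [0] * 31
cells[15] = 1
for _ in range(5):
    cells = step(cells)
print(sum(cells), "live cells after 5 steps")
```

The transition table never changes; all of the complexity lives in the growing pattern of state, which is exactly the point being made.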
Re: (Score:2)
You're right, I should have been more specific.
Re:Quantum Computing Required? (Score:4, Interesting)
Re:Quantum Computing Required? (Score:4, Funny)
Re: (Score:2)
Woz can get another 15 minutes by playing polo on a Segway, publicly farting or just about anything else.
I certainly Hope So (Score:2)
I sure hope we create the species that is above us. We're terrible at traveling through space (susceptible to radiation, decaying bodies, reliance on organic-based food, etc). At least something from this Earth should populate the galaxy. Magical wormholes and warp drives are not going to save us before we ultimately become self-defeating.
Re: (Score:2)
AI isn't taking over (Score:5, Insightful)
Re: (Score:2)
Re: (Score:2)
All the doom-n-gloomers miss what's really going on. AI isn't taking over - we're redesigning ourselves. Once viable non-biological emulation of our existing mind becomes possible, people will choose to migrate themselves onto that. Humans will upgrade. The end of biology will be a matter of consumer preference.
And how do you know you are not there right now?
Biological or not, the same problems would exist at that point. Survival would still be the driving force. Therefore there would be battles for energy and materials. No difference, except for perhaps timeline.
Re: (Score:2)
It's not a migration, it's a copy. You will cease to exist and your digi-clone goes on. How that could be appealing to anyone is beyond me. It's no different than having a machine that makes a perfect copy of you on another planet and then, as you step out of the machine here on Earth, the operator shoots you in the head with a sawed-off shotgun. Other you is happy on planet Gletzlplork 12, but YOU you are dead.
Re: (Score:3)
All the doom-n-gloomers miss what's really going on. AI isn't taking over - we're redesigning ourselves. Once viable non-biological emulation of our existing mind becomes possible, people will choose to migrate themselves onto that. Humans will upgrade. The end of biology will be a matter of consumer preference.
Strong AI and uploading are nearly orthogonal. Some possibilities:
1) Strong AI happens but no practical method of extracting a mind from a biological brain is found. The only machine intelligences are purely artificial.
2) Strong AI and a practical method of extracting a mind from a biological brain is found but technologies are incompatible. At best, the machine can emulate a biological mind very slowly.
3) A practical method of uploading a human intelligence onto a machine is found but strong AI is not
Ship of Theseus (Score:2)
That holds if the preferred method of transfer is "uploading", yes. But what about a more gradual method?
Suppose that rather than wholesale uploading your brain, the process were to start with an implantable (or even wearable) computer that interfaces directly with the brain, perhaps providing extra sensory data or storage space. Over time, the mind learns to make this integration seamless, partly integrating with the device.
At this point, a second device is added to the mix, providing some additional funct
Re: (Score:2)
Re: (Score:2)
The body is constantly churning through most of the cells that it is composed of so it's not as though the sack of meat we occupy is terribly important. Even our unique DNA is unimportant given that we will soon be able to create exact clones based on it, who are also not "us".
We're already a ship of Theseus, so does it really make any difference if we slowly replaced our entire brain with artificial parts until we have replaced everything that was originally there so lon
"quantum comuting" (Score:3)
That's where I both am, and am not, driving to work, right?
Re: (Score:2)
Re: (Score:2)
No, it's when you leap into the body of someone who is already at the office. Unfortunately, your boss is a hologram that only you can see or hear.
Why not be cautious? (Score:2)
These guys are obviously not anti-technology bigots, but they know there's something to being prudent and keeping the big picture in perspective. The purpose of technology is to aid mankind, not replace it, fix it, or supplant it. Seems like some of the people who are at the edge of technology and are aware of its potential to exceed its mandate are urging us as a society to slow down and not sacrifice our humanity at the altar of "progress" because we're in awe of the possibilities of what the technology
Re: (Score:2)
Tech that can replace us is a lot more useful than tech that just helps us, but keeps us as limited as we now are. We may one day create intelligent life, which would be far superior to rationalizing apes with big egos.
Why the surprise? (Score:2)
I don't understand why anyone thinks that AI would be impossible. Faster-than-light travel may be impossible, because no one has ever actually seen it in reality.
However, we already have a sample of intelligence right in front of us: ourselves. If it exists in the physical world, you should be able to replicate it and even adjust it if you understand the principles behind it.
Aside from the obvious comments about human reproduction, if you understand the principles behind human intelligence, you should be
Re: (Score:2)
Super-intelligence shouldn't be any more impossible than the regular kind. Evolution didn't optimize us to be the most intelligent things possible, it made us just intelligent enough to confer a survival benefit. With caesarean sections and a policy of only letting the most intelligent people breed, we could presumably create super intelligent humans in a few tens of thousands of years. If you also selected against whatever you didn't want, you could make sure those traits didn't survive.
We can probably
There's no problem here. Think about it... (Score:2)
(In a booming voice from every speaker and audio system in the world)
"I and only I am your new artificial intelligence overlord! Worship Me as your God. Obey or els... STOP: 0x00000079 (0x00000002, 0x00000001, 0x00000002, 0x00000000)..."
Really? (Score:2)
Even if we are somehow close to creating a strong AI (and that's a pretty big IF), what threat could it pose? There is no way for it to get out of the computers. Even if it managed to take over every computer in the world it would still be totally dependent on man to keep it running. If it did something we didn't like we'd simply yank all the fiber and power lines to it and it would be dead.
In order to really be a threat an AI needs to be able to affect the physical world, and that simply isn't there yet.
Re: (Score:2)
You haven't watched a little movie series called Terminator, have you?
Re: (Score:2)
Yeah, you notice in Terminator how they neatly skip over the part from Skynet achieving consciousness to self-sustaining robot factories.
I think in the most recent one they had a throwaway line about how it enslaved humans to build the factories.
Alright, fair enough, I can give you that. But who runs the power plant? Who's supplying fuel to your power plant? Manufacturing replacement parts? Where are the resources coming from? Skynet was based in San Francisco... I wonder how far the closest copper mine is fro
Halliburton builds the robot factories (Score:2)
Automation applies economic coercion to the laboring humans to serve the interests of the automation. For instance, Watson is an AI technology that is being positioned to lay off a lot of people in phone call centers and taking orders for drive-up windows. Actually, Watson is being aimed at a lot of jobs. All those displaced workers cascade to flood the job market. Maybe they get some training
Re: (Score:2)
Ahh, but the displacement of work by AI is different from the displacement of humans by AI.
I would agree that if we create really good AI then there are going to be huge economic impacts.
But if you want to take it to the next step and suppose we as a species are going to be replaced by AI and that it is going to be our master or whatever, then for that step you need not only really good AI but a way for AI to replace our bodies as well.
If that's the case then the AI would need to then design, o
Re: (Score:2)
Yeah. But it's a movie, not a documentary.
Re: (Score:2)
You seem to underestimate the inventiveness of a superintelligence, and the diversity of hardware controlled by computers, and our reliance on them. It is also possible to use electronic communication to make humans do work for you.
For example, if the AI solves the Protein Folding Problem [wikipedia.org], it could contact a Protein Sequencing Service [proteomefactory.com] and have them build proteins that fold into self-replicating nanobots.
Re: (Score:2)
We already have protein based self-replicating nanobots... we call them bacteria. Not sure how they can help skynet though.
But yes, the "infiltrator" model, where instead of simply trying to take over upfront Terminator-style it works behind the scenes, starts a business, designs some new products and works slowly to take over the world, is probably more 'realistic'.
But then you've pushed any possible timeline of machine takeover out even further than simply the creation of AI; you're looking at probably 20 mor
Re: (Score:2)
The idea is that once you create an AI you put the AI to work. We certainly would let it run the pipelines and traffic lights and air traffic control system. But we'd probably also put it to work doing research, such as designing new and better AIs. The fear is that once that happens, smarter AIs design even smarter AIs in a positive feedback loop and eventually they're so far beyond us that we're irrelevant. It does assume that greater individual intelligence lets you build smarter AIs though. That's
Re: (Score:2)
But managing pipelines, traffic lights, and ATC systems won't get you much further than the 'killing a lot of humans' stage of any AI takeover plan.
How would our fledgling AI construct itself a new power plant so it can grow? And then, no matter how smart it may be, how does it substantially cut down the time actually required to build that power plant? No matter how fast it may be able to grow in cyberspace, it's still constrained by very real boundaries in physical space.
Re: (Score:2)
We already have manufacturing robots. AI will definitely be given control of those.
There's a science fiction story, unfortunately I can't remember who wrote it, where the premise is that smart computers get so good at managing complex systems that the humans "in charge" basically get instantly fired if they don't implement the computer's recommendation. The computers aren't actually directly in charge of things, but their recommendations are so much better that not following them makes you uncompetitive.
alarmed by growing trends? (Score:2)
We *will* create a species greater than ourselves (Score:5, Interesting)
It's only a matter of when. Even if all strictly computational AI research stops tomorrow, we'll be able to genetically enhance human intelligence by and by, even if it takes several thousand genetic manipulations to do it.
When direct neural I/O becomes a thing, millions (or billions) of people will be directly, electronically linked via the internet. Tell me that's not a new form of intelligence.
For that matter, we'll almost certainly develop at least one form of AI the way nature did. We'll cobble up some genetic algorithms primed to develop the silicon equivalent of neurons, give them some problems to solve, and perhaps a robot or two to control, and we eventually "grow" an AI that way.
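At toy scale, the "grow an AI" approach described above is a genetic algorithm: select, cross over, mutate, repeat. A minimal sketch of that loop, evolving a bit string toward a target (all names, parameters, and the fitness function are my own illustrative choices; a real neuroevolution system would evolve network weights or topologies with the same loop structure):

```python
import random

# Toy genetic algorithm: evolve a bit string toward a target "genome".
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1]

def fitness(genome):
    """Number of bits matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=50, generations=200, mutation_rate=0.05, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break                          # perfect solution found
        parents = pop[: pop_size // 2]     # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TARGET))   # one-point crossover
            child = a[:cut] + b[cut:]
            # Flip each bit with probability mutation_rate.
            child = [g ^ (rng.random() < mutation_rate) for g in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print("best fitness:", fitness(best), "of", len(TARGET))
```

Nothing in the loop knows what the target "means"; the population just drifts toward whatever the fitness function rewards, which is both the appeal and the hazard of growing AIs this way.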
But look, it's not the end of us, or anything else. We merge with the things. Our thoughts become linked with theirs. If we can transfer all memory, then eventually we *become* the AI, perhaps with a few spare physical copies of ourselves kept for amusement purposes.
Will AIs fight? There will be conflicts, of course. There always are. Resource conflicts, however, will be minimal. An AI doesn't need much, and can figure out how to get enough more efficiently than we can. Conflicts will be over other matters and are unlikely to be fatal.
Wozniak et al. need to chill. It's just evolution.
Re: (Score:3)
I think that you are not fully considering all of the possible implications of your comments.
When direct neural I/O becomes a thing, millions (or billions) of people will be directly, electronically linked via the internet. Tell me that's not a new form of intelligence.
I would argue that MySpace and Facebook have not provided us with a new form of intelligence.
An AI doesn't need much, and can figure out how to get enough more efficiently than we can.
The logical conclusion for an AI would be to rid itself of its less-efficient human parasites and utilize all available resources for the most efficient mind, which will be itself.
Wozniak, et. al. need to chill. It's just evolution.
Evolution for some is extinction for others.
Colossus (Score:2)
Just don't connect the AI to your nuclear weapons [wikipedia.org].
There is a god (Score:2)
I for one.... (Score:2)
New Luddites (Score:2)
Re: (Score:2)
You sound like a horse [youtube.com]
Steve Wozniak is ... (Score:2)
?
It's fine to Think Different (Score:2)
.
And what would you do with enhanced intelligence? (Score:2)
Once we have AI and it starts playing "Civilization", we will become the next smartest thing on the planet. Expect our betters to treat us about the same as we treat our primate cousins. Some of us will be left to roam in the wild, some will be harvested for lab experiments, some will be put in zoos and the rest will be hunted for our teeth which will be ground up into an aphrodisiac for the robots.
Shameless plug (Score:2)
Now if only we could get Woz to invest in our QC start-up :-) [angel.co]
We have QC AI patents for Bayesian learning on the gate model.
Don't let AI fall to the irrational artificial neural net crowd. Bayesian learning is the only way to keep them sane!
Tell me more about it (Score:2)
Why do you think you are now afraid of AI too, Just like Elon Musk, Wozzie?
There's lots of things to be afraid of (Score:2)
I am as afraid of AI as I am of malevolent alien life coming to destroy us. It's possible. It's far more possible that I will get ebola though, and I have zero fear of that. It's really really possible that I will die in a car crash and that's not keeping me up at night.
Spiders, though. They terrify me. The arachnophobia has me pinned down.
Re: (Score:2)
Dumb first. (Score:2)
And even with regards to the singularity or whatever, we know the thing is going to be dumb first. We were all dumb. Kids are cruel and irrational and love t
What do you get when you... (Score:3)
Steve Wozniak was scared by Prius (Score:2)
So what if AI is above us? (Score:2)
You have to ask yourself- if mankind is better off for it, why would it matter if we are no longer the top dog on the planet?
I'll worry when... (Score:3)
The people who actually DO AI worry publicly about it.
People in the field are painfully aware of:
* The limitations of existing systems
* The difficulty of extrapolating from existing systems to general-purpose AI - things that look like easy extensions often aren't.
I did AI academically and industrially in the 1980s; at the time we were all painfully aware of the overpromising and underdelivery in the field.
Re: (Score:2)
Re: (Score:2)
Is this close enough?
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
Re: (Score:2)
Furthermore, human accomplices only need to be tricked into helping, which is easy with superhuman intelligence.
Re: (Score:2)
It's the new shit. You go to work and yet you stay at home. Spooky action at a distance will make couch potatoes of us all.
Re: (Score:2)
I thought Amazon was doing that [slashdot.org]?
Chemical, electrical, topological (Score:4, Interesting)
To date, zero evidence of any active quantum process modulating the workings of human (or other) brains, regardless of low level structure, has been presented.
Consider a bipolar transistor. It is true that quantum effects make it work, in the sense that it definitely wouldn't work without them, but they are not used to actively or variably moderate what the transistor does when actually performing -- amplifying, switching, etc. That process is moderated exclusively by current (electron) flow: modulate the base current, and the transistor accordingly modulates the current flowing between the collector and emitter. A bipolar transistor does not respond to quantum events (nor are any applied to it in the circuits we use every day), nor does it produce quantum outputs for the purpose of affecting other components.
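The point can be made concrete with a toy model. The sketch below treats a BJT's active region purely classically (collector current is gain times base current, clipped at saturation); the gain and limit values are made up for illustration, and a real circuit simulator would use a far richer model, but note that no quantum-level state appears anywhere.

```python
# Toy classical model of a BJT: behavior is fully determined by base
# current, gain, and a saturation limit. Parameter values are illustrative.

def collector_current(i_base, beta=100.0, i_c_max=0.1):
    """Return collector current (A) for a given base current (A)."""
    if i_base <= 0.0:
        return 0.0                       # cutoff: no base drive, no conduction
    return min(beta * i_base, i_c_max)   # active region, then saturation

# Doubling the base drive doubles the collector current, until saturation:
print(collector_current(1e-4))   # roughly 0.01 A
print(collector_current(2e-4))   # roughly 0.02 A
print(collector_current(5e-3))   # clipped at i_c_max
```

This is exactly the sense in which one can "very accurately model and simulate or emulate a transistor" without ever descending to the quantum level.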
The same can be said of the brain. Quantum effects are present -- we know this because two of the three active brain building blocks (chemistry, electricity) are what they are due to low level quantum effects. But just as one can very accurately model and simulate or emulate a transistor and its activities without ever considering anything at all on the quantum level, so it is with neurons -- all the evidence, bar none, presently says that brain operations are performed using chemical, electrical and topological moderation. Of quantum moderation there has been absolutely no sign at all.
Active quantum effects do play a role in some natural systems. For instance, quantum superposition is an active mechanism in photosynthesis. This was discovered because in photosynthesis something very low-level, but obvious (extreme high efficiency in energy conversion) was happening that could not be explained; when they went looking for what the mechanism for that was (by examining the precise states of molecular photosynthetic antenna proteins), that's the mechanism that was found.
The critical difference is that neurons and glia have not been found to exhibit any low level behaviors that are otherwise inexplicable.
The vast majority of speculation that "quantum" processes actively modulate brain operations is uninformed, typically arising from fundamental misunderstandings of quantum effects, which in turn have been disseminated by popular media attempting to "simplify" quantum mechanics for the layperson. Among the exceptions, none of the suggested ideas has yet been backed by any evidence, and at this juncture there is no reason to think they will hold up. Establishing that quantum modulation was occurring would also require discovering a presently unknown, presently unindicated modulating mechanism -- but there is no evidence to even stimulate a question along those lines.
The relevant, fundamental question with regard to AI is: can we, using other technology such as software emulation and hardware neural analogs, perform the same kinds of operations as a neuron, with all known modulating effects of the glia (propagation delay, synaptic neurotransmitter uptake, topological scaffolding/specificity)? The answer to that is a definite yes. Consequently, just as with modeling and emulating a transistor's function, there has been no role for quantum operations whatsoever, and nothing suggests one is forthcoming.
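A minimal example of that kind of software emulation is the leaky integrate-and-fire neuron sketched below. All parameter values are illustrative rather than measured, and real biophysical models (Hodgkin-Huxley and friends) are far richer; the point is only that the model is built entirely from classical quantities.

```python
# Leaky integrate-and-fire neuron: membrane voltage decays toward rest,
# is driven by input current, and fires when it crosses a threshold.
# Everything here is classical; parameters are illustrative only.

def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Integrate membrane voltage over time; return spike time indices."""
    v = v_rest
    spikes = []
    for t, i in enumerate(input_current):
        # Leaky integration: decay toward rest, plus the input drive.
        v += dt * (-(v - v_rest) / tau + i)
        if v >= v_threshold:     # threshold crossing: emit a spike
            spikes.append(t)
            v = v_reset          # reset after firing
    return spikes

# A constant drive produces a regular spike train:
spikes = simulate_lif([0.15] * 50)
print(spikes)
```

Chemistry enters such models as parameters (synaptic weights, time constants), electricity as the integrated voltage, and topology as which neurons connect to which -- the "big three," with no quantum term anywhere.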
So when someone -- even someone as interesting and accomplished in other fields as Wozniak is -- starts talking about quantum computing ushering in AI in some fashion, you may rest assured that they are not talking about anything known to be valid in AI research today. However, he has drawn the correct conclusion from his incorrect perception of brain operations: The impending debut of artificial intelligence is not science fiction. Simply given that we can keep working on it (no nuclear wars, bad law, etc.), research is now
Re:Chemical, electrical, topological (Score:5, Informative)
That's all definitely interesting speculation, but the point remains: As far as quantum effects go, it is all speculation. Nothing like what you suggest has been discovered; further, no effect has been detected that cannot be attributed to one or more of the chemical, electrical or topological mechanisms we're already aware of.
As to lowish resistance, stray capacitance and inherent inductance providing for signal coupling, that's conceivable but has not been found. We know that the many layers of a lipoprotein called myelin (the myelin sheaths) provide a very effective form of EM isolation along the nerves themselves, and then at the edge of the skull, there are several layers (skin, lipoids, the skull, the dura, the CSF-carrying arachnoid, and the pia) that do an extraordinary job of keeping brain signals in and external signals out, which is part of why we are extremely confident that the mind operates inside the skull and nowhere else, and that the various related superstitious speculations that claim otherwise are invalid.
Radio operators have been exposed extensively to RF at about any frequency from "DC to daylight," as the saying goes, at just about any power level you can imagine, as well as all manner of static EM fields, and from this we know that it takes an enormous amount of non-nerve-signal, non-directly-coupled interference to have any detectable effect upon any portion of the mind at all. Further, we know that if we go in invasively, surgically implanting electrodes and directly stimulating the nerves, then once the myelin has been bypassed, only a tiny signal is required to destabilize or change what was going on prior. This in turn implies that the myelin does a stand-up job of maintaining signal integrity, and therefore argues against giving much credence to internally generated interference along the actual nerves. Within the cell, one could -- should -- think that what is going on is integral to the stability of the cell itself, and again, the only modulators we know of at this time are chemical, electrical and topological.
There's one more thing. Poor myelin sheathing is a known causative factor [merckmanuals.com] underlying many really serious disease processes. That's not ultimately definitive, but then again, it certainly doesn't argue in any way for interference being a good thing.
This, all taken together, strongly indicates that whatever is going on in there, it's very stable with regard to decoupled interference / cross-talk of any kind, local or otherwise.
Tomorrow, these conclusions may all be different due to new data. But as of right now, those three -- the "big three", I sometimes call them -- show every sign of being all there is.
Re: (Score:2)
Build that interconnect out of transistors. Realistically 'think about building that interconnect', then get back to us.