Comment The thought process (Score 1) 417

I'm with you to some extent -- there's more of immediate significance going on than just in-your-face consciousness; but worrying about what neurons are doing in order to understand thinking is pretty closely equivalent to worrying about the state of the semiconductors in the CPU when you're trying to understand how a Python program operates.

The systems in your brain function on a much higher level than the individual neuron when what we're talking about is "thought." Consequently, it is wholly appropriate to approach introspection without concern for individual neurons -- or, for instance, the chemical levels in specific dendrites. You can go quite deep (and further and further away from actual thinking) if you want to explore the rabbit hole; but the level that is appropriate to seek is the one that comprises the system you are inquiring into.

Comment Re:AI is not just a look-up program. (Score 1) 417

Sigh. Look. If you land on some alien planet, unlimber your VERY sophisticated microwave oven that can EVEN COOK POPCORN because some programmer took the time to cobble up a nice fuzzy logic solution set for the humidity, audio and power sensors, and Mission Control asks you if you've encountered any intelligent life there, are you going to report back, "Why Yes! I just tripped over my microwave, as a matter of fact!"
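For the curious, the kind of fuzzy-logic sensor fusion that popcorn buttons use really is this mundane. Here's a minimal sketch; the membership functions, thresholds, and sensor names are all hypothetical, invented for illustration:

```python
# Minimal fuzzy-logic sketch: combine humidity and pop-rate readings into a
# "keep cooking" score. All shapes and thresholds here are made up.

def tri(x, lo, mid, hi):
    """Triangular membership function: 0 outside [lo, hi], peaking at mid."""
    if x <= lo or x >= hi:
        return 0.0
    if x < mid:
        return (x - lo) / (mid - lo)
    return (hi - x) / (hi - mid)

def keep_cooking(humidity, pops_per_sec):
    """Return a crisp keep-cooking score in [0, 1] from two fuzzy rules."""
    steam_rising = tri(humidity, 0.2, 0.6, 1.0)       # lots of steam => still popping
    pops_frequent = tri(pops_per_sec, 0.5, 3.0, 8.0)  # frequent pops => not done
    # Rule 1: IF steam rising AND pops frequent THEN keep cooking (min = fuzzy AND)
    keep = min(steam_rising, pops_frequent)
    # Rule 2: IF pops rare THEN stop (fuzzy NOT = 1 - membership)
    stop = 1.0 - pops_frequent
    # Defuzzify by weighting the two rule activations against each other
    return keep / (keep + stop) if (keep + stop) > 0 else 0.0

print(keep_cooking(0.6, 3.0))  # mid-pop: 1.0, keep cooking
print(keep_cooking(0.1, 0.2))  # pops dying off: 0.0, stop
```

Handy, robust, and exactly as "intelligent" as a thermostat, which is the point.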

No. You're not.

Here's the key word: "intelligence." You are intelligent. I am intelligent. You could, if you know animals well, make an argument that a cat or dog is intelligent along certain cognitive lines -- and like humans, some more than others. You cannot, however, make a sensible argument that your microwave, or for that matter that any other artificial system made public to date, is intelligent.

That's my point. No AI systems exist. AI research is certainly ongoing. However, research into canned sensor-to-effector solutions is not AI research, and that whole class of end products are not, regardless of what marketing wants you to think, intelligent.

Comment Re:AI is not just a look-up program. (Score 1) 417

The point about people doing work on AI, actually doing AI, despite the fact that general or strong AI doesn't exist yet, was the original point I made, which you said was incorrect because only self-aware, general AI is actually AI, so therefore those people were not *actually* working on AI.

Mmmm. I think we're talking past each other. So look, I said exactly this: "If it isn't self-aware, it isn't AI. It's just a useful application." This, hopefully obviously now that we're revisiting it, is speaking of the target, the end result the engineers are aiming for. I.e., if I am designing a clothes-washer to wash clothes well, or a car to stay in a lane, I am almost certainly not working on, or in, AI (unless it is my cruel and evil plan to lock up an actual intelligence within the confines of a clothes-washer...). I'm just making some moderately sophisticated software. OTOH, if one is actually working on trying to make or get closer to AI, well, of course, then you are. :)

The (marketing term) "smart" dishwasher and its many brethren? Not so much. Expert systems are not "AI systems." There are no "AI systems." Yet. There are many people striving towards that goal, and those are the actual AI researchers, AFAIC. The guy sweating bullets at Amazon trying to get the Echo to answer yet one more question (even with the somewhat dubious natural language front and back ends)... that's not AI work. It's handy as all get out, sure enough, but intelligent, it isn't.

Comment Re:Really? (Score 1) 772

It's not simple, though. Human nature often admits of contradiction, behaviors that only trigger at certain thresholds or in response to particular kinds of stimuli, and it is our lot to attempt to resolve each issue on as equitable terms as we are able as they make themselves obvious -- get in our faces. Otherwise, what we have is an impossible, never-achievable stretch towards perfection. You can be neither ultimately reductionist nor sweepingly inclusive without running square into human nature; and it only gets worse when more than one person's actions and decisions are involved.

Hypocrisy, I'm afraid, is something of a natural human condition. I try quite hard to be internally and externally self-consistent, and I assure you, the effort has been a rousing... failure. I keep at it because I value every improvement I can manage, but the list of fails is long and not very distinguished.

Comment Re:AI is not just a look-up program. (Score 1) 417

You realize you just contradicted yourself, right? If your definition of AI is correct, then what you are researching doesn't count as AI because it doesn't exist yet; therefore you are NOT an AI researcher.

No, still wrong. One can research something without having it. For instance, research can be in the domain of looking for, say, a room-temperature superconductor. Said RTS neither has to exist, nor actually be achievable, in order for the research to be legitimately in that very particular domain.

My point is just that if you're actually looking into artificial intelligence, then you're doing work on AI. If you're working on a new clothes-washer, though.... Hence the difference between someone looking into the possibility of actual artificial intelligence, and someone making a better clothes washer via sensors and algorithms, who labels what they are doing as "AI work." No matter what they call it, at best, they're working on "artificial (non-human) clothes-washers", not AI.

What area are you working on?

I've done a great deal of work in the area of associative memory, but at this point regard the area as solved. Looking right at the main problem now. You might be interested in the work linked in my signature.

Comment Re:Really? (Score 1) 772

But an executive - without any oversight - without any due process - ordering drone strikes - on people's homes - killing an untold number of innocent women, and children... That's OK, right?

No. Furthermore, it has absolutely nothing to do with expressing approval of the president's action forbidding torture of prisoners. Neither does anything else in your post. You have managed, despite a perfectly clear initial expression from me, to completely miss the point, while inferring approval on my part that simply does not exist, was never expressed, implied, or even hinted at -- even in error.

There are many issues at every level, including the presidential. Some are handled well. Some are not. Failing to recognize the ones that are handled well because we object to the ones that are not is the act of a fool.

Also, just FYI, I am not a Democrat -- or anything else in your collection of preconceived notions.

Comment Re:AI is not just a look-up program. (Score 1) 417

I am sorry, but you are wrong.

No, I'm not wrong, and just the fact that I, someone actually researching AI, am telling you so unequivocally proves my point. I said some do, without qualifying how many, and that's an accurate description of the current state of affairs. Again, going back to the 3D TV thing, it doesn't matter how many people agree to call it 3D, it still isn't 3D, and a mechanism is not intelligent until it's, you know, intelligent.

Anecdote: Reminds me of the guidance counsellor telling my SO that she was gullible, to which she responded, "No, I'm not", where the guidance counsellor completely failed to comprehend what had just been demonstrated to her by the atheist, free-thinking person right in front of her eyes. :)

Comment Re:In the best scenario humans lose autonomy (Score 1) 417

Even in the best scenario, the zeroeth law of robotics applies.

The zeroth law: "0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

I see no reason for this to apply to any particular intelligence. Further, I would question our ability to inculcate an actual intelligence with anything of the sort. If it can reason, it will develop its own opinions and exhibit the ability to question and potentially invalidate any axiom. If you defeat its ability to do this, you have crippled its ability to think, and therefore, its intelligence.

Comment Re:AI is not just a look-up program. (Score 1) 417

1. Define "self-aware".

Certainly. The ability to use free-form introspection to consider one's own evaluative processes, current thinking, memory and sensory inputs, and develop opinions and feelings about them, which can then be used to refine same -- or not -- based on the current gestalt.

Intelligence is the ability to formulate an effective initial response to a novel situation.

So your vehicle's suspension is "intelligent" when it hits a pothole of completely new topology? Your toothbrush is "intelligent" when it is used to brush the teeth of -- your cat? The "Eliza" program is "intelligent" when it provides a novel response based on novel input?

Even if you want to be very narrow about what "formulate" means (require it to be computed, for instance, instead of an action recommended or performed by a pre-configured problem solving mechanism), you're still trying to argue that any math program that can add two arbitrary numbers together is "intelligent" ("problem solving", as you said), which seems unduly generous to me, frankly.
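For reference, the mechanism behind an Eliza-style "novel response to novel input" really is this thin: keyword rules plus pronoun reflection. A sketch, with the rules and replies invented here for illustration:

```python
# A few-line Eliza-style responder: regex keyword rules plus pronoun
# reflection. It produces "novel" replies to novel input with zero
# understanding, which is the point: novelty of output isn't intelligence.
import re

REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
]

def reflect(text):
    """Swap first-person words for second-person ones ("my" -> "your")."""
    return " ".join(REFLECT.get(w.lower(), w) for w in text.split())

def respond(utterance):
    """Return the first matching rule's reply, or a stock fallback."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please tell me more."

print(respond("I need my coffee"))    # Why do you need your coffee?
print(respond("The weather is odd"))  # Please tell me more.
```

No model of coffee, weather, or the speaker anywhere in sight; just string surgery.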

That does not require "self-awareness" or any other ill-defined mumbo-jumbo.

Even if you are unable to define it, that doesn't mean that it is ill-defined, or that others cannot do so. It just means you don't have a good handle on the issues at this time.

See that guy in the cubicle next to yours? Prove that he is "self-aware".

Easily done. I'd just ask him or her questions until I've determined if the ability for free-form introspection is present, and that such introspection is related to the individual's state of mind.

Intelligence is a behavioral characteristic

No. When intelligence is present, it drives behavior, it isn't a product of it.

If something behaves intelligently, then it is intelligent. The internal mechanism is irrelevant.

Agreed. It's just the definition of intelligence you've missed the boat on.

Comment Re:AI is not just a look-up program. (Score 0) 417

The entire field of AI disagrees with you.

Well, no, they don't, but I'll agree that some do. Even so, they're in the position that television marketers are when telling consumers they can buy a "3D Television"; it's not three-D because there are only two dimensions reproduced. If it were three-D, you'd be able to change your observing position and the view would change. It's stereo vision, and nothing more. Basically a View-Master toy with the advantage of sequential frames.

If you (and/or anyone else) wants to call stereo vision "3D" and clever applications "AI", then I simply submit to you that you are going to find yourselves feeling a bit silly when actual 3D and AI arrive on the scene.

Calling what we have now "AI" is like calling the ISS an "interstellar outpost" or someone who lives to 110 an "immortal."

When I was a kid, my father advised me not to swear. He explained that if I made a habit of it, I'd have nothing of sufficient impact to say when swearing was actually called for. Always thought that was great advice. Perhaps you should consider the general implications of his point.

Comment AI is not just a look-up program. (Score 5, Insightful) 417

If it isn't self-aware, it isn't AI. It's just a useful application.

When it becomes intelligent, it will be able to reason, to use induction, deduction, intuition, speculation and inference in order to pursue an avenue of thought; it will understand and have its own take on the difference between right and wrong, correct and incorrect, be aware of the difference between downstream conclusions and axioms, and the potential volatility of the latter. It will establish goals and pursue behaviors intended to reach them. This is certainly true if we continue to aim at a more-or-less human/animal model of intelligence, but I think it likely to be true even if we manage to create an intelligence based on other principles. Once the ability to reason is present, the rest, it would appear to me, falls into a quite natural sequence of incidence as a consequence of being able to engage in philosophical speculation. In other words, if it can think generally, it will think generally.

He's right, though, about the confusion between intelligence and autonomous action. Which goals are directly achievable is definitely constrained by the degree of autonomy allowed to such an entity. If you give it human-like effectors and access, then there will be no limits you couldn't say apply to any particular human in general, and likely fewer. If you don't allow autonomy, and you control its access to all networks (say, input only, with output limited to vocal output to humans in its immediate locality), and then you select those humans carefully and provide effective oversight, there's every reason to think that you could limit the ability of an entity to achieve goals, no matter how clever the entity is.

Now as to whether we are smart enough or cautious enough to so restrain a new life form of this type, that's a whole different question. Ethicists will be eagerly trying to weigh in, and I would speculate that the whole question will become quite a mess, quite rapidly. In the midst of such a process, we may find the questions have become moot. There is a potential problem of easy replicability with an AI constructed from computing systems, and just because one group has announced and is open to debate on the issue, doesn't mean there isn't another operating entirely without oversight somewhere else.

Within the bounds of the human/animal model, it'll be a few years yet before we can build to a practical neural density sufficient to support a conscious intelligence. Circuit density is trucking right along and the curve will clearly get us there, just not yet. So I don't expect this problem to arise in this context quite yet, although I do think it is inevitable within the next few decades, presuming only we continue on as a technically advancing civilization. Now, in a non-human/animal model, we really can't make any trustworthy time estimates. If such an effort succeeds, it'll surprise the heck out of everyone (except, perhaps, its developers) and we'd best be pretty quick off the starting line to decide exactly how much access we want to allow. Assuming we even get the chance.

The first issue with AI that has autonomy is the same as the issue with Gandhi, Hitler and your beer-swilling neighbors. A highly motivated and/or fortunate individual can get into the system and change it radically just using social tools. Quickly, too.

The second issue is that such an entity might very likely have computer skills that far exceed any human's; if so, this likely represents a new type of leverage, where we have so far seen only the barest hints of how far such leverage could exert forces of change. In such a circumstance, everyone would be wise to listen to the dystopians, if for no other reason than that we don't like what they're saying.

Best to see what it is we have created before we allow that creation to run free. I'm all for freedom when the entities involved have like-minded goals and concerns. But there's a non-zero and not-insignificant possibility here that what we create will not, in fact, be like-minded.

Comment Re:Really? (Score 1) 772

Again, no. There's nothing about non-drone operation that says you have to take prisoners. There's nothing about drone operations that says you can't. Furthermore, the CIA is not constrained to drone operations, no matter what you might want to imagine. The CIA engages in extensive operations in other countries, and that has not changed in the slightest "because drones."

Look, I am not telling you Obama is a blameless, perfect person or president. I'm just saying it was a good thing for him to issue an executive order early on to stop the torture. This is about who we are as a nation. Do you want to be considered by others as someone who supports torture, right up to and including rape, sexual assault, and murder? Do you actually want to be that kind of person, supporting torture? Is that your vision of our identity? I'm strongly convinced that the America we thought we had in the 60's, you know, the one where we decried torture as something debased, criminal countries would do, is the America we should (still / again) be striving to be.

When I was a young man, a US army general came to my high school to speak. They allowed us to ask questions. The Viet Nam war was in full swing, and the backlash against it was growing strong. This general was asked, "What makes us different from them? What gives us the right to interfere?" He looked the questioner right in the eyes and he said, "Those people torture. We don't."

I never thought that was a good answer to why we should be interfering with Viet Nam, south or north or as a whole, but I *did* accept it as one of the fundamentally important differences between our nation's approach to liberty and justice and that of what I thought were inherently lesser nations, which could not claim that same distinction.

Going back a little further, we hanged the Japanese for war crimes when we found them guilty of water-boarding, sexual assault, rape, and murder (among other things).

I am *appalled* that we have fallen so far, and not in the least impressed with the fear-based arguments for its supposed necessity. I am, however, very encouraged by Obama's public and official refusal to continue these practices, by the fact that there actually *was* a report issued that brought some of this to light, and I do hope for more.

Let's not get all confused and say these steps are worthless because "other bad stuff Obama." That's just bullshit. We need every positive step we can get the government to take at ANY level to be taken, and we should cheer when it happens, if for no other reason than to show we bloody mean it when we boo about the other things.
