Comment: Re:Except that... (Score 1) 556

Paraphrasing you, I thought you said:
        A: From the observation that the chances of life are (exceedingly) small
        B: it is valid to conclude there is a designer.

All I said is that B is not a necessary result of A. That's what I think "still a perfectly valid conclusion" means.

However, you may have meant that B is not *ruled out* by A. That I agree with. That life was designed is not ruled out by observing that the chance of life occurring (when we take the universe as a random system) is small. But starting from A it is not valid to definitively conclude B. A does not imply that B is true.

A is consistent with both conclusions, that there is no designer or that there is.

Intelligent Design proponents set up these probabilistic arguments to show that the "probability" of evolution being true is small. Then they argue from ignorance: "because I can't think of any other explanation for life, and it seems exceedingly unlikely that life evolved on its own, there must be a designer". But there could be some other explanation for life that we haven't thought of yet. No one has proved that there are only two choices. So even if someone proves that evolution is definitively not the answer (with probability 1.0), we still can't conclude that a designer is the answer.

Calculations like these are what drive science. First of all, we know they are wrong to begin with: we are trying to capture an immensely complex process with a few numbers and a very limited kind of structure (multiplying probabilities). Therefore, arriving at a very small probability for life evolving by chance raises questions. Are the probabilities right? Are there conditional probabilities that we haven't taken into account? Is the process really random, as we are assuming? Are our other assumptions correct? Even if we convince ourselves that we're in the ballpark, a small probability may be surprising, but surprising is not the same as impossible.
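To make the "multiplying probabilities" point concrete, here's a toy sketch (every number in it is invented purely for illustration) of how a naive independence assumption can understate a probability by dozens of orders of magnitude compared to a model that allows conditional steps:

    #include <stdio.h>

    int main(void) {
        /* Invented numbers: 20 "required steps", each with probability
         * 0.01, assuming full independence between steps. */
        double p_independent = 1.0;
        for (int i = 0; i < 20; i++)
            p_independent *= 0.01;              /* (0.01)^20 = 1e-40 */

        /* Same 20 steps, but suppose each step, once the previous one
         * has happened, becomes much more likely -- a conditional
         * probability the naive model ignores. */
        double p_conditional = 0.01;            /* first step still rare */
        for (int i = 1; i < 20; i++)
            p_conditional *= 0.5;               /* later steps: 50% each */

        printf("naive independent model: %g\n", p_independent); /* ~1e-40 */
        printf("with conditional steps:  %g\n", p_conditional); /* ~2e-8  */
        return 0;
    }

Same event, same number of steps, and the answer moves by 32 orders of magnitude just by changing one modeling assumption. That's why the small number should prompt questions about the model before it prompts conclusions about designers.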

A more scientific kind of reaction to this is the anthropic principle: http://en.wikipedia.org/wiki/A.... The money quote for me is the weak anthropic principle, "which states that the universe's ostensible fine tuning is the result of selection bias: i.e., only in a universe capable of eventually supporting life will there be living beings capable of observing and reflecting upon any such fine tuning, while a universe less compatible with life will go unbeheld."

Thus, given that the calculated chance of life evolving is small, one alternative to a designer is the idea of parallel universes (i.e. a multiverse: http://en.wikipedia.org/wiki/M...).
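For the selection-bias point, here's a minimal simulation sketch (the "habitable band" and universe count are arbitrary values chosen only to illustrate): almost no simulated universes are habitable, yet every observer, by construction, finds their own universe fine-tuned.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Toy multiverse: each universe draws one "physical constant"
         * uniformly from [0,1); only a narrow band permits observers. */
        const int universes = 10000000;
        const double lo = 0.4999, hi = 0.5001; /* arbitrary narrow band */
        int habitable = 0;

        srand(42);
        for (int i = 0; i < universes; i++) {
            double c = (double)rand() / ((double)RAND_MAX + 1.0);
            if (c >= lo && c <= hi)
                habitable++;
        }

        /* Unconditionally, fine-tuning is rare... */
        printf("habitable fraction: %g\n", (double)habitable / universes);
        /* ...but conditioned on being observed at all, it is certain:
         * every observer lives in a universe inside the band. */
        return 0;
    }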

Comment: Re:Except that... (Score 1) 556

That life was designed is not a "perfectly valid conclusion". Observing that the chances of life are small supports no conclusion about design one way or the other. The probabilities of lots of things are small; not all of them imply a designer. Observing that the chances of life are small raises *questions*; it doesn't provide answers.

Comment: Same old ID argument (Score 2) 556

The whole of "intelligent design" argumentation comes down to this same move: something is really unlikely; therefore the only possibility left is god. It's not a scientific argument. It's not even a logical argument. It's an emotional one, couched in emotional terms and relying on the fallacy that unlikely things never happen, or pseudo-mathematically, "p == 0 for p < epsilon, for suitably small values of epsilon".

It's really an argument from ignorance. "Anything I can't understand must have been made by god."

Comment: Re:Assumptions define the conclusion (Score 1) 574

by ldbapp (#48522377) Attached to: Hawking Warns Strong AI Could Threaten Humanity

I agree.

You said, "By that logic we want nothing either...". That is a key point. We know what it means to want something, but we *don't* know how that desire or our awareness of it arises in practice, in our brains. That is, we don't know how to implement it, even if we had the ability to fabricate actual neurons. You can, and people do, define "Strong AI" as the attempt to "create an artificial living mind". In that case, you've defined it as something we don't know how to do (yet). Hence my comment about making conclusions about something starting from a point that is not in line with reality.

As you said, "What qualities [the strong AI we eventually do build] shares with us will likely be one of those things that can't be answered for certain until we actually create it." Totally agree.

Comment: Re:Assumptions define the conclusion (Score 1) 574

by ldbapp (#48514037) Attached to: Hawking Warns Strong AI Could Threaten Humanity

So you construct a fantasy world out of whatever you imagine exists or will exist, and then want to discuss what will happen in that world. Fine, it's a fun thing to do, but you can't then bring your conclusions back to the real world.

I think we're arguing along different lines here. You want to posit a scenario and then discuss what happens within that scenario. I'm saying that the conclusions you draw from such a discussion only apply to reality insofar as the initial scenario matches reality. Your scenario doesn't. You start with "create a true, self aware, synthetic mind ...". That's nowhere near reality, so whatever conclusions you draw are also nowhere near reality.

And that's my point. It's useful to consider "what would happen if" because people do have the goal of creating a "strong AI", but it is speculation. The reality is that all we know how to do now and in the foreseeable future is build specialized, though flexible, algorithms to perform complex tasks. Talking about these as if they are "intelligent", or "want" things, or can "think" just makes it difficult to be productive. There is already real danger in having autonomous cars, autonomous planes, autonomous soldiers, and other complex computer controlled machines. We'd be better served discussing the real risks than fretting over some sci-fi world in which machines have become super-human fictional CyberMen.

Our autonomous cars will be faced with situations like the trolley problem (do nothing and it kills five; divert it and it kills one). That problem needs to be faced and an answer provided without resorting to pretending that the autonomous car has "will" or "morality" or a "desire" to minimize some mathematical function related to the number of deaths caused. Autonomous cars, as much as they may seem to have a "goal" of taking us to our requested destination, are just algorithms we created tied to machines we created. We have designed them with a goal in mind, but we have to understand what they *are*, not what we wanted them to be.
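To be concrete about "minimizing some mathematical function": a decision rule like the hypothetical sketch below is all the "morality" such a car would actually have. The cost function and the death counts are invented; the point is that the choice is just arithmetic we wrote, not a desire the machine has.

    #include <stdio.h>

    /* Hypothetical cost function chosen by the designers, not by the
     * car: here, simply the predicted number of deaths. */
    static double cost(int predicted_deaths) {
        return (double)predicted_deaths;
    }

    int main(void) {
        int deaths_if_nothing = 5;   /* stay the course */
        int deaths_if_divert  = 1;   /* swerve */

        /* The "moral decision" is a comparison of two numbers that we
         * told the machine to compare. No will or desire involved. */
        const char *action =
            cost(deaths_if_divert) < cost(deaths_if_nothing)
                ? "divert" : "do nothing";
        printf("chosen action: %s\n", action);
        return 0;
    }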

Comment: Re:Assumptions define the conclusion (Score 1) 574

by ldbapp (#48513947) Attached to: Hawking Warns Strong AI Could Threaten Humanity

I'm allowed hyperbole. Pout.

But seriously, AI's also want nothing. They are simply machines, too. More complex, of course, but still machines. That's my point. You imbue your hypothetical AI with all the qualities of a human, plus extras. You called it a synthetic mind. So we're starting the discussion by presuming something that doesn't exist, and then concluding basically whatever we want. We then try to say that conclusion applies to the real world. That's what Hawking did. He assumed an AI that can supersede us, concluded that it will supersede us, and then inferred that AI is a threat to humanity. It's a baseless argument, resting on something that doesn't exist and that we don't know how to build.

Comment: Re:Assumptions define the conclusion (Score 1) 574

by ldbapp (#48511483) Attached to: Hawking Warns Strong AI Could Threaten Humanity

This is like saying: I'm afraid of automobiles because eventually they will want to travel at the speed of light and will therefore suck up all the energy in the universe in the attempt. Automobiles will almost certainly want to travel as fast as possible because, in order to be useful, an automobile needs to go fast.

Comment: Re:Assumptions define the conclusion (Score 1) 574

by ldbapp (#48511447) Attached to: Hawking Warns Strong AI Could Threaten Humanity

We benefit daily from programs that are nowhere near as intelligent as us. Why is it "that the only way we're likely to benefit from creating an AI is if it's vastly more intelligent than us"? We benefit from non-intelligent machines of all sorts. We benefit from Google. We benefit from Roombas. We benefit from autonomously-driven mining equipment. The list goes on for pages.

In any event, you are conflating the premise with the conclusion.

Comment: Re:Assumptions define the conclusion (Score 1) 574

by ldbapp (#48510413) Attached to: Hawking Warns Strong AI Could Threaten Humanity

Here's a trivial algorithm: int add(int a, int b) { return a + b; }.

No matter how much RAM you give the computer running this algorithm, it will never be faster. No matter how fast you make the clock speed of your CPU, this algorithm will never be able to subtract numbers. No matter how much electricity you allow this algorithm to consume, it will never add three numbers at the same time.

Those are situations in which having more resources doesn't help.
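As a sketch of the same idea beyond add(): an algorithm whose every step depends on the previous one gains nothing from extra cores or extra memory. The constants below are just those of a standard linear congruential generator, used here only to force a serial dependency chain:

    #include <stdio.h>

    int main(void) {
        /* Each iteration consumes the previous result, so the loop is
         * inherently sequential: more RAM or more cores cannot shorten
         * the chain of 1,000,000 dependent steps. (Unsigned overflow
         * wraps, which is fine for this illustration.) */
        unsigned long long x = 1;
        for (int i = 0; i < 1000000; i++)
            x = x * 6364136223846793005ULL + 1442695040888963407ULL;
        printf("%llu\n", x);
        return 0;
    }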

You then suggest ways in which algorithms could be improved to use more resources. Fine, that's engineering. The hope/goal of AI is that we can find the kind of algorithms you hypothesize about. But we don't currently have algorithms that "merely require more resources" to get smarter.

I think having a "what-if" conversation can be very useful. (I particularly enjoy them, in fact.) However, my point is that the conclusion that AI will supersede humans is based on the assumption that we have an AI that *could* supersede a human. We don't have any such AI, and we don't know how to build one. So that hypothetical conclusion is effectively the tautological implication of assuming the outcome.

My point is that speculation does not result in being able to draw actual conclusions about our actual future. If we can't achieve the pre-conditions, we won't suffer the conclusions.

Comment: Re:Assumptions define the conclusion (Score 1) 574

by ldbapp (#48510311) Attached to: Hawking Warns Strong AI Could Threaten Humanity

Clearly, I am a poor author. My point, which has mostly gotten lost, is that speculating about what an AI is or will be and then drawing conclusions about what it will do tells us nothing about what might happen *in reality*. That is because, *in reality* we do not have AIs anywhere near the capability given to them in such hypothetical scenarios as the paperclip maximizer. Moreover, we do not know how to build such AIs. Thus, with speculative premises, the conclusions are just as speculative.

There can be value in "what-if" conversations, but if the premises are unlikely to ever be realized, then so are the conclusions.

Comment: Re:Assumptions define the conclusion (Score 1) 574

by ldbapp (#48509531) Attached to: Hawking Warns Strong AI Could Threaten Humanity
a) No. b) No. c) No. d) No.

All of your points are the kind of uninformed assumptions I'm pointing out, in addition to some of them being just wrong.

Getting more resources does not necessarily make an algorithm smarter. It doesn't even always make it faster. Assuming you have some magical algorithm that "merely require[s] more resources" is just wishful thinking. Show me the algorithm. There isn't currently such an algorithm.

You can, if you will, define AI as you do in c). However, then there is no AI now, and may never be. You're speculating. And the self-aware requirement is very unlikely to be satisfied in our lifetimes. We literally don't even know how self-awareness/consciousness is implemented in ourselves, let alone how it would be implemented in something we create.

When you say, "I don't see why", and "it would likely", you're just speculating.

There's nothing much to be gained by positing unrealistic CyberMen with hypothetical powers and then trying to draw conclusions about what life with AI will be like. All the powers people like to hypothesize do not exist, and we don't currently know how to make them exist. So whatever conclusions you draw are just speculative fiction. Fun, and perhaps a useful philosophical/ethical pursuit, but it's ultimately fiction.

Comment: Assumptions define the conclusion (Score 4, Interesting) 574

by ldbapp (#48507055) Attached to: Hawking Warns Strong AI Could Threaten Humanity
Much commentary on robotics and AI is based on unknowable assumptions about capabilities that may or may not exist. These assumptions leave the commentator the freedom to arrive at whatever conclusion they want, be it utopian, optimistic, pessimistic or dystopian. Hawking falls into that trap. From TFA: "It would take off on its own, and re-design itself at an ever increasing rate," he said. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded." This assumes a lot about what a "super-human" AI would and could do. All the AI so far sits in a box that we control. That won't supersede us.

So commentary like this usually assumes the AI has become some form of Superman/Cyberman in a robot body, basically like us, only arbitrarily smarter to whatever degree you want to imagine. That's just speculative fiction, and not based on any reality.

You have to imagine these Cybermen have a self-preservation motivation, a goal to improve, a goal to compete, independence, a soul. AI's have none of that, nor any hints of it. Come back to reality, please.

Comment: Re:Actual PhD students getting slandered? (Score 1) 448

by ldbapp (#47316603) Attached to: $500k "Energy-Harvesting" Kickstarter Scam Unfolding Right Now
He did not confirm a device, though I didn't ask. He confirmed his involvement in biz.dev, and said it was only part time. He expressed personal confidence in the project, but that's all.

There was one odd thing. I sent email to him @ucla. He replied from wetaginc.com, explaining that it is because iFind isn't related to UCLA. Then he offered to send an empty message from the UCLA account. I glanced at the headers of his email and found references to eigbox.net, which seems to be implicated in spam-related activity. It could be innocent. He may just be careful to separate his professional activities, and his email provider/ISP may use eigbox. Or there could be a MITM thing going on. A group looking to KS-scam $1/2M could certainly be savvy enough to impersonate the people whose names were stolen.

My level of curiosity isn't high enough to pursue any further. ;)
