
Comment Re:Anthropomorphizing (Score 1) 421

That's a very short-term view.

With regard to my second paragraph, it may indeed be only a matter of (a very long) time. However:

when we create something at least as clever as us, that may very well be the end of our era.

What I tried to critique in my first paragraph is exactly this implicit imputation of human-like motivations to the supposed AI.

Comment Re:Funny, that spin... (Score 2) 421

Especially if those with domain-specific expertise need to save their careers.

The problem with this argument is that it always rules out the opinions of those most likely to be correct on any subject. For this appeal-to-the-wallet argument to carry any weight, you need evidence of corruption involving the specific individuals whose statements are being evaluated.

Comment Re:Anthropomorphizing (Score 2) 421

-- quest for power / energy / food, survival, and maybe even reproduction

But where do these come from? I submit that each one of these is only suggested here because we already have these motivations.

we're a biological vessel for intelligence

I consider this antimaterialist. Our bodies aren't vessels we inhabit (except in the sense that they're literally full of fluids); they are us.

Comment Anthropomorphizing (Score 4, Interesting) 421

IMHO, all of the fear-mongering is based on anthropomorphizing silicon. It implicitly imputes biological ends and emotionally motivated reasoning to so-called AI.

I think that folks who don't have hands-on experience with machine learning just don't get how limited the field is right now, and what special conditions are needed to get good results. Similarly, describing machine learning techniques like ANNs as being inspired by actual nervous systems seems to ignore 1) that they are linear combinations of transfer functions rather than simulated neurons, and 2) that even viewed as simplified simulations, ANNs carry the very strong assumption that nothing happening inside a neuron is of any importance.
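
To make that concrete, here's a minimal sketch (plain NumPy; the function name and layer sizes are mine, purely for illustration) of what a feed-forward "neuron" layer actually computes: a weighted sum pushed through a fixed transfer function, nested a couple of times. No spikes, no membrane dynamics, nothing happening "inside" a unit.

    import numpy as np

    def layer(x, W, b):
        z = W @ x + b      # a linear combination of the inputs...
        return np.tanh(z)  # ...through a fixed scalar transfer function

    # A two-layer "network" is just nested weighted sums.
    rng = np.random.default_rng(0)
    x = rng.normal(size=3)
    h = layer(x, rng.normal(size=(4, 3)), np.zeros(4))
    y = layer(h, rng.normal(size=(2, 4)), np.zeros(2))
    print(y)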

Comment Re:Article is stupid (Score 1) 236

One man, Harry Daghlian, working alone at night, let slip one cube too many, frantically grabbed at the mound to halt the chain reaction, saw the shimmering blue aura of ionization in the air, and died two weeks later of radiation poisoning. Later Louis Slotin used a screwdriver to prop up a radioactive block and lost his life when the screwdriver slipped. Like so many of these worldly scientists he had performed a faulty kind of risk assessment, unconsciously mis-multiplying a low probability of accident (one in a hundred? one in twenty?) by a high cost (nearly infinite).

(Emphasis mine.) That quote is from Genius, James Gleick's biography of Richard Feynman; the author of TFA apparently makes the same mistake.
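
To spell out the arithmetic the quote is pointing at (a toy sketch; the probability is the quote's own guess, not data):

    # Expected cost = probability x cost. With a near-infinite cost, any
    # nonzero probability still dominates -- that's the multiplication
    # these scientists effectively got wrong.
    p_accident = 1 / 100        # "one in a hundred"
    cost = float("inf")         # a fatal criticality accident: effectively unbounded
    print(p_accident * cost)    # inf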

Comment Re:Harder: self-stabilizing parachute, or balance (Score 1) 496

Well, there is bound to be some shit floating around any gravity well. As long as we're talking spherical cows in a vacuum, why not just make the massless parachute larger? How do you actually define "atmosphere", anyway?

Say we want to land on Mars. The atmosphere is 100 times less dense than on Earth, so we'll need 100 times more parachute (drag scales linearly with air density; see the sketch at the end of this comment). I'm sure that's still a lot less mass and complexity than the balancing rockets... but what if we want to make a round trip, and need to land on both Mars and Earth?

We're going to need parallel parachute systems of different sizes, or a re-packable parachute with variable unfurling, and suddenly the parachute option is looking a lot more complicated. On the other hand, if you get the balancing rockets to work, you can use them anywhere, over and over again.
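
For what it's worth, here's the scaling behind the "100 times more parachute" figure as a quick sketch (the drag equation is standard; the canopy area is a made-up illustration):

    # Drag force: F = 0.5 * rho * v**2 * Cd * A, so for equal drag at equal
    # descent speed the required canopy area scales as 1/rho.
    rho_earth = 1.22              # kg/m^3, sea-level air density on Earth
    rho_mars = rho_earth / 100.0  # the "100 times less dense" figure from above
    A_earth = 50.0                # m^2, an illustrative Earth canopy (made-up)
    A_mars = A_earth * (rho_earth / rho_mars)
    print(A_mars)                 # 5000.0 -- 100x the area for the same braking force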
