Comment Re:Fallacy of Climate Control (Score 1) 248

People have been able to change the environment for ages, and climate depends in part on the environment. Deforestation and overgrazing, for example, done on a large enough scale, will change the climate downwind. Vegetation affects albedo, temperature, and the rate of evaporation, as well as particulates and volatile organic compounds -- global CO2 changes are not the only way to affect climate.

Comment Re:Not surprising (Score 1) 291

The problem is not want to buy but can afford to buy. Tesla is at the high end of what I would consider the car pricing range if you leave out the super premium and exotics. As a result, many people who might preferentially buy one simply can't afford one.

Sure, but that's only an issue if the regulations specify Tesla levels of performance and efficiency. I'm suggesting the regs could be written with the most efficient ICE automobiles on the market *today* as the benchmark for what is feasible. These are not necessarily fantastically expensive, nor are they hair-shirt city cars. The Mazda 3 is a four-door sedan that seats five and has an engine that delivers 184 hp at 26 mpg city/35 highway; MSRP is $18,800. If you need a people mover, you can get a seven-passenger Mitsubishi minivan rated 25 city/31 highway for $23,200.

It's clear that the current state of the art in ICE makes it technologically feasible to build affordable, practical cars that exceed the current average mileage. They're being sold now. If, on the other hand, you want high performance -- say, 0-60 mph in under 4 seconds -- then you're talking big bucks and exotic technology.

What manufacturers won't be able to do is slap a tarted-up body on a primitive $26,000 truck chassis, call it an SUV, and charge $50,000 for it. I'm talking about the Silverado-based Suburban. I think there's a place in the world for such vehicles, but it's insane to charge an additional $24,000 to slap two rows of seating in place of a pickup bed; there's plenty of headroom to charge a gas guzzler tax on that one.
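Just to put rough numbers on the gas guzzler point, here's my own back-of-the-envelope sketch -- the 26/35 figures are the Mazda 3's from above, while the 16/20 SUV rating, the 12,000 miles a year, and the $3.50/gallon price are assumptions I picked purely for illustration:

```python
# Back-of-the-envelope fuel-use comparison (illustrative numbers only).
# The 26/35 mpg figures come from the Mazda 3 mentioned above; the SUV
# ratings, annual mileage, and gas price are assumptions for this sketch.

def combined_mpg(city: float, highway: float) -> float:
    """EPA-style combined rating: harmonic mean weighted 55% city, 45% highway."""
    return 1.0 / (0.55 / city + 0.45 / highway)

def annual_fuel_cost(mpg: float, miles: float = 12_000, dollars_per_gallon: float = 3.50) -> float:
    """Gallons burned per year times an assumed gas price."""
    return miles / mpg * dollars_per_gallon

sedan = combined_mpg(26, 35)   # compact sedan from the comment
suv = combined_mpg(16, 20)     # assumed rating for a body-on-frame SUV

print(f"Sedan: {sedan:.1f} mpg combined, ~${annual_fuel_cost(sedan):,.0f}/year in fuel")
print(f"SUV:   {suv:.1f} mpg combined, ~${annual_fuel_cost(suv):,.0f}/year in fuel")
```

Even with those made-up but plausible SUV numbers, the sedan burns something like 40% less fuel per year; that gap is the headroom a gas guzzler tax would be working with.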

Comment Re: Tiny black holes (Score 4, Insightful) 156

Man creates a fancy cancer-causing agent; let's call it ... Agent Orange. Did God create cancer?

All those "carcinogenic" substances you hear about don't cause cancer -- they increase the rate of mutation, which wouldn't ever cause cancer if the cells were better designed. To put it another way, if people didn't naturally get cancer it would be almost impossible to design a substance that would give them cancer. If an engineer had designed human DNA, then that engineer would be blamed if random mutagens would routinely cause cancer -- that's why we have fail-safes and error-correcting code. Human cells also have fail-safes and error correcting code, but they're poorly designed.

Just as an example, the naked mole rat has additional fail-safes and so is almost immune to cancer.
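For anyone who hasn't seen "error-correcting code" outside of biology metaphors, here's what the engineering version looks like -- a toy Hamming(7,4) encoder/decoder (my own illustration, not a claim about how cells actually do it) that can detect and repair any single flipped bit:

```python
# Toy Hamming(7,4) code: 4 data bits protected by 3 parity bits, so any
# single bit flip ("mutation") can be located and repaired on decode.

def hamming_encode(d: list[int]) -> list[int]:
    """Encode 4 data bits as a 7-bit codeword (positions 1..7: p1 p2 d1 p3 d2 d3 d4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_decode(c: list[int]) -> list[int]:
    """Locate and fix a single-bit error, then return the 4 data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit, 0 if clean
    if syndrome:
        c[syndrome - 1] ^= 1          # repair the flipped bit
    return [c[2], c[4], c[5], c[6]]

codeword = hamming_encode([1, 0, 1, 1])
codeword[5] ^= 1                       # simulate a random single-bit "mutation"
assert hamming_decode(codeword) == [1, 0, 1, 1]
```

Cells do something loosely analogous with DNA-repair enzymes and tumor-suppressor checkpoints; the complaint above is that those mechanisms exist but fail more often than an engineer would tolerate.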

Comment Re:Not surprising (Score 1) 291

Indeed. But it's also true that change per se puts more stress on less innovative or agile companies, especially companies with massive investments sunk into older technologies. No matter what rules you set, they'll benefit some companies over others; rules that are very favorable to GMC would be unfavorable to Tesla and vice versa. Both will argue that the rules that benefit them most are best for the country.

I'll say this for Tesla's position, though: the notion that it's physically impossible to build fuel efficient cars that people will want to buy is balderdash.

Comment Re:Obvious deflection. (Score 1) 262

Because it happened with computers so it must be more dangerous because I do not understand computers.

No, because computers allow for auto-targeting, self-deploying weapons. Though soldiers are notorious for unquestioningly following orders, computers really do unquestioningly follow orders. Imagine if there were a large army of robot soldiers and some crazy hacker got control of them -- or worse, a politician.

Comment We have no idea what "superintelligent" means. (Score 4, Insightful) 262

When faced with a tricky question, one thing you have to ask yourself is "Does this question actually make any sense?" For example, you could ask "Can anything get colder than absolute zero?" and the simplistic answer is "no"; but it might be better to say the question itself makes no sense, like asking "What is north of the North Pole?"

I think "superintelligence" is a linguistic construct that sounds like it makes sense, but I don't think we have any precise idea of what we're talking about. What *exactly* do we mean by "superintelligent computer" -- if computers today are not already there? After all, they already work on bigger problems than we can. But as Geist notes, there are diminishing returns on many problems because they are inherently intractable, so there is no physical possibility of "God-like intelligence" arising from simply making computers bigger and faster. In any case it's hard to conjure an existential threat out of computers that can, say, determine that two very large regular expressions match exactly the same input.
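To make the regex example concrete: deciding whether two regular expressions accept exactly the same language is something a machine can do and a human can't eyeball, but it gets expensive fast -- which is the "diminishing returns on intractable problems" point. Here's a deliberately dumb sketch of my own (brute-forcing every string up to a fixed length over a tiny alphabet), not a real equivalence checker:

```python
# Toy "are these two regexes equivalent?" check: compare them on every string
# up to max_len over a small alphabet. A real checker reasons about the
# underlying automata instead, which is where the cost blows up in general.
import re
from itertools import product

def agree_up_to(pattern_a: str, pattern_b: str, alphabet: str = "ab", max_len: int = 8) -> bool:
    """True if both regexes accept exactly the same strings of length <= max_len."""
    ra, rb = re.compile(pattern_a), re.compile(pattern_b)
    for n in range(max_len + 1):
        for chars in product(alphabet, repeat=n):
            s = "".join(chars)
            if bool(ra.fullmatch(s)) != bool(rb.fullmatch(s)):
                return False
    return True

print(agree_up_to(r"(a|b)*", r"(a*b*)*"))   # True: same language, so the bounded check agrees
print(agree_up_to(r"(ab)*", r"a*b*"))       # False: e.g. "a" matches only the second pattern
```

The honest version has to reason about the automata themselves rather than sampling strings, and that kind of grinding combinatorial work is exactly what machines already do better than we do without being remotely "God-like".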

Someone with an IQ of 150 is not 1.5 times as smart as an average person with an IQ of 100. General intelligence doesn't work that way. In fact I think IQ is a pretty unreliable way to rank people by "smartness" once you're well away from the mean -- say over 160 (i.e. four standard deviations) or so. Yes, you can rank people in that range by *score*, but that ranking is meaningless. And without a meaningful way to rank two set members by some property, it makes no sense to talk about "increasing" that property.
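To put a number on how thin the air is out at four standard deviations (using the usual mean-100, SD-15 model; the script is just my own illustration):

```python
# Upper-tail probability of a normal distribution: how rare are high IQ scores
# under the standard mean-100, SD-15 model?
from math import erfc, sqrt

def upper_tail(z: float) -> float:
    """P(Z > z) for a standard normal variable."""
    return 0.5 * erfc(z / sqrt(2))

for iq in (130, 145, 160):
    z = (iq - 100) / 15
    print(f"IQ {iq} ({z:.0f} SD above the mean): about 1 in {1 / upper_tail(z):,.0f}")
```

At 160 that's roughly one person in thirty thousand -- far too few test-takers at that level to norm a test against, which is why rankings out there stop meaning anything.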

We can imagine building an AI that is intelligent in the same way people are. Say it has an IQ of 100. We fiddle with it and the IQ goes up to 160. That's a clear success, so we fiddle with it some more and the score goes up to 200. That's a more dubious result. Beyond that we keep making changes, but since we're talking about a machine built to handle questions that are beyond our grasp, we don't know whether we're actually making the machine smarter or just messing it up. That's still true if we leave the changes to the computer itself.

So the whole issue is just "begging the question"; it's badly framed because we don't know what "God-like" or "super-" intelligence *is*. Here, I think, is a better framing: will we become dependent upon systems whose complexity has grown to the point where we can neither understand nor control them in any meaningful way? I think this captures the concerns about "superintelligent" computers without recourse to words we don't know the meaning of. And I think it's a real concern. In a sense we've been here before as a species. Empires need information processing to function, so before computers humanity developed bureaucracies, which are a kind of human-operated information processing machine. And eventually the administration of a large empire has always lost coherence, leading to the empire falling apart. The only difference is that a complex AI system could continue to run well after human society collapsed.
