Comment Re:And the purpose of this exercise is? (Score 1) 465

Nobody can predict what will happen between the U.S. and Russia, but I'd be really surprised if things got so bad that U.S. companies didn't feel comfortable shipping goods through Russia. It's not like we're talking about a third-world country or anything.

And what you say about damage is downright silly, because the same concern applies equally to a bridge inside our borders. In fact, by your standards, the docks where those boats load their cargo should never have been built: if one of the minimum-wage immigrants carrying cargo on his shoulders out to a small boat in waist-deep water dies of a heart attack, it doesn't prevent other workers from loading cargo, whereas a dock collapse does, and those workers can be reassigned if we suddenly no longer need boat shipping. The only way that logic even starts to make sense is if a serious failure is highly probable, and if that's the case, then it means they got the design wrong.

Besides, the cost of a Bering Strait bridge could be a lot lower than you might think. They would need one segment of it to be tall enough to let shipping traffic through—possibly between the two Diomede Islands—but the rest of it could ostensibly be a simple pontoon bridge, which is relatively cheap.

Most of the cost of the project would likely be for that one span between the two islands that's tall enough to let ships pass under it. That would cost several billion dollars, in all likelihood. The remaining 55 miles, assuming other pontoon bridges are any indication of cost, should be in the neighborhood of $5 million to $10 million per lane-mile. At 55 miles long, a four-lane pontoon bridge should cost a couple of billion dollars, give or take, which is about as much money as we waste on a single B-2 bomber.

Of course, a pontoon bridge in that area would have to be specifically designed to withstand the rather severe storms that the Bering Sea experiences, which could drive the cost way up. On the other hand, the project is so huge that economies of scale would kick in and bring the component cost way, way down (because you'd be building over 18,000 identical 16-foot segments), which would probably balance that out to a large extent.
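The back-of-the-envelope arithmetic above is easy to check. Here's a quick sketch using the comment's own assumed figures (these are rough guesses, not engineering data):

```python
# Rough cost sketch for the pontoon portion of a Bering Strait bridge,
# using the assumed per-lane-mile figures from the comment above.
LENGTH_MILES = 55
LANES = 4
COST_PER_LANE_MILE_LOW = 5e6   # assumed, USD
COST_PER_LANE_MILE_HIGH = 10e6  # assumed, USD

low = LENGTH_MILES * LANES * COST_PER_LANE_MILE_LOW
high = LENGTH_MILES * LANES * COST_PER_LANE_MILE_HIGH
print(f"pontoon span: ${low / 1e9:.1f}B to ${high / 1e9:.1f}B")

# Segment count check: 55 miles divided into 16-foot pontoon segments
SEGMENT_FEET = 16
segments = LENGTH_MILES * 5280 / SEGMENT_FEET
print(f"segments needed: {segments:,.0f}")
```

The totals land at $1.1B to $2.2B for the pontoon portion, and the segment count comes out to 18,150, which is where the "over 18,000 identical segments" figure comes from.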

Of course, I am not a bridge engineer, so my estimates could be way off, but I wouldn't be at all surprised if someone were able to come up with a design that fell under the $10 billion mark, or about twice the cost of the Bay Bridge. Heck, the tunnel that Russia proposed was only sixty or seventy billion, so that estimate probably isn't too far off the mark.

Comment Re:And the purpose of this exercise is? (Score 2) 465

But not cheaper and faster. A boat from China or Japan takes 10-14 days plus loading and unloading time (which, if you're sharing a boat with a bunch of other companies, can potentially add weeks of delay before the boat leaves the dock), and air shipping is relatively expensive. With two or more drivers trading off, you could potentially do California to Japan by truck in about a week.

Having a bridge between North America and Asia could be absolutely huge for shipping, as a potential midpoint between the two shipping methods. Whether it will be or not is another question.

Comment Re:Obvious deflection. (Score 2, Interesting) 262

Because there is no good way to lay blame when damage occurs.

With a non-autonomous weapon, the person who pulls the trigger is basically responsible. If you're strolling in the park with your wife, and some guy shoots her, well, he's criminally liable. If some random autonomous robot gets hit by a cosmic ray and shoots your wife, nobody's responsible.

This is a huge issue for our society, because the rule of law and criminal deterrence are based on personal responsibility. Machines aren't persons. The death penalty for a machine is stupid (watch out, robot, if you kill someone we'll take out your batteries!). The number of ways that things can go wrong without the owner of the machine having a reasonable amount of liability is huge.

What if the autonomous weapon malfunctions in the field? Is the owner responsible for having deployed it in that particular location? Is the manufacturer responsible for the bugs? What if the machine was operating outside of recommended parameters? What if the machine was hacked, and the bug occurs due to a faulty communication channel, i.e., a message was sent authorizing the machine to target your wife, but a fraction of a second later another message was sent rescinding the order, and that second message was garbled or never arrived due to a networking delay in transit on Amazon's cloud servers? What if the machine's owner deploys thousands of vermin-killing robots around the city without incident every day, but one of them kills your wife because she was misidentified as a rodent?

The fact is that AIs and autonomous robots have no legally useful place in society (unlike nonautonomous robots). There is almost no deterrence value in threatening an owner with fines (how much is reasonable in the rodent example?) and there is no value in destroying the offending machine (an autonomous machine is not alive, and it may be an identical model from a manufactured run of 1 million products, so what's the point of scrapping that one unit?). There is no point in blaming a random customer who bought the machine and probably has no clue at all how it operates or how to detect malfunctions. And you can bet that the manufacturing chain is full of liability disclaimers and that insurance companies will pass the buck. So what hope is there for avenging your wife? And if it goes to trial (against whom?), how long will it take and how much will be spent for an uncertain outcome?

The ethical issues surrounding blame are serious, and at the risk of going slightly off topic, they are similar to the issues of terrorism. If a suicide bomber blows himself up in a crowded place, you can't pick up his pieces and stick them in jail. Nothing you can do to him has any deterrent effect, and going after his family or friends is, at best, a legal nightmare and an ethical problem. The issues surrounding autonomous machines are a bit like that, because, well, the fact that it's an *autonomous* machine means that no human being was actually pulling the trigger or directly making the choice to shoot.

Comment We have no idea what "superintelligent" means. (Score 4, Insightful) 262

When faced with a tricky question, one thing you have to ask yourself is "Does this question actually make any sense?" For example, you could ask "Can anything get colder than absolute zero?" and the simplistic answer is "no"; but it might be better to say the question itself makes no sense, like asking "What is north of the North Pole?"

I think when we're talking about "superintelligence" it's a linguistic construct that sounds to us like it makes sense, but I don't think we have any precise idea of what we're talking about. What *exactly* do we mean when we say "superintelligent computer" -- if computers today are not already there? After all, they already work on bigger problems than we can. But as Geist notes there are diminishing returns on many problems which are inherently intractable; so there is no physical possibility of "God-like intelligence" as a result of simply making computers merely bigger and faster. In any case it's hard to conjure an existential threat out of computers that can, say, determine that two very large regular expressions match exactly the same input.
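To make the closing example concrete: deciding whether two regular expressions accept exactly the same language is decidable but PSPACE-complete in general, so it's exactly the sort of problem where raw computing power hits diminishing returns. A brute-force sketch (feasible only for tiny alphabets and short strings, which is the point) might look like this; the function name and bounds are illustrative:

```python
import re
from itertools import product

def agree_up_to(pattern_a, pattern_b, alphabet="ab", max_len=6):
    """Check whether two regexes accept the same strings, tested
    exhaustively over every string of length <= max_len. This is a
    bounded approximation of true language equivalence, which blows
    up exponentially as the bound grows."""
    ra, rb = re.compile(pattern_a), re.compile(pattern_b)
    for n in range(max_len + 1):
        for chars in product(alphabet, repeat=n):
            s = "".join(chars)
            if bool(ra.fullmatch(s)) != bool(rb.fullmatch(s)):
                return False
    return True

print(agree_up_to(r"a(ba)*", r"(ab)*a"))  # True: same language
print(agree_up_to(r"a*", r"a+"))          # False: they differ on ""
```

The exact algorithms (DFA construction plus minimization) do better than brute force, but the worst case is still exponential in the size of the expressions, which is why "just make the computer bigger" doesn't conjure up god-like problem-solving.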

Someone who has an IQ of 150 is not 1.5 times as smart as an average person with an IQ of 100. General intelligence doesn't work that way. In fact, I think IQ is a pretty unreliable way to rank people by "smartness" when you're well away from the mean -- say over 160 (i.e., four standard deviations) or so. Yes, you can rank people in that range by *score*, but that ranking is meaningless. And without a meaningful way to rank two set members by some property, it makes no sense to talk about "increasing" that property.

We can imagine building an AI which is intelligent in the same way people are. Let's say it has an IQ of 100. We fiddle with it and the IQ goes up to 160. That's a clear success, so we fiddle with it some more and the IQ score goes up to 200. That's a more dubious result. Beyond that we make changes, but since we're talking about a machine built to handle questions that are beyond our grasp, we don't know whether we're actually making the machine smarter or just messing it up. This is still true if we leave the changes up to the computer itself.

So the whole issue is just "begging the question"; it's badly framed because we don't know what "God-like" or "super-" intelligence *is*. Here's what I think is a better framing: will we become dependent upon systems whose complexity has grown to the point where we can neither understand nor control them in any meaningful way? I think this describes the concerns about "superintelligent" computers without recourse to words we don't know the meaning of. And I think it's a real concern. In a sense, we've been here before as a species. Empires need information processing to function, so before computers, humanity developed bureaucracies, which are a kind of human-operated information processing machine. And eventually the administration of every large empire has lost coherence, leading to the empire falling apart. The only difference is that a complex AI system could continue to run well after human society collapsed.

Comment Re:It's coming. Watch for it.. (Score 1) 163

The overriding principle in any encounter between vehicles should be safety; after that, efficiency. A cyclist should make way for a motorist to pass, but *only when doing so poses no hazard*. The biggest hazard presented by the operation of any kind of vehicle is unpredictability. For a bike, swerving in and out of a lane of car traffic presents the greatest danger to the cyclist and to others on the road.

The correct, safe, and courteous thing to do is look for the earliest opportunity where it is safe to make enough room for the car to pass, move to the side, then signal the driver it is OK to pass. Note this doesn't mean *instantaneously* moving to the side, which might lead to an equally precipitous move *back* into the lane.

Bikes are just one of the many things you need to deal with in the city, and if the ten or fifteen seconds you spend waiting to put the accelerator down is making you late for wherever you're going, then you probably should have left a few minutes earlier, because in city driving, if it's not one thing it'll be another. In any case, if you look at the video, the driver was not being significantly delayed by the cyclist, and even if he had been, that is no excuse for driving in an unsafe manner, although in his defense he probably doesn't know how to handle an encounter with a cyclist correctly.

The cyclist, of course, ought to know how to handle an encounter with a car, and for that reason it's up to the cyclist to manage the encounter to the greatest degree possible. He should have more experience and a lot more situational awareness. In this case, the cyclist's mistake was that he was sorta-kinda to one side of the lane, leaving enough room that the driver thought he was supposed to squeeze past him. The cyclist ought to have clearly claimed the entire lane while acknowledging the presence of the car; that way, when he moves to the side, it's clear to the driver that it's time to pass.

Comment Re: Just goes to show you UNIX SUX (Score 1) 68

So if you are an ISP providing a secondary DNS service, you're happy to create accounts with ssh/rsync access for 10 000 customers who all have more lax security than you do?

Sure. You give them all a shell account with access to their own zone files, and you require them to provide a public key for authentication (no passwords to attack). Then, you have a separate process that watches for changes and updates the official zone files that the daemon reads. Clearly, a daemon that has write access to all of the zone files is inherently less safe than a series of ssh accounts, each with access to only a single user's files, coupled with a daemon that has only read-only access to copies of the original zone files.
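That "separate process" could be a simple cron job. Here's a minimal sketch of the idea, assuming each customer's staging zone files live under their home directory and BIND's `named-checkzone` and `rndc` tools are available; all paths here are illustrative, not a real layout:

```python
#!/usr/bin/env python3
# Sketch of the watcher process described above: customers edit only
# their own staging zone files over ssh; this job validates changed
# files and copies good ones into the directory the daemon reads.
import hashlib
import pathlib
import shutil
import subprocess

STAGING = pathlib.Path("/home")           # /home/<customer>/zones/*.zone
LIVE = pathlib.Path("/var/named/zones")   # not writable by customers

def checksum(path):
    """Content hash, used to detect changed staging files."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def sync_zone(src, zone_name):
    dst = LIVE / src.name
    if dst.exists() and checksum(src) == checksum(dst):
        return  # unchanged since last sync
    # Reject syntactically invalid zones before they go live
    # (named-checkzone ships with BIND).
    ok = subprocess.run(["named-checkzone", zone_name, str(src)],
                        capture_output=True).returncode == 0
    if ok:
        shutil.copy2(src, dst)
        subprocess.run(["rndc", "reload", zone_name])

for zone_file in STAGING.glob("*/zones/*.zone"):
    sync_zone(zone_file, zone_file.stem)
```

The key property is the one the comment describes: the customer-facing accounts can only touch their own staging files, and nothing the daemon reads is ever written by a customer directly.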

Comment Re:Don't buy the cheapest cable (Score 1) 391

There are chemicals you can apply to plastic to make it less brittle, chemicals which are banned in most of the developed world because of their carcinogenic side effects.
The computer magazine I read conducted a test of various components at the start of the year and had a very big surprise. I believe product lines were dropped.

Comment Re: Solution: Don't Trust Anyone (within reason) (Score 1) 82

Dear AC, you seem to be a cheapskate. You want "free labor"? Fuck off. Free software gives *anyone* the ability to pay someone who knows what he's doing to look at, and modify, the code. What more could anyone want? (Except for cheapskates like you, but those people's "complaints" aren't worth addressing anyway.) That's the beauty of Free: you don't *have* to trust Google, Microsoft, Apple, or anyone else with your security, because you can choose who will do the work and exactly what the criteria will be for the investigation.
