
Comment Re:This run at driverless cars will fail (Score 1) 114

I think it's a solvable problem.

As long as the manufacturer has a sufficient income stream and a way of making sure that cars with known flaws are fixed, there is no reason they couldn't cover the liability for all their cars, and they of course have the option of taking out an insurance policy against that eventuality. The key will be ensuring that revenue stream. The nightmare scenario for a manufacturer is being held responsible for a product they no longer make any income from or have any control over.

For this reason I imagine that when self-driving cars first hit the market, it will be on an "all-in lease" basis where the manufacturer remains in control and can therefore respond effectively to dangerous flaws before they run up too much liability. I would expect outright sales of self-driving cars to require some legislative moves to define the extent of the manufacturer's liability.

Comment Re:Question (Score 1) 114

Prototype planes are registered as "experimental aircraft". That means that the authorities have looked at it and decided it's safe enough for a test pilot to fly it. Proper type approval comes later when the manufacturer has gathered enough evidence by (among other things) actually flying the plane.

In the USA, home-built aircraft are also registered as experimental aircraft (despite not being truly "experimental" in most cases) and get much the same level of scrutiny. Other countries may have different rules on homebuilts.

As for which airports, that will depend on the type of plane. Big planes are going to be built and tested somewhere with a big runway. Big runways are expensive and politically difficult to build, so those facilities are likely to be built next to an existing one, which may also form part of a fairly major airport. Airbus does its assembly and testing at Toulouse international airport; Boeing's main manufacturing facilities seem to be attached to non-international but still reasonably large airports. Smaller planes are obviously built and tested at smaller airports.

Much as the only way you really find out how a plane copes with flying, and get the snags out of the design, is to perform test flights, the only way you really find out how well a self-driving car (or a human driver, for that matter) handles real road conditions, and what situations it has trouble handling, is to test it on real roads. Simulations and lab tests are important, but they are not a substitute for real-world testing. Having experienced humans around during that real-world testing to intervene is also a good idea (again, for both human drivers and machine drivers).

Comment Re:Why do I get the funny feeling that (Score 4, Informative) 265

Do they really need one?

I can't find an exact figure for the donation, but according to http://www.openbsdfoundation.o... it was in the $25K to $50K range. That may be a lot for an open source project running on a shoestring budget, but it's pretty trivial to MS. If they get some good PR and some help with the Windows port of OpenSSH out of it, then it's probably money well spent.

Comment Re:It won't work that way (Score 1) 307

I'd've loved to see ARIN put a "you can only get v4 space if you show us that you're doing a serious v6 deployment too" policy on their last /8.

I think they should have done that long before the last /8, and they should have carefully defined what was meant by "serious v6 deployment". Something along the lines of:

1: All IPv4 customers of the requester must be offered IPv6.
2: For new customers, any provider-supplied equipment must support IPv6 in its default configuration, and all instructions must cover IPv6.
3: All existing IPv4 customers of the requester must be explicitly contacted and instructed on the steps needed to get IPv6.
4: All public services operated by the requester must be offered on IPv6.
5: The requester must operate local relays for 6to4 and Teredo and direct all internal customer traffic for 2001::/32 (Teredo) and 2002::/16 (6to4) through them.
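As a minimal sketch of the kind of classification point 5 implies (my illustration, not anything ARIN specified): traffic destined for the Teredo and 6to4 service prefixes can be identified with Python's standard `ipaddress` module.

```python
import ipaddress

# The two transition-technology prefixes named in point 5.
TEREDO = ipaddress.ip_network("2001::/32")
SIX_TO_FOUR = ipaddress.ip_network("2002::/16")

def needs_local_relay(dst: str) -> bool:
    """Return True if traffic to dst should be steered to a local
    Teredo/6to4 relay rather than sent to a distant public relay."""
    addr = ipaddress.ip_address(dst)
    if addr.version != 6:
        return False  # plain IPv4 traffic is not relay-bound
    return addr in TEREDO or addr in SIX_TO_FOUR
```

Note that ordinary native IPv6 destinations (e.g. 2001:db8::1, which is outside 2001::/32) fall through to normal routing.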

But they didn't. ICANN and the RIRs knew, or should have known, that continuing as they were would lead to v4 exhaustion before serious IPv6 deployment, but they did it anyway.

Comment I call BS on the practical applications. (Score 4, Insightful) 148

TFA seems to conflate the ideas of speed ratio and force multiplication. The two are only equal if the mechanism is perfectly efficient. In practice some of the input force will instead be consumed opposing friction in the mechanism, and the output force will be limited by the stretch of the parts. So the maximum force multiplication achieved may be substantially lower than the speed ratio.
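The point can be put in one line of arithmetic. This is an illustrative simplification of my own (a single lumped efficiency factor, not anything from TFA):

```python
def output_torque(input_torque: float, ratio: float, efficiency: float) -> float:
    """Ideal torque multiplication equals the speed ratio; a real gearbox
    delivers only a fraction of that, modelled here as one efficiency factor."""
    return input_torque * ratio * efficiency

# A 100:1 reduction at 60% overall efficiency multiplies torque only 60x,
# well short of the 100x the speed ratio alone would suggest.
```

With very high ratios built from many stages, the cumulative efficiency can be low enough that the real force multiplication is a small fraction of the nominal ratio.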

To make a high-ratio gearbox practical for force multiplication, the low-torque high-speed parts need to be small to minimise friction, while the low-speed high-torque parts need to be large to prevent them from breaking.

To make it practical for accurate rotational positioning, again the low-speed parts need to be large; otherwise flexibility in those low-speed parts will compromise the ability to accurately maintain position.

Comment Re:Post should have clarified: (Score 1) 179

AIUI we have a situation where some miners are enforcing stricter rules than others.

If the strict miners significantly out-mine the loose ones, then not much will happen. The blocks that don't pass the strict rules will quickly be forked off and die, and no one sensible accepts a one-block-confirmed transaction for anything important.

However, if the loose miners out-mine the strict miners, you get a long-lasting fork between the strict and loose miners. People whose clients only enforce the loose rules will see what is going on in the loose fork. People whose clients enforce the strict rules will see what is going on in the strict fork. The two forks will most likely contain most of the same transactions, but there may be some cases where someone manages to engineer that the same bitcoins are transferred to different places in the two forks. Newly mined coins will also go to different people in the two forks.

If the mining rates of strict and loose miners are approximately equal, then you have a mess. The two forks could run in parallel for some time, and then the strict fork could either hit a run of good luck or be boosted by miners switching to the strict rules and kill the loose fork. AIUI that is the present situation.

Most likely this will end with the strict miners outnumbering the loose ones; any transactions that only happened on a loose branch will then be killed.
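The fork dynamics above can be sketched as a toy fork-choice rule (my illustration, not Bitcoin's actual implementation): each client follows the longest chain made up entirely of blocks that are valid under its own rules, which is why strict and loose clients can disagree about which fork "won".

```python
def best_chain(chains, is_valid_block):
    """Pick the longest chain whose every block passes this client's rules."""
    valid = [c for c in chains if all(is_valid_block(b) for b in c)]
    return max(valid, key=len) if valid else []

# Strict clients reject "loose" blocks, so even a longer loose fork is
# invisible to them; loose clients accept both kinds and follow the longer fork.
strict_fork = ["s1", "s2", "s3"]
loose_fork = ["s1", "L2", "L3", "L4"]
strict_ok = lambda b: not b.startswith("L")
loose_ok = lambda b: True
```

Here the strict client stays on the three-block strict fork while the loose client follows the four-block loose fork, matching the split described above.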

Comment Re:Well... (Score 1) 377

It does, but it isn't always practical to use it.

If all your users do is create and edit files, then sure, you can use the --update flag and omit the --delete flag, making the rsync operation a lot safer.

But if your users are more active, that is not so practical. Assuming this storage is used as a work area by developers, they are likely to be doing things like deleting files, and sometimes even deleting files and replacing them with a copy of an older file (for example, deleting a dirty copy of a source tree and replacing it with a clean one). So to copy all the changes you need to use rsync in a far more aggressive mode, without the --update flag and with the --delete flag.
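The two modes side by side, as a sketch (paths and host names are placeholders of mine; the flags are the ones discussed above):

```python
# Safe recurring sync: -a preserves attributes, --update never overwrites a
# file that is newer on the destination, and deletions are NOT propagated.
SAFE_SYNC = ["rsync", "-a", "--update", "/srv/work/", "backup:/srv/work/"]

# Aggressive cutover sync: --delete mirrors deletions, and dropping --update
# lets newer destination files be overwritten. Run once, by hand, at cutover.
FINAL_SYNC = ["rsync", "-a", "--delete", "/srv/work/", "backup:/srv/work/"]
```

The safe form is what you would want on a timer; the aggressive form is the one that can destroy data if it ever runs against the wrong source.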

It was probably a mistake to put the aggressive rsync in a cron job; it would almost certainly have sufficed to use a less aggressive rsync in the cron job and only run the aggressive one manually for the final sync. But I can see how someone inexperienced would fail to think of that.

It was also of course a mistake not to defuse the old server when decommissioning it, ideally by BOTH disabling the cron jobs and disabling the credentials that allow the decommissioned server to talk to the active servers.


Comment Re:Fricking finally. (Score 1) 307

Normal NATs use one Internet IP:port combination for each active internal IP:port combination (activity may be determined by timeouts and/or by watching for connection closures). That means you can only have ~65K active outgoing connections per Internet IP.
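Back-of-envelope arithmetic for that ~65K figure (my illustration): a conventional NAT burns one public IP:port pair per active flow, so capacity scales linearly with the number of public IPs.

```python
# 16-bit port field, so at most 2**16 distinct ports per public IP
# (a few of those are reserved in practice).
PORTS_PER_IP = 2 ** 16

def max_active_flows(public_ips: int) -> int:
    """Upper bound on concurrent outgoing connections through a
    conventional NAT that maps one public IP:port per flow."""
    return public_ips * PORTS_PER_IP
```

So a CGN pool of, say, four public IPs tops out at roughly a quarter of a million simultaneous flows, shared across every customer behind it.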

You could build a high-ratio NAT which didn't do that. Technically, for basic connectivity to work, the source IP:port combination only needs to be unique for a given destination server (possibly even for a given port on a given destination server), but building such a thing would totally break most NAT traversal techniques and hence break things like P2P and online gaming even worse than a normal NAT does.

NAT also really doesn't help much on the server side, because people expect their services to be on well-known ports. For some services you can host multiple hostnames on the same IP, either by serving them from the same server or by using a reverse proxy, but for others that is less practical. One specific case of interest is HTTPS. Right now, for HTTPS services that matter, people want a dedicated IP because of older clients that don't support SNI, but as Windows XP and Android 2.x decline that will become less of an issue.

Comment Re:It's the end of the world as we know it! (Score 2) 307

The unusual thing about Comcast is that they are an insanely large triple-play provider with a heavy reliance on IP. Their triple-play services ended up using about 8-9 IP addresses per household*. Of these, only one (the customer's Internet device) needed to be a public IP, but Comcast's system was so damn large and IP-hungry that they ran out of space in net10 and had to start using public IPv4 addresses for internal management.
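The rough arithmetic behind running out of net10 (the 8-9 per-household figure is from the post; the household bound is my own illustration):

```python
# net10 is 10.0.0.0/8: a /8 leaves 24 host bits, about 16.8M addresses.
NET10_ADDRESSES = 2 ** 24

def households_supported(ips_per_household: int) -> int:
    """Upper bound on households addressable entirely out of net10."""
    return NET10_ADDRESSES // ips_per_household

# At 9 IPs per household, net10 tops out under 1.9 million households --
# far short of a footprint the size of Comcast's.
```

At that density the whole of RFC 1918 space barely helps, which is why the choice came down to federating the network or deploying IPv6.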

So while most non-boutique access providers were probably thinking "meh, when the IPv4 crisis hits we can keep going almost indefinitely with CGN, let's let someone else be the early adopter of IPv6", Comcast didn't have that buffer. They faced a stark choice between stopping expansion of services, federating their network**, or adopting IPv6. They chose IPv6.

That is why Comcast is so far ahead of the game on IPv6.

* http://meetings.ripe.net/ripe-...
** That is, splitting it into multiple sections to allow IP reuse and redesigning their management systems to cope with it.

Comment Re:It's the end of the world as we know it! (Score 1) 307

That part is not true. ICANN owns the address space, and their agreements state they can take some or all of it back if it isn't being used. The company I work for lost all of our /19 because they discovered we lied and had no intention of even using the space.

The big legacy assignments predate those agreements. It is much less clear legally whether ICANN and/or the RIRs have the right to reclaim legacy space than with more recent assignments.

There is also the question of how much legal power ICANN has over IP addresses in the first place. Is there actually any law saying that you should route traffic for an IP address to the organisation that ICANN says owns it? Is there any law preventing the tier 1 providers from collectively telling ICANN to go fuck themselves and setting up their own body to decide who has the right to advertise IPv4 addresses on their networks, and hence the Internet? I'm not aware of any.

And there isn't much point: reclaiming those blocks would have only slightly delayed the end of cheap and easy IPv4. Since the widespread adoption of IPv6 is highly dependent on the end of cheap and easy IPv4, I doubt reclaiming those blocks would have made much difference in the end.
