
Comment Re:What could possibly go wrong? (Score 1) 825

That was the general practice unless there were overriding concerns. Another thing was that you had to have a straight stretch of at least 1mi (I might have the distance wrong) every so often so it could be used for aircraft.

I don't have the energy to dig up my bibliography of sources but it was certainly not hard to find a lot of detail about the highway system. Quite a lot of thought went into its design, and the most interesting thing is that commerce was really a gravy side-effect.

I read an interview with one of the original architects and he said the one thing they really screwed up was running the interstates so close to cities, and having so many ramps in those areas.

Comment Re:What could possibly go wrong? (Score 1) 825

This is not selfishness; I live on the other side of the country, where there's no chance in hell of ever getting speed limits raised to 90mph. Nor do I think they should be. Rather, I think highway speed limits ought to be set at the 80th percentile. But I have spent lots of time in cars in NV and Utah and NM as a passenger, and frankly it seems that the biggest problem out there is simple highway hypnosis. It's a long $#@^ way between anything in those states. Shorter travel time can be a huge win.

Anyway, you are wrong about tighter speed limits, but that's probably because you have not looked at the literature.

What has been shown time and again is that reducing the speed differential between vehicles reduces accidents and fatalities, independent of absolute speed. Your chances of dying in a crash are higher if you're going faster, but if you can reduce the incidence of crashes it can be -- and is -- a net win. So the goal here is to reduce accident rates as much as possible.

The engineers say that the way to do this is the 80th percentile rule: you let traffic free-flow and watch how fast it goes. Set the speed limit to the 80th percentile speed, rounded up a little (to the next 5mph in the US, 10km/h elsewhere). Set minimums 10mph (20km/h) lower.
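If it helps to see the arithmetic, here is a minimal sketch of that rule, assuming you already have a spot-speed survey of free-flowing traffic; the class name and the sample numbers are made up for illustration.

    import java.util.Arrays;

    /** Toy illustration of the percentile rule described above (not a real traffic-engineering tool). */
    public class SpeedLimitSketch {
        /** Posted limit: the given percentile of free-flow speeds, rounded up to the next 5 mph. */
        static int percentileLimit(double[] freeFlowMph, double percentile) {
            double[] speeds = freeFlowMph.clone();
            Arrays.sort(speeds);
            // Nearest-rank percentile of the observed free-flow speeds.
            int rank = (int) Math.ceil(percentile / 100.0 * speeds.length) - 1;
            double p = speeds[Math.max(rank, 0)];
            return (int) (Math.ceil(p / 5.0) * 5.0);   // round up to the next 5 mph increment
        }

        public static void main(String[] args) {
            // Hypothetical spot-speed survey on a free-flowing rural interstate.
            double[] survey = {61, 63, 64, 66, 67, 68, 69, 70, 71, 73, 74, 78};
            int limit = percentileLimit(survey, 80);
            int minimum = limit - 10;                  // minimums set 10 mph lower, as above
            System.out.println("Posted limit: " + limit + " mph, minimum: " + minimum + " mph");
        }
    }

Run against that hypothetical survey it prints a 75mph limit and a 65mph minimum, which is exactly the kind of number the engineering points to below.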

The statistics say that traffic travelling 10mph faster *or* slower than average sees accident rates climb to 300% of normal. Moreover, on the slower side multi-vehicle accident rates climb 900%! Slower drivers cause a lot of accidents, and their accidents involve other people much more often.

Now let's put that into the context of a typical 55mph US highway. Average traffic speed is 67mph on those highways. Minimum limits are 45mph. That means that someone -- legally -- going at the lower limit is actually going more than 20mph too slow! Very, very dangerous, both to themselves and to everyone else. But someone going 70mph -- 15mph too high according to the law -- is statistically very safe.

Given these numbers typical interstate traffic speed limits should be 70 or 75mph, not 55 or 65mph, and minimums should be 60 or 65mph respectively. That's what the engineering says. We have, unfortunately, eschewed engineering in favor of politics.

So, we have some great data from when the NMSL was repealed and a lot of limits jumped to 65mph. The first really interesting figure is that average traffic speeds jumped -- to 69mph. This put the lie to the idea that traffic will just run at the tolerance limit of the police regardless of the posted limit. In fact, traffic tends to drive at "comfort" speeds, which unsurprisingly are somewhere near the design speed of the road.

With such a minimal increase in typical speed you wouldn't expect a large change in fatalities. There was a significant change in absolute numbers -- but not when normalized for vehicle miles traveled. Moreover, the fatality rate for the road system as a whole dropped by something like 5%. It's believed that this is because the change in highway limits made drivers prefer the safer interstates to the less safe rural highways (now that 70mph was unlikely to get you a ticket).

Anyway, I spent a while researching this stuff some time back and may even still have a bibliography buried in my archives somewhere, but I encourage you to do the research yourself. Even Wikipedia mentions this stuff; you could start there.

I note that many of these figures hold across countries. The data supports this in the US, the UK, France, and Germany at a minimum. The best studies of this are in France and Germany. (Germany is the odd man out, though; the autobahn is pretty safe even though it has severe vehicle speed differences. Driver training might have a lot to do with it. US driver training is pathetic, nigh on nonexistent, and I would not recommend autobahn-style laws here.)

Here are a couple of additional factoids for you:

- Average accident speed on non-interstates in the US is 27mph. Average accident speed including interstates is 29mph. What this means is that most accidents do not have speed as a significant factor. This is not actually terribly surprising when you go look at where high accident rates occur. It's not the highways! It's not even close. Which brings us to:

- Fewer than 10% of accidents happen on highways. The US interstate system has the safest roadways in the country by far.

So where are accidents and fatalities happening in large numbers? Surface streets, at intersections. Failure to obey traffic control devices and failure to yield are the two biggest causes. These accidents happen at relatively low speeds, but it turns out that even 30mph is a honking big hunk of energy when you're talking about a couple of two-ton vehicles colliding.
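To put a rough number on that "honking big hunk of energy", here's a back-of-the-envelope calculation using nothing but KE = 1/2 m v^2; the two-ton mass and the 30mph figure come straight from the paragraph above, everything else is unit conversion.

    /** Back-of-the-envelope kinetic energy for the low-speed intersection crash described above. */
    public class CrashEnergy {
        public static void main(String[] args) {
            double massKg = 2 * 907.2;          // "two-ton" vehicle, using 2 US short tons
            double speedMps = 30 * 0.44704;     // 30 mph in meters per second
            double kineticJoules = 0.5 * massKg * speedMps * speedMps;  // KE = 1/2 * m * v^2
            // Equivalent drop height, h = v^2 / (2g): roughly a three-story fall.
            double dropMeters = speedMps * speedMps / (2 * 9.81);
            System.out.printf("~%.0f kJ per vehicle, like dropping it from ~%.1f m%n",
                    kineticJoules / 1000, dropMeters);
        }
    }

That works out to roughly 160 kJ per vehicle, about the same as dropping the car off a three-story building, and there are two vehicles involved.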

I note that US traffic safety programs target speed almost exclusively, with almost no effort spent on either training or enforcement at intersections. This is unbelievably stupid public policy. The fine structure shows this bias too; speeding fines are large and grow very rapidly, but fines for running a red light tend to be near nominal. The red light runner is far more dangerous.

Comment Re:What could possibly go wrong? (Score 2, Insightful) 825

You've never been to Nevada, have you? 90mph is not stupid fast in much of the state. Dead flat straight roads for hundreds of miles ... That's Nevada.

As a general rule the US interstate system was designed to be safe at 75mph in 1950s military vehicles. It is no great trick to be safe at higher speeds in modern cars, particularly in a big empty state like NV. Heck, in that area 80mph limits were the norm until they passed the national speed limit.

Comment Re:Yes and no... (Score 1) 397

Manual management, when it's done properly, certainly has a smaller footprint ... but you have to balance that against the much larger chance of making errors[1] -- not just a significantly increased tendency to leak, but also serious errors like double-frees and use-after-free, and the development time spent tracking that stuff down. (To say nothing of the cost of dealing with customers when their software crashes.) In addition, GC mechanisms can have lower overall CPU costs (there are interesting optimizations available when you're doing things in bulk), but you pay for that in reduced predictability.

There's give and take, and strong reasons to pick one or the other depending on the application type. If you step back and think about it, though, there aren't that many cases where the benefits of GC are outweighed by its costs. Software using GC tends to be easier to write and much less prone to crashing, and its unpredictability is not usually in the user-perceptible (to say nothing of critical) range. Given that most of the cost of software is writing and maintaining it, anything you can do to reduce those costs is a big win.

Obviously there are cases where the tradeoffs are too expensive. Cellphones, as you point out, may be one of those -- but Android seems to be doing fine with Java as its principal runtime environment. (Honestly, your typical smartphone has way more memory and CPU than servers did not so long ago, to say nothing of the set-top boxes for which Java was originally designed.) Operating systems, realtime systems, and embedded systems are other cases.

[1] Brooks' _Mythical Man Month_ makes a strong case for development systems that optimize for reduced errors even at the cost of some performance. He was talking about assembly versus high-level languages, but (perhaps not surprisingly) the more things have changed the more they have remained the same. We have a lot of data on the development and maintenance costs of various software environments now, and costs tend to be lowest where the environment makes it harder for programmers to screw up, usually by significant margins. My experience comparing Java and C# against C++ indicated an average time-to-completion differential of 300% and a bug-count reduction of more than 90% over the long term. In some cases -- like network applications and servers -- the improved libraries found in Java and C# yielded order-of-magnitude improvements. These are numbers you can take to the bank. Even if the first generation of the software is slow relative to a language like C++, the ability to rev the software three times as fast means much faster algorithmic development. It was often the case that the Java code would outperform the C++ within three or four versions if the code was particularly complicated.

Of course C++ -- as with C before it -- is a particularly lousy language when it comes to memory management. It's a lot closer if you're using something that has, for instance, real arrays or heap validity checking. It annoys me to no end that none of the standard C++ development environments builds in debugging aids as a matter of course for non-release builds. If they exist at all they're hidden features (look at all the debug-heap features Microsoft Visual Studio will give you, if you can figure out how to turn them on), and most of the time you have to go buy expensive add-ons (like BoundsChecker or Purify) to do things that would be trivial for the compiler writer to manage with a little extra instrumentation and integration with the library system. Alas. I actually had better debugging tools at my disposal for C++ in 1994 than I do today (although Valgrind is not bad at all), and that really pisses me off given how much money Microsoft gets for the tools I use.

Comment Re:Yes and no... (Score 1) 397

There are a lot of possible answers to that. The most obvious is that you want to limit the growth of the JVM; with garbage collection there is a tendency to grow the heap without limit.

That was never a satisfactory answer to me, though, because it is not at all difficult to set up a heuristic that watches GC activity and grows the available heap when it looks like memory is tight -- and certainly to try that before throwing OutOfMemoryError! In fact, it wasn't very long before JVMs that did this started popping up. (I think Microsoft's was the first; it's pretty darn ironic that the best JVM out there when Sun sued Microsoft was actually Microsoft's, and by no small margin.)
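For what it's worth, here's a minimal sketch of the kind of heuristic I mean, using the standard Runtime memory calls; the 90% threshold is an arbitrary number picked for the example, and a real JVM would of course do this inside the collector rather than in application code.

    /** Sketch of a heap-pressure check of the sort a growable-heap JVM could apply before giving up. */
    public class HeapPressure {
        /** Returns true when the heap is nearly full relative to the configured maximum. */
        static boolean underPressure(double threshold) {
            Runtime rt = Runtime.getRuntime();
            long used = rt.totalMemory() - rt.freeMemory(); // bytes currently in use
            long max = rt.maxMemory();                      // the hard cap (e.g. from a max-heap flag)
            return (double) used / max > threshold;
        }

        public static void main(String[] args) {
            if (underPressure(0.9)) {
                // A JVM with a growable heap would expand here instead of throwing OutOfMemoryError.
                System.out.println("Heap is over 90% of its maximum; time to grow (or collect harder).");
            } else {
                System.out.println("Plenty of headroom.");
            }
        }
    }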

IMO it's an anachronism that Sun's JVMs still have a hard, fixed limit without even an option to turn it off. Larger Java applications (e.g. Eclipse, Maven, and of course the web app servers) regularly break for no other reason than running into the heap limit, even when there is plenty of memory available on the system. I find it a huge and unnecessary irritation.

Comment Re:Maybe. (Score 1) 397

It's always been the case that there are application-visible differences in JVMs. (Remember that Java is "write once, debug everywhere." It's not just a funny tag line.) You have to detect and work around them somehow, and the combination of vendor string and version is a reliable way to do that.
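The detection side is just a couple of standard system properties; what you key off them is application-specific, so the "known buggy" check below is purely hypothetical.

    /** Minimal example of identifying the JVM so vendor/version-specific workarounds can be keyed off it. */
    public class JvmIdent {
        public static void main(String[] args) {
            String vendor = System.getProperty("java.vendor");      // e.g. "Sun Microsystems Inc."
            String vmName = System.getProperty("java.vm.name");     // e.g. "Java HotSpot(TM) Client VM"
            String version = System.getProperty("java.version");    // e.g. "1.6.0_21"
            System.out.println(vendor + " / " + vmName + " / " + version);

            // Hypothetical workaround keyed off the identification.
            boolean knownBuggy = vendor.contains("Sun") && version.startsWith("1.4");
            if (knownBuggy) {
                System.out.println("Enabling workaround for a known quirk in this VM.");
            }
        }
    }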

Comment Re:Yes and no... (Score 2) 397

The application didn't do that deliberately; it was a side effect of launching the JVM with a too-small maximum heap size. Because the option to set the VM's heap size is specific to the individual JVM implementation, you can't just guess which flags to pass. When they couldn't identify the JVM they did the reasonable thing and passed no option at all; unfortunately that meant it ran out of memory.
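A sketch of the launcher's dilemma, assuming a HotSpot-style VM where -Xmx sets the maximum heap; the jar name and the detection logic are placeholders, and on an unrecognized VM the safest move really is to pass nothing.

    import java.io.IOException;

    /** Sketch of a launcher that only passes a heap flag when it recognizes the JVM (flag names vary by vendor). */
    public class Launcher {
        public static void main(String[] args) throws IOException {
            String javaExe = "java";                 // assume the JVM is on the PATH
            boolean looksLikeHotSpot = true;         // in reality: probe "java -version" output, etc.

            ProcessBuilder pb = looksLikeHotSpot
                    ? new ProcessBuilder(javaExe, "-Xmx512m", "-jar", "app.jar")  // HotSpot-style max heap
                    : new ProcessBuilder(javaExe, "-jar", "app.jar");             // unknown VM: pass no heap flag
            pb.inheritIO();
            pb.start();
        }
    }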

Comment Re:So Jobs is not a liar? (Score 3, Informative) 373

It's not just smart phones. I had a Nokia 97xx that would drop calls if held a certain way. It always seemed obvious to me that it was attenuation.

Having said that, I thought Apple was nuts to expose the metal, and had presumed originally that it was covered in clear polymer. Every schoolkid radio fan knows what happens to the signal if you grab the antenna, right? So why would you make a phone with a naked antenna?

On the other hand, I've played with a few 4s and the issue is IMO not nearly as severe as the tempest would imply, and while most people I know can reproduce the problem, several indicate that the phone works in places where previous models didn't. If I were in the market I would still buy one. I would use a case as a matter of course anyway (I put one on the 3GS immediately; have you seen what these things cost?). Not Apple's case, though; I swear Apple has no idea how to make a good case.

Comment Re:First post (Score 1) 578

Since IBM most likely wouldn't own the copyrights on that code, they wouldn't have standing to raise the issue, and so probably didn't even care to check. AT&T vs BSD was AT&T vs U of C copyrights.

That's a great point, but showing migration of code in the opposite direction sure would have been damning in front of a jury even if IBM couldn't have hoped for damages. Then again it's obvious at this point there was no need....

Comment Re:First post (Score 1) 578

Xenix would have; as I recall it was originally another V7 fork, just like BSD, and it evolved following the SysV line -- quite parallel to BSD. The last time I used it, it was pretty much SVR3, no BSD in sight.

OpenServer and UnixWare, on the other hand ... those are pure SVR4 and SVR4 owed a hell of a lot to BSD.

Comment Re:First post (Score 4, Interesting) 578

That's true, but in the push to get UNIX into the commercial space the SysV interfaces were released as an open specification. This was actually covered during the trial.

The fact of the matter was that the Linux folk didn't copy code, something that would have been obvious to any observer following its development. The idea that there were vast amounts of stolen code was ludicrous if you knew anything at all about the internal structure of the two operating systems.

There was always the possibility of code that got injected during the large commercial code donations by e.g. IBM or SGI, and in fact the only piece of code that showed actual derivation came from SGI ... but it turned out to be both a very small amount of code and buggy to boot. As soon as people got a look at it they excised it in favor of working, original code.

I personally expected it to go more the way of the AT&T versus BSD case, where it turned out that AT&T had stolen tons of code from BSD, not the other way around. The Linux emulation layer in SCO UNIX seemed a particularly likely candidate. Either that turned out not to be the case, though, or IBM simply didn't push the issue (perhaps because SCO was having so much trouble proving anything in their claims).

SCO's strategy always seemed to me to be a shakedown: scare companies into license agreements. Why they went after one of the deepest pockets first is beyond me; IBM was very likely to fight given their investment. But it was clear early on that SCO's management was not very competent.

Comment Re:e readers are insanely overpriced (Score 1) 255

You raise some good points, but your "can they read two books you bought for your e-book" objection is off base. Sure, if you only have one e-book reader; if you have one per person it becomes much more flexible. For the last few years that was an expensive proposition (although in my experience the things paid for themselves within a few months), but prices are falling fast (and have you priced bookshelves recently?).

My daughter has my old Kindle and my wife uses the Kindle reader on her phone. We can all share the same library, meaning for instance that my daughter and I can (and do) read the same book at the same time with only one purchase.

One of the big wins for us, though, is space: we have thousands of paper books already, way too many to put on shelves. The expansion slowed a lot with the influx of e-readers.

Comment Re:We are staying on XP (Score 1) 1213

2.1GHz huh? That's not a 1998 processor. The fastest Intel processor available in 1998 -- late 1998 -- was a 450MHz Pentium II Xeon. Neither Vista nor Win7 will install on anything even close to that.

It wasn't until 2001 that Intel crossed the 2GHz line, and 2002 when there was a 2.1GHz processor in their lineup. That, I think, sets the tone for analyzing the rest of your system specs.

That 1998-era 50GB drive? Umm, no. Drives in 1998 were generally in the single-digit-gigabyte range (much too small to even install Win7). Here is the announcement of a series of new machines from Dell that year:

http://news.cnet.com/Compaq%2C-Dell-ship-new-computers/2100-1001_3-212040.html

We'll get back to that announcement in a minute.

IBM released a 10G drive in July 1998:

http://www.tomshardware.com/reviews/15-years-of-hard-drive-history,1368-2.html

So that pretty much sets the upper limit of what would have been available. 50G drives were around in 2002-3, which is probably not coincidentally the same time frame as your 2.1GHz processor.

Now, the G1 mentioned in the article above was a pretty good Dell system in 1998, the kind of thing you bought to run NT4. Its maximum RAM? 256M, one quarter of what you say you installed.

I'm too lazy to go figure out at what point it was possible to buy a Dell desktop system that was expandable to 1G, but I am willing to bet it's somewhere around 2002, just like all of the other specs of your system.

So I would have to conclude that you actually installed Win7 on a 2002 or 2003 era machine, and it will run very poorly with only 1G RAM; my personal experiments showed that the systems' responsiveness was downright awful below 2G (32-bit).

Cheers!

Comment Re:We are staying on XP (Score 1) 1213

I think you misunderstand; I was talking about moving to a completely different machine. It's drop-dead simple to do.

It's true that an iMac or a mini is not very upgradeable internally, but that's more the form factor than anything else, and you can substitute newer, bigger drives internally if you like. I have generally hung FireWire drives off them instead, but YMMV.

I've personally done at least twelve full version upgrades of MacOS, including one TiBook that got full version upgrades five times. I had one problem across all of them: that TiBook had a nine-character password set back when it was new with 10.1, and it became impossible to log into when upgraded to Leopard (10.5) because it had only one account. There was an upgrade bug where passwords of more than 8 characters that had been set under 10.1 would not carry through.

It took me about 45 minutes to work out how to fix that (the obvious approach using the boot disk got me an admin account, but I still had to reset the password on the old one, and that was mildly annoying). That is around seven hours less time than the minimum I have ever spent on a Windows upgrade, and considerably less time than I had to spend trying to figure out how to get Vista Home to talk to my NAS boxen (MS changed the minimum security requirements for network shares in Vista Home for some inexplicable reason).

I am more than a little dubious about the claim of a 1998-era PC running W7. Such a machine would likely max out at 512M of RAM unless it was exotic for the time (meaning server-class hardware), and W7 wouldn't install on something that small; the CPU and graphics would not be anywhere near W7 minimums either. I got complaints installing it on what were pretty well-configured 2005-era machines, and they ran poorly even doing basic things until I put at least 2.5G of RAM in them.

In contrast, I had a 1998-era G3 clamshell Mac running Tiger (the last version that would install from a CD), and I had Leopard running on a 2001 TiBook and a 12" G4 PowerBook. The funny thing to me as I advanced from 10.1 to 10.2 to 10.3 to 10.4 and finally 10.5 was that each upgrade actually ran better on the same hardware than its predecessor *despite* having greater capabilities.

I have to compare that to Windows. XP SP2 doubled RAM requirements, and Vista quadrupled XP SP2's. Win7 didn't get much worse than Vista, but then it is not much more than a service pack to Vista. I've never seen a version of Windows that ran better on the same hardware than the one before, except for the WinNT 3.1->3.5 transition.

Snow Leopard makes a big break in that they dropped support for non-Intel Macs, which means that machines I bought expecting 6-year lifespans are only going to get 5 before I hand them off to someone else. In the world of Windows PCs I'm lucky to get much more than 2 years before I roll a machine down into the Linux server farm and get something that runs the latest Windows reasonably well.

YMMV, but in terms of longevity Apple has done very, very well in my experience. And in terms of ease of migration to new hardware, which is what I was talking about previously, they are second to none.

Comment Re:We are staying on XP (Score 1) 1213

At my day job we're still developing on XP, I think mostly because that's what our customers use. But we want to move to W7 because it's difficult to do development on machines limited to 3.2G of RAM ... and more and more customers are using it.

Regarding the pain of upgrading, I thought that too until I got a Mac. Migration to new hardware, upgrades, and disaster recovery are all really easy: plug the old machine into the new one, push a button, and 20 minutes later the new one has the complete environment of the old one (data, apps, settings, even drivers). You wouldn't believe how pissed I was at Microsoft the first time I migrated to a new Mac that way; it takes me days to get a new Windows box up and running.

Personally I want people on W7 because it is vastly more secure than XP. Maybe I will have to spend less time fixing the damn things.
