
Comment Re:wrong OS? NO! Wrong QUESTION! (Score 1) 1348

There are actually several fairly decent image editors on the web now (there weren't any even a year ago), like pixlr.com. I'm not uninstalling my copy of Photoshop any time soon, for lots of reasons, but with every passing day these programs get closer in functionality, and for a whole lot of uses they're already there.

Regardless, I think content creation is going to need a PC or something like a PC for a good long time to come. The combination of high-bandwidth, precise input (keyboard[1], Wacom tablet) and raw horsepower is enough to take something like an iPad out of the picture completely for a lot of things. Of course, not very many people actually *do* those things, and for some very common tasks -- like constructing a presentation -- an iPad could well be superior. (I've done it with Keynote; it's *almost* there, but several UI annoyances are big enough to send me back to the desktop. I could totally see using it exclusively with a few UI tweaks, though, and in some cases it's already a lot better.)

[1] Of course, you can get a keyboard for an iPad if you want one. That kind of negates the beauty of the device if you ask me, though.

Comment Re:3 Menu Clicks (Score 1) 403

This is pretty similar to the position I'm in. I skipped the HTPC in favor of a Tivo, even though it was more limited, because every experiment I did with HTPCs ended with spending a ton of money to get something pretty fragile. OTA recording worked great, video capture through a cable box worked fine, but HDTV pretty much nuked it. The Tivo was a far better solution, both in that it works (and my wife loves it) and that it wasn't all that expensive over the long term.

Still, the Tivo has been a long-term disappointment. It does the DVR thing brilliantly, but it was obvious to me right from the outset that it could be the center of the AV stack if they put a little effort into it. But they didn't! And every new feature they add leverages Tivo's servers, which are so underprovisioned that you can grow old waiting for a keypress to be acknowledged. I will continue to use the Tivo until I find the cable connection to be redundant (5 years out, I bet), but I can already see its end of life.

$100 for an Apple TV is so cheap, and the interface so clean, that it's worth a shot just to see what it's like. Heck, I'd do it just for Netflix streaming. I will almost certainly buy one before Christmas, as soon as I get around to getting an HDMI switcher so I have an input to hook it up to.

I was surprised that the Logitech Google TV was $300; I expected closer to $200 based on the specs. I think that's going to be a hard sell in a recession economy (I'm certainly not lining up to buy one just yet, and I'm very gadget-happy). My gut call is that Apple has the right idea: a dirt-cheap platform that tries to do a few things very well. If they manage to get enough TV content I would drop my cable subscription in a heartbeat, but even without it, access to my iTunes database through my AV stack is worth $100. It's "good enough."

And I think my rationalizations come to the heart of the marketing problem with these devices: None of these will go mainstream without something in addition to TV, if only because none of them will have enough content to seriously compete with cable. There must be Something Else.

I think we're going to see the iTunes store for the Apple TV within the next year and they're going to push gaming hard. If they do that I could totally see this device doing a Wii and selling a gazillion units as a cheap little game platform. (Seriously, $5 games on my TV? I would totally do that.) Apple already proved they can sell iPod touches on that model very effectively. If they get that kind of volume the TV content people will sit up and notice. I wonder if that wasn't Jobs' game plan from the start of the Apple TV reboot.

Of course, Google TV could do the same thing (and I know they're talking about it). $300 though ... that is not a "take a chance" price, and despite huge gains they still don't have anything like the developer infrastructure of the iPhone/iPod touch to leverage.

No matter which way it goes I guess we will get cool new gadgets though, so bring it on :-).

Comment Re:What could possibly go wrong? (Score 1) 825

That was the general practice unless there were overriding concerns. Another thing was that you had to have a straight stretch of at least 1mi (I might have the distance wrong) every so often so the road could be used by aircraft.

I don't have the energy to dig up my bibliography of sources, but it was certainly not hard to find a lot of detail about the highway system. Quite a lot of thought went into its design, and the most interesting thing is that commerce was really just gravy, a side effect.

I read an interview with one of the original architects and he said the one thing they really screwed up was running the interstates so close to cities, and having so many ramps in those areas.

Comment Re:What could possibly go wrong? (Score 1) 825

This is not selfishness: I live on the other side of the country, where there's no chance in hell of speed limits ever getting raised to 90mph. Nor do I think they should be. Rather, I think highway speed limits ought to be set at the 85th percentile speed. But I have spent lots of time in cars in NV, Utah, and NM as a passenger, and frankly it seems that the biggest problem out there is simple highway hypnosis. It's a long $#@^ way between anything in those states. Shorter travel time can be a huge win.

Anyway, you are wrong about tighter speed limits, but that's probably because you have not looked at the literature.

What has been shown time and again is that reducing the speed differential between vehicles reduces accidents and fatalities, independent of actual speed. Your chances of dying in a crash are higher if you're going faster, but if you can reduce the incidence of crashes it can be -- and is -- a net win. So the goal here is to reduce accident rates as much as possible.

The engineers say that the way to do this is the 85th percentile rule: you let traffic free-flow and watch how fast it goes, then set the speed limit to the 85th percentile speed, rounded up a little (to the next 5mph in the US, 10 km/h elsewhere). Set minimums about 10mph (20 km/h) lower.
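
For concreteness, here's a minimal Java sketch of that rule. The free-flow speed samples are invented for illustration, and the 5mph rounding and 10mph minimum offset are just the figures from this comment, not anything pulled from a real traffic study.

    import java.util.Arrays;

    // Minimal sketch of the percentile rule described above.
    public class SpeedLimitSketch {
        // Value at the given percentile (0-100) of the samples, nearest-rank method.
        static double percentile(double[] samples, double pct) {
            double[] sorted = samples.clone();
            Arrays.sort(sorted);
            int index = (int) Math.ceil(pct / 100.0 * sorted.length) - 1;
            return sorted[Math.max(0, Math.min(index, sorted.length - 1))];
        }

        public static void main(String[] args) {
            // Hypothetical free-flow speed observations, in mph.
            double[] observedMph = {61, 63, 64, 65, 66, 67, 67, 68, 69, 70, 71, 72, 74, 78};

            double p85 = percentile(observedMph, 85);        // 85th percentile free-flow speed
            int limit = (int) (Math.ceil(p85 / 5.0) * 5);    // round up to the next 5mph
            int minimum = limit - 10;                        // minimum limit 10mph lower

            System.out.printf("85th percentile %.0fmph -> limit %dmph, minimum %dmph%n",
                    p85, limit, minimum);
        }
    }

With those made-up samples it works out to a 75mph limit and a 65mph minimum.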

The statistics say that traffic travelling 10mph faster *or* slower than average sees accident rates climb to 300% of normal. Moreover, the slower side sees multi-vehicle accident rates climb to 900% of normal! Slower drivers cause a lot of accidents, and their accidents involve other people much more often.

Now let's put that into the context of a typical 55mph US highway. Average traffic speed is 67mph on those highways. Minimum limits are 45mph. That means that someone -- legally -- going at the lower limit is actually going more than 20mph too slow! Very, very dangerous, both to themselves and to everyone else. But someone going 70mph -- 15mph too high according to the law -- is statistically very safe.

Given these numbers typical interstate traffic speed limits should be 70 or 75mph, not 55 or 65mph, and minimums should be 60 or 65mph respectively. That's what the engineering says. We have, unfortunately, eschewed engineering in favor of politics.

We have some great data from when the NMSL (the National Maximum Speed Law) was repealed and a lot of limits jumped to 65mph. The first really interesting figure is that average traffic speeds jumped -- to 69mph. That gives the lie to the idea that traffic will simply run at the tolerance limit of the police regardless of the speed limit. In fact, traffic tends to drive at "comfort" speeds, which unsurprisingly are somewhere near the design speed of the road.

With such a minimal increase in typical speed you wouldn't expect a large change in fatalities. There was a significant change in absolute terms -- but not when normalized for vehicle miles traveled. Moreover, the fatality rate for the road system as a whole dropped by something like 5%. It's believed that this is because the change in highway limits made drivers prefer the safer interstates to the less safe rural highways (now that 70mph was unlikely to get you a ticket).

Anyway, I spent a while researching this stuff some time back and may even still have a bibliography buried in my archives somewhere, but I encourage you to do the research yourself. Even Wikipedia mentions this stuff; you could start there.

I note that many of these figures hold across countries. The data supports this in the US, the UK, France, and Germany at a minimum; the best studies are the French and German ones. (Germany is the odd man out, though; the autobahn is pretty safe even though it has severe speed differences between vehicles, and driver training might have a lot to do with that. US driver training is pathetic, nigh on nonexistent, and I would not recommend autobahn-style laws here.)

Here are a couple of additional factoids for you:

- Average accident speed on non-interstates in the US is 27mph. Average accident speed including interstates is 29mph. What this means is that most accidents do not have speed as a significant factor. This is not actually terribly surprising when you go look at where high accident rates occur. It's not the highways! It's not even close. Which brings us to:

- Fewer than 10% of accidents happen on highways. The US interstate system has the safest roadways in the country by far.

So where are accidents and fatalities happening in large numbers? Surface streets, at intersections. Failure to obey traffic control devices and failure to yield are the two biggest causes. These accidents happen at relatively low speeds, but it turns out that even 30mph represents a honking big chunk of kinetic energy when you're talking about a couple of two-ton vehicles colliding.
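
To put a rough number on that (my own back-of-the-envelope arithmetic, assuming a "two-ton" vehicle is about 2,000kg):

    // Kinetic energy of one vehicle at a typical intersection-collision speed.
    public class CrashEnergy {
        public static void main(String[] args) {
            double massKg = 2000.0;                  // assumed ~two-ton vehicle
            double speedMph = 30.0;
            double speedMs = speedMph * 0.44704;     // mph -> m/s
            double kineticJ = 0.5 * massKg * speedMs * speedMs;
            System.out.printf("~%.0f kJ per vehicle at %.0fmph%n",
                    kineticJ / 1000.0, speedMph);    // roughly 180 kJ
        }
    }

That's roughly 180kJ per vehicle -- on the order of dropping the same car off a three-story building -- so "low speed" is relative.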

I note that US traffic safety programs target speed almost exclusively, with almost no effort spent on either training or enforcement at intersections. This is unbelievably stupid public policy. The fine structure shows this bias too; speeding fines are large and grow very rapidly, but fines for running a red light tend to be near nominal. The red light runner is far more dangerous.

Comment Re:What could possibly go wrong? (Score 2, Insightful) 825

You've never been to Nevada, have you? 90mph is not stupid fast in much of the state. Dead flat straight roads for hundreds of miles ... That's Nevada.

As a general rule the US interstate system was designed to be safe at 75mph in 1950s military vehicles. It is no great trick to be safe at higher speeds in modern cars, particularly in a big empty state like NV. Heck, in that area 80mph limits were the norm until they passed the national speed limit.

Comment Re:Yes and no... (Score 1) 397

Manual memory management, when it's done properly, certainly has a smaller footprint ... but you have to balance that against the much larger chance of making errors[1] -- not just a significantly increased tendency to leak, but also serious errors like double-frees and use-after-free, and the development time spent tracking that stuff down. (To say nothing of the cost of dealing with customers when their software crashes.) In addition, GC mechanisms can have lower overall CPU cost -- there are interesting optimizations available when you're doing things in bulk -- but you pay for that with less predictability.

There's give and take, and strong reasons to pick one or the other depending on the application type. If you step back and think about it, though, there aren't that many cases where the benefits of GC are outweighed by its costs. Software using GC tends to be easier to write and much less prone to crashing, and the unpredictability is not usually in the user-perceptible (to say nothing of critical) range. Given that most of the cost of software is in writing and maintaining it, anything you can do to drive those costs down is a big win.

Obviously there are cases where the tradeoffs are too expensive. Cellphones, as you point out, may be one of those -- but Android seems to be doing fine with Java as its principal runtime environment. (Honestly, your typical smartphone has way more memory and CPU than servers did not so long ago, to say nothing of the set-top boxes for which Java was originally designed.) Operating systems, realtime systems, and embedded systems are other cases.

[1] Brooks' _The Mythical Man-Month_ makes a strong case for development systems that optimize for reduced errors even at the cost of some performance. He was talking about assembly versus high-level languages, but (perhaps not surprisingly) the more things have changed the more they have remained the same. We now have a lot of data on the development and maintenance costs of various software environments, and costs tend to be lowest where the environment makes it harder for programmers to screw up -- usually by significant margins. My experience comparing Java and C# against C++ indicated an average time-to-completion differential of 300%, and a bug count reduction of more than 90%, over the long term. In some cases -- like network applications and servers -- the improved libraries found in Java and C# yielded order-of-magnitude improvements over C++. These are numbers you can take to the bank. Even if the first generation of the software is slow relative to a language like C++, the ability to rev the software three times as fast means much faster algorithmic development. It was often the case that the Java code would outperform the C++ within three or four versions if the code was particularly complicated.

Of course C++ -- as with C before it -- is a particularly lousy language when it comes to memory management. It's a lot closer if you're using something that has, for instance, real arrays or heap validity checking. It annoys me to no end that none of the standard C++ development environments builds in debugging aids as a matter of course on non-release builds. If they exist at all they're hidden features (look at all the debug-heap features Microsoft Visual Studio will give you, if you can figure out how to turn them on), and most of the time you have to go buy expensive add-ons (like BoundsChecker or Purify) to do things that would be trivial for the compiler writer to provide with a little extra instrumentation and integration with the library system. Alas. I actually had better debugging tools at my disposal for C++ in 1994 than I do today (although Valgrind is not bad at all), and that really pisses me off given how much money Microsoft gets for the tools I use.

Comment Re:Yes and no... (Score 1) 397

There are a lot of possible answers to that. The most obvious is that you want to limit the growth of the JVM; with garbage collection there is a tendency for the heap to grow without limit.

That was never a satisfactory answer to me, though, because it is not at all difficult to set up a heuristic that watches GC activity and grows the available heap when it looks like memory is tight -- and certainly to try that before throwing OutOfMemoryError! In fact, it wasn't very long before JVMs that did this started popping up (Microsoft's was the first, I believe; it's pretty darn ironic that the best JVM out there when Sun sued Microsoft was actually Microsoft's, and by no small margin).
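
To be clear about the kind of heuristic I mean, here's an illustrative sketch using the standard java.lang.management beans. In a real JVM this logic would live inside the collector itself, where it could actually resize the heap rather than just report; the 20% and 90% thresholds are numbers I made up for the example.

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    // Illustrative heuristic: if a lot of recent wall-clock time went into GC
    // while the heap is nearly at its cap, recommend growing the cap instead
    // of throwing OutOfMemoryError.
    public class HeapGrowthHeuristic {
        private long lastGcMillis = 0;
        private long lastSampleMillis = System.currentTimeMillis();

        boolean shouldGrowHeap() {
            long gcMillis = 0;
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                gcMillis += Math.max(0, gc.getCollectionTime());   // -1 means "undefined"
            }
            long now = System.currentTimeMillis();
            double gcFraction = (double) (gcMillis - lastGcMillis)
                    / Math.max(1, now - lastSampleMillis);
            lastGcMillis = gcMillis;
            lastSampleMillis = now;

            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            long max = heap.getMax();
            if (max <= 0) {
                return false;                                      // no hard cap configured
            }
            double usedFraction = (double) heap.getUsed() / max;

            // Made-up thresholds: >20% of wall time in GC and >90% of the cap in use.
            return gcFraction > 0.20 && usedFraction > 0.90;
        }
    }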

IMO it's an anachronism that Sun's JVMs still have the hard fixed limit without even the choice to turn it off. Larger Java applications (e.g. Eclipse, Maven, and of course the web app servers) regularly break for no other reason than running into heap limits even when there is plenty of memory available on the system. I find it a huge and unnecessary irritation.

Comment Re:Maybe. (Score 1) 397

It's always been the case that there are application-visible differences in JVMs. (Remember that Java is "write once, debug everywhere." It's not just a funny tag line.) You have to detect and work around them somehow and the combination of vendor string and version is a reliable way to do that.
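
For anyone who hasn't done it: the vendor string and version I'm talking about are ordinary system properties, readable like this.

    // Prints the standard JVM identification properties; the values vary by vendor.
    public class JvmFingerprint {
        public static void main(String[] args) {
            System.out.println("java.vendor     = " + System.getProperty("java.vendor"));
            System.out.println("java.version    = " + System.getProperty("java.version"));
            System.out.println("java.vm.name    = " + System.getProperty("java.vm.name"));
            System.out.println("java.vm.version = " + System.getProperty("java.vm.version"));
        }
    }

Keying workarounds off those strings is ugly, but it works.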

Comment Re:Yes and no... (Score 2) 397

The application didn't do that deliberately; it was a side effect of launching the JVM with a too-small maximum heap size. Because the option to change the VM size is specific to the individual JVM implementation, you can't just guess which flags to pass. They did the reasonable thing when they couldn't identify the JVM and passed no option at all; unfortunately that meant it ran out of memory.
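
Something like the following is what I imagine the launcher doing. The vendor check and the 512m figure are purely illustrative assumptions, and app.jar is a stand-in for the real application.

    import java.util.ArrayList;
    import java.util.List;

    // Sketch of a launcher that only passes a heap flag when it recognizes the
    // JVM, since the flag's spelling is implementation-specific.
    public class LauncherSketch {
        static List<String> buildCommand(String vendor) {
            List<String> cmd = new ArrayList<String>();
            cmd.add("java");
            // -Xmx is the Sun/HotSpot-style spelling; when the vendor is unknown,
            // the safest choice is to pass nothing and accept the default heap.
            if (vendor != null && (vendor.contains("Sun") || vendor.contains("Oracle"))) {
                cmd.add("-Xmx512m");        // illustrative figure
            }
            cmd.add("-jar");
            cmd.add("app.jar");             // hypothetical application jar
            return cmd;
        }

        public static void main(String[] args) throws Exception {
            new ProcessBuilder(buildCommand(System.getProperty("java.vendor"))).start();
        }
    }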

Comment Re:So Jobs is not a liar? (Score 3, Informative) 373

It's not just smartphones. I had a Nokia 97xx that would drop calls if held a certain way. It always seemed obvious to me that it was attenuation.

Having said that, I thought Apple was nuts to expose the metal, and had originally presumed that it was covered in clear polymer. Every schoolkid radio fan knows what happens to the signal if you grab the antenna, right? So why would you make a phone with a naked antenna?

On the other hand, I've played with a few iPhone 4s and the issue is IMO not nearly as severe as the tempest would imply, and while most people I know can reproduce the problem, several report that the phone works in places where previous models didn't. If I were in the market I would still buy one. I would use a case as a matter of course anyway (I put one on the 3GS immediately; have you seen what these things cost?). Not Apple's case, though; I swear Apple has no idea how to make a good case.

Comment Re:First post (Score 1) 578

Since IBM most likely wouldn't own the copyrights on that code, they wouldn't have standing to raise the issue, and so probably didn't even care to check. AT&T vs BSD was AT&T vs U of C copyrights.

That's a great point, but showing migration of code in the opposite direction sure would have been damning in front of a jury even if IBM couldn't have hoped for damages. Then again it's obvious at this point there was no need....

Comment Re:First post (Score 1) 578

Xenix would have; as I recall it was originally another V7 fork, just like BSD, and it evolved along the SysV line -- quite parallel to BSD. The last time I used it, it was pretty much SVR3, with no BSD in sight.

OpenServer and UnixWare, on the other hand ... those are pure SVR4 and SVR4 owed a hell of a lot to BSD.

Comment Re:First post (Score 4, Interesting) 578

That's true, but in the push to get UNIX into the commercial space the SysV interfaces were released as an open specification. This was actually covered during the trial.

The fact of the matter was that the Linux folk didn't copy code, something that would have been obvious to any observer following its development. The idea that there were vast amounts of stolen code was ludicrous if you knew anything at all about the internal structure of the two operating systems.

There was always the possibility of code getting injected during the large commercial code donations by e.g. IBM or SGI, and in fact the only piece of code that showed actual derivation came from SGI ... but it turned out to be both a very small amount of code and buggy to boot. As soon as people got a look at it they excised it in favor of working, original code.

I personally expected it to go more the way of the AT&T versus BSD case, where it turned out that AT&T had stolen tons of code from BSD, not the other way around. The Linux emulation layer in SCO UNIX seemed a particularly likely candidate. Either that turned out not to be the case, or IBM simply didn't push the issue (perhaps because SCO was having so much trouble proving anything in its claims).

SCO's strategy always seemed to me to be a shakedown: scare companies into license agreements. Why they went after one of the deepest pockets first is beyond me -- IBM was very likely to fight, given its investment -- but it was clear early on that SCO's management was not very competent.

Comment Re:e readers are insanely overpriced (Score 1) 255

You raise some good points, but your "can they read two books you bought for your e-book" argument is off base. Sure, if you only have one e-book reader; if you have one per person it becomes much more flexible. For the last few years that was an expensive proposition (although in my experience the things paid for themselves in a few months), but prices are falling fast (and have you priced bookshelves recently?).

My daughter has my old Kindle and my wife uses the Kindle reader on her phone. We can all share the same library, meaning for instance that my daughter and I can (and do) read the same book at the same time with only one purchase.

One of the big wins for us, though, is space: we have thousands of paper books already, way too many to put on shelves. The expansion slowed a lot with the influx of e-readers.

Comment Re:We are staying on XP (Score 1) 1213

2.1GHz huh? That's not a 1998 processor. The fastest Intel processor available in 1998 -- late 1998 -- was a 450MHz Pentium II Xeon. Neither Vista nor Win7 will install on anything even close to that.

It wasn't until 2001 that Intel crossed the 2GHz line, and 2002 when there was a 2.1GHz processor in their lineup. That, I think, sets the tone for analyzing the rest of your system specs.

That 1998-era 50GB drive? Umm, no. Drives in the 1998 time frame were generally in the single-digit gigabyte range (much too small to even install Win7 on). Here is the announcement of a series of new machines from Dell that year:

http://news.cnet.com/Compaq%2C-Dell-ship-new-computers/2100-1001_3-212040.html

We'll get back to that announcement in a minute.

IBM released a 10G drive in July 1998:

http://www.tomshardware.com/reviews/15-years-of-hard-drive-history,1368-2.html

So that pretty much sets the upper limit of what would have been available. 50G drives were around in 2002-3, which is probably not coincidentally the same time frame as your 2.1GHz processor.

Now, the G1 mentioned in the article above was a pretty good Dell system in 1998, the kind of thing you bought to run NT4. Its maximum RAM? 256M, one quarter of what you say you installed.

I'm too lazy to go figure out at what point it was possible to buy a Dell desktop system that was expandable to 1G, but I am willing to bet it's somewhere around 2002, just like all of the other specs of your system.

So I have to conclude that you actually installed Win7 on a 2002- or 2003-era machine, and it will run very poorly with only 1G of RAM; my personal experiments showed that system responsiveness was downright awful below 2G (32-bit).

Cheers!
