Comment: High time to stop lowering gas taxes (Score 1) 554

by Bruce Dawson (#48395491) Attached to: The Downside to Low Gas Prices

As the article mentions, inflation-adjusted gas taxes have been dropping for 21 years. That doesn't make sense. At the very least they should be returned to their 1993 levels and indexed for inflation. Roads are crowded, and a higher gas tax would relieve that by encouraging alternatives. It would also reduce pollution, reduce carbon emissions, and reduce oil imports, which hurt the balance of trade and finance people who use the money to try to kill us.

Gas taxes really are good.

As to the complaints that the gas taxes are being used to fund other things, such as bicycle paths and mass transit -- I'm not sure how true that is, but you would be foolish to fight it. Alternatives to driving are crucial. Paving huge amounts of land makes walking and biking very difficult so drivers *owe* the non-drivers a bit of help. And drivers benefit *greatly* from mass transit. With no mass transit the traffic congestion would be even worse.

And, given that drivers park free almost everywhere it is truly rich to hear drivers complaining about having to subsidize transit. The implicit subsidy that cars get through free parking is orders of magnitude greater (read "The High Cost of Free Parking" for all the details).

Comment: Re:Basic maths required. (Score 1) 239

by Bruce Dawson (#48213085) Attached to: Where Intel Processors Fail At Math (Again)

It is of course well known that, for double precision, sin(x) == x if x < 1.49e-8. They teach that in kindergarten these days.

However the article is about sin(double(pi)), and pi is actually greater than 1.49e-8. Therefore range reduction needs to be done, and that is where things go awry.

Yes, the caller could do the range reduction, but this is not trivial to do correctly and it really should be done by the sin() function. With glibc 2.19 it is.
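Both points are easy to sanity-check in Python (assuming its math.sin delegates to a libm with correct range reduction, such as glibc 2.19 or later):

```python
import math

# For tiny x, sin(x) rounds to exactly x in double precision:
# the x**3/6 term of the Taylor series is below half an ulp of x.
x = 1e-9
assert math.sin(x) == x

# For x = double(pi), range reduction matters: the correctly rounded
# answer is not 0 but roughly pi - double(pi), a tiny positive number.
print(math.sin(math.pi))   # a value near 1.22e-16, not 0
```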

Comment: Re:Exact mathematical value isn't the ideal (Score 1) 239

by Bruce Dawson (#48122013) Attached to: Where Intel Processors Fail At Math (Again)

But the parent you refer to (this comment's great grandparent?) says "Any serious calculation requires an error calculation to go with it." Sure. I can agree with that.

And that's the whole point of the article. If somebody does an error calculation based on Intel's documentation then they will have an incorrect error calculation -- in some cases grossly incorrect. So the claim that an error calculation is needed actually *supports* the article (mine, BTW), while arguing against it.

I think ledow is violently agreeing with my article, perhaps without realizing it.

Comment: Re:Article is wrong - it is documented (Score 1) 239

by Bruce Dawson (#48121995) Attached to: Where Intel Processors Fail At Math (Again)

As the other reply said, the function contract and an explanation of the implementation are different beasts. The Intel manual said that fsin is accurate across a huge range of values. Elsewhere the manual hinted at Intel's range reduction algorithm in enough detail that a numerical analyst could suspect there was a problem. So, to a numerical analyst the documentation was contradictory. To anybody else it was clear -- guaranteed one-ulp precision. The documentation was therefore at best misleading, but for most people it was just wrong. Linus Torvalds was fooled, for example.

> If you let x = pi, then people would ordinarily expect that sin (x) = 0.

Many people would expect that, although the article (my article) most certainly did not expect that.

The calculation being done in the article takes into account the fact that double(pi) is not the same as the mathematical constant pi and it uses that and the properties of sine to measure the error in double(pi). This is an unusual but perfectly reasonable calculation to make. It failed because of the limitations of fsin. Those limitations are contradictory to the documentation. Hence the article.
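A minimal sketch of that measurement (not the article's exact code), assuming a correctly range-reduced sin():

```python
import math

# Since sin(pi - eps) == sin(eps) ~= eps for tiny eps, evaluating
# sin(double(pi)) directly reveals the rounding error in double(pi).
err = math.sin(math.pi)
assert 0 < err < 1e-15   # double(pi) is slightly below the true pi
print(err)               # roughly 1.22e-16
```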

> A more precise approximation according to Wikipedia would have been...

You don't need to go to Wikipedia, you just have to RTA. It lists a 192-bit hexadecimal approximation of pi.

> The reality is that the correct result would have been zero

No. Zero is the correct answer if doing symbolic or infinite precision math, but I did not make that assumption because I was doing double-precision math.

Comment: Re:FPUs are **inherently** approximations (Score 1) 239

by Bruce Dawson (#48117395) Attached to: Where Intel Processors Fail At Math (Again)

There's an alternate test you can run on the Windows 7 calculator to show that its math is not implemented on the FPU. Calculate the square root of four, then subtract two. You should get zero, but you don't.

That is an error that no FPU would make. The IEEE standard requires a correctly rounded result for square root, and the Windows 7 calculator fails to deliver that on even very simple inputs like four. The Windows 7 calculator does its calculations to more digits of precision than double precision, but it doesn't do its calculations accurately. Oops.
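You can see the IEEE guarantee in action from any language whose sqrt goes through the FPU or a conforming libm, Python included:

```python
import math

# IEEE 754 requires square root to be correctly rounded, so perfect
# squares come out exact and the subtraction yields exactly zero.
assert math.sqrt(4.0) - 2.0 == 0.0
assert math.sqrt(9.0) == 3.0
```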

Comment: Re:example from TFA. try it (Score 1) 239

by Bruce Dawson (#48117367) Attached to: Where Intel Processors Fail At Math (Again)

How would you like that example number to be given? In hexadecimal for maximum readability? Giving it in decimal is necessary for communicating the issue.

But the reality is that that number can show up during calculations. Intel promises to calculate its tangent to a specific precision, but Intel's documentation is incorrect. That is the problem -- hugely misleading documentation.

The issue of not being able to fully specify a particular real number is actually crucial to the article, but in a different way. The example is sin(double(pi)), what it should be, what that should allow, and how that fails. You should read the article. I think it's excellent.

Comment: Re:Exact mathematical value isn't the ideal (Score 1) 239

by Bruce Dawson (#48117331) Attached to: Where Intel Processors Fail At Math (Again)

> Any serious calculation requires an error calculation to go with it.

Sure. That sounds good. And in order to make that error calculation you need to consult the documentation to know how accurate the instructions you are using are. Let me see -- Intel says that fsin is accurate to one ULP. That's sufficient. Error calculation done.

That's why the inaccuracies in their documentation matter.

> I'll tell you now that I wouldn't rely on a FPU instruction to be anywhere near accurate.

Really. Well that seems foolish. As required by the IEEE standard the x87 FPU supplies correctly rounded results for add, subtract, multiply, divide, and square root. At double and long-double precision you can't do better. Those can be composed into higher-level functions with well defined accuracy if you know what you are doing.

It's funny that there is one group of people asking when this tiny error could even matter, and another group saying that even without this error the precision is insufficient. You guys should talk.

Comment: He measured single-threaded compiles?!!! (Score 2) 132

by Bruce Dawson (#45535705) Attached to: Speed Test 2: Comparing C++ Compilers On WIndows

Visual C++ has this handy /MP option which tells the compiler to do multi-threaded compiles. On some of our build machines (with 16 cores) this gives an almost linear increase in build speeds. It's obvious from the author's discussion of multi-core that he is not aware of this option and did not use it.

A performance benchmark which doesn't turn on the go-fast option is not going to produce meaningful results.
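For reference, a typical parallel-compile invocation looks like this (a sketch; the number after /MP is optional and defaults to the number of logical processors):

```shell
rem Compile all sources with up to 8 parallel cl.exe processes
cl /MP8 /c /O2 *.cpp
```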

The author also doesn't discuss debug symbols. VC++ generates debug symbols by default, whereas the other compilers do not. Generating builds without symbols is not a reasonable scenario for most builds, so this makes the file size comparisons rather meaningless.

Comment: Re:iTunes (Score 1) 519

by Bruce Dawson (#43732131) Attached to: iTunes: Still Slowing Down Windows PCs After All These Years
Oh yeah -- I forgot that I blogged about this iTunes annoyance. Luckily I found a way to delete orphaned references to moved or deleted music files, but it really shouldn't have been necessary. Here's the post:

Comment: Re:iTunes (Score 1) 519

by Bruce Dawson (#43732089) Attached to: iTunes: Still Slowing Down Windows PCs After All These Years
There are many ways that I might want to add media to or remove media from my computer. I might use explorer to copy them, SyncToy to synchronize with another machine, use the command prompt to delete files, etc. iTunes could detect all of this -- it's not hard -- but it doesn't. It forces users to manually rescan their music folders in order to find new files. Doing this scan often leads to duplicate listings of files, and it fails to detect when files have been deleted. It's lousy.

Comment: Re:iTunes (Score 1) 519

by Bruce Dawson (#43729113) Attached to: iTunes: Still Slowing Down Windows PCs After All These Years
Why would I want to put my media in that particular folder? I should be able to put my media anywhere in the music library and have my music player figure it out. Zune and Windows Media Player do this fine, and equally importantly they notice when files have gone away and remove them from their catalog. Handy. iTunes doesn't. Zune and Windows Media Player have other problems of course...

Comment: Re:iTunes (Score 2) 519

by Bruce Dawson (#43728239) Attached to: iTunes: Still Slowing Down Windows PCs After All These Years
iTunes scans your folders for new files periodically? First of all, I have never seen it do that. It never notices when music files have been added or deleted. That is probably its biggest weakness compared to other music players. Second, if iTunes did want to stay synchronized with what was on the hard drive (crazy idea) then directory notifications are a far more efficient way of doing that.

Comment: "twice the perf" misses the point (Score 1) 405

by Bruce Dawson (#41374733) Attached to: Are SSDs Finally Worth the Money?

> twice the performance

This comparison misses the point about SSDs. Yes, SSDs may have somewhat better bandwidth, and may improve startup times slightly, but that is not their advantage. They have awesomely better seek times, which makes some operations hundreds of times faster. Putting Visual Studio's .sdf files on an SSD avoids lots of VS 2010 hangs.

This blog post I wrote discusses the random I/Os to the Windows Live Photo Gallery SQL database at startup. On my photo collection I see 5,000 random disk I/Os, which are painful on a laptop HDD but would be a non-issue on an SSD:

In situations like this an SSD is probably a *hundred* times faster than an HDD. Database accesses seem to be a common scenario where an SSD is worth its weight in gold.

In short, if an SSD is only twice as fast then it's not worthwhile. If it's ten to a hundred times faster, then hell ya.
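You can get a rough feel for the seek-time effect with a harness like this (a sketch; the file name, file size, and read count are arbitrary choices, and OS caching will hide real disk latency unless the file is large or the cache is cold):

```python
import os
import random
import time

def random_read_seconds(path, reads=1000, block=4096):
    """Time `reads` random 4 KiB reads from an existing file."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        start = time.perf_counter()
        for _ in range(reads):
            f.seek(random.randrange(0, max(1, size - block)))
            f.read(block)
        return time.perf_counter() - start

# Demonstrate the harness on a scratch file (cached, so this shows
# the mechanism rather than true HDD vs. SSD seek latency).
with open("scratch.bin", "wb") as f:
    f.write(os.urandom(4 * 1024 * 1024))
print(random_read_seconds("scratch.bin"))
os.remove("scratch.bin")
```

On a laptop HDD the same random-read pattern against a cold, multi-gigabyte file is where the hundredfold gap shows up.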

Comment: Re:Not likely (Score 1) 3

by Bruce Dawson (#35499716) Attached to: LED lamps less efficient due to poor design
Yep. It's actually a trivial problem to avoid in this case. You just need to locate the transformer downstream of the switch instead of upstream. That's it. That's all it takes to achieve perfect efficiency when turned off. This implementation is equivalent to manually unplugging the light when it isn't in use, putting it on a switched socket, etc. However the consumer shouldn't have to do these hacks -- the lamp should turn off fully. It's easy. Whether it's likely depends on whether we demand it.

"How to make a million dollars: First, get a million dollars." -- Steve Martin