
Comment Ten hundred, not thousand (Score 1) 90

Please stop describing this book as "using only the thousand most common English words". The word 'thousand' is not one of the thousand most common English words, which is why Randall describes the book as "using only the ten hundred most common English words". Missing that detail is practically missing the entire point.

Comment No, those slowdowns are not normal (Score 2) 517

I've been running the same Windows install on my laptop for 4.5 years and it still feels quite fast to me. I installed an SSD last year, which obviously helps a lot. Prior to that there was the predictable delay whenever I launched a program that I hadn't run for a while (that wasn't in the disk cache), and now I don't even have that. I have *lots* of programs installed, but I see none of the sluggishness which you describe.

A noticeable slowdown after just four weeks is unusual; it is not normal.

The problem with your report is that it is hopelessly vague. What is slow? Launching programs? Running programs? Poor frame rate in some games?

Do you have enough memory? Do you have enough CPU cores?

Three possibilities come to mind:
1) You don't have enough RAM. If so (if Task Manager doesn't show several GB available at all times) then get more.
2) Your CPU is overheating. While doing performance investigations for Valve I found that a lot of game slowdowns were caused by thermal throttling: https://randomascii.wordpress....
3) Something else is wasting CPU or memory. When I did hit sluggishness a few years ago I investigated and found the buggy device driver that was clearing the system disk cache: https://randomascii.wordpress....

So no, it's definitely not normal. To figure out what is going on you need to monitor specific details about your system in order to find and fix the root cause. 'Slow/sluggish' is not an actionable bug report.
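If you want an actual number for the memory question rather than eyeballing Task Manager, a few lines of code will log it. Here is a minimal sketch using the Win32 GlobalMemoryStatusEx call (an illustration only, not a complete monitoring tool):

#include <windows.h>
#include <cstdio>

int main() {
    MEMORYSTATUSEX status = {};
    status.dwLength = sizeof(status);
    if (GlobalMemoryStatusEx(&status)) {
        // dwMemoryLoad is the percentage of physical memory currently in use.
        printf("Memory load: %lu%%, available: %.2f GB of %.2f GB\n",
               status.dwMemoryLoad,
               status.ullAvailPhys / (1024.0 * 1024.0 * 1024.0),
               status.ullTotalPhys / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}

Run it (or just watch Task Manager) while the machine feels sluggish; if available memory is consistently low then more RAM is the fix.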

Comment False advertising, IMHO (Score 1) 85

If I buy a computer with a CPU that is rated at X GHz then that CPU had better be able to maintain that frequency, always. Otherwise it's a meaningless number. CPUs can already overclock themselves above that frequency (Turbo Boost), so if they can also legitimately underclock themselves then the 'rated frequency' is completely meaningless. I don't think that is acceptable. I encourage all Slashdot readers to test their new computers under load and, if they cannot maintain their rated frequency, RETURN THEM! Or better yet, file a formal complaint for false advertising or fraud and then return them.

I blogged about this a while ago and I think the problem has only gotten worse. Lots of consumers are getting a crap experience because of insufficient cooling, manufacturers are selling rigs that can't do what they promise, and software developers waste time dealing with complaints about slow games/etc.
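If you want to check your own machine without staring at monitoring tools, one crude approach is to time a fixed chunk of CPU work repeatedly and see whether later iterations get slower as the machine heats up. A rough sketch (illustrative only, and single-threaded -- run one copy per core to really load the package):

#include <chrono>
#include <cmath>
#include <cstdio>

int main() {
    volatile double sink = 0.0;
    for (int iteration = 0; iteration < 60; ++iteration) {
        auto start = std::chrono::steady_clock::now();
        // A fixed amount of floating-point work per iteration.
        double sum = 0.0;
        for (int i = 0; i < 20000000; ++i)
            sum += std::sqrt(static_cast<double>(i));
        sink = sum;
        std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;
        // If these times climb noticeably after the first minute or two, the CPU
        // is probably throttling under sustained load.
        printf("Iteration %d: %.3f s\n", iteration, elapsed.count());
    }
    return 0;
}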


Comment Re:Not that easy to see (Score 1) 53

> who've either spent thousands on astrological equipment

Well there's your problem -- you should have been focusing on telescopes instead of the Zodiac.

I've got a 6" Dobsonian telescope -- not terrifically expensive, under $1,000 I'm sure -- and I've enjoyed Jupiter moon transits before. It's no Hubble, but I enjoy it.

Comment Re:Almost, but not really (Score 1) 61

> the human eye has difficulty seeing more than 60hz.

Not true, and a broad claim like that conflates several different things. Flicker fusion can require 85 Hz to avoid headaches for some people (especially with the low persistence needed for non-blurry VR), and motion continues to look smoother up to at least 120 Hz.

In addition, lower frame rates generally mean increased latency, and latency is probably the biggest cause of VR nausea.

But don't take my word for it. This blog post does a great job of summarizing the latest research on the topic:

I have no idea what cheap CPUs and server I/O have to do with motion tracking, but tracking a single point (translation and rotation) is exactly what is needed for VR -- that point is the user's head, and tracking it with low latency is what makes VR work.

> The only difference between now and 20 years ago...

is everything. The technology is orders of magnitude cheaper and more capable.

Comment Re:C versus Assembly Language (Score 1) 226

What about the add with carry? That's the particularly hairy bit. Even if clang/gcc/VC++ recognize the pattern and turn it into optimal code, add with carry is a case where assembly language is cleaner and more elegant than the equivalent high-level language code.

I'm not a fan of inline assembler because it often gives you the worst of both worlds -- incomplete control over code generation, and worse syntactic messiness than pure assembly language. But yes, a mixture of C++ and assembly is definitely the right solution, either inline assembly or a single separate function to do the messy math.
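For what it's worth, the x86/x64 add-with-carry intrinsics are a third option that avoids inline assembly entirely. A sketch, assuming _addcarry_u64 is available (MSVC, gcc, and clang all provide it on x64, though header and type details vary slightly):

#include <immintrin.h>

// Add two 256-bit numbers stored as four 64-bit limbs, least-significant first.
void add256(const unsigned long long a[4], const unsigned long long b[4],
            unsigned long long result[4]) {
    unsigned char carry = 0;
    for (int i = 0; i < 4; ++i)
        carry = _addcarry_u64(carry, a[i], b[i], &result[i]);
}

Whether the compiler turns that loop into four clean adc instructions is another matter, but at least the intent is expressed directly.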

Comment Re:Let the games begin (Score 1) 226

Actually the trend is in the opposite direction -- fewer of the math functions are implemented in hardware than there used to be. There are many reasons (highly optimized out-of-order CPUs, and old/slow hardware transcendental implementations) but one significant reason is that the new glibc math functions are generally correctly rounded -- exactly correct -- whereas the hardware versions often are not, as I discussed in this recent blog post:


Comment Re:C versus Assembly Language (Score 1) 226

High-precision math is an excellent time to use assembly language. Assembly languages generally have a way to express ideas like a 32x32->64-bit multiply (and 64x64->128-bit multiply), and add-with-carry. High-level languages generally support neither of those options directly. To tell the compiler that you want a 32x32->64-bit multiply you generally have to have two 32-bit inputs, then cast one of them to 64-bit, and hope that the compiler doesn't actually generate a 64x64 multiply.
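For illustration, the usual idiom looks something like this:

#include <cstdint>

// Cast one operand so the multiply is done at 64 bits. A good compiler emits a
// single 32x32->64 mul; a poor one does a full 64x64 multiply.
uint64_t mul32x32(uint32_t a, uint32_t b) {
    return static_cast<uint64_t>(a) * b;
}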

For 64x64->128-bit multiplies the problem is more difficult because many languages don't have a 128-bit type, and yet these multiplies are crucial for getting maximal multi-precision performance on x64.
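Where a 128-bit path exists at all it tends to be compiler-specific -- unsigned __int128 on gcc/clang, or the _umul128 intrinsic on MSVC. A gcc/clang-flavored sketch:

#include <cstdint>

// unsigned __int128 gets gcc/clang to emit a single 64x64->128 mul.
// MSVC has no 128-bit integer type; there you would use _umul128 instead.
void mul64x64(uint64_t a, uint64_t b, uint64_t* high, uint64_t* low) {
    unsigned __int128 product = static_cast<unsigned __int128>(a) * b;
    *low = static_cast<uint64_t>(product);
    *high = static_cast<uint64_t>(product >> 64);
}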

Without access to the carry flag a programmer in a high-level language has to do things like:

a0 += b0;
if (a0 < b0) // the unsigned add wrapped around, so carry one into the next word
    a1 += 1;

https://randomascii.wordpress....

Comment High time to stop lowering gas taxes (Score 1) 554

As the article mentions, inflation-adjusted gas taxes have been dropping for 21 years. That doesn't make sense. At the very least they should be returned to their 1993 levels and indexed for inflation. Roads are crowded and a higher gas tax would relieve that by encouraging alternatives. It would also reduce pollution, reduce carbon emissions, and reduce oil imports that hurt the balance of trade and finance people who use the money to try to kill us.

Gas taxes really are good.

As to the complaints that the gas taxes are being used to fund other things, such as bicycle paths and mass transit -- I'm not sure how true that is, but you would be foolish to fight it. Alternatives to driving are crucial. Paving huge amounts of land makes walking and biking very difficult so drivers *owe* the non-drivers a bit of help. And drivers benefit *greatly* from mass transit. With no mass transit the traffic congestion would be even worse.

And, given that drivers park free almost everywhere it is truly rich to hear drivers complaining about having to subsidize transit. The implicit subsidy that cars get through free parking is orders of magnitude greater (read "The High Cost of Free Parking" for all the details).

Comment Re:Basic maths required. (Score 1) 239

It is of course well known that, for double precision, sin(x) == x if x < 1.49e-8. They teach that in kindergarten these days.

However the article is about sin(double(pi)), and pi is actually greater than 1.49e-8. Therefore range reduction needs to be done, and that is where things go awry.

Yes, the caller could do the range reduction, but this is not trivial to do correctly and it really should be done by the sin() function. With glibc 2.19 it is.
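The whole thing fits in a few lines if you want to try it (assuming a correctly rounded sin(), e.g. glibc 2.19 or later):

#include <cmath>
#include <cstdio>

int main() {
    // double(pi): the closest double to pi, which is about 1.2246e-16 too small.
    double pi = 3.14159265358979323846;
    // sin(double(pi)) == sin(pi - error) == sin(error) ~= error, so a correctly
    // rounded sin() prints roughly 1.2246467991473532e-16.
    printf("sin(double(pi)) = %.17g\n", std::sin(pi));
    return 0;
}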

Comment Re:Exact mathematical value isn't the ideal (Score 1) 239

But the parent you refer to (this comment's great grandparent?) says "Any serious calculation requires an error calculation to go with it." Sure. I can agree with that.

And that's the whole point of the article. If somebody does an error calculation based on Intel's documentation then they will have an incorrect error calculation -- in some cases grossly incorrect. So the claim that an error calculation is needed actually *supports* the article (mine, BTW), even while purporting to argue against it.

I think ledow is violently agreeing with my article, perhaps without realizing it.

Comment Re:Article is wrong - it is documented (Score 1) 239

As the other reply said, the function contract and an explanation of the implementation are different beasts. The Intel manual said that fsin is accurate across a huge range of values. Elsewhere the manual hinted at enough of Intel's range-reduction algorithm that a numerical analyst could suspect there was a problem. So, to a numerical analyst the documentation was contradictory. To anybody else it was clear -- guaranteed one-ulp precision. The documentation was therefore at best misleading, but for most people it was just wrong. Linus Torvalds was fooled, for example.

> If you let x = pi, then people would ordinarily expect that sin (x) = 0.

Many people would expect that, although the article (my article) most certainly did not expect that.

The calculation being done in the article takes into account the fact that double(pi) is not the same as the mathematical constant pi, and it uses that fact and the properties of sine to measure the error in double(pi). This is an unusual but perfectly reasonable calculation to make. It failed because of the limitations of fsin, and those limitations contradict the documentation. Hence the article.
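Spelled out: sin(double(pi)) = sin(pi - err) = sin(err) ~= err, where err = pi - double(pi) is about 1.2246e-16, so an accurate sin() hands you the representation error of double(pi) essentially for free. fsin's sloppy range reduction ruins exactly that measurement.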

> A more precise approximation according to Wikipedia would have been...

You don't need to go to Wikipedia, you just have to RTA. It lists a 192-bit hexadecimal approximation of pi.

> The reality is that the correct result would have been zero

No. Zero is the correct answer if doing symbolic or infinite precision math, but I did not make that assumption because I was doing double-precision math.
