## The Trouble With Rounding Floats

lukfil writes

*"We all know of floating point numbers, so much so that we reach for them each time we write code that does math. But do we ever stop to think what goes on inside that floating point unit and whether we can really trust it?"*
## Decimal Arithmetic (Score:4, Insightful)

double, of course, only defers the problem. What about encoding floats as a pair of ints or longs: one to express the numerical value, and the other its power of ten; id est, decimal arithmetic [ibm.com]?

## Re:Decimal Arithmetic (Score:5, Insightful)

## Re:Decimal Arithmetic (Score:3, Interesting)

I bothered to ask the question of what to use for monetary values at a financial institution in my recent past. I was a bit (pardon the pun) surprised to get a blank stare, and to have to explain what I was talking about. Floats were good enough, apparently. Of course, I then had a problem with a list of values whose sum wasn't 1.0 when testing. Had to do a bunch of

decimal.Parse(value.ToString())

to get things to sum up correctly.

## Re:Decimal Arithmetic (Score:3, Insightful)

## Re:Decimal Arithmetic (Score:5, Informative)

## Re:Decimal Arithmetic (Score:3, Informative)

## Re:Decimal Arithmetic (Score:4, Informative)

## Re:Decimal Arithmetic (Score:5, Informative)

For example, if your input consists of one large number and tons of small ones, then rounding errors mean that starting with the large number gives a much smaller result than starting with the small ones.

If I scale it down to smaller numbers, you see why:

1.0*10^5 + 1.0*10^1 = 1.0*10^5

So, adding a "small" number to a "large" number gives you simply the large number.

If you repeat this a hundred thousand times, your result is still simply the large number.

So you could end up concluding that 1.0*10^5 + (1.0*10^1 + 1.0*10^1 ...[100000 times]...) = 1.0*10^5

That is an order of magnitude wrong. The correct result is 1.1*10^6.

Practical result? You need to think about your input. If it *may* look like this, you need to add up by repeatedly adding the two smallest numbers, which is easy to do with a priority queue.
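A minimal sketch of that smallest-first strategy in Python, with the standard-library heapq as the priority queue (the thread's pseudocode was lost in formatting, so this is a reconstruction, not the commenter's code):

```python
import heapq

def sum_smallest_first(values):
    """Sum by repeatedly replacing the two smallest values with
    their sum, so small terms accumulate before meeting large ones."""
    heap = list(values)
    heapq.heapify(heap)
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        heapq.heappush(heap, a + b)
    return heap[0] if heap else 0.0

# One huge value plus many tiny ones: the naive left-to-right sum
# drops every tiny term; smallest-first keeps them all.
values = [1.0e16] + [1.0] * 100000
print(sum(values))                 # 1e+16 (every small term lost)
print(sum_smallest_first(values))  # 1.00000000001e+16
```

The demo uses 1.0e16 rather than the comment's 1.0e5 because, as noted below, doubles need larger magnitudes before the effect shows.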

MS-Excel, by the way, does *NOT* do this in its SUM() function: if you feed it a "large" number and *many* "small" numbers, you get horrendously wrong results. Because of the relatively high precision of floats and doubles, though, you need to use larger numbers than in my example here.

## Error diffusion is another way. (Score:4, Informative)

Another solution is the Kahan summation algorithm [wikipedia.org].

Which, grosso modo, keeps track of the error at each step and injects it back at the next.

In your example, in each iteration the algorithm notices that the 1.0e1 is missing from the sum and carries it to the next addition. A few iterations later, the carry is big enough to be added to the result.

The advantages are: you don't need to first load all the values into a tree, then iteratively sort them and process them all until you're done. In fact you can even use this algorithm in a streaming fashion, where you don't even need to know how many values will come.

The disadvantages are: some compilers are able to deduce that the carry "should mathematically be 0" (actually true in a perfect world with infinite-precision numbers) and can "optimise" the code back to a plain normal sum, bypassing the algorithm (and won't subsequently apply any other sum-correction algorithm).
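A Python sketch of the compensated summation described above (variable names are mine, not from the thread):

```python
def kahan_sum(values):
    """Kahan compensated summation: track the rounding error of each
    addition and feed it back into the next one."""
    total = 0.0
    carry = 0.0   # compensation for low-order bits lost so far
    for v in values:
        y = v - carry            # apply the correction from last step
        t = total + y            # low-order bits of y may be lost here
        carry = (t - total) - y  # recover exactly what was lost
        total = t
    return total

values = [1.0e16] + [1.0] * 100000
print(sum(values))        # naive: the 1.0 terms vanish
print(kahan_sum(values))  # compensated: they are all kept
```

In Python specifically, math.fsum already produces a correctly rounded sum by a related technique, so in practice you rarely hand-roll this.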

## Re:Error diffusion is another way. (Score:3, Interesting)

Lots of programmers, though, are unaware of the finer details of floating-point numbers.

As evidenced by MS-Excel failing to give the correct answer, even when, as we've now demonstrated, there are multiple simple, correct algorithms for doing so.

It *is* surprising to do the equivalent of 1000 + (1+1+1 ...[10000 times] +1) and get 1000 as the answer.

## Re:Decimal Arithmetic (Score:3, Interesting)

*"MS-Excel, by the way, does NOT do this in its SUM() function; if you feed it a "large" number and many "small" numbers, you get horrendously wrong results."* This caused some big problems for me in a previous job. I was using something similar to server-side JavaScript to generate financial reports (including summing and currency conversion), which the customer was testing by trying to get the same results in Excel. I knew there was a floating-point issue in my code, but even after I fixed it, it didn't match.

## Re:Decimal Arithmetic (Score:3, Interesting)

## Re:BCD isn't the answer (Score:3, Insightful)

*"I mean, do you need a cash register that can tally sums > $1000000?"* You do realize that there's more to business than cash registers, right?

A 32-bit fixed point number maxes out at 21,474,836.47 which is severely limiting for all but small-sized businesses and tiny governments.

64-bit fixed-point numbers (max 92,233,720,368,547,758.07) are obviously better, but are only efficient on 64-bit machines, which are still a minority of installed machines.

## Re:Decimal Arithmetic (Score:5, Interesting)

Friends of mine went off to work "In The City"; when I quizzed them about their use of numbers for stock prices etc., they were equally dismayed that things were being passed around as doubles. Often encoded as ASCII text in data streams as well, requiring different people to write their own ASCII->DOUBLE conversion depending on the representation of the stock tick. I think this kind of madness is quite prevalent.

As someone else pointed out, if you want to do things properly you can end up needing very big integers.

Perhaps the best option is to make sure people can only buy and sell equities etc. in numbers that can be exactly represented as doubles on a computer. It sounds crazy, but it's not as crazy as it looks. One of the reasons stocks are quoted as they are is probably the ease of the mental arithmetic.

Kudos to the parent of your post. At least he knows what he is having to do is dodgy and cares enough to check!

## Re:Decimal Arithmetic (Score:5, Interesting)

I'd be not only dismayed but very surprised to find anything which interfaces to the London Stock Exchange passing stock prices around as doubles, or as any other kind of floating point number.

The LSE feeds all use 18 digits for values, with the first 10 being implicitly before the decimal point and the remaining eight being after the implicit decimal point. This is very handy because it means all the values can be manipulated using 64 bit integers. The LSE rules also state very precisely how rounding must be handled. If you try to submit a multi-million pound deal and your calculation of the consideration is out by just one penny then the deal will be rejected.
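As an illustration of why such an implicit-decimal field fits in 64-bit integer arithmetic, here is a hypothetical Python sketch (the field value is invented for illustration, not taken from any LSE specification):

```python
# Hypothetical 18-digit implicit-decimal price field in the style
# described above: ten digits before the implied point, eight after.
# Every 18-digit value fits in a signed 64-bit integer (2^63 > 10^18),
# so the whole thing can be handled as a count of 1e-8 units.
raw = "000000123450000000"   # 1,234.50000000 with an implied point

units = int(raw)             # 123450000000 hundred-millionths
assert units < 2**63         # always fits in int64

whole, frac = divmod(units, 10**8)
print(f"{whole}.{frac:08d}")  # 1234.50000000
```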

No-one with the slightest clue about how to code would use floating point maths in any kind of financial program, particularly not one where they're working with the LSE.

## Re:Decimal Arithmetic (Score:4, Informative)

I'm happy to comment on it without being anonymous. I designed and oversaw the implementation of the LSE feeds (to and from) for the stockbroking part of a large UK high street bank which shall be NatW^H^Hmeless. If you tried to implement the internals using floating point arithmetic it would be pretty much impossible to get it to pass the LSE's conformance tests, which all assume you will use integer arithmetic and explicit rounding according to their rules.

## Re:Decimal Arithmetic (Score:3, Interesting)

Can you outline examples of these conformance tests, or even better, are they freely available? I assume these are intended to make sure things that go on the wire have a sane value, fall between certain daily trading limits etc (to prevent things like the Mizuho cock-up [economist.com]) [*].

I suspect that the main culprit of "dodgy doubles" is likely to be people throwing together ad hoc code behind the scenes, not the official interfaces to the exchanges (the "front" and "back" doors I mentioned earlier).

## Re:Decimal Arithmetic (Score:2)

*"I bothered to ask the question of what to use for monetary usage at a financial institution in my recent past."*

http://en.wikipedia.org/wiki/Packed_decimal [wikipedia.org]

All CISC CPUs had opcodes to do the work, but AFAICT only COBOL (being, of course, a Business Oriented Language) implemented BCD as a primary data type.

Damned shame, too, since it eliminates *all* the hassle of working with financial software.

## Re:Decimal Arithmetic (Score:3, Insightful)

*"The RPG language also had an implementation of BCD,"* Argh, I "forgot" about RPG.

*"and probably any compiler for the IBM S/370 line would at least have a library for it,"* Having a library isn't the issue. Early versions of Turbo Pascal also had BCD libraries, which you used via function calls. Very *not* useful. To be practical, a datatype needs to be usable with the five basic arithmetic operators.

*"as I believe the IBM mini/mainframe architectures had implemented it in hardware."* The System/3x0 CPU is extremely CISC.

## Re:Decimal Arithmetic (Score:5, Insightful)

Another advantage in the formal classes is you get the theory that allows you to make decisions on what data types to use and when. Sometimes you need the precision of BigNum systems, (crypto for example), and sometimes the accuracy of float is enough. For example, in a lot of financial applications, float would be good enough since 2 decimal places is enough. If you need performance, float will beat any BigNum system hands down. However, if you are dealing with decimals on top of decimals, (such as calculating someone's dividend from a mutual fund where they own partial shares), you might need BigNum. Either way, with the proper theory and good understanding of the formats, you can make these decisions.

These situations are why I am a big supporter of actual software engineering instead of programming. Sure, standard programming is great for a lot of situations, but serious applications need to use software engineering practices. You wouldn't build a bridge without an engineer, so why build an application that handles billions of dollars without applying the same rules and principles?

## Re:Decimal Arithmetic (Score:3, Insightful)

*"These situations are why I am a big supporter of actual software engineering instead of programming. Sure, standard programming is great for a lot of situations, but serious applications need to use software engineering practices. You wouldn't build a bridge without an engineer, so why build an application that handles billions of dollars without applying the same rules and principles?"* I could not agree more. The issue is not to get it done fast or cheap. The issue is that the person designing the solution d

## Re:Decimal Arithmetic (Score:2)

*"For example, in a lot of financial applications, float would be good enough since 2 decimal places is enough."* No. If anything has to add up, then it's BigNums only. Financial apps grow, and choosing doubles to store your money is just asking for trouble.

## Not really CS101... (Score:2)

## not really (Score:3)

Do you understand that

is not the same as

and that

Can be true?

Every day, many people make the mistake of using floats when what they really wanted was just the ability to represent large numbers. For example, in Mac OS X, the system uses doubles as representations of time. This is the worst idea I can think of. First of all, floats are imprecise, and time is the thing that
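The code snippets stripped from the comment above were presumably along these lines; a few standard double-precision surprises, shown in Python:

```python
# The classic identities that trip people up, in double precision.

# Two decimal sums that "obviously" match do not:
print(0.1 + 0.2 == 0.3)                          # False
print(0.1 + 0.2)                                 # 0.30000000000000004

# Addition is not associative:
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))    # False

# And x + 1 == x can be true once x outgrows the 53-bit mantissa:
x = 1e16
print(x + 1 == x)                                # True
```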

## Re:not really (Score:4, Informative)

The first paper recommended for learning more about floating-point arithmetic is usually Goldberg's famous *What Every Computer Scientist Should Know About Floating-Point Arithmetic* [sun.com].

I can't remember whether the paper specifically discusses the failure of floating-point arithmetic to obey the mathematical laws of arithmetic, but even if not, the background it provides is probably enough for you to understand the reasoning yourself.

## Re:Decimal Arithmetic (Score:3, Insightful)

We had assignments to not only perform matrix ops but also give the expected error, etc.

Maybe the author of the article should either go to a better school or pay more attention to the classes.

Tom

## Re:Decimal Arithmetic (Score:3, Informative)

Uh oh, we just re-invented floating point. Oh well, nice try.

If you were just trying to get better accuracy by using base 10 rather than base 2

## Re:Decimal Arithmetic (Score:3, Funny)

*"Last I checked, they use binary internally"* Exclusively, not primarily. Unless things have changed and nobody told me.

Cheers

## Re:Decimal Arithmetic (Score:5, Insightful)

Show of hands: who did not already understand that floats are approximations? Anyone? I didn't think so. I've gotta wonder why this story ever made it onto Slashdot. This is more worthy of Time magazine, where it can be spun as a startling new revelation into the dirtier corners of computer science, foisting a lie on the public.

## Re:Decimal Arithmetic (Score:2)

## Re:Decimal Arithmetic (Score:2)

I was expecting some information about the FPU itself: parallel processing, pipelining and all that.

But it links to: "The trouble with rounding floating point numbers"

Kind of shallow...

## Re:Decimal Arithmetic (Score:2)

*"What about encoding floats as a pair of ints or longs: one to express the numerical value, and the other its power of ten"* Old news. Of course that is the way to do it if you need exact decimals. If you have a limited range, then you can also just use one int and a fixed exponent, i.e. fixed-point arithmetic. Use a long-number package (e.g. GNU MP) if you need more precision.

The whole article is about a very old and very well known and understood problem. My guess is the real problem is the quality of the programmers.

## Re:Decimal Arithmetic (Score:2)

IEEE floats are encoded as binary data, that is, as a base-2 number in scientific notation. We first assume that the first bit (the only one before the decimal (binimal?) point) is 1. We can assume this because, in base 2, a properly normalised number in scientific notation will have 1 as its first digit; as such, we don't have to store it.

The later digits represent successive negative powers of two: 1/2, 1/4, 1/8, 1/16, etc. The shift (exponent) is also stored.

So, basically, they're stored as 1.BBBB x 2^N

Is this the most efficient
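That layout can be inspected directly; a small Python sketch using the standard struct module to pull out the three single-precision fields (the helper name is mine):

```python
import struct

def float32_fields(x):
    """Split a number's IEEE 754 single-precision encoding into its
    sign (1 bit), biased exponent (8 bits) and fraction (23 bits)."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

# 1.0 is +1.000... x 2^0: zero fraction (the leading 1 is implicit),
# exponent stored with a bias of 127.
print(float32_fields(1.0))    # (0, 127, 0)
print(float32_fields(-0.5))   # (1, 126, 0)
print(float32_fields(1.5))    # (0, 127, 4194304): fraction .1 in binary
```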

## Why I only use decimal values (Score:4, Interesting)

## Re:Why I only use decimal values (Score:2)

The potential error in any float is 1/(2^N), where N is the number of bits used to store the significant digits (the mantissa), just as the potential error in a written decimal number is 1/(10^N).

So? Well, for stuff where you need standard three-decimal-digit error (N=3 in base 10), use about N=10 in binary, since 2^10 is roughly 10^3. You won't find this, mind you.

Just for reference:

IEEE 754 (standard floating-point numbers) in 32-bit uses 23 bits for its mantissa, and 8 bits as its exponent. Maximum

## Re:Why I only use decimal values (Score:3, Interesting)

## Re:Decimal Arithmetic (Score:5, Informative)

*"Is there any fundamental reason why decimal arithmetic in a computer should be more accurate than binary arithmetic in a computer?"* No, no, the problem is not with the precision! The problem is that when input and output are decimal, but the calculation is binary, you get additional errors from the conversion that badly educated programmers do not expect.

## Re:Decimal Arithmetic (Score:4, Informative)

There is an excellent article about all of this detail, linked from TFA at sun: http://docs.sun.com/source/806-3568/ncg_goldberg.

Granted, I have never written any code where this matters, but I had never realized really just how bad some of the implications are in some cases.

## Re:Decimal Arithmetic (Score:3, Insightful)

For finance, for example, floating point is useless; people generally do something like use a single int to store a number of cents.

The issue isn't accuracy per se, it's accuracy with *certain* numbers (those representable in base 10).

In a financial program people expect $0.40 * 1000000000 to come out as *precisely* 400000000 and not 399999999.99
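A quick Python illustration of the cents-as-integer point (a sketch of the idea, not a recommendation of any particular library):

```python
# Floats: ten 10-cent line items do not make a dollar.
print(sum([0.10] * 10) == 1.00)     # False
print(sum([0.10] * 10))             # 0.9999999999999999

# Integer cents: exact arithmetic, converting only at the edges.
cents = sum([10] * 10)              # ten items of 10 cents each
print(cents == 100)                 # True
print(f"${cents // 100}.{cents % 100:02d}")   # $1.00
```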

## Re:Decimal Arithmetic (Score:5, Informative)

The issue is actually a pretty commonly understood situation when going from decimal floating point numbers to binary IEEE floats (I have another comment on here describing how they're stored), and it basically comes down to this:

Floats of any sort are stored as an int with an int shift (a.aa x b^c). As such, there will be aliasing problems based on the prime factors of b. A known percentage of divisors will produce repeating numbers. For example, dividing by 3, 5, 7, 11, ... in base 2 gives a repeating expansion; dividing by 3, 7, 11, 13, ... in base 10 does too.

No, there's nothing you can do about it. Use higher precision if needed, and otherwise get over it.

## decNumber library from IBM (Score:5, Informative)

http://www2.hursley.ibm.com/decimal/decnumber.htm

The decNumber library implements the General Decimal Arithmetic Specification[1] in ANSI C. This specification defines a decimal arithmetic which meets the requirements of commercial, financial, and human-oriented applications.

The library fully implements the specification, and hence supports integer, fixed-point, and floating-point decimal numbers directly, including infinite, NaN (Not a Number), and subnormal values.

The code is optimized and tunable for common values (tens of digits) but can be used without alteration for up to a billion digits of precision and 9-digit exponents. It also provides functions for conversions between concrete representations of decimal numbers, including Packed Decimal (4-bit Binary Coded Decimal) and three compressed formats of decimal floating-point (4-, 8-, and 16-byte).

## Re:decNumber library from IBM (Score:4, Informative)

Rational number arithmetic is a more general solution. Any number that can be expressed in decimal or floating-point notation is rational; any rational number can be expressed as n/d, where n and d are integers. We have "bigints": unbounded-magnitude integers constrained only by the memory of the computer they are stored on. Rational numeric data types pair two bigints together to give you unbounded magnitude and precision, and have been implemented for decades.

They probably aren't directly supported in your favorite programming language because they are slow to work with when you need very high precision: after each calculation, the rational number needs to be reduced to its lowest terms. That reduction is a GCD computation, and the terms themselves can grow very large, so the cost grows with the size of the terms.

Consider the use of integers, floats, or decimals only as an optimization when it has been shown that an application is suffering a serious performance hit because of rational arithmetic, and when you can use a faster data type knowing that your program will perform within accuracy goals.

For 90% of computing problems, monetary calculations included, you shouldn't even have to worry about what numeric type you're using. Your language should assume rationals unless told otherwise. Common Lisp, Scheme, and Nickle do exactly that.

C developers can use GMP [swox.com]. Other developers can use one of many bindings to GMP.
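Python's standard library ships such a rational type as well; a small sketch of the behaviour described above:

```python
from fractions import Fraction

# Each value is an exact numerator/denominator pair of bigints.
tenth = Fraction(1, 10)
print(sum([tenth] * 10) == 1)    # True: no drift, unlike 0.1 floats

# Reduction to lowest terms is automatic (a gcd, done per operation).
print(Fraction(6, 4))            # 3/2
print(Fraction(1, 3) + Fraction(1, 6))   # 1/2
```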

## Use A Proper Decimal Library (Score:2, Informative)

## I am Intel of Borg (Score:5, Funny)

There have been many examples, such as the original Pentium FDIV bug. And of course there was the bug in Windows Calc: 2.01 - 2.0 = 0 (if I remember correctly).

## The author is seriously confused (Score:5, Insightful)

Similarly, the spacecraft problem mentioned is one of an errant cast, not because of dilution of precision in floating point calculations.

The author could really pick his examples better-- as mistakes in numerical programming happen often and are often of great import.

## Re:The author is seriously confused (Score:3, Informative)

Pensioners shortchanged of 'float' [ncl.ac.uk]

Ariane casting problem (float -> 16 bit int) [ncl.ac.uk]

## A good example of the evils of math. (Score:2)

As usual, it's not just one thing

## Re:A good example of the evils of math. (Score:5, Informative)

Actually the problem was that they used a float to store the system time (time since power-on) in the ground radar unit. It allowed the clock to be used in calculations without a conversion. A float will store an integer just fine (and accurately) until the number gets too large; then the units part drops off the bottom of the precision and the increment operator no longer makes any sense.

This was a design decision that made sense for the role for which the missile platform was originally designed. The Patriot was originally designed to be used in the European theater (if the Cold War ever turned hot) and as such would never remain in one location for more than a very few days. The clock is reset every time they move the battery (they power off the ground tracking radar when they move). The use in the Gulf War was in a strategic role (not tactical), which kept them continuously operating in a single location for long periods of time, and the shortcut they used came back to haunt them (as usual). If they had reset the system every few days, the problem would not have occurred.

## Not news. (Score:5, Insightful)

## The business... (Score:2)

This reminds me of something someone I knew said once: you don't really have to be

## Re:Not news. (Score:2)

*"Using a float to represent monetary amounts and expecting them to be free of rounding errors is as stupid as using integers to store zip codes and wondering where the leading zeros went from all the addresses in New England."* For those of us who aren't programming geniuses: what *would* you use to store a monetary amount, besides a floating-point format?

## Re:Not news. (Score:2)

*"For those of us who aren't programming geniuses: what would you use to store a monetary amount, besides a floating-point format?"* In databases? Currency formats, which are specifically designed not to lead to rounding errors. Some of them even allow you to specify the number of places after the decimal.

In code? A numeric type designed for currency work (typically an add-in library). In a pinch you can use a 32-bit integer and use the last two decimal digits as "cents", but you'll run the risk of overflow.

## Re:Not news. (Score:2)

*"In code? A numeric type designed for currency work (typically an add-in library). In a pinch you can use a 32-bit integer and use the last two decimal digits as "cents", but you'll run the risk of overflow."* In my case, only on the negative side

## Re:Not news. (Score:2)

Without the use of any libraries? Integers -- just use cents as the base unit of currency, and convert to dollars strictly on input and display.

If you're dealing with amounts of cents that could *possibly* start overflowing even a 32-bit int (that is, billions of cents, or tens of millions of dollars), then the application's important enough to be worth the cost of further research.

## Re:Not news. (Score:3, Informative)

Unfortunately, *most* of the obvious alternatives are either somewhat restrictive, or have relatively poor performance. For example, on a 64-bit machine

## Re:Not news. (Score:2)

*"For those of us who aren't programming geniuses: what would you use to store a monetary amount, besides a floating-point format?"* E.g. a long long with cents as the unit. That gives you a maximum value of 180,000,000,000,000,000 full currency units, which should be enough for most apps, and gives you exact calculations. And it takes only 64 bits, same as a double.

## Re:Not news. (Score:2, Informative)

I looked for a page that described the advantages of BCD, but I could not find one. So I'll have a stab at it myself. Basically, while slower, BCD can maintain arbitrary precision. If you have monetary items and you have a good handle on the range of values, you can store and operate on these values without any rounding losses at all.

## Re:Not news. (Score:2)

Decimal data types. In COBOL or PL/I (in which most of these applications are written) you use a PIC data type. For example, something like

PIC 9(4)V999

says the number has 4 integer digits and 3 fractional digits. It also may not hold a negative number; you add an S character to the front to allow negative numbers.

The language runtime interprets the numbers and there is no approximation

## Re:Not news. (Score:2)

## Re:Not news. (Score:5, Funny)

*"Using a float to represent monetary amounts and expecting them to be free of rounding errors is as stupid as using integers to store zip codes and wondering where the leading zeros went from all the addresses in New England."* Hrrmm, well...

That would explain our lack of customer response in New England...

## Re:Not news. (Score:2)

It's not stupid, it's just ignorant.

Your comparison with integer zip codes is totally bogus: that's not an arithmetic error, it's just sloppy formatting. The variable contains a perfectly accurate value — you just have to remember to output it with a %05d format. (Of course, since you don't do arithmetic on zip codes

## science; business (Score:5, Insightful)

He talks about scientific applications, but actually very few scientific calculations are sensitive to rounding error. Remember, they sent astronauts to the moon using slide rules. Generally for scientific applications, you just don't want to roll your own crappy subroutines for stuff like matrix inversion; use routines written by people who know what they're doing. (And know the limitations of the algorithm you're using. For example, there are certain goofy matrices that will make a lot of matrix inversion algorithms blow chunks.)

For business apps, the classic solution was to use BCD arithmetic. But today, is it more practical (and simple) just to use a language like Ruby, that has arbitrary-precision integers, so you can just store everything in units of cents? A lot of machines used to have special BCD instructions; do those exist on modern CPUs?

## Re:science; business (Score:2)

## Re:science; business (Score:3, Funny)

*"But today, is it more practical (and simple) just to use a language like Ruby, that has arbitrary-precision integers, so you can just store everything in units of cents?"* Hmm... if you use integers of any given finite precision, aren't you still subjecting yourself to round-off error? (e.g. ((int)4)/((int)3) == 1!!) On the other hand, if you use a string-based infinite-precision datatype, what happens when you try to compute a non-terminating number (e.g. 1.0/3.0)? Perhaps your program crashes after tr

## This is not a "problem" per se (Score:3, Interesting)

float (and its big brother double) is inaccurate. That's no surprise. A 32-bit float is but a single simple tool in a programming language. If anyone is surprised by how floats behave then they are, most likely, inexperienced. You don't start addressing a problem in software just by assuming float or double will magically fill every need. An experienced programmer needs to know how to use, and how *not* to use, the programming tools at hand.

TFA about floating-point numbers is very introductory (at the end it mentions that the next article will tell us how to "avoid the problem"... I assume it will go on to cover some basic idioms). In a way it misses the point: floating-point rounding is not a "problem". Floats and doubles always do their job, but you have to know what that job *is*! The behaviour of floating-point numbers should not be a big surprise to a seasoned coder.

For example: you can't use float or double to store the numerical result of a 160-bit SHA-1 hash... you have to use the full 160 bits. (Duh, right?) So, if you use a mere 32 bits (float) or 64 bits (double) to store that number, you are going to sacrifice a lot of accuracy!

## Re:This is not a "problem" per se (Score:2)

*"For example: You can't use float or double to store the numerical result of a 160-bit SHA-1 hash... you have to use the full 160 bits. (Duh, right?)"* ?

I don't get it... why would you even try? What's the point of mentioning something so obvious? It's like me saying: "Hey, don't even try storing your new titanium five-iron in your asshole, since your rectum is only a few inches wide and a five-iron is over a meter long!"

## Re:This is not a "problem" per se (Score:3, Informative)

## Use Gnumeric and the Source. (Score:2)

## Numbers and bases (Score:5, Insightful)

We have the same problem in everyday numbers: try representing 1/3 in any finite number of digits. You can't. The big thing about floating-point numbers that trips people up is that we're used to thinking in base 10, while floating-point numbers in computers typically aren't in base 10, they're in base 2. The rounding problem he describes is simply us getting confused and wondering why a fraction with an exact representation in base 10 doesn't have an exact representation in base 2.

The obvious solution is the one he alludes to at the end: don't use base 2. Computers have had base-10 arithmetic in them for decades; in fact the x86 family has base-10 arithmetic instructions built in (the packed-BCD instructions). COBOL has used packed BCD since its beginning, which is why you don't find this sort of calculation error in ancient COBOL financial packages running on mainframes.
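The base-10 versus base-2 point is easy to demonstrate with Python's decimal module standing in for BCD hardware (a sketch, not COBOL):

```python
from decimal import Decimal

# 1/10 has no finite base-2 expansion, so the float literal is
# already slightly off before any arithmetic happens:
print(f"{0.1:.20f}")       # 0.10000000000000000555

# Base-10 arithmetic represents the same decimal inputs exactly:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True
```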

## Re:Numbers and bases (Score:2)

## Re:Numbers and bases (Score:2)

## Re:Numbers and bases (Score:3, Funny)

*"Try representing 1/3 in any finite number of digits."* 0.3. All you need is base 9

## Re:Numbers and bases (Score:2)

*"We have the same problem in everyday numbers. Try representing 1/3 in any finite number of digits. You can't."* That one is real easy: don't use a "real" format, i.e. floats. Use a "rational" format, i.e. two integers, where the value is one divided by the other. This is one of the standard formats in the GNU multiprecision library.

Should be obvious, really, from standard school mathematics: float approximates R, but Q can be done exactly, and is wherever it's needed and at least somebody has elementary mathematics sk

## Re:Numbers and bases (Score:3, Insightful)

*"Try representing 1/3 in any finite number of digits. You can't."* "1/3"

You can. I just did. So did you. In base 10, even. In fact, the answer is the same for base 4 or higher, using only two digits, "1" and "3".

*Any* rational number can be represented using a finite number of digits, using... (wait for it) a RATIO. (Represent one-third in base 2? Why, that would be "1/11". One-third in base 3 would be "1/10".)

## It used to be much worse. Kahan fixed it. (Score:5, Interesting)

Due to the efforts of William Kahan [berkeley.edu] at U.C. Berkeley, IEEE 754 floating point, which is what we have today on almost everything, is far, far better than earlier implementations.

Just for starters, IEEE floating point guarantees that, for integer values that fit in the mantissa, addition, subtraction, and multiplication will give the correct integer result. Some earlier FPUs would give results like 2+2 = 3.99999. IEEE 754 also guarantees exact equality for integer results; you're guaranteed that 6*9 == 9*6. Fixing that made spreadsheets acceptable to people who haven't studied numerical analysis.
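Those integer guarantees are easy to check on any IEEE 754 machine; a Python sketch (doubles carry a 53-bit mantissa):

```python
# Integer-valued doubles behave exactly as long as they fit in 53 bits:
print(2.0 + 2.0 == 4.0)            # True, never 3.99999...
print(6.0 * 9.0 == 9.0 * 6.0)      # True, and both are exactly 54.0

# Past 53 bits the guarantee ends and integers start to collide:
print(2.0 ** 53 + 1.0 == 2.0 ** 53)   # True: the +1 is silently lost
```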

The "not a number" feature of IEEE floating point handles annoying cases like 0/0 (plain division by zero yields a signed infinity). Underflow is handled well. Overflow works. 80-bit floating point is supported (except on PowerPC; its absence broke many engineering apps when Apple went to PowerPC).

Those of us who do serious number crunching have to deal with this all the time. It's a big deal for game physics engines, many of which have to run on the somewhat lame FPUs of game consoles.

## Floating Point Numbers are trouble... (Score:2)

I've seen programmers who never realized these facts and had them ask wh

## The summary is nonsense (Score:2)

On the other hand, people that program with floats and do not know or understand IEEE 754 are asking for trouble. But that is true with every type of library. Knowledge and insight can only be replaced by more knowledge and greater insight. El-cheapo outsourcing to India, or hiring people without solid CS skills as programmers

## Re:The summary is nonsense (Score:2)

## This is why you would choose... (Score:5, Informative)

The use of transforms for handling numerical calculations is an old trick. It is probably best-known in its use as a very quick way to multiply or divide using logarithms and a slide-rule, prior to the advent of widely-available scientific calculators and computers. Nonetheless, devices based on logarithmic calculations (such as the mechanical CURTA calculator) can wipe the floor with most floating-point maths units - this despite the fact that the CURTA dates back to the mid 1940s.

## Re:This is why you would choose... (Score:4, Informative)

LNS can be effective to around 24 bits of precision; beyond that, the hardware requirements for the LNS unit's adder/subtracter become overwhelming. Multiplications and divisions are fast on LNS units (with minimal hardware), as they just require an adder, whereas handling addition and subtraction is much more difficult. The simplest (naive) methods of building an adder and subtracter use large ROM lookup tables. Fancier, more efficient units use smaller ROMs and small multipliers to help get better values (I don't remember all the details offhand). Sometimes they'll even trade precision for faster performance. This can result in chips with single-cycle multiplies and divides, but multi-cycle additions and subtractions. For low-precision calculations requiring many divides and multiplies, LNS processors can often achieve the best performance. However, for many applications an efficient LNS unit with sufficient precision just isn't practical.

Phil
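The LNS idea can be sketched in software: store each number as its base-2 logarithm, so multiply and divide become add and subtract, while addition needs the correction function that real LNS hardware implements with those ROM tables. The `lns`/`from_lns` helpers here are illustrative names, not anyone's actual API:

```python
import math

def lns(x: float) -> float:
    """Encode a positive number as its base-2 log (the LNS representation)."""
    return math.log2(x)

def from_lns(l: float) -> float:
    """Decode an LNS value back to an ordinary number."""
    return 2.0 ** l

a, b = lns(6.0), lns(7.0)
assert abs(from_lns(a + b) - 42.0) < 1e-9       # multiply is just an add
assert abs(from_lns(a - b) - 6.0 / 7.0) < 1e-9  # divide is just a subtract

# Addition is the hard part: log2(x + y) = a + log2(1 + 2**(b - a)),
# and it's this correction term that hardware looks up in ROM.
s = a + math.log2(1.0 + 2.0 ** (b - a))
assert abs(from_lns(s) - 13.0) < 1e-9
```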

## Quick Way to Check FPU (Score:2)

*whether we can really trust it*

The second part of the question can be easily answered. Compile the computer program in two ways. First, set the compiler to not use the floating-point unit (FPU): just generate the instructions for explicitly doing the floating-point computations in software. Run the compiled code and save the results.

Second, set the compiler to explicitly use the FPU. Generate FPU in

## Bah. Author doesn't understand arithmetic. (Score:5, Insightful)

The author goes on and on about how floating point numbers are inaccurate and unable to precisely represent real values, as if this were something new, or even something different from the number approximations we normally use.

The reason the examples the author cites can't be represented precisely is that floating point numbers are ultimately represented as base-2 fractions, and there are a bunch of finite-length base-10 fractions that don't have a non-repeating base-2 representation. Guess what? We have *exactly* the same problem with the base-10 fractions that everyone uses all the time. Show me how you write 1/3 as a decimal!

The problem *isn't* that floating point numbers are inherently problematic; the problem is that we typically use them by converting base-10 numbers to them, doing a bunch of calculations, and then converting them back to base 10. Floating point rounding isn't an unsolved problem -- floating point rounding works perfectly, and always has. It's just that the approximations you get when you round in base 2 don't match the approximations you get when you round in base 10.

Bottom line: If you care about getting the same results you'd get in base 10, do your work in base 10. This is why financial applications should not use floating point numbers.
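The base-2 vs base-10 mismatch, and the base-10 fix for money, both fit in a few lines of Python's `decimal` module:

```python
from decimal import Decimal

# The classic base-2 round-off: 0.1, 0.2 and 0.3 all round when
# stored as doubles, and the errors don't cancel.
assert 0.1 + 0.2 != 0.3
assert Decimal("0.10") + Decimal("0.20") == Decimal("0.30")

# Ten 10-cent charges: floats drift, decimals don't.
assert sum([0.1] * 10) != 1.0
assert sum([Decimal("0.1")] * 10) == Decimal("1.0")
```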

## Re:Bah. Author doesn't understand arithmetic. (Score:2)

## Must read floating-point articles (Score:3, Interesting)


When bad things happen to good numbers [petebecker.com] (as well as Becker's other floating-point columns on that same page)

## average joe (Score:2, Interesting)

when i use my calculator, it doesn't give rounded off numbers. i suspect lots of programs will have problems with rounding off but i don't seem to notice it. is it that insignificant?

## Re:average joe (Score:2, Interesting)

## Re:average joe (Score:3, Interesting)

Not true.

In the math class I teach, I do the following: have everyone take a calculator and do "2/3".

Half of the calculators say this: "0.666666666" (rounded down).

Half of the calculators say this: "0.666666667" (rounded up).

In truth, an exact answer requires an infinite sequence of "6"'s. The calculator (or any computer) must decide whether to round up or down to fit it into its display space (or memory). You always have some round-off error -
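The two calculator displays correspond to two decimal rounding modes applied to the infinite expansion. A sketch with Python's `decimal` module, assuming a nine-digit display:

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

two_thirds = Decimal(2) / Decimal(3)   # 0.6666... at context precision

# Truncating calculators round toward zero; the others round half up.
down = two_thirds.quantize(Decimal("0.000000001"), rounding=ROUND_DOWN)
up   = two_thirds.quantize(Decimal("0.000000001"), rounding=ROUND_HALF_UP)

assert str(down) == "0.666666666"
assert str(up) == "0.666666667"
```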

## Well, Yeah, but .... (Score:3, Informative)

Basically, I agree with you, but as Hamming pointed out in the 1950s, you can get yourself into trouble with something like:

A = small number, e.g. size of smallest feature in a Celeron-M CPU in microns

B = big number, e.g. distance to Andromeda galaxy in microns

C = (A+B) ... (some set of clever operations) ... - B

The math is fine, but the implementation won't yield the correct answer, because
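Hamming's trap is easy to reproduce. The magnitudes below are illustrative stand-ins (a sub-micron feature size and a rough Andromeda distance in microns), not measured values:

```python
A = 6.5e-2    # "small": feature size in microns (illustrative)
B = 2.4e25    # "big": distance to Andromeda in microns (illustrative)

# Mathematically (A + B) - B == A, but A is far below one ulp of B,
# so B swallows A entirely at the first addition:
assert (A + B) - B == 0.0
assert (A + B) - B != A

# Reordering rescues it: group the operations so A never meets B.
assert A + (B - B) == A
```

This is why the order of operations, not just the formula, determines the answer you actually get.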

## Slight clarification (Score:2)

as opposed to

'Computer implementation of the storage and manipulation of floating point numbers'

Only the latter might be suspect, depending on the implementation.

Whatever happened to what used to be known as 'scientific notation' for what are also called 'real numbers'? E.g., you store the mantissa (e.g. "37") and the exponent (e.g. -2), and there is no approximation involved, although the mantissa might have a set maximum length, so you might have trouble storing, for example, 1.00000000000
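A minimal sketch of the (mantissa, exponent) idea, storing both as exact integers and evaluating with exact rational arithmetic; the `decval` helper is hypothetical, purely for illustration:

```python
from fractions import Fraction

def decval(mantissa: int, exponent: int) -> Fraction:
    """value = mantissa * 10**exponent, kept exact as a rational."""
    return Fraction(mantissa) * Fraction(10) ** exponent

assert decval(37, -2) == Fraction(37, 100)    # 0.37, exactly
assert decval(37, -2) + decval(63, -2) == 1   # sums don't drift
```

As the parent notes, this only dodges base-conversion error; a bounded mantissa still can't hold every value (and no finite mantissa in any base holds 1/3).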

## Floating error (Score:2)

## Comp Sci 101 (Score:5, Informative)

Any serious developer of business software knows all about this and avoids floating point at all costs for financial calculations. Scientists, however, do use floats, carefully, since the math they do is usually much more performance (speed) sensitive and the calculations are a little more complex than what tends to be done on the business side (i.e. _most_ business calcs are relatively simple).

## Old news, but an unsolved problem (Score:3, Informative)

The first one is the one mentioned in the article, and something everyone who didn't sleep through his IT classes should know: computers calculate in binary, and converting decimal fractions to binary isn't possible without error. There is no way to represent 0.37 exactly in binary, in IEEE 754, no matter how many bits you spend on the mantissa. Now, you can argue that if you make it "big enough", it doesn't matter anymore, since the error is well within the margin, and when you round to, say, 5 digits after the decimal point, the error vanishes. True. But when you start calculating, when you multiply or, worse, exponentiate, the error grows in big leaps.
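The 0.37 case is directly observable: constructing a `Decimal` from a float shows the exact base-2 value the double actually holds, which is close to, but not equal to, 0.37.

```python
from decimal import Decimal

# Decimal(x) for a float x prints the exact stored base-2 value,
# so the representation error of 0.37 becomes visible directly:
assert Decimal(0.37) != Decimal("0.37")
assert float(Decimal(0.37)) == 0.37   # same double, just written out in full
print(Decimal(0.37))                  # many digits, none of them "0.37 exactly"
```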

Another, less obvious, problem is hidden underneath the way IEEE 754 works: your error grows as your numbers grow. This might seem obvious, but it is interesting how many people overlook this flaw in everyday life. Since, according to the IEEE 754 standard, real numbers are stored as exponent and mantissa, if you're dealing with BIG numbers, a fair deal of your mantissa is spent on the integer part of your number, so you're losing precision in the fraction. You can't reliably say that "a double is good for 5 digits after the dot, no matter what"; you have to take into account how many of those precious mantissa bits are spent before you even get to ponder what's left for your precision.
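The growth of the error with magnitude can be measured directly via the spacing between adjacent doubles (`math.ulp`, available since Python 3.9):

```python
import math

# Absolute precision shrinks as magnitude grows: the gap between
# adjacent doubles (one "unit in the last place") scales with the value.
assert math.ulp(1.0) < 1e-15      # lots of room in the fraction near 1
assert math.ulp(1e16) > 1.0       # above 2**53, even whole integers are gone

assert 1e8 + 1.0 != 1e8           # fine at modest magnitudes
assert 1e16 + 1.0 == 1e16         # the +1 falls between representable values
```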

This isn't so much a problem of processors. It's a problem of people understanding how their processors work.

## The problem is using floating point improperly (Score:5, Interesting)

The trick is to carefully calculate exactly how much error each operation can generate. It is possible to know exactly how many bits of your result contain valid information. If you need more accuracy, you can split the work into multiple operations. As long as the final accumulated error in the result is less than

Another interesting problem occurs with floating point results. You cannot expect the complete answer to be exactly identical on all machines. Even on the same machine, compiler settings affect the answer: x87 differs significantly from SSE. If you are doing something that needs bitwise identical results on all machines, you need to either implement it with integer math, or do what GIMPS does and do error tracking.

Melissa
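One standard form of the error tracking mentioned above is Kahan's compensated summation: carry the round-off of each addition along explicitly and feed it back in. A sketch, checked against `math.fsum` as a correctly-rounded reference:

```python
import math

def kahan_sum(xs):
    """Sum floats while compensating for the round-off of each add."""
    total = 0.0
    c = 0.0                       # running compensation for lost low bits
    for x in xs:
        y = x - c                 # fold the previous round-off back in
        t = total + y
        c = (t - total) - y       # algebraically zero; numerically the new error
        total = t
    return total

xs = [0.1] * 1_000_000
naive = sum(xs)
exact = math.fsum(xs)             # correctly-rounded reference sum

assert abs(kahan_sum(xs) - exact) <= abs(naive - exact)
assert abs(kahan_sum(xs) - exact) < 1e-9   # compensation keeps error tiny
```

Note that compensation narrows the error but doesn't by itself give bitwise-identical results across machines; for that, as the parent says, you need integer math or explicit error bookkeeping.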

## Re:The problem is using floating point improperly (Score:4, Informative)

P.

## rounding algorithms (Score:4, Informative)

## Good thing I use double instead of float.... (Score:3, Funny)

## Classic textbook issue (Score:3, Interesting)

Why would this count as "news"? Everyone who has to deal with this already knows about it.

## Re:Obligatory (Score:3, Informative)

## Re:Tax (Score:2)

*"Would certainly be a real kicker to find out that these tiny miscalculations caused everyone to pay more tax than they needed to. Where's my refund check?!"*

I'm sorry, but you already spent it. The money that wasn't there from the budget surplus we didn't have was spent on providing tax relief that wasn't actually much of a relief, in an attempt to stimulate the economy, which it didn't do.

## Another true story (Score:3, Interesting)

I once wrote interest-calculation software for a bank. This was new software to replace their old stuff. Naturally, I stored the values in cents, not guilders/dollars/euros, to avoid rounding errors (which really have a big effect in interest calculations).
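The integer-cents approach can be sketched in a few lines; the basis-point rate and the round-half-up rule below are assumptions for illustration, not the bank's actual policy:

```python
def add_interest(cents: int, rate_bp: int) -> int:
    """Apply interest to a balance held as integer cents.

    rate_bp is in basis points (1 bp = 0.01%); rounding is half-up,
    done exactly once, in integer arithmetic.
    """
    interest = (cents * rate_bp + 5000) // 10000
    return cents + interest

assert add_interest(10_000, 350) == 10_350   # 3.50% on 100.00 -> 103.50
assert add_interest(1, 350) == 1             # interest below half a cent: 0
```

Because every intermediate value is an integer, the only rounding is the one you write down explicitly, which is exactly what an auditor wants to see.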

When I delivered my software, they compared my output to the output their old software produced. There were small differences. They asked me where these came from, and I traced them back to rounding errors in their old software. I showed them this by ex