One piece of popular programming wisdom is "never use floating-point numbers for money." This has never made sense to me. A 64-bit floating-point number can represent 15 decimal digits, which covers all balances below 10 trillion (9 999 999 999 999.99) with two digits after the decimal place. Excel does all computation with 64-bit floats, and it is used for tons of financial calculations. After investigating, my conclusion is that the common wisdom is good advice but overly simplified. If you round the result of every computation, then you *can* get exactly correct financial calculations using floating-point numbers, for realistic ranges of values. However, it can be tricky to make sure you round in the right places, so using a money-specific data type is an easy way to avoid these errors, and is still good advice. The challenge is that you need to carefully manage rounding when computing with money, no matter how you represent it. Floating-point numbers seem like they should "just work," and in most cases, they will. This means rounding bugs will be hidden until the right (wrong?) input is processed.

Let's start by discussing how to get the "right" answer for financial calculations (I have not worked in banking, so please correct me if I get this wrong). Financial calculations are typically written by humans in contracts in decimal (base 10). That is, we use numbers such as CAD $1.35 (2 decimal places), and interest rates like 1.85% (0.0185, 4 decimal places). Payments can only be made with some fixed precision, such as two decimal places in Canadian dollars. When we compute fractional values that have more decimal places than the payment system (e.g. taxes or interest), then we have to decide how to round. The exact rule to be used will vary. In many cases, such as US banking interest payments, there is no required standard, which I find surprising since banking has a reputation for punishingly exact regulation. I use round half to even in this article, since it is less biased and is the default rounding mode for IEEE floating-point numbers. No matter what rule we choose, the correct results are what we would compute "by hand," using the decimal math we learned in school.

Floating-point numbers in our computers are binary (base 2). This means there are some base-10 numbers that can't be represented exactly when converting between the two. Let's look at a few concrete examples. These examples use IEEE 754 64-bit floating-point values, and should match the results you will get with C/C++/Java `double`, Go `float64`, JavaScript, and Python (and probably nearly every other programming language).

- `0.1 + 0.2`: This equals 0.3 in decimal, but in floating-point `(0.1 + 0.2) == 0.3` is *false*. This is because 0.1, 0.2, and 0.3 cannot be precisely represented in base 2 floating-point. See Bruce Dawson's explanation for details.
- `1.40 * 165`: Produces 230.99999999999997, but should be exactly 231. This can cause errors when computing the total price for a purchase order, particularly if we truncate the output instead of rounding.
- `round(2.675, 2)` (round half to nearest even to 2 decimal places, e.g. Python 3): Produces 2.67, but should produce 2.68, since 2.675 is halfway between 2.67 and 2.68 and should round to the even digit. This is because 2.675 is actually stored as 2.6749999999999998, which is slightly below half. The opposite error happens with `round(2.665, 2)`, which produces 2.67 but should produce 2.66.
- `0.0125 * 0.40` (1.25% of $0.40): Should produce 0.005000, which round half to even rounds to $0.00. However, the floating-point result is above the halfway point (0.00500000000000000097), so it rounds up.
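
All four examples can be reproduced in a few lines of Python 3:

```python
# Reproducing the examples above with Python 3's 64-bit floats.
print((0.1 + 0.2) == 0.3)       # False: neither side is stored exactly
print(1.40 * 165)               # 230.99999999999997, not 231
print(round(2.675, 2))          # 2.67, not 2.68: stored as 2.6749999...
print(round(2.665, 2))          # 2.67, not 2.66: the opposite error
print(round(0.0125 * 0.40, 2))  # 0.01: the stored product is just above 0.005
```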

We can solve these problems by rounding after every operation. The "intuitive" reasoning: if we were computing these numbers by hand, we would use a fixed number of decimal places, e.g. 2 for basic totals. The floating-point numbers approximate the decimal numbers with a tiny bit of error. Since we "know" the exact answers have a finite number of decimal digits, we can round off the low-order part of each result, which produces the nearest float with that number of digits. For this to work, the results must have at most 15 decimal digits, which means less than 10 trillion with 2 digits after the decimal, or less than 1 billion with 6 digits (e.g. interest calculations). As shown by Excel, you probably don't need to round after every operation: the error may accumulate, but the number of operations needed to cause a 1-cent error is pretty huge. However, in this case you will need to "double round" before using the final result: first round to the number of significant decimal places your calculation can produce, then apply your rounding rule down to cents. Otherwise, you will get some rounding edge cases wrong (e.g. 3.6% of $3.75 = 0.135, which should round to $0.14, but in floating-point it is 0.1349..., which rounds to $0.13).
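
For that last example, the double rounding looks like this in Python (the helper name is my own sketch, not from any library):

```python
# 3.6% of $3.75 is exactly 0.135 in decimal, which should round
# half-to-even to $0.14, but the float product is slightly below 0.135.

def round_to_cents(x, sig_places):
    # First round to the number of decimal places the exact answer can
    # have, then apply round-half-to-even down to 2 places (cents).
    return round(round(x, sig_places), 2)

product = 0.036 * 3.75             # 0.13499999999999998 as a float
print(round(product, 2))           # 0.13: single rounding gets it wrong
print(round_to_cents(product, 5))  # 0.14: 3 + 2 decimal places, so round to 5 first
```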

I am not a theoretician and have not proven that this is actually correct. Let me know if you find a counter-example. I did, however, try every interest calculation in the range of [0.00%-4.00%] with all two-digit values [0.00-4.00], and they were equivalent to a precise decimal math library, so it does work at least for that limited domain (Go test program).
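
The author's test was a Go program; below is my hedged Python re-creation of the same brute-force check. Routing the final rounding step through `decimal.Decimal` (via the float's shortest decimal representation) is my assumption about how ties such as 0.005000 get handled correctly.

```python
from decimal import Decimal, ROUND_HALF_EVEN

# Compare "round the float, then apply decimal rounding" against exact
# decimal arithmetic for every rate in [0.00%, 4.00%] (0.01% steps) and
# every amount in [$0.00, $4.00] (1-cent steps).
CENT = Decimal("0.01")
mismatches = 0
for rate_bp in range(401):      # rate as an integer count of 0.01% steps
    for cents in range(401):    # amount as an integer count of cents
        # Float path: compute, round to the 6 decimal digits the exact
        # answer can have (4 from the rate + 2 from the amount), then
        # round half to even down to cents using decimal arithmetic.
        x = (rate_bp / 10000.0) * (cents / 100.0)
        f = Decimal(repr(round(x, 6))).quantize(CENT, rounding=ROUND_HALF_EVEN)
        # Exact path: pure decimal arithmetic, as if done by hand.
        d = (Decimal(rate_bp) * Decimal(cents) / Decimal(1000000)).quantize(
            CENT, rounding=ROUND_HALF_EVEN)
        if f != d:
            mismatches += 1
print(mismatches)  # 0 mismatches over this domain
```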

Another common suggested solution is to use an integer value that represents the largest precision you need. For example, CAD $1.23 could be stored as the integer 123. This is great for integer multiplication, addition and subtraction, which is sufficient for basic order accounting. However, as soon as you need to deal with interest or taxes, you need to track the number of decimal places and manage rounding, which is just as much of a pain as using floating point. For example, 1.25% of $0.40 can be expressed as 125 * 40 = 5000. However, we need to remember this has 6 digits after the decimal point, so is equivalent to 0.005 (4 decimal places times 2 decimal places produces a result with 6 decimal places). To convert to cents, we need to divide by 10000 and decide how to round. The primary advantage of using an integer is that it forces you to think about the conversions, rather than blindly applying the calculations and forgetting about rounding.
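
A sketch of that integer bookkeeping, with a hypothetical helper for integer division that rounds half to even:

```python
# Integer-cents arithmetic for 1.25% of $0.40, with explicit
# round-half-to-even when dropping the extra decimal places.

def bankers_div(n, d):
    # Divide non-negative integer n by positive integer d,
    # rounding the quotient half to even.
    q, r = divmod(n, d)
    if 2 * r > d or (2 * r == d and q % 2 == 1):
        q += 1
    return q

rate = 125               # 1.25% with 4 implied decimal places (0.0125)
amount = 40              # $0.40 with 2 implied decimal places
product = rate * amount  # 5000, with 6 implied decimal places (0.005000)
cents = bankers_div(product, 10000)  # drop 4 places to get back to cents
print(cents)             # 0: exactly halfway, so it rounds to the even value
```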

My summary: if you are doing some financial math that does not need to be accurate to the penny, just use floating point numbers. If it is good enough for Excel, it will be good enough for most applications. This is the approach I took when building a cloud cost model that breaks down a cloud bill by software component. I was able to get it to exactly equal the bill by applying rounding rules in the right places, but simplified the code by not doing it. However, if you are writing software that needs to get it exactly right, use a specialized package. Not only will the APIs force you to get the math right, but they also provide tools to make it easier, such as rounding rules or currency support.

- What Every Programmer Should Know About Floating-Point Arithmetic: A simpler guide to floating point numbers than the often cited What Every Computer Scientist Should Know About Floating-Point Arithmetic which is far too math-heavy for me.
- IEEE Standard 754 Floating Point Numbers: Another readable overview of how floating point numbers work.
- Sometimes floating-point math is perfect: Bruce Dawson digs into the common misconception that floating-point math is erroneous. It isn't always! His other articles on floating-point numbers are also entertaining and educational.
- Small-Data Computing: Correct Calculator Arithmetic: How to compute exactly correct decimal values for arbitrary calculator expressions. It is also educational about issues like rounding.
- Stack Overflow on Why Not Use Double or Float to Represent Currency: The answers here are ... mostly correct.