[floating-point] Why not use Double or Float to represent currency?

Float is the binary counterpart of decimal, and the two are designed differently; they are two different things. Small errors appear whenever values are converted between them. Floating point is designed to cover an enormous range of values for scientific computing: within a fixed number of bytes it deliberately trades away precision so it can reach extremely small and extremely large magnitudes. A decimal type cannot cover such a range; it is bounded to a fixed number of decimal digits, but it is exact within them. So float and decimal serve different purposes.
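To see the conversion error concretely, here is a minimal Java sketch (the class name FloatVsDecimal is just for illustration) that sums 0.1 ten times, once with a double and once with a BigDecimal:

    import java.math.BigDecimal;

    public class FloatVsDecimal {
        public static void main(String[] args) {
            // 0.1 has no exact binary representation, so repeated addition drifts.
            double sum = 0.0;
            for (int i = 0; i < 10; i++) {
                sum += 0.1;
            }
            System.out.println(sum);        // 0.9999999999999999, not 1.0
            System.out.println(sum == 1.0); // false

            // The same value carried as an exact decimal stays exact.
            BigDecimal d = BigDecimal.ZERO;
            for (int i = 0; i < 10; i++) {
                d = d.add(new BigDecimal("0.1"));
            }
            System.out.println(d);          // 1.0
        }
    }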

There are some ways to manage the error for currency values:

  1. Use a long integer and count in cents instead (see the sketch after this list).

  2. Use double precision, but keep to at most 15 significant digits so that decimal values can be simulated exactly. Round before presenting values, and round often during calculations.

  3. Use a decimal library such as Java's BigDecimal so you do not have to simulate decimal with double (see the sketch after this list).
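A minimal Java sketch of options 1 and 3, assuming plain dollar amounts; the helper names addCents and splitEvenly are made up for this example, not part of any library:

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class CurrencyExamples {
        // Option 1: keep amounts as whole cents in a long.
        static long addCents(long a, long b) {
            return Math.addExact(a, b); // throws on overflow instead of silently wrapping
        }

        // Option 3: keep amounts as BigDecimal and round explicitly.
        static BigDecimal splitEvenly(BigDecimal total, int ways) {
            return total.divide(BigDecimal.valueOf(ways), 2, RoundingMode.HALF_UP);
        }

        public static void main(String[] args) {
            // $19.99 + $0.01 in cents: exact integer arithmetic.
            long cents = addCents(1999, 1);
            System.out.printf("Total: $%d.%02d%n", cents / 100, cents % 100);

            // $10.00 split three ways, rounded to whole cents.
            BigDecimal share = splitEvenly(new BigDecimal("10.00"), 3);
            System.out.println("Share: $" + share); // $3.33
        }
    }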

P.S. It is interesting to note that most brands of handheld scientific calculators work in decimal rather than binary floating point, so nobody complains about float conversion errors on them.