[floating-point] Why not use Double or Float to represent currency?

Many of the answers posted to this question discuss IEEE and the standards surrounding floating-point arithmetic.

Coming from a non-computer-science background (physics and engineering), I tend to look at problems from a different perspective. For me, the reason I wouldn't use a double or float in a mathematical calculation is that I would lose too much information: binary floating point cannot represent most decimal fractions (0.1, for instance) exactly, so every intermediate result gets rounded.
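
To see what that loss looks like in practice, here is a minimal Java sketch (the class name is mine): ten additions of 0.1 already fail to reach 1.0, because 0.1 itself is stored inexactly.

```java
// Minimal sketch: accumulating rounding error with double.
public class DoubleDrift {
    public static void main(String[] args) {
        double sum = 0.0;
        for (int i = 0; i < 10; i++) {
            sum += 0.1; // 0.1 has no exact binary representation
        }
        System.out.println(sum);        // prints 0.9999999999999999
        System.out.println(sum == 1.0); // false
    }
}
```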

What are the alternatives? There are many (and many more of which I am not aware!).

BigDecimal ships with Java itself, in the java.math package of the standard library. Apfloat is another arbitrary-precision library for Java.
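
One BigDecimal caveat worth a quick sketch (the class name is mine): it is only exact when you construct it from a String; constructing it from a double carries the binary rounding error along with it.

```java
import java.math.BigDecimal;

public class ExactConstruction {
    public static void main(String[] args) {
        // Built from a String: exactly one tenth.
        System.out.println(new BigDecimal("0.1"));
        // Built from a double: inherits the binary rounding error,
        // printing 0.1000000000000000055511151231257827021181583404541015625
        System.out.println(new BigDecimal(0.1));
        // valueOf goes through Double.toString, so it prints 0.1 again.
        System.out.println(BigDecimal.valueOf(0.1));
    }
}
```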

The decimal data type in C# is Microsoft's .NET alternative; it is a 128-bit type giving 28–29 significant digits.

Python ships with a decimal module in its standard library for exact decimal arithmetic, and SciPy (Scientific Python) can probably handle financial calculations as well (I haven't tried it, but I suspect so).

The GNU Multiple Precision Arithmetic Library (GMP) and the GNU MPFR Library are two free and open-source resources for C and C++.

There are also arbitrary-precision libraries for JavaScript(!), and I believe for PHP, that can handle financial calculations.

There are proprietary solutions (particularly, I think, for Fortran) as well as open-source ones for many other computer languages.

As I said, I'm not a computer scientist by training, but I lean towards either BigDecimal in Java or decimal in C#. I haven't tried the other solutions I've listed, though they are probably very good as well.

For me, I like BigDecimal because of the methods it supports. C#'s decimal is very nice, but I haven't had the chance to work with it as much as I'd like. I do scientific calculations of interest to me in my spare time, and BigDecimal works very well because I can set the precision of my numbers explicitly. The disadvantage of BigDecimal? It can be slow at times, especially with the divide method, which also forces you to choose a rounding policy whenever the exact quotient would not terminate (sketched below).
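
Here is a short sketch of that divide behaviour (the class name is mine): dividing 1 by 3 without a rounding policy throws an ArithmeticException, and passing a MathContext is how you set the precision of the result.

```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class DividePrecision {
    public static void main(String[] args) {
        BigDecimal one = BigDecimal.ONE;
        BigDecimal three = new BigDecimal("3");

        // one.divide(three) would throw ArithmeticException here:
        // 1/3 has no terminating decimal expansion.

        // Supplying a MathContext sets the precision and rounding mode.
        MathContext mc = new MathContext(20, RoundingMode.HALF_UP);
        System.out.println(one.divide(three, mc)); // 0.33333333333333333333
    }
}
```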

You might, for speed, look into the free and proprietary libraries in C, C++, and Fortran.