I keep seeing people using doubles in C#. I know I read somewhere that doubles sometimes lose precision. My question is: when should I use a double and when should I use a decimal type? Which type is suitable for money computations (i.e. greater than $100 million)?
System.Single / float - 7 digits
System.Double / double - 15-16 digits
System.Decimal / decimal - 28-29 significant digits
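If you want to see those limits for yourself, here's a quick C# sketch (the values are just illustrative; run it and compare what you typed with what comes back):

using System;

float f = 1.23456789f;                        // more digits than a float can hold
double d = 1.2345678901234567890;             // more digits than a double can hold
decimal m = 1.2345678901234567890123456789m;  // fits within decimal's 28-29 digits

Console.WriteLine(f.ToString("G9"));   // only about the first 7 of these digits can be trusted
Console.WriteLine(d.ToString("G17"));  // only about the first 15-16 can be trusted
Console.WriteLine(m);                  // every digit preserved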
The way I've been stung by using the wrong type (a good few years ago) is with large amounts:
You run out at around 1 million for a float.
With a double you run out at around 9 trillion - a 15-digit monetary value (13 digits plus two for the cents). But with division and comparisons it's more complicated (I'm definitely no expert in floating point and irrational numbers - see Marc's point). Mixing decimals and doubles causes issues:
A mathematical or comparison operation that uses a floating-point number might not yield the same result if a decimal number is used because the floating-point number might not exactly approximate the decimal number.
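To make the "large amounts" sting concrete, here's a small C# sketch (the figures are mine, not the original poster's):

using System;

float smallBalance = 1_000_000.00f;
Console.WriteLine(smallBalance + 0.01f);   // 1000000 - the cent is below float's ~7 significant digits

double bigBalance = 1_000_000_000_000_000.00;   // a quadrillion dollars
Console.WriteLine(bigBalance + 0.01);      // 1E+15 - the cent is below double's ~15-16 significant digits

decimal exactBalance = 1_000_000_000_000_000.00m;
Console.WriteLine(exactBalance + 0.01m);   // 1000000000000000.01 - decimal keeps the cent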
When should I use double instead of decimal? has some similar and more in-depth answers.
Using double instead of decimal for monetary applications is a micro-optimization - that's the simplest way I look at it.
For money: decimal. It costs a little more memory, but doesn't have rounding troubles like double sometimes has.
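A minimal illustration of the kind of rounding trouble meant here (nothing exotic, just the classic 0.1 + 0.2 case):

using System;

double d = 0.1 + 0.2;
decimal m = 0.1m + 0.2m;

Console.WriteLine(d == 0.3);            // False - 0.1 and 0.2 have no exact binary representation
Console.WriteLine(d.ToString("G17"));   // 0.30000000000000004
Console.WriteLine(m == 0.3m);           // True - decimal stores these base-10 fractions exactly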
Decimal is for exact values. Double is for approximate values.
USD: $12,345.67 USD (Decimal)
CAD: $13,617.27 (Decimal)
Exchange Rate: 1.102932 (Double)
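In C# that split might look roughly like this (a sketch using the figures above; note the explicit cast needed when mixing the two types, and the rounding back to cents at the end):

using System;

decimal usdAmount = 12_345.67m;     // money: decimal
double exchangeRate = 1.102932;     // an approximate market rate: double is fine here

// Mixing decimal and double requires an explicit cast; round back to cents afterwards.
decimal cadAmount = Math.Round(usdAmount * (decimal)exchangeRate, 2);
Console.WriteLine(cadAmount);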
My question is when should I use a double and when should I use a decimal type?
decimal for when you work with values in the range of 10^(+/-28) and where you have expectations about the behaviour based on base 10 representations - basically money.
double for when you need relative accuracy (i.e. losing precision in the trailing digits on large values is not a problem) across wildly different magnitudes - double covers more than 10^(+/-300). Scientific calculations are the best example here.
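To put rough numbers on those two ranges (a small sketch; the scientific values are just examples):

using System;

Console.WriteLine(decimal.MaxValue);   // 79228162514264337593543950335, i.e. just under 8 x 10^28
Console.WriteLine(double.MaxValue);    // roughly 1.8 x 10^308

// Scientific work spanning wildly different magnitudes is double territory:
double avogadro = 6.02214076e23;
double electronMassKg = 9.1093837015e-31;
Console.WriteLine(avogadro * electronMassKg);   // a tiny result, but relative accuracy is preserved

// decimal cannot even express values this large or small:
// decimal tooBig = 1e300m;   // compile-time error: outside the range of decimal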
which type is suitable for money computations?
decimal, decimal, decimal
Accept no substitutes.
The most important factor is that double, being implemented as a binary fraction, cannot accurately represent many decimal fractions (like 0.1) at all, and its overall number of digits is smaller, since it is 64-bit wide vs. 128-bit for decimal. Finally, financial applications often have to follow specific rounding modes (sometimes mandated by law). decimal supports these; double does not.
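On the rounding-mode point: Math.Round lets you pick the mode explicitly on a decimal, and the value you round is exactly the value you wrote (a sketch; note that banker's rounding is the default):

using System;

decimal price = 2.345m;

Console.WriteLine(Math.Round(price, 2));                                // 2.34 - default is banker's rounding (to even)
Console.WriteLine(Math.Round(price, 2, MidpointRounding.AwayFromZero)); // 2.35 - the half-up rule many financial systems require

// With double, the "midpoint" you wrote may not even be the value stored,
// so the chosen rounding mode can appear not to apply:
double d = 2.345;
Console.WriteLine(Math.Round(d, 2, MidpointRounding.AwayFromZero));     // result depends on the nearest binary value to 2.345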
Definitely use integer types for your money computations.
This cannot be emphasized enough since at first glance it might seem that a floating point type is adequate.
Here's an example in Python:
>>> amount = float(100.00)  # one hundred dollars
>>> print(amount)
100.0
>>> new_amount = amount + 1
>>> print(new_amount)
101.0
>>> print(new_amount - amount)
1.0
Looks pretty normal. Now try this again with 10^20 Zimbabwe dollars:
>>> amount = float(1e20)
>>> print(amount)
1e+20
>>> new_amount = amount + 1
>>> print(new_amount)
1e+20
>>> print(new_amount - amount)
0.0
As you can see, the dollar disappeared.
If you use the integer type, it works fine:
>>> amount = int(1e20)
>>> print(amount)
100000000000000000000
>>> new_amount = amount + 1
>>> print(new_amount)
100000000000000000001
>>> print(new_amount - amount)
1
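Since the question is about C#, here's the same idea sketched with integer cents (a long covers everyday amounts; the 10^20 case above exceeds long's range, so it uses BigInteger):

using System;
using System.Numerics;

// Everyday amounts: count cents in a long (good to roughly 92 quadrillion dollars).
long balanceCents = 100_00;              // $100.00
balanceCents += 1_00;                    // + $1.00
Console.WriteLine(balanceCents / 100m);  // 101 - convert to dollars only for display

// The 10^20 Zimbabwe-dollar case overflows long, so use BigInteger:
BigInteger amount = BigInteger.Pow(10, 20);
BigInteger newAmount = amount + 1;
Console.WriteLine(newAmount - amount);   // 1 - no dollars disappear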
I think that the main difference, besides bit width, is that decimal has an exponent base of 10 while double has base 2.
http://software-product-development.blogspot.com/2008/07/net-double-vs-decimal.html
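You can see that difference in the raw representations (a sketch; decimal.GetBits exposes the base-10 scale, while a double only stores the nearest sum of powers of 2):

using System;

// decimal stores 0.1 as the integer 1 with a base-10 scale of 1, i.e. 1 x 10^-1:
int[] parts = decimal.GetBits(0.1m);
int scale = (parts[3] >> 16) & 0xFF;
Console.WriteLine($"coefficient={parts[0]}, scale={scale}");   // coefficient=1, scale=1

// double has no base-10 scale; it stores the closest binary fraction, which isn't exactly 0.1:
Console.WriteLine(0.1.ToString("G17"));                        // 0.10000000000000001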
Source: Stackoverflow.com