[mysql] Difference between float and decimal data type

Hard & Fast Rule

If all you need to do is add, subtract or multiply the numbers you are storing, DECIMAL is best.
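
For example, here is a minimal sketch (the table and column names are hypothetical) showing that DECIMAL addition stays exact while the same values stored as FLOAT pick up binary rounding error:

    -- Hypothetical demo table: the same values stored both ways
    CREATE TABLE money_demo (
        amt_dec DECIMAL(10,2),
        amt_flt FLOAT
    );

    INSERT INTO money_demo VALUES (0.10, 0.10), (0.20, 0.20), (0.30, 0.30);

    -- DECIMAL sums in exact base-10 arithmetic; FLOAT cannot represent
    -- 0.1 or 0.2 exactly in binary, so its sum drifts.
    SELECT SUM(amt_dec) AS exact_sum,   -- 0.60
           SUM(amt_flt) AS float_sum    -- something like 0.60000001639128
    FROM money_demo;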

If you need to divide or do any other form of arithmetic or algebra on the data, you are almost certainly going to be happier with FLOAT. Floating-point libraries, and on Intel processors the floating-point unit itself, have tons of operations to correct, fix up, detect and handle the blizzard of exceptions that occur when doing typical math functions, especially transcendental functions.
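
To illustrate (a sketch, not tied to any particular schema): in MySQL, division of exact values is carried out in decimal arithmetic truncated to a fixed scale, while the math functions compute in double precision anyway:

    -- Exact-value division is decimal arithmetic with a fixed number of
    -- extra fractional digits (4 by default, per div_precision_increment)
    SELECT 1 / 3;          -- 0.3333

    -- Force floating point and you get full double precision
    SELECT 1E0 / 3;        -- 0.3333333333333333

    -- Transcendental functions always return double-precision results
    SELECT SQRT(2), EXP(1), LN(10);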

As for accuracy, I once wrote a budget system that computed the percentage contribution of each of 3,000+ accounts, for 3,600 budget units, by month, to that unit's consolidation node. Then, based on that matrix of percentages (3,000+ x 12 x 3,600), I multiplied the amounts budgeted at the highest organizational nodes down through the next three levels of organizational nodes, and then computed all (3,000+ x 12) values for all 3,200 detail units from that. Millions and millions of double-precision floating-point calculations, an error in any one of which would have thrown off the bottom-up roll-up of all of those projections back to the highest level in the organization.

The total floating-point error after all of those calculations was ZERO. That was in 1986, and floating-point libraries today are much, much better than they were back then. Intel does all of its intermediate calculations on doubles in 80-bit extended precision, which all but eliminates rounding error. When someone tells you "it's floating-point error," it's almost certainly NOT true.
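
You can reproduce the flavor of that experiment on a trivial scale (hypothetical numbers, just a sketch): split an amount into double-precision shares, roll them back up, and look at the residual:

    -- Split 100 into three equal double-precision shares, then roll up
    SELECT 100 - (100/3E0 + 100/3E0 + 100/3E0) AS residual;
    -- residual is on the order of 1e-14, which vanishes entirely
    -- once results are rounded back to cents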