Binary floating-point numbers have interesting precision characteristics, since the value is stored as a binary integer (the mantissa) multiplied by a power of two. When dealing with fractional values (that is, values between 0 and 1), negative powers of two "round off" very differently than negative powers of ten.
For example, the number 0.1 can be represented exactly as 1 × 10^-1 in base 10, but there is no combination of base-2 exponent and mantissa that can precisely represent 0.1 -- the closest you get, to 17 significant digits, is 0.10000000000000001.
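You can see this directly in Python (a quick sketch; the output shown assumes IEEE 754 double precision, which is what CPython uses on essentially all platforms):

```python
>>> # 0.1 is stored as a binary integer scaled by a power of two:
>>> (0.1).hex()
'0x1.999999999999ap-4'
>>> (0.1).as_integer_ratio()
(3602879701896397, 36028797018963968)
>>> # The full decimal expansion of the value actually stored:
>>> from decimal import Decimal
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> # Rounded to 17 significant digits, that is the 0.10000000000000001
>>> # quoted above:
>>> format(0.1, '.17g')
'0.10000000000000001'
```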
So if you have an application where you are working with values like 0.1 or 0.01 a great deal, but where even tiny errors (less than 0.000000000000001%) cannot be tolerated, then binary floating-point numbers are not for you.
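For that kind of application, Python ships a base-10 alternative in the `decimal` module; a minimal sketch:

```python
from decimal import Decimal

# Decimal stores digits in base 10, so values like 0.1 are exact.
# Construct from strings, not floats, to avoid importing the binary
# rounding error you are trying to escape:
a = Decimal('0.10')
b = Decimal('0.20')

print(a + b)                     # 0.30 -- exact
print(a + b == Decimal('0.30'))  # True

# Contrast with binary floats:
print(0.1 + 0.2)                 # 0.30000000000000004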
Conversely, if powers of ten are not "special" to your application (powers of ten are important in currency calculations, but not in, say, most physics applications), then you are actually better off using binary floating-point: it is usually at least an order of magnitude faster than software decimal arithmetic, and it is much more memory-efficient.
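You can get a feel for the gap on your own hardware with a quick (and unscientific) timing sketch; exact numbers will vary by machine and Python version, and with CPython's C-accelerated `decimal` module the gap for simple operations may be narrower than the order-of-magnitude figure above:

```python
import timeit

# Time the same addition with binary floats and with Decimal.
# Variables are bound in setup so the float expression is not
# constant-folded away by the compiler:
float_time = timeit.timeit(
    'a + b',
    setup='a = 0.1; b = 0.2',
    number=1_000_000,
)
decimal_time = timeit.timeit(
    'a + b',
    setup="from decimal import Decimal; a = Decimal('0.1'); b = Decimal('0.2')",
    number=1_000_000,
)
print(f'float:   {float_time:.3f} s')
print(f'Decimal: {decimal_time:.3f} s')
```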
The article "Floating Point Arithmetic: Issues and Limitations" in the Python tutorial (https://docs.python.org/3/tutorial/floatingpoint.html) does an excellent job of explaining this issue in an easy-to-understand form. Wikipedia also has a good article on floating-point arithmetic that explains the math behind the representation.