[floating-point] What's the difference between a single precision and double precision floating point operation?

Double precision means the number takes twice the word length to store. On a 32-bit processor, words are 32 bits, so double-precision numbers are 64 bits. In performance terms, this means operations on double-precision numbers take a little longer to execute. So you get a greater range and more precision, but there is a small performance hit. That hit is mitigated somewhat by hardware floating-point units, but it's still there.
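To make that concrete, here is a minimal C sketch (my illustration, assuming a typical platform where `float` is IEEE 754 single precision and `double` is IEEE 754 double precision) that prints the storage size, decimal precision, and range of each type:

```c
#include <stdio.h>
#include <float.h>

int main(void) {
    /* Storage: double takes twice the bytes of float on typical platforms. */
    printf("sizeof(float)  = %zu bytes\n", sizeof(float));   /* usually 4 */
    printf("sizeof(double) = %zu bytes\n", sizeof(double));  /* usually 8 */

    /* Precision: float carries ~6-7 significant decimal digits, double ~15-16. */
    printf("FLT_DIG = %d, DBL_DIG = %d\n", FLT_DIG, DBL_DIG);

    /* Range: double's wider exponent field gives a far larger maximum value. */
    printf("FLT_MAX = %e\nDBL_MAX = %e\n", FLT_MAX, DBL_MAX);
    return 0;
}
```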

The N64 used an NEC VR4300, a MIPS R4300i-based 64-bit processor, but it communicates with the rest of the system over a 32-bit-wide bus. So most developers used 32-bit numbers because they were faster, and most games at the time did not need the additional precision (so they used floats, not doubles). A small sketch of what that "additional precision" buys you follows below.
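As an illustration (my example, not from the original answer), here is a short C sketch showing where single precision first runs out: 2^24 + 1 = 16,777,217 exceeds a float's 24-bit significand, but fits easily in a double's 53-bit significand.

```c
#include <stdio.h>

int main(void) {
    /* 16777217 = 2^24 + 1 cannot be represented exactly in single
       precision, so it is rounded; double precision holds it exactly. */
    float  f = 16777217.0f;
    double d = 16777217.0;
    printf("float : %.1f\n", f);  /* prints 16777216.0 - rounded */
    printf("double: %.1f\n", d);  /* prints 16777217.0 - exact */
    return 0;
}
```

For game coordinates and physics at the scales of that era, values never got close to that boundary, which is why single precision was generally enough.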

All three systems can do single- and double-precision floating-point operations, but they might not use double precision because of performance (and pretty much everything after the N64 used a 32-bit bus anyway, so the same trade-off applied).
