[c#] What does the M stand for in C# Decimal literal notation?

In order to work with decimal data types, I have to do this with variable initialization:

decimal aValue = 50.0M;

What does the M part stand for?


The answers are:


Well, I guess the M represents the mantissa. Decimal can be used to store money, but that doesn't mean decimal is only used for money.


M refers to the first non-ambiguous character in "decimal". If you don't add it, the number will be treated as a double.

D is double.
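
As a quick illustration of that behaviour, here is a minimal sketch (the class and variable names are just for the example):

using System;

class SuffixDemo
{
    static void Main()
    {
        // Without a suffix, 50.0 is a double literal, and there is no
        // implicit conversion from double to decimal:
        // decimal broken = 50.0;   // error CS0664

        decimal aValue = 50.0M;     // M makes the literal a decimal
        double d = 50.0D;           // D (or no suffix) makes it a double
        float f = 50.0F;            // F makes it a float

        Console.WriteLine(aValue.GetType()); // System.Decimal
        Console.WriteLine(d.GetType());      // System.Double
        Console.WriteLine(f.GetType());      // System.Single
    }
}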


A real literal suffixed by M or m is of type decimal (money). For example, the literals 1m, 1.5m, 1e10m, and 123.456M are all of type decimal. This literal is converted to a decimal value by taking the exact value, and, if necessary, rounding to the nearest representable value using banker's rounding. Any scale apparent in the literal is preserved unless the value is rounded or the value is zero (in which latter case the sign and scale will be 0). Hence, the literal 2.900m will be parsed to form the decimal with sign 0, coefficient 2900, and scale 3.

Read more about real literals
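
You can verify the 2.900m case yourself. This is just a sketch (class name is illustrative) that uses decimal.GetBits to read back the coefficient and scale:

using System;

class ScaleDemo
{
    static void Main()
    {
        decimal d = 2.900m;

        // The trailing zeros from the literal are preserved:
        Console.WriteLine(d);               // 2.900, not 2.9

        // decimal.GetBits returns four ints: the first three hold the
        // 96-bit coefficient, the fourth holds the sign and the scale
        // (scale is stored in bits 16-23).
        int[] bits = decimal.GetBits(d);
        int scale = (bits[3] >> 16) & 0xFF;
        Console.WriteLine(bits[0]);         // 2900 (low 32 bits of the coefficient)
        Console.WriteLine(scale);           // 3
    }
}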


From the C# specification:

var f = 0f; // float
var d = 0d; // double
var m = 0m; // decimal (money)
var u = 0u; // unsigned int
var l = 0l; // long
var ul = 0ul; // unsigned long

Note that the suffixes can be written in either uppercase or lowercase.
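
Here is a small sketch (class and variable names are just for the example) showing the suffix driving type inference with var, regardless of case:

using System;

class CaseDemo
{
    static void Main()
    {
        // The suffix letter decides the inferred type; case does not matter.
        var m1 = 0m;
        var m2 = 0M;
        var u1 = 0U;
        var l1 = 0L;   // prefer uppercase L: lowercase 'l' looks like the digit '1'

        Console.WriteLine(m1.GetType()); // System.Decimal
        Console.WriteLine(m2.GetType()); // System.Decimal
        Console.WriteLine(u1.GetType()); // System.UInt32
        Console.WriteLine(l1.GetType()); // System.Int64
    }
}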