Is this defined by the language? Is there a defined maximum? Is it different in different browsers?
Tags: javascript, math, browser, cross-browser
In JavaScript, the maximum safe integer is 2^53 - 1 (Number.MAX_SAFE_INTEGER).
Many earlier answers have shown that 9007199254740992 === 9007199254740992 + 1 evaluates to true, verifying that 9,007,199,254,740,991 is the maximum safe integer.
But what if we keep adding:
input: 9007199254740992 + 1 output: 9007199254740992 expected: 9007199254740993
input: 9007199254740992 + 2 output: 9007199254740994 expected: 9007199254740994
input: 9007199254740992 + 3 output: 9007199254740996 expected: 9007199254740995
input: 9007199254740992 + 4 output: 9007199254740996 expected: 9007199254740996
We can see that among numbers greater than 9,007,199,254,740,992, only even numbers are representable.
This is a good entry point for explaining how the 64-bit double-precision format works. Let's see how 9,007,199,254,740,992 is represented in this binary format.
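This rounding is easy to reproduce in any JavaScript console (a quick sketch):

```javascript
// Literals just past 2^53 are rounded to the nearest representable double.
console.log(9007199254740993);  // prints 9007199254740992 (rounded down)
console.log(9007199254740994);  // prints 9007199254740994 (even, exact)

// Number.isSafeInteger draws exactly this line:
console.log(Number.isSafeInteger(9007199254740991)); // true
console.log(Number.isSafeInteger(9007199254740992)); // false
```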
First, here is 4,503,599,627,370,496:
1 . 0000 ----- 0000 * 2^52 => 1 0000 ----- 0000.
| - 52 bits - | |exponent part| | - 52 bits - |.
On the left side of the arrow, we have the bit value 1, an adjacent radix point, and a fraction part consisting of 52 bits. This number is multiplied by 2^52, which moves the radix point 52 steps to the right. The radix point ends up at the end, and we get 4503599627370496 in binary.
Now let's keep incrementing the fraction part by 1 until all the bits are set to 1, which equals 9,007,199,254,740,991.
1 . 0000 ----- 0000 * 2^52 => 1 0000 ----- 0000. (like above)
(+1)
1 . 0000 ----- 0001 * 2^52 => 1 0000 ----- 0001.
(+1)
1 . 0000 ----- 0010 * 2^52 => 1 0000 ----- 0010.
.
.
.
1 . 1111 ----- 1111 * 2^52 => 1 1111 ----- 1111.
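The binary forms above can be verified with toString(2) (a quick sketch):

```javascript
// 2^52 is a 1 followed by 52 zeros: 53 binary digits in total.
console.log((4503599627370496).toString(2).length);  // 53

// 2^53 - 1 is fifty-three 1 bits.
console.log((9007199254740991).toString(2));
// "11111111111111111111111111111111111111111111111111111"
```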
Because the 64-bit double-precision format strictly allots 52 bits for the fraction part, no more bits are available to carry another 1. All we can do is set every fraction bit back to 0 and increase the exponent part (which can technically go as high as 2^1023):
(The leading 1 bit below is implicit and persistent; it is not stored among the 52 fraction bits.)
1 . 1111 ----- 1111 * 2^52 => 1 1111 ----- 1111.
(+1) (radix point won't have anywhere to go)
1 . 0000 ----- 0000 * 2^52 * 2 => 1 0000 ----- 0000. * 2
=>
1 . 0000 ----- 0000 * 2^53 (exponent has increased)
Now we've reached our limit of 9,007,199,254,740,992. Since the exponent has increased to 2^53 and the fraction part is only 52 bits long, the format can only handle increments of 2: for every increment of 1 in the fraction part, the resulting decimal number is 2 greater. Using 2^52 of the multiplier to move the radix point to the end again, we end up with:
1 . 0000 ----- 0000 * 2^53 => 1 0000 ----- 0000. * 2 <--- we're left with a multiplication of 2 here
| - 52 bits - | | - 52 bits - |.
This is why the 64-bit double-precision format cannot hold odd numbers once the number is greater than 9,007,199,254,740,992, and why people like to point out that 9007199254740992 === 9007199254740992 + 1 evaluates to true.
Following this pattern, when the number gets greater than 9,007,199,254,740,992 * 2 = 18,014,398,509,481,984, only multiples of 4 can be represented:
input: 18014398509481984 + 1 output: 18014398509481984 expected: 18014398509481985
input: 18014398509481984 + 2 output: 18014398509481984 expected: 18014398509481986
input: 18014398509481984 + 3 output: 18014398509481988 expected: 18014398509481987
input: 18014398509481984 + 4 output: 18014398509481988 expected: 18014398509481988
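The growing step size can be measured directly; `smallestEffectiveIncrement` below is our own helper name, a sketch rather than part of the original answer:

```javascript
// Find the smallest power-of-two increment that actually changes x.
function smallestEffectiveIncrement(x) {
  var inc = 1;
  while (x + inc === x) {
    inc *= 2; // additions below the gap between adjacent doubles are lost
  }
  return inc;
}

console.log(smallestEffectiveIncrement(Math.pow(2, 53))); // 2
console.log(smallestEffectiveIncrement(Math.pow(2, 54))); // 4
console.log(smallestEffectiveIncrement(Math.pow(2, 55))); // 8
```

The gap doubles with each power of two, exactly as the tables above show.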
How about numbers 2,251,799,813,685,248 through 4,503,599,627,370,495?
We've seen that we can technically go beyond what we like to treat as the biggest possible number (it's rather the biggest safe integer; it's called MAX_SAFE_INTEGER for a reason), but that we're then limited to increments of 2, later 4, then 8, and so on.
Below those numbers, though, there are other limits, following the same pattern:
1 . 0000 ----- 0001 * 2^51 => 1 0000 ----- 000 . 1
| - 52 bits - | | - 52 bits - |
The value 0.1 in binary is exactly 2^-1 (= 1/2 = 0.5). So when the number is in the aforementioned range, the least significant bit represents 1/2:
input: 4503599627370495.5 output: 4503599627370495.5
input: 4503599627370495.7 output: 4503599627370495.5
A higher precision isn't possible.
Less than 2,251,799,813,685,248 (2^51):
input: 2251799813685246.3 output: 2251799813685246.25
input: 2251799813685246.5 output: 2251799813685246.5
input: 2251799813685246.75 output: 2251799813685246.75
// Please note that if you try this yourself and, say, log these numbers to the console,
// they will get rounded: JavaScript rounds once the number of digits exceeds 17. The value
// is internally held correctly:
input: 2251799813685246.25.toString(2)
output: "111111111111111111111111111111111111111111111111110.01"
input: 2251799813685246.75.toString(2)
output: "111111111111111111111111111111111111111111111111110.11"
input: 2251799813685246.78.toString(2)
output: "111111111111111111111111111111111111111111111111110.11"
The exponent part has 11 bits allotted for it by the format.
From Wikipedia (for more details, go there), the value of a double is (-1)^sign × (1 + fraction) × 2^(e − 1023).
So to make the exponent part equal 2^52, we need to set e to exactly 1075.
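One way to see e = 1075 concretely is to inspect the raw IEEE 754 bits with a DataView (a sketch, not part of the original answer):

```javascript
// Read the biased exponent field of Number.MAX_SAFE_INTEGER (2^53 - 1),
// which is 1.111...1 * 2^52, so its biased exponent is 52 + 1023 = 1075.
var buf = new ArrayBuffer(8);
var view = new DataView(buf);
view.setFloat64(0, Number.MAX_SAFE_INTEGER); // big-endian by default

// The high 32 bits hold: 1 sign bit, 11 exponent bits, 20 fraction bits.
var hi = view.getUint32(0);
var biasedExponent = (hi >>> 20) & 0x7ff;
console.log(biasedExponent); // 1075
```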
Scato wrote:
anything you want to use for bitwise operations must be between 0x80000000 (-2147483648 or -2^31) and 0x7fffffff (2147483647 or 2^31 - 1).
the console will tell you that 0x80000000 equals +2147483648, but 0x80000000 & 0x80000000 equals -2147483648
Hex literals are unsigned positive values, so 0x80000000 = 2147483648, which is mathematically correct. If you want to make it a signed value, you have to right shift: 0x80000000 >> 0 = -2147483648. You can write 1 << 31 instead, too.
>= ES6:
Number.MIN_SAFE_INTEGER;
Number.MAX_SAFE_INTEGER;
<= ES5
From the reference:
Number.MAX_VALUE;
Number.MIN_VALUE;
console.log('MIN_VALUE', Number.MIN_VALUE);
console.log('MAX_VALUE', Number.MAX_VALUE);
console.log('MIN_SAFE_INTEGER', Number.MIN_SAFE_INTEGER); //ES6
console.log('MAX_SAFE_INTEGER', Number.MAX_SAFE_INTEGER); //ES6
Firefox 3 doesn't seem to have a problem with huge numbers.
1e+200 * 1e+100 will calculate fine to 1e+300.
Safari seems to have no problem with it either. (For the record, this is on a Mac, if anyone else decides to test this.)
Unless I lost my brain at this time of day, this is way bigger than a 64-bit integer.
The MAX_SAFE_INTEGER constant has a value of 9007199254740991 (9,007,199,254,740,991, or ~9 quadrillion). The reasoning behind that number is that JavaScript uses double-precision floating-point format numbers as specified in IEEE 754 and can only safely represent numbers between -(2^53 - 1) and 2^53 - 1.
Safe in this context refers to the ability to represent integers exactly and to correctly compare them. For example, Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2 will evaluate to true, which is mathematically incorrect. See Number.isSafeInteger() for more information.
Because MAX_SAFE_INTEGER is a static property of Number, you always use it as Number.MAX_SAFE_INTEGER, rather than as a property of a Number object you created.
Node.js and Google Chrome both use 64-bit floating-point values (whose exponent allows magnitudes up to just under 2^1024), so:
Number.MAX_VALUE = 1.7976931348623157e+308
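To illustrate, multiplying past MAX_VALUE overflows to Infinity, while values below it stay finite:

```javascript
console.log(Number.MAX_VALUE);      // 1.7976931348623157e+308
console.log(Number.MAX_VALUE * 2);  // Infinity (overflow)
console.log(1e200 * 1e100);         // 1e+300 (still finite)
```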
Jimmy's answer correctly represents the continuous JavaScript integer spectrum as -9007199254740992 to 9007199254740992 inclusive (sorry 9007199254740993, you might think you are 9007199254740993, but you are wrong! Demonstration below or in jsfiddle).
console.log(9007199254740993);
However, there is no answer that finds/proves this programmatically (other than the one CoolAJ86 alluded to in his answer that would finish in 28.56 years ;), so here's a slightly more efficient way to do that (to be precise, it's more efficient by about 28.559999999968312 years :), along with a test fiddle:
/**
 * Checks if adding/subtracting one to/from a number yields the correct result.
 *
 * @param number The number to test
 * @return true if you can add/subtract 1, false otherwise.
 */
var canAddSubtractOneFromNumber = function(number) {
  var numMinusOne = number - 1;
  var numPlusOne = number + 1;

  return ((number - numMinusOne) === 1) && ((number - numPlusOne) === -1);
}

//Find the highest number
var highestNumber = 3; //Start with an integer 1 or higher

//Get a number higher than the valid integer range
while (canAddSubtractOneFromNumber(highestNumber)) {
  highestNumber *= 2;
}

//Find the lowest number you can't add/subtract 1 from
var numToSubtract = highestNumber / 4;
while (numToSubtract >= 1) {
  while (!canAddSubtractOneFromNumber(highestNumber - numToSubtract)) {
    highestNumber = highestNumber - numToSubtract;
  }

  numToSubtract /= 2;
}

//And there was much rejoicing. Yay.
console.log('HighestNumber = ' + highestNumber);
At the time of writing, JavaScript is receiving a new data type: BigInt. It is a TC39 proposal at stage 4, to be included in ECMAScript 2020. BigInt is available in Chrome 67+, Firefox 68+, Opera 54 and Node 10.4.0, and is underway in Safari, among others. It introduces numeric literals with an "n" suffix and allows for arbitrary precision:
var a = 123456789012345678901012345678901n;
Precision will still be lost, of course, when such a number is (perhaps unintentionally) coerced to the number data type.
And, obviously, there will always be precision limitations due to finite memory, and a cost in time to allocate the necessary memory and to perform arithmetic on such large numbers.
For instance, generating a number with a hundred thousand decimal digits takes a noticeable delay before completion:
console.log(BigInt("1".padEnd(100000,"0")) + 1n)
...but it works.
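A short sketch of how BigInt avoids the rounding that plain numbers suffer (the mixed-type line is commented out because it throws):

```javascript
var n = BigInt(Number.MAX_SAFE_INTEGER); // 9007199254740991n
console.log(n + 1n); // 9007199254740992n
console.log(n + 2n); // 9007199254740993n, the odd value is kept exactly

// Mixing BigInt and Number throws instead of silently losing precision:
// n + 1; // TypeError: Cannot mix BigInt and other types
```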
I did a simple test with a formula, X-(X+1)=-1, and the largest value of X I can get to work on Safari, Opera and Firefox (tested on OS X) is 9e15. Here is the code I used for testing:
javascript: alert(9e15-(9e15+1));
In Google Chrome's built-in JavaScript, you can go to approximately 2^1024 before the number is called Infinity.
The short answer is “it depends.”
If you’re using bitwise operators anywhere (or if you’re referring to the length of an Array), the ranges are:
Unsigned: 0…(-1>>>0)
Signed: (-(-1>>>1)-1)…(-1>>>1)
(It so happens that the bitwise operators and the maximum length of an array are restricted to 32-bit integers.)
If you’re not using bitwise operators or working with array lengths:
Signed: (-Math.pow(2,53))…(+Math.pow(2,53))
These limitations are imposed by the internal representation of the “Number” type, which generally corresponds to IEEE 754 double-precision floating-point representation. (Note that unlike typical signed integers, the magnitude of the negative limit is the same as the magnitude of the positive limit, due to characteristics of the internal representation, which actually includes a negative 0!)
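Evaluating those expressions in a console gives the concrete bounds:

```javascript
console.log(-1 >>> 0);          // 4294967295   (unsigned 32-bit max)
console.log(-1 >>> 1);          // 2147483647   (signed 32-bit max)
console.log(-(-1 >>> 1) - 1);   // -2147483648  (signed 32-bit min)
console.log(Math.pow(2, 53));   // 9007199254740992
```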
ECMAScript 6:
Number.MAX_SAFE_INTEGER = Math.pow(2, 53)-1;
Number.MIN_SAFE_INTEGER = -Number.MAX_SAFE_INTEGER;
I write it like this:
var max_int = 0x20000000000000;
var min_int = -0x20000000000000;
(max_int + 1) === 0x20000000000000; //true
(max_int - 1) < 0x20000000000000; //true
Same for int32
var max_int32 = 0x7fffffff;
var min_int32 = -0x80000000;
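These hex constants are just powers of two written in hexadecimal; a quick check:

```javascript
console.log(0x20000000000000 === Math.pow(2, 53)); // true
console.log(0x80000000 === Math.pow(2, 31));       // true
```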
It is 2^53 == 9,007,199,254,740,992. This is because Numbers are stored as floating-point with a 52-bit mantissa.
The min value is -2^53.
This makes some fun things happen:
Math.pow(2, 53) == Math.pow(2, 53) + 1
>> true
And can also be dangerous :)
var MAX_INT = Math.pow(2, 53); // 9 007 199 254 740 992
for (var i = MAX_INT; i < MAX_INT + 2; ++i) {
// infinite loop
}
Further reading: http://blog.vjeux.com/2010/javascript/javascript-max_int-number-limits.html
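One defensive pattern that avoids such silent counter failures (a sketch; `safeIncrement` is our own name, not a standard API):

```javascript
// Refuse to increment once the result would leave the safe integer range.
function safeIncrement(n) {
  if (!Number.isSafeInteger(n + 1)) {
    throw new RangeError("would exceed Number.MAX_SAFE_INTEGER");
  }
  return n + 1;
}

console.log(safeIncrement(41)); // 42
// safeIncrement(Number.MAX_SAFE_INTEGER) throws a RangeError
```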
In JavaScript, there is a number called Infinity.
Examples:
(Infinity>100)
=> true
// Also worth noting
Infinity - 1 == Infinity
=> true
Math.pow(2,1024) === Infinity
=> true
This may be sufficient for some questions regarding this topic.
Try:
maxInt = -1 >>> 1
In Firefox 3.6 it's 2^31 - 1.
Others may have already given the generic answer, but I thought it would be a good idea to give a fast way of determining it:
for (var x = 2; x + 1 !== x; x *= 2);
console.log(x);
Which gives me 9007199254740992 within less than a millisecond in Chrome 30.
It tests powers of 2 to find which one, when 1 is 'added', equals itself.
var MAX_INT = 4294967295;
I thought I'd be clever and find the value at which x + 1 === x with a more pragmatic approach.
My machine can only count 10 million per second or so... so I'll post back with the definitive answer in 28.56 years.
If you can't wait that long, I'm willing to bet that 9007199254740992 === Math.pow(2, 53) + 1 is proof enough, and that it's safe to stick to 4294967295, which is Math.pow(2,32) - 1, to avoid expected issues with bit-shifting.
Finding x + 1 === x:
(function () {
  "use strict";
  var x = 0,
      start = new Date().valueOf();
  while (x + 1 != x) {
    if (!(x % 10000000)) {
      console.log(x);
    }
    x += 1;
  }
  console.log(x, new Date().valueOf() - start);
}());
Anything you want to use for bitwise operations must be between 0x80000000 (-2147483648 or -2^31) and 0x7fffffff (2147483647 or 2^31 - 1).
The console will tell you that 0x80000000 equals +2147483648, but 0x80000000 & 0x80000000 equals -2147483648.
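This coercion can be seen by forcing a value through any bitwise operator (a quick sketch):

```javascript
console.log(0x80000000);              // 2147483648 (an ordinary positive number)
console.log(0x80000000 | 0);          // -2147483648 (reinterpreted as signed 32-bit)
console.log(0x80000000 & 0x80000000); // -2147483648
console.log(1 << 31);                 // -2147483648
```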
Source: Stackoverflow.com