Answer from zwliew on Stack Overflow:
It seems like there is no maximum limit to a BigInt per the spec, which makes sense considering BigInts are supposed to be arbitrary-precision integers, whose "digits of precision are limited only by the available memory of the host system".
As for V8 specifically, according to this article on the V8 blog, the precision of BigInts is "arbitrary up to an implementation-defined limit". Unfortunately, I couldn't find any further information on how that limit is determined. Maybe someone else can shed light on this based on these V8 BigInt implementation notes?
That said, based on the aforementioned articles, there doesn't seem to be a specific maximum value or size for a BigInt; rather, it is likely determined in some way by the available memory on the system.
The maximum size of a BigInt in WebKit is defined as such:
// The maximum length that the current implementation supports would be
// maxInt / digitBits. However, we use a lower limit for now, because
// raising it later is easier than lowering it.
// Support up to 1 million bits.
static constexpr unsigned maxLength = 1024 * 1024 / (sizeof(void*) * bitsPerByte);
The size of void* is platform-dependent: 8 bytes on a 64-bit system.
So there's your answer, right? 16384, though that's a count of 64-bit digits, not bits, so the cap works out to 16384 × 64 = 1,048,576 bits (the "1 million bits" from the comment, minus one for the sign). But I can't create anywhere near that large a number in the console.
Note: the measurements below were taken in V8 (Chromium); they should extend to other engines, but do confirm and post links to benchmarks.
Note 2: all these functions assume the argument is nonnegative, and may crash, loop, or return invalid results otherwise.
simple version
return x.toString(2).length
as pointed out in the comments, even if it feels hacky to use toString(), it's unlikely there's an asymptotically faster way, because formatting a base-2 string is efficient and happens natively. it's O(n) in the number of bits of the bigint. other than that, it's short and readable, so it's ideal for most uses.
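wrapped into a reusable function (a small sketch; the name bitLengthSimple is mine, and the 0n guard is added because (0n).toString(2) is "0", which has length 1):

```javascript
// Bit length via binary string formatting. Assumes x is a nonnegative BigInt.
function bitLengthSimple(x) {
  if (x === 0n) return 0 // "0".length would otherwise report 1
  return x.toString(2).length
}
```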
better version
const i = (x.toString(16).length - 1) * 4
return i + 32 - Math.clz32(Number(x >> BigInt(i)))
this still uses the toString() approach, but with base 16 instead of base 2, meaning the temporary string it produces is 4x shorter. formatting base 16 is also efficient (no real base change required), so it's still O(n).
the gains are marginal for small bigints (+10% for 32 bits), but for big ones (such as messages or cryptographic coefficients of 512 / 1024 / 2048 / 4096 bits) it's 2.5 to 4 times faster and puts less pressure on the GC, since it allocates a much shorter string. it's ideal for hot code and non-small integers.
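the two lines above, packaged as a function (a sketch; the name bitLengthHex is mine):

```javascript
// Base-16 variant: the temporary string is 4x shorter than base 2, and
// Math.clz32 finishes off the top (at most 32) remaining bits natively.
// Assumes x is a nonnegative BigInt.
function bitLengthHex(x) {
  const i = (x.toString(16).length - 1) * 4 // bits below the top hex digit
  return i + 32 - Math.clz32(Number(x >> BigInt(i)))
}
```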
"you may be overthinking this" version
there's fortunately a more performant way that does not involve toString() and instead relies on pre-allocated integers of different sizes that are successively compared to the target to get an upper bound for its size, in ceil(log2(n)) steps.
once the upper bound is established, the exact size is determined by bisection:
// find upper bound
let k = 0
while (true) {
  if (testersN === k) {
    testersCoeff.push(32 << testersN)
    testersBigCoeff.push(BigInt(testersCoeff[testersN]))
    testers.push(1n << testersBigCoeff[testersN])
    testersN++
  }
  if (x < testers[k]) break
  k++
}
if (!k)
  return 32 - Math.clz32(Number(x))
// determine length by bisection
k--
let i = testersCoeff[k]
let a = x >> testersBigCoeff[k]
while (k--) {
  let b = a >> testersBigCoeff[k]
  if (b) {
    i += testersCoeff[k]
    a = b
  }
}
return i + 32 - Math.clz32(Number(a))
this code uses the following global variables to store cached state:
const testersCoeff = []
const testersBigCoeff = []
const testers = []
let testersN = 0
in my tests, compared with the previous version, this is 2x faster for 512 bits, 7x faster for 4096 bits. it allocates at most 1x the size of the target integer, albeit in multiple integers.
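for reference, here are the fragments above assembled into a single callable function, so they can be tried directly (same algorithm and cached state; the wrapper name bitLengthBisect is mine):

```javascript
// Cached state, as described above.
const testersCoeff = []
const testersBigCoeff = []
const testers = []
let testersN = 0

// Bit length by exponential search then bisection.
// Assumes x is a nonnegative BigInt.
function bitLengthBisect(x) {
  // find upper bound: first cached power 2^(32 << k) exceeding x
  let k = 0
  while (true) {
    if (testersN === k) {
      testersCoeff.push(32 << testersN)
      testersBigCoeff.push(BigInt(testersCoeff[testersN]))
      testers.push(1n << testersBigCoeff[testersN])
      testersN++
    }
    if (x < testers[k]) break
    k++
  }
  if (!k)
    return 32 - Math.clz32(Number(x)) // fits in 32 bits already
  // determine length by bisection over the cached shift amounts
  k--
  let i = testersCoeff[k]
  let a = x >> testersBigCoeff[k]
  while (k--) {
    const b = a >> testersBigCoeff[k]
    if (b) {
      i += testersCoeff[k]
      a = b
    }
  }
  return i + 32 - Math.clz32(Number(a)) // a now fits in 32 bits
}
```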
ideal version
with proper native support, this would essentially be an O(1) operation, since the engine already knows the length of the buffer used to store the bigint.
for now, the best we can hope for is O(n), since there's no way to probe the size of a bigint (i.e. ask whether it is smaller or larger than a certain n) that doesn't involve an allocation proportional to n in the worst case.
The following code works incorrectly in some cases (pointed out in the comments by @Matthijs); please see the safe (but slower) version at the end (benchmarks to be added later).
The following naïve version is about 3.5x faster than the currently most upvoted one for small bigints (tested on 960-bit):
const MAX = 2n ** 1023n;
function bitLengthNaive(n) {
  if (n === 0n) return 0;
  let v = n, len = 0;
  while (v >= MAX) {
    v >>= 1023n;
    len += 1023;
  }
  return len + Math.floor(Math.log2(Number(v))) + 1;
}
For bigints of fewer than 1024 bits it can be inlined to n => Math.floor(Math.log2(Number(n))) + 1 (you can see why it's called naïve). The 1023 constant comes from 2 ** 1023 being the largest power of two that can be represented as a finite Number (i.e. not Infinity).
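A quick check of that boundary:

```javascript
// 2 ** 1023 is the last finite power of two in double precision;
// one more doubling overflows to Infinity.
console.log(Number.isFinite(2 ** 1023)); // true
console.log(2 ** 1024);                  // Infinity
```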
For 10240-bit bigints it's already 7x slower (for 2048-bit it's 2x faster, and for 4096-bit 1.1x slower). Fortunately, we can modify the original solution to get one that is 2x faster on 10240-bit and 3x faster on 960-bit (a bit slower than 3.5x, but in my opinion a good trade-off):
const testersCoeff = [];
const testersBigCoeff = [];
const testers = [];
let testersN = 0;
function bitLength(n) {
  if (n === 0n) return 0;
  // find upper bound
  let k = 0;
  while (true) {
    if (testersN === k) {
      testersCoeff.push(1023 << testersN);
      testersBigCoeff.push(BigInt(testersCoeff[testersN]));
      testers.push(1n << testersBigCoeff[testersN]);
      testersN++;
    }
    if (n < testers[k]) break;
    k++;
  }
  if (!k) return Math.floor(Math.log2(Number(n))) + 1;
  // determine length by bisection
  k--;
  let i = testersCoeff[k];
  let a = n >> testersBigCoeff[k];
  while (k--) {
    let b = a >> testersBigCoeff[k];
    if (b) {
      i += testersCoeff[k];
      a = b;
    }
  }
  return i + Math.floor(Math.log2(Number(a))) + 1;
}
UPD: for "ultrasmall" bigints (64-bit), the naïve and combined versions are respectively 10% faster and 10% slower than the original solution.
Safe version (1023 replaced by 48, so every chunk that ends up converted to a Number stays well within exact double precision):
const testersCoeff = [];
const testersBigCoeff = [];
const testers = [];
let testersN = 0;
function bitLength(n) {
  if (n === 0n) return 0;
  // find upper bound
  let k = 0;
  while (true) {
    if (testersN === k) {
      testersCoeff.push(48 << testersN);
      testersBigCoeff.push(BigInt(testersCoeff[testersN]));
      testers.push(1n << testersBigCoeff[testersN]);
      testersN++;
    }
    if (n < testers[k]) break;
    k++;
  }
  if (!k) return Math.floor(Math.log2(Number(n))) + 1;
  // determine length by bisection
  k--;
  let i = testersCoeff[k];
  let a = n >> testersBigCoeff[k];
  while (k--) {
    let b = a >> testersBigCoeff[k];
    if (b) {
      i += testersCoeff[k];
      a = b;
    }
  }
  return i + Math.floor(Math.log2(Number(a))) + 1;
}
JavaScript represents all numbers as 64-bit double-precision IEEE 754 floating point numbers (see the ECMAScript spec, section 8.5). All positive integers up to 2^53 can be encoded precisely; larger integers get their least significant bits clipped. This leaves the question of how you can even represent a 64-bit integer in JavaScript: the native Number data type clearly can't precisely represent a 64-bit int.
The following illustrates this. Although JavaScript appears to be able to parse hexadecimal literals representing 64-bit numbers, the underlying numeric representation does not hold 64 bits. Try the following in your browser:
<html>
<head>
  <script language="javascript">
    function showPrecisionLimits() {
      document.getElementById("r50").innerHTML = 0x0004000000000001 - 0x0004000000000000;
      document.getElementById("r51").innerHTML = 0x0008000000000001 - 0x0008000000000000;
      document.getElementById("r52").innerHTML = 0x0010000000000001 - 0x0010000000000000;
      document.getElementById("r53").innerHTML = 0x0020000000000001 - 0x0020000000000000;
      document.getElementById("r54").innerHTML = 0x0040000000000001 - 0x0040000000000000;
    }
  </script>
</head>
<body onload="showPrecisionLimits()">
  <p>(2^50+1) - (2^50) = <span id="r50"></span></p>
  <p>(2^51+1) - (2^51) = <span id="r51"></span></p>
  <p>(2^52+1) - (2^52) = <span id="r52"></span></p>
  <p>(2^53+1) - (2^53) = <span id="r53"></span></p>
  <p>(2^54+1) - (2^54) = <span id="r54"></span></p>
</body>
</html>
In Firefox, Chrome and IE I get the following. If numbers were stored in their full 64-bit glory, the result should have been 1 for all the subtractions. Instead, you can see how the difference between 2^53+1 and 2^53 is lost.
(2^50+1) - (2^50) = 1
(2^51+1) - (2^51) = 1
(2^52+1) - (2^52) = 1
(2^53+1) - (2^53) = 0
(2^54+1) - (2^54) = 0
So what can you do?
If you choose to represent a 64-bit integer as two 32-bit numbers, then applying a bitwise AND is as simple as applying two bitwise ANDs, to the low and high 32-bit 'words'.
For example:
var a = [ 0x0000ffff, 0xffff0000 ];
var b = [ 0x00ffff00, 0x00ffff00 ];
var c = [ a[0] & b[0], a[1] & b[1] ];
document.body.innerHTML = c[0].toString(16) + ":" + c[1].toString(16);
gets you:
ff00:ff0000
Here is code for ANDing int64 numbers; you can replace AND with another bitwise operation:
function and(v1, v2) {
  var hi = 0x80000000;
  var low = 0x7fffffff;
  var hi1 = ~~(v1 / hi);
  var hi2 = ~~(v2 / hi);
  var low1 = v1 & low;
  var low2 = v2 & low;
  var h = hi1 & hi2;
  var l = low1 & low2;
  return h * hi + l;
}
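A quick usage sketch (the sample values are mine; note this approach only stays exact for integers below 2^53, the precision limit discussed above):

```javascript
// The and() helper from above, repeated so this snippet runs standalone.
function and(v1, v2) {
  var hi = 0x80000000;
  var low = 0x7fffffff;
  var h = (~~(v1 / hi)) & (~~(v2 / hi)); // AND of the parts above bit 31
  var l = (v1 & low) & (v2 & low);       // AND of the low 31 bits
  return h * hi + l;
}

// Identical high halves, differing low bytes: only shared bits survive.
console.log(and(2 ** 40 + 255, 2 ** 40 + 15)); // 1099511627791, i.e. 2 ** 40 + 15
```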
MDN doc
BigInt is a built-in object that provides a way to represent whole numbers larger than 2^53 - 1, which is the largest number JavaScript can reliably represent with the Number primitive, as reflected by the Number.MAX_SAFE_INTEGER constant. BigInt can be used for arbitrarily large integers.
Differences:
- BigInt cannot be used with methods in the built-in Math object and cannot be mixed with instances of Number in operations.
- Because coercing between Number and BigInt can lead to loss of precision, it is recommended to only use BigInt when values greater than 2^53 are reasonably expected, and not to coerce between the two types.
The differences between BigInt and Number:

| | Number | BigInt |
|---|---|---|
| Safe range | Up to Number.MAX_SAFE_INTEGER; loses precision outside this range | Extremely large integers |
| Math operations | Supports the Math object | Basic arithmetic operators: + - * / % ** |
| Decimal vs. integer support | Integers and decimals, e.g. 1.00, 2.56 | Only integers, e.g. 1n, 2n; division truncates: 5n / 2n === 2n |
| Syntax | No special syntax | Append n to the end of a number literal, or call BigInt('100') (BigInt is not a constructor, so new BigInt() throws) |
| Type conversion | Coerced implicitly in most operations | Requires explicit conversion for arithmetic involving Number values; mixing the two throws a TypeError |
| JSON | Supported by default | Not serializable by default |
Use BigInt when you need to work with extremely large integers or require precise integer arithmetic without any loss of precision.
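The table rows can be verified directly in a console (a quick sketch of each claim):

```javascript
// Safe range: Number can't distinguish 2^53 from 2^53 + 1; BigInt can.
console.log(Number.MAX_SAFE_INTEGER);                 // 9007199254740991
console.log(9007199254740992 === 9007199254740993);   // true  (Number)
console.log(9007199254740992n === 9007199254740993n); // false (BigInt)

// Integer-only arithmetic: BigInt division truncates toward zero.
console.log(5n / 2n); // 2n

// Type conversion: mixing BigInt and Number in arithmetic throws.
try { console.log(1n + 1); } catch (e) { console.log(e.name); } // TypeError

// JSON: BigInt is not serializable by default.
try { JSON.stringify({ big: 1n }); } catch (e) { console.log(e.name); } // TypeError
```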