The ECMAScript specification does not specify a bounding complexity, but you can derive one from the specification's algorithms.
push is O(1); however, in practice it will incur an O(N) copy cost at engine-defined boundaries where the slot array needs to be reallocated. These boundaries are typically spaced geometrically, so only a logarithmic number of reallocations occur and push remains amortized O(1).
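To make that concrete, here is a minimal sketch of the idea (my own illustration, not actual engine code; the class name and growth factor are arbitrary): a dynamic array that doubles its slot storage when full, so the O(N) copy happens only at geometrically spaced capacities.

class DynamicArray {
  constructor() {
    this.slots = new Array(4); // arbitrary initial capacity
    this.length = 0;
  }
  push(value) {
    if (this.length === this.slots.length) {
      // O(N) copy, but it only happens at capacities 4, 8, 16, ...
      // i.e. logarithmically many times over N pushes, so push is amortized O(1).
      const bigger = new Array(this.slots.length * 2);
      for (let i = 0; i < this.length; i++) bigger[i] = this.slots[i];
      this.slots = bigger;
    }
    this.slots[this.length++] = value; // the common O(1) path
  }
}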
pop is O(1) with a similar caveat to push, but the O(N) copy is rarely encountered, as it is often folded into garbage collection (e.g. a copying collector might copy only the used part of an array).
shift is at worst O(N); however, it can, in special cases, be implemented as O(1) at the cost of slowing down indexing, so your mileage may vary.
slice is O(N) where N is end - start. Not a tremendous amount of optimization opportunity here without significantly slowing down writes to both arrays.
splice is, worst case, O(N). There are array storage techniques that divide N by a constant but they significantly slow down indexing. If an engine uses such techniques you might notice unusually slow operations as it switches between storage techniques triggered by access pattern changes.
One you didn't mention is sort. It is, in the average case, O(N log N). However, depending on the algorithm chosen by the engine, you could get O(N^2) in some cases. For example, if the engine uses QuickSort (even with a late switch to InsertionSort for small partitions), it has well-known O(N^2) cases. This could be a source of DoS for your application. If this is a concern, either limit the size of the arrays you sort and merge the sorted sub-arrays (as sketched below), or bail out to HeapSort.
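If you want to guard against that defensively, one way to apply the "limit the size and merge" idea is to sort fixed-size chunks with the engine's sort and merge the sorted runs yourself. This is only a sketch under my own assumptions (chunkedSort, merge and the 1024 chunk size are made up for illustration, and the default comparator assumes numbers); even if the engine's sort degrades to O(k^2) on a chunk, k is a constant, so the total cost is dominated by the O(N log(N/k)) merging.

function chunkedSort(arr, chunkSize = 1024, compare = (a, b) => a - b) {
  // Sort each fixed-size chunk independently with the engine's sort.
  let runs = [];
  for (let i = 0; i < arr.length; i += chunkSize) {
    runs.push(arr.slice(i, i + chunkSize).sort(compare));
  }
  // Merge pairs of sorted runs until one remains.
  while (runs.length > 1) {
    const merged = [];
    for (let i = 0; i < runs.length; i += 2) {
      merged.push(i + 1 < runs.length ? merge(runs[i], runs[i + 1], compare) : runs[i]);
    }
    runs = merged;
  }
  return runs[0] ?? [];
}

function merge(a, b, compare) {
  const out = [];
  let i = 0, j = 0;
  while (i < a.length && j < b.length) {
    out.push(compare(a[i], b[j]) <= 0 ? a[i++] : b[j++]);
  }
  while (i < a.length) out.push(a[i++]);
  while (j < b.length) out.push(b[j++]);
  return out;
}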
In very simple words:
push -> O(1)
pop -> O(1)
shift -> O(N)
slice -> O(N)
splice -> O(N)
push() is faster.
js>function foo() {a=[]; start = new Date; for (var i=0;i<100000;i++) a.unshift(1); return((new Date)-start)}
js>foo()
2190
js>function bar() {a=[]; start = new Date; for (var i=0;i<100000;i++) a.push(1); return((new Date)-start)}
js>bar()
10
function foo() { var a = []; var start = new Date; for (var i = 0; i < 100000; i++) a.unshift(1); return (new Date) - start; }
console.log(foo());
function bar() { var a = []; var start = new Date; for (var i = 0; i < 100000; i++) a.push(1); return (new Date) - start; }
console.log(bar());
Update
The above does not take the order of the resulting arrays into consideration. If you want to compare them properly, you must reverse the pushed array. However, push followed by reverse is still faster by ~10 ms for me on Chrome with this snippet:
var a=[];
var start = new Date;
for (var i=0;i<100000;i++) {
a.unshift(1);
}
var end = (new Date)-start;
console.log(`Unshift time: ${end}`);
var a=[];
var start = new Date;
for (var i=0;i<100000;i++) {
a.push(1);
}
a.reverse();
var end = (new Date)-start;
console.log(`Push and reverse time: ${end}`);
The JavaScript language spec does not mandate the time complexity of these functions, as far as I know.
It is certainly possible to implement an array-like data structure (O(1) random access) with O(1) push and unshift operations. The C++ std::deque is an example. A JavaScript implementation that used C++ deques to represent JavaScript arrays internally would therefore have O(1) push and unshift operations.
But if you need to guarantee such time bounds, you will have to roll your own, like this:
http://code.stephenmorley.org/javascript/queues/
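The core trick behind such a queue (this is my own minimal sketch, not the code from that page) is to keep a head index instead of calling shift(), so dequeuing is O(1) and the O(N) cleanup copy only happens occasionally:

class Queue {
  constructor() {
    this.items = [];
    this.head = 0;
  }
  enqueue(value) {
    this.items.push(value); // amortized O(1), same as push
  }
  dequeue() {
    if (this.head >= this.items.length) return undefined;
    const value = this.items[this.head++]; // O(1): just advance the head index
    // Occasionally drop the consumed prefix so memory is reclaimed;
    // this copy is O(N) but amortizes to O(1) per dequeue.
    if (this.head * 2 >= this.items.length) {
      this.items = this.items.slice(this.head);
      this.head = 0;
    }
    return value;
  }
}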
Hello all, does anyone know of a resource to find the time complexity of common JavaScript methods such as .map, .reduce, etc.?
I would usually follow the rule of "look at the loops", but I don't know how all those functions are implemented under the hood.
I have a function that performs intersection as follows:
Array.from(new Set(arr.filter((i) => arr2.includes(i)))) or let p = new Set(arr); return [...p].filter((item) => arr2.includes(item));
I feel it's linear time, since by common sense the time would also increase if the arrays increase in length, but I'm not sure.
It's O(n). When used on an iterable (like a Set), Array.from iterates over the iterable and puts every item returned into the new array, so there's an operation for every item returned by the iterable.
It is always going to be O(n), as the number of iterations is directly proportional to the number of elements in the set. The actual time complexity would be O(n) for retrieving values from the set plus O(n) for pushing them into an array:
O(n) + O(n) = O(2n)
But since we drop constant factors when reasoning in terms of n, it is simply O(n).
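If you want to see that for yourself, a small sketch (the counted helper is just something I made up for illustration) is to wrap the Set in a generator that counts how many times Array.from pulls a value:

// Count how many items Array.from pulls from an iterable.
function* counted(iterable, counter) {
  for (const item of iterable) {
    counter.calls++; // one pull per element
    yield item;
  }
}

const set = new Set([1, 2, 3, 4, 5]);
const counter = { calls: 0 };
const arr = Array.from(counted(set, counter));

console.log(arr);           // [1, 2, 3, 4, 5]
console.log(counter.calls); // 5 -> one iteration per element, i.e. O(n)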