The ECMAScript specification does not mandate a bound on the complexity of these operations; however, you can derive one from the algorithms it specifies.
push is O(1); in practice, however, it incurs an O(N) copy cost at engine-defined boundaries where the slot array needs to be reallocated. Those boundaries are typically spaced geometrically, so only O(log N) reallocations occur and the amortized cost stays O(1).
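To illustrate those boundaries, here is a toy sketch (my own illustration, not how any real engine is implemented) of a growable array that doubles its backing store when full: most pushes take the cheap path, and the O(N) copy happens only at the doubling points.
// Toy sketch of geometric growth; the class and field names are illustrative.
class GrowableArray {
  constructor() {
    this.capacity = 4; // arbitrary initial slot count
    this.length = 0;
    this.slots = new Array(this.capacity);
  }
  push(value) {
    if (this.length === this.capacity) {
      // Reallocation boundary: copy every existing element into a larger store (O(N)).
      this.capacity *= 2;
      const bigger = new Array(this.capacity);
      for (let i = 0; i < this.length; i++) bigger[i] = this.slots[i];
      this.slots = bigger;
    }
    this.slots[this.length++] = value; // the common O(1) path
  }
}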
pop is O(1) with a similar caveat to push, but the O(N) copy is rarely encountered, as it is often folded into garbage collection (e.g. a copying collector might copy only the used part of an array).
shift is at worst O(N); however, in special cases it can be implemented as O(1) at the cost of slowing down indexing, so your mileage may vary.
slice is O(N) where N is end - start. Not a tremendous amount of optimization opportunity here without significantly slowing down writes to both arrays.
splice is, worst case, O(N). There are array storage techniques that divide N by a constant but they significantly slow down indexing. If an engine uses such techniques you might notice unusually slow operations as it switches between storage techniques triggered by access pattern changes.
One you didn't mention is sort. It is, in the average case, O(N log N). However, depending on the algorithm the engine chooses, you could get O(N^2) in some cases. For example, if the engine uses QuickSort (even with a late switch to InsertionSort), it has well-known O(N^2) cases. This could be a source of DoS for your application. If this is a concern, either limit the size of the arrays you sort (sorting bounded sub-arrays and merging them, as sketched below) or bail out to HeapSort.
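A minimal sketch of that chunk-and-merge mitigation, assuming a numeric default comparator (the chunk size and function names are my own choices, not from any engine or library):
// Sort bounded chunks, then merge them; each chunk sort is small enough
// that even an O(N^2) worst case stays bounded.
const CHUNK = 1024; // illustrative chunk size

function mergeSorted(a, b, cmp) {
  // Standard two-way merge of two already-sorted arrays.
  const out = [];
  let i = 0, j = 0;
  while (i < a.length && j < b.length) {
    out.push(cmp(a[i], b[j]) <= 0 ? a[i++] : b[j++]);
  }
  while (i < a.length) out.push(a[i++]);
  while (j < b.length) out.push(b[j++]);
  return out;
}

function chunkedSort(arr, cmp = (x, y) => x - y) {
  let sorted = [];
  for (let i = 0; i < arr.length; i += CHUNK) {
    sorted.push(arr.slice(i, i + CHUNK).sort(cmp));
  }
  // Merge pairs of sorted chunks until one array remains.
  while (sorted.length > 1) {
    const next = [];
    for (let i = 0; i < sorted.length; i += 2) {
      next.push(i + 1 < sorted.length ? mergeSorted(sorted[i], sorted[i + 1], cmp) : sorted[i]);
    }
    sorted = next;
  }
  return sorted[0] || [];
}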
In very simple words:
push -> O(1)
pop -> O(1)
shift -> O(N)
slice -> O(N)
splice -> O(N)
Here is a complete explanation of the time complexity of arrays in JavaScript.
I can't seem to find a solid answer on this subject. What are the time complexities of array.unshift and array.shift? Does JS use a linked list or a queue/stack for arrays? I would think that the index of each element after a shift/unshift would have to be adjusted, making them linear methods, but I have also heard that because JS doesn't have C-style arrays, it can simply shift the 'head' and 'tail' (and somehow change all of the references to each index) in constant time?
Can anyone clarify?
Seems like that would depend on the specific engine's implementation.
Running a quick benchmark in Node indicates that `shift` (and therefore `unshift`) is indeed linear:
100 elements ~= 10,000,000 ops/sec
1,000 elements ~= 1,000,000 ops/sec
10,000 elements ~= 100,000 ops/sec
100,000 elements ~= 10,000 ops/sec
1,000,000 elements ~= 1,000 ops/sec
Ten times the elements ~= ten times longer, therefore it's linear, at least in Node.
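For reference, a rough sketch of such a benchmark (the sizes, iteration count, and function name are illustrative; exact numbers will vary by machine and Node version):
// Times a fixed number of shift() calls at different array lengths; roughly
// 10x fewer ops/sec per 10x elements suggests linear cost.
function shiftOpsPerSec(size, iterations = 1000) {
  const arr = new Array(size).fill(0);
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) {
    arr.shift();
    arr.push(0); // keep the length constant between iterations
  }
  const elapsedNs = Number(process.hrtime.bigint() - start);
  return Math.round(iterations / (elapsedNs / 1e9));
}

for (const size of [100, 1000, 10000, 100000, 1000000]) {
  console.log(`${size} elements: ~${shiftOpsPerSec(size)} ops/sec`);
}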
push() is faster.
js>function foo() {a=[]; start = new Date; for (var i=0;i<100000;i++) a.unshift(1); return((new Date)-start)}
js>foo()
2190
js>function bar() {a=[]; start = new Date; for (var i=0;i<100000;i++) a.push(1); return((new Date)-start)}
js>bar()
10
function foo() {a=[]; start = new Date; for (var i=0;i<100000;i++) a.unshift(1); return((new Date)-start)}
console.log(foo())
function bar() {a=[]; start = new Date; for (var i=0;i<100000;i++) a.push(1); return((new Date)-start)}
console.log(bar());
Update
The above does not take the order of the arrays into consideration. If you want to compare them properly, you must reverse the pushed array. However, push then reverse is still faster by ~10 ms for me on Chrome with this snippet:
var a=[];
var start = new Date;
for (var i=0;i<100000;i++) {
a.unshift(1);
}
var end = (new Date)-start;
console.log(`Unshift time: ${end}`);
var a=[];
var start = new Date;
for (var i=0;i<100000;i++) {
a.push(1);
}
a.reverse();
var end = (new Date)-start;
console.log(`Push and reverse time: ${end}`);
The JavaScript language spec does not mandate the time complexity of these functions, as far as I know.
It is certainly possible to implement an array-like data structure (O(1) random access) with O(1) amortized push and unshift operations; the C++ std::deque is an example. A JavaScript implementation that used C++ deques to represent JavaScript arrays internally would therefore have O(1) push and unshift operations.
But if you need to guarantee such time bounds, you will have to roll your own, like this:
http://code.stephenmorley.org/javascript/queues/
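For example, a minimal sketch of a queue with O(1) amortized enqueue/dequeue, similar in spirit to the linked page (the class and field names here are my own, not taken from it):
// Dequeued slots are reclaimed in batches instead of shifting the whole
// array on every call, so both operations are O(1) amortized.
class Queue {
  constructor() {
    this.items = [];
    this.head = 0; // index of the current front element
  }
  enqueue(item) {
    this.items.push(item); // O(1) amortized
  }
  dequeue() {
    if (this.head >= this.items.length) return undefined;
    const item = this.items[this.head++];
    // Compact occasionally so memory does not grow without bound.
    if (this.head * 2 >= this.items.length) {
      this.items = this.items.slice(this.head);
      this.head = 0;
    }
    return item;
  }
  get length() {
    return this.items.length - this.head;
  }
}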