Yes, V8 has optimised string slicing to O(1). This does of course depend a lot on what else happens to the strings; they might still need to get copied later on.
The relevant implementation from the above link is:

// The Sliced String class describes strings that are substrings of another
// sequential string. The motivation is to save time and memory when creating
// a substring. A Sliced String is described as a pointer to the parent,
// the offset from the start of the parent string and the length. Using
// a Sliced String therefore requires unpacking of the parent string and
// adding the offset to the start address. Substrings of a Sliced String
// are not nested since the double indirection is simplified when creating
// such a substring.
// Currently missing features are:
// - handling externalized parent strings
// - external strings as parent
// - truncating sliced string to enable otherwise unneeded parent to be GC'ed.
class SlicedString: public String {
// ...
};
Also beware of your quick test results. As you are doing nothing with the y variable, the slicing and even the whole loop might get eliminated as dead code by the optimiser. If you are doing benchmarks, do them on practical, real-world data.
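A minimal sketch of the pitfall: consume the result of the operation under test so the optimiser cannot eliminate it as dead code. The `sink` accumulator here is just an illustrative device, not part of any benchmarking API.

```javascript
// Benchmark sketch: use the sliced value so neither the slice nor the
// loop can be removed as dead code. `sink` is an illustrative accumulator.
const big = "x".repeat(1_000_000);
let sink = 0;
for (let i = 0; i < 1000; i++) {
  const y = big.slice(i, i + 100); // operation under test
  sink += y.length;                // consume the result
}
console.log(sink); // printing the accumulator keeps the work observable
```

Timing the loop with and without the `sink +=` line can show a dramatic difference, which is the dead-code elimination the answer warns about.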
(V8 developer here.)
Array.prototype.slice is O(n), where n is the number of elements in the slice.
String.prototype.slice is O(1), thanks to our implementation of SlicedStrings, which just store a pointer, offset, and length referring to the original string and avoid copying the characters (except when they're tiny, because copying a handful of characters is actually cheaper and smaller than storing a reference; that's still O(1)).
The key difference is that strings are immutable, and arrays are not. When you do str1 = "Hello World"; str2 = str1.slice(2, 5);, since there is no way to modify str1's contents afterwards, str2 doesn't need to ensure that it's unaffected by any such modification.
When you do a = [1, 2, 3, 4]; b = a.slice(1, 3); a[1] = "changed"; console.log(b[0]);, then you expect to see 2, not "changed". That's why b has to be an actual copy. (In theory, a copy-on-write approach would be possible, but V8 doesn't do that for array slices.)
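The array example from the paragraph above, as runnable code:

```javascript
// Array slices must be real copies: mutating the original afterwards
// must not be visible through the slice.
const a = [1, 2, 3, 4];
const b = a.slice(1, 3); // copies elements at indices 1 and 2
a[1] = "changed";        // mutating the original...
console.log(b[0]);       // ...does not affect the slice: prints 2
```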
"Shallow copy" means that nested objects will not be copied. Example:
let nested = {property: "value"};
var a = [nested];
var b = a.slice(0, 1);
a[0].property = "new value";
console.log(a === b); // false, `b` is a copy
console.log(a[0] === b[0]); // true, `nested` was not copied
console.log(b[0] === nested); // true
console.log(b[0].property); // "new value"
Based on my read (which is to say, I could be wrong, since V8 is a complicated beast) of this array-slice.tq source code, the answer is: "it depends".
If possible (and the heuristics as to when that might happen I didn't really get to), V8 optimizes things to essentially O(1) by just returning a copy-on-write view to the original array via ExtractFastJSArray.
When that fails, V8 allocates a new array and copies object (pointers) over, which is of course O(N).
The tq source code includes lots of "gotcha" cases, since JavaScript does allow you to call Array.prototype.slice() on things that aren't really arrays.
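One such gotcha, sketched below: `Array.prototype.slice` can be applied to any array-like object, not just real arrays, which forces the engine down a generic (slower) path.

```javascript
// slice() works on any object with a length and indexed properties,
// not just real arrays -- the engine has to handle this generic case too.
const arrayLike = { length: 3, 0: "a", 1: "b", 2: "c" };
const copied = Array.prototype.slice.call(arrayLike, 0, 2);
console.log(copied);                // ["a", "b"]
console.log(Array.isArray(copied)); // true: the result is a real array
```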
What's the time complexity of array.splice() in Google Chrome? - Stack Overflow
Performance question: String.split and then walk on the array, or RegExp? - Stack Overflow
What is the time complexity of forEach? Of slice?
JavaScript runtime complexity of Array functions - Stack Overflow
Worst case should be O(n) (copying all n-1 elements to new array).
A linked list would be O(1) for a single deletion.
For those interested, I've made this lazily-crafted benchmark (please don't run it on Windows XP/Vista). As you can see from it, though, splice looks fairly constant (i.e. O(1)), so who knows what they're doing behind the scenes to make it this fast. Note that regardless, the actual splice is VERY fast.
Rerunning an extended benchmark directly in the V8 shell suggests O(n). Note, though, that you need huge array sizes before the runtime is likely to affect your code. This is to be expected: if you look at the V8 code, it uses memmove to create the new array.
Hi!
I did an experiment myself and would like to share my findings. The experiment was very simple: we ran 100 splice operations on an array of size n and calculated the average time each splice call took. Then we varied n to check how the time behaves.
This graph summarizes our findings for big numbers (graph not reproduced here):
For big numbers it seems to behave linearly.
We also checked with "small" numbers (they were still quite big, but not as big); in this case it seems to be constant.
If I had to decide on one option, I would say it is O(n), because that is how it behaves for big numbers. Bear in mind, though, that the linear behaviour only shows for VERY big numbers.
However, it is hard to give a definitive answer, because the array implementation in JavaScript depends A LOT on how the array is declared and manipulated.
I recommend this stackoverflow discussion and this quora discussion to understand how arrays work.
I ran it in Node v10.15.3, and the code used is the following:
const f = async () => {
  const n = 80000000;
  const tries = 100;
  const array = [];
  for (let i = 0; i < n; i++) { // build initial array
    array.push(i);
  }
  let sum = 0;
  for (let i = 0; i < tries; i++) {
    const index = Math.floor(Math.random() * n);
    const start = new Date();
    array.splice(index, 1); // UNCOMMENT FOR OPTION A
    // array.splice(index, 0, -1); // UNCOMMENT FOR OPTION B
    const time = new Date().getTime() - start.getTime();
    sum += time;
    array.push(-2); // UNCOMMENT FOR OPTION A, to keep it of size n
    // array.pop(); // UNCOMMENT FOR OPTION B, to keep it of size n
  }
  console.log('for an array of size', n, 'the average time of', tries, 'splices was:', sum / tries);
};
f();
Note that the code has an Option B: we ran the same experiment with the three-argument splice to insert an element. It behaved similarly.
Well, the best way to get your answer is to just take two minutes and write a loop that does it both ways a thousand times, then check Firebug to see which one is faster ;)
I've had to optimize a lot of string munging while working on MXHR and in my experience, plain String methods are significantly faster than RegExps in current browsers. Use RegExps on the shortest Strings possible and do everything you possibly can with String methods.
For example, I use this little number in my current code:
var mime = mimeAndPayload.shift().split('Content-Type:', 2)[1].split(";", 1)[0].replace(' ', '');
It's ugly as hell, but believe it or not it's significantly faster than the equivalent RegExp under high load.
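A self-contained sketch of the same technique; the header string here is a made-up example, and `trim()` stands in for the original's `replace(' ', '')`:

```javascript
// Extract a MIME type with plain String methods, as in the snippet
// above. The input is a hypothetical header chunk for illustration.
const chunk = "Content-Type: text/plain; charset=utf-8\r\n\r\nbody";
const mime = chunk
  .split("Content-Type:", 2)[1] // " text/plain; charset=utf-8\r\n\r\nbody"
  .split(";", 1)[0]             // " text/plain"
  .trim();                      // "text/plain"
console.log(mime); // "text/plain"
```

The equivalent RegExp would be something like `/Content-Type:\s*([^;]+)/`, which is shorter but, per the answer, measurably slower under load in the browsers of that era.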
While this is 2½ years late, hopefully this helps shed some light on the matter for any future viewers: https://jsperf.app/split-join-vs-regex-replace (Includes benchmarks results for multiple browsers, as well the functional benchmark code itself)
I randomly happened upon someone's GitHub page where he shared some of his algorithm practice problems and solutions.
One of the problems stated: given an array of integers, for each index of that array, find the product of every other index. Solve this in linear time and linear space.
Example input: [2, 7, 1, 4]. Expected output: [28, 8, 56, 14].
The weird thing was, he used two forEach loops to arrive at his solution. Both forEach loops iterated over the array, and he had variables to store the product, eventually pushing into a final array that he returned at the end.
This had me questioning the time complexity of forEach. I found this explanation on ecma-international.org; basically, it shows O(n) time complexity for a single forEach loop. Please correct me if I'm wrong.
So the person's solution should be O(n^2), right? I really just want confirmation from someone with more experience so that I don't keep questioning myself on this.
This got me thinking about using other built-in methods. I've had solutions where I've used multiple slice methods or enumerables like map or reduce. In these cases, what is the expected time complexity of using a single slice method or map method? Isn't it really bad once you start method chaining?
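For reference, here is a sketch of the standard linear-time solution to the stated problem. Note that two *sequential* (non-nested) loops are O(n) + O(n) = O(n); only nesting one loop inside the other would give O(n^2).

```javascript
// Linear-time "product of every other index" using two sequential
// passes: a prefix-product pass and a suffix-product pass.
function productOfOthers(nums) {
  const result = new Array(nums.length).fill(1);
  let prefix = 1;
  nums.forEach((value, i) => {   // left-to-right: product of everything before i
    result[i] = prefix;
    prefix *= value;
  });
  let suffix = 1;
  for (let i = nums.length - 1; i >= 0; i--) { // right-to-left: everything after i
    result[i] *= suffix;
    suffix *= nums[i];
  }
  return result;
}

console.log(productOfOthers([2, 7, 1, 4])); // [28, 8, 56, 14]
```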
The ECMA specification does not specify a bounding complexity, however, you can derive one from the specification's algorithms.
push is O(1); however, in practice it will encounter an O(N) copy cost at engine-defined boundaries as the slot array needs to be reallocated. These boundaries are typically logarithmic.
pop is O(1) with a similar caveat to push but the O(N) copy is rarely encountered as it is often folded into garbage collection (e.g. a copying collector could only copy the used part of an array).
shift is at worst O(N); however, it can, in special cases, be implemented as O(1) at the cost of slowing down indexing, so your mileage may vary.
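A sketch of the O(1)-shift idea just mentioned: keep a head offset instead of moving elements. The class here is purely illustrative, not how any particular engine implements arrays; the trade-off is that every indexed read must add the offset.

```javascript
// Illustrative deque-like wrapper: shift() just advances a head index,
// so no elements are moved. Indexing pays for the offset -- the
// "slowing down indexing" trade-off noted above. Hypothetical sketch.
class OffsetArray {
  constructor(items) {
    this.items = items.slice();
    this.head = 0;
  }
  shift() { // O(1): no copying, just advance the head
    if (this.head >= this.items.length) return undefined;
    return this.items[this.head++];
  }
  get(i) { // every read adds the offset
    return this.items[this.head + i];
  }
  get length() {
    return this.items.length - this.head;
  }
}

const q = new OffsetArray([10, 20, 30]);
console.log(q.shift()); // 10
console.log(q.get(0));  // 20
console.log(q.length);  // 2
```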
slice is O(N) where N is end - start. Not a tremendous amount of optimization opportunity here without significantly slowing down writes to both arrays.
splice is, worst case, O(N). There are array storage techniques that divide N by a constant but they significantly slow down indexing. If an engine uses such techniques you might notice unusually slow operations as it switches between storage techniques triggered by access pattern changes.
One you didn't mention is sort. It is, in the average case, O(N log N). However, depending on the algorithm chosen by the engine, you could get O(N^2) in some cases; for example, if the engine uses QuickSort (even with a late-out to InsertionSort), it has well-known O(N^2) cases. This could be a source of DoS for your application. If this is a concern, either limit the size of the arrays you sort (maybe merging the sub-arrays) or bail out to HeapSort.
in very simple words
push -> O(1)
pop -> O(1)
shift -> O(N)
slice -> O(N)
splice -> O(N)
Here is a complete explanation about time complexity of Arrays in JavaScript.
I think it's O(1) but some sources online say O(n)
Time complexity of array.splice is worst-case O(n) because it has to potentially copy over all the elements.
Question 1: this means array.shift() is always O(n)? Or is it optimized somehow low-level where it just adjusts starting memory address?
Question 2: People say use a linked list instead of array if you repeatedly splice because deletion is O(1). But you'd still have to search through linked list to find element (which worst-case is still linear) because it's not indexed. So how do you index a linked list, so that deletion is ACTUALLY constant time? (also, is a language's implementation, like JS, of an array ever a linked list?)
-
Using copy on write, the copy can be avoided in some cases right up to the point where the data mutated. AFAIK, not many languages expose or even use COW. I've mostly only ever seen it referenced in OS design for things like process forking.
-
Academics say to always use a linked list. However, practically, Array lists are almost always preferred. The problem with linked lists is that they are not cache friendly. In the worst case, your CPU has to go out and hit main memory for every single node accessed (millions of cycles per node!).
In contrast, array lists keep all the nodes in a contiguous block of memory. CPUs, when they access memory, will pull a block of memory into cache. For traversal of an array, the cost can easily be in the tens of cycles per element.
Further, the cost of adding to an array list isn't as bad as you might think. These lists will often overallocate so they can avoid copying over the data when you hit the end of the array.
And in practice, middle or beginning insertion just isn't all that common. You almost always do tail insertions with lists and very often you don't do any deletions at all!
So how do you index a linked list, so that deletion is ACTUALLY constant time?
You iterate and delete/split at the same time. You still have n time, but you don't end up with 2*n time. That is pretty much the case for a linked list, when you want to frequently go through the list and remove, split, or merge the list at various points based on the current element. Again, in practice this happens almost never.
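One way to get genuinely O(1) deletion by key, sketched below: pair a doubly-linked list with a Map from key to node, trading extra memory for the lookup. All names here are illustrative.

```javascript
// Doubly-linked list indexed by a Map: lookup is O(1) via the Map,
// and unlinking a node is O(1) pointer surgery. Illustrative sketch.
class IndexedList {
  constructor() {
    this.index = new Map(); // key -> node
    this.head = null;
    this.tail = null;
  }
  push(key, value) {
    const node = { key, value, prev: this.tail, next: null };
    if (this.tail) this.tail.next = node;
    else this.head = node;
    this.tail = node;
    this.index.set(key, node);
  }
  delete(key) { // O(1): Map lookup + pointer updates, no scan
    const node = this.index.get(key);
    if (!node) return false;
    if (node.prev) node.prev.next = node.next;
    else this.head = node.next;
    if (node.next) node.next.prev = node.prev;
    else this.tail = node.prev;
    this.index.delete(key);
    return true;
  }
  toArray() {
    const out = [];
    for (let n = this.head; n; n = n.next) out.push(n.value);
    return out;
  }
}

const list = new IndexedList();
list.push("a", 1);
list.push("b", 2);
list.push("c", 3);
list.delete("b");         // constant time, no traversal
console.log(list.toArray()); // [1, 3]
```

Of course, this still has the cache-unfriendliness discussed above, which is why array lists usually win in practice.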
also, is a language's implementation, like JS, of an array ever a linked list?
Depends. In fact, in javascript it could easily start out as a linked list and later be transformed into an array list based on jit optimizations.
From experience in implementation (JSC, but I believe it's similar in other engines), array.shift and unshift are amortized but generally O(1). splice is interesting in that engines will try to avoid "real" splicing depending on how the outputs are used.