Yes, V8 has optimised string slicing to O(1). This does of course depend on what else happens to the strings afterwards: they might still need to be copied later on.
The relevant implementation from the above link is:

// The Sliced String class describes strings that are substrings of another
// sequential string. The motivation is to save time and memory when creating
// a substring. A Sliced String is described as a pointer to the parent,
// the offset from the start of the parent string and the length. Using
// a Sliced String therefore requires unpacking of the parent string and
// adding the offset to the start address. A substring of a Sliced String
// is not nested since the double indirection is simplified when creating
// such a substring.
// Currently missing features are:
// - handling externalized parent strings
// - external strings as parent
// - truncating sliced string to enable otherwise unneeded parent to be GC'ed.
class SlicedString: public String {
// ...
};
Also beware of your quick test results. As you are doing nothing with the y variable, the slicing and even the whole loop might get eliminated as dead code by the optimiser. If you are doing benchmarks, do them on practical real world data.
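To guard against that dead-code elimination, a benchmark can consume each slice's result. Here is a minimal Node.js sketch (names are illustrative, not from any particular benchmark):

```javascript
// Minimal Node.js benchmark sketch. The accumulator "sink" consumes
// each slice so the optimizer cannot drop the loop as dead code.
const big = 'x'.repeat(1000000);

function timeSlices(iterations) {
  let sink = 0;
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) {
    const y = big.slice(2, big.length - 2); // the slice under test
    sink += y.length;                       // use the result
  }
  const elapsedNs = process.hrtime.bigint() - start;
  return { sink, elapsedNs };
}

const { sink, elapsedNs } = timeSlices(1000);
console.log(`sink=${sink}, elapsed=${elapsedNs}ns`);
```

Printing (or otherwise depending on) `sink` is what keeps the slices alive; without it, an optimizing compiler is free to remove the loop entirely.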
(V8 developer here.)
Array.prototype.slice is O(n), where n is the number of elements in the slice.
String.prototype.slice is O(1), thanks to our implementation of SlicedStrings, which store just (pointer, offset, length) referring to the original string and avoid copying the characters. (The exception is tiny slices, where copying a handful of characters is actually cheaper and smaller than storing a reference; that's still O(1).)
The key difference is that strings are immutable, and arrays are not. When you do str1 = "Hello World"; str2 = str1.slice(2, 5);, since there is no way to modify str1's contents afterwards, str2 doesn't need to ensure that it's unaffected by any such modification.
When you do a = [1, 2, 3, 4]; b = a.slice(1, 3); a[1] = "changed"; console.log(b[0]);, then you expect to see 2, not "changed". That's why b has to be an actual copy. (In theory, a copy-on-write approach would be possible, but V8 doesn't do that for array slices.)
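The two cases described above can be run directly; this sketch just restates the examples as executable code:

```javascript
// Arrays are mutable, so Array.prototype.slice must copy: mutating
// the original after slicing must not affect the slice.
const a = [1, 2, 3, 4];
const b = a.slice(1, 3); // [2, 3]
a[1] = 'changed';
console.log(b[0]); // → 2, not 'changed'

// Strings are immutable, so a sliced string can safely share the
// parent's storage; nothing can modify str1's contents afterwards.
const str1 = 'Hello World';
const str2 = str1.slice(2, 5);
console.log(str2); // → 'llo'
```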
"Shallow copy" means that nested objects will not be copied. Example:
let nested = {property: "value"};
var a = [nested];
var b = a.slice(0, 1);
a[0].property = "new value";
console.log(a === b); // false, `b` is a copy
console.log(a[0] === b[0]); // true, `nested` was not copied
console.log(b[0] === nested); // true
console.log(b[0].property); // "new value"
Based on my read (which is to say, I could be wrong, since V8 is a complicated beast) of this array-slice.tq source code, the answer is: "it depends".
If possible (I didn't really dig into the heuristics for when that happens), V8 optimizes this to essentially O(1) by just returning a copy-on-write view of the original array via ExtractFastJSArray.
When that fails, V8 allocates a new array and copies the object pointers over, which is of course O(N).
The tq source code includes lots of "gotcha" cases, since JavaScript does allow you to call Array.prototype.slice() on things that aren't really arrays.
What's the time complexity of array.splice() in Google Chrome?
Performance question: String.split and then walk on the array, or RegExp?
JavaScript runtime complexity of Array functions
What is the time complexity of forEach? of slice?
Worst case should be O(n) (copying all n-1 elements to new array).
A linked list would be O(1) for a single deletion.
For those interested I've made this lazily-crafted benchmark. (Please don't run on Windows XP/Vista). As you can see from this though, it looks fairly constant (i.e. O(1)), so who knows what they're doing behind the scenes to make this crazy-fast. Note that regardless, the actual splice is VERY fast.
Rerunning an extended benchmark directly in the V8 shell suggests O(n). Note, though, that you need huge array sizes before the runtime is likely to affect your code. This is to be expected: if you look at the V8 code, it uses memmove to create the new array.
Hi!
I did an experiment myself and would like to share my findings. The experiment was very simple: we ran 100 splice operations on an array of size n and calculated the average time each splice took. Then we varied n to check how it behaves.
This graph summarizes our findings for big numbers: 
For big numbers it seems to behave linearly.
We also checked with "small" numbers (they were still quite big but not as big):

In this case it seems to be constant.
If I had to decide on one option, I would say it is O(n), because that is how it behaves for big numbers. Bear in mind, though, that the linear behaviour only shows up for VERY big numbers.
However, it is hard to give a definitive answer, because the array implementation in JavaScript depends A LOT on how the array is declared and manipulated.
I recommend this stackoverflow discussion and this quora discussion to understand how arrays work.
I ran it in node v10.15.3 and the code used is the following:
const f = async () => {
  const n = 80000000;
  const tries = 100;
  const array = [];
  for (let i = 0; i < n; i++) { // build initial array
    array.push(i);
  }
  let sum = 0;
  for (let i = 0; i < tries; i++) {
    const index = Math.floor(Math.random() * n);
    const start = new Date();
    array.splice(index, 1); // UNCOMMENT FOR OPTION A
    // array.splice(index, 0, -1); // UNCOMMENT FOR OPTION B
    const time = new Date().getTime() - start.getTime();
    sum += time;
    array.push(-2); // UNCOMMENT FOR OPTION A, to keep it of size n
    // array.pop(); // UNCOMMENT FOR OPTION B, to keep it of size n
  }
  console.log('for an array of size', n, 'the average time of', tries, 'splices was:', sum / tries);
};
f();
Note that the code has an Option B: we ran the same experiment with the three-argument splice to insert an element. It behaved similarly.
Well, the best way to get your answer is to just take 2 minutes and write a loop that does it both ways a thousand times, then check Firebug to see which one is faster ;)
I've had to optimize a lot of string munging while working on MXHR and in my experience, plain String methods are significantly faster than RegExps in current browsers. Use RegExps on the shortest Strings possible and do everything you possibly can with String methods.
For example, I use this little number in my current code:
var mime = mimeAndPayload.shift().split('Content-Type:', 2)[1].split(";", 1)[0].replace(' ', '');
It's ugly as hell, but believe it or not it's significantly faster than the equivalent RegExp under high load.
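For illustration, here is a sketch of that kind of extraction on a made-up Content-Type header, with a regex equivalent alongside the string-method chain (the header value and variable names are hypothetical, not from the MXHR code):

```javascript
// Hypothetical header line, for demonstration only.
const header = 'Content-Type: text/html; charset=utf-8';

// String-method approach, in the style of the chain above:
// split off everything after "Content-Type:", cut at the first ";",
// then drop the leading space.
const mimeViaSplit = header
  .split('Content-Type:', 2)[1]
  .split(';', 1)[0]
  .replace(' ', '');

// Regex approach doing the same extraction in one match.
const match = header.match(/Content-Type:\s*([^;]+)/);
const mimeViaRegex = match ? match[1] : undefined;

console.log(mimeViaSplit); // → 'text/html'
console.log(mimeViaRegex); // → 'text/html'
```

Which variant is faster depends on the engine and the input sizes; the answer's point is that under high load on short strings, the plain string methods often win.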
While this is 2½ years late, hopefully it helps shed some light on the matter for any future viewers: https://jsperf.app/split-join-vs-regex-replace (includes benchmark results for multiple browsers, as well as the benchmark code itself)
The ECMA specification does not specify a bounding complexity, however, you can derive one from the specification's algorithms.
push is O(1); however, in practice it incurs an O(N) copy cost at engine-defined boundaries, when the backing store needs to be reallocated. These boundaries typically grow geometrically, so only logarithmically many reallocations occur and the amortized cost stays O(1).
pop is O(1) with a caveat similar to push, but the O(N) copy is rarely encountered, as it is often folded into garbage collection (e.g. a copying collector can copy only the used part of an array).
shift is at worst O(N); however, it can, in special cases, be implemented as O(1) at the cost of slowing down indexing, so your mileage may vary.
slice is O(N) where N is end - start. Not a tremendous amount of optimization opportunity here without significantly slowing down writes to both arrays.
splice is, worst case, O(N). There are array storage techniques that divide N by a constant but they significantly slow down indexing. If an engine uses such techniques you might notice unusually slow operations as it switches between storage techniques triggered by access pattern changes.
One you didn't mention is sort. It is, in the average case, O(N log N). However, depending on the algorithm chosen by the engine, you could get O(N^2) in some cases. For example, if the engine uses QuickSort (even with a late switch to InsertionSort), it has well-known O(N^2) cases. This could be a source of DoS for your application. If this is a concern, either limit the size of the arrays you sort (perhaps merging sub-arrays) or bail out to HeapSort.
In very simple words:
push -> O(1)
pop -> O(1)
shift -> O(N)
slice -> O(N)
splice -> O(N)
Here is a complete explanation about time complexity of Arrays in JavaScript.
I randomly happened upon someone's GitHub page where he shared some of his algorithm practice problems and solutions.
One of the problems stated: given an array of integers, for each index of that array, find the product of every other index. Solve this in linear time and linear space.
Example input: [2, 7, 1, 4] Expected output: [28, 8, 56, 14]
The weird thing was, he used 2 forEach loops in his solution. Both forEach loops iterate over the array, and he used variables to store the product, eventually pushing into a final array that he returns at the end.
This had me questioning the time complexity of forEach. I found this explanation on ecma-international.org. Basically it shows O(n) time complexity for a single forEach loop. Please correct me if I'm wrong.
So, the person's solution should be O(n²), right? I really just want confirmation from someone with more experience so that I don't keep questioning myself on this.
This got me thinking about using other built-in methods. I've had solutions where I've used multiple slice methods or enumerables like map or reduce. In these cases, what is the expected time complexity of using a single slice method or map method? Isn't it really bad once you start method chaining?
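For reference, here is one way the stated problem can be solved in linear time using two sequential passes (prefix and suffix products). This is a sketch of the general technique, not the solution from the GitHub page mentioned; note that two forEach loops run one after the other are O(n) + O(n) = O(n), whereas only nested loops would be O(n²):

```javascript
// Product of every element except the one at each index, in O(n)
// time and O(n) space, without division.
function productOfOthers(nums) {
  const result = new Array(nums.length).fill(1);

  // Pass 1 (left to right): result[i] = product of elements before i.
  let prefix = 1;
  nums.forEach((x, i) => {
    result[i] = prefix;
    prefix *= x;
  });

  // Pass 2 (right to left): multiply in the product of elements after i.
  let suffix = 1;
  for (let i = nums.length - 1; i >= 0; i--) {
    result[i] *= suffix;
    suffix *= nums[i];
  }

  return result;
}

console.log(productOfOthers([2, 7, 1, 4])); // → [28, 8, 56, 14]
```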
Recently I was thinking about interview questions I got as an undergrad:
Things like "reverse a string" and "check if a string is a palindrome".
I did most of these in C++ with a loop and scrolling through the index using logic.
When I learned Python, I realized that I could "reverse a string" by simply going:
return mystring[::-1]
Likewise with "check if it is a palindrome" by doing:
return mystring == mystring[::-1]
The problem now is that I don't know what kind of complexity it is.
From my point of view, it is constant, so O(1). But I am guessing that is too good to be true, as the string slicing is doing something behind the scenes.
Can anyone help me clarify?
The Python page on time-complexity shows that slicing lists has a time-complexity of O(k), where "k" is the length of the slice. That's for lists, not strings, but the complexity can't be O(1) for strings either, since slicing must handle more characters as the size increases. At a guess, the complexity of slicing strings would also be O(k). We can write a little bit of code to test that guess:
import time

StartSize = 2097152
size = StartSize
for _ in range(10):
    # create string of size "size"
    s = '*' * size
    # now time the reverse slice
    start = time.time()
    r = s[::-1]
    delta = time.time() - start
    print(f'Size {size:9d}, time={delta:.3f}')
    # double the size of the string
    size *= 2
This uses a simple method of timing. Other tools exist, but this is simple. When run I get:
$ python3 test.py
Size 2097152, time=0.006
Size 4194304, time=0.013
Size 8388608, time=0.024
Size 16777216, time=0.050
Size 33554432, time=0.098
Size 67108864, time=0.190
Size 134217728, time=0.401
Size 268435456, time=0.808
Size 536870912, time=1.610
Size 1073741824, time=3.192
which shows that the time doubles when the size of the string doubles, for each reverse slice. So it's O(n) (k == n when slicing the whole string).
How difficult an algorithm is to write and how expensive it is to run are two separate things. Creating a reversed string with the shorthand still requires O(n) space and O(n) time. Keep in mind that, in most cases, creating a reversed copy isn't necessary; you can just start at the end and walk backwards, which is essentially what Python's reversed() does.
I think it's O(1) but some sources online say O(n)