As pointed out in the comments, since each element is indeed touched only once, the time complexity is intuitively O(N).
However, because each recursive call to flatten creates a new intermediate array, the run-time depends strongly on the structure of the input array.
A non-trivial¹ example of such a case is an array organized like a full binary tree:
[[[[a, b], [c, d]], [[e, f], [g, h]]], [[[i, j], [k, l]], [[m, n], [o, p]]]]

                                   |
                 ________________ + ________________
                 |                                 |
         _______ + _______                 _______ + _______
         |               |                 |               |
     ___ + ___       ___ + ___         ___ + ___       ___ + ___
     |       |       |       |         |       |       |       |
    _ + _   _ + _   _ + _   _ + _     _ + _   _ + _   _ + _   _ + _
    |   |   |   |   |   |   |   |     |   |   |   |   |   |   |   |
    a   b   c   d   e   f   g   h     i   j   k   l   m   n   o   p
The time complexity recurrence relation is:
T(n) = 2 * T(n / 2) + O(n)
Where 2 * T(n / 2) comes from the recursive calls that flatten the two sub-trees, and O(n) from pushing² the results, which are two arrays of length n / 2.
The Master theorem gives T(n) = O(n log n) in this case, not O(n) as expected.
1) Non-trivial means that no element is wrapped unnecessarily, e.g. [[[a]]].
2) This implicitly assumes that k push operations are O(k) amortized, which is not guaranteed by the standard, but is still true for most implementations.
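For concreteness, here is a sketch of my own (not from the original answer) of a naive flatten whose run-time follows the recurrence above: each recursive call builds an intermediate array, and the parent then copies it again.

```javascript
// Naive flatten: every recursive call returns a fresh intermediate array,
// and the parent copies that result into its own array via push(...).
// On the full-binary-tree input this does O(n) copying per level,
// hence T(n) = 2 * T(n / 2) + O(n) = O(n log n).
function flatten_naive(items) {
    const result = [];
    for (const item of items) {
        if (Array.isArray(item)) {
            result.push(...flatten_naive(item)); // copies the sub-result again
        } else {
            result.push(item);
        }
    }
    return result;
}
```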
A "true" O(N) solution will directly append to the final output array instead of creating intermediate arrays:
function flatten_linear(items) {
    const flat = [];

    // The outer function is never called recursively;
    // all the work is done by this inner helper.
    function inner(input) {
        if (Array.isArray(input))
            input.forEach(inner);
        else
            flat.push(input);
    }

    // call on the "root" array
    inner(items);
    return flat;
}
The recurrence becomes T(n) = 2 * T(n / 2) + O(1) for the previous example, which is linear.
Again this assumes both 1) and 2).
Answer from meowgoesthedog on Stack Overflow

JavaScript Interview Question: Flatten an Array (in-depth)
If I were an interviewer, I think I would be satisfied by your answer. As a fellow programmer, though, I think that despite the current situation with tail call optimisation, at least attempting a tail-recursive solution is worth it, because you can 1) be future-proof for when the tail call situation improves, and 2) use a trampoline in the meantime to turn a tail-call-style recursive function into an iterative process, which solves the issue.
Here is an example of a tail-recursive flatten function that uses a "trampoline" to run each recursive call inside an iterative while loop. It's probably not the most efficient way of going about things, but thanks to the trampoline there should be no stack growth issue. I also opted not to "return a continuation" for the continuation part of the trampoline: returning a continuation function would increase the number of inner function definitions and calls, so instead I just take a "snapshot" of the arguments and call the original function with those arguments.
const trampoline = fn => (...args) => {
    let step = fn(...args);
    while (!step.done) {
        step = fn(...step.args);
    }
    return step.result;
};

trampoline.done = result => ({ done: true, result });
trampoline.next = (...args) => ({ done: false, args });

const flatten = trampoline((list, accumulator = []) => {
    if (list.length === 0) {
        return trampoline.done(accumulator);
    }
    const [head, ...tail] = list;
    if (Array.isArray(head)) {
        return trampoline.next([...head, ...tail], accumulator);
    }
    return trampoline.next(tail, [...accumulator, head]);
});
console.log(flatten([1, 2, [3, [4, [5, [6, 7, [8, 9, 10]]]]]]));
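To see the stack-safety claim in action, here is a self-contained stress test (my own sketch, with the definitions above repeated so it can run on its own). It flattens an input nested tens of thousands of levels deep; a plain recursive flatten would typically throw a RangeError here, while the trampolined version just iterates.

```javascript
// Trampoline and flatten, repeated from above so the snippet is standalone.
const trampoline = fn => (...args) => {
    let step = fn(...args);
    while (!step.done) {
        step = fn(...step.args);
    }
    return step.result;
};
trampoline.done = result => ({ done: true, result });
trampoline.next = (...args) => ({ done: false, args });

const flatten = trampoline((list, accumulator = []) => {
    if (list.length === 0) {
        return trampoline.done(accumulator);
    }
    const [head, ...tail] = list;
    if (Array.isArray(head)) {
        return trampoline.next([...head, ...tail], accumulator);
    }
    return trampoline.next(tail, [...accumulator, head]);
});

// Wrap a single element in 50000 extra layers of nesting.
let deep = [1];
for (let i = 0; i < 50000; i++) {
    deep = [deep];
}
console.log(flatten(deep)); // [1] -- each "recursive call" is one loop iteration
```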
What is the time complexity of this code?
No, the code you've shown has neither exponential nor linear time complexity.
Before we can determine the complexity of any algorithm, we need to decide how to measure the size of the input. For the particular case of flattening an array, many options exist. We can count the number of arrays in the input, the number of array elements (the sum of all array lengths), the average array length, the number of non-array elements in all arrays, the likelihood that an array element is an array, the average number of elements that are arrays, etc.
I think the most sensible measures for this problem are the number of array elements in the whole input - let's call it e - and the average depth of these elements - let's call it d.
Now there are two standard approaches to this problem. The algorithm you've shown
const flattenDeep = (array) => {
    const flat = [];
    for (let element of array) {
        Array.isArray(element)
            ? flat.push(...flattenDeep(element))
            : flat.push(element);
    }
    return flat;
};
does have a time complexity of O(e * d). It is the naive approach, also demonstrated in the shorter code
const flattenDeep = x => Array.isArray(x) ? x.flatMap(flattenDeep) : [x];
or in the slightly longer loop
const flattenDeep = (array) => {
const flat = [];
for (const element of array) {
if (Array.isArray(element)) {
flat.push(...flattenDeep(element))
} else {
flat.push(element);
}
}
return flat;
}
Both of them have the problem that they are more-or-less-explicit nested loops, where the inner one loops over the result of the recursive call. Notice that the spread syntax in the call flat.push(...flattenDeep(element)) amounts to basically
for (const val of flattenDeep(element)) flat.push(val);
This is pretty bad; consider what happens for the worst-case input [[[…[[[1,2,3,…,n]]]…]]].
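To make the worst case concrete, here is a small helper (an illustrative sketch of my own) that builds such an input, alongside the naive flatten from above:

```javascript
// The naive flatten shown earlier in this answer.
const flattenDeep = x => Array.isArray(x) ? x.flatMap(flattenDeep) : [x];

// Build [[[ ... [1, 2, ..., m] ... ]]]: a list of m elements wrapped in
// `depth` extra layers. The naive approach re-copies all m elements once
// per layer, so its work grows with depth * m, not just with m.
function worstCase(m, depth) {
    let arr = Array.from({ length: m }, (_, i) => i + 1);
    for (let i = 0; i < depth; i++) {
        arr = [arr];
    }
    return arr;
}

console.log(flattenDeep(worstCase(3, 5))); // [1, 2, 3]
```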
The second standard approach is to directly put non-array elements into the final result array - without creating, returning and iterating any temporary arrays:
function flattenDeep(array) {
const flat = [];
function recurse(val) {
if (Array.isArray(val)) {
for (const el of val) {
recurse(el);
}
} else {
flat.push(val);
}
}
recurse(array);
return flat;
}
This is a much better solution: it has a linear time complexity of O(e), and d no longer factors in at all.
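An equivalent variant (my own sketch, not from the answer above) replaces the recursion with an explicit stack; it also pushes non-array elements directly into the result in O(e), and it avoids deep call stacks for heavily nested input. Children are pushed in reverse so elements come off the stack in their original order.

```javascript
// Iterative flatten using an explicit stack instead of recursion.
function flattenDeepIterative(array) {
    const flat = [];
    const stack = [...array].reverse(); // reversed so we pop front-first
    while (stack.length > 0) {
        const val = stack.pop();
        if (Array.isArray(val)) {
            // Push children in reverse to preserve left-to-right order.
            for (let i = val.length - 1; i >= 0; i--) {
                stack.push(val[i]);
            }
        } else {
            flat.push(val);
        }
    }
    return flat;
}

console.log(flattenDeepIterative([1, [2, [3]], 4])); // [1, 2, 3, 4]
```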
Exploring a regular structure is straightforward; however, for inputs that can contain references back into themselves, the exploration must be controlled, otherwise it may recurse infinitely. For example:

var a = [1, 2, 3];
var t = [];
t.push(a);
t.push(t); // t now contains itself

console.log(t);

This is an infinite-loop example: a naive flatten would recurse into t forever. The fix is to check whether an array value has already been explored before recursing into it.
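One way to do this check (a sketch of my own, under the assumption that skipping an already-visited array is the desired behaviour) is to track visited arrays in a Set:

```javascript
// Cycle-safe flatten: remember every array we have entered and skip any
// array we see again, so self-referencing inputs terminate.
function flattenSafe(array, flat = [], seen = new Set()) {
    if (seen.has(array)) {
        return flat; // already explored: break the cycle
    }
    seen.add(array);
    for (const el of array) {
        if (Array.isArray(el)) {
            flattenSafe(el, flat, seen);
        } else {
            flat.push(el);
        }
    }
    return flat;
}

const t = [];
t.push([1, 2, 3]);
t.push(t); // t contains itself
console.log(flattenSafe(t)); // [1, 2, 3] -- terminates instead of recursing forever
```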
Write a function that flattens a given input array. It can contain many deeply nested arrays. Example: given [[1],[[2]],[[[3]]]] the function should return [1,2,3].
I've been studying up on some common JS interview questions and this one really intrigued me. So, I spent a bit of time looking in depth at the problem and would love others' feedback and help to determine whether I assessed it correctly. Also, if anyone else has been presented with this problem, maybe this will help them clearly weigh the trade-offs of iteration vs recursion in this specific situation.
Also, I think I commonly see the answer for time and space complexity to this question as O(n)... but I think that's wrong. See my analysis below.
Thanks in advance for the feedback and I hope this will help others when they need to answer a question like this.
https://gist.github.com/jcarroll2007/4ee72b3e99507c4f8ce3916fca147ab7
const deepFlatten = arr => [].concat(...arr.map(v => (Array.isArray(v) ? deepFlatten(v) : v)));
example:
deepFlatten([1, [2], [[3], 4], 5]); // [1,2,3,4,5]
https://30secondsofcode.org/#deepflatten
Since, I'm assuming, you care about the worst case performance, you can consider an abstract worst case example and then derive the answer from that.
data = [[[[... [1, 2, 3, ..., m] ...]]], [[[... [1, 2, 3, ..., m] ...]]], ..., [[[... [1, 2, 3, ..., m] ...]]]]
Now you can easily see how the worst case is influenced by three factors. The size of items, the depth of the list of lists, and finally the number of elements in a deepest list.
The worst case complexity thus is O(n x d x m) where n, d, and m stand for the size of items, the maximum depth of the nested lists, and the maximum number of elements in a deepest list.
Exact size isn't really too important when it comes to complexity; constants get dropped anyway. If you have a 3-D array, the code internally has to loop through all the items, even if that's not obvious from the code, so the answer is O(n**3): it needs a loop inside a loop inside a loop. The exact value of n doesn't matter, because constants get dropped.