I don't think so. There are N elements, so you will need to visit each element at least once. Overall, your algorithm will run for O(N) iterations. The deciding factor is what happens per iteration.

Your first algorithm has two loops, but if you observe carefully, it still visits each element O(1) times. However, as @abarnert pointed out, the slice assignment arr[i: i + 1] = arr[i] shifts every element of arr[i+1:] up, which is O(N) again, per iteration.

Your second algorithm is similar, but you are adding lists in this case (in the previous case, it was a simple slice assignment), and unfortunately, list addition is linear in complexity.

In summary, both your algorithms are quadratic.

Answer from cs95 on Stack Overflow
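For concreteness, here is a minimal sketch of the in-place pattern the answer describes (the OP's exact code is not shown in this excerpt, so flatten_inplace is an illustrative reconstruction, not the original). Each slice assignment splices a sublist into place but shifts the whole tail of the list, which is what makes the loop quadratic overall:

```python
def flatten_inplace(arr):
    """Flatten nested lists in place by splicing sublists back into arr."""
    i = 0
    while i < len(arr):
        if isinstance(arr[i], list):
            # Replaces arr[i] with its elements; shifts arr[i+1:] -> O(N) per splice
            arr[i:i + 1] = arr[i]
        else:
            i += 1
    return arr

print(flatten_inplace([[1, 2], [3], 4, [5, 6]]))  # [1, 2, 3, 4, 5, 6]
```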
Leocon
How to flatten a python list/array and which one should you use | Leonidas Constantinou
September 5, 2021 - When we repeat the same experiment with the fastest methods, we can identify that their time complexity is O(1), meaning the number of dimensions won't affect their performance. Overall, I would conclude that I will always use chain_from_iterable when I am working with Python lists. Make sure you never use the _sum and the list_comprehension since they are really inefficient. ... I would use the ravel() method if I want a view of the array but may modify its values later on, since ravel will create a copy automatically when it's necessary
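The "chain_from_iterable" the snippet refers to is itertools.chain.from_iterable; a minimal usage sketch:

```python
from itertools import chain

list_of_lists = [[1, 2], [3, 4], [5]]
# Lazily chains the sublists; list() materializes the flat result.
flat = list(chain.from_iterable(list_of_lists))
print(flat)  # [1, 2, 3, 4, 5]
```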
Discussions

numpy - python array time complexity? - Stack Overflow
I see time complexity for list, collections.deque, set, and dict in the Python wiki, but I can't find the time complexity of array.array and np.array.
python - Flattening a list of NumPy arrays? - Stack Overflow
My initial suggestion was to use numpy.ndarray.flatten, which returns a copy every time, affecting performance. Let's now see how the time complexity of the above-listed solutions compares, using the perfplot package for a setup similar to the OP's.
What is the time complexity of the “in” operation
This moved much faster because it does fewer tests: if you're checking for membership in a list, you can stop as soon as you find the element. If you're comparing every element of the list to a value, as the first example does, then you check every element of the list. Doing less is always faster.
r/learnpython, September 1, 2021
Why is the time complexity of Python's list.append() method O(1)? - Stack Overflow
As seen in the documentation for TimeComplexity, Python's list type is implemented using an array. So if an array is being used and we do a few appends, eventually you will have to reallocate space...
Top answer (1 of 2, score 3)

So, to the link you provided (also a TL;DR): lists are internally "represented as an array" (link). append is supposed to be O(1), with a note at the bottom saying:

"These operations rely on the "Amortized" part of "Amortized Worst Case". Individual actions may take surprisingly long, depending on the history of the container." link


More details

It doesn't go into detail in the docs, but if you look at the source code you'll see what's actually going on. Python arrays keep an internal buffer that allows them to resize quickly, and they realloc as the array grows or shrinks.

array.append uses arraymodule.array_array_append, which calls arraymodule.ins, which in turn calls arraymodule.ins1, the meat and potatoes of the operation. Incidentally, array.extend uses this as well; it just supplies Py_SIZE(self) as the insertion index.

So if we read the notes in arraymodule.ins1 it starts off with:

Bypass realloc() when a previous overallocation is large enough
to accommodate the newsize.  If the newsize is 16 smaller than the
current size, then proceed with the realloc() to shrink the array.

link

...

This over-allocates proportional to the array size, making room
for additional growth.  The over-allocation is mild, but is
enough to give linear-time amortized behavior over a long
sequence of appends() in the presence of a poorly-performing
system realloc().
The growth pattern is:  0, 4, 8, 16, 25, 34, 46, 56, 67, 79, ...
Note, the pattern starts out the same as for lists but then
grows at a smaller rate so that larger arrays only overallocate
by about 1/16th -- this is done because arrays are presumed to be more
memory critical.

link
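This over-allocation is observable from Python: sys.getsizeof on an array.array reports the allocated capacity, not just the used length, so its value only changes when a realloc actually happens. A small sketch (the exact byte sizes are CPython-version dependent, so treat the printed numbers as illustrative):

```python
import sys
from array import array

a = array('i')
sizes = [sys.getsizeof(a)]
for i in range(100):
    a.append(i)
    size = sys.getsizeof(a)
    if size != sizes[-1]:
        # A realloc occurred at this length; between these points appends were free.
        print(f"len={len(a)}: {size} bytes")
        sizes.append(size)
```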

2 of 2 (score -1)

It is important to understand the array data structure to answer your question. Since both array objects are based on C arrays (regular and numpy), they share a lot of the same behavior.

Appending an item to an array is amortized O(1), but an individual append can cost O(n). Usually the array still has spare capacity, so writing the new item into the next slot in memory is a trivial O(1) operation. Occasionally, though, the array is full, and it must be copied in its entirety to a larger allocation before the new item is added. This is an expensive operation, since an array of n elements needs to be copied, making that particular insertion O(n).

An interesting example from this post:

To make this clearer, consider the case where the factor is 2 and the initial array size is 1. Then consider the copy costs to grow the array until it's large enough to hold 2^k + 1 elements, for any k >= 0. That size is 2^(k+1). The total copy cost includes all the copying done to become that big in factor-of-2 steps:

1 + 2 + 4 + ... + 2^k = 2^(k+1) - 1 = 2n - 1
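That geometric-series argument is easy to check numerically with a toy doubling array (total_copy_cost is a hypothetical helper, not anything from the post):

```python
def total_copy_cost(n_appends):
    """Count element copies while appending into an array that doubles when full."""
    capacity, size, copies = 1, 0, 0
    for _ in range(n_appends):
        if size == capacity:   # full: copy all `size` elements to a new allocation
            copies += size
            capacity *= 2
        size += 1
    return copies

for n in (10, 1_000, 1_000_000):
    print(n, total_copy_cost(n))   # total copies stay below 2n
```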
TechGeekBuzz
A Guide to Flatten List & List of Lists in Python [Beginner's Guide]
Use the Python list's extend method to append all of each sublist's elements to the end of the list, then return the 1-D list.

def flat_2d(two_d):
    # initialize a new empty list that will contain
    # all the list elements in a 1-D pattern
    one_d = []
    for i in two_d:
        # extend the one_d list with each sublist's elements
        one_d.extend(i)
    return one_d

two_d = [[10, 20, 30, 40], [50, 60, 70], [80, 90], [100, 110]]
print(flat_2d(two_d))

Time Complexity: each extend call is O(k) in the length of the sublist it appends, so the program as a whole is O(N) in the total number of elements.
YourBasic
Time complexity of array/list operations [Java, Python] · YourBasic
The following ArrayList methods operate on a subset of the elements, but still have time complexity that depends on the size n of the list. Note: add(E element) takes constant amortized time, even though the worst-case time is linear. The following Python list operations operate on a subset of the elements, but still have time complexity that depends on n = len(a).
Python Wiki
TimeComplexity - Python Wiki
Internally, a list is represented as an array; the largest costs come from growing beyond the current allocation size (because everything must move), or from inserting or deleting somewhere near the beginning (because everything after that must move).
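The wiki's point shows up directly in timings: insert(0, ...) shifts every existing element, while append usually writes into spare capacity. A quick sketch with an arbitrary workload size:

```python
import timeit

n = 20_000
t_append = timeit.timeit('lst.append(0)', setup='lst = []', number=n)
t_insert = timeit.timeit('lst.insert(0, 0)', setup='lst = []', number=n)
# Each insert(0, ...) moves the whole list, so total work grows quadratically.
print(f"append: {t_append:.4f}s  insert(0): {t_insert:.4f}s")
```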
CodeRivers
Python Flatten Array: Unraveling the Complexity - CodeRivers
April 5, 2025 - For instance, if we want to calculate ... array, flattening it first can make the calculation straightforward. - Data storage and transmission: In some cases, storing or transmitting data in a one-dimensional format can be more efficient. It can reduce the complexity of data structures and make it easier to work with data across different systems. numpy is a powerful library for numerical computing in Python...
Top answer (1 of 5, score 119)

You could use numpy.concatenate, which as the name suggests, basically concatenates all the elements of such an input list into a single NumPy array, like so -

import numpy as np
out = np.concatenate(input_list).ravel()

If you wish the final output to be a list, you can extend the solution, like so -

out = np.concatenate(input_list).ravel().tolist()

Sample run -

In [24]: input_list
Out[24]: 
[array([[ 0.00353654]]),
 array([[ 0.00353654]]),
 array([[ 0.00353654]]),
 array([[ 0.00353654]]),
 array([[ 0.00353654]]),
 array([[ 0.00353654]]),
 array([[ 0.00353654]]),
 array([[ 0.00353654]]),
 array([[ 0.00353654]]),
 array([[ 0.00353654]]),
 array([[ 0.00353654]]),
 array([[ 0.00353654]]),
 array([[ 0.00353654]])]

In [25]: np.concatenate(input_list).ravel()
Out[25]: 
array([ 0.00353654,  0.00353654,  0.00353654,  0.00353654,  0.00353654,
        0.00353654,  0.00353654,  0.00353654,  0.00353654,  0.00353654,
        0.00353654,  0.00353654,  0.00353654])

Convert to list -

In [26]: np.concatenate(input_list).ravel().tolist()
Out[26]: 
[0.00353654,
 0.00353654,
 0.00353654,
 0.00353654,
 0.00353654,
 0.00353654,
 0.00353654,
 0.00353654,
 0.00353654,
 0.00353654,
 0.00353654,
 0.00353654,
 0.00353654]
2 of 5 (score 20)

Can also be done by

np.array(list_of_arrays).flatten().tolist()

resulting in

[0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654]

Update

As @aydow points out in the comments, using numpy.ndarray.ravel can be faster if one doesn't care about getting a copy or a view

np.array(list_of_arrays).ravel()

Although, according to the docs:

When a view is desired in as many cases as possible, arr.reshape(-1) may be preferable.

In other words

np.array(list_of_arrays).reshape(-1)

My initial suggestion was to use numpy.ndarray.flatten, which returns a copy every time, and that affects performance.
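Whether a given call handed back a view or a copy can be verified with numpy.shares_memory; a small check on a contiguous array:

```python
import numpy as np

a = np.arange(6).reshape(3, 2)
print(np.shares_memory(a, a.ravel()))      # True: contiguous input, ravel returns a view
print(np.shares_memory(a, a.flatten()))    # False: flatten always copies
print(np.shares_memory(a, a.reshape(-1)))  # True: reshape returns a view here
```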

Let's now see how the time complexity of the above-listed solutions compares, using the perfplot package for a setup similar to the OP's:

import numpy as np
import perfplot

perfplot.show(
    setup=lambda n: np.random.rand(n, 2),
    kernels=[lambda a: a.ravel(),
             lambda a: a.flatten(),
             lambda a: a.reshape(-1)],
    labels=['ravel', 'flatten', 'reshape'],
    n_range=[2**k for k in range(16)],
    xlabel='N')

Here flatten demonstrates piecewise linear complexity, which is reasonably explained by it making a copy of the initial array, compared to the constant complexity of ravel and reshape, which return views.

It's also worth noting that, quite predictably, converting the outputs with .tolist() evens out the performance of all three, making them equally linear.

Medium
Pythonic Tips: Mastering List Flattening in Python | by Dilermando Piva Junior | Medium
April 24, 2025 - List Comprehension: This method takes about 0.45 milliseconds, has low memory usage, and operates with a time complexity of O(n). Recursion: Recursive flattening requires about 1.20 milliseconds, uses medium memory, and also has a complexity ...
AlgoMonster
2625. Flatten Deeply Nested Array - In-Depth Explanation
In-depth solution and explanation for LeetCode 2625. Flatten Deeply Nested Array in Python, Java, C++ and more. Intuitions, example walk through, and complexity analysis. Better than official and forum solutions.
GitHub
Flatten array in JavaScript · GitHub
Elements is the sum of integers and nested arrays. There may be a better way to express this in Big O notation, but idk how. [0]: that number is 3 between push, copy, and pop; it's not done for every element, it's only done once.
Gitbooks
Array Flattening · Problem Solving for Coding interviews
... Array
    result.concat(x)
  else
    result << x
  end
end
return result
end

Test.expect flatten([]) == []
Test.expect flatten([1,2,3]) == [1,2,3]
Test.expect flatten([[1,2,3],["a","b","c"],[1,2,3]]) == [1,2,3,"a","b","c",1,2,3]
Test.expect flatten([[3,4,5],[[9,9,9]],["a,b,c"]]) == [3,4,5,[9,9,9],"a,b,c"]
Test.expect flatten([[[3],[4],[5]],[9],[9],[8],[[1,2,3]]]) == [[3],[4],[5],9,9,8,[1,2,3]]
Reddit
r/learnpython on Reddit: What is the time complexity of the “in” operation
September 1, 2021 -

I'm not the biggest Python user. But I was looking at a friend's code yesterday and they had something like:

for x in (list of 40,000):
    for y in (list of 2.7 million):
        if x == y:
            append something

This was obviously super slow, so they changed it to something like:

for x in (list of 2.7 million):
    if x in (list of 40,000):
        append something

This moved much faster. I get the point of one for loop being faster than two, but what is that "in" exists function doing that makes it so much faster? I always thought that checking if something exists is O(n), which shouldn't be faster. Also, this was for ML purposes, so they were likely using numpy stuff.
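Two things are going on in that speedup: x in lst runs its scan in C and stops at the first match, and converting the 40,000-element list to a set makes membership an average O(1) hash lookup. A hedged sketch with illustrative sizes:

```python
import timeit

small = list(range(40_000))
small_set = set(small)       # one-time O(n) conversion

# Worst case for the list scan: the element is at the very end.
t_list = timeit.timeit(lambda: 39_999 in small, number=100)
t_set = timeit.timeit(lambda: 39_999 in small_set, number=100)
print(f"list: {t_list:.5f}s  set: {t_set:.5f}s")  # the set lookup is far faster
```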

Algor Education
Python Array Operations and Time Complexity | Algor Cards
Efficiency of array operations depends on time complexity; choose operations with lower complexity for better performance. ... Slicing provides a way to access sub-parts of arrays quickly; syntax is array[start:stop:step]. ... Numpy enhances Python with high-performance operations on large arrays and matrices; includes functions for advanced data manipulation.
Quora
What are the time complexity considerations of lists in Python? - Quora
Answer: In a normal list on average: * Append : O(1) * Extend : O(k) - k is the length of the extension * Index : O(1) * Slice : O(k) * Sort : O(n log n) - n is the length of the list * Len : O(1) * Pop : O(1) - pop from end * Insert : O(n) ...
Top answer (1 of 3, score 206)

It's amortized O(1), not O(1).

Let's say the list's reserved size is 8 elements and it doubles in size when space runs out. You want to push 50 elements.

The first 8 elements push in O(1). The ninth triggers a reallocation and 8 copies, followed by an O(1) push. The next 7 push in O(1). The seventeenth triggers a reallocation and 16 copies, followed by an O(1) push. The next 15 push in O(1). The thirty-third triggers a reallocation and 32 copies, followed by an O(1) push. The next 31 push in O(1). This continues, with the size of the list doubling again at the 65th, 129th, 257th push, etc.

So all of the pushes have O(1) complexity; we had 56 copies at O(1), and 3 reallocations at O(n), with n = 8, 16, and 32. Note that this is a geometric series and asymptotically equals O(n), with n = the final size of the list. That means the whole operation of pushing n objects onto the list is O(n). If we amortize that per element, it's O(n)/n = O(1).
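The bookkeeping in that walk-through can be replayed with a short simulation (push_costs is an illustrative helper; a reserved size of 8 that doubles on overflow is the answer's simplification, not CPython's actual growth factor):

```python
def push_costs(n, capacity=8):
    """Per-push cost: 1, plus a copy of every existing element when we realloc."""
    size, costs = 0, []
    for _ in range(n):
        cost = 1
        if size == capacity:   # out of space: copy `size` elements, double capacity
            cost += size
            capacity *= 2
        size += 1
        costs.append(cost)
    return costs

c = push_costs(50)
print(sum(c))       # 106 = 50 pushes + (8 + 16 + 32) copied elements
print(sum(c) / 50)  # amortized cost per push stays a small constant
```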

2 of 3 (score 61)

If you look at the footnote in the document you linked, you can see that they include a caveat:

These operations rely on the "Amortized" part of "Amortized Worst Case". Individual actions may take surprisingly long, depending on the history of the container.

Using amortized analysis, even if we occasionally have to perform expensive operations, we can get an upper bound on the 'average' cost of operations when we consider them as a sequence, instead of individually.

So, any individual operation could be very expensive, O(n) or O(n^2) or something even bigger, but since we know these operations are rare, we can guarantee that a sequence of n operations can be done in O(n) time overall.

AlgoMonster
341. Flatten Nested List Iterator - In-Depth Explanation
In-depth solution and explanation for LeetCode 341. Flatten Nested List Iterator in Python, Java, C++ and more. Intuitions, example walk through, and complexity analysis. Better than official and forum solutions.
GeeksforGeeks
Complexity Cheat Sheet for Python Operations - GeeksforGeeks
July 12, 2025 - This cheat sheet is designed to help developers understand the average and worst-case complexities of common operations for these data structures that help them write optimized and efficient code in Python. Python's list is an ordered, mutable sequence, often implemented as a dynamic array. Below are the time complexities for common list operations: