You are creating a new list object each time you concatenate: every element of the old list must be copied into the new one, plus the one extra element. So yes, l = l + [i] is an O(N) operation, not O(1), and the loop as a whole is O(N^2).

At the very least, don't use + concatenation; use the augmented assignment l += [i], which behaves like list.extend(): it grows the existing list object in place instead of binding the name to a brand-new copy:

def test3():
    l = []
    for i in range(1000):
        l += [i]  # or use l.extend([i])
    return l

This produces:

>>> print timeit.repeat(stmt=test1, number=100, repeat=2)
[0.1333179473876953, 0.12804388999938965]
>>> print timeit.repeat(stmt=test2, number=100, repeat=2)
[0.01052403450012207, 0.007989168167114258]
>>> print timeit.repeat(stmt=test3, number=100, repeat=2)
[0.013209104537963867, 0.011193037033081055]
Answer from Martijn Pieters on Stack Overflow
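For reference, here is a self-contained version of the comparison. The test1/test2 functions timed above are not shown in this excerpt, so concat_loop and append_loop below are my own stand-ins for the concatenating and appending variants:

```python
import timeit

def concat_loop(n=1000):
    # O(n) per step: l = l + [i] builds a brand-new list every iteration
    l = []
    for i in range(n):
        l = l + [i]
    return l

def append_loop(n=1000):
    # amortized O(1) per step: append mutates the same list in place
    l = []
    for i in range(n):
        l.append(i)
    return l

assert concat_loop() == append_loop() == list(range(1000))

# Absolute numbers are machine-dependent, but append_loop should win clearly
print(timeit.timeit(concat_loop, number=100))
print(timeit.timeit(append_loop, number=100))
```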
Discussions

Computer Science Stack Exchange: Time complexity of min() and max() on a list of constant size? (March 11, 2021)
If you use min() or max() on a constant-sized list, even in a loop, is the time complexity O(1)?
What is Python's list.append() method WORST Time Complexity? It can't be O(1), right?
"I've read on Stackoverflow that in Python Array doubles in size when run of space"

It's actually not double, but it does increase in proportion to the list size (IIRC it's about 12%, though there's some variance at smaller sizes). This results in the same asymptotics, though, so I'll assume doubling in the following description for simplicity.

"So basically it has to copy all addresses log(n) times."

Not quite. Suppose we're appending n items to an empty vector. We will indeed do log(n) resizes, so you might think: "Well, resizes are O(n), so log(n) resizes is n log(n) operations, which, amortized over the n appends we did, means n log(n)/n, or log(n) per append." However, there's a flaw in this analysis: we do not copy "all addresses" each of those times, i.e. the copies are not each O(n). Sure, the last copy we do involves copying n items, but the one before it copied only n/2, and so on. So we actually do 1 + 2 + 4 + ... + n copies, which sums to 2n - 1. Divide that by n and you get about 2 operations per append: a constant.

"I assume since it copies addresses, not information"

We are indeed only copying the pointer, but this doesn't really matter for the complexity analysis. Even if it were copying a large structure, that would only increase the time by a constant factor.

"Does that mean that O(1) is the average time complexity?"

It's the amortized worst-case complexity, i.e. what happens over a large number of operations. Any one operation can indeed end up doing O(n) work, but over a large number of operations you are guaranteed O(1) per operation, a distinction that just talking about the average case wouldn't capture.
r/learnpython, October 26, 2022
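The over-allocation this answer describes can be observed directly with sys.getsizeof. This is a CPython-specific sketch; the exact growth factors vary by version, but the pattern of occasional jumps is what makes append amortized O(1):

```python
import sys

# CPython over-allocates on growth, so most appends fit in spare capacity
l = []
sizes = []
for i in range(64):
    l.append(i)
    sizes.append(sys.getsizeof(l))

# An append triggered a reallocation exactly when the reported size changed
resizes = sum(1 for a, b in zip(sizes, sizes[1:]) if a != b)
print(f"{len(l)} appends caused only {resizes} reallocations")
```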
Runtime of a string join?
https://github.com/python/cpython/blob/main/Objects/stringlib/join.h Looks linear to me.
r/learnpython, November 17, 2023
Python String Addition Time Complexity
You do need to take into account the string copying, as others have said. It matters a lot, because it's the difference between O(m*(n^2)) and O(m*n). The O(m*(n^2)) solution would likely time out on LeetCode, and it would be a huge red flag in an interview. However, no one has explained how to analyze the complexity yet, and I think that'll help you.

Let's say you have n strings, each of length m. The first time you append to encodedStr it takes m operations, because encodedStr is empty. The second time it's 2m, because it copies m chars from encodedStr and another m from what you're appending. Then 3m, and so on. So the total time is m + 2m + 3m + ... + nm, which is m(1 + 2 + ... + n). Using the summation formula, that's m*n*(n+1)/2, or O(m*(n^2)) in big-O notation. (I'm assuming the strings are relatively short, so that str(len(s)) + '#' can be treated as an O(1) operation.)

As u/aocregacc said, to get an O(m*n) solution, use a list and then str.join() at the end. This is the standard way in Python.
r/leetcode, May 7, 2023
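A minimal sketch of the two approaches from that answer (generic helper names of my own, not from the thread):

```python
def build_concat(parts):
    # each += may copy everything accumulated so far: O(m*n^2) overall
    s = ""
    for p in parts:
        s += p
    return s

def build_join(parts):
    # join copies each character exactly once: O(m*n) overall
    return "".join(parts)

parts = [str(i) + "#" for i in range(1000)]
assert build_concat(parts) == build_join(parts)
```

Note that CPython can sometimes resize a string in place when += is applied to a string with no other references, so the quadratic slowdown may not show up in a quick local test; str.join() is the portable guarantee.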
The original question from the r/learnpython append thread:

I know that lists in Python are implemented using arrays that store addresses to the information. Therefore, after several appends, when an array is loaded, it needs to reserve a new space and copy the entire array of addresses to the new place.

I've read on Stackoverflow that in Python Array doubles in size when run of space. So basically it has to copy all addresses log(n) times.

The bigger the list, the more copying it will need to do. So how can append operation have a Constant Time Complexity O(1) if it has some dependence on the array size

I assume since it copies addresses, not information, it shouldn't take long, python takes only 8 bytes for address after all. Moreover, it does so very rarely. Does that mean that O(1) is the average time complexity? Is my assumption right?

A second answer from the same thread:

Personally, I thought that lists worked as doubly linked lists, so insert time was O(1). But if it works as a dynamic array, the time complexity should be amortized, i.e. close to O(1) but not quite.
Python wiki: TimeComplexity (wiki.python.org/moin/TimeComplexity)
Internally, a list is represented as an array; the largest costs come from growing beyond the current allocation size (because everything must move), or from inserting or deleting somewhere near the beginning (because everything after that must move).
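A small illustration of the "inserting near the beginning" cost, with collections.deque as the usual O(1) alternative for front insertion (my own example, not from the wiki page):

```python
from collections import deque

def front_list(n):
    # insert at index 0 shifts every existing element right: O(n) per insert
    l = []
    for i in range(n):
        l.insert(0, i)
    return l

def front_deque(n):
    # deque supports O(1) appends at either end
    d = deque()
    for i in range(n):
        d.appendleft(i)
    return list(d)

assert front_list(100) == front_deque(100) == list(range(99, -1, -1))
```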
Codecademy: How to Concatenate Two Lists in Python
For large lists or repeated concatenation, + and extend() are the most efficient. Between them, extend() is typically more efficient than +, because + creates a new list with each operation, which increases memory usage and running time.
DigitalOcean: 6+ Ways to Concatenate Lists in Python (October 8, 2025)
October 8, 2025 - However, note that using sum() for list concatenation is inefficient for large lists because it repeatedly creates new lists during the addition process, leading to quadratic time complexity.
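The quadratic sum() pattern versus a linear alternative can be sketched as follows; itertools.chain.from_iterable is the standard linear way to flatten:

```python
from itertools import chain

lists = [[i, i + 1] for i in range(0, 10, 2)]

# sum() does list + list repeatedly, rebuilding the accumulator each time:
# quadratic in the total number of elements
flat_sum = sum(lists, [])

# chain.from_iterable visits each element exactly once: linear
flat_chain = list(chain.from_iterable(lists))

assert flat_sum == flat_chain == list(range(10))
```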
Quora: What are the time complexity considerations of lists in Python?
Answer: In a normal list, on average:
* Append: O(1)
* Extend: O(k), where k is the length of the extension
* Index: O(1)
* Slice: O(k)
* Sort: O(n log n), where n is the length of the list
* Len: O(1)
* Pop (from end): O(1)
* Insert: O(n) ...
Unstop: Python String Concatenation In 10 Ways (December 31, 2023)
The time complexity of str.format() is O(n), where n is the length of the output string: format() first parses the format string, then constructs the output by appending the formatted values.
Quora: Does string concatenation in a loop have a run time of O(N squared)?
Answer: That depends on the language, the loop, and many other things. But taking a guess at what motivated this question: in Java and C# the String class is immutable, so if you try to modify a String, it makes a new copy.
Google Groups: [Numpy-discussion] Array concatenation performance
Sturla Molden, replying to Christopher Barker ("However, Python lists hold Python objects, so it's a bit inefficient, at least in terms of memory use"): I guess that is mainly in terms of memory use, as the Python (scalar) objects must be created for the call to append. np.array([]) can also be inefficient, as Anne explained yesterday, but an appendable ndarray would not have that problem. But I don't know how important it is to save a few milliseconds worth of CPU time for this.
Studytonight: How to Concatenate two or multiple Lists in Python (January 20, 2021)
extend() is a method provided by Python lists and can be used for list concatenation; it performs an in-place extension of the first list.
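The in-place behavior matters for aliasing as well as speed. A small sketch of my own showing the difference between += and + on lists:

```python
a = [1, 2]
alias = a
a += [3]            # in place, like a.extend([3]); the alias sees the change
assert a is alias and alias == [1, 2, 3]

b = [1, 2]
alias_b = b
b = b + [3]         # builds a new list; the alias still holds the old one
assert b is not alias_b and alias_b == [1, 2]
```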