I think it should be O(n^2) in both cases. My reasoning:

for idx, num in enumerate(hm3):
    for i in range(hm3[num]):
        l.append(num)

Outer loop: O(n)
Inner loop: O(k), where k is the largest count (worst case k = n)
append: O(1)
Total: n * k * O(1) = O(n * k), which is O(n^2) in the worst case

for n in hm3:
  l.extend( [n]*hm3[n])

Outer loop: O(n)
extend: O(k), where k is the length of the extension
Total: O(n * k) = O(n^2) in the worst case
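For reference, here is a self-contained version of both snippets (hm3 is assumed here to be a Counter-style dict mapping each value to its count - the original doesn't show how it's built):

```python
from collections import Counter

nums = [3, 1, 3, 2, 1, 3]
hm3 = Counter(nums)            # {3: 3, 1: 2, 2: 1}

# Version 1: nested loops with append
l = []
for num in hm3:
    for _ in range(hm3[num]):
        l.append(num)

# Version 2: extend with list repetition
l2 = []
for n in hm3:
    l2.extend([n] * hm3[n])

# Both rebuild the original multiset of values
assert l == l2
assert sorted(l) == sorted(nums)
```

Either way, the total number of elements appended equals the sum of the counts, i.e. the length of the output list.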

Let me know if my thinking is wrong.

Answer from vipul bhatia on Stack Overflow
TimeComplexity - Python Wiki
[2] = Popping the intermediate ... is O(n - k). The best case is popping the second-to-last element, which necessitates one move; the worst case is popping the first element, which involves n - 1 moves. The average case, for an average value of k, is popping the element in the middle of the list, which takes O(n/2) = O(n) ...
python - Time complexity of appending a list to a list - Stack Overflow
Since the implementation of .extend ... https://github.com/python/cpython/blob/master/Objects/listobject.c. As you can see, there's some constant-time overhead, setting up the resulting list with enough memory and the like. And then the iterator that the list is being extended with is exhausted linearly, which means the time complexity is ...
๐ŸŒ stackoverflow.com
What is the difference between Python's list methods append and extend? - Stack Overflow
It also works like extend, in that the second iterable can be any kind of iterable. Don't get confused: my_list = my_list + another_list is not equivalent to my_list += another_list - the former gives you a brand-new list assigned to my_list. Append has (amortized) constant time complexity, O(1).
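A quick runnable sketch of the `+` vs `+=` distinction mentioned in that answer:

```python
a = [1, 2]
alias = a

alias += [3]              # in-place extend: both names still see one list
assert alias is a
assert a == [1, 2, 3]

alias = alias + [4]       # binary + builds a brand-new list
assert alias is not a
assert alias == [1, 2, 3, 4]
assert a == [1, 2, 3]     # the original list is unchanged
```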
๐ŸŒ stackoverflow.com
Top answer
1 of 3
206

It's amortized O(1), not worst-case O(1).

Let's say the list reserved size is 8 elements and it doubles in size when space runs out. You want to push 50 elements.

The first 8 elements push in O(1). The ninth triggers a reallocation and 8 copies, followed by an O(1) push. The next 7 push in O(1). The seventeenth triggers a reallocation and 16 copies, followed by an O(1) push. The next 15 push in O(1). The thirty-third triggers a reallocation and 32 copies, followed by an O(1) push. The remaining 17 push in O(1). Had we kept pushing, the list's size would double again at the 65th, 129th, 257th push, and so on.

So all 50 of the pushes have O(1) complexity, and on top of that we did 3 reallocations at O(n), with n = 8, 16, and 32 - that is, 56 copies in total. Note that the copy counts form a geometric series whose sum is O(n), with n = the final size of the list. That means the whole operation of pushing n objects onto the list is O(n). If we amortize that over the elements, it's O(n)/n = O(1) per push.
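That copy count can be checked with a small simulation (a sketch that assumes doubling growth from an initial capacity of 8, as in the example above - not CPython's exact policy):

```python
def simulate_pushes(n, initial_capacity=8):
    """Count element copies made by a doubling dynamic array over n pushes."""
    capacity = initial_capacity
    size = 0
    copies = 0
    for _ in range(n):
        if size == capacity:   # out of space: reallocate, copying all elements
            copies += size
            capacity *= 2
        size += 1              # the push itself is O(1)
    return copies

print(simulate_pushes(50))     # 56: the reallocations copied 8 + 16 + 32 elements
print(simulate_pushes(1024))   # 1016: still under 2 copies per push
```

The copies-per-push ratio stays below 2 no matter how many pushes you do, which is the geometric-series argument in a nutshell.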

2 of 3

If you look at the footnote in the document you linked, you can see that they include a caveat:

These operations rely on the "Amortized" part of "Amortized Worst Case". Individual actions may take surprisingly long, depending on the history of the container.

Using amortized analysis, even if we have to occasionally perform expensive operations, we can get an upper bound on the 'average' cost of operations when we consider them as a sequence, instead of individually.

So, any individual operation could be very expensive - O(n) or O(n^2) or something even bigger - but since we know these operations are rare, we can guarantee that a sequence of n operations can be done in O(n) time overall.

What are the time complexity considerations of lists in Python? - Quora
Answer: In a normal list, on average:
* Append: O(1)
* Extend: O(k), where k is the length of the extension
* Index: O(1)
* Slice: O(k)
* Sort: O(n log n), where n is the length of the list
* Len: O(1)
* Pop: O(1) (pop from end)
* Insert: O(n) ...
Python List .extend() Method Guide | Uses and Examples
June 28, 2024 - Its capability to handle different types of iterables renders it a versatile tool for list manipulation. Both extend() and append() have a time complexity of O(k), where k represents the number of elements to be added.
Difference Between Append and Extend in Python List Methods | Scaler Topics
April 4, 2024 - The key distinction lies in the type of object they accept. append() takes a single element and adds it as is, while extend() takes an iterable (like a list) and appends its elements individually to the list. Understanding this difference is crucial for precise list manipulation. The append() method has a constant time complexity of O(1) since it adds only one element.
append() and extend() in Python program
extend() method is used to prolong a list with an iterable. The time complexity of extend() method is O(n), where n is the length of the iterable.
r/learnpython on Reddit: What is Python's list.append() method WORST Time Complexity? It can't be O(1), right?
October 26, 2022

I know that lists in Python are implemented using arrays that store addresses to the information. Therefore, after several appends, when the array is full, it needs to reserve a new space and copy the entire array of addresses to the new place.

I've read on Stack Overflow that in Python the array doubles in size when it runs out of space. So basically it has to copy all addresses log(n) times.

The bigger the list, the more copying it will need to do. So how can the append operation have constant time complexity O(1) if it has some dependence on the array size?

I assume that since it copies addresses, not the information itself, it shouldn't take long - an address is only 8 bytes, after all. Moreover, it does so very rarely. Does that mean that O(1) is the average time complexity? Is my assumption right?

Top answer
1 of 4
> I've read on Stackoverflow that in Python Array doubles in size when run of space

It's actually not double, but it does increase proportionally to the list size (IIRC it's about 12%, though there's some variance at smaller sizes). This results in the same asymptotics though, so I'll assume doubling in the following description for simplicity.

> So basically it has to copy all addresses log(n) times.

Not quite. Suppose we're appending n items to an empty vector. We will indeed do log(n) resizes, so you might think "Well, resizes are O(n), so log(n) resizes is n log(n) operations, which if we amortize over the n appends we did means n log(n)/n, or log(n) per append". However, there's a flaw in this analysis: we do not copy "all addresses" each of those times, i.e. the copies are not O(n). Sure, the last copy we do will involve copying n items, but the one before it only copied n/2, and so on. So we actually do 1 + 2 + 4 + ... + n copies, which sums to 2n - 1. Divide that by n and you get ~2 operations per append - a constant.

> I assume since it copies addresses, not information

We are indeed only copying the pointer, but this doesn't really matter for the complexity analysis. Even if it was copying a large structure, that'd only increase the time by a constant factor.

> Does that mean that O(1) is the average time complexity?

It's the amortized worst-case complexity (i.e. what happens over a large number of operations). While any one operation can indeed end up doing O(n) work, over a large number of operations you are guaranteed to average O(1) - a distinction that just talking about the average case wouldn't capture.
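The "about 12%" figure matches CPython's over-allocation in `Objects/listobject.c`; the growth formula looks roughly like this (a sketch only - the exact expression has varied across CPython versions):

```python
def approx_new_allocated(newsize):
    # Approximation of CPython's list_resize over-allocation:
    # newsize + newsize/8 + 6, rounded down to a multiple of 4,
    # i.e. about 12.5% headroom for future appends.
    return (newsize + (newsize >> 3) + 6) & ~3

for n in (10, 100, 1000):
    print(n, "->", approx_new_allocated(n))   # 10 -> 16, 100 -> 116, 1000 -> 1128
```

Proportional (rather than fixed-step) growth is what makes the amortized-O(1) argument go through; the growth factor only changes the constant.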
2 of 4
Personally, I thought that lists worked as doubly linked lists, so insert time was O(1). But since it works as a dynamic array, the time complexity is amortized - i.e., close to O(1) but not quite.
Python List extend() Method โ€“ Be on the Right Side of Change
March 13, 2020 - At the same time, the runtime complexity of the code is linear because each loop iteration can be completed in constant time. The trade-off is that you have to maintain two data structures which results in double the memory overhead. This nicely demonstrates the common inverse relationship between memory and runtime overhead. If you use the lst.extend(iter) operation, you add the elements in iter to the existing list lst...
List Extend Method - TechBeamers
November 30, 2025 - After the Python extend method gets called, you will get the updated list object. ... It has a time complexity that is proportional to the length of the list that we want to add.
Python List extend() Method
For instance, say we are extending the list ['1', '2'] with the string '34'; the resultant list would be ['1', '2', '3', '4'] instead of ['1', '2', '34']. Therefore, the time complexity of this method is O(n), where n is the length of this iterable.
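That string example, runnable:

```python
lst = ['1', '2']
lst.extend('34')          # a string is an iterable of characters
assert lst == ['1', '2', '3', '4']

lst2 = ['1', '2']
lst2.append('34')         # append adds the object as a single element
assert lst2 == ['1', '2', '34']
```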
r/learnprogramming on Reddit: Time Complexity of Python list comprehension then list[i] = value vs. list = [] then list.append(value)
September 26, 2021

Let's say we are writing a function where we know the length of the output list equals the length of the input list. All we need to do is insert some value into the output list and return it. I'd like to know whether one approach's time complexity is better than the other.

First approach:

def someFunc(inputArray):
    result = [1 for _ in inputArray]
    for i in range(len(inputArray)):
        someValue = 100
        result[i] = someValue
    return result

vs.

def someFunc(inputArray):
    result = []
    for i in range(len(inputArray)):
        someValue = 100
        result.append(someValue)
    return result

The first approach's `result[i] = someValue` is an O(1) operation; however, isn't the list comprehension O(n)? If that's the case, then the overall algorithm would be O(2n) time?

The second approach's `result.append(someValue)` can be viewed as O(1)? That leads to an overall time complexity of O(n)?

Does that mean that, in terms of time complexity, the second approach is better than the first? Or not?

Top answer
1 of 3
In terms of Big O the algorithms are equivalent to each other as they both have linear growth. As you say, the list comprehension in the first example is O(n) so the function is O(2n), but in algorithmic analysis that is considered equivalent to O(n). In practical terms the second approach is better as it only requires one iteration over the input array.
2 of 3
Yes, the time complexity of the list comprehension is O(n): you literally iterate over every list element, so if the list has n elements, you're doing n iterations. It's going to be a little more efficient than a for loop though, because it's implemented in C and better optimized. But if we're talking about asymptotic growth (which Big O is about), then the time complexity is the same for both the for loop and the list comprehension.

As for the actual code you sent: in the first version, you're iterating over the list two times, which kind of doesn't make sense - you could just do [100 for _ in inputArray] and omit the second loop. (You can even call functions, use if-else expressions, and many other things from inside a list comprehension.) But to answer your question directly: yes, its time complexity is n + n, or O(2n). BUT 2 is a constant, which doesn't affect the asymptotic growth, so it isn't counted; the complexity is still considered O(n).

With the second version, your assumption is right: appending to a list has a time complexity of O(1), since the list's size doesn't matter for this operation. Just for your information, removing elements from a list is a very different matter. If you remove by value or by index, the operation has a worst-case time complexity of O(n): if you removed the first element, for example, the computer has to move every remaining element (n - 1 elements) one slot to the left under the hood. That's not the case with append, since you simply add a new element at the end and nothing else moves. Popping the last element is also O(1), since you just remove one element from the end and everything stays where it is.
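The two versions from the question, plus the removal costs that answer describes, in runnable form:

```python
inputArray = [7, 8, 9]

# First approach: prefill via comprehension, then assign by index
result = [1 for _ in inputArray]
for i in range(len(inputArray)):
    result[i] = 100

# Second approach: start empty and append
result2 = []
for _ in inputArray:
    result2.append(100)

assert result == result2 == [100, 100, 100]

# Removal: pop() from the end is O(1); pop(0) shifts everything left, O(n)
lst = list(range(5))
assert lst.pop() == 4 and lst == [0, 1, 2, 3]
assert lst.pop(0) == 0 and lst == [1, 2, 3]
```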
Understanding Python List Operations: A Big O Complexity Guide | by Ivan Markeev | Medium
June 4, 2023 - In this article, we will explore the Big O complexity of common list operations, helping you make informed decisions about algorithm design and performance optimizations. Accessing an element in a Python list by its index is an efficient operation with constant time complexity.
Space Complexity of List Operations in Python - GeeksforGeeks
July 23, 2025 - This is because extend() iterates ... after the insertion point must be shifted. ... Space Complexity: O(n), where n is the number of elements in the list....
What is the time complexity of list append and remove operations in Python | LabEx
This is because the list.append() operation simply adds a new element to the end of the list, and the underlying implementation of the Python list data structure is designed to handle this operation efficiently. Here's an example code snippet to demonstrate the constant time complexity of the list.append() operation:
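The snippet itself isn't reproduced in the excerpt above; a stand-in along the same lines (the exact timings are machine-dependent - the point is that the per-append cost barely changes with the starting list size):

```python
import timeit

# Measure the average cost of one append for lists of very different sizes
for start_size in (0, 10_000, 1_000_000):
    t = timeit.timeit("lst.append(0)",
                      setup=f"lst = [0] * {start_size}",
                      number=100_000)
    print(f"start size {start_size:>9}: {t / 100_000:.2e} s per append")
```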
How fast are list operations in Python? - Quora
Dynamic array is a classic CS data structure, where elements are contiguous as an array, and the space is reallocated when it runs out, expanding the space by a certain factor each time (usually doubling). The main features are random access (O(1) indexing) and amortized O(1) appending to the end (in fact, this is one of the classic examples of amortized analysis). Inserting or deleting anywhere else (beginning and middle) requires shifting later elements and is O(n). Everything presented seems to be completely consistent with the complexities of a dynamic array. ... Python lists are dynamic arrays (CPython implementation), so most operations have predictable time/space costs.
Complexity of Python Operations
In fact, we could also simplify ... in our code. This change will speed up the code, but it won't change the complexity analysis because O(N + N Log N) = O (N Log N)....
Python | Repeat and Multiply list extension - GeeksforGeeks
March 17, 2023 - Time Complexity: O(N * length of test_list) Auxiliary Space: O(N * length of test_list) ... This function takes a list, N and M as input and returns the list after extension and multiplication using recursion. it defines a recursive function ...