Answer from Brian on reddit.com
🌐
Reddit
reddit.com › r/learnpython › what is python's list.append() method worst time complexity? it can't be o(1), right?
r/learnpython on Reddit: What is Python's list.append() method WORST Time Complexity? It can't be O(1), right?
October 26, 2022

I know that lists in Python are implemented using arrays that store addresses to the information. Therefore, after several appends, when an array is loaded, it needs to reserve a new space and copy the entire array of addresses to the new place.

I've read on Stackoverflow that in Python the array doubles in size when it runs out of space. So basically it has to copy all addresses log(n) times.

The bigger the list, the more copying it will need to do. So how can append operation have a Constant Time Complexity O(1) if it has some dependence on the array size

I assume that since it copies addresses, not the information itself, it shouldn't take long; Python uses only 8 bytes per address, after all. Moreover, it does so very rarely. Does that mean that O(1) is the average time complexity? Is my assumption right?

Top answer
1 of 4
4
"I've read on Stackoverflow that in Python the array doubles in size when it runs out of space"

It's actually not doubling, but the capacity does increase in proportion to the list size (CPython over-allocates by roughly 12.5%, with some variance at smaller sizes). This gives the same asymptotics, though, so I'll assume doubling in the description below for simplicity.

"So basically it has to copy all addresses log(n) times"

Not quite. Suppose we're appending n items to an empty vector. We will indeed do log(n) resizes, so you might think: "Well, resizes are O(n), so log(n) resizes is n log(n) operations, which amortized over the n appends we did means n log(n)/n, or log(n) per append." There's a flaw in this analysis, however: we do not copy all the addresses on each of those resizes, i.e. each copy is not O(n). Sure, the last copy involves copying n items, but the one before it only copied n/2, and so on. So we actually do 1 + 2 + 4 + ... + n copies in total, which sums to 2n - 1. Divide that by n and you get about 2 operations per append: a constant.

"I assume since it copies addresses, not information"

We are indeed only copying the pointers, but this doesn't really matter for the complexity analysis. Even if it were copying a large structure, that would only increase the time by a constant factor.

"Does that mean that O(1) is the average time complexity?"

It's the amortized worst-case complexity, i.e. what happens over a large number of operations. Any one operation can indeed end up doing O(n) work, but over a long sequence of operations you are guaranteed an average of O(1) per operation, a guarantee that merely talking about the average case wouldn't capture.
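The doubling argument above can be checked with a toy dynamic array that counts copies. This is a minimal sketch assuming a pure doubling policy (not CPython's actual ~12.5% over-allocation); the class and names are illustrative, not CPython's implementation:

```python
# Toy dynamic array with a doubling growth policy, counting how many
# element copies n appends trigger in total. (Assumed policy: double
# when full. CPython actually over-allocates by roughly 12.5%.)

class DynamicArray:
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None] * self.capacity
        self.copies = 0  # total elements moved during resizes

    def append(self, value):
        if self.size == self.capacity:
            # Out of space: allocate double and copy every existing element.
            new_data = [None] * (self.capacity * 2)
            for i in range(self.size):
                new_data[i] = self.data[i]
            self.copies += self.size
            self.data = new_data
            self.capacity *= 2
        self.data[self.size] = value
        self.size += 1

n = 1 << 16
arr = DynamicArray()
for i in range(n):
    arr.append(i)
print(arr.copies, arr.copies / n)  # total copies stay below 2n: ~O(1) per append
```

Running this shows the 1 + 2 + 4 + ... geometric series in action: the total number of copies is bounded by 2n, so the per-append cost averages out to a constant.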
2 of 4
2
Personally, I thought that lists worked as doubly linked lists, so insert time was O(1). But since they work as dynamic arrays, the time complexity should be amortized, i.e., close to O(1) but not quite.
Top answer
1 of 3
205

It's amortized O(1), not O(1).

Let's say the list reserved size is 8 elements and it doubles in size when space runs out. You want to push 50 elements.

The first 8 elements push in O(1). The ninth triggers a reallocation and 8 copies, followed by an O(1) push. The next 7 push in O(1). The seventeenth triggers a reallocation and 16 copies, followed by an O(1) push. The next 15 push in O(1). The thirty-third triggers a reallocation and 32 copies, followed by an O(1) push. The next 31 push in O(1). If we kept pushing, the list would double again at the 65th, 129th, 257th element, and so on.

So all 50 pushes have O(1) complexity, plus 3 reallocations at O(n) with n = 8, 16, and 32 (56 copies in total). Note that this is a geometric series that asymptotically sums to O(n), with n = the final size of the list. That means the whole operation of pushing n objects onto the list is O(n). If we amortize that per element, it's O(n)/n = O(1).
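That walk-through (start capacity 8, double when full, push 50 elements) can be simulated in a few lines to confirm the counts:

```python
# Simulate the example: start capacity 8, double when full, push 50
# elements, and count copies and reallocations.
capacity = 8
size = 0
copies = 0
reallocations = 0

for _ in range(50):
    if size == capacity:
        copies += size   # every existing element is copied to the new block
        capacity *= 2
        reallocations += 1
    size += 1            # the push itself is O(1)

print(reallocations, copies)  # 3 reallocations, 8 + 16 + 32 = 56 copies
```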

2 of 3
61

If you look at the footnote in the document you linked, you can see that they include a caveat:

These operations rely on the "Amortized" part of "Amortized Worst Case". Individual actions may take surprisingly long, depending on the history of the container.

Using amortized analysis, even though we occasionally have to perform expensive operations, we can put an upper bound on the 'average' cost of the operations when we consider them as a sequence, instead of individually.

So, any individual operation could be very expensive, O(n) or O(n^2) or something even bigger, but since we know these expensive operations are rare, we can guarantee that a sequence of n operations can be done in O(n) time.
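You can watch how rarely CPython actually reallocates by tracking a list's byte size as it grows. This is a CPython-specific sketch: the exact sizes and growth points reported by sys.getsizeof vary between versions, but resizes should always be far rarer than appends:

```python
import sys

# Count how many of 10,000 appends actually changed the list's
# allocation (its byte size). Between resizes, appends reuse the
# over-allocated slack and the size stays flat.
lst = []
resizes = 0
last = sys.getsizeof(lst)
for i in range(10_000):
    lst.append(i)
    now = sys.getsizeof(lst)
    if now != last:
        resizes += 1
        last = now

print(resizes)  # far fewer than 10,000: the expensive operations are rare
```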

Discussions

Python String Addition Time Complexity
You do need to take into account the string copying, as others have said. It matters a lot, because that's the difference between O(m*(n^2)) and O(m*n). The O(m*(n^2)) solution would likely time out on LeetCode, and it would be a huge red flag in an interview. However, no one has explained how to analyze the complexity yet, and I think that'll help you.

Let's say you have n strings, each of length m. The first time you append to encodedStr it takes m operations, because encodedStr is empty. The second time it's 2m, because it copies m chars from encodedStr and another m from what you're appending. Then 3m, etc. So the total time is m + 2m + 3m + ... + nm, which is m(1 + 2 + ... + n). We can use the summation formula to get m*n*(n+1)/2. In big-O notation, that's O(m*(n^2)). (I'm assuming the strings are relatively short, so that we can treat str(len(s)) + '#' as an O(1) operation.)

As u/aocregacc said, to get an O(m*n) solution, use a list and then str.join() at the end. This is the standard way in Python. More on reddit.com
🌐 r/leetcode
9
0
May 7, 2023
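The character-copy counts from that analysis can be verified with a short sketch. The sizes n and m are arbitrary example values, and this models the textbook analysis: CPython sometimes optimizes in-place str += so real timings may differ, but you shouldn't rely on that:

```python
# Count character copies for the two strategies: repeated += vs. join.
n, m = 1_000, 5
parts = ["x" * m] * n  # n strings, each of length m

# Strategy 1: repeated += (each append copies the whole accumulated
# string plus the new piece): m + 2m + ... + nm = m*n*(n+1)/2 copies.
copies_concat = 0
acc_len = 0
for p in parts:
    copies_concat += acc_len + len(p)  # copy old chars + new chars
    acc_len += len(p)

# Strategy 2: collect in a list, join once (each char copied once): m*n.
copies_join = sum(len(p) for p in parts)

print(copies_concat, copies_join)  # 2502500 vs 5000: O(m*(n^2)) vs O(m*n)
```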
Time Complexity of Python list comprehension then list[i] = value vs. list = [] then list.append(value)
In terms of Big O the algorithms are equivalent to each other as they both have linear growth. As you say, the list comprehension in the first example is O(n) so the function is O(2n), but in algorithmic analysis that is considered equivalent to O(n). In practical terms the second approach is better as it only requires one iteration over the input array. More on reddit.com
🌐 r/learnprogramming
5
2
September 26, 2021
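The two approaches that thread compares can be sketched side by side (the input list and the doubling transform are arbitrary examples):

```python
nums = [1, 2, 3, 4]  # hypothetical input

# Approach 1: preallocate with a comprehension, then assign by index.
# Two passes over the data: O(n) + O(n) = O(2n), which is still O(n).
out_a = [0 for _ in nums]
for i, x in enumerate(nums):
    out_a[i] = x * 2

# Approach 2: start empty and append.
# One pass; each append is amortized O(1), so O(n) overall.
out_b = []
for x in nums:
    out_b.append(x * 2)

print(out_a == out_b)  # True: same result, same big-O
```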
How to make dequeue o(1) operation rather than o(n)?
Nvm. I just found out that collections.deque in Python has constant-time dequeue (it's implemented as a doubly linked list of fixed-size blocks, so popping from either end is O(1)). More on reddit.com
🌐 r/leetcode
11
3
April 30, 2021
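The constant-time dequeue mentioned in that thread comes from collections.deque in the standard library; a list gives O(n) dequeues because list.pop(0) shifts every remaining element down one slot:

```python
from collections import deque

# collections.deque supports O(1) appends and pops at BOTH ends,
# unlike list.pop(0), which is O(n).
q = deque([1, 2, 3])
q.append(4)          # enqueue at the right: O(1)
first = q.popleft()  # dequeue from the left: O(1)
print(first, list(q))  # 1 [2, 3, 4]
```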
How is the time complexity of the prepend and append in List object constant and linear?
The time complexities are like that because Scala's List is a linked list: prepending just adds a new head node in O(1), but appending requires traversing the whole collection to get to the end before adding the element. More on reddit.com
🌐 r/scala
12
1
June 1, 2020
🌐
Python
wiki.python.org › moin › TimeComplexity
TimeComplexity
As seen in the source code the complexities for set difference s-t or s.difference(t) (set_difference()) and in-place set difference s.difference_update(t) (set_difference_update_internal()) are different! The first one is O(len(s)) (for every element in s add it to the new set, if not in t).
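The two set-difference operations that snippet contrasts look like this (the example sets are arbitrary):

```python
s = {1, 2, 3, 4}
t = {3, 4, 5}

# s - t builds a NEW set by scanning s and keeping what's not in t: O(len(s)).
diff = s - t
print(diff)  # {1, 2}

# s.difference_update(t) mutates s in place by scanning t and
# discarding its elements from s: O(len(t)).
s.difference_update(t)
print(s)  # {1, 2}
```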
🌐
Scaler
scaler.com › home › topics › python › append in python
Python List append() Method with Examples - Scaler Topics
May 16, 2023 - In that scenario, the previous ... any element to the list and the time complexity for this worst case is O(N) where N is the size of the original list....
🌐
Bacancy Technology
bacancytechnology.com › qanda › python › pythons-list-append-method-has-01-time-complexity
Why Python’s list.append() Method Has O(1) Time Complexity
July 14, 2025 - Python lists are implemented as dynamic arrays. When you use the append() method, Python may sometimes resize the underlying array, but this resizing doesn’t happen every time. Appending to a list is considered amortized O(1) time complexity.
🌐
GeeksforGeeks
geeksforgeeks.org › python › time-complexity-for-adding-element-in-python-set-vs-list
Time Complexity for Adding Element in Python Set vs List - GeeksforGeeks
July 23, 2025 - When we add an element to a list using the append() method, Python directly adds the element to the end. This operation has O(1) amortized time complexity, as no hashing or duplicate checks are needed.
🌐
LabEx
labex.io › tutorials › python-what-is-the-time-complexity-of-list-append-and-remove-operations-in-python-397728
What is the time complexity of list append and remove operations in Python | LabEx
For example, the time complexity of the Python list.append() operation is O(1), which means that the operation takes a constant amount of time, regardless of the size of the list.
Find elsewhere
🌐
DigitalOcean
digitalocean.com › community › tutorials › python-add-to-list
How to Add Elements to a List in Python – Append, Insert & Extend | DigitalOcean
April 17, 2025 - In the example above, append(), insert(), and extend() do not create a new list, so they do not require additional memory. However, the + operator creates a new list, which requires additional memory to store the new list. By understanding the performance and memory implications of each method, you can choose the most suitable approach for your specific use case, ensuring efficient and effective list operations. In Python, you can dynamically build lists by adding user inputs or data from files.
🌐
Medium
medium.com › @ivanmarkeyev › understanding-python-list-operations-a-big-o-complexity-guide-49be9c00afb4
Understanding Python List Operations: A Big O Complexity Guide | by Ivan Markeev | Medium
June 4, 2023 - In the worst case, where the element ... or removing an element at the end of a Python list is an efficient operation with constant time complexity....
🌐
Kaggle
kaggle.com › getting-started › 170564
python append list multiple time
🌐
edSlash
edslash.com › home › python tutorials – learn with edslash › insert() method in list – python
insert() method in list - Python - edSlash
September 21, 2024 - The .insert() method takes O(n) time complexity where n is the number of elements.
🌐
W3Schools
w3schools.com › python › python_dsa_lists.asp
Python Lists and Arrays
Each algorithm in this tutorial ... with its time complexity. ...
🌐
Medium
medium.com › @mohitarvindjoshi › the-truth-about-python-lists-nobody-told-you-7b2a5ae3e4e2
The Truth About Python Lists Nobody Told You | by Mohit Joshi | Medium
June 7, 2025 - If yes, it just adds the pointer — time complexity is O(1). If no, Python creates a bigger block of memory, copies the references, and updates the internal pointer — this takes O(N) time.
🌐
GeeksforGeeks
geeksforgeeks.org › python › space-complexity-of-list-operations-in-python
Space Complexity of List Operations in Python - GeeksforGeeks
July 23, 2025 - This is because extend() iterates ... after the insertion point must be shifted. ... Space Complexity: O(n), where n is the number of elements in the list....
🌐
Analytics Vidhya
analyticsvidhya.com › home › how to merge two lists in python?
How To Merge Two Lists in Python?
January 23, 2024 - This is because a new list is generated to store the merged elements. The extend() method in Python is used to extend a list by appending elements from another iterable (e.g., a list, tuple, or any other iterable).
🌐
Enki
enki.com › post › list-and-dict-in-python
Enki | Blog - Difference Between list and dict in Python
The Python Wiki offers an explanatory guide on dictionary keys and hash functions. When it comes to performance, lists and dictionaries have distinct characteristics suitable for different scenarios: Lists offer constant time complexity ( O(1) ) for appending elements, provided there’s ...
🌐
Code Like A Girl
code.likeagirl.io › list-comprehension-over-for-loops-in-python-time-complexity-36c7b647dec7
List comprehension over For Loops in Python-Time Complexity | by Python Code Nemesis | Code Like A Girl
March 7, 2023 - This code produces the same result ... result.append() and uses the optimized C code under the hood of list comprehension. List comprehension can improve efficiency when dealing with large datasets or complex operations by leveraging the optimized C code under the hood. The code above demonstrates how using list comprehension instead of a traditional for loop can improve the efficiency of filtering and transforming a list of numbers. Here’s an example code that uses the time library in ...
🌐
Unstop
unstop.com › home › blog › python list append() | syntax & working (with example codes)
Python List append() | Syntax & Working (With Example Codes)
February 4, 2025 - Time Complexity: The time complexity of the append() method is O(1), which means it operates in constant time. In most cases, appending an element takes the same amount of time, regardless of the size of the list.
🌐
GeeksforGeeks
geeksforgeeks.org › python › difference-between-and-append-in-python
Difference Between '+' and 'append' in Python - GeeksforGeeks
July 12, 2025 - .append() runs in amortized O(1) time, meaning that it’s generally very efficient to add items to the end of a list one by one.