Each time you remove an element from the list, the elements "shift down" in position. This takes time, as you are essentially recreating (large portions of) the list after each removal. When you create a new list, you build it once. A more Pythonic way to do this is to use a list comprehension:

clw = [word for word in words if len(word) == length]

Answer from mopslik on reddit.com
GeeksforGeeks
geeksforgeeks.org › python › python-remove-rear-element-from-list
Python - Remove rear element from list - GeeksforGeeks
April 6, 2023 - Time complexity: O(1) - The pop() method takes constant time to remove the last element from the list. Auxiliary space: O(1) - No extra space is used in this code. Method #2: Using del list[-1] This is just the alternate method to perform the ...
Python
wiki.python.org › moin › TimeComplexity
TimeComplexity - Python Wiki
The average case for an average value of k is popping the element in the middle of the list, which takes O(n/2) = O(n) operations. [3] = For these operations, the worst case n is the maximum size the container ever achieved, rather than just the current size. For example, if N objects are added ...
Discussions

algorithm - Why does Python take O(n) time to remove the first element from a list? - Stack Overflow
The Python wiki page on time complexity says that deleting an item takes O(n) time. The description of the deque class in the documentation of the collections module says that "list objects [...] i...
🌐 stackoverflow.com
python 2.7 set and list remove time complexity - Stack Overflow
Wondering the time complexity of remove of list, and remove of set. ... Removal of set is O(1). I just studied some discussion, but never prove it. If anyone could shed some lights, it will be great. Especially how set implements with O(1) removal? Using Python 2.7.
🌐 stackoverflow.com
optimization - How does Python remove elements from a list so quickly? - Stack Overflow
I thought that the time needed for creating referenced objects is negligible. But now I've tried to first initialize the whole list with zeroes, and than assign them with 1s. It's considerably faster(about 0.3 sec). Although, still noticeably slower than deletion. It looks like the answer can be more complex than I expected. Not just tricky deletion logic, but also some Python ...
🌐 stackoverflow.com
What is the time complexity of popping elements from list in Python? - Stack Overflow
I was thinking of a classic linked ... However, python's list seems to basically be an array of pointers... 2018-08-14T13:37:04.077Z+00:00 ... Any time you put in a value the time complexity of that operation is O(n - k). For example, if you have a list of 9 items than removing from the end of the list is 9 operations and removing from the beginning of the list is 1 operations (deleting the 0th index ...
🌐 stackoverflow.com
June 3, 2016
Reddit
reddit.com › r/learnpython › why is removing elements from a list so slow, and is there a faster way?
r/learnpython on Reddit: Why is removing elements from a list so slow, and is there a faster way?
April 21, 2024 -

I was trying to write a simple application, which is supposed to filter a list of words down to the words of a certain length. For that I could either remove the words of the wrong length, or create a new list containing only the words of the correct length.

I had a list of around 58000 words, and wanted to filter it down to the 6-letter words, of which there are around 6900.

with open('words.txt') as f:
    words = f.readlines()
    for i in range(len(words)):
        words[i] = words[i].strip()

length = int(input("Desired word length "))

for i in reversed(words):
    if len(i) != length:
        words.remove(i)

This took 22 seconds.

Another way is to just create a new list with words of the correct length. I did this as follows:

with open('words.txt') as f:
    words = f.readlines()
    for i in range(len(words)):
        words[i] = words[i].strip()

length = int(input("Desired word length "))
clw = []

for i in words:
    if len(i) == length:
        clw.append(i)

This only took 0.03 seconds. How can it be that creating a list of 6900 words takes 0.03 seconds, but removing 51100 words takes 22? It's only 7 times as many words, but takes 700 times as long. And is there a better and faster way to quickly remove list elements?
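A minimal sketch of the two approaches from the post, side by side on a small sample list (the sample words are illustrative; `words` in the post comes from `words.txt`). It shows why the comprehension is O(n) while the remove-in-a-loop version is O(n²): every `remove()` call scans the list from the front and then shifts all later elements down one slot.

```python
# Sample data standing in for words.txt.
words = ["apple", "banana", "cherry", "orange", "kiwi", "grape"]
length = 6

# Fast, O(n): one pass, each kept word is appended once.
clw = [word for word in words if len(word) == length]
print(clw)  # → ['banana', 'cherry', 'orange']

# Slow for large lists, O(n^2): every remove() scans from the
# front and then shifts all later elements down one position.
slow = list(words)
for w in reversed(list(slow)):
    if len(w) != length:
        slow.remove(w)

assert slow == clw
```

With 58000 words and 51100 removals, the quadratic version performs on the order of 51100 scans-plus-shifts over a shrinking list, which is where the 22 seconds go.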

LabEx
labex.io › tutorials › python-what-is-the-time-complexity-of-list-append-and-remove-operations-in-python-397728
What is the time complexity of list append and remove operations in Python | LabEx
In this tutorial, we will dive into the time complexity of two fundamental list operations in Python: append and remove. Understanding the time complexity of these operations is crucial for writing efficient Python code and optimizing the performance of your applications.
Top answer
1 of 2
7

I decided to turn my set of comments into a proper answer.

First off, let's clarify what's happening when you do:

>>> l = [i for i in range(100000000)]

Here three things are happening:

  1. 100000000 int objects are being created. Creating an object in CPython requires allocating memory and placing content into that memory, and this takes time.
  2. You are running a loop. This affects performance considerably: [i for i in range(...)] is much slower than list(range(...)).
  3. The large list is being created on the fly.

Reading your question, it seems that you are considering only the last point, ignoring the others. Therefore, your timings are inaccurate: creating a large list does not take 3 seconds, it takes a fraction of those 3 seconds.

How large this fraction is, is an interesting question that is difficult to answer using only Python code, but we can still try. Specifically, I'd try with the following statement:

>>> [None] * 100000000

Here CPython does not have to create a large number of objects (there is only the single None object), does not have to run a loop, and can allocate the memory for the list once (because it knows the size in advance).

Timings are self-explanatory:

$ python3 -m timeit "list(range(100000000))"
10 loops, best of 3: 2.26 sec per loop
$ python3 -m timeit "[None] * 100000000"
10 loops, best of 3: 375 msec per loop

Now, back to your question: how about item deletion?

$ python3 -m timeit --setup "l = [None] * 100000000" "del l[0]"
10 loops, best of 3: 89 msec per loop
$ python3 -m timeit --setup "l = [None] * 100000000" "del l[100000000 // 4]"
10 loops, best of 3: 66.5 msec per loop
$ python3 -m timeit --setup "l = [None] * 100000000" "del l[100000000 // 2]"
10 loops, best of 3: 45.3 msec per loop

These numbers tell us something important. Note that 2 × 45.3 ≈ 89. Also 66.5 × 4 / 3 ≈ 89.

These numbers are telling exactly what linear complexity is about. If a function has time complexity kn (which is O(n)), it means that if we double the input, we double time; if we increase the input size by 4/3, the time increases by 4/3.

And this is what's happening here. In CPython, our list of 100000000 items is a contiguous memory area containing pointers to Python objects:

l = |ptr0|ptr1|ptr2|...|ptr99999999|

When we run del l[0] we move ptr1 one slot to the left, overwriting ptr0, and the same for every other element:

l = |ptr0|ptr1|ptr2|...|ptr99999999|
     ^^^^
         ` item to delete

l = |ptr1|ptr2|...|ptr99999999|

Therefore, when we run del l[0] we have to move 99999999 pointers one position to the left. This is different from del l[100000000 // 2], which requires moving only half the pointers (the pointers in the first half don't need to be moved). "Moving half the pointers" equals "performing half the operations", which roughly means "running in half the time" (this is not always true, but the timings say that it is true in this case).
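The shifting described above can be observed directly. This small sketch (list sizes chosen for illustration, not taken from the answer; absolute timings vary by machine) checks that deleting index 0 slides every later element down one slot, and compares del at the front of a large list against del at the back:

```python
import time

# Deleting index 0 shifts every later pointer one slot to the left.
l = [10, 20, 30, 40]
del l[0]
assert l == [20, 30, 40]  # old l[1] is now l[0], and so on

# Rough timing: del at the front must move ~n pointers,
# del at the back moves none.
n = 1_000_000
front = [None] * n
t0 = time.perf_counter()
del front[0]
front_time = time.perf_counter() - t0

back = [None] * n
t0 = time.perf_counter()
del back[-1]
back_time = time.perf_counter() - t0

print(f"del l[0]:  {front_time:.6f}s")
print(f"del l[-1]: {back_time:.6f}s")  # typically far smaller
```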

2 of 2
3

I'm not sure why you think it should take 3 seconds to delete a single element.

Your initial time is for 100000000 individual append operations; each of those takes only a tiny fraction of a second, and your single delete operation takes a similarly tiny amount of time.

In any case, as Bartosz points out, O(n) complexity doesn't mean that all operations take the same length of time, it means that the length of time is proportional to the length of the list.

Quora
quora.com › Why-is-the-time-complexity-of-deleting-an-item-from-an-array-linear-O-n-and-not-constant
Why is the time complexity of deleting an item from an array linear - O(n) and not constant? - Quora
Answer (1 of 5): An array is a rigid block. “Deleting” something out of it leaves a gap. When an array has gaps in it, this causes huge problems. Indexing is compromised, traversal is no longer easy, space wastage is increased, etc. Therefore, when something gets deleted out of an array, it’s p...
Medium
medium.com › @ivanmarkeyev › understanding-python-list-operations-a-big-o-complexity-guide-49be9c00afb4
Understanding Python List Operations: A Big O Complexity Guide | by Ivan Markeev | Medium
June 4, 2023 - In this article, we will explore the Big O complexity of common list operations, helping you make informed decisions about algorithm design and performance optimizations. Accessing an element in a Python list by its index is an efficient operation with constant time complexity.
Finxter
blog.finxter.com › what-is-the-difference-between-remove-pop-and-del-in-lists-in-python
What is The Difference Between remove(), pop() and del in Lists in Python? – Be on the Right Side of Change
November 1, 2021 - If you specify an element with the wrong index, Python raises an IndexError: li = [3, 5, 7, 2, 6, 4, 8]; del li[8] raises IndexError: list assignment index out of range. The computational complexity of removing the element at index i from a list of n elements using del is O(n-i). Case ...
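A brief sketch of the del behavior this snippet describes, using the list values from the snippet: a valid index removes in place (elements after index i shift left, which is where the O(n-i) cost comes from), and an out-of-range index raises IndexError.

```python
li = [3, 5, 7, 2, 6, 4, 8]

# del with a valid index removes in place; every element after
# index i shifts one slot left -- the O(n - i) cost.
del li[2]
assert li == [3, 5, 2, 6, 4, 8]

# del with an out-of-range index raises IndexError.
try:
    del li[8]
except IndexError as e:
    print(e)  # list assignment index out of range
```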
GeeksforGeeks
geeksforgeeks.org › python › python-front-and-rear-range-deletion-in-a-list
Python | Front and rear range deletion in a list - GeeksforGeeks
July 11, 2025 - Time complexity: O(n), because it takes linear time to delete the elements from the list.
Medium
thinklikeacto.medium.com › time-complexity-of-popping-elements-from-list-in-python-215ad3d9c048
Time complexity of popping elements from list in Python! | by Naresh Thakur | Medium
January 23, 2020 - Consider we have the following list. ... By doing a.pop() with no arguments it will remove and return the last element, which has O(1) time complexity, because it just removes the last element and does not need to re-arrange the others. Python's list is implemented as an array of pointers.
Quora
quora.com › What-is-the-time-complexity-of-removing-an-element-from-an-unsorted-linked-list
What is the time complexity of removing an element from an unsorted linked list? - Quora
However,accessing element of a single (or double) linked list is different. We need to traverse i-1 nodes to access ith node. Here, best case is accessing 1st element which is O(1) while if the element is last node then you need to traverses n nodes. Thus, the worst case is O(n). Average case (n+1)/2 (we need to prove statistically which I am skipping) thus time complexity of accessing any node element O(n). ... RelatedWhat is the time complexity of deletion of an existing node from the first position from a single link list?
Stack Overflow
stackoverflow.com › questions › 73710891 › deleting-the-first-element-of-a-python-list-in-time-complexity-o1-without-us
Deleting the first element of a python list in time complexity O(1) , without using predefined list functions - Stack Overflow
Say I have [1,3,5,6]. I want to dequeue the list (make it [3,5,6]), and add a space in the end ([3,5,6,None]), in order to enable enqueue later. I want to do all of this in constant time O(1). ... Sounds like you’re trying to implement a double ended queue. You could make your enqueue/dequeue operations work in O(1) if you use a Python list like a ring buffer (so you wouldn’t be removing elements, you’d instead be changing references/altering where the “beginning” and “end” are.
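The ring-buffer idea this answer describes is essentially what `collections.deque` already provides; a minimal sketch of O(1) dequeue/enqueue with it (the sample values come from the question, the `maxlen` ring is an added illustration):

```python
from collections import deque

# deque supports O(1) pops and appends at BOTH ends --
# no element shifting, unlike list.pop(0).
q = deque([1, 3, 5, 6])

first = q.popleft()   # O(1) dequeue from the front
assert first == 1
assert list(q) == [3, 5, 6]

q.append(7)           # O(1) enqueue at the back
assert list(q) == [3, 5, 6, 7]

# With maxlen, deque behaves as a fixed-size ring buffer:
# appending to a full deque silently discards the oldest element.
ring = deque(maxlen=3)
for x in (1, 2, 3, 4):
    ring.append(x)
assert list(ring) == [2, 3, 4]
```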
PrepBytes
prepbytes.com › home › python › remove function in python
Remove() Function in Python
November 1, 2023 - If the specified item is not found in the list, the ‘remove’ function raises a ValueError. In Python, the time complexity of the ‘remove’ function is O(n), where n is the length of the list.
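A short sketch of both behaviors described in the snippet (sample values illustrative): remove() deletes the first matching value, scanning and then shifting in O(n), and raises ValueError when the value is absent.

```python
nums = [4, 1, 4, 2]

# remove() searches from the front and deletes the FIRST match;
# the scan plus the shift of later elements makes it O(n).
nums.remove(4)
assert nums == [1, 4, 2]

# A missing value raises ValueError.
try:
    nums.remove(99)
except ValueError as e:
    print(e)  # list.remove(x): x not in list
```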
Stack Overflow
stackoverflow.com › questions › 49732932 › python-list-del-insert-no-of-assignments-and-time-complexity
Python list - del, insert, no. of assignments and time complexity - Stack Overflow
If I have a list containing 100000 elements and I delete the first, does python need 99999 assignments for the shifts or 100000? ... Do you really care if it's 79999 or 80000? If you want to be accurate, the complexity is actually O(n-i+1), where i is the index you're deleting/inserting. The thing is: Nobody cares. It's too specific. The whole point of time complexity analysis is to find out how fast the algorithm is with large numbers.
Top answer
1 of 4
10

As you correctly noticed, the CPython implementation of list.clear is O(n). The code iterates over the elements in order to decrease the reference count of each one, without a way to avoid it. There is no doubt that it is an O(n) operation and, given a large enough list, you can measure the time spent in clear() as function of list size:

import time

for size in 1_000_000, 10_000_000, 100_000_000, 1_000_000_000:
    l = [None] * size
    t0 = time.time()
    l.clear()
    t1 = time.time()
    print(size, t1 - t0)

The output shows linear complexity; on my system with Python 3.7 it prints the following:

1000000 0.0023756027221679688
10000000 0.02452826499938965
100000000 0.23625731468200684
1000000000 2.31496524810791

The time per element is of course tiny because the loop is coded in C and each iteration does very little work. But, as the above measurement shows, even a minuscule per-element factor eventually adds up. A small per-element constant is not a reason to ignore the cost of an operation, or the same would apply to the loop that shifts the list elements in l.insert(0, ...), which is also very efficient - and yet few would claim insertion at the beginning to be O(1). (And clear potentially does more work because a decref can run an arbitrary chain of destructors for an object whose reference count actually reaches zero.)

On a philosophical level, one could argue that costs of memory management should be ignored when assessing complexity because otherwise it would be impossible to analyze anything with certainty, as any operation could trigger a GC. This argument has merit; GC does come occasionally and unpredictably, and its cost can be considered amortized across all allocations. In a similar vein complexity analysis tends to ignore the complexity of malloc because the parameters it depends on (like memory fragmentation) are typically not directly related to allocation size or even to the number of already allocated blocks. However, in case of list.clear there is only one allocated block, no GC is triggered, and the code is still visiting each and every list element. Even with the assumption of O(1) malloc and amortized O(1) GC, list.clear still takes the time proportional to the number of elements in the list.

The article linked from the question is about Python the language and doesn't mention a particular implementation. Python implementations that don't use reference counting, such as Jython or PyPy, are likely to have true O(1) list.clear, and for them the claim from the article would be entirely correct. So, when explaining the Python list on a conceptual level, it is not wrong to say that clearing the list is O(1) - after all, all the object references are in a contiguous array, and you free it only once. This is the point your blog post probably should make, and that is what the linked article is trying to say. Taking the cost of reference counting into account too early might confuse your readers and give them completely wrong ideas about Python's lists (e.g. they could imagine that they are implemented as linked lists).

Finally, at some point one must accept that memory management strategy does change complexity of some operations. For example, destroying a linked list in C++ is O(n) from the perspective of the caller; discarding it in Java or Go would be O(1). And not in the trivial sense of a garbage-collected language just postponing the same work for later - it is quite possible that a moving collector will only traverse reachable objects and will indeed never visit the elements of the discarded linked list. Reference counting makes discarding large containers algorithmically similar to manual collection, and GC can remove that. While CPython's list.clear has to touch every element to avoid a memory leak, it is quite possible that PyPy's garbage collector never needs to do anything of the sort, and thus has a true O(1) list.clear.

2 of 4
4

It's O(1) neglecting memory management. It's not quite right to say it's O(N) accounting for memory management, because accounting for memory management is complicated.

Most of the time, for most purposes, we treat the costs of memory management separately from the costs of the operations that triggered it. Otherwise, just about everything you could possibly do becomes O(who even knows), because almost any operation could trigger a garbage collection pass or an expensive destructor or something. Heck, even in languages like C with "manual" memory management, there's no guarantee that any particular malloc or free call will be fast.

There's an argument to be made that refcounting operations should be treated differently. After all, list.clear explicitly performs a number of Py_XDECREF operations equal to the list's length, and even if no objects are deallocated or finalized as a result, the refcounting itself will necessarily take time proportional to the length of the list.

If you count the Py_XDECREF operations list.clear performs explicitly, but ignore any destructors or other code that might be triggered by the refcounting operations, and you assume PyMem_FREE is constant time, then list.clear is O(N), where N is the original length of the list. If you discount all memory management overhead, including the explicit Py_XDECREF operations, list.clear is O(1). If you count all memory management costs, then the runtime of list.clear cannot be asymptotically bounded by any function of the list's length.