Yes, it is O(1) to pop the last element of a Python list, and O(N) to pop an arbitrary element (since the whole rest of the list has to be shifted).
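A quick way to see the difference yourself is with `timeit`; absolute numbers vary by machine, but `pop(0)` should come out far slower than `pop()`:

```python
from timeit import timeit

# pop() from the end is O(1); pop(0) must shift every remaining element, so it is O(n)
t_end = timeit('li.pop()', setup='li = list(range(10_000))', number=5_000)
t_front = timeit('li.pop(0)', setup='li = list(range(10_000))', number=5_000)

print(f'pop():  {t_end:.5f}s')
print(f'pop(0): {t_front:.5f}s')  # expect this to be much larger
```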

Here's a great article on how Python lists are stored and manipulated: An Introduction to Python Lists.

Answer from Dan Lenski on Stack Overflow
🌐
Quora
quora.com › What-is-the-time-complexity-of-the-pop-function-in-a-Python-list
What is the time complexity of the pop() function in a Python list? - Quora
Answer: Depends upon whether you pop from the end (which is the default when you pass no argument), or pop a specific position (which you can do, by passing an index number). Pop from the end is O(1) of course, but popping a specific position ...
🌐
Medium
medium.com › @shuangzizuobh2 › how-well-do-you-code-python-9bec36bbc322
How slow is python list.pop(0) ?. An empirical study on python list.pop… | by Hj | Medium
September 27, 2023 - How slow is python list.pop(0)? An empirical study on python list.pop complexity. TL;DR: Python list.pop(k) has a time complexity of O(n). Be cautious when using a Python list as a queue structure. Use …
🌐
Medium
thinklikeacto.medium.com › time-complexity-of-popping-elements-from-list-in-python-215ad3d9c048
Time complexity of popping elements from list in Python! | by Naresh Thakur | Medium
January 23, 2020 - By doing a.pop() with no arguments it will remove and return the last element, which has O(1) time complexity, because it just removes the last element and does not need to re-arrange the other elements.
🌐
GoLinuxCloud
golinuxcloud.com › home › python › python list pop() function examples [beginners]
Python list pop() function examples [Beginners] | GoLinuxCloud
January 9, 2024 - Nevertheless, those memory allocations are infrequent, and the time complexity for the append operation is referred to as amortized O(1) time. In the following table, the timings for different operations on a list of size 10,000 are shown; you ...
🌐
DataCamp
datacamp.com › tutorial › python-pop
How to Use the Python pop() Method | DataCamp
July 31, 2024 - The pop() method default time complexity is O(1), which is constant and efficient. Yes, the pop() method can also be applied to sets and bytearrays in Python. For sets, set.pop() removes and returns an arbitrary element, as sets are unordered.
🌐
Bradfield CS
bradfieldcs.com › algos › analysis › performance-of-python-types
Performance of Python Types
However, the expansion rate is cleverly chosen to be three times the previous size of the array; when we spread the expansion cost over each additional append afforded by this extra space, the cost per append is O(1) on an amortized basis. ... Popping from a Python list is typically performed from the end but, by passing an index, you can pop from a specific position.
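You can observe that over-allocation directly with `sys.getsizeof`; the exact growth pattern is a CPython implementation detail and varies by version, but the allocated size only jumps at occasional appends, so most appends do no copying at all:

```python
import sys

li = []
sizes = []
for i in range(32):
    li.append(i)
    sizes.append(sys.getsizeof(li))

# the reported size stays flat between reallocations:
# many consecutive appends reuse already-allocated slots
print(sizes)
```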
Top answer
1 of 2
3

Your algorithm genuinely does take O(n) time and the "pop in reverse order" algorithm genuinely does take O(n²) time. However, LeetCode isn't reporting that your time complexity is better than 89% of submissions; it is reporting that your actual running time is better than 89% of all submissions. The actual running time depends on what inputs the algorithm is tested with: not just the sizes, but also the number of duplicates.

It also depends on how the running times across multiple test cases are averaged: if most of the test cases use small inputs, where the quadratic solution is faster, then the quadratic solution may come out ahead overall even though its time complexity is higher. @Heap Overflow also points out in the comments that the overhead of LeetCode's judging system is proportionally large and quite variable compared to the time the algorithms themselves take, so the discrepancy could simply be due to random variation in that overhead.

To shed some light on this, I measured running times using timeit. The graph below shows my results; the shapes are exactly what you'd expect given the time complexities, and the crossover point is somewhere between 8000 < n < 9000 on my machine. This is based on sorted lists where each distinct element appears on average twice. The code I used to generate the times is given below.

Timing code:

def linear_solution(nums):
    # two-pointer, in-place dedup of a sorted list: O(n)
    left, right = 0, 0
    while right < len(nums)-1:
        if nums[right] != nums[right+1]:
            nums[left+1] = nums[right+1]
            left += 1
        right += 1
    return left + 1

def quadratic_solution(nums):
    # pop duplicates while scanning in reverse: O(n^2) worst case
    prev_obj = []
    for i in range(len(nums)-1, -1, -1):
        if prev_obj == nums[i]:
            nums.pop(i)
        prev_obj = nums[i]
    return len(nums)

from random import randint
from timeit import timeit

def gen_list(n):
    max_n = n // 2
    return sorted(randint(0, max_n) for i in range(n))

# I used a step size of 1000 up to 15000, then a step size of 5000 up to 50000
step = 1000
max_n = 15000
reps = 100

print('n', 'linear time (ms)', 'quadratic time (ms)', sep='\t')
for n in range(step, max_n+1, step):
    # generate input lists
    lsts1 = [ gen_list(n) for i in range(reps) ]
    # copy the lists by value, since the algorithms will mutate them
    lsts2 = [ list(g) for g in lsts1 ]
    # use iterators to supply the input lists one-by-one to timeit
    iter1 = iter(lsts1)
    iter2 = iter(lsts2)
    t1 = timeit(lambda: linear_solution(next(iter1)), number=reps)
    t2 = timeit(lambda: quadratic_solution(next(iter2)), number=reps)
    # timeit reports the total time in seconds across all reps
    print(n, 1000*t1/reps, 1000*t2/reps, sep='\t')

The conclusion is that your algorithm is indeed faster than the quadratic solution for large enough inputs, but the inputs LeetCode is using to measure running times are not "large enough" to overcome the variation in the judging overhead, and the fact that the average includes times measured on smaller inputs where the quadratic algorithm is faster.

2 of 2
-3

Just because the solution is not O(n), you can't assume it to be O(n²).

It doesn't quite become O(n²), because he uses pop in reverse order, which decreases the cost of each pop: pop(i) shifts the elements after index i, and by iterating in reverse he keeps shrinking the number of elements at the back. Using pop(i) in forward order would cost more than in reverse. Try the same solution in non-reverse order, run it a few times to be sure, and you'll see.

Anyway, regarding why his solution is faster: you have an if condition with a lot of variables, while he uses only one variable, prev_obj; the reverse order makes it possible to manage with just one. So there are more basic operations in your case, and with the same O(n) complexity, each of your n loop iterations is longer than his.

Just look at your count variable: in every iteration its value is left + 1, so you could simply return left + 1. Removing it alone would save the n count = count + 1 operations you currently do.

I just posted this solution and it is faster than 76% of submissions:

class Solution:
    def removeDuplicates(self, nums: List[int]) -> int:
        a = sorted(set(nums), key=lambda item: item)
        for i, v in enumerate(a):
            nums[i] = v
        return len(a)

and this one is faster than 90% of submissions:

class Solution:
    def removeDuplicates(self, nums: List[int]) -> int:
        a = {k: 1 for k in nums}  # <--- this is O(n)
        for i, v in enumerate(a.keys()):  # <--- another O(n), but over the unique keys only, so O(m)
            nums[i] = v
        return len(a)

You could say both of them are more than O(n) if you look at the for loop. But since we are working with duplicate members, I am looping over only the reduced (unique) members, while your code loops over all members. If the time required to build that unique set/dict is less than the time you spend looping over those extra members and checking the if conditions, then my solution can be faster.
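For what it's worth, the dict trick works because regular dicts preserve insertion order since Python 3.7; `dict.fromkeys` is an equivalent, arguably cleaner spelling of the same order-preserving dedup:

```python
nums = [1, 1, 2, 2, 3, 3, 3]

# dict keys are unique and, since Python 3.7, keep insertion order,
# so this dedups while preserving the original (sorted) order
unique = list(dict.fromkeys(nums))
print(unique)  # [1, 2, 3]
```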

🌐
Quora
quora.com › What-are-the-time-complexity-considerations-of-lists-in-Python
What are the time complexity considerations of lists in Python? - Quora
Answer: In a normal list on average: * Append : O(1) * Extend : O(k) - k is the length of the extension * Index : O(1) * Slice : O(k) * Sort : O(n log n) - n is the length of the list * Len : O(1) * Pop : O(1) - pop from end * Insert : O(n) ...
🌐
Unstop
unstop.com › home › blog › python pop() function | list & dictionaries (+code examples)
Python pop() Function | List & Dictionaries (+Code Examples)
November 11, 2024 - Python pop() operates in O(1) time complexity when used to remove the last element (i.e., when no index is specified or the index is -1).
🌐
Quora
quora.com › What-is-the-time-complexity-of-the-push-and-pop-operation-of-an-array-based-stack
What is the time complexity of the push and pop operation of an array-based stack? - Quora
So, irrespective of number of elements already in the array, this operation will always take same time. So it will be O(1). Pop : Every time first element of array is removed, all remaining n-1 elements are moved up. ...
🌐
Finxter
blog.finxter.com › home › learn python blog › python list pop()
Python List pop() - Be on the Right Side of Change
June 19, 2021 - The popped list contains the last five elements. The original list has only one element left. The time complexity of the pop() method is constant O(1).
🌐
Reddit
reddit.com › r/learnpython › i was surprised at how slow list.pop() is! and list.remove() is even many times slower
r/learnpython on Reddit: I was surprised at how slow list.pop() is! And list.remove() is even many times slower
August 22, 2021 -

I know there is a list.clear(), I'm just sharing that I didn't expect that using list.pop() and list.remove() specifically could slow down the program that much.

li = list(range(500000))

Creating a list is quick.

So we are going to test out pop/remove specific values. For the purpose of this "benchmark", we are going to remove all elements from the list:

while (li):
    li.pop(0)

It took 74.735 seconds to pop all the elements! It's ridiculously long.
I KNOW it would have been much faster if I had used li.pop() without an index, or maybe a filter, or a list comprehension with a condition, or whatever.
But that's exactly what I'm trying to show: how slow it is to remove specific list items using the pop and remove methods.

And li.remove(), which always requires a specified value to remove, is even worse than pop!

for num in li:
    li.remove(num)

This one took me 303.268 seconds to complete. How crazy is that.

I've been having fun with abstract data structures. Implemented linked lists and a queues running on linked lists.

And for the sake of interest, I decided to compare the performance of the queue based on the linked list against one based on the usual Python list. I was surprised: my linked-list queue dequeued 500,000 elements in 0.5 seconds, while the Python-list queue took 75 seconds.

Top answer
1 of 5
46
This is simply how lists work, nothing surprising here. Removing the first element requires moving all the elements after it one step to the left to fill that gap, which makes this operation run in linear time. It means that clearing the list this way is O(n²), so it unsurprisingly takes a long time; bubble sorting the (shuffled) list could even be faster. This is why we think of alternatives when solving problems, such as using collections.deque, reversing the list (O(n) instead of O(n²)), or just using pop() from the end if it works.
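A quick sketch of the deque alternative mentioned above, using the same 500,000-element drain as the post; popleft() is O(1), so the whole loop is linear rather than quadratic:

```python
from collections import deque

d = deque(range(500_000))
while d:
    d.popleft()  # O(1), unlike list.pop(0) which shifts all remaining elements

print(len(d))  # 0 -- finishes in well under a second
```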
2 of 5
13
Just thought I'd mention that, on top of being the slowest option presented here, for num in li: li.remove(num) is also broken: it skips every other value in the list, and the result is essentially half of the original list, not an empty one. The reason for this already came up in the other answers: the values shift in the list when you remove one, but the loop itself doesn't take this into account. You can think of the loop as having a hidden index variable that it updates on every iteration; after a deletion, the value that was going to be next now sits where the deleted one was, the loop index still goes up by one, and so the next value removed is the one after it.

EDIT: It's easier to understand visually, I guess.

idx | 0 | 1 | 2 | 3 | 4 |
val | 1 | 2 | 3 | 4 | 5 |

Loop index: 0, removing item at index 0

idx | 0 | 1 | 2 | 3 | 4 |
val | 2 | 3 | 4 | 5 | ... |

Loop index: 1, removing item at index 1

idx | 0 | 1 | 2 | 3 | 4 |
val | 2 | 4 | 5 | ... | ... |

Loop index: 2, removing item at index 2

idx | 0 | 1 | 2 | 3 | 4 |
val | 2 | 4 | ... | ... | ... |

EDIT #2: If you need to empty a list in a real project, the best options are to either reassign an empty list or use list.clear(), which is way faster than using list.pop() in a loop.
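A small demo of both points above, the skipping bug and list.clear() as the idiomatic way to empty a list:

```python
li = list(range(10))
for num in li:
    li.remove(num)  # mutating while iterating: the loop skips every other element

print(li)  # [1, 3, 5, 7, 9] -- half the values survive

li2 = list(range(10))
li2.clear()  # clears in one O(n) pass, no repeated shifting
print(li2)  # []
```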
🌐
DigitalOcean
digitalocean.com › community › tutorials › pop-python
How to Use `.pop()` in Python Lists and Dictionaries | DigitalOcean
July 24, 2025 - The list.pop(index=-1) method accepts an optional integer index to remove an element at the specified position. If you omit the index, it defaults to -1, removing the last item in constant time (O(1)). Specifying any other index triggers shifting of all subsequent elements, resulting in linear ...
🌐
GeeksforGeeks
geeksforgeeks.org › python-remove-rear-element-from-list
Python - Remove rear element from list - GeeksforGeeks
April 6, 2023 - Time Complexity: O(n), The list.remove() method has a time complexity of O(n) in the worst case, where n is the number of elements in the list. This is because it needs to search through the list to find the element to remove.
🌐
Finxter
blog.finxter.com › python-set-pop
Python Set pop() – Be on the Right Side of Change
s = {'Alice', 'Bob', 'Carl', 'Liz', … } … print(s) # {'Ann', 'Carl'} … The runtime complexity of the set.pop() function on a set with n elements is O(1)....
🌐
GeeksforGeeks
geeksforgeeks.org › space-complexity-of-list-operations-in-python
Space Complexity of List Operations in Python - GeeksforGeeks
March 19, 2025 - Space Complexity: O(n), where n is the number of elements in the list. The operation requires shifting the remaining elements after the target is removed. pop() method removes and returns the element at the specified position.