Yes, it is O(1) to pop the last element of a Python list, and O(n) to pop an arbitrary element (since every element after it has to be shifted down by one).

Here's a great article on how Python lists are stored and manipulated: An Introduction to Python Lists.

Answer from Dan Lenski on Stack Overflow
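To see this behaviour empirically, here is a small sketch (standard library only; the helper names are my own) that drains a list from the back versus the front. Draining from the front should slow down dramatically as n grows:

```python
from timeit import timeit

def pop_last(n):
    """Drain a list of n items from the back: each pop() is O(1)."""
    lst = list(range(n))
    while lst:
        lst.pop()          # removes the last element, no shifting

def pop_first(n):
    """Drain a list of n items from the front: each pop(0) is O(n)."""
    lst = list(range(n))
    while lst:
        lst.pop(0)         # shifts every remaining element left by one

for n in (1_000, 5_000, 10_000):
    t_last = timeit(lambda: pop_last(n), number=5)
    t_first = timeit(lambda: pop_first(n), number=5)
    print(f"n={n}: pop() total {t_last:.4f}s, pop(0) total {t_first:.4f}s")
```

On a typical CPython build the pop(0) column grows roughly quadratically with n, while the pop() column grows roughly linearly.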
Discussions

algorithm - Why is the big O of pop() different from pop(0) in python - Stack Overflow
Shouldn't they both be O(1), as popping an element from any location in a Python list involves destroying that list and creating one at a new memory location?
What is the time complexity of pop() for the set in Python? - Stack Overflow
I know that popping the last element of a list takes O(1). And after reading the post "What is the time complexity of popping elements from list in Python?" I notice that if we pop an arbitrary number...
algorithm - Python list.pop(i) time complexity? - Stack Overflow
There are 161 tests, but maybe ... where complexity barely matters. Maybe the test cases have most of the numbers to be removed at the back of the list. I think people take the "percentage faster than x" stuff from LeetCode too seriously. ... @MichaelButscher: It's not "a single machine code instruction" (Python lists are much more complicated than that). But otherwise, yes, O(n) can hide huge constant multipliers; if the pop(i) solution ...
Is popleft() faster than pop(0)? (r/learnpython, May 14, 2020)

Yes. list.pop(0) is O(n), and deque.popleft() is O(1).
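A minimal sketch of the deque alternative (standard library only), showing the same front-removal done both ways:

```python
from collections import deque

queue = deque([1, 2, 3])
first = queue.popleft()        # O(1): deques support fast removal at both ends
print(first)                   # 1

# The list equivalent must shift the remaining elements on every call:
lst = [1, 2, 3]
first_from_list = lst.pop(0)   # O(n): elements 2 and 3 shift left
print(first_from_list)         # 1
```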
TimeComplexity - Python Wiki (wiki.python.org)
Note that there is a fast-path ... complexity, but it can significantly affect the constant factors: how quickly a typical program finishes. [1] = These operations rely on the "Amortized" part of "Amortized Worst Case". Individual actions may take surprisingly long, depending on the history of the container. [2] = Popping the intermediate ...
What is the time complexity of the pop() function in a Python list? - Quora
It depends on whether you pop from the end (the default when you pass no argument) or pop a specific position (by passing an index). Popping from the end is O(1), of course, but popping a specific position ...
How slow is python list.pop(0)? An empirical study on python list.pop… | by Hj | Medium (September 27, 2023)
Python list.pop(k) has a time complexity of O(n). Be cautious when using a Python list as a queue; use deque instead.
Time complexity of popping elements from list in Python! | by Naresh Thakur | Medium (January 23, 2020)
Calling a.pop() with no argument removes and returns the last element in O(1) time, because it just removes the last element and does not need to rearrange the others.
Python list pop() function examples [Beginners] | GoLinuxCloud (January 9, 2024)
Nevertheless, those memory allocations are infrequent, and the time complexity for the append operation is referred to as amortized O(1) time. In the following table, the timings for different operations on a list of size 10,000 are shown; you ...
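The amortized-O(1) behaviour of append comes from CPython over-allocating. A sketch using sys.getsizeof (CPython-specific, so the exact numbers vary by version) shows that most appends reuse spare capacity and only a few trigger a reallocation:

```python
import sys

lst = []
sizes = []
for i in range(64):
    lst.append(i)
    size = sys.getsizeof(lst)
    if not sizes or size != sizes[-1]:
        sizes.append(size)   # record only when the allocation grows

# 64 appends, but far fewer distinct allocation sizes: most appends
# fit into capacity reserved by an earlier, larger reallocation.
print(f"{len(lst)} appends, {len(sizes)} reallocations: {sizes}")
```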
Python pop() Function | List & Dictionaries (+Code Examples) | Unstop (November 11, 2024)
The pop() function in Python provides several benefits, especially when dealing with mutable sequences like lists. Here are some of the key advantages: from the end, Python pop() is highly efficient (O(1) time complexity) when removing elements ...
2.6. Lists — Problem Solving with Algorithms and Data Structures 3rd edition (Runestone Academy)
After thinking carefully about ... different times for pop. When pop is called on the end of the list it takes O(1), but when pop is called on the first element of the list (or anywhere in the middle) it is O(n). The reason for this lies in how Python chooses to implement ...
Python - Remove rear element from list - GeeksforGeeks (April 6, 2023)
Time complexity: O(1) - the pop() method takes constant time to remove the last element from the list. Auxiliary space: O(1) - no extra space is used in this code. Method #2: using del list[-1]. This is just the alternate method to perform the ...
How to Use the Python pop() Method | DataCamp (July 31, 2024)
When we use the pop() method to remove the first or any other element, it works in O(n) time because it involves removing an element and shifting the other elements to a new index order. Check out the Analyzing Complexity of Code through Python tutorial to learn more about time complexity in Python.
Performance of Python Types (Bradfield CS)
When pop is called on the end of the list, the operation is O(1); when it is called on the front, it is O(n). Why the difference? When an item is taken from the front of a Python list, all other elements in the list are shifted one position closer to the beginning.
Python Set pop() – Be on the Right Side of Change | Finxter (April 14, 2021)
The runtime complexity of the set.pop() function on a set with n elements is O(1). So, Python's set.pop() method has constant runtime complexity. It simply removes and returns the first element it encounters. You can see this in the following simple experiment where we run the set method ...
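A tiny illustration of set.pop(); note that which element it returns is arbitrary, since sets are unordered, so code should never rely on the choice:

```python
s = {10, 20, 30}
x = s.pop()      # O(1): removes and returns an arbitrary element

print(x, s)      # e.g. 10 and {20, 30}, but the choice is arbitrary
```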
Top answer (1 of 2, score 3)

Your algorithm genuinely does take O(n) time and the "pop in reverse order" algorithm genuinely does take O(n²) time. However, LeetCode isn't reporting that your time complexity is better than 89% of submissions; it is reporting that your actual running time is better than 89% of all submissions. The actual running time depends on what inputs the algorithm is tested with: not just the sizes but also the number of duplicates.

It also depends how the running times across multiple test cases are averaged; if most of the test cases are for small inputs where the quadratic solution is faster, then the quadratic solution may come out ahead overall even though its time complexity is higher. @Heap Overflow also points out in the comments that the overhead time of LeetCode's judging system is proportionally large and quite variable compared to the time it takes for the algorithms to run, so the discrepancy could simply be due to random variation in that overhead.

To shed some light on this, I measured running times using timeit. Plotting the results gives exactly the shapes you'd expect from the time complexities, with the crossover point somewhere between 8000 < n < 9000 on my machine. This is based on sorted lists where each distinct element appears on average twice. The code I used to generate the times is given below.

Timing code:

def linear_solution(nums):
    # Two-pointer, in-place dedup of a sorted list: O(n)
    left, right = 0, 0
    while right < len(nums) - 1:
        if nums[right] != nums[right + 1]:
            nums[left + 1] = nums[right + 1]
            left += 1
        right += 1
    return left + 1

def quadratic_solution(nums):
    # Pop duplicates while iterating in reverse: O(n²) worst case
    prev_obj = []
    for i in range(len(nums) - 1, -1, -1):
        if prev_obj == nums[i]:
            nums.pop(i)
        prev_obj = nums[i]
    return len(nums)

from random import randint
from timeit import timeit

def gen_list(n):
    max_n = n // 2
    return sorted(randint(0, max_n) for i in range(n))

# I used a step size of 1000 up to 15000, then a step size of 5000 up to 50000
step = 1000
max_n = 15000
reps = 100

print('n', 'linear time (ms)', 'quadratic time (ms)', sep='\t')
for n in range(step, max_n+1, step):
    # generate input lists
    lsts1 = [ gen_list(n) for i in range(reps) ]
    # copy the lists by value, since the algorithms will mutate them
    lsts2 = [ list(g) for g in lsts1 ]
    # use iterators to supply the input lists one-by-one to timeit
    iter1 = iter(lsts1)
    iter2 = iter(lsts2)
    t1 = timeit(lambda: linear_solution(next(iter1)), number=reps)
    t2 = timeit(lambda: quadratic_solution(next(iter2)), number=reps)
    # timeit reports the total time in seconds across all reps
    print(n, 1000*t1/reps, 1000*t2/reps, sep='\t')

The conclusion is that your algorithm is indeed faster than the quadratic solution for large enough inputs, but the inputs LeetCode is using to measure running times are not "large enough" to overcome the variation in the judging overhead, and the fact that the average includes times measured on smaller inputs where the quadratic algorithm is faster.

Answer 2 of 2 (score -3)

Just because the solution is not O(n), you can't assume it to be O(n^2).

It doesn't quite become O(n^2), because he uses pop in reverse order, which decreases the cost of each pop: calling pop(i) in forward order costs more than in reverse, since each pop(i) near the back has fewer elements to move, and every loop iteration decreases the number of elements at the back. Try the same solution in non-reverse order and run it a few times to make sure; you'll see.

Anyway, regarding why his solution is faster: you have an if condition with a lot of variables, while he has used only one variable, prev_obj, and iterating in reverse order makes it possible to manage with just that one. So there are more basic operations per iteration in your case, and with the same O(n) complexity each of your n loop iterations takes longer than his.

Just look at your count variable: in every iteration its value is left + 1, so you could simply return left + 1; removing it saves n executions of count = count + 1.

I just posted this solution and it is faster than 76% of submissions:

class Solution:
    def removeDuplicates(self, nums: List[int]) -> int:
        a = sorted(set(nums))
        for i, v in enumerate(a):
            nums[i] = v
        return len(a)

and this one is faster than 90% of submissions:

class Solution:
    def removeDuplicates(self, nums: List[int]) -> int:
        a = {k: 1 for k in nums}  # <--- this is O(n)
        for i, v in enumerate(a.keys()):  # <--- another O(n), but over the smaller set of unique keys, so O(m)
            nums[i] = v
        return len(a)

You might say both of them are more than O(n) if you look at the for loop, but since we are working with duplicate members, I am looping over the reduced set of members while your code loops over all of them. If the time required to build that unique set/dict is less than the time you spend looping over the extra members and checking the if condition, then my solution can be faster.
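The dict-comprehension trick above can also be written with dict.fromkeys, which relies on the same property (dicts preserve insertion order since Python 3.7) and keeps the whole dedup at O(n). A sketch, not tied to the LeetCode class wrapper:

```python
def remove_duplicates(nums):
    # Build an order-preserving list of unique values in O(n) ...
    unique = list(dict.fromkeys(nums))
    # ... then write them back into the front of nums, as above.
    for i, v in enumerate(unique):
        nums[i] = v
    return len(unique)

nums = [1, 1, 2, 2, 3]
k = remove_duplicates(nums)
print(k, nums[:k])   # 3 [1, 2, 3]
```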

What is the time complexity of the push and pop operation of an array-based stack? - Quora
So, irrespective of the number of elements already in the array, this operation will always take the same time, so it will be O(1). Pop: every time the first element of the array is removed, all remaining n-1 elements are moved up. ...
What are the time complexity considerations of lists in Python? - Quora
In a normal list, on average:
* Append: O(1)
* Extend: O(k), where k is the length of the extension
* Index: O(1)
* Slice: O(k)
* Sort: O(n log n), where n is the length of the list
* Len: O(1)
* Pop: O(1) (pop from end)
* Insert: O(n)
...