There is a very detailed table on the Python wiki which answers your question.
However, in your particular example you should use enumerate to get the index of each item of an iterable within a loop, like so:
for i, item in enumerate(some_seq):
    bar(item, i)
— Answer from SilentGhost on Stack Overflow
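A minimal, runnable sketch of the enumerate pattern above; the fruits list is just example data, and the optional start parameter shifts the first index:

```python
# enumerate() pairs each item with its index, avoiding manual counters.
fruits = ["apple", "banana", "cherry"]

pairs = list(enumerate(fruits))            # [(0, 'apple'), (1, 'banana'), (2, 'cherry')]
offset = list(enumerate(fruits, start=1))  # indices start at 1 instead of 0

print(pairs)
print(offset)
```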
The answer is "undefined". The Python language doesn't define the underlying implementation. Here are some links to a mailing list thread you might be interested in.
It is true that Python's lists have been implemented as contiguous vectors in the C implementations of Python so far.
I'm not saying that the O() behaviours of these things should be kept a secret or anything. But you need to interpret them in the context of how Python works generally.
Also, the more Pythonic way of writing your loop would be this:
def foo(some_list):
    for item in some_list:
        bar(item)
What is the time complexity of the “in” operation
Big O Cheat Sheet: the time complexities of operations on Python's data structures
algorithm - Time Complexity of List creation in Python - Stack Overflow
Why is the time complexity of Python's list.append() method O(1)? - Stack Overflow
I’m not the biggest Python user, but I was looking at a friend's code yesterday and they had something like:
for x in (list of 40,000):
    for y in (list of 2.7 million):
        if x == y:
            append something
This was obviously super slow, so they changed it to something like:
for x in (list of 2.7 million):
    if x in (list of 40,000):
        append something
This moved much faster. I get the point of one for loop being faster than two, but what is that “in” membership check doing that makes it so much faster? I always thought that checking whether something exists is O(n), which shouldn’t be faster. Also, this was for ML purposes, so they were likely using numpy stuff.
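A sketch of the two approaches with plain Python lists (ignoring numpy). The names small and big are toy stand-ins for the 40,000- and 2.7-million-element lists; the key point is that x in some_list scans the list (O(n) per lookup), while converting the smaller list to a set once makes each lookup O(1) on average:

```python
# Toy stand-ins for the two lists in the question.
small = list(range(0, 1000, 3))
big = list(range(10_000))

# Slow version: each `x in small` is a linear scan of the list.
matches_list = [x for x in big if x in small]

# Fast version: one O(len(small)) conversion to a set,
# then O(1) average-case hash lookups per element.
small_set = set(small)
matches_set = [x for x in big if x in small_set]

assert matches_list == matches_set
print(len(matches_set))
```

Both versions do the same number of loop iterations; the speedup comes entirely from replacing the linear membership scan with a hash lookup.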
I made a cheat sheet of all common operations on Python's many data structures. This includes both the built-in data structures and all common standard library data structures.
The time complexities of different data structures in Python
If you're unfamiliar with time complexity and Big O notation, be sure to read the first section and the last two sections. I also recommend Ned Batchelder's talk/article that explains this topic more deeply.
It's amortized O(1), not plain O(1).
Let's say the list reserved size is 8 elements and it doubles in size when space runs out. You want to push 50 elements.
The first 8 elements push in O(1). The ninth triggers a reallocation and 8 copies, followed by an O(1) push. The next 7 push in O(1). The seventeenth triggers a reallocation and 16 copies, followed by an O(1) push. The next 15 push in O(1). The thirty-third triggers a reallocation and 32 copies, followed by an O(1) push. The remaining 17 push in O(1), since a capacity of 64 holds all 50 elements. Had we kept pushing, the size of the list would double again at the 65th, 129th, 257th element, etc.
So all 50 of the pushes have O(1) complexity, and on top of that we had 56 copies (8 + 16 + 32) from 3 reallocations at O(n), with n = 8, 16, and 32. Note that the copy counts form a geometric series that asymptotically equals O(n) with n = the final size of the list. That means the whole operation of pushing n objects onto the list is O(n). If we amortize that per element, it's O(n)/n = O(1).
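You can watch this over-allocation happen with a sketch like the following. sys.getsizeof reports the list object's allocated size, which jumps only on the occasional growth step; the exact sizes and growth factor are CPython implementation details, not guaranteed by the language:

```python
import sys

# Append 50 elements and report each time the allocation grows.
# Most appends reuse the spare capacity (O(1)); only a few trigger
# a reallocation that copies the whole list.
lst = []
last_size = sys.getsizeof(lst)
for i in range(50):
    lst.append(i)
    size = sys.getsizeof(lst)
    if size != last_size:
        print(f"append #{i + 1}: allocation grew to {size} bytes")
        last_size = size
```

Running this shows only a handful of growth events among the 50 appends, which is exactly the "rare expensive operation" the amortized analysis averages out.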
If you look at the footnote in the document you linked, you can see that they include a caveat:
These operations rely on the "Amortized" part of "Amortized Worst Case". Individual actions may take surprisingly long, depending on the history of the container.
Using amortized analysis, even if we have to occasionally perform expensive operations, we can get an upper bound on the 'average' cost of operations when you consider them as a sequence, instead of individually.
So, any individual operation could be very expensive - O(n) or O(n^2) or something even bigger - but since we know these operations are rare, we can still guarantee that a sequence of n operations completes in O(n) time overall.
That depends what exactly you mean by "constant sized". The time to find the minimum of a list with 917,340 elements is $O(1)$ with a very large constant factor. The time to find the minimum of various lists of different constant sizes is $O(n)$ and likely $\Theta(n)$ where $n$ is the size of each list. Finding the minimum of a list of 917,340 elements takes much longer than finding the minimum of a list of 3 elements.
I found this quote from the Wikipedia article on time complexity helpful:
The time complexity is generally expressed as a function of the size of the input.
So if the size of the input doesn't vary, for example if every list is of 256 integers, the running time will also not vary, and the time complexity is therefore O(1). This would be true of any algorithm, such as sorting, searching, etc.
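One way to make this concrete is to count the work directly. The helper below, counting_min, is a toy stand-in for the built-in min() that tallies its comparisons: for any list of a fixed size the count is the same constant, while across varying sizes it grows linearly with n:

```python
# counting_min: a toy min() that reports how many comparisons it made.
def counting_min(seq):
    comparisons = 0
    smallest = seq[0]
    for item in seq[1:]:
        comparisons += 1
        if item < smallest:
            smallest = item
    return smallest, comparisons

# A fixed-size input always costs the same: n - 1 = 255 comparisons.
_, c_fixed = counting_min(list(range(256)))
# A large input costs proportionally more: 917,339 comparisons.
_, c_large = counting_min(list(range(917_340)))

print(c_fixed)   # 255
print(c_large)   # 917339
```

The fixed-size case is "O(1)" only in the degenerate sense that the input size never changes; the comparison count still scales as n - 1 the moment n is allowed to vary.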