There is a very detailed table on the Python wiki which answers your question.
However, in your particular example you should use enumerate to get the index of an item within a loop, like so:
for i, item in enumerate(some_seq):
    bar(item, i)
Answer from SilentGhost on Stack Overflow
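As a quick runnable illustration (the list contents here are just placeholders, not from the original question):

```python
some_seq = ["a", "b", "c"]

# enumerate yields (index, item) pairs, so no manual counter is needed
for i, item in enumerate(some_seq):
    print(i, item)
# 0 a
# 1 b
# 2 c
```

enumerate also accepts a start argument (e.g. enumerate(some_seq, 1)) if you want counting to begin somewhere other than zero.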
The answer is "undefined". The Python language doesn't define the underlying implementation. Here are some links to a mailing list thread you might be interested in.
It is true that Python's lists have been implemented as contiguous vectors in the C implementations of Python so far.
I'm not saying that the O() behaviours of these things should be kept a secret or anything. But you need to interpret them in the context of how Python works generally.
Also, the more Pythonic way of writing your loop would be this:
def foo(some_list):
    for item in some_list:
        bar(item)
What is the time complexity of the “in” operation?
I’m not the biggest Python user, but I was looking at a friend’s code yesterday and they had something like:
for x in (list of 40,000):
    for y in (list of 2.7 million):
        if x == y:
            append something

This was obviously super slow, so they changed it to something like:
for x in (list of 2.7 million):
    if x in (list of 40,000):
        append something
This moved much faster. I get the point of one for loop being faster than two, but what is that “in” existence check doing that makes it so much faster? I always thought that checking whether something exists is O(n), which shouldn’t be faster. Also, this was for ML purposes, so they were likely using numpy stuff.
The complexity of in depends entirely on what L is. e in L will become L.__contains__(e).
See this time complexity document for the complexity of several built-in types.
Here is the summary for in:
- list - Average: O(n)
- set/dict - Average: O(1), Worst: O(n)
The O(n) worst case for sets and dicts is very uncommon, but it can happen if __hash__ is implemented poorly. It degrades toward O(n) when many elements in your set share the same hash value.
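To make the worst case concrete, here is a small sketch (the class name BadHash is made up for illustration) where every element hashes to the same value, so set membership degenerates into a linear scan of equality comparisons:

```python
class BadHash:
    """Every instance hashes to 0, so a set of these collides
    constantly and 'in' degrades toward O(n)."""

    def __init__(self, value):
        self.value = value

    def __hash__(self):
        return 0  # pathological: all instances share one hash bucket

    def __eq__(self, other):
        return isinstance(other, BadHash) and self.value == other.value


items = {BadHash(i) for i in range(1000)}

# Membership still gives the right answer; it just costs O(n)
# equality checks instead of the usual O(1) hash lookup.
print(BadHash(500) in items)   # True
print(BadHash(9999) in items)  # False
```

Well-behaved hash functions (like those on int and str) spread values across buckets, which is why the average case stays O(1).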
It depends entirely on the type of the container. Hashing containers (dict, set) use the hash and are essentially O(1). Typical sequences (list, tuple) are implemented as you guess and are O(n). Trees would be average O(log n). And so on. Each of these types would have an appropriate __contains__ method with its big-O characteristics.
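Since in just dispatches to the container's __contains__ method, any class can define its own membership semantics. A minimal sketch (EvenNumbers is a hypothetical class, not from the answers above):

```python
class EvenNumbers:
    """'x in obj' dispatches to obj.__contains__(x), so this class
    can answer membership without storing anything at all."""

    def __contains__(self, x):
        return isinstance(x, int) and x % 2 == 0


evens = EvenNumbers()
print(4 in evens)  # True
print(3 in evens)  # False
```

This is exactly why the big-O of in varies by type: list.__contains__ scans, set.__contains__ hashes, and a custom class can do whatever its author chose.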