In the question you've called them iterables, so I'm going to assume they're not sets or anything similar, and that to determine whether x not in other_iterable is true you have to check the values in other_iterable one at a time. For example, this would be the case if they were lists or generators.

Time complexity is usually quoted for the worst case; it's an upper bound. So, in this case the worst case would be if everything in iterable was in other_iterable but was the last item checked each time. Then, for each of the n items in iterable you'd check each of the m items in other_iterable, and the total number of operations would be O(n*m). If n and m are roughly the same size then it's O(n^2).
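As a concrete sketch of the pattern being analyzed (the function name and exact loop shape are assumptions, since the question's code isn't shown), each of the n items triggers a linear scan of up to m items:

```python
def not_in_other(iterable, other_iterable):
    """Collect the items of iterable that are absent from other_iterable.

    For each of the n items in iterable, `not in` scans other_iterable
    linearly, so the worst case is O(n * m) comparisons.
    """
    result = []
    for x in iterable:                 # n iterations
        if x not in other_iterable:    # up to m comparisons per iteration
            result.append(x)
    return result

print(not_in_other([1, 9, 3], [1, 2, 3]))  # [9]
```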

For example, if iterable = [8, 8, 8] and other_iterable = [1, 2, 3, 4, 5, 6, 7, 8], then for each of the 3 items in iterable you have to check all 8 items in other_iterable before you find out that the if condition is false, so you'd have 8 * 3 = 24 total operations.
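That comparison count can be made explicit with a hand-rolled scan that mimics what list membership does internally (the helper function is made up for illustration):

```python
def scan(haystack, needle):
    """Linear scan mimicking list membership: returns (found, comparisons)."""
    comparisons = 0
    for item in haystack:
        comparisons += 1
        if item == needle:
            return True, comparisons
    return False, comparisons

iterable = [8, 8, 8]
other_iterable = [1, 2, 3, 4, 5, 6, 7, 8]

total = 0
for x in iterable:
    found, n_cmp = scan(other_iterable, x)
    total += n_cmp   # 8 comparisons each time: 8 is the last element

print(total)  # 24, i.e. 3 * 8
```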

The best-case scenario would be if the first item in iterable was not in other_iterable. Then you'd only examine a single element of iterable, but you'd still iterate over all m items in other_iterable before learning that the if condition was true, and you'd be done. That's a total of m operations. However, as noted above, big-O time complexity is usually quoted for the worst case, so you wouldn't ordinarily give this as the complexity.
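Assuming the surrounding code stops as soon as it finds a missing item (again, an assumption about the unshown code), the best case looks like this:

```python
def all_present(iterable, other_iterable):
    """Return False as soon as an item of iterable is missing."""
    for x in iterable:
        if x not in other_iterable:  # one full scan: m comparisons
            return False             # best case: the very first x is absent
    return True

# Best case: the first item (99) is missing, so the function does a
# single m-item scan of other_iterable and returns immediately.
print(all_present([99, 8, 8], [1, 2, 3, 4, 5, 6, 7, 8]))  # False
```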

Answer from Oliver Dain on Stack Overflow