In the question you've called them iterables, so I'm going to assume they're not sets or similar, and that to determine whether x not in other_iterable is true you have to check the values in other_iterable one at a time. For example, this would be the case if they were lists or generators.
Time complexity is usually quoted for the worst case; it's an upper bound. In this case the worst case would be if every item in iterable was in other_iterable but was the last item returned. Then, for each of the n items in iterable you'd check each of the m items in other_iterable, and the total number of operations would be O(n*m). If m is roughly the same size as n, that's O(n^2).
For example, if iterable = [8, 8, 8] and other_iterable = [1, 2, 3, 4, 5, 6, 7, 8], then for each of the 3 items in iterable you have to check all 8 items in other_iterable before you find out that your if condition is false, so you'd do 3 * 8 = 24 operations in total.
The best case scenario would be if the first item in iterable was not in other_iterable. Then you'd examine only a single element of iterable, but you'd still iterate over all m items in other_iterable before learning that the if condition was true, and you'd be done. That's a total of m operations. However, as noted above, big-O time complexity is usually quoted for the worst case, so you wouldn't ordinarily give this as the complexity.
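A minimal sketch of the situation described above (the function name difference is mine, not from the question):

```python
def difference(iterable, other_iterable):
    """Return the items of iterable that are not in other_iterable."""
    other = list(other_iterable)          # generators can only be scanned once
    result = []
    for x in iterable:                    # n iterations
        if x not in other:                # up to m comparisons each -> O(n*m)
            result.append(x)
    return result

# Worst case from the text: every item of iterable is the *last* element
# of other_iterable, so each of the 3 items costs 8 comparisons (3 * 8 = 24).
print(difference([8, 8, 8], [1, 2, 3, 4, 5, 6, 7, 8]))  # []
```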
I'm not the biggest Python user, but I was looking at a friend's code yesterday and they had something like:
for x in (list of 40,000):
    for y in (list of 2.7 million):
        if x == y:
            append something

This was obviously super slow, so they changed it to something like:
for x in (list of 2.7 million):
    if x in (list of 40,000):
        append something
This moved much faster. I get the point of one for loop being faster than two, but what is that "in" membership test doing that makes it so much faster? I always thought that checking whether something exists in a list is O(n), which shouldn't be faster. Also, this was for ML purposes, so they were likely using numpy stuff.
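The two shapes the post describes can be sketched like this (the names big and small are mine, and the sizes are scaled down; in practice the big win comes from converting the small list to a set, which the answers below discuss):

```python
import random

random.seed(0)
big = [random.randrange(10_000_000) for _ in range(100_000)]  # stand-in for the 2.7M list
small = list(range(40_000))

# Slow shape: nested loops, O(len(big) * len(small)) comparisons.
# matches = [x for x in big for y in small if x == y]

# Faster shape: a membership test. Against a list, "x in small" still
# scans up to 40,000 items per lookup; converting to a set makes each
# lookup O(1) on average, so the whole pass is roughly O(len(big)).
small_set = set(small)
matches = [x for x in big if x in small_set]
```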
x not in some_set just negates the result of x in some_set, so it has the same time complexity. This is the case for any object, set or not. You can take a look at the place in the CPython implementation where it does res = !res; if you want.
For more information on the time complexities of Python's data structures, see https://wiki.python.org/moin/TimeComplexity.
From this it is shown that x in s runs in O(1) on average and O(n) in the worst case. So, as pointed out by user2357112, x not in s is equivalent to not (x in s), which just negates the result of x in s and has the same time complexity.
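A quick check of that equivalence: both expressions do the same hash lookup on the set, and one just flips the result.

```python
s = {1, 2, 3}

# "x not in s" performs the same lookup as "x in s" and negates the result,
# so both are O(1) on average for a set.
for x in (2, 99):
    assert (x not in s) == (not (x in s))

print(2 not in s)   # False
print(99 not in s)  # True
```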
You could say that this function has a time complexity of $\mathcal{O}(h)$, where $h$ is the height of the BST.
You could be more precise and say that it is $\Theta(h)$ in the worst case. This means that the complexity will never be more than linear in the height, and adds that this linear bound is actually reached (even if not on every input, as you have stated).
While Nathaniel's answer is obviously right, there is something else to consider: do the operations that you perform change the shape of your tree? A binary tree with 1,000 nodes could have a height of about 10, or it could degenerate into a linked list with a height of 1,000. The latter happens if you have a naive implementation of "add a node" and add 1,000 nodes in sorted order.
So what is the height of your average tree, based on the algorithms that you are using?