len(obj) simply calls obj.__len__():
>>> [1, 2, 3, 4].__len__()
4
It is therefore not correct to say that len() is always O(1) -- calling len() on most objects (e.g. lists) is O(1), but an arbitrary object might implement __len__ in an arbitrarily inefficient way.
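To illustrate, here is a hypothetical class (not from the original post) whose __len__ walks the whole structure, so len() on it is O(n) rather than O(1):

```python
class LinkedList:
    """A toy singly linked list built from (item, next) tuples."""

    def __init__(self, items):
        self._head = None
        for item in reversed(items):
            self._head = (item, self._head)

    def __len__(self):
        # Walks every node, so len() on this object is O(n).
        count = 0
        node = self._head
        while node is not None:
            count += 1
            node = node[1]
        return count

lst = LinkedList([10, 20, 30])
print(len(lst))  # 3 -- len() dispatched to our O(n) __len__
```

len() itself adds no cost beyond the dispatch; the complexity is entirely whatever __len__ does.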
max(obj) is a different story, because it doesn't call a single magic __max__ method on obj; it instead iterates over it, calling __iter__ and then calling __next__. It does this n times (and also does a comparison each time to track the max item it's seen so far), so it must always be at least O(n). (It can be slower if __next__ or the comparison methods are slow, although that would be very unusual.)
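A rough sketch of that behavior (not CPython's actual implementation of max(), which is written in C and also accepts a key function and multiple arguments):

```python
def my_max(iterable):
    it = iter(iterable)          # calls iterable.__iter__()
    try:
        best = next(it)          # calls it.__next__()
    except StopIteration:
        raise ValueError("my_max() arg is an empty sequence")
    for item in it:              # keeps calling __next__ until exhausted
        if item > best:          # one comparison per element -> O(n) total
            best = item
    return best

print(my_max([3, 1, 4, 1, 5]))  # 5
```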
For either of these, we don't count the time it took to build the collection as part of the cost of calling the operation itself -- this is because you might build a list once and then call len() on it many times, and it's useful to know that the len() by itself is very cheap even if building the list was very expensive.
Let's check it:
import time

from matplotlib import pyplot as plt
import numpy as np


def main():
    a = []
    data = []
    for i in range(10_000):
        a.append(i)
        # Time each operation on a list of the current size.
        # perf_counter() has much finer resolution than time(),
        # which matters when timing something as fast as len().
        ts_len = time.perf_counter()
        _ = len(a)
        te_len = time.perf_counter()
        ts_max = time.perf_counter()
        _ = max(a)
        te_max = time.perf_counter()
        ts_min = time.perf_counter()
        _ = min(a)
        te_min = time.perf_counter()
        data.append([i, te_len - ts_len, te_max - ts_max, te_min - ts_min])
    data = np.array(data)
    plt.plot(data[:, 0], data[:, 1], "-r", label="len")
    plt.plot(data[:, 0], data[:, 2], "--g", label="max")
    plt.plot(data[:, 0], data[:, 3], ".b", label="min")  # column 3, not 2
    plt.title("Len/max/min")
    plt.xlabel("Size of the list")
    plt.ylabel("Time elapsed (s)")
    plt.legend()
    plt.show()


if __name__ == "__main__":
    main()

When I call len(list), does it count the elements one by one, or does it return some kind of metadata holding the list's length?
It's O(1) (constant time, not depending on the actual length of the object -- very fast) on every type you've mentioned, plus set and others such as array.array.
Calling len() on those data types is O(1) in CPython, the official and most common implementation of the Python language. Here's a link to a table that gives the algorithmic complexity of many built-in operations in CPython:
TimeComplexity - Python Wiki
I was looking at the time complexity of different list operations, and the "get length" operation is listed as constant time, O(1). Why is that? I would understand it if lists could only store a single type of object (e.g. only integers or characters), but since lists can hold objects of different sizes, I don't understand how len() can have O(1) complexity.
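One way to see why the element sizes don't matter, assuming CPython: a list stores fixed-size references to its elements, not the objects themselves, and it keeps its length in a field of the list object. The sketch below uses sys.getsizeof to show that the container's own footprint depends only on the element count:

```python
import sys

# Two lists with the same number of elements, but the elements of
# `big` are 10,000x larger than those of `small`.
small = ["a"] * 1000
big = ["a" * 10_000] * 1000

# The list objects themselves are the same size: each holds 1000
# fixed-size references plus a stored length, regardless of how
# large the referenced objects are.
print(sys.getsizeof(small) == sys.getsizeof(big))  # True

# len() just reads that stored length field -- O(1) either way.
print(len(small), len(big))  # 1000 1000
```

Since the length is bookkeeping the list maintains as it grows and shrinks, len() never has to touch the elements at all.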