len(obj) simply calls obj.__len__():
>>> [1, 2, 3, 4].__len__()
4
It is therefore not correct to say that len() is always O(1) -- calling len() on most objects (e.g. lists) is O(1), but an arbitrary object might implement __len__ in an arbitrarily inefficient way.
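To see this concretely, here is a minimal sketch (the LinkedList class is a hypothetical example, not anything from the standard library) where __len__ walks the whole structure, so len() on it is O(n):

    # Hypothetical singly linked list that counts nodes on demand,
    # so its __len__ -- and therefore len() on it -- is O(n), not O(1).
    class LinkedList:
        def __init__(self, items=()):
            self.head = None
            for item in reversed(list(items)):
                self.head = (item, self.head)  # node = (value, next)

        def __len__(self):
            # Traverses every node: cost grows linearly with the list.
            count = 0
            node = self.head
            while node is not None:
                count += 1
                node = node[1]
            return count

    lst = LinkedList([10, 20, 30])
    print(len(lst))  # len() delegates to LinkedList.__len__ -> 3

Built-in lists avoid this by storing their length in a field, but nothing forces a user-defined class to do the same.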
max(obj) is a different story, because it doesn't call a single magic __max__ method on obj; it instead iterates over it, calling __iter__ and then calling __next__. It does this n times (and also does a comparison each time to track the max item it's seen so far), so it must always be at least O(n). (It can be slower if __next__ or the comparison methods are slow, although that would be very unusual.)
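A rough sketch of what max() does internally (my_max is an illustrative name; the real built-in is implemented in C, but it uses the same iterator protocol):

    # Sketch of max() using only __iter__/__next__ plus comparisons:
    # n calls to __next__ and n-1 comparisons, hence at least O(n).
    def my_max(iterable):
        it = iter(iterable)          # calls iterable.__iter__()
        try:
            best = next(it)          # calls it.__next__()
        except StopIteration:
            raise ValueError("my_max() arg is an empty sequence") from None
        for item in it:              # one __next__ call per remaining element
            if item > best:          # one comparison per element
                best = item
        return best

    print(my_max([3, 1, 4, 1, 5]))  # 5

Note that, like the real max(), this raises ValueError on an empty iterable, since there is no first element to start from.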
For either of these, we don't count the time it took to build the collection as part of the cost of calling the operation itself -- this is because you might build a list once and then call len() on it many times, and it's useful to know that the len() by itself is very cheap even if building the list was very expensive.
Let's check it:
import time

from matplotlib import pyplot as plt
import numpy as np


def main():
    a = []
    data = []
    for i in range(10_000):
        a.append(i)

        # time.perf_counter() has much better resolution than time.time()
        ts_len = time.perf_counter()
        _ = len(a)
        te_len = time.perf_counter()

        ts_max = time.perf_counter()
        _ = max(a)
        te_max = time.perf_counter()

        ts_min = time.perf_counter()
        _ = min(a)
        te_min = time.perf_counter()

        data.append([i, te_len - ts_len, te_max - ts_max, te_min - ts_min])

    data = np.array(data)
    plt.plot(data[:, 0], data[:, 1], "-r", label="len")
    plt.plot(data[:, 0], data[:, 2], "--g", label="max")
    plt.plot(data[:, 0], data[:, 3], ".b", label="min")  # column 3 holds the min timings
    plt.title("Len/max/min")
    plt.xlabel("Size of the list")
    plt.ylabel("Time elapsed (s)")
    plt.legend()
    plt.show()


if __name__ == '__main__':
    main()

It's O(1) (constant time, not depending on the actual length of the container, so very fast) for every type you've mentioned, plus set and others such as array.array.
Calling len() on those data types is O(1) in CPython, the official and most common implementation of the Python language. Here's a link to a table that provides the algorithmic complexity of many different functions in CPython:
TimeComplexity Python Wiki Page
Why does len(list) have a time complexity of O(1)?
I was looking at the time complexity of different list operations, and the "get length" operation has constant time O(1). Why is that? I would understand it if lists could only store a single type of object (e.g. only integers or characters), but since lists can hold objects of different sizes I don't understand how the len() function has O(1) complexity.
Inspecting the C source of dictobject.c shows that the structure maintains explicit size fields rather than recomputing the count (the dict object itself tracks the number of stored items in ma_used, and the keys object keeps its own counters such as dk_nentries)
layout:
+---------------+
| dk_refcnt |
| dk_size |
| dk_lookup |
| dk_usable |
| dk_nentries |
+---------------+
...
Reading a stored field takes constant time, so len() on a dict is O(1)
According to this page:
Time Complexity: O(1) – In Python, a variable is maintained inside the container (here the dictionary) that holds its current size. Whenever anything is pushed into or popped from the container, that variable is incremented (for a push) or decremented (for a pop). Say there are 2 elements already present in a dictionary: when we insert another element, the size variable is incremented to 3 as part of the insertion. When we call len() on the dictionary, it calls the magic method __len__(), which simply returns the stored size. Hence, it is an O(1) operation.
Space Complexity: O(1) – Since there’s only a single variable holding the size of the dictionary, there’s no auxiliary space involved. Hence the space complexity of the method is O(1) too.
I think you are missing one concept: how a data structure can return its size in constant time, i.e. O(1).
Roughly, think of a program like this:
void init() {
    // code to initialize (or allocate memory for) the container
    size = 0;
}

void add(Something something) {
    container.add(something);
    size++;
}

void remove(Something something) {
    // code to search for 'something'
    if (found) {
        container.remove(something);
        size--;
    }
}

int len() {
    return size;
}
Now any time you call the method len(), it is ready to return the integral value without any need to traverse the container.
The reason strlen() (and C data structures in general) doesn't work this way is the space overhead of keeping a counter like size around. But that doesn't mean you can't define one yourself.
Hint:
Use a struct and keep the size maintained there.
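The same counter idea translated to Python (Bag is an illustrative name, not a standard type): every mutation updates a stored count, so __len__ just returns it in O(1).

    # Container that keeps an explicit size counter, mirroring the
    # pseudocode above: __len__ never traverses the stored items.
    class Bag:
        def __init__(self):
            self._items = []
            self._size = 0           # explicit counter, updated on add/remove

        def add(self, item):
            self._items.append(item)
            self._size += 1

        def remove(self, item):
            if item in self._items:  # O(n) search, like the pseudocode
                self._items.remove(item)
                self._size -= 1

        def __len__(self):           # O(1): just return the stored counter
            return self._size

    bag = Bag()
    bag.add("a")
    bag.add("b")
    bag.remove("a")
    print(len(bag))  # 1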
Any string/list in Python is an object. Like many objects, it stores its length internally and exposes a __len__ method. When we call len(), __len__ is called internally and returns the stored value, which is an O(1) operation.