I think it should be O(n^2) in both cases. My reasoning:
for idx, num in enumerate(hm3):
    for i in range(hm3[num]):
        l.append(num)
Outer loop: O(n)
Inner loop: O(k) (say the inner loop runs at most k times)
append: O(1)
Total: n * k * O(1) = O(n*k) = O(n^2), since k can be up to n in the worst case
for n in hm3:
    l.extend([n] * hm3[n])
Outer loop: O(n)
extend: O(n)
Total: O(n^2)
Let me know if my thinking is wrong.
Answer from vipul bhatia on Stack Overflow to "What is Python's list.append() method WORST Time Complexity? It can't be O(1), right?"
It's amortized O(1), not worst-case O(1).
Let's say the list's reserved size is 8 elements and it doubles in size when space runs out, and you want to push 50 elements.
The first 8 elements push in O(1). The ninth triggers a reallocation and 8 copies, followed by an O(1) push. The next 7 push in O(1). The seventeenth triggers a reallocation and 16 copies, followed by an O(1) push. The next 15 push in O(1). The thirty-third triggers a reallocation and 32 copies, followed by an O(1) push. The remaining 17 push in O(1). Had we kept going, the list would double again at the 65th, 129th, 257th element, and so on.
So all of the pushes have O(1) complexity; we had 56 copies at O(1) each, spread over 3 reallocations of sizes 8, 16, and 32. Note that the copy counts form a geometric series that asymptotically sums to O(n), with n the final size of the list. That means the whole operation of pushing n objects onto the list is O(n). If we amortize that per element, it's O(n)/n = O(1).
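The copy counting above can be checked with a small simulation of this hypothetical doubling scheme (capacity starting at 8 and doubling, as in the example; real CPython grows by a smaller factor):

```python
def copies_for_appends(n, initial_capacity=8):
    """Count element copies performed while appending n items to a
    growable array that starts at `initial_capacity` and doubles
    whenever it runs out of space."""
    capacity = initial_capacity
    size = 0
    copies = 0
    for _ in range(n):
        if size == capacity:
            copies += size      # reallocation copies every existing element
            capacity *= 2
        size += 1
    return copies

print(copies_for_appends(50))   # 8 + 16 + 32 = 56
```

The geometric-series bound shows up directly: the total number of copies is always less than 2n, so the amortized cost per append stays constant.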
If you look at the footnote in the document you linked, you can see that they include a caveat:
These operations rely on the "Amortized" part of "Amortized Worst Case". Individual actions may take surprisingly long, depending on the history of the container.
Using amortized analysis, even if we occasionally have to perform expensive operations, we can get an upper bound on the 'average' cost of operations when we consider them as a sequence, instead of individually.
So, any individual operation could be very expensive (O(n) or O(n^2) or something even bigger), but since we know these operations are rare, we can guarantee that a sequence of n operations can be done in O(n) time.
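You can watch those rare expensive operations happen on a real CPython list with sys.getsizeof: the allocated size jumps only occasionally, while most appends reuse already-reserved space. (The exact growth pattern is a CPython implementation detail that varies by version; it is not a strict doubling.)

```python
import sys

def capacity_jumps(n):
    """Append n items to a list and record (length, allocated_bytes)
    each time the list's reserved size actually grows."""
    lst, jumps = [], []
    last = sys.getsizeof(lst)
    for i in range(n):
        lst.append(i)
        size = sys.getsizeof(lst)
        if size != last:        # a reallocation happened here
            jumps.append((len(lst), size))
            last = size
    return jumps

for length, size in capacity_jumps(64):
    print(f"len={length:3d}  allocated={size} bytes")
```

The number of jumps grows far more slowly than the number of appends, which is exactly the "expensive but rare" behavior amortized analysis relies on.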
I know that lists in Python are implemented using arrays that store addresses of the actual objects. Therefore, after several appends, when the array is full, it needs to reserve new space and copy the entire array of addresses to the new location.
I've read on Stack Overflow that in Python the array doubles in size when it runs out of space. So basically it has to copy all the addresses log(n) times.
The bigger the list, the more copying it will need to do. So how can the append operation have constant time complexity O(1) if it has some dependence on the array size?
I assume that since it copies addresses, not the data itself, it shouldn't take long; an address takes only 8 bytes after all. Moreover, it does so very rarely. Does that mean that O(1) is the average time complexity? Is my assumption right?
Referencing my comment again: "appending" a list to a list is still O(1), since you are just adding a reference to that list. "Extending" with a list of size m costs O(m), since it has to traverse the other list of size m to append every single element.
seq = [1, 2, 3]
new_seq = [4, 5, 6]
# Appending is O(1)
seq.append(new_seq)
print(seq) # [1, 2, 3, [4, 5, 6]]
seq = [1, 2, 3]
new_seq = [4, 5, 6]
# Extending is O(m), m = len(new_seq)
seq.extend(new_seq)
print(seq) #[1, 2, 3, 4, 5, 6]
list_ = []
list_.append([_ for _ in range(0,n)])
You take O(n) time to create the list [_ for _ in range(0,n)], then O(1) time to add a reference to this list to your list_. That's O(n) in total for the line list_.append([_ for _ in range(0,n)]).
list.append(item) is amortized O(1). list.extend(other_list) is O(k), where k is the size of other_list, presumably because copying k references (essentially a memcpy) is a linear-time operation.
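A rough way to see this difference in practice (the length checks are exact; the timings are machine-dependent and only illustrative):

```python
from timeit import timeit

m = 100_000
other = list(range(m))

seq = []
seq.append(other)   # one new element: a reference to `other`
assert len(seq) == 1

seq = []
seq.extend(other)   # m new elements copied in
assert len(seq) == m

# append cost should not depend on len(other); extend cost should.
t_append = timeit(lambda: [].append(other), number=1000)
t_extend = timeit(lambda: [].extend(other), number=1000)
print(f"append: {t_append:.4f}s   extend: {t_extend:.4f}s")
```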
Let's say we are writing a function where we know that the length of the output list equals the length of the input list. All we need to do is insert some value into the output list and return it. I'd like to know whether one approach's time complexity is better than the other's.
First approach:
def someFunc(inputArray):
    result = [1 for _ in inputArray]
    for i in range(len(inputArray)):
        someValue = 100
        result[i] = someValue

vs.
def someFunc(inputArray):
    result = []
    for i in range(len(inputArray)):
        someValue = 100
        result.append(someValue)

The first approach's `result[i] = someValue` is an O(1) operation; however, isn't the list comprehension O(n)? If that's the case, would the overall algorithm take O(2n) time?
The second approach's `result.append(someValue)` can be viewed as O(1)? Does that lead to an overall time complexity of O(n)?
Does that mean that, in terms of time complexity, the second approach is better than the first? Or not?
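Both versions do O(n) total work, and O(2n) is the same as O(n). Filling in the question's sketches with a fixed placeholder value, the two produce identical results:

```python
def fill_preallocated(input_array):
    # First approach: build an n-element list (O(n)), then overwrite
    # each slot by index (O(1) per assignment).
    result = [1 for _ in input_array]
    for i in range(len(input_array)):
        result[i] = 100
    return result

def fill_by_append(input_array):
    # Second approach: start empty and append (amortized O(1) each).
    result = []
    for _ in input_array:
        result.append(100)
    return result

data = list(range(5))
print(fill_preallocated(data))   # [100, 100, 100, 100, 100]
print(fill_by_append(data))      # [100, 100, 100, 100, 100]
```

In practice the constant factors differ slightly (preallocation avoids the occasional reallocation), but asymptotically both are O(n).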
.append() appends a single object at the end of the list:
>>> x = [1, 2, 3]
>>> x.append([4, 5])
>>> print(x)
[1, 2, 3, [4, 5]]
.extend() appends multiple objects that are taken from inside the specified iterable:
>>> x = [1, 2, 3]
>>> x.extend([4, 5])
>>> print(x)
[1, 2, 3, 4, 5]
.append() adds an element to a list,
whereas .extend() concatenates the first list with another list/iterable.
>>> xs = ['A', 'B']
>>> xs
['A', 'B']
>>> xs.append("D")
>>> xs
['A', 'B', 'D']
>>> xs.append(["E", "F"])
>>> xs
['A', 'B', 'D', ['E', 'F']]
>>> xs.insert(2, "C")
>>> xs
['A', 'B', 'C', 'D', ['E', 'F']]
>>> xs.extend(["G", "H"])
>>> xs
['A', 'B', 'C', 'D', ['E', 'F'], 'G', 'H']
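Note that .extend() accepts any iterable, not just lists; for example tuples, ranges, and strings (which extend one character at a time):

```python
xs = [1, 2]
xs.extend((3, 4))        # tuple
xs.extend(range(5, 7))   # range object
xs.extend("ab")          # string: adds one character per element
print(xs)                # [1, 2, 3, 4, 5, 6, 'a', 'b']
```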