Yes, in your case*1 string concatenation requires all characters to be copied, which is an O(N+M) operation (where N and M are the lengths of the input strings). M appends of the same word therefore trend towards O(M^2) time.

You can avoid this quadratic behaviour by using str.join():

word = ''.join(list_of_words)

which only takes O(N) (where N is the total length of the output). Or, if you are repeating a single character, you can use:

word = m * char

You are prepending characters, but building a list first and then reversing it (or using a collections.deque() object to get O(1) prepending) would still be O(N) overall, easily beating the O(N^2) approach here.
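As a sketch of that suggestion (not part of the original answer), both the list-then-reverse approach and the deque approach look like this:

```python
from collections import deque

def build_reversed(chars):
    # Append to a plain list (amortized O(1) per append), reverse once,
    # and join: O(n) overall instead of O(n^2) repeated prepending.
    parts = []
    for c in chars:
        parts.append(c)
    parts.reverse()
    return ''.join(parts)

def build_reversed_deque(chars):
    # deque.appendleft is O(1), so characters land in prepended order
    # directly; the final join is still O(n).
    d = deque()
    for c in chars:
        d.appendleft(c)
    return ''.join(d)
```

Either way, the single ''.join at the end does all the copying in one linear pass.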


*1 As of Python 2.4, the CPython implementation avoids creating a new string object when using strA += strB or strA = strA + strB, but this optimisation is both fragile and not portable. Since you use strA = strB + strA (prepending) the optimisation doesn't apply.

Answer from Martijn Pieters on Stack Overflow
Python
wiki.python.org › moin › TimeComplexity
TimeComplexity - Python Wiki
Note that there is a fast-path for dicts that (in practice) only deal with str keys; this doesn't affect the algorithmic complexity, but it can significantly affect the constant factors: how quickly a typical program finishes. [1] = These operations rely on the "Amortized" part of "Amortized Worst Case".
Discussions

Algorithm complexity with strings and slices

The Python page on time complexity shows that slicing lists has a time complexity of O(k), where k is the length of the slice. That's for lists, not strings, but the complexity can't be O(1) for strings, since slicing must handle more characters as the size increases. At a guess, the complexity of slicing strings is also O(k). We can write a little bit of code to test that guess:

import time

StartSize = 2097152

size = StartSize
for _ in range(10):
    # create string of size "size"
    s = '*' * size

    # now time reverse slice
    start = time.time()
    r = s[::-1]
    delta = time.time() - start

    print(f'Size {size:9d}, time={delta:.3f}')

    # double size of the string
    size *= 2

This uses a simple method of timing; more precise tools exist, but this is good enough here. When run I get:

$ python3 test.py
Size   2097152, time=0.006
Size   4194304, time=0.013
Size   8388608, time=0.024
Size  16777216, time=0.050
Size  33554432, time=0.098
Size  67108864, time=0.190
Size 134217728, time=0.401
Size 268435456, time=0.808
Size 536870912, time=1.610
Size 1073741824, time=3.192

which shows the time doubles when doubling the size of the string for each reverse slice. So O(n) (k == n for whole-string slicing).
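The standard timeit module gives a less noisy measurement than time.time(); a rough equivalent of the loop above (my sketch, not the original poster's code):

```python
import timeit

# Time the reverse slice alone: the string is built in setup, so only
# s[::-1] is measured; number=10 repeats it to reduce noise.
for size in (1_000_000, 2_000_000, 4_000_000):
    t = timeit.timeit('s[::-1]', setup=f's = "*" * {size}', number=10)
    print(f'size {size:9d}, time={t:.4f}')
```

The times should roughly double as the size doubles, matching the O(n) result above.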

Edit: spelling.

More on reddit.com, r/learnpython, January 6, 2018
Python string 'in' operator implementation algorithm and time complexity - Stack Overflow
I am thinking of how the in operator is implemented, for instance: s1 = 'abcdef'; s2 = 'bcd'; s2 in s1 → True. In CPython, which algorithm is used to implement the string in operator? More on stackoverflow.com
algorithm analysis - Python: A doubt on time and space complexity on string slicing - Computer Science Stack Exchange
Time complexity: O(n^2) where n is the length of the input string. This is because in every loop iteration, the string concatenation of new_word gets longer until it is at worst, length n. Space complexity: O(n), even though there are no delayed operations or new objects being created every ... More on cs.stackexchange.com
python - What is the time complexity of string slice? O(k) or O(n) - Stack Overflow
I understand its extracting only ... the operation have to convert the whole string into a list first before the slice? My thought process is that the conversion of the entire string into a list alone would cost O(n). Unless only part of the string gets converted into a list? So can someone please explain is string slicing on Python O(k) or ... More on stackoverflow.com
Reddit
reddit.com › r/leetcode › python string addition time complexity
r/leetcode on Reddit: Python String Addition Time Complexity
May 7, 2023 -

I am trying to determine the time complexity of the encode function below. It goes over every string in the input list, so that's O(n), where n is the length of the input list. But in each iteration, we add to the encodedStr. Since Python creates a new string every time the concatenation is performed, do we need to take the length of encodedStr into account for the time complexity of the function?

class Codec:
    def encode(self, strs: List[str]) -> str:
        # Encodes a list of strings to a single string.
        encodedStr = ''
        for s in strs:
            encodedStr += str(len(s)) + '#' + s

        return encodedStr
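A common linear-time rewrite (a sketch, not part of the original post) collects the pieces in a list and joins once at the end, so the total string copying is proportional to the output length:

```python
from typing import List

class Codec:
    def encode(self, strs: List[str]) -> str:
        # Collect the pieces in a list (amortized O(1) appends) and
        # join once at the end: O(total output length) overall.
        parts = []
        for s in strs:
            parts.append(str(len(s)) + '#' + s)
        return ''.join(parts)
```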
DEV Community
dev.to › williams-37 › understanding-time-complexity-in-python-functions-5ehi
Understanding Time Complexity in Python Functions - DEV Community
October 25, 2024 - Searching for a substring in a string can take linear time in the worst case, where n is the length of the string and m is the length of the substring. ... Finding the length of a list, dictionary, or set is a constant time operation. List Comprehensions: [expression for item in iterable] → O(n) The time complexity of list comprehensions is linear, as they iterate through the entire iterable.
Codeforces
codeforces.com › blog › entry › 125610
Optimize Your Python Codeforces Solutions: Say Goodbye to str += str and Time Limit - Codeforces
When building strings iteratively, it's a common instinct to use the += operator to concatenate strings. However, what many Python developers may not realize is that this operation has a time complexity of approximately O(n^2).
Python
python-list.python.narkive.com › oOFdL6yB › time-complexity-of-string-operations
Time Complexity of String Operations
Actually, it is roughly linear, at least for reasonable string lengths: $ python -V Python 2.5.2 $ python -mtimeit -s "n=1000; a='#'*n" "a+a" 1000000 loops, best of 3: 1 usec per loop $ python -mtimeit -s "n=10000; a='#'*n" "a+a" 100000 loops, best of 3: 5.88 usec per loop $ python -mtimeit ...
Quora
quora.com › What-is-the-time-complexity-of-the-find-function-in-Python-for-strings
What is the time complexity of the find() function in Python (for strings)? - Quora
Answer (1 of 5): I’m assuming you mean CPython, the most commonly used implementation of Python. In which case, the precise answers can be found in this file : https://github.com/python/cpython/blob/main/Objects/stringlib/fastsearch.h In particular, the comments immediately indicates that the B...
AlgoCademy
algocademy.com › link
Time Complexity Practice 2 in Python | AlgoCademy
We can optimize our approach by understanding the time complexity of each operation: Slicing: Slicing a string s[start:end] is an O(k) operation, where k is the length of the slice.
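One quick way to see that a slice really copies its k characters in CPython (an illustrative sketch, not from the linked page) is to compare the memory footprint of the resulting objects:

```python
import sys

s = 'x' * 10_000
small = s[:10]     # copies 10 characters into a new str object
large = s[:5_000]  # copies 5,000 characters

# Each slice is a fresh object whose size grows with k, which is
# consistent with s[start:end] costing O(k).
print(sys.getsizeof(small), sys.getsizeof(large))
```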
Reddit
reddit.com › r/learnpython › algorithm complexity with strings and slices
r/learnpython on Reddit: Algorithm complexity with strings and slices
January 6, 2018 -

Recently I was thinking about interview questions I got as an undergrad:
Things like "reverse a string" and "check if a string is a palindrome".

I did most of these in C++ with a loop and scrolling through the index using logic.

When I learned Python, I realized that I could "reverse a string" by simply going:

return mystring[::-1]

Likewise with "check if it is a palindrome" by doing:

return mystring == mystring[::-1]

The problem now is that, I don't know what kinda complexity it is.

From my point of view, it is constant, so O(1). But I am guessing that that is too good to be true, as the string slicing is doing something behind the scenes.

Can anyone help me clarify?


How difficult an algorithm is to write and how costly it is to run are two separate things. Creating a reversed string with the shorthand still requires O(n) space and O(n) time. Keep in mind that, in most cases, creating a reversed array isn't necessary; you can just start at the end and work backwards, which is essentially what Python's reversed() function does.
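To make the contrast concrete (a sketch, not part of the original answer): s[::-1] materialises a full copy, while a two-pointer scan checks a palindrome in O(1) extra space, much as reversed() iterates without building a new string:

```python
def is_palindrome_slice(s: str) -> bool:
    # O(n) time, O(n) extra space: builds a full reversed copy.
    return s == s[::-1]

def is_palindrome_two_pointers(s: str) -> bool:
    # O(n) time, O(1) extra space: walk inward from both ends.
    i, j = 0, len(s) - 1
    while i < j:
        if s[i] != s[j]:
            return False
        i += 1
        j -= 1
    return True
```

Both are O(n) time; the slice version is simply shorter at the cost of the extra copy.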

Medium
andrewwhit.medium.com › time-complexity-of-string-slicing-in-python-db25177d0c48
Time Complexity of String Slicing in Python | by Andrew | Feb, 2026 | Medium
1 month ago - String slicing in Python is a powerful and concise operation, but its time complexity is O(k), where k is the length of the resulting substring.
UCI
ics.uci.edu › ~pattis › ICS-33 › lectures › complexitypython.txt
Complexity of Python Operations
This change will speed up the code, but it won't change the complexity analysis because O(N + N Log N) = O (N Log N). Speeding up code is always good, but finding an algorithm in a better complexity class (as we did going from is_unique1 to is_unique2) is much better Finally, is_unique2 works ...
GeeksforGeeks
geeksforgeeks.org › python › complexity-cheat-sheet-for-python-operations
Complexity Cheat Sheet for Python Operations - GeeksforGeeks
July 12, 2025 - Note: Defaultdict has operations same as dict with same time complexity as it inherits from dict. Python’s set is another hash-based collection, optimized for membership checks and set operations: Tuples are immutable sequences, making them lighter but with limited operations compared to lists: Strings are immutable and behave similarly to tuples in terms of time complexities:
Leyaa
leyaa.ai › codefly › learn › python › part-2 › python-string-slicing-behavior › complexity
String slicing behavior in Python Time Complexity - Big O Analysis | Leyaa.ai
This means the time depends on the length of the slice, not the whole string: slicing copies each character in the slice, so bigger slices take more time.
Post.Byes
post.bytes.com › home › forum › topic › python
Time Complexity of String Operations - Post.Byes - Bytes
Top answer
1 of 3

As can be seen from the source code, the implementation of int.__str__ has runtime complexity of O(m*n) where m is the number of binary digits and n is the number of decimal digits. Since for an integer i the number of digits in an arbitrary base b is given by log(i, base=b) and logarithms in different bases differ only by a constant, the runtime complexity is O(log(i)**2), i.e. quadratic in the number of digits.

This can be verified by running a performance test:

import perfplot

perfplot.show(
    setup=lambda n: 10**n,
    kernels=[str],
    n_range=range(1, 1001, 10),
    xlabel='number of digits',
)

The quadratic time complexity in the number of digits is also mentioned in the issue for CVE-2020-10735:

[...] A huge integer will always consume a near-quadratic amount of CPU time in conversion to or from a base 10 (decimal) string with a large number of digits. No efficient algorithm exists to do otherwise.
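That issue also led to a runtime guard. As a sketch (assuming Python 3.11 or later, where sys.set_int_max_str_digits() exists), the cap can be observed directly:

```python
import sys

# Since Python 3.11, CPython mitigates this quadratic cost by capping
# int <-> str conversion length (the fix for CVE-2020-10735).
# The default cap is 4300 decimal digits; it is adjustable at runtime.
if hasattr(sys, 'set_int_max_str_digits'):
    sys.set_int_max_str_digits(2000)
    try:
        str(10 ** 5000)  # 5001 digits: exceeds the cap
    except ValueError as exc:
        print('conversion refused:', exc)
```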

2 of 3

O(n) in the context of a data structure just means if there are n items then an operation on that structure will require (in the order of) n iterations or passes to achieve the desired result. If you're constructing a string from an integer, then I guess the complexity would be O(log10(n))

EDIT: from the Python docs:

If neither encoding nor errors is given, str(object) returns object.__str__(), which is the “informal” or nicely printable string representation of object.

Python detects that the object is an int, therefore it will create a string from that int. One way of implementing this conversion is:

def int_to_str_naive(n):
    if n == 0:
        return "0"
    negative = n < 0
    if negative:
        n = -n
    out = ""
    while n > 0:
        digit = n % 10
        n //= 10  # integer division; plain / would produce a float in Python 3
        out = chr(48 + digit) + out
    if negative:
        out = "-" + out
    return out
The number of iterations inside that while loop depends on the number of digits in decimal that n contains, which is log10(n).
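Note that the sketch above also pays a hidden quadratic cost in the string work, since prepending with out = chr(...) + out copies the whole partial string on each iteration; collecting the digits in a list keeps the string handling linear (my variant, not the original poster's):

```python
def int_to_str(n: int) -> str:
    # Collect digit characters in a list, reverse once, and join, so
    # the string work is O(d) for d decimal digits. (The repeated
    # big-int divisions still have their own cost for huge n.)
    if n == 0:
        return '0'
    negative = n < 0
    if negative:
        n = -n
    digits = []
    while n > 0:
        digits.append(chr(48 + n % 10))
        n //= 10
    if negative:
        digits.append('-')
    return ''.join(reversed(digits))
```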

Reddit
reddit.com › r/python › time complexities of various operations in python
r/Python on Reddit: Time Complexities of Various Operations in Python
September 26, 2015 - The performance of those string addition operations will be quadratic until that optimisation hits pypy nightly. ... Big O has nothing to do with performance. The time complexity for a string concat does not change through this optimization.