Generate all the permutations of a list

You have n! lists, so you cannot achieve better efficiency than O(n!).

Answer from zw324 on Stack Overflow
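That O(n!) lower bound is easy to see in code: a generator has to emit one output per permutation, so it cannot do less than n! work. A minimal recursive sketch (the helper name `all_perms` is mine, not from the answer):

```python
def all_perms(items):
    """Yield every ordering of items -- n! of them for n distinct elements."""
    if len(items) <= 1:
        yield list(items)
        return
    for i in range(len(items)):
        # fix items[i] as the head, then permute the remaining elements
        rest = items[:i] + items[i + 1:]
        for tail in all_perms(rest):
            yield [items[i]] + tail
```

For a 3-element list this yields 3! = 6 orderings.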

🌐
Wikipedia
en.wikipedia.org › wiki › Time_complexity
Time complexity - Wikipedia
3 days ago - Sometimes, exponential time is used to refer to algorithms that have T(n) = 2^O(n), where the exponent is at most a linear function of n. This gives rise to the complexity class E. ... An algorithm is said to be factorial time if T(n) is upper bounded by the factorial function n!. Factorial ...
🌐
Universidad de Cantabria
personales.unican.es › corcuerp › python › ejemplos › ch3_complexity.html
Complexity and Time Complexities
Example: an algorithm which has a factorial time complexity is the Heap’s algorithm, which is used for generating all possible permutations of n objects.
🌐
Quora
quora.com › What-is-an-example-of-a-program-with-time-complexity-O-n
What is an example of a program with time complexity O(n!)? - Quora
For example one famous problem for which the naïve approach is in factorial time would be the “traveling salesman problem” (TSP) (optimal circuit among a set of nodes in some spatial arrangement).
🌐
Launch School
launchschool.com › books › dsa › read › exploring_time_complexities
Exploring Various Time Complexities in Algorithms
Factorial time complexity, represented ... The notation N! (read as "N factorial") refers to the product of all positive integers up to N. For example, if N is 5, then 5!...
🌐
Medium
syedtousifahmed.medium.com › calculating-the-factorial-of-number-recursively-time-and-space-analysis-dd47ac5f2607
Calculating the factorial of number recursively (Time and Space analysis) | by Ryan Syed | Medium
February 21, 2021 - T(n) = T(n-1) + 3 = T(n-2) + 6 = T(n-3) + 9 = T(n-4) + 12 = ... = T(n-k) + 3k. Since T(0) = 1, we need the value of k for which n - k = 0, i.e. k = n, so T(n) = T(0) + 3n = 1 + 3n, which gives a time complexity of O(n).
🌐
Launch School
launchschool.com › books › advanced_dsa › read › time_and_space_complexity_recursive
Analyzing Time and Space Complexity in Recursive Algorithms
Thus, the time complexity of the factorial function is O(n), indicating a linear growth rate with the input size.
Top answer
1 of 4
51

If you take multiplication as O(1), then yes, O(N) is correct. However, note that multiplying two numbers of arbitrary length x is not O(1) on finite hardware -- as x tends to infinity, the time needed for multiplication grows (e.g. if you use Karatsuba multiplication, it's O(x ** 1.585)).

You can theoretically do better for sufficiently huge numbers with Schönhage-Strassen, but I confess I have no real-world experience with that one. x, the "length" or "number of digits" (in whatever base; it doesn't matter for big-O anyway) of N, grows as O(log N), of course.

If you mean to limit your question to factorials of numbers short enough to be multiplied in O(1), then there's no way N can "tend to infinity" and therefore big-O notation is inappropriate.

2 of 4
30

When you express the complexity of an algorithm, it is always as a function of the input size. It is only valid to assume that multiplication is an O(1) operation if the numbers that you are multiplying are of fixed size. For example, if you wanted to determine the complexity of an algorithm that computes matrix products, you might assume that the individual components of the matrices were of fixed size. Then it would be valid to assume that multiplication of two individual matrix components was O(1), and you would compute the complexity according to the number of entries in each matrix.

However, when you want to figure out the complexity of an algorithm to compute N! you have to assume that N can be arbitrarily large, so it is not valid to assume that multiplication is an O(1) operation.

If you want to multiply an n-bit number with an m-bit number the naive algorithm (the kind you do by hand) takes time O(mn), but there are faster algorithms.
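As a rough illustration of where that O(mn) comes from, here is a schoolbook multiplication sketch over little-endian base-10 digit lists (illustrative only; CPython's own big-integer multiplication works differently):

```python
def schoolbook_multiply(a_digits, b_digits):
    """Multiply two little-endian base-10 digit lists the way you do by hand."""
    result = [0] * (len(a_digits) + len(b_digits))
    for i, a in enumerate(a_digits):
        carry = 0
        for j, b in enumerate(b_digits):
            total = result[i + j] + a * b + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(b_digits)] += carry
    # trim leading zeros (at the tail of a little-endian list)
    while len(result) > 1 and result[-1] == 0:
        result.pop()
    return result
```

The nested loops touch every (digit of a, digit of b) pair exactly once, hence m*n digit multiplications.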

If you want to analyze the complexity of the easy algorithm for computing N!

def factorial(N):
    f = 1
    for i in range(2, N + 1):
        f = f * i
    return f

then at the k-th step in the for loop, you are multiplying (k-1)! by k. The number of bits used to represent (k-1)! is O(k log k) and the number of bits used to represent k is O(log k). So the time required to multiply (k-1)! and k is O(k (log k)^2) (assuming you use the naive multiplication algorithm). Then the total amount of time taken by the algorithm is the sum of the time taken at each step:

sum_{k=1}^{N} [k (log k)^2] <= (log N)^2 * sum_{k=1}^{N} [k] = O(N^2 (log N)^2)
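One way to sanity-check the O(k log k) bit-count claim is Stirling's approximation, log2(k!) ≈ k log2 k; a quick check (the function name is mine):

```python
import math

def factorial_bits(k):
    # number of bits needed to represent k!
    return math.factorial(k).bit_length()

for k in (10, 100, 1000):
    estimate = k * math.log2(k)                # leading Stirling term
    print(k, factorial_bits(k), round(estimate))
```

The estimate overshoots by the lower-order k*log2(e) term, but the k log k growth is plainly visible.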

You could improve this performance by using a faster multiplication algorithm, like Schönhage-Strassen which takes time O(n*log(n)*log(log(n))) for 2 n-bit numbers.

The other way to improve performance is to use a better algorithm to compute N!. The fastest one that I know of first computes the prime factorization of N! and then multiplies all the prime factors.
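That prime-factorization idea can be sketched with Legendre's formula (the exponent of a prime p in N! is the sum over k of floor(N / p^k)); this shows only the idea, not the balanced-product scheme real fast-factorial implementations use:

```python
def factorial_by_primes(n):
    """Compute n! from its prime factorization (sketch of the idea only)."""
    # sieve of Eratosthenes for primes up to n
    sieve = [True] * (n + 1)
    primes = []
    for p in range(2, n + 1):
        if sieve[p]:
            primes.append(p)
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    result = 1
    for p in primes:
        # Legendre's formula: exponent of p in n! is sum of floor(n / p^k)
        exp, q = 0, p
        while q <= n:
            exp += n // q
            q *= p
        result *= p ** exp
    return result
```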

🌐
JavaScript in Plain English
javascript.plainenglish.io › factorial-time-concept-in-big-o-notation-in-javascript-fab9b6acda00
Factorial Time Concept in Big O Notation in JavaScript | by Ankur Patel | JavaScript in Plain English
February 6, 2023 - One example of an algorithm with factorial time is the recursive implementation of the Fibonacci sequence. While there are more efficient ways to calculate the Fibonacci sequence, the following implementation has a factorial time complexity:
🌐
Jarednielsen
jarednielsen.com › big-o-factorial-time-complexity
Big O Factorial Time Complexity | jarednielsen.com
April 17, 2020 - This page has moved to https://superlative.guide/big-o-factorial-time-complexity
🌐
Quora
quora.com › I-have-an-algorithm-with-factorial-time-complexity-What-are-the-ways-to-find-algorithm-which-will-approximate-the-result-and-work-faster
I have an algorithm with factorial time complexity. What are the ways to find algorithm which will approximate the result and work faster? - Quora
Answer (1 of 2): There is no technical way to answer this question without knowing what the problem you are studying exactly is and what the algorithm even looks like. Your algorithm for solving a problem might be considerably slower than possibly the best-known algorithm, or it might indeed be t...
Top answer
1 of 2
2

All possible paths in a graph: If all nodes are connected, you have 100 nodes, and the start node is given, you can move to one of 99 nodes. If you don't want to visit the same node twice, you can visit 98 nodes as the second node from each first node, total 99 x 98. Each path to the second node lets you take 97 paths to the third node, that's 99 x 98 x 97. Quite obviously factorial in the number of nodes.

The most important question is: How does the work required change if the problem size changes? If you are lucky, then you can restrict the growth, but if increasing the problem size by 1 increases the time by a constant factor c > 1 or more, then you have exponential growth.

In practice, just implement the algorithm and run it with growing problem size. Construct a "travelling salesman" problem with varying number of cities from 1 to 10000, set it up with random distances, and solve it. How fast does the execution time grow with the number of cities? What's the largest problem you can solve in a day?
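A throwaway Python version of that experiment might look like this (brute force only; the random distances and city counts are just placeholders):

```python
import itertools
import random
import time

def brute_force_tsp(dist):
    """Length of the shortest tour visiting every city once, starting and ending at city 0."""
    n = len(dist)
    best = float("inf")
    for perm in itertools.permutations(range(1, n)):   # (n-1)! candidate tours
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, length)
    return best

for n in range(3, 10):                                 # growth is visible quickly
    dist = [[random.random() for _ in range(n)] for _ in range(n)]
    start = time.perf_counter()
    brute_force_tsp(dist)
    print(n, time.perf_counter() - start)
```

Each extra city multiplies the running time by roughly n, which is exactly the factorial signature the answer describes.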

2 of 2
1

I'm particularly confused when trying to calculate the time complexity of algorithms that explore all possible paths in a graph.

Consider a graph $G$ of $n$ vertices. Assuming all pairs of vertices are connected via an edge (i.e. $G$ is complete), then a path $v_1v_2\dots v_n$ can be any permutation of the $n$ vertices. Hence, there exist $n!$ possible paths in $G$ since we have $n$ choices to pick the first vertex, $n-1$ choices for the second, and so on.

An algorithm often exhibits a factorial time complexity when it iterates through all permutations of a certain set of elements (such as vertices in a graph). An algorithm often runs in $\mathcal{O}(2^n)$ when it iterates through all possible subsets of a set of elements (such as a solver to SUBSET-SUM).
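The n!-versus-2^n distinction is easy to see with the standard library (counts shown for a 4-element set):

```python
from itertools import combinations, permutations

items = [1, 2, 3, 4]
n = len(items)

orderings = list(permutations(items))   # one per permutation: n! = 24
subsets = [s for r in range(n + 1)      # one per subset: 2^n = 16
           for s in combinations(items, r)]

print(len(orderings), len(subsets))
```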

🌐
CodingTechRoom
codingtechroom.com › question › example-of-on-factorial-time-complexity
Understanding an Example of O(n!) Time Complexity in Code - CodingTechRoom
O(n!) time complexity is characteristic of algorithms that involve generating permutations of n elements. This is because the number of possible permutations for n distinct items is n factorial (n!). An example of such an algorithm is one that generates all permutations of a given array.
🌐
Matt Bognar
homepage.divms.uiowa.edu › ~ghosh › 2116.4.pdf pdf
Complexity of Algorithms Time complexity is abstracted to the number of steps
take f(x) steps. Then its complexity is represented by O(g(x)) when there exist positive constants c, x0 such ... Issue 1: Big-O gives a loose upper bound ... Issue 2: the constants are sometimes misleading; an O(n) algorithm can have a running time c·n + d.
🌐
ScienceDirect
sciencedirect.com › science › article › abs › pii › 0196677485900069
On the complexity of calculating factorials - ScienceDirect
September 3, 2004 - It is shown that n! can be evaluated with time complexity O(log log n M (n log n)), where M(n) is the complexity of multiplying two n-digit numbers together. This is effected, in part, by writing n! in terms of its prime factors.
🌐
GeeksforGeeks
geeksforgeeks.org › dsa › analysis-algorithms-big-o-analysis
Big O Notation - GeeksforGeeks
Here’s an example of a factorial time complexity algorithm, which generates all permutations of an array:
Published 3 days ago
🌐
StudyPlan.dev
studyplan.dev › practical-dsa › big-o-intractable
C++ Big O Notation: Exponential and Factorial Time Complexity
January 16, 2026 - If you thought exponential time ... where you need to find the "best" arrangement of a set of items. The most famous example is the Traveling Salesman Problem (TSP)....