Estimate of the time taken to run an algorithm
In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the … Wikipedia
Wikipedia
Time complexity - Wikipedia
Sometimes, exponential time is used to refer to algorithms that have T(n) = 2^O(n), where the exponent is at most a linear function of n. This gives rise to the complexity class E. ... An algorithm is said to be factorial time if T(n) is upper bounded by the factorial function n!. Factorial ...
Discussions

Complexity of recursive factorial program - Stack Overflow
Therefore, the time complexity of recursive factorial is O(n). Each recursive call adds a stack frame, so the space complexity is also O(n). ... Small note on this example: if you're going to try out the code, make sure you use == instead of =, since the latter would be assigning rather than comparing ... More on stackoverflow.com
Algorithms and Data Structures – Recursive Factorial Complexity
Yeah, people will say "big-Oh" when they mean "big-Theta", all the time (making their statement not wrong, but merely not as strong as they meant to say). Heck, even I misuse this a lot, even though it's one of my own pet peeves !-) So every time you hear big-Oh, ask yourself "do I think they really meant the even-stronger statement big-Theta?" And you personally: try to use each term exactly when you mean it.

One note/technicality: when talking about a problem, rather than one particular algorithm for that problem, big-Oh will almost always be what you want. "3SAT is O(2^n)" is good; but even if you've just calculated that some particular 3SAT algorithm is Θ(2^n), we still don't know whether the underlying problem has some better algorithm.

Last note: as others have mentioned, {big,little}-{O,Θ,Ω} are all sensible for worst-case, average-case, and best-case. (People often think big-Oh is only for the worst case, but it is just a statement about function growth, and that function can be "best-case run time" and everything is still sensible, if perhaps uninteresting.) And if somebody says "the run time of an algorithm" without specifying worst/avg/best, you can presume they mean worst-case. More on reddit.com
r/computerscience
October 11, 2025
graphs - Understanding Time Complexity Calculation for Factorial and Exponential Algorithms - Computer Science Stack Exchange
I'm trying to wrap my head around how to calculate the time complexity of algorithms that exhibit factorial (n!) or exponential (2^n) growth rates. Specifically, I want to understand the thought … More on cs.stackexchange.com
April 25, 2024
What is the time complexity of n factorial with respect to recursive and non-recursive algorithm?
Now the non-recursive case is certainly exponential time. ... Thanks for your response. I respect your answer. Factorial depends on multiplication, and there are other methods of multiplication; the Schönhage–Strassen method, for example, gives complexity O(n log n log log n). More on researchgate.net
February 3, 2016
Universidad de Cantabria
Complexity and Time Complexities
Example: an algorithm with factorial time complexity is Heap's algorithm, which is used for generating all possible permutations of n objects.
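Heap's algorithm can be sketched in a few lines of Python (editor's illustration, not taken from the cited page; the names are my own):

```python
from math import factorial

def heaps_permutations(a, k=None):
    """Yield every permutation of list `a` using Heap's algorithm."""
    if k is None:
        k = len(a)
    if k == 1:
        yield tuple(a)
        return
    for i in range(k - 1):
        yield from heaps_permutations(a, k - 1)
        # Heap's swap rule: even k swaps a[i] with a[k-1]; odd k swaps a[0] with a[k-1].
        if k % 2 == 0:
            a[i], a[k - 1] = a[k - 1], a[i]
        else:
            a[0], a[k - 1] = a[k - 1], a[0]
    yield from heaps_permutations(a, k - 1)

perms = list(heaps_permutations([1, 2, 3]))
print(len(perms))  # 3! = 6
```

Each recursive level does n! array emissions in total, which is exactly why the running time is Θ(n!).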
Top answer
1 of 4

If you take multiplication as O(1), then yes, O(N) is correct. However, note that multiplying two numbers of arbitrary length x is not O(1) on finite hardware -- as x tends to infinity, the time needed for multiplication grows (e.g. if you use Karatsuba multiplication, it's O(x ** 1.585)).

You can theoretically do better for sufficiently huge numbers with Schönhage-Strassen, but I confess I have no real-world experience with that one. x, the "length" or "number of digits" (in whatever base; it doesn't matter for big-O anyway) of N, grows as O(log N), of course.

If you mean to limit your question to factorials of numbers short enough to be multiplied in O(1), then there's no way N can "tend to infinity" and therefore big-O notation is inappropriate.
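The O(x ** 1.585) figure mentioned above is Karatsuba's three-multiplication divide-and-conquer. A compact Python sketch (editor's illustration; the 1024 cutoff and the base-2 split are arbitrary choices of mine):

```python
def karatsuba(x, y):
    """Multiply nonnegative ints via Karatsuba's divide-and-conquer."""
    if x < 1024 or y < 1024:                  # small operands: builtin multiply
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> m, x & ((1 << m) - 1)   # split x = hi_x * 2^m + lo_x
    hi_y, lo_y = y >> m, y & ((1 << m) - 1)
    z2 = karatsuba(hi_x, hi_y)
    z0 = karatsuba(lo_x, lo_y)
    # One extra multiplication replaces two: (hi+lo)(hi'+lo') - z2 - z0
    z1 = karatsuba(hi_x + lo_x, hi_y + lo_y) - z2 - z0
    return (z2 << (2 * m)) + (z1 << m) + z0

a, b = 3 ** 400, 7 ** 350
print(karatsuba(a, b) == a * b)  # True
```

Three recursive multiplications on half-size operands give the recurrence T(x) = 3 T(x/2) + O(x), hence O(x^log2(3)) ≈ O(x^1.585).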

2 of 4

When you express the complexity of an algorithm, it is always as a function of the input size. It is only valid to assume that multiplication is an O(1) operation if the numbers that you are multiplying are of fixed size. For example, if you wanted to determine the complexity of an algorithm that computes matrix products, you might assume that the individual components of the matrices were of fixed size. Then it would be valid to assume that multiplication of two individual matrix components was O(1), and you would compute the complexity according to the number of entries in each matrix.

However, when you want to figure out the complexity of an algorithm to compute N! you have to assume that N can be arbitrarily large, so it is not valid to assume that multiplication is an O(1) operation.

If you want to multiply an n-bit number with an m-bit number the naive algorithm (the kind you do by hand) takes time O(mn), but there are faster algorithms.
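The naive O(mn) method is the grade-school digit-by-digit algorithm. A Python sketch over base-10 digit lists (editor's illustration; the function name is mine):

```python
def schoolbook_multiply(x, y):
    """Naive digit-by-digit multiplication: O(m*n) for m- and n-digit operands."""
    xs = [int(d) for d in reversed(str(x))]   # least-significant digit first
    ys = [int(d) for d in reversed(str(y))]
    result = [0] * (len(xs) + len(ys))
    for i, dx in enumerate(xs):               # every digit of x ...
        carry = 0
        for j, dy in enumerate(ys):           # ... meets every digit of y
            total = result[i + j] + dx * dy + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(ys)] += carry
    return int("".join(map(str, reversed(result))).lstrip("0") or "0")

print(schoolbook_multiply(12345, 6789))  # 83810205
```

The nested loop runs exactly m*n times, which is where the O(mn) bound comes from.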

If you want to analyze the complexity of the easy algorithm for computing N!

factorial(N)
  f = 1
  for i = 2 to N
    f = f * i
  return f

then at the k-th step in the for loop, you are multiplying (k-1)! by k. The number of bits used to represent (k-1)! is O(k log k) and the number of bits used to represent k is O(log k). So the time required to multiply (k-1)! and k is O(k (log k)^2) (assuming you use the naive multiplication algorithm). Then the total amount of time taken by the algorithm is the sum of the time taken at each step:

sum_{k=1}^{N} [k (log k)^2] <= (log N)^2 * sum_{k=1}^{N} [k] = O(N^2 (log N)^2)
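The O(k log k) bit-count claim behind that sum is easy to check numerically (editor's sketch; since log2(k!) = sum of log2 i <= k log2 k, the bound is exact, not just asymptotic):

```python
import math

# Compare the bit length of (k-1)! with the k*log2(k) estimate.
for k in (10, 100, 1000):
    bits = math.factorial(k - 1).bit_length()
    estimate = k * math.log2(k)
    print(k, bits, round(estimate))  # bits stays below the estimate
```

For k = 1000 the intermediate product already has thousands of bits, which is why treating each multiplication as O(1) misrepresents the cost.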

You could improve this performance by using a faster multiplication algorithm, like Schönhage-Strassen, which takes time O(n log n log log n) for two n-bit numbers.

The other way to improve performance is to use a better algorithm to compute N!. The fastest one that I know of first computes the prime factorization of N! and then multiplies all the prime factors.
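That prime-factorization approach can be sketched in Python using Legendre's formula for the exponent of each prime in N! (editor's illustration; names are mine):

```python
import math

def factorial_by_primes(n):
    """Compute n! from its prime factorization (Legendre's formula)."""
    sieve = [True] * (n + 1)            # simple Eratosthenes sieve up to n
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    result = 1
    for p in (i for i in range(2, n + 1) if sieve[i]):
        # Exponent of p in n!: floor(n/p) + floor(n/p^2) + ...
        e, q = 0, p
        while q <= n:
            e += n // q
            q *= p
        result *= p ** e
    return result

print(factorial_by_primes(20) == math.factorial(20))  # True
```

The payoff is that the large multiplications can be grouped and balanced, which is the idea exploited by the fast factorial algorithms in the literature.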

Quora
What is an example of a program with time complexity O(n!)? - Quora
For example one famous problem for which the naïve approach is in factorial time would be the “traveling salesman problem” (TSP) (optimal circuit among a set of nodes in some spatial arrangement).
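A brute-force TSP solver makes the factorial growth concrete: fixing the start city leaves (n-1)! tours to try (editor's sketch; the distance matrix is made up):

```python
from itertools import permutations

def tsp_bruteforce(dist):
    """Try all (n-1)! tours starting at city 0: factorial time by construction."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):      # fix city 0 as the start
        tour = (0,) + perm
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n - 1))
        cost += dist[tour[-1]][0]               # return to the start
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# Hypothetical symmetric 4-city distance matrix.
dist = [
    [0, 1, 4, 6],
    [1, 0, 2, 5],
    [4, 2, 0, 3],
    [6, 5, 3, 0],
]
print(tsp_bruteforce(dist))  # (12, (0, 1, 2, 3))
```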
Launch School
Analyzing Time and Space Complexity in Recursive Algorithms
Thus, the time complexity of the factorial function is O(n), indicating a linear growth rate with the input size.
Medium
Calculating the factorial of number recursively (Time and Space analysis) | by Ryan Syed | Medium
February 21, 2021 - T(n) = T(n-1) + 3 = T(n-2) + 6 = T(n-3) + 9 = T(n-4) + 12 = ... = T(n-k) + 3k. Since T(0) = 1, we need n - k = 0, i.e. k = n, so T(n) = T(0) + 3n = 1 + 3n, which gives a time complexity of O(n).
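The unrolled recurrence can be sanity-checked in a few lines of Python (editor's sketch):

```python
def T(n):
    """Cost recurrence from the snippet: T(0) = 1, T(n) = T(n-1) + 3."""
    return 1 if n == 0 else T(n - 1) + 3

for n in (0, 1, 5, 100):
    assert T(n) == 1 + 3 * n   # matches the closed form, hence O(n)
print(T(100))  # 301
```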
Launch School
Exploring Various Time Complexities in Algorithms
Factorial time complexity, represented ... The notation N! (read as "N factorial") refers to the product of all positive integers up to N. For example, if N is 5, then 5!...
JavaScript in Plain English
Factorial Time Concept in Big O Notation in JavaScript | by Ankur Patel | JavaScript in Plain English
February 6, 2023 - One example often given here is the recursive implementation of the Fibonacci sequence. Note, however, that the naive recursive Fibonacci runs in exponential time (roughly O(1.618^n)), not factorial time, so it is not actually an O(n!) example ...
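For the record, naive recursive Fibonacci is exponential, not factorial, in time; counting calls makes the gap vivid (editor's sketch):

```python
import math

def fib_calls(n, counter):
    """Naive recursive Fibonacci; counter[0] tallies the number of calls."""
    counter[0] += 1
    if n < 2:
        return n
    return fib_calls(n - 1, counter) + fib_calls(n - 2, counter)

for n in (10, 15, 20):
    c = [0]
    fib_calls(n, c)
    # The call count grows exponentially but stays far below n!.
    print(n, c[0], math.factorial(n))  # e.g. n=10 -> 177 calls vs 10! = 3628800
```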
Quora
I have an algorithm with factorial time complexity. What are the ways to find algorithm which will approximate the result and work faster? - Quora
Answer (1 of 2): There is no technical way to answer this question without knowing what the problem you are studying exactly is and what the algorithm even looks like. Your algorithm for solving a problem might be considerably slower than possibly the best-known algorithm, or it might indeed be t...
Reddit
r/computerscience on Reddit: Algorithms and Data Structures – Recursive Factorial Complexity
October 11, 2025 -

Hi everyone! I'm studying algorithm complexity and I came across this recursive implementation of the factorial function:

int factorial_recursive(int n) {
    if (n == 1)   /* assumes n >= 1; for n <= 0 the recursion never terminates */
        return 1;
    else
        return n * factorial_recursive(n - 1);
}

Each recursive call does:

  • 1 operation for the if (n == 1) check

  • 1 operation for the multiplication n * factorial_recursive(n - 1)

So the recurrence relation is:

T(n) = T(n - 1) + 2
T(1) = 2

Using the substitution method (induction), I proved that:

T(n) = 2n

Now, here's my question:

Is T(n) = O(n) or T(n) = Θ(n)? And why?

I understand that O(n) is an upper bound, and Θ(n) is a tight bound, but in my lecture slides they wrote T(n) = O(n). Shouldn't it be Θ(n) since we proved the exact expression?

Thanks in advance for your help!
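One way to settle the T(n) = 2n claim empirically is to instrument the code. A Python port that returns the operation count alongside the value (editor's sketch; the accounting follows the post's two-operations-per-level model):

```python
def factorial_counted(n):
    """Port of the C snippet that also returns the post's operation count:
    each level charges 1 for the `n == 1` check plus 1 more (the base-case
    return, or the multiplication), so T(1) = 2 and T(n) = T(n-1) + 2."""
    if n == 1:
        return 1, 2
    value, ops = factorial_counted(n - 1)
    return n * value, ops + 2

value, ops = factorial_counted(10)
print(value, ops)  # 3628800 20, i.e. T(n) = 2n exactly
```

Since T(n) = 2n exactly, both T(n) = O(n) and T(n) = Θ(n) are true; Θ(n) is simply the stronger statement.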

ScienceDirect
On the complexity of calculating factorials - ScienceDirect
September 3, 2004 - It is shown that n! can be evaluated with time complexity O(log log n M (n log n)), where M(n) is the complexity of multiplying two n-digit numbers together. This is effected, in part, by writing n! in terms of its prime factors.
Matt Bognar
Complexity of Algorithms Time complexity is abstracted to the number of steps
take f(x) steps. Then its complexity is represented by O(g(x)) when there exist positive constants c, x0 such ... Issue 1: Big-O gives a loose upper bound ... Issue 2: The constants are sometimes misleading; an O(n) algorithm can have a running time of c·n + d.
Top answer
1 of 2

All possible paths in a graph: If all nodes are connected, you have 100 nodes, and the start node is given, you can move to one of 99 nodes. If you don't want to visit the same node twice, you can visit 98 nodes as the second node from each first node, total 99 x 98. Each path to the second node lets you take 97 paths to the third node, that's 99 x 98 x 97. Quite obviously factorial in the number of nodes.

The most important question is: How does the work required change if the problem size changes? If you are lucky, then you can restrict the growth, but if increasing the problem size by 1 increases the time by a constant factor c > 1 or more, then you have exponential growth.

In practice, just implement the algorithm and run it with growing problem size. Construct a "travelling salesman" problem with varying number of cities from 1 to 10000, set it up with random distances, and solve it. How fast does the execution time grow with the number of cities? What's the largest problem you can solve in a day?
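The path-counting argument above can be checked directly for small n by enumeration (editor's sketch):

```python
from itertools import permutations
from math import factorial

def count_paths(n, start=0):
    """Enumerate every simple path visiting all n nodes of a complete graph."""
    others = [v for v in range(n) if v != start]
    return sum(1 for _ in permutations(others))   # (n-1)! orderings after start

for n in range(2, 8):
    assert count_paths(n) == factorial(n - 1)
print(count_paths(7))  # 6! = 720
```

With 100 nodes this is 99! paths, which is why the empirical run-it-and-see approach above hits a wall almost immediately.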

2 of 2

I'm particularly confused when trying to calculate the time complexity of algorithms that explore all possible paths in a graph.

Consider a graph $G$ of $n$ vertices. Assuming all pairs of vertices are connected via an edge (i.e. $G$ is complete), then a path $v_1v_2\dots v_n$ can be any permutation of the $n$ vertices. Hence, there exist $n!$ possible paths in $G$ since we have $n$ choices to pick the first vertex, $n-1$ choices for the second, and so on.

An algorithm often exhibits a factorial time complexity when it iterates through all permutations of a certain set of elements (such as vertices in a graph). An algorithm often runs in $\mathcal{O}(2^n)$ when it iterates through all possible subsets of a set of elements (such as a solver to SUBSET-SUM).
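For contrast, the 2^n subset case can be sketched as a brute-force SUBSET-SUM solver (editor's illustration; the example values are made up):

```python
from itertools import chain, combinations

def subset_sum_bruteforce(values, target):
    """Check all 2^n subsets: the textbook exponential-time SUBSET-SUM solver."""
    subsets = chain.from_iterable(
        combinations(values, r) for r in range(len(values) + 1)
    )
    return any(sum(s) == target for s in subsets)

print(subset_sum_bruteforce([3, 34, 4, 12, 5, 2], 9))   # True (4 + 5)
print(subset_sum_bruteforce([3, 34, 4, 12, 5, 2], 30))  # False
```

Permutations give n! and subsets give 2^n; recognizing which of the two a loop structure enumerates is usually the quickest way to classify such algorithms.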