First, consider a Turing machine as a model (you can use other models too, as long as they are Turing-equivalent) of the algorithm at hand. When you provide an input of size $n$, you can think of the computation as the sequence of the machine's configurations after each step, i.e., $C_0, C_1, C_2, \ldots$. Hopefully, the computation is finite, so there is some $t$ such that $C_t$ is a halting configuration. Then $t$ is the running time of the given algorithm for an input of size $n$.
An algorithm is polynomial (has polynomial running time) if for some constants $c, k > 0$, its running time on inputs of size $n$ is at most $c \cdot n^k$. Equivalently, an algorithm is polynomial if for some constant $k$, its running time on inputs of size $n$ is $O(n^k)$. This includes linear, quadratic, cubic, and more. On the other hand, algorithms with exponential running times, such as $2^{cn}$ for some constant $c > 0$, are not polynomial.
There are things in between: for example, the best known algorithm for factoring runs in time $2^{O(n^{1/3} \log^{2/3} n)}$ for some hidden constant; such a running time is known as sub-exponential. Other algorithms could run in time $c^{\log^d n}$ for some constants $c$ and $d > 1$, and these are known as quasi-polynomial. Such an algorithm has very recently been claimed for discrete log over fields of small characteristic.
Running an algorithm takes up computing time, which mainly depends on how complex the algorithm is. Computer scientists have come up with a way to classify algorithms based on how many operations they need to perform (more ops take up more time).
One such class is the algorithms with polynomial time complexity, i.e., operational complexity proportional to $n^c$, where $n$ is the size of the input and $c$ is some constant. Obviously the name comes from the fact that $n^c$ is a polynomial.
There are other 'types' of algorithms: some take constant time irrespective of the size of the input, and some take exponential time, e.g. $2^n$ operations (yes, really slllooooww most of the time).
I just oversimplified it for the layman and may have introduced errors, so read more: https://stackoverflow.com/questions/4317414/polynomial-time-and-exponential-time
I was reading this (http://www.princeton.edu/~achaney/tmve/wiki100k/docs/Polynomial_time.html) and I also want to know what "big O notation" is. Thanks!
The focus on polynomial time comes from cryptography's historical origin as a branch of computational complexity.
It makes sense for a theoretical field to develop technology-independent ways of measuring efficiency. Actual clock time or number of clock cycles are technology-dependent. Talking about running time in an asymptotic sense is convenient because it makes the specific technology irrelevant. No matter how fast machine A is compared to machine B, if machine A runs a $\Theta(n^2)$-time algorithm and machine B runs a $\Theta(n \log n)$-time algorithm, then eventually (for large enough $n$) machine B will be faster in terms of actual time.
Algorithms that run in time $n$, $n \log n$, $n^2$, and so on are objectively fast/scalable/"efficient".
Suppose an "efficient" algorithm makes calls to a subroutine (we don't count the cost of the subroutine against this algorithm), and the subroutine can also be realized by an "efficient" algorithm. Then, intuitively, the overall system (accounting for both the calling program and the subroutine) should be "efficient" too. This is a basic composition property, and without it you have a very messy theory. Polynomial time is the minimal way to define "efficient" that contains linear running time $O(n)$ and enjoys this composition property.
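The closure argument can be made precise with a one-line bound. Suppose the calling algorithm runs in time $p(n)$ and the subroutine runs in time $q(m)$ on inputs of length $m$, with $p$ and $q$ both polynomials:

```latex
\text{total time} \;\le\; \underbrace{p(n)}_{\text{caller's steps (also bounds the number of calls)}} \cdot \underbrace{q(p(n))}_{\text{worst-case cost per call}}
```

The caller can make at most $p(n)$ calls, and it can only write down subroutine inputs of length at most $p(n)$, so each call costs at most $q(p(n))$. A polynomial multiplied by a polynomial composed with a polynomial is again a polynomial, so the combined system stays "efficient". A smaller class like "time $O(n^2)$" fails this test: $n^2$ calls to an $n^2$-time subroutine can already cost on the order of $n^6$.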
It is for these reasons that "polynomial time" is synonymous with "efficient" in computational complexity. Its minimal nature makes it a natural and well-motivated definition.
However, just because it's well-motivated and natural doesn't mean it's sacred. Mihir Bellare points out asymptotic complexity as one of the arbitrary aspects of cryptographic research, to be critically examined.
As others have said here, there is plenty of work that follows a concrete (rather than asymptotic) style. There is nothing wrong with working in this style. The main differences are: (1) things are a little messier; you can't hide all the mess inside an unspecified polynomial; (2) you forfeit the (very convenient) composability property.
why do cryptographers not use other kinds of time for so many thresholds?
Actually, cryptographers use non-asymptotic run-time specifications whenever there's a good reason to. However, when there isn't, asymptotic notions tend to be easier to come up with, and they let you assure yourself more quickly that something is (somewhat easily) feasible to compute. You see this a lot here on Crypto.SE, probably because asymptotic notions usually suffice to express that something is (in)feasible, which is usually more the point of the question than a precise runtime analysis.
Examples for when cryptographers actually prefer non-asymptotic results over asymptotic ones:
- Security proofs. Sure, you will find asymptotic proofs every now and then, but most of them are concrete (i.e., they give concrete upper bounds on probabilities and advantages), and it is actually advised to use concrete bounds over asymptotic ones, because the hidden constants can be so big that they matter.
- Runtime of implementations. You will very rarely find serious cryptographers arguing about the efficiency of implementations using asymptotic notions, e.g. RSA taking $O(n^3)$ bit operations. More often than not, concrete measurements are preferred. A similar argument holds for cryptographic implementation tricks, which are usually stated in the number of underlying primitive operations; you can find examples when looking into efficient ECC additions and doublings.
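The concrete-measurement style is easy to reproduce. The sketch below times a single RSA-style modular exponentiation; the modulus and exponent are random stand-ins (not a real RSA key), since the point is measuring wall-clock cost rather than quoting an asymptotic bound:

```python
import random
import time

random.seed(1)
n = random.getrandbits(2048) | 1   # 2048-bit odd stand-in modulus (NOT a real key)
d = random.getrandbits(2048)       # stand-in private exponent
m = random.getrandbits(2000)       # stand-in message

start = time.perf_counter()
c = pow(m, d, n)                   # one modular exponentiation
elapsed = time.perf_counter() - start
print(f"one 2048-bit modexp took {elapsed * 1000:.2f} ms")
```

On real hardware you would repeat the measurement many times and report a distribution, but even this single sample says more about an implementation than "$O(n^3)$" does.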
- Attack costs. Similar to security proofs, most attacks will state that they need $2^a$ oracle / primitive evaluations to achieve success probability $2^{-b}$ using $2^c$ storage. As with run-time, asymptotic bounds would be completely useless here, because all (symmetric) crypto schemes can be broken in $O(1)$ time (with a hidden constant of $2^{128}$ or more).
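To see why a hidden constant like $2^{128}$ makes that $O(1)$ bound useless in practice, a back-of-the-envelope conversion helps (the $10^{12}$ operations per second rate is an arbitrary assumption for this example):

```python
ops = 2 ** 128                         # brute-force work for a 128-bit key
rate = 10 ** 12                        # assumed ops/second (a generous machine)
seconds_per_year = 60 * 60 * 24 * 365
years = ops / (rate * seconds_per_year)
print(f"{years:.2e} years")            # on the order of 10^19 years
```

So the scheme is "broken in constant time", but the constant is around ten billion billion years on one fast machine.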
Below are some common Big-O functions you'll encounter while analyzing algorithms.
- O(1) - Constant time
- O(log(n)) - Logarithmic time
- O((log(n))^c) - Polylogarithmic time
- O(n) - Linear time
- O(n log(n)) - Linearithmic time
- O(n^2) - Quadratic time
- O(n^c) - Polynomial time
- O(c^n) - Exponential time
- O(n!) - Factorial time
(n = size of input, c = some constant)
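The separation between these classes is easy to see numerically. The snippet below evaluates one representative from each class at a few input sizes (c is fixed to 2 here purely for illustration):

```python
import math

classes = [
    ("O(1)",       lambda n: 1),
    ("O(log n)",   lambda n: math.log2(n)),
    ("O(n)",       lambda n: n),
    ("O(n log n)", lambda n: n * math.log2(n)),
    ("O(n^2)",     lambda n: n ** 2),
    ("O(2^n)",     lambda n: 2 ** n),
]

for name, f in classes:
    row = ", ".join(f"{f(n):,.0f}" for n in (10, 20, 30))
    print(f"{name:>11}: {row}")
```

Already at n = 30 the exponential row (over a billion steps) dwarfs everything above it, which is the whole point of the classification.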
Here is a graph comparing the Big-O complexity of these functions [graph omitted]

graph credits http://bigocheatsheet.com/
Exponential is worse than polynomial.
O(n^2) falls into the quadratic category, which is a type of polynomial (the special case of the exponent being equal to 2), and is far better than exponential.
Exponential is much worse than polynomial. Look at how the functions grow
n   = 10   | 100   | 1000
n^2 = 100  | 10000 | 1000000
k^n = k^10 | k^100 | k^1000
k^1000 is exceptionally huge unless k is smaller than something like 1.1. To put it in perspective: even if every particle in the universe did 100 billion billion billion operations per second for trillions of billions of billions of years, it still wouldn't be done.
I didn't calculate it out exactly, but IT'S THAT BIG.
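Python's arbitrary-precision integers make it easy to check just how large these numbers get; taking k = 2 as the smallest interesting base:

```python
# k^n for k = 2, at the table's input sizes
for n in (10, 100, 1000):
    print(f"2^{n} has {len(str(2 ** n))} digits")
# 2^1000 has 302 digits; compare n^2 = 1000^2, which has only 7 digits
```

Even for the smallest non-trivial base, the exponential column has hundreds of digits where the quadratic column has seven.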
I understand that some computations take time and others don't seem to be solvable in any reasonable time frame. I think this time frame is called "polynomial time", which I'm assuming is a unit of measurement from a polynomial expression, like in algebra.
Now, what I don't understand is what exactly a "polynomial time" frame is, and why we must express it in polynomial form. What does that really mean, and why not express it as something other than a polynomial?