Answer from Aziz on Stack Overflow: O(10^N): trying to break a password by testing every possible combination (assuming a numerical password of length N).
p.s. Why is your last example of complexity O(infinity)? It's linear search, O(N); there are fewer than 7 billion people in the world.
A pizza restaurant has several toppings to choose from
- Pepperoni
- Chilli peppers
- Pineapple (don't knock it until you've tried it!)
Customers may choose any combination of toppings or none at all for their pizza. Now consider an algorithm that finds every possible unique combination of toppings. This is an exponential algorithm with time complexity O(2^n).
Look how the possible combinations grow (exponentially) when you add a new topping to the menu:
0 toppings: 1 combination (no toppings at all)
1 topping: 2 combinations (none, a)
2 toppings: 4 combinations (none, a, b, ab)
3 toppings: 8 combinations (none, a, b, c, ab, ac, bc, abc)
...
...
10 toppings: 1,024 combinations
20 toppings: 1,048,576 combinations
So with just 20 types of toppings, there are over 1 million possible combinations!
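The growth above can be checked directly. A minimal sketch using Python's `itertools` (the topping names are just illustrative):

```python
from itertools import combinations

def all_topping_combos(toppings):
    """Enumerate every subset of toppings, including the empty pizza."""
    combos = []
    for r in range(len(toppings) + 1):
        combos.extend(combinations(toppings, r))
    return combos

toppings = ["pepperoni", "chilli peppers", "pineapple"]
print(len(all_topping_combos(toppings)))  # 2^3 = 8
```

Adding one more topping to the list doubles the count, which is exactly the O(2^n) behavior described above.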
Examples of problems where the best known algorithm takes super-exponential time? Are improvements to these complexity classes open questions?
Are there problems that are proven to have exponential worst case time complexity?
I wanted to ask if there are any problems in computer science that are PROVEN to have exponential worst-case time complexity for the best known solution algorithm. The reason I stress proven is that I know about NP-complete problems and that they are conjectured to have exponential complexity. And if there aren't any proven problems, what are the NP-complete problems that are a nightmare to approximate, even for small input sizes like n < 15?
Most of the time, polynomial-time algorithms, or even polylog ones, arise as clever improvements over some exponential-time brute-force algorithm.
There are some super-exponential algorithms, for example those in the 2-EXPTIME class (https://en.wikipedia.org/wiki/2-EXPTIME); however, the purposes and algorithms given in the examples are quite inaccessible to the casual reader. Are there any problems where it is more intuitive why solving them naively would need super-exponential time?
Are any of these problems still open, in the sense that the brute-force solution is super-exponential but it is not known whether an exponential- or polynomial-time algorithm exists?
As a simple intuition of what big-O (big-O) and big-Θ (big-Theta) are about: they describe how the number of operations you need to do changes when you significantly increase the size of the problem (for example by a factor of 2).
Linear time complexity means that when you increase the size by a factor of 2, the number of steps you need to perform also increases by about 2 times. This is what is called Θ(n), often written interchangeably (but not accurately) as O(n); the difference between O and Θ is that O provides only an upper bound, while Θ guarantees both upper and lower bounds.
Logarithmic time complexity (Θ(log(N))) means that when you increase the size by a factor of 2, the number of steps you need to perform increases only by some fixed number of operations. For example, using binary search you can find a given element in a twice-as-long list using just one more loop iteration.
Similarly, exponential time complexity (Θ(a^N) for some constant a > 1) means that if you increase the size of the problem by just 1, you need a times more operations. (Note that there is a subtle difference between Θ(2^N) and 2^Θ(N); the second is actually the more generic one. Both lie inside exponential time, but neither of the two covers it all; see the wiki for some more details.)
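The binary-search intuition above can be checked by counting loop iterations directly. A minimal sketch (a standard binary search instrumented with a step counter):

```python
def binary_search_steps(sorted_list, target):
    """Binary search that returns the number of loop iterations performed."""
    lo, hi, steps = 0, len(sorted_list) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            break
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

# Doubling the list size adds only ~1 iteration (logarithmic growth),
# while a linear scan would need ~2x as many steps.
for n in (1000, 2000, 4000):
    print(n, binary_search_steps(list(range(n)), n))  # absent target: worst case
```

Each doubling of n adds exactly one iteration to the worst case, which is the Θ(log(N)) behavior described above.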
Note that these definitions depend significantly on how you define "the size of the task".
As @DavidEisenstat correctly pointed out, there are two possible contexts in which your algorithm can be seen:
Some fixed-width numbers (for example 32-bit numbers). In such a context an obvious measure of the size of the prime-testing problem is the value being tested itself. In that case your algorithm is linear.
In practice there are many contexts where a prime-testing algorithm should work for really big numbers. For example, many crypto-algorithms used today (such as Diffie–Hellman key exchange or RSA) rely on very big prime numbers: 512 bits, 1024 bits and so on. Also, in those contexts the security is measured in the number of those bits rather than in a particular prime value. So in such contexts a natural way to measure the size of the task is the number of bits. Now the question arises: how many operations do we need to perform to check a value of known size in bits using your algorithm? Obviously, if the value N has m bits, then N ≈ 2^m. So your algorithm goes from linear Θ(N) to exponential 2^Θ(m). In other words, to solve the problem for a value just 1 bit longer, you need to do about 2 times more work.
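The question's original code isn't reproduced here, but as an illustration, a naive trial-division primality test has exactly this shape: Θ(N) divisions in the worst case, which is 2^Θ(m) for an m-bit input.

```python
def is_prime_naive(n):
    """Trial division over every candidate below n: Θ(n) work in the worst case."""
    if n < 2:
        return False
    for d in range(2, n):  # for an m-bit n this loop runs ~2^m times when n is prime
        if n % d == 0:
            return False
    return True

print(is_prime_naive(97))            # True
print((2 ** 16 + 1).bit_length())    # 17: one more bit roughly doubles the loop count
```

Measured in the value N, the loop is linear; measured in the bit length m = N.bit_length(), the same loop is exponential.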
Exponential versus linear is a question of how the input is represented and the machine model. If the input is represented in unary (e.g., 7 is sent as 1111111) and the machine can do constant time division on numbers, then yes, the algorithm is linear time. A binary representation of n, however, uses about lg n bits, and the quantity n has an exponential relationship to lg n (n = 2^(lg n)).
Given that the number of loop iterations is within a constant factor for both solutions, they are in the same class, Θ(n). This is exponential if the input has lg n bits, and linear if it has n.
Below are some common Big-O functions while analyzing algorithms.
- O(1) - Constant time
- O(log(n)) - Logarithmic time
- O((log(n))^c) - Polylogarithmic time
- O(n) - Linear time
- O(n log(n)) - Linearithmic time
- O(n^2) - Quadratic time
- O(n^c) - Polynomial time
- O(c^n) - Exponential time
- O(n!) - Factorial time
(n = size of input, c = some constant)
Here is a model graph representing the Big-O complexity of some common functions:

graph credits http://bigocheatsheet.com/
Check this out.
Exponential is worse than polynomial.
O(n^2) falls into the quadratic category, which is a type of polynomial (the special case of the exponent being equal to 2), and is better than exponential.
Exponential is much worse than polynomial. Look at how the functions grow
n = 10 | 100 | 1000
n^2 = 100 | 10000 | 1000000
k^n = k^10 | k^100 | k^1000
k^1000 is exceptionally huge unless k is smaller than something like 1.1. For perspective: every particle in the universe would have to do 100 billion billion billion operations per second for trillions of billions of billions of years to get that done.
I didn't calculate it out, but IT'S THAT BIG.
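Plugging the sizes from the table above into Python makes the gap concrete (using k = 2 as an example base; exact values aren't needed, the digit count alone tells the story):

```python
# Compare quadratic vs. exponential growth at the sizes from the table above.
for n in (10, 100, 1000):
    quadratic = n ** 2
    exponential = 2 ** n
    print(f"n={n}: n^2 = {quadratic}, 2^n has {len(str(exponential))} digits")
```

At n = 1000, n^2 is a 7-digit number, while 2^n has over 300 digits; no constant-factor speedup closes that gap.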