@Blindy talks about possible approaches that Java could take in implementing pow.
First of all, the general case cannot be implemented as repeated multiplication: that won't work when the exponent is not an integer. (The signature for pow is Math.pow(double, double)!)
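To make the point concrete, here is a small illustration of my own (not from the JDK) showing that Math.pow accepts fractional and negative exponents, which no amount of repeated multiplication of x by itself could produce:

```java
public class PowSignatureDemo {
    public static void main(String[] args) {
        // Fractional exponent: x^0.5 is the square root.
        System.out.println(Math.pow(2.0, 0.5));   // ~1.4142135623730951, like Math.sqrt(2.0)

        // Negative exponent: 2^-3 = 1/8.
        System.out.println(Math.pow(2.0, -3.0));  // 0.125

        // An irrational exponent is fine too.
        System.out.println(Math.pow(2.0, Math.PI));
    }
}
```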
In the OpenJDK 8 codebase, the native code implementation for pow can work in two ways:
The first implementation, in e_pow.c, uses a power series. The approach is described in the C comments as follows:

    * Method:  Let x = 2^n * (1+f)
    *   1. Compute and return log2(x) in two pieces:
    *          log2(x) = w1 + w2,
    *      where w1 has 53-24 = 29 bit trailing zeros.
    *   2. Perform y*log2(x) = n+y' by simulating multi-precision
    *      arithmetic, where |y'|<=0.5.
    *   3. Return x**y = 2**n*exp(y'*log2)

The second implementation, in w_pow.c, is a wrapper for the pow function provided by the Standard C library. The wrapper deals with edge cases.
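The core idea of that method can be sketched in a much cruder, lower-precision form than e_pow.c (which carries log2(x) in two double pieces, w1 + w2, precisely to avoid losing bits) as computing x^y through its logarithm:

```java
public class PowByLogDemo {
    // Toy illustration of the identity x^y = 2^(y * log2(x)).
    // This single-piece version loses accuracy that the real e_pow.c
    // recovers by splitting the logarithm; it is for intuition only.
    static double crudePow(double x, double y) {
        double log2x = Math.log(x) / Math.log(2.0);  // log2(x), one piece only
        // 2^(y*log2(x)) = e^(y * log2(x) * ln 2)
        return Math.exp(y * log2x * Math.log(2.0));
    }

    public static void main(String[] args) {
        System.out.println(crudePow(3.0, 2.5));
        System.out.println(Math.pow(3.0, 2.5));  // reference value
    }
}
```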
Now it is possible that the Standard C library uses CPU-specific math instructions. If it did, and the JDK build (or runtime) selected¹ the second implementation, then Java would use those instructions too.
But either way, I can see no trace of any special case code that uses repeated multiplication. You can safely assume that it is O(1).
¹ - I haven't delved into how or when the selection is / can be made.
Answer from Stephen C on Stack Overflow.
You can consider Math.pow to be O(1).
There are a few possible implementations, ranging from a CPU assembler instruction (which Java doesn't use) to a stable software implementation based on, for example, a Taylor series expansion over a few terms (though not exactly the Taylor implementation; there are more specialised algorithms).
It most definitely won't repeatedly multiply if that's what you're worried about.
for (int i = 0; Math.pow(2,i) < N; i++)
    sum += i;

I'm struggling to understand the time complexity when Math.pow is in the loop condition. What is the worst-case time complexity of the for loop? O(log(N))?
Answering that question depends on what you consider the complexity of Math.pow(). For most hardware, it's O(1), but on a theoretical Turing machine, it isn't. Let's assume it's free.
Let's consider the loop. The complexity is going to be a function describing how many times the loop runs, given N. The loop starts with i=0, adds 1 to i each time, and continues while Math.pow(2,i) < N. Let's consider some cases.
N is 0, loop runs 0 times.
N is 1, loop runs 0 times, because 2^0 = 1 is not less than 1.
N is 2, loop runs 1 time; 2^1 >= 2.
N is 3, loop runs 2 times; 2^2 >= 3.
N is 4, loop runs 2 times; 2^2 >= 4.
N is 5, loop runs 3 times; 2^3 >= 5.
N is 6, loop runs 3 times; 2^3 >= 6.
N is 7, loop runs 3 times; 2^3 >= 7.
N is 8, loop runs 3 times; 2^3 >= 8.
N is 100, loop runs 7 times; 2^7 >= 100.
N is 1000, loop runs 10 times; 2^10 >= 1000.
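The counts above can be checked empirically with a small counter (my own snippet, not from the question); the body runs ⌈log2(N)⌉ times for N > 1:

```java
public class LoopCountDemo {
    // Counts how many times the body of the questioner's loop runs for a given N.
    static int iterations(int N) {
        int count = 0;
        for (int i = 0; Math.pow(2, i) < N; i++) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        int[] samples = {0, 1, 2, 3, 4, 8, 100, 1000};
        for (int n : samples) {
            System.out.println("N=" + n + " -> " + iterations(n) + " iterations");
        }
        // Prints 0, 0, 1, 2, 2, 3, 7, 10 iterations respectively.
    }
}
```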
Okay, so a function that describes this growth looks to be the opposite of taking an exponent. That's a logarithm. O(log(N)) is probably right. Let's verify, remembering that in CS we tend to take log base 2.
N is 2, loop runs 1 time; log(2) is 1.
N is 3, loop runs 2 times; log(3) is ~1.6.
N is 4, loop runs 2 times; log(4) is 2.
N is 5, loop runs 3 times; log(5) is ~2.3.
N is 6, loop runs 3 times; log(6) is ~2.6.
N is 100, loop runs 7 times; log(100) is ~6.6.
N is 1000, loop runs 10 times; log(1000) is ~10.
Yep, it's O(log(N)).
All that said, I already knew it would be O(log(N)) because of a useful shortcut. If your for loop iterates through every value from 1 to N, that's always going to be O(N). If your loop runs N iterations inside another loop of N iterations, that's O(N^2). And, most importantly, if your loop variable doubles with every step (equivalently, if the remaining distance halves with every step), that's O(log(N)) for sure. Doubling your way up to a target is the same thing as walking a number down to zero by removing a bit each time, and removing a bit is the same as dividing by 2.
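Those three shortcut shapes look like this in code (illustrative skeletons of my own, not from the question):

```java
public class LoopShapes {
    public static void main(String[] args) {
        int N = 1000;

        // O(N): visit every value from 1 to N once.
        int linear = 0;
        for (int i = 1; i <= N; i++) linear++;

        // O(N^2): an N-step loop nested inside another N-step loop.
        int quadratic = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) quadratic++;

        // O(log N): the loop variable doubles each step, so it reaches
        // N after about log2(N) steps.
        int logarithmic = 0;
        for (long i = 1; i < N; i *= 2) logarithmic++;

        System.out.println(linear + " " + quadratic + " " + logarithmic);
        // Prints: 1000 1000000 10
    }
}
```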
So your loop runs while 2^i < N.
Take the log base 2 of both sides of the inequality and you learn that your loop runs while i < log(N).
So, the answer to the question "How many times does this loop run?" is O(log N).
The time complexity is a slightly trickier question, because Math.pow may not be a constant-time operation. I can't find any definitive answer about that. However, if this is a homework question you are almost certainly expected to assume that Math.pow is constant time.
How is that possible, please?
Because Math.pow is a JVM intrinsic; that is, the JIT compiler inlines the call. Furthermore, when it sees that the exponent is the constant 2, it replaces the call with exactly x*x.
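You can observe the effect of that special case from plain Java code, though not the inlining itself (that requires JIT logs). On the JDKs I am aware of, the underlying fdlibm code also special-cases an exponent of exactly 2 as x*x, so the two expressions agree to the last bit; treat the exact-equality claim as an assumption that may vary by JVM:

```java
public class PowTwoDemo {
    public static void main(String[] args) {
        double x = 3.141592653589793;
        double viaPow = Math.pow(x, 2.0);  // special-cased to a single multiply
        double viaMul = x * x;
        System.out.println(viaPow + " vs " + viaMul);
        System.out.println("Difference: " + Math.abs(viaPow - viaMul));
    }
}
```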
Proof from HotSpot sources
The internal implementation of Math.pow() is delegated to a native function, so it can reasonably be expected to perform well.
In any case, to get valid test results you have to measure the time over many iterations in a loop, so that the measured execution time is realistic.
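A minimal timing harness along those lines might look like the following rough sketch; for serious measurements use a proper benchmarking tool such as JMH, since JIT warm-up and dead-code elimination can distort naive timings like this one:

```java
public class PowTiming {
    public static void main(String[] args) {
        final int ITERATIONS = 10_000_000;
        double sink = 0;  // accumulate results so the JIT can't drop the calls

        // Warm-up pass so the JIT compiles the Math.pow call site first.
        for (int i = 0; i < ITERATIONS; i++) sink += Math.pow(1.0001, i % 10);

        long start = System.nanoTime();
        for (int i = 0; i < ITERATIONS; i++) sink += Math.pow(1.0001, i % 10);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println(ITERATIONS + " calls in " + elapsedMs
                + " ms (sink=" + sink + ")");
    }
}
```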
Well this particular code deterministically runs in O(1).
However, in more general terms for arbitrary variables, multiply() will run in O(n log n) where n is the number of bits.
The pow() method will run in O(log b) multiplications for small a and b. This is achieved by exponentiation by squaring. For larger values the number of bits grows (linearly), so the multiplications take more time. I'll leave it up to you to figure out the exact analysis.
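Exponentiation by squaring can be sketched as follows; this is a hand-rolled version for long values written for illustration, and BigInteger.pow uses a more elaborate variant of the same idea:

```java
public class PowBySquaring {
    // Computes a^b in O(log b) multiplications by halving the exponent:
    // a^b = (a^2)^(b/2) when b is even, and a * (a^2)^((b-1)/2) when b is odd.
    static long pow(long a, long b) {
        long result = 1;
        while (b > 0) {
            if ((b & 1) == 1) result *= a;  // odd exponent: keep one factor of a
            a *= a;                          // square the base
            b >>= 1;                         // halve the exponent
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(pow(3, 10));  // 59049
        System.out.println(pow(2, 20));  // 1048576
    }
}
```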
I'm not 100% sure about the details of modPow(), but I suspect it runs similarly to pow() except with an extra mod at each step of the exponentiation by squaring. So it'll still be O(log b) multiplications, with the added benefit that the number of bits is bounded by log m, where m is the modulus.
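The modular variant reduces after every multiplication, which keeps every intermediate value below m (so at most about log2(m) bits). The JDK exposes this directly as BigInteger.modPow; here is a simple cross-check of my own against a hand-rolled version for long values:

```java
import java.math.BigInteger;

public class ModPowDemo {
    // Square-and-multiply with a reduction mod m after each step, so
    // intermediate values stay bounded by m before each reduction.
    static long modPow(long a, long b, long m) {
        long result = 1 % m;
        a %= m;
        while (b > 0) {
            if ((b & 1) == 1) result = (result * a) % m;
            a = (a * a) % m;
            b >>= 1;
        }
        return result;
    }

    public static void main(String[] args) {
        long r1 = modPow(7, 123, 1000);
        long r2 = BigInteger.valueOf(7)
                .modPow(BigInteger.valueOf(123), BigInteger.valueOf(1000))
                .longValueExact();
        System.out.println(r1 + " == " + r2);  // both are 343
    }
}
```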
tskuzzy is correct.
But maybe reading between the lines a bit, and assuming this is a homework question, they probably want you to realize that there are several operations happening in this method with varying complexities. And then they probably want you to realize that the complexity of the overall method is the same as whatever the most complex operation is that happens in the method.