"The power of 2" is squaring. You'd be better off doing that by multiplying the number by itself.
The library version of sqrt is probably faster than anything you could dig up elsewhere. If you call a C routine, you'll just add overhead from the cross-language call. But do you need accurate square roots, or would a table lookup of approximations do? Do the values repeat a lot, i.e. do you often need to calculate the roots of the same numbers? If so, caching the square roots in a HashMap might be faster than computing them.
"The power of 2" is squaring. You'd be better off doing that by multiplying the number by itself.
The library version of sqrt is probably faster than anything you could dig up elsewhere. If you call a C routine, you'll just add overhead from the cross-language call. But do you need accurate square roots, or would a table lookup of approximations do? Do the values repeat a lot, i.e. do you often need to calculate the roots of the same numbers? If so, caching the square roots in a HashMap might be faster than computing them.
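The caching idea above can be sketched with a `HashMap`. This is a minimal illustration, not code from the post; the class name `SqrtCache` is made up for the example:

```java
import java.util.HashMap;
import java.util.Map;

// Memoize square roots for inputs that recur often.
// Illustrative sketch; whether it beats Math.sqrt depends on the hit rate.
class SqrtCache {
    private final Map<Double, Double> cache = new HashMap<>();

    double sqrt(double x) {
        Double cached = cache.get(x);
        if (cached != null) {
            return cached;              // hit: skip the Math.sqrt call
        }
        double root = Math.sqrt(x);     // miss: compute and remember
        cache.put(x, root);
        return root;
    }
}
```

Note that `Math.sqrt` is usually compiled to a single hardware instruction, while the map lookup involves boxing and hashing, so benchmark before assuming the cache wins.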
You are unlikely to find a better (faster) implementation than the Java Math one. You might have more luck trying to change the way you do the calculations in your algorithm. For example, is there any way you can avoid finding the square root of a huge number?
If this doesn't work, you can try implementing it in a more appropriate language that's meant for fast mathematical computations (something like Matlab).
Otherwise, you can try to optimize this in other areas. Perhaps you can try to cache past results if they are useful later.
2^n + 2^(n-1) + 2^(n-2) + ... + 2 + 1 = (2^(n+1) - 1) = ((1 << (n+1)) - 1)
You don't have to calculate it in a loop; what you are trying to compute is equivalent to
Math.pow(2, x+1) - 1
Even better, you can calculate it like torquestomp suggested, which will be faster:
(1 << (x + 1)) - 1
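As a quick sanity check, the loop sum and the shift form agree; this is an illustrative sketch (the class name is made up), valid while `x + 1` stays within the width of the shift:

```java
// The geometric series 2^x + ... + 2 + 1 equals (2^(x+1)) - 1.
// Illustrative only; uses long shifts, so x must stay below 62.
class GeometricSum {
    static long bySum(int x) {
        long sum = 0;
        for (int i = 0; i <= x; i++) {
            sum += 1L << i;             // add 2^i
        }
        return sum;
    }

    static long byShift(int x) {
        return (1L << (x + 1)) - 1;     // closed form, no loop
    }
}
```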
I've been trying to calculate x^n without using Math.pow in order to compare it with other algorithms and test their performance; Math.pow is so heavily optimized that it keeps me from comparing it fairly with the other algorithms.
I will use the x^n method inside a brute-force method that evaluates a polynomial at x.
This method will be compared with my Horner's method.
If I use Math.pow(x, n), the brute force is sometimes faster than Horner's, and I presume that's because Math.pow is too fast.
Here are both methods to compare. Can you shed some light on why brute force is sometimes faster than Horner's? Is it because of Math.pow?
// Horner's method:
public double horners(int[] coef_array, double x) {
    double resultado = 0;
    int c = 0;
    for (int i = coef_array.length - 1; i >= 0; i--) {
        resultado = (resultado * x) + coef_array[i];
        c++;
    } // Evaluates at x
    set_cant_mult_H(c);
    return resultado;
}
//Metodo "obvio", "a pie" o "bruteforceado":
public double brute(int[] coef_array, double x) {
double resultado = 0;
int c = 0;
for (int i = coef_array.length - 1; i >= 0; i--) {
c++;
resultado += coef_array[i] * Math.pow(x, i); // Change this mathpow with something slower?!
}
set_cant_mult_BF(c);
return resultado;
} So to wrap it off, Horners bruteforce are compared but sometimes horners is slower than bruteforce and I presume its because math.pow(x,n) is too fast.
Remember that I am using arrays, so the new pow method may need to receive an array!
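One way to take Math.pow out of the comparison is a plain repeated-multiplication helper, which costs roughly the multiplications Horner's method saves. This is a sketch, not the asker's code; the class and method names are made up:

```java
// Naive repeated-multiplication power, a stand-in for Math.pow(x, i)
// in the brute-force loop. O(n) multiplications; non-negative n only.
class NaivePow {
    static double pow(double x, int n) {
        double result = 1.0;
        for (int i = 0; i < n; i++) {
            result *= x;    // multiply by x, n times
        }
        return result;
    }
}
```

With this in place of `Math.pow(x, i)`, the brute-force loop performs O(n^2) multiplications overall, so Horner's O(n) advantage should become visible.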
The ^ operator is not performing exponentiation - it's a bitwise "exclusive OR" (aka "xor").
Using integer math for 100000000 raised to the fourth power will give incorrect results - a 32-bit integer cannot store numbers that large.
Math.pow() will use floating point arithmetic. The answers may not be 100% accurate due to precision issues, but should be capable of representing the required range of results.
To get 100% accurate values for numbers that large, you should use the BigInteger class. However, it will not be particularly fast. This is a trade-off you have to make when considering accuracy vs. performance.
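For instance, BigInteger computes 100000000 raised to the fourth power exactly, where both `int` and `long` would overflow; a minimal sketch (the class name is made up):

```java
import java.math.BigInteger;

// Exact result for 100000000^4 = 10^32, far beyond long's range.
class BigPow {
    static BigInteger pow() {
        return BigInteger.valueOf(100_000_000L).pow(4);
    }
}
```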
The ^ operator in Java is bitwise exclusive OR and definitely not similar to the power function.
Reference
- http://docs.oracle.com/javase/tutorial/java/nutsandbolts/operators.html
Powers of 2 can simply be computed by Bit Shift Operators
int exponent = ...
int powerOf2 = 1 << exponent;
Even for the more general form, you should not compute an exponent by "multiplying n times". Instead, you could do Exponentiation by squaring
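An iterative sketch of exponentiation by squaring, using O(log n) multiplications (the class name is made up for the example; the recursive answer below works the same way):

```java
// Iterative exponentiation by squaring for non-negative int exponents.
// Illustrative sketch; uses long, so large results will overflow.
class FastPow {
    static long pow(long base, int exp) {
        long result = 1;
        long b = base;
        int e = exp;
        while (e > 0) {
            if ((e & 1) == 1) {
                result *= b;   // fold in base^(2^k) where bit k of exp is set
            }
            b *= b;            // square: b now holds base^(2^(k+1))
            e >>= 1;
        }
        return result;
    }
}
```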
Here is a post that handles both negative and positive power calculations:
https://stackoverflow.com/a/23003962/3538289
A function to handle +/- exponents with O(log(n)) complexity:
double power(double x, int n) {
    if (n == 0)
        return 1;
    if (n < 0) {
        x = 1.0 / x;    // invert the base, then treat the exponent as positive
        n = -n;
    }
    double ret = power(x, n / 2);
    ret = ret * ret;
    if (n % 2 != 0)
        ret = ret * x;  // odd exponent: one extra factor of x
    return ret;
}
You are considering if n is negative in the n % 2 != 0 case, but not in the else case. To make it clearer, I would handle it in a different recursive case. Take the negative n handling out of the if block, and add this line after the if(n == 0) line.
if (n < 0) return 1 / myPow(x, -n);
This also eliminates the integer division you were doing in this line: return (1 / t * t * x);. It also had the error that you would have divided by t, multiplied by t, then multiplied by x, instead of dividing by the entire product.
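Putting both fixes together, the method would look like this. Note that `myPow` here is a reconstruction based on the advice above, since the question's original code is not shown in full:

```java
// Recursive power with the negative exponent handled in its own case.
// Reconstruction for illustration, not the asker's original code.
class MyPow {
    static double myPow(double x, int n) {
        if (n == 0) return 1;
        if (n < 0) return 1 / myPow(x, -n);  // dedicated negative case
        double t = myPow(x, n / 2);
        if (n % 2 == 0) {
            return t * t;
        }
        return t * t * x;  // parenthesized correctly: (t * t) * x
    }
}
```

(One remaining edge case: `n == Integer.MIN_VALUE` overflows on negation; guard it separately if that input matters.)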
Found this to be faster at higher powers than your solution:
public static double posIntPow(final double pVal, final int pPow) {
    double ret = 1;
    double v1, v2;
    int n = pPow;
    v1 = pVal;
    if ((n & 1) == 1) {
        ret = pVal;
    }
    n = n >>> 1;
    while (n > 0) {
        v2 = v1 * v1;
        if ((n & 1) == 1) {
            ret = ret * v2;
        }
        v1 = v2;
        n = n >>> 1;
    }
    return ret;
}
And it's about 10 times faster than the normal Math lib one, though this obviously only handles integer exponents. You can use the same test you used for negative exponents.
To match the formula,
double lnRicker = 1 - (2 * p) * Math.exp(-p);
needs to be
double lnRicker = (1 - (2 * p)) * Math.exp(-p);
Since * has higher operator precedence than -, in your expression the multiplication of (2 * p) with Math.exp(-p) will be done first, which is not what you want.
I'd just like to add that Math.pow(x, 2) can be written more simply (and possibly more accurately and more efficiently) as x * x ... for any variable or constant x.
Hello,
I'm trying to create a method to calculate exponents through addition and factorial instead of multiplication to compare it to the "usual" Math.pow approach. My problem is that Math.pow only works with doubles and I need bigger numbers to actually see any differences.
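If the issue is that `double` can only represent integers exactly up to 2^53, one option (a sketch, not the asker's method; the class name is made up) is to do the exact arithmetic with BigInteger:

```java
import java.math.BigInteger;

// Exact integer exponentiation beyond double's 2^53 exact-integer range.
class ExactPow {
    static BigInteger pow(long base, int exp) {
        return BigInteger.valueOf(base).pow(exp);
    }
}
```

Both the addition/factorial approach and the multiplication approach can then be written against BigInteger, so the comparison isn't cut short by double's precision.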
Hi all, I have just updated my java version of the approximation for Math.pow(), which is now 30% faster than the previous version. On my machine (Core2 Quad Q9550, Java 1.7.0_01-b08, 64-Bit Server VM) this is now 130 times faster than Math.pow(), but a hell of a lot less precise.
here is the code:
public static double pow(final double a, final double b) {
    final long tmp = Double.doubleToLongBits(a);
    final long tmp2 = (long) (b * (tmp - 4606921280493453312L)) + 4606921280493453312L;
    return Double.longBitsToDouble(tmp2);
}
More info (and C / C++ / C# code) here. Depending on the range of values you want to use it for, the error can be quite high. You definitely need to test if it is good enough for your application. I personally have used it for simulation code, but I think it might be quite useful for games too.
UPDATE: Thanks to Madsy9's suggestion, a 3 times slower approximation (still 40 times faster than Math.pow) is now here: http://pastebin.com/ZW95gEyr This has 1.7% average error, no matter how large the exponent gets :)
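Since the error varies with the inputs, it's worth measuring it for your own value range before adopting the trick. A small sketch that wraps the approximation from the answer above and reports its relative error against Math.pow (the wrapper names are made up):

```java
// Measure the relative error of the bit-manipulation pow approximation
// against Math.pow for a given input. Wrapper names are illustrative.
class ApproxPowCheck {
    static double approxPow(final double a, final double b) {
        final long tmp = Double.doubleToLongBits(a);
        final long tmp2 = (long) (b * (tmp - 4606921280493453312L)) + 4606921280493453312L;
        return Double.longBitsToDouble(tmp2);
    }

    static double relativeError(double a, double b) {
        double exact = Math.pow(a, b);
        return Math.abs(approxPow(a, b) - exact) / exact;
    }
}
```

For modest inputs like a = 1.5, b = 2.0 the relative error is on the order of a few percent, which may or may not be acceptable depending on the application.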
As others have said, you cannot just ignore the use of double, as floating point arithmetic will almost certainly be slower. However, this is not the only reason - if you change your implementation to use them, it is still faster.
This is because of two things: the first is that 2^2 (exponent, not xor) is a very quick calculation to perform, so your algorithm is fine to use for that - try using two values from Random#nextInt (or nextDouble) and you'll see that Math#pow is actually much quicker.
The other reason is that calling native methods has overhead, which is actually meaningful here, because 2^2 is so quick to calculate, and you are calling Math#pow so many times. See What makes JNI calls slow? for more on this.
There is no pow(int,int) function. You are comparing apples to oranges with your simplifying assumption that floating point numbers can be ignored.