def power():
    a = eval(input("enter base: "))
    b = eval(input("enter exponent: "))
    total = 0
    for x in range(0, b + 1):
        total = total + (a * a)
    print(total)

Since integer powers are just repeated multiplication, you could use a for loop to build up the result progressively.
Something like:

n = 5
runningPower = 1
for i in range(n):
    runningPower *= 2
print(runningPower)  # 32, i.e. 2**5
As we all know, raising to a power is just repeated multiplication.
So, we can create a loop that, on every iteration, multiplies a variable (declared with a starting value of 1) by 2.
Example code:

powers = 1  # Declare the variable with a starting value of 1.
for exponent in range(10):  # Loop ten times.
    print(powers)  # Print the current power of 2.
    powers *= 2  # Multiply it by 2.
Try with recursion:

def powerof(base, exp):
    if exp == 0:
        return 1
    if exp == 1:
        return base
    return base * powerof(base, exp - 1)

# Print 2**8.
print(powerof(2, 8))  # 256
So what it does is call itself while decreasing the exponent, so the call expands to 2*(2*(2*2)) ... when executed. You could also do this in a for loop, but recursion is more compact.
Naive implementation (not the best solution, but I think you should be able to follow this one):
def powerof(base, exp):
    results = 1
    for n in range(exp):
        results *= base
    return results

print(powerof(5, 2))  # 25
Hope it helps.
I was writing some code in Python, and usually when typing up equations, I use x**2 to calculate the value of a variable squared. I had seen it typed as x*x a few times, and was curious about the speed difference between the two, so I ran a timeit test on both and expected both to be almost exactly the same since the two are mathematically equivalent expressions.
When I ran the benchmarks, I saw that plain multiplication was almost 9-10x faster than the power operator. Even for exponents of 4, 5, 6, and sometimes much higher, Python's multiplication was still several times faster, if not more.
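A micro-benchmark along those lines can be sketched with timeit (the operand value and iteration count here are illustrative; absolute numbers vary by machine and CPython version):

```python
import timeit

# Compare x*x against x**2 on a float; only the ratio is interesting,
# since absolute timings depend on the machine and interpreter.
mul = timeit.timeit("x * x", setup="x = 3.14159", number=1_000_000)
pw = timeit.timeit("x ** 2", setup="x = 3.14159", number=1_000_000)
print(f"x * x : {mul:.4f} s")
print(f"x ** 2: {pw:.4f} s")
```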
I inspected the bytecode and found that Python uses a binary-power instruction for the power operator and a binary-multiplication instruction for the multiplication operator, and for some reason the latter is faster. While I understand that the two are different operations, why hasn't this become an optimization at the interpreter level, especially since writing x*x*x*x*x is far less practical than x**5 just to get a speedup?
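The bytecode inspection can be reproduced with the dis module; note that the exact opcode names depend on the CPython version (3.10 and earlier emit BINARY_MULTIPLY vs BINARY_POWER, while 3.11+ folds both into BINARY_OP with different operator arguments):

```python
import dis

# Disassemble the two expressions; the operator opcodes differ.
dis.dis(compile("x * x", "<example>", "eval"))
dis.dis(compile("x ** 2", "<example>", "eval"))
```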
I applied multiplication instead of power to some of my code that did calculations, and found a significant speed increase for algorithms that I otherwise would've thought were as fast as they possibly could be in Python.
Operator ^ is a bitwise operator; it performs bitwise exclusive or.
The power operator is **: for example, 8 ** 3 equals 512.
The symbols represent different operators.
The ^ represents the bitwise exclusive or (XOR).
Each bit of the output is the same as the corresponding bit in x if that bit in y is 0, and it's the complement of the bit in x if that bit in y is 1.
** represents the power operator. That's just the way that the language is structured.
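A quick demonstration of the two operators side by side:

```python
# ^ is bitwise XOR, ** is exponentiation.
print(8 ^ 3)   # 11  (0b1000 ^ 0b0011 == 0b1011)
print(8 ** 3)  # 512
```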
I've set out on a dumb project of optimizing every little thing in a decimal-to-binary converter. The biggest "performance" issue right now is that the pow function takes quite a bit longer than chaining simple multiplications. In the following code, the pow call takes about 26 ms, while the chained multiplication takes roughly 0.1 ms.
// ~26ms
timeElapsedInSecondsWhenRunningCode {
    let value: Decimal = pow(2, 64)
    print(value)
}

// ~0.1ms
timeElapsedInSecondsWhenRunningCode {
    let value: Decimal = 2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2
    print(value)
}

In a real-world use case I would just use the pow function, but since my goal here is to create something over-optimized... how can I calculate a power without using the pow function?