math.sqrt is a thin wrapper around the C library's square-root function, so it is genuinely different from the ** operator, which goes through Python's built-in pow machinery. Using math.sqrt can therefore give a (slightly) different answer than using **, and there is indeed a computational reason to prefer the numpy or math module implementations over the built-in. Specifically, the sqrt functions are probably implemented as efficiently as possible for that one operation, whereas ** must handle arbitrary bases and exponents and is probably not optimized for the specific case of square root. On the other hand, the built-in pow function handles a few extra cases like "complex numbers, unbounded integer powers, and modular exponentiation".
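A small sketch of those extra cases (the printed values for pow and ** 0.5 are what CPython gives on typical platforms):

```python
import math

# math.sqrt wraps the C library's sqrt; x ** 0.5 goes through
# float.__pow__ instead, so the two take different code paths
# even when they print the same value.
print(math.sqrt(2.0))  # 1.4142135623730951
print(2.0 ** 0.5)      # 1.4142135623730951

# Extra cases handled by the built-in pow / ** but not math.sqrt:
print(pow(3, 4, 5))    # modular exponentiation: 3**4 % 5 == 1
print(2 ** 100)        # unbounded integer powers (exact, 31 digits)
print((-1) ** 0.5)     # complex result; math.sqrt(-1) raises ValueError
```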
See this Stack Overflow question for more information on the difference between ** and math.sqrt.
In terms of which is more "Pythonic", I think we need to discuss the very definition of that word. The official Python glossary states that a piece of code or an idea is Pythonic if it "closely follows the most common idioms of the Python language, rather than implementing code using concepts common to other languages." Every other language I can think of has some math module with basic square-root functions. However, there are languages that lack a power operator like **, e.g. C++. So ** is probably more Pythonic, but whether or not it's objectively better depends on the use case.
Even in base Python you can do the computation in generic form:
result = sum(x**2 for x in some_vector) ** 0.5
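For example, with a hypothetical two-element vector, the one-liner gives the expected Euclidean norm, and swapping the final ** 0.5 for math.sqrt is a drop-in change:

```python
import math

some_vector = [3.0, 4.0]  # hypothetical example data

# Generic form: square each component, sum, then take the square root
# with the ** operator.
result = sum(x ** 2 for x in some_vector) ** 0.5
print(result)  # 5.0

# The same computation with math.sqrt for the final step.
print(math.sqrt(sum(x ** 2 for x in some_vector)))  # 5.0
```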
x ** 2 is surely not a hack, and the computation performed is the same (I checked the CPython source code). I actually find it more readable (and readability counts).
Using x ** 0.5 to take the square root, on the other hand, doesn't perform exactly the same computation as math.sqrt: the former is (probably) computed using logarithms, and the latter (probably) uses the dedicated square-root instruction of the math processor.
I often use x ** 0.5 simply because I don't want to import math just for that. I'd expect, however, a dedicated square-root instruction to work better (more accurately) than a multi-step operation with logarithms.
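One observable difference between the two (a small behavioral sketch, not a performance claim): for a negative input math.sqrt raises ValueError, while ** 0.5 switches to complex arithmetic.

```python
import math

x = -1.0
try:
    math.sqrt(x)               # math.sqrt rejects negative floats outright
except ValueError as exc:
    print("math.sqrt:", exc)

print("** 0.5  :", x ** 0.5)   # returns a complex number instead

# For ordinary non-negative floats the two usually agree to the last bit,
# although only sqrt is guaranteed by IEEE 754 to be correctly rounded.
print(math.sqrt(2.0) == 2.0 ** 0.5)
```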
Why is the exponentiation operator ** instead of say ^?
Why is the power operator much slower than multiplication in Python?
2 ** 1 ** 3 = 2??
This is the standard order of operations for power towers in mathematics:
https://math.hmc.edu/funfacts/tower-of-powers/
I did not know that Python actually handles this correctly. That is awesome.
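A quick check of that right-associativity, matching the power-tower convention linked above:

```python
# ** associates to the right: a ** b ** c parses as a ** (b ** c).
print(2 ** 1 ** 3)    # 2 ** (1 ** 3) == 2 ** 1 == 2
print((2 ** 1) ** 3)  # left-grouped version gives 8
print(2 ** 3 ** 2)    # 2 ** 9 == 512, not (2 ** 3) ** 2 == 64
```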
Is ^ used for something else, or it is just a stylistic choice?
I was writing some code in Python, and when typing up equations I usually use x**2 to calculate the value of a variable squared. I had seen it typed as x*x a few times and was curious about the speed difference between the two, so I ran a timeit test on both, expecting them to be almost exactly the same since the two are mathematically equivalent expressions.
When I ran the benchmarks, I saw that plain multiplication was almost 9-10x faster than the power operator. Even for exponents of 4, 5, 6, and (sometimes) much higher, Python's repeated multiplication was still several times faster, if not more.
I inspected the bytecode and found that Python uses a binary-power opcode for the power operator and a binary-multiply opcode for the multiplication operator, and for some reason the latter is faster. While I understand that the two use different operations, why hasn't this become an optimization at the interpreter level, especially since having to write x*x*x*x*x is not very practical compared to x**5 just to get a speedup in your code?
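A sketch of that inspection (opcode names are version-dependent: CPython shows BINARY_MULTIPLY / BINARY_POWER before 3.11 and a parameterized BINARY_OP from 3.11 on, and exact timing ratios vary by version and platform):

```python
import dis
import timeit

# Disassemble both spellings to see which opcodes they compile to.
dis.dis(lambda x: x * x)
dis.dis(lambda x: x ** 2)

# Rough micro-benchmark of the two forms.
t_mul = timeit.timeit("x * x", setup="x = 3.7", number=1_000_000)
t_pow = timeit.timeit("x ** 2", setup="x = 3.7", number=1_000_000)
print(f"x * x : {t_mul:.4f} s")
print(f"x ** 2: {t_pow:.4f} s")
```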
I applied multiplication instead of power in some of my code that did calculations, and found a significant speed increase for algorithms I otherwise would have thought were as fast as they could possibly be in Python.