In standard-conforming code that has included cmath and only calls std::abs on floats, doubles, and long doubles, there is no difference. However, it is instructive to look at the types returned by std::abs on various types when you call it with various sets of header files included.
On my system, std::abs(-42) is a double if I've included cmath but not cstdlib, an int if I've included cstdlib, and a compilation error if I've included neither. Conversely, std::abs(-42.0) produces a compilation error (ambiguous overload) if I've included cstdlib but not cmath, or a different compilation error (never heard of std::abs) if I've included neither.
On my platform, std::abs('x') gives me a double if I've included cmath, or an int if I've included cstdlib but not cmath. The same goes for a short int. Signedness does not appear to matter.
On my platform, the complex header apparently causes both the integral and the floating-point overloads of std::abs to be declared. I'm not certain this is mandated; perhaps you can find an otherwise reasonable platform on which std::abs(1e222) returns infinity with the wrong set of standard headers included.
The usual consequence of "you forgot a header in your program" is a compilation failure complaining of an undefined symbol, not a silent change in behaviour. With std::abs, however, the result can be std::abs(42) returning a double if you forgot cstdlib or std::abs('x') returning an int if you didn't. (Or perhaps you expected std::abs to give you an integral type when you pass it a short? Then, assuming my compiler got its promotion and overload resolution right, you had better make sure you don't include cmath.)
I have also spent too much time in the past trying to work out why code like double precise_sine = std::sin(myfloat) gives imprecise results. Because I don't like wasting my time on these sorts of surprises, I tend to avoid the overloaded variants of standard C functions in namespace std. That is, I use ::fabs when I want a double returned, ::fabsf when I want a float out, and ::fabsl when I want a long double out. Similarly, I use ::abs when I want an int, ::labs when I want a long, and ::llabs when I want a long long.
Is there any difference at all between std::abs and std::fabs when applied to floating point values?
For floating-point values, no, there is none. For integral arguments there is a difference, though: std::abs returns the same integral type, while std::fabs (which since C++11 also accepts integral arguments) converts the argument to double.
It is idiomatic to use std::abs() because it is closest to the commonly used mathematical nomenclature.
c++ - What's the difference between abs and fabs? - Stack Overflow
c++ - abs vs std::abs, what does the reference say? - Stack Overflow
In C++, it's always sufficient to use std::abs; it's overloaded for all the numerical types.
In C, abs only works on integers, and you need fabs for floating point values. These are available in C++ (along with all of the C library), but there's no need to use them.
It's still okay to use fabs for double and float arguments. I prefer this because it ensures that if I accidentally strip the std:: off the abs, the behavior remains the same for floating point inputs.
I just spent 10 minutes debugging this very problem, due to my own mistake of using abs instead of std::abs. I assumed that using namespace std; would make the unqualified abs resolve to std::abs, but it did not, and the C version was used instead.
Anyway, I believe it's good to use fabs instead of abs for floating-point inputs as a way of documenting your intention clearly.
In C++, std::abs is overloaded for both signed integer and floating point types. std::fabs only deals with floating point types (pre C++11). Note that the std:: is important; the C function ::abs that is commonly available for legacy reasons will only handle int!
The problem with

float f2 = fabs(-9);

is not that there is no conversion from int (the type of -9) to double, but that the compiler does not know which conversion to pick (int -> float, double, or long double), since there is a std::fabs for each of those three. Your workaround explicitly tells the compiler to use the int -> double conversion, so the ambiguity goes away.
C++11 solves this by adding double fabs( Integral arg ); which will return the abs of any integer type converted to double. Apparently, this overload is also available in C++98 mode with libstdc++ and libc++.
In general, just use std::abs, it will do the right thing. (Interesting pitfall pointed out by @Shafik Yaghmour. Unsigned integer types do funny things in C++.)
With C++11, using unqualified abs() alone is very dangerous:
#include <iostream>
#include <cmath>

int main() {
    std::cout << abs(-2.5) << std::endl;
    return 0;
}
This program outputs 2 as a result.
Always use std::abs():
#include <iostream>
#include <cmath>

int main() {
    std::cout << std::abs(-2.5) << std::endl;
    return 0;
}
This program outputs 2.5.
You can avoid the unexpected result with using namespace std;, but I would advise against it: it is considered bad practice in general, and you would have to search for the using directive to know whether abs() means the int overload or the double overload.
I believe I have found a counter example. I post this as a separate answer, because I don't think that this is at all analogous to the case for integers.
In the cases I considered, I missed that it is possible to change the rounding mode for floating-point arithmetic. Problematically, GCC seems to ignore that when it (I guess) optimizes "known" quantities at compile time. Consider the following code:
#include <iostream>
#include <cmath>
#include <cfenv>

double fabsprod1(double a, double b) {
    return std::fabs(a * b);
}

double fabsprod2(double a, double b) {
    return std::fabs(a) * std::fabs(b);
}

int main() {
    std::fesetround(FE_DOWNWARD);
    double a = 0.1;
    double b = -3;
    std::cout << std::hexfloat;
    std::cout << "fabsprod1(" << a << "," << b << "): " << fabsprod1(a, b) << "\n";
    std::cout << "fabsprod2(" << a << "," << b << "): " << fabsprod2(a, b) << "\n";
#ifdef CIN
    std::cin >> b;
#endif
}
The output differs depending on whether I compile with
g++ -DCIN -O1 -march=native main2.cpp && ./a.out
or
g++ -O1 -march=native main2.cpp && ./a.out
Notably, it only takes -O1 (a level I would have considered completely safe) to change the output in a way that does not seem reasonable to me.
With -DCIN the output is
fabsprod1(0x1.999999999999ap-4,-0x1.8p+1): 0x1.3333333333334p-2
fabsprod2(0x1.999999999999ap-4,-0x1.8p+1): 0x1.3333333333333p-2
without -DCIN the output is
fabsprod1(0x1.999999999999ap-4,-0x1.8p+1): 0x1.3333333333334p-2
fabsprod2(0x1.999999999999ap-4,-0x1.8p+1): 0x1.3333333333334p-2
Edit: Peter Cordes (thank you for the comment) pointed out that this surprising result was due to my failure to tell GCC to respect the change of rounding mode. Building with the following command gives the expected results:
g++ -O1 -frounding-math -march=native main2.cpp && ./a.out
(It works with -O2 and -O3 as well on my machine.)
A comparable problem exists with integral types (int):
#include <cstdlib>   // integral overloads of std::abs

int fabsprod1(int a, int b) {
    return std::abs(a * b);
}

int fabsprod2(int a, int b) {
    return std::abs(a) * std::abs(b);
}

int fabsprod3(int a, int b) {
    return std::abs(std::abs(a) * std::abs(b));
}
This results in (using your options -O3 -std=c++2a -march=cannonlake):
fabsprod1(int, int):
mov eax, edi
imul eax, esi
cdq
xor eax, edx
sub eax, edx
ret
fabsprod2(int, int):
mov eax, edi
cdq
xor eax, edx
sub eax, edx
mov edx, esi
sar edx, 31
xor esi, edx
sub esi, edx
imul eax, esi
ret
fabsprod3(int, int):
mov eax, edi
cdq
xor eax, edx
sub eax, edx
mov edx, esi
sar edx, 31
xor esi, edx
sub esi, edx
imul eax, esi
ret
https://godbolt.org/z/tf3nZN
This contradicts your statement about "real/floating point" numbers.
Overall, you shouldn't expect the compiler to shortcut math problems for you. That being said, some optimizations are possible. Please provide documentation or a comparable example where you see the optimization.