When to use fabsf rather than fabs
In C++, there is hardly ever a reason to use fabsf. Use std::abs with floating-point types instead. std::fabs may be useful when you wish to convert the absolute value of an integer to double, but that is probably quite a niche use case.
If you were writing C instead, it is almost as simple: use fabsf when you have a float, and fabs when you have a double. The same applies to the other standard math functions with an f suffix.
The definitions I have in my math.h are
_Check_return_ inline float fabs(_In_ float _Xx) _NOEXCEPT
The C++ standard library specifies overloads for std::fabs, one of which takes a float. If the other overloads are missing, your standard library is not standard-compliant.
The C standard library specifies double fabs(double). If the quoted declaration applies to C, your standard library is not standard-compliant there either.
You can easily test the different possibilities using the code below. It essentially pits your bit-fiddling against a naive template abs and std::abs. Not surprisingly, the naive template abs wins. Well, it is somewhat surprising that it wins; I'd expect std::abs to be equally fast. Note that -O3 actually makes things slower (at least on Coliru).
Coliru's host system shows these timings:
random number generation: 4240 ms
naive template abs: 190 ms
ugly bitfiddling abs: 241 ms
std::abs: 204 ms
::fabsf: 202 ms
And these are the timings from a VirtualBox VM running Arch with GCC 4.9 on a Core i7:
random number generation: 1453 ms
naive template abs: 73 ms
ugly bitfiddling abs: 97 ms
std::abs: 57 ms
::fabsf: 80 ms
And these are the timings on MSVS 2013 (Windows 7 x64):
random number generation: 671 ms
naive template abs: 59 ms
ugly bitfiddling abs: 129 ms
std::abs: 109 ms
::fabsf: 109 ms
If I haven't made some blatantly obvious mistake in this benchmark code (don't shoot me over it, I wrote this up in about 2 minutes), I'd say just use std::abs, or the template version if that turns out to be slightly faster for you.
The code:
#include <algorithm>
#include <chrono>
#include <cmath>
#include <cstdint>
#include <cstdlib>
#include <iostream>
#include <limits>
#include <random>
#include <vector>
#include <math.h>

using Clock = std::chrono::high_resolution_clock;
using milliseconds = std::chrono::milliseconds;

template<typename T>
T abs_template(T t)
{
    return t > 0 ? t : -t;
}

float abs_ugly(float f)
{
    // Deliberately "ugly": this reinterpret_cast violates strict aliasing,
    // so it is technically undefined behavior. It clears the IEEE 754 sign bit.
    (*reinterpret_cast<std::uint32_t*>(&f)) &= 0x7fffffff;
    return f;
}

int main()
{
    std::random_device rd;
    std::mt19937 mersenne(rd());
    // Note: lowest(), not -lowest(); -lowest() equals max(), which would
    // make the range degenerate.
    std::uniform_real_distribution<> dist(std::numeric_limits<float>::lowest(),
                                          std::numeric_limits<float>::max());
    std::vector<float> v(100000000);

    Clock::time_point t0 = Clock::now();
    std::generate(std::begin(v), std::end(v), [&dist, &mersenne]() { return dist(mersenne); });
    Clock::time_point trand = Clock::now();

    volatile float temp;
    for (float f : v)
        temp = abs_template(f);
    Clock::time_point ttemplate = Clock::now();

    for (float f : v)
        temp = abs_ugly(f);
    Clock::time_point tugly = Clock::now();

    for (float f : v)
        temp = std::abs(f);
    Clock::time_point tstd = Clock::now();

    for (float f : v)
        temp = ::fabsf(f);
    Clock::time_point tfabsf = Clock::now();

    milliseconds random_time = std::chrono::duration_cast<milliseconds>(trand - t0);
    milliseconds template_time = std::chrono::duration_cast<milliseconds>(ttemplate - trand);
    milliseconds ugly_time = std::chrono::duration_cast<milliseconds>(tugly - ttemplate);
    milliseconds std_time = std::chrono::duration_cast<milliseconds>(tstd - tugly);
    milliseconds c_time = std::chrono::duration_cast<milliseconds>(tfabsf - tstd);

    std::cout << "random number generation: " << random_time.count() << " ms\n"
              << "naive template abs: " << template_time.count() << " ms\n"
              << "ugly bitfiddling abs: " << ugly_time.count() << " ms\n"
              << "std::abs: " << std_time.count() << " ms\n"
              << "::fabsf: " << c_time.count() << " ms\n";
}
Oh, and to answer your actual question: if the compiler can't generate more efficient code, I doubt there is a faster way, save for micro-optimized assembly, especially for an elementary operation such as this.
There are many things at play here. First off, the x87 co-processor is deprecated in favor of SSE/AVX, so I'm surprised to read that your compiler still uses the fabs instruction. It's quite possible that the others who posted benchmark answers on this question use a platform that supports SSE. Your results might be wildly different.
I'm not sure why your compiler uses a different logic for fabs and fabsf. It's totally possible to load a float to the x87 stack and use the fabs instruction on it just as easily. The problem with reproducing this by yourself, without compiler support, is that you can't integrate the operation into the compiler's normal optimizing pipeline: if you say "load this float, use the fabs instruction, return this float to memory", then the compiler will do exactly that... and it may involve putting back to memory a float that was already ready to be processed, loading it back in, using the fabs instruction, putting it back to memory, and loading it again to the x87 stack to resume the normal, optimizable pipeline. This would be four wasted load-store operations because it only needed to do fabs.
In short, you are unlikely to beat integrated compiler support for floating-point operations. If you don't have this support, inline assembler might just make things even slower than they presumably already are. The fastest thing for you to do might even be to use the fabs function instead of the fabsf function on your floats.
For reference, modern compilers and modern platforms use the SSE instructions andps (for floats) and andpd (for doubles) to AND out the bit sign, very much like you're doing yourself, but dodging all the language semantics issues. They're both as fast. Modern compilers may also detect patterns like x < 0 ? -x : x and produce the optimal andps/andpd instruction without the need for a compiler intrinsic.
In C++, it's always sufficient to use std::abs; it's overloaded for all the numerical types.
In C, abs only works on integers, and you need fabs for floating point values. These are available in C++ (along with all of the C library), but there's no need to use them.
It's still okay to use fabs for double and float arguments. I prefer this because it ensures that if I accidentally strip the std:: off the abs, the behavior remains the same for floating-point inputs.
I just spent 10 minutes debugging this very problem, due to my own mistake of using abs instead of std::abs. I assumed that using namespace std; would pull in std::abs, but it did not, and the C version was used instead.
Anyway, I believe it's good to use fabs instead of abs for floating-point inputs as a way of documenting your intention clearly.