In my experience, tests of the form if (ptr) or if (!ptr) are preferred. They do not depend on the definition of the symbol NULL. They do not expose the opportunity for the accidental assignment. And they are clear and succinct.
Edit: As SoapBox points out in a comment, they are compatible with C++ classes such as unique_ptr and shared_ptr: objects that act as pointers and which provide a conversion to bool to enable exactly this idiom. (The old std::auto_ptr, by contrast, never provided such a conversion.) For these objects, an explicit comparison to NULL would have to invoke a conversion to pointer, which may have other semantic side effects or be more expensive than the simple existence check that the bool conversion implies.
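As a sketch of that idiom with a modern smart pointer (std::unique_ptr, whose explicit operator bool exists exactly for this purpose; the function name is illustrative):

```cpp
#include <memory>

// Returns true when the unique_ptr owns an object. The test `if (p)`
// calls unique_ptr's explicit operator bool; no raw pointer is
// extracted and no comparison against NULL is performed.
bool owns_object(const std::unique_ptr<int>& p) {
    if (p)
        return true;
    return false;
}
```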
I have a preference for code that says what it means without unneeded text. if (ptr != NULL) has the same meaning as if (ptr) but at the cost of redundant specificity. The next logical thing is to write if ((ptr != NULL) == TRUE) and that way lies madness. The C language is clear that a boolean tested by if, while or the like has a specific meaning of non-zero value is true and zero is false. Redundancy does not make it clearer.
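The accidental-assignment hazard mentioned above can be sketched as follows (function names are illustrative):

```cpp
// A null test mistyped as `if (p = NULL)` still compiles (most
// compilers only warn): it assigns NULL to p, and the condition is
// then always false. The terse form below cannot be mistyped that
// way, because `if (!p)` contains no comparison operator to get wrong.
bool is_null(const char* p)         { return !p; }           // the terse idiom
bool is_null_verbose(const char* p) { return p == nullptr; } // same meaning, more text
```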
if (foo) is clear enough. Use it.
What are the rules for checking for a NULL pointer? I mean, when do you choose to check for a null pointer, and when not to?
For example:
int dummy_func(uint8* in_data, size_t len)
{
    // what are the rules for checking whether in_data is null?
    // always check?
}

I always think simply if (p != NULL) { ... } will do the job.
It will.
First, to be 100% clear: there is no difference between C and C++ here. And second, the Stack Overflow question you cite doesn't talk about null pointers; it introduces invalid pointers, that is, pointers which, at least as far as the standard is concerned, cause undefined behavior merely by being compared. There is no way, in general, to test whether a pointer is valid.
In the end, there are three widespread ways to check for a null pointer:
if ( p != NULL ) ...
if ( p != 0 ) ...
if ( p ) ...
All work, regardless of the representation of a null pointer on the machine. And all, in some way or another, are misleading; which one you choose is a question of choosing the least bad. Formally, the first two are identical for the compiler: the constant NULL or 0 is converted to a null pointer of the type of p, and the result of the conversion is compared to p, regardless of the representation of a null pointer.
The third is slightly different: p is implicitly converted to bool. But the implicit conversion is defined as the result of p != 0, so you end up with the same thing. (Which means that there's really no valid argument for using the third style: it obfuscates with an implicit conversion, without any offsetting benefit.)
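The equivalence of the three spellings can be checked directly; a small sketch:

```cpp
#include <cstddef>

// All three tests give the same verdict for any pointer value,
// whatever the machine representation of a null pointer may be.
bool same_verdict(const char* p) {
    bool a = (p != NULL);               // first form
    bool b = (p != 0);                  // second form
    bool c = static_cast<bool>(p);      // the conversion behind `if (p)`
    return a == b && b == c;
}
```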
Which one of the first two you prefer is largely a matter of style, perhaps partially dictated by your programming style elsewhere: depending on the idiom involved, one of the lies will be more bothersome than the other. If it were only a question of comparison, I think most people would favor NULL, but in something like f( NULL ), the overload which will be chosen is f( int ), and not an overload with a pointer. Similarly, if f is a function template, f( NULL ) will instantiate the template on int. (Of course, some compilers, like g++, will generate a warning if NULL is used in a non-pointer context; if you use g++, you really should use NULL.)
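The overload-resolution behavior is easy to demonstrate with a literal 0 (the overloads f below are hypothetical; what f(NULL) itself does depends on how the implementation defines NULL):

```cpp
// Two hypothetical overloads, one taking int and one taking a pointer.
int f(int)         { return 1; }   // selected by a plain 0
int f(const char*) { return 2; }   // selected by nullptr (C++11)

// f(0) resolves to f(int), because 0 is an int before it is a null
// pointer constant; f(nullptr) resolves to the pointer overload.
```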
In C++11, of course, the preferred idiom is:
if ( p != nullptr ) ...
This avoids most of the problems with the other solutions. (But it is not C-compatible :-).)
In C and C++, pointers are inherently unsafe, that is, when you dereference a pointer, it is your own responsibility to make sure it points somewhere valid; this is part of what "manual memory management" is about (as opposed to the automatic memory management schemes implemented in languages like Java, PHP, or the .NET runtime, which won't allow you to create invalid references without considerable effort).
A common solution that catches many errors is to set all pointers that don't point to anything as NULL (or, in correct C++, 0), and checking for that before accessing the pointer. Specifically, it is common practice to initialize all pointers to NULL (unless you already have something to point them at when you declare them), and set them to NULL when you delete or free() them (unless they go out of scope immediately after that). Example (in C, but also valid C++):
void fill_foo(int* foo) {
*foo = 23; // this will crash and burn if foo is NULL
}
A better version:
void fill_foo(int* foo) {
if (!foo) { // this is the NULL check
printf("This is wrong\n");
return;
}
*foo = 23;
}
Without the null check, passing a NULL pointer into this function will cause a segfault, and there is nothing you can do - the OS will simply kill your process and maybe core-dump or pop up a crash report dialog. With the null check in place, you can perform proper error handling and recover gracefully - correct the problem yourself, abort the current operation, write a log entry, notify the user, whatever is appropriate.
The other answers pretty much covered your exact question. A null check is made to be sure that the pointer you received actually points to a valid instance of a type (objects, primitives, etc).
I'm going to add my own piece of advice here, though. Avoid null checks. :) Null checks (and other forms of Defensive Programming) clutter code up, and actually make it more error prone than other error-handling techniques.
My favorite technique when it comes to object pointers is to use the Null Object pattern. That means returning a (pointer - or even better, reference to an) empty array or list instead of null, or returning an empty string ("") instead of null, or even the string "0" (or something equivalent to "nothing" in the context) where you expect it to be parsed to an integer.
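A minimal sketch of the Null Object idea for a container-returning function (find_tags and its data are illustrative):

```cpp
#include <string>
#include <vector>

// Instead of returning a null pointer when nothing is found, return a
// reference to a shared, immutable empty vector: callers can iterate
// the result unconditionally, with no null check anywhere.
const std::vector<std::string>& find_tags(bool found) {
    static const std::vector<std::string> none;              // the "null object"
    static const std::vector<std::string> some{"a", "b"};    // stand-in for a real lookup
    return found ? some : none;
}
```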
As a bonus, here's a little something you might not have known about the null pointer, which was (first formally) implemented by C.A.R. Hoare for the Algol W language in 1965.
I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.
Unexpected null pointers can be caused either by programmer error or by runtime error. Runtime errors are things a programmer can't prevent, like malloc failing due to low memory, the network dropping a packet, or the user entering something stupid. Programmer errors are caused by a programmer using the function incorrectly.
The general rule of thumb I've seen is that runtime errors should always be checked, but programmer errors don't have to be checked every time. Let's say some idiot programmer directly called graph_get_current_column_color(0). It will segfault the first time it's called, but once you fix it, the fix is compiled in permanently. No need to check every single time it's run.
Sometimes, especially in third party libraries, you'll see an assert to check for the programmer errors instead of an if statement. That allows you to compile in the checks during development, and leave them out in production code. I've also occasionally seen gratuitous checks where the source of the potential programmer error is far removed from the symptom.
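The assert style for programmer errors might look like this (the function name and graph data are illustrative):

```cpp
#include <cassert>
#include <cstddef>

// An assert documents and checks the precondition in debug builds;
// defining NDEBUG removes the check from production code entirely.
int column_color(const int* graph, std::size_t col) {
    assert(graph != NULL && "caller must pass a valid graph");
    return graph[col];
}

static const int demo_graph[] = {10, 20, 30};   // illustrative data
```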
Obviously, you can always find someone more pedantic, but most C programmers I know favor less cluttered code over code that is marginally safer. And "safer" is a subjective term. A blatant segfault during development is preferable to a subtle corruption error in the field.
Kernighan & Plauger, in "Software Tools", wrote that they would check everything, and, for conditions that they believed could in fact never happen, they would abort with an error message "Can't happen".
They report being rapidly humbled by the number of times they saw "Can't happen" come out on their terminals.
You should ALWAYS check the pointer for NULL before you (attempt to) dereference it. ALWAYS. The amount of code you duplicate checking for NULLs that don't happen, and the processor cycles you "waste", will be more than paid for by the number of crashes you don't have to debug from nothing more than a crash dump - if you're that lucky.
If the pointer is invariant inside a loop, it suffices to check it outside the loop, but you should then "copy" it into a scope-limited local variable, for use by the loop, that adds the appropriate const decorations. In this case, you MUST ensure that every function called from the loop body includes the necessary const decorations on the prototypes, ALL THE WAY DOWN. If you don't, or can't (because of e.g. a vendor package or an obstinate coworker), then you must check it for NULL EVERY TIME IT COULD BE MODIFIED, because sure as COL Murphy was an incurable optimist, someone IS going to zap it when you aren't looking.
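The loop-invariant copy described above might look like this sketch (names are illustrative):

```cpp
#include <cstddef>

// Check the pointer once outside the loop, then work through a
// scope-limited, fully const-qualified copy: the loop body cannot
// reseat the pointer or modify what it points to.
long checked_sum(const int* data, std::size_t n) {
    if (!data)
        return -1;                      // single null check
    const int* const p = data;          // const copy for the loop
    long total = 0;
    for (std::size_t i = 0; i < n; ++i)
        total += p[i];
    return total;
}

static const int sample[] = {1, 2, 3, 4};   // illustrative data
```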
If you are inside a function, and the pointer is supposed to be non-NULL coming in, you should verify it.
If you are receiving it from a function, and it is supposed to be non-NULL coming out, you should verify it. malloc() is particularly notorious for this. (Nortel Networks, now defunct, had a hard-and-fast written coding standard about this. I got to debug a crash at one point, that I traced back to malloc() returning a NULL pointer and the idiot coder not bothering to check it before he wrote to it, because he just KNEW he had plenty of memory... I said some very nasty things when I finally found it.)
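A common way to honor such a rule for malloc is a checking wrapper (checked_malloc is a convention, not a standard function):

```cpp
#include <cstdio>
#include <cstdlib>

// Verify malloc's result before anyone dereferences it. Here the
// failure policy is to print and abort; a library would more likely
// return NULL and let the caller decide how to recover.
void* checked_malloc(std::size_t size) {
    void* p = std::malloc(size);
    if (p == NULL) {
        std::fputs("out of memory\n", stderr);
        std::exit(EXIT_FAILURE);
    }
    return p;
}
```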
I'd simply write if (!ptr).
NULL is basically just 0 and !0 is true.
Be sure to include a definition of NULL.
#include <stddef.h>
void *X = NULL;
// do stuff
if (X != NULL) // etc.
Headers such as <stdio.h> are also required to define NULL, so including them often gives you the definition without pulling in <stddef.h> explicitly.
Finally, it is interesting to look at the definition of NULL in stddef.h and by doing this you will see why your initial guess at what to do is interesting.
It doesn't really matter. Even if you stacked all the pointers up in an array and looped over it, or OR-ed all the values together, you would still have to examine them one after another. And if you have something like if (a != NULL && b != NULL && ... && z != NULL), the compiler will convert that to as many instructions as it would need in the other cases anyway.
The only thing you might save by using an array that you loop over is perhaps some memory at some point, but I don't think that is what you were looking for.
No, there is not. Think about it: to really be sure that not a single one of your values is zero, you have to look at each and every one of them. As you correctly noted, it is possible to short-circuit once a zero value has been found. I would recommend something similar to this:
int all_nonzero = 1;
for (int i = 0; i < null_list_len; ++i) {
    if (null_list[i] == 0) {   // found a zero: no need to look further
        all_nonzero = 0;
        break;
    }
}
if (all_nonzero)
    // do stuff
(Note that accumulating with the bitwise &= operator would be wrong here: two non-zero values such as 1 and 2 AND together to 0.)
You can improve the run time if you have more assumptions about the values you are testing. If, for example, you knew that the null_list array is sorted, you would only have to check whether the very first entry is zero, since a non-zero first entry would imply that all other values are also greater than zero.
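The sorted-array shortcut can be sketched like this (assuming ascending order and non-negative values, as the paragraph does; the data arrays are illustrative):

```cpp
#include <cstddef>

// In a sorted, non-negative array any zero must sit at the front,
// so inspecting the first element answers the whole question.
bool sorted_contains_zero(const int* a, std::size_t n) {
    return n > 0 && a[0] == 0;
}

static const int with_zero[]    = {0, 1, 5};   // illustrative data
static const int without_zero[] = {1, 2, 5};
```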