Both stdio.h and stdlib.h are, in fact, required to define NULL, all the way back to the original ANSI C standard in 1989 [1] (unfortunately this is a .txt file, so I can't link to a specific section; search for 4.9 INPUT/OUTPUT <stdio.h> and/or 4.10 GENERAL UTILITIES <stdlib.h>, and then scroll down a little). If either of the minimized test programs
#include <stdio.h>
void *p = NULL;
or
#include <stdlib.h>
void *p = NULL;
fails to compile to an object file, then your C implementation is buggy. (If the above test programs do not fail to compile, you're gonna need to do some delta-minimization on your actual program, and probably then track down your wiseacre cow-orker who thought it would be funny to put #undef NULL in an application header file.)
NULL is also required to be defined in several other standard headers, but its true home, as you may guess from the cross-references to section 4.1.5 to explain what NULL is supposed to be defined to, is stddef.h. A C implementation that fails to define NULL in stddef.h is egregiously buggy. Also, stddef.h is one of the very few headers that is required to be provided by a "freestanding implementation"; if you are working in an embedded environment, it's possible that they thought they could get away with leaving NULL out of stdio.h or stdlib.h, but they have no excuse whatsoever for leaving it out of stddef.h.
In the alternative, just use 0 for the null pointer constant. That's perfectly fine style as long as all your functions have prototypes. (You have to cast it to pass it correctly to a function that takes a variable number of arguments, e.g. to execl, but you have to cast NULL to pass it correctly to a function that takes a variable number of arguments, so it comes out in the wash.)
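For instance, here is a sketch of the variadic case mentioned above (the function name run_ls and the /bin/ls path are made up for illustration; execl requires its argument list to be terminated by a null pointer):
#include <unistd.h>

/* The cast makes the compiler pass a pointer-sized null; default argument
   promotions would not do that for a bare 0 (or a bare NULL defined as 0). */
int run_ls(void)
{
    return execl("/bin/ls", "ls", "-l", (char *)0);
    /* equivalently: execl("/bin/ls", "ls", "-l", (char *)NULL); */
}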
[1] Footnote for historians: yes, the linked document really is the ANSI C standard, not the ISO standard with nigh-identical wording (but very different section numbering) that came out a year later. I am not aware of any copy of the 1990 edition of the ISO C standard that is available online at no charge.
Answer from zwol on Stack Overflow

NULL is not a built-in constant in the C or C++ languages. In fact, in C++ it's more or less obsolete; just use a plain literal 0 instead, and the compiler will do the right thing depending on the context.
In newer C++ (C++11 and higher), use nullptr (as pointed out in a comment, thanks).
Otherwise, add
#include <stddef.h>
to get the NULL definition.
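For example, this minimal translation unit should compile with any conforming implementation (the variable name is only for illustration):
#include <stddef.h>  /* defines NULL, along with size_t, ptrdiff_t, and offsetof */

void *p = NULL;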
Do use NULL. It is just #defined as 0 anyway and it is very useful to semantically distinguish it from the integer 0.
There are problems with using 0 (and hence NULL). For example:
void f(int);
void f(void*);
f(0); // Not ambiguous to the compiler: calls f(int), even if a null pointer was intended.
C++11 (formerly known as C++0x) introduced nullptr to fix this:
f(nullptr); // Calls f(void*).
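The same pitfall applies to NULL itself. A sketch, assuming the two common pre-C++11 definitions of NULL (0 and 0L):
f(NULL); // if NULL expands to 0: silently calls f(int);
         // if NULL expands to 0L: ambiguous between f(int) and f(void*), so it fails to compile.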
Consider the following code:
#include <stdio.h>
#include <stdlib.h>
int main()
{
    int* p = NULL;
    printf("p is %p\n", p);
    printf("p += 1 is %p\n", p += 1);
    return 0;
}

Output using clang test.c -Wall -Wextra:

p is (nil)
p += 1 is 0x4
Output using clang test.c -fsanitize=undefined:

p is (nil)
test.c:9:29: runtime error: applying non-zero offset 4 to null pointer
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior test.c:9:29 in
p += 1 is 0x4
However, since p is the NULL pointer, does p += 1 induce undefined behaviour? Is -fsanitize correct?
The arithmetic itself is not undefined here. Look up how NULL is defined.
Edit: I stand corrected (see below)
Pointer arithmetic is undefined behavior unless kept within the bounds of an array object (or one index past the end). So this should be UB.
Edit: I'm being downvoted. Can someone explain where I've misunderstood? This is from the standard, emphasis mine.
When an expression that has integer type is added to or subtracted from a pointer, the result has the type of the pointer operand. If the pointer operand points to an element of an array object, and the array is large enough, the result points to an element offset from the original element such that the difference of the subscripts of the resulting and original array elements equals the integer expression. In other words, if the expression P points to the i-th element of an array object, the expressions (P)+N (equivalently, N+(P)) and (P)-N (where N has the value n) point to, respectively, the i+n-th and i−n-th elements of the array object, provided they exist. Moreover, if the expression P points to the last element of an array object, the expression (P)+1 points one past the last element of the array object, and if the expression Q points one past the last element of an array object, the expression (Q)-1 points to the last element of the array object. If both the pointer operand and the result point to elements of the same array object, or one past the last element of the array object, the evaluation shall not produce an overflow; otherwise, the behavior is undefined. If the result points one past the last element of the array object, it shall not be used as the operand of a unary * operator that is evaluated.
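To make the quoted rule concrete, here is a minimal sketch (the names are made up for illustration):
#include <stddef.h>

void demo(void)
{
    int a[4];
    int *ok = a + 4;   /* defined: points one past the last element of a */
    int *p = NULL;
    /* p + 1 would be undefined behavior: NULL does not point to an element */
    /* of (or one past) any array object, so the quoted rule cannot apply.  */
    (void)ok;
    (void)p;
}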
In C and C++, pointers are inherently unsafe, that is, when you dereference a pointer, it is your own responsibility to make sure it points somewhere valid; this is part of what "manual memory management" is about (as opposed to the automatic memory management schemes implemented in languages like Java, PHP, or the .NET runtime, which won't allow you to create invalid references without considerable effort).
A common solution that catches many errors is to set all pointers that don't point to anything to NULL (or, in correct C++, 0), and to check for that before accessing the pointer. Specifically, it is common practice to initialize all pointers to NULL (unless you already have something to point them at when you declare them), and to set them to NULL when you delete or free() them (unless they go out of scope immediately after that). Example (in C, but also valid C++):
void fill_foo(int* foo) {
    *foo = 23; // this will crash and burn if foo is NULL
}
A better version:
#include <stdio.h> /* for printf */

void fill_foo(int* foo) {
    if (!foo) { // this is the NULL check
        printf("This is wrong\n");
        return;
    }
    *foo = 23;
}
Without the null check, passing a NULL pointer into this function will cause a segfault, and there is nothing you can do - the OS will simply kill your process and maybe core-dump or pop up a crash report dialog. With the null check in place, you can perform proper error handling and recover gracefully - correct the problem yourself, abort the current operation, write a log entry, notify the user, whatever is appropriate.
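A minimal sketch of the convention described above (the names are made up for illustration): initialize pointers to NULL, check before use, and reset to NULL right after free().
#include <stdlib.h>

void example(void)
{
    int *buf = NULL;                  /* nothing to point at yet */
    buf = malloc(10 * sizeof *buf);
    if (buf != NULL) {                /* the null check before use */
        buf[0] = 23;
        free(buf);
        buf = NULL;                   /* reset after free so a stale pointer is not reused */
    }
}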
The other answers pretty much covered your exact question. A null check is made to be sure that the pointer you received actually points to a valid instance of a type (objects, primitives, etc).
I'm going to add my own piece of advice here, though. Avoid null checks. :) Null checks (and other forms of Defensive Programming) clutter code up, and actually make it more error prone than other error-handling techniques.
My favorite technique when it comes to object pointers is to use the Null Object pattern. That means returning a pointer (or, even better, a reference) to an empty array or list instead of null, returning an empty string ("") instead of null, or even the string "0" (or whatever means "nothing" in the context) where you expect it to be parsed to an integer.
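A minimal sketch of that idea in C (the function name display_name is made up for illustration): return an empty string instead of NULL so callers never need a null check.
#include <stdio.h>

/* Hypothetical helper: returns something printable even when no name is known. */
const char *display_name(const char *name)
{
    return name ? name : "";   /* "" plays the role of the null object */
}

int main(void)
{
    printf("[%s]\n", display_name(NULL)); /* prints [] instead of crashing */
    return 0;
}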
As a bonus, here's a little something you might not have known about the null pointer, which was (first formally) implemented by C.A.R. Hoare for the Algol W language in 1965.
I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.