Actually, you can use a literal 0 anyplace you would use NULL.
Section 6.3.2.3p3 of the C standard states:
An integer constant expression with the value 0, or such an expression cast to type void *, is called a null pointer constant. If a null pointer constant is converted to a pointer type, the resulting pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function.
And section 7.19p3 states:
The macros are:
NULL which expands to an implementation-defined null pointer constant
So 0 qualifies as a null pointer constant, as does (void *)0 and NULL. However, the use of NULL is preferred, as it makes it more evident to the reader that a null pointer is intended rather than the integer value 0.
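As a minimal illustration (not part of the quoted answer), all three spellings of a null pointer constant can initialize a pointer, and the resulting null pointers compare equal:

#include <stddef.h>  /* defines NULL */
#include <stdio.h>

int main(void) {
    int *a = 0;          /* literal 0 used as a null pointer constant */
    int *b = (void *)0;  /* the cast form; in C it converts implicitly to int * */
    int *c = NULL;       /* the macro, preferred for readability */

    printf("%d %d\n", a == b, b == c);  /* all null pointers compare equal: prints "1 1" */
    return 0;
}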
NULL is used to make it clear it is a pointer type.
Ideally, the C implementation would define NULL as ((void *) 0) or something equivalent, and programmers would always use NULL when they want a null pointer constant.
If this is done, then, when a programmer has, for example, an int *x and accidentally writes *x = NULL;, then the compiler can recognize that a mistake has been made, because the left side of = has type int, and the right side has type void *, and this is not a proper combination for assignment.
In contrast, if the programmer accidentally writes *x = 0; instead of x = 0;, then the compiler cannot recognize this mistake, because the left side has type int, and the right side has type int, and that is a valid combination.
Thus, when NULL is defined well and is used, mistakes are detected earlier.
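Here is a short sketch of the kind of mistake this catches, assuming NULL is defined as ((void *) 0). The snippet is meant to be compiled, not run, and the exact diagnostic wording is compiler-specific:

#include <stddef.h>

void set_to_null(int *x) {
    x = NULL;   /* intended: make x a null pointer */
    *x = NULL;  /* mistake: assigns a void * to an int; with NULL defined as
                   ((void *) 0) the compiler can flag the type mismatch */
    *x = 0;     /* the same mistake spelled with 0: int = int, so it
                   compiles silently and goes undetected */
}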
In particular, to answer your question “Is there a context in which just plain literal 0 would not work exactly the same?”:
- In correct code, NULL and 0 may be used interchangeably as null pointer constants. 0 will function as an integer (non-pointer) constant, but NULL might not, depending on how the C implementation defines it (see the sketch below).
- For the purpose of detecting errors, NULL and 0 do not work exactly the same; using NULL with a good definition helps detect some mistakes that using 0 does not.
The C standard allows 0 to be used for null pointer constants for historic reasons. However, this is not beneficial except for allowing previously written code to compile in compilers using current C standards. New code should avoid using 0 as a null pointer constant.
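To illustrate the first point above, here is a sketch in which 0 works as an ordinary integer constant but NULL may not, depending on how the implementation defines it:

#include <stddef.h>

int main(void) {
    int i = 0;     /* 0 as a plain integer constant: always valid */
    int j = NULL;  /* if NULL is ((void *) 0), this initializes an int from a
                      void * and the compiler can warn or reject it; if NULL
                      is plain 0, it compiles without complaint */
    return i + j;
}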
Since 0 is an int like any other integer constant, sizeof(0) is sizeof(int), which is commonly 4 bytes.
sizeof(NULL) depends on how the implementation defines NULL: if NULL is ((void *)0), it is the size of a pointer, commonly 8 bytes (64 bits) on a 64-bit platform; if NULL is plain 0, it is just sizeof(int) again.
On such a platform pointers occupy 8 bytes, compared with 1 byte for char and typically 4 for int, but 8 bytes is not a ceiling for all data types (long double is often larger), and the size of NULL is simply the size of whatever its definition expands to; it is not chosen so that NULL can "denote 0 for all datatypes".
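A small check you can run yourself; the exact numbers are implementation-defined, and the values in the comments assume a typical 64-bit platform where NULL is ((void *) 0):

#include <stddef.h>
#include <stdio.h>

int main(void) {
    printf("sizeof(0)     = %zu\n", sizeof(0));      /* sizeof(int), commonly 4 */
    printf("sizeof(NULL)  = %zu\n", sizeof(NULL));   /* pointer size, commonly 8,
                                                        if NULL is ((void *) 0) */
    printf("sizeof(int *) = %zu\n", sizeof(int *));  /* pointer size, commonly 8 */
    return 0;
}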
In C and C++, pointers are inherently unsafe, that is, when you dereference a pointer, it is your own responsibility to make sure it points somewhere valid; this is part of what "manual memory management" is about (as opposed to the automatic memory management schemes implemented in languages like Java, PHP, or the .NET runtime, which won't allow you to create invalid references without considerable effort).
A common solution that catches many errors is to set all pointers that don't point to anything as NULL (or, in correct C++, 0), and checking for that before accessing the pointer. Specifically, it is common practice to initialize all pointers to NULL (unless you already have something to point them at when you declare them), and set them to NULL when you delete or free() them (unless they go out of scope immediately after that). Example (in C, but also valid C++):
void fill_foo(int* foo) {
    *foo = 23; // this will crash and burn if foo is NULL
}
A better version:
#include <stdio.h> // needed for printf

void fill_foo(int* foo) {
    if (!foo) { // this is the NULL check
        printf("This is wrong\n");
        return;
    }
    *foo = 23;
}
Without the null check, passing a NULL pointer into this function will cause a segfault, and there is nothing you can do - the OS will simply kill your process and maybe core-dump or pop up a crash report dialog. With the null check in place, you can perform proper error handling and recover gracefully - correct the problem yourself, abort the current operation, write a log entry, notify the user, whatever is appropriate.
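A minimal sketch of the surrounding practice described above: initialize pointers to NULL, check before use, and set them back to NULL after free():

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int *p = NULL;            /* initialized: points to nothing yet */

    p = malloc(sizeof *p);    /* may fail and return NULL */
    if (!p) {                 /* the null check before first use */
        fprintf(stderr, "out of memory\n");
        return 1;
    }

    *p = 23;                  /* safe: p is known to be non-null here */
    printf("%d\n", *p);

    free(p);
    p = NULL;                 /* a later null check now sees it as invalid */
    return 0;
}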
The other answers pretty much covered your exact question. A null check is made to be sure that the pointer you received actually points to a valid instance of a type (objects, primitives, etc).
I'm going to add my own piece of advice here, though. Avoid null checks. :) Null checks (and other forms of Defensive Programming) clutter code up, and actually make it more error prone than other error-handling techniques.
My favorite technique when it comes to object pointers is to use the Null Object pattern. That means returning a (pointer - or even better, reference to an) empty array or list instead of null, or returning an empty string ("") instead of null, or even the string "0" (or something equivalent to "nothing" in the context) where you expect it to be parsed to an integer.
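As a rough sketch of the Null Object idea in C (the function name and behavior are made up purely for illustration): return an empty string instead of NULL, so callers can use the result without a null check:

#include <stdio.h>

/* Hypothetical lookup: returns an empty string rather than NULL when
   nothing is found, so the caller never has to null-check the result. */
static const char *nickname_for(int user_id) {
    if (user_id == 42)
        return "neo";
    return "";  /* the "null object": a valid but empty value */
}

int main(void) {
    printf("nickname: '%s'\n", nickname_for(42)); /* prints 'neo' */
    printf("nickname: '%s'\n", nickname_for(7));  /* prints '' instead of crashing */
    return 0;
}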
As a bonus, here's a little something you might not have known about the null pointer, which was (first formally) implemented by C.A.R. Hoare for the Algol W language in 1965.
I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.
So I use C# and I often see many devs use null.
In what kinds of situations do you use this value?
I am reading a C# programming guide; I am at the part about clearing memory now and I haven't encountered null yet. Should I be worried?
If we want to "nullify" something, why don't we create a pointer in a controlled way that points to the value 0?