So I use C# and I often see many devs use null.
What is it for, and in what kind of situations do you use it?
I am reading a C# programming guide, I am on the Clearing Memory chapter now, and I haven't encountered null yet. Should I be worried?
Related:
- language agnostic - What is the purpose of null? - Stack Overflow
- Why is there a NULL in the C language? - Stack Overflow
- What's the difference between Null, NA, #NULL, nothing, ""
- programming languages - If null is a billion dollar mistake, what is the solution to represent a non-initialized object? - Software Engineering Stack Exchange
Null: The Billion Dollar Mistake (Tony Hoare):
I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years. In recent years, a number of program analysers like PREfix and PREfast in Microsoft have been used to check references, and give warnings if there is a risk they may be non-null. More recent programming languages like Spec# have introduced declarations for non-null references. This is the solution, which I rejected in 1965.
null is a sentinel value that is not an integer, not a string, not a boolean - not anything really, except something to hold and be a "not there" value. Don't treat it as, or expect it to be, a 0, an empty string, or an empty list. Those are all valid values, and they are genuinely valid in many circumstances - the idea of null instead means there is no value there.
Perhaps it's a little bit like a function throwing an exception instead of returning a value. Except instead of manufacturing and returning an ordinary value with a special meaning, it returns a special value that already has a special meaning. If a language expects you to work with null, then you can't really ignore it.
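To make the "null is not 0 or empty" distinction concrete in C# (the language from the original question), here is a minimal sketch; the variable names are made up purely for illustration:

using System;
using System.Collections.Generic;

class NullVersusEmpty
{
    static void Main()
    {
        int count = 0;                            // a genuine value: the count is zero
        string comment = "";                      // a genuine value: an empty string
        List<string> tags = new List<string>();   // a genuine value: a list with no elements
        string middleName = null;                 // no value at all: "nothing is there"

        // 0, "" and the empty list can be used directly;
        // null has to be checked for before the reference is used.
        Console.WriteLine(count + comment.Length + tags.Count);

        if (middleName == null)
        {
            Console.WriteLine("No middle name on record.");
        }
    }
}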
Actually, you can use a literal 0 anyplace you would use NULL.
Section 6.3.2.3p3 of the C standard states:
An integer constant expression with the value 0, or such an expression cast to type void *, is called a null pointer constant. If a null pointer constant is converted to a pointer type, the resulting pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function.
And section 7.19p3 states:
The macros are:
NULL, which expands to an implementation-defined null pointer constant
So 0 qualifies as a null pointer constant, as does (void *)0 and NULL. The use of NULL is preferred, however, as it makes it more evident to the reader that a null pointer is being used and not the integer value 0.
NULL is used to make it clear that a pointer type is meant.
Ideally, the C implementation would define NULL as ((void *) 0) or something equivalent, and programmers would always use NULL when they want a null pointer constant.
If this is done, then, when a programmer has, for example, an int *x and accidentally writes *x = NULL;, then the compiler can recognize that a mistake has been made, because the left side of = has type int, and the right side has type void *, and this is not a proper combination for assignment.
In contrast, if the programmer accidentally writes *x = 0; instead of x = 0;, then the compiler cannot recognize this mistake, because the left side has type int, and the right side has type int, and that is a valid combination.
Thus, when NULL is defined well and is used, mistakes are detected earlier.
In particular, to answer your question "Is there a context in which just plain literal 0 would not work exactly the same?":
- In correct code, NULL and 0 may be used interchangeably as null pointer constants. 0 will function as an integer (non-pointer) constant, but NULL might not, depending on how the C implementation defines it.
- For the purpose of detecting errors, NULL and 0 do not work exactly the same; using NULL with a good definition helps detect some mistakes that using 0 does not.
The C standard allows 0 to be used as a null pointer constant for historical reasons. However, this has no benefit beyond allowing previously written code to compile under current C standards. New code should avoid using 0 as a null pointer constant.
The problem isn't null itself. It's the implicit nullability of all object references, as is the case in Java, Python, Ruby (and previously C#, although that picture is changing).
Imagine you're using a language where every type T really means T | Null. Suppose you had a system that takes such a nullable reference at its entry point, does some null checking, and wants to forward on to another function to do the "meat" of the work. That other function has no possible way to express "a non-null T" in Java (absent annotations), Python or Ruby. The interface can't encode its expectation for a non-null value, thus the compiler can't stop you from passing null. And if you do, fun things happen.
The "solution" (or "a solution", I should say), is to make your language's references non-nullable by default (equivalently, not having any null references at all, and introducing them at the library level using an Optional/Option/Maybe monad). Nullability is opt-in, and explicit. This is the case in Swift, Rust, Kotlin, TypeScript, and now C#. That way, you clearly distinguish T? vs T.
I would not recommend doing what you did, which is to obscure the nullability behind a type alias.
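Since the answer above notes that C# now falls in the non-nullable-by-default camp, here is a rough sketch of what opt-in nullability looks like with C#'s nullable reference types enabled; PrintAddress and the variable names are hypothetical, invented only for illustration:

#nullable enable
using System;

class Example
{
    // With nullable reference types on, 'string' means "never null";
    // 'string?' is the explicit opt-in for "may be null".
    static void PrintAddress(string address)
    {
        Console.WriteLine(address.ToUpper());
    }

    static void Main()
    {
        string? maybeAddress = null;

        // PrintAddress(maybeAddress);   // compiler warning: possible null reference argument

        if (maybeAddress != null)
        {
            PrintAddress(maybeAddress);  // fine: flow analysis knows it is non-null here
        }
    }
}

The interface can now encode "a non-null string", which is exactly the guarantee the answer says implicitly nullable references cannot express.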
You are probably looking for optionals. Here is an example in TypeScript, which you use:
http://dotnetpattern.com/typescript-optional-parameters
The idea is to explicitly state:
- A) whether an object can be null or not
- B) that nullable objects must be unwrapped before use, to avoid null pointer exceptions (NPEs)
The TypeScript parameter example I linked does not really do B); the check is still manual:
if (address !== undefined) {
    // Do something with address
} else {
    // Address not found, do something else
}
Maybe you can find a better way in TypeScript, like the way it is done in Swift:
if let a = address {
    // Do something with a
} else {
    // Address not found. Do something else.
}
Or Kotlin:
address?.let { a ->
    // Do something with a
} ?: {
    // Address not found. Do something else
}()
The difference is that in the TypeScript example above you can still forget the check and just use address directly, leading to an NPE. Optionals like in Swift or Kotlin force you to check that the value exists before you can use it.
A common mistake then is to force-unwrap the value because you think it always exists:
address!.doSomething() // In Swift
address!!.doSomething() // in Kotlin
If you see code like this, run! Usually a value is optional for a reason, so skipping over that safety measure and claiming "there is a value, I am sure" leads you straight back to the old NPE-prone coding we are trying to get rid of.
A quick search suggests that you are not really able to do that in TypeScript without voodoo:
Is it possible to unwrap an optional/nullable value in TypeScript?
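For completeness, and because the original question was about C#: with nullable reference types enabled, C# has the same escape hatch as Swift's ! and Kotlin's !!, namely the null-forgiving operator, and the same "if you see this, run" advice applies. A minimal sketch, with the Customer/Address names invented for illustration:

#nullable enable
using System;

class Customer
{
    public string? Address;   // nullable on purpose: a customer may have no address

    public void Ship()
    {
        // Null-forgiving operator: "trust me, it is not null". It silences the
        // compiler warning but can still throw NullReferenceException at runtime,
        // just like ! in Swift and !! in Kotlin.
        Console.WriteLine(Address!.ToUpper());

        // Safer: check before use, so the "address not found" case is handled.
        if (Address != null)
        {
            Console.WriteLine(Address.ToUpper());
        }
        else
        {
            Console.WriteLine("Address not found.");
        }
    }
}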