If you need to represent unknown data in a column, make it nullable. If the column will always have data, it's better to make it NOT NULL, because:
- Dealing with nulls can be annoying and counterintuitive
- It saves a bit of space
- On some database systems, null values are not indexed
When a field is set to NOT NULL, it cannot be left empty, which means you have to specify a value for that field when inserting a record.
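For example (a minimal sketch; the table and column names here are made up for illustration):

CREATE TABLE employees (          -- made-up example schema
    id       INT          NOT NULL,
    name     VARCHAR(100) NOT NULL,  -- must be supplied on every INSERT
    nickname VARCHAR(100)            -- nullable: may be omitted
);

INSERT INTO employees (id, name) VALUES (1, 'Ada');        -- OK; nickname becomes NULL
INSERT INTO employees (id, nickname) VALUES (2, 'Grace');  -- Error: name is NOT NULL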
So I use C# and I often see many devs use null.
In what kinds of situations do you use it?
I am reading a C# programming guide and I am on clearing memory now, and I haven't encountered null yet. Should I be worried?
Where and why exactly is null used?
What exactly are NULL and NOT NULL? To my understanding, we use NOT NULL when it's mandatory to insert some value in that field. Also, when we add a check constraint, will the column be NOT NULL by default?
When adding a new column with ALTER TABLE, its default values are NULL, so how would I be able to insert values into it? And is it right to give a NOT NULL constraint to that new column while adding it through ALTER TABLE? Basically, when should NULL and when should NOT NULL be used?...
God, this is so confusing - please help me. I know I'm asking a lot, but I'm really confused.
TL;DR
The key to understanding what null! means is understanding the ! operator. You may have used it before as the "not" operator. However, since C# 8.0 and its new nullable-reference-types feature, the operator got a second meaning: applied to an expression, it controls that expression's nullability. In this role it is called the "Null Forgiving Operator".
Basically, null! applies the ! operator to the value null. This overrides the nullability of the value null to non-nullable, telling the compiler that null is to be treated as a "non-null" value.
Typical usage
Assuming this definition:
class Person
{
    // Not every person has a middle name. We express "no middle name" as "null"
    public string? MiddleName;
}
The usage would be:
void LogPerson(Person person)
{
    Console.WriteLine(person.MiddleName.Length);  // WARNING: may be null
    Console.WriteLine(person.MiddleName!.Length); // No warning
}
This operator basically turns off the compiler null checks for this usage.
Technical Explanation
Here is the groundwork you will need to understand what null! means.
Null Safety
C# 8.0 tries to help you manage your null-values. Instead of allowing you to assign null to everything by default, they have flipped things around and now require you to explicitly mark everything you want to be able to hold a null value.
This is a super useful feature: it allows you to avoid NullReferenceExceptions by forcing you to make a decision and then enforcing it.
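For example, with the nullable context enabled (a small sketch; the warning numbers are those produced by the C# compiler):

#nullable enable

string name = "Ada";    // non-nullable reference
string? maybe = null;   // explicitly nullable

name = null;            // warning CS8600: converting null to a non-nullable type
int len = maybe.Length; // warning CS8602: dereference of a possibly null reference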
How it works
When talking about null-safety, a variable can be in one of two states:
- Nullable - Can be null.
- Non-Nullable - Cannot be null.
Since C# 8.0, all reference types are non-nullable by default. Value types have always been non-nullable - C# 2.0 only added Nullable<T> so you can opt in to nullable value types!
The "nullability" can be modified by 2 new (type-level) operators:
!= fromNullabletoNon-Nullable?= fromNon-NullabletoNullable
These operators are counterparts to one another. The Compiler uses the information that you define with these operators to ensure null-safety.
Examples
? Operator usage.
This operator tells the compiler that a variable can hold a null value. It is used when defining variables.
Nullable

string? x;

- x is a reference type - so by default non-nullable.
- We apply the ? operator - which makes it nullable.
- x = null works fine.

Non-Nullable

string y;

- y is a reference type - so by default non-nullable.
- y = null generates a warning, since you assign a null value to something that is not supposed to be null.
Nice to know: for value types, int? is syntactic sugar for System.Nullable<int>. For reference types there is no such wrapper - Nullable<T> only accepts value types - so object? is purely a compile-time annotation.
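You can verify the difference at runtime; a small sketch:

using System;

class NullableDemo
{
    static void Main()
    {
        // Nullable value types are a real wrapper struct:
        Console.WriteLine(typeof(int?) == typeof(Nullable<int>));  // True

        // Nullable reference types exist only at compile time;
        // string? and string are the same type at runtime:
        string? s = "hi";
        Console.WriteLine(s.GetType() == typeof(string));          // True
    }
}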
! Operator usage.
This operator tells the compiler that something that could be null is safe to be accessed. You express the intent to "not care" about null safety in this instance. It is used when accessing variables.

string x;
string? y;

x = y - Illegal!

- Warning: "y" may be null - the left side of the assignment is non-nullable, but the right side is nullable.
- So it does not work, since it is semantically incorrect.

x = y! - Legal!

- y is a reference type with the ? type modifier applied, so it is nullable if not proven otherwise.
- We apply the ! operator to y, which overrides its nullability settings to make it non-nullable.
- The right and left side of the assignment are now both non-nullable, which is semantically correct.

WARNING: The ! operator only turns off the compiler checks at a type-system level - at runtime, the value may still be null.
Use carefully!
You should try to avoid using the Null-Forgiving-Operator; its usage may be the symptom of a design flaw in your system, since it negates the null-safety guarantees you get from the compiler.
Reasoning
Using the ! operator can create very hard-to-find bugs. If you have a property that is marked non-nullable, you will assume you can use it safely. But at runtime you suddenly run into a NullReferenceException and scratch your head - a value actually became null after you bypassed the compiler checks with !.
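A minimal sketch of such a bug, reusing the Person class from above:

#nullable enable
using System;

class Person
{
    public string? MiddleName;
}

class Demo
{
    static void Main()
    {
        var person = new Person();  // MiddleName is null
        // The ! suppresses the compiler warning, but the value is still null,
        // so this line throws a NullReferenceException at runtime:
        Console.WriteLine(person.MiddleName!.Length);
    }
}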
Why does this operator exist then?
There are valid use-cases (outlined in detail below) where usage is appropriate. However, in 99% of the cases, you are better off with an alternative solution. Please do not slap dozens of !'s in your code, just to silence the warnings.
- In some (edge) cases, the compiler is not able to detect that a nullable value is actually non-nullable.
- Easier legacy code-base migration.
- In some cases, you just don't care if something becomes null.
- When working with unit tests, you may want to check the behavior of code when a null comes through.
Ok!? But what does null! mean?
It tells the compiler that null is not a nullable value. Sounds weird, doesn't it?
It is the same as y! from the example above. It only looks weird since you apply the operator to the null literal. But the concept is the same. In this case, the null literal is the same as any other expression/type/value/variable.
The null literal type is the only type that is nullable by default! But as we learned, the nullability of any type can be overridden with ! to non-nullable.
The type system does not care about the actual/runtime value of a variable. Only its compile-time type and in your example the variable you want to assign to LastName (null!) is non-nullable, which is valid as far as the type-system is concerned.
Consider this piece of code, which spells out the same idea in two steps:

object? maybeNull = null; // a nullable expression holding null
LastName = maybeNull!;    // equivalent in spirit to LastName = null!;
null! is used to assign null to non-nullable variables, which is a way of promising that the variable won't be null when it is actually used.
I'd use null! in a Visual Studio extension, where properties are initialized by MEF via reflection:
[Import] // Set by MEF
VSImports vs = null!;
[Import] // Set by MEF
IClassificationTypeRegistryService classificationRegistry = null!;
(I hate how variables magically get values in this system, but it is what it is.)
I also use it in unit tests to mark variables initialized by a setup method:
public class MyUnitTests
{
    IDatabaseRepository _repo = null!;

    [OneTimeSetUp]
    public void PrepareTestDatabase()
    {
        ...
        _repo = ...
        ...
    }
}
If you don't use null! in such cases, you'll have to use an exclamation mark every single time you read the variable, which would be a hassle without benefit.
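Roughly, the trade-off looks like this (a sketch; the Query method is made up for illustration):

#nullable enable

interface IDatabaseRepository { void Query(); }  // Query is a made-up method

class WithoutNullForgivingField
{
    IDatabaseRepository? _repo;            // nullable field, assigned in a setup method
    public void Test() => _repo!.Query();  // every single read needs its own !
}

class WithNullForgivingField
{
    IDatabaseRepository _repo = null!;     // one promise at the declaration
    public void Test() => _repo.Query();   // reads stay clean
}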
Note: cases where null! is a good idea are fairly rare. I treat it as somewhat of a last resort.
It's convenient for the way SQL is typically used. Consider this statement:
SELECT people.name, cars.model FROM people
INNER JOIN cars
ON people.car_licenceplate = cars.licenceplate
If NULL = NULL evaluated to TRUE, then this would return all pairs of people with no license plate with all unregistered cars in the database - a usually undesirable result.
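You can see this with a couple of rows that have no plate (a sketch against the tables above, assuming matching columns):

INSERT INTO people (name, car_licenceplate) VALUES ('Alice', NULL);
INSERT INTO cars (model, licenceplate) VALUES ('Unregistered', NULL);

-- The INNER JOIN above returns no row pairing Alice with the unregistered car,
-- because NULL = NULL evaluates to NULL, not TRUE, and the join only keeps
-- rows where the condition is TRUE.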
It's particularly convenient that any NULL value, even deep inside a more complex expression, makes the whole expression NULL, so you won't get a value back - even if other values happen to be NULL too. In other languages you'd need to null-check everything in advance to get that behavior; having it by default is very convenient for the type of things SQL is typically used for.
NULL in SQL is exempt from a lot of other rules too. For example, NULLs are excluded from unique constraints, as shown below. All of this indicates that NULL represents the absence of a value rather than a special value.
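For example, most systems allow several NULLs in a UNIQUE column (SQL Server is a notable exception: it permits only one NULL in a unique index). A sketch with a made-up table:

CREATE TABLE users (
    email VARCHAR(100) UNIQUE
);

INSERT INTO users (email) VALUES (NULL);
INSERT INTO users (email) VALUES (NULL);  -- also succeeds: NULLs don't count as duplicates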
Some other languages also have a ThreeValueBoolean or a similar type that behaves more like a SQL NULL, though only for booleans. And almost every language has similar non-self-equality for NaN. It's not a concept unique to SQL.
One way to look at this is to compare these two questions:
- Is value A definitely the same as value B?
- Is value A definitely different from value B?
On the face of it, these are symmetrical: if question 1 is true, question 2 is false, and vice versa.
But what if both A and B are missing or invalid data points?
- False. We can't know for sure that the two missing or invalid data points are the same.
- False. We can't know for sure that the two missing or invalid data points are different.
That puts us in a peculiar position: A = B and A <> B should both be false, but that means that NOT (A = B) is no longer the same as A <> B, which is surprising.
SQL handles this by returning a further NULL - if the data for A and B is missing, then the information about whether they are the same or different is also missing. This is consistent with other operations on NULL, e.g. NULL + NULL is NULL, because adding two unknown numbers gives you a third unknown number. And since that also includes boolean negation - if A is NULL, then NOT A is also NULL, the result of NOT (A = B) is always the same as A <> B, as we'd intuitively expect.
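A few one-liners show the propagation (in a dialect like PostgreSQL that can SELECT a boolean expression directly):

SELECT NULL = NULL;        -- NULL, not TRUE
SELECT NULL <> NULL;       -- NULL, not TRUE
SELECT NOT (NULL = NULL);  -- still NULL: negating unknown gives unknown
SELECT NULL + 5;           -- NULL: adding an unknown number gives an unknown number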
However, there are situations where we want to ask the strict negation of those questions:
- Is value A not definitely the same as value B? (Strict inverse of question 1)
- Is value A not definitely different from value B? (Strict inverse of question 2)
For these, SQL provides the IS DISTINCT FROM and IS NOT DISTINCT FROM operators.
More commonly, you want to know explicitly that a particular value is or is not null, for which there are the operators IS NULL and IS NOT NULL.
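A sketch of the difference, reusing the people table from the example above (IS DISTINCT FROM is standard SQL but not supported by every system):

-- Matches people with no license plate; car_licenceplate = NULL would match nothing:
SELECT name FROM people WHERE car_licenceplate IS NULL;

-- Treats two NULLs as "not distinct", i.e. as matching:
SELECT a.name, b.name
FROM people a, people b
WHERE a.car_licenceplate IS NOT DISTINCT FROM b.car_licenceplate;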
Null: The Billion Dollar Mistake. Tony Hoare:
I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years. In recent years, a number of program analysers like PREfix and PREfast in Microsoft have been used to check references, and give warnings if there is a risk they may be non-null. More recent programming languages like Spec# have introduced declarations for non-null references. This is the solution, which I rejected in 1965.
null is a sentinel value that is not an integer, not a string, not a boolean - not anything really, except something to hold and be a "not there" value. Don't treat it as or expect it to be a 0, an empty string, or an empty list. Those are all valid values and can be genuinely valid values in many circumstances - the idea of a null instead means there is no value there.
Perhaps it's a little bit like a function throwing an exception instead of returning a value. Except instead of manufacturing and returning an ordinary value with a special meaning, it returns a special value that already has a special meaning. If a language expects you to work with null, then you can't really ignore it.
null is evil
There is a presentation on InfoQ on this topic: Null References: The Billion Dollar Mistake by Tony Hoare
Option type
The alternative from functional programming is using an Option type, that can contain SOME value or NONE.
A good article, The "Option" Pattern, discusses the Option type and provides an implementation of it for Java.
I have also found a bug-report for Java about this issue: Add Nice Option types to Java to prevent NullPointerExceptions. The requested feature was introduced in Java 8.
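For illustration, here is a minimal Option-style type in C# (the names and API are made up; real code would use a library such as the one in the article, or language-level features):

using System;

// A minimal Option type: a value is either Some(value) or None,
// and the caller is forced to handle both cases.
readonly struct Option<T>
{
    private readonly T _value;
    public bool HasValue { get; }

    private Option(T value) { _value = value; HasValue = true; }

    public static Option<T> Some(T value) => new Option<T>(value);
    public static Option<T> None => default;

    public TResult Match<TResult>(Func<T, TResult> some, Func<TResult> none)
        => HasValue ? some(_value) : none();
}

class OptionDemo
{
    static Option<string> FindMiddleName(bool hasOne)
        => hasOne ? Option<string>.Some("James") : Option<string>.None;

    static void Main()
    {
        var middle = FindMiddleName(false);
        // There is no way to "forget" the None case - no null dereference here:
        Console.WriteLine(middle.Match(m => m, () => "(no middle name)"));
    }
}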
The problem is that, because in theory any object can be null and throw an exception when you attempt to use it, your object-oriented code is basically a collection of unexploded bombs.
You're right that graceful error handling can be functionally identical to null-checking if statements. But what happens when something you convinced yourself couldn't possibly be a null is, in fact, a null? Kerboom. Whatever happens next, I'm willing to bet that 1) it won't be graceful and 2) you won't like it.
And do not dismiss the value of "easy to debug." Mature production code is a mad, sprawling creature; anything that gives you more insight into what went wrong and where may save you hours of digging.