When assigning a value from a null-conditional expression, it makes sense for the result to be a nullable value type. Otherwise, if the object were null, the null-conditional would have to return the value type's default, and you don't want that. Therefore, this is good:
bool? IsTerminated = Employee?.IsTerminated;
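For instance, here's a minimal sketch (assuming a hypothetical Employee class with a bool IsTerminated property):

class Employee
{
    public bool IsTerminated { get; set; }
}

class Program
{
    static void Main()
    {
        Employee employee = null;

        // null, not bool's default of false, because employee itself is null
        bool? isTerminated = employee?.IsTerminated;
        System.Console.WriteLine(isTerminated.HasValue); // prints False
    }
}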
However, why does it return a nullable type if I am just checking a condition? You would think that the compiler could figure this out just fine:
if (Employee?.IsTerminated) { /*do something here*/ }
After all, it's just compiling down to this, right?
if (Employee != null && Employee.IsTerminated) { /*do something here*/ }
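As it turns out, the first if statement doesn't even compile; the error is along the lines of:

CS0266: Cannot implicitly convert type 'bool?' to 'bool'. An explicit conversion exists (are you missing a cast?)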
In order to get it to work, I have to do this:
if ((Employee?.IsTerminated).GetValueOrDefault()) { /*do something here*/ }
Between the extra code and having to wrap the expression in parentheses, the whole purpose of the null-conditional's shorthand syntax appears to be defeated. Is this the proper way to handle a null-conditional return value, or is there another way that doesn't involve accounting for a nullable return value?
If, in A?.B, the member B is of a reference type, then the expression's type is just B. Otherwise, if B is a value type, the expression's type is a nullable wrapper around B. So, that's a short answer to your question.
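Under the hood, Employee?.IsTerminated is lowered to roughly this (a simplified sketch; the real lowered code also avoids evaluating Employee twice):

bool? isTerminated = Employee != null ? (bool?)Employee.IsTerminated : null;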
With that being said, in your case, since you're getting back a bool, it's understandable that it'll be wrapped as bool?. Knowing that, you should simply work with it as you normally would with bool?. One way is to do an equality comparison against the desired bool value.
For example:
if (Employee?.IsTerminated == true) { }
It's a bit shorter and easier to read than:
if ((Employee?.IsTerminated).GetValueOrDefault()) { }
The example comparison works because null will never compare equal to a bool.
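To see it across all three cases (again assuming a hypothetical Employee class with a bool IsTerminated property):

Employee missing = null;
Employee active = new Employee { IsTerminated = false };
Employee former = new Employee { IsTerminated = true };

Console.WriteLine(missing?.IsTerminated == true);  // False: null == true is false
Console.WriteLine(active?.IsTerminated == true);   // False
Console.WriteLine(former?.IsTerminated == true);   // True

If you prefer to make the fallback value explicit, the null-coalescing operator reads equivalently: Employee?.IsTerminated ?? false.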