I was confused by the reasoning behind the following:
SELECT * FROM tbl WHERE avalue IS NULL
Returns x rows where avalue is NULL.
SELECT * FROM tbl WHERE avalue <> true
Does not return the rows where avalue is NULL.
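Here is a minimal reproduction (PostgreSQL syntax; the table tbl and its sample rows are made up for illustration):
CREATE TEMP TABLE tbl (avalue boolean);
INSERT INTO tbl VALUES (true), (false), (NULL);

SELECT * FROM tbl WHERE avalue IS NULL;  -- returns the NULL row
SELECT * FROM tbl WHERE avalue <> true;  -- returns only the false row, not the NULL row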
My reasoning (which appears to be incorrect) was that since NULL is a unique value (it isn't even equal to NULL), a row holding NULL should show up in the result set, because it isn't equal to true either.
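You can check the comparisons directly (PostgreSQL syntax); both yield NULL rather than true or false:
SELECT NULL = NULL;   -- yields NULL, not true
SELECT NULL <> true;  -- yields NULL, not false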
I guess you could argue that by writing column <> value you imply that the column has a value, therefore ignoring NULL values altogether.
What is the reasoning behind this, and is it the same in other common SQL DBs?
This behavior seems counter-intuitive to my (apparently mistaken) assumptions, and I wanted to learn why it works this way.
Every halfway decent RDBMS does it the same way, because it's correct. The PostgreSQL manual:
Ordinary comparison operators yield null (signifying "unknown"), not true or false, when either input is null. For example, 7 = NULL yields null, as does 7 <> NULL. When this behavior is not suitable, use the IS [ NOT ] DISTINCT FROM constructs:

expression IS DISTINCT FROM expression
expression IS NOT DISTINCT FROM expression

These expressions perform slightly slower than simple expression <> expression comparison.
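Applied to this question, IS DISTINCT FROM gives the null-safe inequality you expected. A quick sketch, reusing the tbl and avalue names from the query further below:
SELECT avalue,
       avalue <> true               AS ordinary_comparison,  -- NULL for the NULL row
       avalue IS DISTINCT FROM true AS null_safe             -- true for the NULL row
FROM tbl;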
For boolean values there is also the simpler IS NOT [TRUE | FALSE].
To get what you expected in your second query, write:
SELECT * FROM tbl WHERE avalue IS NOT TRUE;
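For comparison, assuming tbl holds one true, one false and one NULL row, the three predicates behave like this:
SELECT * FROM tbl WHERE avalue <> true;                -- false row only
SELECT * FROM tbl WHERE avalue IS DISTINCT FROM true;  -- false and NULL rows
SELECT * FROM tbl WHERE avalue IS NOT TRUE;            -- false and NULL rows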