I've read all comments as of this time and nobody seems to have taken note of his point about bottom values:
"There's an argument among language designers, should we have bottom values at all? But there's nobody who thinks you should have two of them."
I would like to _strongly_ disagree: you _do_ need two different kinds:
undefined (Not a Number / Not a Result), which traps if you try to use it, and none, which means empty/ignore!
In the Mill CPU architecture (http://millcomputing.com/) we have both kinds, and they make code easier to write, more elegant, and more compact, while at the same time faster and more secure!
I am currently in the group that works on the 2018 revision of the IEEE 754 (i.e. floating point) standard. In the original '754 version the "Not a Number" (NaN) type was defined as a way to create a "sticky" error marker in a numeric calculation. I.e. if you accidentally try to calculate 0/0 or Inf/Inf the result will be undefined; the operation might trap or not depending upon how your language runtime is set up, but the result will always be a NaN. There are two kinds of NaNs, Signaling NaN (SNaN) and Quiet NaN (QNaN). The only difference is that the next time an SNaN is taken as input to an operation it will trap and then be converted to the equivalent QNaN, while a QNaN as input will just propagate quietly to the output.
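The "sticky" quiet-NaN behavior is easy to see from plain Python (which only exposes quiet NaNs; there is no portable way to produce a signaling NaN here, so that half is omitted):

```python
import math

# Inf/Inf is one of the invalid operations: the result is a quiet NaN.
x = math.inf / math.inf
print(math.isnan(x))   # True

# QNaN is "sticky": it propagates through all further arithmetic.
y = (x + 1.0) * 2.0 - 5.0
print(math.isnan(y))   # True

# As a side effect, NaN compares unequal to everything, even itself.
print(x == x)          # False
```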
It should be obvious that if your runtime initializes all FP variables to NaN, then any accidental use-before-load should be detected, right?
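A sketch of that debugging trick in Python; the function and its names are made up for illustration, the point is only that a NaN-initialized accumulator flags any path where the real initialization never happened:

```python
import math

def dot(xs, ys):
    # Deliberately initialize the accumulator to NaN instead of 0.0:
    # if the real initialization on the first iteration is skipped
    # (empty input, or a bug), the NaN survives to the result and
    # flags the use-before-load instead of silently returning 0.0.
    acc = math.nan
    for i, (x, y) in enumerate(zip(xs, ys)):
        if i == 0:
            acc = x * y        # real initialization on first use
        else:
            acc += x * y
    return acc

print(dot([1.0, 2.0], [3.0, 4.0]))  # 11.0
print(math.isnan(dot([], [])))      # True: accumulator never loaded
```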
The problem is that for the 2008 (current) revision of the standard, enough people wanted a totally different behavior when searching for the maximum value in an array, typically used to scale a matrix: they managed to define minNum(a, NaN) / maxNum(NaN, b) so as to ignore any quiet NaN values, always returning the other operand. I.e. in those functions they got NaN to behave as None!
The real problem, and the main reason these functions are going away, is how the definition above interacts with the SNaN/QNaN rules:
maxNum(1.0, QNaN) -> 1.0
maxNum(QNaN, 1.0) -> 1.0
maxNum(1.0, SNaN) -> QNaN
maxNum(SNaN, 1.0) -> QNaN
So if you look for the maximum of 4 values, one of them an SNaN, the result will depend on the order of comparisons, i.e. if you do them pairwise you get this result:
max(1.0, SNaN, 2.0, 3.0) -> maxNum(maxNum(1.0, SNaN), maxNum(2.0, 3.0)) -> maxNum(QNaN, 3.0) -> 3.0
while taking alternate inputs results in
maxNum(maxNum(1.0, 2.0), maxNum(SNaN, 3.0)) -> maxNum(2.0, QNaN) -> 2.0
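The order dependence is easy to reproduce with a toy model of the 2008 maxNum rules; SNAN and QNAN here are just sentinel tags standing in for the two NaN encodings, not real IEEE bit patterns:

```python
SNAN, QNAN = "SNaN", "QNaN"

def max_num(a, b):
    """Toy model of IEEE 754-2008 maxNum:
    an SNaN input signals and yields a QNaN result, while a QNaN
    input is quietly ignored in favor of the other operand."""
    if a == SNAN or b == SNAN:
        return QNAN
    if a == QNAN:
        return b
    if b == QNAN:
        return a
    return max(a, b)

vals = [1.0, SNAN, 2.0, 3.0]

# Pairwise tree: maxNum(maxNum(1.0, SNaN), maxNum(2.0, 3.0))
pairwise = max_num(max_num(vals[0], vals[1]), max_num(vals[2], vals[3]))

# Alternate pairing: maxNum(maxNum(1.0, 2.0), maxNum(SNaN, 3.0))
alternate = max_num(max_num(vals[0], vals[2]), max_num(vals[1], vals[3]))

print(pairwise)   # 3.0
print(alternate)  # 2.0
```

Same four inputs, same operation, two different answers depending only on reduction order.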
I.e. we would have been much better off here with a separate bit pattern meaning None, which would simply be ignored and never propagate.