I mentioned the +/- zero thing in another comment elsewhere in this tree, actually! So we're all on board there.
It's not really that signless infinity is a contender for 'consensus' so much as that number systems which use signless infinity have different utilities from systems with signed infinities, just like integer math continues to exist despite the 'improvements' of fractions and decimals.
I wasn't thinking of the highest bit, just the highest value. As I'm guessing you already know, because of the nature of two's complement there's an asymmetry between positive and negative numbers (in 16 bits: 2^15 - 1 positive values, 2^15 negative values, and zero), leaving one value that could easily be discarded; assigning this single value to an error would have the additional benefit of catching counter overflow. Certain older computers like UNIVACs actually used another system called one's complement, where a number is negated by inverting every one of its bits; this had the odd result of leaving a "negative zero" in the numbering system (all ones, a value IEEE floating point numbers also have), and that pattern could likewise have been reassigned to NaN.
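Both encodings are easy to sketch in a few lines of Python (the 16-bit width and the helper names here are just for illustration):

```python
BITS = 16
MASK = (1 << BITS) - 1  # 0xFFFF

# Two's complement: 2**15 - 1 positives, 2**15 negatives, one zero --
# so the pattern 0x8000 (-32768) has no positive counterpart.
def twos_complement(n):
    """Encode a signed integer as its unsigned two's-complement bit pattern."""
    return n & MASK

# One's complement: negation is a plain bitwise NOT, which leaves two
# encodings of zero -- 0x0000 (+0) and 0xFFFF (-0).
def ones_complement_negate(pattern):
    return pattern ^ MASK

assert twos_complement(-32768) == 0x8000          # the one "spare" value
assert ones_complement_negate(0x0000) == 0xFFFF   # "negative zero"
```

Either spare pattern (0x8000 in two's complement, the all-ones negative zero in one's complement) is a natural candidate for an integer NaN.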
Yes, I agree that try/catch blocks are annoying from the perspective of people writing elaborate flat code, but they do force the programmer to actually handle errors instead of letting them propagate. In certain contexts this is vitally important. A programming language that permits NaNs is essentially making the decision that the division's failure is Not A Problem by default, which is a key point of contention: are we developing for a safety-critical application where a failure to test properly could have dire consequences? What if someone forgets to check for NaN values in the speed control system of an automobile? Is that better or worse than the program aborting entirely? (Almost certainly worse, since an abort is far more likely to be caught and fixed by supervising management code!)
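A minimal Python sketch of the contrast (the function name, the sensor scenario, and the speed-limit numbers are all invented for illustration; note that Python itself actually raises on float division by zero, so the IEEE-style "silent" behavior has to be emulated here):

```python
import math

def average_speed(distance_m, elapsed_s):
    # Python raises ZeroDivisionError even on 0.0 / 0.0, so to model the
    # silent IEEE behavior we return NaN explicitly on a zero interval.
    if elapsed_s == 0.0:
        return float('nan')
    return distance_m / elapsed_s

speed = average_speed(0.0, 0.0)      # sensor glitch: no data yet
limited = min(speed * 3.6, 120.0)    # NaN slides straight through min()
danger = limited > 120.0             # every comparison with NaN is False

# Nothing ever raised; 'limited' is NaN and the range check quietly "passed".
assert math.isnan(limited) and danger is False

# The try/catch alternative fails loudly at the point of the error:
try:
    _ = 1 // 0
except ZeroDivisionError:
    pass  # forced, right here, to decide what a safe fallback is
```

The NaN version reaches the range check with garbage and no alarm raised; the exception version never gets past the division without a decision from the programmer.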
So I would argue that, say, MATLAB or Lisp should support NaNs, but definitely not Ada, and I guess now I'm unsure about the C languages again.
It was a big story on Slashdot years ago. A similar cascade of discussions resulted. Let's think for a moment.
If you don't want to write guard code preventing division-by-zero errors in every situation where NaN isn't handled elegantly, you have an easy alternative, which is a try/catch structure. Adding NaN basically just introduces a magic error value, like -1 being returned on failure from C functions (for example, failing to find a substring returns -1 as the index in many languages). This practice is considered Generally Bad And Inadvisable by structured-programming dogmatists, as it encodes control information inside of a content signal and potentially obfuscates the meaning of the program (and, in languages like Python, where negative indices have meaning, it can cause lots of subsequent problems).
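The Python negative-index pitfall is worth seeing concretely; a short sketch (the function name is made up, but `str.find` and `str.index` are real standard-library methods):

```python
def char_where_found(s, sub):
    # Sentinel-style API: str.find returns -1 when the substring is absent.
    idx = s.find(sub)
    # Forgetting to check the sentinel: s[idx] is still "valid" in Python,
    # because index -1 means "last character" -- the error is silently absorbed.
    return s[idx]

# Looks right when the substring exists...
assert char_where_found("hello world", "world") == "w"
# ...but quietly returns the last character when it doesn't:
assert char_where_found("hello world", "xyz") == "d"

# The structured alternative, str.index, raises instead of encoding
# failure inside the return value:
try:
    "hello world".index("xyz")
except ValueError:
    pass  # the failure cannot be mistaken for data
```

The -1 sentinel and a NaN are the same design move: failure smuggled through the data channel, waiting for someone to forget the check.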
To keep things consistent, I would argue that NaN doesn't have a place in modern high-level languages that subscribe to structured programming paradigms; a division by zero is an error that occurs when data is improperly initialized or collected, and letting errors like this (a) propagate by ruining subsequent expressions and (b) potentially get displayed to the user is the vice of a PHP programmer who would rather just be told the code works than know whether it was actually doing what was intended.
On the other side of things, I do actually agree with you and would propose that signed integer formats should reserve 0x80000000 (whatever the most negative value is for the width, the one extra number you can't fit onto the positive side anyway due to the asymmetry of two's complement) for NaN, because exception handling in C++ invariably means stack-unwinding overhead, and programmers in low-level languages like the C family should have an alternative, just like they do with string search functions.
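A sketch of that proposal, using a 16-bit width for brevity (the names `INT16_NAN`, `checked_div`, and `checked_add` are invented here; a real version would live at the language or hardware level, not in user code):

```python
# Hypothetical "integer NaN": reserve the one asymmetric two's-complement
# value, -2**15 for 16 bits (bit pattern 0x8000), which has no positive
# counterpart anyway.
INT16_MIN, INT16_MAX = -2**15, 2**15 - 1
INT16_NAN = INT16_MIN

def checked_div(a, b):
    """Division that returns the NaN sentinel instead of raising."""
    if b == 0 or a == INT16_NAN or b == INT16_NAN:
        return INT16_NAN  # failure propagates, NaN-style
    return a // b

def checked_add(a, b):
    """Addition that maps overflow to the sentinel, catching counter wrap."""
    if a == INT16_NAN or b == INT16_NAN:
        return INT16_NAN
    r = a + b
    # INT16_MIN itself is excluded: that pattern is reserved for NaN.
    return r if INT16_MIN < r <= INT16_MAX else INT16_NAN

# The sentinel propagates through a chain of operations like a float NaN:
assert checked_add(checked_div(1, 0), 5) == INT16_NAN
```

This gives C-family code the cheap, no-unwinding error path, at the cost of every arithmetic operation having to test for (or trap on) the reserved pattern.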
Saliva causes cancer, but only if swallowed in small amounts over a long period of time. -- George Carlin