Maybe, but I suspect not. The electromagnetic spectrum (which includes light frequencies) has particular characteristics that do not change much as frequency increases, closed system or not.
Granted, but optical signals are not generated by a coil of wire and interleaved metal plates. You can't just tweak a capacitor to adjust the frequency of a laser.
...fire-fighting aircraft should also be drones.
That is a very good point. On the other hand, smoke-jumpers must be airlifted.
In reality, the people taking video of the fire with their fancy flying cameras were probably unaware that they were interfering with the fire-fighting effort.
I mentioned the +/- zero thing in another comment elsewhere in this tree, actually! So we're all on board there.
It's not really that signless infinity is a contender for 'consensus' so much as that number systems using signless infinity serve different purposes than systems with signed infinities, just as integer math continues to exist despite the 'improvements' of fractions and decimals.
I wasn't thinking of the highest bit, just the highest value. As I'm guessing you already know, because of the nature of two's complement there's an asymmetry between positive and negative numbers (for a 16-bit word: 2^15 - 1 positive values, 2^15 negative values, and zero), leaving one value that could easily be discarded; assigning that single value to an error would have the additional benefit of catching counter overflow. Certain older computers like UNIVACs actually used another system called one's complement, where the most significant bit still marked the sign but a negative number was formed by inverting every bit of its positive counterpart; this had the odd result of leaving a "negative zero" in the numbering system (which IEEE floating-point numbers also have), and that value could likewise have been reassigned to NaN.
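To make that concrete, here's a minimal sketch in C, assuming 16-bit two's-complement integers; the checked_div name and the choice of INT16_MIN as the sentinel are just for illustration, not any standard convention:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical "integer NaN": reserve the one asymmetric two's-complement
       value, INT16_MIN (-32768), as an error sentinel, leaving a symmetric
       range of -32767..+32767 for actual data. */
    #define I16_NAN INT16_MIN

    int16_t checked_div(int16_t a, int16_t b)
    {
        /* Division by zero, or a sentinel operand, yields the sentinel,
           so errors propagate through later arithmetic. */
        if (b == 0 || a == I16_NAN || b == I16_NAN)
            return I16_NAN;
        /* The one overflowing case, INT16_MIN / -1, can't occur here,
           because INT16_MIN is already excluded as data. */
        return (int16_t)(a / b);
    }

    int main(void)
    {
        printf("%d\n", checked_div(10, 3));                  /* 3 */
        printf("%d\n", checked_div(10, 0));                  /* -32768 (sentinel) */
        printf("%d\n", checked_div(checked_div(10, 0), 2));  /* sentinel propagates */
        return 0;
    }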
Yes, I agree that try/catch blocks are annoying from the perspective of people writing elaborate flat code, but they do force the programmer to actually handle errors instead of letting them propagate. In certain contexts this is vitally important. A programming language that permits NaNs is essentially deciding that the division's failure is Not A Problem by default, and that's the key point of contention: are we developing for a safety-critical application where a failure to test properly could have dire consequences? What if someone forgets to check for NaN values in the speed control system of an automobile? Is that better or worse than the program aborting entirely? (Almost certainly worse: an outright abort is far more likely to have management code in place to catch it and recover!)
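For what it's worth, here's roughly what that silent-propagation failure mode looks like in C; the variable names and fallback message are made up for the example:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Two stalled sensor readings, both zero. */
        double delta_pos = 0.0;
        double delta_t   = 0.0;

        double speed   = delta_pos / delta_t;   /* 0.0/0.0 quietly yields NaN */
        double command = speed * 0.5 + 30.0;    /* the NaN propagates through */

        /* The check is entirely opt-in; forget it and the NaN flows onward. */
        if (isnan(command))
            fprintf(stderr, "bad speed command, falling back\n");
        else
            printf("commanded speed: %f\n", command);
        return 0;
    }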
So I would argue that, say, MATLAB or Lisp should support NaNs, but definitely not Ada, and I guess now I'm unsure about the C languages again.