you have an easy alternative, which is a try/catch structure.
That's not easy; it's a nightmare. If you add try/catch near the point of division, you end up scattering that boilerplate all over your code base. If you instead put the try/catch at a higher level (near your main()), that causes its own headaches, because now you have to restart and reinitialize the parts of the program that were discarded when the exception was thrown.
It is precisely to handle these kinds of problems that the IEEE 754 standard produces NaN or Infinity on division by zero (divisions are extremely common in floating-point code). That is, it is better to generate and propagate an invalid result (NaN) than it is to filter every input that could cause a division by zero, or to handle exceptions with try/catch.
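A minimal sketch of that idea, assuming IEEE 754 doubles (the function name and values are made up for illustration): the bad denominator turns the result into Inf/NaN, which flows through the rest of the arithmetic, so only the final consumer needs to check.

```cpp
#include <cmath>
#include <cstdio>

// Chain of divisions; a zero denominator yields Inf or NaN under IEEE 754,
// which then propagates through the remaining arithmetic instead of
// aborting it with an exception.
double normalized_ratio(double a, double b, double c) {
    double r1 = a / b;        // b == 0.0 -> +/-Inf (or NaN), no exception
    double r2 = r1 / c;       // Inf/NaN keeps propagating
    return (r2 - 1.0) * 0.5;  // still no branches or try/catch needed
}

int main() {
    double result = normalized_ratio(3.0, 0.0, 2.0);
    // One check at the point where the result is actually consumed,
    // instead of try/catch around every division site.
    if (std::isnan(result) || std::isinf(result))
        std::puts("invalid input somewhere upstream");
    else
        std::printf("result = %f\n", result);
    return 0;
}
```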
I do actually agree with you and would propose that signed integer formats should reserve 0x80000000 as a NaN value.
Yes, the highest bit should be reserved for nullity/NaN. Any arithmetic operation on a nullity operand should result in nullity. There wouldn't be any drop in performance if the CPU supported it, but the range of integers would be halved (from roughly 2 billion to 1 billion for int32 variables). That's not a big deal in a world where 64-bit integers are common.
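Since no CPU implements this today, here is a rough software sketch of the propagation rule, using the single reserved value from the proposal above (0x80000000 == INT32_MIN) as the nullity; the names kNullity, null_div, and null_add are invented for illustration.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical "integer NaN": INT32_MIN is treated as the nullity value,
// and the rule "nullity op anything = nullity" is emulated in software.
constexpr int32_t kNullity = INT32_MIN;

int32_t null_div(int32_t a, int32_t b) {
    if (a == kNullity || b == kNullity || b == 0)
        return kNullity;       // divide-by-zero produces nullity
    return a / b;
}

int32_t null_add(int32_t a, int32_t b) {
    if (a == kNullity || b == kNullity)
        return kNullity;       // nullity propagates through addition
    return a + b;
}

int main() {
    int32_t x = null_div(10, 0);   // -> nullity
    int32_t y = null_add(x, 5);    // stays nullity
    if (y == kNullity)
        std::puts("result is nullity (invalid)");
    else
        std::printf("result = %d\n", y);
    return 0;
}
```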