Comment Re:Now THAT's art! (Score 1) 75
Alex Gray
Wasn't that the smart parrot?
I am not concerning myself with representations of mathematical values, except to show the parallels of why it works. IEEE 754 defines a positive and a negative infinity because the format has a dedicated sign bit; it's easier to define signed infinities than to write special code to handle "exceptions". Note also that IEEE 754 defines positive zero and negative zero as distinct values. No, it really does.
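Both oddities are easy to poke at in Python, whose `float` is an IEEE 754 double; a minimal sketch:

```python
import math
import struct

# The format carries a dedicated sign bit, so +0.0 and -0.0 are
# distinct bit patterns even though they compare equal.
assert struct.pack('>d', 0.0) != struct.pack('>d', -0.0)
assert 0.0 == -0.0                       # equal under comparison...
assert math.copysign(1.0, -0.0) == -1.0  # ...but the sign survives

# The same sign bit gives two infinities.
assert math.inf > 0 and -math.inf < 0
assert math.copysign(1.0, 1.0 / -math.inf) == -1.0  # 1/-inf is -0.0
```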
My model is a theoretical one that hasn't reached mathematical consensus, and it likely never will. I just note that this is an argument for infinity being signless.
A small price to pay to get a free OS for my gaming PC that won't be used for anything else.
Ah, it doubles as a nice Chromebook!
More important is what happens when you graph it: the limit of 1/x as x approaches zero is discontinuous. It's positive infinity as x descends to zero through the positive numbers, but negative infinity as x ascends to zero through the negatives. No one value can represent both!
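The one-sided behavior is easy to check numerically; a minimal Python sketch (the variable names are mine):

```python
# As x approaches zero from the right, 1/x blows up positive;
# from the left, it blows up negative. No single value fits both.
xs = [2.0 ** -10, 2.0 ** -20, 2.0 ** -30]   # exact binary fractions
right = [1.0 / x for x in xs]               # 1024.0, ~1e6, ~1e9
left = [1.0 / -x for x in xs]

assert right == sorted(right)                # climbing toward +infinity
assert left == sorted(left, reverse=True)    # falling toward -infinity
assert all(r > 0 for r in right) and all(l < 0 for l in left)
```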
Let's say the set of integers is Z_\inf, OK? We can now define the negative of a number as its complement plus 1 (two's complement, except in decimal it's the nines' complement). The complement of 1 is 999...9998; add 1 and you get 999...9999, which is -1. Add 1 to that and you get an infinite carry out, leaving the value 0. Awesome.
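The infinite case can't be run directly, but the same arithmetic works in Z mod 10^k for any k, with Z_\inf as the limit; a minimal Python sketch (the helper name `neg` is mine):

```python
# In Z mod 10**k, the all-nines number behaves as -1, just as
# all-ones behaves as -1 in two's complement.
k = 12
M = 10 ** k

def neg(n):
    """Nines' complement plus one: decimal 'two's complement'."""
    nines = M - 1 - n          # digit-wise nines' complement
    return (nines + 1) % M

minus_one = neg(1)
assert minus_one == M - 1              # 999999999999
assert (minus_one + 1) % M == 0        # -1 + 1 wraps to 0 via the carry
assert (5 + neg(3)) % M == 2           # subtraction works: 5 - 3 == 2
```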
Now, looking at 1/0, from the right it approaches \inf from below, while from the left it approaches \inf from above. At 0, obviously, these two must be coincident, because we're working in Z_\inf, where that value is the same value. Namely, -\inf = \inf. But that doesn't seem to make sense; only 0 can be its own negative!
But we've known for a long time about Z_n where n is even: -(-128) in Z_256 is -128, and -(-32768) in Z_2^16 is -32768. So there's no trouble in making -\inf = \inf.
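The Z_256 case can be demonstrated directly; a minimal Python sketch of 8-bit two's-complement negation (the helper names are mine):

```python
# Negation wrapping in Z_256 (8-bit two's complement).
BITS = 8
M = 1 << BITS

def to_signed(u):
    """Interpret an unsigned residue mod 2**BITS as a signed value."""
    return u - M if u >= M // 2 else u

def neg(u):
    """Two's-complement negation: complement and add one, mod 2**BITS."""
    return (~u + 1) % M

assert neg(0x80) == 0x80               # 0x80 (-128) is its own negation...
assert to_signed(neg(0x80)) == -128    # ...so -(-128) comes back as -128
assert to_signed(neg(1)) == -1         # ordinary values behave normally
```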
Basically, 1/0 grows so fast that it manages to wrap around the entire infinite series of numbers. Which is exactly what it does...
That is simply false. There are infinitely many algorithms that might contain the (sub)expression X/X for which zero is a valid value of X. Assuming it's a programming error is the sort of sheer unmitigated stupidity I might expect from a mathematician who has never written a real program in his life.
Dude... you perhaps haven't heard, but computers run entirely upon theoretical mathematics... I know, it's popular to say it's engineering, rather than mathematics, but it's mathematics. It's always been mathematics.
I wasn't thinking of the highest bit, just the highest value. As I'm guessing you already know, because of the nature of two's complement there's an asymmetry in positive and negative numbers (2^15 - 1 positive values, 2^15 negative values, and zero) resulting in one value that could easily be discarded; assigning this single value to an error would have an additional benefit of catching counter overflow. Certain older computers like UNIVACs actually used another system called one's complement, where the most significant bit was a negative sign, and numbers otherwise counted up from zero—this had the odd result of leaving a "negative zero" in the numbering system (which IEEE floating point numbers also have); this could also have been reassigned to NaN.
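The ones'-complement negative zero is easy to demonstrate by interpreting 16-bit patterns both ways; a minimal Python sketch (the helper name is mine):

```python
BITS = 16
MASK = (1 << BITS) - 1

def ones_complement_value(u):
    """Read a 16-bit pattern as ones' complement: a negative number
    is the bitwise complement of its magnitude."""
    if u < (1 << (BITS - 1)):
        return u
    return -((~u) & MASK)

# Two's complement packs 2**15 - 1 positives, 2**15 negatives, one zero.
# Ones' complement has a symmetric range, at the cost of a second zero:
assert ones_complement_value(0x0000) == 0
assert ones_complement_value(0xFFFF) == 0  # "negative zero": distinct bits, same value
assert ones_complement_value(0xFFFE) == -1
assert ones_complement_value(0x8000) == -0x7FFF  # most negative representable
```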
Yes, I agree that try/catch blocks are annoying from the perspective of people writing elaborate flat code, but they do force the programmer to actually handle errors instead of letting them propagate. In certain contexts this is vitally important. A programming language that permits NaNs is essentially deciding that the division's failure is Not A Problem by default, and that's a key point of contention: are we developing for a safety-critical application where a failure to test properly could have dire consequences? What if someone forgets to check for NaN values in the speed control system of an automobile? Is that better or worse than the program aborting entirely? (Almost certainly worse, since an outright abort is far more likely to be caught and handled by supervisory code!)
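The danger of a forgotten NaN check comes from two IEEE 754 behaviors: NaN propagates silently through arithmetic, and it fails every ordering comparison. A minimal Python sketch (the speed-control framing is just the example above, not real code):

```python
import math

speed = float('nan')        # e.g. a failed sensor read, 0.0 / 0.0

adjusted = speed * 1.1 + 5.0
assert math.isnan(adjusted)            # the error silently infects results

# Worse, NaN fails *every* ordering test, so limit checks pass vacuously:
assert not (adjusted > 200.0)          # "not over the limit"
assert not (adjusted < 200.0)          # ...and "not under it" either
assert adjusted != adjusted            # NaN isn't even equal to itself
```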
So I would argue that, say, MATLAB or Lisp should support NaNs, but definitely not Ada, and I guess now I'm unsure about the C languages again.
It was a big story on Slashdot years ago. A similar cascade of discussions resulted. Let's think for a moment.
If you don't want to write code preventing a division-by-zero error in every situation where NaN isn't handled elegantly, you have an easy alternative: a try/catch structure. NaN is basically just a magic error value, like the -1 returned on failure by many C functions (example: failing to find a substring returns -1 as the index in many languages). This practice is considered Generally Bad And Inadvisable by structured-programming dogmatists, as it encodes control information inside a content signal and potentially obfuscates the meaning of the program (and in languages like Python, where negative indices have meaning, it can cause lots of subsequent problems).
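The Python pitfall is worth spelling out, since `str.find` returns exactly that -1 sentinel while -1 is also a valid index; a minimal sketch:

```python
s = "hello"

i = s.find('z')            # 'z' isn't there, so find returns the sentinel
assert i == -1

# Forget to check the sentinel and the program doesn't crash --
# it silently does the wrong thing, because -1 is a *valid* index:
assert s[i] == 'o'         # quietly grabs the last character

# str.index raises instead, forcing the caller to handle the failure:
try:
    s.index('z')
    handled = False
except ValueError:
    handled = True
assert handled
```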
To keep things consistent, I would argue that NaN doesn't have a place in modern high-level languages that subscribe to structured-programming paradigms. A division by zero is an error that occurs when data is improperly initialized or collected, and letting errors like this (a) propagate by ruining subsequent expressions and (b) potentially get displayed to the user is the vice of a PHP programmer who would rather just be told his or her code works than know whether it was actually doing what was intended.
On the other side of things, I do actually agree with you and would propose that signed integer formats reserve 0x80000000 (or whatever the most negative value is for the width, that one extra number you can't fit on the positive side anyway due to the asymmetry of two's complement) for NaN, because exception handling invariably means unnecessary stack-frame manipulation in C++, and programmers in low-level languages like the C family should have an alternative, just like they do with string search functions.
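A minimal Python sketch of that proposal, using a 16-bit width for brevity; the sentinel constant and `checked_div` helper are hypothetical names of mine, not any real API:

```python
# Reserve the unpairable minimum value of a signed integer type
# as an integer "NaN" that propagates like the floating-point one.
BITS = 16
INT_NAN = -(1 << (BITS - 1))      # the 16-bit pattern 0x8000, i.e. -32768

def checked_div(a, b):
    """Integer division that returns the sentinel instead of raising."""
    if b == 0 or a == INT_NAN or b == INT_NAN:
        return INT_NAN            # errors propagate, NaN-style
    return a // b

assert checked_div(10, 2) == 5
assert checked_div(10, 0) == INT_NAN
assert checked_div(checked_div(10, 0), 3) == INT_NAN  # poisoning propagates
```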
Dynamically binding, you realize the magic. Statically binding, you see only the hierarchy.