Anyway, even if they automate some parts of your job, the part of your job that isn't automated will expand to fill that time.
Indeed, compilers already automate much of our programming job. I remember having to avoid multiplying by a constant when speed mattered, and reaching for all sorts of crazy constructs just because they ran faster... now the compiler handles this for me, and I can write code that is more legible and clear.
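The trick being alluded to is strength reduction: replacing a multiply-by-constant with cheaper shifts and adds. A minimal sketch of the hand-optimization people used to write by hand (Python stands in for illustration; the function names are made up, and an optimizing C compiler typically performs this rewrite itself at `-O2`):

```python
def times_ten_obvious(x: int) -> int:
    # The legible version: just multiply.
    return x * 10

def times_ten_strength_reduced(x: int) -> int:
    # The hand-optimized version: 10*x == 8*x + 2*x == (x << 3) + (x << 1).
    return (x << 3) + (x << 1)

# Both compute the same result; a modern compiler makes the
# obvious version cost nothing, so clarity wins.
for x in range(-100, 100):
    assert times_ten_obvious(x) == times_ten_strength_reduced(x)
```

The point stands: the legible version and the "clever" version are equivalent, and the machine now does the clever part.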
This is just yet another form of optimization, which computers have been doing for us for at least a decade already...
I mentioned the +/- zero thing in another comment elsewhere in this tree, actually! So we're all on board there.
It's not really that signless infinity is a contender for 'consensus' so much as that number systems with a signless infinity serve different purposes than systems with signed infinities, just as integer math continues to exist despite the 'improvements' of fractions and decimals.
I am not concerning myself with representations of mathematical values, except to show the parallels of why it works. IEEE 754 defines a positive and a negative infinity because the format has a dedicated sign bit. Given that bit, it's easier to define both infinities than to write special-case code to handle "exceptions"... note also that IEEE 754 defines positive and negative zero as distinct values. Yes, really.
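Since Python's `float` is an IEEE 754 binary64, the sign-bit behavior can be poked at directly. A quick sketch (not a spec reference), using only the stdlib `math` module:

```python
import math

# +0.0 and -0.0 are distinct bit patterns that compare equal...
pos_zero, neg_zero = 0.0, -0.0
assert pos_zero == neg_zero
# ...but the sign bit survives and is observable:
assert math.copysign(1.0, neg_zero) == -1.0
assert str(neg_zero) == "-0.0"

# The same sign bit gives two infinities:
pos_inf, neg_inf = math.inf, -math.inf
assert pos_inf != neg_inf
# Dividing through the infinities lands back on the signed zeros:
assert 1.0 / pos_inf == 0.0
assert math.copysign(1.0, 1.0 / neg_inf) == -1.0  # 1/-inf is -0.0
```

So the signed zeros and signed infinities come as a matched set: both fall out of the same sign bit.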
My model is a theoretical one that hasn't reached mathematical consensus, and it likely never will. I just note that this is an argument for infinity being signless.
A small price to pay to get a free OS for my gaming PC that won't be used for anything else.
Ah, it doubles as a nice Chromebook!
If all else fails, lower your standards.