Sorry to reply in an overdone subthread, but I think I have a different objection from the usual.
I'm a programmer, like many here, and I spend a lot of time working with strings, numbers and sequences. I see 0.999... as an infinite sequence which must be partially evaluated when it's used as a number. This has several consequences, but the relevant one is that when comparing two numbers the obvious algorithm is to walk their digits and decide at the first position where they differ - so 0.999... is greater than 0.989... after two digits, and you have to evaluate a little further to determine that 0.9999999... is greater than 0.9999998... This understanding of 0.999... as an infinite sequence that can be reasoned about has worked for me in every situation I've come across (admittedly, not many).
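In case it helps, here's roughly what I have in mind as a quick Python sketch (the names - repeating, compare, the max_digits cap - are all just made up for illustration; the cap is there because two genuinely equal streams would never produce a differing digit):

from itertools import islice

def repeating(prefix, cycle):
    # digits of 0.<prefix><cycle><cycle>... as an endless stream
    yield from prefix
    while True:
        yield from cycle

def compare(a, b, max_digits=1000):
    # walk two digit streams in step and decide at the first differing digit;
    # give up after max_digits, since two equal streams never produce one
    for x, y in islice(zip(a, b), max_digits):
        if x != y:
            return -1 if x < y else 1
    return None  # no difference found within the budget

print(compare(repeating([], [9]), repeating([9, 8], [9])))          # 1: 0.999... > 0.989...
print(compare(repeating([9]*7, [9]), repeating([9]*6 + [8], [9])))  # 1: 0.9999999... > 0.9999998...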
Of course using this reasoning also clearly tells me that trivially 0.999... < 1.
And, before someone asks, I evaluate (1/3) == 0.333... as either true based on no detectable difference, true based on a type conversion from 1/3 being identical to 0.333..., or the heat death of the universe while still looking for an answer.
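For what it's worth, the long-division digits of 1/3 make that third outcome concrete: compared against the repeating 3s using the sketch above, no differing digit ever turns up, so without the digit budget the comparison just runs forever (thirds below is another made-up helper):

from itertools import islice

def thirds():
    # long division of 1 by 3: the remainder is always 1, so every digit is 3
    rem = 1
    while True:
        rem *= 10
        yield rem // 3   # always 3
        rem %= 3         # back to 1

print(list(islice(thirds(), 8)))  # [3, 3, 3, 3, 3, 3, 3, 3]
# compare(thirds(), repeating([], [3])) never finds a differing digit, so
# without max_digits it's the heat-death-of-the-universe branch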
Now, hopefully something interesting after another "I dun't understand math lol" post.
Obviously the 'correct' interpretation of 0.999... == 1, versus my 0.999... < 1, is based on a difference between 'real' mathematics and the bastardised computer mathematics where we don't actually have infinite sequences and have to make do. Still, I can usefully define 0.999... as the largest number which is less than 1 and reason about it on that basis.
So why does 'real' mathematics use a definition based on limits rather than the shortcutty but apparently workable definition I imagine a computer using? Is there some kind of difference based on consistency of the model, or usefulness for some kinds of calculation, or just tradition or what?
P.S. Sorry if slashcode ate my less-than signs; I think I got them all with &lt;