I've written algorithms for machine learning where I'm constantly doing things like multiplying 0 and infinity. Depending on the situation, it is totally clear what the correct result must be (either 0 or infinity).
Take, for example, computing the entropy of a deterministic Bernoulli distribution: you end up with 1 * ln 1 + 0 * ln 0. The correct result is 0, because q * ln q tends to 0 as q tends to 0 — but IEEE arithmetic evaluates 0 * (-inf) as NaN.
Mostly I rely on the floating-point implementation's handling of infinity and NaN, but sometimes I have to catch these cases and correct the result by hand.
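A minimal sketch in Python of the kind of hand-guarding I mean — the function name `bernoulli_entropy` and its structure are mine, not from any particular library:

```python
import math

def bernoulli_entropy(p):
    """Entropy (in nats) of a Bernoulli(p) distribution,
    with the boundary cases guarded by hand."""
    # At p == 0 or p == 1 the naive formula would produce 0 * ln(0);
    # in IEEE arithmetic 0.0 * float('-inf') is NaN, so we return the
    # mathematically correct limit (0) explicitly instead.
    if p == 0.0 or p == 1.0:
        return 0.0
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

# The problematic IEEE behavior the guard works around:
naive = 0.0 * float('-inf')  # NaN, not 0
```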