You seem to have that backwards. Dividing your five dollars by zero is like dividing it among zero people and suddenly, poof, it's gone!
Right. Exactly my point, except that I used "pieces" instead of "people." Which is to say: when you're dividing equally between n people, the unit isn't actually the people but the count of people. Dividing into labelled jars is exactly the same; the jars aren't the units either, the count of portions is. You don't divide by the people, you divide by the count of people. The other person claiming apples/oranges is making the same mistake. And unit issues shouldn't even come up when the question is divide-by-zero, which doesn't implicate units at all.
It always cracks me up when people misunderstand me, presume I said something that doesn't make sense, and then correct me by paraphrasing what I actually said but presented as a contradiction.
I guess it is no wonder that people can't map real human scenarios that rely on numbers onto mathematics very well; they're already off by -2n most of the time. But you got the essential element: when you divide into zero pieces, you have zero pieces when you're done dividing.

People will insist such behavior is "undefined," but in practice it is defined: either as an error condition, an action that is not allowed (an exception), or as infinity. On a computer this is already handled differently for IEEE 754 floats (which yield infinity) and for integers (which trap or raise an exception in most languages). The idea that it is undefined is simply ideology run amok, repeated for decades by school teachers. We already have multiple competing definitions for different problem domains, so the idea that we "can't" have a sensible default for non-academic situations is pretty silly.

Luckily the solution is also easy: don't use raw numbers directly in code; use objects that represent the units and overload the math operators so that sensible defaults present themselves. For example, money classes can return 0. Graphics classes can return 0. Physics-simulation classes can return infinity. Still, it would reduce code complexity in most cases to have the computer default to 0 and raise exceptions or return infinity only for special academic math types.
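To make the operator-overloading idea concrete, here's a minimal sketch in Python. The `Money` class and its zero-default are illustrative assumptions, not any standard library API; the point is just that the domain type, not the caller, decides what division by zero means:

```python
class Money:
    """Hypothetical money type: dividing among zero recipients yields zero money."""

    def __init__(self, cents):
        self.cents = cents

    def __truediv__(self, n):
        # Domain-specific default: splitting five dollars among zero
        # people leaves you with nothing, rather than raising an error.
        if n == 0:
            return Money(0)
        return Money(self.cents / n)

    def __repr__(self):
        return f"Money({self.cents})"
```

A physics-simulation type could make the opposite choice in its own `__truediv__` and return infinity, and callers of either type never need a special case at the division site.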
I think this shows it was perhaps a mistake for early computer scientists to treat computer numbers as if the computer's job were purely mathematical. If early programmers had been logicians or philosophers instead of mathematicians, we'd probably have pragmatic real-world numbers for standard use, and special functions or types for academic math.