I've written algorithms for machine learning, where I'm constantly doing things like multiplying 0 and infinity. And, depending on the situation, it is entirely clear what the correct result must be (either 0 or infinity); one such case is sketched below.
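For instance (a minimal sketch; the entropy example is my own illustration, not necessarily the exact computation those algorithms used), Shannon entropy contains the term p*log(p), which at p = 0 becomes 0 * (-infinity). The mathematically correct value there is the limit, which is 0, so the code has to say so explicitly:

```python
import math

def entropy(probs):
    """Shannon entropy in nats; treats the 0 * log(0) term as 0,
    which is the limit of p*log(p) as p -> 0+."""
    total = 0.0
    for p in probs:
        if p > 0.0:                 # skip p == 0: lim p*log(p) = 0
            total -= p * math.log(p)
    return total

# The naive version fails: math.log(0) raises ValueError, and
# 0.0 * float('-inf') evaluates to nan under IEEE 754, not 0.
print(entropy([0.5, 0.5, 0.0]))    # 0.6931... == log(2)
print(0.0 * float('-inf'))         # nan -- not the 0 we want here
```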
But that is the key point: you're right that, in many situations, it makes sense to assign a particular value to 0/0 (and other pathological cases), but the value that should be chosen depends critically on the situation at hand. Sometimes 0/0 needs to be treated as 0, sometimes as 1, sometimes as 42, and sometimes it is indicative of an error. Only the person writing the program can know which value is appropriate for the circumstance, so they must express it explicitly in the program. If the programming environment throws an exception on 0/0, that exception indicates the programmer has not considered this choice, and that there is therefore a bug; by contrast, silently picking one particular value would make the small subset of programs that happened to need that value work, at the expense of making all the others (which needed different values, or an error) give the wrong answer without any indication of a problem.
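As a concrete illustration of expressing the choice explicitly (a hedged sketch; the helper and its name are hypothetical, not from any real library), the 0/0 convention can be made a required, per-call-site decision:

```python
def divide(numerator, denominator, *, zero_over_zero=None):
    """Division where the 0/0 case must be decided by the caller.

    zero_over_zero: the value to return for 0/0, or None to raise,
    forcing the programmer to state which convention applies here.
    """
    if denominator == 0 and numerator == 0:
        if zero_over_zero is None:
            raise ZeroDivisionError("0/0: no convention chosen at this call site")
        return zero_over_zero
    return numerator / denominator

divide(0, 0, zero_over_zero=0)   # e.g. an average over an empty group
divide(0, 0, zero_over_zero=1)   # e.g. an empty product of ratios
# divide(0, 0)                   # raises: the choice was never made
```

The point of the design is that silence is impossible: every call site either states its convention or gets an error, which is exactly the property a blanket default value for 0/0 would destroy.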