For certain kinds of abstract algebras, division by zero actually is defined, although typically as a special element like infinity (the projectively extended real line works this way), not as 0, the additive identity. Defining it as 0 would lead to all kinds of peculiar situations: 0 * (1/0) would have to equal 0, since anything times zero is zero, but it would also have to equal 1 if dividing by zero is to undo multiplying by zero, so the same expression would have to be regarded as both 0 and 1 at the same time.
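As a concrete aside (my example, not something special to any one algebra): IEEE 754 floating point is essentially such a structure, with infinity as the special element, and you can poke at its peculiar corners from Python:

import math

# IEEE 754 gives division by zero a special value rather than 0. Python's
# own `/` raises instead of producing it, but the special elements are
# still exposed:
inf = math.inf
print(1.0 / inf)   # 0.0 -- dividing by the special element behaves fine
print(0.0 * inf)   # nan -- 0 * (1/0) can't be given any consistent value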
BUT if you're dealing with regular numbers, or anything else that obeys the axioms of an algebraic field, division by zero always represents a failure of the assumptions under which you undertook the calculation. Since it is a failure of assumptions, it should always be treated as an exception to normal logic flow. If the correct -- or, more accurately, the safest -- course of action is to assign a value of 0 to the calculation, then of course you can do that, but that's still a case of exception handling. Building that in as default behavior FORCES a particular response to the exception, and the language designer can't possibly know that it's the safest one.
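Here's a minimal sketch of what explicit handling looks like in Python; the safe_ratio helper and its fallback parameter are hypothetical, and the point is just that the response gets chosen at the call site rather than baked into the language:

def safe_ratio(numerator: float, denominator: float, fallback: float = 0.0) -> float:
    """Hypothetical helper: the caller decides what division by zero means."""
    try:
        return numerator / denominator
    except ZeroDivisionError:
        # The assumptions behind the calculation have failed. Returning
        # `fallback` is an explicit policy choice made at the call site,
        # not behavior imposed on every program by the language.
        return fallback

print(safe_ratio(6.0, 2.0))   # 3.0 -- normal path
print(safe_ratio(6.0, 0.0))   # 0.0 -- the caller's chosen fallback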
In fact, implicitly allowing division by zero in a sequence of algebraic manipulations can produce faulty results even when the arithmetic operation in question is never actually performed. That's what's behind several algebraic "paradoxes" that have made the rounds of the Internet over the years, such as the following algebraic "proof" that "2 = 1":
Let a = b
[1] a^2 = a*b // multiply both sides by a
[2] a^2 - b^2 = a*b - b^2 // subtract b^2 from both sides
[3] (a - b)*(a + b) = b*(a - b) // factor both sides
[4] a + b = b // divide both sides by (a-b)
[5] b + b = b // substitute b for a on the left side
[6] 2b = b // collect terms
[7] 2 = 1 // divide both sides by b
It all looks kosher, but it's not, because there's a division by zero hiding in the *algebra*: step [4] divides both sides by (a - b), which is zero, since a = b. I've actually seen programs that give faulty results because the programmer simplified expressions in ways that commit exactly this blunder. The language and compiler can't catch it, because the division by zero occurred in the programmer's head.
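A sketch of that kind of blunder (the quotient function here is hypothetical): the factor (a - b) gets cancelled on paper, so the program never performs the division that would have exposed the failed assumption:

def quotient(a: float, b: float) -> float:
    """Intended to compute (a**2 - b**2) / (a - b)."""
    # "Simplified" on paper by cancelling (a - b) -- a division by zero
    # whenever a == b. No exception can ever fire; the function quietly
    # returns 2*a for a case where the original expression is undefined.
    return a + b

print(quotient(3.0, 3.0))   # 6.0, silently
# The unsimplified form would surface the failure instead:
# (3.0**2 - 3.0**2) / (3.0 - 3.0)   -> ZeroDivisionError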