In C and C++ it's not defined to throw an error. It's undefined behaviour and the compiler is free to do whatever it wants.
Typically that means doing nothing special: on x86, the divide instruction itself traps when the denominator is zero, while on PowerPC there's no trap and you simply get zero. Each processor family has its own defined behaviour.
However, if the compiler can prove that, along some particular path, the denominator is guaranteed to be zero, it's allowed to delete that entire path: the program's behaviour is undefined if that path is taken, so the compiler may assume it never is. It's not even guaranteed that your program will execute normally up to the divide. As soon as the divide by zero becomes inevitable, the program is undefined and the compiler is free to do absolutely anything.
If any of the above scares you, C and C++ aren't the languages for you. It always amazes me how many people write non-performance-critical code in C. But when you need speed and efficiency, there's no replacement for C or C++ and a good optimising compiler.
Also, to the original poster: no, defining it to be zero doesn't make sense. PowerPC did indeed do that at the hardware level. But zero isn't always the number you want. Division by zero is undefined in mathematics for a reason: in different situations it would logically need to be a different value, and there's no way to resolve that contradiction. So if your program divides by zero, the program itself is seriously flawed.
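The contradiction is a standard calculus observation, not anything specific to C: the two one-sided limits of 1/x disagree, so no single value can serve as "1/0":

```latex
\lim_{x \to 0^{+}} \frac{1}{x} = +\infty
\qquad\text{but}\qquad
\lim_{x \to 0^{-}} \frac{1}{x} = -\infty
```

And for 0/0 the problem is the opposite one: any candidate value $c$ satisfies $0 \cdot c = 0$, so every value is equally "correct" and no unique quotient exists.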