The first big problem with integers is that they are really badly defined in C (signed overflow is outright undefined behavior), so just like you I try to use unsigned as much as possible:
Any underflow wraps around into a huge value, so it can be caught by the same test as overflow, and the semantics of wrapping modulo a power of two are rock solid across all platforms and implementations.
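A minimal sketch of that single-test idiom in C (the function names here are mine, just for illustration): because unsigned arithmetic wraps modulo 2^N, one comparison against an original operand detects a wrap in either direction.

```c
#include <stdint.h>

/* Subtracting past zero wraps to a huge value, so r > a
 * holds exactly when b > a, i.e. when we "underflowed". */
uint32_t sub_checked(uint32_t a, uint32_t b, int *wrapped)
{
    uint32_t r = a - b;     /* wraps modulo 2^32 on underflow */
    *wrapped = (r > a);
    return r;
}

/* Symmetrically, an overflowing add produces a result
 * smaller than either operand. */
uint32_t add_checked(uint32_t a, uint32_t b, int *wrapped)
{
    uint32_t r = a + b;     /* wraps modulo 2^32 on overflow */
    *wrapped = (r < a);
    return r;
}
```

Both checks compile down to a subtract/add plus one compare, which is why leaning on unsigned wraparound is attractive when the language gives you nothing better.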
OTOH I don't agree that proper overflow handling would mostly be a new source of bugs; e.g. on the new Mill CPU architecture we have a fully orthogonal set of all the basic operations:
When adding two numbers (belt values) you can specify signed or unsigned arithmetic, and have over/underflow handled as saturating, wrapping, or trapping, or have the result widen automatically.
Look at ADDSW as an example of a Signed ADD that will widen if needed.
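The Mill selects these policies in hardware per instruction; on a conventional target you can approximate the same menu in C with the GCC/Clang __builtin_add_overflow intrinsic. A rough sketch (the function names are mine, not Mill terminology):

```c
#include <stdint.h>
#include <stdlib.h>

int32_t add_wrap(int32_t a, int32_t b)     /* wraparound policy */
{
    int32_t r;
    __builtin_add_overflow(a, b, &r);      /* result wraps mod 2^32 */
    return r;
}

int32_t add_sat(int32_t a, int32_t b)      /* saturating policy */
{
    int32_t r;
    if (__builtin_add_overflow(a, b, &r))
        /* signed add only overflows when both operands share a sign */
        return a > 0 ? INT32_MAX : INT32_MIN;
    return r;
}

int32_t add_trap(int32_t a, int32_t b)     /* trapping policy */
{
    int32_t r;
    if (__builtin_add_overflow(a, b, &r))
        abort();                           /* analogous to a hardware trap */
    return r;
}

int64_t add_widen(int32_t a, int32_t b)    /* widening, like ADDSW */
{
    return (int64_t)a + (int64_t)b;        /* 64-bit sum cannot overflow */
}
```

The difference is that the software versions cost an extra branch per operation, whereas on the Mill the policy is free: it is just a different opcode over the same belt values.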
Since the Mill carries metadata alongside each belt slot, it does not need separate byte/short/word/dword ADD instructions: the size of the operation is defined by the belt slot specified, not by the instruction encoding, so the machine code is polymorphic in data item size.
I.e. you can start with 8-bit values and an 8-bit accumulator; when the sum becomes too large it is automatically widened to 16 bits or more, all the way up to 128 bits for all scalar operations.
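To emulate that widening accumulator in portable C you have to check for overflow and promote by hand, which is exactly the bookkeeping the Mill does for free. A hypothetical sketch (widening straight from 8 to 32 bits rather than through every intermediate size):

```c
#include <stdint.h>
#include <stddef.h>

/* Sum 8-bit samples, staying in 8 bits while the total fits and
 * promoting to a 32-bit accumulator on the first would-be overflow.
 * (32 bits is wide enough for any plausible int8 sum here.) */
int32_t sum_widening(const int8_t *v, size_t n)
{
    int8_t acc8 = 0;
    size_t i = 0;
    for (; i < n; i++) {
        int8_t next;
        if (__builtin_add_overflow(acc8, v[i], &next))
            break;              /* would overflow int8: widen instead */
        acc8 = next;
    }
    int32_t acc32 = acc8;       /* promote, then finish in 32 bits */
    for (; i < n; i++)
        acc32 += v[i];
    return acc32;
}
```

On the Mill none of this code exists: the accumulator's belt slot simply grows, and the same ADD opcode keeps working.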