I just finished updating the checksum routines at my company. Amazingly, they had been using an 8-bit XOR checksum for years (!) on a mission-critical wireless data link. Since this was low-level mission-critical code with lots of proven flight time, I had to prove conclusively that the new method would, in fact, be better.

TCP uses a 16-bit checksum, so you'd expect a 1 in 65536 chance of a corrupted packet being incorrectly validated as correct. To make matters worse, it uses 1's complement arithmetic instead of 2's complement, so the values 0x0000 and 0xFFFF (positive and negative zero) are indistinguishable.
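To see the ±0 quirk concretely, here is a minimal sketch of 16-bit one's-complement summation in the style of the Internet checksum (RFC 1071); the function names are my own, not from any library:

```python
# Sketch of 16-bit one's-complement addition as used by the TCP/IP
# checksum family (RFC 1071). Helper names are illustrative only.

def ones_complement_add(a, b):
    """Add two 16-bit values with end-around carry."""
    s = a + b
    return (s & 0xFFFF) + (s >> 16)

def internet_checksum(words):
    """One's-complement sum over 16-bit words, then inverted."""
    total = 0
    for w in words:
        total = ones_complement_add(total, w)
    return (~total) & 0xFFFF

# 0x0000 (+0) and 0xFFFF (-0) are the same value in one's complement,
# so swapping one for the other is invisible to the checksum:
print(hex(internet_checksum([0x1234, 0x0000, 0xABCD])))  # 0x41fe
print(hex(internet_checksum([0x1234, 0xFFFF, 0xABCD])))  # 0x41fe
```

Adding 0xFFFF wraps a carry back around and leaves the running sum unchanged, which is exactly why a 0x0000 word corrupted to 0xFFFF (or vice versa) slips through.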

It's not as simple as saying there's a 1 in 2^N chance of an undetected error for an N-bit checksum. It depends strongly on the specific checksum algorithm, and to a lesser degree on the number of bits, the length of the data packet, and the expected bit error rate.

For example, 8-bit XOR lets about 12% of 2-bit errors through undetected, and 16-bit XOR only brings that down to about 6%. WAY different from what you'd expect if the odds simply improved from 1:256 to 1:65536. But an 8-bit CRC has a 1:10^8 chance of an undetected error, and the 16-bit version 1:10^16. Again, WAY different from 1 in 2^N.

Surprisingly (to me at least), 1's complement catches *more* errors than 2's complement does. (The reason: a pair of bit inversions in the most-significant-bit position changes the sum by a multiple of 2^N, which a 2's complement sum silently discards, but 1's complement's end-around carry preserves that overflow, so the error is caught.)
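Here's a tiny 8-bit illustration of that failure mode (helper names are mine): flipping the MSB of two bytes adds 0x80 + 0x80 = 0x100, which vanishes mod 256 but survives the end-around carry.

```python
# Two additive checksums over bytes: a plain modular ("2's complement")
# sum, and a one's-complement sum with end-around carry.

def sum2c(data):
    """Plain sum mod 256 -- carries out of bit 7 are discarded."""
    return sum(data) & 0xFF

def sum1c(data):
    """One's-complement sum -- carries wrap back into bit 0."""
    total = 0
    for b in data:
        total += b
        total = (total & 0xFF) + (total >> 8)   # end-around carry
    return total

good = bytes([0x12, 0x34])
bad  = bytes([0x92, 0xB4])   # MSB flipped in BOTH bytes (+0x80 each)

print(sum2c(good) == sum2c(bad))   # True  -> error undetected
print(sum1c(good) == sum1c(bad))   # False -> error caught
```

The two MSB flips cancel modulo 256, so the 2's complement sum is identical for both messages; the 1's complement sum picks up the wrapped carry and differs by one.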

We decided to go with a 32-bit one's complement Fletcher checksum -- a good compromise between performance and error detection for our application.
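For the curious, a minimal sketch of the one's-complement variant of Fletcher-32 (two running sums mod 65535 over 16-bit words) shows why it's cheap: no table lookups or per-bit work like a CRC, just two additions per word. This is my own simplified version, not our production code; it assumes an even byte count, whereas real implementations also handle odd-length input and periodically defer the modulo for speed.

```python
# Simplified Fletcher-32: two mod-65535 running sums over 16-bit words.
# Assumes len(data) is even; little-endian word assembly.

def fletcher32(data):
    s1, s2 = 0, 0
    for i in range(0, len(data), 2):
        word = data[i] | (data[i + 1] << 8)
        s1 = (s1 + word) % 65535
        s2 = (s2 + s1) % 65535        # second sum makes it position-sensitive
    return (s2 << 16) | s1

print(hex(fletcher32(b"abcdefgh")))   # 0xebe19591

# Unlike a plain additive checksum, Fletcher notices reordered words:
print(fletcher32(b"abcdefgh") == fletcher32(b"cdabefgh"))   # False
```

The second sum weights each word by its position, which is what lifts Fletcher well above a simple sum for burst and reordering errors while keeping the cost close to one.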