I spent a brief part of my career writing code for avionics. A serious amount of testing goes into the code before the FAA will certify it to fly; you have to prove that you've executed every line of code, that every line does exactly what it's supposed to, and that there is no dead code that can never execute. But even with all of the testing we did, we would occasionally get a completely unexpected value and crash the demo box. Lucky me, I was just writing code to encrypt ACARS... nothing that actually made the airplane fly (or not fly...).
My husband and I were at AirVenture checking out EFIS systems for an experimental aircraft that we're building. We managed to crash one of them not once, but three times, just by pushing a few buttons in rapid sequence. Granted, they were experimental and hadn't gone through all of that testing, but every now and then you also hear about a certified system resetting in flight. In fact, a friend of ours recently had his certified EFIS go into a reboot loop in flight because of a faulty database update; luckily he was flying VFR and had backup gauges, so he didn't need the EFIS. There are procedures in place to handle this, but there are also people present in the cockpit to follow them. This is why fly-by-wire scares me, and why it's still a Very Good Thing that commercial aircraft have co-pilots and manual flight systems as backups. There's just too much that can go wrong to trust everything to fly itself -- sometimes you really need a human in the mix thinking "outside of the box" when the feathers start to fly. I think the Sioux City incident is a major example of that, despite how long ago it was.