I agree that he has far too simplistic a view. In practice, error-handling code often needs to be more complex than the code that runs in the normal case, because errors come in many forms (and recovering from them may depend on the input): wrong input, out of memory, out of disk space, insufficient (file) permissions, the filesystem or disk disappearing or getting corrupted, network connection problems, exhaustion of some other resource, a hung transaction, the user yanking a USB device out, etc. And these conditions can change during the execution of the program, especially if the error condition is deliberately caused by a malicious attacker -- changing file permissions, replacing files with links, renaming files, and so on. Yet in most cases, error-handling code isn't given much thought, and typically it is not well tested.
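To illustrate, here is a minimal Python sketch (assuming a POSIX-style system; `write_log_safely` is a made-up name) of how even a single "simple" file write fans out into several distinct failure modes, including the link-replacement attack mentioned above. Opening with O_NOFOLLOW avoids the check-then-open race (TOCTOU) where an attacker swaps the file for a symlink between a permission check and the open:

```python
import errno
import os

def write_log_safely(path: str, data: bytes) -> None:
    # Open without following symlinks: if an attacker replaced the file with a
    # link since we last looked, the open fails instead of writing through it.
    try:
        fd = os.open(path,
                     os.O_WRONLY | os.O_CREAT | os.O_APPEND | os.O_NOFOLLOW,
                     0o600)
    except OSError as e:
        # Each errno is a genuinely different situation needing different recovery.
        if e.errno == errno.ELOOP:
            raise RuntimeError("path is a symlink -- possible attack") from e
        if e.errno == errno.EACCES:
            raise RuntimeError("permission denied (did someone chmod it?)") from e
        if e.errno == errno.ENOSPC:
            raise RuntimeError("out of disk space") from e
        raise  # an unanticipated case: propagate rather than guess
    try:
        os.write(fd, data)
    finally:
        os.close(fd)
```

Even this toy version only names three of the failure modes listed above; a real implementation would also have to consider the device vanishing mid-write, partial writes, and errors reported only at close time.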
And the best (though probably still not completely satisfactory) way of handling an error depends a lot on the situation. Should everything but the error log be cleaned up? Should we clean up some things but leave some kind of transaction log for retrying the operation later? What if we hit a new error during the cleanup itself? Is it better to try to save as much data as possible, or to erase all temporary files? Should we retry the operation automatically, and if so, how soon and how many times? Is there an alternative method for accomplishing the same thing? How do we produce an error message that is as meaningful as possible to the user -- or should we be careful not to reveal too much about the internal state of the program to a possible attacker? Should we wait for user input before handling the error?
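Some of those retry questions ("how soon and how many times?") can at least be factored out into a policy. A rough sketch, with made-up names (`with_retries` is not from any particular library), of a bounded retry with exponential backoff that only retries errors declared transient:

```python
import time

def with_retries(operation, attempts=3, delay=0.1, transient=(OSError,)):
    """Run operation(), retrying transient failures up to `attempts` times.

    Non-transient exceptions propagate immediately; transient ones are retried
    with exponentially growing sleeps. The last transient error is re-raised
    once the attempts are exhausted.
    """
    last_error = None
    for i in range(attempts):
        try:
            return operation()
        except transient as e:
            last_error = e
            if i < attempts - 1:
                time.sleep(delay * (2 ** i))  # back off: delay, 2*delay, ...
    raise last_error
```

Note that this deliberately answers only the "retry" questions; which exceptions count as transient, whether to clean up between attempts, and what to tell the user are still per-situation decisions that such a helper cannot make for you.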