Or that the specs meant something very different to the developers than they did to the client, and the client then had to adjust the specification to get the developers to do the work _they had actually agreed to do in the first place_. I've been encountering this especially with outsourced projects lately, where "QA the system" means "QA the whole system" to most systems or management personnel, but to the 3rd-party QA team it means "test just the new feature". Then, when the new feature breaks or hinders other longstanding features, _which should have been caught and reported by QA_, the developers are faced at the last minute with a mad redesign task that affects _both_ systems and is unstable, to boot. But it passes the very limited test written for that specific bug report, so it is accepted and goes into production.
It's been a difficult few weeks trying to clean up after several messes like that. The work pays the bills, but it's very frustrating to have to clean up _after_ you warned the developers and QA of the risks they were taking with the "test only the new feature" approach.