I can't say one way or the other about whatever your bug was, but I like to do as much validation as possible in my code until there's a real performance cost. Several times in the past few years I've had outside developers start using my code only to have their front-ends blow up with some helpful error messages. They would complain that my backend code was causing their frontend code to fail, but I explained that they were making undefined calls and my backend was just making sure calls were used in a way consistent with the envisioned data model.
I've gained such a reputation for thoroughly checking edge cases that I'm now "that guy" people go to first. This can be annoying, because 95% of the time it's an issue with the frontend, but the person who wrote the frontend doesn't do any useful logging and lets default exception handling drop an "object null" error or whatever. Even the frontend devs come to me asking why their program is breaking. JUST PASS SOME VALID PARAMETERS! They're too used to things silently failing, leaving stuff in invalid states, and only erroring when they attempt to use the invalid state.
My goal is that when you use my code, it either works beautifully or fails spectacularly with a clear reason why. The sooner code breaks, the better. None of this "the code seems to work, but something is in an invalid state because someone forgot to check the state." No, it goes BOOM and is never in an invalid state. All states are accounted for, and what you can do with something in a given state is enforced.
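A minimal sketch of that style, in Python with a hypothetical Order class (the names are mine, purely illustrative): the object can't be constructed in an invalid state, and operations that don't apply to the current state blow up immediately instead of limping along.

```python
from enum import Enum

class State(Enum):
    OPEN = "open"
    SHIPPED = "shipped"

class Order:
    """Fails fast: an Order can never be observed in an invalid state."""

    def __init__(self, items):
        # Reject the invalid state at construction time, not at use time.
        if not items:
            raise ValueError("an Order must contain at least one item")
        self._items = tuple(items)  # immutable snapshot of the items
        self._state = State.OPEN

    def ship(self):
        # Enforce what you can do with something in a given state.
        if self._state is not State.OPEN:
            raise RuntimeError(f"cannot ship an order in state {self._state}")
        self._state = State.SHIPPED
```

The point is that `Order([])` goes BOOM at the constructor, and shipping twice goes BOOM at the second call, so nobody downstream has to remember to check.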
A simple example is authentication. I've seen people write auth APIs where the programmer is supposed to request the user object and, if found, then attempt to validate it. I don't do that. I let you pass in the auth data, and I'll pass back an immutable user object if the validation was successful. With the other way, I've seen programmers who forgot to validate the user object. Oops.
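A rough sketch of that auth shape (the store, names, and hashing here are stand-ins I made up, not any particular library): there is no unvalidated user object to forget to check, because holding an `AuthenticatedUser` is itself proof that validation succeeded.

```python
from dataclasses import dataclass
import hashlib
import hmac

# Hypothetical credential store: username -> password hash.
_USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}

@dataclass(frozen=True)  # immutable: callers can't tamper with it afterward
class AuthenticatedUser:
    username: str

def authenticate(username: str, password: str) -> AuthenticatedUser:
    """Pass in auth data; get back an immutable user object or an error."""
    expected = _USERS.get(username)
    supplied = hashlib.sha256(password.encode()).hexdigest()
    # Constant-time compare; also fails for unknown usernames.
    if expected is None or not hmac.compare_digest(expected, supplied):
        raise PermissionError("invalid credentials")
    return AuthenticatedUser(username)
```

Compare that with the fetch-then-validate design: the compiler and runtime both let you skip the validate step, and nothing complains until the invalid state is used.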
Another example is SQL sprocs. Many programmers like to use internal identifiers. Yay, incrementing integers: no chance of accidentally flipping some arguments around and still getting a valid response, because those identities exist in more than one table. /sarc Nope, UniqueIdentifiers. Virtually no chance of accidentally passing a UUID to the wrong parameter and not getting an error. But UUIDs fragment the index more quickly... boo freaking hoo. I'll worry about that when it becomes a performance issue.
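The failure mode, sketched in Python with dicts standing in for tables (the function and names are illustrative, not real sproc code): with incrementing integers, swapped arguments can still hit valid rows in both tables and silently return the wrong data; with UUIDs, a swapped argument almost certainly matches nothing and the call errors out on the spot.

```python
import uuid

# Two "tables" keyed by UUID primary keys instead of incrementing ints.
customer_id = uuid.uuid4()
order_id = uuid.uuid4()
customers = {customer_id: "Alice"}
orders = {order_id: "order #1"}

def get_order_for_customer(order_key, customer_key):
    # With integer identities, ids 1 and 2 likely exist in BOTH tables,
    # so (2, 1) and (1, 2) both "work" and one quietly returns wrong rows.
    # With UUIDs, a swapped argument matches nothing and fails loudly.
    if order_key not in orders or customer_key not in customers:
        raise KeyError("unknown identifier; arguments were likely swapped")
    return orders[order_key], customers[customer_key]
```

Calling it with the arguments flipped raises immediately instead of returning somebody else's rows, which is exactly the "fails spectacularly with a clear reason" behavior above.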