By all means add more automated testing and hopefully make the QA process smoother, but skipping QA entirely (they fired the QA team) is a recipe for disaster. The only measure they have of how well this works is the number of issues in their issue tracker, which was *fed* by the QA team.
Which can be replaced by an easy issue-submission process embedded in the program itself. Combined with a trace of the user's actions, which Yahoo already collects as part of its distributed architecture, a developer could precisely reproduce any behaviour a user reports.
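As a sketch of the shape of that idea (Python, with hypothetical names throughout; this isn't Yahoo's actual architecture, just one way an in-app reporter could bundle a recent action trace):

```python
from collections import deque
import json
import time

# Hypothetical sketch: keep a bounded trace of recent UI actions so an
# in-app "report an issue" button can attach it to the report automatically.
ACTION_TRACE: deque = deque(maxlen=500)  # only the last 500 actions are kept

def record_action(name: str, **details) -> None:
    # Called from UI event handlers; the action names are illustrative.
    ACTION_TRACE.append({"t": time.time(), "action": name, **details})

def submit_issue(description: str) -> dict:
    # Bundle the user's description with the trace so a developer can
    # replay the exact sequence of steps that led to the behaviour.
    return {"description": description, "trace": list(ACTION_TRACE)}

record_action("open_folder", folder="Inbox")
record_action("click", target="compose_button")
report = submit_issue("Compose window opens blank")
print(json.dumps(report, indent=2))
```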
The testers use the software much as an end user would, and they do the same unexpected human things end users do. That's particularly helpful for identifying usability issues, which aren't really something you can automate.
But you can automate entry of unexpected data too, without having to write the data or the tests yourself. See QuickCheck and the other testing tools it inspired, which invoke a function with a large set of permutations over the full domain of its parameters, starting with the upper and lower bounds, i.e. if your function accepts an int, it will be invoked thousands of times with int.MaxValue, int.MinValue, and a number of randomly selected values in between.
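QuickCheck itself is a Haskell library; as a minimal sketch of the idea, here's the same style of property-based test written with Hypothesis, one of the Python tools it inspired. The `clamp_percent` function and its property are hypothetical examples:

```python
from hypothesis import given, strategies as st

# Hypothetical function under test: clamp any int into the 0-100 range.
def clamp_percent(n: int) -> int:
    return max(0, min(100, n))

@given(st.integers())  # generated inputs include extreme and boundary values
def test_clamp_stays_in_range(n):
    assert 0 <= clamp_percent(n) <= 100

test_clamp_stays_in_range()  # Hypothesis runs this against ~100 generated ints
```

Like QuickCheck, Hypothesis also shrinks any failing input down to a minimal counterexample, so a run over thousands of generated values ends in a readable bug report rather than a random blob of data.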
Sometimes a single-minded march toward matching the spec is a bad idea, because the spec itself is bad and needs to be revisited.
Sure, but you can't fix spec problems in the testing phase with a QA team who aren't domain experts. Only end users can validate something like that, which is why shorter feedback cycles with end users are key to delivering a good result.
Re: humans having a hard time using an interface: this is generally obvious to the developers too while they're writing and testing their own code. They just don't usually have a process through which they can question the viability of the spec and be taken seriously.