I'm sure you don't see it that way, but you're sadly oversimplifying or ignoring every use case, and ignoring the drivers behind QoS in general. If you want something simplistic and turnkey, there are certainly products out there. Netequalizer springs to mind.
But hey, let's throw in a few simple examples:
HTTP downloads vs. Flash video streamed over HTTP. The video is decidedly interactive (even if buffering certainly helps), the bulk download decidedly non-interactive (even if faster = neater, naturally). Same protocol, very different needs.
SIP telephony vs. SIP videoconferencing. Agnosticism per your definition would make the algorithm punish the SIP videocon simply for needing more bandwidth than the voice call.
Or, let's take an even simpler example: P2P. Rather than a few very hungry connections, you get a large number of connections each pushing less data - so per-connection fairness hands the P2P user an outsized aggregate share.
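The arithmetic is easy to see. A quick sketch (hypothetical user names and numbers, just to illustrate) comparing per-connection fairness with per-host fairness on a shared link:

```python
# Why per-connection fairness rewards P2P: one user opens 40 connections,
# another opens 2. Numbers are illustrative, not measurements.
link_mbps = 100
conns = {"p2p_user": 40, "web_user": 2}

total = sum(conns.values())
# Per-connection fairness: each connection gets an equal slice,
# so a user's share scales with how many connections they open.
per_connection = {u: link_mbps * n / total for u, n in conns.items()}
# Per-host fairness: each user gets an equal slice regardless of
# connection count.
per_host = {u: link_mbps / len(conns) for u in conns}

print(per_connection)  # p2p_user grabs roughly 95 of the 100 Mbps
print(per_host)        # an even 50/50 split
```

Nothing about either user's traffic changed; only the unit of fairness did.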
One can always argue that service providers should provide enough bandwidth so that they won't have to prioritize data in the first place. Nice in theory, hard (or simply uneconomic) in practice. Take a cable provider - with a limited upstream bandwidth per channel, you need some sort of fairness. Simple per-plug fairness works to some extent, but you don't really want to punish the puny amount of upstream data your average HTTP request would generate just because the same user is P2P'ing like there's no tomorrow. Makes for a bad user experience.
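The combination described above - fairness across plugs, plus not letting a user's own bulk traffic bury their interactive packets - can be sketched roughly like this. Everything here (class name, the 200-byte threshold, the two-queue split) is illustrative and not any real scheduler's API:

```python
from collections import deque

SMALL = 200  # bytes; treat packets under this as likely interactive (illustrative cutoff)

class PerPlugScheduler:
    """Round-robin across subscribers ("plugs") so one P2P-heavy user
    can't starve others, with a small-packet fast lane per subscriber
    so a lone HTTP request isn't stuck behind that user's bulk uploads."""

    def __init__(self):
        self.fast = {}        # subscriber -> deque of small/interactive packets
        self.bulk = {}        # subscriber -> deque of bulk packets
        self.order = deque()  # round-robin order of subscribers

    def enqueue(self, subscriber, packet_len, payload):
        if subscriber not in self.fast:
            self.fast[subscriber] = deque()
            self.bulk[subscriber] = deque()
            self.order.append(subscriber)
        q = self.fast if packet_len < SMALL else self.bulk
        q[subscriber].append(payload)

    def dequeue(self):
        # One round-robin pass: each subscriber's fast lane drains
        # before their bulk queue, but no subscriber is skipped.
        for _ in range(len(self.order)):
            sub = self.order[0]
            self.order.rotate(-1)
            if self.fast[sub]:
                return sub, self.fast[sub].popleft()
            if self.bulk[sub]:
                return sub, self.bulk[sub].popleft()
        return None
```

Queue a hundred bulk packets for the P2P user and a single small HTTP GET for another subscriber, and the GET still goes out on the very next round instead of waiting behind the whole upload backlog.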
When we get to wireless, it gets even messier with the limited and shared upstream and downstream.
I could go on for a while, but I believe the point has been made. It's not a case of "You simply XYZ" at all.