> You appear to be confused about the issue. This is not about capacity and oversubscription. This is about a pathology of queueing.
To be fair, it's about both.
Large queues are a problem, but they can be mitigated by adding more capacity (bandwidth). It doesn't matter how deep a queue can be if it's never used: if there's enough bandwidth to push every packet out as soon as it's put in the queue, no backlog ever builds up.
That said, your point about AQM being a valid solution to congestion is, of course, right on:
To avoid large (tens of milliseconds or more) queue backlogs on congested links, you use Active Queue Management (AQM). The idea behind AQM is: if you have to queue packets (because you don't have enough bandwidth to push everything out in under 10 or 20 milliseconds), then start dropping packets (or ECN-marking them) so that TCP's congestion control algorithms kick in and the senders slow down.
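(Side note: for the ECN-marking variant to do anything, both TCP endpoints have to negotiate ECN. On a Linux host you can check and opt in with a sysctl; a quick sketch:)

    # net.ipv4.tcp_ecn: 0 = off, 1 = also request ECN on outgoing
    # connections, 2 = only accept ECN if the peer requests it
    sysctl net.ipv4.tcp_ecn        # check the current setting
    sysctl -w net.ipv4.tcp_ecn=1   # opt in on outgoing connections too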
Dropping packets that arrive when the queue is already full is known as tail-drop AQM. Tail-drop is actually one of the worst ways to do AQM. RED (marking or dropping packets *before* the queue becomes full) and head-drop AQM are better for both latency and throughput. However, even simple tail-drop AQM can *drastically* reduce latency on an oversubscribed (congested) link. AQM really works, and it works quite well.
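To make that concrete, here's roughly what RED looks like with Linux's tc (a sketch only; eth0, the 10mbit figure, and the byte thresholds are placeholders you'd tune to your link):

    # RED on a ~10mbit link: between min and max bytes of backlog,
    # arriving packets are randomly dropped (or ECN-marked, thanks to
    # "ecn") with rising probability, well before the queue is full.
    tc qdisc add dev eth0 root red \
        limit 400000 min 30000 max 90000 avpkt 1000 \
        burst 55 ecn probability 0.02 bandwidth 10mbit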
> TCP attempts to divide traffic for different streams evenly among all the flows passing through it.
Well, no, it doesn't. Each stream fights for its own bandwidth, backing off when it notices congestion (dropped or ECN-marked packets). That means the first stream going over the congested link will take the bulk of the bandwidth: it's already transmitting at full speed before other streams try to ramp up, those streams keep hitting congestion as they try, and the first stream never backs off enough to let them catch up. To truly enforce fairness between streams, you need fair-queueing disciplines such as SFQ (Stochastic Fairness Queueing).
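For illustration, on Linux that per-flow fairness is a one-liner with tc (eth0 being a placeholder):

    # SFQ hashes each flow into its own queue and serves the queues
    # round-robin, so an established bulk flow can't starve a new one.
    # "perturb 10" re-seeds the hash every 10 seconds so flows that
    # collide into the same bucket don't stay stuck together.
    tc qdisc add dev eth0 root sfq perturb 10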
> ISPs have a VERY LARGE incentive to do this.
ISPs certainly use AQM on their core routers, but they have an incentive NOT to use AQM where it really matters: on the congested link between your computer and the gateway. In other words, they don't set up proper AQM on the cable modem or DSL modem.
They don't set up AQM there because they have a competing incentive: maximizing speed-test results. AQM by definition slows traffic down, and speed-test results are what customers seem to care about above all else. People don't call support to say they're seeing over 100ms of latency; they call support saying they're paying for 10 megabits and they want to see 10 megabits on the speed-test site.
I don't have any faith that ISPs are going to fix this any time soon. However, AQM really does make a huge difference in the quality of one's internet connection. So much so that the first thing I do when setting up any new shared network (e.g. a home or office network) is put a Linux box in between the cable/DSL modem and the rest of the network. There are many AQM scripts out there, but this one is mine: http://serverfault.com/questions/258684/automatically-throttle-network-bandwidth-for-users-causing-bulk-traffic/277868#277868
My script sets up HFSC and SFQ, as well as an ingress filter, to drop packets before they start filling up the large cable/DSL modem buffers. It does a bang-up job of reducing latency; I can hardly stand to use the internet without AQM in place any more.
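The linked script does a good deal more, but a stripped-down sketch of the same shape looks something like this (eth0 and the rates are placeholders; the trick is to shape to just *below* the modem's real rates, so the queue builds in the Linux box, where you control it, rather than in the modem):

    #!/bin/sh
    DEV=eth0
    UPLINK=800kbit    # slightly below the modem's actual upstream rate
    DOWNLINK=8mbit    # slightly below the actual downstream rate

    # Egress: HFSC caps upload below the modem's rate; SFQ as the leaf
    # qdisc keeps any one flow from monopolizing the link.
    tc qdisc add dev $DEV root handle 1: hfsc default 10
    tc class add dev $DEV parent 1: classid 1:10 hfsc sc rate $UPLINK ul rate $UPLINK
    tc qdisc add dev $DEV parent 1:10 handle 10: sfq perturb 10

    # Ingress: police inbound traffic below the downstream rate,
    # dropping here so TCP backs off before the modem's buffer fills.
    tc qdisc add dev $DEV handle ffff: ingress
    tc filter add dev $DEV parent ffff: protocol ip u32 match u32 0 0 \
        police rate $DOWNLINK burst 50k drop flowid :1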
You can do the same thing (or at least something similar) with some of the SoHo Linux routers running DD-WRT and the like. Most of the scripts for those focus on QoS first and AQM second (if at all), which is a huge mistake. Maybe someday we'll have off-the-shelf SoHo routers that can do *proper* AQM. Now there's a start-up idea if I ever heard one.