Let me summarize the problem being observed: if an interface has more transmit buffer memory than it actually needs, the excess buffering induces latency under load. As an example, consider a 1 Mb/s link. If you want buffering to add at most 0.1 s of latency at high load, then you want 1 Mb/s * 0.1 s = 100 kb = 12,500 bytes of buffer. If you instead have 1 MB of buffer, you have 8 seconds of buffer, and that is the "buffer bloat" issue.

Part of the problem is that buffer sizes tend to be set for the top speed the hardware can drive: if you size a 1 Gb/s interface to buffer 0.1 s but then run it at 100 Mb/s, it holds 1 s worth of traffic. In most home deployments, where the router may have a 1 Gb/s upstream port, maybe four 100 Mb/s physical ports, and a 54 Mb/s wireless radio, you probably have a single buffer shared across all the interfaces. The result is that when you are using the 54 Mb/s wireless you can easily have the buffer oversaturated, while the same buffer size may be just right for the 100 Mb/s interfaces.
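To make the arithmetic concrete, here is a minimal sketch of the sizing math above (the function names are mine, purely for illustration): bandwidth times target delay gives the buffer you want, and a full buffer drained at the link rate gives the delay you actually get.

    # Buffer-sizing arithmetic sketch; names are illustrative, not from any real tool.

    def buffer_bytes_for_delay(link_bps, target_delay_s):
        # bandwidth-delay product, converted from bits to bytes
        return link_bps * target_delay_s / 8

    def delay_when_full_s(buffer_bytes, link_bps):
        # time to drain a completely full buffer at the link rate
        return buffer_bytes * 8 / link_bps

    print(buffer_bytes_for_delay(1_000_000, 0.1))        # 12500.0 bytes: 0.1 s at 1 Mb/s
    print(delay_when_full_s(1_000_000, 1_000_000))       # 8.0 s: 1 MB buffer on a 1 Mb/s link
    print(delay_when_full_s(12_500_000, 100_000_000))    # 1.0 s: buffer sized for 1 Gb/s, run at 100 Mb/s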
What is the solution to this? Realistically, the alternative is to drop packets that have sat in the buffer longer than a configured amount of time, which causes its own performance issues. Net result: TCP would slow down for a period of time, then speed up again, producing a sawtooth. That would also cause periodic problems for other protocols, e.g. VoIP would see dropped packets every time TCP ramps back up.
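As a rough illustration of that time-based drop idea (just a sketch of the policy described above, not any particular real queue-management implementation), a transmit queue could timestamp packets on enqueue and discard anything that has been waiting longer than a configured residence time. Those drops are the signal that makes TCP back off, which is where the sawtooth comes from.

    # Sketch of a time-based drop policy; max_residence_s is an assumed knob.
    import time
    from collections import deque

    class TimedQueue:
        def __init__(self, max_residence_s=0.1):
            self.max_residence_s = max_residence_s
            self.q = deque()  # holds (enqueue_time, packet) pairs

        def enqueue(self, packet):
            self.q.append((time.monotonic(), packet))

        def dequeue(self):
            # Discard anything that waited too long; return the first fresh packet.
            while self.q:
                enq_time, packet = self.q.popleft()
                if time.monotonic() - enq_time <= self.max_residence_s:
                    return packet
                # otherwise the packet exceeded its residence budget and is dropped
            return None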
Solution: Don't download porn when you are trying to do VOIP calls.