It is incredibly frustrating that this conversation is even happening. Why are our last-mile providers happy with just keeping up all...the...time? Why don't we have such an overkill of bandwidth from end to end that any conversation about saturation or usage limits is a laughable theoretical possibility? Someone did a cost study a decade or more ago making the argument for fiber to the desktop in buildings with many floors or campus-type network spans. The biggest arguments were the distance of a fiber run vs. a copper run, the need for switches on every floor with copper vs. aggregate fiber switches in one location on one floor, and the cost of fiber switches and laptop/desktop fiber cards vs. built-in Ethernet.
So, why not bring fiber to every house, once and for all, as a Federal infrastructure project? We once had no power grid, no phone exchange system, no interstate highway system, but we built them. Time to do it again, in a big way... If you are going to run one piece of fiber to a house, run TWO. Make them redundant carriers. If these companies want us to spend without concern, they need to put a nail in this bandwidth coffin sooner rather than later. I wish they would get into the content delivery business only, and let other companies worry exclusively about getting fiber to our living rooms.
I just wish they would make the service and speed so good that everyone would be at war over who provides the best content, serves the best games, has the best HD, the newest movies, etc., without having to worry about the bandwidth or speed aspect of it any longer.
I ran into this all the time with our cabinet and cloud customers at Level 3. The question, or accusation, was always the same: "How do I know I am getting a 100Mbps or 1Gbps handoff from you guys?" "I just tested and I can only get X amount of bandwidth; just look at my Cacti graphs and you will see I am not getting the full amount I am paying for."
So after a short explanation of how a single connection to their server will never fill up the pipe, I would have them run the test, or I would log in to their server and run it for them while they watched the Cacti graph. Cachefly and a few other CDNs host 100mb.bin and 1000mb.bin files that are not compressible and are not affected by WAN accelerators, etc. The trick was to start clicking the file downloads on the server as fast as you could and watch the pipe fill. Most often, their RDP, or whatever remote solution they used, would get so lagged they thought the server was dying. The files would finish downloading and the remote connection would return to normal.
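The click-as-fast-as-you-can trick can be scripted. Here is a minimal Python sketch that opens several parallel downloads of an incompressible test file and reports the aggregate throughput; the URL and the stream count of 8 are my own illustrative assumptions, not anything official, and the same idea works with any large non-compressible file:

```python
import concurrent.futures
import time
import urllib.request

# Hypothetical test-file URL for illustration; Cachefly historically
# served incompressible files like 100mb.bin for exactly this purpose.
TEST_URL = "http://cachefly.cachefly.net/100mb.bin"

def download(url: str) -> int:
    """Fetch url, discard the bytes, and return the byte count."""
    total = 0
    with urllib.request.urlopen(url) as resp:
        while True:
            chunk = resp.read(1 << 16)
            if not chunk:
                break
            total += len(chunk)
    return total

def throughput_mbps(total_bytes: int, seconds: float) -> float:
    """Convert bytes moved over a wall-clock interval to megabits/sec."""
    return total_bytes * 8 / seconds / 1_000_000

def saturate(url: str, streams: int = 8) -> float:
    """Run several downloads in parallel and report aggregate Mbps.

    One TCP stream rarely fills a big pipe, so we open several at once,
    just like clicking the download link over and over.
    """
    start = time.monotonic()
    with concurrent.futures.ThreadPoolExecutor(max_workers=streams) as pool:
        totals = list(pool.map(download, [url] * streams))
    return throughput_mbps(sum(totals), time.monotonic() - start)
```

Calling `saturate(TEST_URL)` from the server in question should produce the same steep green hill on the Cacti graph as the manual version, without lagging out your RDP session by hand-clicking.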
Me: "What does Cacti show now?" Them: "Thanks, I am going to use that test on everything from now on, very cool." I would look at the Cacti graph afterwards and see exactly what you would expect: a steep, near-vertical hill of green for a few minutes, then back down to the 1-2Mbps sustained traffic they were used to seeing. There may be better ways, but I never had a customer argue with me after using this method and seeing the results in Cacti.