By doing the work in parallel, your one PC hits many servers; the busy ones will delay sending you data until they are free, while the others respond quickly. It's basically queueing theory: multiple tellers (the servers) working through one queue (your PC's requests).
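A minimal sketch of that effect, using made-up per-server delays (the server names and delay values are assumptions for illustration): issued sequentially, the total time is the sum of every server's delay; issued in parallel, it is roughly the delay of the single slowest server.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-server response delays in seconds; "server-a" is busy.
SERVER_DELAYS = {"server-a": 0.3, "server-b": 0.1, "server-c": 0.1}

def fetch(server: str) -> str:
    """Simulate one request: a busy server delays before responding."""
    time.sleep(SERVER_DELAYS[server])
    return f"data from {server}"

# Sequential: total time is the SUM of all delays (~0.5 s here).
start = time.perf_counter()
sequential = [fetch(s) for s in SERVER_DELAYS]
seq_elapsed = time.perf_counter() - start

# Parallel: total time is roughly the SLOWEST single delay (~0.3 s here).
start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(fetch, SERVER_DELAYS))
par_elapsed = time.perf_counter() - start

print(f"sequential: {seq_elapsed:.2f}s, parallel: {par_elapsed:.2f}s")
```

Real requests add network jitter on top, but the shape of the result is the same: the busy server no longer blocks the fast ones.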
Even if you have ONE server and multiple PCs, parallel requests make more efficient use of all the caches between the parties. The middleware, or even the server itself, can see that PC5 wants the same resource PC2 is currently requesting, so it can keep that resource in memory rather than re-read the file. Or if PC1-4 send all their requests at once, the server can serve File1 to all of them, then File2 to all of them, and so on, each from memory. That is better for the server than serving Files 1-5 to PC1 in one go, then Files 1-5 again to PC2, re-reading files it has already evicted. The same caching argument still applies when there are multiple servers.
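A toy simulation of that cache effect, under assumed numbers (a tiny LRU cache of 2 entries, 5 files, 4 PCs): serving each PC's full file list in turn misses the cache on every request, while interleaving by file misses only once per distinct file.

```python
from collections import OrderedDict

def count_misses(requests, cache_size=2):
    """Count disk reads for a request stream with a small LRU cache."""
    cache = OrderedDict()
    misses = 0
    for file in requests:
        if file in cache:
            cache.move_to_end(file)        # hit: refresh LRU position
        else:
            misses += 1                    # miss: re-read from disk
            cache[file] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return misses

files = [1, 2, 3, 4, 5]
pcs = 4

# Per-PC order: each PC requests all five files before the next PC starts.
per_pc = [f for _ in range(pcs) for f in files]

# Interleaved order: each file is served to every PC while still cached.
interleaved = [f for f in files for _ in range(pcs)]

print(count_misses(per_pc))       # 20 disk reads: every request misses
print(count_misses(interleaved))  # 5 disk reads: one per distinct file
```

The exact numbers depend on cache size and eviction policy, but the ordering advantage holds whenever the cache is smaller than the working set.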
Additionally, requests made one at a time are not an efficient use of bandwidth. In a single-teller situation, your bandwidth usage falls and rises as you switch between requests* and as the sender's current load varies. With multiple tellers, bandwidth utilization is maximized: if one transfer falls off, the others pick up the slack; if a new request comes in, the others back off... keeping your bandwidth mostly utilized.
As for your point about the browser adapting to a specific type of network connection: this is not a browser or programmer concern. Software sits several layers above the hardware. The complaint should be addressed to the networking driver or the TCP/IP stack configuration. Networking is highly segregated into multiple layers; raw communication happens at one of the lowest of them, and the type of connection you have should be handled at that point. Unfortunately, use cases such as yours are niche enough that Microsoft and most retail network card vendors just don't care.
* = There are overheads in opening and closing a request; finding and queuing a resource; context switching; authentication; authorization; session management; etc.