Multiple flows over the same piece of copper is the entire POINT of digital communication and packet switching.
You realize the packets and flows leaving your house don't all go to the same destination. So why the *hell* would they be put into the same queues or policed the same way? Most of the work is done on flows (roughly, TCP connections), not on physical network ports. So to use your example:
> concurrently play a streamed game, stream a movie, and make a VoiP call, how does that ISP
The game probably has at least two flows: control and content. The control flow is all about latency. It needs almost no bandwidth, and the gamer doesn't care about jitter; they want the lowest possible latency above all else. Reliability definitely counts: dropped packets should be retried. So you put that flow on the low-latency, low-bandwidth path. "Lowest possible latency" implies high jitter, and that's okay.
The VoIP call again uses damn little bandwidth, but this time jitter is the most important thing. Reliability doesn't count: undeliverable packets should NOT be retried. Retrying would actually make the call worse. For best voice quality, you want the ISP to *delay* each VoIP packet so it takes just as long as the last one. Otherwise you say "automobiles" and the person on the other end of the line hears "smoautobile". That's easily done by moving your game packet ahead of the VoIP packet, so that the VoIP packet doesn't arrive early.
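That "delay each packet so it arrives on a steady clock" trick is exactly what a de-jitter (playout) buffer does on the receiving side. Here's a minimal sketch of the idea; the function name and the simplified timing model are mine, not any particular VoIP stack's:

```python
def dejitter(arrivals, interval):
    """Turn variably timed packet arrivals into a steady playout cadence.

    arrivals: list of (arrival_time_seconds, sequence_number) tuples,
              possibly out of order.
    interval: desired spacing between played-out packets (e.g. 0.020
              for 20 ms audio frames).
    Returns [(sequence_number, playout_time), ...] in sequence order.
    """
    # Reorder by sequence number first; the network may reorder packets.
    by_seq = sorted(arrivals, key=lambda pkt: pkt[1])
    playout = []
    clock = None
    for arrival, seq in by_seq:
        # Each packet plays no earlier than one interval after the
        # previous one: early packets get held back, late ones play
        # as soon as they show up (and push the clock later).
        clock = arrival if clock is None else max(arrival, clock + interval)
        playout.append((seq, clock))
    return playout
```

With 20 ms frames, a packet arriving 2 ms early gets held for 2 ms; a packet arriving 15 ms late plays immediately and shifts the whole cadence back. That's the trade the author describes: a little added delay in exchange for no "smoautobile".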
Then you have the Netflix flow. For Netflix, neither latency nor jitter matters. Reliability requirements are moderate: only retry recent packets. Only bandwidth really matters. So those packets go in the "high bandwidth, high latency" queue, to be delivered after your gaming and VoIP packets.
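The whole three-flow setup boils down to strict-priority queueing: classify each flow into a class, always drain the most latency-sensitive non-empty queue first. Real ISP gear does this in hardware, keyed off DSCP marks or flow state; the class names and structure below are just my illustration of the shape of the idea:

```python
from collections import deque

# Hypothetical class assignments matching the example above.
# Lower number = served first.
FLOW_CLASS = {
    "game-control": 0,  # lowest latency, tiny bandwidth
    "voip": 1,          # low latency, paced separately for jitter
    "bulk-video": 2,    # high bandwidth; latency doesn't matter
}

def make_scheduler():
    """A strict-priority scheduler over the three flow classes."""
    queues = [deque() for _ in range(len(set(FLOW_CLASS.values())))]

    def enqueue(flow, packet):
        queues[FLOW_CLASS[flow]].append(packet)

    def dequeue():
        # Strict priority: bulk traffic only moves when the
        # latency-sensitive queues are completely empty.
        for q in queues:
            if q:
                return q.popleft()
        return None

    return enqueue, dequeue
```

So even if a fat Netflix burst is sitting in the bulk queue, the next game-control or VoIP packet jumps straight to the wire. (Production schedulers add weighting or token buckets so bulk traffic can't be starved forever; strict priority alone is the simplest version of the idea.)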