Using modern stateless, configuration-free buffer management algorithms, you can get low latency, low loss, low jitter, and a mostly even distribution of bandwidth. High latency is caused by bufferbloat. Fix the bloat and you fix all of the issues you mentioned, with near-zero configuration out of the box (just set your bandwidth).
Some interesting statistics from bufferbloat research: only a very small percentage of flows are "hogs" at any given instant. Researchers tested link rates from 133Mb/s all the way up to 10Gb/s, and the numbers came out the same. Even with hundreds of thousands of active flows, at any given instant only about 100 flows had at least one packet in the link's buffer, and only about 10 flows had 2 or more packets in it. This means you only need to track about 100 "states" of data, regardless of link rate or saturation.
Many fair queuing algorithms take advantage of this by hashing flows into buckets. With only about 100 flows in the buffer at any given time and something like 1024 buckets, the chance of any two network flows colliding is relatively low, but not zero. Another algorithm extended this with "ways": each bucket can hold up to 8 flows, which are served round-robin. It turns out that adding these few ways resulted in zero collisions across millions of flow states over a relatively slow, congested link.
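A minimal sketch of that set-associative lookup, with hypothetical table sizes matching the numbers above (1024 buckets, 8 ways); real implementations use a keyed hash over the flow's 5-tuple rather than Python's built-in `hash`:

```python
NUM_BUCKETS = 1024   # power of two, so we can mask instead of mod
NUM_WAYS = 8         # flows that can share one bucket without colliding

# table[bucket][way] holds the flow id currently owning that slot (or None)
table = [[None] * NUM_WAYS for _ in range(NUM_BUCKETS)]

def find_slot(flow_id):
    """Return (bucket, way) for this flow: reuse its existing slot if it
    has one, claim a free way otherwise, and only fall back to a true
    collision when all 8 ways in the bucket are already taken."""
    bucket = hash(flow_id) & (NUM_BUCKETS - 1)
    ways = table[bucket]
    for way, owner in enumerate(ways):
        if owner == flow_id:          # flow already owns this slot
            return bucket, way
    for way, owner in enumerate(ways):
        if owner is None:             # claim an empty way
            ways[way] = flow_id
            return bucket, way
    return bucket, 0                  # all ways busy: genuine collision
```

With ~100 concurrently buffered flows spread over 8192 slots, a collision requires 9 flows to land in the same bucket, which is why the measured collision rate drops to zero in practice.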
All of this means a link can fully isolate an effectively unlimited number of flows while using only a small, fixed amount of memory. Next advancement: most of the time, when a packet enters the buffer it quickly leaves it. When a packet arrives, it gets placed into a bucket+way. If that bucket+way was empty, the packet gets prioritized over all other non-new packets. Because 99.9999% of the time there are zero packets in the buffer for a given flow, non-hog traffic gets scheduled immediately.
If a flow suddenly starts to become a hog, its next packet will arrive to find an existing packet already in the bucket+way, so that packet does not get prioritized and instead backlogs. But remember, this situation only applies to about 10 flows at any given instant. All of this means that 99.9999% of packets dequeue immediately with near-zero latency. The "hogs" see their additional packets delayed as they backlog, and if the link is too saturated, eventually a packet is dropped. When that happens, the sender backs off and is no longer a hog.
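A toy sketch of that sparse-flow prioritization, in the style of the FQ-CoDel scheduler: a flow whose queue was empty on arrival goes on a "new" list and is served before any backlogged "old" flow. This is a deliberate simplification; the real algorithms add per-flow byte quantums (DRR) and CoDel-style drop decisions per queue.

```python
from collections import deque

class FairQueue:
    def __init__(self):
        self.queues = {}          # flow_id -> deque of packets
        self.new_flows = deque()  # flows that just became active (sparse)
        self.old_flows = deque()  # flows with a standing backlog (hogs)

    def enqueue(self, flow_id, packet):
        q = self.queues.setdefault(flow_id, deque())
        if not q:                       # queue was empty: a sparse flow
            self.new_flows.append(flow_id)
        q.append(packet)

    def dequeue(self):
        # New (sparse) flows always go first; hogs wait on the old list.
        flows = self.new_flows if self.new_flows else self.old_flows
        if not flows:
            return None
        flow_id = flows.popleft()
        pkt = self.queues[flow_id].popleft()
        if self.queues[flow_id]:        # still backlogged: it's a hog now
            self.old_flows.append(flow_id)
        return pkt
```

The key property: a single packet from a quiet flow jumps ahead of a hog's standing backlog, which is exactly why non-hog traffic sees near-zero queueing delay.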
It turns out that doing it this way actually results in better link utilization. You can run the link at something like 99% utilization while still maintaining sub-millisecond latencies, and average utilization is higher. Win-win-win. Zero downsides other than increased CPU usage. Right now these algorithms run on standard x86 routers/firewalls at close to the 40Gb/s range, and future optimizations to network stacks should allow higher. Assuming these algorithms get well tested, they will start to be implemented in hardware. DOCSIS 3.1 actually mandates PIE by default, which is a non-fair-queue, single-FIFO anti-bufferbloat AQM. PIE is CoDel-like in its goals but is friendlier to hardware like Cisco's ASICs.