Look at our typical scenario: two ISPs, say a 7Mbps DSL line and a 10Mbps cable line, with latencies that differ by roughly 20ms. With careful reordering you can see single-socket performance of about
If you're us and you receive a packet before you've delivered the previous one in the sequence, the right thing to do is to hold onto it until one of two things happens: either the missing packet arrives, so you can deliver what you have in order, or you've seen a packet come in on every interface, at which point you can declare the missing packet lost and deliver everything you have.
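To make that hold-and-release rule concrete, here's a rough sketch of the reordering logic in Python. All the names here (Reorderer, on_packet) are mine for illustration, not Switchboard's actual code; the loss heuristic is exactly the one described above: a gap is declared lost once every interface has delivered something newer.

```python
class Reorderer:
    """Hold out-of-order packets; release in order, or skip a gap once
    every interface has been heard from (missing packet declared lost).
    Illustrative sketch only, not Switchboard's implementation."""

    def __init__(self, num_interfaces):
        self.next_seq = 0          # next sequence number to deliver
        self.held = {}             # seq -> payload, waiting behind a gap
        self.seen_since_gap = set()  # interfaces heard from while a gap is open
        self.num_interfaces = num_interfaces
        self.delivered = []        # in-order output

    def on_packet(self, seq, payload, interface):
        if seq < self.next_seq:
            return                 # duplicate of something already delivered
        self.held[seq] = payload
        if seq != self.next_seq:
            # A gap is open: note which interfaces we've heard from since.
            self.seen_since_gap.add(interface)
            if len(self.seen_since_gap) == self.num_interfaces:
                # Every interface delivered something newer than the missing
                # packet: declare it lost and skip ahead to what we hold.
                self.next_seq = min(self.held)
        # Deliver everything that is now contiguous.
        while self.next_seq in self.held:
            self.delivered.append(self.held.pop(self.next_seq))
            self.next_seq += 1
            self.seen_since_gap.clear()
```

With two interfaces, receiving packets 0, 2, 1 delivers all three in order; receiving 0, 2, 3 (with 2 and 3 arriving on different interfaces) declares packet 1 lost and delivers 0, 2, 3.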
As for SACK and D-SACK: you don't really want to do that for the 30% of your packets that arrive out of order. From what I've seen in the real world, those RFCs were not intended for coalescing streams where a large fraction of the packets may be out of order (as they would be in the DSL + cable example).
Thanks again for your interest.
The latency situation is much more complicated than you describe. You said that we add latency up to the highest-latency connection. But that's not right for software that routes every packet optimally: what you actually see is a latency of (highest latency - ((highest latency - lowest latency) / 2)).
But it's more complicated than that too, because Internet connections' latencies are not constants.
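Treating both latencies as constants for a moment, the expression above simplifies to the midpoint of the two links. A quick check (function name is mine, purely illustrative):

```python
def effective_latency(high_ms, low_ms):
    """Latency seen by a stream split optimally across two links,
    per the formula above: high - (high - low) / 2.
    Algebraically this is just (high + low) / 2."""
    return high_ms - (high_ms - low_ms) / 2

# e.g. a 60 ms link bonded with a 20 ms link behaves like a
# 40 ms link, not a 60 ms one.
```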
We don't add to buffer bloat; we wage war against it. Buffer bloat is the enemy of Switchboard. Switchboard wants to know where every packet is, and devices that claim to transmit but instead park packets in a buffer are a problem. Switchboard is smart enough to sniff this behavior out and give fewer packets to such devices, which hopefully leads to smaller queues, better behavior, and lower latencies. That's why in many cases Switchboard actually helps with latency: a typical 4G card will show 25 ms latency when you ping over it, but once you start a file transfer it will soar north of 800 ms. Switchboard sees this change and starts routing you around that backlog in real time.
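The actual sniffing is Switchboard's secret sauce, but the idea can be sketched: remember each link's baseline RTT, and taper its share of new packets as its measured RTT balloons past that baseline. Everything below (the function name, the 1.5x threshold, the 10% floor) is invented for illustration, not Switchboard's real policy:

```python
def link_weight(baseline_rtt_ms, current_rtt_ms):
    """Scale a link's share of traffic down as its RTT inflates past
    its idle baseline -- a crude stand-in for bufferbloat detection.
    Thresholds here are illustrative only."""
    inflation = current_rtt_ms / baseline_rtt_ms
    if inflation <= 1.5:
        return 1.0                      # healthy link: full share
    return max(0.1, 1.5 / inflation)    # backlogged: taper toward a floor

# A 4G link idling at 25 ms but measuring 800 ms mid-transfer gets only
# a trickle of new packets until its queue drains and its RTT recovers.
```

Shrinking (rather than zeroing) the backlogged link's share lets its queue drain while keeping enough probe traffic flowing to notice when it recovers.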