As mentioned, interrupt processing and context switches can add latency and jitter. Because a 10G LAN has a very small round-trip time (RTT), TCP calculates a very small retransmission timeout (RTO). That makes it very sensitive to small amounts of jitter: a delayed packet can incorrectly be flagged as lost, which triggers a retransmission and collapses the congestion window, killing throughput (some congestion-control tuning can help here).
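To see why, here is a minimal sketch of the standard RFC 6298 RTO estimator with made-up RTT samples (a steady ~50 µs LAN link followed by one jitter spike). Real stacks clamp the RTO to a minimum value, which this deliberately ignores to show how small the raw estimate gets:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    const double alpha = 0.125, beta = 0.25, k = 4.0; /* RFC 6298 constants */
    /* Hypothetical RTT samples in microseconds: steady LAN, then a jitter spike. */
    double samples[] = { 50, 52, 49, 51, 50, 400 };
    double srtt = samples[0], rttvar = samples[0] / 2.0; /* first-sample init */
    double rto = srtt + k * rttvar;
    for (int i = 1; i < 6; i++) {
        double r = samples[i];
        if (r > rto)
            printf("sample %.0fus exceeds RTO %.1fus -> treated as loss\n", r, rto);
        rttvar = (1.0 - beta) * rttvar + beta * fabs(srtt - r);
        srtt   = (1.0 - alpha) * srtt + alpha * r;
        rto    = srtt + k * rttvar;
        printf("sample=%4.0fus  srtt=%5.1fus  rto=%6.1fus\n", r, srtt, rto);
    }
    return 0;
}
```

After the steady samples the estimated RTO is under 100 µs, so a single 400 µs delay from an interrupt or context switch already looks like packet loss.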
The problem can become severe with a 10G connection into a VM over a software bridge, due to the extra jitter the bridge introduces (SR-IOV can help here).
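For reference, on Linux SR-IOV virtual functions can be enabled through sysfs and the VFs passed through to VMs directly, bypassing the software bridge. A sketch, assuming an SR-IOV-capable NIC named eth0, root privileges, and an illustrative VF count of 4:

```c
#include <stdio.h>

int main(void) {
    /* How many virtual functions does the NIC support? */
    FILE *f = fopen("/sys/class/net/eth0/device/sriov_totalvfs", "r");
    int total = 0;
    if (f) { fscanf(f, "%d", &total); fclose(f); }
    printf("NIC supports %d virtual functions\n", total);

    /* Enable 4 VFs; each can then be assigned to a VM as its own PCI device. */
    f = fopen("/sys/class/net/eth0/device/sriov_numvfs", "w");
    if (!f) { perror("sriov_numvfs"); return 1; }
    fprintf(f, "4\n");
    fclose(f);
    return 0;
}
```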
There is a lot you can do to improve TCP's performance in such circumstances: increase the window/buffer sizes, use a modern congestion-control algorithm, enable interrupt coalescing, and use SR-IOV.
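The first two of those can be set per-socket on Linux. A minimal sketch, assuming a BBR-capable kernel; the algorithm name and the 4 MiB buffer size are illustrative, not recommendations:

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Bigger socket buffers so the window can cover the bandwidth-delay product. */
    int buf = 4 * 1024 * 1024;
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &buf, sizeof buf) < 0)
        perror("SO_RCVBUF");
    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &buf, sizeof buf) < 0)
        perror("SO_SNDBUF");

    /* A modern congestion-control algorithm (module must be loaded on the host). */
    const char *cc = "bbr";
    if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, cc, strlen(cc)) < 0)
        perror("TCP_CONGESTION");

    close(fd);
    return 0;
}
```

Interrupt coalescing is configured on the NIC itself (e.g. via `ethtool -C`), and SR-IOV as shown above.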
For very low latency applications (like HFT), people generally don't use interrupts at all. They configure their NIC for kernel bypass (a user-space library with direct access to the NIC hardware, avoiding context switches), place buffers in the NUMA node local to the core handling network events, keep the application hot in CPU cache, align data structures to cache lines, turn off hyper-threading and power saving, and dedicate a core to busy-spinning on network events. Not very green.
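A minimal sketch of the pin-a-core-and-busy-spin part of that pattern, using an ordinary non-blocking UDP socket as a stand-in for a kernel-bypass receive queue (real deployments use something like DPDK or Onload; the core number and port are assumptions, and the chosen core would normally be isolated from the scheduler, e.g. with `isolcpus`):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void) {
    /* Pin this thread to core 3 so it never migrates. */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(3, &set);
    if (sched_setaffinity(0, sizeof set, &set) < 0) {
        perror("sched_setaffinity");
        return 1;
    }

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9000);              /* illustrative port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind");
        return 1;
    }

    char buf[2048];
    for (;;) {
        /* Busy-spin: poll instead of sleeping in an interrupt-driven wait.
         * Burns a whole core, but removes wakeup latency and jitter. */
        ssize_t n = recv(fd, buf, sizeof buf, MSG_DONTWAIT);
        if (n > 0) {
            /* handle packet ... */
        }
    }
}
```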