I work on 10GbE drivers, and the previous generation of tbolt did not really offer 10Gb/s of usable bandwidth to PCIe devices; it was more like 8Gb/s:
If you recall, tbolt muxes PCIe and DisplayPort. On the PCIe side, the thunderbolt bridge passed 2 lanes of Gen2 PCIe through to devices. Since Gen2 is "5GT/s" per lane, you'd think you'd have 10Gb/s. But not really: that figure doesn't account for PCIe overhead (8b/10b line encoding plus protocol headers), which eats roughly 20% of the raw transfer rate. So on the original "10Gb/s" thunderbolt, you were lucky to get a 7Gb/s transfer rate from a 10GbE NIC once you also added in network protocol overheads.
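The arithmetic above can be sketched in a few lines; a rough model, assuming the "about 20%" figure corresponds to Gen2's 8b/10b line encoding (the helper name `usable_gbps` is mine, not anything official):

```python
# Back-of-the-envelope link bandwidth after line-encoding overhead.
# This is only the encoding cost; TLP/DLLP headers and network
# protocol overhead shave the practical number down further (~7Gb/s).

def usable_gbps(gt_per_s, lanes, encoding_efficiency):
    """Raw link rate in Gb/s after line-encoding overhead."""
    return gt_per_s * lanes * encoding_efficiency

gen2_x2 = usable_gbps(5.0, 2, 8 / 10)  # Gen2 uses 8b/10b: 20% overhead
print(gen2_x2)  # 8.0 Gb/s, before TLP headers and TCP/IP overhead
```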
Having a bus-constrained NIC leads to all sorts of weird problems when receiving data. With flow control disabled, bursty transfers often see far less than the 7Gb/s peak, as TCP hunts around to find the bottleneck and recovers from frequent packet-loss events.
It sounds like they've built the new part from 2 lanes of Gen3 PCIe, which should be good for ~16Gb/s of usable bandwidth (Gen3's 128b/130b encoding has far less overhead than Gen2's 8b/10b). This is a very welcome change, as 16Gb/s should be enough for a single-port 10GbE NIC running at full speed, plus a disk controller talking to a fast SSD or an external RAID array that can deliver ~750MB/s (bytes) of I/O.
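Running the same sketch for the Gen3 x2 link shows where the ~16Gb/s comes from, and (my own back-of-the-envelope check, not a claim from the spec) how a 10GbE port plus a 750MB/s storage device lands right at that limit:

```python
# Gen3 x2: 8 GT/s per lane with 128b/130b encoding (~1.5% overhead).

def usable_gbps(gt_per_s, lanes, encoding_efficiency):
    """Raw link rate in Gb/s after line-encoding overhead."""
    return gt_per_s * lanes * encoding_efficiency

gen3_x2 = usable_gbps(8.0, 2, 128 / 130)
print(round(gen3_x2, 2))  # 15.75 Gb/s, i.e. roughly 16

# Headroom check: one 10GbE port plus a ~750 MB/s RAID array.
raid_gbps = 750 * 8 / 1000   # MB/s (bytes) -> Gb/s (bits)
print(10.0 + raid_gbps)      # 16.0 Gb/s -- right at the link's limit
```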
Just don't try to use a bonded 2-port 10GbE NIC, or you're back to the bandwidth-constrained problem.