In fairness to Enry, I tried (in retrospect, not very clearly) to make two somewhat similar points and mushed them together. My intent was the following:
1. Only for certain, fairly specific, tasks does doubling one subsystem's performance amount to 'doubling performance'. In the case of mass storage, databases seem to be the particular sweet spot. For most of what laptops are used for, the near-zero latency of an SSD makes a huge difference; but the difference between 'near-zero latency, 2 PCIe lanes of bandwidth' and 'near-zero latency, 4 lanes' is very unlikely to double performance across the board.
2. Even if 'doubling the performance' of the storage subsystem doesn't double the performance of the tasks you use it for, what is remarkable is that we now have storage hardware (and relatively cheap, at that, unlike DDR-based hardware RAMdisks) good enough that doubling its interface bandwidth genuinely does double its performance. With pretty much any mechanical storage, and some of the earlier SSDs, it barely mattered what the nominal performance of your interface was, because the storage device would let it down. You wanted to avoid PIO, because losing DMA meant more CPU load; and SATA has cables that are less annoying than PATA; but only with big, expensive HDD arrays or contemporary SSDs does the speed of the interface actually make much difference in terms of performance.
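The first point is basically Amdahl's law applied to storage: doubling the bandwidth of one subsystem only doubles the whole task's performance when the task spends essentially all of its time in that subsystem. A rough back-of-the-envelope sketch (the I/O fractions here are made-up illustrative numbers, not benchmarks of any real workload):

```python
def overall_speedup(io_fraction, io_speedup):
    """Amdahl's law: speedup of a whole task when only the I/O portion,
    which takes io_fraction of total time, gets io_speedup times faster."""
    return 1.0 / ((1.0 - io_fraction) + io_fraction / io_speedup)

# A database scan that is ~95% storage-bound: going from x2 to x4 lanes
# (2x the bandwidth) comes close to doubling the whole task.
print(round(overall_speedup(0.95, 2.0), 2))  # ~1.9

# A typical laptop workload that is ~10% storage-bound: the same 2x
# bandwidth bump is barely noticeable end to end.
print(round(overall_speedup(0.10, 2.0), 2))  # ~1.05
```

Which is why the latency jump from HDD to SSD was felt everywhere (it shrank the I/O fraction itself), while further bandwidth doublings mostly pay off for workloads that are still storage-bound.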