The PCIe devices are faster; but it isn't clear why you'd expect much improvement on application-loading workloads, since they tend to be one of two things: either substantially similar to SATA devices, just packaged for the convenience of OEMs who want to go all M.2 on certain designs and clean up the mini-PCIe/SATA-using-mini-PCIe's-pinout-for-some-horrible-reason/mini-SATA/SATA mess that crops up in laptops and very small form factor systems; or markedly more expensive enterprise-oriented devices that focus on IOPS.
SSDs are at their best, and the difference between good and merely adequate SSDs most noticeable, under brutal random I/O loads, the heavier the better. Those are the loads that render mechanical disks entirely obsolete, make cheap SSD controllers start to drop the ball, and let the more expensive ones really shine. Since application makers generally still have to assume that many of their customers are running HDDs (plus the console ports that may only be able to assume an optical disk and a tiny amount of RAM, and the mobile apps that need to work with cheap and mediocre eMMC flash), they would do well to avoid that sort of load.
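If you want to see the random-vs-sequential gap for yourself, a minimal microbenchmark sketch is below. It reads the same set of 4 KB blocks in file order and then in shuffled order; the file name, block count, and sizes are all made-up round numbers for illustration. Note that on a warm page cache (or on any SSD) the two numbers will come out close; the seek penalty only shows up against a cold cache on spinning rust.

```python
import os
import random
import time

def time_reads(path, offsets, block=4096):
    """Read `block` bytes at each offset, return elapsed seconds."""
    with open(path, "rb", buffering=0) as f:
        start = time.perf_counter()
        for off in offsets:
            f.seek(off)
            f.read(block)
        return time.perf_counter() - start

def benchmark(path, blocks=1024, block=4096):
    """Time the same offsets sequentially and in shuffled order."""
    n = min(blocks, os.path.getsize(path) // block)
    seq = [i * block for i in range(n)]
    rnd = seq[:]
    random.shuffle(rnd)  # same data, scattered order: forces seeks on an HDD
    return time_reads(path, seq, block), time_reads(path, rnd, block)

if __name__ == "__main__":
    # Scratch file for demonstration; a real test would use a file much
    # larger than RAM (or drop the page cache) so the OS can't hide seeks.
    path = "scratch.bin"
    with open(path, "wb") as f:
        f.write(os.urandom(4096 * 1024))  # 4 MiB
    seq_t, rnd_t = benchmark(path)
    print(f"sequential: {seq_t:.4f}s  random: {rnd_t:.4f}s")
    os.remove(path)
```

Serious measurement is better left to a purpose-built tool like fio, which can bypass the page cache and drive real queue depths; this sketch just shows the shape of the test.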
HDD vs. SSD was a pretty dramatic jump because even the best HDDs absolutely crater if forced to seek (whether by fragmentation or by two or more programs both trying to access the same disk); but there aren't a whole lot of desktop workloads where 'excellent at obnoxiously seeky workloads' vs. 'damned heroic at obnoxiously seeky workloads' makes a terribly noticeable difference. Plus, a lot of desktop workloads still involve fairly small amounts of data, so a decent chunk of RAM is both helpful and economically viable. Part of the appeal of crazy-fast SSDs is that they cost rather less per GB than RAM does, while not being too much worse, which allows you to attack problems large enough that the RAM you really want is either heroically expensive or just not for sale. On the desktop, a fair few programs in common use are still 32-bit, and much less demanding.
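The "HDDs crater when seeking" point is easy to put numbers on. A back-of-envelope sketch, using assumed round figures (roughly 10 ms combined seek plus rotational latency, ~150 MB/s sequential throughput, 4 KB random reads) rather than measurements from any particular drive:

```python
# Back-of-envelope: why seeky workloads crater an HDD.
# All figures are assumed round numbers, not measured values.
AVG_ACCESS_S = 0.010   # ~10 ms combined seek + rotational latency
SEQ_MBPS = 150         # typical 7200 rpm sequential throughput
BLOCK_KB = 4           # small random reads

iops = 1 / AVG_ACCESS_S               # ~100 random reads per second
random_mbps = iops * BLOCK_KB / 1024  # ~0.39 MB/s of 4 KB random reads
slowdown = SEQ_MBPS / random_mbps     # sequential ends up ~380x faster

print(f"{iops:.0f} IOPS -> {random_mbps:.2f} MB/s random "
      f"vs {SEQ_MBPS} MB/s sequential ({slowdown:.0f}x gap)")
```

Roughly two and a half orders of magnitude separate the drive's best case from its worst, which is why a second program hitting the same spindle hurts so badly, and why even a cheap SSD (with no head to move) makes the problem vanish.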