Yes, but that is what the XPoint technology is trying to address. NVMe is not designed to operate like RAM, and its latencies are still very high: nominal NVMe latency for a random access is 15-30µs. The performance (1.5-3.0 GBytes/sec for normal and 5 GBytes/sec+ for high-end NVMe devices, reading) comes from the multi-queue design, which allows many requests to be in flight at the same time.
Very few workloads can attain the request concurrency needed to actually max out an NVMe device. You have to have something like 64-128 random requests outstanding to max out the bandwidth (fewer for sequential). Server-side services have no problem doing this, but very few consumer apps can take full advantage of it. A rough userland sketch of what that kind of queue depth looks like is below.
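To make the concurrency point concrete, here is a minimal sketch (not from the original post) that keeps ~64 random 4 KiB reads in flight using pthreads and pread(). The device path, scan span, and request counts are illustrative assumptions, not measured values:

    /*
     * Sketch: approximate the queue depth an NVMe device needs before
     * it reaches full read bandwidth by keeping NTHREADS random reads
     * outstanding at once. Illustrative only.
     */
    #include <fcntl.h>
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define NTHREADS         64              /* ~64 requests in flight */
    #define BLKSIZE          4096            /* 4 KiB random reads */
    #define READS_PER_THREAD 10000
    #define SCANSPAN         (8ULL << 30)    /* read within an 8 GiB span */

    static const char *devpath = "/dev/da0"; /* hypothetical device or file */

    static void *
    reader(void *arg)
    {
        unsigned int seed = (unsigned int)(uintptr_t)arg;
        char *buf = malloc(BLKSIZE);
        int fd, i;

        fd = open(devpath, O_RDONLY);
        if (fd < 0) {
            perror("open");
            free(buf);
            return NULL;
        }
        for (i = 0; i < READS_PER_THREAD; i++) {
            /* Block-aligned random offset within the scan span. */
            off_t off = (off_t)((rand_r(&seed) % (SCANSPAN / BLKSIZE)) * BLKSIZE);
            if (pread(fd, buf, BLKSIZE, off) != BLKSIZE)
                perror("pread");
        }
        close(fd);
        free(buf);
        return NULL;
    }

    int
    main(void)
    {
        pthread_t threads[NTHREADS];
        int i;

        for (i = 0; i < NTHREADS; i++)
            pthread_create(&threads[i], NULL, reader,
                           (void *)(uintptr_t)(i + 1));
        for (i = 0; i < NTHREADS; i++)
            pthread_join(threads[i], NULL);
        return 0;
    }

Each thread stands in for one outstanding request; benchmarking tools like fio do the same thing much more carefully. The point is that a typical consumer app issuing one synchronous read at a time never generates this kind of queue depth, so it never sees the headline bandwidth numbers.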
The NVMe design is thus more akin to a fast storage controller and should not be considered similar to a dynamic RAM controller in terms of performance capability.
Because of the request concurrency required to actually attain the high read capability of an NVMe device, people shouldn't throw away their SATA SSDs just yet. Most SATA SSDs will actually have higher write bandwidth than low-end NVMe devices (particularly small form factor NVMe devices). And for a lot of (particularly consumer) workloads, the NVMe SSD will not be a whole lot faster.
That said, I really love NVMe, particularly when configured as swap and/or a swap-based disk cache. And I love it even more as a primary filesystem. It's so fast that I've had to redesign numerous code paths in DragonFlyBSD to be able to take full advantage of it. For example, the buffer cache and VM page queue (pageout daemon) code was never designed for a data read rate of 5 GBytes/sec. Think about what 5+ GBytes/sec of newly instantiated file-backed VM pages does to normal VM page queue algorithms, which normally keep only a few hundred megabytes of completely free pages in PG_FREE. The pageout daemon couldn't recycle pages fast enough to keep up!
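As a rough back-of-envelope (my numbers, assuming 4 KiB pages): 5 GBytes/sec is about 1.2 million pages instantiated per second, while a few hundred megabytes of free pages is on the order of 100K pages, so an unreplenished free list would drain in well under a second.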
It's a nice problem to have :-)