But the bottleneck is not CPU itself for a good many applications.
That's true, but it's also not relevant. For most "apps", the main issues are battery life and responsiveness. Multi-core is increasingly seen as a tool to improve responsiveness rather than throughput: the app looks like it hasn't fallen asleep, even if it hasn't yet done what the user asked.
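A minimal sketch of that responsiveness pattern, using Python's standard threading module (the "UI loop" here is just a polling loop standing in for a real event loop):

```python
import threading
import time
import queue

def slow_task(result_q):
    # Simulate the long-running work the user actually asked for.
    time.sleep(0.2)
    result_q.put("done")

def ui_loop():
    results = queue.Queue()
    worker = threading.Thread(target=slow_task, args=(results,))
    worker.start()
    ticks = 0
    # The foreground loop never blocks on the work, so it can keep
    # repainting and handling input -- the app looks awake throughout.
    while worker.is_alive():
        ticks += 1              # stand-in for "repaint / handle events"
        time.sleep(0.01)
    return results.get(), ticks

ui_loop()
```

The point is not speed: the total work is the same, but the user-facing loop keeps turning while the second core (or thread) grinds away.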
If I ask a database to do a sort, it may use parallelism under the hood, [...]
Interesting example. I wrote the sort subsystem for a (non-SQL) DBMS in one of my previous jobs, so... I guess this illustrates that we come from different perspectives on this point. In case you are curious, it was single-threaded on each machine, but it was designed to work on a clustered database, so it was parallel in the sense that it sorted across multiple machines in a cluster (which is what we called it before we called it a "cloud").
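The shape of that kind of clustered sort can be sketched in a few lines (this is an illustration of the general technique, not the actual DBMS code): each "node" sorts its own partition independently, and the sorted runs are then combined with a k-way merge.

```python
import heapq

def cluster_sort(partitions):
    """Sketch of a clustered sort: each 'node' sorts its own partition
    (on a real cluster these sorts run in parallel on separate machines),
    then the sorted runs are merged into one ordered stream."""
    sorted_runs = [sorted(p) for p in partitions]  # local, per-node sorts
    return list(heapq.merge(*sorted_runs))         # k-way merge of the runs

# Three "nodes", each holding an unsorted slice of the data.
cluster_sort([[9, 2, 7], [5, 1], [8, 3]])  # -> [1, 2, 3, 5, 7, 8, 9]
```

Note that each per-node sort is itself single-threaded; the parallelism comes entirely from running the partition sorts on different machines.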
That "root engine" may indeed use FP, but the model maker doesn't have to know or care.
Right, and that's the advantage: pure functional programming ensures that the client doesn't have to care. Because the workers are pure functions, they are guaranteed not to modify anything they are not supposed to.
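A toy contrast of the two styles (illustrative names, not from any particular system): the pure version leaves the caller's data untouched, so the client needs no knowledge of how, or where, the worker runs.

```python
def scale_pure(xs, k):
    # Pure: the result depends only on the arguments,
    # and the caller's list is guaranteed to be left intact.
    return [x * k for x in xs]

def scale_impure(xs, k):
    # Impure: mutates state the caller may not expect to change.
    for i in range(len(xs)):
        xs[i] *= k
    return xs

data = [1, 2, 3]
scale_pure(data, 10)    # data is still [1, 2, 3]
scale_impure(data, 10)  # data is now [10, 20, 30]
```

With `scale_pure`, the engine is free to run the worker on another thread, another machine, or twice (for fault tolerance) without the client ever noticing.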
Map/reduce was all the rage a couple of years ago. I think the main advantage was not the map/reduce model itself, but the realisation that when you have "big data", you take the code to the data rather than taking the data to the code. But on top of that, forcing yourself into a pure functional style means that your code can run anywhere, because it doesn't care about the context in which it runs.
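To make that concrete, here is a toy word count in map/reduce style (a sketch of the model, using made-up partition data). Because the map and reduce steps are pure, the same code can be shipped to whichever node holds each partition:

```python
from functools import reduce

def map_step(line):
    # Pure map: one input line -> a list of (word, 1) pairs.
    return [(word, 1) for word in line.split()]

def reduce_step(counts, pair):
    # Pure reduce: build a new dict rather than mutating the accumulator.
    word, n = pair
    counts = dict(counts)
    counts[word] = counts.get(word, 0) + n
    return counts

# Two "partitions" of lines, as if held on two different nodes.
partitions = [["the cat sat", "the mat"], ["the dog"]]
mapped = [pair for part in partitions
               for line in part
               for pair in map_step(line)]
totals = reduce(reduce_step, mapped, {})
totals["the"]  # -> 3
```

Since `map_step` touches nothing outside its arguments, the engine can run it on any node, in any order, and retry it after a failure, which is exactly the "run anywhere" property being described.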