I use -- and write -- image processing software. Correct use of multiple cores yields *significant* performance gains, far beyond single digits. I have a dual 4-core, 3 GHz Mac Pro, and I can control the threading of my algorithms on a per-core basis. Every core adds speed when the algorithms are designed so that a region stays with one core and thus remains in-cache for the duration of the hard work.
The key is to keep main memory from becoming the bottleneck, which it immediately will if you simply sweep through your data top to bottom (presuming your data is bigger than the cache, which is typically the case with DSLR images today.) Now, if they ever get us main memory that runs as fast as the actual CPU, that'll be a different matter, but we're not even close at this point in time.
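To make the region-per-core idea concrete, here's a minimal Python sketch of the band partitioning I'm describing: the image is split into contiguous horizontal bands, and each worker touches only its own band, so its working set can stay resident in one core's cache. All names here are mine, and the per-pixel work is a toy brightness adjustment -- real image code would be native/SIMD, and in CPython the GIL means you'd use processes or a C extension to actually get per-core parallelism. The sketch only illustrates the partitioning strategy.

```python
from concurrent.futures import ThreadPoolExecutor

def adjust_band(pixels, start_row, end_row, width, delta):
    """Brighten one contiguous band of rows in place.

    Because the band's rows are contiguous in memory, one worker's
    working set stays small and cache-friendly.
    """
    for r in range(start_row, end_row):
        base = r * width
        for c in range(width):
            i = base + c
            pixels[i] = min(255, max(0, pixels[i] + delta))

def adjust_parallel(pixels, width, height, delta, workers=4):
    """Split the image into one contiguous band per worker."""
    band = (height + workers - 1) // workers  # rows per band, rounded up
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [
            pool.submit(adjust_band, pixels, r, min(r + band, height),
                        width, delta)
            for r in range(0, height, band)
        ]
        for f in futures:
            f.result()  # propagate any worker exception
    return pixels
```

The design point is the partitioning, not the pool: a naive top-to-bottom sweep shared by all cores keeps evicting each core's cache lines, while one-band-per-core keeps each core re-reading memory it already has.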
So it really depends on what you're doing, and how *well* you're doing it. Understanding the limitations of memory and cache is critical to effective use of multicore resources. You won't find a lot of code that does that sort of thing outside of very large data processing, and many individuals don't do that kind of processing at all, or do it so rarely that speed isn't the key issue -- only results matter. But there are certainly common use cases where hanging onto a ten-year-old machine would waste your time in an unacceptable way. As a user, I am constantly editing my own images with global effects, so multiple fast cores make a real difference for me. A single-core machine is crippled by comparison.