It all depends on the task.
Of course, but putting it that way wouldn't be as funny. ;-)
Actually, in one of my recent jobs, I spent a day or so going through an app's use of malloc(), which was called a lot to hold data structures for a short time before free()ing them. I replaced these calls with calls to a pair of routines that "freed" a chunk by putting it on a list of similar-sized chunks, to be re-used later. The result was a program that was maybe 5% bigger in most runs, but ran between 4 and 5 times faster. This cut the task from typically a day or so down to a few hours. The client was very happy with the result. Sometimes it's worth paying a (competent ;-) programmer to do things like this. And sometimes it's not worth it.
(Yes, the program had been spending up to 80% of its time inside the malloc() and free() routines. They are a lot cruftier and slower than most people would guess. ;-)
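For the curious, the trick is nothing fancy. Here's a minimal sketch of the idea in C (the names, size classes, and power-of-two rounding are made up for illustration, not the client's actual code): allocation first checks a per-size-class free list, and "freeing" just pushes the chunk back onto that list instead of handing it back to the system.

    #include <stdlib.h>

    /* Hypothetical free-list allocator sketch: recycle "freed" chunks
     * by size class instead of round-tripping through malloc()/free(). */

    #define NUM_CLASSES 8               /* size classes: 16, 32, 64, ... 2048 bytes */

    typedef struct chunk {
        struct chunk *next;             /* next recycled chunk of the same class */
    } chunk;

    static chunk *free_lists[NUM_CLASSES];

    /* Map a requested size to a size-class index; -1 means "too big". */
    static int class_of(size_t n)
    {
        size_t cap = 16;
        for (int i = 0; i < NUM_CLASSES; i++, cap <<= 1)
            if (n <= cap)
                return i;
        return -1;
    }

    void *pool_alloc(size_t n)
    {
        int c = class_of(n);
        if (c < 0)
            return malloc(n);           /* oversized requests fall through to malloc */
        if (free_lists[c]) {            /* reuse a previously "freed" chunk */
            chunk *p = free_lists[c];
            free_lists[c] = p->next;
            return p;
        }
        return malloc((size_t)16 << c); /* first time: allocate the full class size */
    }

    void pool_free(void *p, size_t n)
    {
        int c = class_of(n);
        if (c < 0) {
            free(p);                    /* oversized chunks go back to the system */
            return;
        }
        chunk *ch = p;                  /* push onto the free list for that class */
        ch->next = free_lists[c];
        free_lists[c] = ch;
    }

The win comes from the short-lived allocation pattern: once the program has warmed up, almost every pool_alloc() is a couple of pointer operations instead of a trip through the general-purpose allocator, at the cost of never returning the pooled memory to the OS (hence the ~5% size increase).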
One advantage of the increases in speed and memory is that you can use the practical approach of writing quick-and-dirty prototypes. If it's "good enough" and not used much, you might as well leave it that way. If it's used a lot, you can study the code, and find ways of optimizing it.
But that's not nearly as funny as observing that the study-and-optimization part is rarely done, with results that most of us old-timers know well: the modern super-fast machines seem to take as long to do most tasks as the old, slow machines did 20 or 30 years ago. The only real change is often just the flashy graphics surrounding the output (and lots of unused white space on the screen).
OTOH, I did just look at the local weather radar images on my laptop, and checked Google's traffic maps for congested areas. Couldn't do those things 20 years ago.