Efficient programming has been done for generations. Old mainframe code had to fit into very tight memory and run on slow processors, so it HAD to be written to run efficiently. And it takes effort to write efficient code. I've read old FORTRAN code where the author was making a serious effort to save individual bytes, let alone kilobytes. And it WAS efficient and fast.
But to a "business perspective", "efficiency" is about ROI. Now we have GB of RAM and TB of storage, whereas we used to be stuck with kB of RAM (maybe less!) and not much more storage. So today there are more occasions when it's faster and cheaper to throw cheap hardware at less efficient code than to spend money on expensive programmers who are skilled enough to make a program more efficient. And we have our current era of the cheapest available developers slapping together lumps of libraries from vast repos and producing bloated, inefficient code that barely does the job. But ROI is good: It costs less and the job gets done.
There's still a place for "efficient code": number crunching and password cracking. High-energy physics analysis and tools like hashcat are places where efficient code is vital. Elsewhere, "it works, but slowly" is often "good enough".
Frankly, I'd rather have less "efficient" code if I can have more "secure" code. Not that the two are mutually exclusive, but if only limited effort is going to be expended, I'd rather wait a few more seconds at the ATM than have my savings stolen.
There are a lot of dimensions to this problem, and it's a LONG way from being solved.