Oh boy. There's a lot wrong with this lazy essay. It's obviously geared toward the liberal arts crowd. I mean, it's the New Yorker, not Wired or Scientific American.
* There's no rigorous testing of GPT-4 in this piece. Any of us who've spent a lot of time with it know its strengths and weaknesses. Yes, it can spit out boilerplate quickly, or solutions to common problems that are well represented in its training data. But in my year of experimenting with it, including on small- to medium-size programming projects, it struggles with: coordinating several layers or components of abstraction; keeping things simple (I frequently catch it over-complicating things); genuine creativity -- so much of its output, from fiction to code, is the most generic and bland thing possible; and, of course, the much-talked-about hallucinations.
* I question how informed this author is. He seems like the dabbler he thinks he isn't. The "centaur" idea in chess was discredited years ago: humans can't beat the strongest chess AIs even with assistance. He struggled with "Hello, world"? Seriously? Come on, man. I can't take this seriously.
It's a neat tool. I use it for getting syntax in languages/frameworks I'm less familiar with, for boring boilerplate, for getting ideas when stuck. But it absolutely does not have the "qualities of a senior engineer". That's laughable.
Now, I'm sure we'll see AGI in the long term; I think within five years or less. But right now it's not replacing humans except at the lowest-value, most trivial tasks. It won't even replace a bright intern or a new junior software engineer. It simply doesn't have the ability to self-direct, follow through on long-term goals, use common sense and abstract reasoning, etc.
It's a good assistant and power tool -- as long as you know its limits and double-check its work -- but this article jumps the gun in reporting on the "waning" of the craft. We're not there yet.