This story comes up every few years. They are all written from the same template: this new generation of tools will let everyone write their own apps, you won't need professional programmers anymore, and here's an example of an 8-year-old who made an app in two minutes, etc. These stories are written by people who don't have a clue what a programmer's working day looks like, and who imagine someone sitting isolated at a desk, typing in code all day.
The programmer's job is to translate the requirements of other humans into requirements that a machine can understand. For very well-defined application domains, it is indeed possible for non-programmers to use tools that achieve the same thing. That was already the case with VB 1.0 and its predecessors, and I don't think the scope of applications that can be written this way has broadened much in the decades since. An application that is just a dialog with very simple logic can be built by drawing the dialog and copy-pasting and adapting small statements.
Beyond that, there has not been much progress in "auto-writing" applications. The reason is that current programming languages are already fairly efficient and precise ways of explaining to a computer what it should do. The job of the programmer is to translate the "business" requirements into that specification.
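To make that concrete, here is a toy sketch (the rule and all names are invented for illustration, not taken from any real system): even a one-sentence business rule maps almost directly onto code, because the language itself is the precise specification.

```python
# Hypothetical business rule: "orders over $100 get a 10% discount".
# The rule and the function name are invented for illustration.

def order_total(subtotal: float) -> float:
    """Return the amount to charge, applying the discount rule."""
    if subtotal > 100:          # "over $100" -- strictly greater than
        return subtotal * 0.90  # 10% off
    return subtotal

print(order_total(50))   # below the threshold: charged in full
print(order_total(200))  # over the threshold: 10% discount applied
```

The code is barely longer than the English sentence, and unlike the English it settles the boundary case ($100 exactly gets no discount). That precision is what the programmer adds.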
Even for a fairly trivial stand-alone program, computers can't do this today. And even if that were to happen, consider that much of a programmer's time is not spent writing code in some confined, well-defined environment, churning out line after line. Rather, it is spent understanding customer-specific requirements, realizing when they are impossible or inconsistent and working out a solution together with the customer, integrating multiple systems at many different levels through completely different interfaces and paradigms, and figuring out how it is all going to work together.
My experience is that most automatic code-writing and analysis tools stop working when you add this layer of complexity. They may work well for a completely stand-alone Java application with a main() method, but how do they cope when you have client code, native code, server code, 10 different frameworks, 3rd-party libraries (some written in other languages), external web services, legacy systems, end-users who think about things differently than the internal data models do, implicit contracts and conventions, leaky abstractions, unspecified time-out conditions, workarounds required by bugs in other systems, and differences in end-user setups? The complexity and the bugs here are always resolved by human programmers, and I suspect it will stay that way for a long, long time even if computers manage to write stand-alone programs themselves (which IMHO would be the point where we have something approaching general AI). While you can take individual aspects of these problems and formalize them, the number of combinations and the diversity of scenarios is so big that it becomes impractical.
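For instance, a lot of real integration work ends up looking like this hypothetical sketch; nothing here is from a real system, it just illustrates the kind of glue that accumulates around an external service with unspecified timeout behavior that you have to work around:

```python
import time

# Hypothetical glue code around a flaky external service.
# The names, retry count, and delay are invented for illustration.

def call_with_retries(service, payload, attempts=3, delay=0.1):
    """Retry a flaky external call, papering over intermittent timeouts."""
    last_error = None
    for _ in range(attempts):
        try:
            return service(payload)
        except TimeoutError as e:  # upstream timeout behavior is unspecified
            last_error = e
            time.sleep(delay)      # back off before retrying
    raise last_error

# Stand-in "service" that fails twice, then succeeds:
calls = {"n": 0}
def flaky_service(payload):
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("upstream hung")
    return {"ok": True, "echo": payload}

print(call_with_retries(flaky_service, "ping"))  # succeeds on the third attempt
```

No specification told anyone to write this wrapper; it exists because some other system misbehaves, and knowing that it is needed is exactly the kind of judgment the tools don't have.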
If you have competent programmers, most of the bugs are going to be in these areas, since it is comparatively easy for a competent human programmer to write a stand-alone program that works well on one machine.