Except that the AI also gets better at helping people with less training to do that design and architecture work, which means more people become capable of doing that work, which means the value to the firm declines, and the salary they pay declines with it.
As the overall productivity of the people increases, there are more people who can do the work at each level, so, perversely, it encourages companies to pay them less.
I'm not sure that part is true—or at least it's true only with major caveats. The people I see who skip the basics and try to do design and architecture do it the way LLMs do it: superficial pattern matching. That lets you solve simple problems by gluing together off-the-shelf parts, but without understanding the fundamentals, the solutions tend to be stupendously inefficient. I used to be in the 'compute is cheap' camp, and when you're comparing Java vs. optimized C in a business app, that's generally true. When you're comparing 'touch it once' binary data vs. 'let's serialize everything as text, add a GUID for every data point, then bounce it between multiple servers and disks, while logging all of it to Kibana', you can turn a $1000/month infrastructure bill into a $1000/hour one.
Those same people will tell me their architecture is more 'scalable' and 'reliable' because it uses all the buzzwords.
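To make the serialization point concrete, here's a rough sketch (hypothetical numbers, not from any real system) comparing the two approaches on the same data: packed binary floats vs. JSON where every data point carries its own GUID.

```python
import json
import struct
import uuid

# Hypothetical workload: 1,000 float64 samples serialized two ways.
samples = [i * 0.5 for i in range(1000)]

# 'Touch it once' binary: 8 bytes per float64, packed contiguously.
binary = struct.pack(f"{len(samples)}d", *samples)

# Text pipeline: each data point wrapped in a JSON object with a GUID.
text = json.dumps(
    [{"id": str(uuid.uuid4()), "value": v} for v in samples]
).encode()

print(len(binary))              # 8000 bytes
print(len(text))                # tens of kilobytes
print(round(len(text) / len(binary), 1))
```

The size blowup alone is several-fold before you've bounced it between servers, written it to disk, or shipped it to a log aggregator—each hop multiplies the cost again.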
To be fair, I do think these tools help people advance faster when used correctly, but I disagree that they catapult people forward the way you suggest. The gap can probably be closed, but it likely means more formal schooling to "get on the ladder", since the 'apprentice' jobs have been taken by an AI.
As the overall productivity of the people increases, there are more people who can do the work at each level, so, perversely, it encourages companies to pay them less.
Here we agree; I'm not sure what the solution is.