AI code generation can be very useful, but unless someone understands the code with a degree of expertise, they have no idea whether it's trash and certainly no idea how to debug it. So yeah, I'd say it's fine to ask AI questions and have it generate boilerplate, but only if you can understand what it's outputting, fix it, and deal with edge cases. It augments knowledge; it does not replace it.
What's worse is that when it's wrong, it can be very wrong, in obvious or non-obvious ways. That's not surprising given how these AIs work: they ingest large quantities of data, train on it, and spew out tokens based on probability and randomness. They can hallucinate and they can exhibit bias. I have to wonder where they even get their code samples from. Let's say it's somewhere like GitHub: then they could obviously be trained on some really terrible code, and the farther off the beaten track you go, the worse it's going to get.
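That "probability and randomness" point can be made concrete with a toy sketch. Real models score tens of thousands of tokens with a neural network; this is just a hand-rolled illustration of the final step, temperature-based sampling from made-up scores (the candidate tokens and logit values here are invented for the example):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Pick one token from {token: score} by softmax sampling.

    Higher temperature flattens the distribution, so less likely
    tokens get picked more often; lower temperature makes the
    choice nearly deterministic.
    """
    scaled = [score / temperature for score in logits.values()]
    peak = max(scaled)  # subtract max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    return rng.choices(list(logits.keys()), weights=weights, k=1)[0]

# Hypothetical next-token candidates after some code prefix.
candidates = {"return": 2.0, "printf": 1.5, "goto": -1.0}

token = sample_token(candidates, temperature=0.8)
```

Run it twice and you can get different tokens; that inherent randomness is one reason the same prompt can yield correct code one time and subtly broken code the next.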