AI is very good for novices, people who don't know something well.
There is plenty of evidence already that novices using AI will remain novices, rather than develop advanced skills. So yes, as a "novice", you can get to some result quicker by using AI, but the result will be that of a "fool with a tool", and the next thing you produce won't be any better, because you didn't learn anything.
It depends...
So, I'm a very experienced software engineer. Going on 40 years in the business, done all kinds of stuff. But there are just too many tools and too many libraries to know, and you never use the same ones on consecutive projects; that's just reality. What I've found is that telling an LLM to do X using some tool I've never used before, then examining the output (including asking the LLM for explanations and checking them against the docs) until I understand it, is at least an order of magnitude faster than learning the tool myself. I have no doubt that an expert in that tool would end up questioning some of the choices, because I only end up exploring the parts of it that the LLM chose to use. But that matters less than the fact that I end up with a working solution, I understand how and why it works, and I can debug it -- all far quicker than I could have learned the tool on my own.
As an example, I'm writing a new crypto library -- not implementing the underlying algorithms, which will actually be executing in secure hardware, just putting a user-friendly API on top and pre-solving a lot of the subtle problems that come up so the users of the API won't have to. Anyway, my implementation is in Rust, for good reasons, but at least some of the clients want C++, so I need to bridge C++ and Rust. After looking up the options and discussing the pros and cons with the LLM, I made a choice (CXX), and told the LLM to write a CXX FFI so the C++ API I wrote can call the very similarly-structured (modulo some C++/Rust differences) Rust API I wrote. The LLM did this in about five minutes, including writing all of the Makefiles to build and link it, and some tests.
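In case you haven't seen CXX: the bridge is just an annotated Rust module from which CXX generates a type-checked C++ header. Here's a minimal sketch of the shape of the thing -- Signer, new_signer, and sign are made-up names for illustration, not my actual API:

```rust
// Sketch of a CXX bridge; hypothetical names, not my real crypto API.
// The cxx crate (https://cxx.rs) generates the C++ header from this
// module, and cxx-build wires it into the build.

use std::fmt;

#[cxx::bridge(namespace = "crypto")]
mod ffi {
    extern "Rust" {
        // Opaque Rust type; C++ only ever sees it behind a Box.
        type Signer;

        fn new_signer(key_id: &str) -> Result<Box<Signer>>;
        fn sign(self: &Signer, message: &[u8]) -> Result<Vec<u8>>;
    }
}

pub struct Signer {
    key_id: String,
}

// CXX requires bridge error types to implement Display; an Err here
// surfaces as a thrown rust::Error on the C++ side.
#[derive(Debug)]
pub struct SignError(String);

impl fmt::Display for SignError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}", self.0)
    }
}

fn new_signer(key_id: &str) -> Result<Box<Signer>, SignError> {
    Ok(Box::new(Signer { key_id: key_id.to_owned() }))
}

impl Signer {
    fn sign(&self, _message: &[u8]) -> Result<Vec<u8>, SignError> {
        // Stand-in body; the real signing happens in secure hardware.
        Err(SignError(format!("no hardware backend for key {}", self.key_id)))
    }
}
```

On the C++ side that comes out as an ordinary header in namespace crypto: &str arrives as rust::Str, &[u8] as rust::Slice, Vec<u8> as rust::Vec<uint8_t>, and the Err arm becomes a thrown rust::Error -- all marshalling you'd otherwise be hand-writing and hand-auditing in a raw extern "C" layer.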
It didn't work, of course. But it wouldn't have worked the first time if I'd written it either. So I reviewed the tests the LLM had written, directed it to improve them, then told it to debug the problem and make them pass. It did so, and explained the bugs and the fixes. While the LLM was working, I read the bridge code it had written and looked things up in the documentation, occasionally asking questions of another LLM instance. Within 20 minutes it was all working. So, I'm 30 minutes into this FFI task and I already have (a) code that works and (b) tests that prove it. I can also see a bunch of things about the bridge code that I don't like. Some of those turn out to be fine once I understand them; most really are bad. Exploring the options (with the LLM's help), tweaking a bit, and fiddling with the tests for another hour gets me to something that appears -- to my decades of programming experience, tempered by limited knowledge of this tool -- to be pretty good.
This is good, because I have more new tools to learn and use -- today. Ninety minutes got me a good-enough-for-now FFI solution (for a pretty large and complex API surface) that's probably not too far from actually being good.
Next up, I need a persistent key/value store with particular performance characteristics, high reliability, a solid track record, a no_std (no standard library) Rust API, and support for QNX 7.1. Turns out there is no such beast, but lmdb is pretty close: it has everything except the no_std Rust API. There are, however, some Rust crates that offer thin wrappers around lmdb's C API, and lmdb-master-sys, part of heed, looks like the best-maintained and most widely-used of these. So I asked the LLM to take a look at what changes might be necessary to make it work as no_std. It identified a tiny set of places where the standard library is used, all of them trivially replaceable. I made the changes while the LLM wrote some unit tests. It worked, first time. I sent a PR to the maintainer of the library. Total time, about 20 minutes. It would have taken me at least three times that long just to figure out how to use the lmdb API to write the tests.
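To give a sense of what "trivially replaceable" means here, this is the usual shape of the job in a thin -sys crate. A sketch of the pattern -- not the actual lmdb-master-sys source, just two representative lmdb bindings:

```rust
// Hypothetical thin -sys crate, sketched to show the no_std-ification
// pattern; not the real lmdb-master-sys code.
#![no_std] // drop the standard library entirely

// The typical fix: std::os::raw::{c_int, c_char, ...} becomes core::ffi,
// which has provided these C type aliases since Rust 1.64.
use core::ffi::{c_char, c_int, c_uint};

// Opaque handle: only ever touched through a pointer.
#[repr(C)]
pub struct MDB_env {
    _private: [u8; 0],
}

extern "C" {
    // Two representative bindings to lmdb's C API.
    pub fn mdb_env_create(env: *mut *mut MDB_env) -> c_int;
    pub fn mdb_env_open(
        env: *mut MDB_env,
        path: *const c_char,
        flags: c_uint,
        mode: c_uint, // mdb_mode_t in C; simplified to c_uint here
    ) -> c_int;
}
```

A declaration-only crate like that needs nothing from std at all; the work is finding the handful of std paths and swapping in their core equivalents, which is exactly the kind of shallow-but-exhaustive search an LLM does in seconds.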
Next up... I'll stop here, but you get the idea. If you need to work with a lot of tools you don't know well -- and at least for me the speed at which I need to jump between tools pretty much guarantees that I'll never know any of them really well -- but you have enough experience and deep-enough expertise to quickly see what an implementation is doing and to understand why and how, LLMs will massively accelerate your work.
They also give you time to post on slashdot, while you wait for the LLM to do stuff. Er, I mean they give you time to catch up on email, do code reviews, watch company training videos, read documentation, etc.
Others' mileage will vary, of course, but I find that using an AI tool significantly increases my velocity (probably 1.5X overall) while simultaneously improving the quality of my output. The quality increase isn't because the LLM is better than me. It's definitely not. But it's way, way faster, especially at the grungy work that I tend not to do as thoroughly as I should -- writing really thorough commit messages, for example. I totally delegate commit message writing to the LLM now. I review and sometimes tweak, but not often.
And its speed makes some things possible that otherwise weren't. For example, often I'll see some aspect of my code where a large refactor could make it 10% better, and I have to weigh the benefit of the small improvement against the time sink of the large refactor. No longer. I tell the LLM to do it. Sometimes I tell three instances of the LLM to do three different things (in different checkouts of the code), then decide which, if any, I want to keep (after significant tightening and improvement, some manual, some by giving detailed directions to the LLM).
The result is that while I might do one in ten of those 10% improvements without an LLM, netting an overall 10% improvement, I'll probably do half of them with the LLM (the other half I'll realize weren't actually good ideas, for reasons that weren't obvious until making the attempt -- or watching the LLM make the attempt). Net improvement: five compounding 10% improvements, or about 60% (1.1^5 ≈ 1.6).
And as for debugging... wow. Claude is seriously good at debugging. It doesn't always get it right the first time, but between the quality of its hypotheses and the speed at which it can examine the situation, form a hypothesis, try to invalidate it, and move on to the next one, it may be two orders of magnitude faster than me. It's especially good if you give it a stack trace to parse. Repeatedly it's found the root cause of fairly deep, grungy bugs in less than five seconds, including the time it took to generate a detailed and precisely correct explanation of the problem. It then takes me a few minutes to parse and understand the explanation, then validate it against the code and (if necessary) the relevant documentation. Claude isn't always right in its analyses, of course. But it's very good.
Anyway, for me LLM assistance for development significantly improves both my productivity and the quality of my output. YMMV.