Infinitely less. I would never even have gotten started on the project without the LLM. I don't know Python syntax well enough. I dislike the language, especially the indentation-based block structure, all the more with my visual impairment now, but I hated it before that happened too. Maybe I could learn "Python with braces" some day, but not regular Python. In any case, the only reason I used it is that so many APIs are readily available through Python libraries. It would also take me forever to RTFM.
Truly, I have written a ton of code I never could have dreamed of writing without the help of an LLM. I'm not saying iterating over and over is fun, but the results speak for themselves.
That said, it's all been fairly custom personal projects: code to manage my very large home network with 350 IP devices, integration between pfSense and Unifi APs, command-line tools to migrate data between the two and operate on clients in bulk, in ways the vendor web GUIs just can never do. When I polish it, I will put it on GitHub, as I think these tools may be useful to others too; many home users and small businesses combine a pfSense router with Unifi Wi-Fi APs. I will not work on features for equipment I don't use, such as Unifi gateways, Unifi managed switches, etc., and I will not offer support. But with open source, maybe someone else can fork it if they need to.
The one I did last night with PowerShell monitors the Windows NIC traffic in real time and, if it's above a set threshold, keeps my power-hungry desktop PC from automatically going to sleep. This is so that my LAN backups can complete, the encrypted snapshots can upload to Google Drive, etc., as these programs lack useful power management. Helps with traffic from my Torrents VM too ;) The PS script uses native Windows APIs to monitor the NIC stats and keep the machine from sleeping; this is done through inline C# code making Win32 API calls.
The thing is, I don't know any PowerShell. I also don't know any C#. My main languages are C99, C++98, and x86/x64 assembly. But I was able to get all that stuff to work in languages basically foreign to me, due to excellent testing and debugging skills. The LLM certainly made tons of mistakes, like copying the wrong constants from the Win32 SDK and hardcoding them into the C# embedded in the PowerShell. That caused not only the machine to stay awake with network activity, but also the 3 power-hungry monitors. All because a 0 was supposed to be a 1. The LLM could never find that bug. But it was obvious to me when I read the code that copying and hardcoding values out of an SDK was very error-prone, and I found the bug pretty much right away. It all comes from the fact that constants cannot be exported from the SDK. There is just no proper binding mechanism to do so! No IDL, no include-file equivalent, etc. Only external API calls and structure definitions, but not constants. And wrong constants obviously produce bugs.
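To give an idea of the shape of the thing, here is a rough sketch of the approach, not my actual script: the adapter name, threshold, and polling interval are placeholders, and the constants are exactly the hand-copied kind that caused the bug above. Poll the NIC byte counters, and while throughput is above the threshold, call SetThreadExecutionState to reset the system idle timer.

    # Rough sketch, not the actual script: poll NIC byte counters and, while
    # throughput stays above a threshold, keep the system from sleeping.
    $csharp = '
    using System;
    using System.Runtime.InteropServices;
    public static class PowerKeeper {
        // Hand-copied from winnt.h: P/Invoke only binds functions and structs,
        // so constants like these have to be re-declared by hand (error-prone).
        public const uint ES_CONTINUOUS       = 0x80000000;
        public const uint ES_SYSTEM_REQUIRED  = 0x00000001; // keep the machine awake
        public const uint ES_DISPLAY_REQUIRED = 0x00000002; // would keep the monitors on too
        [DllImport("kernel32.dll")]
        public static extern uint SetThreadExecutionState(uint esFlags);
    }
    '
    Add-Type -TypeDefinition $csharp

    $adapterName          = "Ethernet"  # placeholder
    $thresholdBytesPerSec = 1MB         # placeholder
    $pollSeconds          = 10          # placeholder

    while ($true) {
        $before = Get-NetAdapterStatistics -Name $adapterName
        Start-Sleep -Seconds $pollSeconds
        $after  = Get-NetAdapterStatistics -Name $adapterName

        $bytesBefore = $before.ReceivedBytes + $before.SentBytes
        $bytesAfter  = $after.ReceivedBytes  + $after.SentBytes
        $rate = ($bytesAfter - $bytesBefore) / $pollSeconds

        if ($rate -gt $thresholdBytesPerSec) {
            # Reset the idle timer so the machine stays up while traffic flows.
            $flags = [PowerKeeper]::ES_CONTINUOUS -bor [PowerKeeper]::ES_SYSTEM_REQUIRED
            [void][PowerKeeper]::SetThreadExecutionState($flags)
        } else {
            # Back to normal power management.
            [void][PowerKeeper]::SetThreadExecutionState([PowerKeeper]::ES_CONTINUOUS)
        }
    }

Leaving ES_DISPLAY_REQUIRED out of the flags is the whole point: the machine stays up for the backups and uploads, but the monitors are still allowed to sleep, which is exactly what the wrong hardcoded constant broke.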
My next step was to ask the LLM which languages do have useful Win32 bindings, besides C, C++, and C#. The answer was Rust. So I explored converting the working .ps1 script to .rs. So far I still haven't succeeded in getting it to work, and I have already spent much more time on that than I did writing the original PowerShell and getting it working. Rust seems to be a major pain point for LLMs, so far.
I say LLMs because I used both Gemini and ChatGPT to work on this, alternating between the two when I ran into query limits with one or the other. I would have switched to GitHub Copilot if I had hit both of their limits. Those are the only 3 I have tried for coding so far. I know there are others; I will probably try them some day.
Anyway, the LLMs are extremely useful to me for implementing these personal projects. Since I stopped working for a living, for various reasons, mainly COVID WFH changes I could not handle, psych issues, and vision issues, I have written a lot of code, but none of it collaboratively. Most of it will never become public, but some will. I have a hard time imagining how one would use the LLMs in a more collaborative context. The public ones are not going to respect a company's language style guide, and they are going to swallow the code, train on it, and leak it to others, which is a big no-no for most commercial entities. Companies would need to run their own private LLMs, trained on their local codebases; they would also need training on public internet data to be useful. That's probably what big tech is already doing. Small tech likely could not afford to. In any case, I don't think I will ever work for somebody else again. But the LLMs really do help me prototype small stuff very quickly. They still hallucinate a very large % of the time, in coding and non-coding contexts alike, but the non-hallucinating output remains very useful.