That's debatable. There are improvements, but fewer "Wow!" effects than before. On the other hand, there are quite a few interesting developments in new models and methods that may produce nice results in 2026.
LLMs are sold as being able to do many things. The one I am concerned with is coding, one of their top use cases. Everyone says they can code. I run simple prompts in Claude and the output compiles slightly more than half the time. It fully works maybe 10% of the time? It's fun to play with, but it doesn't do anything like what Sam Altman or the folks at Anthropic, MS, or Google claim their products can. The improvements aren't tangible. I'm sure there's some improvement, but when it can't make code compile, despite having full access to the classpath, that's a huge issue. Claude regularly hallucinates methods that don't exist and never existed; it just kinda guesses at what others would have named a method in the same class. That's a fundamental issue with LLMs. They only guess...better than one would expect, but a guessing machine is pointless for most work you'd pay someone to do.
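To make that failure mode concrete, here's a minimal hypothetical sketch in Java (the names are invented for illustration; this isn't an actual Claude transcript). The guessed method fits the language's naming conventions perfectly, which is exactly why it looks plausible and exactly why it fails to compile:

```java
import java.util.List;

public class HallucinationDemo {
    public static void main(String[] args) {
        List<String> names = List.of("Ada", "Grace");

        // A plausible-but-wrong guess an LLM might emit, because the name
        // "fits" Java conventions even though no such method exists on List:
        //     String first = names.getFirstElement();  // does not compile
        //
        // The method that actually exists (on List since Java 21):
        String first = names.getFirst();
        System.out.println(first);
    }
}
```

A statistically likely name and the actual name are two different things, and only the compiler knows the difference.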
...and coding in a compiled language is far easier than writing prose or all the other things people say LLMs can do.
My frustration is that we've been hearing for nearly 5 years about these amazing tools that can write code and will put people like me out of work, with CEOs publicly stating they're laying people off due to AI efficiency...when it's clearly fraud for many reasons. If it actually worked as promised, the world would be a different place. Instead, little has changed, and now the costs of GPUs, SSDs, RAM, and electricity are going up to subsidize this Ponzi scheme...and no one is charging these CEOs with defrauding investors.