Security isn't something you bolt on after the fact, but a new tool can significantly upend what you need to secure against. I suspect that's the real issue here.
My experience with Opus is that it's shockingly capable of tearing apart software binaries. I drop a path to a binary in Claude Code and ask it to tell me how a feature works, and it will usually give me a complete breakdown of classes and functions and how they work together. The binary loader information, symbol data, assembly, etc. are all just another language to Claude, so it really doesn't care. It's not hard to imagine that a model trained on and geared toward binary data could seriously undermine how "secrets" are hidden in software. "Hey Claude, could you give me the API keys and explain how transactions are signed for this app?"
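To see why that's plausible, you don't even need a model: a dumb script that pulls printable strings out of a binary and flags key-shaped ones already finds plenty. A rough sketch (the patterns are illustrative, not exhaustive):

```python
import re
import sys

# Pull printable ASCII runs out of a binary (what `strings` does), then
# flag anything shaped like an embedded credential. Patterns are
# illustrative; real scanners use many more, plus entropy checks.
KEY_PATTERNS = [
    re.compile(rb"AKIA[0-9A-Z]{16}"),          # AWS access key ID shape
    re.compile(rb"sk-[A-Za-z0-9]{20,}"),       # common "secret key" prefix
    re.compile(rb"[A-Za-z0-9+/]{40,}={0,2}"),  # long base64-ish blob
]

def printable_runs(data: bytes, min_len: int = 8):
    # Runs of printable ASCII at least min_len bytes long.
    return re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)

def scan(path: str) -> None:
    with open(path, "rb") as f:
        data = f.read()
    for run in printable_runs(data):
        for pat in KEY_PATTERNS:
            if pat.search(run):
                print(run.decode("ascii", "replace"))
                break

if __name__ == "__main__":
    scan(sys.argv[1])  # e.g. python scan.py ./some_app_binary
```

The model's advantage isn't finding the string; it's cross-referencing the hit against the disassembly and telling you how the key is actually used, which is the part that used to take real effort.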
As far as I've seen, the AI fanatic's answer is "don't care about the code".
I'm not an AI fanatic; I work for a major tech company and have been forced into being "AI Native" to keep my job. I *like* writing the code, and often disagree with how CC does it, but "don't care about the code" is pretty much right and not a fanatic's point of view.
All of these arguments already happened when "high-level" languages like C first appeared. "How can I trust the assembly produced by the compiler?" "I can do register optimization better!" "If I don't practice managing operands on the fpu stack I'll lose that skill!"
English is just the next high-level programming language. If you don't like the code being produced, write a skill or update your personal context.md to explain why. Ask CC to do code reviews that catch and fix the bad patterns.
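For example, a few lines in a hypothetical context.md go a long way (the filename and conventions depend on your setup; Claude Code reads CLAUDE.md):

```markdown
## Code preferences

- Prefer early returns over nested conditionals.
- Never add a dependency without asking first.
- Wrap external calls in error handling; don't swallow exceptions silently.
- Every bug fix comes with a regression test named after the failing behavior.
```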
Maybe not today, but ultimately saying "I need to always read every line" is going to turn out like trying to verify the assembly produced by compilers.
I'm not here to hype AI. After decades wielding terminals and IDEs, I'm being forced to use it. There is still code I'd rather write myself but don't, because it would hurt the AI-use metrics that count toward my performance. This is what it's like at a major tech company in 2026. But the picture has changed: AI can search through our codebase and find real bugs. Subtle ones.
Posts like this are unhelpful because they paint a picture of limitations that really aren't there anymore. If you're not getting the same results, frankly, you're using it wrong. My guess would be that you've tried one-shotting a bug hunt with no real context.
Instead of comparing AI's output to compiler warnings, ask CC (or whatever you like) to build with warnings as errors and fix the build. Then ask it to generate a list of suspected errors, and when the list is trash, tell it which ones you don't care about and why, then ask it to try again. That's one way to start building up context about what you do care about.
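Concretely, the loop might look like this (prompts and flags are illustrative; -Wall -Wextra -Werror is the GCC/Clang spelling of warnings-as-errors):

```text
1. "Enable -Wall -Wextra -Werror (or our toolchain's equivalent) and
   fix the build."
2. "List the most suspicious potential bugs in the codebase, with file
   and line plus a one-sentence reason each."
3. "Ignore the ones about lock ordering in the test harness; we do that
   on purpose because the harness is single-threaded. Try again."
```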
How is AI any less authentic than a game? Both are simulations of reality.
A recording of a singer is a "simulation of reality", but it started out with a real person. Knowing that is part of the recording's power. If you replace the singer with something generated by technology and a listener knows it, it's much harder to create the same emotional connection, even when it "sounds good". People want to feel there's a connection with other people. Why would games be so different?
It's easy to fall behind the state of the art on AI tools. I tried various coding assistants over the past few years and never found them helpful, until last November Claude Code suddenly crossed over to being legitimately useful. I now use it throughout the day, occasionally giving it tasks that could easily go to a junior engineer.
An LLM may not be able to reason, but the combination of an orchestrator, agents and MCP tools certainly can. Not at a human level, but if I can ask it to do something, and it decides whether and how to do it based on the knowledge trained into the model, combined with the current state of the codebase, and then gives a valid reason for the choices, I don't know what else to call that if not reasoning.
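Schematically, the loop is simple; the power comes from the model filling in the decision points. A toy version (llm_propose is a stub standing in for a real model call; no real MCP wiring here):

```python
def llm_propose(history: list[str], tool_names: list[str]) -> dict:
    # Stub: a real orchestrator calls the model with the history and the
    # available tools, and parses its chosen action plus a stated reason.
    return {"tool": "finish", "reason": "stub", "answer": "nothing to do"}

def run_task(goal: str, tools: dict, max_steps: int = 10) -> str:
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        action = llm_propose(history, list(tools))
        history.append(f"REASON: {action['reason']}")  # the stated "why"
        if action["tool"] == "finish":
            return action["answer"]
        # Ground the next decision in actual codebase state, not just weights.
        observation = tools[action["tool"]](**action["args"])
        history.append(f"OBSERVED: {observation}")
    return "step budget exhausted"

# tools might be {"grep": ..., "read_file": ..., "run_tests": ...} exposed
# over MCP; the decide/act/observe cycle is what looks like reasoning.
```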
...Anthropic's CEO sees the purpose of his role to do everything he can to sell the company for many billions, as quickly as possible...
I really doubt this. Anthropic is not some startup. They had $14 billion in revenue in 2025, with worse models than they have now. They're projected to have $70 billion in revenue by 2028, and that seems entirely reasonable for the path they're on. The kind of growth they're having would make them one of the largest tech companies in the world in five years. Why would they sell? And to whom?
"It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence." - W. K. Clifford, British philosopher, "The Ethics of Belief" (1877)