You’re attacking my credentials and character instead of addressing the engineering substance. Calling someone a “joker” isn’t an argument; it’s *ad hominem*. You silly goose! ;-) You seem smart, so I have to assume you already know what ad hominem is, yet you chose to use it anyway. Make of that what you will.
I’m not going to disclose my employer to satisfy an anonymous commenter, but my claims don’t depend on who I am. They depend on whether the workflow works. The practices I described — iterative prompting, clear specifications, decomposition into smaller tasks, mandatory test coverage, and rejection of outputs that fail objective criteria — are standard engineering control mechanisms. They work regardless of the logo on someone’s paycheck.
When I said we “force” the AI to do things, I was referring to constraint and validation, not literal coercion. We define acceptance criteria. We require meaningful tests. We reject outputs that don’t meet them. That’s the same way we “force” a compiler to produce correct binaries — by defining rules and refusing invalid results.
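To make that concrete, here’s a minimal sketch of the kind of gate I mean, assuming pytest and pytest-cov are installed. The package name and the 90% coverage floor are placeholders I invented for this comment; in practice this lives in CI config, but the logic is identical:

```python
import subprocess
import sys

def accept_patch(package: str, coverage_floor: int = 90) -> bool:
    """Acceptance gate for AI-generated changes: reject unless the full
    test suite passes and coverage stays above the floor.
    The package name and threshold are illustrative, not from a real repo."""
    result = subprocess.run(
        ["pytest", f"--cov={package}", f"--cov-fail-under={coverage_floor}"],
        capture_output=True,
        text=True,
    )
    # pytest exits non-zero on any test failure, and pytest-cov makes it
    # exit non-zero when total coverage drops below the floor, so one
    # exit code enforces both acceptance criteria.
    return result.returncode == 0

if __name__ == "__main__":
    if not accept_patch("mypackage"):
        sys.exit("Rejected: output fails the acceptance criteria.")
```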
On unit tests: obviously no serious engineer believes in a trivial 1:1 mapping between lines and tests. The point is comprehensive behavioral coverage. Modern LLMs are unusually good at generating edge-case tests because they don’t get bored. The human’s job is to verify that those tests are meaningful and not tautological.
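A toy illustration of the difference, with a function I made up for this comment:

```python
# Toy example: a function the AI wrote, plus two tests for it.
def clamp(value: float, low: float, high: float) -> float:
    return max(low, min(value, high))

# Tautological: restates the implementation, so it passes even if the
# spec was misunderstood. It verifies nothing about intent.
def test_clamp_tautological():
    assert clamp(5, 0, 10) == max(0, min(5, 10))

# Behavioral: pins down the contract at the edges, where bugs live.
def test_clamp_behavioral():
    assert clamp(-1, 0, 10) == 0    # below range -> low bound
    assert clamp(11, 0, 10) == 10   # above range -> high bound
    assert clamp(0, 0, 10) == 0     # low boundary is inclusive
    assert clamp(10, 0, 10) == 10   # high boundary is inclusive
```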
Your claim that an AI which produces mediocre code can’t be prompted into producing better code is contradicted by everyday practice. Output quality improves with clearer specifications, iterative refinement, task decomposition, and automated verification; that’s true for humans, compilers, and AI systems alike. This is not rhetoric. It’s workflow.
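Sketched as a loop, under the assumption that `generate()` wraps whatever model API you use; it’s a hypothetical stand-in, as are the attempt cap and the file name:

```python
import subprocess
from pathlib import Path

def generate(spec: str, feedback: str = "") -> str:
    """Hypothetical stand-in for a model call that returns candidate code.
    A real version would call an LLM API with the spec plus test feedback."""
    raise NotImplementedError

def refine(spec: str, max_attempts: int = 5) -> str | None:
    feedback = ""
    for _ in range(max_attempts):
        candidate = generate(spec, feedback)
        # Assumes the repo's test suite exercises candidate.py.
        Path("candidate.py").write_text(candidate)
        check = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if check.returncode == 0:
            return candidate       # met the objective criteria
        feedback = check.stdout    # test failures feed the next prompt
    return None                    # reject: never cleared the bar
```

The loop is the whole point: failures aren’t the end state, they’re the input to the next iteration.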
You also brought up my prior concerns about superintelligence risk as if that’s some kind of knockdown contradiction. It isn’t. Believing that future, more powerful systems may pose existential risks is entirely compatible with recognizing that current systems are useful tools.
Many of the most accomplished AI researchers *in the world* hold both views simultaneously. For example, the signatories of that same petition included:
Geoffrey Hinton — Nobel Prize in Physics (2024), Turing Award (2018), pioneer of deep learning.
Yoshua Bengio — Turing Award winner, co-architect of modern deep learning.
Stuart Russell — UC Berkeley professor and co-author of the standard AI textbook used worldwide.
Demis Hassabis — Co-founder of DeepMind, led AlphaGo and AlphaFold.
Ilya Sutskever — Co-creator of AlexNet and former Chief Scientist of OpenAI.
Eliezer Yudkowsky — Co-founder of the Machine Intelligence Research Institute and one of the earliest public advocates for AI alignment and superintelligence risk analysis.
These are not fringe commentators. These are central figures in the field.
The presence of uncredentialed people on a public statement does not dilute the weight of credentialed signatories. The strength of an argument does not depend on the weakest person who agrees with it. It depends on evidence and expertise.
Opposing certain directions of research does not make existing tools ineffective. That would be like arguing that concerns about nuclear weapons mean nuclear power plants can’t generate electricity.
If you want to argue against the usefulness of these systems, the appropriate approach is to engage with measurable outcomes: productivity metrics, defect rates, test coverage, iteration speed. Dismissing them by attacking identities or word choice isn’t a technical critique.
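And to be explicit about what “measurable” means, this is the shape of comparison I’d accept. The numbers below are invented purely for illustration, not results I’m claiming:

```python
# Invented numbers, purely illustrative of which metrics to compare.
sprints = {
    "before": {"defects": 4, "kloc": 2.1, "prs": 18, "days": 10},
    "after":  {"defects": 3, "kloc": 3.4, "prs": 29, "days": 10},
}

for label, s in sprints.items():
    defect_rate = s["defects"] / s["kloc"]   # defects per KLOC changed
    throughput = s["prs"] / s["days"]        # merged PRs per working day
    print(f"{label}: {defect_rate:.2f} defects/KLOC, {throughput:.1f} PRs/day")
```

If the defect rate rises or throughput falls after adoption, that’s a real critique. Anything else is vibes.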
The tool amplifies the operator. In the hands of a careless engineer, it can amplify mistakes. In the hands of a careful one, it increases leverage. That’s the real discussion.