Comment Re:hmmm (Score 1) 52

Christ. Only in 2026 would I have to do this.

1) We weren't talking about a popularity contest. Though I do imagine Claude Code definitely had the most mentions by respondents to the, uhhh /me glances at notes, "pragmaticengineer.com survey". OpenClaw has more GitHub stars than software with an installed base tens of millions larger than its own.
2) After this comment, I'm not sure you should ever reply to anything technical ever again.

If there's anything that can be learned here, it's that people like you love having opinions. Even when shit like "My sense of it is that this code is not the LLM itself, it is the infrastructure and interface layer between the user and the LLM." has escaped your fingers on a public forum. Did an LLM write that? Be honest.

Claude Code and OpenCode have near feature-parity. I don't think anything in Claude Code's codebase is going to help it win your GitHub stars. Maybe more flashy lights and buzzwords might, though.

Comment Re:hmmm (Score 1) 52

Context engineering for modern agents. The distinction is exactly what you'd imagine: in context engineering, you're dynamically altering the entire context window instead of just the prompts appended to it.
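A minimal sketch of that idea, assuming nothing about Claude Code's actual internals (the `ContextPart` shape, the priority scheme, and the character budget are all made up for illustration): each turn, the harness rebuilds the whole window from prioritized pieces under a budget, rather than just appending a prompt.

```typescript
// Illustrative context-assembly sketch: rebuild the window every turn
// from prioritized parts, dropping whatever no longer fits the budget.
interface ContextPart {
  label: string;
  text: string;
  priority: number; // higher survives longer as the budget shrinks
}

function assembleContext(parts: ContextPart[], charBudget: number): string {
  const chosen: string[] = [];
  let used = 0;
  // Highest-priority parts first; skip anything that would bust the budget.
  for (const p of [...parts].sort((a, b) => b.priority - a.priority)) {
    if (used + p.text.length <= charBudget) {
      chosen.push(`## ${p.label}\n${p.text}`);
      used += p.text.length;
    }
  }
  return chosen.join("\n\n");
}
```

The point is that the system prompt, memory files, tool results, and chat history are all just parts competing for the same window, re-decided every turn.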

As for "of value", perhaps the system and agent prompts. That's a pretty fast-moving target, though. And since Anthropic has locked Opus access for external tools behind API keys, those are really just for gleaning ideas from.

Comment Re:TypeScript? (Score 1) 52

Incorrect.

There is nothing in it that should demand high performance, I agree.
But Claude Code is never waiting on your input. It's waiting on its frame timer to expire so it can render the next frame of its interface. Because decisions... were made... by people of questionable wisdom.

It's a real time React renderer that runs at 60fps.
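For anyone wondering what a frame timer even means in a terminal app, here's a back-of-the-envelope sketch. The 60fps figure is the claim above; everything else (names, the `setInterval` shape in the comment) is illustrative, not Claude Code's actual code.

```typescript
// Fixed-timestep frame accounting, as a 60fps TUI render loop would use.
const FPS = 60;

// Given elapsed wall-clock milliseconds, how many frames are due?
function framesDue(elapsedMs: number): number {
  return Math.floor((elapsedMs * FPS) / 1000);
}

// A real loop would look roughly like:
//   setInterval(() => rerender(appState), 1000 / FPS);
// which is why the process is busy ticking even while "waiting" for input.
```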

Comment Re:Sloppary (Score 2, Insightful) 52

One detail in that list is relevant to the code dump.

The rest, and ChatGPT's opinions on them, are just padding your word count:
"This is the brain of the operation." "leveraging its dead code elimination for feature flags and its faster startup times." "there's a command system as rich as any IDE."

And of course, they're also simply not fact-based in the slightest.

The Tool System (~40 tools): Claude Code uses a plugin-like tool architecture.

All modern agents do. My home-rolled Perl ones do, too.
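The pattern in question, sketched minimally (the names here are invented, not any framework's real API): a registry maps tool names to handlers, and the agent loop dispatches whatever tool call the model emits.

```typescript
// Minimal plugin-style tool registry, the "agents 101" version.
type Tool = {
  name: string;
  description: string; // shown to the model so it knows the tool exists
  run: (input: string) => string;
};

class ToolRegistry {
  private tools = new Map<string, Tool>();

  register(tool: Tool): void {
    this.tools.set(tool.name, tool);
  }

  // The agent dispatches a model-requested tool call by name.
  dispatch(name: string, input: string): string {
    const tool = this.tools.get(name);
    if (!tool) throw new Error(`unknown tool: ${name}`);
    return tool.run(input);
  }

  list(): string[] {
    return [...this.tools.keys()];
  }
}
```

Swap the `Map` for a directory of plugin files and you have every agent framework's tool system, home-rolled Perl included.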

Multi-Agent Orchestration: Claude Code can spawn sub-agents (they call them "swarms") to handle complex, parallelizable tasks.

Yes. This is a core feature of modern agents.

IDE Bridge System: A bidirectional communication layer connects IDE extensions (VS Code, JetBrains) to the CLI via JWT-authenticated channels. This is how the "Claude in your editor" experience works.

If you use Claude Code, and you use an IDE, or you link any agent to your IDE, this is how it works. How is this an insight garnered by the code dump?

Persistent Memory System: A file-based memory directory where Claude stores context about you, your project, and your preferences across sessions.

Again, agents 101, here.

Bun over Node:

If they had chosen Node over Bun, now that would be newsworthy.

React for CLI

Actually cool information. Already known (they've discussed it), but still informative for those who haven't kept up.

Zod v4 for validation

Ya, Zod is what you use for schema validation in TS.
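Zod itself isn't pulled in here (keeping the sketch dependency-free), but this hand-rolled parse-or-throw check mimics the same pattern Zod's `schema.parse()` gives you: untrusted input in, typed value or exception out. The `ToolCall` shape is an invented example, not Claude Code's schema.

```typescript
// Hand-rolled stand-in for Zod-style schema validation:
// validate unknown input, return a typed value or throw.
interface ToolCall {
  name: string;
  args: Record<string, unknown>;
}

function parseToolCall(value: unknown): ToolCall {
  if (typeof value !== "object" || value === null) {
    throw new Error("tool call must be an object");
  }
  const v = value as Record<string, unknown>;
  if (typeof v.name !== "string") {
    throw new Error("name must be a string");
  }
  if (typeof v.args !== "object" || v.args === null) {
    throw new Error("args must be an object");
  }
  return { name: v.name, args: v.args as Record<string, unknown> };
}
```

That's the whole value proposition: everything a model sends back gets funneled through a schema before the rest of the program trusts it.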

~50 slash commands

Again, anyone who has used the tool knows this. How is this an insight of the code dump?

Lazy-loaded modules

Holy fuck, welcome to 2001.

The LLM summary can almost be moderated -1 Off Topic.
It has nothing to do with the code dump.
Instead, it could have summarized some of the cool shit people have figured out, like its built-in Tamagotchi-like feature.

Comment Re:hmmm (Score 2) 52

As a non-programmer and non-expert in AI, how bad is this for Anthropic?

Not at all bad. Their competitors, such as Codex, are already open source; Anthropic is the odd man out being closed. It's just client-side "prompt engineering" and IDE integration stuff, clickbait headlines notwithstanding.

Nothing of real value has been disclosed. It's interesting, but that's about all.

Comment Re:In a Word (Score -1, Troll) 68

LOL. This is great. What you may be missing here is that global warming alarmists are obligated to shout down any suggestion that human development has led to any upward skew in historical temperature data through localized effects, as opposed to global atmospheric effects. Now, we have someone pointing fingers at data centers by measuring temperatures near these developments and reporting increases. "The Cat" has instinctually surmised the crisis.

What's an interweb Planet Saver to do??!!

Comment Re: Reviewing code is more effort than writing cod (Score 1) 77

The code they write is an absolute shit-show for a number of reasons.
You can get work done, if you don't mind the glaring fucking inefficiency of it: time wasted trying to coax it into doing what you want, how you want it; more spaghetti-at-the-wall write-test cycles than you'd expect from a first-year programmer; glaring logic errors that you need to correct (which, to be fair, usually come from a lack of fleshed-out instruction on your part... but still, if my instructions are larger than the code, what prize have I won?).
That said, I do find that they're quite alright at having a chunk of code stuffed into their context and coming up with unit tests for everything they notice within it. Sometimes they even surprise me and come up with tests for things I didn't think of.

Comment Re:Manus (Score 1) 33

The ones I have been in don't talk anything like that. And I've been in many.

Not that many apparently.
They talk like that in the board room, they talk like that when it's two CEOs out for a drink (and you got dragged along, since you're the Chief Engineer), and they talk that way when they're just shooting the shit.
Hanging out with groups of executives in Vegas during conventions leads me to want to fucking kill myself. It's not human conversation. It's weird cosplaying.

Different scopes involve different vocabularies: boards with a military bent have one set of recurring terms; technology-focused boards another; marketing yet another, along with fiduciary-heavy boards. Some of the groups I have been in have significant overlap.

Board of directors. You're crossing boards and groups, and it has confused you.

Once you have been in a field, you end up getting used to the terms used, and they are logical.

Bullshit.

"Manus is the action engine that goes beyond answers to execute tasks, automate workflows, and extend your human reach." Now that is bullshit. And if someone said that in a board I'm on, I'd tell them it was bullshit.

And if you said that to the person who said it on the board of directors that I sit on, that would be the last thing you ever said in it, and subsequently, in that position.

What boards have you served on to gain that unassailable knowledge?

Board of Directors for a medium sized LLC, and smaller LLCs that we acquired before dissolving.

Comment Re:Not the problem (Score 1) 77

Some models have been overly sycophantic; however, that's the exception (and a gross failure in fine-tuning), not the norm.
ChatGPT 5.2:

Hey, AI, I think the world is flat and rests on the back of an infinite stack of turtles

...
Quick, checkable evidence the Earth isn’t flat
...
Why “infinite turtles” doesn’t work as a physical model
...

That being said- I do agree with the final point: If you're one of those people who has a serious inferiority complex, or some kind of gross insecurity, you're going to swallow up affirmation when models produce it.
But a lot of work goes into trying to make sure they don't.

Comment Re:Not unique to AI (Score 1) 77

you can't trust an AI to truly remember anything you tried to "teach" it if it even got a look at your fixes of their crappy code, because even if it did, the next version of the bot's engine may need to be retrained from scratch as it "forgot" almost everything.

Completely incorrect.
An LLM remembers nothing that doesn't fit into its context.
To that end, we have standardized files that are pumped into the context as a form of "long-term guidance/memory". The engine has nothing to do with this.
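Concretely, those standardized files amount to nothing deeper than this. The file naming (a project-level guidance file like CLAUDE.md) is the convention; the function itself is an illustrative sketch, not any tool's real code.

```typescript
// Sketch of file-based "memory": prepend a project guidance file
// to every prompt so the model re-reads it each session.
function buildPrompt(memory: string, userMessage: string): string {
  const sections: string[] = [];
  if (memory.trim().length > 0) {
    sections.push("# Project guidance (loaded every session)\n" + memory.trim());
  }
  sections.push("# User request\n" + userMessage);
  return sections.join("\n\n");
}
```

Nothing is "remembered" by the weights; the file just gets stuffed back into the context window every time, which is why retraining the engine doesn't lose it.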

Plus, it is REALLY hard to get AI to understand general code design philosophies like "3 strikes and you refactor" - it is designed to regurgitate first, not solve problems by increasing the use of shared code.

Also completely incorrect.
It'll do as you ask. If you ask it to refactor after some threshold of failed attempts at getting a test to pass with an implementation, it will.

I look at some AI results and all I see is tech debt that will eventually kill the product but never get fixed because nobody quite understands the original task it was trying to do when it just did 'copy and mod'.

Tech debt in LLM output is real, and yes- precisely because nobody gives a fuck what it's producing, and thus doesn't really understand it.
However, generative models are not "copying and modding".

Comment Re:They will have to or will go bankrupt (Score 1, Troll) 52

This. This is a legal earthquake; existing law firms will pivot and new law firms will be created to dive into Big Tech social media settlement money. Plaintiffs will be groomed, "expert" witnesses will be retained for years, judges will get cushy property deals and no-show non-profit jobs for their clans... the whole shebang is spinning up right now.

And "changes" will only mitigate (not preclude) future cases. This is all unprovable mental-health stuff, and "harm" can be attributed to anyone who has seen a post since Facebook et al. were founded.
