
Comment Re:What I find amusing is... (Score 1) 38

If you ask Claude about any of these features, it will deny that they exist.

It makes you wonder. Were they removed from the models that are currently running, or was Claude taught to not disclose their existence?

"Claude Code" is just a piece of Node.js software that talks to one of the "claude" LLMs (e.g. "Opus") in the cloud. The LLM model running in the cloud of course doesn't know anything about the proprietary client software you are running, because it wasn't trained on it.

It's not about denial; it's just that the LLM isn't trained on the closed-source code of its client any more than it's trained on the Windows source code. That code isn't publicly available, so it was never a reference for the model. All it knows is what's in the publicly available manual. You can test this by asking Opus about Claude Code features: you'll see it fire off a bunch of web-fetch requests for the Claude Code manual .md files.

Comment Re:You sure about that? (Score 1) 125

Computer 1 interprets the program and generates the documentation, saving it to a USB drive.
You unplug the USB drive and move it over to Computer 2.
Computer 2 reads the documentation and generates a new code base.
You can read the documentation and there was no other means of communication.

If you don't think a repeatable process is sufficient "proof" then you aren't being realistic and that's a problem with you, not the law.

Except computers 1 and 2 are both probably running an LLM that has been exposed to the source code, and so are tainted, unless you train your own model that you know has never seen the source, an endeavor that costs tens of millions of dollars.

Comment Re:Owner must prove its a derivative (Score 1) 125

The above is subject to misinterpretation. The copyright owner must demonstrate it's a derivative and win in court. The owner must prove guilt; the publisher does not need to prove innocence.

In a civil case you don't need to prove "guilt", just that it is more likely than not that they looked at your source.

This is why, for example, when there is a major leak or hack against a video game console, emulator developers won't let anyone who has seen the leak work on the project. It exposes them to the accusation that their work is derived from proprietary IP. They know that console manufacturers are itching to sue them anyway, and a non-cleanroom implementation gives them the excuse.

I think it would be pretty easy to argue that just about any open source project that lives on a notable hosting platform has been sucked up for LLM training at this point. For that reason, any competitive proprietary project coded with an LLM can be credibly accused of being a derivative work, as the preponderance of circumstantial evidence would point to the LLM being "tainted" by the OSS project, unless you could demonstrate that it was excluded from the training data.

Comment Re:Not tested in court... (Score 1) 125

We don't allow that with human brains: that's why clean-room implementations are a thing. Why should it be any different for LLMs, which are, if anything, less transformative than human cognition? If the model was trained on the data, then I don't think anything spat out by that model can be considered a "clean" implementation, for the same reason you don't let software engineers who have seen your competitor's source code work on your clean-room clone of their product.

Comment Re: Can AI clone lawyers & judges? (Score 1) 125

The coder is trained, but without any copyleft code.

It costs tens of millions of dollars to train a large, competent LLM. GPT-4 cost ~$74M to train, for example. You can hire a team of human devs who have never looked at the source to do a clean-room rewrite of the project for a fraction of the cost it would take to develop a "clean" model.

That said, I could see a use for a model that was only trained on MIT-licensed or public domain code.

Comment Re:hohoho (Score 1) 69

No, they're arguing there's ways to use their software to commit an illegal act, which is true of literally anything.

I can't imagine anyone making the argument that using AI tools to rewrite code in another language removes the copyright.

It had better remove copyright, since every AI coding agent is a huge engine for regurgitating code harvested from GitHub and Stack Overflow without attribution or respect for the original licenses. If this use doesn't elide copyright, then the LLMs themselves have no right to exist.

Comment Re:TypeScript? (Score 1) 65

Cause yeah, popular is how you should always make arch decisions. And also WTH would anyone use Python for a serious system that requires performance?

Claude Code is basically just a fancy network client for the cloud-hosted LLM. Nothing in it demands high performance. It just needs to sandbox the shell environment, send off prompts, collect user input, and carry out intents returned from the LLM. 99.9% of the time the CLI app is idle: waiting for user input, waiting for LLM network I/O, or waiting for a CLI tool invocation to return.

The more important requirements are that it be cross-platform and easy to iterate on, since this is a rapidly evolving space.
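That idle-loop structure can be sketched in a few lines. This is a hypothetical illustration, not the actual (closed-source) Claude Code implementation; `call_llm`, `agent_loop`, and the intent format are all invented names. The point is that every step blocks on the user, the network, or a subprocess, so language performance barely matters.

```python
# Minimal sketch of an agent-style CLI loop (all names hypothetical;
# not the actual Claude Code implementation, which is closed source).
import subprocess

def call_llm(prompt: str) -> dict:
    """Stand-in for a blocking network call to a cloud-hosted LLM."""
    # A real client would POST to an HTTPS API and wait on the response.
    return {"action": "run", "command": ["echo", prompt]}

def agent_loop(get_input, send=call_llm, turns=1):
    transcript = []
    for _ in range(turns):
        prompt = get_input()      # idle: waiting on the user
        intent = send(prompt)     # idle: waiting on network I/O
        if intent["action"] == "run":
            # idle: waiting on the tool subprocess to return
            out = subprocess.run(intent["command"],
                                 capture_output=True, text=True).stdout
            transcript.append(out.strip())
    return transcript
```

In a loop like this, the client process is almost never CPU-bound, which is why Node.js (or Python, or anything else portable) is a perfectly reasonable choice.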

Comment Re:Glad I don't smoke (Score 1) 103

The impression I get from them is that charging at a remote station is fairly rare, and so convenience is just not that big a deal.

Right, which is why charging networks can get away with bullshit like requiring an app to charge. Frequent users only get annoyed once and then just use the app; the rare road tripper has no choice. If charging stations were more ubiquitous, then we could give them the finger and use one that took credit cards, but they aren't, so we can't. Also, the largest player in the US, Tesla, ran what was until recently a proprietary network that only worked with Tesla cars, so you HAVE to use an app at most of their locations, and they've unfortunately normalized this shit for the rest of the industry.

Comment Re:It is going to happen so propose a useful solut (Score 1) 193

Browsers can be installed by a user. Nothing stopping your kid from downloading his own copy and configuring it as he sees fit. Put that function in the kernel and it's more difficult (but not impossible) to circumvent.

So download a browser that lies about the kernel flag? The remote service can't interface with the kernel directly; it's counting on the browser to accurately report the state of the flag.

The only way you could trust the flag state as reported by the client is if it were signed by a trusted third party, at which point it's irrelevant whether it's a kernel feature or not. All you need is for the browser to pass along the signed bolus of data, e.g. a PKCS#7/CMS signed blob or similar.
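The "pass along a signed blob" point can be demonstrated with a toy sketch. This is not any real age-verification protocol; HMAC stands in for a proper signature scheme, and every name and key below is invented. The client relaying the blob adds nothing: only the verifier's signature makes the claim trustworthy, and tampering with the payload breaks it, kernel flag or not.

```python
# Toy illustration: a remote service can only trust an age attestation
# signed by a party it already trusts, regardless of any client-side
# kernel flag. HMAC is a stand-in for a real signature; all names are
# hypothetical.
import hmac, hashlib, json

VERIFIER_KEY = b"shared-secret-of-trusted-age-verifier"  # invented

def issue_attestation(user_id: str, over_18: bool) -> dict:
    """What a trusted third-party verifier would hand the client."""
    payload = json.dumps({"user": user_id, "over_18": over_18})
    tag = hmac.new(VERIFIER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": tag}

def service_accepts(blob: dict) -> bool:
    """The remote service checks the signature, not the client."""
    expect = hmac.new(VERIFIER_KEY, blob["payload"].encode(),
                      hashlib.sha256).hexdigest()
    return hmac.compare_digest(expect, blob["sig"])
```

If the client flips `over_18` in the payload, `service_accepts` fails, which is exactly why an unsigned OS flag proves nothing.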

Comment Re:Glad I don't smoke (Score 1) 103

If enough people complain, I'm sure charging makers would include credit card readers.

I don't see why. You are a captive consumer and the demand is inelastic. If you need to charge now, and the nearest DC fast charger is 30 miles away, you are going to install the app. The company doesn't need to be responsive to your complaints.

Comment Re:It is going to happen so propose a useful solut (Score 1) 193

Governments are starting to require people to verify their ages with an actual picture ID, either directly or via a trusted third-party verifier. How does a flag sent by the OS, which the user sets to whatever they want, satisfy that requirement?

Comment Re: Business opportunity! (Score 1) 183

It's not a question of a default admin password anymore. Many people never update their router firmware to patch security holes, or run old, out-of-support routers that don't receive updates at all. The risk is more often that someone is running a router with remotely exploitable vulnerabilities.

Fortunately those people are increasingly on ISP-managed equipment now, and the more tech-savvy folks throw their ISP router in the trash and run OpenWrt, pfSense/OPNsense, or similar.

Comment Re:Which ones aren't made in China? (Score 1) 183

Also, the Chinese government has no real interest in me, and far less ability to lay hands on me than my own government. I don't want to be spied on by anyone, but if my choices are TP-Link/CCP vs Cisco/NSA, I'll take China, thanks. Anything sensitive is end-to-end encrypted anyway, so sure, they can do some traffic analysis, but I'm not too worried about my financial data or anything. They already got my life's story when they broke into OPM back in '15 and stole my SF86 anyway.
