Comment Re: AI is just an untrained novice! (Score 1) 58

It most certainly does.

I cannot speak for whatever platform you're using, but LLM contexts are absolutely self-referential.
It's called In-Context Learning, and it's the very basis of what makes these things useful as a chatbot.

I suspect your issue lies in whatever is managing the LLM's context for you.
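
To make the self-referential part concrete, here's a minimal Python sketch of what any chat front-end is doing under the hood; call_llm() is a made-up stand-in for whatever model API you're actually talking to:

def call_llm(messages):
    # Placeholder: a real client would send `messages` to the model and
    # return the generated reply. Not any specific vendor's API.
    return "(model reply)"

history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_text):
    history.append({"role": "user", "content": user_text})
    reply = call_llm(history)  # the model re-reads every prior turn
    history.append({"role": "assistant", "content": reply})
    return reply

The model only "remembers" earlier turns because they get re-sent on every single call, which is exactly why the thing managing that history matters so much.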

Comment Re:AI is just an untrained novice! (Score 1) 58

lol- that's pretty insane behavior.

Were you using some kind of agent/tool managing the context (I think Claude users generally use Claude Code?) or were you using a direct chat interface?
Errors like that generally smell like compressed or missing context. That happens a lot when there's a divergence between what you think should be in the context and what the application front-ending the LLM actually decides to put in it.
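
As a rough illustration of how that divergence happens, here's a hedged Python sketch of the kind of trimming a front-end might quietly do to stay under a token budget (generic, not Claude Code's actual logic; the 4-characters-per-token estimate is a guess):

def trim_context(history, max_tokens=8000):
    # Keep the most recent messages that fit the budget; older ones are
    # silently dropped, so the model never sees them again.
    kept, used = [], 0
    for msg in reversed(history):
        cost = len(msg["content"]) // 4  # crude token estimate
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

If the tool does something like that (or summarizes old turns) without telling you, the model can confidently "forget" things you're certain you already told it.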

Comment Re:Just shoddy... (Score 1) 58

Codex's sandbox bypass flag: --dangerously-bypass-approvals-and-sandbox
I feel like it's pretty unambiguous.

If the tool this person was using wasn't equally clear, then fuck that tool.
That doesn't make this person any less of a naive fool- but still fuck that tool.

If it is that clear within this tool... well, then I still wouldn't be surprised if he disabled the sandbox and then still posted when his toddler LLM nonconsensually made a man out of him.

Comment Re:Spaces in filenames (Score 3, Insightful) 58

Smarter mitigation:
ubuntu@primary:~$ rm /*
rm: cannot remove '/bin': Permission denied
rm: cannot remove '/boot': Is a directory
rm: cannot remove '/dev': Is a directory
rm: cannot remove '/etc': Is a directory
rm: cannot remove '/home': Is a directory
rm: cannot remove '/lib': Permission denied
rm: cannot remove '/lost+found': Is a directory
rm: cannot remove '/media': Is a directory
rm: cannot remove '/mnt': Is a directory
rm: cannot remove '/opt': Is a directory
rm: cannot remove '/proc': Is a directory
rm: cannot remove '/root': Is a directory
rm: cannot remove '/run': Is a directory
rm: cannot remove '/sbin': Permission denied
rm: cannot remove '/snap': Is a directory
rm: cannot remove '/srv': Is a directory
rm: cannot remove '/sys': Is a directory
rm: cannot remove '/tmp': Is a directory
rm: cannot remove '/usr': Is a directory
rm: cannot remove '/var': Is a directory

Permissions, as it turns out, are a thing.
In Windows, it might not be trivial to prevent the main user of the machine from deleting everything in the root of a file system... I don't know; I'm about 25 years out of date on my Windows knowledge. However, every LLM tool I've worked with has strong sandboxing, because it's well known that LLMs can be fucking psychotic.
If you bypass that sandbox, you've just let a fucking toddler with bizarrely impressive code generation skills have direct authenticated access to your computer. What follows is predictable, and on you.

Comment Re:AI is just an untrained novice! (Score 1) 58

With zero ability to learn or know if they were right or wrong.

This is wrong.

Learning happens in-context. It's easily demonstrable.

Where you are accurate is that the model itself doesn't learn outside of its context- so a new session has unlearned everything.
There are long-term "memory" solutions in play for that, but that's still evolving functionality.
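
For what it's worth, those "memory" solutions are mostly just persistence outside the model, something like this hedged Python sketch (the file name and format are made up for illustration):

import json, os

MEMORY_FILE = "llm_memory.json"  # hypothetical store

def load_memory():
    if os.path.exists(MEMORY_FILE):
        with open(MEMORY_FILE) as f:
            return json.load(f)
    return []

def remember(note):
    notes = load_memory()
    notes.append(note)
    with open(MEMORY_FILE, "w") as f:
        json.dump(notes, f)

def session_preamble():
    # Injected at the top of a fresh context so the "unlearned" model
    # gets its notes back on the next session.
    return "Notes from earlier sessions:\n" + "\n".join(load_memory())

The weights never change; the notes just get stuffed back into the next context.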

Comment Re:Stupid person uses bad tool to do damage.... (Score 2) 58

I'm not familiar with this exact tool- but every tool I *am* familiar with that gives an LLM the ability to make changes on your computer is sandboxed, and requires specific flags to disable the sandbox, a la Codex's --dangerously-bypass-approvals-and-sandbox flag.
The danger isn't being covered up in this instance- it's right in your face. LLMs are not predictable. Do not let them touch your fucking computer without a sandbox.

If you use that flag, I'm afraid I can't blame the tool. It was caveat emptor, and you rolled snake eyes.

However, if this one really is just "--turbo" or some shit, then ya, I'll absolutely give the tool some of the blame.
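
For the curious, the sandbox-plus-approval pattern these tools use boils down to something like this Python sketch (a generic illustration of the idea, not Codex's or anyone else's actual internals):

import subprocess

def run_proposed_command(cmd, bypass_approvals=False):
    # Nothing the LLM proposes runs until a human says yes; the scary
    # bypass flag is what turns this check off.
    if not bypass_approvals:
        answer = input(f"LLM wants to run: {cmd!r}  [y/N] ")
        if answer.strip().lower() != "y":
            return "refused"
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

Flip bypass_approvals to True and the toddler has the keyboard.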

Comment Re:How will NYC enforce this? (Score 0) 41

Why are US lawmakers so bad at their jobs that these kinds of work-arounds are a thing? Don't they get experts to write the laws, and then fire them if they screw up this badly?

I've noticed that a lot of US tech companies don't seem to realize this isn't normal, and when they discover that the same work-arounds don't work in Europe, they get very upset.

Comment Re:The century without a summer (Score 2) 51

To be fair, they don't. They want to slightly reduce the amount of sunlight reaching the surface, so that it offsets the greenhouse effect we have caused. So no famine, just less catastrophic climate change (which causes famine) than we would otherwise experience.

The two biggest dangers are that we screw it up and dim too much, with no way to undo it, and that we use it as an excuse to keep polluting.
