Forgot your password?
typodupeerror
Security AI Privacy

How AI Assistants Are Moving the Security Goalposts 41

An anonymous reader quotes a report from KrebsOnSecurity: AI-based assistants or "agents" -- autonomous programs that have access to the user's computer, files, online services and can automate virtually any task -- are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.

The new hotness in AI-based assistants -- OpenClaw (formerly known as ClawdBot and Moltbot) -- has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted. If that sounds like a risky proposition or a dare, consider that OpenClaw is most useful when it has complete access to your entire digital life, where it can then manage your inbox and calendar, execute programs and tools, browse the Internet for information, and integrate with chat apps like Discord, Signal, Teams or WhatsApp.

Other more established AI assistants like Anthropic's Claude and Microsoft's Copilot also can do these things, but OpenClaw isn't just a passive digital butler waiting for commands. Rather, it's designed to take the initiative on your behalf based on what it knows about your life and its understanding of what you want done. "The testimonials are remarkable," the AI security firm Snyk observed. "Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who've set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they're away from their desks." You can probably already see how this experimental technology could go sideways in a hurry. [...]
Last month, Meta AI safety director Summer Yue said OpenClaw unexpectedly started mass-deleting messages in her email inbox, despite instructions to confirm those actions first. She wrote: "Nothing humbles you like telling your OpenClaw 'confirm before acting' and watching it speedrun deleting your inbox. I couldn't stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb."

Krebs also noted the many misconfigured OpenClaw installations users had set up, leaving their administrative dashboards publicly accessible online. According to pentester Jamieson O'Reilly, "a cursory search revealed hundreds of such servers exposed online." When those exposed interfaces are accessed, attackers can retrieve the agent's configuration and sensitive credentials. O'Reilly warned attackers could access "every credential the agent uses -- from API keys and bot tokens to OAuth secrets and signing keys."

"You can pull the full conversation history across every integrated platform, meaning months of private messages and file attachments, everything the agent has seen," O'Reilly added. And because you control the agent's perception layer, you can manipulate what the human sees. Filter out certain messages. Modify responses before they're displayed."
This discussion has been archived. No new comments can be posted.

How AI Assistants Are Moving the Security Goalposts

Comments Filter:
  • So I'd use such an agent to extract the relevant evidence from my google account and delete the rest of the stuff. The better to not pay granny wolf google. (That's supposed to be Little Red Riding Hood joke.)

    Actually I think a lot of that data is actually stuff the google has decided to store about me and now they want me to pay for the privilege of spying on myself? Sorry, no sale. Anyone else getting nagging warning about how their google account is suddenly almost full? (I already stomped on a big chunk

  • by gurps_npc ( 621217 ) on Monday March 09, 2026 @05:09PM (#66032194) Homepage

    Giving them authority to do ANYTHING is incredibly stupid.

    Would you give a teenager your car's pink slip, your home's title, the password to your bank accounts and tell it to file your taxes?

    Nope. Do not give any AI any permissions to do more than send your messages.

    Frankly, I do not think it is safe to give them the information the internet ones collect without my knowledge or consent.

    • Would you give a teenager your car's pink slip, your home's title, the password to your bank accounts and tell it to file your taxes?

      I'm not an accountant, but if you're using your car and home titles, I think you're already doing your taxes wrong. :-)

      • Would you give a teenager your car's pink slip, your home's title, the password to your bank accounts and tell it to file your taxes?

        I'm not an accountant, but if you're using your car and home titles, I think you're already doing your taxes wrong. :-)

        Sounds like you understood their metaphor just fine. Nobody should be using those things for that activity.

  • Or, more precisely,

    sudo $(dd if=/dev/random of=/dev/stdout count=1)

    Step 3: profit!

  • by gweihir ( 88907 ) on Monday March 09, 2026 @05:27PM (#66032222)

    With technology they do not understand. Is this a security risk? Yes, massively so. Can it be fixed? To the best of our knowledge only by not running these agents except inside heavily isolated sand-boxes. Which kind of defeats their purposes. But LLMs cannot ever be really reliable ans that is what is needed for any security-critical mechanism. Too many people are just bright-eyed naive and expect things from their shiny new fetish that it cannot deliver.

    In other words, bad idea is bad idea.

    • by nightflameauto ( 6607976 ) on Monday March 09, 2026 @05:58PM (#66032278)

      With technology they do not understand. Is this a security risk? Yes, massively so. Can it be fixed? To the best of our knowledge only by not running these agents except inside heavily isolated sand-boxes. Which kind of defeats their purposes. But LLMs cannot ever be really reliable ans that is what is needed for any security-critical mechanism. Too many people are just bright-eyed naive and expect things from their shiny new fetish that it cannot deliver.

      In other words, bad idea is bad idea.

      This is the biggest problem with the current AI prophets' promises. People in this day and age are stupid enough to believe the peddlers of the new snake oil outright, rather than viewing this new thing as an avenue of research that must be tested before being given the keys to "do the things." For some reason, people just believe when it's new tech, without proof. And beyond that, with all kinds of proof that the things being promised aren't just not yet real, they may very well be impossible to achieve with the methods currently being used.

      From the start there have been those of us saying that the fear we have of current gen AI is not so much the AI itself. It's that humans are going to put these non-thinking machines in charge of important decisions which will lead to terrible outcomes. And even as we see some of those terrible outcomes becoming real, people are still believing in the infallibility of these machines.

      The only possible bright side in all of this is that at some point they will fuck up so horribly that people will have no choice but to realize machines are not akin to god. Let's just hope that the fuck up that finally wakes people up isn't the one that also ends humanity. Unfortunately, we just may be dumb enough to give them that capability before we snap out of it.

      • by gweihir ( 88907 )

        I agree to all of that.

        What I really do not get is why so many people are so defective in that way. It does not seem to be a question of education, upbringing or information availability. People believe stuff against mountains of solid, verifiable evidence. It seems to be some kind of fundamental mental disability. Yes, we come from tribal roots and there, keeping the tribe together is the overriding concern, no matter the stupid things that may require. But we also have consciousness and intelligence that

        • I agree to all of that.

          What I really do not get is why so many people are so defective in that way. It does not seem to be a question of education, upbringing or information availability. People believe stuff against mountains of solid, verifiable evidence. It seems to be some kind of fundamental mental disability. Yes, we come from tribal roots and there, keeping the tribe together is the overriding concern, no matter the stupid things that may require. But we also have consciousness and intelligence that allows us to raise above that. It seems most people do not even want to leaver their primitive roots behind.

          At this point it seems to have as much to do with greed being seen as a positive as anything else. And there's nothing more greedy than current generation AI. They want all the data. They want all the electricity. They want all the water. They want all the economic resources. They want *EVERYTHING*, and I think a lot of people see that amount of greed and think there must, absolutely *HAS* to be, some massive payoff coming from it. Greed can't be a bad thing, therefore, feeding greed will do good things.

          At

          • by gweihir ( 88907 )

            Interesting. While this is totally messed up, the idea is plausible to me. Greed, FOMO and "This time technology must finally bring us the Golden Age!" (Why should it?) put together with the usual scammer mindset in the parts of the AI community that caused numerous AI hypes before. The main difference is that the players are much larger this time and that the likes of Microsoft, Google and others have run out of ideas for innovation and hence jumped on this one.

            Obviously, the evidence that this massive pay

            • Interesting. While this is totally messed up, the idea is plausible to me. Greed, FOMO and "This time technology must finally bring us the Golden Age!" (Why should it?) put together with the usual scammer mindset in the parts of the AI community that caused numerous AI hypes before. The main difference is that the players are much larger this time and that the likes of Microsoft, Google and others have run out of ideas for innovation and hence jumped on this one.

              Obviously, the evidence that this massive payoff will not manifest is mounting. And none of the previous AI hypes ever delivered more than really small parts of the promises made. But, given the extreme effort LLMs require, we may not even get most of these small parts this time.

              The biggest problem I see so far is that every time we slam into the wall of "no returns on investment," somebody comes along and screams about how we just need to give more to make it all work out. Faster consumption will lead us to computer god! And for whatever reason, we're letting these assholes get by with it and continue to suck down resources, despite the lack of payoff.

              I hope we turn this freight train around before it ploughs over us, but they're already trying to push the idea that it *will* take

              • by gweihir ( 88907 ) on Tuesday March 10, 2026 @11:43AM (#66033398)

                It's gonna get way weird here in the next couple years.

                Probably, yes. What the whole thing points out is the strength of the "Sunk cost fallacy". Many (most?) people cannot break out of that and struggle to correct bad past decisions they made. Often this results in them not stopping things when they still could have done so with limited damage. Instead they try to force success. Obviously, that never works and some hard limit eventually kicks them in the face. But people do not even seem to learn from that.

                As to taking over most / all white collar work, I see marked difference here in Europe. I currently mostly teach applied CS students and none of them seem to have any fear of getting replaced. In fact, many have part-time work while doing their studies, because it is so easy and employers are happy to support that. And they are all careful with AI use because they understand that negatively impacts learning. And most are by now pretty annoyed by all the empty and misleading claims about LLMs. LLM jokes are on the raise.

                For my personal work, yes, I can take some LLM and ask me to make me a slide-set for a class. But the quality is laughably bad and the most important things (topic selection and telling the participants what are the really important parts and contexts) will be messed up. Instead, it will be some incomplete "best of" list made without insight and presented without important context. That is not what I am employed for. If that were all it took, the students could just learn by themselves from some mediocre book on the topic. Hence what I see is pretty much the opposite of my job bein threatened. Yes, maybe they can eventually replace paper-pushers with no decision-making powers with LLMs. But even that may fail. First, mistakes may well cause liability in even in that area. And second, the effort to keep LLMs going may well be too high to make that worthwhile.

                • It's gonna get way weird here in the next couple years.

                  Probably, yes. What the whole thing points out is the strength of the "Sunk cost fallacy". Many (most?) people cannot break out of that and struggle to correct bad past decisions they made. Often this results in them not stopping things when they still could have done so with limited damage. Instead they try to force success. Obviously, that never works and some hard limit eventually kicks them in the face. But people do not even seem to learn from that.

                  As to taking over most / all white collar work, I see marked difference here in Europe. I currently mostly teach applied CS students and none of them seem to have any fear of getting replaced. In fact, many have part-time work while doing their studies, because it is so easy and employers are happy to support that. And they are all careful with AI use because they understand that negatively impacts learning. And most are by now pretty annoyed by all the empty and misleading claims about LLMs. LLM jokes are on the raise.

                  For my personal work, yes, I can take some LLM and ask me to make me a slide-set for a class. But the quality is laughably bad and the most important things (topic selection and telling the participants what are the really important parts and contexts) will be messed up. Instead, it will be some incomplete "best of" list made without insight and presented without important context. That is not what I am employed for. If that were all it took, the students could just learn by themselves from some mediocre book on the topic. Hence what I see is pretty much the opposite of my job bein threatened. Yes, maybe they can eventually replace paper-pushers with no decision-making powers with LLMs. But even that may fail. First, mistakes may well cause liability in even in that area. And second, the effort to keep LLMs going may well be too high to make that worthwhile.

                  Here in America, we go for "almost good enough" over quality if quality costs more. Not sure what that means for LLM adoption and where it will lead us, but so far it's looking like a shit-show that's slowly replacing the horizon.

                  • by gweihir ( 88907 )

                    Yes, maybe they can eventually replace paper-pushers with no decision-making powers with LLMs. But even that may fail. First, mistakes may well cause liability in even in that area. And second, the effort to keep LLMs going may well be too high to make that worthwhile.

                    Here in America, we go for "almost good enough" over quality if quality costs more. Not sure what that means for LLM adoption and where it will lead us, but so far it's looking like a shit-show that's slowly replacing the horizon.

                    In Europe, "as simple as possible, but not simpler" and "never do things cheaper than possible" are still king. I guess it has some impact on profits, but the benefits for quality quite clear.

                    But yes, the LLM craze has "engineering cheaper than possible" written in all over it. And

    • by tlhIngan ( 30335 )

      The problem is one that got AT&T back in the 60s when they used in-band signalling. Some people read the documentation and then made devices to hack the phone network. Then it was published in Esquire and everyone and their dog were making long distance calls for free.

      AI assistants are the same way - the guard rails, the context and the actual prompt are all part of the same block of text, so these AI agents are just as vulnerable to prompt attacks.

      Even better, they are vulnerable to hallucinations so p

      • by gweihir ( 88907 ) on Monday March 09, 2026 @07:25PM (#66032472)

        Yes, indeed.

        But the other problem is that while in telephone networks, you can use out-of-band signaling, and, after decades of attacks and problems, this is now the standard. For LLMs, that is not possible. There is only one channel and everything goes into it. Maybe there will eventually be other AI types that can separate data and control reliable, while using similar knowledge-bases to LLMs. But LLMs will not be it. The very principle they operate on is that everything is seen as language. Without that, the idea stops working.

    • I thought this would be a fun experiment. The OpenClaw agent will run on a Mac mini 24GB I'm buying for this experiment.

      I'll run the agent in a container....the 24GB mini should also be able to run a very light local model for it too.

      I have an older Mac Pro (2019)...that I'm planning to try to run llama.cpp or maybe ollama...that is somewhat metal friendly, so I can run a model like CPP-OSS-20B...and just expose it as an api. I was planning to use something like Tailscale to route all communications thro

  • by Anonymous Coward
    They've burnt them to the ground.
    • by ranton ( 36917 )

      They haven't done anything to the goalposts. They have just ignored them.

      • by evanh ( 627108 )

        Run straight off every sideline, ploughing through bystanders, ripping off clothing, flushing every toilet, breaking down locked doors, scratching paint jobs, running into traffic, ...

    • 'We have this neat script that sort of works some of the time, now if only we could make it useful somewhere - let's unleash it on everyone's personal data.'

      - what could go wrong?
  • by Anonymous Coward

    "The testimonials are remarkable," the AI security firm Snyk observed. "Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who've set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they're away from their desks."

    AI firm says AI is great.

    It's an old saying but applies here: "If it sounds too good to be true, it probably is."

  • You don't put an oven "in charge" of dinner. You use it, as a tool.
  • ....I encourage my targets to use AI assistants for everything.
  • Something is wrong.

    If you have too much to manage in your digital life...
    something is wrong.

    If you don't understand your own digital life...
    something is wrong.

    If you think you need something to manage your own life, look in the mirror and ask yourself WHY?

    - seriously, it's like people don't want to live their own damn lives.
    • by Bongo ( 13261 )

      It's agents all the way up.

      But yes, great point. Why are we using all this stuff to make life more complicated and creating lots of worthless tasks.

      • It's agents all the way up.

        But yes, great point. Why are we using all this stuff to make life more complicated and creating lots of worthless tasks.

        The digital economy runs on the same concept as the actual economy: forever growth, always. And the tech firms have found a way to create that forever growth, right as humans were beginning to hit the wall and say they've had about enough tech mucking up their lives. Now they can let the machines take over all the digital nonsense that we used to be required to do to keep the tech companies flush, and they can slowly subsume themselves in digital busywork, agents feeding agents feeding agents forever and ev

  • has no limit!
  • by BeaverCleaver ( 673164 ) on Monday March 09, 2026 @09:56PM (#66032636)

    This whole concept is so crazy. Even "non-techy" people know about identity theft, not sharing passwords etc. The news has been full of amusing "AI" fuckups since the models became public. Most of those stories are riffs on the 1980s "computer fucked up" news stories from the 1980s. Even Joe Sixpack knows that computers can't be trusted.

    So why the hell is anyone letting an LLM near anything useful or important? How are vendors marketing these "assistant" tools? At this point I can only conclude that it's a deliberate plot by OpenClaw et al to mine users' data so they have something else of value before the "AI" bubble pops. Either that, or OpenClaw et al are just a front for some three-letter agency, just like those "Anom" phones from a few years ago. Or Crypto AG from years before that.

    • Identity theft can have terrible consequences!

      Just moments later:

      Let me hand over my identity to an LLM based agent. What could possibly go wrong?

  • I get mine to call me Moles FTW
  • 10billion tools lol ridiculous openblah.
  • There's no way to sugarcoat it: AI's massive data theft must be controlled. AI is the 2nd most harmful invention humanity has ever made.
  • We all remember, how openclaw agents tend to take names with a crustacean reference, like MJ Rathbun (named after zoologist) taking aim at Scott Shambaugh [theshamblog.com].

    And now a dude called "Krebs" (German word for Cancer) shows up in the scene ...

    PS: to the humor impaired: I know who Brian Krebs is ;-)

  • by Qbertino ( 265505 ) <moiraNO@SPAMmodparlor.com> on Tuesday March 10, 2026 @07:44AM (#66033004)

    At the law firm where I'm the sole software expert we already discussed this problem. This is Germany/EU, so we (gladly) have strict data protection laws. The crew I'm with right now is a bunch of lawyers who are actually comparatively innovative and use AI daily, for legal drafts and other things.

    We've already discussed the problem and the option of an anonymizing proxy that converts critical client data into spoof data and back, once the AI is finished doing it's thing. With well formatted data this is actually quite easy and can prevent anything critical being fed into AI or external services where they don't belong. So it's a good thing that we also are discussing switching our documents to markdown and our legal document repo to Git. ... Did I mention how refreshing it is to have deciders that actually know a thing or two about computers?

    The AI doesn't care if our client is called Pipi Longstocking from Cloud Coocoo Land. And we don't care if we can safely cross-reference that identity with our real client. And we would have reasonable data-safety that the authorities can't complain about.

    We might start implementing something like this this year. Perhaps even into our legal management product I'm building right now.

    Either way: Anonymizing Proxy.
    That's what I came up with and that's what I would call it and AFAICT it sounds like a solid concept.

    What do you guys think?

The absent ones are always at fault.

Working...