Forgot your password?
typodupeerror
AI Technology

'Clawdbot' Has AI Techies Buying Mac Minis 66

An open-source AI agent originally called Clawdbot (now renamed Moltbot) is gaining cult popularity among developers for running locally, 24/7, and wiring itself into calendars, messages, and other personal workflows. The hype has gone so far that some users are buying Mac Minis just to host the agent full-time, even as its creator warns that's unnecessary. Business Insider reports: Founded by [creator Peter Steinberger], it's an AI agent that manages "digital life," from emails to home automation. Steinberger previously founded PSPDFKit. In a key distinction from ChatGPT and many other popular AI products, the agent is open source and runs locally on your computer. Users then connect the agent to a messaging app like WhatsApp or Telegram, where they can give it instructions via text.

The AI agent was initially named after the "little monster" that appears when you restart Claude Code, Steinberger said on the "Insecure Agents" podcast. He formed the tool around the question: "Why don't I have an agent that can look over my agents?" [...] It runs locally on your computer 24/7. That's led some people to brush off their old laptops. "Installed it experimentally on my old dusty Intel MacBook Pro," one product designer wrote. "That machine finally has a purpose again."

Others are buying up Mac Minis, Apple's 5"-by-5" computer, to run the AI. Logan Kilpatrick, a product manager for Google DeepMind, posted: "Mac mini ordered." It could give a sales boost to Apple, some X users have pointed out -- and online searches for "Mac Mini" jumped in the last 4 days in the US, per Google Trends. But Steinberger said buying a new computer just to run the AI isn't necessary. "Please don't buy a Mac Mini," he wrote. "You can deploy this on Amazon's Free Tier."
This discussion has been archived. No new comments can be posted.

'Clawdbot' Has AI Techies Buying Mac Minis

Comments Filter:
  • by memory_register ( 6248354 ) on Wednesday January 28, 2026 @09:17AM (#65954202)

    I think many people will jump at having an agent they control and own. I want the convenience of an agent without giving Sam Altman access to my data.

    • by fuzzyfuzzyfungus ( 1223518 ) on Wednesday January 28, 2026 @10:58AM (#65954382) Journal
      It's possible that people would; but this system is not that. It stores a variety of data locally to automatically generate the prompts that give a fresh session the illusion of continuity in the face of restrictive context windows, and ; but farms out the bot part the usual suspects(though the project creation sees to prefer Anthropic; so it's not Sam specifically who is getting the data).

      It's 'local' in the sense that a real mail client sucks less than webmail.
      • You can use it with a local model, it's a pain in the ass to setup, but some people have it running with for example "minimax m2.1 4bit", it needs a model that handles tool running properly.

        • Heard of it today. Tried it. It really is focused on using not local LLMs, that is for sure. Last week I started playing around properly with LM Studio and its MCP support. My local LLMs can now search the internet, can track time, access the file-system, access git and gitlab, write n8n + playwright + Terraform scripts, RDP and some other things I find handy.

          And that works with any of the local LLMs with tool support. There is still a knowledge gap between local and cloud LLMs, but the local ones become a

    • by gweihir ( 88907 )

      Only that this is not an agent you "control and own". It uses either one of the commercial LLMs or you have to run your own in addition. This thing is just a glue-layer.

      Funny how doing a tiny bit of research can give you critical information.

      • by Rei ( 128717 )

        A glue layer is otherwise known as an agentic framework. Hence the GP's comment about "having an agent that they control and own".

    • Unfortunately that is not how this works. ClawdBot/MoltBot still uses "remote LLMs" to do the heavy lifting, so you still need to reach out to Sam and Company. The security problems around ClawdBot are very serious though, and someone just "trying it out" is likely to get themselves into very serious trouble. I wrote a short post about it here, but all kidding aside -- this is very dangerous if you don't know EXACTLY what you're doing. https://www.linkedin.com/pulse... [linkedin.com]
  • Of course, if you depend on your bot for anything of consequence for home automation, perhaps the concept of being dependent on internet access for it to function wouldn't be ideal. A number of folks have a local version of home assistant for home automation that functions just fine without the internet. It would be a shame to hamstring its "minder" if you lose internet connectivity or AWS has a bad hair day.

    Best,

  • by geekmux ( 1040042 ) on Wednesday January 28, 2026 @09:38AM (#65954222)

    Steinberger said buying a new computer just to run the AI isn't necessary. "Please don't buy a Mac Mini," he wrote. "You can deploy this on Amazon's Free Tier."

    Uh..

    ..gaining cult popularity among developers for running locally, 24/7..

    When THE primary reason your product is making headlines today is the localized capability, it tends to speak to bit of a disconnect in the founders recommendation here.

    Now where did I put that Beowulf Pi Cluster For Dummies book..

    • When THE primary reason your product is making headlines today is the localized capability, it tends to speak to bit of a disconnect in the founders recommendation here.

      Exactly. Not to mention people will get bored with pouring over graphs of API calls to make sure they stay within the free tier and don't go over. Hey, maybe they could get the AI to do that? Seems suitably loopy...

  • Boy I feel old (Score:5, Interesting)

    by 50000BTU_barbecue ( 588132 ) on Wednesday January 28, 2026 @09:52AM (#65954252) Journal

    What is the purpose of this? What problem am I solving?

  • Prompt Injection as a Service

    • It does seem that way, judging from this video on the "Low Level" channel [youtube.com].

      • by gweihir ( 88907 )

        Funny thing is that there is no known way to fix prompt injection.

        • by Rei ( 128717 )

          Sure there is, structured input.

          • by gweihir ( 88907 )

            Nope. That just makes it a teeny bit harder. Unless you go to extremes, but then the LLM basically stops being useful.

            This is not SQL injection. SQL injection can be fixed with structured input, because all input to SQL is highly structured already.

            • by Rei ( 128717 )

              No, it does not simply make it "harder". The LLM only looks at the instruction-tagged section for instructions. It doesn't look at other tags for instructions.

              • by Rei ( 128717 )

                And just to head you off: no, you can't "tag it yourself". Tags are denoted by special tokens. The tokenizer does not convert any supplied text into these tokens.

                • by gweihir ( 88907 )

                  You obviously have no clue how security actually works. This is harder to bypass, but it just needs a bit more work. Which can nicely be automated on the attacker side and then it does not need more work anymore.

                  • by Rei ( 128717 )

                    That's like claiming you can still do SQL injection on parameterized SQL queries if you just do "a bit more work".

    • by gweihir ( 88907 )

      It's the logical evolution! Hop on to this ship now! Otherwise you may be left behind on shore alone and can only watch all the others drown.

  • Checks me into flights? In 1991 I checked in by fax, not getting the information that the flight had been moved an hour. If I hadn't gone to the airport early to take a bath, I would have missed it. Instead I sat all the way from Singapore to Copenhagen smelling like a (not recently) dead animal. I will check myself into flights, thank you, spare me your fancy modern faxes and AI.
    • Re: (Score:3, Insightful)

      by Anonymous Coward

      If I hadn't gone to the airport early to take a bath

      WOT?

    • If I hadn't gone to the airport early to take a bath,

      Take a bath at an airport?!?!?

      Putting aside the very idea of bathing at an airport being weird....WTF would you bathe AT an airport?

      Hell, I hate to just sit and take a shit at an airport and only do that if I can't possibly wait.....cant' imagine bathing? Soap? Shampoo? And where is a tub?!?!

  • When the article says local AI, I'm thinking that the I runs on the Mac Mini(or other local hardware).

    But, you interact with it through WhatsApp? It uses AI backends like ChatGPT and Claude? It does stuff on website APIs?

    How is this local? It sounds like it's just a friont-end. What is it's use case that other AI solutions don't have? It seems like Ollama is more loacl than this.

    • It can run local models or (and most commonly) uses Anthropic/OpenAI api calls to their models.

      If you have a high end mac mini you could probably do fairly well with local models.

      • Or if you have a modern SFF AMD box, which will have a processor with onboard GPU, AI features, and unified memory. There's a broad selection on AliExpress.

    • Re:Local? (Score:4, Interesting)

      by MtHuurne ( 602934 ) on Wednesday January 28, 2026 @10:40AM (#65954344) Homepage

      Mac Mini is a popular device for local inference, as it has a built-in GPU which shares the system RAM, so you can run relatively large models for its price.

      I assume Clawdbot/Moltbot can work with any inference backend with an OpenAI-compatible API, so it's up to the user to choose between local inference or using a subscription to someone else's LLM service.

      What the value is of running an agent locally when all your data is in cloud services is a good question, but I guess it could also use self-hosted data sources, if you have those.

      • Don't think I'm the only one, but I make a serious effort to not store my data in the cloud. As I am of the opinion that my data is my problem and doesn't need to become someone else's problem on their computer(s). You can put as much lipstick and jewelry on that proverbial pig, but the cloud is nothing else than someone else's computer.

  • This program is demanding absolutely precise and error free operation from something that do include an RNG generator.
    It's the epitome of not understanding the tech.

  • Source: Business Insider.
    That means the author got a mate to plant a story on the drivel-driven site Business Insider.

    Which appears to be enough to get on Slashdot in 2026.

    That's all, nothing to see here.
    • Clawdbot is pump and dump software: http://tautvilas.lt/software-pump-and-dump/
      Avoid.

      And do NOT use Business Insider as a source in future.
  • Not local inference (Score:4, Interesting)

    by Dan East ( 318230 ) on Wednesday January 28, 2026 @10:43AM (#65954348) Journal

    Just to be clear here, Moltbot does not run AI inference locally. You connect it to your standard AI services (ChatGPT, Gemini, etc), which do the actual AI processing. What Moltbot does is connect those things to other things, like to Whatsapp.

    In fact, even if you do have your own local inference engine running, like a llama model, Moltbot can't work with it currently. It ONLY works with the big AI services.

    It really is just glue to connects things together, and is so lightweight it even runs on a Raspberry Pi with 2GB of ram. So I'm not sure what all the Mac Mini hubbub is about. The ability to run this on Amazon's Free Tier shows just how lightweight and little processing it does (it's just formatting and moving chat messages from one thing to another basically).

    To earlier commenters saying that Peter Steinberger is missing the entire point of running locally when he recommends AWS - you aren't understanding what Moltbot is doing. If you're already committed to using online services for the fundamental AI inference itself, it doesn't matter that Moltbot is running in the cloud too.

    • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday January 28, 2026 @11:22AM (#65954418) Homepage Journal

      Just to be clear here, Moltbot does not run AI inference locally. You connect it to your standard AI services (ChatGPT, Gemini, etc), which do the actual AI processing.

      From the project page, the first point of pride is:
      Runs on Your Machine
      Mac, Windows, or Linux. Anthropic, OpenAI, or local models. Private by default--your data stays yours.

      So no, that's only an option.

      • that 'or local models' part should be in fine print it's so small you can't see it. Few people have the hardware capable of running the big models locally; and when they see the price to buy said hardware, for most - that ain't happening.
        • Mac Minis are solid at running models locally.

        • Few people have the hardware capable of running the big models locally

          There are many "large" models that run perfectly fine on a modest gaming rig. Not every model needs to be 1T+ parameters. Even some "big" models like Llama 4 Scout or Claude 5 can run on a heafty gaming rig.

          The key about this kind of thing is, if you're deploying something locally you're usually doing it for a reason. When you have a reason you don't need a general purpose attempt to do everything including generate pictures of a kitchen sink AI model. Special purpose models are small, run locally, and ofte

        • Any schmoe with a modern processor and a fair amount of RAM can run fairly sizable models on their CPU with decent performance. The RAM is the sticking point today, but if they bought just a few months ago, no problem.

          I have a 5900X and models are only a little slower on that than my 4060 Ti 16GB. Plus I have 64GB so I can actually run larger models there than on my GPU.

          A used Nvidia compute card with 24GB VRAM is available pretty reasonably on eBay, for less than I paid for this GPU.

        • If you wanna lay out a bit of $$ for Mac Studio minis....you can link up to 5 of them high speed and run some BIG ass models.

          See HERE on YouTube [youtu.be].

        • by EvilSS ( 557649 )
          You don't need crazy hardware to run local models. gpt-oss-20b will run on a 24GB Mac Mini. Qwen3 8B 8 bit MLX will run on a base model mini. For the agenic stuff a lot of people are using Moltbot for either of those would be fine.
      • I was basing my comment about local model support on this:
        https://github.com/moltbot/moltbot/issues/2838 [github.com]

        Clawdbot currently supports model providers, but configuring local inference engines such as vLLM and Ollama is not straightforward or fully documented. Users running local LLMs (GPU / on-prem / WSL) face friction when attempting to integrate these providers reliably.

        Adding official support for vLLM and Ollama as first-class providers would significantly improve local deployment, performance, and developer experience.

        So it sounds like it is in the realm of possibility, but being neither documented nor straightforward sounds beyond the reach of most normal users.

      • I went through the installation and configuration wizard from Moltbot today. The text on the website may say that there is support for local LLMs, but there is no such option during installation or configuration. Only options for 'copy your API key from Claude/OpenAI/Brave/OpenRouter/etc here'.

        So it is nice and all that their website makes those claims, but the software sure doesn't. Now I'm quite sure that it is possible to use a local LLM, but I expect it to be much more of a hassle than it is worth.

        My ex

    • > you aren't understanding what Moltbot is doing

      This is true, but that's because the summary says the exact opposite of what you're saying, and I suspect it's you, not the summary, that's right given that a "useful" spicy-autocomplete system generally requires more set up than "Just install this easy to download package on a discarded Mac mini".

      Urgh! Slashdot!

    • by gweihir ( 88907 )

      What a nice way to leak data and get attacked!

    • by EvilSS ( 557649 )
      What are you even talking about? They have official documentation on using Ollama, LMStudio, vLLM, and others not to mention OpenRouter. Plus if it talks to OpenAI, then it can talk to most local models since many of the runtimes use the OpenAI API spec. As for why the Mac Mini, a bone-stock mini will give you about 8GB of high speed RAM (leaving 8gb for the system) for around $550. 24GB model gets you at least 16GB to play with. You can run a lot of decent local models on either one.
  • They auto closed a report that they are subject to a supply chain issue. Installing based on the readme on their official repo results in installing a package that is very much NOT moltbot: https://www.npmjs.com/package/... [npmjs.com]
    • by Rei ( 128717 )

      Yeah, while there's a lot of excitement about Clawdbot, I've also seen a number of complaints about the... idiosyncratic decisions of the developer. I understand that there's fork projects underway.

    • They are. I'm not sure what is feeding your ignorance today but many AI models are exceptionally good at what they do, and AI agents are typically special purpose ringfenced in purpose.

      No they can't do everything and if you try to get it to do everything then the problem is you rather than the AI model. Just like the guy delivering my paper won't take my garbage either.

      Yeah ChatGPT and those generic do everything models are fucking useless. AI agents, quite the opposite especially when they are running a mo

Speed of a tortoise breaking the sound barrier = 1 Machturtle

Working...