'Clawdbot' Has AI Techies Buying Mac Minis 66
An open-source AI agent originally called Clawdbot (now renamed Moltbot) is gaining cult popularity among developers for running locally, 24/7, and wiring itself into calendars, messages, and other personal workflows. The hype has gone so far that some users are buying Mac Minis just to host the agent full-time, even as its creator warns that's unnecessary. Business Insider reports: Founded by [creator Peter Steinberger], it's an AI agent that manages "digital life," from emails to home automation. Steinberger previously founded PSPDFKit. In a key distinction from ChatGPT and many other popular AI products, the agent is open source and runs locally on your computer. Users then connect the agent to a messaging app like WhatsApp or Telegram, where they can give it instructions via text.
The AI agent was initially named after the "little monster" that appears when you restart Claude Code, Steinberger said on the "Insecure Agents" podcast. He formed the tool around the question: "Why don't I have an agent that can look over my agents?" [...] It runs locally on your computer 24/7. That's led some people to brush off their old laptops. "Installed it experimentally on my old dusty Intel MacBook Pro," one product designer wrote. "That machine finally has a purpose again."
Others are buying up Mac Minis, Apple's 5"-by-5" computer, to run the AI. Logan Kilpatrick, a product manager for Google DeepMind, posted: "Mac mini ordered." It could give a sales boost to Apple, some X users have pointed out -- and online searches for "Mac Mini" jumped in the last 4 days in the US, per Google Trends. But Steinberger said buying a new computer just to run the AI isn't necessary. "Please don't buy a Mac Mini," he wrote. "You can deploy this on Amazon's Free Tier."
The AI agent was initially named after the "little monster" that appears when you restart Claude Code, Steinberger said on the "Insecure Agents" podcast. He formed the tool around the question: "Why don't I have an agent that can look over my agents?" [...] It runs locally on your computer 24/7. That's led some people to brush off their old laptops. "Installed it experimentally on my old dusty Intel MacBook Pro," one product designer wrote. "That machine finally has a purpose again."
Others are buying up Mac Minis, Apple's 5"-by-5" computer, to run the AI. Logan Kilpatrick, a product manager for Google DeepMind, posted: "Mac mini ordered." It could give a sales boost to Apple, some X users have pointed out -- and online searches for "Mac Mini" jumped in the last 4 days in the US, per Google Trends. But Steinberger said buying a new computer just to run the AI isn't necessary. "Please don't buy a Mac Mini," he wrote. "You can deploy this on Amazon's Free Tier."
This feels like the future (Score:5, Insightful)
I think many people will jump at having an agent they control and own. I want the convenience of an agent without giving Sam Altman access to my data.
Re:This feels like the future (Score:5, Informative)
It's 'local' in the sense that a real mail client sucks less than webmail.
It can run with a local model (Score:2)
You can use it with a local model, it's a pain in the ass to setup, but some people have it running with for example "minimax m2.1 4bit", it needs a model that handles tool running properly.
Re: (Score:2)
Heard of it today. Tried it. It really is focused on using not local LLMs, that is for sure. Last week I started playing around properly with LM Studio and its MCP support. My local LLMs can now search the internet, can track time, access the file-system, access git and gitlab, write n8n + playwright + Terraform scripts, RDP and some other things I find handy.
And that works with any of the local LLMs with tool support. There is still a knowledge gap between local and cloud LLMs, but the local ones become a
Re: (Score:2)
Only that this is not an agent you "control and own". It uses either one of the commercial LLMs or you have to run your own in addition. This thing is just a glue-layer.
Funny how doing a tiny bit of research can give you critical information.
Re: (Score:3)
A glue layer is otherwise known as an agentic framework. Hence the GP's comment about "having an agent that they control and own".
Re: (Score:2)
Mac mini vs AWS free tier (Score:2)
Of course, if you depend on your bot for anything of consequence for home automation, perhaps the concept of being dependent on internet access for it to function wouldn't be ideal. A number of folks have a local version of home assistant for home automation that functions just fine without the internet. It would be a shame to hamstring its "minder" if you lose internet connectivity or AWS has a bad hair day.
Best,
Knowing your (local) audience. (Score:5, Interesting)
Steinberger said buying a new computer just to run the AI isn't necessary. "Please don't buy a Mac Mini," he wrote. "You can deploy this on Amazon's Free Tier."
Uh..
..gaining cult popularity among developers for running locally, 24/7..
When THE primary reason your product is making headlines today is the localized capability, it tends to speak to bit of a disconnect in the founders recommendation here.
Now where did I put that Beowulf Pi Cluster For Dummies book..
Re:Knowing your (local) audience. (Score:5, Informative)
I haven't looked at what this thing is but why can't it be run on ordinary PC hardware? Either CPU or GPU, nvidia, etc? Why a a Mac?
FYI, cause I looked... It has a gateway component (IE: Server part) and companion apps (IE: the part that brings it to your device). It's all written in Typescript and runs on Node (node.js). The gateway runs on Windows, Mac, or Linux (maybe others?). They have a MacOS bar app companion, and mobile apps for iOS and Android. Windows and Linux companion apps are planned but don't exist yet - thus the push for a Mac (or at least that seems like the obvious reason).
Re:Knowing your (local) audience. (Score:4, Insightful)
My guesses:
1) Unix-based, non-Windows OS
Yeah, of course you could also set up a Linux machine, but just buying a Mac is much easier.
2) Small form factor and nice product design
It occupies very little space on your desk and doesn't look ugly.
3) Established ecosystem
Many of these users already use Apple devices, so the Mini nicely integrates with them.
4) Relatively low price
It's not cheap, but not prohibitive.
5) Trendy
They're following the trend.
Re:Knowing your (local) audience. (Score:5, Insightful)
My guesses:
1) Unix-based, non-Windows OS Yeah, of course you could also set up a Linux machine, but just buying a Mac is much easier.
2) Small form factor and nice product design It occupies very little space on your desk and doesn't look ugly.
3) Established ecosystem Many of these users already use Apple devices, so the Mini nicely integrates with them.
4) Relatively low price It's not cheap, but not prohibitive.
5) Trendy They're following the trend.
Having a M4 Mac Mini, I have to note that is a damn nice little computer. A hella deal for the price. fast to boot and run - the Adobe Creative Suite flies on it compared to the Intel Mac I traded in. And Apple's trade-in program - I think it was 600 bucks I laid out in the end. Yes, Unix base is important to me, since I also use Linux, the similarities are nice, and I spend a bit of time in Terminal.
And it is a cute little thing.
If there is one thing I don't care for, it is the placement of the power switch. Underneath the machine, in the left rear. As I told an Apple representative, they produce these pretty machines, but must employ an evil genius to place the power switches in bizarre or obscure places.
Otherwise, it is nicely performing, aircraft aluminum, eye candy with a tiny footprint.
And not Windows based.
Re: (Score:2)
I honestly can't think of a reason to ever turn off a computer that stays plugged into the wall. I'm sure you have your reasons, but I expect that 99% of buyers press the power button exactly once.
Re: (Score:2)
I honestly can't think of a reason to ever turn off a computer that stays plugged into the wall. I'm sure you have your reasons, but I expect that 99% of buyers press the power button exactly once.
It's an old habit, from the days when I had to turn them off when not using them. If the mini wasn't so fast on boot, I might reconsider it, but it hauls ass to boot.
Re: (Score:3)
You can run the gateway on anything.
Re: (Score:3)
If you're only going to run small models at home, your best option by far is a modern NVidia gaming GPU. The problem comes when you want to run a large model at home. And there's really only two good "home scale" options for this: macs like the M3 Ultra / Mac Studio, and the NVidia DGX Spark (1 or 2 linked together). You simply can't run these large models (even quantized) o
Re: (Score:2)
Ugh, Slashdot messed up the italics.
Just to clarify: on the macs, the GPU operates on system memory. It has pretty awful FLOPS (~26 TFLOPS), but what matters for LLM inference is that its latency is low and bandwidth is high, and you can get versions with up to 512GB, for very sizable models
DGX Spark (formerly called "Digits") is a tiny desktop box from NVidia with 128GB (you can chain two together for 256GB). It has 4x the FLOPS than the macs (still well less than a modern gaming GPU!) but 1/3rd the memo
Re: (Score:3)
There's also the impact of the rampocalypse.
As expensive as Apple originally priced the ram used on Mac Minis, it looks darn right reasonable right now with the surge in DDR5 prices.
Re: (Score:2)
The M series chips use a Unified Memory Architecture that effectively lets you use main system memory as GPU memory. The CPUs, GPUs, and Neural Engines all have direct access to the same memory. No need to be pumping bits between memory subsystems over PCI like the Intel ecosystem is still doing. Since most of these models are restricted to running on a certain amount of GPU memory you are either buying a high end NVIDIA board that alone is more than the price of a Mac mini or using the M series chips with
Re: (Score:2)
When THE primary reason your product is making headlines today is the localized capability, it tends to speak to bit of a disconnect in the founders recommendation here.
Exactly. Not to mention people will get bored with pouring over graphs of API calls to make sure they stay within the free tier and don't go over. Hey, maybe they could get the AI to do that? Seems suitably loopy...
Boy I feel old (Score:5, Interesting)
What is the purpose of this? What problem am I solving?
PIaaS (Score:2)
Prompt Injection as a Service
Re: (Score:2)
It does seem that way, judging from this video on the "Low Level" channel [youtube.com].
Re: (Score:2)
Funny thing is that there is no known way to fix prompt injection.
Re: (Score:2)
Sure there is, structured input.
Re: (Score:2)
Nope. That just makes it a teeny bit harder. Unless you go to extremes, but then the LLM basically stops being useful.
This is not SQL injection. SQL injection can be fixed with structured input, because all input to SQL is highly structured already.
Re: (Score:2)
No, it does not simply make it "harder". The LLM only looks at the instruction-tagged section for instructions. It doesn't look at other tags for instructions.
Re: (Score:2)
And just to head you off: no, you can't "tag it yourself". Tags are denoted by special tokens. The tokenizer does not convert any supplied text into these tokens.
Re: (Score:2)
You obviously have no clue how security actually works. This is harder to bypass, but it just needs a bit more work. Which can nicely be automated on the attacker side and then it does not need more work anymore.
Re: (Score:2)
That's like claiming you can still do SQL injection on parameterized SQL queries if you just do "a bit more work".
Re: (Score:2)
It's the logical evolution! Hop on to this ship now! Otherwise you may be left behind on shore alone and can only watch all the others drown.
Not for me (Score:2)
Re: (Score:3, Insightful)
If I hadn't gone to the airport early to take a bath
WOT?
Re: (Score:2)
Never heard of an airport bidet...that's only slightly less strange sounding that the OP wanting to somehow bathe at an airport....
Re: (Score:2)
Take a bath at an airport?!?!?
Putting aside the very idea of bathing at an airport being weird....WTF would you bathe AT an airport?
Hell, I hate to just sit and take a shit at an airport and only do that if I can't possibly wait.....cant' imagine bathing? Soap? Shampoo? And where is a tub?!?!
Re: (Score:2)
And where is a tub?!?!
Out on the tarmac...
Local? (Score:2)
When the article says local AI, I'm thinking that the I runs on the Mac Mini(or other local hardware).
But, you interact with it through WhatsApp? It uses AI backends like ChatGPT and Claude? It does stuff on website APIs?
How is this local? It sounds like it's just a friont-end. What is it's use case that other AI solutions don't have? It seems like Ollama is more loacl than this.
Re: (Score:3)
It can run local models or (and most commonly) uses Anthropic/OpenAI api calls to their models.
If you have a high end mac mini you could probably do fairly well with local models.
Re: (Score:2)
Or if you have a modern SFF AMD box, which will have a processor with onboard GPU, AI features, and unified memory. There's a broad selection on AliExpress.
Re:Local? (Score:4, Interesting)
Mac Mini is a popular device for local inference, as it has a built-in GPU which shares the system RAM, so you can run relatively large models for its price.
I assume Clawdbot/Moltbot can work with any inference backend with an OpenAI-compatible API, so it's up to the user to choose between local inference or using a subscription to someone else's LLM service.
What the value is of running an agent locally when all your data is in cloud services is a good question, but I guess it could also use self-hosted data sources, if you have those.
Re: (Score:2)
Don't think I'm the only one, but I make a serious effort to not store my data in the cloud. As I am of the opinion that my data is my problem and doesn't need to become someone else's problem on their computer(s). You can put as much lipstick and jewelry on that proverbial pig, but the cloud is nothing else than someone else's computer.
RNG computing (Score:2)
This program is demanding absolutely precise and error free operation from something that do include an RNG generator.
It's the epitome of not understanding the tech.
Source: Business Insider. (Score:2)
That means the author got a mate to plant a story on the drivel-driven site Business Insider.
Which appears to be enough to get on Slashdot in 2026.
That's all, nothing to see here.
Re: (Score:2)
Avoid.
And do NOT use Business Insider as a source in future.
Not local inference (Score:4, Interesting)
Just to be clear here, Moltbot does not run AI inference locally. You connect it to your standard AI services (ChatGPT, Gemini, etc), which do the actual AI processing. What Moltbot does is connect those things to other things, like to Whatsapp.
In fact, even if you do have your own local inference engine running, like a llama model, Moltbot can't work with it currently. It ONLY works with the big AI services.
It really is just glue to connects things together, and is so lightweight it even runs on a Raspberry Pi with 2GB of ram. So I'm not sure what all the Mac Mini hubbub is about. The ability to run this on Amazon's Free Tier shows just how lightweight and little processing it does (it's just formatting and moving chat messages from one thing to another basically).
To earlier commenters saying that Peter Steinberger is missing the entire point of running locally when he recommends AWS - you aren't understanding what Moltbot is doing. If you're already committed to using online services for the fundamental AI inference itself, it doesn't matter that Moltbot is running in the cloud too.
Re:Not local inference (Score:4, Informative)
Just to be clear here, Moltbot does not run AI inference locally. You connect it to your standard AI services (ChatGPT, Gemini, etc), which do the actual AI processing.
From the project page, the first point of pride is:
Runs on Your Machine
Mac, Windows, or Linux. Anthropic, OpenAI, or local models. Private by default--your data stays yours.
So no, that's only an option.
Re: (Score:2)
Re: (Score:2)
Mac Minis are solid at running models locally.
Re: (Score:2)
Few people have the hardware capable of running the big models locally
There are many "large" models that run perfectly fine on a modest gaming rig. Not every model needs to be 1T+ parameters. Even some "big" models like Llama 4 Scout or Claude 5 can run on a heafty gaming rig.
The key about this kind of thing is, if you're deploying something locally you're usually doing it for a reason. When you have a reason you don't need a general purpose attempt to do everything including generate pictures of a kitchen sink AI model. Special purpose models are small, run locally, and ofte
Re: Not local inference (Score:2)
Any schmoe with a modern processor and a fair amount of RAM can run fairly sizable models on their CPU with decent performance. The RAM is the sticking point today, but if they bought just a few months ago, no problem.
I have a 5900X and models are only a little slower on that than my 4060 Ti 16GB. Plus I have 64GB so I can actually run larger models there than on my GPU.
A used Nvidia compute card with 24GB VRAM is available pretty reasonably on eBay, for less than I paid for this GPU.
Re: (Score:3)
See HERE on YouTube [youtu.be].
Re: (Score:2)
Re: (Score:2)
I was basing my comment about local model support on this:
https://github.com/moltbot/moltbot/issues/2838 [github.com]
Clawdbot currently supports model providers, but configuring local inference engines such as vLLM and Ollama is not straightforward or fully documented. Users running local LLMs (GPU / on-prem / WSL) face friction when attempting to integrate these providers reliably.
Adding official support for vLLM and Ollama as first-class providers would significantly improve local deployment, performance, and developer experience.
So it sounds like it is in the realm of possibility, but being neither documented nor straightforward sounds beyond the reach of most normal users.
Re: (Score:2)
It sounds like it's doable using a proxy. Most normal users won't be doing this at all.
Re: (Score:2)
I went through the installation and configuration wizard from Moltbot today. The text on the website may say that there is support for local LLMs, but there is no such option during installation or configuration. Only options for 'copy your API key from Claude/OpenAI/Brave/OpenRouter/etc here'.
So it is nice and all that their website makes those claims, but the software sure doesn't. Now I'm quite sure that it is possible to use a local LLM, but I expect it to be much more of a hassle than it is worth.
My ex
Re: (Score:2)
> you aren't understanding what Moltbot is doing
This is true, but that's because the summary says the exact opposite of what you're saying, and I suspect it's you, not the summary, that's right given that a "useful" spicy-autocomplete system generally requires more set up than "Just install this easy to download package on a discarded Mac mini".
Urgh! Slashdot!
Re: (Score:2)
What a nice way to leak data and get attacked!
Re: (Score:2)
They are subject to a supply chain issue (Score:1)
Re: (Score:2)
Yeah, while there's a lot of excitement about Clawdbot, I've also seen a number of complaints about the... idiosyncratic decisions of the developer. I understand that there's fork projects underway.
Yep, because "AI Agents" are ready for prime time (Score:2)
Oh sweet summer children ...
Re: (Score:2)
They are. I'm not sure what is feeding your ignorance today but many AI models are exceptionally good at what they do, and AI agents are typically special purpose ringfenced in purpose.
No they can't do everything and if you try to get it to do everything then the problem is you rather than the AI model. Just like the guy delivering my paper won't take my garbage either.
Yeah ChatGPT and those generic do everything models are fucking useless. AI agents, quite the opposite especially when they are running a mo