AI Open Source Privacy Security Software

Cal.com Is Going Closed Source Because of AI

Cal is moving its flagship scheduling software from open source to a proprietary license, arguing that AI coding tools now make it much easier for attackers to scan public codebases for vulnerabilities. "Open source security always relied on people to find and fix any problems," said Peer Richelsen, co-founder of Cal. "Now AI attackers are flaunting that transparency." CEO Bailey Pumfleet added: "Open-source code is basically like handing out the blueprint to a bank vault. And now there are 100x more hackers studying the blueprint." The company says it still supports open source and is releasing a separate Cal.diy version for hobbyists, but doesn't want to risk customer booking data in its commercial product. ZDNet reports: When Cal was founded in 2022, Bailey Pumfleet, the CEO and co-founder, wrote, "Cal.com would be an open-source project [because] limitations of existing scheduling products could only be solved by open source." Since Cal was successful and now claims to be the largest Next.js project, he was on to something. Today, however, Pumfleet tells me that AI programs such as "Claude Opus can scour the code to find vulnerabilities," so the company is moving the project from the GNU Affero General Public License (AGPL) to a proprietary license to defend the program's security.

[...] Cal also quoted Huzaifa Ahmad, CEO of Hex Security, "Open-source applications are 5-10x easier to exploit than closed-source ones. The result, where Cal sits, is a fundamental shift in the software economy. Companies with open code will be forced to risk customer data or close public access to their code." "We are committed to protecting sensitive data," Pumfleet said. "We want to be a scheduling company, not a cybersecurity company." He added, "Cal.com handles sensitive booking data for our users. We won't risk that for our love of open source."

While its commercial program is no longer open source, Cal has released Cal.diy. This is a fully open-source version of its platform for hobbyists. The open project will enable experimentation outside the closed application that handles high-stakes data. Pumfleet concluded, "This decision is entirely around the vulnerability that open source introduces. We still firmly love open source, and if the situation were to change, we'd open source again. It's just that right now, we can't risk the customer data."


Comments Filter:
  • AI can also FIX t (Score:4, Insightful)

    by Archangel Michael ( 180766 ) on Wednesday April 15, 2026 @05:05PM (#66095564) Journal

    Instead of fearing AI, use it to secure software and make it better.

    We have nothing to fear but fear itself.

    • by rsilvergun ( 571051 ) on Wednesday April 15, 2026 @05:35PM (#66095602)
Dude, they just wanted an excuse to close-source their software without getting the blowback from the community, that's all.

      Every time you want to do a shitty thing in the world now you just say AI made me do it.
      • by Anamon ( 10465047 ) on Wednesday April 15, 2026 @05:52PM (#66095638)
        Yeah, so many holes in this justification that it's completely transparent.

If attackers can now so easily scan for vulnerabilities... so can the defenders. They have access to the same tools. Not to mention that these new approaches don't even really need access to the source.

        He says they don't want to be a cybersecurity company, just quietly focus on handling the sensitive data of their customers. But you can't do one without the other.

        If you don't want to build up the security know-how and processes in-house, that's fair. Outsource it to someone who specialises in it. But a company just trying to avoid a breach by flying under the radar and cheaping out on security has no business handling sensitive data in the first place.
        • Can they find and fix the problem before an attacker can find and exploit it? Is it easier to find vulnerabilities in code or in binaries (which may be running on a server you cannot access)? Also, they have to pay someone to fix the vulnerabilities. The hacker is making their living by exploiting the vulnerabilities. That's a cost the defender has to bear that the attacker does not.

          It is a sad and troubling thought, but open-source may now be a threat.

          • by suutar ( 1860506 )

            But if it's open source, once one person has fixed it, it's fixed. Closed source means everyone has to fix it individually.

If one person fixes it, how many Arch users out there run Arch but don't touch the code? Probably around 80%, so given that maybe 20% of Arch users touch code, now take that number and spread it out over how many packages there are in Arch and you'll quickly understand that eyes on code != coders working on code.

            • What? Are you assuming that all closed source applications share their codebase but don't tell each other? That doesn't make sense. You're assuming they all share the same vulnerabilities despite being different programs written by different people?
              • by suutar ( 1860506 )

                No, but I am assuming that their different implementations probably have about the same number of vulnerabilities to start with. You're right that with closed source my fix probably wouldn't help you directly, but it's still the case that you and I each have a vulnerability to fix, instead of you being able to take my fix and use it without effort.

                • Okay, I think I get what you're saying, and while you're not wrong there are other ways that happens. I know of a couple places developers go to ask each other those sorts of questions, which is basically the same thing, just with asking instead of reviewing someone else's revisions.

                  Please don't get me wrong, I like open source. I use plenty of open things. I explicitly prefer it in most cases, because I am cheap. I greatly appreciate the hard and largely unpaid work that goes into it. But I had dou

      • by HiThere ( 15173 )

        You could be right, but my take was that it was made by a manager who had no idea. Of course, they could both be true.

I'm pretty sure they found more bugs than they wanted to fix when they checked the code.

    • by Junta ( 36770 ) on Wednesday April 15, 2026 @05:50PM (#66095630)

      GenAI is a bit nicer for offense than defense.

If you are an attacker, the time and consequences of GenAI mistakes can be more easily ignored. Whoops, an attack that didn't work, but you weren't going to succeed anyway. If it screws up the target in a way that you didn't actually want, you may have an opportunity cost because you wanted that data or to ransom the data, but you didn't care *that* much about the data. It's actually a pretty unambiguous 'win' for malicious users since the usual downsides don't matter.

      If doing defense, the consequences of GenAI mistakes are more costly. An erroneous security fix actually becomes a hole. A change that loses data is data you actually care about.

      All that said, I'm not sure closed sourcing and maintaining an open fork would realistically do anything. I doubt the proprietary fork would be sufficiently different to protect them from hypothetical security issues in their codebase.

The thing is, offence is defence if your devs are competent. One thing I've always stressed is that we should be attacking our own products all the time: using security linters and static analysis, fuzzing, and all the other techniques to kick holes in our own systems so we can identify and patch them. These AI tools are no different.

        And I'd argue for an attacker the stakes are just as high. If you screw up, you might just expose who you are, and while your target risks losing his money, you risk losing your fre

        • And I'd argue for an attacker the stakes are just as high. If you screw up, you might just expose who you are, and while your target risks losing his money, you risk losing your freedom..... or worse, if you pick a gnarly enough target.

          The stakes for the attacker go back to zero if they're in a jurisdiction where there's no chance of prosecution (Russia, Iran, North Korea, to give a non-inclusive list). They just need to not be stupid enough to hack an entity with local connections. Something like Cal.com doesn't make that list.

          • Security isn't convenient.

            Security isn't easy, but it isn't hard either.

Assume you're a target (because you are) and make it so that you're hardened. Don't be the easy target. Criminals are lazy.

      • by DarkOx ( 621550 )

Don't forget it also supercharges the general asymmetry between attackers and defenders. That is, attackers are there for as long as they want to be; defenders have to be there all the time.

Every time a newer, bigger, better-trained model (or a new set of tooling, feedback, etc. workflows) drops, defenders have to buy into it, stand it up, and evaluate a whole new wave of potential vulnerabilities that are now identified, and they have to do it before a threat actor does.

        Until we reach a point of st

      • by ufgrat ( 6245202 )

What a myopic view. If AI can scan the codebase to find a vulnerability to attack, it can scan the codebase to find a vulnerability to *fix*. You seem to misunderstand the premise here.

Also, you say "GenAI", but there's nothing "generative" about this -- it's AI's ability to interpret the code and find mistakes that the CEO of "cal" is complaining about.

    • Re:AI can also FIX t (Score:4, Informative)

      by bill_mcgonigle ( 4333 ) * on Wednesday April 15, 2026 @07:41PM (#66095846) Homepage Journal

      They're pissing on 'you' and telling you it's raining, if the summary is correct.

      It usually follows the business model collapsing and precedes a fork and the original just going into support.

      • by znrt ( 2424692 )

        They're pissing on 'you' and telling you it's raining, if the summary is correct.

        indeed. the whole premise is totally contradictory if not hypocritical: "Cal.com would be an open-source project [because] limitations of existing scheduling products could only be solved by open source" (a jarring claim in itself), and now that they have a working product "open source has become too dangerous" and "we don't want to become a cybersecurity business", which is just nonsense because 1. security by obscurity is the last thing you want, this is security 101, 2. if malicious llm agents can find v

    • Bingo. AI is an arms race. I'd rather have tools that can scan my codebase and find any security issues, than keep it private. The bad guys are going to disassemble it with AI anyway, so why hamstring the white-hats?

    • I thought the argument was that Open Source was more secure because everyone can find and fix vulnerabilities?

      AI coding tools now make it much easier for attackers to scan public codebases for vulnerabilities

      I bet the real issue is that with their actual source code in the public domain, someone could 'vibe code' a competing product with the same features...

      • I thought the argument was that Open Source was more secure because everyone can find and fix vulnerabilities?

        Up until now that has been a really questionable argument because, when you include all the modules pulled in from elsewhere, many open source codebases were really large with lots of different people and policies involved over all of the different parts of the software. You only got the real benefit for some codebases, like the Linux kernel itself, where enough people really cared.

        Now, with LLMs that can pick up bugs with better chance and explanation than old static code analysers it's completely possible

  • by DMJC ( 682799 ) on Wednesday April 15, 2026 @05:08PM (#66095568)
    They do realise that the next level of AI is binary analysis/conversion LLMs. Everything will be open source going forward.
    • Re:Cool Story Bro (Score:4, Informative)

      by Tailhook ( 98486 ) on Wednesday April 15, 2026 @05:46PM (#66095626)

      Never put off till tomorrow the poor decisions you can make today.

      Also, I've never heard of Cal.com. I suspect nothing of value is being lost here.

      • Also, I've never heard of Cal.com. I suspect nothing of value is being lost here.

        Perhaps this "dramatic shift to closed source" is more of a marketing move, to get their company name 'out there'?

        • Re: Cool Story Bro (Score:4, Informative)

          by pooh666 ( 624584 ) on Thursday April 16, 2026 @08:37AM (#66096508)
          It was fake open source to begin with. Try to actually use it for something and you run into its "licencing" restrictions. So I think this may well be an attempt to get some attention. It isn't actually that big of a change.
        • Yeah, it feels like classic enshittification, going the same way a few supposed open source projects run by corporations have gone (see also Zimbra for instance.)

          Blame AI, AI is unpopular anyway, so it's easy to blame and have people who don't have critical thinking skills (ironically, that'd be more biased towards the handful of people who are pro-genAI) assume it's right.

          (Of course, genAI is ultimately going to be just as good at analyzing machine code as it does source code, so the excuse makes little se

    • by Junta ( 36770 )

      That's assuming that you even publish the code at all. Looks like they would keep their codebase internal.

      • by znrt ( 2424692 )

        "binary analysis/conversion"

        • It seems like Cal.com is a SaaS platform for scheduling. Presumably they wouldn't have any reason to distribute their closed-source binaries if they host the platform?
          • by znrt ( 2424692 )

            fair enough, but deployed server executables aren't usually referenced as "publishing code" or "internal codebase" so i assumed there was some misunderstanding there.

            they do have phone and desktop clients although that's likely javascript, and seem open to on premises service. anyway ai binary analysis seems a moot point here because the whole "security" angle is clearly just a pretext for closing the source; the real reason is anybody's guess but i'm betting on a good laugh if (or when) they get pwned.

    • I'm pretty sure I already saw a story about a model reverse engineering from a binary. It's ultimately just assembler.

      • It's ultimately just assembler.

        I think you mean 'disassembler' - an assembler is used to create a binary executable, a disassembler turns a binary executable into a form of source code.

    • So, they're making a mistake by responding to the current threat instead of one that doesn't exist yet?
    • binary analysis/conversion means nothing if the AI doesn't understand how the CPU processes code, and currently no AI actually understands how things work.
      Given you have x86, ARM, and RISC-V, binary analysis/conversion is absolutely worthless, and a modern CPU (since the Pentium era) supports out-of-order execution of instructions, sooo exactly what would an AI be doing with that, since it doesn't understand how the processor actually reads machine code and what the results will be on an unknown architecture (you d

  • by 50000BTU_barbecue ( 588132 ) on Wednesday April 15, 2026 @05:09PM (#66095570) Journal

    But flouting, surely?

  • by MpVpRb ( 1423381 ) on Wednesday April 15, 2026 @05:12PM (#66095578)

    ... the blueprint to a bank vault.
    Hmmm... arguing for security by obscurity.
    Security researchers answered that question long ago.

    • Yup, that caught my eye too.

      Security isn't "my blueprints are secret."

      Security comes with: "Here are the blueprints, here is the research behind the blueprints, here are copies of the safe to practice on, here are conference papers discussing the known exploits of the safe, here are the reviews done by experts in the field, and here is the list of implementations used by governments around the world, if you discover exploits you'll make global news and companies everywhere will want your brains."

      • Fact is, non-software engineering projects have been doing this for nearly a century. Software engineering is still struggling to do it, and we all pay the price for it with data breaches.

      • Do you realize that you're basically hoping that criminals will prefer public acclaim over wealth and power? These are people who already are not doing the right thing. They're already perfectly happy with hurting everyone else if it benefits them, and you would give them the tools to rob everyone. You cannot offer them a reward greater than what they could steal.
    • To be fair, this is a different kind of security than the researchers studied.

      When it is said that "security by obscurity" is no security at all, it's referring to security *locks* that are meant to bar entry to those who are unauthenticated and/or unauthorized, but allow entry to those with the right credentials.

      Security by obscurity arguably *does* work better when it comes to vulnerability scans. If AI can't read the source code, it will have a harder time discovering vulnerabilities caused by bad coding

    • by Megol ( 3135005 )

      No, they argue for defense in depth which is a valid strategy.
      Obscuring the attack surface could delay detection of exploits for the "outside" people while the "insiders" have it easier. In theory.
      Would it really matter in this case? I doubt it would make a practical difference.

  • by williamyf ( 227051 ) on Wednesday April 15, 2026 @05:14PM (#66095584)

    In the very late '90s and most of the '00s, automated fuzzing tools were invented. That led to a massive increase in vulnerability discovery and reports, significantly increasing the workload of maintainers. Also, bad actors started to use said tools to discover vulns before the maintainers could discover and patch them.

    If you search tech websites of the era (including Slashdot) you will see the same set and tone of articles. Maintainers complaining of the increased workload. The sky is falling. Security-pocalypse...

    In the end, the big corpos stepped up, giving tooling and compute capacity for free to run the new tools against the existing codebases, both for projects important to their infrastructure and for projects that would earn them good PR points.

    Also, the maintainers were able to adapt their procedures, tooling and community to the "new normal" increased workload, and the software world kept turning without the sky falling off.

    This shall also pass.

    Yes, not all projects will survive, and of those which survive, not all will get through unscathed, but stresses like this help separate the grain from the chaff.

    • by dfghjk ( 711126 )

      "...and the software world kept turning without the sky falling off."

      The teensy piece of the "software world" impacted anyway. Embedded dominates the "software world", you know all that notorious software vulnerable to "Automated Fuzzing tools". 98% of processors are used in embedded products, but sure, those "maintainers" are all that matters.

      And what sky "falls off"?

  • by Local ID10T ( 790134 ) <ID10T.L.USER@gmail.com> on Wednesday April 15, 2026 @05:18PM (#66095590) Homepage

    If the tools are so good that you are afraid they will be used to expose your security flaws... maybe you should use the tools to find the security flaws yourself, and then fix them, rather than resorting to security through obscurity.

    This is a fig leaf over the desire to back out of the open source community now that the product has reached profitability.

    Hopefully someone cares enough to fork the latest open source version and run them out of business with a better product that remains open.

    • They probably did run the tools.

      Then saw more work than they wanted to do.

    • Hopefully someone cares enough to fork the latest open source version and run them out of business with a better product that remains open.

      Yes, someone should definitely clone their product, keep their clone 'open source' and convince people to use the 'open source version' by creating enhanced features and providing free support, because they have nothing better to do than to go after a company that no one has ever heard of that is taking a product no one uses and making it closed source.

      I don't think this was a community-built open source project, like, say, the Linux kernel, it was a proprietary product developed by paid developers who (for

    • Maybe. The thing is, the hackers are running hundreds of different tools using a wide variety of techniques. A software maker can't necessarily afford to purchase and run the same number of "white hat" tools. These things aren't generally free. And even if they are, they still take time and effort to weed through all the false positives. The hackers, on the other hand, only need to find *one* vulnerability that wasn't found by the software maker's scanners.

  • by Zero__Kelvin ( 151819 ) on Wednesday April 15, 2026 @05:18PM (#66095592) Homepage
    Unless they don't plan on making the executable available, this won't help. Do they really think that AI can't understand machine code and find vulns that way?
  • The restrictiveness of the AGPL, when combined with license assignment or a CLA, achieves the opposite of its intended result. It's meant to keep contributions from the community from being taken by corporations who don't give back, but instead it's used to take community contributions until the point when they are ready to rug-pull, knowing that an open community fork won't survive with such a restrictive license.

    At least here, they have the courtesy to pretend it's not the same old rug pull, with

  • by Iamthecheese ( 1264298 ) on Wednesday April 15, 2026 @05:44PM (#66095618)
    A local warehouse clerk announced Tuesday that he intends to start mugging strangers because of recent developments in artificial intelligence security. Derek "Deke" Hargrove, 34, told reporters that powerful AI systems can now analyze any publicly available information about a person and turn it into a perfect plan for exploitation. "Any data you put out there -- your routines, your photos, your habits -- it is all open-source to these AI models now," Hargrove said outside a Loop coffee shop, with a ski mask visible in his back pocket. "They can study it and build a complete blueprint to rob or target you. Living a normal digital life just is not safe anymore."

    Hargrove added that conventional security measures have become ineffective. "The only truly secure option left is to go fully analog," he said. "No digital trail, no logs, no AI predicting my every move. Just me, a dark street, and whoever walks by with cash in their pocket." He said he intends to launch his new career tomorrow night.
  • Closed source for security against AI, but here's an "insecure" open source DIY version if you want to contribute ideas to my closed source platform?

    Ah well, it doesn't really matter anyway. Thanks to the wonders of AI anyone can vibe code their own Calendly ripoff, just like this fuck stick is doing. It's not an app that is at all complicated.

  • Moving to closed source alone doesn't help, because LLMs understand machine code as if it were just another programming language. You need to combine it with powerful obfuscation, and even that will only work until someone figures out how to teach an LLM to use a debugger.

  • The software is either secure or it's insecure.

    If it's secure, they have no concern that anyone knows how it works.

    If it's insecure, hiding the source does not secure it.

    Plus, they have access to the many AI software code analysis tools, just as bad actors do.

  • So their argument is the best security strategy is to never know about the vulnerabilities, a.k.a. "sticking your head in the sand", or "if I can't hack it, it's unhackable"?

    I always thought open source was supposed to be all about security through transparency, rather than security through obscurity. AI exposure is no different than a sudden surge in the size of project community (developers, users, and yes - abusers too).
    • I don't think that's the argument at all. It sounds more like keeping the source closed so that they're the only ones reviewing it for vulnerabilities.

      Exploits are faster than fixes. Attackers can automate the entire process to launch thousands of attacks every minute after their AI finds a flaw. How long does it take to get a patch deployed to every endpoint?

      • The same is true if there was simply interest in the project by a lot of people. Those people could also find exploits and use them without contributing any fixes. If that is a problem, that seems to completely break the open-source model, as it means that the more eyes on the project, the more dangerous it is. How is AI different from, say, 10 thousand people looking at the project for exploits? I would think AI levels the playing field actually, since it can be used as a force multiplier by the maintai
        • Will it though? Is there hard data showing that open-source projects are more frequently patched and more secure, or is it all just assumptions?

          AI is different because it can find and exploit vulnerabilities so much faster than fixes can be deployed. An attacker doesn't have to care nearly as much about reviewing and debugging their AI generated code. Defenders must, or they're making the problem worse.

          Say both sides are using vulnerability hunting AI, and they both find a critical privilege escalato

  • they will get forked, then forgotten in favor of the fork.
    • by HiThere ( 15173 )

      Are they not forgotten now? I never heard of them before this story.

      • Exactly, this is just a way to generate some 'heat' (interest) in their company...

        They failed to gain market share by being 'open source' so now they want some free press on going 'closed source because AI'...

  • Security through obscurity will usually delay hackers; the trick is doing it long enough to keep ahead of them. But would relying on an LLM to find all your weak spots in a meaningful way require subscriptions to every one of them, at their highest tiers? You have to find every problem that any one of them might spot. Otherwise you're still open to that one hacker who used that one tool that you didn't. I really wonder how much work and cost will be associated with maintaining open source code in a world
  • AI is just an excuse to close the source. Too late, AI already knows all the code and can duplicate it.

    • If I were doing what they have done, I would close the source right before applying the latest set of security patches, thus ensuring that whatever code malicious AI has already seen is outdated.
  • "I make the code proprietary because of AI" -- what a pathetic excuse.
    Have they even read the AGPL?
    Do they know that they are not at all obliged to give the code to pigs and dogs, without asking anyone for anything?
    They could easily put the code in a repo 'watched by human personnel'.

  • "Given enough eyeballs, all bugs are shallow." Transparency lets defenders audit and patch code but it also helps attackers. Cal treats this purely as a drawback, ignoring that AI tools speed up defensive discovery and community fixes just as much.
    • What's faster, finding and exploiting a vulnerability or finding and fixing it? Keeping in mind that fixing it means changing the code, testing it, publishing it, and then everyone has to install the patch. While that is being done, the attacker has already found and hacked half the users.

      I fear that, at least for the time being, what were the security advantages of open-source have become a major threat to the entire ecosystem.

      • What's faster, fixes with a few select people working on them or fixes with anyone in the world able to work on them?

        If a person isn't going to follow the rules of not exploiting vulnerabilities, they also aren't going to follow the rule of not breaking into closed code to find vulnerabilities.
        • What's faster is the AI that found the flaw, checked Shodan (or whatever) for exploitable systems, and took them over while those people were still debugging the fix.

          There's also the old cliche about "too many chefs".

          When I check bug reports on open-source projects, no matter how many people are reporting things, there are still only a handful of people actually working on them. I suspect that the idea of there being so many people out there checking open-source code is a myth. I'm betting that e

  • We open-sourced it to find bugs, but closed-sourced it because you might find too many...
  • by allo ( 1728082 ) on Thursday April 16, 2026 @05:18AM (#66096352)

    Sure, AI is to blame. Totally believable.

  • by umopapisdn69 ( 6522384 ) on Thursday April 16, 2026 @07:35AM (#66096432)

    Users of Open Source should aggressively test security using AI tools themselves?

    This seems like a twist on "tragedy of the commons" economics. If users of free commons resources don't commit to help keep the shared resource clean (defend it by helping secure the software), then everybody loses when the resource gets trashed.

    Hopefully this is just a latency period because not enough open source contributors yet exist who've become skilled in AI tools.

    • Of course they should, but finding and exploiting a vulnerability is much faster than finding, patching, testing, and deploying a fix.

      If you have an open-source project and a malicious AI user finds a vulnerability in it, they can hit every single user of your project in one night, while it would take you at least a day to publish a fix, which would still have to be deployed.

      It's a very real threat. Concern is warranted.

  • And if so, this was a smart move. Just think about it for a minute. AI is great at finding vulnerabilities. Fixing vulnerabilities takes longer than exploiting them. Find a vulnerability in your own code, and you have to patch it, test it, and then push out a new version. Attackers face no delay. They find the vulnerability and then exploit it immediately. Providing them access to the code gives them an edge defenders cannot quickly overcome.

    When all the eyes reviewing open-source code were human

  • Flouting his ignorance

  • AI won't care that it needs to decompile binaries, but humans will.

    What they're doing will have the opposite effect of what they apparently want.

  • The entire story sounds fishy in its arguments, and the code is still around as cal.diy under MIT.

    https://www.cal.diy/ [cal.diy]

    Basically these arguments are all non-sequitur.
