Our team of ~8 (pentesting & VA) was unanimous that Copilot is crap and Claude is top dog, so some higher-ups OK'd a Claude Teams package for work. To bypass CorpSec, we use it from our lab environment, which has its own unmonitored link and IP range.
Anthropic/Claude is just so far ahead of OpenAI/ChatGPT and MS/Copilot it's not funny.
Was anyone harmed by this error?
Good question. Seeing as the government doesn't send apology letters to everyone whose data it "accidentally" hoovered up and potentially abused in any number of inventive and worrying ways, how are those people supposed to know what harm they may have suffered as a result?
Regardless, they've all been harmed statutorily as there was no probable cause for any warrant to issue for their data. The Constitution doesn't have a "no harm, no foul" clause.
I saw an interesting post a few years ago whose thesis was that Starbucks isn't a coffee company; it's a poorly regulated bank masquerading as a gift card company, which happens to own some coffee shops on the side. Someone broke down the company's public reports to argue that a large share of its income derives from investing the money customers pre-load onto gift cards (whether they ever spend it or not). The amount of cash that Starbucks holds "on deposit" through gift cards rivals the assets of some larger banks. I wish I could find the post again.
Nothing happens.