Comment Re:Looks like panic to me (Score 1) 74

True. And it does not look like they have even a snowflake's chance in hell of ever getting to profitability without some major breakthrough. And even with that, they will have collapsed long before. The numbers for the competition do not look that much better, though; it is just far more obvious for OpenAI.

The whole idea of general LLMs is massively overhyped and cannot deliver on the hype. Large players (Google, Microsoft, potentially Nvidia) may survive because they have enough reserves and other revenue, but not even that is assured.

Comment Re:The Horse is Already Gone (Score 3, Insightful) 19

Quantum hardware may never be up to the task. They cannot even factorize 35 at this time (https://eprint.iacr.org/2025/1237). The whole thing is a mirage and a bad idea that refuses to die.

Incidentally, even if they ever become able to do tasks of meaningful size, QCs are completely unsuitable for reversing hashes, and that is what cracking passwords requires.
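To put some numbers on that: hash preimage search is brute force either way. Classically it costs about 2^n hash evaluations for an n-bit digest; Grover's algorithm only improves that to roughly 2^(n/2), so a 256-bit hash still needs ~2^128 quantum operations. A minimal sketch (my own toy illustration, truncating SHA-256 to a searchable size):

```python
import hashlib

def find_preimage(target_hex: str, bits: int, max_tries: int):
    """Brute-force a preimage whose truncated SHA-256 matches the target.

    Classically this takes O(2^bits) hashes on average; Grover's algorithm
    would only cut that to O(2^(bits/2)), so real 256-bit hashes stay out
    of reach even for a working quantum computer.
    """
    nbytes = bits // 8
    target = bytes.fromhex(target_hex)[:nbytes]
    for i in range(max_tries):
        if hashlib.sha256(str(i).encode()).digest()[:nbytes] == target:
            return i
    return None

# Toy demo: a 16-bit truncated digest falls in ~2^16 tries.
target = hashlib.sha256(b"12345").hexdigest()
match = find_preimage(target, 16, 1 << 20)
```

At 16 bits this finishes in a fraction of a second; every extra bit doubles the classical work, which is the whole point.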

Comment Re:Right outcome, wrong reasons (Score 0) 57

And it is common practice, and has been for a long time. If you want to do business with the government and you can't certify that your suppliers comply with applicable rules and regulations, you either stop using them or give up the business opportunity. Welcome to the Federal Procurement Process.

It's a hostage situation because Anthropic is trying to insert its TOS as a poison pill into others' supply chains. The Pentagon doesn't have to comply with them, but as a potential vendor, you may be exposed to tortious action. Anthropic is setting you up as a blackmail victim. Something, by the way, that counterintelligence is VERY interested in.

You are living in bizarro land.

I've been living in the DoD (now the DoW) supplier business for decades. And yes, it's bizarro land. But it's the law. Federal contracts are not some sort of UBI for crybaby companies.

Comment Right outcome, wrong reasons (Score 0) 57

Not 'punishment'. But 'not fit for use'. That is, in fact, what Anthropic says.

Anthropic says its artificial intelligence product, Claude, is not ready for safe use in fully autonomous lethal weapons or the mass surveillance of Americans.

OK. Then you don't win the bid. Assuming that the DoW worded their acquisition RFQ properly. Also, if a third party uses Claude and wishes to bid on a DoW supply contract, Anthropic's resistance to being involved in such business may put that potential third-party supplier at legal risk. The DoW has a right to proactively warn future partners about such a conflict. Hence the "supply chain risk".

One of the amicus briefs described these measures as "attempted corporate murder." They might not be murder, but the evidence shows that they would cripple Anthropic.

Anthropic is taking potentially unwilling parties hostage. Anthropic has no right to impose its desires on these parties. That's restraint of trade: a violation of the Sherman Antitrust Act, and a felony.
