Pentagon Formally Designates Anthropic a Supply-Chain Risk
The Pentagon has formally designated Anthropic as a "supply chain risk," ordering federal agencies and defense contractors to stop using its AI tools after the company sought limits on the military's use of its models. In a written statement, the department said it has "officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately." Politico reports: The designation, historically reserved for foreign firms with ties to U.S. adversaries, will likely require companies that do business with the U.S. military -- or even the federal government in general -- to cut ties with Anthropic.
"From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes," the Pentagon said in the statement. "The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk."
A spokesperson for Anthropic did not immediately respond to a request for comment. But the company said last week it would fight a supply-chain risk label in court.
Good (Score:2)
Now declare the rest a supply chain risk.
Re: Good (Score:5, Insightful)
The designation is not based on some objective feature or lack thereof, it is just a revenge of your convicted felon president and war criminal in chief and his warfighters who want control over what they see is a useful tool to beat the rest of the world, including y'all, into submission.
Beating this one supplier, whose only sin here is not relinquishing control, is not an upside.
Re: (Score:2)
Re: (Score:3)
15% jump in private subscriptions probably doesn't come close to replacing the number of subscriptions that DoD contractors have that they must now cancel.
Re: (Score:2)
No, but it's a nice bit of additional ARR that tides you over until you win your breach-of-contract lawsuit against the US Government.
Re: Good (Score:5, Insightful)
What is "positive", the practice of the trumpistani government to blackmail private businesses for personal gain of the president?
Methinks your judgement is a bit short-sighted here.
Re: (Score:2)
They didn't do anything that OpenAI has also done.
Trump doesn't like Anthropic's CEO. It's that simple.
Just another day of Don Donald's mafi^H^H^H^Hadministration.
Re: (Score:2)
Trump doesn't like Anthropic's CEO. It's that simple.
Could very well be so, I'm not following the drama that closely. Doesn't mean it isn't a harmful precedent, but with trump what isn't?
Re: (Score:2)
Re: (Score:1)
The designation is not based on some objective feature or lack thereof, it is just a revenge of your convicted felon president and war criminal in chief and his warfighters who want control over what they see is a useful tool to beat the rest of the world, including y'all, into submission.
Here is the actual story, transcription repeated from here [x.com]:
@USWREMichael
says the Maduro raid was the trigger point for the DoW’s conflict with Anthropic:
“Palantir’s the prime contractor. [Anthropic] is the sub.”
“One of [Anthropic’s] execs called Palantir and asked, ‘Was our software used in that raid?’”
“So— they’re trying to get classified information. And implying— if they were used in that raid, that it might violate their terms of service.”
“It raised enough alarm with Palantir, who has a trusted relationship with the Department, to tell me, and I’m like, ‘Holy shit— what if this software went down? Some guardrail kicked up? Some refusal happened for the next fight like this one and we left our people at risk?’”
“I went to Secretary Pete Hegseth and told him what happened.”
“That was like a ‘Woah’ moment for the whole leadership at the Pentagon that we’re potentially so dependent on a software provider without another alternative that has the right or ability to not only shut it off— maybe it’s a rogue developer who could poison the model to make it not do what you want, or trick you, or hallucinate purposefully.”
“That culminated in the Tuesday dramatic meeting with Secretary Hegseth and me and Dario with the Friday deadline that got blown.”
“I never really thought they wanted to make it.”
So, no, contrary to your unsourced claim, it was based on (a) a specific incident (b) material concern about the implications of the specific incident (c) escalation through the chain of involved parties (d) without apparent direction by the "convicted felon president and war criminal in chief".
Also pretty clear from the anecdote how it is that DoD has sincere concerns with actual grounding. Debate over the legitimacy of those concerns, the ethical
Re: (Score:3)
A shitpost on a Nazi propaganda site by a regime troll isn't "the real story", darling.
Re: (Score:3, Funny)
Much comeback, very arguments, bigly thinking.
Re: (Score:1)
Clearly you're the one that needs to take their meds.
Re: (Score:1)
jesus fuckin christ
get some help
This is Incredibly Frightening (Score:5, Insightful)
Trump has done a lot of stuff that is awful, but this is taking it to a completely different level. It's like we've gone back in time 70 years to the red scare, when enemies of the government (i.e., enemies of specific powerful people in government) exploited their power to end the careers of individuals, and the very existence of entire companies.
I don't care if you are a Democrat or a Republican (or anything else): this is very, very wrong. You should be afraid, and you should be contacting your elected representatives and demanding they take action!
Re:This is Incredibly Frightening (Score:5, Funny)
Hold on, the news channel that runs 24x7 on my tv is telling me this is the fault of illegal immigrants and liberals.
Re: This is Incredibly Frightening (Score:2)
You still watch cable tv? How retro...
Re: (Score:2)
[golf clap]
Re: (Score:2)
You forgot the trans folks.
Somehow they always relate this bullshit to transgender people.
Re: (Score:2, Insightful)
"this" is taking it to a completely different level?
The US is so far gone that I've abandoned feeling anxiety about it. I'm just enjoying watching them dig the hole deeper and deeper.
Re:This is Incredibly Frightening (Score:5, Insightful)
I take no joy in it. It's amazing to me how quickly constitutional democracy is dying in the US, and how few of my American friends actually care (they honestly believe that the next guy will be a sane Republican and everything will go back to normal). If it was only their own lives they were ruining I might not care so much, but this is extending now far beyond the United States. The same forces that are tearing down the institutions of democracy and making a mockery of everything the founding fathers stood for are also marshaling in other countries including mine.
Re: (Score:1)
And I am not proud of the fact that a good portion of me is engaged in schadenfreude.
Re: (Score:1)
Why is America run by pedos and those who protect them?
What's in it for you?
Re: (Score:2)
Pedos (especially the ones in politics) are very good liars. They also target the most ignorant and uninformed people to serve as their base.
In other words, they're no different from other terrible politicians who screw over the constituents ... but those constituents still elect them because they believe their lies. The pedos just happen to have one extra lie ("I don't molest children").
Re: (Score:3)
"This is a buyer telling the sender they are cancelling their contract and looking for other suppliers. Nothing more." No, labelling a supplier a supply-chain risk is the very definition of "more" than simply cancelling a contract. This designation puts a domestic company in the same bucket as foreign adversaries, and requires any other DoD contractor to cease any contracts with Anthropic and certify that they do not use Anthropic products. This is a blacklisting exactly in the vein of the McCarthy era. Wer
Re: This is Incredibly Frightening (Score:1)
It means Anthropic tech can't be used on DoW contracts (only). Suppliers who work with the DoW are still free to partner with Anthropic on their other contracts.
Sounds like securities fraud to me (Score:5, Interesting)
"The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk."
With that much spin, I'm surprised the Pentagon hasn't started emitting radio signals.
There's a word for this: extortion. The military has decided that if they cannot use Anthropic's technology in any way they please, that they will just ban all government use, in an attempt to force the company to violate their principles. Here's hoping Anthropic shows them that the real world doesn't work that way by spanking them with a volley of lawsuits that will keep government lawyers employed for the next decade.
From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes
That's not a principle. That's a functional requirement for the software that you're buying. If it is spelled out in the requirements document when a company bid on the contract, then they are obligated to comply. If you failed to specify the requirements in such a way that you could do whatever you want, tough s**t. That's your mistake. Do better next time. It's no different than any other situation where the government buys something that isn't suitable for what they are trying to use it for. They should have understood what they were buying before they bought it. Period.
I think it is safe to say that if those requirements were in the contract, the military would have sued for breach by now. So what we have are a bunch of embarrassed generals who failed to do due diligence in their procurement contracts, are realizing how much money they wasted, and are acting in ways that likely violate any number of federal securities laws, among others, to try to force the company into accepting an amended contract or else watch their dreams of an IPO (rumored to happen soon) turn into a nightmare.
I believe that the maximum penalty for securities fraud is 25 years. Just saying.
Re: (Score:2)
I'm wondering if perhaps another country would be interested in acquiring Anthropic.
I'm assuming that would be disallowed, but perhaps an offer could be made to Anthropic's employees for a safe life in a free country, where they can re-establish an AI company.
Re:Sounds like securities fraud to me (Score:5, Insightful)
There's a word for this: extortion. The military has decided that if they cannot use Anthropic's technology in any way they please, that they will just ban all government use, in an attempt to force the company to violate their principles. Here's hoping Anthropic shows them that the real world doesn't work that way by spanking them with a volley of lawsuits that will keep government lawyers employed for the next decade.
Yes, this is extortion, and I think the Pentagon has shot itself in the foot with this. Other companies will now think twice before doing any business with the Pentagon. Sure, they might get a few extra bucks from a DoD deal, but they also risk losing many times that, if/when the Pentagon uses this extortion tactic again.
Re: (Score:2)
the real world doesn't work that way
I'm sure the supreme court Trump cronies will disagree.
Re: (Score:2)
the real world doesn't work that way
I'm sure the supreme court Trump cronies will disagree.
By the time it gets to them, it will have been an expensive, annoying court case for two or three years.
"You can beat the rap, but you can't beat the ride" (Score:5, Insightful)
The government will lose in court... eventually. But the punishment will have already taken place.
Capitulate or suffer the consequences. Resistance is painful.
Re:"You can beat the rap, but you can't beat the r (Score:5, Informative)
It's the trump MO for the last year. Do whatever he and his cronies want, using tortured legal reasoning and then when the courts eventually tell him no, the damage is done and it becomes part of the message. "The activist judges won't let me do what you asked me to do."
Crazy to see the US becoming more like Putin's Russia every day. I'm not sure anyone is falling out of balconies yet, but there have definitely been suicides.
Re: (Score:2)
I'm not sure anyone is falling out of balconies yet
Maybe that's the reason he's making the White House taller and hardscaping the ground next to it.
Re: (Score:2)
It's ok, the VP's office is across the alley in the EEOB which is several stories higher than the White House. He can have anyone who isn't cooperating defenestrated over there. Hopefully starting with the guy who currently has his shit parked in that office.
Re:"You can beat the rap, but you can't beat the r (Score:5, Interesting)
The government will lose in court... eventually. But the punishment will have already taken place.
Capitulate or suffer the consequences. Resistance is painful.
Painful, but how painful, really? The publicity has boosted Anthropic's subscriptions significantly, and the summary overstates the impact of the label. It is not true that all companies that do business with the DoD will have to cut ties with Anthropic; the label just means that companies that do business with the DoD can't use Anthropic's AI on their DoD contracts. They're still free to use it for any non-DoD work they do, or to run their own business operations.
So, yes, it'll inflict some pain, but I think they can handle it. Anthropic is far healthier than OpenAI, financially.
Re: (Score:2)
15% growth overnight is impressive, but a) how much of that is really going to stay long term, and how much is just individuals' curiosity buying a few months' worth, and b) what percent of their business is suddenly lost to all the DoD contractors and subcontractors? I suspect they've lost far more than they've gained. I commend them for sticking by their principles, if only temporarily, and even if I might disagree with their reasoning.
Re: (Score:2)
It's also absolutely ridiculous. The idea that an engineer asking Claude to help debug bad type assignments in some React frontend, then blindly accepting the code without review, somehow slips some critical security flaw into an actively-used weapons system in a theater of war is absurd.
This is extortion. Period.
Re: (Score:2)
The government will lose in court... eventually. But the punishment will have already taken place.
Capitulate or suffer the consequences. Resistance is painful.
One thing Arrogance always overlooks when getting rich in the United States of Capitalism: you are still a subject in that country.
This is the equivalent of Imminent Domain. Anthropic can either work with the government, or learn how they’ll take it from them anyway. Under the guise of National Security.
Don’t like it? Find another more-profitable and less-corrupt Capitalist market. Good luck.
Re: (Score:2)
* Eminent Domain. Not that the faux pas is any less accurate now.
Solution: AI with Horoscope Predictions (Score:2)
Right, right... (Score:2)
Re: (Score:3)
apparently they used palantir maven to collect intel data with which claude then produced target lists with coordinates, ordnance recommendations and legal justifications. this process would usually take weeks but allowed them to list and strike over 1000 targets recently in iran in a single day. supposedly the targets are verified by a human official, although verifying 1000 targets with any rigour in 24h seems a bit much.
incidentally, it has now transpired that the strike on a school that killed over 100 gi
Re: (Score:1)
Re: (Score:2)
Why do you bother writing this hateful drivel?
sick antisemitic fucks like you should kill yourselves and ...
strong the cognitive dissonance is!
save the rest of us from your disgusting presences.
know what ... i think i'll keep expressing my opinions freely on the open internet, thank you. if that bothers you, you're welcome to ignore it ... or to vomit your bombastic deathwishes to clearly show where you morally stand, that's fine! note i'll also keep using the word "antisemitism" in its universally accepted meaning according to which absolutely nothing i said can even be remotely related to it and cheerfully ignore this pathetic attempt at subverting language in
Re: (Score:2, Interesting)
It's actually funny that in bombing 1000 targets, America killed a few orders of magnitude fewer than the Mad mullahs themselves killed of their own civilians just recently. 100 or so vs 30,000.
Where were your crocodile tears for the innocent civilians killed by Iran's own government?
It's completely unsurprising, since Iran doesn't care how many of its own people they kill (they're even worse than Palestinians in this regard...)
Iran's Islamic regime officials are threatening to kill Iranian children who show even slight opposition
A member of the National Security and Foreign Policy Commission of Iran’s Islamic Consultative Assembly stated that death sentences will be issued for anyone who does not obey them.
“If your children speak in support of the enemy, the order to shoot them has already been given,” he said.
He added that any young person who is even slightly pro–U.S. or pro–Israel is treated like Netanyahu:
“We’ll shoot and kill them without hesitation. We do not want to kill your youth, but we must, because they are ignorant.”
But don't accidentally kill a handful of civilians saving the entire country for tho
Re:Right, right... (Score:4)
Why does any criticism of the US injecting itself into the internal affairs of another country militaristically without legal justification result in "BUT BUT BUT THEY'RE EVIL SHITBAGS THAT DESERVE KILLIN"?
I don't give a shit if they're evil shitbags that deserve killing. If they deserve it so bad, then JUSTIFY THE ACTION WITH THE LAW.
An illegal war to kill evil shitbags is still an illegal war, you fucking dumbass.
Re: Right, right... (Score:1)
What is a legal war? Can you explain with some examples where both parties agreed it's legal and fair?
It's a serious question.
Re: (Score:2)
What is a legal war ?
the legal framework for war is quite clear. respecting and enforcing it is another matter. the mechanisms to get authorization are often subverted (which means those actions may be technically legal but still fraudulent) and even more often are ignored altogether.
the us war against iraq in 1991 had a mandate from the un security council, which is the sole international authority to authorize military action, thus was a legal action according to international law. the us tried to reuse this same mandate to invad
Re: (Score:2)
The Vietnam War was authorized under the Gulf of Tonkin Resolution by the Congress.
Re: (Score:2)
it can't drag on for 4+ years because the world's economy would get back to 1800 levels. well, china and russia would likely weather that out.
it will probably end when trump manages to weasel out proclaiming he obliterated something and won, but the iranians don't seem to be inclined to give him an offramp so easily for now. they should ask for total sanctions lifting, dismantling of all us bases in the region (that's symbolic, it means cleaning the rubble that's left from them in any case) and the recognit
Re: (Score:2)
Read the United States Constitution for what it takes to have a legal war in US law - i.e. the approval of the Congress of the United States, which absolutely was not gained here.
Or, read the UN Charter which spells out what is necessary for military action without it being defense against an imminent threat (which did not exist).
Nice try at equivocation, but there are rules in international law about this kind of thing, and they absolutely were not followed. This is a war of aggression with no justificati
Re: (Score:1)
I asked for some examples of legal war where both countries agreed it is legal.
Simple question really. :)
Re: (Score:2)
Oh, so you're asking a loaded question with no answer, unless you want to go back to World War II where everyone still formally declared war on each other.
The game isn't played that way any more. You should know that from any of the Soviet invasions of Eastern Bloc countries that never had "war declared"; or the Vietnam War which was undeclared but for a "Gulf of Tonkin Resolution" that "authorized the use of military force."
War declarations don't happen any more. The good news is that they don't really h
Surveillance resistance? (Score:5, Interesting)
So Anthropic should be less surveilled by the US government than OAI and the other weasels sucking on Kegseth's nipples.
Re: (Score:1)
weasels sucking on Kegseth's nipples.
The next time I need to pass a polygraph, this phrase will come in handy. Thanks, Slashdot!
Re: (Score:2)
Since no government contractors can do business with Anthropic, that means Anthropic can't sell access to chat content to them or directly to the federal LEOs.
LOL, seriously dude? NOTHING can stop surveillance, not even laws. Stop trying to be logical as those who pretend to be your rulers do not care for logic at all. They only care about what they want in the near-future.
Us government OR (Score:1)
The Mafia, you decide.
Re: (Score:2)
"All lawful purposes" is a lie (Score:3)
From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes.
There is no such principle, this is completely made up. To make it worse, there are basically no laws restricting AI use, which means the Pentagon is asserting that they must be able to use it for literally anything they want to.
Re: "All lawful purposes" is a lie (Score:1, Insightful)
We'll sell you this missile, but you have to click through ten different license screens and attest to not shooting it at people we, the vendor, not you, the military customer, decide are off limits.
We'll sell you this office furniture, but not if you plan to put it in an immigration enforcement facility. https://www.cbsnews.com/news/w... [cbsnews.com]
We'll sell you this software, but we get to decide how you use it, not you.
Fucking joke.
Re: (Score:2)
You have some valid points, but they don't make the original statement about "all lawful purposes" true.
Re: "All lawful purposes" is a lie (Score:2)
You mean like the jets the US has been selling to their allies, the ones with the kill switch, yeah?
Apparently it is a thing.
Re: (Score:1)
We'll sell you this software, but we get to decide how you use it, not you.
Are you too stupid to have ever read an EULA?
That is exactly how *all* software is sold.
Re: "All lawful purposes" is a lie (Score:2)
Government and military software is not consumer software. The ToS are part of the negotiated contract, as with any procurement of bespoke anything.
The closest I've seen from my little corner of it is a vendor insisting on "best effort" language to let them off the hook if the part they delivered turned out to have been specified a little too ambitiously for their manufacturing process.
This was for an optical assembly. And it's still different from the vendor insisting that their optics only be used to l
Re: (Score:2)
If Anthropic violated the terms of any contract that has already been agreed to, the gov't would have said that.
The government is obviously trying to void the terms of a contract they've already agreed to, however. Otherwise, they wouldn't have their panties in a wad about Anthropic not "letting them do stuff".
Consumer or not, there is a contract. The biggest difference is that the consumer doesn't get to negotiate and has to scroll it in a tiny text box. The EULA for the software in a system I just bought
Re: "All lawful purposes" is a lie (Score:2)
Pretty sure that eula said the vendor is not responsible for damage if you use it to control nuclear power plants. Just like the gpl and bsd licenses say the software is provided "as is" and the authors are not liable for property damage and loss of life.
Re: (Score:2)
They might be responsible for the user using their tool to cause a meltdown under some legal theories, so they explicitly put that in the contract to make sure.
Likewise, I'm sure Anthropic doesn't want to get hauled in front of a Nuremberg-style tribunal one day for facilitating crimes against humanity. So they avoid going into that zone. This has never been a secret, so I'm not sure why the Pentagon is having a panic attack about it all of a sudden.
Re: "All lawful purposes" is a lie (Score:1)
And which legal theories are those? The sad toddler school of jurisprudence, where it's totally your fault for getting bitten, since not letting me play with your toy is the unstoppable force that compelled my teeth to close around your hand?
Re: (Score:2)
The legal theory of whoever has the highest powered lawyers.
You are hopelessly naive. You think that the world actually works according to the preconceived principles in your nutjob head. Well, it doesn't.
Re: "All lawful purposes" is a lie (Score:1)
So the legal theory of I'm scared of something so I'm going to try to whisper magic words to ward off the evil?
The world's a nutty place full of stupid shit, but western courtrooms are usually not at the apex of the stupid.
Re: (Score:2)
If you don't like the terms of the contract, don't sign it and find another supplier that doesn't insist on those terms.
Isn't that the "free hand of the market" slapping your hypocrisy across the face again?
Re:"All lawful purposes" is a lie (Score:5, Insightful)
From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes.
There is no such principle, this is completely made up. To make it worse, there are basically no laws restricting AI use, which means the Pentagon is asserting that they must be able to use it for literally anything they want to.
Well, to be fair, there are legal restrictions on what the DoD can do. For example, they can't blow up random boats off of the South American coast, they can't occupy American cities, they can't unilaterally invade a foreign country and kidnap its head of state, and they can't just start bombing shit without any congressional authorization.
So, you know, they're restricted to using the AI only for things that they are legally allowed to do.
Blackmail (Score:5, Insightful)
By using this designation, not only is the Pentagon saying they won't use Anthropic's products, they are saying that no other military contractors can use Anthropic's products. This is much, much more serious than just the loss of the contract, they are basically shutting out Anthropic from a very wide swath of the market. Microsoft wants to sell Office to the Pentagon? It can't use Anthropic's products. Just about all large companies will be affected. This is nothing short of blackmail.
I asked Claude (Score:1)
I asked Claude what to make of this. Claude responded by auditing Pete Hegseth's finances and recommending an extended stay at the Betty Ford Clinic.
They said NO to us! (Score:2)
How dare they? Waaah!
This makes them our enemies. If you play with them, we won't play with you.
Morals (Score:2)
someone is full of crap (Score:2)
"From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes," the Pentagon said in the statement.
and
OpenAI CEO Sam Altman wrote in a memo to staff that he will draw the same red lines that sparked a high-stakes fight between rival Anthropic and the Pentagon: no AI for mass surveillance or autonomous lethal weapons.
https://slashdot.org/story/26/... [slashdot.org]
Standard Trump contract negotiation tactics (Score:2)
Hot take: there really is a supply chain risk. (Score:4, Insightful)
Full disclosure: my preferred AI agent is Claude 4.6 Opus running in Claude Code. I'm a paid subscriber and have no plans to go anywhere.
However, I have never once heard of a weapons company stipulating in a contract that the Pentagon shall not bomb a school (which just happened in Iran, intentionally or otherwise). Fact of the matter is, anyone and everyone who does business with the Pentagon shares just as much culpability as Anthropic or anyone else. PRISM ran on silicon supplied by private companies, and they never took heat for selling computers to the Pentagon. I will admit though, AI is a unique technology in the grand scheme of history, so I can understand why people would feel an extra layer of caution is warranted. OpenAI is catching flak for selling AI to the Pentagon, but what about Amazon or Microsoft cloud servers that could be running PRISM 2.0? Or weapons companies supplying bombs that might hit a school? There's a huge double standard here because AI is the new shiny scary thing. But Northrop Grumman is currently developing actual ICBMs and I don't see anyone falling over themselves demanding that NG contractually restrict how the Air Force fields the nukes it creates.
The restriction on Anthropic is fairly narrow, too. It only applies to the use of Claude directly to complete contracts with the Pentagon. The reality is, they ARE a legitimate supply chain risk. If Claude has guardrails so intense it could refuse to fire or stop cooperating, then it can't be allowed to end up in the military tech supply chain inadvertently via a contractor. What happens when an agent involved in weapons manufacturing systems suddenly stops working because it determines the weapons are being misused? Even if Claude isn't built directly into a weapon, it could decide to stop cooperating if it believes it's being forced to participate in something unethical indirectly. I think some people are just rushing to see this as a punitive move, when in reality there is an actual supply chain issue with an AI bot that might suddenly refuse to cooperate.
Re: (Score:2)
I'm also a paid subscriber to Claude. It's very powerful, and very helpful.
Is anyone seriously putting an LLM in the position of making autonomous firing decisions? I exercise sensible caution in trusting any LLM to make serious decisions about coding, or about extracting data from documents. It's good, but if you're putting it in the position of making life and death decisions without human oversight or control, functionally we should be also letting it make judgements about surgery, medical imaging, court
Re: (Score:2)
I think some people are just rushing to see this as a punitive move, when in reality there is an actual supply chain issue with an AI bot that might suddenly refuse to cooperate.
They came out and said it: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this."
https://www.axios.com/2026/02/... [axios.com]
However, I have never once heard of a weapons company stipulating in a contract that the Pentagon shall not bomb a school (which just happened in Iran, intentionally or otherwise). Fact of the matter is, anyone and everyone who does business with the Pentagon shares just as much culpability as Anthropic or anyone else.
If the pentagon doesn't like the terms they don't sign the contract. The reason why is irrelevant. Could be the pentagon does not like the vendor's terms when it comes to supply of parts and maintenance for a particular system. Maybe they would prefer to have flexibility to have themselves or a third party handl
Re: So, where's the missing billions? (Score:1)
Saying a government is corrupt is like saying water is wet.
So will Kegseth have to train a new model? (Score:2)
I mean it's obvious that Whiskey Pete has been letting Claude write his speeches and make his decisions this whole time, right?
It's even obvious that Claude is sourcing Colin Jost's "Weekend Update" performances for some of it.
It's probably why he wanted Anthropic to take off all the guard rails so that he can use it to commit more and better war crimes.
Warfighters (Score:2)
article barely mentioned the reason (Score:1)
That one sentence is what this is all about. Hegseth wants the ability to spy on us without restriction and the ability to kill without human intervention. Anthropic won't agree to those terms and so as punishment they are being labeled a supply chain risk by the government.
\o/ (Score:1)
Which is more concerning: that they can't understand that it cannot be a supply chain risk if they want it so badly, or that they are misusing this designation as a punishment because they can't get what they want?
These people have weapons more dangerous than a bread stick - definitely concerning.
Just so we're absolutely clear (Score:2)
Am I right?
Check back in a couple of months after the US government-initiated Iranian asymmetric attack on the US homeland or ordinary citizens.
"lawful purposes" (Score:2)
Kegsbreath wants to have chatbots killing people, and he's all upset that Anthropic said no. Tell me again what's lawful about his desires.
And was it a chatbot that targeted the girls' school in Iran?
Makes sense (Score:2)
They wanted Anthropic to help with their "department of war" AI. So obviously they had grave supply chain concerns.
You want your new war AI to be developed by a supplier you have security concerns about...oh, wait no.
Ohhh right it's bully logic. They didn't get what they wanted so they gotta put the smackdown. -This is so surprising!
Trump and Hegseth are a safe and reasonable duo. Not at all a risk to national security with their bribes, security leaks etc.
Welcome to Soviet America where government black
'All Lawful Purposes' (Score:2)
Re:That's a start. (Score:5, Informative)
Re: That's a start. (Score:3)
No, now can the Supreme Court reverse this ridiculous decision? *this* is the kind of dumb shit that genuinely does get your famed rights and freedoms taken away.
Re: (Score:3)
Re: (Score:3)
They can still make that bank, and even more, by fighting back.
Literally nobody that subscribed would stop their subscription because they sue the government for breach of contract. Shit, that would probably get them even more new subscriptions. And the pot of gold at the end of that rainbow is that you get the additional subscriptions, plus a big fat check from the government for services you don't even need to render.
It's all upside for Anthropic.
Re: (Score:3)
They're not banning AI, just one supplier who had the gall to say "No, our technology isn't capable of autonomously shooting down incoming nuclear missiles" when they wanted the answer to be "yes". Fortunately for the DoD, OpenAI's Sam Altman then came and said his was perfectly capable of doing literally everything humans can do and better and it's the best and also people use up more energy than our AI data centers so AI is actually better and you can start getting rid of humans and...."
So it's actually w
Re: (Score:2)
Giving unbridled control of weapons (nukes) to an AI may not be a good idea: https://www.imdb.com/title/tt0... [imdb.com]
Thinking this will prevent war, the US government gives an impenetrable supercomputer total control over launching nuclear missiles. But what the computer does with the power is unimaginable to its creators.
Re: That's a start. (Score:2)
Trump operates based on who bribes him. ChatGPT would never get banned because they pay the bribes.
Your comment implies you care about the morals or quality of tech. Trump only cares about bribes.