Anthropic attempted to spin this as being against mass surveillance and autonomous weapons, but over months of negotiations they apparently also tried to prevent their AI from being used for all kinds of Department of War use cases. Not just autonomous weapons, which are the future of war: they also wanted to block their model from being used even in the planning stages of any strike, and for any data collection.

The question is, why are they suing so they can sell to the Department of War when they clearly do not want the department to use their software for anything it does? And now they are seen as a risk even to the department's suppliers, since those same terms of service can interrupt supply chains and software providers. Imagine Anthropic cuts off a gun manufacturer that was using their AI for quality control, and suddenly the supply chain stops. Or they modify the model to detect usage that violates their terms of service and simply refuse to work.

If Anthropic truly wanted to provide services to the government, they would agree to the "all lawful use cases" terms and be done with this, rather than trying to control the government itself. Congress can decide what we use these things for. If I sold hammers and didn't want them used to build weapons of war, I just wouldn't sell the hammers, not take the profit and then hamstring the buyer so the hammer can only be used for the things I approve of.