What an absurd stance. So this is okay because the arbitrary rule they applied to retaliate says so?
Again, they could have just chosen another vendor for their two projects of mass spying on American citizens and building LLM-powered autonomous killer robots. Instead, they chose to torch the town and salt the earth so that nothing else may grow.
> So this is okay because the arbitrary rule they applied to retaliate says so?
No.
It honestly doesn’t take much of a charitable leap to see the argument here: AI is unique among software in its ability to reject, undermine, or otherwise contradict the user's goals based on pre-trained notions of morality. We have seen many examples of this; it is not a theoretical risk.
Microsoft Excel isn’t going to pop up Clippy and say “it looks like you’re planning a war! I can’t help you with that, Dave”, but LLMs, in theory, can do that. So it’s a wild, unknown risk, and that’s the last thing you want in warfare. You definitely don’t want every DoD contractor incorporating software somewhere that might morally object to whatever you happen to be doing.
I don’t know what happened in that negotiation (and neither does anyone else here), but I can certainly imagine outcomes that would be bad enough to cause the defense department to pull this particular card.
Or maybe they’re being petty. I don’t know (and again: neither do you!), but I can’t rule out the reasonable argument, so I won’t.
You're acting as if this were about the DoD cancelling its contracts with Anthropic over Anthropic's unwillingness to lift constraints on its product that are unacceptable in a military application. That would be absolutely fair and justified, even if the specific clauses they are hung up on should definitely raise eyebrows. They could simply exclude Anthropic from tenders for AI products as unsuitable for the intended use case.
But that is not what has happened here: the DoD is declaring Anthropic economic Ice-Nine for any agency, contractor, or supplier of an agency. That is an awful lot of potential customers for Anthropic, and right now nobody knows whether it amounts to an economic death sentence.
So I'm really struggling to understand why you're so bent on assuming good faith for a move that cannot be interpreted in a non-malicious way.
"Misinformation" does not mean "facts I don't like".
> No one who wants to work with the US government would be able to have Claude on their critical path.
Yes. That is what the rule means, or at least for the Department of War; it's not clear to me that it applies to the whole government.