I used to work at Anthropic, and I wrote a comment on a thread earlier this week about the RSP update [1]. It's heartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.
Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving their goals. I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)
That doesn't mean that I always agree with their decisions, and it doesn't mean that Anthropic is a perfect company. Many groups that are driven by ideals have still committed horrible acts.
But I do think that most people making the important decisions at Anthropic are well-intentioned, driven by values, and genuinely motivated by trying to make the transition to powerful AI go well.
The disconnect here for me: I assume the DoW and Anthropic signed a contract at some point, and that contract most likely stipulated what each side can and can't do.
I would assume the terms the DoW is now railing against were in those original contracts that they signed. In that case the DoW is acting in bad faith here: they signed the original contract and agreed to those terms, then went back and said no, you need to remove those safeguards, to which Anthropic is (rightly) saying no.
Am I missing something here?
EDIT: Re-reading Dario's post[1] from this morning, I'm not missing anything. Those use cases were never part of the original contracts:
> Two such use cases have never been included in our contracts with the Department of War
So yeah, this seems pretty cut and dried. The DoW signed a contract with Anthropic and agreed to those terms. Then they decided to go back and renege on those original terms, to which Anthropic said no. Then they promptly threw a temper tantrum on social media and designated Anthropic a supply chain risk in retaliation.
My final opinion: Dario and Anthropic are in the right, and the DoW is acting in bad faith by trying to alter the terms of their original contracts. And that doesn't even take into consideration the moral and ethical implications.
> Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are
After 20 years of everyone in this industry saying "we want to make the world a better place" and doing the opposite, the problem here is not really related to people's "understanding".
And before the default answer kicks in: this is not cynicism. Plenty of folks here on HN and elsewhere legitimately believe that it's possible to do good with tech. But a billion-dollar behemoth with great PR isn't that.
> They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.
This contradictory messaging puts to rest any doubt that this is a strong-arm attempt by the government to allow any use. I really like Anthropic's approach here, which is to state in turn that they're happy to help the government move off of Anthropic. It's a messaging ploy for sure, but it puts the ball in the current administration's court.
I don't see how OpenAI employees who have signed the We Will Not Be Divided letter can continue their employment there in light of this. Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement. The only plausible explanation is that there is an understanding that OpenAI will not, in practice, enforce the red lines.
So they are such a risk to national security that no contractor that works with the federal government may use them, but they're going to keep using them for six more months? So I guess our national security is significantly at risk for the next six months?
Martin from GitHub here. This type of behaviour is explicitly against the GitHub terms of service; when we catch accounts doing this we can (and do) take action against them, including banning the accounts. It's a game of whack-a-mole for sure, and to be honest it's not just start-ups that take part in this sketchy behaviour. I've seen plenty of examples in my time, across the board.
The fundamental nature of Git makes this pretty easy for folks to scrape data from open source repositories. It's against our terms of service and those folks might want to talk with some lawyers about doing it - but as every Git commit contains your name and email address in the commit data it's not technically difficult even if it is unethical.
From the early days we've added features to help users anonymise their email addresses for commits posted to GitHub. Basically, you configure your local Git client to use your 'no-reply' email address in commits and that still links back to your GitHub account when you push: https://docs.github.com/en/account-and-profile/reference/ema...
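For anyone who hasn't set this up, it's just a couple of git config commands. The address below is illustrative; the exact no-reply address to use (an ID, a plus sign, and your username) is shown in your GitHub email settings:

    # Use your GitHub no-reply address for all commits on this machine
    # (copy the exact address from Settings > Emails on GitHub)
    git config --global user.email "1234567+octocat@users.noreply.github.com"

    # Confirm which address new commits will record
    git config --global user.email

Note this only affects commits made from now on; addresses already baked into past commits stay in the history. There's also a "Block command line pushes that expose my email" option on the same settings page, which makes GitHub reject pushes that would leak your real address.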
I think that's still probably the best route. We want to keep open source data as open as possible, so I don't think locking down APIs etc. is the right route. We do throttle API requests and scraping traffic, but then again there have been plenty of posts here over the years from people annoyed at hitting those limits, so it's definitely a balancing act. I'd love to know what folks here think, though.
Wow, and the only restrictions Anthropic asked for are (1) no mass domestic surveillance and (2) require human-in-the-loop for killing [1]. Those seem exceptionally reasonable, and even rather weak, lol :|
I'm convinced that these "AI layoffs" are these companies trying to save face after the absurd overhiring they did in 2022 and 2023, when they apparently thought the no-interest loans and free money would last forever.
No one really "knows" how to grow businesses, so the easiest way to spend a lot of money quickly is hiring lots of people, whether or not they are "necessary". Then the free money dries up, interest rates go back up, and now they're stuck with all these employees they didn't actually need.
Some companies like Google and Microsoft just accepted that assholes like me will call their CEOs incompetent and fired lots of people in 2023, but I think other CEOs were kind of embarrassed and held off. Now they can use AI as a scapegoat and people won't act like they were idiots for hiring twice as many people as they needed.
Also, I got declined by Block a year ago. Now I'm glad I was.
There's an obvious theme with lawmakers in California—they pass laws to regulate things they have zero clue about, add them to their achievement page, cheer for themselves, and declare, "There! I've made the world a better place." There are just too many examples. For instance:
- Microstamping requirements for guns, i.e., imprinting a unique code on every bullet casing (Glock's gen3 cannot be retired as a result, so the auto-mode-switch bug cannot be patched...)
- 3D printers should have a magical algorithm to recognize all gun parts in their tiny embedded systems
- Now, you need to verify your age... on your microwave?
At this rate, California should just go back to the Stone Age. Modern technology is simply not compatible with clueless politicians who are more eager to virtue-signal than to solve actual problems, or even bother to study the subject of the laws they pass. There will be more and more technology restrictions (or outright bans) in California, because it's becoming impossible to operate anything here without getting sued or running afoul of some overreaching regulation.
Folks saying this offer is in bad faith or not generous enough don't seem to understand how low the bar is here for rewarding maintainers.
I maintain Express.js and Lodash, as well as a number of Express's direct dependencies (as a TC member of both Express and Lodash).
OSS has been my full-time focus for over a year (a.k.a. I'm unemployed). In 2025 I made $10 from open source, in the form of an Amazon gift card for fixing a bug in another random open source project (I think they have VC money).
Call it a skill issue on my part; sure, that's valid. But a form that says “give us your email and handle, we can easily verify your contributions, and in exchange you get $200/month of value and we ask nothing of you” is the most generous gift I've seen.
Is it enough to fix the well-known power dynamics of OSS? Of course not. Is it cheap PR for Anthropic? Yes, as is every other corporate OSS fund initiative. I'm not going to give them a standing ovation and a key to the city because they cleared the extremely low bar.
My point is that, regardless of motives, from this maintainer's perspective this is a kind offer that is respectful of me and my time. If you fall into the camp that training on OSS is stealing, I can see why you'd think this is a slap in the face. I personally do not see it that way, as my work is a conduit for me to serve millions I'll never meet, and what they do with my labor is not a personal concern. I do what I do because the process itself has value to me.
We'll see how much the AI aspect is true by whether they're thinning out teams equally, or just axing whole initiatives. My impression of Block was that it was mostly a one-trick pony (okay, two if you include CashApp) with a bunch of side initiatives that never seemed to pan out, so I'm expecting it to be more of the latter, with this being more of an admission that they're now in "maintenance mode".
Either way, I think this is how it's gonna be. Regardless of whether AI significantly increases productivity (40%? come on), layoffs will be preemptive. Executives will see the lack of a productivity boost as a lack of pressure, and imagine engineers are just using the AI to make their own lives easier rather than to work more efficiently. You can't really double output velocity because your users will see it as too much churn, so the only choice is to lay off half the workforce and double the workload for those who stay. "Necessity is the mother of invention." They'll overlook the fact that the work AI tools can take over only encompasses maybe 10% of your job; even if they were 100% efficient at that slice, the overall speedup caps out around 11% (Amdahl's law).
"They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security." from Dario's statement (https://www.anthropic.com/news/statement-department-of-war)
Google, OpenAI Employees Voice Support for Anthropic in Open Letter: We Will Not Be Divided (https://notdivided.org/)
-----
The Department of War is threatening to
- Invoke the Defense Production Act to force Anthropic to serve their model to the military and "tailor its model to the military's needs"
- Label the company a "supply chain risk"
All in retaliation for Anthropic sticking to their red lines to not allow their models to be used for domestic mass surveillance and autonomously killing people without human oversight.
The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused.
They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War.
We are the employees of Google and OpenAI, two of the top AI companies in the world.
We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
Respectfully, it's very hard to see how anyone could look at what just happened and conclude that one company ends up classed a "supply chain risk" while another agrees to the very terms that led to that designation. Either the terms are looser, they're not going to be enforced, or there's another reason for the loud attempt to blacklist Anthropic. It's very difficult to take this at face value in any case. If it is loose terms or a wink agreement not to check in on enforcement, you're never going to be told that. We can imagine other scenarios where the stated terms were not the real reason for the blacklisting, but it's a real struggle (at least for me) to find an explanation for this deal that doesn't paint OpenAI in a very ethically questionable light.
I used to work at Anthropic, and I wrote a comment on a thread earlier this week about Anthropic's first response and the RSP update [1][2].
I think many people on HN have a cynical reaction to Anthropic's actions because of their own lived experiences with tech companies. Sometimes that holds: my part of the company looked like Meta or Stripe, and it's hard not to regress to the mean as you scale. But not every pattern repeats, and the Anthropic of today is still driven by people who will risk losing a seat at the table to make principled decisions.
I do not think this is a calculated ploy driven by making money. I think the decision was made because the people making it at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI go well.
I was about halfway through when one line struck a nerve with me:
> But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.
So not today, but the door is open for this after AI systems have gathered enough "training data"?
Then I re-read the previous paragraph and realized it's specifically only criticizing
> AI-driven domestic mass surveillance
And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance
A real shame. I thought "Anthropic" was about being concerned about humans, and not "My people" vs. "Your people." But I suppose I should have expected all of this from a public statement about discussions with the Department of War
IMO this looks largely like another circular investment. Amazon's investment is tied to OpenAI using AWS for their Frontier product, and I assume Nvidia's condition is that OpenAI continue buying hardware from them. Then there's SoftBank, though given that those are the same guys who invested heavily in WeWork, I assume this is just very brash bullishness on their part.
From my perspective, I hope that OpenAI survives and can pull off their IPO, but I have that nagging feeling in my gut that their IPO will be rejected in much the same way the WeWork IPO was.
On the one hand you can look at these companies investing and take it as a signal that there is something there (in OpenAI) that's worth investing in. On the other hand all these companies that are investing are basically getting that investment back through spending commitments and such and are just using OpenAI as a proxy for what is essentially buying more revenue for themselves.
When their IPO hits later this year I hope that it's the former case and there's actually some good underlying fundamentals to invest in. But based on everything I've read, my gut is telling me they will eventually implode under the weight of their business model and spending commitments.
> After giving them a fair shot, I think I can now honestly say that Brave and DuckDuckGo are better than Google for >90% of searches
I've had DuckDuckGo as my primary search engine for years and I couldn't disagree with this more. DuckDuckGo is fine for quickly getting to well-known sites when I can't remember the URL, but it's objectively worse for finding everything from Reddit threads to recipes. Its depth of indexing for sites like Reddit feels dramatically worse lately, and recipe search will predictably give me the same list of SEO-spam blogs regardless of what I type in.
DuckDuckGo also seems to be doing the YouTube search thing that everyone hates where after the first several results it just starts throwing semi-related things at you instead.
I still add "!g" to my DuckDuckGo queries when I don't have time to mess around or if the first page of results is obvious SEO spam.
The other main point in this blog post isn't really about Google at all; it's just what happened when the author set up a new e-mail address and didn't sign up for a lot of sites with it:
> Leaving Gmail also gave me the opportunity to start implementing better digital hygiene. I no longer give my primary email to fly-by-night sites, and I'm deliberate with what things I'm signing up for.
I thought there was going to be some substance to this post but it reads like someone congratulating themselves for a choice they made and then trying to backwards justify it.
This is such a depressing read. What is becoming of the USA? Let's hope sanity prevails and the next election cycle can bring in some competent, non-grievance-based leadership.
> The central promise—that distributed digital fabrication would bring manufacturing back to America, that every city would have micro-factories, that 3D printing would decentralize production—simply didn’t materialize.
I never heard that. It didn't seem like 3D printing ever showed signs of displacing existing ways of manufacturing at scale, did it? Units per hour and dollars per unit were never its strengths. It was always going to be small things (and if anything big grew out of it, those would naturally transition to the more efficient manufacturing at scale).
Vibe coding, on the other hand, is competing against hand coding, and for many use cases is considerably more efficient. It’s clearly replacing a lot of hand coding.
BTW, I think a lot of people were/are greatly overestimating the value of coding to business success. It’s fungible from a macro perspective, so isn’t a moat by itself. There’s certainly a cost, but hardly the only one if you’re trying to be the next big startup (for that, the high cost of coding was useful — something to deter potential competitors; you’ll have to make up the difference in some other way now).
Also, software is something that already scaled really well in the way businesses need it to — code written once, whether by human or LLM, can be executed billions of times for almost nothing. Companies will be happy to have a way to press down the budget of a cost center, but the delta won’t make or break that many businesses.
As always, the people selling pick-axes during the gold rush will probably do the best.
Anthropic's stance here is admirable. If nothing else, their acknowledgement of not being able to predict how these powerful technologies can be abused is a bold and intelligent position to take.
> *Isn’t it unreasonable for Anthropic to suddenly set terms in their contract?* The terms were in the original contract, which the Pentagon agreed to. It’s the Pentagon who’s trying to break the original contract and unilaterally change the terms, not Anthropic.
> *Doesn’t the Pentagon have a right to sign or not sign any contract they choose?* Yes. Anthropic is the one saying that the Pentagon shouldn’t work with them if it doesn’t want to. The Pentagon is the one trying to force Anthropic to sign the new contract.
[1]: https://news.ycombinator.com/item?id=47145963#47149908