Most-upvoted comments of the last 24 hours.

The disconnect here for me is, I assume the DoW and Anthropic signed a contract at some point and that contract most likely stipulated that these are the things they can do and these are the things they can't do.

I would assume the original terms the DoW is now railing against were in those original contracts that they signed. In that case it looks like the DoW is acting in bad faith here: they signed the original contract and agreed to those terms, then went back and said no, you need to remove those safeguards, to which Anthropic is (rightly so) saying no.

Am I missing something here?

EDIT: Re-reading Dario's post[1] from this morning, I'm not missing anything. Those use cases were never part of the original contracts:

> Two such use cases have never been included in our contracts with the Department of War

So yeah, this seems pretty cut and dried. The DoW signed a contract with Anthropic and agreed to those terms. Then they decided to go back and renege on those original terms, to which Anthropic said no. Then they promptly threw a temper tantrum on social media and designated Anthropic a supply chain risk in retaliation.

My final opinion on this is that Dario and Anthropic are in the right and the DoW is acting in bad faith by trying to alter the terms of their original contracts. And this doesn't even take into consideration the moral and ethical implications.

[1]: https://www.anthropic.com/news/statement-department-of-war


I don't see how OpenAI employees who have signed the We Will Not Be Divided letter can continue their employment there in light of this. Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement. The only plausible explanation is that there is an understanding that OpenAI will not, in practice, enforce the red lines.

Respectfully, it's very hard to see how anyone could look at what just happened and conclude that one company ends up classed a "supply chain risk" while another agrees to the same terms that led to that. Either the terms are looser, they're not going to be enforced, or there's another reason for the loud attempt to blacklist Anthropic. It's very difficult to take this at face value in any case. If it is loose terms or a wink agreement not to check in on enforcement, you're never going to be told that. We can imagine other scenarios where the stated terms were not the real reason for the blacklisting, but it's a real struggle (at least for me) to find an explanation for this deal that doesn't paint OpenAI in a very ethically questionable light.

It's a waste of your effort to apply rational argument to the actions of a group that are in it for a shakedown.

I admire Anthropic for sticking to their principles, even if it affects the bottom line. That’s the kind of company you want to work for.

So they are such a risk to national security that no contractor that works with the federal government may use them, but they're going to keep using them for six more months? So I guess our national security is significantly at risk for the next six months?

Wow, and the only restrictions Anthropic asked for are (1) no mass domestic surveillance and (2) require human-in-the-loop for killing [1]. Those seem exceptionally reasonable, and even rather weak, lol :|

[1] https://www.anthropic.com/news/statement-department-of-war


There's an obvious theme with lawmakers in California: they pass laws to regulate things they have zero clue about, add them to their achievement page, cheer for themselves, and declare, "There! I've made the world a better place." There are just too many examples. For instance:

- Microstamping requirements for guns: imprinting a unique code on every fired bullet casing (the Glock gen3 cannot be retired, and thus the auto-mode switch bug cannot be patched...)

- 3D printers should have a magical algorithm to recognize all gun parts in their tiny embedded systems

- Now, you need to verify your age... on your microwave?

At this rate, California should just go back to the Stone Age. Modern technology is simply not compatible with clueless politicians who are more eager to virtue-signal than to solve any actual problems, or even to bother studying the subject of the laws they pass. There will be more and more technology restrictions (or outright bans) in California because it's becoming impossible to operate anything here without getting sued or running afoul of some overreaching regulation.


Folks saying this offer is in bad faith or not generous enough don't seem to understand how low the bar is here for rewarding maintainers.

I maintain Express.js and Lodash, as well as a number of express direct deps (as a TC member of both Express and Lodash).

OSS has been my full-time focus for over a year (aka I'm unemployed). In 2025 I made $10 from open source, in the form of an Amazon gift card for fixing a bug in another random open source project (I think they have VC money).

Call it a skill issue on my part, sure, valid. But having a form that says “give us your email and handle, we can easily verify your contributions, and in exchange you get $200/month of value and we ask nothing of you” is the most generous gift I've seen.

Is it enough to fix the well-known power dynamics of OSS? Of course not. Is it cheap PR for Anthropic? Yes, as is every other corporate OSS fund initiative. I'm not going to give them a standing ovation and a key to the city because they cleared the extremely low bar.

My point is that, regardless of motives, from this maintainer's perspective this is a kind offer which is respectful of me and my time. If you fall into the camp that training on OSS is stealing, I can see why you'd think this is a slap in the face. I personally do not see it that way, as my work is a conduit for me to serve millions I'll never meet, and what they do with my labor is not a personal concern. I do what I do because the process itself has value to me.


Posting this here as a top-level comment, as many people asked why boycott just OpenAI:

-----

OpenAI is the least trustworthy of the big LLM providers. See S(c)am Altman's track record, especially his early comments in Senate hearings where:

* he warned of engagement-optimisation strategies, like social media, being used for chatbots / LLMs.

* also, he warned that "ads would be the last resort" for LLM companies.

Both of his own warnings he has casually ignored, as ChatGPT / OpenAI has now fully converted to Facebook's tactics of "move fast and break things", even if the thing broken is society itself. A complete turn away from the AI-for-science lab it was founded as, which explains why every real (founding) ML scientist left the company years ago.

While still being for-profit outfits, at least DeepMind and Anthropic are headed by actual scientists, not marketing guys. For me at least, that brings some confidence in their intentions, as scientists often seek knowledge, not power for power's sake.


"They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security." from Dario's statement (https://www.anthropic.com/news/statement-department-of-war)

I used to work at Anthropic, and I wrote a comment on a thread earlier this week about Anthropic's first response and the RSP update [1][2].

I think many people on HN have a cynical reaction to Anthropic's actions because of their own lived experiences with tech companies. Sometimes that holds: my part of the company looked like Meta or Stripe, and it's hard not to regress to the mean as you scale. But not every pattern repeats, and the Anthropic of today is still driven by people who will risk losing a seat at the table to make principled decisions.

I do not think this is a calculated ploy driven by making money. I think the decision was made because the people making it at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI go well.

[1]: https://news.ycombinator.com/item?id=47174423

[2]: https://news.ycombinator.com/item?id=47149908


The president of peace btw.

I'm baffled at the lack of calls to boycott the Fifa world cup in US.

And at the double standards applied to Russians and Israelis in their wars of aggression.

I guess Israel can play the "October 7th" card at least which was an insane horror.


IMO this looks largely like another circular investment. Amazon's investment is tied to OpenAI using AWS for their Frontier product, and I assume Nvidia's condition is that OpenAI continue buying hardware from them. Then there's SoftBank, though given that those are the same guys who invested heavily in WeWork, I assume this is just very brash bullishness on their part.

From my perspective, I hope that OpenAI survives and can pull off their IPO, but I just have that nagging feeling in my gut that their IPO will be rejected in much the same way that the WeWork IPO was.

On the one hand you can look at these companies investing and take it as a signal that there is something there (in OpenAI) that's worth investing in. On the other hand all these companies that are investing are basically getting that investment back through spending commitments and such and are just using OpenAI as a proxy for what is essentially buying more revenue for themselves.

When their IPO hits later this year I hope that it's the former case and there's actually some good underlying fundamentals to invest in. But based on everything I've read, my gut is telling me they will eventually implode under the weight of their business model and spending commitments.


1.5 hours after this was posted, Sam Altman stated OpenAI will work with the DoW.

So much for this waste of a domain name. https://x.com/sama/status/2027578652477821175

"Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. "


The take-home message from this is that the only way for any country to be secure is to have nuclear weapons.

> After giving them a fair shot, I think I can now honestly say that Brave and DuckDuckGo are better than Google for >90% of searches

I've had DuckDuckGo as my primary search engine for years, and I couldn't disagree with this more. DuckDuckGo is fine for quickly getting to well-known sites when I can't remember the URL, but it's objectively worse for finding everything from Reddit threads to recipes. Its depth of indexing for sites like Reddit feels dramatically worse lately, and recipe searches predictably give me the same list of SEO-spam blogs regardless of what I type in.

DuckDuckGo also seems to be doing the YouTube search thing that everyone hates where after the first several results it just starts throwing semi-related things at you instead.

I still add "!g" to my DuckDuckGo queries when I don't have time to mess around or if the first page of results is obvious SEO spam.

The other main point in this blog post isn't really about Google at all; it's just what happened when the author set up a new e-mail address and didn't sign up for a lot of sites with it:

> Leaving Gmail also gave me the opportunity to start implementing better digital hygiene. I no longer give my primary email to fly-by-night sites, and I'm deliberate with what things I'm signing up for.

I thought there was going to be some substance to this post but it reads like someone congratulating themselves for a choice they made and then trying to backwards justify it.


Reaction 1: how would this even work with embedded systems that have no UI to input this data?

Reaction 2: it's open source; make the lawmakers submit the changes themselves.

Reaction 3: how would this ever be enforced? Would they outlaw downloading distributions, or even older versions of distributions? When there's no exchange of money, a law like this seems like it would be suppression of free speech.

Reaction 4: someone needs to maliciously comply, in advance, on all California government systems. Shut down the phones, the Wi-Fi, the building access systems, their web servers, data centers, alarm systems, payroll, stop lights: everything running any operating system. Get everyone to do it on the same day as an OS boycott, and don't turn things back on until the law is repealed.


Anthropic's stance here is admirable. If nothing else, their acknowledgement of not being able to predict how these powerful technologies can be abused is a bold and intelligent position to take.

The writeup here[1] was pretty clear to me.

> *Isn’t it unreasonable for Anthropic to suddenly set terms in their contract?* The terms were in the original contract, which the Pentagon agreed to. It’s the Pentagon who’s trying to break the original contract and unilaterally change the terms, not Anthropic.

> *Doesn’t the Pentagon have a right to sign or not sign any contract they choose?* Yes. Anthropic is the one saying that the Pentagon shouldn’t work with them if it doesn’t want to. The Pentagon is the one trying to force Anthropic to sign the new contract.

[1]: https://www.astralcodexten.com/p/the-pentagon-threatens-anth...


This has much broader implications for the US economy and rule of law in the US.

If government procurement rules intended for national security risks can be abused as a way to punish Anthropic for perceived lack of loyalty, why not any other company that displeases the administration like Apple or Amazon?

This marks an important turning point for the US.


> it's very hard to see how anyone could look at what just happened

I think what you are missing is their annual comp with two commas in it.


Stay strong Anthropic. We just like you more for this.

> The warrants included a search through all of her photos, videos, emails, text messages, and location data over a two-month period, as well as a time-unlimited search for 26 keywords, including words as broad as “bike,” “assault,” “celebration,” and “right,” that allowed police to comb through years of Armendariz’s private and sensitive data—all supposedly to look for evidence related to the alleged simple assault.

That's an insane overreaction and overreach. There's some quotes from officers during the protests that are particularly troubling, too.

The article links directly to the ruling: https://www.ca10.uscourts.gov/sites/ca10/files/opinions/0101...

I wonder how the sergeant and judge who approved these searches feel. If they take their jobs seriously, I hope they are more critical of search warrant applications in the future.


This might actually make Anthropic very popular among those who do not support the current US presidency, a significant market share.

Department of Defense: You just bombed the wrong Georgia! The people of Atlanta are furious!

ChatGPT: You're absolutely right, and you're right to call that out. Upon examination it does appear that there might have been a mistake with the coordinates of the bomb. Let's try again, this time we will double check before we launch any missiles! :missile emoji:


The people that need to see this are the VPs and execs at Apple, Meta, Google, OAI so they can perhaps reflect on what it looks like to be a good & principled person as opposed to just a successful person.

My lived experience with tech companies is that principles are easy when they're free - i.e., when you're telling others what to do, or taking principled stances when a competitor is not breathing down your neck.

So, with all respect, when someone tells me that the people they worked with were well-intentioned and driven by values, I take it with a grain of salt. Been there, said the same things, and then when the company needed to make tough calls, it all fell apart.

However, in this instance, it does seem that Anthropic is walking away from money. I think that, in itself, is a pretty strong signal that you might be right.


The administration's approach to contracts, agreements, treaties and so on could be summed up as 'I am altering the deal. Pray I do not alter it further.'

The basic problem in our polity is that we've collectively transferred the guilty pleasure of aligning with a charismatic villain in fiction to doing the same in real life. The top echelons of our government are occupied by celebrities and influencers whose expertise is in performance rather than policy. For years now they've leaned into the aesthetics of being bad guys: performative cruelty, committing fictional atrocities, and so forth. Some MAGA influencers have even adopted the Imperial iconography from Star Wars as a means of differentiating themselves from the liberal/democratic adoption of the 'rebel' iconography. So you have influencers like conservative entrepreneur Alex Muse, who styles his online presence as an Imperial stormtrooper. As Poe's law observes, at some point the ironic/sarcastic frame becomes obsolete, and you get political proxies and members of the administration arguing for actual infringements of civil liberties, war crimes, violations of the Constitution, and so on.


Given that SLS is the part of Artemis that has actually been shown to work, and Starship is the part that is nowhere near schedule and doesn't work, it's very funny to suggest that NASA should learn from SpaceX and not the other way around.

SpaceX hasn't even had the confidence to put Starship in LEO yet, and it has not carried 1 kg of real payload (barely a few kg of test payloads), while SLS did an orbit of the Moon with real payload satellites.

