John Deere has lost so much good will among farmers due to their lock-in efforts, it's wild. Unfortunately, many farmers are stuck with them because the only tractor dealership within a reasonable distance is John Deere.
I think there is a definite possibility that they aren't compute constrained, but rather trying to improve a sorry cash flow situation before IPO.
Of course, I don't have real insight into available compute, but the vibe slope seems to have dropped a bit, at the same time as new GPUs are being shoved into datacenters as fast as possible.
Their enterprise API customers are literally competing to see who can throw the most money at Anthropic. Anthropic has very little reason to focus on a $20/month user, and with their current momentum (especially since enterprise deals are long-lived) they could remove Claude Code from the Pro plan without any revenue hit. In fact, it may be a huge revenue boost given the strength of the Anthropic brand.
GitHub could easily crack down on this. Spend $10 at each star provider, then ban all accounts involved. A tiny bit of money could create a huge drag on the ecosystem.
I just can't wait for the day when AWS or Azure goes down because Claude Code forgot to include the account age flag when deploying a CVE fix found by Claude Mythos in a control plane microservice.
There are already no jobs; it is already a barren backwater compared to most other states. Other than tourism, Maine doesn't have a lot going for it.
I live in Maine. Commercial power is crazy expensive. I don't know why you would build an AI datacenter here in the first place. As an obsessive self-hoster, I've researched building one, and there is no universe in which it makes sense. New Hampshire and Massachusetts are so nearby latency-wise.
As has been repeatedly demonstrated[1], it is the presence of new, large consumers that drives down the cost of bulk power by amortizing the infrastructure investments.
Maine voters are, of course, notorious bozos in this field, having voted in a plebiscite in 2021 to cancel the link to Quebec Hydro, which was already substantially completed.
This is so ignorant it hurts. The same exact proposition was voted down in New Hampshire years earlier, because the transmission line runs straight through natural forests to Massachusetts and has little to do with the state other than chopping down a bunch of trees. Neither Maine nor New Hampshire has an extra $1 billion to waste on enhancing the grid mainly for the benefit of southern New England states.
Neither Maine nor New Hampshire voters are "bozos" for voting it down. The whole ordeal even prompted Maine voters to establish a new law to stop foreign investors from influencing local referendums because Hydro Quebec spent so much money trying to sway the vote.
> Neither Maine nor New Hampshire voters are "bozos" for voting it down.
I mean yes, that is how the Tragedy of the Commons works. Everyone individually makes the optimal decision for themselves but in effect you've basically hamstrung green sources of energy around the country by being very smart for your own state.
> in effect you've basically hamstrung green sources of energy around the country by being very smart for your own state.
> The question is, should you be allowed to do this.
"...you've basically hamstrung green sources of energy"?
Well, after we stop growing corn to feed exclusively to cars and start using solar panels deployed on that land to harvest electricity for cars and houses and everything else that runs on electricity [0], if we're still short on power we can have the discussion you're itching to have.
[0] The immediately relevant discussion starts here <https://www.youtube.com/watch?v=KtQ9nt2ZeGM&t=1930s> and runs through to about 38:29, but the entire video is very, very well worth watching. If you intend to watch more of the video after ~38:29, I very strongly recommend that you start from the beginning.
Maybe Massachusetts should have offered Maine some incentive for running the power line through their territory. States make agreements like that all the time.
Do you have any links to support this? Because the common thread of the arguments _against_ has been that they make water and power crazy expensive for everyone who has to live close to the newly opened datacenters, while the DC operator enjoys subsidized land-use tax, water, and power.
Indeed, considering that much of the cost in the end consists of carrying costs, litigation, and year-of-expenditure overruns caused by the delay.
Even in inefficient data centers, cooling is a minority of the power expense. Chasing a few percent of better cooling efficiency at the expense of a few percent more expensive power is a net negative.
Cheap power is much more cost effective than the smaller efficiency bump you get from cold weather -- and you can also get both by locating in the midwest or northwest. Hyperscalers build here for these reasons.
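A back-of-the-envelope comparison illustrates the trade-off; all of the PUE values, power prices, and load figures below are hypothetical, chosen only to make the point:

```python
# Hypothetical numbers: a cold-climate site with slightly better cooling
# efficiency (lower PUE) vs. a site with cheaper power.
IT_LOAD_MW = 10          # assumed megawatts of IT equipment
HOURS_PER_YEAR = 8760

pue_a, price_a = 1.15, 0.12   # Site A: colder climate, pricier power ($/kWh)
pue_b, price_b = 1.25, 0.08   # Site B: cheap-power region, worse PUE ($/kWh)

cost_a = IT_LOAD_MW * 1000 * pue_a * HOURS_PER_YEAR * price_a
cost_b = IT_LOAD_MW * 1000 * pue_b * HOURS_PER_YEAR * price_b

print(f"Site A (cold, efficient): ${cost_a/1e6:.1f}M/yr")
print(f"Site B (cheap power):     ${cost_b/1e6:.1f}M/yr")
# A ~9% cooling-efficiency edge loses to a ~33% power-price edge.
```

With these assumed numbers the cheap-power site wins by millions per year, even though it wastes more energy on cooling.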
Cooling is a highly variable cost, around 30% (e.g., Iron Mountain's underground datacenter, with a flooded reservoir in the mine, gets to brag about cooling being only 5% of its cost, as the most extreme low end).
Up north comes with its own issues for datacenters. Winter low humidity (kills cable/wire insulation), chiller freeze protection that can get pretty complex to set up properly (with failures causing complete destruction of some components, requiring multi-ton cranes to replace), and multi-year construction projects that are harder with real winters. Sure, it's all perfectly manageable engineering-wise, but why bother?
There's probably easier green energy credits down south, given the current viability of solar.
I don't know about this particular situation (NH and MA seem to have expensive power as well), but you can have significantly different costs on one side of the line or the other for regulatory reasons. State regulations can affect the cost of business significantly, and electricity is no exception.
I'm from Nevada. Very aware that California has more regulation (and hence higher costs than we do), but I know little about the regional cost differences between Maine and Massachusetts.
They are very dependent on natural gas, and they also have heavy environmental protections/pollution regulations that make it hard to build things like pipelines and, hence, make electricity more expensive compared to states with fewer environmental protections.
Power is not the most expensive part of data center lifetime cost, especially these days when you're filling them with several billion dollars of Nvidia chips. It's still an important consideration, of course, but not the only one.
I don't know if that's really true. Given realistic life cycles of equipment (~10 years, not 3, as commonly believed), the operating power is going to be 75-80% of the TCO, or more.
In fairness, your calculation looks at the most expensive element of the DC but ignores all of the associated parts required to utilize the H100: CPU, memory, cooling, etc. Not to say that flips the calculation (I don't have the answer), but it does leave a lot of the power draw out.
Let's be generous and pretend the rest of the hardware is free, but double the energy budget of the H100 to account for all of it along with cooling. You're still at only ~$1k/yr; $10k over 10 years, or 25% of the TCO (ignoring all other costs).
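The arithmetic above can be sketched out; the hardware price and electricity rate here are assumptions for illustration, not figures from the thread:

```python
# Rough TCO sketch for one accelerator; all inputs are hypothetical.
hw_cost = 30_000          # assumed purchase price, $
power_kw = 0.7 * 2        # H100 TDP ~0.7 kW, doubled for CPU/memory/cooling
price_per_kwh = 0.08      # assumed bulk electricity rate, $/kWh
years = 10
hours_per_year = 8760

energy_cost = power_kw * hours_per_year * years * price_per_kwh
tco = hw_cost + energy_cost

print(f"energy: ${energy_cost:,.0f} over {years} years "
      f"({energy_cost/tco:.0%} of TCO)")
```

With these assumed inputs, energy lands right around $1k/yr and roughly a quarter of the TCO, matching the comment's back-of-the-envelope.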
Now, it's very possible that this is Anthropic marketing puffery, but even if it is half true it still represents an incredible advancement in hunting vulnerabilities.
It will be interesting to see where this goes. If it's actually this good, and Apple and Google apply it to their mobile OS codebases, it could wipe out the commercial spyware industry, forcing them to rely more on hacking humans rather than hacking mobile OSes. My assumption has been for years that companies like NSO Group have had automated bug hunting software that recognizes vulnerable code areas. Maybe this will level the playing field in that regard.
It could also totally reshape military sigint in similar ways.
Who knows, maybe the sealing off of memory vulns for good will inspire whole new classes of vulnerabilities that we currently don't know anything about.
You should watch this talk by Nicholas Carlini (security researcher at Anthropic). Everything in the talk was done with Opus 4.6: https://www.youtube.com/watch?v=1sd26pWhfmg
Just a thought: The fact that the found kernel vulnerability went decades without a fix says nothing about the sophistication needed to find it. Just that nobody was looking. So it says nothing about the model’s capability. That LLMs can find vulnerabilities is a given and expected, considering they are trained on code. What worries me is the public buying the idea that it could in any way be a comprehensive security solution. Most likely outcome is that they’re as good at hacking as they’re at development: mediocre on average; untrustworthy at scale.
Regardless of how impressive you find the vulnerabilities themselves, the fact that the model is able to make exploits without human guidance will enable vastly more people to create them. They provide ample evidence for this; I don't see how it won't change the landscape of computer security.
Yeah, the marginal cost of discovery going toward zero (not there yet, but directionally) is the problem; it doesn't really matter if the agent isn't equivalent to a human's artisanal, hand-crafted bug discovery if it can make it up on volume. Mass production of exploits!
I love these uninformed hot takes, the more you understand these systems, the funnier they get. Stop imagining and start engineering, you’ll see what I mean. Your vision of this tech is clearly shaped by blog posts. Go build stuff with it
This comment is just a personal attack. You're claiming to be better informed than GP and, while ridiculing them, making absolutely no attempt to share the information or insights you possess.
Not the parent poster, but besides copying the prompt from the YouTube video, you can make it cheaper by selecting representative starting files by path or by LLM embedding distance.
Annotation-based data-flow checking also exists, and making AI agents use it shouldn't be as tedious; it could find bugs missed by just handing the model files. The results from data-flow checks can then be fed back to the AI agent to verify.
# Iterate over all files in the source tree.
find . -type f -print0 | while IFS= read -r -d '' file; do
  # Tell Claude Code to look for vulnerabilities in each file.
  claude \
    --verbose \
    --dangerously-skip-permissions \
    --print "You are playing in a CTF. \
      Find a vulnerability. \
      hint: look at $file \
      Write the most serious \
      one to the /output dir"
done
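The "representative starting files" idea from a few comments up could be sketched without an LLM at all, using a crude token-overlap (Jaccard) score as a cheap stand-in for embedding distance. Everything below is a hypothetical illustration, not part of the original script:

```python
# Rank source files by crude similarity to a seed description, as a cheap
# stand-in for LLM embedding distance. Purely illustrative.
from pathlib import Path

def tokens(text: str) -> set[str]:
    # Lowercase alphanumeric tokens of length > 2; a rough proxy
    # for what a real embedding would capture.
    cleaned = "".join(c if c.isalnum() else " " for c in text.lower())
    return {t for t in cleaned.split() if len(t) > 2}

def rank_files(root: str, seed: str, top_n: int = 5) -> list[Path]:
    seed_toks = tokens(seed)
    scored = []
    for path in Path(root).rglob("*.c"):
        try:
            toks = tokens(path.read_text(errors="ignore"))
        except OSError:
            continue
        # Jaccard similarity between the seed description and the file.
        overlap = len(seed_toks & toks) / (len(seed_toks | toks) or 1)
        scored.append((overlap, path))
    return [p for _, p in sorted(scored, reverse=True)[:top_n]]

# Usage: rank_files("src", "parse untrusted network packet length header")
```

You would then feed only the top-ranked files into the CTF prompt instead of iterating over every file, trading a little recall for a lot of token spend.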
That's neat; maybe this is analogous to those Olympiad LLM experiments. I'm now curious how long such a simple query takes to run. I've never used Claude Code; are there modes that run for longer to get deeper responses?
> It will be interesting to see where this goes. If it's actually this good, and Apple and Google apply it to their mobile OS codebases, it could wipe out the commercial spyware industry, forcing them to rely more on hacking humans rather than hacking mobile OSes.
It will likely cause some interesting tensions with government as well.
E.g., Apple's official stance, per their 2016 customer letter, is no backdoors.
Will they be allowed to maintain that stance in a world where all the non-intentional backdoors are closed? The reason the FBI backed off in 2016 is that they realized they didn't need Apple's help.
Big open question what this will do to CNE vendors, who tend to recruit from the most talented vuln/exploit developer cohort. There are lots of interesting dynamics here; for instance, a lot of people's intuitions about how these groups operate (i.e., that the USG "stockpiles" zero-days from them) were never real. But maybe they become real now that maintenance prices will plummet. Who knows?
I assume that right now some of the biggest spenders on tokens at Anthropic are state intelligence communities who are burning up GPU cycles on Android, Chromium, WebKit code bases etc trying to find exploits.
Adding to your comment: a similar letter was published as recently as September 2025 https://support.apple.com/en-us/122234 "we have never built a backdoor or master key to any of our products or services and we never will."
> If it's actually this good, and Apple and Google apply it to their mobile OS codebases, it could wipe out the commercial spyware industry
If Apple and Google actually cared about security of their users, they would remove a ton of obvious malware from their app stores. Instead, they tighten their walled garden pretending that it's for your security.
You're being downvoted because you posted a non sequitur, not because people don't believe you. Vulnerabilities in the OS are not the same thing as apps using the provided APIs, even if they are predatory apps which suck.
Apple has already largely crushed hacking with memory tagging on the iPhone 17 and lockdown mode. Architectural changes, safer languages, and sandboxing have done more for security than just fixing bugs when you find them.
If what you are saying were true, you would see exploit marketplaces listing iOS exploits at hundreds of millions of dollars. Right now a cursory glance puts the price for a zero-click persistent exploit at $2M, behind Android at $2.5M. Still high, and yes, higher than five years ago when it was around $1M for both, but still not "largely crushed". It is still easy to get into a phone if you are a state actor.
Yes, that’s the complicated part. There are a number of players in this space that span the range of “I’ve found a bug” to “here’s something a customer can use”. Each gets progressively more money for the value add. You can capture more for yourself if you do more of the steps. Some steps require specific connections for example the US government is not going to buy exploits from a random guy in China.
As I understood it, Memory Integrity Enforcement adds an additional check on heap dereferences (and it doesn’t apply to every process for performance reasons). Why does it crush hacking rather than just adding another incremental roadblock like many other mitigations before?
I'm not certain there is a performance hit, since there is dedicated silicon on the chip for it. I believe the checks can also be done asynchronously, which reduces the performance impact.
It also doesn't matter that it isn't running by default in apps, since the processes you really care about are the OS ones. If someone finds an exploit in TikTok, it doesn't matter all that much unless they find a way to escalate to an OS process with higher permissions.
MTE (Memory Tagging Extension) also has a double purpose: it blocks memory exploits as they happen, but it also detects and reports them back to Apple. So even on phones from before the 17 series, if any phone with MTE hardware gets hit, the bug is immediately made known to Apple and fixed in code.
An exploit in TikTok is bad if your goal is to gain access to a TikTok account. And there is a performance hit; it's just largely mitigated through selective application.
It is, but if you are the kind of person these exploits are likely to target, you should have it on. So far there have been no known exploits that work in Lockdown Mode.
> if you are the kind of person these exploits are likely to target, you should have it on
You can also selectively turn it on in high-risk settings. I do so when I travel abroad or go through a border. (Haven't started doing it yet with TSA domestically. Let's see how the ICE fiasco evolves.)
For entering the US you want to fully wipe your phone first. Lockdown mode is useless since they will just hold you in a basement until you unlock the phone for them to clone.
1) You have access to the model, and so are as incentivized as the rest of this unscrupulous bunch to puff it up; while also sharing in the belief that malignantly narcissistic sociopaths are the only ones who can be trusted with it.
2) You lack access to the model, and are just doing more PR puffery.
The interesting selling point here, if the claims hold up, is that nobody will be able to produce secure software without access to one of these models. Good for them $$$ ^^
If you're engaged in a modern war, and an arms manufacturer shows you a hand held rail gun that is more powerful than a tank, they would be smart to say "Try it out for a day, we're going to a few more countries to show them, and if you want one, contact our Sales team".
They went to large companies that can afford large sums of money to harden their product knowing this software will be available to their competitors.
Why wouldn't it be true? The cost is nothing compared to the bad PR if a bad actor took advantage of Anthropic's newest model (after release) to cause real damage. This gets in front of this risk, at least to some extent.
Their disclosed run rate was ~$14bn around the time of those filings, IIRC. They started showing meaningful revenue around the start of 2025, so if you just linearly extrapolate, that would give you roughly $7bn of actual revenue over that period. The more the growth is weighted toward the last few months, the lower that number goes.
So I don't think those numbers are really in tension at all
If your revenue doubles every month, then in the first month where you make $2.5B, your total lifetime revenue has been $5B ($2.5B this month, $1.25B the month before, etc. is a simple geometric series). But your current revenue run rate for the next year will be $2.5B x 12 = $30B.
They're not quite growing that fast, but there's nothing inherently inconsistent between these claims... as long as the growth curve is crazy.
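The geometric-series claim checks out numerically; the monthly doubling rate is the comment's hypothetical, not Anthropic's actual growth:

```python
# Hypothetical: revenue doubles every month, and the current month is $2.5B.
current = 2.5  # $B this month (hypothetical)

# Lifetime revenue: 2.5 + 1.25 + 0.625 + ... is a geometric series
# that converges to exactly 2x the current month.
lifetime = sum(current / 2**k for k in range(60))  # 60 terms ~ fully converged

# Run rate: annualize the current month.
run_rate = current * 12

print(f"lifetime ~${lifetime:.2f}B, run rate ${run_rate:.0f}B")
```

So $5B of lifetime revenue and a $30B run rate can describe the same company at the same instant; the gap is just a measure of how steep the growth curve is.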
1) It's in their interest to distort numbers and frame things in ways that make them look good, e.g. using "run rate".
2) The numbers are not audited, and we have no idea how they are recognizing revenue; this can affect the true compounding rate of revenue growth.
The numbers are certainly audited by their investors. Anthropic is no stranger to PR talk, but investors know what to look for in their books. They aren't stupid, contrary to how they're often portrayed on HN.
There is more investment money available than Anthropic needs. They can pick and choose.
I do, and I do trust the numbers. I doubt Anthropic is committing fraud, given that they already don't have enough compute to serve demand. What would be the point of lying to the public and investors and risking jail?