Bit surprised about the amount of flak they're getting here. I found the article seemed clear, honest and definitely plausible.
The deterioration was real and annoying, and it shines a light on the problematic lack of transparency about what exactly is going on behind the scenes, and on the somewhat arbitrary token-cost-based billing. There are too many factors at play to trace as a user; if you wanted that level of accountability, you might as well just do the work yourself instead.
The fact that waiting for a long time before resuming a convo incurs additional cost and lag seemed clear to me from having worked with LLM APIs directly, but it might be important to make this more obvious in the TUI.
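This can be sketched with rough numbers. The prices and the cache TTL below are illustrative assumptions (loosely in the ballpark of published per-million-token rates; check the current pricing and prompt-caching docs before relying on them), but the shape of the effect is what matters: resuming after the cache expires re-ingests the whole context at the full input rate.

```python
# Back-of-envelope: why resuming a conversation after the prompt
# cache expires costs more. All numbers are illustrative assumptions,
# not actual published rates.
BASE_INPUT = 3.00      # $ per million input tokens (assumed)
CACHE_READ = 0.30      # $ per million tokens read from cache (assumed)
CACHE_TTL_MIN = 5      # cache lifetime in minutes (assumed default)

def resume_cost(context_tokens, minutes_idle):
    """Cost in $ of re-sending an existing context when resuming."""
    millions = context_tokens / 1_000_000
    if minutes_idle <= CACHE_TTL_MIN:
        return millions * CACHE_READ   # cache hit: ~10x cheaper
    return millions * BASE_INPUT       # cache expired: full re-ingest

quick = resume_cost(200_000, 2)    # resume within the TTL
slow = resume_cost(200_000, 60)    # come back an hour later
print(f"quick resume: ${quick:.2f}, slow resume: ${slow:.2f}")
```

Under these assumed rates, a 200k-token context costs roughly 10x more to resume once the cache has lapsed, which is exactly the lag-and-cost cliff that would be worth surfacing in the TUI.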
I agree that it’s plausible, and I hope they learn. But trust is earned, and Anthropic’s public responses this past month were dismissive and unhelpful.
Every one of these changes had the same goal: trading the intelligence users rely on for cheaper or faster outputs. Users adapt to how a model behaves, so sudden shifts without transparency are disorienting.
The timing also undercuts their narrative. The fixes landed right before another change with the same underlying intent rolled out. That looks more like they were just reacting to experiments rather than understanding the underlying user pain.
When people pay hundreds or thousands a month, they expect reliability and clear communication, ideally opt-in. Competitors are right there, and unreliability pushes users straight to them.
All of this points to their priorities not being aligned with their users’.
> All of this points to their priorities not being aligned with their users’.
Framing this as "aligned" or "not aligned" ignores the interesting reality in the middle. It is banal to say an organization isn't perfectly aligned with its customers.
I'm not disagreeing with the commenter's frustration. But I think it can help to try something out: take say the top three companies whose product you interact with on a regular basis. Take stock of (1) how fast that technology is moving; (2) how often things break from your POV; (3) how soon the company acknowledges it; (4) how long it takes for a fix. Then ask "if a friend of yours (competent and hard working) was working there, would I give the company more credit?"
My overall feel is that people underestimate the complexity of the systems at Anthropic and the chaos of the growth.
These kinds of conversations are a sort of window into people's expectations and their ability to envision the possible explanations of what is happening at Anthropic.
>My overall feel is that people underestimate the complexity of the systems at Anthropic and the chaos of the growth.
Making changes like reducing the usage window at peak times (https://x.com/trq212/status/2037254607001559305) without announcing it (until after the backlash) is the sort of thing that's making people lose trust in Anthropic. They completely ignored support tickets and GitHub issues about that for 3 days.
You shouldn't have to rely on finding an individual employee's posts on Reddit or X for policy announcements.
> You shouldn't have to rely on finding an individual employee's posts on Reddit or X for policy announcements.
I agree with this as a principle. Which raises this question: is it true? Are you certain these messages don't show up in (a) Claude Code and (b) Claude on the Web?
I've seen these kinds of messages pop up. I haven't taken inventory of how often they do. As a guess, maybe I see notifications like this several times a month. If any important ones are missing, that is a mistake.
Anyhow, this is the kind of discussion that I want people to have. I appreciate the detail.
> A company with their resources could easily do better.
Yes, they could. But easily? I'm not so sure.
Also ask yourself: what function does saying e.g. "they could have done better" serve? What does it help accomplish? I'm asking. I think it often serves as a sort of self-reinforcing thing to say that doesn't really invite more thinking.
Ask yourself: if "doing better" was easy, why didn't it happen? Maybe it isn't quite as easy as you think? Maybe you've baked in a lot of assumptions. Easy for whom? Easy why? Try the questions I asked above. They are not rhetorical. Here they are again, rephrased a bit:
> take the top three companies whose product you
> interact with on a regular basis. Take stock of
> (1) how fast the technology is moving;
> (2) how often things break from your POV;
> (3) how soon the company acknowledges it;
> (4) how long it takes for a fix.
>
> Then ask "if a friend of mine (competent, hard working)
> worked there, how would I be thinking about the situation?"
There is a reason why I recommend asking these questions. Forcing yourself to write down your reference class is ... to me, table stakes, but well, lots of people just leave it floating and then ask other people to magically reconstruct it. Envisioning a friend working there shifts your viewpoint and can shake loose many common biases.
Thanks for the example -- you are one of the first people to quote a source, so I appreciate it. This makes constructive discussion much easier. You quoted this:
> To manage growing demand for Claude we're adjusting our
> 5 hour session limits for free/Pro/Max subs during peak
> hours. Your weekly limits remain unchanged.
>
> During weekdays between 5am–11am PT / 1pm–7pm GMT, you'll
> move through your 5-hour session limits faster than before.
And yeah, no disagreement from me: many users are not going to like this. Narrowly speaking, I don't want any change that reduces what I get for what I pay. I also care about overall reliability, so if some users on the right tail of the usage distribution find themselves losing out, my take is "Yeah, they are disappointed, but this is a rational decision for any company with this kind of subscription model."
Broken expectations are highly dependent on perception. People get used to having some particular level of service. When that changes and they notice, a strong human default is to reach for something to blame. Then we rationalize. Those last two parts are unhelpful, and I push back on them frequently.
>> My overall feel is that people underestimate the complexity of the systems at Anthropic and the chaos of the growth.
> Do you not think people here work at big companies with big products? I do, and we have a much higher bar for shipping.
This form of comment (The "Do you not think {X}?") comes across as a swipe (discouraged by the HN guidelines). It doesn't respond to the strongest plausible interpretation of my comment (also in the guidelines).
That's fair. I'll adjust and say that I think there's a mix: some people certainly are bashing without understanding, but there are also a lot of engineers here whose day to day work is held to a higher standard than I think we see coming out of Anthropic, at least w.r.t. the product side of things (obviously the models are great).
Thanks. Along those lines, here's a sort of thought experiment. Of said engineers who know a higher standard, say we teleported them into Anthropic, what are some likely scenarios?
- How much time would they need to import their standards into Anthropic? ... things like tooling, process, culture, hiring, etc? Maybe externally-sourced discipline and rigor are the missing catalysts. [1]
- OTOH, it seems possible these engineers (many of whom are used to certain levels of stability, sanity, internal tooling, etc.) would be destabilized by Anthropic's problems, the scale, the rate of hiring, the rate of customer growth.
- Perhaps Anthropic needs new instrumentation to cover end-to-end customer metrics? More internal tool-building teams? A new ops team? A new org structure? I don't know.
The growth and the environment have put Anthropic into a position where these kinds of mistakes are just statistically inevitable ... unless they choose to grow more slowly.
So my overall hunch (very few people really grok the constellation of factors at Anthropic) is fuzzy. That's why I'm trying to lay out some of the questions that underlie it, without resorting to simplistic notions of blame (which paper over the deeper causes).
Lastly, can you think of comparable scenarios with this kind of growth where companies don't have major hiccups? This is driving towards thinking about the outside view [2]. Roughly speaking: don't expect to "beat the market" for long. Entropy wins.
[1]: I recently watched a video where Steve Jobs described a time in early Macintosh history where Apple tried to "professionalize" its management. Hiring proven managers didn't work, so they shifted towards hiring for cultural fit and letting them grow the management skills.
> So you're arguing they're just plain incompetent? Not sure that's going to win the trust of customers either.
This is not a charitable interpretation of what I wrote. Please take a minute and rethink and rephrase. Here are two important guidelines, hopefully familiar to someone who has had an account since 2019:
> Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
I didn't assume bad faith, I simply reworded your conclusions with less soft language so that others would understand your position more clearly.
You are saying what they are doing is hard. That's fine. Their stated goals are to be the responsible stewards of the technology and we agree they are failing at that goal. You would attribute that to incompetence and not malice.
I personally try to follow Rapoport's Rules, and since I think they are consistent with the HN Guidelines, I like to mention them: [1].
I've thought on it, and I will try to start off with something we both agree on... We both agree that Anthropic made some mistakes, but this is probably a pretty uninteresting and shallow agreement. I find it unlikely that we would enumerate or characterize the mistakes similarly. I find it unlikely that we would be anywhere near the same headspace about our bigger-picture takes.
> I didn't assume bad faith
Ok, I'm glad. That one didn't concern me; if I had a do-over I would remove that one from the list. Sorry about that. These are the ones that concern me:
> Comments should get more thoughtful and substantive,
> not less, as a topic gets more divisive.
When I read your earlier comment (~20 words), it didn't come across as a thoughtful and substantive response to my comment (~160 words). I know length isn't a perfect measure nor the only measure, but it does matter.
> Please respond to the strongest plausible interpretation of what
> someone says, not a weaker one that's easier to criticize.
Are you sure you didn't choose an easier-to-criticize interpretation? Did you take the time to try to state to yourself what I was trying to say? Back to Rapoport's Rules ...
> You should attempt to re-express your target’s position so
> clearly, vividly, and fairly that your target says, “Thanks,
> I wish I’d thought of putting it that way.”
I'm grateful when people can express what I'm going for better than the way I wrote it or said it.
> I simply reworded your conclusions with less soft language
Technically speaking, lots of things could be called "rewording", but what you did was relatively far from "simply rewording". Charitably, it is closer to "your interpretation". But my intent was lost, so "rewording" doesn't fit.
> ... so that others would understand your position more clearly.
If you want to help others understand, then it is good to make sure you understand. For that, I recommend asking questions.
> Their stated goals are to be the responsible stewards of the technology and we agree they are failing at that goal.
No, I do not agree to that phrasing. It is likely I don't agree with your intention behind it either.
> You would attribute that to incompetence and not malice.
No; even if I agreed with the premise, I think it is more likely I would still disagree. I don't even like the framing of "either malice or incompetence". These ideas don't carve reality at the joints. [2] [3] There are a lot of stereotypes about "incompetence" but I don't think they really help us understand the world. These stereotypes are more like thought-terminators than interesting generative lenses.
I'll try to bring it back to the words "malice" and "incompetence" even though I think the latter is nigh-useless as a sense-making tool. Many mistakes happen without malice or incompetence; many mistakes "just happen" because people and organizations are not designed to be perfect. They are designed to be good enough. To not make any short-term mistakes would likely require too much energy or too much rigidity, both of which would be a worse category of mistake.
Try to think counterfactually: imagine a world where Anthropic is not malicious nor incompetent and yet mistakes still happened. What would this look like?
When you think of what Anthropic did wrong, what do you see as the lead-up to it? Can you really envision the chain of events that brought it about? Imagine reading the email chain or the PRs. Can you see how there may have been various "off-ramps" where history might have gone differently? But for each of those diversions, how likely would it be that they match the universe we're in?
At some point, figuring out what counts as a "mistake" even starts to feel strange. Does it require consciousness? Most people think so. Yet we say organizations make mistakes, and they aren't conscious -- or are they? Who do we blame? The CEO, because the buck stops there, right? He "should have known better". But why? Wait, but the Board is responsible...?
Is there any ethical foundation here? Some standard at all or is this all just anger dressed up as an argument? If this assigning blame thing starts to feel horribly complicated or even pointless, then maybe I've made my point. :)
If nothing else, when you read what I write, I want it to make you stop, get out a sheet of paper, and try to imagine something vividly. Your imagination I think will persuade you better than I can.
Some of the flak is that issues are often only acknowledged once a fix is in place, and the partial fixes are presented as if they solve the whole problem.
The near-instant transition from "there is no problem" to "we already fixed the problem so stop complaining" is basically gaslighting. (Admittedly the second sentiment comes more from the community, but they get that attitude after taking the "we fixed all the problems" posts at face value.)
And reports are often dismissed at first as perception or subjective bias, users getting used to models being good and having higher expectations because of that, etc. Users are blamed a lot before the company is forced to admit that there is an actual problem.
> We take reports about degradation very seriously. We never intentionally degrade our models [...] On March 4, we changed Claude Code's default reasoning effort from high to medium
Anthropic is the best company of its kind, but that is badly worded PR.
Is adding JPEG compression to your software “intentional degradation” of the software? I wouldn't say providing a selectable option to use a faster, cheaper version of something qualifies as “degradation”.
It is certainly true that they did a poor job communicating this change to users (I did not know that the default was “high” before they introduced it, I assumed they had added an effort level both above and below whatever the only effort choice was there before). On the other hand, I was using Claude Code a fair bit on “medium” during that time period and it seemed to be performing just fine for me (and saving usage/time over “high”), so it doesn't seem clear that that was the wrong default, if only it had been explained better.
Is default enabling JPEG compression to your software's output because the compression saves you money “intentional degradation” of the software?
I would say it does, and I'd loathe to use anything made by people who'd couch that change to defaults as "providing a selectable option to use a faster, cheaper version".
Yes. If Instagram started performing intensive JPEG compression that made photos choppy and unpleasant, I would consider that an intentional degradation of the software.
As I understand Anthropic's recent retrospective, calling the models directly via API did not change; the problem was that the harness changed and this was not communicated well to users.
Metaphorical reasoning is lossy, so talking about lossy image compression seems to be ironically fitting! ... perhaps a (hypothetical) metaphor involves Photoshop changing their default JPEG compression level without making it clear to users. PS did not change the JPEG algorithm, only a setting for it. If you look closely, you would notice it: I'll come back to this point in the last paragraph.
But part of the metaphor breaks down if you accept that Anthropic was making a net-positive trade-off for customers so that they could provide a better overall service level, statistically, to their entire user base.
A rough metaphor for the individual-versus-collective trade-off might be when a retail store caps the number of toilet paper rolls a customer can buy at a time. The goal is to reduce hoarding, which in a way is analogous to Claude users having usage patterns at the high end of the statistical tail.
When it comes to PR, transparency almost always wins. Anthropic's mistake was hiding the change from users, who were going to notice the degraded overall performance anyway. I would hazard a guess that Claude has endured more verbal assault in the last month than in its entire history.
To my eye, gaslighting is a serious accusation. Wikipedia's first line matches how I think of it: "Gaslighting is the manipulation of someone into questioning their perception of reality."
Did I miss something? I'm only looking at primary sources to start. Not Reddit. Not The Register. Official company communications.
Did Anthropic tell users, e.g., "you are wrong, your experience is not worse"? If so, that would reach the bar of gaslighting, as I understand it (and I'm not alone). If you have a different understanding, please share what it is so I understand what you mean.
I'd rather not speak too poorly of Anthropic, because - to the extent I can bring myself to like a tech company - I like Anthropic.
That said, the copy uses "we never intentionally degrade our models" to mean something like "we never degrade one facet of our models unless it improves some other facet of our models". This is a cop out, because it is what users suspected and complained about. What users want - regardless of whether it is realistic to expect - is for Anthropic to buy even more compute than Anthropic already does, so that the models remain equally smart even if the service demand increases.
It seems to me you dropped the "gaslighting" claim without owning it. I personally find this frustrating. I prefer when people own up to their mistakes. Like many people, to me, "gaslighting" is just not a term you throw around lightly. Then you shifted to "cop out". (This feels like the motte and bailey.) But I don't think "cop out" is a phrase that works either...
Some terms: the model is the thing that runs inference. Claude Code is not a model; it is a harness. To summarize Anthropic's recent retrospective, their technical mistakes were about the harness.
I'm not here to 'defend' Anthropic's mistakes. They messed up technically. And their communication could have been better. But they didn't gaslight. And on balance, I don't see net evidence that they've "copped out" (by which I mean mischaracterized what happened). I see more evidence of the opposite. I could be wrong about any of this, but I'm here to talk about it in the clearest, best way I can. If anyone wants to point to primary sources, I'll read them.
I want more people to spend a few minutes and actually give the explanation offered by Anthropic a try. What if isolating the problems was genuinely hard to figure out? We all know hindsight is 20/20, and yet people still armchair-quarterback.
At the risk of sounding preachy, I'm here to say "people, we need to do better". Hacker News is a special place, but we lose it a little bit every time we don't put in a quality effort.
Fair enough. If the comments in question were still editable, I would be happy to replace 'gaslighting' with 'being a bit slippery' or something less controversial.
No worries about 'sounding preachy'; it's a good thing people want to uphold the sobriety that makes HN special.
They didn’t say “your experience is not worse” but they did frequently say “just turn reasoning effort back up and it will be fine”. And that pretty explicitly invalidates all the (correct) feedback which said it’s not just reasoning effort.
They knew they had deliberately made their system worse, despite their lame promise published today that they would never do such a thing. And so they incorrectly assumed that their ham fisted policy blunder was the only problem.
Still plenty I prefer about Claude over GPT but this really stings.
I'm aiming for intellectual honesty here. I'm not taking a side for a person or an org, but I'm taking a stand for a quality bar.
> They knew they had deliberately made their system worse
Define "they". The teams that made particular changes? In real-world organizations, not all relevant information flows to all the right places at the right time. Mistakes happen because these are complex systems.
Define "worse". There are a lot of factors involved. With a given amount of capacity at a given time, some aspect of "quality" has to give. So "quality" is a judgment call. It is easy to use a non-charitable definition to "gotcha" someone. (Some concepts are inherently indefensible. Sometimes you just can't win. "Quality" is one of those things. As soon as I define quality one way, you can attack me by defining it another way. A particular version of this principle is explained in The Alignment Problem by Brian Christian, by the way, regarding predictive policing, iirc.)
I'm seeing a lot of moral outrage but not enough intellectual curiosity. It is embarrassingly easy to say "they should have done better" ... ok. Until someone demonstrates to me that they understand the complexity of a nearly-billion-dollar company rapidly scaling with new technology, growing faster than most people comprehend, I think they are just complaining and cooking up reasons to feel right about it. This possible truth (complex systems are hard to do well) apparently doesn't scratch that itch for many people. So they reach for blame. This is not the way to learn. Blaming tends to cut off curiosity.
I suggest this instead: redirect if you can to "what makes these things so complicated?" and go learn about that. You'll be happier, smarter, and ... most importantly ... be building a habit that will serve you well in life. Take it from an old guy who is late to the game on this. I've bailed on companies because "I thought I knew better". :/
> Define "they". The teams that made particular changes? In real-world organizations, not all relevant information flows to all the right places at the right time. Mistakes happen because these are complex systems.
Accidentally/deliberately making your CS teams ill-informed should not function as a get out of jail free card. Rather the reverse.
> Accidentally/deliberately making your CS teams ill-informed should not function as a get out of jail free card. Rather the reverse.
Thanks for your reply. I very much agree that intention or competence does not change responsibility and accountability. Both principles still apply.
In this comment, I'm mostly in philosopher and rationalist mode. Except for the [0] footnote, I try to shy away from my personal take about Anthropic and the bigger stakes. See [0] for my take in brief. (And yes, I know "brief" is ironic given the footnote is longer than most HN comments.) Here's my overall observation about the arc of the conversation: we're still dancing around the deeper issues. There is more work to do.
It helps to recognize the work metaphors are doing here. You chose the phrase "get out of jail free". Intentionally or not, this phrase smuggles in some notion of illegality or at least "deserving of punishment" [1]. The Anthropic mistakes have real-world impacts, including upset customers, but (as I see it) we're not in the realm of legal action nor in the realm of "just punishment", by which I mean the idea of retributive justice [2].
So, with this in mind, from a customer-decision point of view, the following are foundational:
Rat-1: Pay attention to the _effects_ of what Anthropic did.
Rat-2: Pay attention to how these effects _affect me_.
But when building on this foundation, I need to be careful:
Rat-3: Not one-sidedly or selectively re-introduce *intent* into my other critiques. If I get back to diagnosing or inferring *intent*, I have to do so while actually seeking the whole truth, not just selecting explanations that serve my interests.
Rat-4: When in a customer frame, I don't benefit from "moralizing" ... my customer POV is not well suited for that. As a customer, my job is to *make a sensible decision*. Should I keep using Claude? If so, how do I adjust my expectations and workflow?
...
Personally, across the dozens of comments I've read here, a common theme I see is disappointment. I relatively rarely see constructive, truth-seeking retrospective work. On the other hand, I see Anthropic going out of their way to communicate their retrospective while admitting they need to do better. This is why I say this:
Of course companies are going to screw up. The question is: as a customer, am I going to take a time-averaged view so I don't shoot myself in the foot by overreacting?
[0]: My personal big-picture take is that if anyone in the world, anywhere, builds a superintelligent AI at our current levels of understanding, there is no expectation at all that we can control it safely. So I predict, with something close to 90% confidence or higher, that civilization and humanity as we know it won't last another 10 years after the onset of superintelligence (ASI).
This is the IABIED argument (from the book "If Anyone Builds It, Everyone Dies" by Yudkowsky and Soares) -- plenty of people write about it -- though imo few of the book reviews I've seen substantively engage with the core arguments. Instead, most reviewers reject it for the usual reasons: it is a weird and uncomfortable argument, and the people making it seem wacky or self-interested to some. I do respect reviewers who disagree based on model-driven thinking. Everything else to me reads like emotional coping rather than substantive engagement.
With this in mind, I care a lot about Anthropic's failures and what they imply about how it participates in the evolving situation.
But I care almost zero about conventional notions of blame. Taking materialism as true, free will is at bottom a helpful fiction; for most people, it is the reality we take for granted. The problem is that blame is often just an excuse for scapegoating people for their mistakes, when in fact these mistakes flow downstream from the laws of physics. Many of these mistakes are nearly statistical certainties when viewed through the lens of system dynamics, sociology, psychology, neuroscience, having bad role models, or being born into a not-great situation.
To put it charitably, blame is what people do when they want to pin s--tty consequences on the actions of people and systems. That sense bothers me less; I'm trying to shift thinking away from the kind of blaming that leads to bad predictions.
[1]: From the Urban Dictionary (I'm not citing this as "proof of credibility" of the definition):
"A get out of jail free card is a metaphorical way to refer to anything that will get someone out of an undesirable situation or allow them to avoid punishment."
... I'm only citing UD so you know what I mean. When I use the word "dictionary", I mean a catalog of usage, not a prescription of correctness.
I know some people use the word "gaslighting" in connection with Anthropic. I've read some of those threads here, and some on Reddit, but I don't put much stock in them. To step back, hopefully reasonable people can start here:
1. Degraded service sucks.
2. Anthropic saying, e.g., "we're not seeing it" sucks.
3. Not getting a fix when you want it sucks.
Try to understand what I mean when I say none of the above meet the following sense of gaslighting: "Gaslighting is the manipulation of someone into questioning their perception of reality." Emphasis on understand what I mean. This says it well: [1].
If you can point me to an official communication from Anthropic where they say "User <so and so> is not actually seeing degraded performance" when Anthropic knows otherwise that would clearly be gaslighting -- intent matters by my book.
But if their instrumentation was bad and they were genuinely reporting what they could see, that doesn't cross into gaslighting by my book. But I have a tendency to think carefully about ethical definitions. Some people just grab a word off the shelf with a negative valence and run with it: I don't put much stock in what those people say. Words are cheap. Good ethical reasoning is hard and valuable.
It's fine if you have a different definition of "gaslighting". Just remember that some of us have actually been gaslit by people, so we prefer to save the word for situations where the original definition applies. People like us are not opposed to being disappointed, upset, or angry at Anthropic, but we have certain epistemic standards that we don't toss out when an important tool fails to meet our expectations and the company behind it doesn't recognize it soon enough.
I feel a bit wacky even saying this, but I just started re-reading Team Topologies last week because it's starting to feel like the whole orchestration pattern only works reliably when roles and structure are clearly defined.
I love this insight, and it generalizes. Swapping out humans for AIs won't just fix everything, because many of the biggest problems are structural or emergent.
I'm hopeful that we can use AI models to pressure test better options of social organization etc.
IMO, "ish". You can reliably and repeatedly produce good teams _if_ you reliably and repeatedly invest in your people.
IMO, what's really happening is that small, effective teams aren't _fungible_ - you can't just swap people around without breaking the magic in a team, and you can't just move a team around an organization without similarly breaking the magic (although the latter _is_ way more possible).
IMO, it's sort of an organizational version of "context switching". It takes time for a team to gel and get up to speed. If you're swapping out team members, you break that cohesion. If you move around teams, you (somewhat) reset that "getting ramped up" process.
I wonder if that made it into the training set intentionally, or just as an unexpected side effect of stealing every character of text available on the internet with absolutely no curation?
I've been mulling the same, but decided against (for now)
Using Claude Code Max 20 so ROI would be maybe 2+ years.
CC gives me unlimited coding in 4-6 windows in parallel. Unsure if any model would beat (or even match) that, both in terms of quality and speed.
I wouldn't gamble on that now. With a subscription, I can change any time. With the machine, you risk that this great insane model comes out but you need 138GB and then you'll pay for both.
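For what it's worth, the break-even arithmetic is easy to sketch. The machine price and subscription price below are assumptions for illustration (a big-RAM local box vs a Max-tier plan), not quotes:

```python
# Rough break-even for buying a local inference machine vs keeping a
# subscription. All numbers are assumptions for illustration; this
# ignores electricity, resale value, and the risk the hardware is
# obsoleted by the next model's memory requirements.
machine_cost = 5_500          # $ up front (assumed)
monthly_sub = 200             # $ / month, Max-tier plan (assumed)

months_to_break_even = machine_cost / monthly_sub
print(f"break-even after ~{months_to_break_even:.0f} months")
```

Under these assumptions the hardware pays for itself in a bit over two years, which matches the "ROI would be maybe 2+ years" estimate, and only if the machine stays useful that long.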
I re-visited my prompt manager and asked the question: what does it even do differently? I realized it's agents-as-a-service / as an API rather than an SDK. Now looking into making it a lot more usable and flexible, but the website, UI and onboarding need work: https://www.promptshuttle.com
Also added a small side-project, https://www.revuo.ai for software-reviews and feature-tracking. This is only the start, obviously there are enough directories but I'm trying to dig deeper into the features. This one just started and is basically invisible as of now. Well, you gotta start somewhere I guess :)
Over 10 years ago, the best satellites had 500 W/kg [2]. Modern solar panels designed to be light come in at 200 g per sqm [1], i.e. 5 sqm per kg. One sqm generates ca. 500 W, so we're at 2.5 kW per kg. Some people claim 4.3 kW/kg is possible.
Starship launch costs have a $100/kg goal, so we'd be at $40/kW, or $4,800 for a 120 kW cluster.
120 kW running continuously is about 1 GWh annually, which costs you around $130k per year to operate in Europe. That's an ROI of 14 days. Even if launch costs aren't that low in the beginning and there's a lot more hardware to send up, your ROI might be a year or so, which is still good.
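Plugging in the figures from this estimate (a sketch using only the numbers claimed above, which are themselves unverified):

```python
# Reproducing the back-of-the-envelope space-solar numbers above.
# All inputs are the figures claimed in the comment, not verified data.
panel_mass_per_m2 = 0.2               # kg per m^2 (200 g/m^2)
power_per_m2 = 500.0                  # W per m^2 in orbit
launch_cost_per_kg = 100.0            # USD/kg, Starship goal
cluster_power_kw = 120.0
annual_operating_cost_eu = 130_000.0  # USD/year claimed for 120 kW in Europe

kw_per_kg = power_per_m2 / panel_mass_per_m2 / 1000        # specific power
launch_cost_per_kw = launch_cost_per_kg / kw_per_kg        # launch $ per kW
cluster_launch_cost = launch_cost_per_kw * cluster_power_kw
roi_days = cluster_launch_cost / (annual_operating_cost_eu / 365)

print(f"{kw_per_kg} kW/kg, ${launch_cost_per_kw:.0f}/kW, "
      f"${cluster_launch_cost:.0f} per cluster, ROI in {roi_days:.0f} days")
```

The chain works out to 2.5 kW/kg, $40/kW, $4,800 for the cluster, and an ROI of about two weeks, matching the figures above.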
And it's not the same at all. 5x the solar panels on the ground means 5x the power output during the day, and still 0 at night. So you'd need batteries. If you add in bad weather and winter, you may need battery capacity for days, weeks or even months, shifting the cost to batteries while still relying on nuclear or fossil backups in case your battery runs dry or some 3/4/5-sigma weather event outside your design envelope occurs.
> Or you put the data centers at different points on earth?
> Or you float them on the ocean circumnavigating the earth?
What does that have to do with anything? If you want to solar-power them, you're still subject to terrestrial effects. You can't just shut off a data center at night.
> Or we put the datacenters on giant Zeppelins orbiting above the clouds?
They'd have to fly at 50,000+ ft to be clear of clouds, and I doubt you can lift heavy payloads that high using buoyancy, given the low air density. There's also high risk to people on the ground in case of failure, since unlike a satellite there's no burn-up on re-entry.
> If we are doing fantasy tech solutions to space problems, why not for a million other more sensible options?
How is this fantasy? With Starlink operational, it hardly seems like one.
A capacity problem can be solved by having another data center on the other side of the earth.
If it's that the power cycling causes equipment to fail earlier, then that can be addressed far more easily than radiation hardening all equipment so that it can function in space.
Because GPUs are expensive, much more expensive than launch costs if they get starship to the low end of the range they’re aiming for, and you want your expensive equipment running as much as possible to amortize the cost down?
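The amortization point is easy to make concrete. A minimal sketch, where the GPU price and lifetime are illustrative assumptions:

```python
# Why utilization dominates GPU economics: amortized cost per GPU-hour
# at different duty cycles. Price and lifetime are assumed, not quoted.
gpu_cost = 30_000.0                 # assumed USD for a datacenter GPU
lifetime_years = 5
hours_total = lifetime_years * 365 * 24

# e.g. a daylight-limited deployment vs. near-continuous operation
for utilization in (0.4, 0.95):
    cost_per_hour = gpu_cost / (hours_total * utilization)
    print(f"{utilization:.0%} utilization -> ${cost_per_hour:.2f}/GPU-hour")
```

Under these assumptions, dropping from near-continuous operation to a ~40% duty cycle more than doubles the amortized hardware cost per useful hour, which is the core of the argument for keeping expensive GPUs running.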
But the GPUs on the ground will be a lot cheaper to manufacture as they don't have to deal with space conditions.
It seems a real stretch to me to assume that costs for putting GPUs into space can ever come within a factor of 2-3 of putting them on the ground, even neglecting launch costs.
1. Europe doesn't have comparable offerings. The amount of money invested is below what a single hyperscaler spends per quarter. (StackIT might be on track to change that looking at the pure numbers)
2. European politicians still seem to believe it's about renting compute and storage; they seem to have little understanding of what "a cloud offering" really is; the EU has less than 5% of GPUs, supposedly
3. For healthcare, they already forced you years ago. This led to hosting on Telekom Cloud, which runs on OpenStack by Huawei. (The EU commission wants to ban Huawei from 5G, but it's ok to use their software? 'It's open source and can be inspected' seems largely theoretical given the reality of cybersecurity.)
4. If push comes to shove, the EU is critically dependent on the US in so many aspects (defense, lng to name two very important ones) that eventually, they would falter if the US wants your data in a specific case anyway
5. As a private citizen, given the incarcerations in the UK and Germany, it seems one should worry more about the EU getting your data than the other way around
That said, would be nice to have healthy competition, but after hearing this for 10++ years, it's getting really old. It might have been a good idea not to sleep on the AI trend, but, well...
True for the current situation, but something needs to happen before the thinking turns into acting. There's no better time than now, since most cloud services have become commodities. You don't need to be big tech to have a competitive offering. Naturally, the tech won't be as efficient and shiny as that of the big ones, but you have none of the corporate bloat and inefficiencies.
And don't forget about legislation. If there are new laws that set a limit to egress costs you can say goodbye to the walled garden of cloud empires.
After all, how many cloud services does the average company actually need? Most problems have been figured out by now, so such a project would be less about creating thought-leaders and more like a public infrastructure project. With the exception of cutting-edge technologies, the cloud has become a commodity.
"5. As a private citizen, given the incarcerations in the UK and Germany, it seems one should worry more about the EU getting your data than the other way around"
As a private citizen, given the cold-blooded murders of US citizens by ICE, it seems EU citizens should really be worried about what such an administration is willing and able to do to its long-time allies.
This is surely not what OP was referring to, but: arguably worse than incarcerations, I strongly condemn EU sanctions against EU citizens and residents, such as Hüseyin Dogru, Jacques Baud, and Nathalie Yamb, merely for speech that is not aligned with EU foreign policy.
Note that I don't consider it at all relevant whether one agrees/disagrees with the content of their speech.
If this isn't what OP was referring to, why bring it up? I don't see how this is relevant, to the security of data from European companies, stored in European clouds.
It is a related complaint: moving services to a jurisdiction where free speech is grossly violated (even by the lower standard of free speech that prevailed in the EU compared to the US).
In the past, protection of free speech in the EU might have been less absolute than in the US. Given the actions of the current US administration and the (in)action of the Supreme Court, free speech in the US currently doesn't even serve its most essential purpose: being able to criticize the government without fear of repercussions (e.g. Kimmel, Colbert).
As such, moving services to EU jurisdiction is most likely better if you're concerned about free speech, at least for the moment.
> The whole post is a gish gallop of half truths and nonsense.
That is indeed true. OP fails to raise a single point that either makes sense or is grounded in reality. Sometimes I wonder where consuming propaganda stops and wilful ignorance starts.
I don't actually see what's stopping European firms from figuring this out in terms of hardware infrastructure.
Buildings full of computers aren't that difficult a problem to solve compared to things like semiconductor manufacturing or energy.
Perhaps the issue is more on the software and architecture side. Getting sucked into weird cloud products that don't translate cleanly to other environments is perhaps the more difficult aspect of this for larger firms. I've made a very strong point of only using EC2, Route53, S3 and Azure AD. Moving between environments is a lot easier when you stick with the VM as the unit of deployment. Getting out of something like an MSSQL Hyperscale instance is simply not possible without switching to a different SQL provider or accepting new operational risks.
If you mean to say that OpenStack is made by Huawei, that is not true. They are a major contributor and a platinum member of that open source project, though.
Europe doesn't have similar offerings because it never had a chance or need to compete with Silicon Valley. Now that the cat's out of the bag, offerings can simply materialize out of the really high demand for homegrown solutions; the EU has a large population, after all.
US tech really went off the rails in the last few years; it simply cannot be trusted anymore. Even if such offerings may lag a little behind, they still look like a better proposition. The EU is in a similar situation with respect to self-defense: they have to step up to the plate and start building their own.
The Huawei ban in the European Union (EU) has been a gradual, uneven process, shifting from voluntary guidelines in 2020 to increasingly mandatory, country-specific, and EU-wide restrictions by 2025–2026.
Here is the timeline of Huawei's ban and restrictions in the EU and UK:
Phase 1: Initial Restrictions and Voluntary Guidelines (2019–2020)
May 2019: The United States places Huawei on a trade blacklist, restricting access to key technologies (Google Android, US chips), which triggers security reviews across Europe.
January 2020: The European Commission launches its "5G Security Toolbox," encouraging EU member states to restrict or exclude "high-risk vendors" (HRV) like Huawei from critical core network infrastructure.
July 2020 (UK): The UK government announces a total ban on buying new Huawei 5G equipment after December 31, 2020, and orders the removal of all existing Huawei 5G gear by 2027.
October 2020 (Sweden): Sweden bans Huawei and ZTE from 5G networks and orders the removal of existing equipment by January 2025.
Phase 2: Implementation Hurdles (2021–2023)
2021-2022: Many EU nations slow-walk the implementation of the 5G toolbox, with only a small number of countries actively banning Huawei from core networks due to costs and dependence on its technology.
June 2023: EU officials express frustration that only one-third of EU countries have implemented restrictions on high-risk vendors.
Phase 3: Hardening Stance and National Bans (2024–2025)
July 2024 (Germany): After years of delays, Germany announces an agreement with major operators to remove Huawei and ZTE critical components from 5G core networks by the end of 2026, and from access/transport networks by 2029.
August 2025 (Spain): Spain cancels a government contract with Telefonica involving Huawei equipment.
November 2025 (EU-wide): The European Commission pushes for a binding, mandatory ban, threatening to make the 2020 voluntary guidelines legally required for all member states.
Phase 4: Proposed Mandatory EU-Wide Ban (2026)
January 20, 2026: The European Commission unveils a new proposal aimed at forcing EU member states to remove Huawei and ZTE from their networks within three years of adoption.
January 2026: Reports indicate the EU may move to ban Huawei and ZTE from critical infrastructure, including fixed-line and fiber networks, not just 5G.
Summary of Key Country Timelines
UK: New equipment banned (Dec 2020), full removal by 2027.
Sweden: Full 5G ban, removal by Jan 2025.
Germany: Core removal by end of 2026, RAN removal by 2029.
EU (General): Proposed 3-year mandatory phase-out starting from 2026
Must say, tech that has held up for all that time must be doing something right.
So at this rate, a whole new paradigm in computing could arrive before we see EU cloud centricity.
I think you should pause for a moment. There are plenty of European cloud providers that allow you to run VMs in multiple points of presence across the world. Some even offer managed Kubernetes clusters.
It is true that most European cloud providers don't offer many high-level managed services such as function-as-a-service compute solutions, durable execution engines, etc. However, those are not exactly hard requirements. In fact, some cloud providers offer these services for reasons that are not in line with the customer's best interests, such as better hardware utilization and vendor lock-in.
So think about it for a second: if you can put together a Kubernetes cluster, what high-level service do you absolutely need to be able to put together a working service?
I can tell you right away: nothing.
> 2. European politicians still seem to believe it's about renting compute and storage;
I think you need to touch grass on this one. European companies require cloud services for the same reason any other company requires cloud services. If you take the time to learn about how cloud providers such as AWS market their services, you will learn that they firmly base their offering on the exact criteria you are arguing against: compute that scales, and reliability. To argue otherwise, you must argue against how US cloud providers market themselves, which would be baffling.
> 4. If push comes to shove, the EU is critically dependent (...)
There is no "if". We are already at that point. NATO is already running military exercises without the US, and since Trump took over, support for Ukraine has been driven primarily by Europe. NATO has been very vocal about how France and the UK have been the primary providers of intelligence to Ukraine.
> 5. As a private citizen, given the incarcerations in the UK and Germany, it seems one should worry more about the EU getting your data than the other way around
You've got to be joking. The US now demands access to your social media accounts as a precondition to enter the country, and also outright disappears people off the street.
> So think about it for a second: if you can put together a Kubernetes cluster, what high-level service do you absolutely need to be able to put together a working service?
Agreed that K8s helps a lot. But say I want managed Redis or MongoDB Atlas: I can't get that, at least I couldn't when I last checked (I can get them physically hosted in the EU, of course, but on a hyperscaler).
> that they firmly base their offering on the exact criteria you are arguing against: compute that scales, and reliability
Sure, these are central, but I can also get e.g. computer vision, distributed queues, etc.; my point is that a lot of money has gone into the software, not just the hardware.
- Visited/lived in: Austria, Indonesia, Thailand, Singapore, Vietnam, Cambodia, UAE
- Started (too many) side-projects
- Managed to grow a number of projects / startups
- Almost back to PRs: deadlift 155k, squat 90k, bench 80k
Ideas for 2026
- Focus on getting things done / releasing stuff rather than starting new things; release an app (iOS/Android)
- Triple revenue
- Visit 12 countries I haven't visited before (Morocco, Sweden, Finland, Norway, Lithuania, Latvia, Estonia, Malta, Cyprus, Brazil, Peru, Chile)
- Deadlift 200k, squat 120k, bench 100k; lose 10k BW
- Learn formatting on HN :)