Hacker Newsnew | past | comments | ask | show | jobs | submit | johnbarron's commentslogin

>> Everyone using Claude code on a personal subscription is default opted in to getting their data trained on

This is completely untrue if you use AWS Bedrock, and that applies both to private use and in a business context. It's one of their core selling points for the service.

[1] - "...At Amazon, we don’t use your prompts and outputs to train or improve the underlying models in Amazon Bedrock and SageMaker JumpStart (including those from third parties), and humans won’t review them. Also, we don’t share your data with third-party model providers. Your data remains private to you within your AWS accounts..."

[1] - https://aws.amazon.com/blogs/security/securing-generative-ai...


I'm talking about the subsidized subscription plans.

The data isn't the sole point of them. They are also about bringing in users who will encourage use of the product inside companies, ultimately driving more profitable API adoption within their orgs, plus general diffuse mindshare doing the same.

You can still opt out (except with Google's offering which disables lots of features if you opt out of training).


Please, some of us are long NVIDIA...let us cope in peace. :-)

Here is the thing nobody wants to say out loud (or is too dumb to realize): AI is intelligence, and intelligence has almost never been the binding constraint on productivity.

So you will get no productivity increase from the AI bubble. Yes, you read that correctly.

The test is simple, if raw brainpower were the bottleneck, you could 10x any company by hiring 200 PhDs. In practice you get 200 brilliant people writing unread memos, refactoring things that worked, and forming a committee to rename the committee. Smart has always been cheaper and more abundant than the discourse pretends.

Every real productivity revolution came from somewhere else like energy (steam, electricity), capital stock (machines that do the physical work), or coordination (railroads, shipping containers, the assembly line, the internet).

None of these raised the average IQ of the workforce; they changed what a given worker could move, reach, or coordinate with. Solow's old line basically still holds: output per worker grows when you give the worker better tools and infrastructure, not better neurons.

Meanwhile the actual bottlenecks in a modern firm are regulatory approval, legacy systems, procurement cycles, customer adoption, internal politics, and physical supply chains that don't care how clever your email was. A smart intern at every desk produces more artifacts, not more throughput, and in a lot of organizations, more artifacts is actively negative ROI.

Jevons does not save you either, cheaper cognition mostly means more slide decks, not more GDP.

So the setup is that models are commoditizing on one side, and on the other side a product whose core value add (more intelligence, faster) is aimed at a constraint that was never really binding. That is, of course, a rough combo for a trillion dollar capex supercycle.

Fun for the trade, while it lasts, but there is no thesis. Just don't tell CNBC, and short NVDA on time ;-)


> Jevons does not save you either,

There's also a very strong Trurl and Klapaucius [1] component to this AI craziness. I remember a passage in Lem's The Cyberiad where either Trurl or Klapaucius is "discussing" with an intelligent/AGI machine and asking it for information, at which point said machine starts literally inundating them with information, paper on top of paper on top of paper of it. At that point it doesn't even matter whether the information is correct or smart, because the sheer volume of it has turned the whole exchange into a futile endeavour.

[1] https://en.wikipedia.org/wiki/The_Cyberiad


Not to mention that your competitor can turn around and hire the same team of PhDs at the same rate that you can. Compare and contrast: the PhDs are on leaderboards, and anyone has access in seconds with a new API key or model selector.

Granted, LLMs are not even PhDs.

What a weird time we live in...


> Here is the thing nobody wants to say out loud or they are too dumb to realize. AI is intelligence, and intelligence has almost never been the binding constraint on productivity.

Exactly. We don't use the intelligence we already have! That seems to be the real problem with the "AGI" concept. Given such a capability, we'll just nerf it, gatekeep it, and/or bias it. There's no reason to think we'll actually use it to benefit humanity as a whole. It will be shaped into an instrument to enforce our prejudices.


Palantir CEO is a Psychopath:

"CEO of Palantir, described people killed in the Gaza Genocide as “useful idiots”"

https://www.reddit.com/r/PublicFreakout/comments/1sp4rpd/ale...

"12% of corporate leaders are psychopaths. It’s time to take this problem seriously"

https://fortune.com/2021/06/06/corporate-psychopaths-busines...

And Musk, well, he's a whole classification on his own...

https://en.wikipedia.org/wiki/Elon_Musk_salute_controversy

https://www.thedailybeast.com/elon-musk-sued-by-british-dive...

https://www.pbs.org/newshour/politics/these-federal-employee...

https://www.timesofisrael.com/musk-endorses-tweet-claiming-j...





Aniara is a wonderful take on these issues with humans on board.

https://www.imdb.com/title/tt7589524/

Without the benefit of a large special effects budget, I found it incredibly effective, and it left me nostalgic and reflective for days.


>> But when I look at the national debt that seems even more out of reach

Of course. I would just note that you have spent 20 times NASA's annual budget on a 3-week war of choice...


You mean V'ger


>> Just monitor it and you’re done.

This is just anecdote colliding with documented database behavior, an issue that does not exist on Oracle, SQL Server, or IBM DB2.

PostgreSQL explicitly documents xid wraparound as a failure mode that can lead to catastrophic data loss and says vacuuming is required to prevent it. Near exhaustion, it will refuse commands.

Small sample of known outages:

- Sentry — Transaction ID Wraparound in Postgres

https://blog.sentry.io/transaction-id-wraparound-in-postgres...

- Mailchimp / Mandrill — What We Learned from the Recent Mandrill Outage

https://mailchimp.com/what-we-learned-from-the-recent-mandri...

- Joyent / Manta — Challenges deploying PostgreSQL (9.2) for high availability

https://www.davepacheco.net/blog/2024/challenges-deploying-p...

- BattleMetrics — March 27, 2022 Postgres Transaction ID Wraparound

https://learn.battlemetrics.com/article/64-march-27-2022-pos...

- Duffel — Concurrency control & vacuuming in PostgreSQL

https://duffel.com/blog/understanding-outage-concurrency-vac...

- Figma — Postmortem: Service disruption on January 21–22, 2020

https://www.figma.com/blog/post-mortem-service-disruption-on...

Even AWS updated their recommendation as recently as Feb 2025, and it is an issue in Aurora PostgreSQL as well as vanilla Postgres.

"Prevent transaction ID wraparound by using postgres_get_av_diag() for monitoring autovacuum" https://aws.amazon.com/blogs/database/prevent-transaction-id...
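For what it's worth, the monitoring itself is simple. Here's a minimal sketch: the query is the standard per-database `age(datfrozenxid)` check from the PostgreSQL docs, but the threshold values and function names are illustrative assumptions, not taken from any of the postmortems above (in practice you would tune them against `autovacuum_freeze_max_age` and the roughly 2-billion xid ceiling).

```python
# Sketch of the "just monitor it" approach for Postgres xid wraparound.
# The SQL is the standard per-database datfrozenxid age check; feed the
# resulting rows (e.g. from a psycopg cursor) into check_databases().

XID_AGE_QUERY = """
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
"""

# Assumed alert thresholds, chosen relative to the ~2B xid hard ceiling.
WARN_AGE = 1_000_000_000
CRIT_AGE = 1_500_000_000


def classify_xid_age(xid_age: int) -> str:
    """Map a datfrozenxid age onto an alert level."""
    if xid_age >= CRIT_AGE:
        return "critical"
    if xid_age >= WARN_AGE:
        return "warning"
    return "ok"


def check_databases(rows):
    """rows: iterable of (datname, xid_age) tuples from XID_AGE_QUERY."""
    return {name: classify_xid_age(age) for name, age in rows}
```

The point of the outage list above is that "just monitor it" also requires autovacuum to actually keep up; the alert only tells you it hasn't.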


Finance ministers panicking over AI marketing while ignoring a nearly $40 trillion U.S. debt pile, increasingly unsustainable financing, gated private-credit redemptions, hidden CRE losses, and pension insurance exposure tells you exactly how corrupt the priorities are.
