>> Everyone using Claude Code on a personal subscription is opted in by default to having their data trained on
This is completely untrue if you use AWS Bedrock, and that applies both to your private data and to business contexts. It's one of their core selling points for the service.
[1] - "...At Amazon, we don’t use your prompts and outputs to train or improve the underlying models in Amazon Bedrock and SageMaker JumpStart (including those from third parties), and humans won’t review them. Also, we don’t share your data with third-party model providers. Your data remains private to you within your AWS accounts..."
[1] - https://aws.amazon.com/blogs/security/securing-generative-ai...
I'm talking about the subsidized subscription plans.
The data isn't the sole point of them. They are also about bringing in users who will encourage product use in their companies and ultimately drive more profitable API adoption within their orgs, plus general diffuse mindshare doing the same.
You can still opt out (except with Google's offering, which disables lots of features if you opt out of training).
Please, some of us are long NVIDIA...let us cope in peace. :-)
Here is the thing nobody wants to say out loud, or is too dumb to realize: AI is intelligence, and intelligence has almost never been the binding constraint on productivity.
So you will get no productivity increase from the AI bubble. Yes, you read that correctly.
The test is simple: if raw brainpower were the bottleneck, you could 10x any company by hiring 200 PhDs. In practice you get 200 brilliant people writing unread memos, refactoring things that worked, and forming a committee to rename the committee. Smart has always been cheaper and more abundant than the discourse pretends.
Every real productivity revolution came from somewhere else: energy (steam, electricity), capital stock (machines that do the physical work), or coordination (railroads, shipping containers, the assembly line, the internet).
None of these raised the average IQ of the workforce; they changed what a given worker could move, reach, or coordinate with. Solow's old line basically still holds: output per worker grows when you give the worker better tools and infrastructure, not better neurons.
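Solow's point can be written down in one line of standard per-worker growth accounting (textbook notation, a sketch for illustration, not from the thread):

```latex
% Per-worker production function: y = output per worker,
% k = capital (tools, machines) per worker, A = technology/coordination.
y = A\,k^{\alpha}, \qquad 0 < \alpha < 1
% Taking log-derivatives: output per worker grows with better tools (k)
% and better technology/organization (A); worker IQ appears nowhere.
\frac{\dot{y}}{y} = \frac{\dot{A}}{A} + \alpha\,\frac{\dot{k}}{k}
```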
Meanwhile the actual bottlenecks in a modern firm are regulatory approval, legacy systems, procurement cycles, customer adoption, internal politics, and physical supply chains that don't care how clever your email was. A brilliant intern at every desk produces more artifacts, not more throughput, and in a lot of organizations, more artifacts is actively negative ROI.
Jevons does not save you either: cheaper cognition mostly means more slide decks, not more GDP.
So the setup is: models are commoditizing on one side, and on the other side is a product whose core value add (more intelligence, faster) is aimed at a constraint that was never really binding. That is of course a rough combo for a trillion-dollar capex supercycle.
Fun for the trade while it lasts, but there is no thesis. Just don't tell CNBC, and short NVDA on time ;-)
There's also a very strong Trurl and Klapaucius [1] component to this AI craziness. I remember a passage in Lem's The Cyberiad where either Trurl or Klapaucius was "discussing" with an intelligent/AGI robot and asking it for information, at which point said AGI robot started literally inundating them with information, paper on top of paper on top of paper. At that point it doesn't even matter whether the information is correct or smart, because the sheer volume of it has turned the whole thing into a futile endeavour.
Not to mention that your competitor can turn around and hire the same team of PhDs at the same rate you can. Now compare that with PhD-level models ranked on leaderboards, accessible in seconds with a new API key or a model selector.
>> Here is the thing nobody wants to say out loud or they are too dumb to realize. AI is intelligence, and intelligence has almost never been the binding constraint on productivity.
Exactly. We don't use the intelligence we already have! That seems to be the real problem with the "AGI" concept. Given such a capability, we'll just nerf it, gatekeep it, and/or bias it. There's no reason to think we'll actually use it to benefit humanity as a whole. It will be shaped into an instrument to enforce our prejudices.
This is just anecdote colliding with documented database behavior, which is not an issue on Oracle, SQL Server, or IBM DB2.
PostgreSQL explicitly documents XID wraparound as a failure mode that can lead to catastrophic data loss, and states that vacuuming is required to prevent it. Near XID exhaustion, the server will refuse to accept commands.
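The wraparound arithmetic can be sketched numerically: XIDs are 32-bit, and roughly half the space (~2^31, about 2.1 billion IDs) counts as "in the past" relative to any given XID. The `age(datfrozenxid)` query is the standard way to measure how close a database is; the 1.6B urgency threshold below is an illustrative assumption, not a PostgreSQL default.

```python
# Sketch of PostgreSQL's XID headroom arithmetic.
XID_SPACE = 2 ** 32                  # transaction IDs are 32-bit counters
WRAPAROUND_HORIZON = XID_SPACE // 2  # ~2.1 billion XIDs visible as "past"

# Query you would run on a live cluster to see each database's
# oldest unfrozen XID age (shown as a string, not executed here):
AGE_QUERY = """
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
"""

def xid_headroom(xid_age: int) -> int:
    """XIDs remaining before the ~2^31 wraparound horizon."""
    return WRAPAROUND_HORIZON - xid_age

def vacuum_is_urgent(xid_age: int, threshold: int = 1_600_000_000) -> bool:
    """Flag ages past ~1.6B as urgent (assumed threshold, tune to taste);
    PostgreSQL itself warns and eventually refuses commands near the horizon."""
    return xid_age > threshold

print(xid_headroom(500_000_000))   # → 1647483648
print(vacuum_is_urgent(2_000_000_000))  # → True
```

The point of the sketch is that headroom shrinks monotonically with every consumed XID, and only freezing rows via (auto)vacuum resets `datfrozenxid` and recovers it.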
Finance ministers panicking over AI marketing...while ignoring a nearly $40 trillion U.S. debt pile, increasingly unsustainable financing, gated private credit redemptions, hidden CRE losses, and pension insurance exposure tells you exactly how corrupt the priorities are.