I interpret their argument differently. We know that bullying leads to harmful outcomes. We know that punishment reduces the frequency of undesirable behaviour. So we know that this policy will lead to an aggregate reduction in harm. The question is whether it could lead to some degree of harm to the bully. In the absence of compelling evidence of that, the policy itself seems merited.
For the record, bullying is a complex problem to solve, and no nation or policy or tactic has the silver bullet.
There are famous stories from Japanese POW camps during the World Wars of prisoners being healthier than their captors: the captors ate refined white rice, while the prisoners were fed brown rice, which retains the thiamine that prevents beriberi.
Costco makes almost no money on food sales; almost all of its profit comes from membership fees. That structure is required for its business model, which is VERY friendly to its employees. This is an example of capitalism done right.
There is a trust component, for sure, but a business has to weigh the value of time against revenue. I can say for our org that using an off-the-shelf solution like Clerk saves us time and money, and we believe the risk is very small relative to the savings. Maybe the cost for you isn't large right now, but once you've got 20 enterprise customers all asking for specific OIDC integrations configured with Private Link, custom domains, and private clusters, a managed auth solution starts looking mighty fine.
Isn't it widely speculated that these are distilled from current frontier models? Distillation is far less compute intensive than primary training. That said, if distillation produces something almost as good for a fraction of the cost, Jensen's point may stand.
You can't really distill a model without access to the internal weights. You could train on chat logs, but that's absolutely not the same thing; it doesn't come close to comprehensively "extracting" the model's capabilities. And everyone in the industry has done that anyway, ever since ChatGPT was first released; some versions of Opus even claimed to be DeepSeek if you prompted them in Chinese.
Calling it distillation does, however, make normies go along with it when they inevitably add all the Chinese labs to the Entity List to pad Dario and Sam's pockets.
Weights are not required for distillation. I'm not sure how you came to that belief. Distillation is training a student model to mimic a teacher model's outputs.
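To make that concrete, here is a minimal sketch of output-only ("black-box") distillation: the student is trained to match the teacher's temperature-softened output distribution, so only the teacher's outputs are needed, never its weights. The function names and toy logits are my own for illustration.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened distributions.
    Note: this uses only the teacher's output logits, not its weights."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

The loss is zero when the student already matches the teacher and positive otherwise, so minimising it pushes the student toward the teacher's behaviour.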
Anthropic, for example, posted a 2026 disclosure (https://www.anthropic.com/news/detecting-and-preventing-dist...) which singles out DeepSeek's distillation activity. They detected over 16M actions across 24,000 fraudulent accounts. And that's just what they detected.
We are also technically a statistical process generating one part of a word at a time when we speak. Our neurons form the same kind of vectorised connections LLMs do. We are the product of repeated experiences - the same way training works.
Our brains are more advanced, and we may not experience the world the same way, but I think we have clearly created rudimentary digital consciousness.
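The "one part of a word at a time" analogy can be sketched as a toy autoregressive sampler: each step draws the next token from a probability distribution conditioned on what came before. The bigram table here is invented for illustration; real LLMs learn these probabilities over tens of thousands of tokens from training data.

```python
import random

# Hypothetical next-token probabilities keyed by the previous token.
BIGRAMS = {
    "<s>": [("the", 0.6), ("a", 0.4)],
    "the": [("cat", 0.5), ("dog", 0.5)],
    "a":   [("cat", 0.5), ("dog", 0.5)],
    "cat": [("</s>", 1.0)],
    "dog": [("</s>", 1.0)],
}

def generate(start="<s>", end="</s>"):
    """Sample one token at a time until the end-of-sequence marker."""
    tokens, cur = [], start
    while cur != end:
        nxt, weights = zip(*BIGRAMS[cur])
        cur = random.choices(nxt, weights=weights)[0]
        if cur != end:
            tokens.append(cur)
    return " ".join(tokens)
```

Every call produces a short phrase like "the cat" or "a dog", chosen stochastically, which is the same basic loop an LLM runs at vastly greater scale.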
My experience as well. This is even worse than just having a mediocre model, because I can work around that. The inconsistency means it produces different outputs for the same prompt, and I can't rely on that as a business tool.
If we take it a step further: in a few years, if we can perfectly customise software to our specific needs and preferences at almost no cost, why would anyone purchase generic software from an App Store? I genuinely think Apple's business model is in jeopardy.
Most apps aren’t standalone and the services they depend on are nontrivial to build. For example, maybe you could vibe code a guitar tuner app, but not a ride share app.
I agree. The services which will be left standing will be those with a competitive moat: critical mass (Tinder, Facebook), content (YouTube, AppleTV), and scale (frontier AI models requiring expensive hardware), etc.
That said, if you look at the apps on your phone, I wager a large proportion don't have these moats. Translation, passwords, budget, reminders, email, to do, project management, messaging, browser, calendar, fitness, games, game tracking, etc.
The most recent software paradigm has been SaaS: software as a service. Capex is distributed across all customers and opex is paid for through the subscription. This avoids a large upfront capex and gives both sides of the transaction easy cost and revenue projections. The key to SaaS is that the software is maximally generic, meaning it works well for the largest number of people. That necessitates tough cuts on UX and functionality when a feature only benefits a small part of the userbase.
Vibe coding or LLM accelerated development is going to turn this on its head. Everyone will be able to afford custom software to fit their specific needs and preferences. Where Salesforce currently has 150,000 customers, imagine 150,000 customers all using their own customised CRM. The scope for software expansion is unbelievably large right now.
SaaS is not a new idea and has been renamed multiple times.
In the 70s, it was called "time-sharing". Instead of buying a mainframe, you got a CICS application instance on a mainframe and used that. (tangentially, spare time on these built-out nation-wide dialup-supported networks is what gave birth to CompuServe and GEnie).
In the dot-com era, it was called "application service providers". Salesforce actually started in this era (1999). So did NetSuite. This was the first attempt at browser-based delivery, but bandwidth and browsers sucked then.
I think PaaS is a more recent software paradigm, albeit a far less successful one.