If your model converges to the same outputs as everyone else's because everyone trained on the same data, the only thing left to differentiate on is brand, and fear is great at building brand.
"We are too dangerous to commoditize" pitches better than "we are mostly typical of the internet's median answer", even though those are kind of the same statement.
OpenAI is doing the Joel Spolsky move: commoditize your complement. Google did it with Kubernetes back when GCP was losing on infra. If they don't put a spec out, every orchestrator gets built around MCP or A2A by default and OpenAI is the third wheel. The spec is free; the runtime is where the money is.
AlphaZero worked because chess and Go have terminal rewards and positions you can prove right or wrong. General intelligence has neither, and the leap from self-play in a well-defined game to self-play in arbitrary environments is the hard part Silver isn't really demoing. Sara Hooker's stuff on scaling laws lines up here (1).
50% cache savings is nice, buuut token cost is the product now, not the model. Every coding agent is converging on the same trick: cache the context across calls. Anthropic's prompt caching, then Cursor's snapshots, now this Reddit thing.
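To see why the savings compound in an agent loop, here's a toy sketch. All numbers are hypothetical (flat per-token prices, a 50% cached rate), not any provider's actual pricing; `call_cost` is an illustrative helper, not a real API.

```python
# Toy model of cross-call context caching. Assumptions: flat per-token
# pricing, cached prefix tokens billed at half price. Illustrative only.
FULL_PRICE = 1.0      # cost per input token, uncached
CACHED_PRICE = 0.5    # cost per input token on a cache hit

def call_cost(prompt_tokens, cached_prefix_tokens):
    """Cost of one call when the first cached_prefix_tokens hit the cache."""
    fresh = prompt_tokens - cached_prefix_tokens
    return cached_prefix_tokens * CACHED_PRICE + fresh * FULL_PRICE

# An agent loop re-sends the same 8k-token context every call,
# appending ~200 fresh tokens each time.
context, delta, calls = 8000, 200, 10
uncached = calls * (context + delta) * FULL_PRICE
cached = call_cost(context + delta, 0) \
       + (calls - 1) * call_cost(context + delta, context)
print(uncached, cached)  # 82000.0 46000.0 -> ~44% off the loop's input bill
```

The point: a "50% per-token" discount turns into a near-halving of the whole loop's input bill, because the shared prefix dominates every call after the first.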
Captive aftermarkets are roughly the biggest hidden cross-subsidy in consumer goods, imo: printer and tractor OEMs price the unit near cost and pull the lifetime margin out of parts, service, and locked firmware. That's why right-to-repair is one of the few issues where farm states and urban progressives end up in the same column.
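The cross-subsidy is easy to see with made-up numbers. Everything below is hypothetical, just to show the razor-and-blades split, not real OEM pricing.

```python
# Illustrative aftermarket arithmetic (all numbers hypothetical).
printer_price, printer_cost = 99, 95   # unit sold near cost
ink_price, ink_cost = 30, 5            # the cartridge is where margin lives
cartridges_per_year, years = 4, 3      # assumed lifetime usage

unit_margin = printer_price - printer_cost
aftermarket_margin = (ink_price - ink_cost) * cartridges_per_year * years
print(unit_margin, aftermarket_margin)  # 4 300
```

$4 on the box vs $300 over the lifetime: locked firmware exists to protect the second number, which is exactly what right-to-repair threatens.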
The veto doesn't read as pro-growth so much as "don't apply the brake too broadly". ME-307 was the first state-wide moratorium to get serious traction, and the one Jay carve-out was the only thing keeping it from becoming law. NY is queued up next. Grid pushback is the part of the AI capex story the 10-Ks aren't really pricing in: https://philippdubach.com/posts/ai-capex-arms-race-who-blink...
It's only open because nobody who's interested in this model would send their data to OpenAI to be stripped of PII. If they thought otherwise, it would be closed-weights and API-only for "safety" reasons.
> whether the final result is actually better, or whether it is just a more polished hallucination
Agents sampled from the same base model agreeing with each other isn't validation, it's correlation. Cheaper orchestration mostly amplifies whatever bias the model already has. Neat hack though.
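A tiny simulation makes the correlation-vs-validation point concrete. The setup is hypothetical: one biased "base model", and agents that either sample independently or all echo one draw because they share the same weights.

```python
import random

# Sketch: a base model that answers a yes/no question wrong 70% of the
# time. Correlated agents share one draw; unanimity tells you nothing.
random.seed(0)

def base_model_answer():
    return "wrong" if random.random() < 0.7 else "right"

def independent_agents(n):
    return [base_model_answer() for _ in range(n)]

def correlated_agents(n):
    ans = base_model_answer()  # one draw, echoed by every agent
    return [ans] * n

votes = correlated_agents(5)
print(len(set(votes)) == 1)  # True: full consensus either way, right or wrong
```

Independent samples can at least wash out noise by majority vote; perfectly correlated ones just replay the shared bias five times with extra confidence.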