The article starts by showing how complex the frontend is, and then moves that complexity to the backend, with tools that aren't well supported and extra load on the server. For some applications this is a good solution; for most, however, it's not.
I think that's totally fair, but I would assume that most Spring projects either make significant DX tradeoffs compared to a full JS stack or serve an API rather than HTML.
1. Spring + Handlebars: You can either write the HTML template in a string, losing syntax highlighting and other LSP support, or load it from a file, losing colocation.
2. Handlebars + web components: They simply bundle all the web components into a single file, which breaks down once they get large, since you don't need every component on every route.
3. Tailwind: Looking online, you can get it working with Spring Boot, but the route chosen here is a script that runs the CLI, which again means every route ships every Tailwind class used anywhere.
This is precisely where the publisher has the most control over the user experience. Putting load on the browser makes a user's experience dependent on their hardware & software stack.
Part of what makes a good user experience is working nicely with the user's hardware and software stack, and that's much easier on the client.
The user would like the website to have native scroll physics, respect their system preferences, react to changing window sizes, and work with different input methods, screen readers, and so on.
If the key to a good user experience were server-side control, then the hallmark of a good website would be an RDP stream, and prefers-color-scheme wouldn't be a CSS feature but an HTTP header.
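To make that concrete: the theme preference lives on the client, and the browser applies it with no server round trip at all (the colors here are just placeholders):

```css
/* Applied by the browser based on the user's OS-level setting. */
@media (prefers-color-scheme: dark) {
  body { background: #111; color: #eee; }
}
```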
> The user would like the website to have native scroll physics, respect their system preferences, react to changing window sizes, and work with different input methods, screen readers, and so on.
Yes, these are features implemented by every popular browser via HTML & CSS. Fancy front-end work frequently breaks these features.
> key to a good user experience was server side control
I think you're reading my comment from an extremely front-end perspective. I simply mean that where possible it's better to render pages and do logic on the server side versus on the client side. The same HTML + CSS is generated either way, the only question is what % of the work is done by the user's device vs what % is done in a data center.
It's an SKU from OpenAI's perspective; the broader goal and vision is (was) different. Look at Claude and GLM: both were 95% committed to dev tooling, with the best coding models and a coding harness; even their Cowork is built on top of Claude Code.
I'm not sure how this makes sense when Claude models aren't even coding-specific: Haiku, Sonnet, and Opus are the exact same models you'd use for chat or (with the recent Mythos) bleeding-edge research.
But they detect it under the hood and apply a similar "variant", as API results are not the same as on Claude Code (someone documented that before).
I agree, but it's not clear whether the situation was "hey, we paid, look at our docs" and Hetzner was just like "no, give us money" and they were like "no, we're not paying", or whether Hetzner just shut them down without recourse.
Personally, if I knew they were gonna shut me down if I didn't pay before X date, I'd fight it up until X-2 days, pay it, then continue fighting (depending on the amount). But it's not clear that OP was given such a deadline.
An experienced developer would not have created this mess or 'vibe-coded' (i.e., used AI without checking), but this person probably didn't know what they didn't know and believed the AI when it confidently asserted that this mess was the correct way to do it.
None of that is related to the practice of Continuous Integration.
MCP is dead? Which CLI tool should we use to instruct Chrome to open a page and click the Open button? And to read what appears in the console after clicking?
MCP permanently sacrifices a chunk of the context window? And a skill for your CLI is free?
It isn't usually an American company doing the local operations, but a local subsidiary. Like Walmart Canada telling Walmart corporate to pound sand in the 1990s over Cuban pajamas. It's illegal for Canadian companies to participate in the US embargo of Cuba.
This is all well within the realm of what governments can and do regulate. The choice is whether or not you want to do business in a country under its laws.
At some point it comes to a head; Walmart corporate and the USA didn't care enough about Cuban pajamas, but in a situation where they DO care, you quickly get Вкусно – и точка ("Tasty and that's it", the rebranded Russian McDonald's).
The EU (nay, perhaps every country) should be prepared to deal with Microsoft or AWS completely cutting them off from access to all their systems - what would be the cost and impact?
We are rapidly heading to not one Internet, but country-specific internets that may or may not bridge to other ones in some cases.
Apparently AWS sovereign cloud is designed to continue operating even if the US offices cut them off. The servers are in the EU and the people running them are subject to EU laws, not US ones.
Realistically a US executive could be legally required to give an EU engineer a command that they legally couldn’t follow. At that point I guess we find out if the engineers’ national or corporate identities are dominant. I suspect the former in most cases, but who knows?
The US exec probably doesn't want to give that order either. So the game would be played, and everyone would do their best. There's another article about the US fighting data sovereignty requirements/laws in other countries, but that relies on its quickly dwindling soft power.