Yugoslavia broke into several smaller countries following the death of the Yugoslavian dictator, and a huge war ensued. Maintaining the domain records was probably quite a low priority.
The break-up of Yugoslavia was a long, arguably still ongoing, process, the final phase of which happened peacefully. Serbia and Montenegro, which made up the post-1992 Yugoslavia, agreed in 2003 to change the name of the country to Serbia and Montenegro, pending the Montenegrin independence referendum scheduled for 2006.
Considering the possibility that the country name would be deprecated again in three years, they agreed to keep the .yu domain.
Fun fact: had the Montenegrin referendum gone the other way, the plan was to use .cs as the national domain, a code previously assigned to another ex-country, Czechoslovakia.
I assume you're referring to Tito? He died in 1980. None of the constituent countries tried to leave Yugoslavia until 1991, right? That's "following", technically, but there's a lot of history in that decade. From my very vibes-based knowledge of the area, Tito is the only dude who could have held it together, though.
.yu was purchasable long after the country ceased to exist, until 2008 to be exact.
Technically speaking, "Yugoslavia" continued to exist until 2003, when the name finally got deprecated in favour of "Serbia & Montenegro" as one country (also including the territory of Kosovo), which itself only lasted 3 years before Montenegro declared independence (and Kosovo did the same 2 years after).
So however you spin it, the domain outlived the country by at least 5 years, arguably 15(ish), 9 of which were post-war(s).
The organization that ran the nameservers for .yu still exists today. Even in the case where there was no one fit to run them, all the records could be transferred to ICANN or someone else to run the server.
Given enough time to reconsider their options, people will flip-flop between them endlessly, latching onto a different feature each time around.
People will default to believing something is AI if there's no downside to that opinion. It's a defence mechanism. It stops them being 'caught out' or tricked into believing something that's not true.
As soon as there's a potential loss (e.g. missing out on getting rich, not helping a loved one) people will switch off that cynical critical thinking and just fall for AI-driven scams.
Just on a personal note: tying your personal devices to your work email account is a very silly thing to do. Even if it's your company, you could be locked out of your company email account at any time (HR grievance, SEC investigation, hostile takeover...). Losing access to your devices and being unable to receive things like reset emails at the same time would not be fun.
I use AI agents to build UI features daily. The thing that kept annoying me: the agent writes code but never sees what it actually looks like in the browser. It can’t tell if the layout is broken or if the console is throwing errors.
I give the agent either a simple browser or Playwright access to a proper browser to do this. It works quite well, to the point where I can ask Claude to debug GLSL shaders running in WebGL with it.
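A minimal sketch of what that feedback loop can look like, assuming Playwright is installed with a Chromium build available; `inspectPage` and the localhost URL are hypothetical names, not anything the agent ships with:

```typescript
// Sketch: give the agent eyes by capturing what the page actually renders
// plus any console errors, instead of letting it write code blind.
import { chromium } from "playwright";

async function inspectPage(url: string): Promise<string[]> {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  const errors: string[] = [];
  // Collect runtime errors the agent would otherwise never see.
  page.on("console", (msg) => {
    if (msg.type() === "error") errors.push(msg.text());
  });
  page.on("pageerror", (err) => errors.push(err.message));

  await page.goto(url, { waitUntil: "networkidle" });
  // The screenshot goes back to the agent as an image it can actually look at.
  await page.screenshot({ path: "ui-check.png", fullPage: true });

  await browser.close();
  return errors;
}

inspectPage("http://localhost:3000").then((errs) => console.log(errs));
```

The agent then gets both the screenshot and the error list as context for its next edit.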
Agreed. Anthropic added a plugin accessible under `/plugins` in CC to make it even easier to add the Playwright MCP server to your project. It automatically handles taking screenshots.
It's not perfect though - I've personally found CC's vision to be worse than others such as Gemini's, but it's nice to have it completely self-contained.
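If you'd rather skip the plugin route, the Playwright MCP server can also be wired up by hand. A sketch of a project-level `.mcp.json`, assuming the `@playwright/mcp` package (exact schema may vary by CC version):

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```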
This project desperately needs a "What does this do differently?" section because automated LLM browser screenshot diffing has been a thing for a while now.
All the power to you if you build a product out of this, I don't wanna be that guy that says that dropbox is dead because you can just setup ftp. But with Codex/Claude Code, I was able to achieve this very result just from prompting.
> often the playwright skill will verify using DOM API instead of wasting tokens on screenshots
So... bypassing the whole "sees what it actually looks like in the browser. It can't tell if the layout is broken" issue the parent commenter is talking about? Seems worse, not better.
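To be fair, DOM-based checks aren't useless: geometry pulled from something like Playwright's `boundingBox()` can catch overlapping or zero-size elements cheaply, even though it misses purely visual problems (wrong colors, broken images, z-index weirdness). A sketch of the kind of check meant here; `Rect` and `rectsOverlap` are hypothetical helper names:

```typescript
// Bounding boxes as returned by DOM APIs (e.g. getBoundingClientRect
// or Playwright's locator.boundingBox()).
interface Rect {
  x: number;
  y: number;
  width: number;
  height: number;
}

// Two rectangles overlap iff neither lies entirely to one side of the other.
// Touching edges do not count as overlap.
function rectsOverlap(a: Rect, b: Rect): boolean {
  return (
    a.x < b.x + b.width &&
    b.x < a.x + a.width &&
    a.y < b.y + b.height &&
    b.y < a.y + a.height
  );
}
```

Cheap checks like this can gate the expensive screenshot step: only spend vision tokens when the geometry already looks suspicious.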
Unreal Engine 5 can barely limp along in my browser, and most demos end up crashing it; not really a good example.
What is the most successful game in the browser made with Unreal 5 that can compare to Flash 3D games, other than the Citadel demo (done with Unreal 3, with Infinity Blade graphics as the baseline)?
Fallback support is a legitimate reason for additional code being in the bundle, but it's not 'bloat' because it's necessary. In an ideal world every website would generate ES5, ES6, and ES2025 bundles and serve the smallest one that's necessary for the app to run based on the browser capabilities, but that is genuinely quite hard to get right and the cost of getting it wrong is a broken app so it's understandable why devs don't.
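There is a well-known partial version of this, the module/nomodule pattern, which only splits ES5 from ES2015+ rather than covering every target. A sketch; the bundle filenames are made up:

```html
<!-- Browsers that understand ES modules (roughly ES2015+) load the modern
     bundle and ignore the nomodule one; legacy browsers do the reverse. -->
<script type="module" src="/app.es2015.js"></script>
<script nomodule src="/app.es5.js"></script>
```

It's a one-line win, but it still leaves everything between ES2015 and ES2025 unaddressed, which is part of why finer-grained differential serving is rare.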
The other two, atomic architecture and ponyfills, are simply developer inexperience (or laziness). If you're not looking at the source of a package and considering whether you actually need it, you're not doing the work properly. And if you've added code in the past that your browser-usage metrics now show isn't needed any more, but you're not actively maintaining the app and removing things when you can, that's not putting the user first, so you suck.
Bloat is mostly added by package authors, not website authors, and package authors can't know who's running their code or look at the metrics. I doubt many website authors directly use isEven or polyfills.