Because we've passed the point of no return. There's no need for empty mission statements, or even a mission at all. AI is here to stay and nobody is gonna change that no matter what happens next.
When I started the project I was having a hard time finding a good domain name. Some time later I came up with this name and found it for sale on some website for ~800€. I figured I could afford that, but fortunately I ended up on Dynadot's website, where it was listed for a fraction of the price.
I think I got lucky while doing all the work :)
> Honest question, if large portions of labor are automated or marginalized who ultimately buys the goods and services that companies produce?
Once the robots, energy, and weapons all belong to the same small group, that group no longer needs to sell anything to anyone. Production continues, but only for themselves and their enclosed system. The rest of humanity becomes a surplus population that can simply be allowed to die off.
In other words: the economy you’re worried about preserving is already obsolete the moment the owners of the machines no longer require wage slaves or consumers to keep the system running. At that point, mass demand is no longer a feature; it’s a bug that gets patched out.
> Anyone who has had to clean up AI comments riddled with stupid emojis from their code will understand this.
I have no idea what you're talking about. I code daily, with 80–90% of my work AI-assisted, and I've never had to clean up a single emoji.
As for emojis appearing in EHRs, a more likely explanation is the growing presence of Gen Z professionals in healthcare, who are known for integrating emojis into their communication. This trend probably has little to do with AI and more to do with generational habits.
It depends on the task, or the particular product/agent you're using. ChatGPT is a lot more emoji-heavy than, say, the business Copilot. Claude Code, never. GitHub Copilot, never.
What I can tell you is, people I know who are SMEs, being paid several hundred thousand dollars a year, have this past year started just copy-pasting my questions into an LLM and regurgitating back to me whatever it said.
From my friend who is a director of a medical research library, a huge number of doctors recently switched from googling shit to just running it through the free ChatGPT.
Well, it is. Let's say that AI adds emojis to my code/text. Me, a millennial who hates emojis, will tell the AI to delete those emojis and never use them again in my code or my official documents. The Gen Z guy who got his first job last week will be happy to keep them.
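For what it's worth, if emojis do sneak into generated text, stripping them in bulk is a one-liner. Here's a rough Python sketch; the Unicode ranges are approximate and not exhaustive, just the pictograph blocks LLMs tend to use:

```python
import re

# Rough emoji ranges: misc symbols/dingbats, variation selector,
# and the main pictograph planes. Not exhaustive, but catches the
# common offenders (👍 ✅ 🚀 👇 etc.).
EMOJI_RE = re.compile("[\u2600-\u27BF\uFE0F\U0001F300-\U0001FAFF]")

def strip_emojis(text: str) -> str:
    # Remove emojis, collapse any double spaces they leave behind,
    # and trim stray whitespace at the edges.
    return re.sub(r" {2,}", " ", EMOJI_RE.sub("", text)).strip()
```

Run it over commit messages or generated docs before they land in the repo, and the argument mostly goes away.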
I've noticed coworkers starting to use them in communication (emails, Teams chats, meeting minutes), so now that I see others doing it, I feel it's fun and acceptable and might throw some in too. I wouldn't put them in code, EDC, or any source documentation, but in an email? Sure, why not.
I did have a scientist recently write a list of lab best practices. Before the list he had a note, "Follow instructions below," with a finger-pointing-DOWN emoji aimed at the list... my work bestie and I actually screenshotted that and sent it to each other, giggling, because he's generally a serious, smart, straight-laced dude, and him putting in a garish, bright yellow, downward-pointing finger emoji just seemed very silly compared to his personality. But it caught our attention and ensured we both read his list!
I would say the uptick is also partly due to people using their phones more often for work communication; if he sent that email from his phone instead of his computer, it was easier to throw in an emoji to emphasize his important list.
If you can tell it instructions, and you know you can tell it instructions, then how smart do you have to be to realize that "omit emojis" is an instruction you can use? If what you said is true, I have no hope...
There’s an option in ChatGPT’s settings to reduce its use of emojis. Most people never bother to change the default, though; I didn’t know about it myself until recently.
Most people are not anything like anyone on this website. But even if your personal opinions were universally shared, there is no way that what you are suggesting could even be mathematically possible. Gen-Z, being 15 years wide, enters the workforce at approximately 7% per year.
There were not ~800% more gen-z healthcare workers in 2025 than there were in 2024.
Claude (the only model I use regularly) will definitely add emojis to non-code documentation and/or commit messages (which I almost never let it write, but it will sometimes try). However, I can't recall Claude ever adding emoji to code or in comments.
I always read and review the code and it's true that the old models from 2023/2024 were using a lot of emojis. But that code was garbage. Since LLMs have started to write decent code, I haven't seen one emoji.
That's where they are prevalent. It's just mimicking its training set. If you use LLMs as Q&A oracles or code generators the emoji output is less frequent.
I grade student work, and I see a lot of Python generated by AI. I don't know exactly which AI, but about a third of the work I see is littered with emojis.
Emojis are not widely used on platforms that don't make them easy to add, e.g. medical software on Windows.
> I have no idea what you're talking about. I code daily, with 80–90% of my work AI-assisted, and I've never had to clean up a single emoji.
Yeah, because they don't just add them to any generated code. Although if you ask them to make some sort of UI that might involve graphics, they will happily add lots of emojis. They do add them very liberally, especially in headings, when writing articles, blog posts, reports, etc.
> I have no idea what you're talking about. I code daily, with 80–90% of my work AI-assisted, and I've never had to clean up a single emoji.
It depends on what you ask it. Asking it to code won't generate a single emoji, but ask it to make a list, summarize something, or similar tasks, and they'll be all over the place.
And I disagree with people who always try to pin everything on "generational stuff," as if there were a distinct wall of total cultural differences, or as if any generation were a monolith you could slap a label on. I think this is just an easy, lazy way to explain things you can't otherwise understand or explain. Sure, a 13-year-old and a 55-year-old differ in some categories, but they still share a lot of common ground. A 20-something and a 30-something? Barely any difference, let alone at work, where policies usually keep such differences from surfacing anyway.
Does it really matter? Even if a billionaire’s kid gets hooked on Coca-Cola or social media, they still have vastly more resources (therapy, education, support) to overcome it. Meanwhile, kids in underprivileged communities don’t get that safety net. For CEOs like Zuckerberg or Coca-Cola’s leadership, that disparity is just a small price to pay for the profits their products generate.
It's part of the parental responsibility to provide enough structure and instill enough discipline in your kids so that they grow up to be complete persons. Sure, it would be nice if social media were restricted like tobacco, and I'm sure one day it will be, but you can't delegate all responsibility for everything to the state. I don't want to live in a bubble-wrapped society for the sake of the children.
I'm with you. The industry has pivoted from building tools that help you code to selling the fantasy that you won't have to. They don't care about the reality of the review bottleneck; they care about shipping features that look like 'the future' to sell more seats.
It's all about the hardware and infrastructure. If you check OpenRouter, no provider offers a SOTA Chinese model matching the speed of Claude, GPT, or Gemini. The Chinese models may benchmark close on paper, but real-world deployment is different. So you either buy your own hardware to run a Chinese model at 150-200 tps, or give up and use one of the Big 3.
The US labs aren't just selling models, they're selling globally distributed, low-latency infrastructure at massive scale. That's what justifies the valuation gap.
Edit: It looks like Cerebras is offering a very fast GLM 4.6
It doesn't work like that. You need to actually use the model and then go to /activity to see the real speed. I consistently get 150-200 tps from the Big 3, while other providers barely hit 50 tps even though they advertise much higher speeds. GLM 4.6 via Cerebras is the only one faster than the closed-source models, at over 600 tps.
The network effects of consistently behaving models and maintained API coverage between updates are valuable, too. Presumably the big labs include their own domains of competence in training, so Claude is likely to remain very good at coding and to behave in similar ways, informed and constrained by their prompt frameworks, so that interactions keep working predictably even after major new releases, and upgrades can be clean.
It'll probably be a few years before all that stuff becomes as smooth as people need, but OAI and Anthropic are already doing a good job on that front.
Each new Chinese model requires a lot of testing and bespoke conformance to every task you want to use it for. There's a lot of activity and shared prompt engineering, and some really competent people doing things out in the open, but it's generally going to take a lot more expert work getting the new Chinese models up to snuff than working with the big US labs. Their product and testing teams do a lot of valuable work.
Qwen 3 Coder Plus has been braindead this past weekend, but Codex 5.1 has also been acting up; it told me updating UI styling was too much work and I should do it myself. I also see people complaining about Claude every week. I think this is an unsolved problem, and you also have to separate perception from actual performance, which I think is an impossible task.
Assuming your hardware premise is right (and let's be honest, nobody really wants to send their data to Chinese providers), you can use a provider like Cerebras or Groq, no?
Tariffs might keep Chinese EVs out of the US, but they don't stop US influence from fading everywhere else. South America is voting with their wallets, and 'buy American' doesn't work when the price is double and the tech is the same.
Unless the US intends to sanction every country that prioritizes value over US geopolitics, this battle is already lost.
In South America there's also no anxiety over China becoming a superpower, which may be an argument against Chinese products in the US.
In fact, China has pretty good relations with most South American countries. Likely better than the US. I wouldn't be surprised if many people view China more favorably.
The average person in the West isn't losing sleep over China either. That anxiety is mostly manufactured by media pushing the narrative that they are an existential threat. Maybe they are, I don't know. But what I do know is that Western companies love it when they can sell you overpriced products made in China, but panic the moment Chinese companies sell the exact same product at a fair price.
Hmm, I wonder if that might have anything to do with the decades of state sponsored terrorism the US has inflicted on the entire region since the 70s? Maybe it wasn't the best idea to make that "we will coup whoever we want" crashout tweet in between begging for crumbs of latam market share?
How do you expect open source alternatives to exist when they cannot enforce how you use their IP? Open source licenses exist and are enforced under IP law. This is part of the reason AI companies have been pushing hard for IP reform: they want to decimate IP laws, for thee but not for me.
Under copyright law, unless HN's T&Cs override it, anything I write and have written on HN is my IP. And the AI data hoarders used it to train their stuff.
Calling a HN comment “intellectual property” is like calling a table saw in your garage “capital”. There are specific regulatory contexts where it might be somewhat accurate, but it’s so different from the normal case that none of our normal intuitions about it apply.
For example, copyright makes it illegal to take an entire book and republish it with minor tweaks. But for something short like an HN comment this doesn’t apply; copyright always permits you to copy someone’s ideas, even when that requires using many of the same words.
I never advocated "stricter IP laws". I would however point out the contradiction between current IP laws being enforced against kids using BitTorrent while unenforced against billionaires and their AI ventures, despite them committing IP theft on a far grander scale.