Hacker News | mft_'s comments

You’re right… but that’s on the rest of the world not getting their shit together.

It’s this sort of example (and not properly supporting Ukraine, and not agreeing how to collectively deal with migrants, and not agreeing how to coordinate defence, and myriad other examples) that highlights what a pointless mess the EU is. It’s not a unified bloc - it’s 27 self-interested entities squabbling and playing petty power games, while totally failing to plan for the future with vision.

The EU could/should have ensured that a European equivalent to OpenAI or Anthropic could thrive, and had competitive frontier models already; instead, they’re years and countless billions behind.


The EU pouring even more billions into this would just have meant pouring billions into US tech. China is winning on all fronts at this game because of the embargo; they end up even more vertically integrated as a result.

So China innovated around GPU supply issues (because they had to) but Europe couldn't/wouldn't?

Hard to not see this as another sign of European stagnation...


In no specific order:

- Europe was first to dig up its fossil sources of energy; the bulk of them is long gone

- Europeans got used to roughly clean air, soil, and water; heavy industries are polluting

- the embargo is forcing China to vertically integrate, the Chinese have no alternative, Europeans (think they) do


> The EU pouring even more billions in this would just have meant pouring billions on US tech.

Which is crazy given that ASML is European.


So is Zeiss, and probably a lot of others in TSMC's supply chain. It still looks like the bulk of the money is made by companies higher in the stack like NVidia and AWS.

ASML is basically American though.

American tech operationalized in Europe


I want us to automate food production and distribution. I want us to automate creation of building materials and creation of buildings. I want us to automate power generation, and see the marginal cost of power drop to zero. I want us to automate clean transport. I want us to automate cleaning up the planet.

Beyond the fact that these are all already highly automated, this isn't what TFA is saying. People aren't angry there are planting machines or whatever; they're angry they're forced to forego anything you can't put in a DB, like their jobs or the texture of their lives. Ironically, you have a huge case of software brain.

> Beyond the fact that these are all already highly automated

Nonsense. To take the first two examples:

Power plants may run mostly automatically, but humans decide how/where/when to build new plants, and humans build them. I'll be satisfied when we see 100% automated manufacture, transport, erection, and maintenance of solar farms (or similar) and all associated power storage and transmission.

Humans are still hugely in the loop on food production despite machine assistance, and the current world's systems are hugely wasteful in sharing out food production. I'll be satisfied when we have 100% automated farms, and automated transport and distribution of food such that we use what we've grown efficiently, and no-one can even imagine food shortage ever again.

> they're angry they're forced to forego anything you can't put in a DB, like their jobs or the texture of their lives. Ironically, you have a huge case of software brain.

Maybe you're missing the point.

I'm strongly aligned with this famous-ish tweet: "You know what the biggest problem with pushing all-things-AI is? Wrong direction. I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes."

I just have a vision far beyond laundry and dishes. Automation (with or without AI) offers us a chance of a future utopia. Unfortunately, the current direction seems to be a corporate-owned AI-driven dystopia. I want the Culture, not Robocop.


100% agree. These are the kinds of things I would love to work on, not like, "never schedule a haircut appointment again!"

Well ultimately I want human beings to not be so tribal and apathetic so that they'd actually care about the things above and learn to compromise.

But that ain't happening anytime soon.


Human beings mostly are. People mostly support their neighbors, and selflessly help each other in times of crisis.

The problem is that roughly 5% of us are sociopaths. We let them have all the money and power because they're the only ones that want it. Then we let them use that money and power to convince us that the "REAL" problem is the people with no money or power in the neighboring political region (the border having been drawn by a sociopath).


You should read up on the banality of evil: https://en.wikipedia.org/wiki/Eichmann_in_Jerusalem

Regular people, not sociopaths, are responsible for most of the evil in the world. There is no tiny minority of 'evildoers' that we could root out and be pure from.

Other bad things happen because of unintended consequences or the collective behavior of many people. Climate change and deforestation are not caused by greed or scheming CEOs; they're side effects of the actions of billions of people individually trying to better their lives.


I'm familiar with it. The "banality of evil" in that book isn't about regular people, it was about the leadership of the Nazi party willing to go along with the Holocaust for personal power, then trying to get out of responsibility for it by claiming they were "just following orders". Those aren't regular people, those are sociopaths.

Regular people don't all independently decide to "do evil". There is banality in the ones that agree to go along with it, to save themselves from being ostracized or mildly inconvenienced. Do they perpetuate evil? For sure. But are they the villains responsible for it?

The "evildoers" are the tiny minority of sociopaths doing the convincing, because it nets them more personal power, and they don't care who they hurt along the way.

There is a huge amount of injustice in the world, morally speaking I should be out there fighting against it with everything I have. But I'm also the sole breadwinner for my family and I have a mortgage, so I mostly keep my head down and try to survive. Does that make me an evildoer? I sure hope not.


Power generation is largely automated!

I'd like to not die of Baumol's Cost Disease along the way, though.

Baumol's cost disease also benefits you, because it makes your wages go up even if you haven't increased productivity.

Maybe on doctornews, but this is hackernews. To us, Baumol's disease means your job, which has increased productivity, disappears, while your costs, which don't have increased productivity, go up.

Same here. Then let's automate building vast O'Neill cylinders and habitats we can live in.


Why not? Humans are awesome and should colonize the universe. There is much science to do and there are many things to build.

Huh, I'm running the Q4_K_M quant with LM Studio, and asked it "How can I set up Qwen 3.6 27b to use tools and access the local file system?".

Part of its reply was: Quick clarification: As of early 2025, "Qwen 3.6" hasn't been released yet. You are likely looking for Qwen2.5, specifically the Qwen2.5-32B-Instruct model, which is the 30B-class model closest to your 27B reference. The instructions below will use this model.

Weird.


Models are math functions that predict the next word, not conscious beings. If it was trained on a dataset including data up to Q1 2025, then that's more or less the expected answer -- even Qwen 3 didn't exist yet.

If you see a model that can reliably answer questions about itself (version, family, capabilities, etc.), then it's most likely part of the system prompt.

In the absence of a system prompt, even Claude could say it's a model created by DeepSeek: https://x.com/stevibe/status/2026227392076018101
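A way to see why identity answers come from the prompt rather than the weights: most local runtimes just splice the system turn into the token stream before inference. Here's a rough sketch of a ChatML-style template (the exact control tokens vary by model family, so treat this as illustrative, not any vendor's actual template):

```python
# Sketch: the "identity" a chat model reports usually arrives as plain
# text in the prompt, not from the weights themselves. This mimics
# ChatML-style formatting, roughly what many local runtimes apply
# before tokenization.

def build_prompt(user_msg, system_msg=None):
    """Concatenate optional system and user turns into one prompt string."""
    parts = []
    if system_msg:
        parts.append(f"<|im_start|>system\n{system_msg}<|im_end|>")
    parts.append(f"<|im_start|>user\n{user_msg}<|im_end|>")
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

# With no system turn, nothing in the context tells the model what it is;
# it can only fall back on patterns from its (date-capped) training data.
bare = build_prompt("What model are you?")
assert "system" not in bare

# With a system turn, the identity string is sitting right there in the
# context window for the model to repeat back.
primed = build_prompt("What model are you?",
                      system_msg="You are Qwen, created by Alibaba Cloud.")
assert "You are Qwen" in primed
```

So a model answering "I am Qwen" is (usually) reading its identity off the context, the same way it would read any other document.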


If you are talking with Claude about AI, it will sometimes passively bring up "frontier models like GPT-4o"

Slightly tangential, how good/bad is 4o compared to the modern (5.3 I think?) one?

TBH I personally find non-thinking replies quite poor for the type of questions I ask, so I haven't touched ChatGPT for months (ever since Gemini 2.5 Pro, I think). (And even Gemini 3.1 Pro tends to still be too literal at times instead of understanding the implied meaning lol. We've got more room to improve.)


This is pretty standard in every model. Ask Opus or Gemini about 2026 (without a big system prompt to steer them) and they'll swear blind it's 2024/25 too.

There were plenty of phones with touchscreens before the iPhone. They were crap, with mostly resistive touchscreens, but they existed.

I had a rebranded version of this: https://www.gsmarena.com/qtek_s200-1417.php

My office-mate had one of these a little later: https://www.gsmarena.com/lg_ke850_prada-1828.php

(Fun fact: after playing with the Prada phone and seeing how awful it was, we wrote a tongue-in-cheek letter to the CEO of LG applying for roles in their phone development team, which we actually posted to South Korea. Months later, we received a reply from someone in the UK office of LG, denying our application, and not showing any sign of getting the joke.)


One would think, but some folks seem to struggle to learn from others' experiences, and need to experience things for themselves first.

For example, the UK defence review that was published during the Ukraine war (in which the UK is closely supporting Ukraine) focused on traditional defence approaches (tanks, big boats, that sort of thing) and mostly ignored the need to upskill quickly in building, iterating, and deploying cheap disposable drones.

Or, more generally, there are people who voted for the current US administration who are upset that the things that were promised in Project 2025 have actually been implemented and have now affected them personally and negatively.


I think it’s binary. You’re either part of the “growing my personal brand”, self-aggrandising b**shit crowd or you’re not. If you’re in it for yourself, it’s all about your posts and your comments on other people’s, so that’s fine. That’s the ‘social network’ side of things.

There’s still a small residual function related to maintaining an online CV and supporting messaging between businesses, recruiters and individuals, but this is distinct from the ‘social’ feed.


There was also Qwen3.5-35B-A3B in the previous generation: https://huggingface.co/Qwen/Qwen3.5-35B-A3B

Similar observation: sometimes when we get off the couch, on which we have a blanket made from artificial fibres, it causes our TV to go black for a couple of seconds. The TV is wall mounted and a metre from the end of the couch, and about 3.5’ from where we’re sitting.

I suspect a possible future of local models is extreme specialisation - you load a Python-expert model for Python coding, do your shopping with a model focused just on this task, have a model specialised in speech-to-text plus automation to run your smart home, and so on. This makes sense: running a huge model for a task that only uses a small fraction of its ability is wasteful, and home hardware especially isn't suited to this wastefulness. I'd rather have multiple models with a deep narrow ability in particular areas, than a general wide shallow uncertain ability.
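As a toy sketch of this setup, a dispatcher could pick a specialist per request. (Real routers typically use a small classifier model rather than keyword matching, and every model name below is hypothetical, just to illustrate the shape of the idea:)

```python
# Hypothetical sketch of a "many narrow specialists" setup: a tiny
# dispatcher that picks a specialist model per request. All model names
# are illustrative, not real releases.

SPECIALISTS = {
    "code": "python-coder-7b",
    "shopping": "shopping-assistant-3b",
    "home": "smart-home-stt-1b",
}

KEYWORDS = {
    "code": ("python", "function", "bug", "refactor"),
    "shopping": ("buy", "order", "price", "basket"),
    "home": ("lights", "thermostat", "turn on", "turn off"),
}

def route(prompt, default="generalist-27b"):
    """Return the first specialist whose keywords match, else a generalist."""
    text = prompt.lower()
    for task, words in KEYWORDS.items():
        if any(w in text for w in words):
            return SPECIALISTS[task]
    return default

print(route("Refactor this Python function"))     # python-coder-7b
print(route("Turn off the living room lights"))   # smart-home-stt-1b
print(route("What's the capital of France?"))     # generalist-27b
```

The appeal is that only the small routed model needs to be resident in memory at a time, which is exactly the constraint home hardware has.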

Anyway, is it possible that this may be what lies behind Gemma 4's "censoring"? As in, Google took a deliberate choice to focus its training on certain domains, and incorporated the censor to prevent it answering about topics it hasn't been trained on?

Or maybe they're just being sensibly cautious: asking even the top models for critical health advice is risky; asking a 32B model is probably orders of magnitude more so.


> I suspect a possible future of local models is extreme specialisation - you load a Python-expert model for Python coding, do your shopping with a model focused just on this task, have a model specialised in speech-to-text plus automation to run your smart home, and so on.

I'd find this very surprising, since a lot of cognitive skills are general. At least on the scale of "being trained on a lot of non-Python code improves a model's capabilities in Python", but maybe even "being trained on a lot of unrelated tasks that require perseverance improves a model's capabilities in agentic coding".

For this reason there are currently very few specialist models - training on specialized datasets just doesn't work all that well. For example, there are the tiny Jetbrains Mellum models meant for in-editor autocomplete, but even those are AFAIK merely fine-tuned on specific languages, while their pretraining dataset is mixed-language.


> is it possible that this may be what lies behind Gemma 4's "censoring"

Your explanation would make sense if various other rare domains were also censored, but they aren't, so it doesn't.

> asking even the top models for critical health advice is risky

Not asking, and living in ignorance, is riskier. For high-stakes questions, of course I'd want references that only an online model like ChatGPT or Gemini, etc. would be able to find. If I am asking a local model for health advice, odds are that it is because I am traveling and am temporarily offline, or am preparing off-grid infrastructure. In both cases I definitely require a best-effort answer. I also require the model to be able to tell when it doesn't know the answer.

If you would, ignore health advice for a moment, and switch to electrical advice. Imagine I am putting together electrical infrastructure, and the model gives me bad advice, risking electrocution and/or a serious fire. Why is electrical advice not censored, and what makes it not be high-stakes!? The logic is the same.

For the record, various open-source Asian models do not have any such problem, so I would rather use them.


> Not asking, and living in ignorance, is riskier. For high-stakes questions, of course I'd want references that only an online model like ChatGPT or Gemini, etc. would be able to find. If I am asking a local model for health advice, odds are that it is because I am traveling and am temporarily offline, or am preparing off-grid infrastructure. In both cases I definitely require a best-effort answer. I also require the model to be able to tell when it doesn't know the answer.

If I was prepping, I’d want e.g. Wikipedia available offline and default to human-assisted decision-making, and definitely not rely on a 31B parameter model.

To be reductive, the ‘brain’ of any of these models is essentially a compression blob in an incomprehensible format. The bigger the delta between the input and the output model size, the lossier the compression must be.

It therefore follows (for me at least) that there’s a correlation between the risk of the question and the size of model I’d trust to answer it. And health questions are arguably some of the most sensitive - lots of input data required for a full understanding, vs. big downsides of inaccurate advice.
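The "lossy compression" point can be made concrete with back-of-envelope arithmetic. Assuming ~4.5 effective bits per weight for a Q4_K_M quant (an approximation, since the format mixes 4- and 6-bit blocks plus per-block scales, not an exact spec value):

```python
# Back-of-envelope sizes for the "compression blob" framing: the same
# 27B-parameter model at different effective bits per weight. The 4.5
# bits/weight figure for Q4_K_M is an approximation, not a spec value.

PARAMS = 27e9  # a 27B-parameter model, as in the thread

def size_gb(bits_per_weight, params=PARAMS):
    """Approximate weight-file size in GB at a given bits-per-weight."""
    return params * bits_per_weight / 8 / 1e9

fp16 = size_gb(16)    # ~54 GB
q4km = size_gb(4.5)   # ~15 GB
print(f"FP16:   {fp16:.0f} GB")
print(f"Q4_K_M: {q4km:.0f} GB  ({fp16/q4km:.1f}x smaller)")
```

Throwing away roughly 70% of the bits is exactly the kind of delta the parent is describing: something has to give, and it's the model's recall of rarely seen facts.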

> If you would, ignore health advice for a moment, and switch to electrical advice. Imagine I am putting together electrical infrastructure, and the model gives me bad advice, risking electrocution and/or a serious fire. Why is electrical advice not censored, and what makes it not be high-stakes!? The logic is the same.

You’re correct that it’s possible to find other risky areas that might not be currently censored. Maybe this is deliberate (maybe the input data needed for expertise in electrical engineering is smaller?) or maybe this is just an evolving area and human health questions are an obvious first area to address?

Either way, I’m not trusting a small model with detailed health questions, detailed electrical questions, or the best way to fold a parachute for base jumping. :)

(Although, if in the future there’s a Gemma-5-Health 32B and a Gemma-5-Electricity 32B, and so on, then maybe this will change.)


> Imagine I am putting together electrical infrastructure, and the model gives me bad advice, risking electrocution and/or a serious fire

That's a weird demand from models. What next, "Imagine I'm doing brain surgery and the model gives me bad advice", "Imagine I'm a judge delivering a sentencing and the model gives me bad advice", ...


Requesting electrical advice is not a weird ask at all. If writing sophisticated code requires skill, then so does electrical work, and one doesn't require more or less skill than the other. I would expect that the top-ranked thinking models are wholly capable of offering correct advice on the topic. The issues arise more from the user's inability to input all applicable context which can affect the decision and output. All else being equal, bad electrical work is 10x more likely to be a result of not adequately consulting AI than from consulting AI.

Secondly, the primary point was about censorship, not accuracy, so let's not get distracted.


Bad electrical work is more likely to burn your house down than some bad code. Bad medical advice is different again.

I assumed it was more about risk management/liability than censorship.


> Requesting electrical advice is not a weird ask at all. If writing sophisticated code requires skill, then so does electrical work

Except with electrical stuff the unit test itself can put your life and others in danger.


Inequality was growing hugely (and still is) before the recent advent of LLMs.

Given the slow-burning but growing resentment against the people who are profiting from this inequality (popularly the “billionaires”, but in reality a broader group), I wonder to what extent they are supporting the anti-AI message as deflection?

In reality, many lower-paid jobs are totally safe from this generation of AI (nurses, care-workers, builders, plumbers - essential skilled manual workers) whereas the language-based mid-level jobs are hugely at risk.

So if there’s an inequality-driven backlash, it should be directed not at AI, but at the real causes. In contrast, when swathes of largely irrelevant mid-level management, marketing and HR drones lose their jobs to Claude 5.7, they are the ones who should attack the datacenters. Not that it will help.


Removing a white collar job from the economy puts a worker into the bottom tier _and_ reduces the wages of that bottom tier.

We are speeding towards a servant class. Uber was the first wave. Now it’s more mundane things like getting groceries. I doubt it will be long before we rip off the band-aid and make full-time servants more popular.


You're right, and I think we're slightly at cross purposes. I'm not disagreeing that AI will drive some major societal changes as you outline.

My point is that the current narrative of "AI will take our jobs" is too simplistic, and that it might even be a smokescreen against the rising inequality that is already fueling anger across the world and which is totally unrelated to AI. If you're struggling to pay your bills today, that's not AI's fault - it's years of bad politics and politicians, geopolitics, hyper-capitalism, supply-chain issues, inflation, and so on.

In the future, if/when AI decimates parts of the middle class and they've had a chance to retrain, there will likely be a second-order impact on today's skilled manual workers. But that's years off, and not something I've seen discussed in detail in the mainstream.

You're probably aware, but if not, worth a read: https://www.citriniresearch.com/p/2028gic


I guess I just feel like your appeal to skilled manual workers is pointless. They’re not really the focal point. It’s the large masses of people being relegated to the bin labeled “effectively unskilled”.

Getting dumped from "upwardly mobile middle class" to "unemployable underclass" does seem likely to be radicalizing. It's not clear yet how much of it will actually happen, but it does challenge the traditional focus on blue-collar workers as the ones most up in arms about automation and labor.
