The current strategy of the AI hype machine is to exhaust people's reserves of attention by presenting a never-ending stream of hard-to-verify "positive" claims. It's Gish Gallop done on the Internet scale with a never-ending parade of tech influencers, proxy "journalists" and low-value accounts. The whole strategy aims for saturation and demoralized acceptance.
It's no surprise that people readjust their immediate reactions by expressing hostility and skepticism about anything AI-related without spending much time on analysis. In fact, it's an entirely rational response.
Complaining about it without acknowledging the larger picture is disingenuous.
In this particular case, using the term "machine learning" would likely avoid the immediate negative reaction.
The Gaussian Processes underpinning this work are hardly a product of the 'AI Hype Machine' - they've been around for decades, have strong statistical underpinnings, and are being widely explored for experimental design across many disciplines. Reflexive and poorly-informed backlash to any variety of machine learning is no more productive than blindly hyping up LLMs.
Meta Platforms, Inc. featuring this technology with a title announcing “AI for American-produced cement and concrete” is, on the other hand, 1000% a product of the AI Hype Machine.
Sure, it's clearly marketing. I think a private company pursuing marketing via open research with open source code (including datasets) is a good trade. A hypey blogpost + research is better than no blogpost and no research.
Was that the one immediately after the great paradigm shift of November 2025, and before the great paradigm shift of January 2026? I think I remember it.
There was no such paradigm shift. LLMs still suck just as much as they did before, in the exact same ways they did before. In 6 months you'll be trying to BS us about the "great paradigm shift of summer 2026".
This is an excellent characterization of the kind of marketing tactic I see all over social media right now and that I find absolutely disgusting.
The keyword here is fear. Despite its faux-positive veneer, the messaging around certain technologies (especially GenAI) is clearly designed to induce anxiety and fear rather than inspire genuine optimism or pique curiosity. This is significant, because fear is one of the most powerful tools for shutting down rational thinking.
The subliminal (although not very subtle) message there is something very primitive. "If you don't join our group, you will soon starve to death." This is radically different from how most transformative technologies were promoted in the past.
It seems that the emotional rhetorical range in general has been stunted to just fear. Politicians seem to be the worst at it. They used to be able to give actually inspiring speeches. Now they just mash the fear button for everything to get what they want and then wonder why there are problems of despair in society.
I think AI is not quite the same as crypto when it comes to FOMO. At the peak of the craze you could not write on HN that 'crypto is nonsense' unless you wanted to be modded down to oblivion, to be shadow banned forever. I exaggerate, but not much.
With AI people are able to say 'this is nonsense' without people getting the pitchforks out.
As for myself, I don't have the bandwidth to learn how to do clever things with AI. I know you just have to write a prompt and it all happens by magic, but I have been burned quite badly.
First off, my elderly father got tricked out of all of his money and my mother's savings, which were intended for my niece, when she comes of age. It was an AI chatbot that did the deed. So no inheritance for me, cheers AI, didn't need it anyway!
Then there was the time I wanted to tidy up the fonts list on my Ubuntu computer. I just wanted to remove Urdu, Hebrew and however many other fonts that don't have any use for me. So I asked Google and just copied and pasted the Gemini suggestion. Gemini specified command line options so that you could not review the changes, but the text said 'use this as you can review changes'. I thought the '-y' looked off, but I just wanted to do some drawing and was not really thinking. So I typed in the AI suggestion. It then began to remove all the fonts and the window manager, and the apps. It might as well have suggested 'sudo rm -fr /'.
This was my wakeup call. I am sure an AI evangelist could blame me for being stupid, which I freely admit to. However, as a clueless idiot, I have been copying and pasting from Stack Overflow for aeons without ever being tricked into destroying all my work.
My compromise is to allow some fun with cat pictures, featuring my uncle's cat, with Google Banana. This allows me to have a toe in the water.
Recently I went on a course with lots of people, few of them great intellects. I was amazed at how popular AI was with people who have no background in coding. They have collectively outsourced their critical thinking to AI.
I did not feel the FOMO. However, I am old enough to remember when Word came out. I was at university at the time and some of my coursemates were using it. I had genuine FOMO then. What is this Word tool? I was intimidated that I had this to learn on top of my studies. In time I did fire up Word, to find that there was nothing to learn of note, apart from 'styles', which few use to this day, preferring to highlight text and make it bold or bigger. I haven't used a word processor in decades; however, it was a useful tool for a long time.
Looking back, I could have skipped learning how to use a word processor and stuck with vi, LaTeX and Ghostscript until email became the way. But, for its time, it was the tool. AI is a bit like that: for some disciplines, you can choose to do it the hard way, using your own brain, or use the new tools. However, I have been badly burned, so I am waiting it out.
Small Web, Indie Web and Gemini are terminally missing the point. The web in the 90s was an ecosystem that attracted people because of experimentation with the medium, diversity of content and certain free-spirited social defaults. It also attracted attention because it was a new, exciting and rapidly expanding phenomenon. To create something equivalent right now you would need to capture those properties, rather than try to revive old visual styles and technology.
For a while I hoped that VR would become the new World Wide Web, but it was successfully torpedoed by the Metaverse initiative.
There's an element of nostalgia, certainly, but it's also a reaction to the overwhelmingly commercial web. Why not build something instead of scrolling through brief videos interspersed with more and more ads that follow you everywhere?
Large companies have helped build the web but they've done at least as much, if not more, to help kill it.
The small web can be a lot of things, but IMO it gets too overrun by the ideologically zealous. One does not have to believe in primitive anarchism to enjoy camping, for example. In general it seems any niche idea on the internet is like candle flame to zealous moths.
Ideological zealots are more or less the only people who hate the modern web so much that they want to quarantine themselves within an entirely different and functionally limited protocol or ecosystem. Everyone else is fine discussing camping in Facebook groups and on Reddit and wherever, maybe just using an ad blocker.
I don't think there's anything terribly modern about a collection of large companies trying to present themselves as the entirety of a given thing (the internet in this case).
I don't think any social media platform has ever actually tried to present itself as the entirety of the internet.
I don't think anyone actually believes social media platforms comprise the entirety of the internet, either.
But that isn't really what people tend to complain about when they complain about the "modern" web. Mostly it's the complexity of websites and the presence of advertising and JavaScript, the homogeneity of frameworks versus the "quirkiness" of hand-coded HTML, the consolidation of content into platforms (versus, again, hand-coded HTML) and the fact that the web no longer entirely consists of people like themselves. And now AI, of course.
And the fact that every single alt-web is more restrictive than the web, almost universally antithetical to "design" or "creativity" as opposed to pure hypertext, and seems meant to appeal only to the strictly technical mind, bears that out.
It's about capturing the noncommerciality, not the experimentation. Most of the small web sites are just blogs, a solved problem by now, but there's interesting content in many of them.
I'm a dinosaur who bemoans the loss of whatever-it-was we had prior to the mass exploitation and saturation of the web today, so I feel it's my duty to check out Gemini and stop complaining. I'm prepared to trade ease of use or some modern functionality for better content and less of what the internet has become.
Not quite. I think Gemini has deliberately gone for a "text only" philosophy, which I think is very constraining.
The early web had a lot going on and allowed for a lot of creative experimentation which really caught the eye and the imagination.
Gemini seems designed to allow only long-form text content. You can't even have a table, let alone inline images, which makes it very limited even for dry scientific research papers, which I think would otherwise be an excellent use case for Gemini. But this sort of thing seems to be a deliberate design/philosophical decision by the authors, which is a shame. They could have supported full Markdown, but they chose not to (ostensibly to ease client implementation, but there are a squillion Markdown libraries, so that assertion doesn't hold water for me).
It's their protocol so they can do what they want with it, but it's why I think Gemini as a protocol is a dead-end unless all you want to do is write essays (with no images or tables or inline links or table-of-contents or MathML or SVG diagrams or anything else you can think of in markdown). Its a shame as I think the client-cert stuff for Auth is interesting.
It’s tough, but one of the tenets of Gemini is that a lone programmer can write their own client in a spirited afternoon or weekend. Markdown is just a little too hard to clear that bar. There was already much bellyaching on the mailing list about forcing dependence on SSL libraries; suggesting people rely on even more libraries would have been a non-starter.
Note that the Gemini protocol is just a way of moving bytes around; nothing stops you from sending Markdown if you want (and at least some clients will render it - same with inline images).
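To the point about the protocol just moving bytes around: the wire format really is tiny (a URL plus CRLF out, a status/meta header line plus body back over TLS on port 1965). A minimal sketch in Python, with function names and error handling of my own invention rather than from any reference client:

```python
import socket
import ssl
from urllib.parse import urlparse

def build_request(url: str) -> bytes:
    """A Gemini request is just the absolute URL followed by CRLF."""
    return url.encode("utf-8") + b"\r\n"

def parse_header(line: bytes) -> tuple[int, str]:
    """The response header is '<two-digit status> <meta>' followed by CRLF."""
    status, _, meta = line.decode("utf-8").strip().partition(" ")
    return int(status), meta

def fetch(url: str) -> tuple[int, str, bytes]:
    """Fetch a gemini:// URL; returns (status, meta, body)."""
    host = urlparse(url).hostname
    ctx = ssl.create_default_context()
    # Many capsules use self-signed certs (trust-on-first-use), so real
    # clients relax verification and pin the cert; that nuance is omitted.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, 1965)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall(build_request(url))
            raw = b""
            while chunk := tls.recv(4096):
                raw += chunk
    header, _, body = raw.partition(b"\r\n")
    status, meta = parse_header(header)
    return status, meta, body
```

That is essentially the whole transport, which is why "a client in a weekend" is plausible; all the remaining work is rendering whatever body bytes come back (gemtext by default, but Markdown or image bytes if the server says so in `meta`).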
Didn't the creator of the protocol go on a rant when someone made a browser for Gemini that included a favicon?
I can't imagine the backlash if someone tried to normalize Markdown. Isn't the entire point of Gemini that it can never be extended or expanded upon?
Maybe it would be better to create an entirely different protocol/alt web around Markdown that didn't risk running afoul of Gemini's philosophical restrictions?
Yeah, instead someone makes a new and incompatible protocol whenever they want to change it.
> The SmolNet consists of content available through alternative protocols outside the web such as gemini:// gopher:// Gopher+ gophers:// finger:// spartan:// text:// SuperText nex:// scorpion:// mercury:// titan:// guppy:// scroll:// molerat:// terse:// fsp://. There is a summary of the main SmolNet protocols.
I think a "markdown-web" that uses some of the Gemini approaches for privacy and auth/identity etc would be pretty nice.
Of course, as others have said, we could just use HTML without JavaScript or cookies and we'd be a lot of the way there with 95% less effort. But hey, in the future we'll probably just query an AI rather than load a web page ourselves.
Given how many people on HN say they like Gemini in principle but wish it weren't so restrictive, some people would use it. All of those people might just be that cross section of HN users, however.
There are images in geminispace, and audio, and (probably) video. It's just not inline. One of the constraints of the protocol is that pages cannot load content without your express say-so.
I would like to note that it would be trivial to definitively prove or disprove such things if we had a searchable public archive of the training data. Interestingly, the same people (and corporate entities) who loudly claim that LLMs are creating original work seem to be utterly disinterested in having actual, definitive proof of their claims.
The fact that everyone is now constantly forced to use (oftentimes faulty) personal heuristics to determine whether or not they read slop is the real problem here.
AI companies and some of their product users relentlessly exploit the communication systems we've painstakingly built up since 1993. We (both readers and writers) shouldn't be required to individually adapt to this exploitation. We should simply stop it.
And yes, I believe that the notion this exploitation is unstoppable and inevitable is just crude propaganda. This isn't all that different from the emergence of email spam. One way or the other this will eventually be resolved. What I don't know is whether this will be resolved in a way that actually benefits our society as a whole.
> fact that everyone is now constantly forced to use (oftentimes faulty) personal heuristics to determine whether or not they read slop is the real problem here
It would be ironic and terrific if AI causes ordinary Americans to devote more time to evaluating their sources.
> Instead of asking “which language is best?” we need to ask “what is this language going to cost us?”
As long as engineering salaries depend on tribal identity markers (i.e. language and tooling preferences) rather than ability to save money, people will entirely rationally choose tools that look good on their resume rather than save their companies money.
In the past the problem was about transferring a mental model from one developer to the other. This applied even when people copy-pasted poorly understood chunks of example code from StackOverflow. There was specific intent and some sort of idea of why this particular chunk of code should work.
With LLM-generated software there can be no underlying mental model of the code at all. None. There is nothing to transfer or infer.
It’s even worse because it’s not obvious whether the solution an LLM produces was deliberately chosen by the user and favored over a different approach for some reason, or was just whatever happened to be output and “works”.
I’ve had to give feedback to some junior devs who used quite a bit of LLM-created code in a PR but didn’t stop to question whether we really wanted that code to be “ours” versus using a library. It was apparent they didn’t consider alternatives and just went with what it made.
The same accounts that defended and promoted LLM use just a few weeks ago are now telling RPi users to use less RAM.