Hacker News | michaelbuckbee's comments

Maybe check out TRMNL, they've got a Home Assistant plugin.

From the article: "Polar bears are still sadly expected to go extinct this century, with two-thirds of the population gone by 2050,"

There's a push and pull here: TypeScript + React + Vercel are also very amenable to LLM-driven development, thanks to a mix of how well represented they are in LLM training data, how cheap the deployment is, and how quickly you can get going in that ecosystem.

Memory makers make capital investments (building different factories, converting physical production lines, etc.) to meet orders that have been placed for the next ~5 years.

OpenAI (or whoever) crashes and can't pay for the order, leaving the memory makers in a tough spot.


> leaving the memory makers in a tough spot

Oh noes! Think of the poor memory makers!

The amount of money flowing in, both from the AI bubble and from quite literally scalping the server and consumer markets... They gambled on the opportunity, and if they fail, it's their problem.


Exactly, that's why they are not building more capacity and that's why RAM prices will stay up for years.

And how is that a problem, and more importantly, how is that the average Joe's problem?

Capitalists made their gamble. If they lose it, what stops them from selling the regular RAM they made for the AI bubbleists to regular consumers instead? HBM aside, these are exactly the same chips for the consumer and server markets, so why would it be any different?


Partially this was solved by the Zip drive being designed for portability as well (I'm not even sure there was ever a built-in model). So if you needed to copy a large file and take it to a friend's house, you just took the drive with you.

More importantly it collapses mythical-man-month communication overhead.

Hang on, tell me how, because I am not picking up what you are putting down. At a minimum, wouldn’t this require working from a perfectly written spec that has already accounted for the discovery of changes that would need to be made from the original perfect spec?

So we have two things here:

1. "The Mythical Man-Month" is shorthand for a whole book and concept: you can't just throw more people at a software development project and get linear productivity improvements, because the communication overhead (meetings, emails, mistakes due to poor assumptions, etc.) eats deeply into the productive hours each new person brings to the team.
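The overhead Brooks describes grows quadratically, not linearly: a team of n people has n(n-1)/2 pairwise communication channels. A quick sketch of the arithmetic (just illustrating the book's point, nothing specific to this thread):

```python
def comm_channels(n: int) -> int:
    """Pairwise communication channels on a team of n people (Brooks's law)."""
    return n * (n - 1) // 2

for team in (2, 5, 10, 20):
    print(f"{team:>2} people -> {comm_channels(team):>3} channels")
# Doubling a 10-person team (45 channels) more than quadruples
# the channels (20 people -> 190), which is where the overhead hides.
```

This is why "add more people" makes things worse on a late project, and part of the argument below about why a solo developer plus tooling sidesteps some of that overhead.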

2. AI automation tools (Claude Code) are often described as a "junior developer," which is an imperfect comparison: while you could set them up that way, many people use them as more of a singular force multiplier.

I use them to work on many more projects in many more ways, and ship far more than I could even with a "junior developer" sitting alongside me, because there isn't the same level of communication needed.


The way I see it, because you can spin up additional AI employees at will (and spin them back down), when the problem with the spec is found, it's no big deal to redo all of that work from before, adjusting for that change.

Ironically, people keep saying this, but then gloss over the core problem of coordination between these agents... For completely independent codebases with no dependencies, sure, go for it. But the vast majority of F500 companies I work with have wild, undocumented dependencies between almost every system that will take years to "agentify" (assuming they ever figure out that it's an organizational and governance problem... which they might never realize).

This is an amazing frame/reframe.

What's weird, though, is the bifurcation in pricing in the market: if your app can function on a non-frontier AI, you can use last year's model at a fraction of the cost.

This is one of those slippery-slope things: Grammarly did "just" grammar, then slowly got into tone, perception, and brand-voice suggestions, and now seems to more or less want to shave everything down to be as bland as possible.

I tried using an LLM to help me write some stuff and it simply didn't sound like I'd written it - or, it did but in a kind of otherworldly way.

The only way I can describe it is like when I was playing with LPC10 codecs (the 2400 bps codec used in the Speak & Spell and other such '80s talking things). It didn't sound like me, it sounded like a Speak & Spell with my accent, if that makes sense.

No? Okay, if not, if you want I could probably record another clip to show you.


All you have to do is prompt your AI with a writing sample. I generally give it something I wrote on my blog. It still doesn't write like I do, and it seems to take more than that to get rid of the em dashes, but it at least kicks it out of "default LLM" mode and is generally an improvement.

It's fine. We can't have it both ways. I prefer bad grammar to Claude blandness, so I think the author should just write how they write.

Is there an example of a consumer facing SaaS that's been able to handle the "unlimited" in a way you'd consider positive?

US cellular data plans? Where you're throttled after a soft cap?

Although I will say it's been nice to have them give more transparency around their actual soft cap numbers.
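The "unlimited with a soft cap" model is easy to state precisely: full speed until a monthly usage threshold, then a deprioritized speed afterward. A minimal sketch (the cap and speeds here are hypothetical, not any carrier's actual numbers):

```python
# Hypothetical plan parameters, purely for illustration.
SOFT_CAP_GB = 50          # full-speed allowance per billing cycle
FULL_MBPS = 300.0         # pre-cap speed
THROTTLED_MBPS = 1.5      # deprioritized post-cap speed

def current_speed(used_gb: float) -> float:
    """Speed (Mbps) a subscriber sees given data used this billing cycle."""
    return FULL_MBPS if used_gb < SOFT_CAP_GB else THROTTLED_MBPS

print(current_speed(10))   # under the cap: 300.0
print(current_speed(80))   # over the cap: 1.5
```

The point being made in the thread is that publishing SOFT_CAP_GB (rather than hiding it behind "unlimited") is what makes the scheme feel honest.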


That’s an example of where unlimited can work (because the limit is a number of hours of degraded service which is quantifiable).

Storage was already a hairy beast with the original setup, and it would be much better if they had defined limits you could at least know about (and pay for).


You can only do it during growth phases, or if there are complementary products with margin. The story I was told about Office 365 was that when they were using spinning disk, Exchange was IOPS-bound, so they had lots of high-volume, low-IOPS storage to offer for SharePoint. Google has a similar story, although neither is really unlimited, just approaching unlimited for large customers.

Once growth slows, churn eats much of the organic growth and you need to spend money on marketing.


Google and Youtube, especially Youtube.

YouTube is constantly re-encoding videos to save space, at the expense of older content looking like mud, so arguably even they're having their struggles.

We all know the "nobody has watched this video in ten years, login at least once or it'll be yeeted" email is coming, someday.

YT's growth would have to decline pretty substantially for that to be the case. All the 360p video from 2010-2015 probably doesn't take up even 1% of the storage added by new videos in 2025.
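A rough back-of-envelope supports the scale argument (every figure below is a made-up assumption purely for illustration, not an actual YouTube statistic):

```python
# All numbers are hypothetical assumptions, chosen only to show the
# order-of-magnitude gap between the legacy 360p catalog and new uploads.
HOURS_PER_MIN_LEGACY = 30     # assumed upload rate circa 2010-2015, hours/min
HOURS_PER_MIN_2025 = 500      # assumed upload rate in 2025, hours/min
GB_PER_HOUR_360P = 0.2        # assumed storage cost of a 360p encode
GB_PER_HOUR_MODERN = 3.0      # assumed average across modern resolutions

MINUTES_PER_YEAR = 365 * 24 * 60

# Six years (2010-2015) of 360p vs a single year (2025) of modern uploads.
legacy_tb = 6 * HOURS_PER_MIN_LEGACY * MINUTES_PER_YEAR * GB_PER_HOUR_360P / 1000
new_2025_tb = HOURS_PER_MIN_2025 * MINUTES_PER_YEAR * GB_PER_HOUR_MODERN / 1000

print(f"2010-2015 360p catalog: ~{legacy_tb:,.0f} TB")
print(f"2025 uploads alone:     ~{new_2025_tb:,.0f} TB")
print(f"legacy share of one year's uploads: {legacy_tb / new_2025_tb:.1%}")
```

Under these made-up numbers the legacy catalog lands at a few percent of a single modern year's uploads; the exact share moves with the assumptions, but the conclusion that deleting old 360p video barely dents storage growth is robust to them.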

True, it's more likely to be aimed at stemming the tide of 4k video that nobody watches - but luckily they're worth more than Disney right now so we don't have to confront that ... yet.

YouTube shorts are incredibly highly compressed.

Google does not have unlimited. I had to pay to increase my storage.

Google Drive reneged on unlimited storage for Education accounts once they realized that universities also contain researchers who need to store huge amounts of data.

Not only did they cut unlimited, they went to insultingly low limits with not much warning after all their nice promises. Moderately large universities ended up with less space per student than the 15GB they give out to anyone for free. It was a pretty bad rug pull.

Massive fraud from abroad didn't help there either. A favorite backup spot for terabytes of pirated media, complete with guides on which schools had good @edu addresses for it.

Hadn't even considered your obvious point, a good one!


Google forced everyone off their deprecated G Suite for Business plan (which had unlimited storage) and onto a Workspace plan.

I had to give up and delete plenty of data because of this. That data was important to me, but not important enough to pay their ransom.


Telegram?

Two things:

1. Tests have always been about both the function of the application and the communication of what should be occurring, whether to the larger team or to yourself six months down the road.

With automated software development, communication with the LLM itself is a much larger part of it, so I feel like it's "ok" to have lots of easy tests that are less about rigor and more about "yes, this is how this should work."

2. Ideally we'll get to the point where the tooling allows for adversarial agents, one writing code and one writing tests. Even now, just popping open a separate terminal window and generating and running tests there, apart from your main coding terminal, is helpful.

