Hacker News | lijok's comments

Most transparent marketing stunt to date.

25k - come on now..


This isn't a web development concept. It's the Unix philosophy of "write programs that do one thing and do it well" and interconnect them, taken to extremes that were never intended.

We need a different hosting model.


Just throwing it out there - the Unix way to write software is often revered. But ideas about how to write software that came from the 1970s at Bell Labs might not be the best ideas for writing software for the modern web.

Instead of "programs that do one thing and do it well", "write programs which are designed to be used together" and "write programs to handle text streams", I might go with a foundational philosophy like "write programs that do not trust the user or the admin", because in applications connected to the internet, both groups often make mistakes or are malicious. Also something like "write programs that are strict about which inputs they accept", because a lot of input is malicious.
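A minimal sketch of what "strict about which inputs they accept" can look like in practice: allow-list validation rather than trying to enumerate bad inputs. The function name, limits, and extension list below are illustrative assumptions, not any project's actual API:

    import re

    # Illustrative limits only; real values depend on the application.
    ALLOWED_EXTENSIONS = {"png", "jpg", "pdf"}
    MAX_FILENAME_LEN = 64
    FILENAME_RE = re.compile(r"[A-Za-z0-9._-]+")

    def validate_filename(name: str) -> str:
        """Accept only what is explicitly allowed; reject everything else."""
        if not name or len(name) > MAX_FILENAME_LEN:
            raise ValueError("filename missing or too long")
        if not FILENAME_RE.fullmatch(name):
            raise ValueError("filename contains characters outside the allow-list")
        ext = name.rsplit(".", 1)[-1].lower()
        if ext not in ALLOWED_EXTENSIONS:
            raise ValueError(f"extension .{ext} is not permitted")
        return name

    # validate_filename("report.pdf") -> "report.pdf"
    # validate_filename("../../etc/passwd") -> raises ValueError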


The Unix model wasn't simply do one thing and do it well.

It was also a different model of ownership and vetting of those focused tools. It might have been a model of having the single source tree of an old UNIX or BSD, where everything was managed as a coherent whole, from grep to cc all the way to X11. Or it might have been the Linux distribution model, where dedicated packagers did the vetting and assembled piecemeal packages into more of a bazaar, even going so far as to split scripting-language bundles into their component pieces, as for Python and Perl.

But in both of those models you were put farther away from the third-party authors bringing software into the open-source (and proprietary) supply chains.

This led to a host of issues with getting new software to users and with a fractal explosion of different versions of software dependencies to potentially have to work around, which is one reason we saw the explosion of NPM and Cargo and the like. Especially once Docker made it easy to go straight from stitching an app together with NPM on your local dev seat to getting it deployed to prod.

But the issue isn't with focused tooling as much as it is with hewing more closely to the upstream who could potentially be subverted in a supply chain attack.

After all, it's not as if people never tried to do this with Linux distros (or even the Linux kernel itself -- see for instance https://linux.slashdot.org/story/03/11/06/058249/linux-kerne... ). But the inherent delay and indirection in that model helped make it less of a serious risk.

But even if you only use 1 NPM package instead of 100, if it's a big enough package you can assume it's going to be a large target for attacks.


> Just throwing it out there - the Unix way to write software is often revered. But ideas about how to write software that came from the 1970s at Bell Labs might not be the best ideas for writing software for the modern web.

GP said it's about taking the Unix philosophy to extremes; you're saying something different.

Anything taken to extremes is bad; the key word there is "extremes". There is nothing wrong with the Unix philosophy, as "do one thing and do it well" never meant "thousands of dependencies over which you have no control, pulled in without review or thought".


I do not see what this has to do with Unix. The problem is not that programs interoperate or handle text streams, the problem is a) the supply chain issues in modern web-software (and thanks to Rust now system-level) development and b) that web applications do not run under user permissions but work for the user using token-based authentication schemes.
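To illustrate (b): a typical web backend never consults OS user permissions at all; it checks a bearer token that stands in for the user, so anything holding the token acts with the user's full delegated access. A minimal sketch using only the Python standard library (the signing scheme and claim name are illustrative assumptions, not any particular framework's API):

    import base64
    import hashlib
    import hmac
    import json

    SECRET = b"server-side-secret"  # illustrative; in reality loaded from config

    def issue_token(user: str) -> str:
        # Sign a claim; whoever presents this string acts as `user`.
        payload = base64.urlsafe_b64encode(json.dumps({"sub": user}).encode()).decode()
        sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        return f"{payload}.{sig}"

    def verify_token(token: str) -> str:
        # Note what is *not* checked: nothing about OS users or file permissions.
        payload, sig = token.rsplit(".", 1)
        expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            raise PermissionError("bad token")
        return json.loads(base64.urlsafe_b64decode(payload))["sub"]

    # Any code that can read the token, including a malicious dependency,
    # gets the user's full delegated access.
    token = issue_token("alice")
    assert verify_token(token) == "alice"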

I guess we failed at the "do it well" step.

> We need a different hosting model.

There really isn't an option here, IMO.

1. Somebody does it

2. You do it

Much happier doing it myself tbh.


There's a lot of wiggle room on how you define "it". At the ends of the spectrum it's obvious, but in the middle it gets a bit sticky.

In my mind the unix philosophy leads to running your cloud on your own hardware or VPS's, not this.

Exactly this: write, not use some sh*t written by some dude from Akron, OH two years ago.

That's why I wrote my own compiler and coreutils. Can't trust some shit written by GNU developers 30 years ago.

And my own kernel. Can't trust some shit written by a Finnish dude 30 years ago.

And my own UEFI firmware. Definitely can't trust some shit written by my hardware vendor ever.


Yeah, definitely no difference between GNU coreutils and some vibe-coded AI tool released last month that wants full OAuth permissions.

I’m not joking, but weirdly enough, that’s what most AI arguments boil down to. Show me what the difference is while I pull up the endless CVE list of whichever coreutils package you had in mind. It’s a frustrating argument because you know that authors of coreutils-like packages had intentionality in their work, while an LLM has no such thing. Yet in the end, security vulnerabilities are abundant in both.

The AI maximalists would argue that the only way is through more AI. Vibe code the app, then ask an LLM to security review it, then vibe code the security fixes, then ask the LLM to review the fixes and the app again, rinse and repeat in an endless loop. Same with regressions, performance, features, etc.: stick the LLM in endless loops for every vertical you care about.
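Roughly, the "loop" being described looks like this as a sketch (the `llm` call and the verticals are purely hypothetical placeholders; this is a sketch of the argument, not a working or recommended pipeline):

    # Rough sketch of the "maximalist" loop described above. `llm` is a purely
    # hypothetical placeholder for whatever model/agent call is in use.
    def llm(instruction: str, code: str) -> str:
        raise NotImplementedError("placeholder for a model call")

    def maximalist_loop(app_code: str, rounds: int = 5) -> str:
        verticals = ["security", "regressions", "performance", "features"]
        for _ in range(rounds):
            for vertical in verticals:
                review = llm(f"review this code for {vertical} issues", app_code)
                app_code = llm(f"rewrite the code applying these fixes: {review}", app_code)
        # Nothing here establishes intent or ground truth; the loop just
        # converges on whatever the model stops flagging.
        return app_code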

Pointing to failed experiments like the browser or compiler ones somehow doesn’t seem to deter AI maximalists. They would simply claim they needed better models/skills/harness/tools/etc. The goalpost is always one foot away.


"endless list of CVE" seems rather exaggerated for coreutils. There are only very few CVEs in the last decade and most seem rather harmless.

Now I'd genuinely like to know whether "yes" had a CVE assigned, not sure how to search for it though...
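FWIW, one way to search: NVD exposes a public CVE API with a keyword parameter. A quick sketch against the documented 2.0 endpoint (the response fields shown are my reading of the schema, so double-check against the docs; unauthenticated requests are rate-limited):

    import json
    import urllib.parse
    import urllib.request

    # Keyword search against NVD's public CVE API (the 2.0 JSON endpoint).
    params = urllib.parse.urlencode({"keywordSearch": "coreutils"})
    url = "https://services.nvd.nist.gov/rest/json/cves/2.0?" + params

    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)

    for item in data.get("vulnerabilities", []):
        cve = item["cve"]
        print(cve["id"], "-", cve["descriptions"][0]["value"][:100])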

I wouldn't describe myself as an AI maximalist at all. I just don't believe the false dichotomy of you either produce "vulnerable vibe coded AI slop running on a managed service" or "pure handcrafted code running on a self hosted service."

You can write good and bad code with and without AI, on a managed service, self-hosted, or something in between.

And the comment I was replying to said something about not trusting something written in Akron, OH 2 years ago, which makes no sense and is barely an argument, and I was mostly pointing out how silly that comment sounds.


I used to believe that too, yet the dichotomy is what’s being pushed by what I called an “AI maximalist” and it’s what I was pushing against.

There is no “I wrote this code with some AI assistance” when you’re sending a 2k-line PR eight minutes after I gave you permission on the repo. That’s the type of shit I’m dealing with, and management is ecstatic at the pace and progress, and the person just looks at you and says “anything in particular that’s wrong or needs changing? I’m just asking for a review and feedback”.


It's such a bad faith argument, they basically make false equivalencies with LLMs and other software. Same with the "AI is just a higher level compiler" argument. The "just" is doing a ton of heavy lifting in those arguments.

Regarding the Unix philosophy argument, comparing it to AI tools just doesn't make any sense. If you look at what the philosophy is, it's obvious that it doesn't just boil down to "use many small tools" or "use many dependencies"; it's so different that it's not even wrong [0].

In their 1974 Unix paper, Ritchie and Thompson give the following design considerations:

- Make it easy to write, test, and run programs.

- Interactive use instead of batch processing.

- Economy and elegance of design due to size constraints ("salvation through suffering").

- Self-supporting system: all Unix software is maintained under Unix.

In what way does that correspond to "use dependencies" or "use AI tools"? This was then formalised later to

- Write programs that do one thing and do it well.

- Write programs to work together.

- Write programs to handle text streams, because that is a universal interface.

This has absolutely nothing in common with pulling in thousands of dependences or using hundreds of third party services.

Then there is the argument that "AI is just a higher level compiler". That is akin to me saying that "AI is just a higher level musical instrument" except it's not, because it functions completely differently to musical instruments and people operate them in a completely different way. The argument seems to be that since both of them produce music, in the same way both a compiler and LLM generate "code", they are equivalent. The overarching argument is that only outputs matter, except when they don't because the LLM produces flawed outputs, so really it's just that the outputs are equivalent in the abstract, if you ignore the concrete real-world reality. Using that same argument, Spotify is a musical instrument because it outputs music, and hey look, my guitar also outputs music!

0: https://en.wikipedia.org/wiki/Not_even_wrong


So it’s not a binary thing, there’s context and nuance?

Embrace the suck.

cue Jeopardy theme song

Who is Apple?


TempleOS, is that you?

It's not a hosting model, it's a fundamental failure of software design and systems engineering/architecture.

Imagine if cars were developed like websites, with your brakes depending on a live connection to a 3rd party plugin on a website. Insanity, right? But not for web businesses people depend on for privacy, security, finances, transportation, healthcare, etc.

When the company's brakes go out today, we all just shrug, watch the car crash, then pick up the pieces and continue like it's normal. I have yet to hear a single CEO issue an ultimatum that the OWASP Top 10 (just an example) will be prevented by X date. Because they don't really care. They'll only lose a few customers and everyone else will shrug and keep using them. If we vote with our dollars, we've voted to let it continue.


ShinyHunters are a phishing group. What does this have to do with AI agents?

Run AI agents around the clock to do hyper-targeted phishing

I feel like humans would be better at hyper targeting.

AI agents have the benefit of working at scale, probably "better" used for mass targeting.


This is like saying email marketing is done better if you hand-write every email. That's true, but the hit rate is so low that you are better off generating 1 million hyper-personalized emails and firing them off into the ether.

As someone who did the former for a couple years, “better off” is subjective and dependent on your business model, particularly for B2B. It’s a trade off like anything else. You may get more leads, but they may convert at a lower rate. Sending at that scale also increases your risk of email deliverability problems. Trashing your domain has more impacts than you’d think. In smaller, targeted markets it even can damage your business reputation and hurt future sales if done poorly; word gets around.

If you’re targeting a million people, I wouldn’t consider that a hyper targeted attack.

But I get your point.


I disagree. Many humans are phishing in a different language than their native tongue, and LLMs are way better at sounding legit/professional than many of them. The best spear-phishing will still be humans, but AI definitely raises the bar.

> who cares if it can store an exabyte if it takes all month to read it

> You probably don't want to have to need a separate device to read and a device to write

Are you only thinking about home consumer applications?


I’m not sure what the GP is thinking, but I would love a cheap-ish exabyte storage even if it takes a month to read fully. Damn, I’d gladly take it even if the speed is comparable to an SSD! (Though the price would be a question of course.)
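For a rough sense of what "takes all month to read" implies in bandwidth terms, a quick back-of-the-envelope calculation (decimal exabyte, 30-day month, and an assumed ~7 GB/s NVMe figure):

    EXABYTE = 10**18               # bytes, decimal EB
    MONTH = 30 * 24 * 3600         # seconds in a 30-day month

    # Sustained throughput needed to read a full exabyte in one month.
    print(f"{EXABYTE / MONTH / 1e9:.0f} GB/s")          # ~386 GB/s

    # For comparison: time to read an exabyte at a fast NVMe SSD's ~7 GB/s.
    SSD = 7e9                      # bytes/second (assumed figure)
    print(f"{EXABYTE / SSD / 86400 / 365:.1f} years")   # ~4.5 years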

Not a very strong argument now, is it?

If the project already has positive revenue, then arguably the ability to capture new users is worth a lot, which requires acceptable performance even when a big traffic surge is happening (like an HN hug of attention).

if the scalability is in the number of "zero cost" projects to start, then 5 vs 15 is a 3x factor.


which is exactly why this being different departments makes no sense

one infra team - provides the entire platform

any other approach and you’re dicking around


You’re conflating training and inference


They changed. You wouldn’t believe it, but those most impacted by the mental rot that social media can induce are the ultra wealthy.


I have a tangential theory to this.

Being rich != being famous. There are tons of extremely wealthy people out there that keep a very low profile. Sure they might be well known within their circle but ask the average person and they have no clue who that person is. I would say this is the case for like 90-95% of billionaires.

Musk, Andreessen, Zuck and others were all in this camp 10 years ago but they all decided that simply being rich wasn't enough, they wanted to be famous. These folks have all the resources and connections to become famous so they can get on all the podcasts, write op-eds, and are guaranteed to get the best reach on social media and thus the most eyeballs on their content and the most attention paid to them.

But when you go from making a few media appearances a year to constantly making media appearances in one way or another, you need more "content", so to speak. Just like a comedian needs more content if they are going to do a 1hr special versus a 10min set at a comedy club.

The problem for all these guys is they have a few genuinely insightful ideas mixed in with a ton of kooky and out-of-touch ideas. Before, they could safely stick to the genuinely insightful ideas, but as they've made more and more appearances, they have to reach for some of those other ideas. They don't realize that their kooky ideas sound very different from their few insightful ideas. They think all their ideas are insightful based on the feedback they have been getting for the past decade or so.


> Being rich != being famous.

> decided that simply being rich wasn't enough, they wanted to be famous

While these are true, the real detail is that these people were never satisfied with being rich -- they wanted to be powerful. And influence is what makes one powerful. Being rich goes a certain distance: once you have f-you money, the only thing worth buying to gain more power is fame.

They also truly believe they have all the right ideas, and the validation that comes from being platformed for a financial success (often right-place-right-time type luck, but sometimes combined with genuine skill or insight in a relevant field) hardens them to all criticism.


> They also truly believe they have all the right ideas, and the validation that comes from being platformed for a financial success

Not only that, but they clearly surround themselves with sycophants who always tell them they're absolutely right. Imagine what it's like to go 10 years without anyone having the guts to tell you you're wrong or your ideas are actually stupid. What would that do to your ego?


I need to reread it but Paul Fussell makes the case that old wealth is inconspicuous and secure (and maybe inherited) versus nouveau riche which is about visible luxury, branding, and showy consumption. I don't remember if he mentions the need to promote ideas.

https://en.wikipedia.org/wiki/Class:_A_Guide_Through_the_Ame...


Paul Fussell’s Class was an interesting read


Meh, dynastic families, which are about as old money as you can get, have some of the most ostentatious displays of wealth. They sit on thrones, wear crowns, and preside over public celebrations.


On official occasions.

British aristos tend to be outdoor sorts. Range Rover and a Barbour jacket.

QEII was always in wellies whenever she was off the clock.


I think Musk definitely financed many of his ventures on his personal brand. The amount of capital he could raise because of his public persona as some kind of Tony Stark, made all the difference.

Same for Andreessen: a VC's success is built on his ability to raise capital and pick winners. His whole strategy, like Musk's, was also built on a public persona to raise capital and get people to believe in his picks.


There are also differences between fame, infamy, popularity, and elite social status, which are probably not all that clear to newly minted billionaires who are already lacking in the social skills department.


The silent ones have also gone insane though. You don't get to see it as much, but they're the people promoting bad policies and funding delusional projects behind the scenes. Every so often one of them will make a public statement or political contribution which reveals that they have also been spending way too much time online.


Elon was always problematic. His increasing social media use removed the natural filters that prevented people from seeing it.


I honestly think there's more going on here. It seems to be primarily the vain billionaires that are going off the deep end. I experimented with stimulants when I was young and I remember being shocked at how they changed my personality. I went from pretty stoic to wanting to fight people over the slightest perceived insult. I can't help but think these billionaires with their expensive implants, hair and skin treatments, blood boys, etc. are on some life-extending or performance enhancing stimulants that are affecting their state of mind.


We know that to be the case with Musk. He's admitted it. Andreessen, don't know.


I'm not defending Musk, but "problematic" used in this type of context is one of those words that says more about the speaker than it does the subject.


I think you can forgive it as a rhetorical device when speaking to a really broad audience.


IMO it's best to use fewer thought-terminating cliches in that case, not more. Unless one is simply engaging in a Reddit-style call-and-response exercise.

To me, Musk crossed from "maverick" to "problematic" around 2018, when he tried to insert himself into the Thai cave rescue operation and ended up slinging accusations of pedophilia on Twitter.

At this point, he has unlocked many more specific adjectival achievements, and those are the ones that should be invoked whenever Musk's behavior is the topic. (Which it isn't here.)


Taking issue with this use of "problematic" says a lot about the speaker too.


"Problematic" is just vague. It's not that much more writing to specify the actual problems.


It's a rhetorical device that dates back to the ancient Greeks (meiosis). It's absolutely a lot more writing to enumerate the ways in which Elon Musk is problematic.


In a sane world it would read that way. Unfortunately, we live in a world where such nondescript descriptors (“problematic”, “objectionable”, “unprofessional”, “toxic”, “extremist”, “far-$SIDE”, a few others depending on usage) have been used, and overused, to accuse or smear people without taking on much of a burden of proof or making any statements specific enough to be falsifiable.

They now provoke instinctive revulsion when used in culture-war-adjacent contexts even when, as here, their usage is entirely legitimate (you presuppose a vague but mutually understood allegation rather than nebulously introducing a fresh one). I think only “controversial” has escaped this fate, but it might be too weak for your purposes.

(To be clear, I am only trying to explain why your phrasing might cause your interlocutor to momentarily recoil even when—as in my case—they don’t actually have any problem with the contents of your statement. What you do with this explanation is up to you: I don’t believe these terms are short-term salvageable at this point, but neither will I begrudge others their choice of hopeless cause; I certainly have my own fair share of those.)


You are just moving the goalposts to criticize a position without being required to provide a reason. You could pick any phrase and mark it as beneath intelligent discourse. You are choosing “problematic” because you don’t like the implication.

Musk is easy to laugh at and to criticize. Problematic encompasses his lying, pettiness, racism, sexual weirdness, ego and fraud as well as anything. Regardless of the proportions of specific traits, as a whole, he’s a problematic individual. That’s perfectly cromulent.


He was always a nutcase. So unhinged he got booted out of PayPal. Then the shenanigans and retconning of him as a Tesla founder. His unrealistic promises he never realised over the years at Tesla. The pedo guy thing. The Hyperloop bullshit. It was not as obvious because he still had some filters, but it was already visible if you were paying attention. Problematic is a good description of what he was.


Just because you've been programmed to associate "problematic" with "liberals" and then further trained to think that people who use the word "problematic" are in fact problems, that's on you, the larger zeitgeist you don't see, and the people programming you.


I feel like taking issue with a word, even when used in a perfectly valid situation, is something worth reflection. Like fair enough if you've heard problematic used in ways you disagree with before, but maybe respond to those comments, not one where you agree with its use. Unless you actually do mean to defend Musk and don't think lying to investors, calling people pedos for saving kids, delaying public infrastructure, doing Nazi salutes, etc. etc. is problematic.


It seems to me to be saying that the person finds Elon Musk’s behavior problematic. What else are you reading into it?


I doubt that. The only thing social media removed was scruples and shame. People were ashamed to say such dumb things and now they think they have some kind of deeper knowledge.

Their thinking didn’t change.


I think they also suddenly had to deal with a bunch of people being mean to them, and telling them they were wrong, which drove them a little mad.

Sort of an oppositional defiant thing, filtered through immense wealth and power


After one becomes wealthy, social media easily becomes the only place where anyone says no to them. When everyone who surrounds you tells you "you're absolutely right, let me get that for you", you atrophy the muscle that lets you course-correct when you're making a mistake, and when someone disagrees with you it feels that much stronger.

Wealth is not the only way this can happen; you see it with people who have gained notoriety and power and have gotten used to "being right" (Dawkins comes to mind), and now this experience is being "democratised" by LLMs.


This. I remember many a time pmarca getting so upset and just blocking everyone who disagreed with him on Twitter. It was the weirdest thing.


Blocking people that annoy him on Twitter is the only humanizing thing about him. Deciding that someone has annoyed you enough on that platform that you don't care to ever hear from them ever again is the only thing that made that platform usable when you have any minimal audience.

"I've known you for all of 10 seconds and enjoyed not a single one of them" followed by blocking is good, actually. That doesn't make you any more correct or wrong, of course.


They can finally say "retard" openly. They have been openly gloating about this! So yes, I agree: previously they felt constrained. They no longer do.


"Retarded" is an euphemism. It was preceded by "mentally disabled", "feeble-minded", and "idiot". Renaming doesn't help.

Agnews, in Santa Clara CA, was successively called The Great Asylum for The Insane, Agnews Insane Asylum, Agnews Mental Hospital, Agnews Developmental Center, and Oracle's Santa Clara campus.


I often wonder if tech billionaire psychosis might lead to a "Great Filter" event for our species. They have entirely unchecked power, lack of empathy, and gleeful ignorance of everything our species has done that their success rests upon.


They haven't even read the sci-fi that positions AI as an obvious resource trap. Sure, let's devote all our resources to birthing an AI. Do we think that if it's smarter than us we can contain it? Do we think it will help us by default? Have we not thought through the basics of what we're attempting?


Where do you envision the pop will come from?


Two possible sources:

1. People who are currently buying AI services realizing it's not all that useful to them and discontinuing their subscriptions. Note that this can come from a changing ecosystem as much as anything to do with the products themselves. I know a couple people running AI propaganda operations where a single person can now do what previously took a major media conglomerate; this is great for them, but if I personally know a couple folks doing this, it indicates that there are probably hundreds of thousands worldwide, and people are simply going to stop trusting anything they read on the Internet.

2. Rising interest rates from the Iran war. Suddenly the cash flows needed to finance all this datacenter and AI model expansion are much higher, and combined with #1, may not be viable.


1. Most AI datacenter plans and valuations are not tied to subscriptions, but to a more vague promise of "AGI," so this isn't likely to pop the bubble IMO (even if it does happen)

2. Historical precedent holds that governments are more likely to suppress rates to spur the economy during wartime.


> Where do you envision the pop will come from?

A sudden end to overinvestment in hardware procurement by the big players. It's unclear, for example, whether Google will sustain $50B/year investments.


How do you do that with classic code?


Exactly.... -> Unit tests. Integration tests. UI tests. This is how code should be verified no matter the author. Just today I told my team we should not be reading every line of LLM code. Understand the pattern. Read the interesting / complex parts. Read the tests.
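As a concrete (and entirely hypothetical) example of what "read the tests" buys you: the tests encode the contract a reviewer actually cares about, regardless of who or what wrote the implementation. Names here are made up for illustration:

    from decimal import Decimal

    import pytest

    def parse_price(text: str) -> Decimal:
        """Toy implementation standing in for generated code under review."""
        cleaned = text.strip().lstrip("$").replace(",", "")
        value = Decimal(cleaned)
        if value < 0:
            raise ValueError("price cannot be negative")
        return value

    def test_parses_common_formats():
        assert parse_price("$1,234.50") == Decimal("1234.50")
        assert parse_price("  99 ") == Decimal("99")

    def test_rejects_negative_and_garbage():
        with pytest.raises(ValueError):
            parse_price("-5")
        with pytest.raises(Exception):
            parse_price("not a price")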


But unit and integration tests generally only catch the things you can think of. That leaves a lot of unexplored space in which things can go wrong.

Separately, but related - if you offload writing of the tests and writing of the code, how does anybody know what they have other than green tests and coverage numbers?


I have been seeing this problem building over the last year. LLM generated logic being tested by massive LLM generated tests.

Everyone just goes overboard with the tests since you can easily just tell the LLM to expand on the suite. So you end up with a massive test suite that looks very thorough and is less likely to be scrutinized.


if you are asking me how you *guarantee* there is not a single possible exploit in your code, you can't do that. But you can do your best and learn about common pitfalls and be reasonably competent. Just because you can't do the former doesn't mean the latter is useless.

