I don't follow your logic. 99% of people only interact with cryptocurrency because of a scam, but you think we should be discussing the technology instead? Why? The technology itself seems far less important than the widespread fraud it enables... And even then, people were discussing the effect of the technology on the environment, and on energy and component prices... They were also discussing the actual technology underneath cryptocurrencies - the blockchain, and how it has yet to find its killer app, even today...
Interesting? I'd say they were interesting if you find looking at vibe-coded stuff interesting. If you're instead into learning from projects based on the author's unique insight, experience and research, they're utterly boring...
I find that I just don't learn anything new from Show HN vibe-coded side projects, and I can often replicate them for a couple hundred dollars, so why bother looking at them? And why bother sharing one in the first place, since it doesn't really show any personal prowess and doesn't bring value to the community, given how easy it is to replicate?
> Interesting? I'd say they were interesting if you find looking at vibe-coded stuff interesting.
There are a lot of ways things can be of interest. The problem being solved, how it's being solved, the UI, the UX, etc.
THAT it is vibe coded may or may not be interesting to some, but finding it uninteresting because it's vibe coded is no better than finding it interesting for that reason alone.
This assumes that pre-LLM projects were based on the author's unique insight, experience and research, and not just boilerplated framework code, copying the design trends of the week.
I'd challenge the lack of personal prowess argument. Piecing together technology in novel ways to solve highly targeted problems is a skill, even if you're not hand-crafting CSS and SQL.
I liken it to those who tune cars, who buy cars made in a factory, install parts made by someone else, using tools that are all standardized. In the middle somewhere is the human making decisions to create a final result, which is where the talent exists.
I agree that some (many) pre-LLM Show HN projects were worthless as well. But at least they were fewer, which meant that interesting projects were harder to miss.
> Piecing together technology in novel ways to solve highly targeted problems is a skill
The LLM outputs this out of the box? Where's the skill?
I don't believe the comparison to car tuners benefits your thesis here. The spectrum of people I know who tune their cars runs from utter idiots to professional engineers. You cannot state as a fact that anyone who does it has insight or even natural talent. The bar is so low that anyone with enough money can do it (just like coding with LLMs). In fact, one could say that most of them are incompetent, and by tuning their cars to varying degrees they endanger themselves and others, increase their running/maintenance costs, lower their car's resale value, and harm the environment.
Yes, I find looking at vibe-coded stuff interesting when it solves a worthy problem.
No amount of denial will roll back the technology that millions can use now, that makes it realistic to produce in a day software that would take at least months five years ago.
No, but everyone else forgot this is a possibility and are increasingly making the mechanisms of social and civil life dependent on possession of a modern smartphone.
Why waste space for gaskets and o-rings when you can already get the battery changed out while you wait with glue? Glue is clearly the superior method, which is why almost the entire market has adopted it.
Heat pads exist even in the most basic repair shops. It's not advanced technology, no need to over-engineer it.
> but at the point where there isn't enough economically useful things for everyone to do
This assumes that, for example, a person who has been an artist for 20 years can easily switch professions to become a machinist, and that the only reason not to do it is that the economy has no need for another machinist. An insane way to think. This is not how humans work.
Let me see any HN dweller go from their cushy home office to butchering animals for meat on 12-hour shifts for example... Oh and btw, no safety net to give you food, housing and healthcare while you learn the new craft!
> Let me see any HN dweller go from their cushy home office to butchering animals for meat on 12-hour shifts for example
I think that's the reality for lots of people when they face any redundancy situation: people take up jobs they wouldn't traditionally want to do in order to survive or look after their family. I don't see why people on HN would be any different.
To the author: did you continue using btrfs after this ordeal? An FS that will refrain from eating (all) your data on a hard power cycle only at the cost of 14 custom C tools is a hard pass from me, no matter how many distros try to push it down my throat as 'production-ready'...
What are the alternatives to btrfs? At 12 TB, data checksums are a must unless the data can tolerate bit rot. And if one wants to stick with the official kernel without out-of-tree modules, btrfs is the only choice.
I tried btrfs on three different occasions. Three times it managed to corrupt itself. I'll admit I was too enthusiastic the first time, trying it less than a year after it appeared in major distros. But the latter two are unforgivable (I had to reinstall my mom's laptop).
I've been using ZFS for my NAS-like thing since then. It's been rock solid (*).
(*): I know about the block cloning bug, and the encryption bug. Luckily I avoided those (I don't tend to enable new features like block cloning, and I didn't have an encrypted dataset at the time). Still, all in all it's been really good in comparison to btrfs.
I've been using btrfs as the primary FS for my laptop for nearly twenty years, and for my desktop and multipurpose box for as long as they've existed (~eight and ~three years, respectively). I haven't had troubles with the laptop FS in like fifteen years, and have never had troubles with the desktop or multipurpose box.
I also used btrfs as the production FS for the volume management in our CI at $DAYJOB, as it was way faster than overlayfs. No problems there, either.
Could try ZFS or CephFS... even if several host roles are in VM containers (45Drives has a product set up that way).
The btrfs solution has a mixed history, and hit a lot of the same issues DRBD could. They are great until some piece of hardware or kernel module eventually goes sideways, and then the auto-healing cluster filesystems start to make a lot more sense. Note that with cluster-based complete-file copy/repair object features, the damage is localized to single files at worst, and folks don't have to wait 3 days to bring the cluster up after a crash.
Isn't bcachefs even younger and less polished than btrfs? It does show more promise as btrfs seems to have fundamental design issues... but still I wouldn't use that for my important data.
I don't disagree. Gotta have backups for the important data either way, too!
I'm just talking about filesystems with checksumming (and multi-device support). Any filesystem supporting these features is going to be newer.
I've had both btrfs and bcachefs multi-device filesystems lock up read-only on me. So no real data loss, just a pain to get the data onto a new filesystem - especially the time it was an 8-drive array on btrfs.
I think you could use dm-integrity over the raw disks to get checksums and protect against bit rot; then you can use mdraid to make a RAID1/5/6 out of the virtual block devices presented by dm-integrity.
I suspect this is still vulnerable to the write hole problem.
You can add LVM to get snapshots, but this is still not the end-to-end copy-on-write solution that btrfs and ZFS provide.
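For concreteness, a minimal sketch of that stacking. The device names (/dev/sdb, /dev/sdc), array level, and volume sizes are hypothetical, all of these commands need root, and the first two are destructive to whatever is on the disks:

```shell
# 1. Format each raw disk with a standalone dm-integrity superblock.
#    DESTRUCTIVE: wipes the devices.
integritysetup format /dev/sdb
integritysetup format /dev/sdc

# 2. Open them; each shows up as a checksumming virtual block device.
integritysetup open /dev/sdb int-sdb
integritysetup open /dev/sdc int-sdc

# 3. Build an md RAID1 on top. A checksum mismatch surfaces to mdraid
#    as a read error (not silent corruption), so md can repair the
#    block from the healthy mirror.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/mapper/int-sdb /dev/mapper/int-sdc

# 4. Optionally layer LVM on md0 for snapshots, then any filesystem.
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -n data -L 100G vg0
mkfs.ext4 /dev/vg0/data
```

Note the default here is journal mode; passing `--integrity-bitmap-mode` to `integritysetup format` trades the double write for the weaker crash semantics discussed below.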
LVM only supports checksums for metadata; it does not checksum the data itself. For checksums with arbitrary filesystems one can use a dm-integrity device rather than LVM. But performance suffers due to the separate journal writes the device has to perform.
But that is just RAID on top of dm-integrity. And the Red Hat docs omit an important caveat when they suggest using bitmap mode with dm-integrity:
man 8 integritysetup:
--integrity-bitmap-mode, -B
Use alternate bitmap mode (available since Linux kernel 5.2) where dm-integrity uses bitmap instead of a journal. If a bit in the bitmap is 1, then corresponding region’s data and integrity tags are not synchronized - if the machine crashes, the unsynchronized regions will be recalculated. The bitmap mode is faster than the journal mode, because we don’t have to write the data twice, but it is also less reliable, because if data corruption happens when the machine crashes, it may not be detected.
I just do not see how without a direct filesystem support one can have both reliable checksums and performance.
Good thing all disks these days have data checksums, then!
(50TB+ on ext4 and xfs, and no, no bit rot. Yes, I've checked most of it against separate sha256sum files now and then. As long as you have ECC RAM, disks just magically corrupting your data is largely a myth.)
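That kind of manual scrub is easy to automate with coreutils alone; a sketch of the workflow (paths are hypothetical, `/tmp/scrub-demo` stands in for the real data directory):

```shell
# Stand-in data directory; in practice this would be the real array mount.
mkdir -p /tmp/scrub-demo && cd /tmp/scrub-demo
printf 'hello\n' > a.txt
printf 'world\n' > b.txt

# One-time: record a checksum manifest, one line per file.
find . -type f ! -name manifest.sha256 -exec sha256sum {} + > manifest.sha256

# Periodically (e.g. from cron): re-verify everything.
# A non-zero exit / FAILED line means rot or modification.
sha256sum -c --quiet manifest.sha256 && echo "all files intact"
```

The `--quiet` flag suppresses the per-file OK lines, so a clean run prints nothing but the final message, which keeps cron mail useful.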
Less mythic on SSDs than spinning rust, in my experience.
Not particularly frequent either way, but I have absolutely had models of SSDs where it became clear after a few months of use that a significant fraction of them appeared to be corrupting their internal state and serving incorrect data back to the host, leading to errors and panics.
(_usually_ this was accompanied by read or write errors. But _usually_ is notable when you've spent some time trying to figure out if the times it didn't were a different problem or the same problem but silent.)
There was also the notorious case of certain Samsung spinning rust dropping data from its write cache if you issued SMART requests...
It has the benefit that I now spend less time reading random articles: you read one sentence, feel it's AI, and just skim the rest.
Many Instagram videos now have a creator-added LLM description of what's happening in the video, with some bullshit ending like "This video shows why it's important to always remain vigilant when driving"... fuxxckkk off with the faux philosophy!