If you have many cores and the right optimizations in place, the bottleneck for lz4 decompression is RAM throughput, which is always going to beat whatever fancy disk setup you have.
But yes, on the extreme end there is absolutely a point where lz4 stops making sense; then again, most of us aren't trying to max out a 128-core Postgres server or whatever.
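For a ballpark of the per-core numbers, the lz4 CLI has a built-in benchmark mode that runs entirely in memory, so disk speed never enters the measurement. A rough sketch (assumes the `lz4` binary is installed; it skips otherwise, and the test file is made up):

```shell
if command -v lz4 >/dev/null; then
    f=$(mktemp)
    # base64-encoded random bytes give moderately compressible test data:
    head -c 8000000 /dev/urandom | base64 > "$f"
    # -b1 benchmarks compression level 1 in RAM; it reports both the
    # compression and decompression speed in MB/s:
    lz4 -b1 "$f"
    rm -f "$f"
    ran=yes
else
    ran=skipped   # lz4 CLI not installed on this machine
fi
```

Multiply the single-core decompression figure by your core count to estimate how fast storage would have to be before the disk, not lz4, becomes the bottleneck.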
Yep, apparently Ukraine still cannot affect fuel production in Russia to any significant degree. Drones with less than 100 kg of explosives do not do particularly significant damage. One really needs to deliver a ton or more of explosives, and for that one needs bombers that can penetrate air defenses, very expensive stealth cruise missiles, or big ballistic missiles.
Of course it has had a significant impact. The reason Russia has repeatedly shut off fuel exports every couple of months for the past couple of years, despite high global prices, is that Ukraine keeps disabling enough of their refining capacity to cause shortages.
Ukraine dramatically reduced Russian fuel export revenue, and the sanctions did so even more.
It was really coming to the point of urgent existential threat to the Putin regime this spring, before Trump and Netanyahu bailed him out, first by doubling the global oil price and then by relaxing sanctions.
And Ukraine's drone / cruise missile portfolio includes things like the Flamingo, more than twice the payload and range of a Tomahawk.
If Ukraine had access to Tomahawks, the Russian oil industry would not exist at this point. With drones, after two and a half years of attacks with multiple hits on the same refineries, Ukraine has reduced Russian fuel production by at best 20%.
Flamingo is still mostly vaporware. For precise strikes against Russian factories Ukraine uses either Storm Shadow or domestic Neptun.
But that just shows again that drones are not particularly effective against most industrial targets and even against oil installations the damage is not lasting.
Or consider how the US was able to destroy the bridge in Iran, yet the Crimea bridge and the bridges in Rostov that are absolutely vital to Russian war logistics still stand.
Why do you think this bridge is vital when there is a land bridge (through Kherson) with multiple rail links, all in Russian-controlled territory, connecting the entry and exit points of the bridge?
That bridge is A) incredibly expensive and something a postwar Ukraine would prefer to exist for economic reasons, B) extremely overbuilt in certain ways, and C) not strictly required if Russia can keep rail going on the landbridge.
It might be in play if the land bridge fell.
It would be almost trivial in terms of range to make it a target of any number of strike munitions. If you can hit the Baltic ports or factories in the Urals...
As for drones vs cruise missiles: at this point every missile strike comes with drone accompaniment; it's part of the counter-SHORAD proposition.
Ukraine was not able to interrupt production of gasoline and diesel in Russia in a significant way after two years of targeting oil refineries. Attacks on pipelines and their pumping stations were not effective either, as Russia was able to repair the damage within days to weeks. And all Russian oil terminals on the Baltic and Black seas are operational again, albeit at reduced capacity, after the big Ukrainian attacks a few weeks ago. Apparently the 50-100 kg warheads that Ukrainian drones deliver are not that effective at damaging oil infrastructure.
This may change if Ukraine can sustain what they have been doing over the last couple of months, but so far Russia has benefited extremely well from the US war against Iran.
My comment still seems relevant? Do frequent commits to correct mistakes imply more "value" than infrequent, but well tested, commits, or what? I don't think it is a reliable signal.
It isn't. A $COMPANY I've worked for used commit counts as a metric, and you can bet all the money in your pockets those counts skyrocketed, with no change in actual output, after they did.
LLMs make it even easier: "Commit all the outstanding code in as many commits as you can, as long as the tests pass after each one." (Sometimes that second clause is omitted, too.)
I agree with you. Also, there are people (like me) who prefer small commits (that don't break stuff) over huge mega-commits. If I do make small broken/WIP commits, they only live on my working branch, and I do an interactive rebase to squash them into good, cohesive commits.
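That squash-before-merge workflow can be sketched non-interactively in a throwaway repo. This is a stand-in for marking commits as "squash"/"fixup" in `git rebase -i`; the file names and messages here are made up for the demo:

```shell
# Build a throwaway repo with three WIP commits:
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com   # throwaway identity for the demo
git config user.name  dev
echo step1 >  feature.txt; git add feature.txt; git commit -qm "wip: scaffold"
echo step2 >> feature.txt; git commit -qam "wip: half-working"
echo step3 >> feature.txt; git commit -qam "wip: tests pass"

# Fold the last two commits into the first: move the branch tip back,
# keeping all changes staged, then rewrite the remaining commit with a
# proper message. Equivalent to squashing in `git rebase -i HEAD~3`:
git reset -q --soft HEAD~2
git commit -q --amend -m "feat: add feature (one cohesive commit)"

count=$(git log --oneline | wc -l)
echo "$count commit(s) on branch"
```

The end state is a single commit containing all three steps, which is what lands on the shared branch.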
What are the alternatives to btrfs? At 12 TB, data checksums are a must unless the data can tolerate bit rot. And if one wants to stick with the official kernel without out-of-tree modules, btrfs is the only choice.
I tried btrfs on three different occasions. Three times it managed to corrupt itself. I'll admit I was too enthusiastic the first time, trying it less than a year after it appeared in major distros. But the latter two are unforgivable (I had to reinstall my mom's laptop).
I've been using ZFS for my NAS-like thing since then. It's been rock solid (*).
(*): I know about the block cloning bug, and the encryption bug. Luckily I avoided those (I don't tend to enable new features like block cloning, and I didn't have an encrypted dataset at the time). Still, all in all it's been really good in comparison to btrfs.
I've been using btrfs as the primary FS for my laptop for nearly twenty years, and for my desktop and multipurpose box for as long as they've existed (~eight and ~three years, respectively). I haven't had troubles with the laptop FS in like fifteen years, and have never had troubles with the desktop or multipurpose box.
I also used btrfs as the production FS for the volume management in our CI at $DAYJOB, as it was way faster than overlayfs. No problems there, either.
Could try ZFS or CephFS... even if several host roles are in VM containers (45Drives has a product set up that way.)
The btrfs solution has a mixed history, and had a lot of the same issues DRBD could hit. They are great until some hardware or kernel module eventually goes sideways, and then the auto-healing cluster filesystems start to make a lot more sense. Note, with cluster-based complete-file copy/repair object features, the damage is localized to single files at worst, and folks don't have to wait 3 days to bring up the cluster after a crash.
Isn't bcachefs even younger and less polished than btrfs? It does show more promise as btrfs seems to have fundamental design issues... but still I wouldn't use that for my important data.
I don't disagree. Gotta have backups for important data either way, too!
I'm just talking about filesystems with checksumming (and multi-device support). Any new filesystem supporting these features is going to be newer still.
I've had both btrfs and bcachefs multi-device filesystems lock up read-only on me. So no real data loss, just a pain to get the data onto a new filesystem; one of those times it was an 8-drive array on btrfs.
I think you could use dm-integrity over the raw disks to get checksums and protect against bit rot; then you can use mdraid to make a RAID1/5/6 of the virtual block devices presented by dm-integrity.
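A sketch of that stack, assuming two disks named /dev/sdb and /dev/sdc (hypothetical device names; adjust to your hardware). This destroys data on those devices and requires root, so it's shown as a recipe rather than something to paste blindly:

```shell
# 1. Add a standalone dm-integrity layer (CRC32C checksums) to each disk.
#    /dev/sdb and /dev/sdc are placeholders for your real disks:
integritysetup format /dev/sdb --integrity crc32c
integritysetup open   /dev/sdb int-sdb --integrity crc32c
integritysetup format /dev/sdc --integrity crc32c
integritysetup open   /dev/sdc int-sdc --integrity crc32c

# 2. Build a RAID1 out of the integrity-protected virtual devices.
#    On a checksum mismatch, dm-integrity returns a read error, which
#    mdraid then repairs from the healthy mirror:
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/mapper/int-sdb /dev/mapper/int-sdc

# 3. Put any ordinary filesystem on top:
mkfs.ext4 /dev/md0
```

The key design point is the layering order: checksums must sit below the RAID so that mdraid sees bad sectors as read errors it can heal, rather than silently mirroring corrupt data.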
I suspect this is still vulnerable to the write hole problem.
You can add LVM to get snapshots, but this is still not the end-to-end copy-on-write solution that btrfs and ZFS provide.
LVM only supports checksums for metadata; it does not checksum the data itself. For checksums with arbitrary filesystems one can use a dm-integrity device rather than LVM. But performance suffers due to the device's separate journal writes.
But that is just RAID on top of dm-integrity. And the Red Hat docs omit an important part when suggesting the bitmap mode with dm-integrity:
man 8 integritysetup:
--integrity-bitmap-mode, -B
Use alternate bitmap mode (available since Linux kernel 5.2) where dm-integrity uses bitmap instead of a journal. If a bit in the bitmap is 1, then corresponding region’s data and integrity tags are not synchronized - if the machine crashes, the unsynchronized regions will be recalculated. The bitmap mode is faster than the journal mode, because we don’t have to write the data twice, but it is also less reliable, because if data corruption happens when the machine crashes, it may not be detected.
I just do not see how, without direct filesystem support, one can have both reliable checksums and performance.
Good thing all disks these days have data checksums, then!
(50TB+ on ext4 and xfs, and no, no bit rot. Yes, I've checked most of it against separate sha256sum files now and then. As long as you have ECC RAM, disks just magically corrupting your data is largely a myth.)
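The periodic verification described above can be scripted with a checksum manifest. A minimal sketch in a throwaway directory (the file name is made up); the same pattern scales to a real data volume with `find ... -exec sha256sum`:

```shell
# Throwaway directory standing in for the data volume:
d=$(mktemp -d)
printf 'important data' > "$d/file1"

# Record a checksum manifest once:
(cd "$d" && sha256sum file1 > SHA256SUMS)

# Later, re-verify; exit status 0 means every listed file still matches:
if (cd "$d" && sha256sum -c --quiet SHA256SUMS); then before=ok; else before=corrupt; fi

# Simulate silent bit rot by flipping one byte, then check again:
printf 'imporTant data' > "$d/file1"
if (cd "$d" && sha256sum -c --quiet SHA256SUMS) >/dev/null 2>&1; then after=ok; else after=corrupt; fi

echo "before=$before after=$after"
```

With `--quiet`, a clean run prints nothing, so a cron job only produces output (and mail) when something has actually rotted.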
Less mythic on SSDs than spinning rust, in my experience.
Not particularly frequent either way, but I have absolutely had models of SSDs where it became clear after a few months of use that a significant fraction of them appeared to be corrupting their internal state and serving incorrect data back to the host, leading to errors and panics.
(_usually_ this was accompanied by read or write errors. But _usually_ is notable when you've spent some time trying to figure out if the times it didn't were a different problem or the same problem but silent.)
There was also the notorious case with certain Samsung spinning rust and dropping data in their write cache if you issued SMART requests...
Physical SIM cards are just as secure as the secure enclave on the phone. In Norway a few years ago, banks even used that for secure authentication that worked on dumb phones, with local mobile network providers pre-installing the required software on their SIM cards.
But then, to save costs, including support costs, banks stopped that and instead started to require a non-rooted Android/iPhone.
Or optimize the OS, because I still find 8 GB insane for everyday tasks. OK, gaming I can understand, but most common tasks should be runnable with at most 2 GB of memory, and that is mostly for browsers.
Nuclear is not that steady. Nuclear plants require a lot of water for cooling, and when a particularly hot summer happens, rivers dry out and nuclear reactors have to scale down power production or even be shut down. And then they require quite significant periodic maintenance.
Granted, in Europe a hot dry summer is when solar is at its peak, so it is a much smaller problem than a cold winter with a lot of cloudy days and no wind, when nuclear energy is ideal.
Still, from the perspective of 20 years ago, with unknown prospects for renewables, natural gas power stations were considered a much more reliable and flexible power source than nuclear, and way cleaner than coal. Of course, as long as one gets the gas.
So en.wikipedia.org/wiki/// is the article about C++ style comments