Hacker News | arid_eden's comments

SeaMonkey (which I'm using to write this) is one option. Built for all platforms, with none of Mozilla's tracking/ad/Pocket-partnership nonsense. Unfortunately the team is small and Gecko has been changing a lot lately.

Long term, the Gemini protocol and browsers like Lagrange might be the only option - it's just too hard to build a modern, standards-supporting web browser these days (even Microsoft has thrown in the towel and adopted Chromium for Edge).


They removed Web Start - which is fundamental to how the apps we use are distributed. I believe that reason alone is why the distributor has stuck to Java 8/OpenWebStart.


Note that Wine runs Windows binaries, whereas what this project aims for is source compatibility. That at least means you aren't at the mercy of ABI versioning, though you wouldn't be able to 'run' a macOS app you found online.


I find the "creators' livelihoods" lines to be attention-grabbing, and they muddy the waters about what this argument should really be about.

The Netflix exemption should be available to all apps - that point is what the Epic lawsuit should be about (they've confused the issue too by talking about having their own app store). Apple could then fight for developers to use Apple Pay on its merits in the marketplace.

In the case of Fanhouse, the rules were well known (and disliked) when they started. They should have created their product as a web app - their particular use case is one where the technical limitations of a web app wouldn't have been much of an issue.

If they felt so strongly that their product had to be in the Apple App store to succeed, they are justifying Apple's 30% cut.


The context for that is in the previous sentence:

> [...] you get an out-of-the-box, slick and capable computer that an idiot could set up and get running in minutes.

He's praising the simplicity of the setup. I smiled. That language is normal discourse in the UK.


> * It drastically improves the upgrade process -- I never need to look at a 3-way diff of /etc/init.d/apache2 again

lol. I had to trash an Arch Linux box because it failed to upgrade systemd from 208 to 211. Every time I upgraded, the system hung trying to mount filesystems, so I had to roll back.


You’re complaining about something different. If there’s a bug in systemd itself that makes your system unbootable, there's not much you can do except not upgrade. What the parent is talking about is the fact that services are organized into base service files and overrides. Base files live in /usr and are modified only by the package that owns them. Override files or full replacements live in /etc, which is for the local administrator. Service updates are now always a safe operation.
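A minimal sketch of that split (the unit name and the override contents are just illustrative):

```shell
# Package-owned base unit, replaced wholesale on upgrade - never edit:
#   /usr/lib/systemd/system/apache2.service

# The administrator's override lives in /etc and survives upgrades.
sudo mkdir -p /etc/systemd/system/apache2.service.d
sudo tee /etc/systemd/system/apache2.service.d/override.conf <<'EOF'
[Service]
Restart=always
EOF
sudo systemctl daemon-reload
```

`systemctl edit apache2` creates the same drop-in file for you, which is why there's never a 3-way diff to resolve.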


Perhaps Arch broke something? How is it a failing of systemd without further verification?


The post was on 'the good parts' of systemd, so complaining about balance seems unfair. I will call out one aspect that the author deemed a 'good part' which in my experience is a bad part: journald.

Usability: journald requires administrators to learn a new command to access their journals - every time I have to use it I need to look up the syntax again. Compare with logs in /var/log, where you can use all the standard Unix tooling (text editors, tail, grep, etc.).
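The two workflows side by side (service and file names are just examples):

```shell
# journald: everything goes through journalctl's own flags.
journalctl -u nginx.service --since "1 hour ago"
journalctl -u nginx.service -f          # follow, like tail -f

# Classic /var/log: plain files, plain Unix tools.
tail -f /var/log/syslog
grep -i nginx /var/log/syslog | less
```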

Reliability: in my use journald has made my servers less reliable. Two CentOS 7 VMs, set up at the same time. On one, journald works just fine; on the other, approximately once a month journald would stop logging, and wouldn't start logging again until you noticed and rebooted the server. The real issue this exposes is that on a systemd machine, journald doesn't just log for itself, it also supplies the logs to syslog. So on this server, when journald broke, there were no logs whatsoever.

This issue was apparently common on the version of systemd shipped with CentOS 7. The fix was to disable log compression in journald. What it highlighted to me were the inherent issues in an all-encompassing system controller like systemd: if there is a bug somewhere in there, you lose not only the added bells and whistles it's intended to provide (journald in this case) but also the old, previously reliable functionality (syslog).
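For reference, the workaround amounts to one setting in journald's configuration (assuming the stock file location):

```shell
# Disable journal compression in /etc/systemd/journald.conf:
#
#   [Journal]
#   Compress=no
#
# then restart journald to pick it up:
sudo systemctl restart systemd-journald
```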

In my mind the fix for this is a redesign of systemd to make it an optional layer on top of the reliable functionality rather than a low level system component that everything else needs to depend on. In the case of logs journald should consume logs from syslog, not provide them to syslog.


Aren’t you assuming way too much about the architecture of systemd with only surface-level knowledge?


My main issue with ZFS is the integrated nature - like systemd for filesystems. My 'alternative' for ZFS isn't BTRFS (awful performance characteristics for my workloads) but LVM coupled with ext4 and mdraid. I get snapshots, reliability, performance and a 'real UNIX' composable toolchain. I miss out on data checksums.
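A sketch of that stack, with device and volume names as placeholders:

```shell
# mdraid mirror at the bottom, LVM in the middle, ext4 on top.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
sudo pvcreate /dev/md0
sudo vgcreate vg0 /dev/md0
sudo lvcreate -n data -L 100G vg0
sudo mkfs.ext4 /dev/vg0/data

# Snapshots come from LVM rather than the filesystem
# (leave free space in the VG for the copy-on-write data):
sudo lvcreate -s -n data-snap -L 10G /dev/vg0/data
```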


In principle I dislike the coupling of volume manager, raid and filesystem.

But I still think ZFS gets most things right; I see the argument for a consistent system managing caching/logs, volumes, data integrity, discard support, compression, snapshots and encryption.

The fact that it's the first serious, open, cross-platform solution (Linux, BSD, Mac, Windows NT) that provides encryption, integrity and a filesystem is a nice bonus.

And the integration of snapshots and fs dumps via zfs send/receive is beautiful.
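A sketch of that workflow (pool, dataset, and host names are examples):

```shell
# Snapshot, then replicate the whole dataset to another machine:
zfs snapshot tank/data@nightly
zfs send tank/data@nightly | ssh backuphost zfs receive backup/data

# Subsequent snapshots only ship the delta:
zfs snapshot tank/data@nightly2
zfs send -i @nightly tank/data@nightly2 | ssh backuphost zfs receive backup/data
```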

I think ZFS makes sense as one fat layer - networking can go below (DRBD, iSCSI) or on top (iSCSI, NFS, CIFS).

Encryption needs to be somewhat holistic - for making sane performance and data-leakage tradeoffs.


Having run all these things in prod (except BTRFS, which ate a mirror on my desktop), I’ll say that even the LVM + mdraid + ext4 stack is much more hacky than GEOM on FreeBSD, which feels much more ‘Unix’ with a designed, composable interface.

Although, I do prefer the durability of XFS or ext4 (depending on workload) vs UFS, and the setup you described is totally maintainable.


No compression, either.



His comment was more on the wisdom (or otherwise) of running an out-of-tree filesystem. I think it's hard to disagree with him. He went on to say you would never be able to merge the ZFS tree with Linux. Again, he's the one who would know what code gets into Linux. His only actual comment against ZFS was that benchmarks didn't look great - which is unsurprising given all the extra work ZFS does for data integrity compared with other filesystems in production use.

https://www.realworldtech.com/forum/?threadid=189711&curpost...


From the link:

>[ZFS] was always more of a buzzword than anything else, I feel,

This is deeply ignorant. I feel that Linux has been handicapped by the fact that many developers have never done any serious enterprise administration, and thus lack a clear understanding of the needs of that set of their users.


Not everything needs to be in the Linux kernel (and honestly, I don't care if it is). Looking at the past "Linux sound system tragedy", I would say that building something outside Linux is often much better (no thousands of different people who think it's better the other way around and that you're full of sh* anyway).

>His only actual comment against ZFS was that benchmarks didn't look great

What benchmark? Mine are looking pretty good, with a pre-warmed ARC and especially with L2ARC... actually much better than any hardware RAID. Compared to Linus, there are institutes with a bit more than a single 3 GB git repository, and the crazy stuff... they need verified backups.

https://computing.llnl.gov/projects/zfs-lustre

If he ever has to run a 55-petabyte Lustre filesystem on ZFS, he can come back; otherwise, Linus... shut up* and be happy with your ext4 (nothing against that one).

* A homage to the old linus-style of having a discussion.


This is why FreeBSD rebasing its ZFS fork on ZFS-on-Linux made me so scared for the future of FreeBSD. Their one major advantage over Linux and they didn't have the developers to maintain their fork themselves.


ZFS will always be a smoother experience on FreeBSD as opposed to Linux because FreeBSD endorses it. Thus the userland and documentation are written assuming you’re running ZFS. On Linux, by contrast, some distros might ship pre-compiled binaries but everything is written assuming you’re not running ZFS, so everything takes that extra couple of steps to set up, fix, and maintain.

For example, if you want to use ZFS as a storage for containers on Linux, you have to spend hours hunting around for some poorly maintained 3rd party shell scripts or build some tooling yourself. Whereas on FreeBSD all the tooling around Jails is built with ZFS in mind.
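To be fair, Docker does ship a zfs storage driver, but even that takes manual wiring; a sketch, assuming an existing pool named tank:

```shell
# Dedicate a dataset to Docker and point its zfs driver at it.
sudo zfs create -o mountpoint=/var/lib/docker tank/docker
echo '{ "storage-driver": "zfs" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
```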

This is why platforms like FreeBSD feel more harmonious than Linux. Not because Linux can’t do the job but because there are so many different contributors with their own unique preferences that Linux is essentially loose Lego pieces with no instructions. Whereas FreeBSD has the same org who manage the kernel, user land and who also push ZFS.

And I say this as someone who loves Linux. There’s room for both Linux and FreeBSD in this world :)


I think "smoother experience on FreeBSD" is a myth -

The standard volume manager on FreeBSD is vinum/GEOM; ZFS ships its own entirely separate volume manager onto the host OS, so you can't use mount/umount to control mounting a ZFS volume. Maybe it would be okay to move entirely over to ZFS's volume manager, but it only supports ZFS's own filesystem; you can't use the ZFS volume manager with a normal FreeBSD UFS2 partition.
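For illustration, ZFS datasets are normally mounted through ZFS's own tooling rather than mount(8) (pool/dataset names are examples):

```shell
zfs set mountpoint=/srv/data tank/data
zfs mount tank/data
zfs unmount tank/data

# A dataset can opt back into mount(8)/fstab with mountpoint=legacy,
# but that's the exception rather than the rule.
```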

In both Linux and FreeBSD, ZFS's bolt-on ARC competes with the kernel's actual page cache for resources instead of properly integrating with it.

It's an out-of-tree filesystem for both OSes. Sure FreeBSD periodically imports it into master from OpenZFS (née ZoL), but all development happens elsewhere, and the SPL is still trying to emulate a Solaris interface on top of both OSes.

Is there any concrete example of how ZFS is actually better integrated on FreeBSD compared to Linux, say Ubuntu? Ubuntu takes ZFS snapshots automatically during apt upgrades, root-on-ZFS is a default installer option, etc.


Coincidentally there was a discussion about this yesterday. I agree with a lot of what was posted in it so might be easier to share that: https://news.ycombinator.com/item?id=27059551

This branch in particular addresses your points: https://news.ycombinator.com/item?id=27062069

Has the latest version of Ubuntu finally made mirrored ZFS root pools painless? Because that was anything but a native out of the box experience (compared to setting up the same on FreeBSD) and that has bit me several times.

I've used ZFS on both FreeBSD and Linux for years, and while Ubuntu is closing the gap, ZFS has been the default recommended file system on FreeBSD for close on 10 years already. So it's bound to feel more like a native experience on FreeBSD.

> In a review in DistroWatch, Jesse Smith detailed a number of problems found in testing this release, including boot issues, the decision to have Ubuntu Software only offer Snaps, which are few in number, slow, use a lot of memory and do not integrate well. He also criticized the ZFS file system for not working right and the lack of Flatpak support. He concluded, "these issues, along with the slow boot times and spotty wireless network access, gave me a very poor impression of Ubuntu 20.04. This was especially disappointing since just six months ago I had a positive experience with Xubuntu 19.10, which was also running on ZFS. My experience this week was frustrating - slow, buggy, and multiple components felt incomplete. This is, in my subjective opinion, a poor showing and a surprisingly unpolished one considering Canonical plans to support this release for the next five years."

This was Ubuntu's latest LTS release, which is less than a year old. Granted, not all of the criticisms levelled against it are ZFS-related, and granted, that's just another person's anecdotal report, but it mirrors the same experience everyone else, aside from yourself it seems, reports when switching between Linux and FreeBSD for ZFS storage.

I don't post this as a hater, though. I, like others, still run Ubuntu Server + ZFS for some systems (particularly where I wanted ZFS + Docker), and those systems do run well. But I can't deny that everything requires just a little more effort to get right on Linux, because there isn't the assumption you're running ZFS, whereas FreeBSD is more or less preconfigured to use ZFS right out of the box because that's the expectation. E.g. FreeBSD's container tooling is already written to support ZFS, whereas Linux container tooling isn't.

This is why people talk about a smoother experience on FreeBSD. The file system itself is the same code base and performs largely the same. But it's all the stuff around the edges, built with the assumption of ZFS on FreeBSD, that makes things feel a little less hacked together with duct tape.

