I have installed different Linux distros on 4-5 devices in the last year, including a laptop with an Nvidia GPU and a random NUC off AliExpress. In every case everything worked out of the box after a fresh installation--and as far as I remember the situation was the same ~10 years ago, when I first started using Linux. There are hiccups here and there, but nothing I cannot live with. I do tech support for my girlfriend's Mac, and there are just as many small issues there.
All this wall of text to say that, respectfully, when you write
>Earlier this year I built a new desktop and installed my normal Linux distro and the screen wouldn’t work after login
the issue might simply be that you are doing something very wrong and/or not following the proper instructions for whatever distro you are using.
But I've had multiple Lenovo laptops not work with Ubuntu or NixOS in multiple situations.
New Yoga variants have always had trouble.
E.g. my Yoga Slim 7i had a keyboard issue in Ubuntu such that for the first minute after boot the keyboard was unusable. I had to change the boot config to use a "dumb keyboard" mode or something similar.
The Yoga also had speaker issues in NixOS, as the drivers hadn't been mainlined yet. It was one of those 6-speaker (2-tweeter) setups. I had to download a random driver and chuck it into my Nix config to get the subwoofers working.
I gave up after mic issues in multiple Zoom or Google Meet calls.
You can say it's all a skill issue, but the Mac worked first try.
Laptops notoriously have rare hardware with poor or non-existent drivers. For laptops, you do need to do research with Linux to make sure things outside of the CPU/GPU work.
And of course Macs work first try. Apple makes both the hardware and the software; if it didn't work, that would be genuinely remarkable. The fact that it works is expected, not exceptional or noteworthy.
> I have installed different Linux distros in 4-5 devices in the last year, including a laptop with an Nvidia GPU and a random NUC off Aliexpress. In all cases everything worked out of the box after a fresh installation
How interesting! This mirrors my experience with some of my devices, and not with some of my others.
> All this wall of text to say that, respectfully, when you write "Earlier this year I built a new desktop and installed my normal Linux distro and the screen wouldn’t work after login" the issue might simply be that you are doing something very wrong and/or not following the proper instructions for whatever distro you are using.
Ahh yes, the complicated instructions of writing the ISO to a thumb drive, running the installer, and trying to login after the installation is complete.
My sin was using a current gen nvidia GPU (a 5080) and a 4K monitor with high refresh rates. This unprecedented combo fails to make the transition from SDDM to Plasma Wayland with the latest (at the time) nvidia drivers baked into the distros I tried. Fortunately, I wasn't alone in this issue based on the forum posts across a couple of distros, so I can be confident that at least some others failed to hold it right as well.
Yes, using an Nvidia GPU is absolutely failing to hold it right. Nvidia has shit support on Linux and they do it intentionally, everybody knows that.
You can blame Linux all you want but there's nothing anybody can do except Nvidia. The whole thing is locked down, no distro or developer on Earth can save Nvidia users.
For me:
- Easier access to books in other languages or out of print
- Quick access to a dictionary
- Backlight for reading in bed or in the evening
- Pocketability
- Way cheaper if you read a lot of public domain books (or have a parrot sitting on your shoulder)
That said, I have a jailbroken Kindle, but I am not giving a cent to Amazon. Should it break I'd just get a Kobo.
Nowadays AA [1] is IMO a better choice for users, but aside from that I cannot imagine these changes making much of a difference. There are plenty of ebook sources (Kobo, public libraries, etc.) whose DRM is trivial to break (meaning Adobe and, as of a few weeks ago, LPCM). For what little content is exclusive to Kindle, it will just end up like WEB-DL content from streaming services: a handful of knowledgeable uploaders with a KU subscription ripping content en masse—and good luck stopping them.
`free(NULL)` is harmless in C89 onwards. As I said, programmers freeing NULL caused so many issues that they changed the API to allow it. It doesn't help that `malloc(0)` returns NULL on some platforms.
If you are writing code for an embedded platform with some random C compiler, all bets on what `free(NULL)` does are off. That means a cautious C programmer who doesn't know who will be using their code never allows NULL to be passed to `free()`.
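The cautious pattern described above is often wrapped in a tiny helper so the guard cannot be forgotten at a call site. A minimal sketch (the `FREE_AND_NULL` name is mine, not from any particular codebase):

```c
#include <stdlib.h>

/* Guard against NULL before freeing, then clear the pointer,
   so a later double free or use-after-free shows up as a
   visible NULL dereference instead of silent heap corruption. */
#define FREE_AND_NULL(p)    \
    do {                    \
        if ((p) != NULL) {  \
            free(p);        \
            (p) = NULL;     \
        }                   \
    } while (0)
```

On a standard-conforming compiler the `!= NULL` check is redundant, but on the kind of random embedded toolchain described above it is cheap insurance.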
In general, most good C programmers are good because they suffer a sort of PTSD from the injuries the language has inflicted on them in the past. If they aren't avoiding passing NULL to `free()`, they haven't suffered long enough to be good.
> That means a cautious C programmer who doesn't know who will be using their code never allows NULL to be passed to `free()`.
If your compiler chokes on `free(NULL)` you have bigger problems that no LLM (or human) can solve for you: you are using a compiler that was last maintained in the 80s!
If your C compiler doesn't adhere to the very first C standard published, the problem is not the quality of the code that is written.
> If they aren't avoiding passing NULL to `free()`, they haven't suffered long enough to be good.
I dunno; I've "suffered" since the mid-90s, and I will free NULL, because it is legal in the standard, and because I have not come across a compiler that does the wrong thing on `free(NULL)`.
So what would be the best practice in a situation like that? I would (naively?) imagine that a null pointer would mostly result from a malloc() or some other parts of the program failing, in which case would you not expect to see errors elsewhere?
> imagine that a null pointer would mostly result from a malloc() or some other parts of the program failing, in which case would you not expect to see errors elsewhere?
Oh yes, you probably will see errors elsewhere. If you are lucky it will happen immediately. But often enough millions of executed instructions later, in some unrelated routine that had its memory smashed. It's not "fun" figuring out what happened. It could be nothing - bit flips are a thing, and once you get the error rate low enough the frequency of bit flips and bugs starts to converge. You could waste days of your time chasing an alpha particle.
I saw the author of curl post some of this code here a while back. I immediately recognised the symptoms. Things like:
if (NULL == foo) { ... }
Every 2nd line was code like that. If you are wondering, he wrote `(NULL == foo)` in case he dropped an `=`, so it became `(NULL = foo)`. The second version is a syntax error, whereas `(foo = NULL)` is a runtime disaster. Most of it was unjustified, but he could not help himself. After years of dealing with C, he wrote code defensively - even if it wasn't needed. C is so fast and the compilers so good the coding style imposes little overhead.
Rust is popular because it gives you a similar result to C, but you don't need to have been beaten by 10 years of pain in order to produce safe Rust code. Sadly, it has other issues. Despite them, it's still the best C we have right now.
For whatever reason, people here and on Reddit will tell you that you need to have Jellyfin pass through five VPNs or nasty things will happen. Meanwhile the actual devs suggest simply setting up a reverse proxy, which you can do in two lines with Caddy:
https://jellyfin.org/docs/general/post-install/networking/re...
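For reference, the "two lines" in question look roughly like this (assuming Jellyfin on its default port 8096 and a hypothetical domain of your own):

```caddyfile
jellyfin.example.com {
    reverse_proxy localhost:8096
}
```

Caddy will provision and renew the TLS certificate for the domain automatically.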
A reverse proxy by itself provides barely any defense; what you need in combination is an auth gateway (Authentik, Authelia), and here we are moving from "simple reverse proxy" to a fun weekend activity, and then some, to get it working as expected. Plus it breaks the app auth flow, so only the web interface is usable with this setup.
You can use a reverse proxy and still have working app auth, I have set this up via Authelia with the OIDC Jellyfin plugin.
However:
- This is EVEN MORE complex than "just" a reverse proxy.
- I'm not really sure it wins much security, because...
- at least I'm not relying on Jellyfin's built-in auth but I'm now relying on its/the plugin's OIDC implementation to not be completely broken.
- attackers can still access unauthenticated endpoints.
Overall I really wish I could just do dumb proxy auth which would solve all these issues. But I dunno how that would work with authing from random clients like Wii (and more importantly for me, WebOS).
With a reverse proxy, I don't see how this would work. The whole way the reverse proxy works is you use a subdomain name ("jellyfin.yourdomain.org") to access Jellyfin, rather than some other service on your server. The reverse proxy sees the subdomain name that was used in the HTTP request, and routes the traffic based on that. Scanning only the IP address and port won't get attackers to Jellyfin; they need to know the subdomain name as well.
The only tricky part here would be to make sure you’re doing a wildcard certificate, so that your subdomain doesn’t appear in Certificate Transparency logs.
If you expose Jellyfin on 443, have HTTPS properly set up (which Caddy handles automatically), your admin password is not pswd1234 (or you straight up disable remote admin logins), and use a cheap .com domain rather than your IP--what is the actual attack surface in that case?
As far as I can remember that is more or less what is usually suggested by Jellyfin's devs, and I have yet to see something that convinces me about its inadequacy.
The absolute worst thing I can see in there is that a third party who somehow managed to get a link to one of your library items (either directly from you or from one of your users--or by spending the next decade bruteforcing it, I guess) could stream said item:
https://github.com/jellyfin/jellyfin/issues/5415#issuecommen...
Everything else looks to me like unimportant issues that would provide someone who's already logged in as a user with minor details about your server.
Unless I am misunderstanding the discussion on GitHub, the attacker would still need to know the exact path where the file is saved, and the name of the file itself. Even then, all they can do is download the file from your device--which they could just torrent themselves for a fraction of the effort.