> What people really want: as little OS as possible
I see what you're saying but that isn't how I think about it.
I'm happy to have as "much" OS as is useful and adds value, convenience, or user experience for me.
Example: I quite like Windows Hello. Facial recognition is the smoothest, most pleasant form of biometric authentication available on a laptop, and it's nice to be able to use it anywhere throughout the whole OS that a password would otherwise be required (e.g. before revealing hidden passwords in a password manager, when opening a command prompt with elevated permissions, or before applying passkeys to log into a website). It starts up fast, works in low light thanks to IR emitters, and recognizes me pretty close to 100% of the time. It's a great experience. My use of my laptop would only be reduced by having "less OS" in this case.
What I don't want is anything that compromises my utility, convenience, or user experience in order to make the OS useful and valuable for someone else.
Example: advertisements embedded in the Start menu are plenty valuable to M$, but compromise my user experience in the process.
Example 2: Inserting Copilot into Paint and Notepad seems valuable for pumping M$'s stock price, but it annoys me by cramming unwanted AI into my basic utility programs where I have no interest in it.
From my point of view, the ideal here is something like pre-OS-X Mac OS, where the OS itself was barely even an OS: more a substrate just complete enough to run the desktop and third-party applications on.
The majority of bells and whistles (which Windows Hello falls under) were not baked into the OS, but instead implemented as system extensions that the user could disable and prevent from loading into memory at will.
This meant that even with the last release of Classic Mac OS (9.2.x), if you disabled all extensions you got a desktop reminiscent of the 1985 System 1 except with color and modern resolution support.
I think it should be more of a goal for desktop OSes to try to emulate this. If a Windows user wants a quiet, no-frills, Win2000-like experience with choice exceptions like Hello, they should be able to have that without resorting to messy hacks that impact stability and undo themselves when you update.
On Windows 11, when you reconnect to a monitor or set of monitors that you've connected to before, it will automatically return your open windows to the layout across those monitors that you had when you last disconnected (assuming those windows are still open).
This is extremely nice and saves me time on a literally (not figuratively) daily basis, to the point that I generally forget that it hasn't always worked that way.
I wish this worked! I have to go to the office on my hybrid schedule. When I switch between home and office, Windows is "smart" enough to keep each window's taskbar icon on the correct monitor while generally placing the window itself on the opposite monitor from where I want it (literally opposite where its icon sits on the taskbar!). It's so annoying, and I dread the days I switch between office and home for this reason, as I have to drag each window to the opposing monitor before things are back to how I want them. It would be less bad the old way, where windows were just stupidly thrown onto the "primary" monitor and I only had to drag half of them over.
"Well, they turned the entire OS into a tracking, sales and ad/propaganda delivery service, but they managed to make a single feature non-dumb, so guess we're even."
(propaganda: the Windows 11 default widgets are "offering" a lot of Russian-biased media, because Microsoft is too dumb to recognize that and will take any news source, and Russian-connected outlets are happy to use this delivery vector that most gullible people leave turned on)
I don't think any of the default news-oriented Windows applications since Windows 8 has had an option to add a custom RSS feed. It has always been a fixed pool of sources they supply.
My dual monitors conflict with this feature: they fail to detect a signal, switch inputs, and eventually power down. Then Windows sees a different configuration and switches again, causing an endless spiral. I have to turn both monitors on to the correct input while plugging the laptop into the dock. I wish there were a way to save specific monitor setups and manually toggle between them.
Yet when I switch between home and work I have to fully restart my laptop about half the time in order for it to even detect the monitors. I also find this feature has an issue with certain programs (Obsidian in particular) where it opens the window almost off screen.
Even aside from the malevolence, Windows is rotten from the thirty-year-old metaphor it started with: windows themselves. The job of positioning and resizing applications is a confusing mix of responsibility between the user and the system.
Once you've switched to tiled window managers, examples like these sound like Stockholm Syndrome.
I hate tiling window managers. After I start a program, I move and resize its window to the perfect position, and it stays there for weeks. I don't ever want it to be moved or resized automatically, which is what tiling window managers do by default.
I will offer that you can resize/move/float in most tiling managers. Remembering your modifications is usually possible too. It's the default behavior that separates the experience.
I can't see a practical world where the OS doesn't need to take control of window positioning in certain situations. As a core example, there is full screen. Minimize is another, but that doesn't have a clean analogue in the tiled universe.
There's a natural strong reaction folks have to window managers, because it forces you to mentally remap at such a foundational level.
I prefer tiled managers because the user offloads most responsibility. Open something and by default it uses as much space as is available. If you have a special need, you can float or resize, but the vast majority of cases it makes the right call.
At heart, it's offloading cognitive load. They're more predictable and require less faffing around.
Without having run the whole company twice in parallel, once using Haskell and once in some other language, and without having measured both runs exactly the same way, I don't think metrics like the ones you're interested in could possibly have sufficient context to mean anything reliable.
Obviously Mercury is successful, and obviously Haskell is how they did it. So it's essential to their success. Would it be instrumental to anyone else's anywhere else doing anything else? Can't possibly know, I don't think.
You can, but then "The cake is a lie.", because line count and bug rate, when conceived as proxies for productivity[1] or quality, rarely match up with reality in a way that lets you make predictions or reason about past outcomes.
You can reason about the frequency of particular types of bugs, such as null pointers or overflows, or about whether those bugs can occur at all.
I wasn't aware of the Gloat project before this. It's a compiler that turns Clojure into native binaries by first transpiling to Glojure (which I'd also never heard of before this), which in turn targets Go. This is rather than using a GraalVM native image, which as I understand it is at this point the better-explored mechanism of doing that for JVM-based stuff (but has its own trade-offs).
> If you only care about the UX of TUIs, that I can stand behind
This is a confusing concession. Of course we love TUIs because of the UX, what other reason is there?
Constraint breeds consistency and consistency breeds coherence.
Take 1,000 random TUI designers and 1,000 random GUI designers and plot the variations between them (use any method you like)—the TUI designers will be more tightly clustered together because the TUI interface constrains what's reasonable.
Yes of course you CAN recreate TUI-like UX in a GUI, that's not the issue. People don't. In a TUI they must. I like that UX and like that if I seek out a TUI for whatever thing I want to do, I'm highly likely to find a UX that I enjoy. Whereas with GUIs it's a crapshoot. That's it.
> the TUI designers will be more tightly clustered together because the TUI interface constrains what's reasonable.
It constrains what’s possible, not what’s reasonable. For example, one could typically fit more text on a screen by compressing it, but most of the time, that’s not the reasonable thing to do.
I’m saying most of the time because English Braille (https://en.wikipedia.org/wiki/English_Braille#System), which uses a compression scheme for frequently used words and character sequences such as ‘and’ and ‘ing’, shows that humans are willing to learn fairly idiosyncratic text compression schemes if there is enough pressure to keep texts short.
One could also argue that Unix, which uses a wildly inconsistent ad-hoc compression scheme, writing “move” as “mv”, “copy” as “cp” or “cpy” (as in “strcpy”), etc., also shows this, but I think that would be a weaker argument.
Try a 300 baud modem for a few months and good money says something terribly modern like Get-MrParameterCount would get compressed, a lot. Here's Bill Joy on the topic:
> No. It took a long time. It was really hard to do because you've got to remember that I was trying to make it usable over a 300 baud modem. That's also the reason you have all these funny commands. It just barely worked to use a screen editor over a modem. It was just barely fast enough. A 1200 baud modem was an upgrade. 1200 baud now is pretty slow. — "Bill Joy's greatest gift to man – the vi editor". The Register. 2003.
> It constrains what’s possible, not what’s reasonable.
Why do you say "constrains what’s possible, not what’s reasonable", as though it's one and not the other? Does possibility conflict with reasonability? I would think it's not an either/or, it's a both/and.
The set of reasonable things is bounded by the set of possible things. So if the constraints of TUI design make certain things impossible, surely they make those same things unreasonable at the same time.
I'm sorry, excellent GUI with Blender? With the 2.5 interface things were ass-backwards, but there was a bunch of stuff you could do with only the mouse. With the 2.8 interface, suddenly a bunch of stuff was hidden behind arcane key combinations, options were disabled by default, and important visual data was lost, like the bounding-box view and having both the UV and cursor coordinates in the same tab in the UV/image editor. No matter what, the controls are different with every sub-window type, and interface panels flip from top to bottom and left to right for best readability without a thought spared for consistency. There's a reason someone can learn FL Studio in a few weeks but take months or even over a year to become competent in Blender. I love its jank and have been using it for eleven years, but I would never call the UI more than serviceable.
The thing that happens to me is that I'll get something working in the REPL, then try to deploy it and it breaks—because unbeknownst to me, I had gotten my REPL into some state where everything was working, but a cold start doesn't look the same.
Is this a skill issue? Absolutely. Do I still restart the REPL frequently (not after every def, but often) just to make sure I'm working with the same environment my program will be experiencing at run time? Yes I absolutely do.
Ah yeah, been there. Probably the first time was when renaming a function but forgetting to update its callers, so the callers kept calling the old function and you have no idea why the changes you made in the new function aren't working.
I have this little function for clearing the current namespace that I call every time I rename any var inside a namespace:
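The commenter's actual function isn't shown; a minimal sketch of the usual approach, using Clojure's built-in `ns-interns` and `ns-unmap` (names here are illustrative), would be:

```clojure
(defn clear-ns!
  "Unmap every var interned in the current namespace,
   so stale definitions can't linger after a rename."
  []
  (doseq [sym (keys (ns-interns *ns*))]
    (ns-unmap *ns* sym)))
```

After calling it, re-evaluating the whole file rebuilds the namespace from scratch, which catches any caller still pointing at a renamed-away var. (Note it unmaps `clear-ns!` itself too if it's defined in that namespace.)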
I have an AutoHotkey that just takes whatever is in my clipboard and sends it through as individual virtual keystrokes, specifically for defeating paste-disabled form fields.
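The commenter's script isn't included; a minimal AutoHotkey v2 sketch of the idea (the Ctrl+Shift+V hotkey is an arbitrary choice) might be:

```autohotkey
; AutoHotkey v2: send the clipboard as literal keystrokes,
; bypassing paste-disabled form fields.
^+v::SendText(A_Clipboard)
```

`SendText` types the text character by character rather than issuing a paste, so fields that block Ctrl+V still receive the input.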
"They're too easy to get sentimentally attached to, and then it makes me sad if I blow them up!"
Honestly this probably enhances the sandbox nature of the game by making the stakes more palpable.