I agree with your underlying point, but it's important to note that computers are also singularly wonderful in that it's usually much faster and easier to reverse failures, and then to diagnose and debug them without further impact.
To take your second example - if I could then flip the light switch back, and the pool reappeared, then I'd be miffed but not particularly annoyed (assuming I was able to fix that obvious bug, either myself or via an update, in a timely fashion). If the pool stayed gone, then yeah, I'd be pissed.
Of course, that whole argument goes out the window when the tech in question isn't controlled by you. Which is often the case.
Tell that to the 346 people who perished because of the negligent (and, in my opinion, maliciously deceptive toward regulators) undocumented, uncommunicated programming of MCAS, the extension grafted onto the 737 MAX's speed trim system.
Or the folks who perished because of the badly designed software interlocks on the Therac-25 radiotherapy machine.
Just knowing, or figuring out, that there is a switch to flip may be an insurmountable barrier depending on the circumstances when a failure state occurs. Especially when the implementation is intentionally hidden so as to facilitate continued market value extraction from the happy accident of information asymmetry.
In wiring a house, there is a built-in assumption that something could go wrong and disrupt the wiring. That's why we had fuses, and now have circuit breakers, grounding, ground-fault interrupters, metal conduit, etc.
All of these serve to limit the side effects of faults.
When you turn on a switch... it's part of a circuit which is current limited, and in fact there are several limits on that current, all the way back to the source... each designed to protect a link in the chain. Each of those breakers limits the capability to source current further downstream.
When you run a task in any modern OS, it runs with the full privileges of the user ID with which it was launched. This is like hooking a generating station directly up to the floor lamp in your living room with no breakers. If the process has a fault, the operating system will do nothing to prevent it from being used to subvert other parts of the system; there is no limit to what it can do.
There are systems that require you to specify which resources a given task is allowed to access. It turns out that such systems can be just as user-friendly as the ones we're used to, but they do require that things be rewritten, because the ground assumptions in the security model are different.
Capability-based security (closely related to "multi-level security") was born out of a need to have both Sensitive and Top Secret information shared on a computer that scheduled air traffic during the Vietnam Conflict. (If I remember the situation correctly.) The flights themselves were Sensitive, and the locations of the enemy radar were Top Secret (because people risked their lives spying to find them).
It was extremely important that the information could not leak; solutions were found, and they work!
About 10 years ago, when I learned about this and considered the scope of work required to make it available in general-purpose operating systems, I estimated it would take 15 years for the need for capability-based security to be widely recognized, and another 5 or so until it was ready. I think we're on track... around 2025 people will start adopting it, and by 2030 it will be the de facto way things are done.
Genode is a long-standing project to bring this new type of security to the masses... I'm still waiting for the point where I get to play with it, and have been for a while.
Things will get better... these types of tools, along with "information hiding", getting rid of raw pointers, and abandoning other clever-but-dangerous tricks, will help as well.
The problem with an increase in security is that it almost always comes with a tradeoff of higher complexity. Higher complexity means more difficulty in tracing what the system is doing. It also means the state space of a general-purpose machine, ostensibly there to be configured to fulfill the user's goals, is a priori heavily constrained.
Point being, I don't see a shift in the direction of security above usability, or above ease of mental modeling, doing anything but worsening the problem. I could be wrong on that, but the last 20 or so years of further encroachment by industry on users' prerogative to configure their machines as they like doesn't inspire great confidence in me.
I can say I'm totally reading up on that, though. I hadn't heard of it before, and it sounds interesting.
Completely agree - hence why I said _usually_. Another example of irrevocable harm is when ML algorithms dictate some medical treatment or social program.
But, _usually_, it's easier to reverse some changed-data somewhere than it is to reverse an actual change-of-state in the physical world. At least, the inherent effort required to do so is less - but policies or obfuscation may make it harder.