I idly worry that this, the blanket rejection of goto, etc. are giving ammunition to lazy thinkers, when the point of such "rules" is to guide the thinking of new programmers until they can properly reason about the exceptions and nuances to the rules; like training wheels for the mind. It's not a mathematical, empirical definition, for sure, but a good professional will have a feeling for the right thing to do to solve a problem cleanly.
I have found that some people never re-examine these rules as life goes on and just become like a walking linter, without any capacity to think more deeply about things. They tend to produce superficially acceptable code with deeper flaws because they don't think about things holistically.
I have had a coworker frequently cite avoidance of premature optimization as an excuse for not wanting to think about architecture, context, or the future, even when things are well defined. He'll whip this out to avoid work, but then spends many hours refactoring his codebase multiple times.
I sat down and read the original paper a few years back with the same intent as the author here, and afterwards I listed a couple of quotes that I found interesting [0]. It didn't really get any traction [1], but maybe someone here will find it interesting.