AI-aided management decisions can't come soon enough. Decision Model and Notation (DMN) made big headway on this, but I rarely see it discussed. DMN models are essentially fancy decision trees, but they can handle complex factors and are designed to be intuitive to reason about.
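For anyone unfamiliar with DMN: the core artifact is a decision table, i.e. rows of input conditions mapped to outputs under a "hit policy" that says which matching row wins. A minimal sketch of the idea in plain Python (the names and the discount scenario are made up for illustration, not from any DMN tooling):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """One row of a DMN-style decision table."""
    condition: Callable[[dict], bool]  # when does this row match?
    output: str                        # the decision outcome it produces

def decide(rules: list, inputs: dict, default: str) -> str:
    """FIRST hit policy: the first matching row wins; fall back to a default."""
    for rule in rules:
        if rule.condition(inputs):
            return rule.output
    return default

# Hypothetical vendor-discount decision, written as a table of rules.
discount_rules = [
    Rule(lambda d: d["order_total"] >= 10_000 and d["loyal_customer"], "15%"),
    Rule(lambda d: d["order_total"] >= 10_000, "10%"),
    Rule(lambda d: d["loyal_customer"], "5%"),
]

print(decide(discount_rules, {"order_total": 12_000, "loyal_customer": False}, "0%"))
```

The point isn't the code itself; it's that once a decision is written down this way, it's inspectable, testable, and arguable, unlike a policy that lives in one manager's head.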
Management is made of leaky abstractions. Of course I think humans should still be the accountable party, but AI can guide the more routine decisions so that more time can be spent on the hard problems.
Imagine how many thousands of hours managers have wasted at this point discussing WFH. They should have been figuring out their supply chain and labor strategy.
This argument would be more compelling if companies were in general better at driving management decisions using all of the many data analysis techniques already available. Making something lower effort but also harder to understand and more error prone doesn't seem like an obvious win...
I agree! I think management processes need more open experimentation, though. Documenting decisions as decision trees makes them more transparent overall. It also aids in transferring knowledge, something which, from first-hand experience, is lacking on the ground at scale. Meaning: a lot of management knowledge lives only in people's heads, which increases bus-factor risk. That risk can be mitigated, whether through my proposed method or otherwise.
Abstractions get leaky when underlying assumptions fail to hold. For example, comparing insurance rates and coverage across companies via AI is of limited utility if you ignore counterparty risk: the risk that a company fails to keep its end of the contract. You can add that, or any specific thing I bring up, to the model, but you can't include everything, because the model is always simpler than reality.
The desire for AI decisions to be explainable forces them to use even simpler models, which makes this worse.