Guest Post: The Decision Command Fallacy (Part 3)
How to avoid unintended consequences of decisions in complex environments
In Part 2 of this article, I explained that, without careful thought, intervention in a complex situation will more often than not lead to unintended consequences. If we don’t lift our game, I promise you, we’ll be left stumbling in the dark. Here, I’ll describe two best practices that overcome these problems.
Best practice 1: Recognize that unintended consequences exist, and understand every actor’s incentives
In Prohibition, the decision makers measured “favorability” in terms of an orderly society. However, the members of that society found the consumption of alcohol more favorable than order, and the acquisition of wealth more favorable still. In general, the measure of “favorability” the system applies to its current state need not be the same as the one the decision maker is using.
In response to the initial system change (the prohibition of alcohol), the members of the affected system compared the choices available to them:
- Comply, or
- Don’t comply
along with the favorability of the outcomes associated with each:
- No alcohol, vs.
- Availability of alcohol, chance to make a fortune by bootlegging, chance of going to prison if caught.
Enough of society chose the second option to force the decision maker (the US federal government) to reverse the law after little more than a decade.
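The comparison the actors make can be sketched as a simple expected-favorability calculation. This is a toy model of my own, not part of the historical record: the payoffs, the prison penalty, and the 5% chance of being caught are all invented purely for illustration.

```python
# Toy model: each actor picks whichever option has the higher
# expected favorability. All numbers are invented for illustration.

def expected_favorability(payoff_if_uncaught, penalty_if_caught, p_caught):
    """Expected favorability of an option with a chance of being caught."""
    return (1 - p_caught) * payoff_if_uncaught + p_caught * penalty_if_caught

# Option 1: comply -- no alcohol, no risk, no reward.
comply = expected_favorability(payoff_if_uncaught=0, penalty_if_caught=0,
                               p_caught=0.0)

# Option 2: don't comply -- alcohol plus bootlegging profits,
# against a chance of prison if caught.
defy = expected_favorability(payoff_if_uncaught=10, penalty_if_caught=-50,
                             p_caught=0.05)

# 0.95 * 10 + 0.05 * (-50) = 7.0 > 0, so defiance scores higher for
# the actor, and the system adapts around the law.
print("comply:", comply, "defy:", round(defy, 2))
assert defy > comply
```

The point of the sketch: directly enforcing an outcome changes which options are legal, but it does not by itself change which option scores highest for the actors inside the system.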
The lesson here is that decision makers dealing with systems that contain feedback loops should not try to achieve outcomes by directly enforcing them. The system will almost always adapt, not only undoing the explicitly desired change but quite possibly settling into a new state with many other undesirable characteristics. Everything from black markets (criminal and otherwise) to bloody revolutions, antibiotic-resistant bacteria, and biological extinctions has roots in decisions with unmitigated feedback loops.
To solve this, think about two distinct problems:
- What outcomes am I trying to achieve? and,
- How do I alter the system so that the system—including the people in it—regards those outcomes as the most favorable? Done correctly, the system will then self-adapt into the desired state and stay there.
Best practice 2: Understand the difference between direct action and changing the favorability surface (which is often more effective)
An example of smart decision making in a system with feedback loops is the Victorian (Australia) state government’s plan to reduce drunk driving from the high levels in the 1970s. Decades of the kind of simple “direct outcome” decisions I referred to above had failed. For example, laws were enacted that required pubs to close at 6pm. Rather than reducing DUI incidents, this resulted in binge drinking just before closing time, and many highly-intoxicated drivers on the road during peak traffic hours, with the expected result.
[Figure: Bad decision making with feedback loops]
[Figure: Feedback loops as the decision maker’s tool]
During the 1980s, a different approach was taken. Rather than trying to legislate when people could and couldn’t drink, a mixture of information dissemination and enforcement was used to change the perceived risk associated with driving while drunk. The information campaigns graphically illustrated the horrific results of car accidents involving intoxicated drivers, while increases in actual enforcement resources raised the likelihood that offenders would be caught, fined, and possibly imprisoned. No laws were changed about when people could drink, or how much alcohol they could have in their system while driving. Yet these changes altered the favorability assessments made by members of the affected system, and the feedback loops drove behavior toward exactly the outcomes the decision makers were trying to achieve.
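One way to picture “changing the favorability surface” is as tuning the parameters of the choice each actor faces, rather than forbidding an option outright. The sketch below is a toy model; the payoff, penalty, and probability values are all invented for illustration.

```python
# Toy model: an actor drives drunk only if that option beats the
# safe alternative (taxi, designated driver), valued here at 0.
# All numbers are invented for illustration.

def expected_favorability(payoff, penalty, p_caught):
    """Expected favorability of a risky option."""
    return (1 - p_caught) * payoff + p_caught * penalty

def chooses_drunk_driving(p_caught, payoff=5, penalty=-100):
    return expected_favorability(payoff, penalty, p_caught) > 0

# Weak enforcement: the risky option still looks favorable.
print(chooses_drunk_driving(p_caught=0.01))   # True  (0.99*5 - 0.01*100 = 3.95)

# Graphic campaigns plus visible enforcement raise the *perceived*
# chance of being caught, flipping the choice with no new drinking laws:
print(chooses_drunk_driving(p_caught=0.20))   # False (0.80*5 - 0.20*100 = -16.0)
```

Notice that the decision maker never touches the actor’s options directly; only the perceived probability term changes, and the actors’ own favorability assessments do the rest.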
In today’s fast-changing, globalized world, the majority of decisions are like the ones above. Yet, by and large, we don’t take the time to think through unintended consequences, and we tend towards the ineffective “direct” interventions described above. Fortunately, decision intelligence (DI) is ushering in much simpler simulation tools, bringing the power of AI, data, and simulation into software that is easier to use than a spreadsheet.