Counterfactual blindness and the invisible things

This is an article about invisible things that matter.  Because these things are invisible, it’s going to be harder going than usual.   Because they matter, it’s worth it.

There’s a story a friend told me last week about a plumber.  A woman calls him in an emergency; there’s water everywhere.  He arrives quickly, takes ten minutes to fix the problem, and hands her a bill for $100.

“A hundred dollars!” she exclaims.  “But you only took ten minutes!”

“Ah,” says the plumber.  “You didn’t pay me for the ten minutes.  You paid me for my thirty years of experience knowing exactly what to do.”

Those thirty years: that’s an invisible thing. Without them, the flood would have been a disaster.

Or what if, as Taleb points out in The Black Swan, someone had invented bulletproof cockpit doors, and had gone to the trouble of pushing through legislation so that on September 10, 2001, hijackers couldn’t enter?   He would have averted a disaster, but would he have been recognized as a hero?  Probably not.

This is the bullet that didn’t hit you, the bad software architecture you didn’t use, the disease you didn’t catch, the network downtime that didn’t happen, the software project that was delivered on time, the phone network that didn’t fail.    And there are heroes—the inventor, the advocate for disease screening, the architect, the disciplined coder, the IT guy, the phone gal—who go unrecognized, but arguably prevent a thousand more disasters than the rare few who save the day at the eleventh hour and get the glory.

Philosophers call this a counterfactual.  And it’s past time the word entered our vocabulary.  Because without an understanding of the costly path not taken, we’ll continue to suffer from a fallacy we’ll call counterfactual blindness.

In my day-to-day working world, this matters a lot, because I’m like the plumber: the hours I spend directly on a problem might be the most visible, but the false starts and wrong paths I avoid because of the thousands of machine learning systems I’ve built are more invisible, and so harder to convey.

In medicine, I’ve noticed that the kudos go to the flashy cures, leaving inexpensive prevention interventions that could save many more lives by the wayside.

Those of us who grew up in AI were taught from an early age to understand the difference between the search path used to find a solution (which can be combinatorially huge) and the path we ultimately find, which might be tiny.  Just because a solution is concise doesn’t mean it was easy to find.   It’s all about the fan-out of multiple options, thousands of choices, exploding possibilities.  And in a complex, interdependent world, these are gnarlier than ever.  Needles in larger and larger haystacks.
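To make that fan-out concrete, here is a toy sketch (my illustration, not from the article): a depth-first search for a 16-bit target string. In the worst case the search examines every node in the tree of prefixes—over 130,000 of them—yet the solution it finally returns is only 16 characters long.

```python
def find(target):
    """Depth-first search over bit-string prefixes.
    Returns (solution, nodes_examined)."""
    examined = 0
    stack = [""]  # start from the empty prefix
    while stack:
        prefix = stack.pop()
        examined += 1
        if prefix == target:
            return prefix, examined
        if len(prefix) < len(target):
            # Push both extensions; '0' is popped (explored) first,
            # so an all-ones target is the worst case for this order.
            stack.extend([prefix + "1", prefix + "0"])
    return None, examined

solution, examined = find("1" * 16)
print(f"solution length: {len(solution)}, nodes examined: {examined:,}")
```

The 16-step answer is found only after examining 2**17 − 1 = 131,071 candidates: the search path dwarfs the solution it produces.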

Decision intelligence (DI) can help: a great visual systems model can go a long way toward overcoming this problem.  A decision model gives us the best possible collaborative view of the future, letting us work together to explore how multiple realities play out.  So we can see, agree about, and plan for better futures than those for which we might otherwise settle.

My game theory friend Ruth Fisher says that, in legal parlance, the counterfactual is called the “but-for world”. “When you calculate damages for an infringement, breach of contract, etc.,” says Ruth, “you have to compare the value/cost of what actually happened to that which would have happened but-for the infringement, breach, etc. The difference in the two sets of values is the amount of damages. To make your case, you have to create the but-for world, providing credible evidence to support your case.”

In other words:

value of what actually happened – value / cost of what would have happened “but for” the infringement, breach, etc. = amount of damages

which generalizes to:

Value / cost of today’s reality (sickness, terrorist attack, etc.) – value / cost of the alternative reality (less sickness thanks to prevention, fewer people killed by terrorism) = benefit (cost) of this reality.
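As a minimal sketch of the calculation (my own illustration, with hypothetical numbers rather than anything from Ruth), the but-for comparison is just a difference between two modeled worlds:

```python
def but_for_damages(actual_value, but_for_value):
    """Damages are the difference between what actually happened and
    what would have happened 'but for' the infringement or breach.
    Sign convention here (an assumption): the plaintiff's shortfall
    relative to the but-for world; a cost framing can flip the sign."""
    return but_for_value - actual_value

# Hypothetical numbers: the breach left the plaintiff with $40,000 in
# revenue, where the but-for world would have delivered $100,000.
print(but_for_damages(actual_value=40_000, but_for_value=100_000))  # 60000
```

The hard part, of course, is not the subtraction but building a credible model of the but-for world in the first place.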

However we formulate the problem, we need a well-reasoned understanding of the alternate reality.

So join me in some counterfactual hunting: see if you notice the fallacy playing out as you estimate the cost of choices that impact multiple futures.  And make those paths visible, tangible, real: erase their invisibility through great stories, transfer from related situations,  and great visualizations.  Reward the right heroes.


Lorien Pratt

Pratt has been delivering AI and DI solutions for her clients for over 30 years. These include the Human Genome Project, the Colorado Bureau of Investigation, the US Department of Energy, and the Administrative Office of the US Courts. Formerly a computer science professor, Pratt is a popular international speaker and has given two TEDx talks. Her Quantellia team offers AI, DI, and full-stack software solutions to clients worldwide. Previously a leading technology analyst, Pratt has also authored dozens of academic papers, co-edited the book Learning to Learn, and co-authored the Decision Engineering Primer. Her next book, Link: How Decision Intelligence Makes the Invisible Visible (Emerald Press), is in production. With media appearances such as on TechEmergence and GigaOm, Pratt is also listed on the Women Inventors and Innovator’s Mural. Pratt blogs at
