Getting the links right: Sketch graphs, AHP, ML, and more.

Cause-and-effect links live at the heart of complex systems. Understanding them means we can go beyond historical data: we can reuse piecewise causal links from the past to inform situations we haven’t faced before. This is incredible, because it breaks us free from the tyranny of using only historical data in data science, machine learning, AI, and more. Here’s how three previously separate approaches can help us get the links right.

Over the years, I’ve often been asked, “How does decision intelligence compare to the Analytic Hierarchy Process (AHP)?” AHP is widely used in decision consulting. It’s central, for example, to the work of my friends at Transparent Choice, who do great work supporting complex decision making. So here’s my lightning-fast, blog-sized overview of how these methods relate, and how they both connect to machine learning (including deep learning).

When making difficult decisions in complex environments, it’s usually worth the time and effort to introduce a degree of rigor. The core of this rigor in most decisions is a link from cause to effect, or one that calculates a value that moves us along the path from levers to outcome, as shown in the graphic above.

[Figure: sketch graph from World Modeler™ relating publication count to job suitability]

For instance, one link might tell us the relationship between the number of articles a person has written and their suitability for a certain job. The relationship might look like the graph above (from World Modeler™). It shows that a person without any publications has zero suitability, and that they are a better and better fit for the job, on this criterion, as their article count climbs to about 150. Above 150, additional publications really don’t matter, and too many may even be a liability.
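To make the idea concrete, here’s a minimal Python sketch of a curve with this shape. The breakpoints (150 and 200 publications) and the size of the “liability” penalty are illustrative assumptions of mine, not values from World Modeler™:

```python
import numpy as np
import matplotlib.pyplot as plt

def suitability(publications):
    # Rises linearly from 0 suitability at 0 publications to 1.0 at 150,
    # then plateaus; past ~200 a gentle penalty makes "too many" a liability.
    rise = np.clip(publications / 150.0, 0.0, 1.0)
    penalty = np.clip((publications - 200.0) / 400.0, 0.0, 0.25)
    return rise - penalty

pubs = np.linspace(0, 400, 200)
plt.plot(pubs, suitability(pubs))
plt.xlabel("Number of publications")
plt.ylabel("Suitability for the job")
plt.show()
```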

But what if we don’t know the shape of this curve? Imagine we’re trying to determine the best kind of graphic to show a cancer patient as they make decisions about their treatment. Which kind of chart is best? Does a bar chart showing treatment outcomes communicate better than a line graph? Here, we don’t have insight into the relationship. Compared to the publications example above, it’s a “black box”.

Both of these situations are illustrated below. We have something we can measure, and some derived value based on that measure, and we want to know how to get from one to the next.

[Figure: a link from “something we can measure” to a derived “new measurement”, with an unknown relationship “f?” between them]

The Analytic Hierarchy Process, invented by Thomas L. Saaty in the 1970s, is an important technique for eliciting human knowledge when it’s not easy to simply “draw the curve”. The core of the approach is to ask people to compare two options at a time. For example, as described in a paper by James G. Dolan and Stephen Iadarola, doctors were interested in knowing what kind of graphic was best for explaining cancer screening risk to patients. The doctors showed each patient two data visualizations of the same risk data for a cancer screening option, and asked them to say, for example, “It’s five times easier to understand the visualization on the left than the one on the right”. This data, fed to an AHP system, gave the doctors an overall preference ranking. Here, the “something we can measure” was the type of visualization, and the “new measurement” was the amount (e.g. “five times easier”) by which patients preferred it to the other options. AHP tells us how to get from one to the other, as shown below:

[Figure: the AHP link from side-by-side visualization comparisons to an overall preference ranking]

In other words, we’re converting data about side-by-side visualization preferences into an overall preference statistic across all patients and all types of visualizations.
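Under the hood, AHP assembles those judgments into a reciprocal pairwise-comparison matrix and reads the overall priorities off its principal eigenvector. Here’s a minimal sketch of that calculation; the three visualization types and the judgment values are invented for illustration, not taken from the Dolan and Iadarola study:

```python
import numpy as np

# A[i, j] = how many times easier option i is to understand than option j,
# on Saaty's 1-9 scale. The matrix is reciprocal: A[j, i] = 1 / A[i, j].
options = ["bar chart", "line graph", "pictograph"]
A = np.array([
    [1.0, 5.0, 3.0],
    [1/5, 1.0, 1/2],
    [1/3, 2.0, 1.0],
])

# AHP derives the overall priorities from the principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
priorities = principal / principal.sum()

for name, p in zip(options, priorities):
    print(f"{name}: {p:.3f}")

# Consistency check: the closer lambda_max is to n, the more coherent the
# judgments; Saaty's rule of thumb compares this index to a random baseline.
n = A.shape[0]
ci = (eigvals.real.max() - n) / (n - 1)
print(f"consistency index: {ci:.3f}")
```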

This is a powerful approach in these sorts of “black box” situations, where we don’t have the intuition to draw a sketch graph.

It’s worth comparing AHP to a third approach, one at the core of all statistics and machine learning. To illustrate, in our example we’d simply ask patients, “How much do you like this visualization?” (instead of the side-by-side comparison). This data, along with attributes of each visualization (like size, color, type of chart used, and more), can then be fed to one of many systems, such as statistical regression, a decision-tree learner like CART, or even a deep learning algorithm, to determine the relationship, as shown below.

[Figure: the machine-learning link from visualization attributes and direct patient ratings to a learned relationship]
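Here’s a minimal sketch of that third approach using plain linear regression; the attributes, ratings, and feature encoding are all made up for illustration, and a decision tree or neural network could stand in the same slot:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row describes one visualization shown to a patient:
# [size in inches, uses color (0/1), is a bar chart (0/1)]
X = np.array([
    [4, 1, 1],
    [4, 0, 1],
    [6, 1, 0],
    [6, 0, 0],
    [5, 1, 1],
    [5, 0, 0],
])
# Direct answers to "How much do you like this visualization?" (1-10)
y = np.array([8, 7, 5, 6, 9, 4])

# Fit the unknown relationship "f?" from attributes to ratings.
model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_)
print("predicted rating for a new chart:", model.predict([[5, 1, 0]])[0])
```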

Sometimes, we want more than a good value for “f?”. All other things being equal, the “unknown relationship” recovered by any of these three techniques is better if it’s also understandable. If regression gives an attribute a high coefficient, for example, then that attribute matters more; a “black-and-white” attribute might earn a higher coefficient than a “color” one. Likewise, if a neural network assigns high weights to an attribute (e.g. color), or a decision tree places an attribute high in the tree, then that attribute matters more than the others.
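As a sketch of what that inspection looks like in practice, here’s a decision tree fitted to the same made-up data as above; its feature importances play the same explanatory role as regression coefficients:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Same hypothetical attributes and ratings as the regression sketch.
X = np.array([[4, 1, 1], [4, 0, 1], [6, 1, 0],
              [6, 0, 0], [5, 1, 1], [5, 0, 0]])
y = np.array([8, 7, 5, 6, 9, 4])

# A shallow tree keeps the learned relationship easy to read.
tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, y)
for name, importance in zip(["size", "uses_color", "is_bar_chart"],
                            tree.feature_importances_):
    print(f"{name}: importance {importance:.2f}")
```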

In summary, the best way to do rigorous decision modeling is to use a toolkit of many approaches, choosing the best one for each job. In many practical settings, understandability and usability matter more than anything else, as these are often the barriers to adoption, way ahead of the accuracy of a particular technique. This is particularly true when there are humans in the loop of the decision-making process.

What’s exciting is that with a decision model, we can see how sketch graphs, AHP, and machine learning fit into an overall picture of how to make decisions more rigorous, transparent, and agile.

I’d love to hear your thoughts in the comments below. I’m really excited about this story, because for the first time I can see how these approaches relate to each other. Do you agree?
