Having thought so much about levers in the last two posts, I’ve also been pondering the variety of levers I’ve built and seen, and the different purposes they serve in a decision model. In particular, given that our goal is for models to be as easy to understand as possible, so as to facilitate collaborative team alignment, I think that some principles are emerging. Here are a few ideas.
(Note if you are reading this in email: this post is full of animated gifs, and most email readers block those. So I suggest you read this post on its web site.)
First, if we’re thinking about the graphical display of systems models, I think it’s helpful to contrast “batch” versus real-time reaction to lever movements.
The first animation below is a Stella model, where you can see that the system parameters are changed first, then you press the “run” button, and then the result is a graphical display of important metrics:
Contrast this with the model I built in a recent blog entry, where we can see intermediate values changing as levers are moved.
I think that, wherever possible, we should provide this kind of immediate feedback, because it exposes the internal model dynamics, thus enhancing team alignment and collaboration around shared mental models.
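To make the contrast concrete, here is a minimal sketch of the two interaction styles using a toy revenue model. All names here are illustrative assumptions of mine, not taken from Stella or any other tool mentioned in this post: in "batch" mode you would call the model only when a "run" button is pressed, while the listener pattern below recomputes intermediate values the instant any lever moves.

```python
def revenue_model(price, volume):
    """Compute intermediate and final metrics for a toy model."""
    gross = price * volume          # intermediate value
    costs = 2.0 * volume            # assumed unit cost of 2.0
    profit = gross - costs          # final metric
    return {"gross": gross, "costs": costs, "profit": profit}

class Lever:
    """A lever that notifies listeners the moment its value changes."""
    def __init__(self, value):
        self.value = value
        self.listeners = []

    def set(self, value):
        self.value = value
        for listener in self.listeners:   # immediate feedback:
            listener()                    # recompute on every movement

price = Lever(5.0)
volume = Lever(100.0)

def recompute():
    print(revenue_model(price.value, volume.value))

for lever in (price, volume):
    lever.listeners.append(recompute)

price.set(6.0)   # intermediate values update instantly, no "run" button
```

The same `revenue_model` function serves both styles; the only difference is whether it is invoked on a button press or on every lever movement.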
Here’s another model in the same class of interaction, where we immediately see the impact of various business decisions. Built for the Omega Global Initiative – a new nonprofit deploying cutting-edge algae-based systems around the world – this model uses a number of custom graphical elements to help make the point about how using the Omega platform for multiple purposes changes the economics.
Interestingly, in my copy of Firefox (v39), the first graphic above looks as follows. See how the movement only happens when I let go of each slider? I think this is much less helpful and does a much worse job in driving my intuition about how it fits together. Do you agree?
A final question about this animation: what do you think of the text explanation? Although I’ve long been a proponent of shifting from text-based to visual thinking, I’m starting to think that text is nonetheless a necessary addition for fully understanding the model. I think this automated text is a good way both to convey the dynamic nature of the model and its sensitivity to assumptions, and to ensure everyone fully understands it.
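One way such automated text could be generated is to probe the model numerically and describe the lever the output is most sensitive to. The following is only a sketch of that idea under my own assumptions; the function and variable names are invented for illustration and are not from any tool discussed here.

```python
def sensitivity(model, levers, metric, eps=0.01):
    """Estimate how much the metric moves per unit change in each lever."""
    base = model(**levers)[metric]
    result = {}
    for name, value in levers.items():
        bumped = dict(levers, **{name: value * (1 + eps)})
        result[name] = (model(**bumped)[metric] - base) / (value * eps)
    return result

def explain(model, levers, metric):
    """Turn the sensitivity estimates into a one-sentence explanation."""
    sens = sensitivity(model, levers, metric)
    driver = max(sens, key=lambda k: abs(sens[k]))
    return (f"{metric} is currently {model(**levers)[metric]:.1f}; "
            f"it is most sensitive to '{driver}' "
            f"(about {sens[driver]:.2f} per unit change).")

def toy_model(price, volume):
    return {"profit": price * volume - 2.0 * volume}

print(explain(toy_model, {"price": 5.0, "volume": 100.0}, "profit"))
```

Because the text is regenerated from the live model state, it stays consistent with whatever the levers currently say, which is exactly what a static caption cannot do.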
Much as I like this immediate feedback, sometimes the dynamics of the model really require simulation code to run over time, as in our Liberia model:
Note here how the lever movements don’t produce any results until we press the “start simulation” button. So this is a lot more like Stella. But do note how there are constraints between levers for which we do get immediate feedback. Also, for many problem domains, a geospatial (map) display like this is essential.
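The constraint-between-levers idea can be sketched very simply: even when the full simulation only runs on "start", the levers themselves can enforce a relationship immediately. This is a minimal illustration under assumed names (a fixed budget split between two hypothetical spending categories), not the actual constraint logic of the Liberia model.

```python
TOTAL_BUDGET = 100.0

class LinkedLevers:
    """Two levers constrained to share a fixed budget."""
    def __init__(self, total):
        self.total = total
        self.values = {"clinics": total / 2, "outreach": total / 2}

    def set(self, name, value):
        value = max(0.0, min(self.total, value))   # clamp to a valid range
        other = "outreach" if name == "clinics" else "clinics"
        self.values[name] = value
        self.values[other] = self.total - value    # immediate constraint feedback

levers = LinkedLevers(TOTAL_BUDGET)
levers.set("clinics", 70.0)
# "outreach" snaps to 30.0 the moment "clinics" moves,
# long before any simulation is run
```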
In particular, a map can be used in agent-based models, as shown in the following video clip built in the Lumion architectural visualization software.
Note the moving objects on the table inside the window: those are simulated agents, and can be used in many applications in which it is important to understand how entities (viruses, people) move across a geographical area.
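A bare-bones version of that kind of agent-based movement fits in a few lines. The sketch below is my own toy construction, not the model in the video: agents random-walk on a grid, and an "infection" spreads to any agents occupying the same cell.

```python
import random

def step(agents, infected, size=10):
    """Move each agent one cell, then spread infection within shared cells."""
    for i, (x, y) in enumerate(agents):
        dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        agents[i] = ((x + dx) % size, (y + dy) % size)   # wrap at grid edges
    by_cell = {}
    for i, pos in enumerate(agents):
        by_cell.setdefault(pos, []).append(i)
    for cell_agents in by_cell.values():
        if any(i in infected for i in cell_agents):
            infected.update(cell_agents)   # everyone in the cell is exposed

random.seed(1)
agents = [(random.randrange(10), random.randrange(10)) for _ in range(50)]
infected = {0}                             # start with one infected agent
for _ in range(100):
    step(agents, infected)
print(f"{len(infected)} of {len(agents)} agents infected after 100 steps")
```

Overlaying positions like these on a map is what turns the abstract state of the model into the geospatial display the Liberia example uses.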
Someone asked me the other day whether decision visualization was restricted to displaying models whose behavior was feedforward-only, without feedback effects. Nope. We can get a great view of when tipping points occur in complex systems with a continually-running simulation that can be tweaked while it’s running.
For an example, take a look at this likelihood to recommend (L2R) model I built in World Modeler™ for a telecom company. As you can see, a small change to a couple of parameters leads to an explosion in customers:
This same dynamic plays out in my web-based carbon tax demo. This video shows that making small changes leads to an explosion of revenues and a corresponding reduction in carbon emissions:
Decision model visualization is both brand-new and an old field, since it stands on the shoulders of systems- and agent-based modeling giants. If our greatest priority is end-user understanding (Quantellia’s focus), then I think this leads to some new design decisions. What do you think? Thank you for any pointers to other modeling visualizations, thoughts, and/or feedback.