Decision: I do not think it means what you think it means

We use the word “decision” to mean two very different things.  If I say “I’ve decided that the moon is made of green cheese”, or “I’ve decided that the economy will deteriorate next year”, these statements aren’t necessarily about actions I’m going to take.  If, instead, I say, “I’ve decided to go to graduate school” or “I’ve decided to institute a new policy”, that’s fundamentally different.

How?  The first kind of decision leads to a fact, either well-supported or not.  It works, essentially, by using data and expertise, following their implications (deductively, inductively, or otherwise), and arriving at a conclusion (which may have more or less justification: to fit this category it doesn’t have to be right).

The second kind of decision is, instead, based on levers that lead through causal links to outcomes: I’ll decide to go to graduate school if I think it will lead to a career I like, a salary that works for me, the accomplishment of a life goal, and so forth.

In other words, a systems model.  Often a chaotic, partially invisible, sensitive-to-initial-conditions, subject-to-many-influences cause-and-effect model, in all its messy glory.

In contrast, decisions about facts (our first category) are not about how actions lead to outcomes.  These kinds of decisions are important, but only for two reasons, as far as I can tell: 1) entertainment value (it’s good to know how the world works) or 2) they might play a part in some other decision, of the second type, somewhere down the road.

So if I’m working with you on a decision modeling project, you need to know: I live in the world of the second kind of decisions (let’s call them “type A”, for “Agency”): those that connect through a linked chain of events to the outcomes you want.  If there are facts to learn along the way, then so be it.  But I’m not going to spend a lot of time on facts that don’t have a chance of leading to outcomes, nor on data analysis that we don’t believe will lead to your goals.

This distinction has deep implications for the history of AI.  Cycorp, for instance, focused largely on facts.  Which is great, as far as it goes.  Yet a new company, Lucid AI, is moving into the causal world.  From what I can tell, they’ve nailed it.  More and more, we’re all talking about agency.

It’s funny: we Old Fogey AI Folks cut our teeth on the Handbook of AI.  Written in the early 1980s, it was my introduction to the field, and it kept me sane during my coding job on MVS XA at IBM.  There, I remember first hearing about BACON: an AI system prescient not only for anticipating the popularity of its namesake breakfast item, but also for forming what is today an important hidden link in the fabric of AI.  Pioneered by my friend Pat Langley, BACON discovers new causal rules from data, and has done so all these years.

Put these developments together with tools like the New York Times’ “You Draw It!” visual relationship-drawing widget and all the players in the emerging Decision Intelligence ecosystem, and we can see how the pieces fit together to form a new machine.  It’s not just automated systems, but humans in the loop, solving problems that go beyond sales and marketing.  I believe we’re going to see a hockey-stick growth curve now, driven by “type A” decision support systems.

The outcome I’m looking for is to accelerate our use of this important new technology.  So I’ve made a few decisions that, in my own systems model, will get us there.  Watch this space 🙂
