What’s really frightening about Artificial Intelligence? It’s not what you think.

OK, I’ll admit it. AI scares me. But not for the usual reasons: unlike many of my friends, I’m not too concerned about robots taking over the earth, or even the Singularity. What frightens me is the distraction AI represents from the problems that matter: the ones that need our judgment, our ethics, our humanity, our instincts, our rational subconscious, and that therefore keep humans in the loop. These problems are best solved through collaboration, applying computer help where it does the most good: giving us better data, evidence-based analysis, and a way to “moneyball” government, as in Anne Milgram’s great work fighting crime with data, or Ruth Fisher’s game-theoretic analysis of everything from the dynamics of the U.S. health care system to cap-and-trade carbon schemes.

In a recent HBR article, “Artificial Intelligence Is Almost Ready for Business,” Brad Power explains the lay of the land. He correctly points out that “the explosive growth of complex and time-sensitive data enables decisions that can give you a competitive advantage, but these decisions depend on analyzing at a speed, volume, and complexity that is too great for humans. AI is filling this gap as it becomes ingrained in the analytics technology infrastructure in industries like health care, financial services, and travel.” [emphasis mine] Power goes on to quote Tom Davenport in describing a number of use cases where humans are simply too slow.

There’s value in these solutions, I’ll grant you that. And they’re driving tremendous benefit in Silicon Valley and beyond. But for every new automated trading model or Internet of Things gadget, there are a hundred children dying of malnutrition who don’t have to. A conflict killing tens of thousands that doesn’t have to happen. Income inequality that keeps increasing. Government programs in which taxpayers invest hundreds of millions of dollars, whose results aren’t even measured, let alone continuously improved.

These situations could benefit tremendously from the same underlying technology that drives those fully automated use cases. Yet they are playing second fiddle. Why? Is it because including humans in the loop is just too hard? I’m not sure of the reason. But I do know that our priorities seem upside-down: we need to flip AI on its head, for a while at least, and focus on Intelligence Augmentation (IA) first.


First of all, it’s an easier ask. All we have to do is improve a tiny bit, a percent or two, on very complex decisions in domains that have exceeded a “complexity ceiling,” leading to brain freeze and a worldwide epidemic of unintended consequences. One of many situations that lend themselves to this approach is optimizing aid donations, using systems analysis to understand how to create maximum benefit per dollar; a toy sketch of that idea follows below. There are many more.
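To make “maximum benefit per dollar” concrete, here’s a minimal sketch of what the computation can look like in its simplest form: a greedy allocation of a fixed budget across candidate interventions, ranked by estimated benefit per dollar. This is my own toy illustration, not anyone’s production system; every intervention name, cost, and benefit figure is hypothetical.

```python
# Toy sketch: allocate a fixed aid budget across candidate interventions,
# funding whatever promises the most estimated benefit per dollar first.
# All interventions, costs, and benefit estimates below are hypothetical.

from dataclasses import dataclass

@dataclass
class Intervention:
    name: str
    cost: float     # dollars to fund one unit
    benefit: float  # estimated benefit per unit (e.g., DALYs averted)

def allocate(budget: float, options: list[Intervention]) -> dict[str, int]:
    """Greedily fund units in descending benefit-per-dollar order."""
    funded: dict[str, int] = {}
    for opt in sorted(options, key=lambda o: o.benefit / o.cost, reverse=True):
        units = int(budget // opt.cost)
        if units > 0:
            funded[opt.name] = units
            budget -= units * opt.cost
    return funded

if __name__ == "__main__":
    options = [
        Intervention("bed nets", cost=5.0, benefit=0.02),
        Intervention("nutrition supplements", cost=12.0, benefit=0.03),
        Intervention("clean-water filters", cost=40.0, benefit=0.08),
    ]
    # With these made-up numbers, the whole budget goes to bed nets,
    # the highest benefit-per-dollar option: {'bed nets': 200}
    print(allocate(1_000.0, options))
```

Real cost-effectiveness analysis layers on uncertainty, diminishing returns per unit, and interaction effects that this greedy model ignores; the sketch is only meant to convey the shape of the question.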

Even doing this in a commercial setting has tremendous value, because it frees up good people to do good things, and lets them consider multiple bottom lines where, in the past, even one was out of reach. There’s a diminishing-returns pattern here, and in many commercial domains we’re still on the steep part of the curve. Just a few examples: global program management, collaborative decision-making in model-intensive environments, telecom network optimization, sustainability decision-making, and multi-touchpoint, social-network-based customer experience management. Moving the needle a little bit in these and many other spaces is low-hanging fruit: it’s not hard to do.

Second, the knock-on benefits are huge. If you connect the dots, you realize that poverty, for instance, affects everyone, through higher health-care, policing, security, and other costs, not to mention the human element. Whether it’s visible to us or not, we live inside a globally interconnected system, in a way that is unprecedented, and we ignore the unintended consequences at our peril. (For my complex systems friends: often a small, cheap intervention can shift a system to a new part of its state space, producing a nonlinear benefit over time that dwarfs the initial cost; a toy illustration follows below. Identifying that intervention is the holy grail, but more often than not data alone isn’t enough to find the solution, because we need a full systems model.)
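For the systems-minded reader, here’s a minimal sketch of that nonlinear-benefit point: a toy bistable system where a brief, cheap push either clears a threshold and flips the system into a better basin of attraction permanently, or falls just short and accomplishes nothing lasting. The dynamics and all the numbers are invented purely for illustration.

```python
# Toy sketch: a bistable system with stable states at x = -1 ("bad") and
# x = +1 ("good"), separated by an unstable threshold at x = 0.
# A brief push either clears the threshold, after which the system settles
# into the good state on its own, or it doesn't and the system relaxes back.
# The dynamics (dx/dt = x - x^3) and all parameters are invented.

def simulate(push: float, push_steps: int = 50,
             steps: int = 5000, dt: float = 0.01) -> float:
    x = -1.0  # start in the "bad" stable state
    for t in range(steps):
        drift = x - x ** 3                    # double-well dynamics
        u = push if t < push_steps else 0.0   # small, temporary intervention
        x += (drift + u) * dt                 # Euler integration step
    return x

if __name__ == "__main__":
    for push in (1.5, 2.5):
        # push=1.5 falls short and relaxes back to -1;
        # push=2.5 crosses the threshold and settles at +1
        print(f"push={push}: settles near x = {simulate(push):+.2f}")
```

The interesting question is never the toy model itself; it’s locating the threshold, and the cheapest lever that crosses it, in a real system, which is exactly why a structural systems model matters more than historical data alone.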

Third, it’s deeply satisfying work, the kind that can make you happy, possibly even more than money can.

And finally, it’s worth repeating: there’s a way to use machine learning, big data, expert knowledge, modern UX design, the cloud, modern neurobiological understanding of visual-spatial thinking, and complex systems analysis to solve these problems in a way that goes well beyond simple “factivism.” You really can’t sink your teeth into a more technically interesting space than that, and the benefits are enormous.
