If AI Doesn’t Work, Then How Could It Kill Us?
Forbes reports that “most AI projects fail”; TechRepublic puts the number at 85%. How do we reconcile this reality with claims, like those from Elon Musk and futurism.com, that AI represents an “existential threat”? The threat usually invoked is AGI (Artificial General Intelligence): a general-purpose, human-like intelligence like HAL from 2001 or the Terminator.
Here’s what’s going on.
First, all new technology is by its nature disruptive, and AI today is no exception. The cotton gin, the telephone, and certainly the internet and personal computers all created massive shifts in employment and commerce. Wise governments adapt and retrain to minimize the impact on workers. Sometimes this goes well, sometimes not so much, but ordinary disruption is not what most people mean by AGI, which goes to the next level.
Second, unique to AGI is its ability to capture the imagination: it is great fodder for Hollywood and more, driving an entire post-apocalyptic genre. AGI stories have become so powerful that some claim they have reached the level of global propaganda. We never saw blockbuster “disruptive telephony” Hollywood hits; AGI stories are unique in this regard.
Third, AI has been genuinely successful in recent years. When combined with our ability to send, store, and analyze data, AI has driven big improvements in natural language processing (the likes of Alexa and Siri are the most familiar representatives here), and in marketing and advertising (think Google and Facebook ads, plus Amazon recommendations).
AI summer/winter patterns caused by “overshoot” and ignoring unsolved problems
But here’s where we go off track: it’s natural to simply extrapolate the growth of recent successes and to assume that there are no big barriers along the way from AI to AGI.
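The extrapolation trap can be made concrete with a toy model (my numbers, purely illustrative): suppose true progress follows an S-curve that saturates, while the forecaster fits a straight line to the boom years. A minimal sketch:

```python
import math

CAP = 100.0  # the S-curve saturates here; a straight line never does

def progress(t, rate=0.5, midpoint=10.0):
    # Hypothetical "true" progress: a logistic S-curve (illustrative parameters).
    return CAP / (1 + math.exp(-rate * (t - midpoint)))

# Observe only the boom years and fit a least-squares line to them.
ts = [4, 5, 6, 7, 8]
ys = [progress(t) for t in ts]
mt, my = sum(ts) / len(ts), sum(ys) / len(ys)
slope = sum((t - mt) * (y - my) for t, y in zip(ts, ys)) / sum((t - mt) ** 2 for t in ts)
intercept = my - slope * mt

extrapolated = slope * 30 + intercept   # the hype forecast, far in the future
actual = progress(30)                   # where the S-curve really ends up
```

The straight-line forecast sails past the saturation ceiling that the S-curve can never exceed, which is exactly the “overshoot” pattern of past AI summers.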
This is a mistake AI has made twice before: first with over-enthusiasm at the 1956 Dartmouth workshop where the term “Artificial Intelligence” was coined, and again during the 1980s, when nations worldwide invested heavily in AI, including $750 million in Japan’s Fifth Generation initiative. Both waves were over-hyped and ultimately went bust, leading to two subsequent “AI Winters”.
The pattern is repeating today. Each past AI “Summer” was characterized by some breakthrough that produced new capabilities that felt more “intelligent” than previous systems, and that’s legitimately exciting. Each AI Winter, in contrast, was created by our failure to see the brick walls blocking the path to the next level, and by our blithe extrapolation of today’s success past and through unsolved problems. It looks like this:
This picture shows a series of “hype cycles” (I’ve written before about the dynamics that cause them). In each one, AI’s unique ability to capture our imagination combines with the innate complexity of AI technology to produce “overshoot”: a careless thinker (and a clickbait-incentivized media) simply extrapolates the success of the last few years into future success. Yet baked into each success “bump” is its own brick wall of limitation: one for symbolic AI, another for subsymbolic AI. For example, it is widely held that the first wave died on the limitations of single-layer Perceptrons, and the second on the limits of logic for capturing intelligence, plus the lack of algorithms for training deep networks.
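That Perceptron brick wall is easy to demonstrate: a single-layer perceptron can learn a linearly separable function like AND, but no setting of its weights can ever classify XOR perfectly. A minimal sketch in plain Python (hyperparameters are illustrative):

```python
# Single-layer perceptron: converges on linearly separable data (AND),
# but cannot represent XOR no matter how long it trains.

def train_perceptron(X, y, epochs=50, lr=0.1):
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0
            err = yi - pred            # classic perceptron learning rule
            w = [wj + lr * err * xj for wj, xj in zip(w, xi)]
            b += lr * err
    return w, b

def accuracy(w, b, X, y):
    preds = [1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0 for xi in X]
    return sum(p == yi for p, yi in zip(preds, y)) / len(y)

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
and_acc = accuracy(*train_perceptron(X, [0, 0, 0, 1]), X, [0, 0, 0, 1])  # separable
xor_acc = accuracy(*train_perceptron(X, [0, 1, 1, 0]), X, [0, 1, 1, 0])  # not separable
```

AND trains to perfect accuracy; XOR never can, because no single line separates its classes. That hard limit, not a lack of enthusiasm, is what the first wave ran into.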
The next wave
Today, we’re on the tail end of the third (Deep Learning/NLP) wave. It is fundamentally limited by its focus on data as the only source of intelligence: it ignores the need for AI to embrace how actions lead to outcomes, treats human-in-the-loop use cases as second-class citizens next to fully autonomous ones, largely ignores the interface between the UI and subject-matter experts, and prioritizes new algorithms over applied AI. The good news is that, as with all paradigm shifts, current thinking is crumbling and a valuable new paradigm is emerging to take its place.
But here’s the thing: we can’t afford another AI winter. We’ve got to get straight to next summer because the climate can’t wait. Nor can the pandemic. Nor can the thousands of use cases in health care, finance, telecommunications, and more where AI can lead to a better world, if only it would stop failing.
And that, my friends, is the danger of AGI: it distracts from these noble pursuits. But it doesn’t have to. AGI is at least one paradigm shift away, for now. And we need the power of AI to move beyond marketing and work hand in hand with us to solve the really hard and urgent problems we face: invisible, exponential, multi-link, and multidisciplinary.
The next AI summer is about Human-in-the-Loop AI, UI, context, and knock-on effects
Unique to this next wave is the need to take the UI to our AI more seriously, which includes better understanding the context an AI system serves within an organization and its consequences both inside and outside that organization. I know: as technologists it’s tempting to avoid the people and process sides of the people/process/technology three-legged stool. But no technology succeeds without serious treatment of how these three elements interact; to think otherwise is to see AI, incorrectly, as a “silver bullet”. That said, we still need AI, because it knows how to make sense of data. Data matters; it’s just the chocolate chips, not the chocolate cake.
AI research friends: instead of new algorithms, we need general-purpose methods that help humans work together with AI as it solves the big problems. Poverty, conflict, democracy, climate, inequality: these are, at their core, problems of systems: complex, dynamic, and massively interdisciplinary. To solve Covid-19, for instance, we need vaccines, but also knowledge of behavior, aerosolized particles, government policy, building shape, air-conditioning systems, human movement patterns, and much more. Solving such problems requires new methods for mapping and simulation, so we can understand how the decisions we make and the actions we take in one domain ripple through all of the others.
Because that’s how the world is shaped: unlike our disciplines today, it places no artificial barriers between my breath, your movement, the air in the room we share, and my opinion of a mask mandate. These elements are interconnected; their separation is an artifact.
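As a toy illustration of that coupling (every parameter here is invented for illustration, not an epidemiological estimate), here is a minimal discrete-time SIR-style sketch in which a behavioral variable, mask compliance, propagates into an epidemic outcome:

```python
def fraction_ever_infected(mask_compliance, steps=500):
    # Toy discrete-time SIR model. beta0, gamma, and the assumed 50% maximum
    # mask effect are illustrative numbers only.
    s, i, r = 0.99, 0.01, 0.0          # susceptible, infected, recovered
    beta0, gamma = 0.4, 0.1
    beta = beta0 * (1 - 0.5 * mask_compliance)  # behavior alters transmission
    for _ in range(steps):
        new_inf = beta * s * i          # new infections this step
        new_rec = gamma * i             # recoveries this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return i + r                        # fraction ever infected

no_masks = fraction_ever_infected(0.0)
high_compliance = fraction_ever_infected(0.8)
```

Even in this crude sketch, a choice made in one domain (behavior) measurably changes the outcome in another (epidemic size); the real versions of these models couple many more domains, which is exactly why they need human systems knowledge, not just data.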
To understand how they connect, we need human knowledge about how these systems work, and we’re going to need to get that knowledge into AI (systems knowledge is not, by and large, contained in data). This is what mouse inventor Doug Engelbart called the intelligence amplification question, and it is, again, a UI challenge, not one requiring a new AI algorithm. The good news is that smart AI people see this future and are shifting in this direction; see, for example, today’s announcement of Peter Norvig moving to Stanford’s human-centered AI group.
So rest assured that we’re a long way from AGI. And, better still, AI, managed right, can do us a world of good. Let’s get straight to the next summer, please. Here’s how.