What should we do about “bad” AI?

OpenAI has built an AI that it suspects may be too dangerous to release to the public. What should it do?

Sorry to say, but the technology train is hard to stop. Whatever OpenAI decides, we need to recognize that AI, like all powerful technologies, inevitably generates unintended negative consequences: this is a natural phase in any technology's evolution toward mass adoption. So we take protective measures: cars kill people, so we invent seat belts; factory machines are dangerous, so we stand up OSHA.

AI stands out in two ways: 1) compared to, say, a cotton gin, it is hard for the public to understand; and 2) it is moving incredibly fast.

For these reasons, we are challenged like never before to discover and invent mechanisms that maximize AI's benefits while minimizing its downsides. Interestingly, AI (when combined with decision intelligence, or DI) can help us with this process by giving us more accurate models than ever before of likely cause-and-effect links. We need to turn the technology back on itself, using it to understand the chains of events that technology sets in motion more powerfully than ever before.

Key best practices include:

  1. Modeling “soft” links, such as the degree to which innovation is inhibited in societies that embrace facial recognition, as well as its positive security benefits.
  2. Considering multiple mechanisms: regulation, certainly, but also market forces, social “shame” contagion, and more. In many political climates today, legislation will take too long.
  3. Avoiding over-reaction: genuinely embracing, and believing in, solutions that stimulate innovation's upsides while limiting its downsides.
  4. Inviting non-AI-scientists into AI planning and technical work. Social scientists, psychologists, behaviorists, politicians, economists, and more have important knowledge about the impacts of technology. The recent announcement of an AI that might be able to distinguish “gay” from “non-gay” faces shows social blindness in this regard. A basic knowledge of the persecution of gay people in some countries worldwide, along with an understanding of how facial recognition is deployed, might have steered these researchers down a different path.
  5. Getting serious about democratizing AI: making it easier to understand. Due to its academic roots, AI's culture is deeply technical. It's as if all car salespeople were automotive engineers who assumed you couldn't drive a car without understanding how a carburetor works. That assumption is false: you can understand the basics of most AI in a few minutes.
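To make the first two practices concrete, here is a minimal sketch of what modeling “soft” cause-and-effect links might look like in code. The scenario, the link names, and every numeric strength are hypothetical placeholders chosen for illustration, not real estimates; a serious model would derive these from evidence and expert input.

```python
# Hypothetical sketch: weighing intended and unintended consequences of a
# technology decision as signed cause-and-effect link strengths.
# All numbers are illustrative placeholders, not real estimates.

links = {
    # (cause, effect): estimated strength in [-1, 1];
    # positive = beneficial effect, negative = harmful effect
    ("deploy_facial_recognition", "security_benefit"): 0.6,
    ("deploy_facial_recognition", "innovation_chill"): -0.3,   # a "soft" link
    ("deploy_facial_recognition", "persecution_risk"): -0.5,
}

def net_impact(action, links):
    """Sum the signed strengths of every modeled link from one action."""
    return sum(strength for (cause, _), strength in links.items()
               if cause == action)

# Here the modeled downsides outweigh the modeled benefit (net is negative),
# which is exactly the kind of signal this practice is meant to surface.
print(round(net_impact("deploy_facial_recognition", links), 2))
```

Even a toy model like this forces the unintended consequences onto the same ledger as the intended ones, rather than leaving them implicit.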

Bottom line: The emerging best-practice kit for AI projects should include modeling both the intended and unintended consequences of this increasingly powerful technology. This should be a standard module in data science training. Note that this practice goes considerably beyond avoiding AI bias: it is a much larger discipline of analyzing your system's ripple effects on society at large.

And we should not give up our “heads in the clouds” enthusiasm for the sake of “feet on the ground” understanding. Embracing both is the challenge before us today.


Lorien Pratt

Pratt has been delivering AI and DI solutions for her clients for over 30 years. These include the Human Genome Project, the Colorado Bureau of Investigation, the US Department of Energy, and the Administrative Office of the US Courts. Formerly a computer science professor, Pratt is a popular international speaker and has given two TEDx talks. Her Quantellia team offers AI, DI, and full-stack software solutions to clients worldwide. Previously a leading technology analyst, Pratt has also authored dozens of academic papers, co-edited the book Learning to Learn, and co-authored the Decision Engineering Primer. Her next book, Link: How Decision Intelligence Makes the Invisible Visible (Emerald Press), is in production. With media appearances on TechEmergence and GigaOm, Pratt is also listed on the Women Inventors and Innovators Mural. Pratt blogs at www.lorienpratt.com.
