Machine Learning Classic Mistakes, Best Practices, and Epic Fails: experiences from over 30 years of applied projects

My teams and I have been delivering machine learning projects for about three decades: from work on the Human Genome Project in the 1980s, through sensor fusion for hazardous waste remediation at the Department of Energy in the 1990s, to more recent work for the Administrative Office of the US Courts and the DoD, along with human resources applications, website analysis, stock price prediction, medical devices, call centers, animal language (just starting up this year), and much more: several dozen projects in all.

In that time, I’m (ironically?) proud of myself (earlier on) and my team (more recently) for having pushed the envelope hard enough to make a number of mistakes. Some of them more than once. And some we’ve started to observe in our customers (especially before we start working with them 🙂 ).

These mistakes form patterns which, if avoided, can save years from a project and millions of dollars in unnecessary cost. So this post is the first in a series sharing these experiences, with the goal of making you radically more effective. Messages to my younger self…if only I’d known them earlier on…

Some of the advice is subtle: a 1%, no-cost change in direction can mean massive savings and reduced risk.

The mother of all fails

The first “epic fail” is also a root cause that has led, in turn, to a cascade of other fails, which I’ll cover in later posts in this series. Simply put, the fail pattern consists of not recognizing that machine learning is really multiple subfields:

  1. Academic research (old (pre-2012-ish): 95%; new: 65%; I made these numbers up)
  2. Applied systems (old: 5%; new: 30%)
    • ML-centric companies (Netflix, Google, Pandora)
    • Large enterprises
    • Small / medium sized businesses
    • Governments and nonprofits
  3. ML for data scientists (old: 0%; new: 3%)
    • AWS ML / Azure ML / Nvidia DIGITS / H2O Stream: drag-and-drop data flow; MS programs in data science
  4. ML software engineering (old: 0%; new: 1%)
    • How to elicit business requirements that ML can satisfy
    • How to architect a ML solution to achieve business objectives
    • How to calculate ROI, at first and incrementally throughout the project
    • How to build models
    • How to integrate models into a production system
  5. Decision intelligence (old: 0%; new: 1%)
    • How to integrate models (prediction) into a larger decision (action->outcome) framework
    • ML for the masses (democratizing the technology)
    • Solving the wicked problems / saving us from ourselves
    • Catching bias / unintended consequences of AI systems

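To make one of those skills concrete: calculating ROI at the start and incrementally at each milestone can be as simple as tracking cumulative benefit against cumulative cost, and re-checking the ratio as the project progresses. A minimal sketch (all figures and milestone names below are hypothetical illustrations, not from any real project):

```python
# Minimal sketch of incremental ROI tracking for an ML project.
# All figures and milestone names are hypothetical.

def roi(benefit: float, cost: float) -> float:
    """Return on investment as a fraction: (benefit - cost) / cost."""
    if cost <= 0:
        raise ValueError("cost must be positive")
    return (benefit - cost) / cost

# Re-evaluate at each project milestone rather than only at the end.
milestones = [
    # (name, cumulative benefit in $, cumulative cost in $)
    ("prototype",   50_000, 120_000),
    ("pilot",      300_000, 250_000),
    ("production", 900_000, 400_000),
]

for name, benefit, cost in milestones:
    print(f"{name}: ROI = {roi(benefit, cost):+.0%}")
```

The point of the incremental check is that a negative ROI at the prototype stage is expected, but the trend across milestones tells you whether to continue investing.
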
The classic mistakes in this space all derive from not accurately assessing your situation, and so applying a “one-size-fits-all” approach. For example:

  • Hiring only academics for an applied project (you need at most one; the rest can be software engineers who are retrained for machine learning).
  • Ignoring software engineering best practices. 
  • Not understanding that there are software engineering best practices that are unique to machine learning.
  • Ignoring the decision/action context within which the machine learning system exists.
  • Assuming that data scientists are also software engineers.

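To illustrate that decision/action point: a model’s prediction only creates value through the action it drives. A minimal, hypothetical sketch of wrapping a predicted churn probability in an expected-value decision (the payoffs, rates, and names are all made up for illustration):

```python
# Hypothetical sketch: embedding a model's prediction (a churn probability)
# in the decision/action context it serves. All payoffs are illustrative.

def expected_value(p_churn: float, action: str) -> float:
    """Expected payoff of an action, given the predicted churn probability."""
    RETENTION_OFFER_COST = 20.0  # cost of the intervention
    CUSTOMER_VALUE = 200.0       # value of a retained customer
    OFFER_SUCCESS_RATE = 0.3     # chance the offer actually prevents churn
    if action == "send_offer":
        return p_churn * OFFER_SUCCESS_RATE * CUSTOMER_VALUE - RETENTION_OFFER_COST
    return 0.0  # "do_nothing" has zero incremental payoff

def decide(p_churn: float) -> str:
    """Pick the action with the higher expected value, not just the prediction."""
    actions = ["send_offer", "do_nothing"]
    return max(actions, key=lambda a: expected_value(p_churn, a))
```

The prediction alone (will this customer churn?) answers the wrong question; the decision framework asks what action maximizes the outcome, which is where the business value actually lives.
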
Machine Learning’s history in academia leads to mistakes within applied teams

The biggest error in the above category arises from the situation illustrated below:

Machine learning spent the vast majority of its history in academia. This has led those of us who build solutions to business, government, or societal problems into habits that are misleading at best, and that increase risk and cost substantially at worst. These habits are only starting to unwind today.

So the bottom line is that it’s not really about epic fails; I’m really writing about how to be an epic hero on innovative AI projects.

Subscribe here to further posts in this series to learn more.
