Are Machine Learning and Big Data all about just advertising and marketing?

When Mark and I opened the Quantellia office in Mountain View last year, I noticed two things as I reconnected with my friends in the Machine Learning community. First, it seemed like the vast majority of my ML friends had landed in the Valley, and many were working, or had worked, at Google. In contrast, when I started my PhD work in 1986, there was a plethora of opportunities: Bellcore’s Advanced Research Group, Xerox PARC, SRI, IBM’s Almaden and Yorktown Heights, NEC, AT&T Area 11, Microsoft Research, US West’s Advanced Technology Group in Boulder, and more all beckoned to me as tempting destinations. And the use cases were all over the map as well, from disease diagnosis, to speech recognition, to the hazardous waste recognition work I ended up doing at the Colorado School of Mines, and much more.

Over the course of my PhD program (1986-1993) at Rutgers, however, these opportunities steadily diminished, such that by the time I was on the job market in 1992, the best prospect was a faculty position, which I gladly accepted.

My second observation last year was that the use cases seemed to have narrowed to just a few, very lucrative ones, and that many other applications of this great technology had been left by the wayside. In what appeared to be a bit of a winner-take-all effect (and with the usual disclaimers about generalizing from random personal observations), I observed that marketing, advertising, and customer analysis were winning the day at the expense of other application areas.

I looked into this decrease in the “species diversity” of machine learning use cases a bit more thoroughly last year, in preparation for my “1,000s of use cases” talk at MLconf in November.

I found a survey from Tibco Jaspersoft, summarized in the graphic below. It shows that Big Data use cases are predominantly in customer analytics and experience analytics, with smaller numbers covering threat, risk, and regulatory compliance. With brand sentiment and product placement optimization also making the list, and with Google being a market dominator, you might well ask “what about the other use cases?”

I showed this graphic at the MLconf talk, too, and asked for a show of hands for which use cases the participants in the room (about 300) were working on. The range was even narrower than this list.

So, a bit of objective evidence for what I thought I was seeing. Do you agree? I’d love your thoughts about this in the comments below. (An aside: I found this article recently, showing that even in marketing, there’s still a lot of non-evidence-based thinking going on.)

So the question to ask is “what’s going on?” A second graphic from the same survey, shown below, points toward some answers.

Here, it’s clear that organizations choose not to use Big Data for a variety of reasons, including their emphasis on relational data, a lack of understanding, a feeling that it doesn’t apply to their situation, or a lack of business justification.

So, at the risk of making a gross generalization (and at the risk of conflating Big Data with Machine Learning, which are not identical), it appears that the last decade has been characterized by squeezing the maximum value from a relatively small set of use cases.

In the last few months, however, the tide seems to be turning. There’s an increasing sense of the underutilized potential of this technology. Dato’s Guestrin agrees, for instance, with my personal observations, saying that “Most of the big machine learning use cases today revolve around familiar stories, such as building recommender systems, targeted advertising, fraud detection, and financial modeling. But down the road, all sorts of possibilities will open up.” (emphasis mine)

So the other day I was watching Steve Jurvetson speak on deep learning, identifying it as “the single most important technology trend”. Take a look at the slide from his talk at the top of this article.  Says Jurvetson, “Any product as a consumer that excites you over the next years, and makes you think that was magical, how did they do that, it was probably based on [deep learning]”.

My interpretation: a decades-old logjam is breaking, and we are at the cusp of an explosion of new opportunities to use machine learning technology. If data is a “sixth sense”, then machine learning is our “perceptual engine” that lets us interpret what would otherwise be a tsunami of information, in many useful ways. With decision intelligence, we formalize this by providing a structured methodology for translating new problems into a form where machine learning can be of value.

So stay tuned; it’s getting interesting. And if you’d like to find out more, take a look at my free eBook on Decision Intelligence, which you can receive by subscribing to this blog.


Lorien Pratt

Pratt has been delivering AI and DI solutions for her clients for over 30 years. These include the Human Genome Project, the Colorado Bureau of Investigation, the US Department of Energy, and the Administrative Office of the US Courts. Formerly a computer science professor, Pratt is a popular international speaker and has given two TEDx talks. Her Quantellia team offers AI, DI, and full-stack software solutions to clients worldwide. Previously a leading technology analyst, Pratt has also authored dozens of academic papers, co-edited the book Learning to Learn, and co-authored the Decision Engineering Primer. Her next book, Link: How Decision Intelligence Makes the Invisible Visible (Emerald Press), is in production. With media appearances such as on TechEmergence and GigaOm, Pratt is also listed on the Women Inventors and Innovators Mural. Pratt blogs at www.lorienpratt.com.
