Time for big data and machine learning to take a bold step forward…into 1946
We’ve been lost in data for half a century. Time to move on, friends.
Robert McNamara, Secretary of Defense from 1961 through 1968, came to the post from Ford Motor Company, where he had ultimately been appointed president. His success in both roles came from bringing the discipline of statistical control he had learned in the Air Force into Ford, where he and his fellow “whiz kids” turned the company around.
What is statistical control? It’s the practice of using data to manage your systems. As shown in the image above, data informs your understanding of those systems, which in turn guides your decisions.
Back in the 1950s, as my colleague Mark Zangari showed in his talk last spring at USF, anybody who wanted to use data had to overcome some pretty big challenges, as shown in the picture below (I especially like the data miner).
As you know, we’ve since lived through a revolution that has completely changed the availability of data. We’ve solved that part of the problem, and we now have mature tools for collecting and storing it.
But isn’t it time to move on? How can we build software that starts with the data and then uses it to actually support better decisions? Take a look at the picture at the top of this article. It’s time to move forward into the original statistical control vision: to use computers to help with the next step in the chain.
Surprisingly, no software exists today to help us with that next step of using data, even though the approach was well understood in the 1940s.
“Standard practice is to upload data to our brains, then abandon computer help for complex decisions.”
This is why data visualization is so important. But wait a second: shouldn’t the computer keep helping us beyond the charts and graphs, specifically by helping us understand the outcomes of the decisions we make with that data?