Thinking of Starting an Enterprise AI Project?

Are you thinking, “I’ve heard that AI can provide some value to my organization,” but wondering where to start? You might be a CIO or other technical leader looking to avoid being left in the dust behind this important technology. Based on our experience delivering dozens of AI projects (thousands of models) over 30 years, there are three approaches you can take, each with its own pros, cons, classic mistakes, and best practices.

Buy

What it is: In this strategy, you buy an AI solution from a vendor. It might be packaged within an application, or you might obtain AI value through an API, perhaps to a microservice or other SaaS offering. There are tens of thousands of AI vendors out there, and this space is growing at double-digit rates per year. So you may have been inundated with sales calls, and it’s tempting to believe that these vendors can help you. Often they can. My favorite vendor in this space is SAP, due to its enterprise focus and hundreds of deployed ML use cases (they’re also one of the nicest companies I’ve ever worked with, by a big margin).

Biggest upsides: Risk and cost reduction, if the application matches your needs, compared to “rolling your own”. Your vendor has made an investment in building their system for others, and by doing so has learned important lessons on their own nickel, not yours.

Biggest downsides: The AI system is general-purpose, not built with your specific data and situation in mind (although increasingly, vendors are using transfer learning to overcome this obstacle). You are also dependent on the vendor for what may be a mission-critical system, so if they take a different direction or drop support for your use case, you’re back to square one.

Classic mistake: Automating too much, instead of asking “what part should be automated, and what part should I keep manual?”

Best practices: Apply traditional vendor-management discipline: evaluate the product, the company, and fit for purpose. Then use Decision Intelligence to determine how the AI fits into an action you will take that leads to an outcome. Ask: “Do I really need AI for this, or is there another, possibly simpler, solution?” Insist that your vendor speak in your language, not in the language of data science. If you don’t understand what they’re saying, stop them and ask them to explain. Don’t confuse someone who is smart about tech with someone who knows how their tech fits into your business. Ask questions until you understand. Don’t give up.

Build

What it is: In this strategy, you stand up a data science team to use your own internal data to build a model that is exclusive to your needs.

Biggest upsides: Maximum control of the project. Competitive differentiation: nobody else in your industry has built this system. Agility and extensibility.

Biggest downsides: Risk and cost of project delay or failure. Cost and scarcity of applied AI experts. Managing an AI project requires special skills, and these applied AI skills are fundamentally different from what is taught in data science programs. Do you know how to map your system’s performance to your business value? Do you know what data to cleanse, and which data governance efforts are wasted? Do you know how to connect your AI model to actions you might take and business outcomes you wish to achieve? Answering these questions is at least as important as, if not more so than, obtaining a good ROC curve.

Classic mistake: Spending too much time cleansing non-predictive data, and too little time cleansing predictive data. Not knowing how to tell the difference.

Best practices: Hire a team of software engineers, and let your machine learning expert be a contractor, like my team here at Quantellia. Start with the decision, not the data. Use data governance best practices specifically designed for AI projects (operational and analytical data cleansing are very different things). Start with a Proof-of-Concept, and use it to prove the potential value of your solution. Build an ROI model early, and update it continuously. Budget about 10x the cost of the POC for a production system. Build an initial model and incrementally improve it, rather than aiming too high for the first deliverable. Work incrementally. If your team doesn’t have anyone who has deployed multiple AI systems at scale, hire an outside consultant for periodic technical reviews. Don’t assume that someone who knows the “inside of the box” (how machine learning works) necessarily also understands ML deployment best practices.
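An early ROI model does not need to be elaborate. The sketch below is a toy illustration only, with invented numbers; it bakes in the rule of thumb above that a production system costs roughly 10x the POC, and discounts the projected value by an assumed adoption rate:

```python
def project_roi(poc_cost, annual_value, adoption_rate, years=3):
    """Toy ROI projection for an AI project.

    Assumptions (all hypothetical, to be replaced with your own figures):
    - production build costs ~10x the POC (rule of thumb from the text)
    - annual value scales with an expected adoption rate
    """
    production_cost = 10 * poc_cost
    total_cost = poc_cost + production_cost
    total_value = annual_value * adoption_rate * years
    return (total_value - total_cost) / total_cost

# Example: $50k POC, $400k/year projected value, 60% adoption over 3 years.
roi = project_roi(poc_cost=50_000, annual_value=400_000, adoption_rate=0.6)
```

The point is not the arithmetic but the discipline: revisit the inputs after every milestone, and kill or reshape the project when the model stops clearing your hurdle rate.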

Hybrid

What it is: Using a combination of internal and vendor/consultant resources to build a custom AI solution.

Biggest upside: Potentially this is the best of both worlds: you retain project control, competitiveness, and adaptability, while reducing risk and cost by engaging a vendor who has supported similar projects before.

Biggest downsides: Requires close collaboration, trust, and good communication with your vendor. You are, essentially, on the same team working towards a shared goal. Keeping this interface working smoothly requires a dedicated resource on your end.

Classic mistake: Usually in this model, you are in charge of preparing the data, and the vendor handles building the AI system based on that data. The classic mistake here (which I must underline because I have seen it dozens and dozens of times) is to spend too much time on QA of the wrong data while spending too little time on QA of the right data. The result: you repeatedly send the vendor data that has not been adequately QA’d.

Best practices: Find a vendor whose people you like, who have built many projects before, and who are experienced in both data management and AI (or, even better, AI-specific data preparation). Meet with them in person, at project start, to ensure that everyone understands the decision that the AI will support, and the model performance needed to support that decision. Also ensure everyone understands data requirements. Document those requirements, and keep that document up to date. Assign a data QA person on your side who is responsible for ensuring that the vendor isn’t wasting precious time building models based on incorrect data. Develop automated data test tools (or ask your vendor to). Consider people, process, technology, actions, and outcomes in your planning.
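Automated data tests can be as simple as checking each delivery batch against a written spec before it leaves your building. The sketch below is a minimal, hypothetical example (the column names, spec format, and values are invented for illustration, not a real library API):

```python
def run_data_checks(rows, spec):
    """Run simple automated QA checks against a data-delivery spec.

    `spec` maps column name -> (expected_type, allow_null, (min, max) or None).
    Returns a list of human-readable failures; an empty list means the
    batch is clean enough to send to the vendor.
    """
    failures = []
    for i, row in enumerate(rows):
        for col, (ctype, allow_null, bounds) in spec.items():
            value = row.get(col)
            if value is None:
                if not allow_null:
                    failures.append(f"row {i}: {col} is null")
                continue
            if not isinstance(value, ctype):
                failures.append(f"row {i}: {col} has type {type(value).__name__}")
                continue
            if bounds and not (bounds[0] <= value <= bounds[1]):
                failures.append(f"row {i}: {col}={value} outside {bounds}")
    return failures

# Hypothetical delivery spec for a churn dataset.
spec = {
    "customer_age":  (int,   False, (18, 120)),
    "monthly_spend": (float, True,  (0.0, 1e6)),
}
batch = [
    {"customer_age": 34,  "monthly_spend": 120.5},
    {"customer_age": 250, "monthly_spend": None},   # age out of range
]
problems = run_data_checks(batch, spec)
```

Wiring a check like this into the hand-off pipeline gives your data QA person a concrete gate, instead of relying on the vendor to discover bad data after model training has already burned a cycle.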

Conclusion

As you get started with your first enterprise AI project, there are several strategic choices you can make, each with its pros and cons, as shown here. Standing above all of these is a best practice shared with me by many CIOs: “just get started”. AI can seem daunting, but it doesn’t have to be: a simple POC can go a long way toward helping you determine whether this direction can provide value for you and your organization. For many, this value has been massive. For others, projects have faltered because of a lack of appreciation for the success factors that aren’t taught in data science schools. I’m happy to help if you like; drop me a line!
