Canvass is an artificial intelligence (AI) and machine learning (ML) software firm, so naturally we believe in the power of ML. However, there’s a discouraging statistic we can’t overlook. According to Gartner, 85% of ML projects fail. Worse yet, the research company predicts that this trend will continue through 2022.
Does this point to some weakness in ML itself? No, it points to weaknesses in the way it’s applied to projects. There are many predictable ways that ML projects fail, which can be avoided with proper expertise and caution. We’ve experienced this personally; as we work with different companies, we notice the same patterns occurring over and over again.
The mistakes that lead to failed ML projects are all easy to make. Let’s run through them, so you can make sure that your ML project avoids the most common pitfalls.
Most companies are engaged in some form of digital transformation, which means they’re generating data. Companies may feel an impulse to use that data for ML projects. This is triggered by the incorrect perception that ML can pull insights from any information you throw at it.
Machine learning can do remarkable things with data, but it has to be ML-ready or “clean” data. “Volume of data isn’t everything,” says Dudon Wai, product manager at Canvass. “It can be garbage in, garbage out. Just because you have a lot of it, doesn’t mean it’s useful.”
And there are many ways that data can fail this test. For example, the data might not be representative of your everyday operations: a sensor on a manufacturing asset might keep recording while the machine is off, so the data you collect is contaminated by those periods of inactivity.
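To make that concrete, here is a minimal sketch of the kind of screening step we’re describing, written in Python with pandas. The file and column names (“sensor_readings.csv”, “machine_on”, “temperature_c”) are purely illustrative assumptions; your own historian tags and valid ranges will differ.

```python
# A minimal sketch of screening raw sensor data before modelling.
# File and column names here are hypothetical placeholders.
import pandas as pd

readings = pd.read_csv("sensor_readings.csv", parse_dates=["timestamp"])

# Drop samples captured while the asset was idle, so downtime
# doesn't masquerade as normal operating behaviour.
operating = readings[readings["machine_on"] == 1]

# Flag obviously unusable values (out-of-range or missing readings).
in_range = operating["temperature_c"].between(-40, 600)
clean = operating[in_range].dropna()

print(f"Kept {len(clean)} of {len(readings)} raw samples")
```

Even a simple pass like this often reveals how much of a “large” dataset is actually usable for training.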
In addition, the data needs to be multifaceted enough that ML can detect meaningful patterns in it. Perhaps you’d like to use ML to optimize your turbines’ energy consumption and reduce your energy costs and greenhouse gas emissions. This is one of the top three use cases our customers pursue, since energy represents almost 20% of their output costs. To understand your turbines’ thermal efficiency, you’d need to identify the optimal control parameters that minimize total fuel consumption. But if you build your ML model on only a few set data points, the results won’t hold up. Mastering a complex system while observing only a few of its elements isn’t realistic.
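As a rough illustration of why a handful of set points isn’t enough, the sketch below compares a model trained on a single parameter against one trained on a wider set of control parameters. The tags and file name are hypothetical assumptions, and this is an illustrative exercise rather than a description of our actual pipeline.

```python
# Illustrative only: hypothetical turbine tags, not a production pipeline.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

data = pd.read_csv("turbine_history.csv")

narrow = ["load_mw"]                                      # a single set point
rich = ["load_mw", "ambient_temp_c", "inlet_pressure",     # a wider set of
        "igv_angle", "fuel_valve_pos"]                     # control parameters
target = data["fuel_flow_kg_h"]

for name, cols in [("narrow", narrow), ("rich", rich)]:
    model = GradientBoostingRegressor()
    score = cross_val_score(model, data[cols], target, cv=5, scoring="r2").mean()
    print(f"{name} feature set: cross-validated R^2 = {score:.2f}")
```

In practice, the richer feature set is usually what lets the model capture how the control parameters actually drive fuel consumption.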
Knowing whether your data is ready is an art in and of itself. Yet, your data needs to become ML-ready before you proceed with any ML project.
Machine learning is exciting technology. This leads some companies to embrace the idea that they’ll do something with ML before knowing what that something is. Companies examine current business objectives or recurring issues and assume that ML should be able to take care of them. “Because it’s new and there’s a lot of hype around it, people are trying to jump on the bandwagon,” says Wai.
However, ML isn’t good for absolutely everything. Among the use cases for ML, there’s a variety of difficulty levels. Some ML business wins can happen after a few weeks of work — others will take longer. Some possible ML applications have never been tried, and, as such, should be regarded as experiments. In certain cases, a problem that might be solved with ML could be solved more cheaply in another way.
It’s important to lay the groundwork to determine the business or operations challenge you are looking to solve. One of the key reasons AI projects get trapped in pilot purgatory is that the results don’t warrant the time and effort required to scale further. When selecting an AI use case, determine whether you can answer these questions:
By going through this process, you should be able to understand if machine learning is the best way to approach your pressing issues. Often, it will be. But if you throw ML at an arbitrarily chosen problem, there’s no guarantee that it will be worth the investment.
To some extent, machine learning has become democratized. There are many more ML tools than there were even a few years ago, and data science knowledge has propagated. This means that a skilled data scientist can take on a reasonably sophisticated ML project on their laptop.
However, having your data science team work on an AI project in isolation can lead your company down the longest route to success. Unless you’re experienced in applying ML to operations, you can run into unexpected snags. It’s imperative that the domain experts, your process engineers or plant operators, are not sidelined in the process, because they understand its intricacies and the context of the related data. Unfortunately, we’ve seen companies get knee-deep into a project before bringing the right people to the table. At that point, the project has to be abandoned, or a consultant has to be called in. “A lot of companies fall into this trap of treating it as a data science project instead of an operations project,” Wai states.
To review, there are three common machine learning-related issues that we consistently encounter: data that isn’t ML-ready, use cases chosen without a clear business problem, and projects run by data scientists in isolation from the domain experts who understand the operation.
The answer to these problems is to do the opposite at every stage. Understand whether your data is ML-ready. Once it is, apply that data to ML use cases that produce a real impact for your enterprise. And be sure that you have the specialized knowledge required to carry out the project.
If you do this correctly, your machine learning project can avoid the 85% failure rate and can be part of the successful 15%. Also, once you get one successful project off the ground, it becomes much easier to expand, doing more and more with ML.
What does that look like? One of our clients, a world leader in the geothermal energy field, saves hundreds of thousands of dollars each year by optimizing asset utilization, reducing unplanned downtime, and preventing energy loss. And this isn’t an unusual result for us.
At Canvass, we always start with the first step: inspecting a client’s data to see how close they are to ML readiness. We’d be happy to do this for you.