In this first installment of our blog series, Building a Reliable and Scalable Predictive Maintenance Strategy, we'll introduce two common approaches to capturing the intelligence behind predictive maintenance.
Unplanned downtime poses a significant challenge to efficient service, leading to costly and unsustainable operations. Predictive maintenance - offering actionable insights to anticipate and address component issues before they cause downtime - has long been a sought-after solution. However, the perceived mystery behind data science and AI can deter many service organizations from pursuing a predictive maintenance solution at all - and even those that do embrace such initiatives often fail to realize their full potential.
How can you move past AI's difficult reputation and harness it to gain real, tangible, and scalable results? The answer perhaps lies in how we capture the "intelligence."
Two approaches emerge:
A data-driven approach relies on historical performance and failure data, engineering specs, and real-time sensor data to create condition-based alarms. Without domain knowledge or an authoritative reference for normal behavior, it's hard to ascertain what is normal and what isn't - often leading to a high rate of false alarms that further erode trust in AI and machine learning models.
The other approach relies on domain experts and engineering principles to analyze the data. Its advantage is that recommendations are guided by a clear understanding of what "good" looks like. Its challenge is scale: it is constrained by the availability and capacity of those domain experts.
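To make the data-driven approach concrete, here is a minimal sketch of a purely statistical condition-based alarm: flag any reading that strays too far from a baseline learned from historical data. The function name, the sample temperatures, and the 3-sigma threshold are all hypothetical illustrations, not a description of any particular product - and the sketch also hints at the false-alarm problem, since the rule knows nothing about seasonality, load changes, or maintenance events.

```python
import statistics

def condition_alarm(history, reading, z_threshold=3.0):
    """Flag a sensor reading that deviates from historical behavior.

    A purely data-driven rule: anything beyond z_threshold standard
    deviations from the historical mean raises an alarm. Without
    domain knowledge of what "normal" looks like, such rules are
    prone to false alarms when operating conditions shift.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    z = (reading - mean) / stdev
    return abs(z) > z_threshold

# Hypothetical bearing temperatures (deg C) under normal operation
history = [62.1, 61.8, 62.5, 63.0, 62.2, 61.9, 62.7, 62.4]

condition_alarm(history, 62.9)  # within the normal band -> no alarm
condition_alarm(history, 75.0)  # far outside the band  -> alarm
```

Note that a legitimate change in operating conditions (say, a higher ambient temperature) would trip this alarm just as readily as a real fault - which is exactly the gap that domain expertise fills in the second approach.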
Which approach do you follow?
What are some successes or challenges that you've faced with your preferred approach?
In our next installment, we'll share the approach WindESCo is taking to tackle asset health monitoring.