
The foray of GE (Predix), Siemens (MindSphere) and SAP (HANA) into the burgeoning machine learning predictive maintenance category has created some confusion in the marketplace. While many analysts claim that it is too early to forecast who will eventually dominate the market, industrial plants with a near-term time horizon do not have the luxury of waiting on the sidelines before selecting a predictive maintenance solution.

This article provides guidelines on several critical questions to consider when evaluating big data machine learning solutions.

Let’s start with the commercially available Supervised machine learning solutions for industrial predictive maintenance. A Supervised machine learning algorithm needs to create a virtual copy of the physical machinery and then learn, or simulate, the system’s behavior. In this way, deviations from expected system behavior are detected and used to predict machine failure.
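To make the idea concrete, the sketch below (in Python, using scikit-learn; the sensor names and numbers are purely illustrative and not drawn from any vendor's product) shows the residual-based logic behind this approach: a model is trained on healthy operating data to predict how the machine should behave, and a measurement that deviates too far from that prediction is flagged.

```python
# Minimal sketch: residual-based deviation detection for a single asset.
# Assumes a historical dataset of "healthy" operation with operating conditions
# (load, ambient temperature) and the resulting bearing temperature.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data: columns = [load_pct, ambient_temp_C]
X_train = np.array([[60, 20], [70, 22], [80, 25], [90, 27], [95, 30]], dtype=float)
y_train = np.array([55.0, 58.0, 62.0, 66.0, 70.0])  # observed bearing temperature (C)

# The "virtual copy": a model of how the machine is expected to behave.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

def check_deviation(load_pct, ambient_temp_c, measured_temp_c, threshold_c=5.0):
    """Flag a potential fault if the measured value deviates from the expected one."""
    expected = model.predict([[load_pct, ambient_temp_c]])[0]
    residual = measured_temp_c - expected
    return residual > threshold_c, expected, residual

alert, expected, residual = check_deviation(75, 24, 71.0)
print(f"expected={expected:.1f}C residual={residual:.1f}C alert={alert}")
```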

Selecting a Supervised machine learning based asset monitoring vendor is a complex process. If implemented correctly, there is potential for significant improvement in production yields.

However, there is a corresponding risk of over-investing in the wrong solution, with unintended consequences that we will explore below.

The four questions to ask before committing to a big data machine/asset monitoring vendor:

1) Is the solution compatible with your existing (and planned) asset infrastructure?

A typical industrial plant purchases its machinery (compressors, pumps, motors, turbines, generators) from multiple OEMs. The first question one needs to ask is whether the machine learning solution is agnostic to equipment and sensor type.

GE has publicly claimed that the Predix Cloud can connect with any equipment, but it is not clear whether Predix-based applications will achieve the same level of performance with other OEMs’ machinery as they do with GE’s. When evaluating a machine learning vendor, one needs to understand any limitations with respect to its ability to learn different asset types from a range of OEM vendors.

2) What are the implications for a long-term commitment to the vendor?

When purchasing a big data machine learning solution, the industrial plant should understand the vendor’s product roadmap. Why? Because machine learning is an expensive investment that cannot be easily switched out. The problem when dealing with emerging technologies is that you may have to accept a fluid product roadmap until “work-in-progress” issues are resolved.

Several respected analysts covering the industry are taking a wait-and-see approach. In some cases, there is skepticism about the maturity of vendors’ solution offerings. For instance, Isaac Brown, a Lux Research analyst, released a somewhat controversial report stating that “Predix is not as fully developed as GE represents it to be, has minimal market penetration, and has not been battle-tested at scale.” Note also that GE had originally planned to build its own large Predix cloud, but has since backpedaled and is now relying on Microsoft Azure cloud infrastructure.

Before making a commitment to a vendor, explore the potential risks, benefits, and implications of an extended agreement with a machine learning solution.

3) Do you understand the vendor’s business model?

There is a good reason why the big data machine learning / Industrial IoT category has attracted vast amounts of investment. According to McKinsey, the potential economic impact of IoT could reach $11 trillion by 2025. McKinsey estimates that factories will capture up to $3.7 trillion of that value, from areas such as operations management and predictive maintenance. As technology companies vie for a slice of this market, there will be different ways for them to monetize the opportunity.

Whether the vendor is building a technology platform based on an annuity-based payment structure (think Microsoft and the enterprise desktop OS) or a service-based business model (think IBM Global Services), it is important to understand the financial implications of this model and how it impacts your bottom line. In the case of Predix, GE is positioning Predix as the cloud-based platform for the Industrial Internet and evangelizing this platform to create an entire technology ecosystem of third-party solutions. This is an added complexity that requires further investigation on the part of industrial plant owners.

4) Is Supervised machine learning scalable throughout your facility?

Perhaps the most challenging issue facing supervised big data machine learning is the complexity associated with setting up a “digital twin.” In order for Supervised learning to occur, the vendor needs to build a digital replica or clone of the physical factory machine.

First, this requires access to the actual blueprints of the physical plant – a resource-intensive process. The vendor dedicates billable consultants to oversee the learning process, and the plant needs to dedicate internal resources as well. Plant engineering and maintenance staff play an integral role in the deployment of a Supervised machine learning solution. Since each virtual clone needs to be a completely accurate replica of the physical machine, this adds a costly labor overhead burden to the industrial plant.

Second, the development of a Supervised machine learning model is time-consuming and iterative. Long system learning cycles, manual model re-calibration, and Sisyphean model monitoring form the basis of Supervised machine learning.

Most importantly, each piece of customer equipment requires a unique machine learning process. In parallel to building the virtual replica of the asset, the analytics model is developed and deployed. For the machine to learn the asset, iterative model development and monitoring are required until the model has acquired the system knowledge. The model is then fed into the analytics application, which provides a further layer of iterative feedback to recalibrate the model.
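The sketch below (a hypothetical Python fragment, not any vendor's actual workflow) illustrates this feedback loop in its simplest form: prediction error on incoming data batches is monitored, and when it drifts past a threshold the model is re-fitted; in practice, this recurring step is what consumes engineering and consulting hours.

```python
# Minimal sketch of the iterative monitor-and-recalibrate loop described above.
# Assumes a `model` with fit/predict methods (e.g., the regressor from the earlier
# sketch) and a stream of (inputs, measured_output) batches from the plant historian.
import numpy as np

def monitoring_loop(model, batches, drift_threshold=3.0):
    """Re-fit the model whenever its average error on a new batch exceeds the threshold."""
    history_X, history_y = [], []
    for X_batch, y_batch in batches:
        residuals = np.abs(model.predict(X_batch) - y_batch)
        history_X.append(X_batch)
        history_y.append(y_batch)
        if residuals.mean() > drift_threshold:
            # In practice this is a manual step: an engineer reviews the drift,
            # checks the asset, and approves re-training on the accumulated history.
            model.fit(np.vstack(history_X), np.concatenate(history_y))
    return model
```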

The Unsupervised Machine Learning Alternative

The application of Unsupervised machine learning algorithms to predictive asset management is a relatively recent phenomenon. The Unsupervised approach allows the algorithm to learn using only data inputs; it works without the need to match inputs with corresponding human-provided labels. In the Unsupervised model, the computer finds the structure of relationships between inputs (patterns). For example, the computer detects patterns of anomalous behavior across sensors. The algorithm analyzes data from sensors that are directly related to each other, such as the temperature, pressure, and vibration sensors controlling a process, as well as from other sensors in the industrial plant.
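As an illustration, the following Python sketch (using scikit-learn's IsolationForest on simulated, purely hypothetical sensor data) learns the normal correlations between temperature, pressure, and vibration from unlabeled history, then flags a reading whose individual values look plausible but whose joint pattern does not.

```python
# Minimal sketch of the Unsupervised approach: learn normal sensor correlations
# directly from unlabeled data and flag readings that break the learned pattern.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Unlabeled history: temperature, pressure and vibration move together under normal load.
temp = rng.normal(60, 2, 1000)
pressure = 0.5 * temp + rng.normal(0, 0.5, 1000)
vibration = 0.1 * temp + rng.normal(0, 0.2, 1000)
X = np.column_stack([temp, pressure, vibration])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)

# A reading where each sensor is individually plausible but the joint pattern is broken.
suspect = np.array([[61.0, 25.0, 9.5]])
print(detector.predict(suspect))  # -1 means the reading is flagged as anomalous
```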

What does this mean for predictive asset maintenance?

The most significant difference between the Supervised and Unsupervised approaches is that an Unsupervised machine learning model is sensor, vendor, asset age, and machine agnostic. The Artificial Intelligence algorithm is automatically trained to identify anomalies in data without the need to learn the underlying system. Instead of requiring a time-consuming digital clone of the physical asset, the algorithm autonomously detects anomalies and patterns of anomalous behavior, and is able to predict asset failure hours or days before it occurs, without human feedback.

Without the need to build virtual clones of the physical asset, costly human input into model building is no longer necessary. In fact, as in the case of Presenso, deployment can be done remotely via the cloud, with no onsite maintenance required.

Supervised versus Unsupervised Predictive Maintenance

Industrial IoT and big data machine learning are expected to revolutionize machine asset management. As with every revolution, the beneficiaries of the Industry 4.0 transformation will be those entities that recognize its potential while demonstrating prudent business practices and careful strategic planning.

Deddy Lavid

Experienced R&D manager and recognized expert in the field of machine learning and big data architecture. His work spans the full spectrum from researching isolated data problems to building complex production systems. At Rafael, he led a team of algorithm developers on large software projects of national importance. He holds an M.Sc. with honors in computer and information science.
