It may be unsettling for some readers, but it only takes a slight stretch to argue that the Digital Twin has already passed its prime and is on its way to “legacy” status. Gartner forecast the Digital Twin as one of its top 10 technology trends for both 2017 and 2018. Since we know that Gartner is never wrong, how do we reconcile the exuberance for the Digital Twin with emerging anecdotal evidence from factory owners who are struggling with its adoption?
From a distance, the Digital Twin is a game changer for an industrial facility. According to Gartner, the definition of the Digital Twin is “a dynamic software model of a physical thing or system that relies on sensor data to understand its state, respond to changes, improve operations and add value.”
Although there are numerous applications for the Digital Twin, I will focus on the high-impact category – predictive asset maintenance. Industry juggernauts such as GE (Predix), SAP (Hana) and Siemens (MindSphere) are pouring billions of dollars into R&D to secure a piece of the industrial IoT market. The idea itself is simple. By creating a virtual replica (simulator) of the physical machine, the factory owner can feed real-time sensor data to the simulator, view simulated machine performance in real time and be alerted to degradation or machine failure.
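The core loop is easier to see in code. The sketch below is a deliberately minimal, hypothetical illustration of the idea: the `simulate()` physics model, the sensor names and the tolerance value are all my own assumptions, not any vendor's actual API, and a real twin would replace the one-line model with a full simulation built from the machine's blueprints.

```python
# Minimal sketch of a Digital Twin monitoring loop: compare each live
# sensor reading against the simulator's prediction for the same
# operating conditions, and alert when the residual grows too large.
# simulate(), the temperature numbers and the tolerance are illustrative
# assumptions only.

def simulate(load: float) -> float:
    """Stand-in physics model: expected bearing temperature (deg C)
    for a given load fraction. A real Digital Twin would run a full
    simulation derived from the machine's design blueprints."""
    return 40.0 + 35.0 * load

def check_reading(load: float, measured_temp: float,
                  tolerance: float = 5.0) -> bool:
    """Compare the live reading with the twin's prediction; return
    True (raise a maintenance alert) when the residual exceeds the
    tolerance, i.e. the physical machine has drifted from its model."""
    expected = simulate(load)
    residual = abs(measured_temp - expected)
    return residual > tolerance

# Healthy machine: measurement tracks the simulated value (alert: False).
print(check_reading(load=0.6, measured_temp=62.0))  # False
# Degrading bearing: temperature drifts far above the model (alert: True).
print(check_reading(load=0.6, measured_temp=75.0))  # True
```

Note what the sketch quietly assumes: an accurate `simulate()`. That assumption is exactly where the cost discussed below comes from, because building that model requires the blueprints and expertise of the original designers.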
GE’s marketing collateral refers to an interesting wind turbine use case. In the example they provide, real-time data from the turbine is analyzed and used to make changes in its angle or position to optimize performance. As a solution for predictive asset maintenance, GE Predix can analyze a “fleet” of turbines and detect anomalies in sensors to predict potential failure.
What’s the catch?
Before we address this directly, let’s start by reviewing how the Digital Twin is operationalized. At a basic level, there are two prerequisites to deploying a Digital Twin: (1) access to accurate blueprints or the actual designers of the physical machine and (2) a technology platform that can generate the insights required for real-time decision making.
Both requirements are more complicated than is commonly recognized. Even if the factory has up-to-date blueprints, the process of creating 3D models is time- and labor-intensive. From a resource perspective, implementation of a Digital Twin will require a cadre of vendor-supplied big data engineers, designers and consultants, plus support from the facility’s maintenance and process technicians. Because of this cost issue, the Digital Twin is not scalable across the huge installed base of relatively inexpensive plant machinery that requires a Predictive Maintenance solution.
Most significantly, the underlying technology of the Digital Twin is no longer on the cutting edge of Artificial Intelligence. The so-called Supervised Machine Learning model at the core of the Digital Twin requires the algorithm to “learn” the machine behavior from the data labels of historic downtime records. This learning process is iterative, time-consuming and requires human input.
A more advanced form of Machine Learning is the Unsupervised model. In the case of the machine asset, all the factory sensor data is sent to the cloud and analyzed in real time. One of the key differentiators between the Supervised and Unsupervised models is that in the Unsupervised case, the algorithm is agnostic with respect to sensor type or asset class, and requires no human input. In other words, instead of expending vast resources to learn a factory machine, the Unsupervised approach automatically builds machine models and looks for abnormal sensor behavior (and anomalous behavior patterns) with no human input or understanding of the underlying machine asset.
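To make the contrast concrete, here is a toy sketch of the unsupervised approach described above: an online anomaly detector that needs no labels, no blueprints and no knowledge of the asset. It learns each sensor's normal range from the stream itself (using Welford's online mean/variance algorithm) and flags readings that fall far outside it. The class name, the z-score threshold and the warm-up length are my own illustrative choices, not a description of any vendor's product.

```python
# Hypothetical sketch: label-free, sensor-agnostic anomaly detection.
# Normal behavior is learned from the data stream itself, so the same
# detector can be attached to any sensor on any asset class.
import math

class StreamAnomalyDetector:
    """Flags readings more than z_threshold standard deviations from
    the running mean, learned online via Welford's algorithm."""

    def __init__(self, z_threshold: float = 3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0            # running sum of squared deviations
        self.z_threshold = z_threshold

    def update(self, x: float) -> bool:
        """Score the reading against the learned baseline, then fold
        it into the running statistics. Returns True if anomalous."""
        anomalous = False
        if self.n >= 10:         # wait for a minimal baseline first
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(x - self.mean) / std > self.z_threshold:
                anomalous = True
        # Welford's incremental update -- no stored history needed.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

# A vibration sensor hovering around 1.0, then a sudden spike:
detector = StreamAnomalyDetector()
readings = [1.0, 1.02, 0.98, 1.01, 0.99, 1.0, 1.03, 0.97, 1.01, 1.0, 5.0]
flags = [detector.update(r) for r in readings]
print(flags[-1])  # True -- the spike is flagged with no labels, no model
```

Notice what is absent: there is no `simulate()` model and no labeled downtime history. The detector never "understands" the machine; it only learns what normal looks like, which is precisely why the approach scales to inexpensive machinery where building a twin is uneconomical.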
I am not suggesting that the Digital Twin and Supervised Machine Learning form a legacy system comparable to the IBM mainframe. First, the mainframe was a technological advancement that lasted for many decades during a period of relatively slow innovation. More importantly, there are many applications of the Digital Twin that have not been fully explored and are likely to provide it with oxygen for years to come.
Specifically, the Digital Twin has untapped potential in areas outside of high-cost and greenfield industrial asset maintenance. For instance, every Tesla vehicle is coupled with a Digital Twin that transmits data to the factory in real time. If there is a transmission problem, it can be fixed with a software download. With Tesla, one does not have to rely on the honesty and experience of a motor mechanic to diagnose engine failure because the system can pinpoint failing components and recommend corrective actions.
If we continue with the automobile example, the Digital Twin is the technology equivalent of a Hummer: costly, prestigious and aspirational. The Hummer appeals to a micro-segment of society whose needs are not reflective of those of the general population. Let’s put aside purchasing and maintenance costs, and assume that most readers do not live in zip code 90210 or in an active military zone. Once the thrill of ownership passes, there are faster and more practical ways to run a carpool or make a trip to the local supermarket than driving a Hummer.
General Motors retired the Hummer because it outlived its commercial viability. Looking to the future, will the Smart Factory be built on an expensive infrastructure with a limited shelf life? Perhaps the time has come for an adaptive AI solution for asset management that will adjust and morph over time: Unsupervised Machine Learning.
Returning to the original question: is the Digital Twin already a legacy system? If we explore the untapped possibilities using the Tesla model, we can expect to see new applications for the Digital Twin for many years to come. However, in the more narrowly defined predictive asset maintenance category, the Digital Twin may still find its place, but it is doubtful whether we will witness widespread adoption in its current iteration.