At a strategic level, there is almost universal management buy-in on the importance of big data. We no longer ask when the age of big data will arrive, because it is already here. However, many companies are struggling to get operational and financial benefit from their big data investments. This article reviews the challenges management faces when implementing big data initiatives and provides a framework for optimizing the value of ongoing data investments.
The current state: data rich, information poor
According to Gartner, 70% of data that is captured is not used. In other words, the majority of investments in infrastructure to collect, house and access data provide no operational benefit or financial return. Only 27% of executives report that their big data initiatives are profitable.
This result is surprising given that big data is a corporate priority for both enterprises and SMBs. In a 2016 Capgemini study, over three quarters of respondents stated that the CEO or executive team believes that big data has the potential to improve business results in the areas of new revenue, operational efficiency and cost cutting.
Data mining capability versus organizational constraints
Every day we add quintillions of bytes to our existing data warehousing facilities. Our ability to access, analyze and operationalize big data lags far behind our ability to collect it. Let’s look at some of the constraints:
1. Not knowing which variables to track: For every 100 variables collected, fewer than 10 are critical to the business at any given time. How do you select the most important and impactful variables, and how do you ensure that you reassess which ones to analyze when more important variables emerge?
2. Insufficient Business Intelligence tools: Although great progress has been made in building a solid infrastructure for storing data, companies typically rely on antiquated BI tools for access and analysis. The first culprit is the mismatch between requirements and functionality that results in poor tool selection. We often find that decisions about Enterprise-wide BI tools are made by IT professionals with insufficient input from the Operational Technology (OT) team or the business side.
3. Problems with data hygiene: Pervasive data quality problems add a further layer of complexity to the real-time and actionable use of data. For instance, in the Utilities/Energy/Chemicals verticals, 40% of executives cite poor data quality as a top challenge for big data initiatives. Integrating disparate data sources is time intensive. More importantly, basing operational decisions on corrupt data can result in faulty decision making.
4. Not enough skilled data scientists: Organizations simply lack the resources to manage big data. The technical competencies to store, clean and access data are the easy part. It is expensive to build internal enterprise-wide competencies that let an organization tap into the insights hidden in latent relationships between data variables.
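One practical way to approach the first constraint above is to rank collected variables by how strongly they move with a business KPI and focus analysis on the top few. The sketch below shows the idea with a hand-rolled Pearson correlation; the sensor names, data and KPI are hypothetical illustrations, not real measurements.

```python
# Sketch: rank collected variables by absolute correlation with a business KPI.
# All names and values below are hypothetical illustrations.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length numeric series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def top_variables(data, kpi, k=3):
    """Return the k variable names most correlated (by |r|) with the KPI."""
    scores = {name: abs(pearson(series, kpi)) for name, series in data.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

data = {
    "pump_vibration": [1.0, 2.0, 3.0, 4.0, 5.0],
    "ambient_temp":   [20.1, 19.8, 20.3, 20.0, 19.9],
    "line_pressure":  [5.0, 4.1, 3.2, 2.0, 1.1],
}
downtime_hours = [1.0, 2.1, 2.9, 4.2, 5.0]

# Keep only the variables that actually track the KPI.
print(top_variables(data, downtime_hours, k=2))
```

A correlation screen like this is only a first pass; it misses non-linear and lagged relationships, which is one reason the machine learning tools discussed later matter.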
Framework for building a big data competency
Big data has the potential for disruption. At the same time, implementing a big data initiative does not, and should not, require radical change within your organization. First, you need to be realistic about the limitations of your organization and the common challenges endemic to big data. Without making excuses, recognize that everyone is facing the same issue: how to make real-time decisions based on complex data.
Our approach to big data is incremental and reflects the challenges of information overload:
People: The dearth of talented data professionals (business intelligence engineers, big data architects, data scientists, etc.) able to support a business should not prevent the adoption of big data programs and initiatives. Instead, the emphasis should be on tools based on unsupervised learning algorithms. Unsupervised machine learning tools can replace, and even surpass, human expertise in anomaly detection, correlation detection and pattern recognition.
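To make the "unsupervised" point concrete: a detector can learn each sensor's normal behavior from unlabeled history alone, with no expert-defined thresholds, and then flag readings that fall far outside the learned baseline. This is a minimal per-sensor z-score sketch; the sensor names and readings are hypothetical.

```python
# Sketch: unsupervised anomaly detection on sensor data. The baseline is
# learned from unlabeled history (no expert thresholds or labeled failures);
# sensor names and values are hypothetical illustrations.
from statistics import mean, stdev

def fit_baseline(history):
    """Learn a (mean, standard deviation) pair per sensor from raw history."""
    return {sensor: (mean(vals), stdev(vals)) for sensor, vals in history.items()}

def anomalies(baseline, reading, z_limit=3.0):
    """Return sensors whose current reading deviates more than z_limit sigmas."""
    flagged = []
    for sensor, value in reading.items():
        mu, sigma = baseline[sensor]
        if sigma > 0 and abs(value - mu) / sigma > z_limit:
            flagged.append(sensor)
    return flagged

history = {
    "bearing_temp": [70.0, 71.2, 69.8, 70.5, 70.1, 69.9, 70.4, 70.2],
    "motor_rpm":    [1500, 1502, 1498, 1501, 1499, 1500, 1503, 1497],
}
baseline = fit_baseline(history)

# A bearing temperature far outside its learned range is flagged automatically.
print(anomalies(baseline, {"bearing_temp": 92.0, "motor_rpm": 1500}))
```

Production-grade tools use far richer models (multivariate, non-linear, time-aware), but the workflow is the same: learn normal behavior from the data itself rather than from scarce human experts.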
Process: Processing needs to be automated. This limits instances of human error and reduces labor-intensive data management tasks. Automated abnormal-event detection flags deviations in any physical parameter. Advanced monitoring provides early warning of machine degradation and reduces operating and monitoring costs.
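An early-warning check of the kind described above can be as simple as comparing a recent window of readings against a healthy reference window and alerting on sustained drift, long before any hard limit is reached. The window size, tolerance and vibration values below are hypothetical illustrations.

```python
# Sketch: automated early warning for gradual machine degradation.
# Compares the latest window's average against a healthy reference window;
# sustained drift beyond the tolerance triggers a warning well before any
# hard control limit is breached. Parameters are illustrative, not tuned.
def degradation_warning(series, window=5, tolerance=0.10):
    """True if the latest window's mean drifted more than `tolerance`
    (as a fraction) from the first window's mean."""
    if len(series) < 2 * window:
        return False  # not enough history to compare two windows
    ref = sum(series[:window]) / window
    recent = sum(series[-window:]) / window
    return abs(recent - ref) / abs(ref) > tolerance

vibration_mm_s = [2.0, 2.1, 2.0, 2.05, 1.95,   # healthy reference period
                  2.1, 2.2, 2.3, 2.45, 2.6]    # slow upward drift
print(degradation_warning(vibration_mm_s))
```

Because the check is pure arithmetic over the data stream, it can run continuously across every monitored parameter with no human in the loop.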
Technology: The ultimate power of machine learning and predictive analytics lies in learning from historical data to predict future behavior. When selecting technologies, we recommend including tools that can access time-series databases and then generate statistical models for forecasting.
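As a toy illustration of "learn from history, predict the future," here is simple exponential smoothing, one of the most basic statistical forecasting models for time-series data. The smoothing parameter and temperature series are illustrative assumptions, not recommended production values.

```python
# Sketch: one-step-ahead forecast from historical time-series data using
# simple exponential smoothing. alpha is an illustrative smoothing parameter;
# real tools fit it (and richer models) to the historical data automatically.
def exp_smooth_forecast(series, alpha=0.5):
    """Return a one-step-ahead forecast via simple exponential smoothing."""
    level = series[0]
    for value in series[1:]:
        # Each new observation pulls the smoothed level toward it.
        level = alpha * value + (1 - alpha) * level
    return level

temps = [20.0, 20.4, 20.2, 20.6, 20.8]
print(round(exp_smooth_forecast(temps), 2))
```

Real deployments would use richer models (trend and seasonality terms, or learned neural forecasters), but the selection criterion in the paragraph above is the same: the tool must read the time-series store and produce a forward-looking statistical model.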
Introducing Presenso: A breakthrough solution for predictive machine asset management
Presenso is a fully automated and unsupervised monitoring solution that predicts asset failures hours or days before they occur. Unlike supervised monitoring systems, no advance human input or expert knowledge is required.
How is Presenso different from other solutions? Using Artificial Intelligence and adaptive algorithms, all machine sensor data is analyzed in real time. With early detection of anomalous sensor data behavior, Presenso uses Machine Learning to identify asset degradation or potential failure. This detection occurs well in advance of rules-based detection systems, which only generate alerts after control thresholds are breached – in other words, when it is too late.
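The timing difference between the two approaches can be sketched numerically. In this hypothetical example (the readings, control limit and 3-sigma rule are illustrative assumptions, not Presenso internals), a learned-baseline detector flags the degradation several readings before the fixed control threshold is ever crossed.

```python
# Sketch: a fixed control threshold alerts later than a learned-baseline
# detector on the same degrading signal. Values are illustrative only.
from statistics import mean, stdev

readings = [70, 70, 71, 70, 74, 78, 83, 89, 96, 104]  # slowly degrading sensor
CONTROL_LIMIT = 100          # rules-based hard threshold
normal = readings[:4]        # baseline learned from early "normal" data
mu, sigma = mean(normal), stdev(normal)

# Index of the first reading that triggers each kind of alert.
rule_alert = next(i for i, v in enumerate(readings) if v > CONTROL_LIMIT)
ml_alert = next(i for i, v in enumerate(readings) if abs(v - mu) / sigma > 3)
print(rule_alert, ml_alert)  # the learned baseline fires first
```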
By alerting the facility to potential failure, Presenso enables preventive action or repairs to be completed before machine downtime or factory shutdowns occur.
The Presenso cloud-based solution is easy to install, requires no advance knowledge of machine operation or structure, is sensor-agnostic, and places no limit on the number of sensors that can be monitored.
Interested in learning more about Presenso? Click here to schedule a complimentary demo.