• Comment:

    UPTIME will develop a versatile and interoperable unified predictive maintenance platform for industrial and manufacturing assets, covering the full chain from sensor data collection to optimal maintenance action implementation. Through advanced prognostic algorithms, it predicts upcoming failures or losses in productivity; decision algorithms then recommend the best action to be performed at the best time, in order to optimize total maintenance and production costs and improve OEE.

    UPTIME innovation is built upon the predictive maintenance concept and its technological pillars (i.e. Industry 4.0, IoT and Big Data, Proactive Computing) in order to deliver a unified information system for predictive maintenance. UPTIME's open, modular and end-to-end architecture aims to enable the implementation of predictive maintenance in manufacturing firms, to maximize the expected utility, and to exploit the full potential of predictive maintenance management, sensor-generated big data processing, e-maintenance, proactive computing and industrial data analytics. The UPTIME solution can be applied in the context of the production process of any manufacturing company, regardless of its processes, products and the physical models used.

    Key components of the UPTIME Platform include the following (a minimal sketch of how these components could interact is given after the list):

    1. SENSE deals with data aggregation from heterogeneous sources and provides configurable diagnosis capabilities at the edge.
    2. DETECT deals with intelligent diagnosis to provide a reliable interpretation of the asset's health.
    3. PREDICT deals with advanced prognostic capabilities using generic or tailored algorithms.
    4. ANALYZE deals with the analysis of maintenance-related data from legacy and operational systems.
    5. FMECA (Failure Mode, Effects and Criticality Analysis) deals with the estimation of possible failure modes and the evolution of their risk criticalities.
    6. DECIDE deals with continuously improved recommendations based on historical data and real-time prognostic results.
    7. VISUALIZE deals with configurable visualization to facilitate data analysis and decision making.
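
    To make the component flow above more concrete, here is a minimal, hypothetical Python sketch of how sensor readings could pass through the SENSE, DETECT, PREDICT and DECIDE stages; all class names, function names and thresholds are illustrative assumptions, not the actual UPTIME implementation or API.

        # Minimal, hypothetical sketch of a SENSE -> DETECT -> PREDICT -> DECIDE flow.
        # Names and thresholds are illustrative only, not the actual UPTIME API.
        from dataclasses import dataclass
        from statistics import mean
        from typing import List

        @dataclass
        class Reading:
            asset_id: str
            timestamp: float
            vibration_rms: float  # example sensed quantity

        def sense(raw_rows: List[dict]) -> List[Reading]:
            """SENSE: aggregate heterogeneous rows into a common record format."""
            return [Reading(r["asset"], r["ts"], float(r["vib"])) for r in raw_rows]

        def detect(readings: List[Reading], limit: float = 4.0) -> bool:
            """DETECT: flag degraded health when average vibration exceeds a limit."""
            return mean(r.vibration_rms for r in readings) > limit

        def predict(readings: List[Reading]) -> float:
            """PREDICT: naive remaining-useful-life estimate from the vibration trend."""
            trend = readings[-1].vibration_rms - readings[0].vibration_rms
            return max(0.0, 100.0 - 10.0 * trend)  # placeholder prognosis in hours

        def decide(degraded: bool, rul_hours: float) -> str:
            """DECIDE: recommend an action from diagnosis and prognosis."""
            if degraded and rul_hours < 24:
                return "schedule maintenance within the next shift"
            return "continue monitoring"

        rows = [{"asset": "mill-1", "ts": float(t), "vib": 3.0 + 0.2 * t} for t in range(10)]
        readings = sense(rows)
        print(decide(detect(readings), predict(readings)))
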
  • Comment:

    One of the main learnings is that data quality needs to be ensured from the beginning of the process. This implies spending more time, effort and money to carefully select the sensor type, data format, tags and correlating information. This turns out to be particularly true when dealing with human-generated data: if operators perceive data entry as useless, time-consuming, boring and out of scope, the result will inevitably be bad data.
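
    As one hedged illustration of enforcing quality at the point of entry, the snippet below rejects operator-entered records that are missing agreed tags or use an unexpected coding; the field names and the "FC-" coding rule are hypothetical assumptions, not a prescribed UPTIME schema.

        # Hypothetical validation of operator-entered maintenance records at entry time.
        # Field names and the "FC-" coding rule are illustrative; adapt them to the agreed format.
        REQUIRED_FIELDS = {"asset_tag", "operator_id", "failure_code", "timestamp"}

        def validate_record(record: dict) -> list:
            """Return a list of problems; an empty list means the record is accepted."""
            problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
            if "failure_code" in record and not str(record["failure_code"]).startswith("FC-"):
                problems.append("failure_code must use the agreed 'FC-xxx' coding")
            return problems

        print(validate_record({"asset_tag": "MILL-01", "operator_id": "op42",
                               "failure_code": "bearing broken", "timestamp": 1710000000}))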

    Quantity of data is another important aspect. A stable and controlled process shows little variation, so examples of degradation and failure are rare, and machine learning requires large data sets to yield accurate results. This aspect of data collection therefore needs to be designed in advance, sometimes months or even years before the real need emerges.

    This experience translates into some simple, even counterintuitive guidelines:

    1. Anticipate the installation of sensors and data gathering. The best time to do it is during the first installation of the equipment or at its first revamp. Don't underestimate the amount of data you will need to train a good machine learning model. This of course also needs economic justification, since the investment in new sensors and data storage will only pay back after some years.

    2. Gather more data than needed.
    A common piece of advice is to design a data gathering campaign starting from the current need. This, however, can lead to missing the right data history when a future need emerges. In an ideal state of infinite capacity, the data gathering activities would capture the full ontological description of the system under design. Of course, this is not feasible in every real-life situation, but a good strategy is to equip the machine with as many sensors as possible.

    3. Start initiatives to preserve and improve the current datasets, even if they are not immediately needed. For example, start migrating Excel files scattered across different PCs into common shared databases, taking care of proper data cleaning and normalization (for example, converting local-language descriptions in data and metadata to English), as sketched below.
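
    A minimal sketch of such a migration, assuming pandas (with an Excel engine such as openpyxl) is available, could look like the following; the file paths, column handling and the small translation map are hypothetical placeholders.

        # Hypothetical consolidation of scattered Excel files into one shared SQLite table.
        # File paths, column names and the translation map are illustrative assumptions.
        import glob
        import sqlite3

        import pandas as pd

        TRANSLATIONS = {"guasto": "failure", "cuscinetto": "bearing"}  # local terms -> English

        frames = []
        for path in glob.glob("maintenance_logs/*.xlsx"):
            df = pd.read_excel(path)
            df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]  # normalize headers
            df = df.replace(TRANSLATIONS)                 # translate known local-language labels
            df = df.dropna(how="all").drop_duplicates()   # basic cleaning
            frames.append(df)

        if frames:  # write the consolidated, cleaned table into a shared database
            merged = pd.concat(frames, ignore_index=True)
            with sqlite3.connect("shared_maintenance.db") as conn:
                merged.to_sql("maintenance_events", conn, if_exists="replace", index=False)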

    Finally, the third important learning is that Data Scientists and Process Experts still don't talk the same language, and it takes significant time and effort from mediators to help them communicate properly. This aspect also needs to be taken into account and carefully planned. Companies definitely need to close the skills gap, and different strategies are applicable:

    1. train Process Experts on data science;

    2. train Data Scientists on subject matter;

    3. develop a new role of Mediator, who sits in between and shares a minimum common ground with both sides to enable clear communication in extreme cases.

    Results:

    Quantity and quality of data need to be ensured from the beginning of the process. It is important to gather more data than needed and to have a high-quality dataset: machine learning requires large sets of data to yield accurate results, yet data collection needs to be designed before the real need emerges. Moreover, it is important to have a common ground for sharing information and knowledge between data scientists and process experts, since in many cases they still don't talk the same language and it takes significant time and effort from mediators to help them communicate properly.


    Installation of sensor infrastructure: during the initial design for incorporating the new sensors into the existing infrastructure, it is necessary to take into consideration the extreme physical conditions inside the milling station, which require special measures to avoid sensors being damaged or falling off. A flexible approach is adopted, combining internal and external sensors to make the sensor network less prone to failure.

    Quantity and quality of data: a large amount of collected data is necessary for training the algorithms. Moreover, the integration of real-time analytics with batch data analytics is expected to provide better insight into how the milling and support rollers work and behave under various circumstances.
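
    As a rough illustration of combining batch and real-time analytics, the sketch below derives simple control limits from historical (batch) data and applies them to new readings as they stream in; the 3-sigma rule, the variable names and the example values are assumptions rather than the project's actual analytics stack.

        # Illustrative combination of batch analytics (historical control limits)
        # with real-time checks on streaming readings; the 3-sigma rule is an assumption.
        from statistics import mean, stdev
        from typing import Iterable, Tuple

        def batch_limits(history: Iterable[float], k: float = 3.0) -> Tuple[float, float]:
            """Batch step: derive control limits from historical roller vibration data."""
            values = list(history)
            mu, sigma = mean(values), stdev(values)
            return mu - k * sigma, mu + k * sigma

        def stream_check(reading: float, limits: Tuple[float, float]) -> bool:
            """Real-time step: flag a reading that falls outside the batch-derived limits."""
            lower, upper = limits
            return not (lower <= reading <= upper)

        limits = batch_limits([2.1, 2.0, 2.3, 2.2, 1.9, 2.1])   # historical campaign data
        for new_reading in [2.2, 2.4, 3.8]:                     # simulated live stream
            print(new_reading, "alarm" if stream_check(new_reading, limits) else "ok")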


    Quantity and quality of data: the available data in the FFT use case mainly consists of legacy data from specific measurement campaigns. These campaigns were mainly targeted at obtaining insights into the effect of operational loads on the health of the asset, and are therefore quite suitable for establishing the range and type of physical parameters to be monitored by the UPTIME system. UPTIME_SENSE is capable of acquiring data from mobile assets in transit using different modes of transport. While this would have been achievable from a technical point of view, the possibility to perform field trials was limited by the operational requirements of the end user. Therefore, only one field trial in one transport mode (road transport) was performed, which yielded insufficient data to develop a useful state detection capability. Due to the limited availability of the jig, a laboratory demonstrator was designed to enable partially representative testing of UPTIME_SENSE under lab conditions, to improve data quantity and diversity, and to establish a causal relationship between acquired data and observed failures so that maintenance recommendations can be made.