• Results:

    The SERENA project has provided a deep dive into an almost complete IIoT platform, merging the knowledge, competencies and technological solutions of all the partners involved. For COMAU, the two main outcomes of the project are the container-based architecture and the analytics pipeline: these are the components that have been leveraged most internally, and they have inspired COMAU's effort in developing the new versions of its IIoT offering portfolio. The predictive maintenance solutions developed within the scope of the project have confirmed the potential of this kind of solution and met expectations.

    At the same time, another takeaway was the central need for a large amount of data, possibly labelled, covering at least the main behaviours of the machine. This limits the common perception that predictive maintenance consists of general rules that can easily be applied to any kind of machine with impressive results.

    More concretely, predictive maintenance and analytics pipelines in general have the potential to be disruptive in the industrial scenario. On the other hand, wide adoption seems likely to take some time, and to be driven not by a few all-embracing algorithms but by many vertical ones, each applicable to a restricted set of machines or use cases, namely the most relevant ones.

    The SERENA system is very versatile and accommodates many different use cases. Prognostics requires a deep analysis to model the underlying degradation mechanisms and the phenomena that cause them. Creating a reasonable RUL (remaining useful life) calculation for the VDL WEW situation was expected to be complicated, and this expectation was borne out: the accuracy of the calculated RUL was not satisfactory. Node-RED proved a very versatile tool, well chosen for the implementation of the gateway. Moreover, the analysis of production data revealed useful information regarding the impact of newly introduced products on the existing production line.
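
    Prognostics of this kind typically extrapolates a health indicator towards a failure threshold. The Python fragment below is a purely illustrative sketch of the simplest form of such a calculation, a linear-trend extrapolation; it is not the model used for the VDL WEW pilot, and all indicator names and threshold values are hypothetical.

        import numpy as np

        def estimate_rul(hours, health, failure_threshold):
            """Fit a linear degradation trend to a scalar health indicator
            and extrapolate to the failure threshold (hypothetical values)."""
            slope, intercept = np.polyfit(hours, health, 1)
            if slope <= 0:
                return float("inf")  # no measurable degradation trend yet
            hours_to_threshold = (failure_threshold - intercept) / slope
            return max(hours_to_threshold - hours[-1], 0.0)

        # Example: operating hours vs. a wear indicator in [0, 1]
        t = np.array([0.0, 100.0, 200.0, 300.0, 400.0])
        h = np.array([0.10, 0.14, 0.19, 0.25, 0.30])
        print(estimate_rul(t, h, failure_threshold=0.8))  # ~984 hours remaining

    Real degradation is rarely this linear, which is one reason an accurate RUL proved hard to obtain in practice.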
     

    From the activities developed within the SERENA project, the relation between the accuracy and good performance of a CMM machine and the proper functioning of its air bearings system became clearer. TRIMEK's machine operators and personnel confirmed that airflow or pressure values outside the defined thresholds directly affect machine accuracy. This outcome was made possible by correlating the datasets collected from the sensors installed on the machine axes with accuracy verifications performed using the tetrahedron artifact, thanks to the remote real-time monitoring system deployed for TRIMEK's pilot. This has highlighted the importance of a cost- and time-effective maintenance approach and of the ability to monitor critical machine parameters, both for TRIMEK as a company and from the client's perspective.
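
    At its core, remote monitoring of this kind compares the sensed air supply values against defined thresholds. The fragment below is a minimal illustration of such a check; the limits, units and signal names are hypothetical placeholders, not TRIMEK's actual specifications.

        def check_air_bearings(pressure_bar, flow_lpm,
                               pressure_range=(5.0, 6.5), flow_min=20.0):
            """Return alerts for out-of-threshold air bearing readings.
            Threshold values are hypothetical placeholders."""
            alerts = []
            lo, hi = pressure_range
            if not lo <= pressure_bar <= hi:
                alerts.append(f"pressure {pressure_bar} bar outside [{lo}, {hi}]")
            if flow_lpm < flow_min:
                alerts.append(f"airflow {flow_lpm} l/min below minimum {flow_min}")
            return alerts

        print(check_air_bearings(4.2, 25.0))  # ['pressure 4.2 bar outside [5.0, 6.5]']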

     

    Another lesson learned throughout the project development relates to AI-based maintenance techniques: multiple large datasets, including failure data, are required to develop accurate algorithms. Since CMM machines are usually very stable, it is difficult to develop algorithms for a fully predictive maintenance approach in the metrology sector, at least within a short collection and assessment period.

     

    In another sense, it became visible that an operator support system is a valuable tool for TRIMEK's personnel, mainly the operators (and new operators), as an intuitive, interactive guide for performing maintenance tasks; it could be exploited further by adding workflows for other maintenance activities besides the air bearings system. Additionally, a more customized scheduler could also represent a useful tool for daily use with customers.

     

    As with any software service or package, it must be learned before it can be implemented and used; however, TRIMEK's personnel are accustomed to managing digital tools.

    All the experiments conducted have been interpreted in terms of their business implications: a reliable system (i.e. a robust SERENA platform and a robust data connection) and an effective prediction system (i.e. a data analytics algorithm able to identify the mixing head health status in advance) will have an impact on the main KPIs related to the foaming machine's performance (the standard definitions of these indicators are sketched after the list). In particular:

    • Overall equipment effectiveness increased by 0.4%, against a target of 15%
    • Mean time to repair was reduced from 3.5 hours to 3 hours
    • Mean time between failures increased from 180 days to over 360 days
    • Total cost of maintenance was reduced from €17,400 to €8,000.
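
    For reference, these indicators follow their standard textbook definitions; the snippet below only restates those formulas and recomputes availability from the reported MTBF and MTTR figures. It is not SERENA-specific code.

        def mtbf(uptime_hours, n_failures):
            return uptime_hours / n_failures     # mean time between failures

        def mttr(repair_hours, n_repairs):
            return repair_hours / n_repairs      # mean time to repair

        def availability(mtbf_h, mttr_h):
            return mtbf_h / (mtbf_h + mttr_h)    # fraction of time operational
            # OEE = availability x performance rate x quality rate

        # With the reported figures (MTBF 360 days, MTTR 3 hours):
        print(availability(360 * 24, 3))         # ~0.99965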

     

    Besides practical results, SERENA provided some important lessons to be transferred into its operative departments:

     

    Data Quality 

    Finding the relevant pieces of information hidden in large amounts of data turned out to be more difficult than initially thought. One of the main learnings is that data quality needs to be ensured from the beginning, and this implies spending more time, effort and money to carefully select sensor types, data formats, tags, and correlating information. This is particularly true when dealing with human-generated data: if operators perceive data entry as useless, time-consuming, boring and out of scope, it will inevitably produce bad data.

    Some examples of poor quality (a simple automated screening for which is sketched after the list) are:

    a. Missing data

    b. Poor data description or no metadata availability

    c. Data that is not relevant, or only scarcely relevant, to the specific need

    d. Poor data reliability
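
    A first screening for issues (a) to (c) can be automated, as in the sketch below, which assumes tabular sensor data in pandas and a separate column-metadata dictionary (a hypothetical convention, not the SERENA quality pipeline). Reliability (d) generally requires domain-specific rules and is not covered here.

        import pandas as pd

        def quality_report(df: pd.DataFrame, metadata: dict) -> dict:
            """Rough screening for the quality issues listed above.
            `metadata` maps column name -> description (hypothetical convention)."""
            return {
                "missing_ratio": df.isna().mean().to_dict(),                      # (a)
                "undocumented_columns": sorted(set(df.columns) - set(metadata)),  # (b)
                "constant_columns": [c for c in df.columns                        # (c) uninformative
                                     if df[c].nunique(dropna=True) <= 1],
            }

        df = pd.DataFrame({"pressure": [5.1, None, 5.3], "flag": [1, 1, 1]})
        print(quality_report(df, metadata={"pressure": "air supply pressure [bar]"}))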

     

    There are two solutions: 1) train people on the shop floor to increase their skills in digitalization in general and in data-based decision processes specifically; 2) design more ergonomic human-machine interfaces, involving experts in the HMI field, with the aim of reducing both the time needed to insert data and the uncertainty during data input.

    These two recommendations lead to better dataset design from the beginning (which ensures machine-generated data quality) and reduce the possibility of errors, omissions and poor accuracy in human-generated data.

     

    Data Quantity

    PU foaming is a stable, controlled process, so the collected data turned out to show little variation, whereas machine learning requires large and varied datasets to yield accurate results. This aspect of data collection therefore needs to be designed in advance, months or even years before the real need emerges. This translates into some simple, even counterintuitive guidelines:

    1. Anticipate the installation of sensors and data gathering. The best time is at the equipment's first installation or its first revamp. Do not underestimate the amount of data needed to train good machine learning models. This, of course, also needs an economic justification, since the investment in new sensors and data storage will only pay back after some years.

    2. Gather more data than needed. Common practice advises designing a data-gathering campaign around the current need; this could, however, mean missing the right data history when a future need emerges. In an ideal state of infinite capacity, the data-gathering activities should capture the complete ontological description of the system under design. This may not be feasible in real-life situations, but a good strategy is to equip the machine with as many sensors as possible.

    3. Start initiatives to preserve and improve the current datasets, even if they are not immediately needed. For example, start migrating Excel files spread across individual PCs into commonly shared databases, with thorough data cleaning and normalization (for example, converting local-language descriptions in data and metadata to English), as sketched below.
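
    As an illustration of guideline 3, the sketch below moves one such Excel file into a shared SQLite database, normalising headers and translating local-language descriptions. The file name, table name, column name and translation entries are all hypothetical.

        import sqlite3
        import pandas as pd

        TRANSLATIONS = {                      # hypothetical Italian -> English entries
            "guasto pompa": "pump failure",
            "sostituzione filtro": "filter replacement",
        }

        def migrate(excel_path: str, conn: sqlite3.Connection) -> None:
            df = pd.read_excel(excel_path)
            # Normalize headers: "Fault Description " -> "fault_description"
            df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
            if "fault_description" in df.columns:
                df["fault_description"] = (df["fault_description"]
                                           .str.strip().str.lower()
                                           .replace(TRANSLATIONS))
            df.drop_duplicates(inplace=True)
            df.to_sql("maintenance_log", conn, if_exists="append", index=False)

        migrate("operator_log_2019.xlsx", sqlite3.connect("shared_maintenance.db"))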

     

    Skills

    Data Scientists and Process Experts are not yet speaking the same language, and it takes significant time and effort from mediators to make them communicate properly. This aspect also needs to be taken into account and carefully planned: companies definitely need to close the skills gap, and different strategies are applicable: train Process Experts in data science; train Data Scientists in the subject matter; or develop a new role of Mediator, who sits in between and shares a minimum common ground to enable the two extremes to communicate.

     

    In this case, edge analytics performed by a low-cost edge device (a Raspberry Pi) proved that it is feasible to operate predictive maintenance systems like SERENA. However, performance issues were caused by the high sampling frequency (16 kHz) and by background processes running on the device, which affected the measurements. This issue can be reduced by lowering the sampling frequency; alternatives are hardware with buffer memory on the A/D card, or a real-time operating system implementation. The hardware itself is inexpensive, but the solution requires a lot of tailoring, which means that expenses grow with the number of working hours needed to customize the hardware for the final usage location. If similar applications implement the same configuration, cost-effectiveness increases. Furthermore, the production operators, maintenance technicians, supervisors, and staff personnel at the factory need further training in the SERENA system and its features and functionalities.
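
    As a concrete illustration of the "lower the sampling frequency" mitigation, the sketch below decimates a 16 kHz vibration block on the edge device. The decimation factor is a hypothetical choice, and this is not the pilot's actual acquisition code.

        import numpy as np
        from scipy.signal import decimate

        FS_IN = 16_000   # sampling frequency used in the pilot (16 kHz)
        FACTOR = 8       # hypothetical factor -> 2 kHz effective rate

        def downsample(block: np.ndarray) -> np.ndarray:
            """Anti-alias filter and decimate one acquisition block,
            reducing the data rate the Raspberry Pi has to sustain."""
            return decimate(block, FACTOR, ftype="fir", zero_phase=True)

        block = np.random.randn(FS_IN)      # one second of synthetic vibration data
        print(downsample(block).shape)      # (2000,)

    The anti-aliasing filter matters here: naively keeping every eighth sample would fold high-frequency vibration content back into the band of interest.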