RebootIoTFactory | Reboot IoT Factory
01-03-2018 – 30-04-2021
01-03-2018 – 30-04-2021
01-10-2018 – 30-09-2020
01-01-2019 – 30-06-2023
01-01-2019 – 31-12-2022
01-01-2019 – 31-07-2022
On the one hand, some pilot owners expect that no particular new skills will be required after the QU4LITY developments are implemented. For example:
New job profiles and associated skills are: Digital Business Processes Analyst, Expert in Machine Learning Algorithms, DevOps development knowledge, Data Scientist (programming and statistical knowledge), Artificial Intelligence knowledge, Cybersecurity Expert, ontology architects and modellers in MBSE, digitalized-systems shopfloor worker, digital and connectivity engineering, new-systems-integration Manufacturing Engineer, Cloud/Data Formats/Data Analytics Engineer, and global knowledge of product, manufacturing and quality.
Re- and upskilling needs were identified in the following areas: AI and data analytics; agile development; multidisciplinary project management (IT, mechanical, electrical engineering); design thinking; standardization; data analysis and Data Space technology for manufacturing; IT skills such as the Docker environment and languages like Python or JSON; basic data analytics skills and BI software.
Programming languages such as C#, C++, HTML, Java, Microsoft .NET and SQL Server; data tools for data cleaning and preprocessing, data parsing and feature engineering; machine-to-machine (M2M) data and protocols; machine learning skills across languages/ML systems; data analysis skills.
The following knowledge delivery mechanisms were identified as relevant: AR/VR, gamification, on-the-job training, vocational training and MOOCs (Massive Open Online Courses).
01-11-2018 – 31-10-2022
01-10-2018 – 31-03-2022
In order for CoLLaboratE to successfully realize its vision, several prerequisites were set in the form of major Scientific and Technological Objectives throughout the project duration. These are summarized in the following points:
Objective 1: To equip the robotic agents with basic collaboration skills easily adaptable to specific tasks
Objective 2: To develop a framework that enables non-experts teaching human-robot collaborative tasks from demonstration
Objective 3: To develop technologies that enable autonomous assembly policy learning and policy improvement
Objective 4: To develop advanced safety strategies allowing effective human-robot cooperation with no barriers, together with ergonomic performance monitoring
Objective 5: To develop techniques for controlling the production line while making optimal use of the resources by generating efficient production plans, employing reconfigurable hardware design, and utilising AGVs with increased autonomy
Objective 6: To investigate the impact of Human-Robot Collaboration to the workers’ job satisfaction, as well as test easily applicable interventions in order to increase trust, satisfaction and performance
Objective 7: To validate CoLLaboratE system’s ability to facilitate genuine collaboration between robots and humans
The CoLLaboratE project will have a profound impact on strengthening the competitiveness and growth of companies in the manufacturing sector:
- CoLLaboratE developed a co-production cell for manufacturing production lines, capable of performing assembly operations through human-robot collaboration. This cell is the result of interdisciplinary technological advances realized during the project in a series of highly significant areas related to robotics and artificial intelligence. The proposed system has been demonstrated and evaluated at TRL6 and is ready for commercial take-up, allowing the assembled knowledge to be rapidly integrated into the real production lines of industries and SMEs.
- CoLLaboratE developed technologies for autonomous collaborative assembly learning, together with teaching methods usable by non-experts, so that no explicit robot programming is required. As products such as LCD TVs evolve rapidly, the flexibility to adapt easily to the assembly of a new product is a major quality sought in modern assembly lines; robots that need several months of programming before starting work on a task are a rather unrealistic solution. Given (a) the time-consuming programming process typically required for industrial robots and (b) the difficulties that uncertainties in small-parts assembly pose to robots, cheap labour in low-cost countries (LCCs) has so far typically been used instead of robotic solutions, through LCC assembly outsourcing strategies.
- The CoLLaboratE service portfolio included a set of innovative, fast and flexible manufacturing techniques, combining the benefits of reconfigurable hardware design and modern ICT technologies (e.g. AI, a learning toolkit, digitization of assembly processes).
- CoLLaboratE introduced novel AGVs on shop floors with enhanced capabilities: beyond motion planning and obstacle detection, they can also detect the intentions of human workers in the factory, providing flexibility and facilitating the production process along with optimal use of resources.
- CoLLaboratE reduced delivery times and costs, while its robot assembly techniques also allow a much greater degree of customization and product variability. As highlighted in the euRobotics AISBL Strategic Research Agenda, the use of robotics in production is a key factor in making manufacturing within Europe economically viable; locating manufacturing in Europe through robotic solutions that suppress LCC outsourcing is a major goal for the near future. Through flexible assembly lines, manufacturing companies gain great leverage over their innovation capacity and the integration of new knowledge into their products.
- CoLLaboratE paved the way for a new era in industrial assembly lines, in which robots genuinely collaborate with human workers and allow manufacturing industries to establish in-house robot-based assembly lines capable of rapidly adapting to continuously evolving products. Through its advances, SMEs operating robot-based assembly lines will benefit by acting as subcontractors for large industries, since they become a viable alternative to LCC outsourcing.
It is clear that the CoLLaboratE project has profound potential to strengthen the competitiveness and growth of companies and to bring production back to Europe by implementing novel artificial-intelligence technologies and integrating robots with collaborative skills into production, meeting a specific, highly important need of European as well as worldwide manufacturing industries for their future growth and sustainability.
The target users of the CoLLaboratE system are manufacturing industries in need of flexible and affordable automation systems to boost their global competitiveness. Successful completion of CoLLaboratE will allow SMEs and large manufacturing companies in Europe to easily program assembly tasks and to adapt flexibly to changes in the production pipeline. Such ease of use and rapid integration of robotic assembly systems is expected to pave the way for a step change in the adoption not only of collaborative robots, but of the complete collaborative environment provided by the CoLLaboratE solution.
01-12-2018 – 30-11-2022
01-10-2018 – 30-09-2021
01-09-2018 – 31-08-2022
01-10-2018 – 31-03-2022
ROSSINI develops and demonstrates technologies enabling a significant advancement in human-robot collaboration (HRC). They are:
These technologies will then be integrated into the ROSSINI Platform architecture.
Expected achievements: 15% increase in the OECD Job Quality Index through work environment and safety improvements; 20% reduction in production reconfiguration time and cost; reduced impact and cost of heavy work; increased overall job satisfaction and job attractiveness; increased value-chain integration and stakeholder satisfaction.
HRC applications pose several challenges to the manufacturing industry, which sees an increased need for automation and scalability, notably in SMEs. Moreover, HRC applications currently also imply huge investments in effort, time and intellectual capital to integrate robots and sensors into the manufacturing workflow, which most European SMEs cannot afford, notably if the production combines low volume with high mix. Through the ROSSINI project, the implementation of real and cost-effective HRC contributes to redesigning workplaces by combining automation and lean manufacturing concepts, with a drastic reduction of conversion and reconfiguration costs.
The development of the ROSSINI Modular KIT brought significant advances in technology and in awareness of collaborative robotics across Europe. In particular, the set of efficient and modular tools developed within ROSSINI enables and eases various specific activities, from hazard assessment to the detection of multiple humans in a monitored area where robots are working. Nevertheless, important activities still require considerable effort in terms of knowledge, development and collaboration: from the need to refine existing interfaces (or define new, well-established ones) to the identification of solutions that bring human-robot interaction even closer (and make it more trustworthy), also in terms of standardization, which still presents several gaps on this topic.
In particular, the following topics have been identified as important issues to tackle in future activities and research:
ROSSINI helps European factories attract a skilled workforce thanks to the attention paid to job quality and employee satisfaction.
01-07-2015 – 30-06-2018
01-10-2017 – 31-03-2021
CETMA was able to exploit the possibilities that customized simulation offers to SMEs specialized in boat-hull manufacturing. Thanks to cloud resources, enough computing power is available to analyze different scenarios in a few days instead of several weeks.
Designers at CATMARINE and SKA are now able to achieve high-quality products by analyzing different manufacturing scenarios without wasting time, money and material. The platform can optimize the resin injection points/vents and verify the presence of defects in the final product, thus ensuring complete and correct mold filling.
Tools developed and deployed on the CloudiFacturing platform during this experiment allow end-users who operate water quenches to derive the operational conditions of the quench, thus increasing the precision and repeatability of the process and decreasing its dependence on the operator's experience. The outcomes of the experiment also bring new knowledge to the whole process. Additionally, the developed technologies allow the use of numerical modeling and simulation in the design of a new generation of water quenches and their components.
The realized progress advances the state of the art in several aspects:
A recent study showed that VARTM is among the fastest-growing technologies in the composites industry.
The cloud-based solution will reduce the cost of entering VARTM technology by 40% to 60%. This will lead to wider adoption of the technology with significant advantages in terms of quality, repeatability, and reduction of environmental and safety impact for many SMEs around the EU.
In addition to this, affordable simulations will lead to fast prototyping, experimentation, and bolder design.
From a business point of view, this project allowed CATMARINE to reduce its time to market (TTM), its process design cost and its material waste. Looking ahead, this technology will allow the shipyard to offer its customers innovative solutions, especially in building complex geometries with expensive materials such as carbon fiber and epoxy resin.
For SKA, VARTM simulation will be a new service to add to its catalog, alongside FEM analysis.
"In our field, it is no simple task to find a partner who is not only a top expert in a given field but also able to take initiative in finding innovative solutions. It is only in more challenging situations
when cooperation with real professionals is genuinely appreciated. CloudiFacturing project has certainly been the right choice", Ondřej Tůma, Managing Director, FERRAM STROJÍRNA s.r.o.
The establishment of the cloud cluster solution for running SyMSpace has greatly improved the engineering capacity of LCM, which no longer suffers from the bottleneck of local computation resources. Being able to reserve as many cloud computing resources as necessary at any given time removes the need to invest in hardware and maintenance just to cover peak loads. This means a great improvement in customer satisfaction.
Additionally, five industrial partners are already testing SyMSpace on a pay-per-use basis in the cloud. The main USPs are the low barrier to access (no upfront costs, no lengthy installation, etc.) and the low total costs.
Some interest has also come from academic players. As SyMSpace will see an open-source release in the near future, this seems an interesting opportunity for academic researchers working on electric machines to run, and eventually publish or contribute, their own algorithms. With the help of the cloud solution, the access barrier for remote partners has completely fallen away: at the beginning of September 2018, the solution was presented to a partner in the WEMPEC consortium at the University of Wisconsin-Madison, who is considering applying the SyMSpace cloud solution to research in magnetic bearings.
This example shows how international contact, also across research areas, has been facilitated by the cloud cluster solution of SyMSpace. This is an important driver of innovation potential at LCM, which, as a trans-academic-industrial player, relies heavily on close contacts with other research institutions.
The demonstration of the faster and more reliable prototyping production process is expected to be a major improvement both internally and externally at Hanning. However, due to the delays and difficulties experienced in the implementation of the winding process, this step still needs to be realized. The potential, however, is impressive as demonstrated in the KPI metrics below.
Table 12: Experiment 1 impact summary
The combined use of a simulation solution based on a numerical model and of remote HPC resources has enabled a new design and development process for catamaran hulls.
The solution, which relies on the CloudiFacturing platform, provides two kinds of advantages. Technically, more efficient and less defect-prone processes can be conceived and analyzed before their adoption. From an economic point of view, SMEs can afford the use of HPC resources without heavy investments.
"Thanks to the CloudiFacturing technology we can now take a look inside the VARTM process and switch our manufacturing process to a more safe, reliable and cheaper one", Gabriele Totisco, Catmarine CEO.
The experiment setup was well prepared and discussed by the experiment partners LCM and Hanning. The implementation of the solution in the CloudiFacturing Marketplace, however, was unclear and required significant effort during the early stages of the experiment. This partly concerned the technical solution and partly the definition and communication of the experiment structure to the consortium.
While this added some unplanned effort to the experiment, which reduced the time available for the practical implementation, the cause seems quite unavoidable for wave 1 of the experiments, since the development of the Marketplace happened in parallel with the experiments. The situation will therefore automatically improve for the experiments in wave 2.
01-10-2017 – 31-03-2021
Besides becoming familiar with Z-BRE4K's solution validation and assessment methodology, GESTAMP gained a better internal understanding of its readiness to apply predictive maintenance solutions in its plants, while new mitigation actions related to the identification of process flaws and defects were developed during Z-BRE4K. GESTAMP also came to appreciate the importance of the solution validation and assessment methodology defined in Z-BRE4K.
PHILIPS supports the idea of predictive maintenance, "listening to the machines", and understands that the key to success is close contact between technology providers and experts, where data integration/architecture and machine learning are both very important aspects.
SACMI-CDS discovered the importance of collaborating not only with mechanical engineering/maintenance professionals but also with experts from different technical backgrounds, who together can improve multitasking, combine shopfloor and office activities, and schedule activities during the working day.
In general, after the solution implementation (TRL5), testing the system on the shop floor (TRL6) and validation of the Z-BRE4K solution (TRL7) at end users, the very final lesson learnt can be summarised as follows:
01-10-2017 – 30-09-2020
01-09-2017 – 31-08-2021
01-11-2017 – 30-04-2021
01-10-2017 – 31-03-2021
01-09-2017 – 31-08-2020
UPTIME will develop a versatile, interoperable and unified predictive maintenance platform for industrial and manufacturing assets, covering the whole chain from sensor data collection to the implementation of the optimal maintenance action. Through advanced prognostic algorithms, it predicts upcoming failures or losses in productivity. Decision algorithms then recommend the best action to perform, at the best time, in order to optimize total maintenance and production costs and improve OEE (Overall Equipment Effectiveness).
UPTIME's innovation builds upon the predictive maintenance concept and its technological pillars (i.e. Industry 4.0, IoT and Big Data, proactive computing) to result in a unified information system for predictive maintenance. UPTIME's open, modular, end-to-end architecture aims to enable the implementation of predictive maintenance in manufacturing firms, maximizing the expected utility and exploiting the full potential of predictive maintenance management, sensor-generated big data processing, e-maintenance, proactive computing and industrial data analytics. The UPTIME solution can be applied to the production process of any manufacturing company, regardless of its processes, products and physical models.
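To make the decision step above concrete, here is a minimal sketch in Python; it is not UPTIME's actual algorithm, and all costs, margins and the prognosis curve are hypothetical. Given a predicted failure-probability curve, it picks the intervention day that minimizes expected cost, trading the rising risk of failure against the production margin earned by deferring.

```python
import numpy as np

def best_action_time(fail_prob, c_planned=1_000.0, c_failure=10_000.0,
                     daily_margin=400.0):
    """fail_prob[t]: predicted probability that the asset has failed by day t.
    Acting on day t costs c_planned if the asset still runs and c_failure if
    it has already failed; each day of operation earns daily_margin."""
    fail_prob = np.asarray(fail_prob, dtype=float)
    days = np.arange(len(fail_prob))
    # Deferring the intervention earns production margin but raises the risk
    # of paying the (much higher) failure cost; pick the cheapest day overall.
    expected_cost = (fail_prob * c_failure
                     + (1.0 - fail_prob) * c_planned
                     - days * daily_margin)
    t_best = int(np.argmin(expected_cost))
    return t_best, float(expected_cost[t_best])

# Hypothetical prognosis: failure probability rising steeply around day 20.
days = np.arange(30)
prognosis = 1.0 / (1.0 + np.exp(-(days - 20.0) / 2.0))
print(best_action_time(prognosis))  # recommends acting a few days before the knee
```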
Key components of UPTIME Platform include:
One of the main learnings is that data quality needs to be ensured from the beginning of the process. This implies spending some more time, effort and money to carefully select the sensor type, data format, tags and correlating information. This is particularly true when dealing with human-generated data: if the input of data by operators is felt to be useless, time-consuming, boring or out of scope, it will inevitably yield bad data.
The quantity of data is another important aspect. A stable and controlled process has less variation, so informative events are rare and machine learning requires large sets of data to yield accurate results. This aspect of data collection therefore needs to be designed months, even years, in advance, before the real need emerges.
This experience translates into some simple, even counterintuitive guidelines:
1. Anticipate the installation of sensors and the start of data gathering. The best approach is to do it during the first installation of the equipment or at its first revamp. Don't underestimate the amount of data you will need for good machine learning. This of course also needs economic justification, since the investment in new sensors and data storage will pay back only after some years.
2. Gather more data than needed.
Common practice is to design a data-gathering campaign starting from the current need. This can, however, lead to missing the right data history when a future need emerges. In an ideal state of infinite capacity, the data-gathering activities would capture the full ontological description of the system under design. Of course, this is not feasible in all real-life situations, but a good strategy is to equip the machine with as many sensors as possible.
3. Start initiatives to preserve and improve the current datasets, even if they are not immediately needed. For example, start migrating Excel files scattered across different PCs into common shared databases, taking care to perform good data cleaning and normalization (for example, converting local-language descriptions in data and metadata to English).
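A minimal sketch of guideline 3 above: consolidating scattered workbooks into one shared SQLite database with light normalization. The folder, database and table names are hypothetical, and pandas' read_excel needs an Excel engine such as openpyxl installed.

```python
import glob
import sqlite3

import pandas as pd

con = sqlite3.connect("shared_maintenance_data.db")
for path in glob.glob("legacy_sheets/*.xlsx"):
    df = pd.read_excel(path)
    # Normalize headers so the same quantity gets the same column name everywhere.
    df.columns = [str(c).strip().lower().replace(" ", "_") for c in df.columns]
    df = df.dropna(how="all")  # drop fully empty rows left over from formatting
    df.to_sql("measurements", con, if_exists="append", index=False)
con.close()
```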
Finally, the third important learning is that Data Scientists and Process Experts still don't speak the same language, and it takes significant time and effort from mediators to help them communicate properly. This aspect also needs to be taken into account and carefully planned. Companies definitely need to close the "skills" gap, and several strategies are applicable:
train Process Experts in data science;
train Data Scientists in the subject matter;
develop a new role of Mediator, who sits in between and shares a minimum common ground to enable clear communication even in extreme cases.
Quantity and quality of data: the available data in the FFT use case mainly consists of legacy data from specific measurement campaigns. The campaigns mainly targeted insights into the effect of operational loads on the health of the asset, and are therefore quite suitable for establishing the range and type of physical parameters to be monitored by the UPTIME system. UPTIME_SENSE is capable of acquiring data from mobile assets in transit using different modes of transport. While this would have been technically achievable, the possibility to perform field trials was limited by the operational requirements of the end-user. Therefore, only one field trial in one transport mode (road transport) was performed, which yielded insufficient data to develop a useful state-detection capability. Due to the limited availability of the jig, a laboratory demonstrator was designed to enable partially representative testing of UPTIME_SENSE under lab conditions, to improve data quantity and diversity, and to establish a causal relationship between acquired data and observed failures so that maintenance recommendations can be made.
Installation of sensor infrastructure: when designing the incorporation of the new sensors into the existing infrastructure, it is necessary to take into consideration the extreme physical conditions inside the milling station, which require special measures to prevent sensors from being damaged or falling off. A flexible approach was adopted, combining internal and external sensors to make the sensor network less failure-prone. Quantity and quality of data: a large amount of collected data is necessary for training the algorithms. Moreover, the integration of real-time analytics and batch data analytics is expected to provide better insight into how the milling and support rollers work and behave under various circumstances.
Quantity and quality of data need to be ensured from the beginning of the process. It is important to gather more data than needed and to have a high-quality dataset, since machine learning requires large sets of data to yield accurate results. Data collection, however, needs to be designed before the real need emerges. Moreover, it is important to have a common ground for sharing information and knowledge between data scientists and process experts, since in many cases they still don't speak the same language, and it takes significant time and effort from mediators to help them communicate properly.
01-09-2017 – 29-02-2020
01-11-2017 – 31-10-2020
The Predictive Cognitive Maintenance Decision Support System (PreCoM) enables its users to detect damage, estimate damage severity, predict damage development, follow up, optimize maintenance (reducing unnecessary stoppages) and receive recommendations (on what, why, where, how and when to perform maintenance). PreCoM is a cloud-based smart PdM system using vibration as the condition-monitoring parameter. Accelerometers measuring vibration (of both rotating and non-rotating components), as well as other sensors (e.g. for temperature), have been installed on machines' significant components (i.e. components whose failures are either expensive or dangerous). Over 20 hardware and software modules (common to all considered, equivalent use cases) are integrated into a single automatic and digitised system that gathers, stores, processes and securely sends data, providing the recommendations necessary for planning and optimizing maintenance and manufacturing schedules. The PreCoM system includes loops and sub-systems for data acquisition, data/sensor quality control, predictive algorithms, a scheduling algorithm, a follow-up tool, self-healing for specific problems, and an end-user information interface.
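As an illustration of the vibration-based monitoring PreCoM relies on, here is a minimal sketch (not an actual PreCoM module) that computes the RMS level of fixed-length accelerometer windows and flags those exceeding an alarm threshold; the sampling rate, window length and limit are hypothetical.

```python
import numpy as np

def vibration_alarm(signal, fs=10_000, window_s=1.0, rms_limit=4.5):
    """signal: raw acceleration samples (m/s^2); fs: sampling rate in Hz.
    Returns the indices of the windows whose RMS level exceeds rms_limit."""
    n = int(fs * window_s)
    # Split the stream into fixed-length windows, discarding the partial tail.
    windows = np.asarray(signal)[: len(signal) // n * n].reshape(-1, n)
    rms = np.sqrt((windows ** 2).mean(axis=1))
    return np.flatnonzero(rms > rms_limit)
```

In a real deployment, the threshold would typically come from vibration-severity guidelines or from a baseline recorded on the healthy machine.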
To develop and apply statistical models supporting PdM, it is crucial to have as much failure data as possible, which is not easy to find in companies' databases. Furthermore, advancing and integrating different technologies into a single automatic and digitised smart PdM system is a challenge that requires close collaboration between research and industry players.
01-10-2017 – 31-03-2021
01-10-2017 – 30-09-2020
In this case, edge analytics performed by a low-cost edge device (a Raspberry Pi) proved that it is feasible to run predictive maintenance systems like SERENA. However, performance issues were caused by the high sampling frequency (16 kHz) and by background processes running on the device, which affected the measurements. Lowering the sampling frequency reduces this issue; alternatives would be hardware with buffer memory on the AD card or a real-time operating system implementation. It was found that the hardware is inexpensive to invest in, but the solution requires a lot of tailoring, which means that expenses grow with the number of working hours needed to customize the hardware for its final usage location. If similar applications implement the same configuration, cost-effectiveness increases. Furthermore, the production operators, maintenance technicians, supervisors and staff at the factory need further training on the SERENA system and its features and functionalities.
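A minimal sketch of the sampling-frequency mitigation described above, assuming SciPy is available on the edge device; the decimation factor and signal are illustrative.

```python
import numpy as np
from scipy.signal import decimate

fs_in = 16_000                      # original sampling frequency (Hz)
raw = np.random.randn(fs_in * 10)   # stand-in for 10 s of accelerometer samples
# decimate() applies an anti-aliasing low-pass filter before downsampling;
# a factor of 8 brings the 16 kHz stream down to 2 kHz, cutting the load
# on the edge device roughly eightfold.
downsampled = decimate(raw, q=8)
print(len(raw), "->", len(downsampled))  # 160000 -> 20000
```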
The activities developed within the SERENA project clarified the relation between the accuracy and performance of a CMM machine and the proper functioning of its air-bearing system. TRIMEK's machine operators and personnel confirmed that airflow or pressure values outside the defined thresholds directly affect the machine's accuracy. This outcome was obtained by correlating the datasets collected from the sensors installed on the machine axes with verifications of the machine's accuracy using the tetrahedron artifact, thanks to the remote real-time monitoring system deployed for TRIMEK's pilot. This has highlighted the importance of a cost- and time-effective maintenance approach and of the ability to monitor critical machine parameters, both for TRIMEK as a company and from the client's perspective.
Another lesson learned throughout the project concerns AI-based maintenance techniques: they require multiple large datasets, as well as failure data, to develop accurate algorithms. Since CMM machines are usually very stable, it is difficult to develop algorithms for a fully predictive maintenance approach in the metrology sector, at least within a short collection and assessment period.
It also became apparent that an operator-support system is a valuable tool for TRIMEK's personnel, mainly operators (and new operators), as an intuitive and interactive guide for performing maintenance tasks; it can be exploited further by adding workflows for maintenance activities beyond the air-bearing system. Additionally, a more customized scheduler could also be a useful tool for daily use with customers.
As with any software service or package, it must be learned before it can be implemented and used; however, TRIMEK's personnel are accustomed to managing digital tools.
The SERENA project provided a deep dive into an almost complete IIoT platform, leveraging the knowledge of all the partners involved and merging many competencies and technological solutions into the platform. The two main aspects of the project for COMAU are the container-based architecture and the analytics pipeline; these are the two components that have been most leveraged internally and that have inspired COMAU's effort in developing the new versions of its IIoT offering portfolio. The predictive maintenance solutions developed within the scope of the project confirmed the potential of this kind of solution, meeting expectations.
Another takeaway, by contrast, was the central need for a huge amount of data, possibly labelled, covering at least the main behaviour of the machine. This limits the common assumption that predictive maintenance consists of general rules that can easily be applied to any kind of machine with impressive results.
More concretely, predictive maintenance and analytics pipelines in general have the potential to be disruptive in the industrial scenario; on the other hand, it will likely take some time before they are widely used, and they will consist not of a few all-embracing algorithms but of many vertical ones, each applicable to a restricted set of machines or use cases, namely the most relevant ones.
The SERENA system is very versatile, accommodating many different use cases. Prognostics need a deep analysis to model the underlying degradation mechanisms and the phenomena that cause them. Creating a reasonable RUL calculation for the VDL WEW situation was expected to be complicated, and this expectation proved true, as the accuracy of the calculated RUL was not satisfactory. Node-RED is a very versatile tool, well chosen for the implementation of the gateway. Moreover, the analysis of production data revealed useful information regarding the impact of newly introduced products on the existing production line.
All the experiments conducted have been interpreted in terms of their business implications: a reliable system (i.e. a robust SERENA platform and a robust data connection) and an effective prediction system (i.e. data analytics algorithms able to identify the mixing head's health status in advance) will have an impact on the main KPIs related to foaming machine performance. In particular:
Besides practical results, SERENA provided some important lessons to be transferred to its operative departments:
Data Quality
Finding the relevant pieces of information hidden in large amounts of data turned out to be more difficult than initially thought. One of the main learnings is that data quality needs to be ensured from the beginning, and this implies spending some more time, effort and money to carefully select the sensor type, data format, tags and correlating information. This is particularly true when dealing with human-generated data: if the input of data by operators is felt to be useless, time-consuming, boring or out of scope, it will inevitably yield bad data.
Some examples of poor data quality are:
a. Missing data
b. Poor data description or no metadata availability
c. Data not or scarcely relevant for the specific need
d. Poor data reliability
There are two solutions: 1) train people on the shop floor to increase their skills in digitalization in general and in data-based decision processes specifically; 2) design more ergonomic human-machine interfaces, involving experts in the HMI field, with the aim of reducing the time needed to insert data and the uncertainty during data input.
These two recommendations lead to a better design of datasets from the beginning (which ensures machine-generated data quality) and reduce the possibility of errors, omissions and poor accuracy in human-generated data.
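As a minimal sketch of such automated checks (not a SERENA component), the following flags issues (a), (b) and (d) above in a hypothetical sensor-log DataFrame with timestamp, sensor_id and value columns.

```python
import pandas as pd

def quality_report(df: pd.DataFrame) -> dict:
    """Flag the poor-quality patterns listed above in a sensor-log DataFrame."""
    return {
        # (a) missing data: share of null cells per column
        "missing_ratio": df.isna().mean().to_dict(),
        # (b) poor description: object-typed columns as a proxy for missing metadata
        "untyped_columns": list(df.select_dtypes(include="object").columns),
        # (d) poor reliability: flat-lined sensors with zero variance
        "flatlined_sensors": [sensor for sensor, group in df.groupby("sensor_id")
                              if group["value"].nunique() <= 1],
        # duplicated timestamp/sensor pairs usually indicate ingestion faults
        "duplicate_rows": int(df.duplicated(["timestamp", "sensor_id"]).sum()),
    }
```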
Data Quantity
PU foaming is a stable, controlled process and turned out to have little variation; precisely because informative deviations are rare, machine learning requires large sets of data to yield accurate results. This aspect of data collection also needs to be designed in advance, months or even years before the real need emerges. This translates into some simple, even counterintuitive guidelines:
1. Anticipate the installation of sensors and the start of data gathering. The best approach is to do it at the equipment's first installation or its first revamp. Don't underestimate the amount of data you need for good machine learning. This, of course, also needs economic justification, since the investment in new sensors and data storage will pay back after some years.
2. Gather more data than needed. Common practice is to design a data-gathering campaign starting from the current need. This can, however, lead to missing the right data history when a future need emerges. In an ideal state of infinite capacity, the data-gathering activities would capture the full ontological description of the system under design. Of course, this is not feasible in all real-life situations, but a good strategy is to populate the machine with as many sensors as possible.
3. Start initiatives to preserve and improve the current datasets, even if they are not immediately needed. For example, start migrating Excel files spread across individual PCs into commonly shared databases, performing good data cleaning and normalization (for example, converting local-language descriptions in data and metadata to English).
Skills
Data Scientists and Process Experts do not yet speak the same language, and it takes significant time and effort from mediators to make them communicate properly. This aspect also needs to be taken into account and carefully planned: companies definitely need to close the "skills" gap, and different strategies are applicable: train Process Experts in data science; train Data Scientists in the subject matter; or develop a new role of Mediator, who sits in between and shares a minimum common ground that enables the two extremes to communicate.
01-10-2017 – 30-09-2020
01-10-2017 – 30-09-2020
PROGRAMS aims at developing an HW/SW suite of solutions capable of:
The PROGRAMS solution will allow SMEs to access the benefits of predictive maintenance at limited cost.
Several challenges limit the successful application of predictive maintenance in factories:
Correct determination of the best maintenance strategies and computation of components' RUL (Remaining Useful Life) require the collection of a vast amount of data, in a format that can be accessed and analyzed easily (a minimal RUL sketch is given at the end of this section).
Robot components show a slow degradation of their performance: data collection must begin as soon as possible.
Predictive maintenance requires different skills, and new professional figures will therefore have to be trained:
The main innovation will be the introduction into production of the MPFQ model fused with AQ control loops: functional integration and correlation between Material, Quality, Process and Appliance functions.
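The RUL computation challenge listed above can be made concrete with a minimal sketch under an assumed linear degradation model: fit a trend to a slowly degrading health indicator and extrapolate to a failure threshold. Real robot components rarely degrade this cleanly, which is precisely why early and abundant data collection matters.

```python
import numpy as np

def rul_linear(health, failure_threshold, dt_hours=1.0):
    """health: periodic health-indicator samples (higher = more degraded).
    Returns the estimated remaining useful life in hours."""
    t = np.arange(len(health)) * dt_hours
    slope, intercept = np.polyfit(t, health, 1)       # linear degradation trend
    if slope <= 0:
        return float("inf")                           # no measurable degradation yet
    t_fail = (failure_threshold - intercept) / slope  # trend crosses the limit here
    return max(t_fail - t[-1], 0.0)                   # hours left from the last sample
```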