MICRO-FAST | A FAST process and production system for high-throughput, highly flexible and cost-efficient volume production of miniaturised components made of a wide range of materials
01-09-2013 - 28-02-2017
01-09-2013 - 28-02-2017
01-09-2012 - 31-08-2016
The data-driven digital twin in the production line is a solution to measure, analyse and react, oriented towards zero-defect manufacturing and the maximization of OEE, improving the sustainability and profitability of the factory of the future.
The application of the Smart Prod ACTIVE system has been demonstrated and validated at foundry level. Within the HPDC production process, the operator and the process manager benefit from adopting a centralized remote control system supporting process monitoring and quality prediction in real time. Decisions are supported by cause-effect correlations and by appropriate reactions suggested by a continuously updated meta-model. The re-usability and flexibility of the Smart Prod ACTIVE system also allow an agile re-start in the case of small-batch production.
The same approach is valid for any multi-stage production chain producing parts for any industrial sector.
The first successful tests and applications involve SMEs.
The challenge lies in sensorization and data acquisition on existing equipment that was not designed for it.
Data mining, real-time alerting and cognitive predictive meta-modelling are not a chimera.
01-01-2015 - 31-12-2017
01-12-2014 - 30-11-2018
01-01-2015 - 31-12-2017
01-02-2015 - 31-01-2018
11-01-2015 - 31-10-2018
11-01-2015 - 31-07-2020
HORSE initiated the transformation of technology labs into Competence Centres closer to industry and the regional ecosystem. HORSE also launched the concept of 'Competence Centre Networking', where not only was expertise shared, but the experience of existing Technology Labs/Competence Centres was used for the establishment of new ones. As such, HORSE is the predecessor of the DIH concept, and was actually involved in the mentoring programme of new DIHs, assisting in the establishment of even more such Innovation Hubs. Regarding the technologies developed, HORSE designed a reference Industry 4.0 architecture, which is applicable in manufacturing environments and takes into account all the different resources (humans, robots, machinery, sensors, etc.) present on a production shop floor.
With the pilot and open call experiments, which include SMEs as the manufacturing end-users, HORSE proved that using the technologies and approaches developed in the project, robotics is feasible and a good investment not only for large companies but also for manufacturing SMEs.
HORSE developed not only technologies, but also methodologies and approaches that have been tested and can be reused for a number of different aspects, such as requirements analysis, system design, system validation and business modelling.
01-09-2016 - 31-08-2019
See also D2.6 Lessons Learned and updated requirements report II and D8.8 Final Evaluation Report of the COMPOSITION IIMS Platform
1. Early design decisions were made on deployment and communication protocols (Docker, MQTT, AMQP). Deciding on the deployment and communication platforms early made test deployment and integration work easier to manage.
2. Inception design (from the DoA) did not specify some components, e.g., for operational management or configuration. The architecture needed additional components to cover system configuration and monitoring.
3. Blockchain is still not a plug-and-play technology and requires a substantial amount of low-level configuration.
4. The Matchmaker should match agents (requesters and suppliers). Moreover, the Matchmaker should match a request with the best available offer.
5. Use cases need to be solidly anchored in the real world of the actors and end users. They must not solely represent what is feasible from a technical point of view, but also reflect non-functional requirements such as regulations and business practices. Otherwise, the business cases would become unsustainable for further exploitation.
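As an aside to point 1 above: a key consequence of choosing a broker such as MQTT is topic-based routing, where subscribers use topic filters with wildcards. The sketch below (illustrative only, not COMPOSITION code; topic names are hypothetical) implements MQTT's standard filter-matching rules, where '+' matches exactly one level and '#', allowed only as the last level, matches the remainder of the topic:

```python
def mqtt_topic_matches(pattern: str, topic: str) -> bool:
    """Return True if an MQTT topic filter matches a concrete topic.

    '+' matches exactly one level; '#' (only valid as the last level)
    matches the rest of the topic, including zero remaining levels.
    """
    p_levels = pattern.split("/")
    t_levels = topic.split("/")
    for i, p in enumerate(p_levels):
        if p == "#":
            # '#' must be the final level of the filter
            return i == len(p_levels) - 1
        if i >= len(t_levels):
            return False  # filter is longer than the topic
        if p != "+" and p != t_levels[i]:
            return False  # literal level mismatch
    # all filter levels matched; topic must not have extra levels
    return len(p_levels) == len(t_levels)
```

For example, the filter "factory/+/temp" matches "factory/line1/temp" but not "factory/line1/line2/temp", while "factory/#" matches every topic under "factory".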
01-10-2016 - 30-09-2019
The assembly collaborative robot considers both the operation being performed and the operator’s anthropometric characteristics for control program selection and part positioning. In addition, the workplace includes multimodal interactions with both the dual-arm assembly and logistic robots as well as with the Manufacturing Execution System. Verbal interaction includes natural speech (i.e. Spanish language) and voice-based feedback messages, while nonverbal interaction is based on gesture commands, considering both left- and right-handed workers, and multichannel notifications (e.g. push notifications, emails, etc.). Furthermore, the maintenance technician is assisted by event-driven intervention request alerts, a maintenance decision support dashboard and AR/VR-based step-by-step on-the-job guidance.
The proposed solution comprises an adaptive smart tool, an AR instruction application using HoloLens wearable devices, and a framework for ensuring digital continuity from the data recorded in the manufacturing engineering system up to the execution and analysis phase.
An AR-based solution is proposed for instruction visualization, also enabling on-the-job training activities and guidance. Regarding ergonomics, an autonomous tool-trolley has been integrated, including voice commands and AR-based gesture steering.
A collaborative robotic cell has been implemented for the deburring operation, where the robot executes the most exhausting phases while the worker focuses on final quality inspection. Regarding the assembly process, an AR solution using ultra-realistic animations has been implemented to guide operators through tasks. Additional AR functionalities include the visualization of textual information (tips, best practices…), access to technical documents and voice recording.
Trust is identified as a key indicator in the pilot. Trust experiments are critical when introducing automation mechanisms that co-operate with workers.
Workers’ opinion is key, especially for decision and acceptance, during the design and development of adaptive automation solutions.
Adaptation within automation mechanisms is reported to be an enhancement at the workplace, according to workers.
The introduction of such automation mechanisms is perceived by workers as helpful, especially when productive tasks are exhausting and may provoke health issues. They are not received with reluctance but as support in workers’ tasks at the workplace. Regarding AR, it is generally considered very useful, although the HMD (HoloLens) is too heavy for long-duration tasks.
01-10-2016 - 30-09-2019
01-10-2016 - 30-09-2019
01-09-2016 - 30-11-2019
01-10-2016 - 30-09-2019
The COROMA modular platform is an innovative approach that has developed seven functional modules to improve the performance of already existing robotic systems:
COROMA provides the flexibility that European metalworking and advanced material manufacturing companies require to compete in the rapidly evolving global market.
COROMA will have a positive impact on employment in the European industry, as:
In this way, the overall results of the project are:
01-10-2016 - 30-09-2019
A use case derived from Continental’s measurement lab has been used for validation, revealing the importance of a careful choice of task properties, of time for employees to familiarize themselves with such a system, and of ensuring the security of sensitive data.
Within Factory2Fit there were two use cases for the co-design process piloted at the Continental plant in Limbach-Oberfrohna. One pilot was carried out for workplace design and one for work process design. An evaluation of the method selection and execution showed good acceptance among the workers who contributed to the design process. To reach positive results during the co-design process it is essential to assess the boundary conditions and the group structure very well.
The developed tool could be extended to become a part of a bigger communication platform, between the equipment provider and their customers, aiming at strengthening their relationship.
SoMeP was piloted at Prima Power, unveiling that the integration of production information and messaging is valuable and time-saving in getting guidance. Gamification can motivate workers to share knowledge (Zikos et al., 2019). The use of social media will require organizational policies e.g. in moderating the content (Aromaa et al., 2019).
Worker Feedback Dashboard was piloted in three factories with ten workers. For user acceptance, it has been crucial that the workers participated in planning how to use the solution, and what kind of work practices were related to its use. The pilot evaluation results indicate that there are potential lead users for the Worker Feedback Dashboard. Introducing the solution would facilitate showing the impacts and could then encourage those who may be more doubtful to join.
The ARAG solution was piloted in a factory of United Technologies Corporation (UTC). The validation results reflected the potential of the solution and technicians’ acceptance of solutions specifically designed to support them in complex operations. Recent studies have shown that gamification tools can be utilized in industrial AR solutions to reduce technicians’ learning curve and increase their cognition (Tsourma et al., 2019).
The on-the-job learning tool was piloted in a UTC factory producing air handling units. The lesson learned is that, in order to make the content more understandable, users must be able to interact with it by viewing the components’ CAD files and by making or reading remarks.
01-10-2016 - 30-09-2019
The smart HMI was tested at E80, with expert and non-expert operators working on an AGV in a real working environment. The availability of such a tool to guide the use and maintenance of complex vehicles was strongly appreciated by workers, since it simplifies the interaction with the vehicle's proprietary user interface and enables structured access to knowledge of specific procedures that had previously been carried out empirically. Moreover, expert operators are now able to take advantage of their experience and draw up ad hoc maintenance plans, customized to the current status of the fleet.
Further studies are being carried out to employ the virtual training to train the customers’ operators without having to wait for the delivery of a newly bought machine, or without having to block a productive machine for training purposes. The ADAPT module will be further developed in order to evaluate integration with the recently released MAESTRO Active HMI, which already incorporates personalization features such as language settings. Finally, discussions are underway with the commercial area to verify whether the use of wearables by customers’ workers can be promoted in order to improve their well-being at home.
Even if robots are well known in Europe, there is a lack of knowledge of their real potential and of the existence of tools able to simplify their programming and reconfiguration. Most industries need to be supported in the process of introducing such tools into their plants. Advanced tools for training are essential.
01-10-2016 - 31-10-2019
FAR-EDGE demonstrates the feasibility and business value of Edge Computing (EC) applied to manufacturing, using Distributed Ledger Technology (DLT) as the key enabler. DLT allows several autonomous local processes to cooperate as peers within the scope of the same global process, the required state synchronization and common business logic being implemented by Smart Contracts. This approach, if applied correctly, results in totally decentralized and fail-safe CPS that can still be easily monitored, controlled and managed centrally.
Adoption of new and disruptive technology is not an easy task for SMEs, due to lack of budget and of internal skills. To mitigate this problem, FAR-EDGE defines specific migration strategies that may help SMEs plan their Industry 4.0 journey, with the support of FAR-EDGE assets.
In FAR-EDGE, the value of DLT is in distributed consensus: enabling the intelligent edge nodes of a system to agree - or disagree - on state transitions, to the effect that any change in global state (e.g., the assignment of a task to a specific node) must be approved collectively; moreover, individual nodes may fail or go offline without compromising the system as a whole. However, standard DLT platforms also maintain an immutable log of transactions (the Blockchain) that is replicated on all nodes, which may represent a significant performance bottleneck in many real-world applications. What we learned from the FAR-EDGE experimentation is that, in most factory automation scenarios, the historical memory of any transaction may be cleared once consensus is safely reached on it - something that is not supported by any current DLT implementation. This form of short-lived state persistency is something that is probably worth experimenting with in the future.
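The consensus-then-prune idea described above can be illustrated with a toy model (purely illustrative, not the FAR-EDGE implementation; class and key names are hypothetical): a state transition, such as assigning a task to a node, is committed only when a majority of edge nodes approve it, after which the transaction record itself can be discarded rather than kept in an immutable log:

```python
class EdgeNode:
    """A peer holding a replica of the global state."""
    def __init__(self, name: str):
        self.name = name
        self.state: dict[str, str] = {}

    def vote(self, key: str, value: str) -> bool:
        # Hypothetical approval policy: accept any assignment
        # to a key that has not been claimed yet.
        return key not in self.state


class ToyLedger:
    """Majority-vote consensus with short-lived persistency:
    once a transition is committed, its record is dropped
    instead of being appended to a permanent blockchain."""
    def __init__(self, nodes: list[EdgeNode]):
        self.nodes = nodes

    def propose(self, key: str, value: str) -> bool:
        approvals = sum(1 for n in self.nodes if n.vote(key, value))
        if approvals * 2 > len(self.nodes):          # strict majority
            for n in self.nodes:
                n.state[key] = value                 # replicate new global state
            return True   # transaction record may now be cleared
        return False      # rejected: global state unchanged
```

With three nodes, the first assignment of "task-42" is approved unanimously and replicated everywhere; a second, conflicting assignment of the same task is rejected by every node, so the global state stays consistent even though no permanent transaction history is kept.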
01-10-2016 - 30-09-2019
Evaluation studies carried out at the premises of Airbus showed positive results for both of the Exoskeleton and KIT services developed for this specific use case. Both physical and mental fatigue of workers were reduced, as an outcome of the Exoskeleton and KIT services respectively. Workers were keen enough to adopt the new technologies to their everyday working activities.
Both WOS and KIT services have been evaluated at COMAU in real life applications, showing that WOS has been well accepted by both operators and engineers as a valuable tool for eliminating motion waste and improve workplace ergonomics in production lines. The evaluation of KIT showed that the developed solution helps to reduce cognitive load of operators, reduce faults and improve efficiency.
KIT, Exoskeleton service and OAST have been evaluated at ROYO premises in real working conditions. KIT has been characterized as a valuable tool that reduces cognitive load and helps workers eliminate uncertainties at the assembly process. The combination of Exoskeleton service with OAST has helped to reduce the physical and mental stress of the operators at the palletization area of ROYO.
01-10-2016 - 31-03-2020
With the provision of the right tools, every person can become part of a manufacturing system, including people with disabilities. These people are eager to work and to feel productive, which can affect global industry very positively.
The developed tool has been applied at VOLVO, utilizing an offline production line similar to the actual one. The perception time of the model has been reduced. Notably, more aspects of the model have been taken into consideration compared to the conventional simulation representation currently used. Finally, collaborative design of the simulation model has been rendered feasible, as a team of production managers can gather and discuss the same 3D model by making annotations.
01-11-2015 - 31-10-2017
01-09-2017 - 28-02-2021
UPTIME will develop a versatile and interoperable unified predictive maintenance platform for industrial & manufacturing assets from sensor data collection to optimal maintenance action implementation. Through advanced prognostic algorithms, it predicts upcoming failures or losses in productivity. Then, decision algorithms recommend the best action to be performed at the best time to optimize total maintenance and production costs and improve OEE.
UPTIME's innovation is built upon the predictive maintenance concept and its technological pillars (i.e. Industry 4.0, IoT and Big Data, Proactive Computing) in order to deliver a unified information system for predictive maintenance. UPTIME's open, modular and end-to-end architecture aims to enable predictive maintenance implementation in manufacturing firms, maximizing the expected utility and exploiting the full potential of predictive maintenance management, sensor-generated big data processing, e-maintenance, proactive computing and industrial data analytics. The UPTIME solution can be applied in the context of the production process of any manufacturing company, regardless of the processes, products and physical models used.
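The interplay between the prognostic algorithms (predicting upcoming failures) and the decision algorithms (recommending the best time to act) can be sketched with a deliberately simple expected-cost rule. This is an illustrative toy, not UPTIME's actual algorithm; the linear failure-probability model and all parameter names are assumptions:

```python
def recommend(rul_hours: float, horizon_hours: float,
              planned_cost: float, unplanned_cost: float) -> str:
    """Toy maintenance decision rule.

    Assumes the probability of failure within the planning horizon
    grows linearly as the horizon approaches the predicted remaining
    useful life (RUL). Recommends acting now when the expected cost
    of waiting exceeds the cost of a planned intervention.
    """
    p_fail = min(1.0, horizon_hours / max(rul_hours, 1e-9))
    expected_wait_cost = p_fail * unplanned_cost
    if expected_wait_cost > planned_cost:
        return "schedule maintenance now"
    return "keep monitoring"
```

For instance, with a predicted RUL of 100 h, a 90 h planning horizon, a planned intervention costing 500 and an unplanned breakdown costing 10000, the expected cost of waiting (0.9 × 10000) far exceeds the planned cost, so the rule recommends immediate maintenance; with a RUL of 10000 h it recommends continued monitoring.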
Key components of UPTIME Platform include:
One of the main learnings is that data quality needs to be ensured from the beginning of the process. This implies spending some more time, effort and money to carefully select the sensor type, data format, tags, and correlating information. This is particularly true when dealing with human-generated data: if the activity of entering data is felt by operators as not useful, time-consuming, boring and out of scope, this will inevitably produce bad data.
Quantity of data is another important aspect as well. A stable and controlled process shows little variation, so failure examples are rare; machine learning, however, requires large sets of data to yield accurate results. This aspect of data collection therefore needs to be designed months, even years in advance, before the real need emerges.
This experience translates into some simple, even counterintuitive guidelines:
1. Anticipate the installation of sensors and data gathering. The best way is to do it during the first installation of the equipment or at its first revamp. Don’t underestimate the amount of data you will need in order to train a good machine learning model. This of course also needs economic justification, since the investment in new sensors and data storage will find payback only after some years.
2. Gather more data than needed.
A common piece of advice is to design a data-gathering campaign starting from the current need. However, this could lead to missing the right data history when a future need emerges. In an ideal state of infinite capacity, the data-gathering activities should be able to capture the full ontological description of the system under design. Of course, this is not feasible in all real-life situations, but a good strategy could be populating the machine with as many sensors as possible.
3. Start initiatives to preserve and improve the current datasets, even if not immediately needed. For example, start migrating Excel files distributed across different PCs into common shared databases, taking care to perform good data cleaning and normalization (for example, converting local-language descriptions in data and metadata to English).
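The migration step above can be sketched as follows. This is a minimal illustration, not a project deliverable: it consolidates scattered per-PC files (shown here as CSV exports for simplicity; real Excel files would need a reader such as openpyxl) into one shared SQLite table, normalizing column names and translating a couple of hypothetical local-language failure labels to English:

```python
import csv
import sqlite3

# Hypothetical Italian-to-English label map for illustration only
TRANSLATIONS = {"guasto": "failure", "usura": "wear"}

def migrate(csv_paths: list[str], db_path: str = "maintenance.db") -> sqlite3.Connection:
    """Consolidate scattered maintenance logs into one shared table,
    normalizing column names and translating local-language labels."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS events (machine TEXT, date TEXT, description TEXT)"
    )
    for path in csv_paths:
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                # normalize header spelling/casing and strip whitespace
                row = {k.strip().lower(): v.strip() for k, v in row.items()}
                desc = TRANSLATIONS.get(row["description"].lower(),
                                        row["description"])
                con.execute("INSERT INTO events VALUES (?, ?, ?)",
                            (row["machine"], row["date"], desc))
    con.commit()
    return con
```

The point of the sketch is the order of operations: clean and normalize while migrating, so the shared database is born consistent rather than inheriting each file's local quirks.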
Finally, the third important learning is that Data Scientists and Process Experts still don’t speak the same language, and it takes significant time and effort from mediators to help them communicate properly. This is also an aspect that needs to be taken into account and carefully planned. Companies definitely need to close the “skills” gap, and there are different applicable strategies:
train Process Experts on data science;
train Data Scientists on subject matter;
develop a new role of Mediators, who sit in between and share a minimum common ground to enable clear communication in extreme cases.
Quantity and quality of data: the available data in the FFT use case mainly consists of legacy data from specific measurement campaigns. The campaigns were mainly targeted at obtaining insights into the effect of operational loads on the health of the asset, which makes them quite suitable for establishing the range and type of physical parameters to be monitored by the UPTIME system. UPTIME_SENSE is capable of acquiring data from mobile assets in transit using different modes of transport. While trials across several transport modes would have been technically achievable, the possibility to perform field trials was limited by the operational requirements of the end-user. Therefore, only one field trial in one transport mode (road transport) was performed, which yielded insufficient data to develop a useful state detection capability. Due to the limited availability of the jig, a laboratory demonstrator was designed to enable partially representative testing of UPTIME_SENSE under lab conditions, to allow improvement of data quantity and diversity and to establish a causal relationship between acquired data and observed failures so that maintenance recommendations can be made.
Installation of sensor infrastructure: during the initial design to incorporate the new sensors into the existing infrastructure, it is necessary to take into consideration the extreme physical conditions present inside the milling station, which require special measures to avoid sensors being damaged or falling off. A flexible approach is adopted, combining internal and external sensors to make the sensor network less prone to failure. Quantity and quality of data: a large amount of collected data is necessary for the training of algorithms. Moreover, the integration of real-time analytics and batch data analytics is expected to provide better insight into the ways the milling and support rollers work and behave under various circumstances.
Quantity and quality of data need to be ensured from the beginning of the process. It is important to gather more data than needed and to have a high-quality dataset. Machine learning requires large sets of data to yield accurate results. Data collection needs however to be designed before the real need emerges. Moreover, it is important having a common ground to share information and knowledge between data scientists and process experts since in many cases they still don’t talk the same language and it takes significant time and effort from mediators to help them communicate properly.
01-10-2016 - 31-03-2020
Twenty-six results were identified in total and their beneficiaries hold the ownership rights exclusively, with the exception of a single shared result. All these results add up to the integrated Z-Fact0r solution, which demonstrates certain innovative and unique features. As a matter of fact, Z-Fact0r is a holistic and innovative ZDM solution which increases overall quality and customer satisfaction by reducing production costs and energy and material consumption, while utilizing almost all the state-of-the-art techniques for implementing ZDM. Zero-Defect Manufacturing is currently the new trend in multi-line manufacturing, and companies offering solutions similar to Z-Fact0r are therefore emerging. However, at the moment there are very few (if any) platforms such as Z-Fact0r available in the market, offering such integrated quality control for the industry.
Eleven results were developed by SMEs in total and their beneficiaries hold the ownership rights exclusively. All SMEs conducted a patent search in order to investigate whether the future exploitation of their results could be at risk due to competition in the respective fields of activity. The technical partners reported that they did not identify in the market any already patented solutions that could impede the commercialization of their components, because the components similar to their innovations that are commercialized, patented or published have different focus areas and different target markets. This finding is very important for the SMEs, as it signifies that they could bring their Z-Fact0r results up to TRL 9 and put them on the market, thus strengthening their financial position and increasing their in-house know-how and expertise.
Concerning Microsemi use case: Firstly, the knowledge and physical equipment already in place will have a lasting legacy within the MICROSEMI facility at Caldicot. This understanding and equipment will continue to be used on the original use case and expanded upon as explained in the next section.
There was a considerable lack of understanding at the beginning of this project about what could be achieved by the type of approach undertaken within the project and whether it could produce results in a manufacturing environment. Now we have an answer, and the Z-Fact0r approach has been a great success.
NECO, as end user, has contributed to the estimation of the benefits Z-Fact0r platform would provide to the manufacturers. In the NECO case:
For bringing the Z-Fact0r solution to TRL 9, NECO would definitely support the Consortium intention of trying to find other funding opportunities for the future, especially in the form of other calls from the European Commission.
For the manufacturing of metallic parts with less stringent dimensional requirements, the 3D scanning equipment is a fundamental part of the solution, especially if many parts with the same geometry are produced daily. This would allow an automated inspection to be put in place and, if the production rate is too high, a significant number of samples could still be controlled and fed to the PREDICT and PREVENT modules. Of course, sensors need to be installed in the machines for enough data to be available for the machine learning processes. Then, the results could be applied not only to the samples but to all parts being produced. For products with larger added value, robotic deburring is an indispensable tool, as it eliminates the need for human intervention in rather sensitive tasks such as the elimination of defects originating in the production line.
01-10-2016 - 30-09-2019
The project developed a zero-defects manufacturing process for large composite parts. Various monitoring systems analyse key steps in the process (lay up, infusion, curing) to provide immediate feedback to the process. Efficiency increases of 30% have been realized.
01-11-2017 - 28-02-2021
The Predictive Cognitive Maintenance Decision Support System (PreCoM) enables its users to detect damage, estimate damage severity, predict damage development, follow up, optimize maintenance (to reduce unnecessary stoppages) and get recommendations (on what, why, where, how and when to perform maintenance). PreCoM is a cloud-based smart PdM system using vibration as the condition monitoring parameter. Accelerometers measuring vibration (of both rotating and non-rotating components), as well as other sensors (e.g. for temperature), have been installed on machines’ significant components (i.e. components whose failures are either expensive or dangerous). Over 20 hardware and software modules (common to all considered, equivalent use cases) are integrated into a single automatic and digitised system that gathers, stores, processes and securely sends data, providing the recommendations necessary for planning and optimizing maintenance and manufacturing schedules. The PreCoM system includes loops and sub-systems for data acquisition, data/sensor quality control, the predictive algorithm, the scheduling algorithm, a follow-up tool, self-healing abilities for specific problems, and an end-user information interface.
To develop and apply statistical models for supporting PdM, it is crucial to have as much failure data as possible, which is not easy to find in companies’ databases. Furthermore, advancing and integrating different technologies into a single automatic and digitised smart PdM system is a challenge that requires close collaboration between research and industry players.
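The core of vibration-based condition monitoring of the kind described above can be sketched in a few lines: compute the RMS level of a signal window and compare it against severity thresholds. This is an illustrative toy, not the PreCoM algorithm; the threshold values are loosely inspired by ISO 10816 velocity zone boundaries (mm/s), and real limits depend on the machine class:

```python
import math

def vibration_rms(samples: list[float]) -> float:
    """Root-mean-square level of a vibration signal window."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def severity(rms: float, warning: float = 2.8, alarm: float = 7.1) -> str:
    """Hypothetical two-level thresholding of a vibration RMS value.

    Thresholds are illustrative (loosely ISO 10816-style velocity
    zones in mm/s); a real system tunes them per machine class.
    """
    if rms >= alarm:
        return "alarm: plan corrective maintenance"
    if rms >= warning:
        return "warning: increase monitoring frequency"
    return "healthy"
```

A production system would feed such an indicator into trend analysis over time rather than judge a single window, but the threshold logic is the same building block.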
01-10-2017 - 31-03-2021
In this case, edge analytics performed by a low-cost edge device (Raspberry Pi) proved that it is feasible to operate predictive maintenance systems like SERENA. However, performance issues were caused by the high sampling frequency (16 kHz) and by background processes running on the device, which affected the measurements. This issue can be reduced by lowering the sampling frequency; alternatives would be hardware with buffer memory on the AD card, or a real-time operating system. It was found that the hardware is inexpensive to invest in, but the solution requires a lot of tailoring, which means that expenses grow with the number of working hours needed to customize the hardware for the final usage location. If there are similar applications that implement the same configuration, cost-effectiveness increases. Furthermore, the production operators, maintenance technicians, supervisors, and staff personnel at the factory need to be further trained on the SERENA system and its features and functionalities.
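The mitigation mentioned above, lowering the sampling frequency, amounts to decimating the acquired signal. A minimal sketch (illustrative only, not the SERENA implementation) using block averaging, which also acts as a crude anti-aliasing low-pass filter:

```python
def decimate(samples: list[float], factor: int) -> list[float]:
    """Reduce the sampling rate by an integer factor via block averaging.

    Averaging each block of `factor` samples crudely low-pass filters
    the signal before downsampling; a production system would apply a
    proper FIR/IIR anti-aliasing filter first.
    """
    n = len(samples) // factor          # trailing partial block is dropped
    return [sum(samples[i * factor:(i + 1) * factor]) / factor
            for i in range(n)]
```

For example, decimating a 16 kHz accelerometer stream by a factor of 4 yields a 4 kHz stream with a quarter of the per-second processing load on the edge device.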
From the activities developed within the SERENA project, the relation between the accuracy and good performance of a CMM machine and the proper functioning of its air-bearing system became clearer. It was confirmed by TRIMEK’s machine operators and personnel that airflow or pressure inputs and values outside the defined threshold directly affect the machine's accuracy. This outcome was made possible by correlating the datasets collected from the sensors installed on the machine axes with the use of the tetrahedron artifact to verify the accuracy of the machine, thanks to the remote real-time monitoring system deployed for TRIMEK’s pilot. This has helped to highlight the importance of a cost- and time-effective maintenance approach and the need to be able to monitor critical parameters of the machine, both for TRIMEK as a company and from the client’s perspective.
Another lesson learned throughout the project concerns AI-based maintenance techniques: multiple large datasets, including failure data, are required to develop accurate algorithms. Since CMM machines are usually very stable, it is difficult to develop algorithms for a fully predictive maintenance approach in the metrology sector, at least over a short period of collection and assessment.
It also became evident that an operator support system is a valuable tool for TRIMEK’s personnel, mainly the operators (and new operators), as an intuitive and interactive guide for performing maintenance tasks, which could be further exploited by adding workflows for maintenance activities beyond the air-bearing system. Additionally, a more customizable scheduler could also represent a useful tool for daily use with customers.
As with any software service or package, it needs to be learned before it can be implemented and used; however, TRIMEK’s personnel are accustomed to managing digital tools.
The SERENA project has provided a deep dive into an almost complete IIoT platform, leveraging the knowledge of all the partners involved and merging many competencies and technological solutions into the platform. The two main aspects of the project for COMAU are the container-based architecture and the analytics pipeline. Indeed, these are the two components that have been most leveraged internally, and they have inspired COMAU's effort in developing the new versions of its IIoT offering portfolio. The predictive maintenance solutions developed within the scope of the project have confirmed the potential of this kind of solution, meeting expectations.
On the other hand, another takeaway was the central need for a huge amount of data, possibly labelled, capturing at least the main behaviour of the machine. This limits the generic notion that predictive maintenance is made of general rules which can easily be applied to any kind of machine with impressive results.
More concretely, predictive maintenance and analytics pipelines in general have the potential to be disruptive in the industrial scenario; on the other hand, it seems to be a process that will take some time before being widely used, built not from a few all-embracing algorithms but from many vertical ones, each applicable to a restricted set of machines or use cases, namely the most relevant ones.
The SERENA system is very versatile, accommodating many different use cases. Prognostics needs a deep analysis to model the underlying degradation mechanisms and the phenomena that cause them. Creating a reasonable RUL calculation for the VDL WEW situation was expected to be complicated; this expectation proved true, as the accuracy of the calculated RUL was not satisfactory. Node-RED is a very versatile tool, well chosen for the implementation of the gateway. Beyond that, the analysis of production data revealed useful information regarding the impact of newly introduced products on the existing production line.
All the experiments conducted have been interpreted in terms of their business implications: a reliable system (i.e. a robust SERENA platform and a robust data connection) and an effective prediction system (i.e. a data analytics algorithm able to identify the mixing head health status in advance) will have an impact on the main KPIs related to foaming machine performance. In particular:
Besides practical results, SERENA provided some important lessons to be transferred into its operative departments:
Data Quality
Finding the relevant piece of information hidden in large amounts of data turned out to be more difficult than initially thought. One of the main learnings is that data quality needs to be ensured from the beginning, and this implies spending more time, effort and money to carefully select sensor types, data formats, tags and correlating information. This is particularly true when dealing with human-generated data: if the activity of entering data is perceived by operators as useless, time-consuming, boring or out of scope, it will inevitably produce bad data.
Some examples of poor quality are represented by:
a. Missing data
b. Poor data description or no metadata availability
c. Data not or scarcely relevant for the specific need
d. Poor data reliability
There are two solutions: 1) train people on the shop floor to increase their skills in digitalization in general and data-based decision processes specifically; 2) design more ergonomic human-machine interfaces, involving HMI experts with the aim of reducing the time needed to enter data and the uncertainty during data input.
These two recommendations lead to a better dataset design from the beginning (which ensures machine-generated data quality) and reduce the likelihood of errors, omissions and poor accuracy in human-generated data.
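The quality problems listed above (missing data, absent metadata, unreliable values) can be caught early with automated checks at ingestion time. The sketch below is purely illustrative: the record fields, thresholds and function names are assumptions for the example, not part of the SERENA platform.

```python
# Hypothetical sketch of automated data-quality checks on a sensor record.
# Field names and plausible ranges are illustrative assumptions.

def check_record(record, required_fields, plausible_ranges):
    """Return a list of data-quality issues found in one sensor record."""
    issues = []
    for field in required_fields:
        if record.get(field) is None:
            issues.append(f"missing: {field}")              # a. missing data
    if not record.get("metadata"):
        issues.append("no metadata")                        # b. no metadata
    for field, (lo, hi) in plausible_ranges.items():
        value = record.get(field)
        if value is not None and not (lo <= value <= hi):
            issues.append(f"implausible {field}={value}")   # d. poor reliability
    return issues

record = {"temperature": 950.0, "pressure": None, "metadata": ""}
issues = check_record(
    record,
    required_fields=["temperature", "pressure"],
    plausible_ranges={"temperature": (0.0, 500.0)},
)
print(issues)
```

Running such checks at the gateway, rather than months later during analysis, is what "ensuring data quality from the beginning" means in practice.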
Data Quantity
PU foaming is a stable, controlled process and turned out to show little variation; machine learning, however, requires large sets of data to yield accurate results. This aspect of data collection therefore needs to be designed in advance, months or even years before the real need emerges. This leads to some simple, even counterintuitive guidelines:
1. Anticipate the installation of sensors and data gathering. The best moment is the first installation of the equipment or its first revamp. Do not underestimate the amount of data needed to train good machine learning models. This, of course, also requires economic justification, since the investment in new sensors and data storage will pay back only after some years.
2. Gather more data than needed. Common practice is to design a data-gathering campaign starting from the current need; this, however, can mean missing the right data history when a future need emerges. In an ideal state of infinite capacity, the data-gathering activities would capture the full ontological description of the system under design. This may not be feasible in real-life situations, but a good strategy is to equip the machine with as many sensors as possible.
3. Start initiatives to preserve and improve the current datasets, even if they are not immediately needed. For example, start migrating Excel files scattered across individual PCs into commonly shared databases, with thorough data cleaning and normalization (for example, translating local-language descriptions in data and metadata into English).
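Guideline 3 can be sketched concretely. The snippet below consolidates rows exported from scattered spreadsheets into a shared database, translating local-language descriptions into English along the way. Table names, column names and the translation map are assumptions for illustration, not part of any project deliverable.

```python
# Illustrative sketch: consolidating spreadsheet exports into a shared
# database with normalization of local-language descriptions to English.
# The translation map and schema are hypothetical examples.
import sqlite3

TRANSLATIONS = {"guasto pompa": "pump failure", "usura stampo": "mold wear"}

def migrate(rows, conn):
    """Insert (machine, date, description) rows, normalizing descriptions."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS maintenance_log "
        "(machine TEXT, date TEXT, description TEXT)"
    )
    for machine, date, desc in rows:
        desc_en = TRANSLATIONS.get(desc.strip().lower(), desc)  # to English
        conn.execute(
            "INSERT INTO maintenance_log VALUES (?, ?, ?)",
            (machine, date, desc_en),
        )
    conn.commit()

conn = sqlite3.connect(":memory:")  # a shared file/server DB in practice
migrate([("press-01", "2019-03-12", "Guasto pompa")], conn)
print(conn.execute("SELECT description FROM maintenance_log").fetchone()[0])
```

Even a modest migration like this makes historical data queryable by everyone, instead of locked inside one person's spreadsheet.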
Skills
Data scientists and process experts do not yet speak the same language, and it takes significant time and effort from mediators to make them communicate properly. This aspect also needs to be taken into account and carefully planned: companies definitely need to close the skills gap, and different strategies are applicable: train process experts in data science, train data scientists in the subject matter, or develop a new role of mediator, who sits in between and shares a minimum common ground enabling the two sides to communicate.
01-10-2017
-31-03-2021
PROGRAMS aims at developing a HW/SW suite of solutions capable of:
The PROGRAMS solution will allow SMEs to access the benefits of Predictive Maintenance with limited costs.
Several challenges limit the successful application of Predictive Maintenance in factories:
Correct determination of the best maintenance strategies and computation of component RUL require the collection of a vast amount of data in a format that can be easily accessed and analyzed.
Robot components show a slow degradation of their performance: data collection must begin as soon as possible.
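A minimal sketch helps show why long data histories matter for RUL. Assuming, purely for illustration, that a health indicator degrades linearly, RUL can be estimated by fitting a trend line and extrapolating to a failure threshold; real robot components rarely degrade this simply, and the function and data below are hypothetical.

```python
# Minimal RUL sketch under a linear-degradation assumption (illustrative).

def estimate_rul(times, health, failure_threshold):
    """Fit a least-squares line through (time, health); return the time left
    until the fitted line crosses failure_threshold, or None if the
    indicator is not trending downward."""
    n = len(times)
    mt, mh = sum(times) / n, sum(health) / n
    cov = sum((t - mt) * (h - mh) for t, h in zip(times, health))
    var = sum((t - mt) ** 2 for t in times)
    slope = cov / var
    if slope >= 0:  # no downward trend: cannot extrapolate a failure time
        return None
    intercept = mh - slope * mt
    t_fail = (failure_threshold - intercept) / slope
    return t_fail - times[-1]

# Health drops 0.01 per cycle from 1.0; threshold 0.7 -> ~26 cycles remain
rul = estimate_rul([0, 1, 2, 3, 4], [1.00, 0.99, 0.98, 0.97, 0.96], 0.7)
print(round(rul, 1))
```

The catch is visible even in this toy example: with slow degradation, a trend only emerges from long observation windows, which is exactly why data collection must begin as soon as possible.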
Predictive maintenance requires different skills, and thus new professional figures will have to be trained:
01-10-2017
-30-09-2020
01-10-2017
-31-03-2021
GESTAMP, besides getting familiar with Z-BRE4K's solution validation and assessment methodology and understanding its importance, gained a better internal understanding of its readiness to apply predictive maintenance solutions to its plants, while new mitigation actions related to process flaws and defect identification were developed during Z-BRE4K.
PHILIPS supports the idea of predictive maintenance, of "listening to the machines", and understands that the key to success is close contact between technology providers and domain experts, in which data integration/architecture and machine learning are both very important.
SACMI-CDS discovered the importance of collaborating not only with mechanical engineering and maintenance professionals but also with experts of different technical backgrounds, who together can improve multi-tasking, combine shop-floor and office-related activities, and schedule activities during the working day.
In general, after the solution implementation (TRL5), testing of the system on the shop floor (TRL6) and validation of the Z-BRE4K solution (TRL7) at the end users' sites, the final lessons learnt can be summarised as follows:
01-10-2017
-30-09-2021
CETMA was able to exploit the possibilities that customized simulation offers to SMEs specialized in boat hull manufacturing. Thanks to cloud resources, enough computing power is available to analyze different scenarios in a few days instead of several weeks.
Designers at CATMARINE and SKA are now able to achieve high-quality products by analyzing different manufacturing scenarios without wasting time, money and material. The platform optimizes the resin injection points/vents and verifies the presence of defects in the final product, thus ensuring complete and correct mold filling.
Tools developed and deployed on CloudiFacturing platform during this experiment allow end-users who operate water quenches to derive operational conditions of the water quench and thus, increase precision and repeatability of the process and decrease its dependence on the experience of the operator. The outcomes of the experiment also bring new knowledge to the whole process. Additionally, the developed technologies allow using numerical modeling and simulation in the design of a new generation of water quenches and their components.
The realized progress advances the state of the art in several aspects:
A recent study indicated that VARTM is among the fastest-growing technologies in the composites industry.
The cloud-based solution will reduce the cost of entering VARTM technology by 40% to 60%. This will lead to wider adoption of the technology with significant advantages in terms of quality, repeatability, and reduction of environmental and safety impact for many SMEs around the EU.
In addition to this, affordable simulations will lead to fast prototyping, experimentation, and bolder design.
From a business point of view, this project allowed CATMARINE to reduce its Time To Market (TTM), its process design cost and its material waste. Looking ahead, this technology will allow the shipyard to offer its customers innovative solutions, especially when building complex geometries with expensive materials such as carbon fiber and epoxy resin.
For SKA, VARTM simulation will be a new service to add to its catalog, as it already offers for FEM analysis.
"In our field, it is no simple task to find a partner who is not only a top expert in a given field but also able to take the initiative in finding innovative solutions. It is only in more challenging situations that cooperation with real professionals is genuinely appreciated. The CloudiFacturing project has certainly been the right choice", Ondřej Tůma, Managing Director, FERRAM STROJÍRNA s.r.o.
The establishment of the cloud cluster solution for running SyMSpace has greatly improved the engineering capacity of LCM, which is no longer constrained by the bottleneck of local computation resources. Being able to reserve as many cloud computing resources as necessary at any given time removes the need to invest in hardware and maintenance just to cover peak loads. This means a great improvement in customer satisfaction.
Additionally, already five industrial partners are testing SyMSpace on a pay-per-use basis in the cloud. The main USPs are the low barrier for access (no upfront costs, no long-time installation, etc.) and the low total costs.
A certain interest has also come from academic players. As SyMSpace will see an open-source release in the near future, this seems an interesting opportunity for academic researchers working on electric machines to run, and eventually publish or contribute, their own algorithms. With the cloud solution, the access barrier for remote partners has completely fallen away: at the beginning of September 2018, the solution was presented to a partner of the WEMPEC consortium at the University of Wisconsin-Madison, who is considering applying the SyMSpace cloud solution to research in magnetic bearings.
This example shows how international contact, also across research areas, has been facilitated by the cloud cluster solution of SyMSpace. This is an important driver of innovation potential at LCM, which, as a trans-academic-industrial player, relies heavily on close contacts with other research institutions.
The demonstration of the faster and more reliable prototyping production process is expected to be a major improvement both internally and externally at Hanning. However, due to the delays and difficulties experienced in the implementation of the winding process, this step still needs to be realized. The potential, however, is impressive as demonstrated in the KPI metrics below.
Table 12: experiment 1 impact summary
The combined use of a simulation solution based on a numerical model and of remote HPC resources has enabled a new design and development process for catamaran hulls.
The solution, which relies on the CloudiFacturing platform, gives two kinds of advantages. Technically, more efficient and less defect-prone processes can be conceived and analyzed before their adoption. Economically, SMEs can afford the use of HPC resources without heavy investments.
"Thanks to the CloudiFacturing technology we can now take a look inside the VARTM process and switch our manufacturing process to a safer, more reliable and cheaper one", Gabriele Totisco, Catmarine CEO.
The experiment setup was well prepared and discussed by the experiment partners LCM and Hanning. The implementation of the solution in the CloudiFacturing Marketplace, however, was unclear and needed significant efforts during the early stages of the experiment. This partly concerned the technical solution and partly was felt in the definition and in conveying the experiment structure to the consortium.
While this has added some unplanned efforts to the experiment which then reduced the time available for the practical implementations, the cause seems quite unavoidable for wave 1 of the experiments since the development of the Marketplace happened in parallel to the experiments. This situation will, therefore, automatically improve for the experiments in wave 2.
01-10-2018
-31-03-2022
ROSSINI develops and demonstrates technologies enabling a significant advancement in HRC. They are:
These technologies will be then integrated into the ROSSINI Platform architecture.
Expected achievements: 15% increase in the OECD Job Quality Index through work environment and safety improvements; 20% reduction in production reconfiguration time and cost; reduction of the impacts and costs of heavy work; increase in overall job satisfaction and job attractiveness; increased value-chain integration and stakeholder satisfaction.
HRC applications pose several challenges to the manufacturing industry, which faces an increasing need for automation and scalability, notably in SMEs. Moreover, at the moment, HRC applications also imply huge investments in effort, time and intellectual capital to integrate robots and sensors into the manufacturing workflow, which most European SMEs cannot afford, notably when production combines low volume with high mix. Through the ROSSINI project, the implementation of real and cost-effective HRC contributes to redesigning workplaces by combining automation and lean manufacturing concepts, with a drastic reduction of conversion and reconfiguration costs.
The development of the ROSSINI Modular KIT brought significant advances in technology and in the awareness of collaborative robotics across Europe. In particular, the set of efficient and modular tools developed within ROSSINI enables and eases various specific activities, from hazard assessment to the detection of multiple humans in a monitored area where robots are working. Nevertheless, important activities still require considerable effort in terms of knowledge, development and collaboration: from the need to refine existing interfaces (or define new, well-established ones) to the identification of solutions able to bring human-robot interaction even closer (and make it more trustworthy), also in terms of standardization, which still presents several gaps on this topic.
In particular, the following topics have been identified as important issues to tackle in future activities and research:
ROSSINI helps European factories attract a skilled workforce thanks to the attention paid to job quality and employee satisfaction.