MICRO-FAST | A FAST process and production system for high-throughput, highly flexible and cost-efficient volume production of miniaturised components made of a wide range of materials
01-09-2013 – 28-02-2017
01-09-2013 – 28-02-2017
01-01-2015 – 31-12-2017
01-12-2014 – 30-11-2018
01-02-2015 – 31-01-2018
01-01-2015 – 31-12-2017
01-10-2016 – 30-09-2019
Evaluation studies carried out at the premises of Airbus showed positive results for both the Exoskeleton and KIT services developed for this specific use case. Physical and mental fatigue of workers were reduced, as outcomes of the Exoskeleton and KIT services respectively. Workers were keen to adopt the new technologies in their everyday working activities.
Both the WOS and KIT services have been evaluated at COMAU in real-life applications. WOS was well accepted by both operators and engineers as a valuable tool for eliminating motion waste and improving workplace ergonomics in production lines. The evaluation of KIT showed that the developed solution helps to reduce operators' cognitive load, reduce faults and improve efficiency.
KIT, the Exoskeleton service and OAST have been evaluated at ROYO premises under real working conditions. KIT has been characterized as a valuable tool that reduces cognitive load and helps workers eliminate uncertainties in the assembly process. The combination of the Exoskeleton service with OAST has helped to reduce the physical and mental stress of operators in ROYO's palletization area.
01-10-2016 – 31-03-2020
With the provision of the right tools, every person can become part of a manufacturing system, including people with disabilities. These people are eager to work and feel productive, which can benefit global industry considerably.
The developed tool has been applied at VOLVO, using an offline production line similar to the actual one. The perception time of the model has been reduced. Notably, more aspects of the model have been taken into consideration than in the conventional simulation representation currently used. Finally, collaborative design of the simulation model has become feasible, as a team of production managers can gather and discuss the same 3D model by making annotations.
01-10-2016 – 30-09-2019
A use case derived from Continental's measurement lab has been used for validation, revealing the importance of choosing task properties carefully, allowing time to familiarize employees with such a system, and ensuring the security of sensitive data.
Within Factory2Fit there were two use cases for the co-design process, piloted at the Continental plant in Limbach-Oberfrohna: one pilot for workplace design and one for work process design. An evaluation of the method selection and execution showed good acceptance among the workers who contributed to the design process. To reach positive results during the co-design process, it is essential to assess the boundary conditions and the group structure very well.
The developed tool could be extended to become a part of a bigger communication platform, between the equipment provider and their customers, aiming at strengthening their relationship.
SoMeP was piloted at Prima Power, unveiling that the integration of production information and messaging is valuable and time-saving in getting guidance. Gamification can motivate workers to share knowledge (Zikos et al., 2019). The use of social media will require organizational policies e.g. in moderating the content (Aromaa et al., 2019).
Worker Feedback Dashboard was piloted in three factories with ten workers. For user acceptance, it has been crucial that the workers participated in planning how to use the solution, and what kind of work practices were related to its use. The pilot evaluation results indicate that there are potential lead users for the Worker Feedback Dashboard. Introducing the solution would facilitate showing the impacts and could then encourage those who may be more doubtful to join.
The ARAG solution was piloted in a factory of United Technologies Corporation (UTC). The validation results reflected the potential of the solution and the technicians' acceptance of solutions specifically designed to support them in complex operations. Recent studies have shown that gamification tools can be utilized in industrial AR solutions to reduce technicians' learning curve and increase their cognition (Tsourma et al., 2019).
The on-the-job learning tool was piloted in a UTC factory producing air handling units. The lesson learned is that, to make the content more understandable, users must be able to interact with it by viewing the components' CAD files and making or reading remarks.
01-10-2016 – 30-09-2019
01-09-2016 – 30-11-2019
01-10-2016 – 30-09-2019
The project developed a zero-defects manufacturing process for large composite parts. Various monitoring systems analyse key steps in the process (lay up, infusion, curing) to provide immediate feedback to the process. Efficiency increases of 30% have been realized.
01-11-2015 – 31-10-2017
01-10-2017 – 31-03-2021
In this case, edge analytics performed by a low-cost edge device (a Raspberry Pi) proved that it is feasible to run predictive maintenance systems such as SERENA on inexpensive hardware. However, performance issues were caused by the high sampling frequency (16 kHz) and by background processes running on the device, which affected the measurements. Lowering the sampling frequency reduces this issue; alternatives would be hardware with buffer memory on the A/D card or a real-time operating system implementation. The hardware itself is inexpensive, but the solution requires a lot of tailoring, so expenses grow with the number of working hours needed to customize the hardware for the final usage location. If similar applications implement the same configuration, cost-effectiveness increases. Furthermore, the production operators, maintenance technicians, supervisors, and staff personnel at the factory need further training on the SERENA system and its features and functionalities.
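The sampling-rate reduction suggested above can be sketched in a few lines. This is an illustrative sketch only: the moving-average pre-filter and the 16 kHz / 2 kHz figures stand in for SERENA's actual signal chain.

```python
def moving_average(signal, window):
    """Cheap low-pass filter to limit aliasing before decimation."""
    out = []
    acc = 0.0
    for i, x in enumerate(signal):
        acc += x
        if i >= window:
            acc -= signal[i - window]
        out.append(acc / min(i + 1, window))
    return out

def decimate(signal, factor, window=None):
    """Reduce the sampling rate by an integer factor.

    The moving-average pre-filter keeps the per-sample cost low, which
    matters on a small edge device such as a Raspberry Pi.
    """
    window = window or factor
    filtered = moving_average(signal, window)
    return filtered[::factor]

# Example: bring a 16 kHz vibration stream down to 2 kHz (factor 8).
raw = [float(i % 16) for i in range(16000)]   # one second of dummy data
reduced = decimate(raw, factor=8)
```

Decimating on the device cuts both the CPU load of downstream analytics and the volume of data sent off the edge.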
The activities developed within the SERENA project clarified the relation between the accuracy and performance of a CMM machine and the proper functioning of its air bearings system. TRIMEK's machine operators and personnel confirmed that airflow or pressure values outside the defined threshold directly affect machine accuracy. This outcome was made possible by correlating the datasets collected from the sensors installed on the machine axes with accuracy verifications performed using the tetrahedron artifact, thanks to the remote real-time monitoring system deployed for TRIMEK's pilot. This has highlighted the importance of a cost- and time-effective maintenance approach and the need to monitor critical machine parameters, both for TRIMEK as a company and from the client's perspective.
Another lesson learned throughout the project concerns AI-based maintenance techniques: they require multiple large datasets, including failure data, to develop accurate algorithms. Since CMM machines are usually very stable, developing algorithms for a fully predictive maintenance approach in the metrology sector is difficult, at least within a short collection and assessment period.
It also became visible that an operator support system is a valuable tool for TRIMEK's personnel, mainly operators (including new operators), as an intuitive and interactive guide for performing maintenance tasks; it could be exploited further by adding workflows for maintenance activities beyond the air bearings system. Additionally, a more customized scheduler could be a useful tool for daily use with customers.
As with any software service or package, users must learn to deploy and use it; however, TRIMEK's personnel are accustomed to working with digital tools.
The SERENA project has provided a deep dive into an almost complete IIoT platform, leveraging the knowledge of all the partners involved and merging many competencies and technological solutions into the platform. The two main aspects of the project for COMAU are the container-based architecture and the analytics pipeline: these are the two components that have been leveraged the most internally, and they have inspired COMAU's effort in developing the new versions of its IIoT offering portfolio. The predictive maintenance solutions developed within the scope of the project have confirmed the potential of this kind of solution, meeting expectations.
Conversely, another takeaway was the central need for a huge amount of data, possibly labelled, covering at least the main behaviour of the machine. This undermines the generic notion that predictive maintenance consists of general rules which can easily be applied to any kind of machine with impressive results.
More concretely, predictive maintenance and analytics pipelines in general have the potential to be disruptive in industrial scenarios, but it will likely take some time before they are widely used. They will be built not from a few all-embracing algorithms but from many vertical ones, each applicable to a restricted set of the most relevant machines or use cases.
The SERENA system is very versatile, accommodating many different use cases. Prognostics require a deep analysis to model the underlying degradation mechanisms and the phenomena that cause them. Creating a reasonable RUL calculation for the VDL WEW situation was expected to be complicated, and this expectation proved true: the accuracy of the calculated RUL was not satisfactory. Node-RED is a very versatile tool, well chosen for the implementation of the gateway. Nevertheless, the analysis of production data revealed useful information regarding the impact of newly introduced products on the existing production line.
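The difficulty of RUL estimation can be illustrated with the simplest possible prognostic model, a linear extrapolation of a health indicator. Real degradation mechanisms are rarely this well behaved, which is one reason calculated RUL accuracy can disappoint, as it did in the VDL WEW case. All names and numbers below are hypothetical.

```python
def estimate_rul(history, failure_threshold):
    """Estimate remaining useful life by linear extrapolation.

    `history` is a list of (time, health_indicator) samples where the
    indicator grows as the component degrades. A least-squares line is
    fitted and extrapolated to the failure threshold.
    """
    n = len(history)
    ts = [t for t, _ in history]
    hs = [h for _, h in history]
    mean_t = sum(ts) / n
    mean_h = sum(hs) / n
    slope = (sum((t - mean_t) * (h - mean_h) for t, h in history)
             / sum((t - mean_t) ** 2 for t in ts))
    intercept = mean_h - slope * mean_t
    if slope <= 0:
        return None  # no measurable degradation trend to extrapolate
    t_fail = (failure_threshold - intercept) / slope
    return max(0.0, t_fail - ts[-1])

# Example: an indicator rising 0.1 per hour, failure expected at 2.0.
samples = [(0, 0.0), (1, 0.1), (2, 0.2), (3, 0.3)]
rul = estimate_rul(samples, failure_threshold=2.0)
```

When the true degradation is nonlinear or noisy, a fit like this drifts badly, which is why the deep mechanism analysis mentioned above is needed.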
All the experiments conducted have been interpreted in terms of their business implications: a reliable system (i.e. a robust SERENA platform and a robust data connection) and an effective prediction system (i.e. a data analytics algorithm able to identify mixing-head health status in advance) will have an impact on the main KPIs related to foaming machine performance. In particular:
Besides practical results, SERENA provided some important lessons to be transferred into its operative departments:
Data Quality
Finding the relevant piece of information hidden in large amounts of data turned out to be more difficult than initially thought. One of the main learnings is that data quality needs to be ensured from the beginning, which implies spending more time, effort and money to carefully select sensor type, data format, tags, and correlating information. This is particularly true when dealing with human-generated data: if operators feel that entering data is not useful, time-consuming, boring or out of scope, this will inevitably produce bad data.
Some examples of poor quality are represented by:
a. Missing data
b. Poor data description or no metadata availability
c. Data not or scarcely relevant for the specific need
d. Poor data reliability
There are two solutions: 1) train people on the shop floor to increase their skills in digitalization in general and in data-based decision processes specifically; 2) design more ergonomic human-machine interfaces, involving HMI experts with the goal of reducing the time and uncertainty of data input.
These two recommendations lead to better dataset design from the beginning (which ensures the quality of machine-generated data) and reduce the chance of errors, omissions and poor accuracy in human-generated data.
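The poor-quality patterns listed above (missing data, missing metadata, irrelevant fields, unreliable values) can be screened for automatically. A minimal sketch, assuming records arrive as dictionaries with a `metadata` block; all field names and plausibility bounds are hypothetical.

```python
def audit_records(records, required_fields):
    """Flag common data-quality problems in a batch of sensor records."""
    issues = []
    for i, rec in enumerate(records):
        # a. Missing data
        for field in required_fields:
            if field not in rec or rec[field] in (None, ""):
                issues.append((i, field, "missing"))
        # b. Poor data description / no metadata
        meta = rec.get("metadata", {})
        if not meta.get("unit") or not meta.get("sensor_id"):
            issues.append((i, "metadata", "incomplete"))
        # d. Poor reliability: flag wildly implausible numeric values
        value = rec.get("value")
        if isinstance(value, (int, float)) and not (-1e6 < value < 1e6):
            issues.append((i, "value", "implausible"))
    return issues

records = [
    {"value": 3.2, "metadata": {"unit": "bar", "sensor_id": "p1"}},
    {"value": None, "metadata": {"unit": "bar"}},  # missing value + metadata
]
problems = audit_records(records, required_fields=["value"])
```

Running such an audit at ingestion time catches bad records while the operator who produced them can still be asked about them.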
Data Quantity
PU foaming is a stable, controlled process that turned out to have little variation; since machine learning requires large sets of data to yield accurate results, data collection must run for a long time. This aspect of data collection therefore needs to be designed in advance, months or even years before the real need emerges. This leads to some simple, even counterintuitive guidelines:
1. Anticipate the installation of sensors and data gathering. The best moment is the equipment's first installation or its first revamp activity. Don't underestimate the amount of data you need to train a good machine learning model. This, of course, also requires economic justification, since the investment in new sensors and data storage will pay back only after some years.
2. Gather more data than needed. Common practice is to design a data-gathering campaign starting from the current need; this could, however, mean missing the right data history when a future need emerges. In an ideal state of infinite capacity, the data-gathering activities would capture the full ontological description of the system under design. This may not be feasible in real-life situations, but a good strategy is to populate the machine with as many sensors as possible.
3. Start initiatives to preserve and improve the current datasets, even if not immediately needed. For example, start migrating Excel files spread across individuals' PCs into commonly shared databases, with proper data cleaning and normalization (for example, converting local-language descriptions in data and metadata to English).
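Guideline 3 can be started with very little tooling. A sketch of such a migration, assuming the spreadsheets have first been exported to CSV; the table name, column names and the Italian-to-English header mapping are illustrative.

```python
import csv
import io
import sqlite3

def migrate_csv(conn, table, csv_text, rename=None):
    """Load one CSV export (e.g. from a spreadsheet on an individual PC)
    into a shared SQLite table, normalizing column names on the way.
    `rename` maps local-language headers to English ones.
    """
    rename = rename or {}
    reader = csv.DictReader(io.StringIO(csv_text))
    cols = [rename.get(c, c).strip().lower() for c in reader.fieldnames]
    conn.execute(f"CREATE TABLE IF NOT EXISTS {table} ({', '.join(cols)})")
    placeholders = ", ".join("?" for _ in cols)
    for row in reader:
        values = [row[c] for c in reader.fieldnames]
        conn.execute(f"INSERT INTO {table} VALUES ({placeholders})", values)
    conn.commit()

conn = sqlite3.connect(":memory:")
# Italian headers are translated to English during the migration.
migrate_csv(conn, "maintenance_log",
            "Data,Descrizione\n2020-01-02,pump seal replaced\n",
            rename={"Data": "date", "Descrizione": "description"})
rows = conn.execute("SELECT date, description FROM maintenance_log").fetchall()
```

In production the target would be a shared server database rather than SQLite, but the cleaning and renaming step is the same.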
Skills
Data scientists and process experts do not yet speak the same language, and it takes significant time and effort from mediators to make them communicate properly. This aspect also needs to be taken into account and carefully planned: companies definitely need to close the skills gap, and different strategies apply: train process experts in data science; train data scientists in the subject matter; or develop a new role of mediator, who sits in between and shares a minimum common ground to enable the two sides to communicate.
01-10-2017 – 30-09-2020
01-11-2018 – 31-10-2022
Benefits:
01-11-2019 – 31-10-2023
01-01-2020 – 31-12-2023
The Battery Pilot will aim at demonstrating that the DigiPrime platform can unlock a sustainable business case targeting the remanufacturing and re-use of second life Li-Ion battery cells with a cross-sectorial approach linking the e-mobility sector and the renewable energy sector, specifically focusing on solar and wind energy applications.
As the proactive exploitation of the DigiPrime platform enables car-monitored SOH tracing and availability, less testing is needed to assess the residual capacity of the battery. Moreover, knowing the structure of the battery packs, a decision support system can adjust the de- and remanufacturing strategy accordingly and select the most suitable cells for reassembly into second-life modules, thus unlocking a systematic circular value chain for the re-use of Li-ion battery cells. Furthermore, excessively degraded cells which cannot be re-used can be sent to high-value recycling, based on knowledge of their material composition.
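The cell-selection step described above could look roughly like the following sketch. The SOH threshold, module size and field names are illustrative assumptions, not DigiPrime parameters.

```python
def triage_cells(cells, reuse_threshold=0.8, module_size=4):
    """Sort battery cells into second-life modules or recycling,
    based on their state of health (SOH).
    """
    reusable = sorted((c for c in cells if c["soh"] >= reuse_threshold),
                      key=lambda c: c["soh"], reverse=True)
    recycle = [c for c in cells if c["soh"] < reuse_threshold]
    # Group cells of similar SOH so a module is not limited by one weak cell.
    modules = [reusable[i:i + module_size]
               for i in range(0, len(reusable) - module_size + 1, module_size)]
    leftover = reusable[len(modules) * module_size:]
    return modules, leftover + recycle

cells = [{"id": i, "soh": s} for i, s in
         enumerate([0.95, 0.91, 0.88, 0.85, 0.83, 0.60, 0.55])]
modules, rest = triage_cells(cells)
```

Sorting by SOH before grouping reflects the point that second-life module performance is bounded by its weakest cell.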
01-12-2019 – 30-11-2022
01-01-2021 – 30-06-2024
The method provides a modular approach for knowledge graph population and curation from complex and heterogeneous industrial/manufacturing domains. It helps different stakeholders to represent and share their mental models in a uniform standard representation that can be interpreted by both humans and AI agents.
The main contribution is a novel machine learning architecture that can process vision data and sensor time-series data concurrently to obtain greater defect detection accuracy.
High-accuracy defect detection, which leads to reduced scrap rates and cost savings
Besides addressing the current lack of dynamic knowledge graph embedding methods, the partner’s work also includes a lean updating approach to efficiently recognize knowledge graph modifications.
Knowledge graph embeddings can be computed on the fly to use them in downstream applications within dynamic domains such as manufacturing.
Even though state-of-the-art process orchestration tools are equipped with logging mechanisms, they do not come with semantically enhanced digital shadows of process knowledge. The aim is to close this gap. Process engineers do not need to become knowledge graph experts. They can just plug in our extension to automatically retrieve knowledge graph representations of their processes.
The current state of the art is limited in terms of team modelling and dynamic adaptability. The project's approach addresses both by integrating methods for process modelling and knowledge-update mechanisms to produce an enriched digital shadow that goes beyond static models.
This software tool is designed for real-time process management and ensures the effective execution of these teaming processes. It performs continuous observation and analysis, allowing for the immediate detection of any deviations or disruptions in the team workflows. This recognition supports a prompt response, facilitating adjustments to maintain the operation of the platform.
Even though state-of-the-art process orchestration tools are equipped with logging mechanisms, they do not come with semantically enhanced digital shadows of process knowledge. The aim is to close this gap. The knowledge graph representation not only provides semantic linking/integration of heterogeneous manufacturing knowledge (process, product, resources) but also supports reusability, extensibility, and interoperability.
The development of advanced machine diagnostics software based on machine learning is intricately linked with hardware development. Accurate information obtained from machines and connected devices is crucial to the correct functioning of the software. Software sockets have been developed to link incoming data from injection machines with a data language that the system can understand.
The developed software sockets allow a seamless data acquisition from in-plant machines so data can be immediately consumed by the system. The data ingestion allows the AI models to be trained and used in a more simplified manner.
Usage of wide-angle cameras to monitor a very large workplace and analyse multiple workers simultaneously, creating a digital shadow of a human-centred work process.
Rapid on-the-fly analysis of multi-person scenes on a large scale reduces occupational health officers' analysis time and allows for accurate work situation descriptions, as well as simplified follow-up improvement comparisons.
01-01-2023 – 31-12-2026
11-01-2015 – 31-10-2018
01-10-2016 – 30-09-2019
The assembly collaborative robot considers both the operation being performed and the operator's anthropometric characteristics for control program selection and part positioning. In addition, the workplace includes multimodal interactions with the dual-arm assembly and logistic robots as well as with the Manufacturing Execution System. Verbal interaction includes natural speech (in Spanish) and voice-based feedback messages, while nonverbal interaction is based on gesture commands, accommodating both left- and right-handed workers, and multichannel notifications (e.g. push notifications, emails, etc.). Furthermore, the maintenance technician is assisted by on-event intervention request alerts, a maintenance decision support dashboard and AR/VR-based step-by-step on-the-job guidance.
The proposed solution comprises an adaptive smart tool, an AR instruction application using HoloLens wearable devices, and a framework for ensuring digital continuity from the data recorded in the manufacturing engineering system through to the execution and analysis phase.
An AR-based solution is proposed for instruction visualization, also enabling on-the-job training activities and guidance. Regarding ergonomics, an autonomous tool trolley has been integrated, including voice commands and AR-based gesture steering.
A collaborative robotic cell has been implemented for the deburring operation, where the robot executes the most exhausting phases while the worker focuses on final quality inspection. For the assembly process, an AR solution using ultra-realistic animations has been implemented to guide operators through tasks. Additional AR functionalities include the visualization of textual information (tips, best practices, etc.), access to technical documents and voice recording.
Trust is identified as a key indicator in the pilot. Trust experiments are critical when introducing automation mechanisms that co-operate with workers.
Workers’ opinion is key, especially for decision and acceptance, during the design and development of adaptive automation solutions.
Adaptation within automation mechanisms is reported to be an enhancement at the workplace, according to workers.
The introduced solutions are perceived by workers as helpful, especially when production tasks are exhausting and may provoke health issues. They are not received with reluctance but as supportive of workers' tasks at the workplace. Regarding AR, it is generally considered very useful, although the HMD (HoloLens) is too heavy for long tasks.
01-10-2016 – 30-09-2019
01-10-2016 – 30-09-2019
The smart HMI was tested at E80, with expert and non-expert operators working on an AGV in a real working environment. The availability of such a tool to guide the use and maintenance of complex vehicles was strongly appreciated by workers, since it simplifies interaction with the vehicle's proprietary user interface and enables structured access to knowledge of specific procedures that had previously been carried out empirically. Moreover, expert operators are now able to take advantage of their experience and prepare ad hoc maintenance plans, customized to the current status of the fleet.
Further studies are being carried out to use virtual training to train customers' operators without waiting for the delivery of a newly bought machine, and without blocking a productive machine for training purposes. The ADAPT module will be developed further to evaluate integration with the recently released MAESTRO Active HMI, which already incorporates personalization features such as language settings. Finally, discussions are underway with the commercial area to verify whether the use of wearables by customers' workers can be promoted to improve their well-being at home.
Even though robots are well known in Europe, there is a lack of knowledge about their real potential and about the existence of tools able to simplify their programming and reconfiguration. Most industries need support in introducing such tools into their plants. Advanced tools for training are essential.
01-09-2016 – 31-08-2019
See also D2.6 Lessons Learned and updated requirements report II and D8.8 Final Evaluation Report of the COMPOSITION IIMS Platform
1. Early design decisions were made on deployment and communication protocols (Docker, MQTT, AMQP). Deciding on the deployment and communication platforms early made test deployment and integration work easier to manage.
2. Inception design (from the DoA) did not specify some components, e.g., for operational management or configuration. The architecture needed additional components to cover system configuration and monitoring.
3. Blockchain is still not a plug-and-play technology and requires a substantial amount of low-level configuration.
4. The Matchmaker should match agents (requesters and suppliers); moreover, it should match a request with the best available offer.
5. Use cases need to be solidly anchored in the real world of the actors and end users. They must not solely represent what is feasible from a technical point of view, but also reflect non-functional requirements such as regulations and business practices. Otherwise, the business cases would become unsustainable for further exploitation.
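The matchmaking behaviour described in lesson 4 can be sketched as a filter-then-rank step. The offer schema below is hypothetical, not the COMPOSITION Matchmaker's actual data model.

```python
def match(request, offers):
    """Return the best available offer for a request: filter offers by
    capability and capacity, then pick the cheapest feasible one.
    """
    feasible = [o for o in offers
                if request["service"] in o["capabilities"]
                and o["capacity"] >= request["quantity"]]
    return min(feasible, key=lambda o: o["unit_price"], default=None)

offers = [
    {"supplier": "A", "capabilities": {"recycling"}, "capacity": 10, "unit_price": 5.0},
    {"supplier": "B", "capabilities": {"recycling", "transport"}, "capacity": 50, "unit_price": 4.0},
    {"supplier": "C", "capabilities": {"transport"}, "capacity": 50, "unit_price": 1.0},
]
best = match({"service": "recycling", "quantity": 20}, offers)
```

Per lesson 5, a real ranking would also weigh non-functional criteria such as regulations and business practices, not just price.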
01-09-2017 – 28-02-2021
UPTIME will develop a versatile and interoperable unified predictive maintenance platform for industrial and manufacturing assets, from sensor data collection to optimal maintenance action implementation. Through advanced prognostic algorithms, it predicts upcoming failures or losses in productivity; decision algorithms then recommend the best action to perform at the best time to optimize total maintenance and production costs and improve OEE.
UPTIME's innovation builds upon the predictive maintenance concept and its technological pillars (Industry 4.0, IoT and Big Data, proactive computing) to produce a unified information system for predictive maintenance. UPTIME's open, modular, end-to-end architecture aims to enable predictive maintenance implementation in manufacturing firms, maximizing expected utility and exploiting the full potential of predictive maintenance management, sensor-generated big data processing, e-maintenance, proactive computing and industrial data analytics. The UPTIME solution can be applied to the production process of any manufacturing company, regardless of its processes, products and physical models.
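The decision step, recommending the best time for an action by trading preventive cost against failure cost, can be illustrated with a classic cost-rate minimization. This is a sketch under simplified assumptions (a known failure-probability curve and a single action type), not UPTIME's actual algorithm; all numbers are illustrative.

```python
def best_maintenance_interval(failure_prob, horizon, preventive_cost, failure_cost):
    """Choose the preventive-maintenance interval that minimizes the
    long-run cost rate: expected cost per cycle divided by cycle length.

    `failure_prob(t)` is the cumulative probability of failing before t.
    Waiting longer spreads the preventive cost over more run time but
    raises the chance of paying the (larger) failure cost.
    """
    def cost_rate(t):
        p = failure_prob(t)
        return (p * failure_cost + (1 - p) * preventive_cost) / t
    return min(range(1, horizon + 1), key=cost_rate)

# Example: failure probability grows quadratically with run time (hours).
prob = lambda t: min(1.0, (t / 100.0) ** 2)
interval = best_maintenance_interval(prob, horizon=100,
                                     preventive_cost=1.0, failure_cost=20.0)
```

With a 20:1 failure-to-preventive cost ratio the optimum lands well before the failure probability becomes large, which matches the intuition behind acting "at the best time" rather than as late as possible.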
Key components of UPTIME Platform include:
One of the main learnings is that data quality needs to be ensured from the beginning of the process. This implies spending more time, effort and money to carefully select the sensor type, data format, tags, and correlating information. This is particularly true when dealing with human-generated data: if operators feel that entering data is not useful, time-consuming, boring or out of scope, this will inevitably produce bad data.
The quantity of data is another important aspect. A stable and controlled process has less variation, yet machine learning requires large sets of data to yield accurate results. This aspect of data collection also needs to be designed months, or even years, in advance, before the real need emerges.
This experience turns out into some simple, even counterintuitive guidelines:
1. Anticipate the installation of sensors and data gathering. The best way is to do it during the first installation of the equipment or at its first revamp activity. Don't underestimate the amount of data you will need to train a good machine learning model. This, of course, also requires economic justification, since the investment in new sensors and data storage will pay back only after some years.
2. Gather more data than needed.
A common practice is to design a data-gathering campaign starting from the current need. This could, however, mean missing the right data history when a future need emerges. In an ideal state of infinite capacity, the data-gathering activities would capture the full ontological description of the system under design. This may not be feasible in all real-life situations, but a good strategy is to populate the machine with as many sensors as possible.
3. Start initiatives to preserve and improve the current datasets, even if not immediately needed. For example, start migrating Excel files distributed across different PCs into commonly shared databases, taking care of proper data cleaning and normalization (for example, converting local-language descriptions in data and metadata to English).
Finally, the third important learning is that data scientists and process experts still don't speak the same language, and it takes significant time and effort from mediators to help them communicate properly. This aspect also needs to be taken into account and carefully planned. Companies definitely need to close the skills gap, and there are different applicable strategies:
train Process Experts on data science;
train Data Scientists on subject matter;
develop a new role of mediator, who sits in between and shares a minimum common ground to enable clear communication between the two sides.
Quantity and quality of data: the available data in the FFT use case mainly consists of legacy data from specific measurement campaigns. The campaigns were mainly targeted to obtain insights about the effect of operational loads on the health of the asset, which is therefore quite suitable to establish the range and type of physical parameters to be monitored by the UPTIME system. UPTIME_SENSE is capable of acquiring data of mobile assets in transit using different modes of transport. While this would have been achievable from a technical point of view, the possibility to perform field trials was limited by the operational requirements of the end-user. Therefore, only one field trial in one transport mode (road transport) was performed, which yielded insufficient data to develop useful state detection capability. Due to the limited availability of the jig, a laboratory demonstrator was designed to enable partially representative testing of UPTIME_SENSE under lab conditions, to allow improvement of data quantity and diversity and to establish a causal relationship between acquired data and observed failures to make maintenance recommendations.
Installation of sensor infrastructure: during the initial design for incorporating the new sensors into the existing infrastructure, it is necessary to take into consideration the extreme physical conditions inside the milling station, which require special measures to keep sensors from being damaged or falling off. A flexible approach is adopted, combining internal and external sensors to make the sensor network less prone to failure. Quantity and quality of data: a large amount of collected data is necessary for training the algorithms. Moreover, the integration of real-time analytics and batch data analytics is expected to provide better insight into how the milling and support rollers work and behave under various circumstances.
Quantity and quality of data need to be ensured from the beginning of the process. It is important to gather more data than needed and to have a high-quality dataset, because machine learning requires large sets of data to yield accurate results. Data collection, however, needs to be designed before the real need emerges. Moreover, it is important to have a common ground for sharing information and knowledge between data scientists and process experts, since in many cases they still don't speak the same language, and it takes significant time and effort from mediators to help them communicate properly.
01-11-2017 – 28-02-2021
The Predictive Cognitive Maintenance Decision Support System (PreCoM) enables its users to detect damage, estimate damage severity, predict damage development, follow up, optimize maintenance (reducing unnecessary stoppages) and receive recommendations (on what, why, where, how and when to perform maintenance). PreCoM is a cloud-based smart PdM system using vibration as the condition monitoring parameter. Accelerometers measuring vibration (of both rotating and non-rotating components), as well as other sensors (e.g. for temperature), have been installed on machines' significant components (i.e. components whose failure is either expensive or dangerous). Over 20 hardware and software modules (common to all considered and equivalent use cases) are integrated into a single automatic, digitised system that gathers, stores, processes and securely sends data, providing the recommendations necessary for planning and optimizing maintenance and manufacturing schedules. The PreCoM system includes loops and sub-systems for data acquisition, data/sensor quality control, predictive algorithms, scheduling algorithms, a follow-up tool, self-healing for specific problems, and an end-user information interface.
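The vibration-based condition monitoring at the core of PreCoM can be illustrated with an RMS severity check. The warning and alarm levels below loosely follow ISO 10816-style zone boundaries and are placeholders only, since actual limits depend on the machine class; this is not PreCoM's actual algorithm.

```python
import math

def vibration_severity(samples, warning=2.8, alarm=7.1):
    """Classify a vibration measurement by its RMS velocity (mm/s).

    Returns the RMS value and one of "ok", "warning" or "alarm".
    """
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    if rms >= alarm:
        return rms, "alarm"
    if rms >= warning:
        return rms, "warning"
    return rms, "ok"

# A toy measurement window from one accelerometer channel.
rms, level = vibration_severity([1.0, -1.0, 1.0, -1.0])
```

Threshold checks like this are only the entry point; the predictive algorithms mentioned above then track how the RMS (and other features) trend over time.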
To develop and apply statistical models supporting PdM, it is crucial to have as much failure data as possible, which is not easy to find in companies' databases. Furthermore, advancing and integrating different technologies into a single automatic, digitised smart PdM system is a challenge that requires close collaboration between research and industry players.
01-10-2017
-31-03-2021
GESTAMP, besides becoming familiar with Z-BRE4K's solution validation and assessment methodology, gained a better understanding of its internal readiness to apply predictive maintenance solutions in its plants, while new mitigation actions related to process flaws and defect identification were developed during Z-BRE4K.
PHILIPS supports the idea of predictive maintenance, "listening to the machines", and understands that the key to success is close contact between technology providers and experts, in which data integration/architecture and machine learning both play a very important role.
SACMI-CDS recognised the importance of collaborating not only with mechanical engineering and maintenance professionals but also with experts from different technical backgrounds, who together can improve multi-tasking, combine shop-floor and office-related activities, and better schedule activities during the working day.
In general, after the solution implementation (TRL5), testing of the system on the shop floor (TRL6) and validation of the Z-BRE4K solution at end users (TRL7), the final lessons learnt can be summarised as follows:
01-10-2018
-31-03-2022
In order for CoLLaboratE to successfully realise its vision, several prerequisites were set in the form of major Scientific and Technological Objectives for the project duration. These are summarised in the following points:
Objective 1: To equip the robotic agents with basic collaboration skills easily adaptable to specific tasks
Objective 2: To develop a framework that enables non-experts teaching human-robot collaborative tasks from demonstration
Objective 3: To develop technologies that enable autonomous assembly policy learning and policy improvement
Objective 4: To develop advanced safety strategies allowing effective, barrier-free human-robot cooperation and ergonomic performance monitoring
Objective 5: To develop techniques for controlling the production line while making optimal use of resources by generating efficient production plans, employing reconfigurable hardware design, and utilising AGVs with increased autonomy
Objective 6: To investigate the impact of human-robot collaboration on workers' job satisfaction, and to test easily applicable interventions for increasing trust, satisfaction and performance
Objective 7: To validate CoLLaboratE system’s ability to facilitate genuine collaboration between robots and humans
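Objective 2's teaching-from-demonstration approach can be sketched in miniature: several kinesthetic demonstrations of the same motion are combined into one learned trajectory. This is a hypothetical illustration, not CoLLaboratE's actual framework; the pointwise-averaging scheme and the waypoint data are assumptions for clarity.

```python
def average_demonstrations(demos):
    """Pointwise mean of several demonstrated trajectories.

    Each demo is a list of (x, y) waypoints of equal length; the learned
    trajectory is their per-waypoint average. A real system would first
    time-align the demos (e.g. with dynamic time warping) and encode the
    result as a movement primitive for robust reproduction.
    """
    n = len(demos)
    length = len(demos[0])
    return [
        (sum(d[t][0] for d in demos) / n, sum(d[t][1] for d in demos) / n)
        for t in range(length)
    ]

# Two hypothetical kinesthetic demonstrations of the same reach motion.
demo_a = [(0.0, 0.0), (0.5, 0.25), (1.0, 0.5)]
demo_b = [(0.0, 0.5), (0.5, 0.75), (1.0, 1.0)]
learned = average_demonstrations([demo_a, demo_b])
print(learned)  # [(0.0, 0.25), (0.5, 0.5), (1.0, 0.75)]
```

The point of such a pipeline is that a non-expert teaches the task by guiding the robot, with no explicit programming required.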
The CoLLaboratE project has a profound impact on strengthening the competitiveness and growth of companies in the manufacturing sector:
- CoLLaboratE developed a co-production cell for manufacturing production lines, capable of performing assembly operations through human-robot collaboration. This cell is the result of interdisciplinary technological advances realised during the project in a series of highly significant areas of robotics and artificial intelligence. The proposed system has been demonstrated and evaluated at TRL6 and is ready for commercial take-up, allowing this assembled knowledge, in turn, to be rapidly integrated into the real production lines of industries and SMEs.
- CoLLaboratE developed technologies for autonomous and collaborative assembly learning, together with teaching methods usable by non-experts, so that no explicit robot programming is required. As industrial products such as LCD TVs evolve rapidly, the flexibility to adapt easily to the assembly of a new product is a major quality sought in modern assembly lines; robots that need several months of programming before starting work are an unrealistic solution. Given (a) the time-consuming programming process typically required for industrial robots and (b) the difficulties that uncertainties in small-parts assembly pose for robots, manufacturers have so far typically relied on cheap labour in low-cost countries (LCCs) instead of robotic solutions, through LCC assembly outsourcing strategies.
- The CoLLaboratE service portfolio includes a set of innovative, fast and flexible manufacturing techniques, combining the benefits of reconfigurable hardware design with modern ICT technologies (e.g. AI, a learning toolkit, and the digitisation of assembly processes).
- CoLLaboratE introduced novel AGVs with enhanced capabilities on shop floors: beyond motion planning and obstacle detection, they can also detect the intentions of human workers in the factory, providing flexibility, facilitating the production process, and enabling optimal use of resources.
- CoLLaboratE reduced delivery times and costs, while its robot assembly techniques also allow a much greater degree of customisation and product variability. As highlighted in the euRobotics AISBL Strategic Research Agenda, the use of robotics in production is a key factor in making manufacturing within Europe economically viable; locating manufacturing in Europe through robotic solutions that displace LCC outsourcing is a major goal for the near future. Through flexible assembly lines, manufacturing companies will gain great leverage over their innovation capacity and the integration of new knowledge into their products.
- CoLLaboratE paved the way for a new era in industrial assembly lines, in which robots genuinely collaborate with human workers, allowing manufacturing industries to establish in-house robotic assembly lines capable of rapidly adapting to continuously evolving products. Through these advances, SMEs operating robotic assembly lines will benefit by acting as subcontractors for large industries, since they offer a viable alternative to LCC outsourcing.
It becomes clear that the CoLLaboratE project has profound potential to strengthen the competitiveness and growth of companies and to bring production back to Europe, by implementing novel artificial intelligence technologies and integrating robots with collaborative skills into production, thereby meeting a specific, highly important need of European, as well as worldwide, manufacturing industries on their path to future growth and sustainability.
The target users of the CoLLaboratE system are manufacturing industries in need of flexible and affordable automation to boost their global competitiveness. The successful completion of CoLLaboratE will allow SMEs and large manufacturing companies in Europe to easily program assembly tasks and flexibly adapt to changes in the production pipeline. Such ease of use and rapid integration of robotic assembly systems is expected to pave the way for a step change in the adoption not only of collaborative robots, but of the complete collaborative environment provided by the CoLLaboratE solution.