Projects overview
SCOTT | Secure COnnected Trustable Things
01-05-2017
-31-10-2020
QU4LITY | Digital Reality in Zero Defect Manufacturing
01-01-2019
-31-07-2022
On the one hand, some pilot owners expect that no particular skills will be required after the QU4LITY project development implementations. For example:
- All systems should remain accessible to the majority of workers without specific expertise or knowledge (for instance, each correlation system has to remain a black box and only provide rule outputs for production lines).
- The AR app and the first training on the machine will be enough to start production with new operators on the line.
- In essence, the job profile will remain the same; however, the operators need to understand and be able to work with these new technologies. This requires some basic knowledge of the (digitalized) systems. For the operators, a lot can be captured in SOPs (standard operating procedures), but the technical support staff should also have some basic knowledge of the workings and the hardware/software side of the systems in order to be able to support the shop floor where needed.
New job profiles and associated skills are: Digital Business Processes Analyst, Expert in Machine Learning Algorithms, DevOps development knowledge, Data Scientist (programming and statistical knowledge), Artificial Intelligence knowledge, Cybersecurity Expert, Ontology architects and modellers in MBSE, Digitalized-systems shop-floor worker, Digital and connectivity engineering, New-systems-integration Manufacturing Engineer, Cloud / Data Formats / Data Analytics Engineer, and global knowledge of product, manufacturing and quality.
Re- and upskilling needs were identified in the following areas: AI and data analytics; agile development; multidisciplinary project management (IT, mechanical, electrical engineering); design thinking; standardization; data analysis and Data Space technology for manufacturing; IT skills such as Docker environments and languages and formats like Python or JSON; basic data analytics skills and BI software;
programming languages such as C#, C++, HTML, Java, Microsoft .NET and SQL Server; data tools for data cleaning and preprocessing, data parsing and data feature engineering; machine-to-machine (M2M) data and protocols; machine learning skills across languages and ML systems; and data analysis skills.
The following knowledge delivery mechanisms were identified as relevant: AR/VR, gamification, on-the-job training, vocational training, and MOOCs (Massive Open Online Courses).
- For newcomers to the field of Zero Defect Manufacturing, MOOCs are the way to go, since they can cover more aspects of Industry 4.0 and ZDM, not just the data science part. For the workforce already in place, vocational training or on-the-job training would be recommended, enabling quick adaptation to the new working situations. On-the-job training would be enough to transmit knowledge of the technology. The solution developed has to be as user-friendly as possible and be quickly understandable, both on the HMI side and on the hardware side.
ZDMP | Zero Defect Manufacturing Platform
01-01-2019
-30-06-2023
EFPF (European Factory Platform) | European Connected Factory Platform for Agile Manufacturing
01-01-2019
-31-12-2022
SHAREWORK | Safe and effective HumAn-Robot coopEration toWards a better cOmpetiveness on cuRrent automation lacK manufacturing processes.
01-11-2018
-31-10-2022
MANUELA | Additive Manufacturing using Metal Pilot Line
01-10-2018
-31-03-2023
UPTIME | UNIFIED PREDICTIVE MAINTENANCE SYSTEM
01-09-2017
-28-02-2021
UPTIME will develop a versatile and interoperable unified predictive maintenance platform for industrial & manufacturing assets from sensor data collection to optimal maintenance action implementation. Through advanced prognostic algorithms, it predicts upcoming failures or losses in productivity. Then, decision algorithms recommend the best action to be performed at the best time to optimize total maintenance and production costs and improve OEE.
The UPTIME innovation is built upon the predictive maintenance concept and its technological pillars (i.e. Industry 4.0, IoT and Big Data, Proactive Computing) in order to result in a unified information system for predictive maintenance. UPTIME's open, modular and end-to-end architecture aims to enable predictive maintenance implementation in manufacturing firms, with the aim of maximizing the expected utility and exploiting the full potential of predictive maintenance management, sensor-generated big data processing, e-maintenance, proactive computing and industrial data analytics. The UPTIME solution can be applied in the context of the production process of any manufacturing company, regardless of its processes, products and physical models used.
Key components of UPTIME Platform include:
- SENSE deals with data aggregation from heterogeneous sources and provides configurable diagnosis capabilities on the edge.
- DETECT deals with intelligent diagnosis to provide a reliable interpretation of the asset's health.
- PREDICT deals with advanced prognostic capabilities using generic or tailored algorithms.
- ANALYZE deals with analysis of maintenance-related data from legacy and operational systems.
- FMECA (Failure Mode, Effects and Criticality Analysis) deals with estimation of possible failure modes and risk criticalities evolution.
- DECIDE deals with continuously improved recommendations based on historical data and real-time prognostic results.
- VISUALIZE deals with configurable visualization to facilitate data analysis and decision making.
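The modular flow of these components can be illustrated with a minimal sketch. The function names mirror the component list above, but the interfaces, thresholds and cost figures are assumptions made purely for illustration, not the UPTIME platform's actual APIs.

```python
# Illustrative sketch of the UPTIME chain: SENSE -> DETECT -> PREDICT -> DECIDE.
# All interfaces and numeric values here are hypothetical.

def sense(raw_readings):
    """SENSE: aggregate heterogeneous sensor data, dropping missing samples."""
    return [r for r in raw_readings if r is not None]

def detect(samples, threshold=0.8):
    """DETECT: interpret asset health from the aggregated samples."""
    health = 1.0 - max(samples) / (max(samples) + 1.0)  # toy health index in [0, 1]
    return {"health": health, "alarm": health < threshold}

def predict(state, horizon=10):
    """PREDICT: naive prognosis of the health index over a future horizon."""
    return [max(0.0, state["health"] - 0.01 * t) for t in range(horizon)]

def decide(prognosis, cost_repair=1.0, cost_failure=5.0):
    """DECIDE: recommend maintenance when the expected failure cost dominates."""
    risk = 1.0 - prognosis[-1]
    return "maintain" if risk * cost_failure > cost_repair else "continue"

readings = [0.2, 0.3, None, 0.9, 1.1]
state = detect(sense(readings))
action = decide(predict(state))
```

In the real platform each stage is a configurable service; the point of the sketch is only the end-to-end path from sensor data collection to a recommended maintenance action.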
One of the main learnings is that data quality needs to be ensured from the beginning of the process. This implies spending some more time, effort and money to carefully select the sensor type, data format, tags, and correlating information. This is particularly true when dealing with human-generated data: if the activity of inputting data is felt by operators as not useful, time-consuming, boring and out of scope, it will inevitably yield bad data.
Quantity of data is another important aspect as well. A stable and controlled process shows less variation, so machine learning requires even larger sets of data to yield accurate results. This aspect of data collection also needs to be designed in advance, sometimes months or even years before the real need emerges.
This experience translates into some simple, even counterintuitive guidelines:
1. Anticipate the installation of sensors and data gathering. The best way is doing it during the first installation of the equipment or at its first revamp activity. Don't underestimate the amount of data you will need to train a good machine learning model. This, of course, also needs economic justification, since the investment in new sensors and data storage will find payback only after some years.
2. Gather more data than needed.
A common piece of advice is to design a data gathering campaign starting from the current need. However, this could lead to missing the right data history when a future need emerges. In an ideal state of infinite capacity, the data gathering activities should capture the full ontological description of the system under design. Of course, this may not be feasible in all real-life situations, but a good strategy could be populating the machine with as many sensors as possible.
3. Start initiatives to preserve and improve the current datasets, even if not immediately needed. For example, start migrating Excel files distributed across different PCs into common shared databases, taking care to perform good data cleaning and normalization (for example, converting local-language descriptions in data and metadata to English).
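Guideline 3 can be sketched with standard-library tools. The translation table, schema and field names below are hypothetical, and a real migration would read the actual Excel exports rather than in-memory records; the point is only the clean-normalize-consolidate pattern.

```python
import sqlite3

# Hypothetical local-language maintenance terms mapped to English.
TRANSLATIONS = {"guasto cuscinetto": "bearing failure",
                "fuga de aire": "air leak"}

def normalize(record):
    """Clean one spreadsheet row: trim, canonicalize case, translate to English."""
    desc = record["description"].strip().lower()
    return {"asset": record["asset"].strip().upper(),
            "description": TRANSLATIONS.get(desc, desc)}

def load(records, db=":memory:"):
    """Consolidate scattered rows into one shared database, dropping empty entries."""
    conn = sqlite3.connect(db)
    conn.execute("CREATE TABLE IF NOT EXISTS maintenance (asset TEXT, description TEXT)")
    rows = [normalize(r) for r in records if r.get("description")]
    conn.executemany("INSERT INTO maintenance VALUES (:asset, :description)", rows)
    conn.commit()
    return conn

conn = load([{"asset": "press01", "description": "Guasto cuscinetto "},
             {"asset": "press02", "description": ""}])  # empty row is dropped
```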
Finally, the third important learning is that Data Scientists and Process Experts still don't talk the same language, and it takes significant time and effort from mediators to help them communicate properly. This is also an aspect that needs to be taken into account and carefully planned. Companies definitely need to close the skills gaps, and there are different strategies applicable:
- train Process Experts on data science;
- train Data Scientists on the subject matter;
- develop a new role of Mediator, who sits in between and shares a minimum common ground to enable clear communication in extreme cases.
Quantity and quality of data: the available data in the FFT use case mainly consists of legacy data from specific measurement campaigns. The campaigns were mainly targeted at obtaining insights about the effect of operational loads on the health of the asset, which is therefore quite suitable for establishing the range and type of physical parameters to be monitored by the UPTIME system. UPTIME_SENSE is capable of acquiring data from mobile assets in transit using different modes of transport. While trials across several modes would have been achievable from a technical point of view, the possibility to perform field trials was limited by the operational requirements of the end user. Therefore, only one field trial in one transport mode (road transport) was performed, which yielded insufficient data to develop a useful state detection capability. Due to the limited availability of the jig, a laboratory demonstrator was designed to enable partially representative testing of UPTIME_SENSE under lab conditions, to improve data quantity and diversity, and to establish a causal relationship between acquired data and observed failures so as to make maintenance recommendations.
Installation of sensor infrastructure: during the initial design to incorporate the new sensors into the existing infrastructure, it is necessary to take into consideration the extreme physical conditions present inside the milling station, which require special actions to avoid sensors being damaged or falling off. A flexible approach was adopted, combining internal and external sensors to make the sensor network less prone to failure. Quantity and quality of data: a large amount of collected data is necessary for the training of algorithms. Moreover, the integration of real-time analytics and batch data analytics is expected to provide better insight into the ways the milling and support rollers work and behave under various circumstances.
Quantity and quality of data need to be ensured from the beginning of the process. It is important to gather more data than needed and to have a high-quality dataset. Machine learning requires large sets of data to yield accurate results. Data collection, however, needs to be designed before the real need emerges. Moreover, it is important to have a common ground for sharing information and knowledge between data scientists and process experts, since in many cases they still don't talk the same language and it takes significant time and effort from mediators to help them communicate properly.
PreCoM | Predictive Cognitive Maintenance Decision Support System
01-11-2017
-28-02-2021
The Predictive Cognitive Maintenance Decision Support System (PreCoM) enables its users to detect damage, estimate damage severity, predict damage development, follow up, optimize maintenance (to reduce unnecessary stoppages) and get recommendations (on what, why, where, how and when to perform maintenance). PreCoM is a cloud-based smart PdM system using vibration as a condition monitoring parameter. Accelerometers for measuring vibration (of both rotating and non-rotating components), as well as other sensors (e.g. for temperature), have been installed in machines' significant components (i.e. components whose failures are either expensive or dangerous). Over 20 hardware and software modules (common to all considered and equivalent use cases) are integrated into a single automatic and digitised system that gathers, stores, processes and securely sends data, providing the recommendations necessary for planning and optimizing maintenance and manufacturing schedules. The PreCoM system includes loops and sub-systems for data acquisition, data/sensor quality control, the predictive algorithm, the scheduling algorithm, a follow-up tool, self-healing ability for specific problems, and the end-user information interface.
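Vibration-based condition monitoring of the kind described above can be sketched minimally: compute the RMS level of an accelerometer window and flag a monitored component when it exceeds a limit. The limit value here is an assumption for illustration; real systems derive alarm levels from standards or baseline measurements, and PreCoM's actual algorithms are far richer.

```python
import math

def rms(samples):
    """Root-mean-square vibration level of a window of accelerometer samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def check_component(samples, limit=2.0):
    """Return a simple health verdict for one monitored component.

    `limit` is a hypothetical alarm threshold (same units as the samples).
    """
    level = rms(samples)
    return {"rms": level, "alarm": level > limit}

healthy = check_component([0.5, -0.4, 0.6, -0.5])   # low vibration, no alarm
worn = check_component([2.5, -2.8, 3.0, -2.6])      # high vibration, alarm
```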
To develop and apply statistical models for supporting PdM, it is always crucial to have as much failure data as possible, which is not easy to find in companies' databases. Furthermore, advancing and integrating different technologies into a single automatic and digitised smart PdM system is a challenge that requires close collaboration between research and industry players.
SERENA | VerSatilE plug-and-play platform enabling remote pREdictive mainteNAnce
01-10-2017
-31-03-2021
In this case, edge analytics performed by a low-cost edge device (Raspberry Pi) proved that it is feasible to run predictive maintenance systems like SERENA. However, performance issues were caused by the large sampling frequency (16 kHz) and by background processes running on the device, which affected the measurements. This issue can be reduced by lowering the sampling frequency; alternatives would be hardware with buffer memory on the AD card, or a real-time operating system implementation. It was found that the hardware is inexpensive to invest in, but the solution requires a lot of tailoring, which means that expenses grow with the number of working hours needed to customize the hardware for the final usage location. If there are similar applications that implement the same configuration, cost-effectiveness increases. Furthermore, the production operators, maintenance technicians, supervisors, and staff personnel at the factory need to be further trained on the SERENA system and its features and functionalities.
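Lowering the sampling frequency, as mentioned above, amounts to decimating the signal on the edge device. A minimal sketch follows; block averaging is used as a crude anti-aliasing step purely for illustration, whereas a real deployment would apply a proper low-pass filter before downsampling.

```python
def decimate(samples, factor):
    """Reduce the sample rate by `factor` via block averaging.

    Crude stand-in for filter-then-downsample; trailing samples that do
    not fill a whole block are discarded.
    """
    n = len(samples) // factor
    return [sum(samples[i * factor:(i + 1) * factor]) / factor
            for i in range(n)]

# e.g. a 16 kHz capture reduced 4x to an effective 4 kHz stream
raw = [0.0, 1.0, 0.0, -1.0] * 4      # 16 samples
reduced = decimate(raw, 4)           # 4 samples
```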
From the activities developed within the SERENA project, the relation between the accuracy and good performance of a CMM machine and the good functioning of the air bearings system became clearer. TRIMEK's machine operators and personnel confirmed that airflow or pressure inputs and values outside the defined threshold directly affect the machine accuracy. This outcome was possible thanks to the correlation between the datasets collected from the sensors installed in the machine axes and the use of the tetrahedron artifact to verify the accuracy of the machine, enabled by the remote real-time monitoring system deployed for TRIMEK's pilot. This has helped to highlight the importance of a cost- and time-effective maintenance approach and the need to be able to monitor critical parameters of the machine, both for TRIMEK as a company and from the client's perspective.
Another lesson learned throughout the project development relates to AI-based maintenance techniques: multiple large datasets are required, in addition to failure data, to develop accurate algorithms. Since CMM machines are usually very stable, it is difficult to develop algorithms for a fully predictive maintenance approach in the metrology sector, at least within a short period of collection and assessment.
In another sense, it became visible that an operator support system is a valuable tool for TRIMEK's personnel, mainly the operators (and new operators), as an intuitive and interactive guide for performing maintenance tasks; it can be exploited further by adding workflows for maintenance activities beyond the air bearings system. Additionally, a more customized scheduler could also represent a useful tool for daily use with customers.
As with any software service or package, it takes time to learn how to implement and use it; however, TRIMEK's personnel are accustomed to managing digital tools.
The SERENA project has provided a deep dive into an almost complete IIoT platform, leveraging the knowledge of all the partners involved and merging many competencies and technological solutions into the platform. The two main aspects of the project for COMAU are the container-based architecture and the analytics pipeline. Indeed, those are the two components that have been leveraged most internally, and they have inspired COMAU's effort in developing the new versions of its IIoT offering portfolio. The predictive maintenance solutions developed under the scope of the project have confirmed the potential of this kind of solution, meeting expectations.
Another takeaway, on the other hand, was the central need for a huge amount of data, possibly labelled, covering at least the main behaviour of the machine. That challenges the common perception that predictive maintenance consists of general rules which can easily be applied to any kind of machine with impressive results.
More concretely, predictive maintenance and analytics pipelines in general have the potential to be disruptive in the industrial scenario. On the other hand, wide adoption seems to be a process that will take some time, relying not on a few all-embracing algorithms but on many vertical ones, each applicable to a restricted, most relevant set of machines or use cases.
The SERENA system is very versatile, accommodating many different use cases. Prognostics need a deep analysis to model the underlying degradation mechanisms and the phenomena that cause them. Creating a reasonable RUL calculation for the VDL WEW situation was expected to be complicated, and this expectation proved true: the accuracy of the calculated RUL was not satisfactory. Node-RED is a very versatile tool, well chosen for the implementation of the gateway. In addition, the analysis of production data revealed useful information regarding the impact of newly introduced products on the existing production line.
All the experiments conducted have been interpreted in terms of their business implications: a reliable system (i.e. a robust SERENA platform and robust data connection) and an effective prediction system (i.e. a data analytics algorithm able to identify the mixing head health status in advance) will have an impact on the main KPIs related to foaming machine performance. In particular:
- Overall equipment effectiveness increased by 0.4%, against an intended achievement of 15%
- Mean time to repair reduced from 3.5 hours to 3 hours
- Mean time between failures increased from 180 days to over 360 days
- Total cost of maintenance was reduced from €17400 to €8000.
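Treating the reported figures as simple before/after arithmetic makes the movements easy to compare; the sketch below only restates the numbers listed above.

```python
def pct_change(before, after):
    """Relative change from `before` to `after`, in percent."""
    return (after - before) / before * 100.0

mttr_change = pct_change(3.5, 3.0)     # hours: about -14.3%
mtbf_change = pct_change(180, 360)     # days: +100% (lower bound, "over 360")
cost_saving = 17400 - 8000             # euros saved on total maintenance cost
```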
Besides practical results, SERENA provided some important lessons to be transferred into its operative departments:
Data Quality
Finding the relevant piece of information hidden in large amounts of data turned out to be more difficult than initially thought. One of the main learnings is that data quality needs to be ensured from the beginning, and this implies spending some more time, effort and money to carefully select sensor type, data format, tags, and correlating information. This is particularly true when dealing with human-generated data: if the activity of inputting data is felt by operators as not useful, time-consuming, boring and out of scope, it will inevitably yield bad data.
Some examples of poor quality are represented by:
a. Missing data
b. Poor data description or no metadata availability
c. Data not or scarcely relevant for the specific need
d. Poor data reliability
There are two solutions: 1) train people on the shop floor to increase their skills in digitalization in general and in data-based decision processes specifically; 2) design more ergonomic human-machine interfaces, involving experts in the HMI field, with the aim of reducing the time to insert data and the uncertainty during data input.
These two recommendations can lead to a better design of the dataset from the beginning (which ensures machine-generated data quality) and reduce the possibility of errors, omissions and poor accuracy in human-generated data.
Data Quantity
PU foaming is a stable, controlled process and turned out to have little variation; thus, machine learning requires large sets of data to yield accurate results. This aspect of data collection also needs to be designed in advance, months or even years before the real need emerges. This translates into some simple, even counterintuitive guidelines:
1. Anticipate the installation of sensors and data gathering. The best time is at the equipment's first installation or its first revamp activity. Don't underestimate the amount of data you need to train a good machine learning model. This, of course, also needs economic justification, since the investment in new sensors and data storage will find payback only after some years.
2. Gather more data than needed. A common piece of advice is to design a data-gathering campaign starting from the current need. However, this could lead to missing the right data history when a future need emerges. In an ideal state of infinite capacity, the data gathering activities should capture the full ontological description of the system under design. Of course, this may not be feasible in real-life situations, but a good strategy could be to populate the machine with as many sensors as possible.
3. Start initiatives to preserve and improve the current datasets, even if not immediately needed. For example, start migrating Excel files spread across individual PCs into commonly shared databases, performing good data cleaning and normalization (for example, converting local-language descriptions in data and metadata to English).
Skills
Data Scientists and Process Experts are not yet talking the same language, and it takes significant time and effort from mediators to make them communicate properly. This is also an aspect that needs to be taken into account and carefully planned: companies definitely need to close the skills gaps, and there are different strategies applicable: train Process Experts on data science; train Data Scientists on the subject matter; develop a new role of Mediator, who sits in between and shares a minimum common ground to enable clear communication in extreme cases.
PROGRAMS | PROGnostics based Reliability Analysis for Maintenance Scheduling
01-10-2017
-31-03-2021
PROGRAMS aims at developing a HW/SW suite of solutions capable of:
- Managing data relative to all maintenance strategies (including PdM)
- Evaluating the cost associated with different maintenance strategies and policies
- Selecting and allocating the optimal maintenance strategies (considering PdM) and policies for each factory/machine asset (minimising the overall cost)
- Allowing the seamless transfer of information from all factory levels
- Allowing easy PdM solutions deployment and exploitation
- Optimizing integration of PdM based maintenance with production activities
- Gathering and sharing maintenance information at all factory levels
The PROGRAMS solution will allow SMEs to access the benefits of Predictive Maintenance with limited costs.
Several challenges limit the successful application of Predictive Maintenance in factories:
- Lack of pre-existing maintenance data: Industry 4.0 is only slightly improving the deployment of tools for collecting data
- Difficult data synchronization: existing data is saved into tens of different formats and with different sampling frequencies
- Lack of sensor data relative to equipment fault status: equipment is never purposefully left to reach such a degraded status and, even then, faults happen only a few times a year (so there is a high chance of never seeing faults during the project duration).
Correct determination of the best maintenance strategies and computation of component RULs require the collection of a vast amount of data in a format that is easily accessible and analyzable.
Robot components show a slow degradation of their performances: data collection must begin as soon as possible.
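The point about slow degradation can be made concrete with a toy RUL extrapolation: with only a short history, the fitted trend of a slowly degrading health index is unreliable, which is why collection must start early. The linear degradation model and threshold below are assumptions purely for illustration; real prognostics use far richer models.

```python
def fit_line(times, values):
    """Ordinary least-squares line fit: returns (slope, intercept)."""
    n = len(times)
    mt, mv = sum(times) / n, sum(values) / n
    slope = (sum((t - mt) * (v - mv) for t, v in zip(times, values))
             / sum((t - mt) ** 2 for t in times))
    return slope, mv - slope * mt

def rul(times, health, failure_threshold=0.2):
    """Time remaining until the fitted trend crosses the failure threshold."""
    slope, intercept = fit_line(times, health)
    if slope >= 0:
        return None  # no degradation trend observed yet
    t_fail = (failure_threshold - intercept) / slope
    return max(0.0, t_fail - times[-1])

# hypothetical health index sampled monthly, degrading slowly from 1.0
months = [0, 1, 2, 3, 4]
health = [1.0, 0.98, 0.96, 0.94, 0.92]
remaining = rul(months, health)  # months until the trend reaches 0.2
```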
Predictive maintenance requires different skills, and thus new professional figures will have to be trained:
- Production equipment operators
- Maintenance operators
- Data scientists
- Maintenance managers
- Software developers
PROPHESY | Platform for rapid deployment of self-configuring and optimized predictive maintenance services
01-10-2017
-30-09-2020
- The PROPHESY-SHARE platform has developed into a multi-purpose tool both for remote support and for providing specific work instructions and visualization of (predicted) tooling information to the maintenance mechanic. As a result, both industrial partners are considering investments in this technology.
- The end users are very eager to work with digital support systems that ease their job, as they bring a single point of access to data.
- There are quite a few hurdles to overcome in the fields of health and safety, IT security and GDPR (e.g. face blurring).
- The development cycle (demonstration of prototype v1, feedback from the end users and other stakeholders, demonstration of an improved prototype v2, feedback from the end users and interdepartmental stakeholders) worked very well.
- It is challenging to run a development project in a real-life production environment due to high pressure on production output.
- Data collection from legacy systems brings some specific IT challenges.
- Data understanding and data pre-processing are crucial before starting the data analysis.
- It was proven once more that close collaboration is needed between data scientists and process experts to get reliable results.
- The results of the PROPHESY-ML application to the use cases have been reported for both demonstrators in the project's deliverables, achieving RUL prediction with a Mean Absolute Percentage Error as low as 4-5%.
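The accuracy figure above is a Mean Absolute Percentage Error (MAPE). As a sketch, for RUL predictions it is computed as follows; the RUL values used are hypothetical and serve only to illustrate the metric, not the project's results.

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent (actual values must be non-zero)."""
    return 100.0 * sum(abs(a - p) / abs(a)
                       for a, p in zip(actual, predicted)) / len(actual)

# hypothetical RUL values (hours), purely illustrative
actual_rul = [100.0, 80.0, 60.0]
predicted_rul = [96.0, 84.0, 60.0]
error = mape(actual_rul, predicted_rul)  # about 3% in this toy example
```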
L4MS | Logistics for Manufacturing SMEs
01-10-2017
-31-03-2021
MIDIH | Manufacturing Industry Digital Innovation Hubs
01-10-2017
-30-09-2020
I4MS-Go | I4MS Going to Market Alliance
01-09-2017
-29-02-2020
iDev40 | Integrated Development 4.0
01-05-2018
-31-10-2021
DELPHI4LED | From Measurements to Standardized Multi-Domain Compact Models of LEDs
01-06-2016
-30-09-2019
SWARMs | Smart and Networking UnderWAter Robots in Cooperation Meshes
01-07-2015
-31-07-2018
Met4FoF | Metrology for the Factory of the Future
01-06-2018
-31-05-2021
EUCoM | Standards for the evaluation of the uncertainty of coordinate measurements in industry
01-06-2018
-31-05-2021
SmartCom | Communication and validation of smart data in IoT-networks
01-01-2018
-01-01-2021
ROSSINI | RObot enhanced SenSing, INtelligence and actuation to Improve job quality in manufacturing
01-10-2018
-31-03-2022
ROSSINI develops and demonstrates technologies enabling a significant advancement in HRC. They are:
- ROSSINI Smart and Safe Sensing System
- Safety Aware Control Architecture
- Collaborative by Birth Robotic Arm
- Human-robot mutual understanding framework
- Integration and Validation Layer
These technologies will be then integrated into the ROSSINI Platform architecture.
Expected achievements: 15% increase in the OECD Job Quality Index through work environment and safety improvement; 20% reduction in production reconfiguration time and cost; reduction of heavy-work impacts and costs; increase in overall job satisfaction and job attractiveness; increased value-chain integration and stakeholder satisfaction.
HRC applications pose several challenges to the manufacturing industry, which sees an increased need for automation and scalability, notably in SMEs. Moreover, at the moment, HRC applications also imply huge investments in terms of effort, time and intellectual capital to integrate robots and sensors into the manufacturing workflow, which most European SMEs cannot afford, notably if the production combines low volume with high mix. Through the ROSSINI project, the implementation of real and cost-effective HRC contributes to redesigning workplaces, combining automation and lean manufacturing concepts with a drastic reduction of conversion and reconfiguration costs.
The development of the ROSSINI Modular KIT brought significant advances in technology and awareness of collaborative robotics across Europe. In particular, the set of efficient and modular tools developed within ROSSINI enables and eases different specific activities: from the hazard assessment evaluation to the detection of multiple humans in a monitored area where robots are working. Nevertheless, important activities still require considerable effort in terms of knowledge, development and collaboration: from the need to refine existing interfaces (or define new well-established ones) to the identification of solutions able to bring human-robot interaction even closer (and make it more trustworthy), also in terms of standardization (which still presents several gaps on this topic).
In particular, the following topics have been identified as important issues to tackle in future activities and research:
- Cross-domain approach for both design and validation (in terms of safety and beyond)
- Considering the need to spread HRC technologies among very different kinds of people
- Identification of the most suitable data to collect and monitor
- Inclusion of a proper ergonomics perspective to interpret data
- Automatization of the risk assessment in the design phase
- Maintaining coherence in development with the already published standards
ROSSINI helps European factories to attract a skilled workforce because of the attention paid to job quality and employee satisfaction.
CoLLaboratE | Co-production CeLL performing Human-Robot Collaborative AssEmbly
01-10-2018
-31-03-2022
In order for CoLLaboratE to successfully realize its vision, several prerequisites were set in the form of major Scientific and Technological Objectives throughout the project duration. These are summarized in the following points:
Objective 1: To equip the robotic agents with basic collaboration skills easily adaptable to specific tasks
Objective 2: To develop a framework that enables non-experts to teach human-robot collaborative tasks by demonstration
Objective 3: The development of technologies that will enable autonomous assembly policy learning and policy improvement
Objective 4: To develop advanced safety strategies allowing effective human robot cooperation with no barriers and ergonomic performance monitoring
Objective 5: To develop techniques for controlling the production line while making optimal use of the resources by generating efficient production plans, employing reconfigurable hardware design, and utilising AGVs with increased autonomy
Objective 6: To investigate the impact of Human-Robot Collaboration to the workers’ job satisfaction, as well as test easily applicable interventions in order to increase trust, satisfaction and performance
Objective 7: To validate CoLLaboratE system’s ability to facilitate genuine collaboration between robots and humans
The CoLLaboratE project will have a profound impact on strengthening the competitiveness and growth of companies in the manufacturing sector:
- CoLLaboratE developed a co-production cell for manufacturing production lines, capable of performing assembly operations through human-robot collaboration. This cell is the result of interdisciplinary technological advances realized during the project in a series of highly significant areas related to robotics and artificial intelligence. The proposed system has been demonstrated and evaluated at TRL6, being ready for commercial take-up, allowing this assembled knowledge to be, in turn, rapidly integrated into the real production lines of industries and SMEs.
- CoLLaboratE developed technologies for autonomous and collaborative assembly learning, and teaching methods usable by non-experts, so that no explicit robot programming is required. As the products of industries, such as LCD TVs, rapidly evolve, the flexibility to easily adapt to a new assembly task for a new product is a major quality sought in modern assembly lines. Robots that need several months to be programmed before starting to work on a task are a rather unrealistic solution. Given (a) the time-consuming programming process typically required for industrial robots and (b) the difficulties posed to robots by uncertainties in small-parts assembly, the cheap labour of low-cost countries (LCCs) has so far typically been utilized instead of robotic solutions, through LCC assembly outsourcing strategies.
- The CoLLaboratE service portfolio included a set of innovative fast and flexible manufacturing techniques, combining the benefits of reconfigurable hardware design and modern ICT technologies (e.g. AI, a learning toolkit, digitization of assembly processes).
- CoLLaboratE introduced novel AGVs on shop floors with enhanced capabilities: apart from motion planning and obstacle detection, they are also capable of detecting the intentions of human users in the factory in order to provide flexibility and facilitate the production process, along with optimal use of resources.
- CoLLaboratE reduced delivery times and costs, and its robot assembly techniques will also allow a much greater degree of customization and product variability. As highlighted in the euRobotics AISBL Strategic Research Agenda, the use of robotics in production is a key factor in making manufacturing within Europe economically viable; locating manufacturing in Europe through robotic solutions that will suppress LCC outsourcing is a major goal for the near future. Through flexible assembly lines, manufacturing companies will gain great leverage over their innovation capacity and the integration of new knowledge into their products.
- CoLLaboratE paved the way for a new era in industrial assembly lines, where robots will engage in genuine collaboration with human workers and will allow manufacturing industries to establish in-house robotic-based assembly lines capable of rapidly adapting to continuously evolving products. Through its advances, SMEs holding robotic-based assembly lines will benefit by acting as subcontractors for large industries, since they will be a viable alternative to LCC outsourcing.
It becomes clear that the CoLLaboratE project has profound potential to strengthen the competitiveness and growth of companies and to bring production back to Europe by implementing novel artificial intelligence technologies and integrating robots with collaborative skills into production, meeting a specific, highly important need of European, as well as worldwide, manufacturing industries toward their future growth and sustainability.
The target users for the CoLLaboratE system are manufacturing industries in need of flexible and affordable automation systems to boost their global competitiveness. Successful completion of CoLLaboratE will allow SMEs and large manufacturing companies in Europe to easily program assembly tasks and flexibly adapt to changes in the production pipeline. Such ease of use and rapid integration time of robotic assembly systems are expected to pave the way for a step change in the adoption of not only collaborative robots but a complete collaborative environment provided by the CoLLaboratE solution.
The main innovation will be the introduction into production of the MPFQ model fused with AQ control loops: functional integration and correlation between Material, Quality, Process and Appliance Functions.