Projects overview
SCOTT | Secure COnnected Trustable Things
01-05-2017 – 31-10-2020
QU4LITY | Digital Reality in Zero Defect Manufacturing
01-01-2019 – 31-07-2022
Through innovative algorithms and statistical methods, possible data sources for predictive quality control can be identified and evaluated. Moreover, through the cooperation of all project partners, data access and acquisition along the whole process chain can be realized. With a focus on algorithms and methodology, a use-case-specific algorithm will be implemented and validated to maintain high prediction accuracy.
Data availability is a challenge: access to measurement data is limited (due to limited access to third-party systems).
There appears to be a relationship that allows torque to be predicted from in-line data; this needs to be explored further.
By applying sophisticated algorithms and methods to the acquired data, systematic failure root-cause detection supported by data analytics can be implemented. In addition, improved knowledge of machine states and maintenance requirements for neuralgic points can be achieved through the desired solution path within this pilot.
An AI vision algorithm developed by TNO (WP3) appears to filter badly rated parts better than the installed algorithm. A further advantage is that, when the product print changes, it can keep pace with the required development speed better than traditional algorithm development.
For this trial, the acquired test data will be analyzed with regard to quality classification. In every test a part can pass or fail. Failed parts must be reworked, if possible, and brought back into the process. Sometimes parts are classified as failed even though they are good (false positives). This effect will be analyzed by machine learning algorithms and, if necessary, the classification parameterisation will be adapted. Additionally, 100% testing (every panel is tested automatically), with its bottleneck at the out-of-line test stations, will be addressed by setting up failure prediction models for quality forecasting. This will be supported by data analysis of pre-reflow AOI (automated optical inspection).
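The false-positive analysis described above can be illustrated with a minimal sketch: assuming a classifier that assigns each part a defect score and that rework inspection provides ground-truth labels, the pass/fail threshold is re-parameterised so good parts are no longer flagged. All names and data here are hypothetical, not taken from the project.

```python
# Hypothetical sketch: tuning a test-station pass/fail threshold to reduce
# false positives (good parts classified as failed), assuming each part has
# a defect score and a ground-truth label from rework inspection.

def false_positive_rate(scores, labels, threshold):
    """Fraction of truly good parts (label 0) flagged as failed."""
    good = [s for s, l in zip(scores, labels) if l == 0]
    if not good:
        return 0.0
    return sum(s >= threshold for s in good) / len(good)

def tune_threshold(scores, labels, candidates, max_fpr=0.05):
    """Pick the lowest candidate threshold whose false-positive rate stays under max_fpr."""
    for t in sorted(candidates):
        if false_positive_rate(scores, labels, t) <= max_fpr:
            return t
    return max(candidates)

# Toy data: defect scores and rework verdicts (1 = truly defective).
scores = [0.1, 0.2, 0.35, 0.4, 0.8, 0.9]
labels = [0,   0,   0,    0,   1,   1]
threshold = tune_threshold(scores, labels, candidates=[0.3, 0.5, 0.7])
```

In practice the classification parameterisation would be re-estimated on far larger data sets, but the trade-off is the same: raising the threshold lowers the false-positive rate at the cost of potentially passing defective parts.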
Alongside all these data analysis and process optimization activities, an economic evaluation will be included to support in-process decisions and configuration changes. For the development of these applications, the main steps are data availability/access, data processing, and model development. The developed applications should be deployed on edge devices.
Details: AI, foundational, concepts and terminology
Details: AI, Use cases
Details: AI, Framework for AI systems using ML
Details: Industrial IoT standards and roadmapping
Details: IoT RA
Details: Industrial IoT standards and roadmapping
Details: Edge Computing
Details: IoT, interoperability framework
Details: IoT, Vocabulary
Details: IIRA
Details: IoT, Use cases
Details: Big data vocabulary
The Milling Digital Twin enables strategy design and quality control in milling processes using only software tools, simulation and virtual optimisation.
The Cockpit optimiser software provides an environment for the intelligent design of an automated cell together with the customer.
The Cockpit optimiser and the Milling Digital Twin use AI tools to accelerate the design and optimisation processes currently performed by operators.
Solutions to facilitate the analytical thinking of the operator: the solution will help the operator correlate quality and process parameters in order to make decisions upstream in the process.
With the help of skilled production line workers, the data in the AI platform can be annotated, thereby producing the predictive models for ZDM autonomous quality inspection. The platform gives users the ability to monitor the Autonomous Quality (AQ) process and provide feedback for the ZDM.
To acquire quality data, all involved users and managers must understand some basic data science principles. Modern machine vision relies on large amounts of consistent data. The data acquisition process begins with an organized collection of samples, which should become an integral part of every standardized manufacturing process that involves automated quality inspection or ZDM.
There is a need to manage large data sets and big data, and for AI solutions for different manufacturing processes. These solutions need to support operators in decision-making.
Enable operators to work in a more complex environment while reducing the strain of administrative tasks, and enable easy production analytics by capturing information online instead of on paper.
Shopfloor worker (operator – technical support group): From a shopfloor perspective, new or altered job profiles should be defined. In essence, however, the job profiles will remain the same, while the operators and technical support groups need to understand and be able to work with these new technologies. This requires some basic knowledge of the (digitalized) systems. For the operators, a lot can be captured in SOPs (Standard Operating Procedures), but the technical support staff should also have some basic knowledge of the workings and the hardware/software side of the systems in order to be able to support the shopfloor where needed.
The ZDM Autonomous Quality Solutions are used as systems that perform tasks in an autonomous/automated way, requiring the intervention of an operator only when an operational tie-breaker is needed. When that is the case, the operator has to analyse the incident and provide a solution to the AQL System, interacting with it via an HMI.
Complete correlation of machine parameters is realized, allowing machine operators to take into account all the assets from each workstation of the production line. This enhances capacity compared with conventional analytics methods.
The end-to-end process supported by the overall architecture helps the operator and team leader in their daily activities to prevent and anticipate quality issues on the product as much as possible, via the analysis of a huge amount of data linked together through the holistic semantic model.
ZDMP | Zero Defect Manufacturing Platform
01-01-2019 – 30-06-2023
Details: Messaging, message Exchange
Details: IoT/Device Integration
Details: IoT/Device Integration
Details: IoT/Device Integration
Details: IoT/Device Integration
Details: IoT/Device Integration
Details: IoT/Device Integration
Details: IoT/Device Integration
Details: IoT Architecture
EFPF (European Factory Platform) | European Connected Factory Platform for Agile Manufacturing
01-01-2019 – 31-12-2022
MANUELA | Additive Manufacturing using Metal Pilot Line
01-10-2018 – 31-03-2023
Productive4.0 | Electronics and ICT as enabler for digital industry and optimized supply chain management covering the entire product lifecycle
01-05-2017 – 31-10-2020
UPTIME | UNIFIED PREDICTIVE MAINTENANCE SYSTEM
01-09-2017 – 28-02-2021
UPTIME will reframe the predictive maintenance strategy in a systematic and unified way, with the aim of fully exploiting the advancements in ICT and maintenance management by examining the potential of big data in an e-maintenance infrastructure, taking into account Gartner's four levels of data analytics maturity and the proactive computing principles.
UPTIME will enable manufacturing companies to reach Gartner's four levels of data analytics maturity for optimised decision making, each building on the previous one: Monitor, Diagnose and Control, Manage, and Optimize. The project aims to optimise in-service efficiency and contribute to increased accident mitigation capability by avoiding crucial breakdowns with significant consequences. The UPTIME components UPTIME_DETECT, UPTIME_PREDICT and UPTIME_ANALYZE aim to enhance the methodological framework for data processing and analytics. The key role for the UPTIME_DETECT and UPTIME_PREDICT components lies with data scientists, who are in charge of developing, testing and deploying algorithmic calculations on data streams. In this way, the components are able to identify the current condition of technical equipment and to give predictions. UPTIME_ANALYZE, on the other hand, is a data analytics engine driven by the need to leverage manufacturers' legacy data and operational data related to maintenance, and to extract and correlate relevant knowledge.
In UPTIME, two data processing solutions are considered: (1) batch processing of data at rest, through massively parallel processing; and (2) real-time processing of data in motion, in which real-time data from heterogeneous sources are processed as a continuous "stream" of events (produced by one or more outside systems), and data processing occurs so fast that all decisions are made without stopping the data stream and storing the information first.
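The two modes can be contrasted in a minimal sketch (function and variable names are assumptions for illustration, not part of the UPTIME codebase): batch analytics sees the full data set at once, while stream processing decides on each event as it arrives.

```python
# Minimal sketch contrasting the two UPTIME processing modes:
# batch analytics over data at rest vs. event-by-event decisions on a stream.

def batch_mean(readings):
    """Batch mode: the full data set is available before processing starts."""
    return sum(readings) / len(readings)

def stream_alerts(readings, limit):
    """Stream mode: decide on each event as it arrives, without storing it first."""
    for value in readings:
        if value > limit:          # decision is made inside the stream
            yield value

sensor_values = [18.2, 19.1, 25.7, 18.9, 26.3]
avg = batch_mean(sensor_values)                          # data at rest
alerts = list(stream_alerts(sensor_values, limit=25.0))  # data in motion
```

A real deployment would replace the list with an unbounded event source (e.g. a message queue), but the structural difference is the same: the generator never needs the whole data set in memory.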
The main UPTIME functionalities are structured in three modules: the edge, cloud and GUI modules.
- The UPTIME edge module will ensure data collection from machines, sensors, etc. and send it on for analysis. It may also include some additional functionalities which require real-time results.
- The UPTIME Cloud module contains all the advanced functionalities of the solution, which do not require a real time result. There we will analyse the data collected on the edge, as well as data received from relevant information systems, and provide the expected predictions. “Cloud” can refer to remote servers or an internal cloud within the customer’s Plant or Enterprise, as is deemed necessary by the customer.
- Lastly the GUI module, through which the user will interact with the previously mentioned functionalities, whether it is to view data or configure the solution.
The four main components in the cloud-based infrastructure of the UPTIME platform are:
- The UPTIME_DETECT and PREDICT component (an extended version of the PreIno prototype) processes mainly time-series-based data from the field to give further context to the data, e.g. to detect current conditions of technical equipment and to predict probable future conditions.
- UPTIME_ANALYZE (a newly developed prototype) is a data analytics engine driven by the need to leverage manufacturers’ legacy data and operational data related to maintenance, and to extract and correlate relevant knowledge.
- The UPTIME_DECIDE component (an extended version of the PANDDA prototype) implements a prescriptive analytics approach for proactive decision making in a streaming computational environment. It provides real-time prescriptions for the optimal maintenance actions and the optimal time for their implementation on the basis of streams of predictions about future failures.
- UPTIME_FMECA (an extended version of the DRIFT prototype) provides estimation of possible failure modes and the evolution of risk criticalities through its data-driven FMECA approach.
The UPTIME_SENSE component (USG prototype) is located in the edge-based infrastructure of the UPTIME platform. It aims to capture data from a high variety of sources and cloud environments. SENSE brings configurable diagnosis capabilities to the edge, e.g. for real-time or off-the-grid applications. SENSE addresses three high-level functions:
- Sensor signal processing, which collects the signals from equipment or other sensors and pre-processes them before transmitting them to the cloud platform.
- Edge diagnosis, providing optional state-detection diagnosis for certain use cases.
- Support functions necessary for the correct operation of the edge module.
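The sensor signal processing function above can be sketched as follows: a raw signal is smoothed and downsampled at the edge before being sent to the cloud. This is an illustrative sketch only; function names and parameters are assumptions, not the SENSE implementation.

```python
# Hypothetical sketch of edge-side sensor signal pre-processing:
# smooth the raw signal and downsample it before transmission to the cloud.

def moving_average(signal, window):
    """Simple smoothing filter applied at the edge."""
    return [sum(signal[i:i + window]) / window
            for i in range(len(signal) - window + 1)]

def downsample(signal, factor):
    """Keep every `factor`-th sample to reduce transmitted data volume."""
    return signal[::factor]

raw = [1.0, 1.2, 0.8, 1.1, 5.0, 1.0, 0.9, 1.1]   # transient spike at index 4
smoothed = moving_average(raw, window=2)
payload = downsample(smoothed, factor=2)           # what would be sent onward
```

Pre-processing like this reduces bandwidth and noise; the trade-off is that the cloud never sees the full-resolution signal, which is why real-time diagnosis for some use cases stays on the edge.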
The UPTIME_SENSE component is responsible for the acquisition of sensor data from the field. It is utilised to enable previously disconnected assets to communicate with the UPTIME Cloud.
The current draft of the UPTIME data model is designed based on international standards like MIMOSA (OSA-CBM v3.3.1 and OSA-EAI v3.2.3a), the initial historical data provided by the business cases and the previous implementations of UPTIME_FMECA and UPTIME_DECIDE.
The "Persistence" layer in the UPTIME conceptual architecture includes a Database Abstraction Layer (DAL) and houses a relational database engine as well as a NoSQL database, where all information needed by the "User Interaction" and "Real-Time Processing and Batch Processing" layers (refer to the UPTIME conceptual architecture) is stored and retrieved. For the raw sensor data itself, this data storage concept is enhanced by a database for time-series data to ensure efficient and reliable storage, while visualization functionalities use an indexing database to facilitate the exposure of analytics. The UPTIME solution aims to provide data harmonization in terms of manipulating streaming data coming from the sensors. Based upon these needs a time-series database is required, and in the context of UPTIME three instances of InfluxDB (one instance per business case) are installed. Along with the InfluxDB instances, a common MySQL database that handles the operations of the UPTIME system is created.
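The role InfluxDB plays here can be made concrete with a minimal in-memory stand-in: points carry a timestamp, tags and a value, and are queried by time range and tag. This is a conceptual sketch in plain Python (all names hypothetical), not the InfluxDB API.

```python
# Minimal in-memory stand-in (hypothetical names) for the kind of time-series
# storage the UPTIME persistence layer delegates to InfluxDB: points are
# written with a timestamp and tags, then queried by time range and tag.

from collections import defaultdict

class TimeSeriesStore:
    def __init__(self):
        self._series = defaultdict(list)   # measurement -> [(ts, tags, value)]

    def write(self, measurement, ts, value, **tags):
        """Append one data point to a measurement series."""
        self._series[measurement].append((ts, tags, value))

    def query(self, measurement, start, end, **tag_filter):
        """Return values in [start, end] whose tags match the filter."""
        return [v for ts, tags, v in self._series[measurement]
                if start <= ts <= end
                and all(tags.get(k) == w for k, w in tag_filter.items())]

store = TimeSeriesStore()
store.write("vibration", ts=100, value=0.41, machine="press_1")
store.write("vibration", ts=160, value=0.58, machine="press_1")
store.write("vibration", ts=160, value=0.12, machine="press_2")
recent = store.query("vibration", start=150, end=200, machine="press_1")
```

A real time-series database adds what this sketch omits: compression, retention policies, downsampling and time-indexed storage, which is exactly why a dedicated engine is used alongside the relational MySQL database.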
UPTIME data are stored in appropriate, shared databases (NoSQL, time-series-based, relational) according to a common UPTIME predictive maintenance model in order to facilitate homogeneous access.
UPTIME has a common MySQL database that will handle the operations of the UPTIME system.
UPTIME_ANALYZE is a data analytics engine driven by the need to leverage manufacturers’ legacy data and operational data related to maintenance, as well as to extract and correlate relevant knowledge. Its data mining and analytics functionality practically delivers the intelligence of the ANALYZE component by defining, training, executing and experimenting with different machine learning algorithms.
The UPTIME_VISUALIZE component (an extended version of the SeaBAR prototype) is responsible for intuitive and uninterrupted human-machine interaction. The user interfaces, including the analytics dashboards and the notification engine, will be customised or further developed in full accordance with the end-user business case. Taking the example of the UPTIME White Goods business case, a complex automated production line producing dryer drums, the early warnings that suggest autonomous activities to factory workers should be communicated through mobile devices or on-board devices.
The data visualisation in UPTIME is performed by the UPTIME_VISUALIZE (SeaBAR prototype) component:
- By establishing standardised connectors to the key data/information/knowledge sources for maintenance, production planning, logistics and quality management.
- By faceting, filtering and semantic structuring of the collected data according to maintenance viewpoints.
- By providing context-sensitive, interactive visualizations that allow the end user to easily search and navigate through huge amounts of heterogeneous information, with the aim of enabling maximally flexible analysis of all relevant information, e.g. to drill down into the data by region, timeframe and machine, or to generalise a specific critical situation and find similar (past) situations with appropriate measures and individual user experience (e.g. best practices).
- By providing a customizable dashboard focusing on the specific information needs of e.g. maintenance engineers, quality managers and upper management, using different context-specific visualization and analytics tools.
- By continuously visualizing critical data and utilizing captured experience from past situations to provide useful insights in interaction with manufacturing companies’ systems.
- By turning from generic statistical data towards instance-specific data and enabling instance specific diagnosis and prognosis.
The UPTIME Platform is developed according to a unified predictive maintenance framework and an associated unified information system to enable the implementation of the predictive maintenance strategy in manufacturing industries. The UPTIME predictive maintenance system will extend and unify the new digital e-maintenance services and tools and will incorporate information from heterogeneous data sources, e.g. sensors, to more accurately estimate process performance. The UPTIME predictive maintenance platform is developed mainly on the basis of five baseline e-maintenance services and tools:
- USG (Universal Sensor Gateway): USG serves as a modular data acquisition and integration device for the Product Lifecycle Management (PLM) ecosystem of a product.
- preInO: The preInO Processing Engine is able to detect and predict the state of a whole system or of components with respect to mechanical systems such as wind turbines, special-purpose vehicles, production machinery, etc.
- PANDDA (ProActive seNsing enterprise Decision configurator DAshboard): PANDDA is a software service that implements (i) proactive decision methods to provide recommendations about mitigating (perfect or imperfect) maintenance actions and the time for their implementation on the basis of real-time prognostic information; and (ii) a Sensor-Enabled Feedback (SEF) mechanism for continuous improvement of the generated recommendations.
- SeaBAR (Search Based Application Repository): SeaBAR is a modular software platform built on Big Data and Enterprise Search technology. The SeaBAR platform supports end users by means of data aggregation, data analysis and visualization.
- DRIFT (Data-Driven Failure Mode, Effects, and Criticality Analysis (FMECA) Tool): DRIFT is a tool that uses the information gathered in other modules (maintenance, production, logistics and quality data) to identify the failure modes, effects and criticalities of the components and the system, according to the available literature and novel correlation algorithms among failure modes, effects and critical impacts.
To ease integration of all UPTIME components, the main programming language used by the components and the integrated platform is Java.
The UPTIME platform will leverage crucial, and often hidden, data from machines and systems in real time and substantiate the benefits of advanced predictive maintenance analytics in boosting asset availability and service levels in manufacturing operations. Moreover, UPTIME delivers new ways for effective operational risk management in terms of preventing unexpected failures of a manufacturer’s assets and effectively planning maintenance actions, thereby transforming the mentality of manufacturing industries with respect to maintenance services. With the help of the end-to-end UPTIME smart diagnosis-prognosis-decision-making methods, manufacturers become leaner, more versatile and better prepared to act upon accidents and unexpected incidents in their everyday operations. The increased accident mitigation capabilities they acquire allow them not only to improve workplace safety and workers’ health, but also to reduce the incurred costs and become more competitive.
The UPTIME platform focusses on the use of condition monitoring techniques, e.g. event monitoring and data processing systems, that will enable manufacturing companies with installed sensors to fully exploit the availability of huge amounts of data and to handle real-time data in complex, dynamic environments in order to gain meaningful insights and to decide and act ahead of time, resolving problems before they appear, e.g. to avoid or mitigate the impact of a future failure in a proactive manner. Moreover, the unified framework proposed by UPTIME will not be limited to monitoring and diagnosis; it aims to cover the whole prognostic lifecycle from signal processing and diagnostics to prognostics and maintenance decision making, along with their interactions with quality management, production planning and logistics decisions.
PreCoM | Predictive Cognitive Maintenance Decision Support System
01-11-2017 – 28-02-2021
SERENA | VerSatilE plug-and-play platform enabling remote pREdictive mainteNAnce
01-10-2017 – 31-03-2021
Digital models enhanced with real-world data acquired from sensor devices will be used as the basis for modelling the physical phenomena that affect the operational condition of the equipment, such as degradation. This will result in improved accuracy of the predictive maintenance functionalities of the SERENA platform and tools.
The SERENA project considers the support of data analytics functionalities for acquiring a certain portion of sensor data to feed the machine learning algorithms responsible for predictive maintenance.
The SERENA project includes the development of a plug-and-play device for machine data acquisition.
PROGRAMS | PROGnostics based Reliability Analysis for Maintenance Scheduling
01-10-2017 – 31-03-2021
L4MS | Logistics for Manufacturing SMEs
01-10-2017 – 31-03-2021
MIDIH | Manufacturing Industry Digital Innovation Hubs
01-10-2017 – 30-09-2020
I4MS-Go | I4MS Going to Market Alliance
01-09-2017 – 29-02-2020
iDev40 | Integrated Development 4.0
01-05-2018 – 31-10-2021
DELPHI4LED | From Measurements to Standardized Multi-Domain Compact Models of LEDs
01-06-2016 – 30-09-2019
SWARMs | Smart and Networking UnderWAter Robots in Cooperation Meshes
01-07-2015 – 31-07-2018
Met4FoF | Metrology for the Factory of the Future
01-06-2018 – 31-05-2021
EUCoM | Standards for the evaluation of the uncertainty of coordinate measurements in industry
01-06-2018 – 31-05-2021
SmartCom | Communication and validation of smart data in IoT-networks
01-01-2018 – 01-01-2021
ROSSINI | RObot enhanced SenSing, INtelligence and actuation to Improve job quality in manufacturing
01-10-2018 – 31-03-2022
Sensor data fusion from multiple and heterogeneous sources is at the core of the development of the RS4 Controller (a CORE component of the ROSSINI Modular KIT).
The human-machine interface is key to evaluating job quality when considering human factors analysis, and is therefore also very relevant in the HR cell design phase.
Based on ROS, the Rossini Modular KIT aims to be fully scalable and widely adopted
The ROSSINI Modular KIT offers advanced components for the different layers of a robotic application (sensing, perception, cognition, control, actuation and integration).
The RS4 Controller gathers and fuses data from different sensor sources. Within ROSSINI, the following sensor sources have been developed as EXTRA components that can be connected to the RS4 Controller: 3D Vision cameras, Lidar arrays, Radars and Skins.
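One common way such heterogeneous readings are combined is inverse-variance weighting, sketched below for distance estimates from several sensors. This is a generic illustration of sensor fusion (names and numbers are assumptions), not the actual RS4 Controller algorithm.

```python
# Generic sketch of a fusion step: independent distance estimates from
# heterogeneous sensors (e.g. camera, lidar, radar) are combined with
# inverse-variance weighting, so more certain sensors count for more.

def fuse(estimates):
    """estimates: list of (value, variance) pairs; returns (fused_value, fused_variance)."""
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * v for w, (v, _) in zip(weights, estimates)) / sum(weights)
    fused_var = 1.0 / sum(weights)   # fused estimate is more certain than any input
    return fused, fused_var

# Hypothetical distance-to-operator readings in metres: (value, variance).
readings = [(2.1, 0.04), (2.0, 0.01), (2.4, 0.25)]
distance, variance = fuse(readings)
```

The fused variance is always smaller than the smallest input variance, which is the statistical motivation for fusing multiple sensor sources rather than trusting the single best one.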
The ROSSINI Controller (a CORE component of the ROSSINI Modular KIT) integrates a Semantic Scene Map, a Flexible layer (Scheduler) and an Execution layer (Motion Planner) to guarantee the optimal efficiency of the robot (also considering job quality factors).
Within the ROSSINI project an advanced collaborative robot has been developed, equipped with advanced and novel interfaces and able to achieve very low braking times.
All three ROSSINI demonstrators (white goods, electronic equipment, and food packaging) prove the feasibility and the advantages of ROSSINI Platform implementations in relevant (different and complex) industrial environments.
Human-Robot Collaboration is key in the development of the ROSSINI solution: all the use cases address this scenario, allowing the human operator to work in the same cell as the robot, on the same machine and even on the same work-piece.
The Virtual Design Tool (a CORE component of the ROSSINI solution) aims to ease the design process of an HR cell implementation (e.g. helping with sensor placement or hazard assessment evaluation).
Job quality evaluation focused on HRC developments aims at improving the well-being of workers (acceptance, trust, physical health) in contexts where robots may become actual co-workers.
Trust and acceptance of the workforce are the enabling factors for the actual adoption of HRC solutions.
The ROSSINI vision is, above all, to guarantee the safety and health of the operators.
CoLLaboratE | Co-production CeLL performing Human-Robot Collaborative AssEmbly
01-10-2018 – 31-03-2022