PERFoRM | Production harmonizEd Reconfiguration of Flexible Robots and Machinery
10-01-2015 - 30-09-2018
01-09-2016 - 31-08-2019
The COMPOSITION project contains a Digital Factory Model Component which offers representation of factory processes and resources in a common format based on well-known standards.
01-09-2016 - 31-08-2019
01-09-2016 - 30-11-2019
01-10-2016 - 31-10-2019
The system blueprint is the FAR-EDGE RA. After defining the requirements and constraints for each block of the RA, a thorough analysis of the state of the art (SotA) was carried out, which led to the identification of some existing software components meeting the specs. We then identified the gaps that the project will need to fill in: not surprisingly, these were all the key enabling technologies, like the distributed ledger. However, hardly anything is going to be built totally from scratch in FAR-EDGE. The distributed ledger, for example, will be a customization of a generic, open-source blockchain platform (Hyperledger Fabric).
01-10-2016 - 29-02-2020
01-10-2016 - 30-09-2019
01-10-2016 - 31-03-2020
Within Z-Fact0r, the proposed (higher-level) DSS, with the support of the knowledge base and the online inspection module (first-level decision support at a single stage), produces, verifies and validates decisions aligned with the quality control policies, production targets, desired product specifications and maintenance management requirements. Key functional characteristics of the envisioned DSS incorporate, among others, techniques for monitoring and predicting product quality, action prioritization, root cause analysis, and mitigation planning algorithms (at product and workstation level). Moving beyond existing solutions that focus only on specific aspects of the production procedure, or that are restricted to diagnosis, the proposed DSS incorporates autonomous, hierarchical decision support based on process analytical technologies and newly developed, suitably adjusted knowledge-based systems, and combines product monitoring models and data analytics from heterogeneous sources. The envisioned DSS takes into account a wide set of factors and criteria, such as data uncertainty, lack of information and information quality, involvement of multiple actors, and real-time response. Thanks to the five intertwined zero-defect strategies (i.e. Z-PREDICT, Z-PREVENT, Z-DETECT, Z-REPAIR and Z-MANAGE), the overall solution contributes significantly to improving the overall performance and reliability of the targeted multi-stage manufacturing systems and the production agility (response to continuous adjustments in production targets).
DATAPIXEL provides the information associated with defect detection in the selected manufactured parts. This information is used as an input for developing the defect detection algorithms of the Z-Fact0r solution. Based on this input, a data conditioning methodology has been developed to extract information concerning the defect position and type. This information will be used as a baseline for the model validation, via comparison with the respective simulation results.
The procedure that has been used is the following:
01-10-2016 - 31-10-2019
From a technological point of view, the Open vf-OS Platform will provide elements covering the connected world, allowing the exchange of information and collaboration between companies along a value stream thanks to the cloud approach to be adopted (vf-Platform). The Open vf-OS covers everything from the control device level upwards, where information from the systems (IoT, CPS, embedded systems) is gathered, processed and exploited.
01-10-2016 - 30-09-2019
01-11-2015 - 31-10-2017
Cyber-Physical Production Systems and digital twins require data collection from the real system.
01-09-2017 - 28-02-2021
UPTIME will reframe the predictive maintenance strategy in a systematic and unified way, with the aim to fully exploit the advancements in ICT and maintenance management by examining the potential of big data in an e-maintenance infrastructure, taking into account Gartner's four levels of data analytics maturity and the proactive computing principles.
UPTIME will enable manufacturing companies to reach Gartner's four levels of data analytics maturity for optimised decision making, each one building on the previous one: Monitor, Diagnose and Control, Manage, Optimize. It aims to optimise in-service efficiency and contribute to increased accident mitigation capability by avoiding crucial breakdowns with significant consequences. The UPTIME components UPTIME_DETECT, UPTIME_PREDICT and UPTIME_ANALYZE aim to enhance the methodology framework for data processing and analytics. The key users of the UPTIME_DETECT and UPTIME_PREDICT components are data scientists, who are in charge of developing, testing and deploying algorithmic calculations on data streams. In this way, the component is able to identify the current condition of technical equipment and to give predictions. On the other hand, UPTIME_ANALYZE is a data analytics engine driven by the need to leverage manufacturers' legacy data and operational data related to maintenance, and to extract and correlate relevant knowledge.
In UPTIME, two data processing solutions are considered: (1) batch processing of data at rest, through massively parallel processing; (2) real-time processing of data in motion, where real-time data from heterogeneous sources are processed as a continuous "stream" of events (produced by some outside system or systems), and processing occurs so fast that all decisions are made without stopping the data stream and storing the information first.
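As a rough illustration of the contrast between the two modes, the following minimal Python sketch (not the UPTIME implementation; the event fields and the alert threshold are invented for the example) decides on each streamed event as it arrives, while the batch function aggregates over stored records:

```python
# Illustrative only: event structure and threshold are assumptions, not UPTIME specifics.
from statistics import mean
from typing import Dict, Iterable, List


def process_stream(events: Iterable[Dict], limit: float = 80.0):
    """Data in motion: decide on each event as it arrives, without storing it first."""
    for event in events:
        if event["vibration_rms"] > limit:  # immediate, per-event decision
            yield {"asset": event["asset"], "alert": "vibration above limit"}


def process_batch(stored_events: List[Dict]) -> Dict:
    """Data at rest: aggregate over the full stored history (massively parallel in practice)."""
    values = [e["vibration_rms"] for e in stored_events]
    return {"count": len(values), "mean_vibration": mean(values)}


# Toy event source standing in for heterogeneous outside systems
events = [{"asset": "pump-1", "vibration_rms": v} for v in (42.0, 91.5, 77.3)]
print(list(process_stream(iter(events))))
print(process_batch(events))
```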
The UPTIME main functionalities are structured in three main modules, namely the edge, cloud and GUI modules.
Four main components in the cloud-based infrastructure of the UPTIME platform include:
The UPTIME_SENSE component (USG prototype) is located in the edge-based infrastructure of the UPTIME platform. It aims to capture data from a wide variety of sources and cloud environments. SENSE brings configurable diagnosis capabilities to the edge, e.g. for real-time or off-the-grid applications. SENSE addresses three high-level functions:
The UPTIME_SENSE component is responsible for the acquisition of sensor data from the field. It is utilised to enable previously disconnected assets to communicate with the UPTIME Cloud.
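A minimal sketch of this acquisition role, assuming a plain HTTP transport and an invented endpoint and payload (the actual UPTIME_SENSE interface is not specified here):

```python
# Illustrative sketch only: the endpoint URL, payload fields and the use of plain HTTP
# are assumptions, not the documented UPTIME_SENSE interface.
import time

import requests

CLOUD_ENDPOINT = "https://uptime-cloud.example.org/api/sensor-readings"  # hypothetical


def read_sensor() -> float:
    """Placeholder for the field-level acquisition (e.g. an ADC or fieldbus read)."""
    return 0.42


while True:
    reading = {"asset": "press-03", "timestamp": time.time(), "vibration_rms": read_sensor()}
    # Forward the reading to the UPTIME cloud side; diagnosis can also run on the edge first.
    requests.post(CLOUD_ENDPOINT, json=reading, timeout=5)
    time.sleep(1.0)
```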
The current draft of the UPTIME data model is designed based on international standards like MIMOSA (OSA-CBM v3.3.1 and OSA-EAI v3.2.3a), on the initial historical data provided by the business cases, and on the previous implementations of UPTIME_FMECA and UPTIME_DECIDE.
The "Persistence" layer in the UPTIME conceptual architecture includes a Database Abstraction Layer (DAL) and houses of relational database engine as well as a NoSQL database, where all information needed by the "User Interaction" and "Real-Time Procesing and Batch Processing" layers (refer to UPTIME conceptual architecture) is stored and retrieved. For the raw sensor data itself, this data storage concept is enhanced by a database for time-series data to ensure efficient and reliable storage, while visualization functionalities will use an indexing database to facilitate the exposure of analytics. In these databases, all information needed by the other three layers is stored and retrieved. The UPTIME solution aims to provide data harmonization in terms of manipulating streaming data coming from the sensors. Based upon these needs a time series database is needed and in the context of UPTIME three instances of influxDB (one instance per business case) are installed. Along with the influxDB instances, a common MySQL database that will handle the operations of the UPTIME system is created.
UPTIME Data are stored in appropriate, shared databases (NoSQL, time-series-based, relational) according to a common UPTIME predictive maintenance model in order to facilitate homogeneous access.
UPTIME has a common MySQL database that will handle the operations of the UPTIME system.
UPTIME_ANALYZE is a data analytics engine driven by the need to leverage manufacturers' legacy data and operational data related to maintenance, as well as to extract and correlate relevant knowledge. The data mining and analytics module delivers the intelligence of the ANALYZE component by defining, training, executing and experimenting with different machine learning algorithms.
01-10-2017 - 31-03-2021
The Z-BRE4K solution uses a variety of communication protocols: HTTP, OPC UA, IEEE 802.15.4e and IEC WirelessHART. The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, collaborative, hypermedia information systems and is the foundation of data communication for the World Wide Web. OPC UA supports two protocols: the binary protocol (opc.tcp://Server) and Web Services (http://Server); otherwise OPC UA works completely transparently to the API. IEEE 802.15.4 is a technical standard which defines the operation of low-rate wireless personal area networks (LR-WPANs); it specifies the physical layer and media access control for LR-WPANs and is maintained by the IEEE 802.15 working group, which defined the standard in 2003. WirelessHART is a wireless sensor networking technology based on the Highway Addressable Remote Transducer Protocol (HART); developed as a multi-vendor, interoperable wireless standard, WirelessHART was defined for the requirements of process field device networks. The solution also uses the NGSI protocol. NGSI is a protocol developed to manage context information; it provides operations for managing the context information about context entities, for example the lifetime and quality of information, and for access (query, subscribe/notify) to the available context information about context entities.
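For illustration, the snippet below sketches how two of these protocols are commonly exercised from Python (python-opcua for an OPC UA binary endpoint and a plain HTTP call for an NGSI v2 query); server addresses, node identifiers and entity types are assumptions, not Z-BRE4K configuration:

```python
# Illustrative use of two of the listed protocols; all endpoints and identifiers are invented.
from opcua import Client  # python-opcua package
import requests

# OPC UA binary protocol (opc.tcp://...) read of a single node value
ua_client = Client("opc.tcp://machine-controller:4840")
ua_client.connect()
spindle_temp = ua_client.get_node("ns=2;s=Press1.SpindleTemperature").get_value()
ua_client.disconnect()

# NGSI v2 query against a context broker (e.g. Orion) for machine context entities
resp = requests.get("http://context-broker:1026/v2/entities",
                    params={"type": "Machine"}, timeout=5)
machines = resp.json()
```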
The Z-BRE4K solution provides a big data analytics framework for the identification of deterioration trends, to be extended towards prescriptive maintenance. Advanced data analysis tools are under development, to be applied to the quality and production data to realise zero-defect and zero-breakdown production. Furthermore, it involves models for anomaly detection that are capable of identifying the machine states where the operation deviated from the norm. This is achieved by collecting the data from the machine sensors in chunks of time and processing them in batch through deep learning models. The models try to recreate their inputs, and this results in an observable measure called the Reconstruction Error, which is used to identify states that the models are not capable of addressing sufficiently (which constitutes an anomaly).
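The following minimal PyTorch sketch illustrates the reconstruction-error idea described above; the window size, network shape and threshold are assumptions and do not reflect the actual Z-BRE4K models:

```python
# Autoencoder trained on "normal" sensor windows; high reconstruction error flags anomalies.
import torch
import torch.nn as nn

WINDOW = 64  # illustrative number of features per time chunk

model = nn.Sequential(
    nn.Linear(WINDOW, 16), nn.ReLU(),  # encoder
    nn.Linear(16, WINDOW),             # decoder tries to recreate the input
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()


def train(normal_batches):
    """Fit the model on batches of normal operation, shape (N, WINDOW)."""
    for batch in normal_batches:
        optimiser.zero_grad()
        loss = loss_fn(model(batch), batch)  # reconstruction error on normal data
        loss.backward()
        optimiser.step()


def reconstruction_error(window: torch.Tensor) -> float:
    with torch.no_grad():
        return loss_fn(model(window), window).item()


# A window whose error is far above what was seen in training is treated as an anomaly.
is_anomaly = reconstruction_error(torch.randn(1, WINDOW)) > 0.5  # illustrative threshold
```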
The Z-BRE4K solution will provide an ontology with annotation mechanisms that include all the necessary information to perform predictive maintenance analysis and achieve extended operating life of assets in production facilities. This context includes the sensorial data processing to be used as simulation inputs and the simulation process itself (physics-based modelling). It also includes the machine learning application, due to the usage of prediction models in data-driven modelling. A Knowledge Based System (KBS) will extract, store and retrieve all the relevant information enriched with semantic annotations to guarantee a prompt identification of criticalities. Shop floor data is transformed into RDF (Resource Description Framework) data, a standard model for data interchange on the web, and stored in a triple store DB. Also, the M3 Gage platform serves for fast verification of the machine, condition monitoring, and as a data repository. It allows information interconnection from different data sources, and furthermore, the architecture proposed by AUTOWARE provides the ability to establish data processing and computing at the most appropriate abstraction level in the hierarchy: Field, Workcell/Production Line, Factory and Enterprise. Different filtering and pre-processing algorithms are applied on the edge to clean real-time raw data and reduce unwanted noise. In addition, convolutional neural networks are used to process high-throughput video streams, providing a non-time-critical stream of features for cloud services.
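As a small illustration of the edge-side cleaning step, the sketch below applies an exponential moving average to a noisy sensor trace; the specific filter and its parameters are assumptions, not the documented Z-BRE4K algorithms:

```python
# Edge-side pre-processing sketch: smooth raw readings before forwarding them to cloud services.
import numpy as np


def exponential_moving_average(raw: np.ndarray, alpha: float = 0.2) -> np.ndarray:
    """Smooth a raw sensor trace; a higher alpha follows the signal more closely."""
    smoothed = np.empty_like(raw, dtype=float)
    smoothed[0] = raw[0]
    for i in range(1, len(raw)):
        smoothed[i] = alpha * raw[i] + (1 - alpha) * smoothed[i - 1]
    return smoothed


noisy = np.sin(np.linspace(0, 6, 200)) + 0.3 * np.random.randn(200)
clean = exponential_moving_average(noisy)  # cleaned stream, ready to be aggregated/shared
```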
The suggestion beyond the state of the art is to have intelligent machine simulators, so that an information- and knowledge-rich platform can provide an accurate account of the machine's current state and provide predictive (look-ahead) potential scenarios of the time, type, severity and risks of future breakdowns. Collected, processed, integrated and aggregated data will be structured and fed in real time into networked simulators enabling advanced analysis and visualization to provide smart services, higher fidelity and prediction accuracy for production and manufacturing asset management. Different schemes for data collection configuration are implemented (ranging from dedicated IoT devices to independent methodologies) to collect raw data from sensors, pre-process and aggregate the information, and share the results with other services through an IDS connector. The Z-BRE4K IDS connectors follow a reference architecture to ensure data sovereignty and integrity throughout this collection phase.
Within Z-BRE4K, semantic data modelling is used for interoperability. Semantic representation is used for machinery, critical components, failure modes as well as optimal conditions. Statistical methods and machine learning algorithms are used in offline mode to discover patterns in the data and associate them with specific events (pattern discovery, association rules), as well as to infer causality of events in cases such as quality control (quality estimation based on machine status).
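A hedged sketch of the offline pattern-discovery step, using mlxtend's Apriori implementation on invented one-hot-encoded machine events; the event names and thresholds are illustrative only:

```python
# Offline pattern discovery / association rules on one-hot-encoded events per production cycle.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

events = pd.DataFrame({
    "high_temperature": [1, 1, 0, 1, 0],
    "vibration_spike":  [1, 1, 0, 1, 1],
    "quality_defect":   [1, 1, 0, 1, 0],
}).astype(bool)

frequent = apriori(events, min_support=0.4, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)

# Rules such as {high_temperature, vibration_spike} -> {quality_defect} can then be
# associated with specific machine states for quality estimation.
print(rules[["antecedents", "consequents", "confidence"]])
```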
Within Z-BRE4K, a novel software application will be developed and added to I-LiKe Machine’s tech stack: A Knowledge Base System (KBS) to extract, store, and retrieve all the relevant information enriched with semantic annotations to guarantee a prompt identification of criticalities in the process.
The KBS represents a step towards the implementation of novel and innovative solutions that are still not common practice in manufacturing. The data repository takes the form of a triplestore, which is designed to store entities derived from collections of triples representing subject-predicate-object relationships. On top of the repositories, a reasoning engine creates relationships and allows knowledge to be extracted and consumed by other applications.
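The subject-predicate-object storage and querying idea can be sketched with rdflib as follows; the namespace, properties and in-memory graph are stand-ins for the actual KBS triplestore:

```python
# Sketch of triple-based storage and retrieval; ontology terms are invented for illustration.
from rdflib import Graph, Literal, Namespace

ZB = Namespace("http://example.org/zbre4k#")  # hypothetical ontology namespace
g = Graph()

# Shop-floor observations expressed as subject-predicate-object triples
g.add((ZB.Press1, ZB.hasComponent, ZB.MainBearing))
g.add((ZB.MainBearing, ZB.hasFailureMode, ZB.Wear))
g.add((ZB.MainBearing, ZB.vibrationRMS, Literal(0.42)))

# A consuming application retrieves criticalities with a SPARQL query
results = g.query("""
    SELECT ?component ?mode WHERE {
        ?machine <http://example.org/zbre4k#hasComponent> ?component .
        ?component <http://example.org/zbre4k#hasFailureMode> ?mode .
    }
""")
for component, mode in results:
    print(component, mode)
```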
01-10-2017 - 30-09-2021
01-01-2019 - 31-07-2022
Through innovative algorithms and statistical methods, possible data sources for predictive quality control can be identified and evaluated. Moreover, through the cooperation of all project partners, data access and acquisition along the whole process chain can be realized. With a focus on algorithms and methodology, a use case-specific algorithm is going to be implemented and validated to maintain high prediction accuracy.
Data availability is a challenge: Limited access to measurement data (due to limited access to third-party systems)
There seems to be a relationship that allows torque to be predicted from in-line data; this needs to be explored further.
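One possible way to explore this further is a simple regression of torque on in-line process features, sketched below; the file name, feature columns and linear model are assumptions made for illustration:

```python
# Exploratory check of whether in-line data can predict torque; all names are illustrative.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("inline_process_data.csv")             # hypothetical export of in-line data
X = df[["spindle_speed", "feed_rate", "temperature"]]   # illustrative in-line features
y = df["torque"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out data:", r2_score(y_test, model.predict(X_test)))
```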
By applying sophisticated algorithms and methods to the acquired data, systematic failure root-cause detection supported by data analytics can be implemented. In addition, improved knowledge of machine states and maintenance requirements for neuralgic points can be achieved through the desired solution path within this pilot.
An AI vision algorithm developed by TNO (WP3) appears to filter badly rated parts better than the currently installed algorithm. A further advantage is that, when the product print changes, it can keep up with the required development speed better than traditional algorithm development.
For this trial, the acquired test data will be analyzed with regard to quality classification. In every test, a part can pass or fail. Failed parts must be reworked, if possible, and brought back into the process. Sometimes parts are classified as failed even if they are good (false positives). This effect will be analyzed by machine learning algorithms and, if necessary, addressed in the classification parameterisation. Additionally, 100% testing, meaning that every panel is tested automatically, with its bottleneck at the out-of-line test stations, will be addressed by setting up failure prediction models for quality forecasting. This will be supported by data analysis of pre-reflow AOI (automated optical inspection).
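A hedged sketch of how such a false-positive analysis could be set up, assuming exported test logs with measured features and the station's pass/fail verdict (all names are illustrative):

```python
# Compare the station's pass/fail verdicts with a model trained on the measured features;
# off-diagonal cells of the confusion matrix point to likely false positives.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

logs = pd.read_csv("test_station_logs.csv")   # hypothetical export of test data
X = logs.drop(columns=["test_result"])        # measured electrical / pre-reflow AOI features
y = logs["test_result"]                       # "pass" / "fail" from the test station

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Rows: station verdict, columns: model verdict
print(confusion_matrix(y_test, clf.predict(X_test), labels=["pass", "fail"]))
```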
Alongside all these data analysis and process optimization activities, an economic evaluation will be included to support in-process decisions and configuration changes. For the development of these applications, the main steps are data availability/access, data processing, and model development. The developed applications should be deployed on edge devices.
01-01-2020 - 31-12-2023
Operational services aim to collect product data on post-use Li-Ion batteries about their use phase in order to enable monitoring and full traceability of their life-cycle;
Operational services aim to:
01-05-2017 - 31-10-2020
01-11-2017 - 28-02-2021
01-10-2017 - 31-03-2021
Digital models enhanced with real-world data acquired from sensor devices will be used as the basis for modelling the physical phenomena that affect the operational condition of the equipment, such as degradation. This will result in improved accuracy of the predictive maintenance functionalities of the SERENA platform and tools.
The SERENA project considers the support of data analytics functionalities for acquiring a certain portion of the sensor data to feed the machine learning algorithms responsible for predictive maintenance.
01-10-2018 - 31-03-2022
Sensor data fusion from multiple and heterogeneous sources is at the core of the development of the RS4 Controller (the CORE component of the Rossini Modular KIT).
01-11-2020 - 31-10-2023
The proposed approach is underpinned by predictive and prescriptive AI analytics at both component and system level, by cross-fertilizing edge and platform AI, while leveraging human knowledge and feedback for reinforcement learning (human-in-the-loop).