Factory2Fit | Empowering and participatory adaptation of factory automation to fit for workers
01-10-2016
-30-09-2019
01-10-2016
-30-09-2019
01-10-2016
-31-10-2019
We are considering some core GEs from FIWARE as candidate building blocks of our Edge Computing architecture. In particular, the Publish/Subscribe GE (Orion Context Broker implementation) is a good candidate as the northbound interface exposed by Edge Gateways – i.e., computing nodes aggregating a number of local edge nodes (field devices, smart factory equipment) and running local automation and/or analytics processes. We are considering significantly extending the Publish/Subscribe GE by adding distributed computing capabilities: a data context that is replicated and kept in sync across a number of GE instances (running anywhere on the network), using a Blockchain and smart contracts as the backing technology. FAR-EDGE will contribute its results, as open source software, to the FIWARE for Industry community.
In the scope of FAR-EDGE, the value of FIWARE is in the OMA NGSI standard: a RESTful Web API implementing the publish/subscribe pattern on context information – i.e., a set of attributes representing the current state of some device or process. NGSI is the common language that FIWARE applications use to integrate themselves with the IoT world. For this reason, supporting NGSI in FAR-EDGE means opening up the Platform to the FIWARE community. The FIWARE asset that is crucial for the support of NGSI is the Orion Context Broker (OCB), which, like all FIWARE Generic Enablers, is open source software. In FAR-EDGE, we envision the use of OCB to implement the generic publish/subscribe interface of the Distributed Data Analytics subsystem.
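To illustrate the publish/subscribe pattern on context information described above, the following Python sketch builds NGSI v2-style payloads for a context entity and a subscription. Entity names, attribute names and the callback URL are illustrative assumptions, not taken from the FAR-EDGE project; a real client would POST these JSON documents to the Orion Context Broker's /v2/entities and /v2/subscriptions endpoints.

```python
import json

def make_entity(entity_id: str, temperature: float) -> dict:
    """Build an NGSI v2 context entity representing a factory device
    (hypothetical example: a press with a temperature attribute)."""
    return {
        "id": entity_id,
        "type": "Device",
        "temperature": {"value": temperature, "type": "Number"},
    }

def make_subscription(entity_type: str, attr: str, callback_url: str) -> dict:
    """Build an NGSI v2 subscription: notify `callback_url` whenever
    `attr` changes on any entity of the given type."""
    return {
        "subject": {
            "entities": [{"idPattern": ".*", "type": entity_type}],
            "condition": {"attrs": [attr]},
        },
        "notification": {
            "http": {"url": callback_url},
            "attrs": [attr],
        },
    }

entity = make_entity("urn:device:press-01", 42.5)
sub = make_subscription("Device", "temperature", "http://edge-gw.local/notify")
print(json.dumps(entity))
print(json.dumps(sub))
```

The subscribe/notify loop is what lets an Edge Gateway expose local state northbound without consumers polling the field devices directly.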
01-09-2016
-31-08-2019
CloudBoard: offers multiple views and access rights to different human actors
Decision Support Toolkit: supports decisions authorised by humans, especially on the shop floor
Enterprise and Factory models: accessible and re-configurable through user interaction
01-10-2016
-31-03-2021
01-10-2016
-29-02-2020
01-10-2016
-30-09-2019
01-10-2016
-31-03-2020
Applied Technologies:
Spring Boot: Spring Boot is a framework for building web applications. It is built on top of the Spring Framework and follows a zero-configuration principle. The major set of microservices is built using Spring Boot as the application framework.
Spring Cloud: Functionalities for building and integrating microservices are provided by Spring Cloud. It mainly aggregates components of the Netflix Open Source Software (Netflix OSS) project and makes them easy to integrate with Spring Boot applications. Components of the underlying microservice infrastructure make heavy use of Spring Cloud modules (e.g. Service Discovery, Configuration Server and Gateway Proxy).
Spring Cloud Security: Standardized security mechanisms are implemented using Spring Cloud Security. It provides out-of-the-box integration of security modules to Spring Cloud applications. Authentication and authorization between microservices are realized by using Spring Cloud Security, which supports OAuth2 and OpenID Connect and communicates with the authentication server (i.e. Cloud Foundry UAA).
ELK Stack: Logs can be streamed to Logstash, which stores them persistently in Elasticsearch; visualizations are done using Kibana, hence the ELK stack. The ELK stack is used to aggregate the log output of distributed microservices in order to centrally analyse the generated log output.
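For log aggregation of this kind, services typically emit structured JSON log lines that a Logstash pipeline can ingest without custom grok parsing. The sketch below (illustrative Python; the field names and service name are assumptions, not from the project) shows a minimal JSON log formatter.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Format log records as one JSON object per line, a shape that a
    Logstash JSON input can parse directly before indexing into
    Elasticsearch."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "service": getattr(record, "service", "unknown"),
            "level": record.levelname,
            "message": record.getMessage(),
        })

formatter = JsonFormatter()
# Build a record by hand for demonstration; normally logging.getLogger()
# produces these.
record = logging.LogRecord(
    name="orders", level=logging.INFO, pathname="", lineno=0,
    msg="order %s accepted", args=("A-17",), exc_info=None,
)
record.service = "order-service"   # hypothetical microservice name
line = formatter.format(record)
print(line)
```

Tagging each line with the originating service is what makes centralised analysis across distributed microservices possible in Kibana.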
Cloud Foundry UAA: The Cloud Foundry User Account and Authentication (UAA) is a multi-tenant identity management service, available as a stand-alone OAuth2 server issuing tokens for clients. Cloud Foundry UAA acts as the identity and authentication server issuing OpenID Connect tokens.
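The OpenID Connect tokens issued by such a server are JWTs: three base64url segments (header, claims payload, signature). The sketch below, a generic illustration not tied to the project's setup, decodes the claims segment of a toy token; the issuer, subject and audience values are made up.

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) claims segment of a JWT.

    A real service must verify the signature against the UAA's public
    key before trusting any claim; this sketch only shows the token's
    structure.
    """
    payload_b64 = token.split(".")[1]
    # base64url strips padding; restore it to a multiple of 4 chars
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a toy token with illustrative OpenID Connect claims.
claims = {"iss": "https://uaa.example.com", "sub": "user-42", "aud": "shop-ui"}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
toy_token = "eyJhbGciOiJub25lIn0." + body + ".sig"

print(decode_jwt_payload(toy_token))
```

In a Spring Cloud Security setup this decoding and verification is handled by the framework; the sketch just makes the token anatomy visible.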
Camunda BPM: Camunda BPM is an open source platform for business process management. Camunda BPM is used for the definition and execution of business processes (e.g. supply chain process).
Apache Marmotta: Apache Marmotta is an open implementation of a linked data platform. Apache Marmotta will mainly be used to store catalog data and perform product-search queries.
Apache Solr: Apache Solr is a free-text indexing tool providing advanced search and navigation capabilities on the indexed data. Apache Marmotta uses Apache Solr for its semantic search cores, which index the semantic features of indexed items.
Docker: Docker is an open-source solution for application deployment, consisting of prebuilt images running inside a container. Docker will be used for intermediate development releases and on-premises deployment.
PostgreSQL: PostgreSQL is an open-source object-relational database system. PostgreSQL will mainly be used as the database technology, in order to have a homogeneous setup.
Apache Kafka: Apache Kafka is an open-source messaging infrastructure. It is mainly used for private communication among components and entities.
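The component-to-component communication pattern rests on Kafka's topic semantics: producers append messages to a topic log, and each consumer group tracks its own read offset, so the same messages can be consumed independently by different components. The toy in-memory broker below illustrates those semantics only; it is not the Kafka client API, and the topic and message names are invented.

```python
from collections import defaultdict

class MiniBroker:
    """Toy in-memory broker illustrating Kafka-style topic semantics:
    producers append to a per-topic log; each consumer group keeps its
    own offset into that log."""
    def __init__(self):
        self.topics = defaultdict(list)    # topic -> append-only message log
        self.offsets = defaultdict(int)    # (topic, group) -> next offset

    def produce(self, topic: str, message: str) -> None:
        self.topics[topic].append(message)

    def consume(self, topic: str, group: str) -> list:
        """Return all messages this group has not yet seen and advance
        the group's offset."""
        start = self.offsets[(topic, group)]
        batch = self.topics[topic][start:]
        self.offsets[(topic, group)] = len(self.topics[topic])
        return batch

broker = MiniBroker()
broker.produce("orders", "order A-17 created")
broker.produce("orders", "order A-17 shipped")
print(broker.consume("orders", "billing"))   # both messages
print(broker.consume("orders", "billing"))   # empty: offset has advanced
print(broker.consume("orders", "shipping"))  # independent group sees both
```

Because offsets are per group, "billing" and "shipping" components each receive the full stream without interfering with one another.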
Data management:
01-10-2016
-31-03-2020
Within Z-Fact0r, the proposed (higher-level) DSS, with the support of the knowledge base and the online inspection module (first-level decision support at single stage), produces, verifies and validates decisions aligned with the quality control policies, production targets, desired product specifications and maintenance management requirements. The key functional characteristics of the envisioned DSS include, among others, techniques for monitoring and predicting product quality, action prioritisation, root cause analysis, and mitigation planning algorithms (at product and workstation level). Moving beyond existing solutions that focus only on specific aspects of the production procedure, or that are restricted to diagnosis, the proposed DSS incorporates autonomous, hierarchical decision support based on process analytical technologies and newly developed, suitably adjusted knowledge-based systems, and combines product monitoring models with data analytics from heterogeneous sources. The envisioned DSS takes into account a wide set of factors and criteria, such as data uncertainty, lack of information and information quality, involvement of multiple actors, and real-time response. Thanks to the five intertwined zero-defect strategies (i.e. Z-PREDICT, Z-PREVENT, Z-DETECT, Z-REPAIR and Z-MANAGE), the overall solution contributes significantly to improving the overall performance and reliability of the targeted multi-stage manufacturing systems and their production agility (response to continuous adjustments in production targets).
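The action-prioritisation aspect mentioned above can be illustrated with a simple multi-criteria weighted-scoring sketch. This is a generic example, not the Z-Fact0r algorithm: the candidate actions, criteria and weights are all invented for illustration.

```python
def prioritise(actions, weights):
    """Rank candidate quality/maintenance actions by a weighted sum of
    per-criterion scores (a basic multi-criteria decision rule)."""
    def score(criteria):
        return sum(weights[c] * v for c, v in criteria.items())
    return sorted(actions, key=lambda a: score(actions[a]), reverse=True)

# Hypothetical actions, scored 0..1 on each criterion.
actions = {
    "recalibrate station 2": {"defect_risk": 0.9, "cost": 0.2, "downtime": 0.3},
    "replace worn tool":     {"defect_risk": 0.6, "cost": 0.5, "downtime": 0.6},
    "adjust line speed":     {"defect_risk": 0.3, "cost": 0.1, "downtime": 0.1},
}
# Reducing defect risk is rewarded; cost and downtime are penalised.
weights = {"defect_risk": 0.6, "cost": -0.2, "downtime": -0.2}
ranking = prioritise(actions, weights)
print(ranking)
```

Weights encode the quality-control policy; changing them (e.g. penalising downtime more heavily near a delivery deadline) reorders the recommendations without changing the scoring machinery.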
DATAPIXEL provides the information associated with defect detection in the selected manufacturing parts. This information is used as an input for developing the defect detection algorithms of the Z-Fact0r solution. Based on this input, a data conditioning methodology has been developed to extract information concerning the defect position and type. This information will be used as a baseline for model validation, via comparison with the respective simulation results.
The procedure that has been used is the following:
Nowadays, it is well known that within Industry 4.0, ICT and CPS are implemented and merged as parts of the industrial processes. For data collection, sensors are used, embedded with AI, in order to enable smooth communication between humans and machines. Z-Fact0r is thus a pioneer, with several advances in predictive maintenance, IoT sensors on shop floors and continuous communication between the various components of the system, creating effective and efficient applications for Industry 4.0.
01-10-2016
-31-10-2019
From a technological point of view, the Open vf-OS Platform will provide elements covering the connected world, allowing the exchange of information and collaboration between companies along a value stream, thanks to the cloud approach to be adopted (vf-Platform). The Open vf-OS coverage starts at the control device level, where information from the systems (IoT, CPS, embedded systems) is gathered, processed and empowered.
01-10-2016
-31-03-2021
01-10-2016
-30-09-2019
Decision support systems
01-10-2016
-30-09-2019
01-05-2017
-31-10-2020
01-09-2017
-28-02-2021
UPTIME will reframe predictive maintenance strategy in a systematic and unified way, with the aim of fully exploiting the advancements in ICT and maintenance management by examining the potential of big data in an e-maintenance infrastructure, taking into account Gartner's four levels of data analytics maturity and the proactive computing principles.
UPTIME will enable manufacturing companies to reach Gartner's four levels of data analytics maturity for optimised decision making, each one building on the previous one: Monitor, Diagnose and Control, Manage, Optimize. It aims to optimise in-service efficiency and contribute to increased accident mitigation capability by avoiding crucial breakdowns with significant consequences. The UPTIME components UPTIME_DETECT, UPTIME_PREDICT and UPTIME_ANALYZE aim to enhance the methodology framework for data processing and analytics. The key users of the UPTIME_DETECT and UPTIME_PREDICT components are data scientists, who are in charge of developing, testing and deploying algorithmic calculations on data streams. In this way, the components are able to identify the current condition of technical equipment and to give predictions. UPTIME_ANALYZE, on the other hand, is a data analytics engine driven by the need to leverage manufacturers’ legacy data and operational data related to maintenance, and to extract and correlate relevant knowledge.
In UPTIME, two data processing solutions are considered: (1) batch processing of data at rest, through massively parallel processing; (2) real-time processing of data in motion, where real-time data from heterogeneous sources are processed as a continuous "stream" of events (produced by some outside system or systems), and data processing occurs so fast that all decisions are made without stopping the data stream and storing the information first.
UPTIME main functionalities are structured in three main modules, namely: edge, cloud and GUI modules.
Four main components in the cloud-based infrastructure of the UPTIME platform include:
The UPTIME_SENSE component (USG prototype) is located in the edge-based infrastructure of the UPTIME platform. It aims to capture data from a high variety of sources and cloud environments. SENSE brings configurable diagnosis capabilities to the edge, e.g. for real-time or off-the-grid applications. SENSE addresses three high-level functions:
The UPTIME_SENSE component is responsible for the acquisition of sensor data from the field. It is utilised to enable previously disconnected assets to communicate with the UPTIME Cloud.
The current draft of the UPTIME data model is designed based on international standards like MIMOSA (OSA-CBM v3.3.1 and OSA-EAI v3.2.3a), the initial historical data provided by the business cases and the previous implementations of UPTIME_FMECA and UPTIME_DECIDE.
The "Persistence" layer in the UPTIME conceptual architecture includes a Database Abstraction Layer (DAL) and houses a relational database engine as well as a NoSQL database, where all information needed by the "User Interaction" and "Real-Time Processing and Batch Processing" layers (refer to the UPTIME conceptual architecture) is stored and retrieved. For the raw sensor data itself, this data storage concept is enhanced by a database for time-series data to ensure efficient and reliable storage, while visualization functionalities use an indexing database to facilitate the exposure of analytics. The UPTIME solution aims to provide data harmonization in terms of manipulating streaming data coming from the sensors. Based upon these needs, a time-series database is required, and in the context of UPTIME three instances of InfluxDB (one instance per business case) are installed. Along with the InfluxDB instances, a common MySQL database that handles the operations of the UPTIME system is created.
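Raw sensor points are typically written to InfluxDB in its line protocol, one point per line: measurement name, comma-separated tags, comma-separated fields, and a timestamp. The sketch below renders that format; the measurement, tag and field names are illustrative, not the UPTIME data model.

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Render one data point in InfluxDB line protocol:
    measurement,tag=value,... field=value,... timestamp
    Tags and fields are sorted for a deterministic output."""
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_part = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_part} {field_part} {timestamp_ns}"

# Hypothetical vibration reading from one machine axis.
line = to_line_protocol(
    "vibration",
    {"machine": "press-01", "axis": "x"},
    {"rms": 0.42, "peak": 1.7},
    1_570_000_000_000_000_000,   # nanosecond epoch timestamp
)
print(line)
```

Tags are indexed by InfluxDB and used for filtering (per machine, per axis), while fields carry the measured values; keeping that split consistent across business cases is part of what a common data model buys.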
UPTIME Data are stored in appropriate, shared databases (NoSQL, time-series-based, relational) according to a common UPTIME predictive maintenance model in order to facilitate homogeneous access.
UPTIME has a common MySQL database that will handle the operations of the UPTIME system.
UPTIME_ANALYZE is a data analytics engine driven by the need to leverage manufacturers’ legacy data and operational data related to maintenance, as well as to extract and correlate relevant knowledge. The data mining and analytics functionality practically delivers the intelligence of the ANALYZE component by defining, training, executing and experimenting with different machine learning algorithms.
The UPTIME_VISUALIZE component (an extended version of the SeaBAR prototype) is responsible for intuitive and uninterrupted human-machine interaction. The user interfaces, including the analytics dashboards and the notification engine, will be customised or further developed in full accordance with the end-user business case. Taking as an example the UPTIME White Goods business case, with its complex automatic production line producing drums for dryers, the early warnings that suggest autonomous activities to factory workers should be communicated through mobile devices or on-board devices.
The data visualisation in UPTIME is performed by the UPTIME_VISUALIZE (SeaBAR prototype) component:
The UPTIME Platform is developed according to a unified predictive maintenance framework and an associated unified information system to enable predictive maintenance strategy implementation in manufacturing industries. The UPTIME predictive maintenance system will extend and unify the new digital e-maintenance services and tools and will incorporate information from heterogeneous data sources, e.g. sensors, to more accurately estimate process performance. The UPTIME predictive maintenance platform is developed mainly on the basis of five baseline e-maintenance services and tools:
To ease integration of all UPTIME components, the main programming language used by the components and the integrated platform is Java.
01-11-2017
-28-02-2021
01-10-2017
-31-03-2021
Digital models enhanced with real-world data acquired from sensor devices will be used as the basis for modelling physical phenomena that affect the operational condition of the equipment, such as degradation. This will result in the improvement of the accuracy of the predictive maintenance functionalities of the SERENA platform and tools.
The SERENA project considers the support of data analytics functionalities for acquiring a certain portion of sensor data to feed the machine learning algorithms responsible for predictive maintenance.
The SERENA project includes the development of a plug-and-play device for machine data acquisition.
01-10-2017
-31-03-2021
01-10-2017
-31-03-2021
The Z-BRE4K solution uses a variety of communication protocols: HTTP, OPC UA, IEEE 802.15.4e and IEC WirelessHART. The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, collaborative, hypermedia information systems, and the foundation of data communication for the World Wide Web. OPC UA supports two protocols: the binary protocol (opc.tcp://Server) and the Web Service protocol (http://Server); otherwise OPC UA works completely transparently to the API. IEEE 802.15.4 is a technical standard which defines the operation of low-rate wireless personal area networks (LR-WPANs). It specifies the physical layer and media access control for LR-WPANs, and is maintained by the IEEE 802.15 working group, which defined the standard in 2003. WirelessHART is a wireless sensor networking technology based on the Highway Addressable Remote Transducer Protocol (HART). Developed as a multi-vendor, interoperable wireless standard, WirelessHART was defined for the requirements of process field device networks. The solution also uses the NGSI protocol, which was developed to manage context information. It provides operations for managing context information about context entities, for example the lifetime and quality of information, and for access (query, subscribe/notify) to the available context information about context entities.
The Z-BRE4K solution provides a big data analytics framework for the identification of deterioration trends, to be extended towards prescriptive maintenance. Advanced data analysis tools are under development, to be applied to the quality and production data to realise zero-defect and zero-breakdown production. Furthermore, it involves models for anomaly detection that are capable of identifying the machine states where the operation deviated from the norm. This is achieved by collecting the data from the machine sensors in chunks of time and processing them in batch through deep learning models. The models try to recreate their inputs, and this results in an observable measure called the Reconstruction Error, which is used to identify states that the models are not capable of addressing sufficiently (which constitutes an anomaly).
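The reconstruction-error idea can be shown without a deep learning stack. In the sketch below, a trivial stand-in "model" reconstructs each window of readings as its own mean; windows the model cannot recreate well score a high error and are flagged. The data, the stand-in model and the threshold are all illustrative assumptions, not the project's trained autoencoders.

```python
def reconstruction_error(window, model):
    """Mean squared error between a window of sensor readings and the
    model's reconstruction of it."""
    recon = model(window)
    return sum((a - b) ** 2 for a, b in zip(window, recon)) / len(window)

def mean_model(window):
    """Toy stand-in for an autoencoder: 'reconstruct' the window as a
    constant equal to its mean. A real model would be trained to
    reproduce normal operating patterns."""
    m = sum(window) / len(window)
    return [m] * len(window)

threshold = 0.5                      # illustrative, would be tuned on normal data
normal = [1.0, 1.1, 0.9, 1.0]        # flat signal: reconstructed well
anomaly = [1.0, 1.1, 0.9, 5.0]       # spike: reconstructed poorly
for window in (normal, anomaly):
    err = reconstruction_error(window, mean_model)
    print(err, "ANOMALY" if err > threshold else "ok")
```

The key property carries over to the deep learning case: the error is large exactly for states the model was never able to learn, which is what makes it usable as an anomaly score.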
The Z-BRE4K solution will provide an ontology with annotation mechanisms that include all the necessary information to perform predictive maintenance analysis to achieve extended operating life of assets in production facilities. This context includes the sensorial data processing to be used as simulation inputs and the simulation process itself (physics-based modelling). It also includes the machine learning application, due to the usage of prediction models in data-driven modelling. A Knowledge-Based System (KBS) will extract, store and retrieve all the relevant information, enriched with semantic annotations, to guarantee prompt identification of criticalities. Shop floor data is transformed into RDF (Resource Description Framework) data, a standard model for data interchange on the web, and stored in a triple store DB. The M3 Gage platform also serves for fast verification of the machine, for condition monitoring, and as a data repository. It allows information interconnection from different data sources; furthermore, the architecture proposed by AUTOWARE provides the ability to establish data processing and computing at the most appropriate abstraction level in the hierarchy: Field, Workcell/Production Line, Factory and Enterprise. Different filtering and pre-processing algorithms are applied at the edge to clean real-time raw data and reduce unwanted noise. In addition, convolutional neural networks are used to process high-throughput video streams, providing a non-time-critical stream of features for cloud services.
The suggestion beyond the state of the art is to have intelligent machine simulators, so that an information- and knowledge-rich platform can provide an accurate account of the machine's current state and provide predictive (look-ahead) potential scenarios of the time, type, severity and risks of future breakdowns. Collected, processed, integrated and aggregated data will be structured and fed in real time into networked simulators, enabling advanced analysis and visualization to provide smart services and higher fidelity and prediction accuracy for production and manufacturing asset management. Different schemes for data collection configuration are implemented (ranging from dedicated IoT devices to independent methodologies) to collect raw data from sensors, pre-process and aggregate the information, and share the results with other services through an IDS connector. The Z-BRE4K IDS connectors follow a reference architecture to ensure data sovereignty and integrity throughout this collection phase.
Within Z-BRE4K, semantic data modelling is used for interoperability. Semantic representation is used for machinery, critical components, failure modes as well as optimal conditions. Statistical methods and machine learning algorithms are used in offline mode to discover patterns in the data and associate them with specific events (pattern discovery, association rules), as well as to infer causality in cases such as quality control (quality estimation based on machine status).
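The association-rule side of this can be illustrated with its two basic measures, support and confidence, computed over a set of event logs. The sketch below is generic (the shop-floor events and the rule are invented examples, not project data).

```python
def rule_stats(transactions, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent.

    support    = fraction of all transactions containing both sides
    confidence = fraction of transactions with the antecedent that
                 also contain the consequent
    """
    n = len(transactions)
    has_a = [t for t in transactions if antecedent <= t]
    has_both = [t for t in has_a if consequent <= t]
    support = len(has_both) / n
    confidence = len(has_both) / len(has_a) if has_a else 0.0
    return support, confidence

# Illustrative event sets, one per production run.
runs = [
    {"high_vibration", "tool_wear", "defect"},
    {"high_vibration", "defect"},
    {"tool_wear"},
    {"high_vibration", "tool_wear", "defect"},
    {"normal"},
]
support, confidence = rule_stats(runs, {"high_vibration"}, {"defect"})
print(support, confidence)
```

A rule with high confidence but low support may still be worth surfacing to maintenance engineers, which is why both measures are reported rather than a single score.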
Within Z-BRE4K, a novel software application will be developed and added to I-LiKe Machine’s tech stack: A Knowledge Base System (KBS) to extract, store, and retrieve all the relevant information enriched with semantic annotations to guarantee a prompt identification of criticalities in the process.
The KBS represents a step towards the implementation of novel and innovative solutions that are still not common practice in manufacturing. The data repository takes the form of a triplestore, which is designed to store facts as triples representing a subject-predicate-object relationship. On top of the repositories, a reasoning engine creates relationships and allows knowledge to be extracted for consumption by other applications.
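The subject-predicate-object storage model can be sketched in a few lines: facts are stored as triples, and queries are patterns in which any position may be left open. This toy store is only an illustration of the data model; a real KBS would use an RDF triplestore with SPARQL, and the machine and component names below are invented.

```python
class TripleStore:
    """Minimal in-memory triplestore: stores (subject, predicate,
    object) facts and answers simple patterns where None matches
    anything in that position."""
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        return [
            t for t in self.triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)
        ]

store = TripleStore()
store.add("press-01", "hasComponent", "bearing-3")
store.add("bearing-3", "hasFailureMode", "wear")
store.add("bearing-3", "hasCondition", "degraded")
# "Which failure modes does bearing-3 have?"
print(store.query(s="bearing-3", p="hasFailureMode"))
```

The reasoning engine mentioned above sits on top of exactly this kind of pattern matching, deriving new triples (e.g. that press-01 has a degraded component) from the stored ones.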
In the framework of Z-BRE4K, an IoT approach is applied to integrate end-user machines into the Z-BRE4K platform. Through IoT gateways deployed on the shop floor, machine components are enabled to communicate their conditions, sending sensor data to the cloud, where it is stored and analysed to provide predictive-maintenance-related information.
The UI’s goal is to visualise data for maintainers and component statuses (real-time data and relevant KPIs). The SPARQL web service is used to send custom SPARQL queries against the Semantic Framework RDF repository as a general-purpose querying web service. The UI can visualise the probability of breakdown and the RUL (remaining useful life). CAD and CAE models would be useful for mapping these values onto a 3D visualisation.
Z-BRE4K provides a Semantic Framework as a RESTful web services API. Each request returns an HTTP status and, optionally, a document of result sets. Each results document can be serialised and may be expressed as RDF, pure XML or JSON. The operator input to the machine and threshold changes can be exposed through a UI, and these parameters can be monitored directly in the machine simulators. Furthermore, the dashboard application (M3 modules) will alert the shop floor operator about detected quality issues and suggest recommendations for production adjustment and maintenance of the machine.
AUTOWARE app development will be supported by and linked to FIWARE. The AUTOWARE approach is to connect and extend the FIWARE for Industry resources and assets for the digital automation community, so that synergies emerge both in terms of multi-sided business opportunities and in the amount of resources made available to the cognitive automation community to build their autonomous solutions and apps.
FIWARE is a curated open source framework with components that can be assembled together with other third-party ones to accelerate the development of Smart Solutions. The Orion Context Broker is the core component of any “Powered by FIWARE” platform. It enables the system to perform updates on, and access, the current state of context.
The choice of the AUTOWARE platform was based on the fact that it is an open source project and that it is hardware-agnostic.
The AUTOWARE platform will push forward and establish an open CPPS ecosystem. In the AUTOWARE Framework, a collection of enablers has been defined as components/tools that will enable potential users of the tools, be they end users, system integrators or software developers, to easily apply the developed technical enablers in their daily work. Moreover, verification, validation and certification enablers will be introduced in the AUTOWARE platform.
01-10-2017
-30-09-2021
01-10-2018
-31-03-2022
Sensor data fusion from multiple and heterogeneous sources is at the core of the development of the RS4 Controller (the CORE component of the Rossini Modular KIT)
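A standard way to fuse readings of the same quantity from heterogeneous sensors is inverse-variance weighting: more precise sensors contribute more to the fused estimate. The sketch below illustrates the principle only; the sensor values and variances are invented, and the RS4 Controller's actual fusion pipeline is not described in this document.

```python
def fuse(readings):
    """Fuse (value, variance) readings of the same quantity from
    heterogeneous sensors by inverse-variance weighting."""
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    return sum(w * value for w, (value, _) in zip(weights, readings)) / total

# Hypothetical distance readings from three sensors of differing precision.
readings = [(1.02, 0.01), (0.98, 0.04), (1.10, 0.25)]
print(round(fuse(readings), 3))
```

The fused value lands close to the most precise sensor's reading, while the noisy third sensor barely shifts it; this robustness to unequal sensor quality is the point of weighting by precision.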
The human-machine interface is key to evaluating job quality when considering human factors analysis, and is therefore also very relevant in the HR cell design phase
Based on ROS, the Rossini Modular KIT aims to be fully scalable and widely adopted
01-10-2018
-31-03-2023
01-05-2019
-31-07-2022
01-12-2019
-30-11-2022
01-11-2018
-30-04-2022
01-01-2017
-30-06-2020
01-11-2015
-31-10-2017
Cyber-physical production systems and digital twins require data collection from the real system
Interaction with mobile, wireless smart devices
01-10-2017
-31-03-2021
The system blueprint is the FAR-EDGE RA. After having defined the requirements and constraints for each block of the RA, a thorough analysis of the SotA was done, which led to the identification of some existing software components meeting the specs. We then identified the gaps that the project will need to fill in: not surprisingly, these were all the key enabling technologies, like the distributed ledger. However, hardly anything is going to be built totally from scratch in FAR-EDGE. The distributed ledger, for example, will be a customisation of a generic, open source Blockchain platform (Hyperledger Fabric).
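The core property such a ledger provides can be shown with a toy hash chain: each block embeds the previous block's hash, so tampering with any historical entry breaks verification of everything after it. This is only a sketch of the underlying idea, not Hyperledger Fabric (which adds peers, ordering, channels and chaincode on top); the event payloads are invented.

```python
import hashlib
import json

class MiniLedger:
    """Toy append-only hash-chained ledger: each block commits to the
    previous block's hash, so any tampering with history is detectable."""
    GENESIS = "0" * 64

    def __init__(self):
        self.blocks = []

    def append(self, payload: dict) -> None:
        prev = self.blocks[-1]["hash"] if self.blocks else self.GENESIS
        body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
        self.blocks.append({
            "prev": prev,
            "payload": payload,
            "hash": hashlib.sha256(body.encode()).hexdigest(),
        })

    def verify(self) -> bool:
        """Re-derive every hash from the chain's contents."""
        prev = self.GENESIS
        for b in self.blocks:
            body = json.dumps({"prev": prev, "payload": b["payload"]},
                              sort_keys=True)
            if b["prev"] != prev or b["hash"] != hashlib.sha256(body.encode()).hexdigest():
                return False
            prev = b["hash"]
        return True

ledger = MiniLedger()
ledger.append({"event": "calibration", "machine": "press-01"})
ledger.append({"event": "maintenance", "machine": "press-01"})
print(ledger.verify())                          # chain is intact
ledger.blocks[0]["payload"]["event"] = "forged" # tamper with history
print(ledger.verify())                          # verification now fails
```

In a replicated setting, every participant can run this verification independently, which is what makes a shared data context across Edge Gateways trustworthy without a central authority.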