Projects overview
vf-OS | Virtual Factory Open Operating System
01-10-2016
-31-10-2019
MANUWORK | Balancing Human and Automation Levels for the Manufacturing Workplaces of the Future
01-10-2016
-31-03-2020
ModuLase | Development and Pilot Line Validation of a Modular re-configurable Laser Process Head
01-09-2016
-31-05-2021
THOMAS | Mobile dual arm robotic workers with embedded cognition for hybrid and dynamically reconfigurable manufacturing systems
01-10-2016
-31-03-2021
NIMBLE | Collaboration Network for Industry, Manufacturing, Business and Logistics in Europe
01-10-2016
-31-03-2020
Applied Technologies:
Spring Boot: Spring Boot is a framework for building web applications. It is built on top of the Spring Framework and follows a convention-over-configuration principle. The major set of microservices is built using Spring Boot as the application framework.
Spring Cloud: Functionalities for building and integrating microservices are provided by Spring Cloud. It mainly aggregates components of the Netflix Open Source Software (Netflix OSS) project and makes them easy to integrate with Spring Boot applications. Components of the underlying microservice infrastructure make heavy use of Spring Cloud modules (e.g. Service Discovery, Configuration Server and Gateway Proxy).
Spring Cloud Security: Standardized security mechanisms are implemented using Spring Cloud Security. It provides out-of-the-box integration of security modules to Spring Cloud applications. Authentication and authorization between microservices are realized by using Spring Cloud Security, which supports OAuth2 and OpenID Connect and communicates with the authentication server (i.e. Cloud Foundry UAA).
ELK Stack: Logs can be streamed to Logstash, which stores them persistently in Elasticsearch. Visualizations are done using Kibana, hence the name ELK stack. The ELK stack is used to aggregate the log output of distributed microservices so that analysis of the generated logs can be performed centrally.
Cloud Foundry UAA: The Cloud Foundry User Account and Authentication (UAA) service is a multi-tenant identity management service, available as a stand-alone OAuth2 server issuing tokens for clients. Cloud Foundry UAA acts as the identity and authentication server, issuing OpenID Connect tokens.
Camunda BPM: Camunda BPM is an open source platform for business process management. Camunda BPM is used for the definition and execution of business processes (e.g. supply chain process).
Apache Marmotta: Apache Marmotta is an open implementation of a linked data platform. Apache Marmotta will mainly be used to store catalogue data and perform product-search queries.
Apache Solr: Apache Solr is a free-text indexing tool providing advanced search and navigation capabilities on the indexed data. Apache Marmotta uses Apache Solr for its semantic search cores, composed of the semantic features of indexed items.
Docker: Docker is an open-source solution for application deployment, consisting of prebuilt images running inside a container. Docker will be used for intermediate development releases and on-premises deployment.
PostgreSQL: PostgreSQL is an open-source database system for object-relational data. PostgreSQL will mainly be used as a database technology, in order to have a homogeneous setup.
Apache Kafka: An open-source messaging infrastructure, mainly used for communication among platform components and entities.
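The messaging pattern that Kafka provides to the platform components can be illustrated with a toy in-memory publish/subscribe bus. This is a minimal sketch of the topic-based pattern only, not the Kafka API; the topic name and event payload are hypothetical examples.

```python
from collections import defaultdict, deque

class MiniBus:
    """Toy in-memory stand-in for a Kafka-style topic bus (illustration only)."""
    def __init__(self):
        self._topics = defaultdict(deque)       # topic -> retained message log
        self._subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        # register a consumer callback for a topic
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # append to the topic log (Kafka retains a commit log) and notify consumers
        self._topics[topic].append(message)
        for cb in self._subscribers[topic]:
            cb(message)

# hypothetical usage: one component publishes an event, another consumes it
bus = MiniBus()
received = []
bus.subscribe("process-events", received.append)
bus.publish("process-events", {"event": "order-created", "id": 42})
```

In the real platform, Kafka additionally provides durable, partitioned logs and consumer groups; this sketch only shows the decoupling of producers from consumers.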
Data management:
- Product life-cycle management
- Web objects for IoT data ingestion
SAFIRE | Cloud-based Situational Analysis for Factories providing Real-time Reconfiguration Services
01-10-2016
-30-09-2019
Decision support systems
OpenHybrid | Developing a novel hybrid AM approach which will offer unrivalled flexibility, part quality and productivity
01-10-2016
-30-09-2019
STREAM-0D | Simulation in Real Time for Manufacturing with Zero Defects
01-10-2016
-30-09-2020
Z-Fact0r | Zero-defect manufacturing strategies towards on-line production management for European factories
01-10-2016
-31-03-2020
Within Z-Fact0r, the proposed (higher-level) DSS, with the support of the knowledge base and the online inspection module (1st-level decision support at a single stage), produces, verifies and validates decisions aligned with the quality control policies, production targets, desired product specifications and maintenance management requirements. Key functional characteristics of the envisioned DSS include, among others, techniques for monitoring and predicting product quality, action prioritization, root cause analysis, and mitigation planning algorithms (at product and workstation level). Moving beyond existing solutions that focus only on specific aspects of the production procedure, or that are restricted to diagnosis, the proposed DSS incorporates autonomous, hierarchical decision support based on process analytical technologies and newly developed, suitably adjusted knowledge-based systems, and combines product monitoring models and data analytics from heterogeneous sources. The envisioned DSS takes into account a wide set of factors and criteria, such as data uncertainty, lack of information and information quality, involvement of multiple actors, and real-time response. Thanks to the 5 intertwined zero-defect strategies (i.e. Z-PREDICT, Z-PREVENT, Z-DETECT, Z-REPAIR and Z-MANAGE), the overall solution contributes significantly to improving the overall performance and reliability of the targeted multi-stage manufacturing systems and the production agility (response to continuous adjustments in production targets).
DATAPIXEL provides the information associated with defect detection in the selected manufacturing parts. This information is used as an input for developing the defect detection algorithms of the Z-Fact0r solution. Based on this input, a data conditioning methodology has been developed to extract information concerning the defect position and type. This information will be used as a baseline for model validation, via comparison with the respective simulation results.
The procedure that has been used is the following:
- Image-based feature extraction: Convolutional Neural Networks (CNNs) and Variational Auto-Encoders (VAEs) are used as feature extractors. CNNs define the appropriate features used for the classification between healthy and defective parts, while VAEs learn latent representations that can also be used for image generation.
- Feature selection: An efficient filter feature selection (FS) method was developed for selecting informative and non-redundant feature subsets. In addition to enhanced accuracy rates and dimensionality reduction, the method has reasonably low computational demands. A robust and computationally efficient evaluation criterion was defined, allowing us to assess the redundancy between features. The proposed FS technique operates on a forward-selection basis, handling simultaneously both the discriminative power and the complementary characteristics of the extracted features. To decide on the number of retained features, a termination condition was introduced, avoiding the trial-and-error procedure usually employed by common FS techniques in the literature.
- Classification: The selected features were fed into a virtual classification module, whose role is to provide a decision on the workpiece condition. The technologies used were Artificial Neural Networks (ANNs) and deep learning algorithms.
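The forward-selection idea above (add the feature that maximises discrimination while penalising redundancy with already-selected features) can be sketched as follows. This is a generic correlation-based illustration under assumed scoring; the actual Z-Fact0r criterion is project-specific.

```python
import numpy as np

def forward_select(X, y, k):
    """Greedy filter feature selection: maximise relevance to y, penalise
    redundancy with already-selected features (sketch of the idea only)."""
    n_features = X.shape[1]
    # relevance of each feature = |correlation with the target|
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            # redundancy = mean |correlation| with features already chosen
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected
```

A termination condition (as described above) would replace the fixed `k` by stopping once the best remaining score drops below a threshold.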
Nowadays, within Industry 4.0, ICT and CPS are implemented and merged as parts of the industrial processes. For data collection, sensors are used, embedded within AI-driven components, in order to enable smooth communication between humans and machines. Thus, Z-Fact0r is a pioneer with several advances in predictive maintenance, IoT sensors on shop floors and seamless communication between the various components of the system, creating effective and efficient applications for Industry 4.0.
Once a “repairable” defect is detected (Z-DETECT), a proper and customized repairing action must be deployed with minimum time and effort, assuring the best productivity and production flow. In fact, a major challenge for effective ZD manufacturing is related to the capability to automatically repair occurring defects without perturbing the overall production flow.
Z-Fact0r is developing a model-based, supervisory control solution that will be able to interpret the interstage quality control measurements together with the monitoring of the process itself, in order to identify the defect sources and generate a proper and customized repairing action. Additive manufacturing in the form of inkjet or paste printing of various materials (metal, ceramic, polymer resins) can successfully be used to fill a missing spot or correct a damaged part. Upon detection of the defective area, the printing head will deliver the patch material in solution or paste form. In the case of inkjet printing, defects as small as 20 μm can be patched. Post-printing treatment of the delivered material includes solvent evaporation (e.g. in the case of polymer patches), UV curing (e.g. in the case of epoxy resins) and low-temperature laser sintering in the case of metal or ceramic nanoparticles, thermally curable resins or pastes where a local reflow process is required.
To facilitate the implementation of the five strategies, Z-Fact0r supports a “reverse supply-chain” policy in the context of a multi-stage supply chain attached to a multi-stage production. As a result, the defective products/parts detected in downstream stages (produced during a stage, or provided by suppliers at a particular stage) can be returned to upstream stages for remanufacturing or recycling.
Additive manufacturing (AM) is a widely used set of techniques for building objects by adding material layer upon layer. While the materials typically used are plastic, metal or concrete, AM technologies are nowadays expanding to include all kinds of materials, such as ceramics, nanocomposites, glass, and others.
In Z-Fact0r, we exploited AM-based technologies as a tool for repairing components in a production line. Thanks to its ability for local deposition, i.e. precise placement of material at a desired position, AM was the optimum choice to correct or repair a defect. Moreover, AM is combined with subtractive manufacturing techniques for effective repair: in the case of a defect, material can first be removed by means of laser ablation or classical machining, cleaning or preparing the surface of the problematic area. Then, AM is used to fill the defect with the desired material. A final step of sintering or other processing finalizes the repairing action.
Z-DETECT is the first strategy of the Z-Fact0r solution: the detection strategy consists of detecting any machining process anomaly or instability through process monitoring by means of controlled variables called critical process variables (CPVs). In particular, this strategy is invoked when a defect is generated after the adaptation of the parameters. In such a scenario, an alarm is triggered to flag the parameters that resulted in a defect. By mapping the true root causes, the system will be able to avoid generating more defects by re-weighting the system model.
Apart from the inspection of the product on which the defect is observed, the strategy involves further actions and processes to deal both with the generation of the detected defect and with its propagation to subsequent stages.
The Z-PREDICT strategy is triggered when a defect is recognised during the Z-DETECT stage. The events detected at the physical layer of the system are engineered into high-value data that stipulate new and more accurate process models. Such unbiased monitoring and analysis of system behaviour provides the basis for enriching the existing knowledge of the system (experience), learning new patterns, raising attention towards behaviours that cause operational and functional discrepancies (e.g. alarms) and capturing the general trends on the shop floor.
The more the data pool increases, the more precise (repeatable) and accurate the predictions will be. The estimations of future states involve the whole production line, e.g. machine status after x operations and/or product quality for a given set of parameters.
The system will predict with high confidence the expected quality and customer satisfaction, allowing modifications to the parameters before the products are produced. In addition, Z-Fact0r can operate in reverse mode, i.e. insert a customer satisfaction goal and control the parameters accordingly to achieve this target.
The ability of Z-Fact0r to optimise the manufacturing processes according to certain/target quality levels and/or customer satisfaction is the key innovation to fulfil the industrial requirements.
The overall supervision and optimisation of the system is achieved through the execution of the Z-MANAGE strategy. The defects are processed with decision support system (DSS) tools and interfaced with Manufacturing Execution Systems (MES). False positives and false negatives are clustered after each Z-Fact0r strategy, which results in good filtering of these false alarms. To achieve this, previously acquired knowledge and incidents are also processed to fine-tune the system’s operation.
Additionally, the production is optimised by better scheduling, taking into account the environmental impact of each process. The optimised scheduling and adaptability of the manufacturing improve overall flexibility, placing a premium on production rates and satisfying demand while preserving increased machinery availability. Since the knowledge management system will tune the whole production according to certain quality levels and customer satisfaction, it is highly anticipated that the overall performance of the system will satisfy the increased needs of the customers.
The Z-MANAGE strategy also involves a knowledge-based decision support system, which collects knowledge from all the components and the operators and is therefore able to suggest solutions for tuning the rest of the components.
The strategy also involves decision making in the event of a defect. The defect is analysed via the inspection system, through which it can be classified and categorised by severity. In the case of “repairable” defects, the system decides between: (i) rework on the spot, or (ii) removal from the production line for further inspection and rework. If the defect is classified as “non-repairable”, the system decides whether (a) the product is forwarded to upstream stages, or (b) considered a total failure, in which case it is recycled.
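The routing decision described above can be sketched as a small function. The function name, inputs and outcome labels are hypothetical illustrations of the decision tree, not the project's actual implementation.

```python
def route_defect(severity: str, repairable: bool, reworkable_on_spot: bool) -> str:
    """Hypothetical sketch of the Z-MANAGE defect-routing logic:
    repairable -> (i) rework on spot or (ii) remove for inspection/rework;
    non-repairable -> (a) forward upstream or (b) recycle on total failure."""
    if repairable:
        return "rework_on_spot" if reworkable_on_spot else "remove_for_inspection"
    # non-repairable: reverse supply chain returns the part upstream, unless it is a total failure
    return "recycle" if severity == "total_failure" else "forward_upstream"
```

In practice the inputs would come from the inspection system's defect classification rather than being passed in directly.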
ZAero | Zero-defect manufacturing of composite parts in the aerospace industry
01-10-2016
-30-09-2019
SCALABLE4.0 | Scalable automation for flexible production systems
01-01-2017
-30-06-2020
5G-Ensure | 5G Enablers for Network and System Security and Resilience
01-11-2015
-31-10-2017
Productive4.0 | Electronics and ICT as enabler for digital industry and optimized supply chain management covering the entire product lifecycle
01-05-2017
-31-10-2020
MANTIS | Cyber Physical System based Proactive Collaborative Maintenance
01-05-2015
-31-07-2018
One of the objectives of the MANTIS project is to design and develop the human-machine interface (HMI) for the intelligent optimisation of production processes through the monitoring and management of their components. The MANTIS HMI should allow intelligent, context-aware human-machine interaction by providing the right information, in the right modality and in the best way for users, when needed. To achieve this goal, the user interface should be highly personalised and adapted to each specific user or user role. Since MANTIS comprises eleven distinct use cases, the design of such an HMI presents a great challenge. Any unification of the HMI design may impose constraints that could result in an HMI with poor usability.
I-MECH | Intelligent Motion Control Platform for Smart Mechatronic Systems
01-06-2017
-31-05-2020
PreCoM | Predictive Cognitive Maintenance Decision Support System
01-11-2017
-28-02-2021
SERENA | VerSatilE plug-and-play platform enabling remote pREdictive mainteNAnce
01-10-2017
-31-03-2021
Digital models enhanced with real-world data acquired from sensor devices will be used as the basis for modelling the physical phenomena that affect the operational condition of the equipment, such as degradation. This will improve the accuracy of the predictive maintenance functionalities of the SERENA platform and tools.
The SERENA project considers the support of data analytics functionalities for acquiring a certain portion of sensor data to feed the machine learning algorithms responsible for predictive maintenance.
PROGRAMS | PROGnostics based Reliability Analysis for Maintenance Scheduling
01-10-2017
-31-03-2021
PROPHESY | Platform for rapid deployment of self-configuring and optimized predictive maintenance services
01-10-2017
-30-09-2020
L4MS | Logistics for Manufacturing SMEs
01-10-2017
-31-03-2021
MIDIH | Manufacturing Industry Digital Innovation Hubs
01-10-2017
-30-09-2020
I4MS-Go | I4MS Going to Market Alliance
01-09-2017
-29-02-2020
UPTIME | UNIFIED PREDICTIVE MAINTENANCE SYSTEM
01-09-2017
-28-02-2021
UPTIME will reframe the predictive maintenance strategy in a systematic and unified way, with the aim to fully exploit the advancements in ICT and maintenance management by examining the potential of big data in an e-maintenance infrastructure, taking into account Gartner’s four levels of data analytics maturity and the proactive computing principles.
UPTIME will enable manufacturing companies to reach Gartner's four levels of data analytics maturity for optimised decision making (each one building on the previous one: Monitor, Diagnose and Control, Manage, Optimize), and aims to optimise in-service efficiency and contribute to increased accident mitigation capability by avoiding crucial breakdowns with significant consequences. The UPTIME components UPTIME_DETECT, UPTIME_PREDICT and UPTIME_ANALYZE aim to enhance the methodology framework for data processing and analytics. The key users of the UPTIME_DETECT and UPTIME_PREDICT components are data scientists, who are in charge of developing, testing and deploying algorithmic calculations on data streams. In this way, the component is able to identify the current condition of technical equipment and to give predictions. UPTIME_ANALYZE, on the other hand, is a data analytics engine driven by the need to leverage manufacturers’ legacy data and operational data related to maintenance, and to extract and correlate relevant knowledge.
In UPTIME, two data processing solutions are considered: (1) batch processing of data at rest, through massively parallel processing; and (2) real-time processing of data in motion, where data from heterogeneous sources are processed as a continuous "stream" of events (produced by some outside system or systems), and processing occurs fast enough that all decisions are made without stopping the data stream and storing the information first.
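The difference between the two modes can be illustrated with a simple statistic computed both ways: once over a complete dataset at rest, and once incrementally per event so a decision-ready value is available without storing the stream first. This is a generic sketch, not UPTIME's actual processing engine.

```python
from statistics import mean

def batch_mean(readings):
    """Batch mode: the whole dataset is at rest before processing starts."""
    return mean(readings)

class StreamingMean:
    """Stream mode: each incoming event updates the running result immediately,
    without storing the full stream first (illustrative sketch only)."""
    def __init__(self):
        self.count, self.total = 0, 0.0

    def on_event(self, value):
        self.count += 1
        self.total += value
        return self.total / self.count  # decision-ready result after every event
```

Real stream processors add windowing, out-of-order handling and fault tolerance on top of this incremental-update idea.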
UPTIME main functionalities are structured in three main modules, namely: edge, cloud and GUI modules.
- The UPTIME edge module will ensure data collection from machines, sensors, etc. and sent it on for analysis. It may also include some additional functionalities which require real-time results.
- The UPTIME Cloud module contains all the advanced functionalities of the solution, which do not require a real time result. There we will analyse the data collected on the edge, as well as data received from relevant information systems, and provide the expected predictions. “Cloud” can refer to remote servers or an internal cloud within the customer’s Plant or Enterprise, as is deemed necessary by the customer.
- Lastly the GUI module, through which the user will interact with the previously mentioned functionalities, whether it is to view data or configure the solution.
The 4 main components in the cloud-based infrastructure of the UPTIME platform are:
- The UPTIME_DETECT and UPTIME_PREDICT component (an extended version of the preInO prototype) mainly processes time-series-based data from the field to give further context to the data, e.g. to detect the current condition of technical equipment and to predict probable future conditions.
- UPTIME_ANALYZE (a newly developed prototype) is a data analytics engine driven by the need to leverage manufacturers’ legacy data and operational data related to maintenance, and to extract and correlate relevant knowledge.
- The UPTIME_DECIDE component (an extended version of the PANDDA prototype) implements a prescriptive analytics approach for proactive decision making in a stream processing computational environment. It provides real-time prescriptions for the optimal maintenance actions and the optimal time for their implementation on the basis of streams of predictions about future failures.
- UPTIME_FMECA (an extended version of the DRIFT prototype) provides estimation of possible failure modes and the evolution of risk criticalities through its data-driven FMECA approach.
The UPTIME_SENSE component (USG prototype) is located in the edge-based infrastructure of the UPTIME platform. It aims to capture data from a high variety of sources and cloud environments. SENSE brings configurable diagnosis capabilities to the edge, e.g. for real-time or off-the-grid applications. SENSE addresses 3 high-level functions:
- Sensor signal processing, which collects the signals from equipment or other sensors, and pre-processes them before transmitting them on the cloud platform.
- Edge diagnosis, providing optional state-detection diagnosis for certain use cases.
- Support functions, covering the functions necessary for the correct operation of the edge module.
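The first of these functions, pre-processing sensor signals at the edge before transmission, can be sketched with a simple sliding-window smoother. The class name and window size are illustrative assumptions, not part of the USG prototype.

```python
from collections import deque

class EdgePreprocessor:
    """Sketch of the kind of signal pre-processing an edge gateway might apply
    before transmitting to the cloud platform (moving average, assumed window)."""
    def __init__(self, window=4):
        self.buffer = deque(maxlen=window)   # keeps only the last `window` samples

    def process(self, sample):
        self.buffer.append(sample)
        # smoothed value is what would be transmitted upstream
        return sum(self.buffer) / len(self.buffer)
```

Reducing noise and data volume at the edge like this is what makes the optional on-edge diagnosis feasible for real-time or off-the-grid use cases.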
The "Persistence" layer in the UPTIME conceptual architecture includes a Database Abstraction Layer (DAL) and houses a relational database engine as well as a NoSQL database, where all information needed by the "User Interaction" and "Real-Time Processing and Batch Processing" layers (refer to the UPTIME conceptual architecture) is stored and retrieved. For the raw sensor data itself, this data storage concept is enhanced by a database for time-series data to ensure efficient and reliable storage, while visualization functionalities will use an indexing database to facilitate the exposure of analytics. The UPTIME solution aims to provide data harmonization in terms of manipulating streaming data coming from the sensors. Based upon these needs, a time-series database is required, and in the context of UPTIME three instances of InfluxDB (one instance per business case) are installed. Along with the InfluxDB instances, a common MySQL database that handles the operations of the UPTIME system is created.
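Sensor readings destined for InfluxDB are written in its line protocol (measurement, tag set, field set, timestamp). The helper below builds such a line; the measurement, tag and field names are hypothetical examples, not the UPTIME schema, and real writes would also need escaping of special characters.

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Build an InfluxDB line-protocol string for one sensor reading.
    Format: <measurement>,<tags> <fields> <timestamp>."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

# hypothetical reading: vibration RMS from one machine
line = to_line_protocol("vibration", {"machine": "press_1"}, {"rms": 0.42},
                        1609459200000000000)
```

A client library would batch such lines and POST them to the InfluxDB write endpoint.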
UPTIME_ANALYZE is a data analytics engine driven by the need to leverage manufacturers’ legacy data and operational data related to maintenance, as well as to extract and correlate relevant knowledge. The data mining and analytics of the ANALYZE component practically deliver its intelligence by defining, training, executing and experimenting with different machine learning algorithms.
The UPTIME_VISUALIZE component (an extended version of the SeaBAR prototype) is responsible for intuitive and uninterrupted human-machine interaction. The user interfaces, including the analytics dashboards and the notification engine, will be customised or further developed in full accordance with the end-user business case. Taking as an example the UPTIME White Goods business case, a complex automatic production line producing drums for dryers, the generation of early warnings suggesting autonomous activities to factory workers should be communicated through mobile devices or on-board devices.
The data visualisation in UPTIME is performed by the UPTIME_VISUALIZE (SeaBAR prototype) component:
- By establishing standardised connectors to the key data/information/knowledge sources for maintenance, production planning, logistics and quality management.
- By faceting, filtering and semantically structuring the collected data according to maintenance viewpoints.
- By providing context-sensitive, interactive visualizations that allow the end user to easily search and navigate through huge amounts of heterogeneous information, enabling maximally flexible analysis of all relevant information, e.g. drilling down into the data by region, timeframe and machine, or generalising a specific critical situation to find similar (past) situations with appropriate measures and individual user experience (e.g. best practices).
- By offering a customizable dashboard focusing on the specific information needs of e.g. maintenance engineers, quality managers and upper management, through different context-specific visualization and analytics tools.
- By continuously visualizing critical data and utilizing captured experience from past situations to provide useful insights in interaction with manufacturing companies’ systems.
- By turning from generic statistical data towards instance-specific data, enabling instance-specific diagnosis and prognosis.
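The faceting and filtering step above can be sketched as a simple predicate over maintenance records. The field names and record layout are illustrative assumptions, not the SeaBAR data model.

```python
def facet_filter(records, **criteria):
    """Minimal sketch of faceted filtering: keep records matching every
    requested facet value (field names here are hypothetical)."""
    return [r for r in records
            if all(r.get(key) == value for key, value in criteria.items())]

# hypothetical maintenance records drilled down by region and status
records = [
    {"machine": "press_1", "region": "plant_A", "status": "alarm"},
    {"machine": "press_2", "region": "plant_B", "status": "ok"},
    {"machine": "press_3", "region": "plant_A", "status": "ok"},
]
alarms_in_a = facet_filter(records, region="plant_A", status="alarm")
```

A production search stack would index these facets rather than scan, but the drill-down semantics are the same.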
The UPTIME Platform is developed according to a unified predictive maintenance framework and an associated unified information system, to enable the implementation of predictive maintenance strategies in manufacturing industries. The UPTIME predictive maintenance system will extend and unify new digital e-maintenance services and tools and will incorporate information from heterogeneous data sources, e.g. sensors, to estimate process performance more accurately. The UPTIME predictive maintenance platform is developed mainly on the basis of five baseline e-maintenance services and tools:
- USG (Universal Sensor Gateway): USG serves as a modular data acquisition and integration device for the Product Lifecycle Management (PLM) ecosystem of a product.
- preInO: The preInO Processing Engine is able to detect and predict the state of a whole system or of its components, with respect to mechanical systems such as wind turbines, special-purpose vehicles, production machinery, etc.
- PANDDA (ProActive seNsing enterprise Decision configurator DAshboard): PANDDA is a software service that implements (i) proactive decision methods to provide recommendations about mitigating (perfect or imperfect) maintenance actions and the time for their implementation on the basis of real-time prognostic information; and (ii) a Sensor-Enabled Feedback (SEF) mechanism for continuous improvement of the generated recommendations.
- SeaBAR (Search Based Application Repository): SeaBAR is a modular software platform built on Big Data and Enterprise Search technology. The SeaBAR platform supports end users by means of data aggregation, data analysis and visualization.
- DRIFT (Data-Driven Failure Mode, Effects, and Criticality Analysis (FMECA) Tool): DRIFT is a tool that, on the basis of the information gathered in other modules (maintenance, production, logistics and quality data), identifies the failure modes, effects and criticalities of the components and the system, according to the available literature and novel correlation algorithms relating failure modes, effects and critical impacts.
The UPTIME platform will leverage crucial, and often hidden, data from machines and systems in real time and substantiate the benefits of advanced predictive maintenance analytics in boosting asset availability and service levels in manufacturing operations. Moreover, UPTIME delivers new ways for effective operational risk management in terms of preventing unexpected failures of a manufacturer’s assets and effectively planning maintenance actions, thereby transforming the mentality of manufacturing industries with respect to maintenance services. With the help of the end-to-end UPTIME smart diagnosis-prognosis-decision making methods, manufacturers become leaner, more versatile and better prepared to act upon accidents and unexpected incidents in their everyday operations. The increased accident mitigation capabilities they acquire allow them not only to improve workplace safety and workers’ health, but also to reduce the incurred costs and become more competitive.
The UPTIME platform focuses on the use of condition monitoring techniques, e.g. event monitoring and data processing systems, that will enable manufacturing companies with installed sensors to fully exploit the availability of huge amounts of data and to handle real-time data in complex, dynamic environments, in order to get meaningful insights and to decide and act ahead of time, resolving problems before they appear, e.g. avoiding or mitigating the impact of a future failure in a proactive manner. Moreover, the unified framework proposed by UPTIME will not be limited to monitoring and diagnosis; it aims to cover the whole prognostic lifecycle, from signal processing and diagnostics to prognostics and maintenance decision making, along with their interactions with quality management, production planning and logistics decisions.
LASER4SURF | LASER FOR MASS PRODUCTION OF FUNCTIONALISED METALLIC SURFACES
01-10-2017
-31-03-2021
From a technological point of view, the Open vf-OS Platform will provide elements covering the connected world, allowing the exchange of information and collaboration between companies along a value stream, thanks to the cloud approach to be adopted (vf-Platform). The Open vf-OS covers everything from the control device level upwards, where information from the systems (IoT, CPS, embedded systems) is gathered, processed and empowered.