5G-Ensure | 5G Enablers for Network and System Security and Resilience
01-11-2015 – 31-10-2017
Interaction with mobile, wireless smart devices
Cyber-Physical Production Systems and digital twins require data collection from the real system
01-10-2017 – 31-03-2021
01-05-2015 – 31-07-2018
One of the objectives of the MANTIS project is to design and develop the human-machine interface (HMI) that deals with the intelligent optimisation of production processes through the monitoring and management of their components. The MANTIS HMI should allow intelligent, context-aware human-machine interaction by providing the right information, in the right modality and in the best way for users, when needed. To achieve this goal, the user interface should be highly personalised and adapted to each specific user or user role. Since MANTIS comprises eleven distinct use cases, the design of such an HMI presents a great challenge. Any unification of the HMI design may impose constraints that could result in an HMI with poor usability.
01-06-2017 – 31-05-2020
01-10-2017 – 31-03-2021
The Z-BRE4K solution uses a variety of communication protocols: HTTP, OPC UA, IEEE 802.15.4e and IEC WirelessHART. The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, collaborative, hypermedia information systems and the foundation of data communication for the World Wide Web. OPC UA supports two transport protocols: the binary protocol (opc.tcp://Server) and Web Services (http://Server); otherwise, OPC UA works completely transparently to the API. IEEE 802.15.4 is a technical standard that defines the operation of low-rate wireless personal area networks (LR-WPANs); it specifies the physical layer and media access control for LR-WPANs and is maintained by the IEEE 802.15 working group, which defined the standard in 2003. WirelessHART is a wireless sensor networking technology based on the Highway Addressable Remote Transducer Protocol (HART); developed as a multi-vendor, interoperable wireless standard, it was defined for the requirements of process field device networks. The solution also uses the NGSI protocol. NGSI is a protocol developed to manage context information: it provides operations for managing the context information about context entities, for example the lifetime and quality of information, and access (query, subscribe/notify) to the available context information about context entities.
The Z-BRE4K solution provides a big data analytics framework for the identification of deterioration trends, to be extended towards prescriptive maintenance. Advanced data analysis tools are under development, to be applied to quality and production data to realise zero-defect and zero-breakdown production. Furthermore, it involves models for anomaly detection that are capable of identifying machine states where the operation deviated from the norm. This is achieved by collecting the data from the machine sensors in chunks of time and processing them in batches through deep learning models. The models attempt to recreate their inputs, and this results in an observable measure called the reconstruction error, which is used to identify states that the models are not capable of reproducing sufficiently well (which constitutes an anomaly).
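The reconstruction-error mechanism can be sketched with a linear "autoencoder" (PCA via SVD) standing in for the project's deep learning models; the data, names and threshold below are illustrative, not the Z-BRE4K implementation:

```python
import numpy as np

def fit_linear_autoencoder(X_normal, n_components=2):
    """Fit a PCA-style linear 'autoencoder' on data from normal operation."""
    mean = X_normal.mean(axis=0)
    # Principal directions come from the SVD of the centred data.
    _, _, Vt = np.linalg.svd(X_normal - mean, full_matrices=False)
    W = Vt[:n_components]              # shared encoder/decoder weights
    return mean, W

def reconstruction_error(X, mean, W):
    """Per-sample squared error between each input and its reconstruction."""
    Z = (X - mean) @ W.T               # encode into the latent space
    X_hat = Z @ W + mean               # decode back to sensor space
    return ((X - X_hat) ** 2).sum(axis=1)

rng = np.random.default_rng(0)
# Simulated 'normal' machine states living near a 2-D subspace of 5 sensors.
latent = rng.normal(size=(200, 2))
basis = rng.normal(size=(2, 5))
X_train = latent @ basis + 0.01 * rng.normal(size=(200, 5))

mean, W = fit_linear_autoencoder(X_train)
threshold = reconstruction_error(X_train, mean, W).max() * 1.5

# A machine state far from anything seen during normal operation.
anomaly = np.full((1, 5), 10.0)
err_anomaly = reconstruction_error(anomaly, mean, W)[0]
is_anomaly = err_anomaly > threshold
```

States the model cannot reconstruct (large error relative to the threshold learned on normal data) are flagged as anomalies, which is the same principle the deep models apply at scale.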
The Z-BRE4K solution will provide an ontology with annotation mechanisms that include all the necessary information to perform predictive maintenance analysis, in order to achieve an extended operating life of assets in production facilities. This context includes the sensorial data processing to be used as simulation inputs and the simulation process itself (physics-based modelling). It also includes the machine learning application, due to the use of prediction models in data-driven modelling. A Knowledge-Based System (KBS) will extract, store and retrieve all the relevant information, enriched with semantic annotations, to guarantee prompt identification of criticalities. Shop-floor data is transformed into RDF (Resource Description Framework) data, a standard model for data interchange on the web, and stored in a triple-store database. The M3 Gage platform also serves for fast verification of the machine, condition monitoring, and as a data repository. It allows information from different data sources to be interconnected; furthermore, the architecture proposed by AUTOWARE provides the ability to establish data processing and computing at the most appropriate abstraction level in the hierarchy: Field, Workcell/Production Line, Factory and Enterprise. Different filtering and pre-processing algorithms are applied on the edge to clean real-time raw data and reduce unwanted noise. In addition, convolutional neural networks are used to process high-throughput video streams, providing a non-time-critical stream of features for cloud services.
The suggestion beyond the state of the art is to have intelligent machine simulators, so that an information- and knowledge-rich platform can provide an accurate account of the machine's current state and predictive (look-ahead) scenarios of the time, type, severity and risk of future breakdowns. Collected, processed, integrated and aggregated data will be structured and fed in real time into networked simulators, enabling advanced analysis and visualisation to provide smart services, higher fidelity and prediction accuracy for production and manufacturing asset management. Different schemes for data-collection configuration are implemented (including dedicated IoT devices with independent methodologies) to collect raw data from sensors, pre-process and aggregate the information, and share the results with other services through an IDS connector. The Z-BRE4K IDS connectors follow a reference architecture to ensure data sovereignty and integrity throughout this collection phase.
Within Z-BRE4K, semantic data modelling is used for interoperability. Semantic representation is used for machinery, critical components, failure modes and optimal conditions. Statistical methods and machine learning algorithms are used in offline mode to discover patterns in the data and associate them with specific events (pattern discovery, association rules), as well as to infer causal events in cases such as quality control (quality estimation based on machine status).
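The association-rule idea can be sketched with plain support/confidence computations over event logs; the event names and data below are illustrative, not from the project:

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent.

    transactions: list of sets of events observed together.
    """
    n = len(transactions)
    both = sum(1 for t in transactions if antecedent <= t and consequent <= t)
    ante = sum(1 for t in transactions if antecedent <= t)
    support = both / n                       # how often both occur together
    confidence = both / ante if ante else 0.0  # how often consequent follows
    return support, confidence

# Hypothetical event logs: each set is one production cycle's events.
logs = [
    {"high_vibration", "tool_wear", "defect"},
    {"high_vibration", "defect"},
    {"high_vibration", "tool_wear"},
    {"nominal"},
]
support, confidence = rule_metrics(logs, {"high_vibration"}, {"defect"})
# high_vibration co-occurs with a defect in 2 of 4 cycles (support 0.5),
# and in 2 of the 3 cycles where high_vibration appears (confidence 2/3).
```

Rules with high support and confidence are the "patterns associated with specific events" that the offline analysis surfaces.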
Within Z-BRE4K, a novel software application will be developed and added to I-LiKe Machine’s tech stack: A Knowledge Base System (KBS) to extract, store, and retrieve all the relevant information enriched with semantic annotations to guarantee a prompt identification of criticalities in the process.
The KBS represents a step towards the implementation of novel and innovative solutions that are not yet common practice in manufacturing. The data repository takes the form of a triplestore, which is designed to store entities derived from triple collections representing subject-predicate-object relationships. On top of the repositories, a reasoning engine creates relationships and allows knowledge to be extracted for consumption by other applications.
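The triplestore idea (subject-predicate-object storage plus pattern queries) can be sketched in a few lines; the entities and predicates below are illustrative, not from the Z-BRE4K ontology:

```python
class TripleStore:
    """Minimal subject-predicate-object store with wildcard queries."""

    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        """Return triples matching the pattern; None acts as a wildcard."""
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

store = TripleStore()
store.add("press_01", "hasComponent", "hydraulic_pump")
store.add("hydraulic_pump", "hasFailureMode", "seal_leak")
store.add("seal_leak", "indicatedBy", "pressure_drop")

# A trivial 'reasoning' step: follow two predicates to link a machine
# to the failure modes of its components.
components = [o for _, _, o in store.query("press_01", "hasComponent")]
modes = [o for c in components
         for _, _, o in store.query(c, "hasFailureMode")]
```

A production triplestore adds SPARQL querying and inference rules on top of exactly this data model.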
In the framework of Z-BRE4K, an IoT approach is applied to integrate end-user machines into the Z-BRE4K platform. Through IoT gateways deployed on the shop floor, machine components are enabled to communicate their condition, sending sensor data to the cloud, where it is stored and analysed to provide predictive-maintenance-related information.
The UI’s goal is to visualise data for maintainers and component statuses (real-time data and relevant KPIs). The SPARQL web service is used to send custom SPARQL queries to the Semantic Framework RDF repository as a general-purpose querying web service. The UI can visualise the probability of breakdown and the RUL. CAD and CAE models would be useful for mapping these values onto a 3D visualisation.
Z-BRE4K provides a Semantic Framework as a RESTful web-services API. Each request returns an HTTP status and, optionally, a document of result sets. Each results document can be serialised and may be expressed as RDF, pure XML or JSON. A UI can be built for operator input to the machine and for threshold changes; these parameters can be monitored directly in the machine simulators. Furthermore, the dashboard application (M3 modules) will alert the shop-floor operator about detected quality issues and suggest recommendations for production adjustment and maintenance of the machine.
The AUTOWARE apps development will be supported by and linked to FIWARE. The AUTOWARE approach is to connect and extend the FIWARE for Industry resources and assets to the digital automation community, so that synergies emerge both in terms of multi-sided business opportunities and in the amount of resources made available to the cognitive automation community to build its autonomous solutions and apps.
FIWARE is a curated open source framework with components that can be assembled together with other third-party ones to accelerate the development of Smart Solutions. The Orion Context Broker is the core component of any “Powered by FIWARE” platform. It enables the system to perform updates and access to the current state of context.
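As an illustration of how context state reaches the Orion Context Broker, the following sketch builds a context entity in the NGSI v2 style (as it might be POSTed to /v2/entities); the entity id, type and attributes are hypothetical, not from a real deployment:

```python
import json

# An NGSI v2-style context entity. In a "Powered by FIWARE" platform,
# a producer would POST this document to the Orion Context Broker,
# which then serves queries and subscriptions on the current state.
# Entity id, type and attribute names here are illustrative.
entity = {
    "id": "urn:ngsi-ld:Machine:press_01",
    "type": "Machine",
    "temperature": {"value": 71.5, "type": "Number"},
    "status": {"value": "running", "type": "Text"},
}
payload = json.dumps(entity)
```

Each attribute carries its own value and type, which is what lets the broker manage heterogeneous context information uniformly.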
The choice of the AUTOWARE platform was based on the fact that it is an open-source project and that it is hardware-agnostic.
The AUTOWARE platform will push forward and establish an open CPPS ecosystem. In the AUTOWARE Framework, a collection of enablers has been defined as components/tools that will enable potential users, be they end users, system integrators or software developers, to easily apply the developed technical enablers in their daily work. Moreover, verification, validation and certification enablers will be introduced in the AUTOWARE platform.
Z-BRE4K’s mission is to build a distributed software system solution embodying the Industry 4.0 principles towards cyber-physical, digital, virtual and resource-efficient factories. The ultimate goal is to develop intelligent maintenance systems for increased reliability of production systems. Additionally, special attention will be given to processes, advancing technologies and products, integrating knowledge, training, technology and industrial development in a market-oriented environment. Z-BRE4K’s intended impact on the European manufacturing industry is an increase in in-service efficiency by 24%.
Z-BRE4K will contribute to the productivity increase of different critical manufacturing processes, such as joining (GESTAMP), cutting (PHILIPS) and forming (GESTAMP, PHILIPS, SACMI), by providing analytics and suggestions in order to assist in minimising machine breakdowns. The main gain is a reduction in operational and maintenance costs. Furthermore, for the GESTAMP use case, a real-time arc-welding quality control system, based on infrared images, is being developed.
Z-BRE4K will make it possible to combine current manufacturing systems with current and new mechatronic systems. These combinations will lead to smarter manufacturing systems and thus a shorter ramp-up in generating higher quality and productivity.
Part of the Z-BRE4K project is the development of a novel embedded condition monitoring solution with cognitive capabilities, applying deep learning techniques to reduce the dimensionality of multimodal sensor data associated with a given machine/device and to provide meaningful features to predictive maintenance services on the cloud. The most suitable IoT edge devices (for an optimal trade-off between computational power and energy consumption), sensors providing relevant information on the condition of different components, and signal-processing algorithms are proposed for different machines and processes. Data gathering is enabled by the installation of IoT gateways, where data in different protocols are homogenised and sent to the cloud for storage. Real-time data, relevant KPIs and information about component status are visualised through dedicated dashboards.
The modelling and simulation methods used in Z-BRE4K are mainly Finite Element Methods (FEM), in which complex problems and processes from the real world are simplified and solved using a numerical approach. First, an accurate digital model is created of the geometry and material properties of all involved objects, the boundary conditions between these objects, and the process data (e.g. forces or temperature).
Then, the complex shape of all objects involved is approximated using a finite number of simple geometries (e.g. triangles), which simplifies the complex mathematical problem. A computer is capable of solving these mathematical operations at a rate impossible for humans, and thus enables the user to analyse various scenarios, ranging from mechanical strains within the objects to rises in temperature or material fatigue. This information can be used to predict the remaining useful lifetime of a given tool.
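As a minimal illustration of the FEM workflow described above (discretise, assemble, solve), the following sketch solves a 1-D model problem with linear elements; it is a toy stand-in for the far more complex 3-D structural simulations used in practice:

```python
import numpy as np

def fem_bar(n_elements=50):
    """1-D finite element solution of -u'' = 1 on [0, 1], u(0) = u(1) = 0.

    The continuous problem is discretised into simple linear elements,
    assembled into a stiffness matrix, and solved numerically, which is
    the same pipeline a 3-D structural FEM follows.
    """
    n = n_elements
    h = 1.0 / n
    # Assemble the tridiagonal stiffness matrix for linear elements.
    K = np.zeros((n - 1, n - 1))
    for i in range(n - 1):
        K[i, i] = 2.0 / h
        if i > 0:
            K[i, i - 1] = K[i - 1, i] = -1.0 / h
    # Consistent load vector for the uniform load f = 1.
    F = np.full(n - 1, h)
    u = np.linalg.solve(K, F)          # interior nodal values
    x = np.linspace(h, 1 - h, n - 1)   # interior node positions
    return x, u

x, u = fem_bar()
# The analytic solution is u(x) = x(1 - x)/2, with maximum 0.125 at x = 0.5.
```

Swapping the 1-D mesh for a triangulated 3-D geometry changes the assembly details but not the structure of the method.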
A simulation platform is deployed alongside the physical equipment to create intuitive maintenance control and management systems. The Z-BRE4K platform’s simulation capabilities will estimate the remaining useful life, calling for maintenance and suggesting the optimal times to place orders for spare parts, thereby reducing the related costs. The increased predictability of the system and the failure-prevention actions will reduce the number of failures, maximise performance and reduce repair/recovery times, further reducing costs.
By applying time-series analysis, it is possible to detect special events that are known (fault detection) or unknown (anomaly detection) during production. This information, correlated with sensor readings, is fed into machine learning algorithms that create estimates of Remaining Useful Life (RUL) and Health Indexes (HI) and forecast upcoming events (likelihood of failure). Special focus is given to techniques that can provide real-time information (fast computation and high accuracy) and that are scalable, in order to use new data as it becomes available. Additional information, such as mean time between failures based on historical data or expert opinion, CAE data, quality control data and real-time states, is also used in the design of the machine simulators.
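A minimal sketch of such event detection, using a rolling z-score as a simple stand-in for the project's time-series analysis (all data and thresholds are illustrative):

```python
import numpy as np

def detect_events(series, window=20, z_threshold=4.0):
    """Flag points that deviate strongly from a rolling baseline.

    Known faults and unknown anomalies both show up as large
    z-scores against the recent history of the signal.
    """
    flags = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = hist.mean(), hist.std()
        if sigma > 0 and abs(series[i] - mu) / sigma > z_threshold:
            flags.append(i)
    return flags

rng = np.random.default_rng(1)
signal = rng.normal(0.0, 1.0, 200)   # simulated sensor readings
signal[150] += 12.0                  # an injected fault event

events = detect_events(signal)
```

Flagged indices, correlated with other sensor readings, are the kind of event labels that downstream RUL and likelihood-of-failure models consume.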
01-10-2017 – 30-09-2021
CloudiFacturing will extend the field of action of the technology developed in CloudFlow and CloudSME from the product development process to the production process, in order to leverage factory data with analytics algorithms and simulation tools.
Thanks to cloud resources, enough computing power is available to analyse different scenarios in a few days instead of several weeks.
Designers of CATMARINE and SKA are now able to achieve high-quality products by analyzing different manufacturing scenarios without wasting time, money and material.
The platform is able to optimise the resin injection points/vents and verify the presence of defects in the final product, thus ensuring complete and correct mold-filling.
The outcomes of the project create a basis for improving the existing design of the water quench and will be used for the development of a new generation of nozzles.
It is expected that the new nozzle design, and thus the new water quench, will be available to customers within five years. It is expected that these new products will attract new clients: 5 new contracts in 1 year, increasing to 10 new contracts in 5 years, which will increase the turnover of Ferram by 500k Euros 1 year after the end of the experiment and by 3.5 million Euros after 5 years.
01-10-2018 – 31-03-2022
Sensor data fusion from multiple and heterogeneous sources is at the core of the development of the RS4 Controller (CORE component of the ROSSINI Modular KIT)
The human-machine interface is key to evaluating Job Quality when considering Human Factors analysis, and is therefore also very relevant in the HR cell design phase
Based on ROS, the Rossini Modular KIT aims to be fully scalable and widely adopted
The ROSSINI modular KIT offers advanced components for the different layers of a robotic application (sensing, perception, cognition, control, actuation and integration)
The RS4 Controller gathers and fuses data from different sensor sources. Within ROSSINI, the following sensor sources have been developed as EXTRA components that can be connected to the RS4 controller: 3D vision cameras, lidar arrays, radars and skins.
The ROSSINI Controller (CORE component of the ROSSINI Modular KIT) integrates a Semantic Scene Map, a flexible layer (Scheduler) and an execution layer (Motion Planner) to guarantee the optimal efficiency of the robot (also considering Job Quality factors)
Within the ROSSINI project an advanced collaborative robot has been developed, equipped with advanced and novel interfaces and able to achieve very short braking times.
All three ROSSINI demonstrators (white goods, electronic equipment, and food packaging) prove the feasibility and the advantages of ROSSINI Platform implementations in relevant (different and complex) industrial environments
Human-Robot Collaboration is key in the development of the ROSSINI solution: all the use cases look at this scenario, allowing the human operator to work in the same cell as the robot, on the same machine and even on the same work-piece.
The Virtual Design Tool (CORE component of the ROSSINI solution) aims to ease the design process of an HR cell implementation (e.g., helping with sensor placement or the hazard assessment evaluation)
Job quality evaluation focused on HRC developments aims at improving the well-being of workers (acceptance, trust, physical health) in contexts where robots may become actual co-workers.
Trust and acceptance of the workforce are the enabling factors for the actual adoption of HRC solutions.
The ROSSINI vision is that of guaranteeing, above all, the safety and health of the operators.
01-10-2018 – 31-03-2022
01-10-2018 – 31-03-2023
01-12-2018 – 30-11-2022
01-10-2018 – 31-03-2021
The Industry4.E Lighthouse team has created careers resources targeted at citizens considering STEM careers, up-skilling or re-skilling, and SMEs interested in the training and resources available to get involved in Industry4.0.
Visit the Industry4.E careers webpage today and download our Industry4.0 careers opportunities flyer.
01-05-2019 – 31-07-2022
01-05-2017 – 31-10-2020
01-09-2017 – 28-02-2021
UPTIME will reframe predictive maintenance strategy in a systematic and unified way, with the aim of fully exploiting the advancements in ICT and maintenance management, by examining the potential of big data in an e-maintenance infrastructure, taking into account Gartner’s four levels of data analytics maturity and the proactive computing principles.
UPTIME will enable manufacturing companies to reach Gartner's four levels of data analytics maturity for optimised decision making - each one building on the previous one: Monitor, Diagnose and Control, Manage, Optimize. It aims to optimise in-service efficiency and contribute to increased accident-mitigation capability by avoiding crucial breakdowns with significant consequences. The UPTIME components UPTIME_DETECT, UPTIME_PREDICT and UPTIME_ANALYZE aim to enhance the methodological framework for data processing and analytics. The key users of the UPTIME_DETECT and UPTIME_PREDICT components are data scientists, who are in charge of developing, testing and deploying algorithmic calculations on data streams. In this way, the components are able to identify the current condition of technical equipment and to give predictions. UPTIME_ANALYZE, on the other hand, is a data analytics engine driven by the need to leverage manufacturers’ legacy data and operational data related to maintenance, and to extract and correlate relevant knowledge.
In UPTIME, two data processing solutions are considered: (1) batch processing of data at rest, through massively parallel processing; and (2) real-time processing of data in motion, where real-time data from heterogeneous sources are processed as a continuous "stream" of events (produced by some outside system or systems), and data processing occurs so fast that all decisions are made without stopping the data stream and storing the information first.
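The two modes can be contrasted with a toy sketch (function names and data are illustrative): the batch version sees all data at rest in one pass, while the streaming version updates its result per event without ever storing the stream:

```python
def batch_mean(values):
    """Batch processing: all data at rest, processed in one pass."""
    return sum(values) / len(values)

def streaming_mean(stream):
    """Stream processing: each event updates state without storing the stream."""
    count, total = 0, 0.0
    for value in stream:           # events arrive one at a time
        count += 1
        total += value
        yield total / count        # a decision is available immediately

readings = [2.0, 4.0, 6.0, 8.0]
running = list(streaming_mean(iter(readings)))
final = batch_mean(readings)
# Both converge to the same answer; the streaming version never
# needed the full dataset in memory.
```

Real deployments replace the running mean with diagnostic calculations, but the batch-versus-stream distinction is exactly this one.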
UPTIME’s main functionalities are structured in three main modules, namely: the edge, cloud and GUI modules.
Four main components in the cloud-based infrastructure of the UPTIME platform include:
The UPTIME_SENSE component (USG prototype) is located in the edge-based infrastructure of the UPTIME platform. It aims to capture data from a wide variety of sources and cloud environments. SENSE brings configurable diagnosis capabilities to the edge, e.g. for real-time or off-the-grid applications. SENSE addresses three high-level functions:
The UPTIME_SENSE component is responsible for the acquisition of sensor data from the field. It is utilised to enable previously disconnected assets to communicate with the UPTIME Cloud.
The current draft of the UPTIME data model is designed based on international standards like MIMOSA (OSA-CBM v3.3.1 and OSA-EAI v3.2.3a), the initial historical data provided by the business cases and the previous implementations of UPTIME_FMECA and UPTIME_DECIDE.
The "Persistence" layer in the UPTIME conceptual architecture includes a Database Abstraction Layer (DAL) and houses a relational database engine as well as a NoSQL database, where all information needed by the "User Interaction" and "Real-Time Processing and Batch Processing" layers (refer to the UPTIME conceptual architecture) is stored and retrieved. For the raw sensor data itself, this data storage concept is enhanced by a database for time-series data to ensure efficient and reliable storage, while visualisation functionalities use an indexing database to facilitate the exposure of analytics. The UPTIME solution aims to provide data harmonisation in terms of manipulating streaming data coming from the sensors. Based on these needs, a time-series database is required, and in the context of UPTIME three instances of InfluxDB (one instance per business case) are installed. Along with the InfluxDB instances, a common MySQL database is created to handle the operations of the UPTIME system.
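The persistence pattern (a time-series store for raw readings next to a relational store for operational data) can be sketched with the standard-library sqlite3 module standing in for InfluxDB and MySQL; the schema and data are hypothetical:

```python
import sqlite3

# Illustrative stand-in for the persistence layer: a time-series table
# for raw sensor readings (InfluxDB's role) next to a relational table
# for operational records (MySQL's role). Schema is hypothetical.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sensor_ts (ts INTEGER, sensor TEXT, value REAL)")
db.execute("CREATE TABLE maintenance "
           "(id INTEGER PRIMARY KEY, asset TEXT, action TEXT)")

readings = [(1000 + i, "vibration", 0.5 + 0.01 * i) for i in range(10)]
db.executemany("INSERT INTO sensor_ts VALUES (?, ?, ?)", readings)
db.execute("INSERT INTO maintenance (asset, action) VALUES (?, ?)",
           ("press_01", "bearing inspection"))

# A typical analytics-layer query: the most recent readings of one sensor.
recent = db.execute(
    "SELECT ts, value FROM sensor_ts "
    "WHERE sensor = ? ORDER BY ts DESC LIMIT 3",
    ("vibration",),
).fetchall()
```

A dedicated time-series database adds retention policies and efficient range queries, but the division of responsibilities between the two stores is the same.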
UPTIME Data are stored in appropriate, shared databases (NoSQL, time-series-based, relational) according to a common UPTIME predictive maintenance model in order to facilitate homogeneous access.
UPTIME has a common MySQL database that will handle the operations of the UPTIME system.
UPTIME_ANALYZE is a data analytics engine driven by the need to leverage manufacturers’ legacy data and operational data related to maintenance, as well as to extract and correlate relevant knowledge. Its data mining and analytics module practically delivers the intelligence of the ANALYZE component by defining, training, executing and experimenting with different machine learning algorithms.
The UPTIME_VISUALIZE component (an extended version of the SeaBAR prototype) is responsible for intuitive and uninterrupted human-machine interaction. The user interfaces, including the analytics dashboards and the notification engine, will be customised or further developed in full accordance with the end-user business case. Taking the example of the UPTIME White Goods business case, a complex automatic production line producing drums for dryers, the generation of early warnings suggesting autonomous activities to factory workers should be communicated through mobile devices or on-board devices.
The data visualisation in UPTIME is performed by the UPTIME_VISUALIZE (SeaBAR prototype) component:
The UPTIME Platform is developed according to a unified predictive maintenance framework and an associated unified information system, to enable predictive maintenance strategy implementation in manufacturing industries. The UPTIME predictive maintenance system will extend and unify the new digital e-maintenance services and tools and will incorporate information from heterogeneous data sources, e.g. sensors, to estimate process performance more accurately. The UPTIME predictive maintenance platform is developed mainly on the basis of five baseline e-maintenance services and tools:
To ease integration of all UPTIME components, the main programming language used by the components and the integrated platform is Java.
The UPTIME platform will leverage crucial, and often hidden, data from machines and systems in real time and substantiate the benefits of advanced predictive maintenance analytics in boosting asset availability and service levels in manufacturing operations. Moreover, UPTIME delivers new ways for effective operational risk management in terms of preventing unexpected failures of a manufacturer’s assets and effectively planning maintenance actions, thereby transforming the mentality of manufacturing industries with respect to maintenance services. With the help of the end-to-end UPTIME smart diagnosis-prognosis-decision-making methods, manufacturers become leaner, more versatile and better prepared to act upon accidents and unexpected incidents in their everyday operations. The increased accident-mitigation capabilities they acquire allow them not only to improve workplace safety and the workers’ health, but also to reduce the incurred costs and become more competitive.
The UPTIME platform focuses on the use of condition monitoring techniques, e.g. event monitoring and data processing systems, that will enable manufacturing companies with installed sensors to fully exploit the availability of huge amounts of data and to handle real-time data in complex, dynamic environments, in order to gain meaningful insights and to decide and act ahead of time, resolving problems before they appear (e.g. avoiding or mitigating the impact of a future failure) in a proactive manner. Moreover, the unified framework proposed by UPTIME will not be limited to monitoring and diagnosis; it aims to cover the whole prognostic lifecycle, from signal processing and diagnostics to prognostics and maintenance decision making, along with their interactions with quality management, production planning and logistics decisions.
01-11-2017 – 28-02-2021
01-10-2017 – 31-03-2021
Digital models enhanced with real-world data acquired from sensor devices will be used as the basis for modelling physical phenomena that affect the operational condition of the equipment, such as degradation. This will improve the accuracy of the predictive maintenance functionalities of the SERENA platform and tools.
The SERENA project considers the support of data analytics functionalities for acquiring a certain portion of sensor data to feed the machine learning algorithms responsible for predictive maintenance.
The SERENA project includes the development of a plug-and-play device for machine data acquisition.
01-10-2017 – 31-03-2021
01-01-2019 – 31-07-2022
Through innovative algorithms and statistical methods, possible data sources for predictive quality control can be identified and evaluated. Moreover, through the cooperation of all project partners, data access and acquisition along the whole process chain can be realised. With a focus on algorithms and methodology, a use-case-specific algorithm is going to be implemented and validated to maintain high prediction accuracy.
Data availability is a challenge: Limited access to measurement data (due to limited access to third-party systems)
There seems to be a relationship that allows torque to be predicted using in-line data; this needs to be explored further.
By applying sophisticated algorithms and methods to the acquired data, systematic failure root-cause detection supported by data analytics can be implemented. In addition, improved knowledge of machine states/maintenance requirements at neuralgic points can be achieved through the desired solution path within this pilot.
An AI vision algorithm developed by TNO (WP3) appears to filter out badly rated parts better than the installed algorithm. An advantage can arise when the product print changes, since the AI approach can keep up with the required development speed better than traditional algorithm development.
For this trial, the acquired test data will be analysed with regard to quality classification. In every test a part can either pass or fail. Failed parts must be reworked, if possible, and brought back into the process. Sometimes parts are classified as failed even though they are good (false positives). This effect will be analysed by machine learning algorithms and, if necessary, reflected in the classification parameterisation. Additionally, 100% testing (meaning every panel is tested automatically, with a bottleneck at the out-of-line test stations) will be addressed by setting up failure prediction models for quality forecasting. This will be supported by data analysis of pre-reflow AOI (automated optical inspection).
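The false-positive analysis mentioned above can be sketched as follows; the test log and resulting rates are illustrative, not from the pilot:

```python
def classification_rates(y_true, y_pred):
    """False-positive and false-negative rates for pass/fail test results.

    y_true: 1 if the part is actually defective, 0 if good.
    y_pred: 1 if the test flagged it as failed, 0 if it passed.
    """
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / negatives, fn / positives

# Hypothetical test log: 8 good parts, 2 defective,
# with one good part wrongly flagged as failed.
actual  = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
flagged = [0, 1, 0, 0, 0, 0, 0, 0, 1, 1]
fpr, fnr = classification_rates(actual, flagged)
```

Tracking these rates over time is what lets the classification parameterisation be adjusted when too many good parts are sent to rework.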
Alongside these data analysis and process optimisation activities, an economic evaluation will be included to support in-process decisions and configuration changes. For the development of these applications, the main steps are data availability/access, data processing, and model development. The developed applications should be deployed on edge devices.
The Milling Digital Twin enables strategy design and quality control in milling processes with software tools alone, through simulation and virtual optimisation
The Cockpit optimiser software provides an environment for the intelligent design of an automated cell together with the customer.
The Cockpit optimiser and Milling Digital Twin use AI tools to accelerate the design and optimisation processes currently carried out by operators
Solutions to facilitate the analytical thinking of the operator: the solution will help the operator correlate quality and process parameters in order to make decisions upstream in the process.
With the help of skilled production line workers, the data in the AI platform can be annotated, thereby producing the predictive models for ZDM autonomous quality inspection. The platform gives users the ability to monitor the Autonomous Quality (AQ) process and provide feedback for the ZDM.
To acquire quality data, all involved users and managers must understand some basic data science principles. Modern machine vision relies on large amounts of consistent data. The data acquisition process begins with an organised collection of samples, which should become an integral part of every standardised manufacturing process that involves automated quality inspection or ZDM.
There is a need for managing large data sets and big data, and for AI solutions for different manufacturing processes. Solutions need to support operators in decision-making.
Enable operators to work in a more complex environment while reducing the strain of administrative tasks and enabling easy production analytics by capturing information online instead of on paper.
Shopfloor worker (operator – technical support group): from a shopfloor perspective, new or altered job profiles should be defined; however, in essence the job profiles will remain the same, while the operators and Technical Support Groups need to understand and be able to work with these new technologies. This requires some basic knowledge of the (digitalised) systems; for the operators a lot can be captured in SOPs (Standard Operating Procedures), but the technical support staff should also have some basic knowledge of the workings and the hardware/software side of the systems in order to be able to support the shopfloor where needed.
The ZDM Autonomous Quality solutions are used as systems that perform tasks in an autonomous/automated way, requiring the intervention of an operator only when an operational tie-breaker is needed. When that is the case, the operator has to analyse the incident and provide a solution to the AQL System, interacting with it via an HMI.
Complete machine-parameter correlation is realised, allowing machine operators to take into account all the assets from each workstation of the production line. This enhances capacity compared with conventional analytics methods.
The end-to-end process supported by the overall architecture helps the operator and team leader in their daily activities, in order to prevent and anticipate quality issues on the product as much as possible via the analysis of a huge amount of data linked together through the holistic semantic model.
01-01-2019 – 30-06-2023
01-01-2019 – 31-12-2022
01-01-2020 – 31-12-2023
Operational services aim to collect product data on post-use Li-Ion batteries about their use phase, in order to enable monitoring and full traceability of their life-cycle.
Operational services aim to:
As the proactive exploitation of the DigiPrime platform enables car-monitored SOH tracing and availability, less testing is needed to assess the residual capacity of the battery. Moreover, by knowing the structure of the battery packs, a decision support system can be implemented to adjust the de- and remanufacturing strategy accordingly and select the most suitable cells for re-assembly into second-life modules, thus unlocking a systematic circular value chain for the re-use of Li-ion battery cells. Furthermore, excessively degraded cells which cannot be re-used can be sent to high-value recycling, based on knowledge of their material composition.
01-01-2020 – 31-12-2023
01-01-2020 – 30-06-2023
Social-manufacturing platform that enables multi-stakeholder interactions and collaborations to support user-driven open-innovation and co-creation.
01-09-2019 – 30-11-2022
Cyber-Physical Production Systems and digital twins require data collection from the real system