MUSIC | MUlti-layers control & cognitive System to drive metal and plastic production line for Injected Components
01-09-2012 – 31-08-2016
01-09-2012 – 31-08-2016
11-01-2015 – 31-10-2018
01-10-2016 – 30-09-2019
Trust is identified as a key indicator in the pilot. Trust experiments are critical when introducing automation mechanisms that co-operate with workers.
Workers’ opinions are key to decision-making and acceptance during the design and development of adaptive automation solutions.
Workers report that adaptation within automation mechanisms is an enhancement at the workplace.
The introduction of these solutions is perceived by workers as helpful, especially when production tasks are exhausting and may provoke health issues. They are not received with reluctance but as support for workers’ tasks at the workplace. Regarding AR, it is generally considered very useful, although the HMD (HoloLens) is too heavy for prolonged tasks.
01-10-2016 – 30-09-2019
01-10-2016 – 30-09-2019
The smart HMI was tested at E80, with expert and non-expert operators working on an AGV in a real working environment. The availability of such a tool to guide the use and maintenance of complex vehicles was strongly appreciated by workers, since it simplifies interaction with the vehicle's proprietary user interface and enables structured access to knowledge of specific procedures that had previously been carried out empirically. Moreover, expert operators are now able to draw on their experience and devise ad hoc maintenance plans, customized to the current status of the fleet.
Further studies are being carried out to employ virtual training so that customers' operators can be trained without having to wait for the delivery of a newly bought machine, and without having to take a productive machine out of service for training purposes. The ADAPT module will be developed further to evaluate integration with the recently released MAESTRO Active HMI, which already incorporates personalization features such as language settings. Finally, discussions are underway with the commercial area to verify whether the use of wearables by customers' workers can be promoted to improve their well-being at home.
Even though robots are well known in Europe, there is a lack of knowledge about their real potential and about the existence of tools able to simplify their programming and reconfiguration. Most industries need to be supported in the process of introducing such tools into their plants. Advanced tools for training are essential.
01-10-2016 – 30-09-2019
Evaluation studies carried out at the premises of Airbus showed positive results for both the Exoskeleton and KIT services developed for this specific use case. Physical and mental fatigue of workers were reduced, as outcomes of the Exoskeleton and KIT services respectively. Workers were keen to adopt the new technologies in their everyday working activities.
Both the WOS and KIT services have been evaluated at COMAU in real-life applications. WOS has been well accepted by both operators and engineers as a valuable tool for eliminating motion waste and improving workplace ergonomics in production lines. The evaluation of KIT showed that the developed solution helps to reduce the cognitive load of operators, reduce faults and improve efficiency.
KIT, the Exoskeleton service and OAST have been evaluated at ROYO's premises in real working conditions. KIT has been characterized as a valuable tool that reduces cognitive load and helps workers eliminate uncertainties in the assembly process. The combination of the Exoskeleton service with OAST has helped to reduce the physical and mental stress of the operators in the palletization area of ROYO.
01-10-2016 – 31-03-2020
With the provision of the right tools, every person can become part of a manufacturing system, including people with disabilities. These people are eager to work and feel productive, which can affect global industry very positively.
The developed tool has been applied at VOLVO, using an offline production line similar to the actual one. The perception time of the model has been reduced. Notably, more aspects of the model have been taken into consideration compared to the conventional simulation representation currently used. Finally, collaborative design of the simulation model has been rendered feasible, as a team of production managers can gather and discuss the same 3D model by making annotations.
11-01-2015 – 31-07-2020
01-09-2016 – 31-08-2019
See also D2.6 Lessons Learned and updated requirements report II and D8.8 Final Evaluation Report of the COMPOSITION IIMS Platform
1. Early design decisions on deployment and communication protocols (Docker, MQTT, AMQP) were made. Deciding on the deployment and communication platforms early made test deployment and integration work easier to manage.
2. Inception design (from the DoA) did not specify some components, e.g., for operational management or configuration. The architecture needed additional components to cover system configuration and monitoring.
3. Blockchain is still not a plug-and-play technology and requires a substantial amount of low-level configuration.
4. The Matchmaker should match agents (requesters and suppliers). Moreover, the Matchmaker should match a request with the best available offer.
5. Use cases need to be solidly anchored in the real world of the actors and end users. They must not solely represent what is feasible from a technical point of view, but also reflect non-functional requirements such as regulations and business practices. Otherwise, the business cases would become unsustainable for further exploitation.
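The early platform decisions in lesson 1 can be illustrated with a minimal container stack. This is a hypothetical sketch: the `monitor` service and its image name are invented for illustration and are not taken from the COMPOSITION deployment.

```yaml
# Hypothetical minimal stack: an MQTT broker plus one containerized
# component, sketching the Docker/MQTT decision described in lesson 1.
version: "3.8"
services:
  broker:
    image: eclipse-mosquitto:2        # MQTT broker
    ports:
      - "1883:1883"
  monitor:
    image: example/factory-monitor:latest   # placeholder component image
    environment:
      MQTT_HOST: broker               # resolves via the compose network
    depends_on:
      - broker
```

Fixing this shape early means every new component only needs to agree on topics and payload formats, which is what made integration work easier to manage.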
01-10-2016 – 30-09-2019
A use case derived from Continental's measurement lab has been used for validation, revealing the importance of carefully choosing task properties, allowing time for employees to familiarize themselves with such a system, and ensuring the security of sensitive data.
Within Factory2Fit there were two use cases for the co-design process piloted at the Continental plant in Limbach-Oberfrohna: one pilot for workplace design and one for work process design. An evaluation of the method selection and execution showed good acceptance among the workers who contributed to the design process. To reach positive results during the co-design process, it is essential to assess the boundary conditions and the group structure very well.
The developed tool could be extended to become part of a larger communication platform between the equipment provider and their customers, aiming at strengthening their relationship.
SoMeP was piloted at Prima Power, revealing that the integration of production information and messaging is valuable and time-saving for getting guidance. Gamification can motivate workers to share knowledge (Zikos et al., 2019). The use of social media will require organizational policies, e.g. for moderating content (Aromaa et al., 2019).
The Worker Feedback Dashboard was piloted in three factories with ten workers. For user acceptance, it was crucial that the workers participated in planning how to use the solution and what kind of work practices would be related to its use. The pilot evaluation results indicate that there are potential lead users for the Worker Feedback Dashboard. Introducing the solution would make its impacts visible and could then encourage those who are more doubtful to join.
The ARAG solution was piloted in a factory of United Technologies Corporation (UTC). The validation results reflected the potential of the solution and technicians' acceptance of solutions specifically designed to support them in complex operations. Recent studies have shown that gamification tools can be utilized in industrial AR solutions to reduce technicians' learning curve and increase their cognition (Tsourma et al., 2019).
The on-the-job learning tool was piloted in a UTC factory producing air handling units. The lesson learned is that, to make the content more understandable, users must be able to interact with it by viewing the components' CAD files and making or reading remarks.
01-10-2016 – 31-10-2019
In FAR-EDGE, the value of DLT is in distributed consensus: enabling the intelligent edge nodes of a system to agree - or disagree - on state transitions, to the effect that any change in global state (e.g., the assignment of a task to a specific node) must be approved collectively; moreover, individual nodes may fail or go offline without compromising the system as a whole. However, standard DLT platforms also maintain an immutable log of transactions (the Blockchain) that is replicated on all nodes, which may represent a significant performance bottleneck in many real-world applications. What we learned from the FAR-EDGE experimentation is that, in most factory automation scenarios, the historical memory of any transaction may be cleared once consensus is safely reached on it - something that is not supported by any current DLT implementation. This form of short-lived state persistency is something that is probably worth experimenting with in the future.
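The short-lived state persistency idea can be sketched in a few lines. The following is a toy illustration in plain Python, not any real DLT API: nodes vote on a proposed state transition, a simple majority quorum commits it, and the pending record is pruned immediately afterwards instead of being kept in an immutable log.

```python
# Toy sketch (NOT a real DLT): quorum-based consensus on state transitions
# with a prunable log, illustrating "short-lived state persistency".
# Node names and the quorum rule are illustrative assumptions.

class PrunableLedger:
    def __init__(self, node_ids):
        self.node_ids = set(node_ids)
        self.state = {}      # current global state (e.g. task assignments)
        self.pending = {}    # proposal id -> (change, set of approving nodes)
        self._next_id = 0

    def propose(self, change):
        """A node proposes a state change, e.g. {'task42': 'edge-node-3'}."""
        pid = self._next_id
        self._next_id += 1
        self.pending[pid] = (change, set())
        return pid

    def vote(self, pid, node_id):
        """Record a node's approval; commit once a majority agrees."""
        change, votes = self.pending[pid]
        votes.add(node_id)
        if len(votes) * 2 > len(self.node_ids):  # simple majority quorum
            self.state.update(change)            # apply the transition
            del self.pending[pid]                # prune: no immutable history
            return True
        return False

ledger = PrunableLedger(["n1", "n2", "n3"])
pid = ledger.propose({"task42": "n3"})
ledger.vote(pid, "n1")                 # 1 of 3: not yet committed
committed = ledger.vote(pid, "n2")     # majority reached: committed and pruned
```

Once consensus is reached, only the latest state survives, which is exactly the behaviour that standard DLT platforms (which replicate the full transaction history) do not support.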
01-10-2016 – 31-03-2020
Concerning Microsemi use case: Firstly, the knowledge and physical equipment already in place will have a lasting legacy within the MICROSEMI facility at Caldicot. This understanding and equipment will continue to be used on the original use case and expanded upon as explained in the next section.
At the beginning of this project there was little understanding of what could be achieved by the type of approach undertaken within the project, or of whether it could produce results in a manufacturing environment. Now we have an answer, and the Z-Fact0r approach has been a great success.
NECO, as an end user, has contributed to the estimation of the benefits the Z-Fact0r platform would provide to manufacturers. In the NECO case:
To bring the Z-Fact0r solution to TRL 9, NECO would definitely support the Consortium's intention of seeking further funding opportunities, especially in the form of other calls from the European Commission.
For the manufacturing of metallic parts with less stringent dimensional requirements, the 3D scanning equipment is a fundamental part of the solution, especially if many parts with the same geometry are produced daily. This would allow automated inspection to be put in place and, if the production rate is too high, a significant number of samples could still be controlled and fed into the PREDICT and PREVENT modules. Of course, sensors need to be installed in the machines so that enough data is available for the machine learning processes. Then, the results could be applied not only to the samples but to all parts being produced. For products with higher added value, robot deburring is an indispensable tool, as it eliminates the need for human intervention in rather sensitive tasks such as the elimination of defects originating on the production line.
01-09-2017 – 28-02-2021
One of the main learnings is that data quality needs to be ensured from the beginning of the process. This implies spending more time, effort and money to carefully select the sensor type, data format, tags, and correlating information. This is particularly true when dealing with human-generated data: if operators feel that entering data is not useful, time-consuming, boring or out of scope, this will inevitably produce bad data.
Quantity of data is another important aspect. A stable and controlled process has less variation; thus, machine learning requires large sets of data to yield accurate results. This aspect of data collection also needs to be designed in advance, sometimes months or even years before the real need emerges.
This experience translates into some simple, even counterintuitive guidelines:
1. Anticipate the installation of sensors and data gathering. The best approach is to do it during the first installation of the equipment or at its first revamp. Don't underestimate the amount of data you will need to train a good machine learning model. This, of course, also needs economic justification, since the investment in new sensors and data storage will only pay back after some years.
2. Gather more data than needed. Common practice advice is to design a data-gathering campaign starting from the current need. This could, however, lead to missing the right data history when a future need emerges. In an ideal state of infinite capacity, the data-gathering activities should capture the entire ontological description of the system under design. Of course, this is not feasible in all real-life situations, but a good strategy is to populate the machine with as many sensors as possible.
3. Start initiatives to preserve and improve the current datasets, even if they are not immediately needed. For example, start migrating Excel files distributed across different PCs into common shared databases, taking care to perform thorough data cleaning and normalization (for example, converting local-language descriptions in data and metadata to English).
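Guideline 3 can be sketched concretely. In the following illustration, the CSV columns, the table layout and the Italian-to-English label map are all invented for the example; real migrations would of course handle many more fields and languages.

```python
# Hypothetical sketch of guideline 3: consolidating per-PC spreadsheet
# exports (represented here as CSV text) into one shared database,
# normalizing local-language labels to English during the migration.
import csv
import io
import sqlite3

TRANSLATE = {"guasto": "failure", "usura": "wear"}  # Italian -> English labels

def load_into_shared_db(conn, csv_text):
    """Parse one exported file and insert normalized rows."""
    conn.execute("CREATE TABLE IF NOT EXISTS events (machine TEXT, label TEXT)")
    for row in csv.DictReader(io.StringIO(csv_text)):
        raw = row["label"].strip().lower()
        label = TRANSLATE.get(raw, raw)   # translate if known, else keep as-is
        conn.execute("INSERT INTO events VALUES (?, ?)", (row["machine"], label))

conn = sqlite3.connect(":memory:")       # a shared DB server in practice
load_into_shared_db(conn, "machine,label\npress01,Guasto\npress02,usura\n")
labels = [r[0] for r in conn.execute("SELECT label FROM events ORDER BY machine")]
```

The point is not the specific tooling but doing the cleaning and normalization once, at migration time, so later machine learning work starts from a consistent dataset.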
Finally, the third important learning is that data scientists and process experts still don't speak the same language, and it takes significant time and effort from mediators to help them communicate properly. This also needs to be taken into account and carefully planned. Companies definitely need to close the skills gap, and there are different strategies available:
train Process Experts on data science;
train Data Scientists on the subject matter;
develop a new role of Mediator, who sits in between and shares a minimum common ground to enable clear communication in difficult cases.
Quantity and quality of data: the available data in the FFT use case mainly consists of legacy data from specific measurement campaigns. These campaigns mainly targeted insights into the effect of operational loads on the health of the asset, and are therefore quite suitable for establishing the range and type of physical parameters to be monitored by the UPTIME system. UPTIME_SENSE is capable of acquiring data from mobile assets in transit using different modes of transport. While this would have been achievable from a technical point of view, the possibility to perform field trials was limited by the operational requirements of the end user. Therefore, only one field trial in one transport mode (road transport) was performed, which yielded insufficient data to develop a useful state detection capability. Due to the limited availability of the jig, a laboratory demonstrator was designed to enable partially representative testing of UPTIME_SENSE under lab conditions, to allow improvement of data quantity and diversity, and to establish a causal relationship between acquired data and observed failures so that maintenance recommendations can be made.
Installation of sensor infrastructure: when designing the incorporation of new sensors into the existing infrastructure, it is necessary to take into consideration the extreme physical conditions inside the milling station, which require special measures to avoid sensors being damaged or falling off. A flexible approach was adopted, combining internal and external sensors to make the sensor network less prone to failure. Quantity and quality of data: a large amount of collected data is necessary for the training of the algorithms. Moreover, the integration of real-time analytics and batch data analytics is expected to provide better insight into how the milling and support rollers work and behave under various circumstances.
Quantity and quality of data need to be ensured from the beginning of the process. It is important to gather more data than needed and to have a high-quality dataset, since machine learning requires large sets of data to yield accurate results. Data collection nevertheless needs to be designed before the real need emerges. Moreover, it is important to have a common ground for sharing information and knowledge between data scientists and process experts, since in many cases they still don't speak the same language, and it takes significant time and effort from mediators to help them communicate properly.
01-11-2017 – 28-02-2021
To develop and apply statistical models for supporting PdM, it is crucial to have as much failure data as possible, which is not easy to find in companies' databases. Furthermore, advancing and integrating different technologies in a single automatic and digitised smart PdM system is a challenge that requires close collaboration between research and industry players.
01-10-2017 – 31-03-2021
In this case, edge analytics performed by a low-cost edge device (a Raspberry Pi) proved that it is feasible to operate predictive maintenance systems like SERENA. However, performance issues were caused by the high sampling frequency (16 kHz) and by background processes running on the device, which affected the measurements. Lowering the sampling frequency reduces this issue; alternatives would be hardware with buffer memory on the A/D card, or a real-time operating system implementation. The hardware is inexpensive to invest in, but the solution requires a lot of tailoring, which means that expenses grow with the number of working hours needed to customize the hardware for the final usage location. If similar applications implement the same configuration, cost-effectiveness increases. Furthermore, the production operators, maintenance technicians, supervisors, and staff at the factory need further training on the SERENA system and its features and functionalities.
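The sampling-rate mitigation can be sketched as follows. This is a crude block-averaging decimation in plain Python, with the signal and reduction factor invented for illustration; a real deployment would apply a proper anti-aliasing filter before downsampling.

```python
# Illustrative only: reducing an effective 16 kHz sampling rate on a
# constrained edge device by averaging non-overlapping blocks of samples.

def decimate_by_averaging(samples, factor):
    """Average each block of `factor` consecutive samples into one value."""
    n = len(samples) // factor
    return [sum(samples[i * factor:(i + 1) * factor]) / factor
            for i in range(n)]

raw = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0]  # pretend 16 kHz capture
reduced = decimate_by_averaging(raw, 4)             # 4x fewer samples to process
```

Processing four times fewer samples leaves proportionally more CPU headroom for the analytics and for the background processes mentioned above.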
From the activities developed within the SERENA project, the relation between the accuracy and good performance of a CMM machine and the proper functioning of its air bearings system became clearer. TRIMEK's machine operators and personnel confirmed that airflow or pressure inputs and values outside the defined thresholds directly affect machine accuracy. This outcome was made possible by correlating the datasets collected from the sensors installed on the machine axes with verifications of machine accuracy performed using the tetrahedron artifact, thanks to the remote real-time monitoring system deployed for TRIMEK's pilot. This has highlighted the importance of a cost- and time-effective maintenance approach and the need to monitor critical machine parameters, both for TRIMEK as a company and from the client's perspective.
Another lesson learned throughout the project concerns AI-based maintenance techniques: they require multiple large datasets, including failure data, to develop accurate algorithms. Since CMM machines are usually very stable, it is difficult to develop algorithms for a fully predictive maintenance approach in the metrology sector, at least within a short period for collection and assessment.
It also became clear that an operator support system is a valuable tool for TRIMEK's personnel, mainly operators (including new operators), as an intuitive, interactive guide for performing maintenance tasks; it can be exploited further by adding workflows for maintenance activities beyond the air bearings system. Additionally, a more customized scheduler could be a useful tool for daily use with customers.
As with any software service or package, implementing and using it must be learned; however, TRIMEK's personnel are accustomed to managing digital tools.
The SERENA project has provided a deep dive into an almost complete IIoT platform, leveraging the knowledge of all the partners involved and merging many competencies and technological solutions into the platform. The two main aspects of the project for COMAU are the container-based architecture and the analytics pipeline. Indeed, these are the two components that have been leveraged most internally, and they have inspired COMAU's effort in developing the new versions of its IIoT offering portfolio. The predictive maintenance solutions developed within the scope of the project have confirmed the potential of this kind of solution, meeting expectations.
Conversely, another takeaway was the central need for a huge amount of data, preferably labelled, capturing at least the main behaviour of the machine. This contradicts the common perception that predictive maintenance consists of general rules that can easily be applied to any kind of machine with impressive results.
More concretely, predictive maintenance and analytics pipelines in general have the potential to be disruptive in the industrial scenario; on the other hand, it will likely take some time before they are widely used, built not from a few all-embracing algorithms but from many vertical ones, each applicable to a restricted set of the most relevant machines or use cases.
The SERENA system is very versatile, accommodating many different use cases. Prognostics need a deep analysis to model the underlying degradation mechanisms and the phenomena that cause them. Creating a reasonable RUL calculation for the VDL WEW situation was expected to be complicated, and this expectation was confirmed, as the accuracy of the calculated RUL was not satisfactory. Node-RED is a very versatile tool, well chosen for the implementation of the gateway. In addition, the analysis of production data revealed useful information regarding the impact of newly introduced products on the existing production line.
All the experiments conducted have been interpreted in terms of their business implications: a reliable system (i.e. a robust SERENA platform and robust data connection) and an effective prediction system (i.e. a data analytics algorithm able to identify mixing head health status in advance) will have an impact on the main KPIs related to foaming machine performance. In particular:
Besides practical results, SERENA provided some important lessons to be transferred into its operative departments:
Data Quality
Finding the relevant piece of information hidden in large amounts of data turned out to be more difficult than initially thought. One of the main learnings is that data quality needs to be ensured from the beginning, which implies spending more time, effort and money to carefully select sensor type, data format, tags, and correlating information. This is particularly true when dealing with human-generated data: if operators feel that entering data is not useful, time-consuming, boring or out of scope, this will inevitably produce bad data.
Some examples of poor quality are represented by:
a. Missing data
b. Poor data description or no metadata availability
c. Data that is not relevant, or only scarcely relevant, to the specific need
d. Poor data reliability
There are two solutions: 1) train people on the shop floor to increase their skills in digitalization in general and in data-based decision processes specifically; 2) design more ergonomic human-machine interfaces, involving experts in the HMI field, with the scope of reducing the time needed to insert data and the uncertainty during data input.
These two recommendations lead to a better dataset design from the beginning (which ensures machine-generated data quality) and reduce the possibility of errors, omissions and poor accuracy in human-generated data.
Data Quantity
PU foaming is a stable, controlled process and turned out to have little variation; thus, machine learning requires large sets of data to yield accurate results. This aspect of data collection also needs to be designed in advance, months or even years before the real need emerges. This translates into some simple, even counterintuitive guidelines:
1. Anticipate the installation of sensors and data gathering. The best approach is to do it at the equipment's first installation or its first revamp. Don't underestimate the amount of data you need to train a good machine learning model. This, of course, also needs economic justification, since the investment in new sensors and data storage will only pay back after some years.
2. Gather more data than needed. Common practice advice is to design a data-gathering campaign starting from the current need. This could, however, lead to missing the right data history when a future need emerges. In an ideal state of infinite capacity, the data-gathering activities should capture the entire ontological description of the system under design. Of course, this is not feasible in all real-life situations, but a good strategy is to populate the machine with as many sensors as possible.
3. Start initiatives to preserve and improve the current datasets, even if they are not immediately needed. For example, start migrating Excel files spread across individual PCs into commonly shared databases, performing thorough data cleaning and normalization (for example, converting local-language descriptions in data and metadata to English).
Skills
Data scientists and process experts are not yet speaking the same language, and it takes significant time and effort from mediators to make them communicate properly. This also needs to be taken into account and carefully planned: companies definitely need to close the skills gap, and there are different strategies available: train process experts on data science; train data scientists on the subject matter; or develop a new role of mediator, who sits in between and shares a minimum common ground to enable communication in difficult cases.
01-10-2017 – 31-03-2021
Several challenges limit the successful application of predictive maintenance in factories:
Correct determination of the best maintenance strategies and computation of the components' RUL requires the collection of a vast amount of data in a format that can easily be accessed and analyzed.
Robot components show a slow degradation of their performance: data collection must begin as soon as possible.
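The slow-degradation point suggests why early data collection matters: even the simplest trend extrapolation needs a history to fit. The following is an illustrative sketch only, a least-squares line extrapolated to an assumed failure threshold, with all numbers invented rather than taken from any project data.

```python
# Illustrative RUL sketch: fit a straight line to a slowly degrading
# health indicator and extrapolate to a failure threshold. The indicator
# values and the threshold below are made-up assumptions.

def estimate_rul(times, health, failure_threshold):
    """Least-squares linear fit, then time remaining until the threshold."""
    n = len(times)
    mt, mh = sum(times) / n, sum(health) / n
    slope = (sum((t - mt) * (h - mh) for t, h in zip(times, health))
             / sum((t - mt) ** 2 for t in times))
    if slope >= 0:
        return None  # no degradation trend detected yet
    t_fail = mt + (failure_threshold - mh) / slope  # where the line crosses
    return t_fail - times[-1]                       # time left from last sample

# Health drops ~1 unit per 100 h of operation; failure assumed at 90.
rul = estimate_rul([0, 100, 200, 300], [100, 99, 98, 97], 90)
```

With only a few early samples the fitted slope is unreliable, which is precisely why starting data collection as soon as possible is the lesson drawn above.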
01-10-2017 – 30-09-2020
01-10-2017 – 31-03-2021
GESTAMP, besides becoming familiar with Z-BRE4K's solution validation and assessment methodology, gained a better understanding of its internal readiness to apply predictive maintenance solutions to its plants, while new mitigation actions related to process flaws and defect identification were developed during Z-BRE4K. GESTAMP has also come to appreciate the importance of the solution validation and assessment methodology defined in Z-BRE4K.
PHILIPS supports the idea of predictive maintenance ("listening to the machines") and understands that the key to success is close contact between technology providers and experts, with data integration/architecture and machine learning both being very important.
SACMI-CDS discovered the importance of collaborating not only with mechanical engineering and maintenance professionals but also with experts from different technical backgrounds, who together can improve multi-tasking, combine shop-floor and office-related activities, and schedule activities during the working day.
In general, after solution implementation (TRL 5), testing the system on the shop floor (TRL 6) and validation of the Z-BRE4K solution (TRL 7) at end users, the final lessons learnt can be summarised as follows:
01-10-2017 – 30-09-2021
The combined use of a simulation solution based on a numerical model and of remote HPC resources has enabled a new design and development process for catamaran hulls.
The solution, which relies on the CloudiFacturing platform, provides two kinds of advantages. Technically, more efficient and less defect-prone processes can be conceived and analyzed before their adoption. From an economic point of view, SMEs can afford the use of HPC resources without heavy investments.
"Thanks to the CloudiFacturing technology we can now take a look inside the VARTM process and switch our manufacturing process to a safer, more reliable and cheaper one," said Gabriele Totisco, CEO of Catmarine.
The experiment setup was well prepared and discussed by the experiment partners LCM and Hanning. The implementation of the solution in the CloudiFacturing Marketplace, however, was unclear and required significant effort during the early stages of the experiment. This concerned partly the technical solution and partly the definition and communication of the experiment structure to the consortium.
While this added some unplanned effort to the experiment, which reduced the time available for the practical implementations, the cause seems quite unavoidable for wave 1 of the experiments, since the development of the Marketplace happened in parallel with the experiments. This situation will therefore automatically improve for the experiments in wave 2.
01-10-2018 – 31-03-2022
The development of the Rossini Modular KIT brought significant advances in technology and in awareness of collaborative robotics across Europe. In particular, the set of efficient and modular tools developed within Rossini enables and eases different specific activities, from hazard assessment evaluation to the detection of multiple humans in a monitored area where robots are working. Nevertheless, important activities still require considerable effort in terms of knowledge, development and collaboration: from the need to refine existing interfaces (or define new, well-established ones) to the identification of solutions able to bring human-robot interaction even closer (and make it more trustworthy), including in terms of standardization, which still presents several gaps on this topic.
In particular, the following topics have been identified as important issues to tackle in future activities and research:
01-11-2018 – 30-04-2022
IDS RA configuration and implementation not only on a peer-to-peer basis, but also creating a data space.
01-10-2020 – 30-09-2023
For flexible, reconfigurable systems where everything is connected together and must utilise a common data format, selecting the correct data format and a common structure for its use is key. B2MML worked very well for this application, but there is still scope for variation in the way terms and variables are defined, which must be settled on.
Converting an agreed manufacturing process plan into B2MML is partly automated, but still required a large amount of manual processing. More time should have been spent on automating this process.
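As a rough illustration of the kind of document involved, a minimal B2MML-style production request might look as follows. The identifiers and values are invented, and the element selection is a simplified reading of the B2MML ProductionSchedule schema rather than the project's actual payload.

```xml
<!-- Hedged illustration only: a minimal B2MML-style production request.
     Element names follow the B2MML ProductionSchedule model in spirit;
     all identifiers and times are made up. -->
<ProductionSchedule xmlns="http://www.mesa.org/xml/B2MML">
  <ProductionRequest>
    <ID>REQ-001</ID>
    <ProductProductionRuleID>assemble-bracket</ProductProductionRuleID>
    <StartTime>2021-05-01T08:00:00Z</StartTime>
  </ProductionRequest>
</ProductionSchedule>
```

Even in a fragment this small, the scope for variation (how IDs are composed, which optional elements are used) shows why the terms and variables must be settled on up front.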
Ideally, all components of the system would communicate directly with the service bus. Practically, not all devices will support the service bus, so use of an intermediary communication protocol such as OPC UA may be necessary.
Although process control may all be centralised with a manufacturing service bus, safety systems may not be. This can cause unexpected system behaviour when the system starts a new process unless the safety system is fully understood by the users.
Selection of flexible technologies and standards does not necessarily mean that any given implementation using those technologies will be flexible. A system implementation must be designed specifically to be flexible and future proof.
The ARC robotic control system is extremely effective for bringing a robot / part to a specific and highly accurate location. However, it does not allow for accuracy along a path, so would not be appropriate for continuous path accuracy e.g. robotic milling or welding.
The large amount of metal in the cell (robots, parts) dramatically lowered the accuracy that was possible with the RFID positioning system. Rather than being able to track parts to a specific location, we could determine no better than if a part was inside or outside the cell. Active RFID tags may help mitigate this.
K-CMM technology was extremely effective, but subject to line-of-sight restrictions for large assemblies such as aerospace fuselages.
When integrating technologies and solutions from multiple equipment vendors, the challenge is almost always interoperability and standards compliance. The ARC system was comparatively simple to integrate and commission, but integration into the larger context of a manufacturing process with a SCADA and other physical devices was more of a challenge.
Proprietary Reconfigurable Flooring – A bespoke reconfigurable floor system is being installed that allows for fixtures and robots to be rapidly moved and securely and accurately fixed in place. A lack of a common or established standard in this area was noted.
The challenge is the sensorization and data acquisition on existing equipment that was not designed for it.
Data mining, real-time alerting and cognitive predictive meta-modelling are not a chimera.