Flexibility in manufacturing means the ability to deal with slightly or greatly mixed parts, to allow variation in parts assembly and in process sequence, to change the production volume, and to change the design of the product being manufactured.
A lead time is the latency between the initiation and completion of a process; for example, the lead time between the placement of an order and the delivery of a new car from a manufacturer. (from https://en.wikipedia.org/wiki/Lead_time)
In business, engineering, and manufacturing, quality has a pragmatic interpretation as the non-inferiority or superiority of something; it's also defined as being suitable for its intended purpose (fitness for purpose) while satisfying customer expectations. (from https://en.wikipedia.org/wiki/Quality_(business))
Quality assurance (QA) is a way of preventing mistakes and defects in manufactured products and avoiding problems when delivering solutions or services to customers; ISO 9000 defines it as "part of quality management focused on providing confidence that quality requirements will be fulfilled". This defect prevention in quality assurance differs subtly from defect detection and rejection in quality control, and has been referred to as a shift left as it focuses on quality earlier in the process, i.e. to the left of a linear process diagram reading left to right. (from https://en.wikipedia.org/wiki/Quality_assurance)
Productivity describes various measures of the efficiency of production. A productivity measure is expressed as the ratio of output to inputs used in a production process, i.e. output per unit of input. Productivity is a crucial factor in production performance of firms and nations. (from https://en.wikipedia.org/wiki/Productivity)
In systems engineering, dependability is a measure of a system's availability, reliability, maintainability, and maintenance support performance, and, in some cases, other characteristics such as durability, safety and security. In software engineering, dependability is the ability to provide services that can defensibly be trusted within a time period. This may also encompass mechanisms designed to increase and maintain the dependability of a system or software. (from https://en.wikipedia.org/wiki/Dependability)
Business development entails tasks and processes to develop and implement growth opportunities within and between organizations. It is a subset of the fields of business, commerce and organizational theory. Business development is the creation of long-term value for an organization from customers, markets, and relationships. (from https://en.wikipedia.org/wiki/Business_development)
Occupational safety and health (OSH), also commonly referred to as occupational health and safety (OHS), occupational health or workplace health and safety (WHS), is a multidisciplinary field concerned with the safety, health, and welfare of people at work. (from https://en.wikipedia.org/wiki/Occupational_safety_and_health)
Material efficiency is a description or metric which expresses the degree to which raw materials are consumed, incorporated, or wasted, as compared to previous measures in construction/manufacturing projects or physical processes. Making a usable item out of thinner stock than a prior version increases the material efficiency of the manufacturing process. (from https://en.wikipedia.org/wiki/Material_efficiency)
Waste minimisation is a set of processes and practices intended to reduce the amount of waste produced. By reducing or eliminating the generation of harmful and persistent wastes, waste minimisation supports efforts to promote a more sustainable society. Waste minimisation involves redesigning products and processes and/or changing societal patterns of consumption and production. (from https://en.wikipedia.org/wiki/Waste_minimisation)
The efficiency and sustainability of manufacturing, for both current and future products, is still very much determined by the processes that shape and assemble the components of these products.
Innovative products and advanced materials (including nano-materials) are emerging but have not yet developed to their full advantage, since robust manufacturing methods to deliver these products and materials at large scale are not yet available. Research is needed to ensure that novel manufacturing processes can efficiently exploit the potential of novel products for a wide range of applications.
Integration of non-conventional technologies (e.g. laser, ultrasonic) towards the development of new multifunctional manufacturing processes (including in-process concepts: inspection, thermal treatment, stress relieving, machining, joining).
Mechatronics, which is also called mechatronic engineering, is a multidisciplinary branch of engineering that focuses on the engineering of both electrical and mechanical systems, and also includes a combination of robotics, electronics, computer, telecommunications, systems, control, and product engineering. (From https://en.wikipedia.org/wiki/Mechatronics)
Control technologies will further exploit the increasing computational power and intelligence in order to meet the demands of increased speed and precision in manufacturing. Advanced control strategies will allow the use of lighter actuators and structural elements to obtain very rigid and accurate solutions, replacing slower and more energy-intensive approaches. Learning controllers adapt the behaviour of systems to changing environments or system degradation, taking into account constraints and considering alternatives, relying on robust industrial real-time communication technologies, system modelling approaches and distributed intelligence architectures.
Continuous monitoring of the condition and performance of the manufacturing system at component and machine level enables sustainable and competitive manufacturing, also by introducing autonomous diagnosis capabilities and context-awareness. Detecting, measuring and monitoring variables, events and situations will increase the performance and reliability of manufacturing systems. This involves advanced metrology, calibration and sensing, signal processing and model-based virtual sensing for a wide range of applications, e.g. event pattern detection, diagnostics, anomaly detection, prognostics and predictive maintenance.
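As an illustration of the anomaly-detection idea mentioned above, the following minimal Python sketch flags outliers in a stream of machine sensor readings using a rolling z-score. The vibration signal, window size and threshold are illustrative assumptions, not part of any specific system described here.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=50, threshold=3.0):
    """Flag readings that deviate more than `threshold` standard
    deviations from the rolling statistics of the last `window`
    samples. Returns a list of (index, value) pairs."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(readings):
        if len(history) >= 10:  # wait for a minimal baseline
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append((i, value))
        history.append(value)
    return anomalies

# Example: spindle vibration levels with one injected fault spike
vibration = [0.42 + 0.01 * (i % 7) for i in range(200)]
vibration[150] = 2.5  # simulated bearing fault
print(detect_anomalies(vibration))
```

In practice such a detector would feed a diagnostics or predictive-maintenance pipeline rather than print to the console.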
Intelligent components enable the deployment of safe, energy-efficient, accurate and flexible or reconfigurable products and production systems. This includes the introduction of smart actuators and the use of advanced end-effectors composed of passive and active materials. Energy technologies are gaining importance, such as (super)capacitors, pneumatic storage devices, batteries and energy harvesting technologies.
Production equipment does not yet take full advantage of the benefits that new and advanced materials offer, and factories of the future will need more advanced equipment to meet the requirements for energy efficiency and environmental targets and to meet new demands for a connected world. The future will therefore see modern, lightweight, long-lasting/flexible and smart equipment able to produce current and future products for existing and new markets. There will be a step change in the construction of such equipment, leading to a sustainable manufacturing base able to deliver high added value products and customised production. Increased smartness in the manufacturing equipment also enables a systems approach with machines able to learn from each other and impacting on the human-machine interface.
Smarter equipment and manufacturing systems will feature self-diagnosis (temperature, vibrations, noise) and embedded sensing, memory or active architecture, with functional materials allowing them to adjust work processes and operations to variances in structure, shape and material composition (right-first-time manufacture). This inherent ‘smartness’ enables the capture of machine data for communication between machines (M2M), at factory level and through supply chains, supporting a systems approach to manufacturing and meeting customer demand.
New equipment components taking advantage of new designs and advanced materials (e.g. gears and transmissions providing longer lifetime of equipment, active surfaces that can embed and release lubricant when needed (higher pressures or temperatures)).
The Internet of Things (IoT) is a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers (UIDs) and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction. (from https://en.wikipedia.org/wiki/Internet_of_things)
Advanced machine interaction with humans through the ubiquity of mobile devices will enable users to receive relevant production and enterprise-specific information regardless of their geographical location, tailored to the context and to the skills/responsibilities they hold. Interactions with ICT infrastructures and equipment will be natural-language-like.
Data acquisition is the process of sampling signals that measure real world physical conditions and converting the resulting samples into digital numeric values that can be manipulated by a computer. Data acquisition systems, abbreviated by the acronyms DAS or DAQ, typically convert analog waveforms into digital values for processing. The components of data acquisition systems include:
Sensors, to convert physical parameters to electrical signals.
Signal conditioning circuitry, to convert sensor signals into a form that can be converted to digital values.
Analog-to-digital converters, to convert conditioned sensor signals to digital values.
Data acquisition applications are usually controlled by software programs developed using various general-purpose programming languages.
In summary, data acquisition is in itself a vast group of protocols, technologies, sensors, hardware and software.
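To make the acquisition chain above concrete, here is a minimal Python sketch of the sampling and analog-to-digital conversion steps, simulated in software; the input range, resolution and sine-wave test signal are illustrative assumptions.

```python
import math

def adc_convert(voltage, v_min=0.0, v_max=5.0, bits=12):
    """Quantise an analog voltage into an n-bit digital code,
    clamping to the converter's input range."""
    levels = 2 ** bits - 1
    clamped = max(v_min, min(v_max, voltage))
    return round((clamped - v_min) / (v_max - v_min) * levels)

# Simulate sampling a 1 Hz sine wave (e.g. a conditioned sensor
# signal) at 100 Hz for one period and converting each sample.
sample_rate = 100
samples = [
    adc_convert(2.5 + 2.0 * math.sin(2 * math.pi * t / sample_rate))
    for t in range(sample_rate)
]
print(samples[:10])  # first ten 12-bit codes
```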
Data storage is the recording (storing) of information (data) in a storage medium. DNA and RNA, handwriting, phonographic recording, magnetic tape, and optical discs are all examples of storage media. (from https://en.wikipedia.org/wiki/Data_storage)
Dataspaces are an abstraction in data management that aims to overcome some of the problems encountered in data integration systems. The aim is to reduce the effort required to set up a data integration system by relying on existing matching and mapping generation techniques, and to improve the system in "pay-as-you-go" fashion as it is used. (From https://en.wikipedia.org/wiki/Dataspaces)
Cloud computing can be deployed as a private cloud, a public cloud or a hybrid cloud.
Digital Manufacturing Platforms can run on IaaS, PaaS or SaaS.
Consideration needs to be given to security measures in the cloud (Kubernetes, container security) and to identity & access management, and the security measures offered by the respective cloud service providers need to be carefully assessed.
In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and animals. Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term "artificial intelligence" is used to describe machines that mimic "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving" (from https://en.wikipedia.org/wiki/Artificial_intelligence)
Fuzzy logic is a form of many-valued logic in which the truth values of variables may be any real number between 0 and 1 inclusive. It is employed to handle the concept of partial truth, where the truth value may range between completely true and completely false (from https://en.wikipedia.org/wiki/Fuzzy_logic)
Artificial neural networks (ANN) or connectionist systems are computing systems vaguely inspired by the biological neural networks and astrocytes that constitute animal brains. The neural network itself is not an algorithm, but rather a framework for many different machine learning algorithms to work together and process complex data inputs. Such systems "learn" to perform tasks by considering examples, generally without being programmed with any task-specific rules. (from https://en.wikipedia.org/wiki/Artificial_neural_network)
In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on bio-inspired operators such as mutation, crossover and selection. (https://en.wikipedia.org/wiki/Genetic_algorithm)
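As a concrete illustration of the bio-inspired operators named above (selection, crossover, mutation), the following self-contained Python sketch runs a minimal genetic algorithm on a toy problem; all parameters are illustrative.

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=100,
                      mutation_rate=0.01):
    """Minimal genetic algorithm over fixed-length bit strings, using
    tournament selection, one-point crossover and bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = random.randint(1, length - 1)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]                    # bit-flip mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Toy example: maximise the number of 1-bits ("OneMax")
best = genetic_algorithm(fitness=sum)
print(best, sum(best))
```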
Simulation (often referred to as digital twins) is the imitation of the operation of a real-world process or system. The act of simulating something first requires that a model be developed; this model represents the key characteristics, behaviors and functions of the selected physical or abstract system or process. The model represents the system itself, whereas the simulation represents the operation of the system over time. (from https://en.wikipedia.org/wiki/Simulation)
The Orion Context Broker Generic Enabler is the core and mandatory component of any “Powered by FIWARE” platform or solution. It makes it possible to manage context information in a highly decentralized and large-scale manner. It provides the FIWARE NGSIv2 API, a simple yet powerful RESTful API for performing updates, queries and subscriptions to changes in context information.
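As a brief illustration, the following Python sketch exercises the NGSIv2 API against a local Orion instance; the broker URL and the entity id/type are assumptions made for the example, while the endpoints are those defined by NGSIv2.

```python
import requests

ORION = "http://localhost:1026"  # assumed local Orion Context Broker

# Create a context entity (NGSIv2 POST /v2/entities)
entity = {
    "id": "Machine1",
    "type": "MillingMachine",
    "temperature": {"value": 63.5, "type": "Float"},
}
r = requests.post(f"{ORION}/v2/entities", json=entity)
print(r.status_code)  # 201 Created on success

# Query it back (GET /v2/entities/{id})
r = requests.get(f"{ORION}/v2/entities/Machine1",
                 params={"type": "MillingMachine"})
print(r.json())

# Update an attribute (PATCH /v2/entities/{id}/attrs)
r = requests.patch(f"{ORION}/v2/entities/Machine1/attrs",
                   json={"temperature": {"value": 65.1, "type": "Float"}})
print(r.status_code)  # 204 No Content on success
```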
The Cygnus Generic Enabler brings the means for managing the history of context that is created as a stream of data which can be injected into multiple data sinks, including some popular databases like PostgreSQL, MySQL, MongoDB or AWS DynamoDB as well as BigData platforms like Hadoop, Storm, Spark or Flink.
The “servitization wave” of manufacturing has already spread to the advanced countries, and many leading high-capital-investment sectors (e.g. aerospace and automotive) are already competing in the international markets by providing their customers with a composition of services for product operation (e.g. maintenance, reliability, upgrades) and end-of-life use (e.g. re-manufacturing, recycling, disposal). SMEs especially are trying to compete in the international markets with their niche solutions, adding innovative services to their value propositions. Such innovative business models are based on a dynamic network of companies, continuously moving and changing in order to afford more and more complex compositions of services. In such a context, there is a strong need to create distributed, adaptive and interoperable virtual enterprise environments supporting these ongoing processes. In order to do so, new tools must be provided for enabling and fostering the dynamic composition of enterprise networks. In particular, SMEs call for tools and instruments which follow them in their continuous re-shaping process, enabling collaboration and communication among the different actors of the product-service value chains. New IPR methods are also needed.
Rising transport costs, the need for higher efficiency and productivity, customer demand for greener products, the higher instability of raw material and energy prices, and the shortening of production lead times will push for a more critical assessment of delocalisation strategies towards low-cost countries. Service-led personalised products will require a new paradigm for the re-industrialisation of western countries (Globalisation 2.0), moving manufacturing of selected products back (re-shoring).
Finally, innovation should become a business model in itself and a continuously running business process (the factory innovation): increasing competitiveness through the design of a new product requires the development of a company strategy where product and process innovation is seen as a permanent, widely distributed, multi-level, socially oriented and user-centric activity. Collaboration among companies from different sectors to exploit multi-disciplinary cross-fertilisation is also envisaged. New tools, methodologies and approaches for user experience intelligence (i.e. social networks, crowdsourcing, qualitative and quantitative social science methods to generate insights, models and demonstrations, etc.) need to be addressed and explored.
According to the new paradigm of sustainability, the importance of the user is increasing. The user is at the same time a customer, a citizen and a worker. The well-being of the user could therefore become a winning strategy for B2B as well as B2C companies. More detailed behaviour modelling can help the development of innovative solutions aiming at user comfort, safety, performance and style; this requires a new competitive focus for the development of these innovative solutions and new business models to support a quick and dynamic response to market changes.
As products are today virtually designed and tested before being engineered for production, new business models also need tools that support the company in designing and testing them before they are implemented through products, services and manufacturing processes. The complexity of these tools is higher than that of tools for product development, due to the need for holistic modelling of products and processes.
The European Factories of the Future are expected to provide global manufacturing competitiveness, but also to create a large number of work opportunities for the European population. Future factory workers are therefore key resources for industrial competitiveness as well as important consumers. However, as previously stated, the changing demographics and high skill requirements faced by European industry pose new challenges. Workers with high knowledge and skills (“knowledge workers”) will be scarce resources. Research efforts within Horizon 2020 must address ways to increase the number of people available for, and interested in, manufacturing tasks. This includes the following important aspects of human resources:
- New technology-based approaches to accommodate age-related limitations, through ICT and automation
- New technical, educational, and organisational ways to increase the attractiveness of factory work to the young potential workforce, the existing workforce, the potential immigrant workforce, and the older workforce
- New approaches to skill and competence development, as well as skill and knowledge management, to increase competitiveness and be part of the global knowledge society
- New ways to organise and compensate factory knowledge workers
- New factory human-centric work environments based on safety and comfort
- Ways to integrate future factory work in global and local societal agendas and social patterns
Authorisation is the process of allowing an entity (a human, system or device) to access information systems or facilities where information and processing capabilities are stored. More practically, in an industrial setting for Digital Manufacturing Platforms, an authorised person can get access to an operational machine in order to update it or investigate its contents. Unauthorised access could be someone who has been able to access the network from the outside, performing actions that have not been authorised and cannot be justified.
Authentication is the means of verifying the identity of an entity, and thereby the authorisation rules that apply to it, using a set of instruments. In the case of Digital Manufacturing Platforms these are instruments such as a user name and password, complemented by a second factor such as a physical token or a mobile phone, that can authenticate the person accessing the platform. The physical token connects the person to something he has, the password to something he knows.
A third A in the AAA architecture is related to Access. Once authorized and authenticated, access can be granted to the location, system, application and/or information. Access control levels can thus be set up on different layers. These can be physical (access to the country, the plant, the building, the room and the environment where the system is located) and logical (using authentication technologies). In Digital Manufacturing Platforms this means the systems could be accessible only on premise, in the factory, or for instance in the (private or public) cloud. As a result, different access mechanisms need to be considered, depending on the risk and the intended security levels and controls.
APIs (and REST APIs) need to be carefully protected through mechanisms limiting access on the basis of identity, and through managed and controlled authorization and authentication mechanisms. Usually certificates and IP addresses are used to restrict access to APIs, but a more granular approach is advisable from a security architecture perspective. Other integration protocols used in Digital Manufacturing Platforms are JSON-based interfaces (for their near-real-time capabilities) and MQ (message bus architectures). The latter is less secure, since it provides a continuous stream of information being sent to a destination.
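The following minimal sketch shows one way such identity-based API protection can look, assuming a small Flask service and a hypothetical static bearer token; a production platform would validate tokens against an identity provider rather than a hard-coded set.

```python
from flask import Flask, request, abort

app = Flask(__name__)

# Hypothetical token store; in production, tokens would come from an
# identity provider and be validated cryptographically.
VALID_TOKENS = {"s3cr3t-token-for-line-3"}

@app.before_request
def require_bearer_token():
    """Reject any request without a valid Authorization header."""
    auth = request.headers.get("Authorization", "")
    if not (auth.startswith("Bearer ") and
            auth.removeprefix("Bearer ") in VALID_TOKENS):
        abort(401)

@app.route("/machines/<machine_id>/status")
def machine_status(machine_id):
    return {"machine": machine_id, "status": "running"}

if __name__ == "__main__":
    app.run()  # combine with TLS and IP restrictions in practice
```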
A Security Architecture is a conceptual design that addresses various aspects of security in a system and in the resulting application, set of applications and components that make up the system. It is used to support the design, development, implementation and operation of these systems, which can include Manufacturing Platforms. For Digital Manufacturing Platforms it addresses necessities and potential risks identified following potential scenarios or within a specific environment. It aims to present a comprehensive perspective of the various security concepts in the conceived OT and IT architecture, which includes networks, the systems and equipment connected to these networks, the communication protocols and operating systems being used, and the application development and operational processes, and it recommends the use of security measures through security controls.

Having a Security Architecture also helps both the design and the integration process, supports the identification of incidents and security monitoring, speeds up discussions with partners towards a level playing field and best practices, and is generally reproducible. Digital Manufacturing Platforms tend to bridge operational systems with information technology, such as the use of analytics, data collection, distribution and visualization, which can lead to automated actions by these systems on the basis of unattended and unsupervised decisions and control implementations. To avoid physical harm, collateral damage and other safety or cybersecurity issues, a Security Architecture supporting the Digital Manufacturing Platforms should allow developers and companies at least to consider the various aspects and challenges of security in an organized and comprehensible manner. Architectures can follow standards such as IEC 62443, ISO 27k or NIST 800-16, or any alternative scheme, but these need to be completed to cover both the digital and the operational platforms.
The W3C Web Ontology Language (OWL) is a Semantic Web language designed to represent rich and complex knowledge about things, groups of things, and relations between things. (from https://www.w3.org/OWL/)
The international standard IEC 61499, addressing the topic of function blocks for industrial process measurement and control systems, was initially published in 2005. The specification of IEC 61499 defines a generic model for distributed control systems and is based on the IEC 61131 standard. (see https://en.wikipedia.org/wiki/IEC_61499 and IEC 61499 - International Electrotechnical Commission)
MQTT (MQ Telemetry Transport) is an open OASIS and ISO standard (ISO/IEC PRF 20922) lightweight, publish-subscribe network protocol that transports messages between devices. (From https://en.wikipedia.org/wiki/MQTT)
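A minimal publish example, assuming the paho-mqtt Python client (1.x API) and a hypothetical broker address and topic:

```python
import json
import paho.mqtt.client as mqtt  # assumes the paho-mqtt package (1.x API)

client = mqtt.Client(client_id="press-42")
client.connect("broker.example.com", 1883)  # hypothetical broker address

# Publish a telemetry message on a hierarchical topic; QoS 1 asks the
# broker to acknowledge delivery at least once.
payload = json.dumps({"temperature": 63.5, "unit": "degC"})
client.publish("factory/hall2/press-42/telemetry", payload, qos=1)
client.disconnect()
```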
Universal Business Language (UBL) is an open library of standard electronic XML business documents for procurement and transportation such as purchase orders, invoices, transport logistics and waybills. UBL was developed by an OASIS Technical Committee with participation from a variety of industry data standards organizations. Version 2.1 was approved as an OASIS Standard in November 2013 and an ISO Standard (ISO/IEC 19845:2015) in December 2015.
SensorThings API is an Open Geospatial Consortium (OGC) standard providing an open and unified framework to interconnect IoT sensing devices, data, and applications over the Web. It is an open standard addressing the syntactic interoperability and semantic interoperability of the Internet of Things. It complements existing IoT networking protocols such as CoAP, MQTT, HTTP and 6LoWPAN. While those IoT networking protocols address the ability of different IoT systems to exchange information, OGC SensorThings API addresses the ability of different IoT systems to use and understand the exchanged information. As an OGC standard, SensorThings API also allows easy integration into existing Spatial Data Infrastructures or Geographic Information Systems.
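A short sketch of how a client might navigate a SensorThings service with plain HTTP requests; the service root URL and entity ids below are hypothetical, while the resource paths and query options are those defined by the standard.

```python
import requests

# Hypothetical SensorThings service root; the resource paths below
# (Things, Datastreams, Observations) are defined by the OGC standard.
BASE = "http://sensors.example.com/v1.0"

# List all Things (the IoT devices/objects the service knows about)
things = requests.get(f"{BASE}/Things").json()["value"]

# Follow the standard navigation link to a Thing's Datastreams, then
# fetch the most recent Observation in one of them.
ds = requests.get(f"{BASE}/Things(1)/Datastreams").json()["value"][0]
obs = requests.get(
    f"{BASE}/Datastreams({ds['@iot.id']})/Observations",
    params={"$orderby": "phenomenonTime desc", "$top": 1},
).json()["value"]
print(obs)
```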
In general, compliance means conforming to a rule, such as a specification, policy, standard or law. Regulatory compliance describes the goal that organizations aspire to achieve in their efforts to ensure that they are aware of and take steps to comply with relevant laws, policies, and regulations. (from https://en.wikipedia.org/wiki/Regulatory_compliance)
Here the term “business models” is used in a wide sense, complementing the technological and organisation aspects of digital platforms.
One proven tool for analysing and shaping business models is the “Business Model Canvas”. When trying to apply this tool to platforms, it appears that some elements apply to platform-based business models (e.g. the “value proposition”) and that tools such as the canvas can provide a first inspiration.
However, for digital platforms the traditional business-model view in the narrow sense falls short of describing the business and relationship aspects of platforms. In particular, the strict “partner” and “customer” view has to be replaced by an ecosystem perspective. In addition, this ecosystem can be highly dynamic, which means that platforms can move into new user groups and change their features, with the typical effects this brings. Another difference is the central role of data for platforms, meaning that data governance is one of the essential elements of the value proposition of platforms.
By bringing together actors from different sides, platforms are defined by their stakeholders. There are core stakeholders (target customers, core suppliers, value chain partners), but it should not be forgotten that there are also actors with an indirect or external interest in the activities of the platform (competitors, existing customers not addressed through the platform). A platform also defines the relationships and channels with the different user groups.
In order to be sustainable, the value proposition must be mirrored by a revenue stream, which is orchestrated by the platform. These revenue streams can be direct (pay-per-use, subscription, sales, etc.), but could also be indirect (increasing the price of products, increasing market share).
Platform as a Service (PaaS) or application platform as a Service (aPaaS) or platform base service is a category of cloud computing services that provides a platform allowing customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure typically associated with developing and launching an app. (from https://en.wikipedia.org/wiki/Platform_as_a_service)
Software as a service (SaaS /sæs/) is a software licensing and delivery model in which software is licensed on a subscription basis and is centrally hosted. It is sometimes referred to as "on-demand software". (from https://en.wikipedia.org/wiki/Software_as_a_service)
Pay-per-use or pay-per-duration-of-use implies that users are charged pro-rata of how much they used the service (in terms of consumed resources, computing power,... or in terms of the duration of the use of the service)
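A trivial sketch of the pro-rata charging rule just described, with illustrative rates and an assumed optional minimum fee:

```python
def pay_per_use_charge(units_used, unit_rate, minimum_fee=0.0):
    """Charge pro-rata for consumed resources (e.g. CPU-hours,
    API calls or hours of service use), with an optional floor."""
    return max(units_used * unit_rate, minimum_fee)

# Example: 3.5 hours of simulation service at 12 EUR/hour
print(pay_per_use_charge(3.5, 12.0))        # 42.0 EUR
# Example: 100 API calls at 0.002 EUR each, 1 EUR minimum invoice
print(pay_per_use_charge(100, 0.002, 1.0))  # 1.0 EUR
```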
At the core of all potential industrial use case scenarios of platforms are data. When formerly isolated data are shared, a new set of factors suddenly arises, both in terms of external factors and in terms of business/microeconomic implications. Therefore, at the core of every digital platform must be a legally, organizationally and commercially viable concept for data sharing/trading/exchange.
When shaping this model, the following questions must be answered:
What is the legal arrangement for data “ownership”? Can users classify their data, and is a staggered approach possible (closed, traded or open data)? What legal means does the platform use to ensure the confidentiality of data (trade secrets, the Database Directive)?
Transparency: Can users monitor/control the sharing of data with third parties? Are there “expiration dates” for data use?
Is the legal setting a fixed standard (“general conditions”) or is it a flexible, individual approach? Are model contracts available?
Are there sectorial regulatory requirements concerning data?
How far are portability and a change of platform possible?
Who is responsible in the case of breaches of confidentiality?
How is fairness/a level playing field between the platform and smaller players ensured?
Digital platforms will be successful if they provide a clear value proposition to the user groups involved. In general, digital platforms offer added value based upon three main mechanisms:
Reduction of transaction costs
Use of data integration for new services (mainly optimisation) and business models
Based upon these mechanisms, added value can be created from a variety of perspectives, such as the process perspective (what process or activity is optimised?) or the KPI perspective (what KPI is the focus of the optimisation?). This added value enables the financing of the digital processes through e.g. increased price margins, market shares or reduced costs.
Proprietary software is non-free computer software for which the software's publisher or another person retains intellectual property rights—usually copyright of the source code, but sometimes patent rights. (from https://en.wikipedia.org/wiki/Proprietary_software)
Open-source software (OSS) is a type of computer software whose source code is released under a license in which the copyright holder grants users the rights to study, change, and distribute the software to anyone and for any purpose. Open-source software may be developed in a collaborative public manner. According to scientists who studied it, open-source software is a prominent example of open collaboration. (from https://en.wikipedia.org/wiki/Open-source_software)
In the same way that software can be developed and commercialized using different business models according to the software ownership, digital platforms can be developed and commercialized using different business models according to the infrastructure ownership. Different infrastructure ownerships, and their business models (such as renting, pay-per-use, …), are identified in this chapter.
Physical and logical passwords should be considered from the overall taxonomy and as part of one of the Digital Pathways, as physical and logical access provisioning. Physical passwords here are types of authentication technologies and can be voice commands, fingerprints, or simple presence (by means of an electronic token that an operator carries). Logical passwords here are PIN codes, passphrases or even certificates or hash keys that support the specific levels of security. Both are considered mechanisms of access control for security in this pathway.
Access control is a key component of security and cybersecurity to any system, being it a physical (gates, doors, equipment, ...) or logical (application, service, activity, ...) one.
Under this heading, the purpose is to clarify that access control should be mandatory for every system being operated in a manufacturing environment. Access control levels can be very low, for instance by providing everybody on the factory floor access to an application. But at least it has then been considered that only people on the factory floor should get access, so that this physical constraint is taken into account. It also means that, from a risk perspective, unaccompanied visitors or subcontractors without oversight could get access to this system.
By considering access control as a fundamental security mechanism, based upon a risk approach, controls can be further built in, relating back to the types of users or moments of intervention. Least-access principles should be applied, so that access is only provided after deliberate consideration. For instance, the system can have a regular user (an operator), a floor manager or head of production (capable of overriding a decision by an operator), a service engineer (maintenance) and an administrator.
These roles should allow different levels of access to the systems and can be related to the specific risks associated with them, and to the overall risk consideration. Physical passwords can be built into the application as additional means to identify the specific roles.
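A minimal sketch of role-based access control for the roles just named; the permission names and the mapping are illustrative assumptions, and a real deployment would load them from an identity and access management system.

```python
# Illustrative role-to-permission mapping for the roles named above.
ROLE_PERMISSIONS = {
    "operator":         {"start_machine", "stop_machine"},
    "floor_manager":    {"start_machine", "stop_machine",
                         "override_decision"},
    "service_engineer": {"stop_machine", "update_firmware"},
    "administrator":    {"start_machine", "stop_machine",
                         "override_decision", "update_firmware",
                         "manage_users"},
}

def is_allowed(role: str, action: str) -> bool:
    """Least-access check: deny unless the role explicitly grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("floor_manager", "override_decision")
assert not is_allowed("operator", "update_firmware")
```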
As an example, to enhance the security of an application in a manufacturing environment from Level 1 to Level 3, administrator access will be needed to operate a specific machine or function, instead of simply pushing the button to power up a specific machine. This can range from the trivial, such as a sawing machine that can only be used by an operator qualified to use it, up to ensuring oversight when a maintenance engineer updates the machine via a USB token, so that no additional malware is left on the machine.
Malware is a broad term that describes a computer program (software) that was intentionally developed to cause damage to a computer system, mainly with the intention of financial gain, but more frequently to cause business interruptions, hold systems hostage or simply steal information.
Malware has existed for over two decades, specifically written to exploit vulnerabilities in computer systems for personal gain. Using it to break into someone else's system is a form of cybercrime. In most countries in the world it is not a crime to develop malware, only to use it against someone else.
Malware exists in many different forms. What used to be viruses, generally sent via email in the past, has transformed into specifically engineered pieces of software for specific purposes, the most infamous one to date being Stuxnet. For viruses, security software and firewalls have been equipped to detect and quarantine them before they can even be seen by the destination email address. But through phishing attacks (emails with a malicious hyperlink) or man-in-the-middle attacks (websites that have been compromised and redirect traffic), users are still being exposed to malware.
Malware can also enter by means of USB sticks or pieces of software that do not belong on an industrial control system or manufacturing system (games, apps, ...), which can sometimes contain malware or pieces of it.
Ransomware is a form of malware that typically starts encrypting data once it has been activated. To decrypt the data, a ransom has to be paid. Ransomware can be avoided by 1) frequently upgrading the underlying software to avoid exploitation of vulnerabilities, 2) isolating the industrial systems from office and other types of systems, and 3) restricting access to the systems by means of physical and logical limitations.
APTs (Advanced Persistent Threats) are usually a combination of multiple attacks and threats aimed at a specific target. APTs combine the detection of vulnerabilities with the exploitation of malware and ransomware. APTs are typically coordinated by nation-state actors or organized crime.
Digital Manufacturing Platforms should be concerned about the abuse of their platforms by malicious users, and should prevent by all means available man-in-the-middle attacks or similar attacks where redirects of the platform end up in the download of malware. By running the Digital Manufacturing Platforms in the cloud, additional security measures can be put in place, specifically monitoring the activities of specific containers for unexpected calls or actions. Manufacturing companies should further pay attention to the continuous protection of endpoint devices and to active monitoring of network traffic, on top of the detection of malicious activities.
Monitoring and logging is the process of recording the activities happening on an IT system, including OT systems operated via IT. Monitoring and logging typically occur at network level, where packets being sent over TCP/IP (the internet protocol) are captured at the edges, in the controlling entities (routers and gateways) or by an in-line device (such as a firewall, IDS or IPS). Network traffic records typically include origin and destination IP address, the type of application, and contents. Some of the traffic may be encrypted.
Network traffic can be captured via a monitoring port on network devices. This results in the recording of all events that have been instructed to be logged. During the monitoring phase, this near-real-time data can be evaluated and analyzed. On the basis of the traffic, patterns can be detected that allow an understanding of how applications (such as ransomware) arrive inside the organization, or of how confidential data might leave it.
Logging also happens at device level, allowing identification of the activities taking place on the device (types of applications being used and identities of people accessing the device). This makes it possible to link a user to a certain transaction, and allows better detection of data manipulation or data theft. With machine learning techniques, some behavioural actions on a network can be detected before the malicious act of theft or abuse takes place. On the basis of pattern recognition, actions and events used by criminals can be detected, indicating that an incident is taking place.
By utilising similar data from the outside, incidents happening in other locations, factories and companies can be recorded, and similar patterns (signatures) can be signalled amongst trusted partners. This allows for preventative instructions in the intrusion prevention systems, which will be able to block IP addresses, users and applications.
Finally, monitoring and logging are important for forensics. Once an incident has happened, the recorded sessions make it possible to understand what exactly happened, to collect evidence and to use it as a basis for future preventive actions.
In Digital Manufacturing Platforms, a logging facility should be enabled that records the manipulations and transactions that have happened inside the platform itself, as well as the access and identity of the persons who have been controlling the platform.
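A minimal sketch of such an audit-logging facility, using Python's standard logging module; the event fields and example values are illustrative assumptions.

```python
import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("platform.audit")
logging.basicConfig(level=logging.INFO)

def log_transaction(user_id, action, resource, outcome):
    """Record who did what, to which resource, and with what result,
    as a structured (JSON) audit event suitable for later forensics."""
    audit.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "action": action,
        "resource": resource,
        "outcome": outcome,
    }))

log_transaction("j.doe", "update_recipe", "press-42/recipe-7", "success")
```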
Penetration Testing (pentesting) is a term used by cybersecurity practitioners to describe the process of diligently assessing potential vulnerabilities in the information security infrastructure, including, in the case of manufacturing and industrial environments, the operational technology infrastructure. It typically uses a series of tools to automate the process, but also makes use of expert experience focusing on known tricks and vulnerabilities. The goal for the pentester is to detect and report the leaks, but not to exploit them. It is also referred to as ethical hacking, in the sense of not intentionally manipulating equipment or data, stealing data or leaving exploitable software behind.

Pentesting is the ultimate means to demonstrate the capabilities of the security infrastructure, as well as the way to identify shortcomings upfront. A pentesting report allows security managers to support their activities by indicating risks, threats and vulnerabilities and by indicating the need for a risk management process. Companies with a higher level of maturity will organize a systematic approach, allowing pentesting to take place periodically or following specific changes inside the infrastructure. This can also take place in the form of contests, for instance with red teams (the attackers) playing against defenders (the blue team), both utilizing their pentesting experience.

With a responsible disclosure policy, organizations and individuals can call upon the community of ethical hackers (white hats) to help identify vulnerabilities. These are reported, sometimes in return for a small bonus. Large hacking contests can be organized to test complete platforms. When vulnerabilities are found in technologies, including platforms which are being sold, they are reported as CVEs after a grace period of about 3 to 6 months following the report. For Digital Manufacturing Platforms, pentesting should also address the platform itself, by performing software testing and by testing the platform as it is put into an operational environment, since it uses web and internet technologies that make it susceptible to exploitation.
Transmission data protection describes the security applied to the communication of data.
This can be Transport Layer Security (TLS) when two or more systems communicate directly with each other over the internet, securing the communication itself by means of encryption and decryption at either end.
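A minimal sketch of a TLS-protected connection using Python's standard ssl module; the endpoint is hypothetical.

```python
import socket
import ssl

# The default context enables certificate verification and hostname
# checking, so the client can trust who it is talking to.
context = ssl.create_default_context()

host = "platform.example.com"  # hypothetical platform endpoint
with socket.create_connection((host, 443)) as raw:
    with context.wrap_socket(raw, server_hostname=host) as tls:
        print(tls.version())  # e.g. 'TLSv1.3'
        tls.sendall(b"GET / HTTP/1.1\r\nHost: platform.example.com\r\n\r\n")
        print(tls.recv(200))
```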
Other means include (other) VPN technologies, where an encryption layer is usually established between the devices and applications running VPN-type services.
Public operators such as internet service providers and mobile operators in most cases use encryption technologies to protect data transmission over the public network when providing specific business-to-business services. In 3G, 4G and the upcoming 5G mobile data provisioning, transmission data protection has been enabled.
However, operators and platform providers should assure themselves of which transmission data protection has been provided, or should require a security baseline for it. Additionally, digital platforms can start providing transmission security as part of the platform. This will be especially necessary when working with edge devices transmitting data and cloud platforms receiving it.
Transmission data protection should also be considered for machines and equipment on site or nearby. Many robot instructions and commands, for instance, are transmitted in clear text. Many technologies exist today to prevent this from happening, even at high speeds.
The transmission data itself should also be protected and prevented from leaking. The transmission data can also be used as a control protocol, checking the transmission for arrival and audit.
Following a risk analysis, and upon the choice of a risk framework and definition of security policies, a password policy can be derived.
The password policy is to be set up by organizations, both end-user organizations (manufacturers) and digital platform and system providers.
Password policies should at least include (a minimal validator is sketched after this list):
- strong passwords or passphrases
- regular password updates by users
- advice to use multi-factor authentication (an additional authentication device)
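A minimal sketch of how such a policy could be checked programmatically; the concrete rules (length 12, character classes) are illustrative assumptions, not a recommendation from this document.

```python
import re

def check_password(candidate: str) -> list[str]:
    """Return the list of policy rules the candidate password
    violates; an empty list means the password is acceptable."""
    violations = []
    if len(candidate) < 12:
        violations.append("shorter than 12 characters")
    if not re.search(r"[A-Z]", candidate):
        violations.append("no uppercase letter")
    if not re.search(r"[a-z]", candidate):
        violations.append("no lowercase letter")
    if not re.search(r"[0-9]", candidate):
        violations.append("no digit")
    if not re.search(r"[^A-Za-z0-9]", candidate):
        violations.append("no special character")
    return violations

print(check_password("Correct Horse Battery Staple 9!"))  # []
print(check_password("password1"))  # several violations
```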
Digital Platform providers should provide a mechanism for single sign-on or federated authentication, so that passwords are not stored in the platform itself but tokens from third-party suppliers are accepted instead.
Physical security refers to physical access control, borders, gates, identity verification, passport control, manned guarding services, video surveillance, biometrics and related components. Physical security also considers physical attacks such as terrorist and criminal attacks, and fire and water hazards.
Multi-factor authentication describes the necessity of using more than one factor as proof of identity. As an example, when a user logs on to a digital platform, the basic means of authentication are user name and password.
In addition to the password (single-factor authentication), the user can be asked for a physical token (RFID key, ID card, ...). This can also be a mobile phone, an authenticator app token, a SecurID or Digipass token, or biometric elements (fingerprint, facial recognition, ...).
In security terminology this relates to the concept of assuring someone's identity by something the user knows (a password) and something he/she has (a physical token). Additional layers can be built onto this concept in order to further improve and strengthen the security levels.
When someone's identity is proven at the front gate on the basis of an ID card, driver's licence or verifiable photo ID, this can be enhanced with a log in the system that the person has entered the premises. With his personal RFID token, he will be able to access his office. Meanwhile, video surveillance cameras might have identified him in the building. Finally, when logging on to his system on the network, he can be asked for an authentication code from his company mobile phone.
These additional levels of authentication harden the security and can be continuously expanded, depending on the security levels required.
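As an illustration of one such additional layer, the following self-contained Python sketch computes a time-based one-time password (TOTP, RFC 6238) of the kind produced by common authenticator apps; the shared secret is illustrative.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password from a
    base32-encoded shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the user's authenticator app share this secret once,
# then independently derive matching codes every 30 seconds.
shared_secret = "JBSWY3DPEHPK3PXP"  # illustrative base32 secret
print(totp(shared_secret))
```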
Security training and awareness entails awareness creation, security information sessions and materials, education, educational programs, certification of people and all related formats and programs designed to inform and support people in understanding cybersecurity.
Training & education
Security training programs need to be an integrated part of a security strategy and policy. Next to the definition of risks and the design of security policies describing how people should or should not get access to specific environments, the people operating these environments should be instructed properly.
Security training and education can be system- and operation-specific, but also needs to accompany the company- and plant-specific security guidelines.
Training and education should be a continuous activity, including repetition of elements of importance and strategic relevance.
Security education programs should be adapted to specific departments, or groups of people, depending on their levels of maturity, systems access and responsibilities.
Security education can be educational programs outside of the organizations, at specific dedicated educational organizations (private, high schools, universities, ... ) or within the organization itself. Some companies organize a one day educational course on cybersecurity, while others provide access to courses online.
These educational programs can be followed by assessments, and can lead to the provision of certificates of attendance or qualification.
Programs related to cybersecurity include CISSP (Certified Information Systems Security Professional), CISM (Certified Information Security Manager) and CISA (Certified Information Systems Auditor).
Other cybersecurity educational programs relate to specific components in the cybersecurity architecture, such as firewall, monitoring, or identity & access expertise.
Organizations can provide educational programs from within their internal organizations (own developments or licensed from educational organizations), or can develop a specific cybersecurity program dedicated to a specific application or service which has been developed.
Cybersecurity awareness programs are more informative than educational programs: typically less attention-demanding and less lengthy, but aimed at a specific set of rules or oriented towards a specific behaviour rather than knowledge transfer.
An awareness program can show that the company is concerned about cybersecurity and draw its employees' attention to how to handle incoming emails, how to watch out for suspicious behaviour, the means to detect that something is suspicious, and what NOT to do with it. It can illustrate the impact by means of a short movie, without going into detail on the whole architecture behind it.
Cyber incident response capability refers to the means of an organization to cope with a cyber incident. It is usually organized as a dedicated CSIRT (Computer Security Incident Response Team) or CERT (Computer Emergency Response Team) that has developed procedures for dealing with incidents (leakages, break-ins, attacks, ...) detected in the organization, and for taking the necessary measures to mitigate, prevent and respond. This dedicated team should be empowered to take control in order to prevent additional loss and to fight an attack as it happens. That means they are required to have a good understanding of the infrastructure and to have the necessary means to deflect attacks, increase security, limit access and collect forensic evidence during an incident. They should be in direct contact and interaction with the crisis management team. During normal operations they will support the organization's Security Operations Center (SOC) team, on site or remotely, in coping with day-to-day alarms, investigating threat levels and managing the investigation of minor incidents.