Implementation techniques

Distributed control systems (DCS) differ in terms of complexity and applications. Smaller implementations may consist of a single programmable logic controller (PLC) connected to a computer in a remote office. Larger, more complex DCS installations are also PLC-based, but use special enclosures for subsystems that provide both I/O and communication functionalities. Fully distributed systems enable remote nodes to operate independently of the central control. These nodes can store all of the process data necessary to maintain operations in the event of a communications failure with a central facility.

Distributed control systems consist of a remote control panel, a communications medium, and a central control panel. They use process-control software and an input/output (I/O) database. Some suppliers refer to their remote control panels as remote terminal units (RTUs) or digital communication units (DCUs). Regardless of the name, remote control panels contain terminal blocks, I/O modules, a processor, and a communications interface. The communications medium in a distributed control system is a wired or wireless link that connects the remote control panel to a central control panel, SCADA system, or human machine interface (HMI). Specialized process-control software reads an I/O database with defined inputs and outputs.
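As a rough illustration of what such an I/O database holds, the sketch below maps a tag name to the remote panel and address that own the point and scales raw counts to engineering units. All field names, tag names, and ranges are hypothetical, not any vendor's schema.

```python
# Hypothetical sketch of a minimal I/O database entry; not a vendor schema.
from dataclasses import dataclass

@dataclass
class IOPoint:
    tag: str           # symbolic name used by the control software
    node: str          # remote control panel (RTU/DCU) that owns the point
    address: int       # terminal-block / module address on that node
    direction: str     # "input" or "output"
    units: str         # engineering units after scaling
    raw_min: int = 0       # raw counts from the I/O module
    raw_max: int = 65535
    eu_min: float = 0.0    # engineering-unit range
    eu_max: float = 100.0

    def to_engineering(self, raw: int) -> float:
        """Scale a raw A/D count to engineering units."""
        span = (raw - self.raw_min) / (self.raw_max - self.raw_min)
        return self.eu_min + span * (self.eu_max - self.eu_min)

# Example: a temperature input wired to remote panel "RTU-01", address 12.
tt101 = IOPoint(tag="TT-101", node="RTU-01", address=12,
                direction="input", units="degC", eu_min=0.0, eu_max=150.0)
print(tt101.to_engineering(32768))  # roughly mid-range, about 75 degC
```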

A major problem with distributed systems is the failure to use standards properly during system development. A common hardware and software interface interconnecting the elements of the system should be specified. Defining a common interface standard benefits both the decomposition of the system control problem and the integration of its components. Distributed systems do not require common interfaces, but the benefits of such an implementation should be evident. The use of Open Systems Interconnection (OSI) standards is preferable because it broadens participation in the development of the interface beyond an individual engineering project. In other words, any unique or proprietary solution to a problem has limited support because of resource limitations. OSI increases the availability of supporting hardware and software and leads to a better understood, more robust, and more reliable system.

What are some of the technical challenges in distributed control systems? The following is a partial list.

(1) Partitioning, synchronization, and load balancing

For certain applications, the optimal partitioning of the architecture into distributed nodes may be difficult to determine. The need for synchronization between distributed control programs influences the degree of distribution. Once a set of distributed nodes is identified, the control program needs to be partitioned. Program partitioning should consider both static and dynamic balancing of the processing load across the distributed nodes.
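One very simple way to think about static load balancing is a greedy assignment of control tasks to nodes by estimated processing cost. The sketch below is illustrative only; the task names, costs, and node names are assumptions, and a real system would also account for synchronization and I/O locality.

```python
# Illustrative sketch: greedy static partitioning of control tasks onto nodes
# by estimated processing load. Task names and loads are hypothetical.
from typing import Dict, List

def partition_tasks(tasks: Dict[str, float], nodes: List[str]) -> Dict[str, List[str]]:
    """Assign each task to the currently least-loaded node (greedy heuristic)."""
    load = {n: 0.0 for n in nodes}
    assignment: Dict[str, List[str]] = {n: [] for n in nodes}
    # Place the heaviest tasks first so the greedy choice balances better.
    for task, cost in sorted(tasks.items(), key=lambda kv: kv[1], reverse=True):
        target = min(load, key=load.get)
        assignment[target].append(task)
        load[target] += cost
    return assignment

# Example: three control loops and a logging task spread over two nodes.
print(partition_tasks({"loop_A": 3.0, "loop_B": 2.5, "loop_C": 1.0, "logger": 0.5},
                      ["node1", "node2"]))
```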

(2) Monitoring and diagnostics

In a centralized control system, there is a central repository of sensor and actuator data, so it is easy to construct the state of the process. Process monitoring and failure diagnosis are often performed by observing the temporal evolution of the process state. Monitoring and diagnosing process failures in distributed control systems, however, requires new methods for aggregating the system state from distributed data.
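A minimal sketch of such aggregation is shown below: per-node snapshots carrying a timestamp and a set of values are merged into one view, and nodes whose data is too old are flagged so stale values are not used for diagnosis. The field names and the freshness limit are assumptions for illustration.

```python
# Minimal sketch (hypothetical field names): aggregate a global process state
# from per-node snapshots, flagging nodes whose data is stale.
import time
from typing import Any, Dict

STALE_AFTER_S = 2.0  # assumed freshness limit for diagnostic purposes

def aggregate_state(snapshots: Dict[str, Dict[str, Any]], now: float) -> Dict[str, Any]:
    """Merge per-node {'timestamp': t, 'values': {...}} snapshots into one view."""
    merged, stale = {}, []
    for node, snap in snapshots.items():
        if now - snap["timestamp"] > STALE_AFTER_S:
            stale.append(node)          # do not trust old data for diagnosis
            continue
        merged.update(snap["values"])
    return {"values": merged, "stale_nodes": stale}

now = time.time()
snapshots = {
    "node1": {"timestamp": now - 0.3, "values": {"TT-101": 74.8, "PT-102": 3.1}},
    "node2": {"timestamp": now - 5.0, "values": {"FT-201": 12.7}},  # stale
}
print(aggregate_state(snapshots, now))
```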

(3) Fault-tolerance and automatic configuration

The system must be architected for distributed fault-tolerant operation. If a distributed node fails, the system should remain operational, albeit with reduced performance. Such systems will need real-time distributed operating systems and communications technology that allow control nodes to be added and removed without shutting the system down.
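A common ingredient of such behavior is heartbeat-based failure detection followed by reassignment of the failed node's work. The sketch below is a hedged illustration under assumed names and timeouts, not a description of any particular real-time operating system or product mechanism.

```python
# Hedged sketch: heartbeat-based detection of a failed node and reassignment
# of its control loops to surviving nodes. Names and timeouts are assumptions.
import time
from typing import Dict, List

HEARTBEAT_TIMEOUT_S = 1.0

def find_failed(heartbeats: Dict[str, float], now: float) -> List[str]:
    """A node is presumed failed if it has not sent a heartbeat recently."""
    return [n for n, t in heartbeats.items() if now - t > HEARTBEAT_TIMEOUT_S]

def reassign(loops_by_node: Dict[str, List[str]], failed: List[str]) -> Dict[str, List[str]]:
    """Move loops owned by failed nodes to the surviving node with the fewest loops."""
    survivors = {n: list(l) for n, l in loops_by_node.items() if n not in failed}
    orphans = [loop for n in failed for loop in loops_by_node.get(n, [])]
    for loop in orphans:
        target = min(survivors, key=lambda n: len(survivors[n]))
        survivors[target].append(loop)
    return survivors

now = time.time()
heartbeats = {"node1": now - 0.2, "node2": now - 3.0, "node3": now - 0.1}
loops = {"node1": ["loop_A"], "node2": ["loop_B", "loop_C"], "node3": ["loop_D"]}
failed = find_failed(heartbeats, now)
print(failed, reassign(loops, failed))
```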

Several promising new technologies can provide innovative solutions to the above challenges. The application of graphical programming, human machine interface, and simulation technology can simplify the programming, debugging, and configuration of distributed control systems. For discrete-event systems, methods from Discrete-Event Dynamic Systems (DEDS) theory can be applied to partition and verify the control code and to construct observers for diagnostics. Autonomous-agent technology can be applied to automatic reconfiguration and to negotiation between nodes for load sharing. Distributed real-time operating systems will provide the services necessary for implementing the distributed control architecture.

(4) Comprehensive redundancy solutions

To achieve a high level of reliability, many DCS are designed with built-in redundancy at all control levels, from individual sensors up to enterprise-scale servers. Redundancy features for most components of a distributed control system are provided automatically, with no additional programming required. A comprehensive solution for redundancy in distributed control systems may include the following features.

(a) Redundancy solution for sensors

The reliability assurance system, which can be a subsystem of some controllers, makes it possible to monitor the quality of signals acquired from sensors and to provide redundancy. If communication breaks with sensors equipped with digital interfaces, hardware invalidation is declared for all signals received from them. Flags for hardware and software invalidation are transmitted in the trace mode channels along with the measured value, as channel attributes, and they can be used in algorithms that users can tune flexibly. There are no limitations whatsoever on redundancy of sensors or groups of sensors (for example, I/O cards), so dual- and triple-redundancy schemes are easily built. Trace mode may therefore be used to build control systems that monitor sensor signal quality in real time and provide redundancy features that increase overall system reliability.
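The sketch below illustrates the general idea of a channel carrying a measured value plus hardware and software validity flags, with a simple user-tunable rule that averages whichever redundant sensors are still valid. It is illustrative only and is not the actual trace mode API or data model.

```python
# Illustration only: a "channel" carrying a value plus hardware/software
# validity flags, and a simple redundancy rule over redundant sensors.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Channel:
    value: float
    hw_valid: bool   # cleared on loss of communication with the sensor
    sw_valid: bool   # cleared by range/quality checks in software

def select_redundant(channels: List[Channel]) -> Optional[float]:
    """Return the mean of all valid redundant sensors, or None if all failed."""
    good = [c.value for c in channels if c.hw_valid and c.sw_valid]
    return sum(good) / len(good) if good else None

# Dual-redundant temperature sensors; the second has lost communication.
print(select_redundant([Channel(74.9, True, True), Channel(75.3, False, True)]))
```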

(b) Redundancy solution for controllers

This type of redundancy is used, as a rule, to ensure the reliability of control systems in hazardous processes. Controller redundancy algorithms can be flexibly adjusted by the user and tailored to the requirements of a particular control system.

By default, the following hot-redundancy mechanisms are implemented in trace mode: a channel database is automatically built for the standby controller; data flows are switched over to the standby controller in real time; and real-time data are synchronized between the main and standby controllers.
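A minimal hot-standby sketch of this pattern is shown below, assuming a primary controller that pushes its channel data to a standby each cycle and a switch-over rule that selects the standby when the primary goes silent. The class names, fields, and timeout are assumptions for illustration, not a vendor mechanism.

```python
# Minimal hot-standby sketch (assumed names and timeout); not a vendor API.
import time

class Controller:
    def __init__(self, name: str):
        self.name = name
        self.channels = {}        # synchronized real-time data
        self.last_update = 0.0

    def sync_from(self, primary: "Controller") -> None:
        """Copy the primary's channel database and note when it was received."""
        self.channels = dict(primary.channels)
        self.last_update = time.time()

def active_controller(primary: Controller, standby: Controller,
                      timeout_s: float = 0.5) -> Controller:
    """Switch data flows to the standby if the primary has gone silent."""
    if time.time() - primary.last_update > timeout_s:
        return standby
    return primary

main, backup = Controller("main"), Controller("standby")
main.channels = {"TT-101": 74.9}
main.last_update = time.time()
backup.sync_from(main)
print(active_controller(main, backup).name)   # "main" while the primary is alive
```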

Dual or triple redundancy for controllers does not exclude duplication of sensors; sensor redundancy may be provided for every signal. The user can determine whether each of the dual-redundant controllers receives data for a given process parameter from its own sensor, or whether a single sensor serves as the source of information for both controllers.

The reliability assurance system also includes watchdog timer support, which reboots controllers and industrial PCs automatically in the event of a system halt.
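The general watchdog pattern is that the control loop must periodically "kick" a countdown timer; if it fails to do so, a recovery action such as a reboot is triggered. The software analogue below is a hypothetical sketch of that pattern, not the hardware watchdog of any particular controller.

```python
# Hypothetical software analogue of a watchdog timer: the control loop must
# "kick" the watchdog periodically, otherwise a recovery action is triggered.
import threading

class Watchdog:
    def __init__(self, timeout_s: float, on_timeout):
        self.timeout_s = timeout_s
        self.on_timeout = on_timeout       # e.g. reboot the controller or PC
        self._timer = None

    def kick(self) -> None:
        """Called from the main control loop; restarts the countdown."""
        if self._timer:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout_s, self.on_timeout)
        self._timer.daemon = True
        self._timer.start()

wd = Watchdog(timeout_s=1.0, on_timeout=lambda: print("halt detected, rebooting"))
wd.kick()   # each healthy control cycle would call this before the timeout expires
```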

If another PLC programming system is used instead of micro trace mode, control system reliability is ensured as follows: a hardware validation flag is generated at the human machine interface on the PC, indicating whether communication with the PLC is available. If the PLC or server supports signal quality control, the communication quality flag may be fed into a separate trace mode channel; this channel's value is then taken into account by trace mode when generating the hardware validation/invalidation flag. Such a technique helps in developing template algorithms for redundancy of signals (or groups of signals) in the PLC or in fail-proof trace mode servers.

(c) Redundancy solution for connection buses

If a system is connected through two bus interfaces (gateways) to a bus (line) with either one or two masters, this redundancy concept does not allow for a redundant bus structure. The switch-over of the master module is not coupled to the slave module, so the slave must be configured separately for both masters; roughly double the engineering effort may therefore be required for only a partial improvement in availability.

A twin bus structure, on the other hand, provides system redundancy. In this concept two masters communicate over separate bus structures with a slave that also features two gateways. For a long time no PROFIBUS standard existed for implementing system redundancy, and the various manufacturer-specific redundancy concepts are sometimes impossible to implement, or require an inordinate amount of engineering effort, in the control system. This immediately raises the question of whether applications of this type can be maintained in the control system. Users usually prefer to do without a redundant PROFIBUS because the control system software cannot be maintained in such configurations.

(d) Redundancy solution for the I/O interface

Network path redundancy is accomplished by connecting the modules in a ring topology, which provides an alternative cable path among the modules should a physical break occur in the ring. I/O point redundancy can be achieved by adding modules to the network, together with an operating algorithm that ensures correct behavior. For example, three input modules may measure the same temperature, and the reported temperature is taken as their average provided all three are within acceptable limits; when any one reading differs from the other two by more than a predetermined amount, the two similar readings are believed and a diagnostic is flagged in the system, as sketched below.
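The following is a small sketch of that 2-out-of-3 voting rule; the deviation limit is an assumed tuning parameter, and a real implementation would add range checks and handling for the case where all three channels disagree.

```python
# Sketch of the 2-out-of-3 voting rule described above; the deviation limit
# (max_dev) is an assumed tuning parameter.
from typing import List, Tuple

def vote_2oo3(readings: List[float], max_dev: float = 2.0) -> Tuple[float, List[int]]:
    """Average the agreeing readings; flag the index of any reading that disagrees."""
    flagged = []
    for i, x in enumerate(readings):
        others = [r for j, r in enumerate(readings) if j != i]
        if all(abs(x - o) > max_dev for o in others):
            flagged.append(i)   # this module disagrees with both of the others
    good = [r for i, r in enumerate(readings) if i not in flagged]
    # If every channel disagrees with the others, no trusted value exists.
    value = sum(good) / len(good) if good else float("nan")
    return value, flagged

print(vote_2oo3([74.8, 75.1, 75.0]))   # all agree -> average of the three
print(vote_2oo3([74.8, 75.1, 91.6]))   # third deviates -> averaged out and flagged
```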
