Time-triggered systems
The main alternative to event-triggered system architectures is the time-triggered architecture (see, for example, Kopetz, 1997). As with event-triggered architectures, time-triggered approaches are used both in desktop systems and in embedded systems.
To understand the difference between the two approaches, consider that a hospital doctor must look after the needs of ten seriously ill patients overnight, with the support of some nursing staff. The doctor might consider two ways of performing this task:
● The doctor might arrange for one of the nursing staff to waken her, if there is a significant problem with one of the patients. This is the ‘event-triggered’ solution.
● The doctor might set her alarm clock to ring every hour. When the alarm goes off, she will get up and visit each of the patients, in turn, to check that they are well and, if necessary, prescribe treatment. This is the ‘time-triggered’ solution.
For most doctors, the event-triggered approach will seem the more attractive, because they are likely to get a few hours of sleep during the course of the night. By contrast, with the time-triggered approach, the doctor will inevitably suffer sleep deprivation.
However, in the case of many embedded systems – which do not need sleep – the time-triggered approach has many advantages. Indeed, within industrial sectors where safety is an obvious concern, such as the aerospace industry and, increasingly, the automotive industry, time-triggered techniques are widely used because it is accepted, both by the system developers and their certification authorities, that they help improve reliability and safety (see, for example, Allworth, 1981; MISRA, 1994; Storey, 1996; Nissanke, 1997; Bates, 2000 for discussion of these issues).
The main reason that time-triggered approaches are preferred in safety-related applications is that they result in systems which have very predictable behaviour. If we revisit the hospital analogy, we can begin to see why this is so.
Suppose, for example, that our ‘event-triggered’ doctor is sleeping peacefully. An apparently minor problem develops with one of the patients and the nursing staff decide not to awaken the doctor but to deal with the problem themselves. After another two hours, when four patients have ‘minor’ problems, the nurses decide that they will have to wake the doctor after all. As soon as the doctor sees the patients, she recognizes that two of them have severe complications, and she has to begin surgery. Before she can complete the surgery on the first patient, the second patient is very close to death.
Consider the same example with the ‘time-triggered’ doctor. In this case, because the patient visits take place at hourly intervals, the doctor sees each patient before serious complications arise and arranges appropriate treatment. Another way of viewing this is that the workload is spread out evenly throughout the night. As a result, all the patients survive the night without difficulty.
In embedded applications, the (rather macabre) hospital situation is mirrored in the event-driven application by the occurrence of several events (that is, several interrupts) at the same time. This might indicate, for example, that two different faults had been detected simultaneously in an aircraft or simply that two switches had been pressed at the same time on a keypad.
To see why the simultaneous occurrence of two interrupts causes a problem, consider what happens in the 8051 architecture in these circumstances. Like many microcontrollers, the original 8051 architecture supports two different interrupt priority levels: low and high. If two interrupts (we will call them Interrupt 1 and Interrupt 2) occur in rapid succession, the system will behave as follows (a short code sketch after this list illustrates how the two priority levels are assigned):
● If Interrupt 1 is a low-priority interrupt and Interrupt 2 is a high-priority interrupt: The interrupt service routine (ISR) invoked by a low-priority interrupt can be interrupted by a high-priority interrupt. In this case, the low-priority ISR will be paused, to allow the high-priority ISR to be executed, after which the operation of the low-priority ISR will be
completed. In most cases, the system will operate correctly (provided that the two ISRs do not interfere with one another).
● If Interrupt 1 is a low-priority interrupt and Interrupt 2 is also a low-priority interrupt:
The ISR invoked by a low-priority interrupt cannot be interrupted by another low-priority interrupt. As a result, the response to the second interrupt will be at the very least delayed; under some circumstances it will be ignored altogether.
● If Interrupt 1 is a high-priority interrupt and Interrupt 2 is a low-priority interrupt:
The interrupt service routine (ISR) invoked by a high-priority interrupt cannot be interrupted by a low-priority interrupt. As a result, the response to the second interrupt will be at the very least delayed; under some circumstances it will be ignored altogether.
● If Interrupt 1 is a high-priority interrupt and Interrupt 2 is also a high-priority interrupt:
The interrupt service routine (ISR) invoked by a high-priority interrupt cannot be interrupted by another high-priority interrupt. As a result, the response to the second interrupt will be at the very least delayed; under some circumstances it will be ignored altogether.
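To make the two priority levels concrete, the listing below is a minimal sketch, assuming the Keil C51 compiler and its reg52.h register definitions; the ISR names and the choice of external interrupts are illustrative only. One interrupt is left at the default (low) priority, while the other is promoted to high priority via the IP register.

/* A minimal sketch, assuming the Keil C51 compiler and the reg52.h
   register definitions; the ISR names are illustrative only.        */
#include <reg52.h>

static volatile unsigned char Fault_count_G = 0;

void Fault_A_ISR(void) interrupt 0     /* External interrupt 0        */
   {
   /* Left at the default LOW priority: this ISR can be pre-empted
      by Fault_B_ISR(), but not by another low-priority interrupt.    */
   Fault_count_G++;
   }

void Fault_B_ISR(void) interrupt 2     /* External interrupt 1        */
   {
   /* Promoted to HIGH priority in main(): this ISR cannot be
      pre-empted by any other interrupt on the original 8051.         */
   Fault_count_G++;
   }

void main(void)
   {
   PX1 = 1;    /* IP register: give External interrupt 1 high priority */
   EX0 = 1;    /* Enable External interrupt 0                          */
   EX1 = 1;    /* Enable External interrupt 1                          */
   EA  = 1;    /* Globally enable interrupts                           */

   while(1)
      {
      /* ... */
      }
   }

Even in this simple arrangement, if two low-priority interrupts (or two high-priority interrupts) arrive in rapid succession, the second response will be delayed or lost, exactly as described in the list above.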
Note carefully what this means! There is a common misconception among the developers of embedded applications that interrupt events will never be lost. This simply is not true. If you have multiple sources of interrupts that may appear at ‘random’ time intervals, interrupt responses can be missed: indeed, where there are several active interrupt sources, it is practically impossible to create code that will deal correctly with all possible combinations of interrupts.
It is the need to deal with the simultaneous occurrence of more than one event that both adds to the system complexity and reduces the ability to predict the behaviour of an event-triggered system under all circumstances. By contrast, in a time-triggered embedded application, the designer is able to ensure that events are handled one at a time, in a carefully controlled sequence.
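The listing below is a minimal sketch of such a time-triggered arrangement, again assuming the Keil C51 compiler, this time with an 8052-family device (for Timer 2) and a 12 MHz oscillator; the task names and the 1 ms tick interval are illustrative assumptions. A single timer interrupt marks the passage of time, and the tasks themselves are called one at a time, in a fixed order, from the main loop.

/* A minimal sketch of a time-triggered arrangement, assuming the Keil
   C51 compiler, an 8052-family device and a 12 MHz oscillator; the
   task names and the 1 ms tick interval are illustrative assumptions. */
#include <reg52.h>

static volatile unsigned char Tick_G = 0;

void Timer_2_ISR(void) interrupt 5
   {
   /* The ONLY interrupt in the system: it simply records
      that another 1 ms tick has elapsed.                        */
   TF2    = 0;       /* Timer 2 flag must be cleared in software */
   Tick_G = 1;
   }

/* Hypothetical tasks, each written to run quickly to completion */
void Check_Sensors(void)  { /* ... */ }
void Update_Outputs(void) { /* ... */ }

void main(void)
   {
   /* Set Timer 2 to overflow every 1 ms:
      65536 - 1000 = 64536 = 0xFC18 (timer clocked at 1 MHz here) */
   TH2    = 0xFC;  TL2    = 0x18;    /* Initial timer value        */
   RCAP2H = 0xFC;  RCAP2L = 0x18;    /* Auto-reload value          */
   ET2   = 1;                        /* Enable Timer 2 interrupt   */
   T2CON = 0x04;                     /* Start Timer 2, auto-reload */
   EA    = 1;                        /* Globally enable interrupts */

   while(1)
      {
      if (Tick_G)
         {
         Tick_G = 0;
         Check_Sensors();            /* Run each task in turn,     */
         Update_Outputs();           /* one at a time, every tick  */
         }
      }
   }

Because the timer is the only interrupt source, there is no possibility of two interrupts colliding: like the ‘time-triggered’ doctor, the system simply makes its rounds at fixed intervals.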
As already mentioned, the predictable nature of time-triggered applications makes this approach the usual choice in safety-related applications, where reliability is a crucial design requirement. However, the need for reliability is not restricted to systems such as fly-by-wire aircraft and drive-by-wire passenger cars: even at the lowest level, an alarm clock that fails to sound on time, a video recorder that operates intermittently, or a data monitoring system that – once a year – loses a few bytes of data may not have safety implications but, equally, will not have high sales figures.
In addition to increasing reliability, the use of time-triggered techniques can help to reduce both CPU loads and memory usage: as a result, as we demonstrate throughout this book, even the smallest of embedded applications can benefit from the use of this form of system architecture.