An introduction to schedulers

A closer look at pre-emptive schedulers

The discussion in this section is more technical than the previous sections in this chapter and may be omitted on a first reading of the book.

Various research studies have demonstrated that, compared to pre-emptive schedulers, co-operative schedulers have a number of desirable features. For example, Nissanke (1997, p. 237) notes:

[Pre-emptive] schedules carry greater runtime overheads because of the need for context switching – storage and retrieval of partially computed results. [Co-operative] algorithms do not incur such overheads. Other advantages of [co-operative] algorithms include their better understandability, greater predictability, ease of testing and their inherent capability for guaranteeing exclusive access to any shared resource or data.

Similarly, Allworth (1981, pp. 53–4) notes:

Significant advantages are obtained when using this [co-operative] technique. Since the processes are not interruptable, poor synchronisation does not give rise to the problem of shared data. Shared subroutines can be implemented without producing re-entrant code or implementing lock and unlock mechanisms.

Also, in a recent presentation, Bates (2000) identified the following four advantages of co-operative scheduling, compared to pre-emptive alternatives:

1 The scheduler is simpler

2 The overheads are reduced

3 Testing is easier

4 Certification authorities tend to support this form of scheduling

Despite these observations, all the authors cited and the vast majority of other workers in this area focus on the use of pre-emptive schedulers. At least part of the reason why pre-emptive approaches are more widely discussed is confusion over the options available. For example, Bennett (1994, p. 205) states:

If we consider the scheduling of time allocation on a single CPU there are two basic alternatives: [1] cyclic, [2] pre-emptive.

In fact, contrary to Bennett’s assertion, what he refers to as cyclic scheduling is essentially a form of SUPER LOOP [page 162]. As we saw in Chapter 9, this type of architecture is suitable only for use in a restricted range of very simple applications, in particular those where accurate timing is not a key requirement and limited memory and CPU resources are available: SUPER LOOP is not representative of the broad range of co-operative scheduling architectures that are available.
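For readers who have not yet reached Chapter 9, the SUPER LOOP architecture Bennett has in mind can be sketched in a few lines of C. The task names here are illustrative placeholders (not functions from this book), and the loop is bounded only so that the sketch terminates on a desktop machine; on real hardware it would run forever:

```c
#include <assert.h>

/* Illustrative SUPER LOOP sketch: the task names are placeholders,
   not functions from this book. */
static int sensor_reads = 0;
static int output_updates = 0;

static void read_sensor(void)   { sensor_reads++; }
static void update_output(void) { output_updates++; }

/* On real hardware the loop would be endless: while (1) { ... }
   Here it is bounded so the sketch can terminate. */
void super_loop_demo(int iterations)
{
    int i;
    for (i = 0; i < iterations; i++) {
        read_sensor();     /* Task 1 */
        update_output();   /* Task 2 */
        /* No timer is involved: each pass takes however long the tasks
           happen to take, so accurate timing cannot be guaranteed. */
    }
}
```

The absence of any timer control is the key weakness: the interval between task invocations drifts with task duration, which is why this architecture suits only applications where accurate timing is not a requirement.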

Bennett is, however, not alone: other researchers make similar assumptions (see Barnett, 1995). For example, Locke (1992, p. 37) – in a widely cited publication – suggests that:

Traditionally, there have been two basic approaches to the overall design of application systems exhibiting hard real-time deadlines: the cyclic executive … and the fixed priority [pre-emptive] architecture.

Similarly, Cooling (1991, pp. 292–3) compares co-operative and pre-emptive scheduling approaches. Again, however, his discussion of co-operative schedulers is restricted to a consideration of the special case of cyclic scheduling: as a result, his conclusion that a pre-emptive approach is more effective is unsurprising.

Where the different characteristics of pre-emptive and co-operative scheduling are compared equitably, the main concern expressed is often that long tasks will have an impact on the responsiveness of a co-operative scheduler. This concern is succinctly summarized by Allworth (1981):

[The] main drawback with this [co-operative] approach is that while the current process is running, the system is not responsive to changes in the environment. Therefore, system processes must be extremely brief if the real-time response [of the] system is not to be impaired.

There are four main technical reasons why such concerns are, generally, misplaced:

● In many embedded applications, the task duration is extremely brief. For example, consider one of the more complex algorithms considered in this book: proportional integral differential (PID) control. Even the most basic 8051 microcontroller can carry out a PID calculation in around 0.4 ms (see p. 872): even in flight control – where PID algorithms remain in widespread use – sample rates of around 10 ms are common, and a 0.4 ms calculation does not impose a significant processor load.

● Where the system does have long tasks, this is often because the developer is unaware of some simple techniques that can be used to break down these tasks in an appropriate way and – in effect – convert ‘long tasks called infrequently’ into ‘short tasks called frequently’. Such techniques are used throughout this book; they are introduced and explained in Chapter 16.

● In many cases, the increased power of microcontrollers has more than kept pace with the performance demands of embedded systems. For example, the PID performance figures just given assumed an original 8051 microcontroller, with a 1 MIPS performance level. As we saw in Chapter 3, numerous low-cost members of this family now have performance levels between 5 and 50 MIPS. Often a simple, cost-effective way of addressing performance concerns is not to use a more complex software architecture but, instead, to update the hardware.

● If upgrades to the task design or microcontroller do not provide sufficient performance improvements, then more than one microcontroller can be used. This is now very common: for example, a typical automotive environment contains more than 40 embedded processors (Leen et al., 1999). With the increased availability of such processing elements, long tasks may be readily ‘migrated’ to another processor, leaving the main CPU free to respond rapidly, if necessary, to other events. (See Part F of this book for numerous examples of this process.)
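To give a concrete sense of why the PID calculation mentioned in the first point above is so brief, note that the core update amounts to a handful of multiplications and additions. The sketch below is a generic PID update in C, not the implementation discussed on page 872; the structure, names and gains are illustrative:

```c
#include <assert.h>

/* Generic PID update sketch: names, types and gains are illustrative,
   not taken from the implementation discussed in this book. */
typedef struct {
    float kp, ki, kd;   /* Proportional, integral and derivative gains */
    float integral;     /* Accumulated error */
    float prev_error;   /* Error at the previous sample */
} pid_state_t;

/* One control-loop update: a handful of multiplications and additions,
   which is why even a basic 8051 completes it in well under 1 ms. */
float pid_update(pid_state_t *s, float setpoint, float measured, float dt)
{
    float error      = setpoint - measured;
    float derivative = (error - s->prev_error) / dt;

    s->integral  += error * dt;
    s->prev_error = error;

    return s->kp * error + s->ki * s->integral + s->kd * derivative;
}
```

With kp = 1 and the other gains zero, the output simply equals the error, which makes the arithmetic easy to check by hand.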

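The ‘short tasks called frequently’ conversion described in the second point above can be illustrated with a simple state-carrying function. The buffer-processing example below is hypothetical (it is not one of the Chapter 16 techniques themselves): instead of summing all the elements in one long call, each call processes a single element, preserving its position between calls.

```c
#include <assert.h>

/* Hypothetical example of converting a long task into short ones:
   rather than processing all BUFFER_SIZE elements in one call (a long
   task called infrequently), each call handles one element (a short
   task called frequently), carrying its position between calls. */
#define BUFFER_SIZE 100

int  buffer[BUFFER_SIZE];
long running_sum = 0;

static int next_index = 0;   /* State preserved between calls */

/* Short task: one element per invocation.  Returns 1 when a full
   pass over the buffer is complete, 0 while steps remain. */
int process_step(void)
{
    running_sum += buffer[next_index];
    next_index++;

    if (next_index == BUFFER_SIZE) {
        next_index = 0;
        return 1;   /* Pass complete */
    }
    return 0;       /* More steps remain */
}
```

A scheduler can then invoke process_step() once per tick: the work is completed over 100 ticks, but no single invocation blocks the system for long.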
Finally, it should be noted that the reasons why pre-emptive schedulers have been more widely discussed and used may not be technical at all: in fact, the use of pre-emptive environments can be seen to have clear commercial advantages for some companies. For example, a co-operative scheduler may be easily constructed, entirely in a high-level programming language, in around 300 lines of ‘C’ code, as we demonstrate in Chapter 9. The code is highly portable, easy to understand and use and is, in effect, freely available. By contrast, the increased complexity of a pre-emptive operating environment results in a much larger code framework (some ten times the size, even in a simple implementation: Labrosse, 1998). The size and complexity of this code makes it unsuitable for ‘in-house’ construction in most situations and therefore provides the basis for commercial ‘RTOS’ products to be sold, generally at high prices and often with expensive run-time royalties to be paid. The continued promotion and sale of such environments has, in turn, prompted further academic interest in this area. For example, according to Liu and Ha (1995):

[An] objective of reengineering is the adoption of commercial off-the-shelf and standard operating systems. Because they do not support cyclic scheduling, the adoption of these operating systems makes it necessary for us to abandon this traditional approach to scheduling.
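To illustrate the scale of the co-operative scheduler mentioned above, its core mechanism (a task table, a timer-driven update and a dispatcher) fits comfortably within the 300-line figure. The sketch below is a much-reduced, host-runnable version with illustrative names, not the Chapter 9 code: the full version adds, among other things, error reporting, task deletion and sleep modes.

```c
#include <assert.h>
#include <stddef.h>

/* Much-reduced co-operative scheduler sketch.  Names are illustrative;
   this is not the Chapter 9 implementation. */
#define MAX_TASKS 4

typedef struct {
    void (*task)(void);   /* Function to run (NULL = slot free) */
    unsigned int delay;   /* Ticks until the task next runs */
    unsigned int period;  /* Ticks between runs (0 = one-shot) */
    unsigned int run_me;  /* Incremented when the task falls due */
} task_t;

static task_t tasks[MAX_TASKS];

void sch_add_task(void (*fn)(void), unsigned int delay, unsigned int period)
{
    int i;
    for (i = 0; i < MAX_TASKS; i++) {
        if (tasks[i].task == NULL) {
            tasks[i].task   = fn;
            tasks[i].delay  = delay;
            tasks[i].period = period;
            tasks[i].run_me = 0;
            return;
        }
    }
}

/* On real hardware this is called from a timer ISR; here, directly. */
void sch_update(void)
{
    int i;
    for (i = 0; i < MAX_TASKS; i++) {
        if (tasks[i].task == NULL) continue;
        if (tasks[i].delay == 0) {
            tasks[i].run_me++;                 /* Task is due */
            tasks[i].delay = tasks[i].period;  /* Schedule next run */
        }
        if (tasks[i].delay > 0) tasks[i].delay--;
    }
}

/* Called from the main loop: each due task runs to completion -
   co-operatively - before the next is considered. */
void sch_dispatch(void)
{
    int i;
    for (i = 0; i < MAX_TASKS; i++) {
        if (tasks[i].task != NULL && tasks[i].run_me > 0) {
            tasks[i].run_me--;
            tasks[i].task();
            if (tasks[i].period == 0) {
                tasks[i].task = NULL;          /* One-shot: free the slot */
            }
        }
    }
}

/* Hypothetical demo task, used below to show periodic dispatch. */
static int demo_runs = 0;
void demo_task(void) { demo_runs++; }
```

Adding demo_task with a period of two ticks and then driving six update/dispatch cycles runs the task three times: this time-triggered task table is the whole of the scheduling mechanism, with no context switching involved.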

Conclusions

In this chapter, we have explained what a scheduler is and outlined the differences between co-operative and pre-emptive scheduling. We have argued that a co-operative scheduler provides a simple and reliable operating environment that matches precisely the needs of most embedded applications.

Over recent years we have used versions of the co-operative schedulers presented in this book in numerous ‘real’ applications. We have also helped many student groups use this architecture in their first embedded systems. We have no doubt that the correct use of these schedulers not only results in simple, transparent and reliable designs, but also makes it easier for ‘desktop’ developers to adapt rapidly to the challenges of embedded system development.
