REAL-TIME OPERATING SYSTEMS: INPUT/OUTPUT DEVICE DRIVERS

INPUT/OUTPUT DEVICE DRIVERS

Input/output devices

Input/output is abbreviated to I/O. The I/O subsystem, composed of I/O devices, device controllers, and associated input/output software, is a major component of both computer and industrial control systems. All I/O devices are classified as either block or character devices. A block special device causes I/O to be buffered in large pieces, whereas a character device causes I/O to occur one character (byte) at a time. Some devices can be both block and character devices, and must have an entry for each mode.

Input and output devices allow the computer system to interact with the outside environment. Some examples of input devices are keyboard, mouse, microphone, bar code reader, and graphics tablet, and examples of output devices are monitor, printer, and speaker.

For industrial control, a good example of I/O systems is the data acquisition I/O modules, which are important subsystems in most industrial control systems, in particular in SCADA networks. Data acquisition is the processing of multiple electrical or electronic inputs from devices such as sensors, timers, relays, and solid-state circuits for the purpose of monitoring, analyzing and/or controlling systems and processes. Device performance, analog outputs, form factor, computer bus, connection to host, and environmental parameters are all important specifications to consider for data acquisition I/O modules.

One of the important tasks of an operating system is to control all of its I/O devices, which includes such tasks as issuing commands concerning data transfer or status polling, catching and processing interrupts, and handling different kinds of errors. This is the topic of this section.

Device drivers

Device drivers are specific programs that contain device-dependent code. Each can handle one device type, or one class of closely related devices. For example, some kinds of dumb terminal can be controlled by a single terminal driver. On the other hand, a dumb hardcopy terminal and an intelligent graphics terminal are so different that different drivers must be used. Each device controller has one or more registers used to receive its commands. The device drivers issue these commands and check that they are carried out properly. Thus, a communication driver is the only part of the operating system that knows how many registers the associated serial controller has, and what they are used for. In general, a device driver has to accept requests from the device-independent software above it and check that they are carried out correctly. For example, a typical request is to read a block of data from the disk. If the device driver is idle, it starts carrying out the request immediately, but if it is already busy with another request, it enters the new request into a queue, to be dealt with as soon as possible.

To carry out an I/O request, the device driver must decide which controller operations are required, and in what sequence. It starts issuing the corresponding commands by writing them into the controller’s device registers. In many cases, the device driver must wait until the controller does some work, so it blocks itself until an interrupt arrives to unblock it. Sometimes, however, the I/O operation finishes without delay, so the driver does not need to block. After the operation has been completed, the driver must check for errors. Status information is then returned to its caller. Buffering is also an issue, for both block and character devices. For block devices, the hardware generally insists on reading and writing entire blocks at once, but user processes are free to read and write in arbitrary units. If a user process writes half a block, the operating system will normally keep the data internally until the rest of the block is written, at which time the whole block can go out to the disk. For character devices, users can write data to the system faster than it can be output, necessitating buffering. Keyboard input can also arrive before it is needed, also requiring buffering.
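As an illustration, the sketch below shows this pattern for a hypothetical disk controller: the driver writes a command into the controller registers, blocks on a semaphore, and is unblocked by its interrupt handler when the controller signals completion. The register layout and the kernel primitives (semaphore_t, semaphore_wait, semaphore_signal, copy_from_controller) are assumptions made for illustration, not any particular system’s API.

    #include <stdint.h>

    /* Hypothetical memory-mapped disk controller registers. */
    struct disk_regs {
        volatile uint32_t command;      /* command to execute          */
        volatile uint32_t sector;       /* starting sector number      */
        volatile uint32_t status;       /* completion / error status   */
    };

    #define CMD_READ      0x01u
    #define STATUS_ERROR  0x80u

    static struct disk_regs *regs;      /* mapped at install time      */
    static semaphore_t io_done;         /* signalled by the interrupt  */

    /* Interrupt handler: the controller has finished its work. */
    static void disk_irq_handler(void)
    {
        semaphore_signal(&io_done);     /* unblock the waiting driver  */
    }

    /* Read one block: issue the command, then block until the IRQ. */
    static int disk_read_block(uint32_t sector, void *buf)
    {
        regs->sector  = sector;
        regs->command = CMD_READ;           /* start the controller    */
        semaphore_wait(&io_done);           /* sleep until interrupt   */
        if (regs->status & STATUS_ERROR)    /* check for errors        */
            return -1;
        copy_from_controller(regs, buf);    /* device-specific copy    */
        return 0;
    }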

Error handling is done by the device drivers. Most errors are highly device-dependent, so only the device driver knows what to do, such as retry or ignore, and so on. A typical error is caused by a disk block that has been damaged and cannot be read any more. After the driver has tried to read the block a certain number of times, it gives up and informs the device-independent software about the error. How it is treated from here on is a task for the device-independent software. If the error occurred while reading a user file, it may be sufficient to report the error back to the caller. However, if it occurred while reading a critical system data structure, such as the block containing the bit map showing which blocks are free, the operating system may have no choice but to print an error message and terminate.
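A minimal sketch of that retry policy, reusing the hypothetical disk_read_block from the previous sketch (the retry limit and names are illustrative only):

    #define MAX_RETRIES 3                 /* hypothetical retry limit   */

    static int read_block_with_retry(uint32_t block, void *buf)
    {
        int tries, err = -1;

        for (tries = 0; tries < MAX_RETRIES; tries++) {
            err = disk_read_block(block, buf);   /* driver-level read   */
            if (err == 0)
                return 0;                        /* success             */
        }
        /* Give up: the error is reported to the device-independent
         * software, which decides whether to tell the caller or halt. */
        return err;
    }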

(1) Device driver content

Device drivers make up the major part of all operating system kernels. Like other parts of the operating system, they operate in a highly privileged environment and can cause disaster if they get things wrong. Device drivers control the interaction between the operating system and the device that they are controlling. For example, the file system makes use of a general block device interface when writing blocks to an IDE disk. The driver takes care of the details and makes device-specific things happen. Device drivers are specific to the controller chip that they are driving, which is why, for example, you need the NCR810 SCSI driver if your system has an NCR810 SCSI controller.

Every device driver has two important data structures: the device information structure and the static structure. These are used to install the device driver and to share information among the entry point routines. The device information structure is statically defined and is passed to the install entry point. Its purpose is to pass the information required to install a major device into the install entry point, where it is used to initialize the static structure. The static structure is used to pass information between the different entry points, and is initialized with the information stored in the information structure. The operating system communicates with the driver through its entry point routines.
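For a simple character device, the two structures might look roughly as follows; the field and type names are hypothetical, since every operating system defines its own layouts.

    /* Device information structure: one per major device, statically
     * defined and passed to the install entry point.                  */
    struct dev_info {
        uint32_t base_addr;      /* controller base address            */
        int      irq;            /* interrupt vector to attach         */
        int      nminors;        /* number of minor devices supported  */
    };

    /* Static structure: allocated and initialized by install(), then
     * shared by all other entry points of this major device.          */
    struct dev_static {
        volatile void *regs;     /* mapped controller registers        */
        semaphore_t    lock;     /* serializes access to the device    */
        char          *buf;      /* driver-internal data buffer        */
        int            open_count;
    };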

(2) Device driver entry points

The entry point routines provide an interface between the operating system and user applications. For example, when a user makes an open system call, the operating system responds by calling the open entry point routine, if it exists. There is a defined list of entry points, but not every driver needs to implement all of them; a sketch of how the entry points can be gathered into a single table follows the list below.

(a) Install. This routine is called once for each major device when it is configured into the system. The install routine is responsible for allocating and initializing data structures and the device hardware, if present. It receives the address of a device information structure that holds the parameters for a major device. The install routine for a character driver should follow the general pattern sketched below.
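Since the original pseudocode is not reproduced here, the following is a hedged reconstruction of the usual sequence (routine names such as kernel_alloc, map_registers, and attach_interrupt are hypothetical): allocate the static structure, initialize the hardware described by the information structure, attach the interrupt handler, and hand the static structure back to the operating system.

    /* Install entry point: called once per major device. */
    static struct dev_static *dev_install(struct dev_info *info)
    {
        struct dev_static *s = kernel_alloc(sizeof(*s));
        if (s == NULL)
            return NULL;                            /* install fails    */

        s->regs       = map_registers(info->base_addr);
        s->open_count = 0;
        semaphore_init(&s->lock, 1);

        reset_controller(s->regs);                  /* init the device  */
        attach_interrupt(info->irq, dev_irq_handler, s);

        return s;          /* the OS keeps this for later entry points  */
    }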

(b) Open. The open entry point performs the initialization for a minor device. Every open system call results in the invocation of the open entry point. The open entry point is not re-entrant; therefore, only one user task can be executing this entry point’s code at any time for a particular device.

(c) Close. The close entry point is invoked when the last open file descriptor for a particular device file is closed.

(d) Read. The read entry point copies a certain number of bytes from the device into the user’s buffer.

(e) Write. The write entry point copies a certain number of bytes from the user’s buffer to the device.

(f) Select. The select entry point supports I/O polling or multiplexing. The code for this entry point is complicated, so a discussion of it is better left until it is needed; unless you have a slow device, it will most likely never be needed.

(g) Uninstall. The uninstall entry point is called once when the major device is uninstalled from the system. Any dynamically allocated memory or interrupt vectors set in the install entry point should be freed in this entry point.

(h) Strategy. The strategy entry point is valid only for block devices. Instead of having a read and write entry point, block device drivers have a strategy entry point routine that handles both reading and writing.
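One common convention is to gather these entry points into a single table of function pointers that the operating system consults. The structure below is a hedged sketch of that idea (the tty_* routines are assumed to be defined elsewhere), not any particular system’s actual interface.

    /* A driver advertises its entry points through one table. */
    struct dev_entry_points {
        struct dev_static *(*install)(struct dev_info *);
        int (*open)     (struct dev_static *, int minor);
        int (*close)    (struct dev_static *, int minor);
        int (*read)     (struct dev_static *, char *buf, int nbytes);
        int (*write)    (struct dev_static *, const char *buf, int nbytes);
        int (*select)   (struct dev_static *, int which);
        int (*uninstall)(struct dev_static *);
    };

    /* Character driver: read/write are used, strategy is absent. */
    static const struct dev_entry_points tty_entry_points = {
        .install   = dev_install,
        .open      = tty_open,
        .close     = tty_close,
        .read      = tty_read,
        .write     = tty_write,
        .select    = NULL,          /* rarely needed; see (f) above    */
        .uninstall = tty_uninstall,
    };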

Request contention

Dealing with race conditions is one of the more difficult aspects of writing an I/O device driver. The most common source of such contention is the I/O request queue, the shared structure through which requests traditionally flow between the operating system and the driver, and the data that most needs protecting from concurrent access.

The most important function in a block driver is the request function, which performs the low-level operations related to reading and writing data. Each block driver works with at least one I/O request queue. This queue contains, at any given time, all of the I/O operations that the operating system would like to see done on the driver’s devices. The management of this queue is complicated; the performance of the system depends on how it is done. The I/O request queue is a complex data structure that is accessed in many places in the operating system. It is entirely possible that the operating system needs to add more requests to the queue at the same time that the device driver is taking requests off it. The queue is thus subject to the usual sort of race conditions, and must be protected accordingly.

A variant of this race can also occur if the request function returns while an I/O request is still active. Many drivers for real hardware will start an I/O operation and then return; the work is completed in the driver’s interrupt handler. We will look at interrupt-handling methodology in detail later in this chapter; for now, it is worth mentioning that the request function can be called again while those earlier operations are still in progress.

Some drivers handle request function re-entrance by maintaining an internal request queue. The request function simply removes any new requests from the I/O request queue and adds them to the internal queue, which is then processed through a combination of task schedulers and interrupt handlers.
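A hedged sketch of that pattern, assuming the kernel provides a spinlock primitive and calls the driver’s request function with the shared queue (all names here are illustrative):

    static spinlock_t queue_lock;        /* protects the request queues */

    /* Request function: may be re-entered while I/O is in progress. */
    static void dev_request(struct request_queue *q)
    {
        struct request *req;

        spin_lock(&queue_lock);
        while ((req = next_request(q)) != NULL)  /* drain the shared queue */
            enqueue_internal(req);               /* onto the private queue */
        spin_unlock(&queue_lock);

        start_next_transfer();    /* no-op if a transfer is already live  */
    }

    /* Interrupt handler: complete the active request, start the next. */
    static void dev_irq_handler(void)
    {
        spin_lock(&queue_lock);
        finish_current_request();
        spin_unlock(&queue_lock);
        start_next_transfer();
    }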

One other detail regarding the behavior of the I/O request queue is relevant for block drivers that deal with clustering. It has to do with the queue head: the first request on the queue. For historical compatibility reasons, the operating system (almost) always assumes that a block driver is processing the first entry in the request queue; hence, to avoid corruption resulting from conflicting activity, the operating system will never modify a request once it gets to the head of the queue. No further clustering will happen on that request, and the elevator code will not put other requests in front of it.

The queue is designed with physical disk drives in mind. With disks, the amount of time required to transfer a block of data is typically quite small. The amount of time required to position the head (seek) to do that transfer, however, can be very large. Thus, the operating system works to minimize the number and extent of the seeks performed by the device.

Two things are done to achieve this goal. One is the clustering of requests to adjacent sectors on the disk. Most modern file systems will attempt to lay out files in consecutive sectors; as a result, requests to adjoining parts of the disk are common. The operating system also applies an “elevator” algorithm to the requests. An elevator in a skyscraper goes either up or down; it will continue to move in one direction until all of its “requests” (people wanting on or off) have been satisfied. In the same way, the operating system tries to keep the disk head moving in the same direction for as long as possible; this approach tends to minimize seek times while ensuring that all requests are eventually satisfied. Requests are not made up of random lists of buffers; instead, all of the buffer heads attached to a single request belong to a series of adjacent blocks on the disk. Thus a request is, in a sense, a single operation referring to a (perhaps long) group of blocks on the disk. This grouping of blocks is called clustering.
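As a toy illustration of the elevator idea (not any particular kernel’s implementation), pending requests can be kept sorted by sector so that the head keeps sweeping in one direction:

    /* Minimal request descriptor for the illustration. */
    struct request {
        unsigned long   sector;     /* first sector of the transfer    */
        struct request *next;       /* next request in the list        */
        /* ... buffers, direction, length, etc. ...                    */
    };

    /* Insert a request so the pending list stays ordered by sector. */
    static void elevator_insert(struct request **head, struct request *req)
    {
        struct request **p = head;

        while (*p != NULL && (*p)->sector <= req->sector)
            p = &(*p)->next;        /* walk until we pass req's sector */

        req->next = *p;             /* splice the request in place     */
        *p = req;
    }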

I/O operations
(1) Interrupt-driven I/O

In interrupt-driven I/O, dedicated I/O microprocessors can conduct I/O operations. Whenever a data transfer to or from the managed hardware might be delayed for any reason, the driver writer should implement buffering. Data buffers help to decouple data transmission and reception from the write and read system calls, and overall system performance benefits.

A good buffering mechanism leads to interrupt-driven I/O, in which an input buffer is filled at interrupt time and emptied by processes that read the device, while an output buffer is filled by processes that write to the device and emptied at interrupt time. For interrupt-driven data transfer to happen successfully, the hardware should be able to generate interrupts with the following semantics (a ring-buffer sketch of the input side follows the two cases):

(a) For input, the device interrupts the microprocessor when new data have arrived and are ready to be retrieved. The actual actions to perform depend on whether the device uses I/O ports, memory mapping, or DMA (direct memory access).

(b) For output, the device delivers an interrupt either when it is ready to accept new data or to acknowledge a successful data transfer. Memory-mapped and DMA-capable devices usually generate interrupts to tell the system they have finished with the buffer.
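The input side of such a scheme can be illustrated with a small ring buffer. The buffer size and the register helper are hypothetical, and a real driver would also have to guard the indices against concurrent access, for instance by briefly masking the interrupt.

    #include <stdint.h>

    #define BUF_SIZE 256u               /* power of two, so % wraps cleanly  */

    static volatile unsigned in_head;   /* advanced by the interrupt handler */
    static volatile unsigned in_tail;   /* advanced by the read entry point  */
    static uint8_t in_buf[BUF_SIZE];

    /* Interrupt time: store the newly arrived byte, drop it if full. */
    static void rx_irq_handler(void)
    {
        uint8_t c = read_data_register();         /* device-specific    */
        if (in_head - in_tail < BUF_SIZE)
            in_buf[in_head++ % BUF_SIZE] = c;
    }

    /* Read entry point: drain whatever the interrupt handler queued. */
    static int dev_read(char *ubuf, int nbytes)
    {
        int n = 0;
        while (n < nbytes && in_tail != in_head)
            ubuf[n++] = (char)in_buf[in_tail++ % BUF_SIZE];
        return n;                                  /* bytes delivered   */
    }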

However, interrupt-driven I/O introduces the problem of synchronizing concurrent access to shared data items and all the issues related to race conditions. This related topic was discussed in the previous subsection.

(2) Memory-mapped read and write

Memory-mapped and port-mapped I/O are two complementary methods of performing input and output between the CPU and I/O devices.

Memory-mapped I/O uses the same bus to address both memory and I/O devices. CPU instructions used to read and write to memory are also used in accessing I/O devices. To accommodate the I/O devices, areas of CPU addressable space must be reserved for I/O rather than memory, which needs CPU hardware support. This does not have to be permanent; for example, the Commodore 64 could bank switch between its I/O devices and regular memory. The I/O devices monitor the CPU’s address bus and respond to any CPU access of their assigned address space, mapping the address to their hardware registers.
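For instance, with memory-mapped I/O a driver can treat a controller register as an ordinary (volatile) memory location. The UART layout and address below are purely hypothetical:

    #include <stdint.h>

    /* Hypothetical UART mapped at physical address 0x10000000. */
    #define UART_BASE   0x10000000u
    #define UART_DATA   (*(volatile uint8_t *)(UART_BASE + 0x0))
    #define UART_STATUS (*(volatile uint8_t *)(UART_BASE + 0x4))
    #define TX_READY    0x01u

    static void mmio_putc(uint8_t c)
    {
        while (!(UART_STATUS & TX_READY))
            ;                       /* spin until the transmitter is free */
        UART_DATA = c;              /* an ordinary store drives the device */
    }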

Port-mapped I/O uses a special class of CPU instructions for device access. This is found, for example, on Intel microprocessors, which provide the IN and OUT instructions to read and write a single byte to or from an I/O device. I/O devices have an address space separate from general memory, implemented either with an extra I/O pin on the CPU’s physical interface or with an entire bus dedicated to I/O.
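On x86, the equivalent operation through port-mapped I/O goes through the IN and OUT instructions, usually wrapped in small inline-assembly helpers. The GCC-style wrappers below are conventional, and the port numbers assume the classic PC COM1 serial port; treat the whole fragment as a sketch rather than a complete driver.

    #include <stdint.h>

    /* Conventional GCC-style wrappers around the x86 IN/OUT instructions. */
    static inline void outb(uint16_t port, uint8_t val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    static inline uint8_t inb(uint16_t port)
    {
        uint8_t val;
        __asm__ volatile ("inb %1, %0" : "=a"(val) : "Nd"(port));
        return val;
    }

    #define COM1 0x3F8                       /* classic PC serial port   */

    static void port_putc(uint8_t c)
    {
        while (!(inb(COM1 + 5) & 0x20))      /* line status: THR empty?  */
            ;
        outb(COM1, c);                       /* write the data port      */
    }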

(3) Bus-based read and write

In bus-based I/O, the microprocessor has a set of address, data, and control ports corresponding to bus lines, and uses the bus to access memory as well as peripherals. The microprocessor has the bus protocol built into its hardware. Alternatively, the bus may be equipped with memory read and write lines plus input and output command lines; the command line then specifies whether the address refers to a memory location or an I/O device, and the full range of addresses may be available for both. With 16 address lines, for example, the system may then support both 64K memory locations and 64K I/O addresses. Because the address space for I/O is isolated from that for memory, this is referred to as isolated I/O, also known as port-mapped I/O or standard I/O.
