Chapter 11. Ports
No computer has everything you need built into it. The computer gains its power from what you connect to it—its peripherals. Somehow your computer must be able to send data to its peripherals. It needs to connect, and it makes its connections through its ports.
Today’s computers include two or three of these modern interfaces. Those that you’re likely to encounter include Universal Serial Bus (USB), today’s general-purpose choice; FireWire, most popular as a digital video interface; IrDA, a wireless connection most often used for beaming data between handheld computers; Bluetooth, a radio-based networking system most suited for voice equipment; legacy serial ports, used by a variety of slower devices such as external modems, drawing tablets, and even PC-to-PC connections; and legacy parallel ports, most often used by printers. Table 11.1 compares these port alternatives.
If you were only to buy new peripherals to plug into your computer, you could get along with only USB ports. Designed to be hassle-free for installing new gear, USB is fast enough (at least in its current version 2.0) that you need not consider any other connection. You can consider all the other ports special-purpose designs—the legacy pair (parallel and serial) for accommodating stuff that would otherwise be gathering dust in your attic; FireWire for plugging in your digital camcorder; IrDA for talking to your notebook computer; and Bluetooth for, well, you’ll think of something.
Universal Serial Bus
In 1995, Compaq, Digital, IBM, Intel, Microsoft, NEC, and Northern Telecom, determined to design a better interface, pooled their efforts and laid the groundwork for the Universal Serial Bus, better known as USB. Later that year, they started the Universal Serial Bus Implementers Forum and in 1996 unveiled the new interface to the world. The world yawned.
Aimed at replacing both legacy serial and parallel port designs, the USB design corrected all three of their shortcomings. To improve performance, they designed USB with a 12Mbps data rate (with an alternative low-speed signaling rate of 1.5Mbps). To eliminate wiring hassles and worries about connector gender, crossover cables, and device types, they developed a strict wiring system with exactly one type of cable to serve all interconnection needs. And to allow one jack on the back of a computer to handle as many peripherals as necessary, they designed the system to accommodate up to 127 devices per port. In addition, they built in Plug-and-Play support so that every connection could be self-configuring. You could even hot-plug new devices and use them immediately without reloading your operating system.
On April 27, 2000, a new group, led by Compaq, Hewlett-Packard, Intel, Lucent, Microsoft, NEC, and Philips, published a revised USB standard, version 2.0. The key change was an increase in performance, upping the speed from 12Mbps to 480Mbps. The new system incorporates all the protocols of the old and is fully backward compatible. Devices will negotiate the highest common speed and use it for their transfers. Connectors and cabling remained unchanged.
Background
USB was designed for those who would rather compute than worry about hardware; the premise underlying it is the substitution of software intelligence for cabling confusion. USB handles all the issues involved in linking multiple devices with different capabilities and data rates with a layer-cake of software. Along the way, it introduces its own new technology and terminology.
USB divides serial hardware into two classes: hubs and functions. A USB hub provides jacks into which you can plug functions. A USB function is a device that actually does something. USB’s designers imagined that a function may be anything you can connect to your computer, including keyboards, mice, modems, printers, plotters, scanners, and more.
Rather than a simple point-to-point port, the USB acts as an actual bus that allows you to connect multiple peripherals to one jack on your computer with all the linked functions (devices) sharing exactly the same signals. Information passes across the bus in the form of packets, and all functions receive all packets. Your computer accesses individual functions by adding a specific address to the packets, and only the function with the correct address acts on the packets addressed to it.
The physical manifestation of USB is a port—a jack on the back of your computer or in a hub. Although a single USB bus can handle up to 127 devices, each physical USB jack connects to only one device. To connect multiple devices, you need multiple jacks. Typically, a new computer comes equipped with two USB ports. When you need more, you add a hub, which offers multiple jacks to let you plug in several devices. You can plug one hub into another to provide several additional jacks and ports to connect more devices.
The USB design envisions a hierarchical system with hubs connected to hubs connected to hubs. In that each hub allows for multiple connections, the reach of the USB system branches out like a tree—or a tree’s roots. Figure 11.1 gives a conceptual view of the USB wiring system.
Your computer acts as the base hub for a USB system and is termed the host. The circuitry in your computer that controls this integral hub and the rest of the USB system is called the bus controller. Each USB system has one and only one bus controller.
Under USB 2.0, a device can operate at any of three speeds: low speed at 1.5Mbps, full speed at 12Mbps, and high speed at 480Mbps.
USB 2.0 is backward compatible with USB 1.1—all USB 1.1 devices will work with USB 2.0 hosts and hubs, and vice versa, but USB 1.1 will impose its speed limit on USB 2.0 devices. The mixing of speeds makes matters complicated. If you plug both a USB 2.0 and a USB 1.1 device into a USB 2.0 hub, both devices will operate at their respective top speeds. But if a USB 1.1 hub appears in the chain between USB 2.0 devices, the slower hub will limit the speed of the overall system. Plug any USB 2.0 device—even a USB 2.0 hub—into a USB 1.1 hub, and it will degrade to USB 1.1 operation (and if that device is a hub, so will all the devices connected to it).
Other than this speed issue, the USB system doesn’t care which device you plug into which hub or how many levels down the hub hierarchy you put a particular device. All the system requires is that you properly plug everything together following its simple rule: Each device must plug into a hub. The USB software then sorts everything out. This software, making up the USB protocol, is the most complex part of the design. In comparison, the actual hardware is simple—but the hardware won’t work without the protocol.
The wiring hardware imposes no limit on the number of devices/functions you can connect in a USB system. You can plug hubs into hubs into hubs, fanning out into as many ports as you like. You do face limits, however. The protocol constrains the number of functions on one bus to 127 because of addressing limits. Seven bits are allowed for encoding function addresses, which yields 128 possibilities, and one of them (the default address every device uses before it is configured) is reserved.
In addition, the wiring limits the distance at which you can place functions from hubs. The maximum length of a USB cable is five meters. Because hubs can regenerate signals, however, your USB system can stretch out for greater distances by making multiple hops through hubs.
As part of the Plug-and-Play process, the USB controller goes on a device hunt when you start up your computer. It interrogates each device to find out what it is. It then builds a map that locates each device by hub and port number. These become part of the packet address. When the USB driver sends data out the port, it routes that data to the proper device by this hub-and-port address.
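If you prefer to see the idea in code, the following Python sketch mimics this enumeration-and-routing scheme. It is purely conceptual: the class names and structure are invented for illustration and correspond to no actual USB host software.

# Conceptual sketch of USB-style enumeration and addressed delivery.
# All names here are illustrative; real USB host software is far more involved.

class Function:
    def __init__(self, name):
        self.name = name
    def receive(self, data):
        print(f"{self.name} received {data!r}")

class HostController:
    def __init__(self):
        self.next_address = 1      # address 0 is reserved for unconfigured devices
        self.device_map = {}       # address -> (hub, port, function)

    def enumerate(self, hub, port, function):
        # Interrogate a newly found device and give it a unique address.
        address = self.next_address
        self.next_address += 1
        self.device_map[address] = (hub, port, function)
        return address

    def send(self, address, data):
        # The bus broadcasts every packet; only the addressed function acts on it.
        hub, port, function = self.device_map[address]
        function.receive(data)

host = HostController()
kbd_addr = host.enumerate(hub=1, port=1, function=Function("keyboard"))
prn_addr = host.enumerate(hub=1, port=2, function=Function("printer"))
host.send(prn_addr, b"print me")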
USB requires specific software support. Any device with a USB connector will have the necessary firmware to handle USB built in. But your computer will also require software to make the USB system work. Your computer’s operating system must know how to send the appropriate signals to its USB ports. All Windows versions starting with Windows 98 have USB support. Windows 95 and Windows NT do not. In addition, each function must have a matching software driver. The function driver creates the commands or packages the data for its associated device. An overall USB driver acts as the delivery service, providing the channel—in USB terminology, a pipe—for routing the data to the various functions. Consequently, each USB function you add to your computer requires software installation along with plugging in the hardware.
Connectors
The USB system involves four different styles of connectors—two chassis-mounted jacks and two plugs at the ends of cables. Each jack and plug comes in two varieties: A and B.
Hubs have A jacks. These are the primary outward manifestation of the USB port—the wide, thin USB slots you’ll find on the back of your computer. The matching A plug attaches to the cable that leads to the USB device. In the purest form of USB, this cable is permanently affixed to the device, and you need not worry about any other plugs or jacks.
This configuration may someday become popular when manufacturers discover they can save the cost of a connector by integrating the cable. Unfortunately, too many manufacturers have discovered that by putting a jack on their USB devices they save the cost of the cable by not including it with the device.
To accommodate devices with removable cables (and manufacturers that don’t want to add the expense of a few feet of wire to their USB devices), the USB standard allows for a second, different style of plug and jack meant only to be used for inputs to USB devices. If a USB device (other than a hub) requires a connector so that, as a convenience, you can remove the cable, it uses a USB B jack, which is a small, nearly square hole into which you slide the mating B plug.
The motivation behind this multiplicity of connectors is to prevent rather than cause confusion. All USB cables will have an A plug at one end and a B plug at the other. One end must attach to a hub and the other to a device. You cannot inadvertently plug things together incorrectly.
Because all A jacks are outputs and all B jacks are inputs, only one form of detachable USB cable exists—one with an A plug at one end and a B plug at the other. No crossover cables or adapters are needed for any USB wiring scheme.
Cable
The USB system uses two kinds of cable—that meant for low-speed connections and that meant for full- and high-speed links. But you only have to worry about the higher-speed variety. Low-speed cables, those capable of supporting only 1.5Mbps signaling rates, must be permanently attached to the equipment using them. Higher-speed cables can be either permanently attached or removable.
Both speeds of physical USB wiring use a special four-wire cable. Two conductors in the cable transfer the data as a differential digital signal. That is, the voltage on the two conductors is of equal magnitude and opposite polarity, so that when one is subtracted from the other (finding the difference) the result cancels out any noise that ordinarily would add equally to the signal on each line. In addition, the USB cable includes a power signal, nominally five volts DC, and a ground return. The power signal allows you to supply power for external serial devices through the USB cable. The two data wires are twisted together as a pair. The power wires may or may not be.
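A quick numerical illustration in Python (the voltages are arbitrary, made-up values) shows why the differential scheme works: noise that appears equally on both conductors disappears when the receiver takes the difference.

# Illustration of differential signaling: common-mode noise cancels at the receiver.
signal = [0.3, -0.3, 0.3, 0.3, -0.3]        # data as plus/minus voltage levels
noise  = [0.05, -0.02, 0.08, 0.01, -0.04]   # noise picked up equally by both wires

plus_wire  = [ s + n for s, n in zip(signal, noise)]   # one wire: signal plus noise
minus_wire = [-s + n for s, n in zip(signal, noise)]   # other wire: inverted signal plus the same noise

received = [(p - m) / 2 for p, m in zip(plus_wire, minus_wire)]
print(received)   # recovers the original signal; the noise terms subtract away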
The difference between low- and higher-speed cables is that the capacitance of the low-speed cable is adjusted to support its signaling rate. In addition, low-speed cable does not need twisted-pair wires, and the standard doesn’t require them.
All removable cables must be able to handle both full-speed and high-speed connections. To achieve its high data rate, the USB specification requires that certain physical characteristics of the cable be carefully controlled. Even so, the maximum length permitted for any USB cable is five meters.
One limit on cable length is the inevitable voltage drop suffered by the power signal. All wires offer some resistance to electrical flow, and the resistance is proportional to the wire gauge. Hence, lower wire gauges (thicker wires) have lower resistance. Longer cables require lower wire gauges. At maximum length, the USB specification requires 20-gauge wire, which is one step (two gauge numbers) thinner than ordinary lamp cord.
The individual wires in the USB cable are color-coded. The data signals form a green-white pair, with the +Data signal on green. The positive five-volt signal rides on the red wire. The ground wire is black. Table 11.2 sums up this color code.
Normally, you cannot connect one computer to another using USB. The standard calls for only one USB controller in the entire interconnection system. Physically the cabling system prevents you from making such a connection. The exception is that some cables have a bridge built in that allows two USB hosts to talk to each other. The bridge is active circuitry that converts the signals.
Protocol
As with all more recent interface introductions, the USB design uses a packet-based protocol using Non-Return-to-Zero Inverted (NRZI) data coding. All message exchanges require the swapping of three packets. The exchange begins with the host sending out a token packet. The token packet bears the address of the device meant to participate in the exchange as well as control information that describes the nature of the exchange. A data packet holds the actual information that is to be exchanged. Depending on the type of transfer, either the host or the device will send out the data packet. Despite the name, the data packet may contain no information. Finally, the exchange ends with a handshake packet, which acknowledges the receipt of the data or other successful completion of the exchange. A fourth type of packet, called Special, handles additional functions.
Each packet starts with two components—a Sync Field and a Packet Identifier—each one byte long. The Sync Field is a series of bits that serves as a consistent burst of clock pulses so that the devices connected to the USB bus can reset their timing and synchronize themselves to the host. The Sync Field appears as three on/off pulses followed by a marker two pulses wide. The Packet Identifier byte includes four bits to define the nature of the packet itself and another four bits as check-bits that confirm the accuracy of the first four. The four bits provide a code that allows for the definition of 16 different kinds of packets.
USB uses the 16 values in a two-step hierarchy. The two more significant bits specify one of the four types of packets. The two less significant bits subdivide the packet category. Table 11.3 lists the PIDs of the four basic USB packet types.
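The check bits work by carrying the one’s complement of the packet-type code, so a receiver can verify a Packet Identifier without any additional data. The short Python sketch below shows the principle; which nibble the type code occupies on the wire is a detail of the specification, and the low nibble is used here purely for illustration.

def make_pid_byte(pid4):
    # Combine a 4-bit packet type code with its 4 check bits.
    # The check bits are the one's complement of the type code, so any
    # single-bit corruption of the byte breaks the relationship.
    check = (~pid4) & 0x0F
    return (check << 4) | pid4

def pid_is_valid(byte):
    pid4  = byte & 0x0F
    check = (byte >> 4) & 0x0F
    return check == ((~pid4) & 0x0F)

token = make_pid_byte(0b0001)              # an illustrative 4-bit type code
print(hex(token), pid_is_valid(token))     # valid as built
print(pid_is_valid(token ^ 0x10))          # flipping one bit fails the check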
Token Packets
Only the USB host sends out token packets. Each token packet takes up four bytes, which are divided up into five functional parts. Figure 11.2 graphically shows the layout of a token packet.
The first two bytes take the standard form of all USB packets. The first byte is a Sync Field that marks the beginning of the token’s bit-stream. The second byte is the Packet Identifier.
The PID byte defines four types of token packets. These include an Out packet that carries data from the host to a device; an In packet that carries data from the device to the host; a Setup packet that targets a specific endpoint; and a Start of Frame packet that helps synchronize the system.
Data Packets
The actual information transferred through the USB system takes the form of data packets. As with all USB packets, a data packet begins with a one-byte Sync Field followed by the Packet Identifier. The actual data follows as a sequence of 0 to 1,023 bytes. A two-byte cyclic redundancy check verifies the accuracy of only the Data Field, as shown in Figure 11.3. The PID field relies on its own redundancy check mechanism.
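The sketch below shows how such a check works in principle. It uses the CRC-16 polynomial commonly cited for the USB data field (x^16 + x^15 + x^2 + 1); the seed value and bit-ordering details here are illustrative rather than a byte-exact rendition of the USB algorithm.

def crc16(data, poly=0x8005):
    # Bitwise CRC-16 over a byte string using the polynomial
    # x^16 + x^15 + x^2 + 1 (0x8005). Seed and bit ordering differ among
    # implementations; this version is for illustration only.
    crc = 0xFFFF
    for byte in data:
        for bit in range(8):
            incoming = (byte >> bit) & 1      # least significant bit first
            feedback = incoming ^ (crc >> 15)
            crc = (crc << 1) & 0xFFFF
            if feedback:
                crc ^= poly
    return crc

payload = b"example data field"
print(hex(crc16(payload)))
# A receiver computes the same CRC over the received bytes and compares it with
# the two CRC bytes that follow the data field; a mismatch signals corruption.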
Handshake Packets
Handshake packets handle flow control in the USB system. All are two bytes long, comprising nothing more than the Sync Field and a Packet Identifier byte that acknowledges proper receipt of a packet, as shown in Figure 11.4.
Standard
The USB standard is maintained by the USB Implementers Forum. You can download a complete copy of the current version of the specifications from the forum’s Web site.
USB Implementers Forum
5440 SW Westgate Dr., Suite 217
Portland, OR 97221
Phone: 503-296-9892
Fax: 503-297-1090
Web site: www.usb.org
FireWire
Also known as IEEE 1394, i.LINK, and DV, FireWire is a serial interface that’s aimed at high-throughput devices such as hard disk and tape drives, as well as consumer-level multimedia devices such as digital camcorders, digital VCRs, and digital televisions. Originally it was conceived as a general-purpose interface suitable for replacing legacy serial ports, but with blazing speed. However, it has been most used in digital video—at least so far. Promised new performance and a choice of media may rekindle interest in FireWire as a general-purpose, high-performance interconnection system.
For the most part, FireWire is a hardware interface. It specifies speeds, timing, and a connection system. The software side is based on SCSI. In fact, FireWire is one of the several hardware interfaces included in the SCSI-3 standards.
As with other current port standards, FireWire continues to evolve. Development of the standard began when the Institute of Electrical and Electronic Engineers (IEEE) assigned a study group the task of clearing up the thickening morass of serial standards in September 1986. Hardly four months later (in January 1987), the group had already outlined basic concepts underlying FireWire, some of which still survive in today’s standard—including low cost, a simplified wiring scheme, and arbitrated signals supporting multiple devices. The IEEE approved the first FireWire standard (as IEEE 1394-1995) in 1995, based on a design with one connector style and two speeds (100 and 200Mbps).
In the year 2000, the institute approved the standard IEEE 1394a-2000, which boasts a new, miniaturized connector, a higher speed (400Mbps), and streamlined signaling that makes connections quicker (because of reduced overhead) and more reliable.
As this is written, the engineers at the institute are developing a successor standard, IEEE 1394b, that will quadruple the speed of connections, add both fiber-optic and twisted-pair wiring schemes, and add a new, more reliable transport protocol.
For now, FireWire is best known as a 400Mbps connection system for plugging digital camcorders into computers, letting you capture video images (live or tape), edit them, and publish them on CD or DVD.
Overview
FireWire differs from today’s other leading port standard, USB, in that it is a point-to-point connection system. That is, you plug a FireWire device directly into the port on your computer. To accommodate more than one FireWire device (the standard allows for a maximum of 63 interconnected devices), the computer host may have multiple jacks or the FireWire device may have its own input jack so that you can daisy-chain multiple devices to a single computer port.
Although FireWire does not use hubs in the network or USB sense, in its own nomenclature it does. In the FireWire scheme, a device with a single FireWire port is a leaf. A device with two ports is called a pass-through, and a device with three ports is called a branch or hub. Pass-through and branch nodes operate as repeaters, reconstituting the digital signal for the next hop. Each FireWire system also has a single root, which is the foundation around which the rest of the system organizes itself.
You can daisy-chain devices with up to 16 links to the chain. After that, the delays in relaying the signals from device to device go beyond those set in the standard. Accommodating larger numbers of devices requires using branches to create parallel data paths.
Under the current standard (1394a), FireWire allows a maximum cable length of 4.5 meters (about 15 feet). With 16 links to a daisy-chain, two FireWire devices could be separated by as much as 72 meters (about 235 feet).
Each FireWire cable contains two active signal pairs, so signals can travel both ways through the cable on different wire pairs. Connectors at each end of the cable are the same, so wiring is easy—you just plug things together. Software takes care of all the details of the connection. The exception is that the 1394a standard also allows for a miniaturized connector to fit in tight places (such as a camcorder).
FireWire also allows engineers to use the same signaling system for backplane designs. That is, FireWire could be used as an expansion bus inside a computer as well as the port linking to external peripherals. Currently, however, FireWire is not used as a backplane inside personal computers.
The protocol used by FireWire uses 64-bit addressing. The 63 device limitation per chain results from only six bits being used for node identification. The rest of the addressing bits provide for large networks and the use of direct memory addressing—10 bits for network identifications and 48 bits for memory addresses (the same as the latest Intel microprocessor, enough for uniquely identifying 281TB of memory per device). A single device may use multiple identifications.
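A short Python sketch makes the arithmetic concrete. It packs and unpacks the three fields just described: 10 bits of bus ID, 6 bits of node ID, and a 48-bit memory offset. The exact bit positions chosen here are for illustration.

def pack_1394_address(bus_id, node_id, offset):
    # Pack the three fields described above into one 64-bit value:
    # 10 bits of bus ID, 6 bits of node ID, and a 48-bit memory offset.
    assert 0 <= bus_id < 1 << 10
    assert 0 <= node_id < 1 << 6
    assert 0 <= offset < 1 << 48
    return (bus_id << 54) | (node_id << 48) | offset

def unpack_1394_address(address):
    return (address >> 54) & 0x3FF, (address >> 48) & 0x3F, address & ((1 << 48) - 1)

addr = pack_1394_address(bus_id=1, node_id=5, offset=0x1234)
print(hex(addr), unpack_1394_address(addr))
print((1 << 48) // 10**12, "trillion bytes addressable per node")   # roughly 281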
Signaling
To minimize noise, data connections in FireWire use differential signals, which means it uses two wires that carry the same signal but of opposite polarity. Receiving equipment subtracts the signal on one wire from that on the other to find the data as the difference between the two signals. The benefit of this scheme is that any noise gets picked up by the wires equally. When the receiving equipment subtracts the signals on the two wires, the noise gets eliminated—the equal noise signals subtracted from each other equals zero.
The original FireWire standard used a patented form of signal coding called data-strobe coding, using two differential wire pairs to carry a single data stream. One pair carried the actual data; the second pair, called the strobe lines, changed state only when the data pair did not, so that one and only one of the pairs changed polarity in each bit cell. For example, if the data line carried two sequential bits of the same value, the strobe line reversed polarity to mark the transition between them. If a sequence of two bits changed the polarity of the data lines (a one followed by a zero, or a zero followed by a one), the strobe line did not change polarity. Combining the data and strobe lines (an exclusive-OR operation) exactly reconstructed the clock signal of the sending system, allowing the sending and receiving devices to lock together precisely.
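The following Python sketch, a simplified model rather than production code, shows the essence of data-strobe coding: the strobe toggles only when the data does not, and an exclusive-OR of the two lines yields a signal that changes in every bit cell, which is the recovered clock.

def ds_encode(bits, d_init=0, s_init=0):
    # Data-strobe encoding sketch: the data line carries the bits directly,
    # and the strobe line toggles only when the data line does not change.
    # Exactly one of the two lines changes state in every bit cell.
    d, s = d_init, s_init
    data_line, strobe_line = [], []
    for bit in bits:
        if bit == d:       # data line unchanged, so the strobe toggles
            s ^= 1
        else:              # data line changes, strobe holds its state
            d = bit
        data_line.append(d)
        strobe_line.append(s)
    return data_line, strobe_line

bits = [1, 1, 0, 0, 1, 0, 1, 1]
d, s = ds_encode(bits)
clock = [di ^ si for di, si in zip(d, s)]
print(clock)   # alternates 1, 0, 1, 0, ... giving the receiver one clock edge per bit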
FireWire operates as a two-way channel, with different pairs of wire used for sending and receiving. When one pair is sending data, the other operates as its strobe signal. In receiving data, the pair used for strobe in sending contains the data, and the other (data in sending) carries the receive strobe. In other words, as a device shifts between sending and receiving, it shifts which wire pairs it uses for data and strobe.
The 1394b specification alters the data coding to use a system termed 8B/10B coding, developed by IBM. The scheme encodes eight-bit bytes in 10-bit symbols that guarantee a sequence of more than five identical bits never occurs and that the number of ones and zeros in the code balance—a characteristic important to engineers because it results in no shift of the direct current voltage in the system.
Configuration
FireWire allows you to connect multiple devices together and uses an addressing system so that the signals sent through a common channel are recognized only by the proper target device. The linked devices can independently communicate among themselves without the intervention of your computer. Each device can communicate at its own speed—a single FireWire connection shifts between speeds to accommodate each device. Of course, a low-speed device may not be able to pass through higher-speed signals, so some forethought is required to put together a system in which all devices operate at their optimum speeds.
FireWire eliminates such concern about setting device identifications with its own automated configuration process. Whenever a new device gets plugged into a FireWire system (or when the whole system gets turned on), the automatic configuration process begins. By signaling through the various connections, each device determines how it fits into the system, either as a root node, a branch, a pass-through, or a leaf. The root node also sends out a special clock signal. Once the connection hierarchy is set up, the FireWire devices determine their own ID numbers from their location in the hierarchy and send identifying information (ID and device type) to their host.
You can hot-plug devices into a FireWire tree. That is, you can plug in a new device to a group of FireWire devices without switching off the power. When you plug in a new device, the change triggers a bus reset that erases the system’s stored memory of the previous set of devices. Then the entire chain of devices goes through the configuration process again, and each device is assigned an address. The devices then identify themselves to one another and wait for data transfers to begin.
Arbitration
FireWire transfers data in packets, a block of data preceded by a header that specifies where the data goes and its priority. In the basic cable-based FireWire system, each device sharing a connection gets a chance to send one packet in an arbitration period that’s called a fairness interval. The various devices take turns until all have had a chance to use the bus. After each packet gets sent, a brief time called the sub-action gap elapses, after which another device can send its packet. If no devices start to transmit when the sub-action gap ends, all devices wait a bit longer, stretching the time to an arbitration reset gap. After that time elapses, a new fairness interval begins, and all devices get to send one more packet. The cycle continues.
To handle devices that need a constant stream of data for real-time display, such as video or audio signals, FireWire uses a special isochronous mode. Every 125 microseconds, one device in the FireWire system that needs isochronous data sends out a special timing packet that signals that isochronous devices can transmit. Each takes a turn in order of its priority, leaving a brief isochronous gap delay between their packets. When the isochronous gap delay stretches out to the sub-action gap length, the devices using ordinary asynchronous transfers take over until the end of the 125-microsecond cycle, when the next isochronous period begins.
The scheme guarantees that video and audio gear can move its data in real time with a minimum of buffer memory. (Audio devices require only a byte of buffer; video may need as many as six bytes.) The 125-microsecond period matches the sampling rate used by digital telephone systems, which eases connections to digital telephone services.
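The arithmetic behind that match is simple, as this short Python snippet shows.

cycle_us = 125                          # microseconds per isochronous cycle
cycles_per_second = 1_000_000 // cycle_us
print(cycles_per_second)                # 8000 -- the 8KHz sampling rate of digital telephony
print(cycles_per_second * 8)            # one eight-bit sample per cycle yields a 64Kbps voice channel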
The new 1394b standard also brings a new arbitration system called Bus Owner/Supervisor/Selector (BOSS). Under this scheme, a device takes control of the bus as the BOSS by being the last device to acknowledge the receipt of a packet sent to it (rather than broadcast over the entire tree) or by receiving a specific grant of control. The BOSS takes full command of the tree, even selecting the next node to be the BOSS.
Connectors
In the original FireWire version only a single, small, six-pin connector was defined for all purposes. Each cable had an identical connector on each end, and all FireWire ports were the same. The contacts were arranged in two parallel rows on opposite sides inside the metal-and-plastic shield of the connector. The asymmetrical “D” shape of the active end of the connector ensured that you plugged it in properly. Figure 11.5 shows this connector.
The revised 1394a standard added a miniaturized connector. To keep it compact, the design omitted the two power contacts. This design is favored for personal electronic devices such as camcorders. Figure 11.6 shows this connector.
The 1394b standard will add two more connectors to the FireWire arsenal, each with eight contacts. The Beta connector is meant for systems that use only the new 1394b signaling system and do not understand the earlier versions of the standard. In addition, the new standard defines a bilingual connector, one that speaks both the old and new FireWire standards. The new designs are keyed so that while both beta and bilingual connectors will fit a bilingual port, only a beta connector will fit a beta-only port. Figure 11.7 shows this keying system.
Cabling
In current form, FireWire uses ordinary copper wires in a special cable design. Two variations are allowed—one with solely four signal wires and one with the four signal wires and two power wires. In both implementations, data travels across two shielded twisted pairs of AWG 28 gauge wire, with a nominal impedance of 110 ohms. In the six-wire version, two AWG 22 gauge wires additionally carry power at 8 to 33 volts with up to 1.5 amperes to power a number of peripherals. An outer shield covers the entire collection of conductors.
The upcoming IEEE 1394b standard also allows for two forms of fiber optical connection—glass and plastic—as well as ordinary Category 5 twisted-pair network cable. The maximum length of plastic fiber optical connections is 50 meters (about 160 feet); for glass optical fiber, the maximum length is 100 meters (about 320 feet). Either style of optical connection can operate at speeds of 100 or 200Mbps. Category 5 wire allows connections of up to 100 meters but only at the lowest data rate sanctioned by the standard, which is 100Mbps.
All FireWire cables are crossover cables. That is, the signals that appear on pins 1 and 2 at one end of the cable “cross over” to pins 3 and 4 at the other end. This permits all FireWire ports to be wired the same and serve both as inputs and outputs. The same connector can be used at each end of the cable, and, unlike USB, no keying is necessary.
The FireWire wiring scheme depends on each of the devices that are connected together to relay signals to the others. Pulling the plug to one device could potentially knock down the entire connection system. To avoid such difficulties and dependencies, FireWire uses its power connections to keep in operation the interface circuitry in otherwise inactive devices. These power lines could also supply enough current to run entire devices. No device may draw more than three watts from the FireWire bus, although a single device may supply up to 40 watts. The FireWire circuitry itself in each interface requires only about two milliwatts.
The FireWire wiring standard allows for up to 16 hops of 4.5 meters (about 15 feet) each. As with current communications ports, the standard allows you to connect and disconnect peripherals without switching off power to them. You can daisy-chain FireWire devices or branch the cable between them. When you make changes, the network of connected devices will automatically reconfigure itself to reflect the alterations.
IrDA
The one thing you don’t want with a portable computer is a cable to tether you down; yet most of the time you have to plug into one thing or another. Even a simple and routine chore like downloading files from your notebook machine into your desktop computer gets tangled in cable trouble. You have to plug in both ends, and reaching behind your desktop machine is only a little more elegantly done than fishing into a catch basin for a fallen quarter; more likely than not, you’ll unplug something else that you’ll inevitably need later, only to discover the dangling cord. On top of that, you’ve got to tote that writhing cable along with you wherever you go. There has to be a better way.
There is. You can link your computer to other systems and components with a light beam. On the rear panel of many notebook computers, you’ll find a clear LED or a dark red window through which your system can send and receive invisible infrared light beams. Although originally introduced to allow you to link portable computers to desktop machines, the same technology can tie in peripherals such as modems and printers, all without the hassle of plugging and unplugging cables.
History
On June 28, 1993, a group of about 120 representatives from 50 computer-related companies got together to take the first step in cutting the cord. Creating what has come to be known as the Infrared Data Association (IrDA), this group aimed at more than making your computer more convenient to carry. It also saw a new versatility and, hardly incidentally, a way to trim its own costs.
The idea behind the get-together was to create a standard for using infrared light to link your computer to peripherals and other systems. The technology had already been long established, not only in television remote controls but also in a number of notebook computers already on the market. Rather than build a new technology, the goal of the group was to find common ground, a standard so that the products of all manufacturers could communicate with the computer equivalent of sign language.
Hardly a year later, on June 30, 1994, the group approved its first standard. The original specification, now known as IrDA version 1.0, essentially gave the standard RS-232C port an optical counterpart, one with the same data structure and, alas, speed limit. In August 1995, IrDA took the next step and approved high-speed extensions that pushed the wireless data rate to 4Mbps.
Overview
More than a gimmicky cordless keyboard, IrDA holds an advantage that makes computer manufacturers—particularly those developing low-cost machines—eye it with interest. It can cut several dollars from the cost of a complex system by eliminating some expensive hardware, a connector or two, and a cable. Compared to the other wireless technology, radio, infrared requires less space because it needs only a tiny LED instead of a larger and more costly antenna. Moreover, infrared transmissions are not regulated by the FCC as are radio transmissions. Nor do they cause interference to radios, televisions, pacemakers, and airliners. The range of infrared is more limited than radio and restricted to the line of sight over a narrow angle. However, these weaknesses can become strengths for those who are security conscious.
The original design formulated by IrDA was for a replacement for serial cables. To make the technology easy and inexpensive to implement with existing components, it was based on the standard RS-232C port and its constituent components. The original IrDA standard used asynchronous communication with the same data frame as legacy serial ports, as well as the same data rates, from 2400 to 115,200 bits per second.
To keep power needs low and prevent interference among multiple installations in a single room, IrDA kept the range of the system low, about one meter (three feet). Similarly, the IrDA system concentrates the infrared beam used to carry data because diffusing the beam would require more power for a given range and be prone to causing greater interference among competing units. The infrared emitters used in the IrDA system consequently focus their beams into a cone with a spread of about 30 degrees.
After the initial serial-port replacement design was in place, IrDA worked to make its interface suitable for replacing parallel ports as well. That goal led to the creation of the IrDA high-speed standards for transmissions at data rates of 0.576, 1.152, and 4.0Mbps. The two higher speeds use a packet-based synchronous system that requires a special hardware-based communication controller. This controller monitors and controls the flow of information between the host computer’s bus and communications buffers.
Consequently, a wide gulf of differences separates low-speed and high-speed IrDA systems. Although IrDA designed the high-speed standard to be backward compatible with old equipment, making the higher speeds work requires special hardware. In other words, although high-speed IrDA devices can successfully communicate with lower-speed units, such communications are constrained to the speeds of the lower-speed units. Low-speed units cannot operate at high speeds without their hardware being upgraded.
IrDA defines not only the hardware but also the data format used by its system. The group has published six standards to cover these aspects of IrDA communications. The hardware itself forms the physical layer. In addition, IrDA defines a link access protocol termed IrLAP and a link management protocol called IrLMP that describe the data formats used to negotiate and maintain communications. All IrDA ports must follow these standards. In addition, IrDA has defined an optional transport protocol and optional Plug-and-Play extensions to allow for the smooth integration of the system into modern computers. The group’s IrCOMM standard describes a standard way for infrared ports to emulate conventional computer serial and parallel ports.
Infrared Light
Infrared light is invisible electromagnetic radiation that has a wavelength longer than that of visible light. Whereas you can see light that ranges in wavelength from 400 nanometers (deep violet) to 700 nanometers (dark red), infrared stretches from 700 nanometers to 1000 or more. IrDA specifies that the infrared signal used by computers for communication has a wavelength between 850 and 900 nanometers.
Data Rates
All IrDA ports must be able to operate at one basic speed—9600 bits per second. All other speeds are optional.
The IrDA specification allows for all the usual speed increments used by conventional serial ports, from 2400bps to 115,200bps. All these speeds use the default modulation scheme, Return-to-Zero Inverted (RZI). High-speed IrDA version 1.1 adds three additional speeds, 576Kbps, 1.152Mbps, and 4.0Mbps, based on a pulse-position modulation scheme.
Regardless of the speed range implemented by a system or used for communications, IrDA devices first establish communications at the mandatory 9600bps speed using the Link Access Protocol. Once the two devices establish a common speed for communicating, they switch to it and use it for the balance of their transmissions.
Pulse Width
The infrared cell of an IrDA transmitter sends out its data in pulses, each lasting only a fraction of the basic clock period or bit-cell. The relatively wide spacing between pulses makes each pulse easier for the optical receiver to distinguish.
At speeds up to and including 115,200 bits per second, each infrared pulse must be at least 1.41 microseconds long. Each IrDA data pulse nominally lasts just 3/16th of the length of a bit-cell, although pulse widths a bit more than 10 percent greater remain acceptable. For example, each bit cell of a 9600bps signal would occupy 104.2 microseconds (that is, one second divided by 9600). A typical IrDA pulse at that data rate would last 3/16th that period, or 19.53 microseconds.
At higher speeds, the minimum pulse length is reduced to 295.2 nanoseconds at 576Kbps and to only 115 nanoseconds at 4.0Mbps. At these higher speeds, the nominal pulse width is one-quarter of the character cell. For example, at 4.0Mbps, each pulse is only 125 nanoseconds long. Again, pulses about 10 percent longer remain permissible. Table 11.4 summarizes the speeds and pulse lengths.
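The Python snippet below reproduces the arithmetic for the lower speeds, where the nominal pulse is 3/16th of the bit cell.

for bps in (2400, 9600, 115200):
    bit_cell = 1 / bps                    # seconds per bit cell
    pulse = bit_cell * 3 / 16             # nominal IrDA pulse: 3/16th of the bit cell
    print(f"{bps:>7} bps  bit cell {bit_cell*1e6:7.2f} us  pulse {pulse*1e6:6.2f} us")
# At 9600bps this prints a 104.17-microsecond bit cell and a 19.53-microsecond
# pulse, matching the figures above.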
Modulation
Depending on the speed at which a link operates, it may use one of two forms of modulation. At speeds lower than 4.0Mbps, the system employs Return-to-Zero Inverted (RZI) modulation.
At the 4.0Mbps data rate, the IrDA system shifts to pulse position modulation. Because the IrDA system involves four discrete pulse positions, it is abbreviated 4PPM.
IrDA requires data to be transmitted only in eight-bit format. In terms of conventional serial-port parameters, a data frame for IrDA comprises a start bit, eight data bits, no parity bits, and a stop bit, for a total of 10 bits per character. Note, however, that zero insertion may increase the length of a transmitted byte of data. Any inserted zeroes are removed automatically by the receiver and do not enter the data stream. No matter the form of modulation used by the IrDA system, all byte values are transmitted with the least significant bit first.
Note that with RZI modulation, long sequences of logical ones will suppress pulses for the entire duration of the sequence. To prevent such a lengthy gap from appearing in the signal and causing a loss of sync, moderate speed IrDA systems add extra pulses to the signal with bit-stuffing (as discussed in Chapter 8).
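The sketch below shows the stuffing idea in Python. The rule used here (inserting a zero after every five consecutive ones, in the style of HDLC framing) is an assumption for illustration, in the spirit of the bit-stuffing discussed in Chapter 8.

def bit_stuff(bits, run=5):
    # Insert a zero after every run of `run` consecutive ones so the stream
    # never goes too long without a pulse (under RZI, only zeroes produce
    # pulses). The run length of five is an illustrative, HDLC-style choice.
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == 1 else 0
        if ones == run:
            out.append(0)      # stuffed bit, removed again by the receiver
            ones = 0
    return out

def bit_unstuff(bits, run=5):
    out, ones, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        ones = ones + 1 if b == 1 else 0
        if ones == run:
            i += 1             # skip the stuffed zero that must follow
            ones = 0
        i += 1
    return out

raw = [1, 1, 1, 1, 1, 1, 1, 0, 1]
stuffed = bit_stuff(raw)
print(stuffed)                       # a zero appears after the first five ones
print(bit_unstuff(stuffed) == raw)   # True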
Format
The IrDA system doesn’t deal with data at the bit or byte level but instead arranges the data transmitted through it in the form of packets, which the IrDA specification also terms frames. A single frame can stretch from 5 to 2050 bytes (and sometimes more) in length. As with other packetized systems, an IrDA frame includes address information, data, and error correction, the last of which is applied at the frame level. The format of the frame is rigidly defined by the IrDA Link Access Protocol standard, discussed later.
Aborted Frames
Whenever a receiver detects a string of seven or more consecutive logical ones—that is, an absence of optical pulses—it immediately terminates the frame in progress and disregards the data it received (which is classed as invalid because of the lack of error-correction data). The receiver then awaits the next valid frame, signified by a start-of-frame flag, address field, and control field. Any frame that ends in this summary manner is termed an aborted frame.
A transmitter may intentionally abort a frame or a frame may be aborted because of an interruption in the infrared signal. Anything that blocks the light path will stop infrared pulses from reaching the receiver and, if long enough, abort the frame being transmitted.
Interference Suppression
High-speed systems automatically mute lower-speed systems that are operating in the same environment to prevent interference. To stop the lower-speed link from transmitting, the high-speed system sends out a special Serial Infrared Interaction Pulse (SIP) at intervals no longer than half a second. The SIP is a pulse 1.6 microseconds long, followed by 7.1 microseconds of darkness, parameters exactly equal to a packet start pulse. When the low-speed system sees what it thinks is a start pulse, it automatically starts looking for data at the lower rates, suppressing its own transmission for half a second. Before it has a chance to start sending its own data (if any), another SIP quiets the low-speed system for the next half second.
Bluetooth
Radio yields the most versatile connection system: no wires and no worries. Because radio waves can slip through most office walls, desktop ornaments, office supplies, and even employees, radio-based links eliminate the line-of-sight requirements of optical links such as IrDA. Linking devices with radio waves consequently yields the most convenient connection for workers to free their peripherals from the chains of interconnecting cables. A radio link can provide a reliable, cord-free connection that eliminates the snarl on the rear of every desktop PC. It also allows you to link your wireless devices—in particular, your cell phone—to your PC and keep everything wireless. Hardly a novel idea, of course, but one that has been a long time coming for practical connections.
Finally, a single standard may bring the wireless dream to life. To provide common ground and a standard for radio-based connections between PCs, their peripherals, and related communications equipment, several major corporations worked together to develop the Bluetooth specification. They designed the standard for the utmost in convenience, coupled with low cost but sacrificing range—Bluetooth is a short-range system suitable for linking devices in an office suite rather than across miles like a cell phone.
Originally conceived as a way to link cellular devices to PCs, the actual specification transcends its origins. Bluetooth not only makes cell phone connections possible but also could allow you to use your keyboard or mouse without a physical connection to your PC and without fretting about office debris blocking optical signals. But Bluetooth is more than a simple interface. It can become a small wireless network of intercommunicating devices, handling both voice and data with equal ease. Although not a rival to traditional networking systems—its speed limitations alone see to that—Bluetooth adds versatility that combines cell phone and PC technology.
The Bluetooth promoters refer to its multidevice links as a piconet (smaller than even a micronet), able to link up to eight devices. Bluetooth allows even greater assemblages of equipment by linking piconets together and accommodating temporarily inactive equipment within its reach.
On the other hand, the data speed of the Bluetooth system is modest. At most, Bluetooth can move bits at a claimed rate of about 723Kbps asymmetrically—that is, the high rate is in one direction; the return channel is slower, about one-fifth that rate. Moreover, Bluetooth slows to accommodate bidirectional data and phone conversations. Despite the modest data rate, however, the Bluetooth bit-rate is high enough to handle three simultaneous telephone conversations or a combination of voice and data simultaneously.
Good as it sounds, Bluetooth currently has a number of handicaps to overcome. It is not supported by any version of Microsoft Windows in current release (including the initial release of Windows XP). Microsoft, however, promises to add its own native support for Bluetooth in subsequent releases of Windows.
History
Certainly Bluetooth was not the first attempt at creating radio-based data links for computers. Wireless schemes for exchanging data have been around longer than personal computers. But Bluetooth differs from any previous radio-based data-exchange system in that it was conceived as an open standard for the computer and communications industries to facilitate the design of compatible wireless hardware.
As with so many modern standards, Bluetooth represents the work of an industry consortium. In May 1998, representatives from five major corporations involved with PCs, office equipment, and cellular telephones jointly conceived the idea of Bluetooth and began working toward creating the standard. The five founders were Ericsson (Telefonaktiebolaget LM Ericsson), International Business Machines Corporation, Intel Corporation, Nokia Corporation, and Toshiba Corporation. Together they formed the Bluetooth Special Interest Group (SIG) and started work on the standard and the technologies needed to make it a reality. The SIG released the first version of the specification, Bluetooth 1.0, on July 24, 1999. A slightly revised version was released in December 1999.
Membership in the Bluetooth SIG grew to nine on December 1, 1999, when 3Com Technologies, Lucent Technologies, Microsoft Corporation, and Motorola, Inc., joined the group. In addition, over 1,200 individuals and companies have adopted the technology by entering an agreement with the SIG that allows them to use the standard and share the intellectual property required to implement it.
Although support for Bluetooth has been slow in coming, manufacturers have adapted the technology for low-speed computer peripherals (such as wireless keyboards and mice). Owing to the success of other wireless technologies, most successful applications of Bluetooth are in communications products.
Overview
Bluetooth is a wireless packetized communications system that allows multiple devices to share data in a small network. Heir to both cell phone and digital technologies, it nestles between several existing standards, embracing them. It can link to your PC using a USB connection, and it shares logical layers with IrDA. It not only handles data like a traditional serial port but also can carry more than 60 RS-232C connections.
In theory, a Bluetooth system operates entirely transparently. Devices link themselves together without you having to do anything. All you need to do is turn on your Bluetooth devices and bring them within range of one another. For example, available devices should automatically pop up on your Windows desktop—at least once Windows gains Bluetooth support. You can then drag files to and from the device as if it were a local Windows resource.
Behind the scenes, however, things aren’t quite so simple. The Bluetooth system must accommodate a variety of device and data types. It needs to keep in constant contact with each device. It must be able to detect when a new device appears and when other devices get switched off or venture out of range. It has to moderate the conversations between units, ensuring they don’t all try to talk at the same time and interfere with one another.
Software Side
As an advanced interface, Bluetooth heavily processes the raw data it transmits. It repackages serial data bits into packets with built-in error control. It then combines a series of packets of related serial data into links. It further processes the links through multiplexing so that several serial streams can simultaneously share a single Bluetooth connection.
Bluetooth packetizes data, breaking a serial input stream into small pieces, each containing address and optional error-correction information. A series of packets that starts as a single data stream and is later reconstructed into a replica of that stream is a link.
The Bluetooth standard supports two kinds of data links: synchronous and asynchronous. Synchronous data is typically voice information, such as audio from telephone conversations. Asynchronous data is typically computer data. The chief difference is that synchronous data is time dependent, so synchronous packets get transmitted once without regard to their reception. If a synchronous packet gets lost during transmission, it is forever lost.
Synchronous links between Bluetooth devices provide a full-duplex channel with an effective data rate of 64Kbps in each direction. In effect, a synchronous link is a standard digital telephone channel with eight-bit resolution and an 8-KHz sampling rate. The Bluetooth standard allows for two devices to simultaneously share three such synchronous links, the equivalent of three real-time telephone conversations. All links begin asynchronously because commands can only be sent in asynchronous packets. After the link is established, the master and slave can negotiate to switch over to a synchronous link for voice transfers or to move data asynchronously.
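The arithmetic behind the 64Kbps figure is straightforward, as this Python snippet shows.

bits_per_sample = 8
samples_per_second = 8000                      # the 8KHz sampling rate
link_rate = bits_per_sample * samples_per_second
print(link_rate)                               # 64000 bits per second in each direction
print(3 * link_rate)                           # three simultaneous voice links move 192Kbps each way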
Each piconet has a single master and, potentially, multiple slaves. Each piconet shares a single communications channel with all the devices (master and slave) locked together on a common frequency and using a common clock, as discussed later. The single channel is subdivided into one or more links—asynchronous and/or synchronous.
To handle contention between multiple links on its single channel, Bluetooth uses time-division multiplexing. That is, each separate packet of a link gets a time period for transmission. The standard divides the communications channel into time slots, each 625 microseconds long. The shortest single packet fits a single slot with room to spare, although Bluetooth allows packets to stretch out for up to five slots. The maximum length of a single-slot packet is 366 microseconds. The system accommodates larger packets by letting them extend through up to five slots, filling the entire time of four of the slots and part of the fifth.
In the Bluetooth system, each packet also defines a hop. That is, after each packet is sent, the Bluetooth system switches to (or hops to) another carrier frequency. As noted later, frequency-hopping helps ensure the integrity of Bluetooth transmissions. The minimum hop length corresponds to a single slot, although a hop can last for up to five slots to accommodate a long packet.
The time division duplexing of the Bluetooth system works by assigning even-numbered slots to the master and odd-numbered slots to the slaves. Masters can begin their transmissions only in even-numbered slots. If a packet lasts for an even number of slots (two or four), no slave can begin until the next odd-numbered slot. In effect, then, packets use an odd number of slots even if they use only a shorter, even number of slots.
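A few lines of Python put numbers on the slot structure just described.

slot_us = 625                      # microseconds per time slot
print(1_000_000 / slot_us)         # 1600 slots per second -- one potential hop per slot
print(366 / slot_us)               # a single-slot packet fills roughly 59 percent of its slot
print(5 * slot_us)                 # a five-slot packet spans 3125 microseconds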
Hardware Side
Bluetooth hardware provides the connection that carries the processed and packetized data. Although Bluetooth makes a wireless connection and its hardware is essentially invisible, the system still requires a collection of circuits to transmit and receive the data properly.
As with all radio systems, Bluetooth starts with a carrier wave and modulates it with data. Unlike most common radio systems, however, Bluetooth does not use a single fixed carrier frequency but rather hops to different frequencies more than a thousand times each second. Because Bluetooth is a serial transmission system, timing is important for sorting out the data bits. Each Bluetooth device maintains a clock that helps it determine when each bit in its serial stream appears. Bluetooth cleverly combines these necessary elements to make a wireless communications network.
Clocks
Each Bluetooth device has its own internal clock that paces its communications. The clock of each device operates independently at approximately the necessary rate.
For the Bluetooth signals to be effectively demodulated, the clocks of the master and slaves must be synchronized. The master device sets the frequency for all the slaves with which it communicates. The slaves determine the exact frequency of the clock from the packet data. The preamble of each packet contains a predetermined pattern of several cycles, which the slaves can use to achieve synchrony. The Bluetooth system does not alter the operation of the clock of the slaves, however. Instead, it stores the difference between the master and slave clocks and uses this difference value to maintain its lock on the master.
When another master takes control during a communication session, each slave readjusts the stored difference value to maintain its precise frequency coordination.
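In code, the offset scheme looks something like the following Python sketch; the names and structure are illustrative only and do not reflect any actual Bluetooth implementation.

class SlaveClock:
    # Sketch of the offset scheme described above: the slave's own clock keeps
    # running untouched, and a stored difference maps it onto the master's
    # timeline. Names and structure are illustrative only.
    def __init__(self, native_ticks=0):
        self.native_ticks = native_ticks
        self.offset = 0

    def synchronize(self, master_ticks):
        # Record the difference rather than altering the local clock.
        self.offset = master_ticks - self.native_ticks

    def estimated_master(self):
        return self.native_ticks + self.offset

slave = SlaveClock(native_ticks=1000)
slave.synchronize(master_ticks=4250)
slave.native_ticks += 10                      # the local clock keeps ticking on its own
print(slave.estimated_master())               # 4260 -- tracks the master via the stored offset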
Topology
Bluetooth is designed to handle a variety of connection types. The basic link is point-to-point, two devices communicating only with one another. In such a link, one device operates as the master and the other as the slave. In a piconet configuration, a single master can communicate with up to seven active slaves (a total of eight devices intercommunicating). In addition, other slaves may lock on to the master’s signal and be ready to communicate without sending out active signals. Such inactive slaves are said to be in a parked state.
The master in the piconet determines which devices can communicate (that is, which slaves are active or parked). In addition, several piconets can be linked together into a scatternet, with the master of one piconet communicating to a master or slave in another.
Frequencies
Bluetooth operates at radio frequencies assigned to industrial, scientific, and medical devices; a range termed the ISM band. This range of frequencies in the UHF (Ultra High Frequency) band has been set aside throughout most of the world for unlicensed, low-power electronic equipment. Near the top of the UHF range, the ISM band uses frequencies about twice that of the highest UHF television channel.
The exact frequencies available vary somewhat in North America, Europe, and Japan. In addition, France and Spain differ from the rest of Europe (although both countries are working on moving to the standards used throughout the rest of Europe).
Bluetooth uses channels one megahertz wide for its signals. Rather than operating on a single channel, a Bluetooth system uses them all. It uses the channels one at a time but switches between them to help minimize interference and fading. It can also help keep communications secure. Only the devices participating in a piconet know which channel they will hop to next.
In Europe (except France and Spain) and North America, the Bluetooth system can hop between 79 different channels. Elsewhere, the choices are limited to 23 channels. The available frequencies and number of channels are summarized in Table 11.5.
A given Bluetooth system does not operate on one frequency but rather uses them all, hopping from one channel to another, up to 1,600 times per second. If a given asynchronous packet does not get through on one frequency due to interference (and is therefore not acknowledged), the next hop will send out a duplicate packet at a different frequency.
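The sketch below is a deliberately simplified toy, not the hop-selection kernel defined by the Bluetooth specification. It only illustrates the principle that members of a piconet sharing the same inputs (here, a master address and a common clock value) can all compute the same hard-to-predict channel sequence.

import hashlib

def next_channel(master_address, clock, channels=79):
    # Toy hop-sequence generator: NOT the real Bluetooth selection kernel.
    # It merely shows that values shared by the piconet members (the master's
    # address and the common clock) can drive an agreed-upon channel choice.
    digest = hashlib.sha256(f"{master_address}:{clock}".encode()).digest()
    return digest[0] % channels

master = "00:11:22:33:44:55"
print([next_channel(master, tick) for tick in range(8)])   # identical on every member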
Unfortunately, Bluetooth does not have the entire 2.4GHz band to itself. The IEEE 802.11 wireless-networking standard currently uses the same frequencies, and interference between the two systems (where both are active) is inevitable. Although in the long term IEEE 802.11 will migrate to the 5GHz frequency range, at present the only way to entirely prevent interference between the two systems is to use one or the other, not both.
Power
The Bluetooth specification defines three classes of equipment based on transmitter power. Class 1 devices are the most powerful and can transmit with up to 100 milliwatts of output power. Class 3 devices transmit with less than 1 milliwatt. Table 11.6 lists the maximum and minimum output powers for each power class.
As with any radio-based system, greater power increases the coverage area, so a Class 1 device will have greater range than a Class 3 device (about 10 times greater, because radio propagation follows an inverse-square law). On the downside, greater output power means the need for greater input power, which directly translates into battery drain. That 100 mW of output power will require about 100 times the battery power of a 1-mW device. Fortunately, even Class 1 devices are modest power consumers compared to other facets of notebook computers. For example, the power needs of a Class 1 device are less than one-tenth the demand of a typical display screen.
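The 10-to-1 range figure follows directly from the inverse-square law: received power falls with the square of distance, so range scales with the square root of transmitter power. A minimal sketch of that arithmetic:

    /* Free-space estimate: range scales with the square root of transmit
       power, so 100mW (Class 1) reaches about 10 times as far as 1mW
       (Class 3), all else being equal. Compile with -lm. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double class1_mw = 100.0, class3_mw = 1.0;
        double range_ratio = sqrt(class1_mw / class3_mw);
        printf("Class 1 range is about %.0f times Class 3 range\n", range_ratio);
        return 0;
    }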
Modulation
Bluetooth uses Gaussian frequency shift keying (FSK)—that is, the presence of a data bit alters (or shifts) the frequency of the carrier wave. Bluetooth specifies the polarity of the FSK modulation. It represents a binary one with a positive deviation of the carrier wave and a binary zero with a negative deviation. The raw data rate is 1Mbps (one million symbols per second).
Because the digital code itself shapes the frequency shift keying modulation, the information being carried affects how far the carrier deviates. The Bluetooth standard specifies that the minimum deviation should never be smaller than 115KHz; the maximum deviation falls between 140 and 175KHz.
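The polarity rule is simple enough to state as a one-line mapping from bit value to instantaneous carrier frequency. The sketch below uses an example deviation within the limits just quoted and omits the Gaussian filtering that shapes real Bluetooth transmissions.

    /* Toy model of the FSK polarity rule only: a binary one shifts the
       carrier up, a binary zero shifts it down. The deviation is just a
       legal example value; the Gaussian pulse shaping is omitted. */
    static double symbol_frequency_hz(double carrier_hz, int bit)
    {
        const double deviation_hz = 157000.0;   /* example within spec limits */
        return bit ? carrier_hz + deviation_hz : carrier_hz - deviation_hz;
    }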
Components
Bluetooth architecture builds a system from three parts: a radio unit, a link control unit, and a support unit that provides link management and the host terminal interface. These are functional divisions, and all will be integrated into most handheld Bluetooth devices. In your PC, all three will likely reside on a Bluetooth interface card that installs like any other expansion board in a standard PCI slot.
The radio unit implements the hardware aspects of Bluetooth described earlier. It determines the power and coverage of the Bluetooth device, and its circuitry creates the carrier wave (altering its frequency for each hop), modulates it, amplifies it, and radiates it through an antenna. The ultra-high frequencies used by the Bluetooth system have a short wavelength that allows the antenna to be integrated invisibly into the cases of many mobile devices.
The link control unit is the mastermind of the Bluetooth system. It implements the various control and management protocols for setting up and maintaining the wireless connection. It searches out and identifies new devices wanting to join the piconet, tracks the frequency hopping, and controls the operating state of the device.
The support unit provides the actual interface between the logic of the host device and the Bluetooth connection. It adapts the signals of the host to match the Bluetooth system, both electrically and logically. For example, in a PC-based Bluetooth interface card, the support unit adapts the parallel bus signals of the PCI connection into the packetized serial form used by Bluetooth. It also checks data coming in from the wireless connection for errors and requests retransmission when necessary.
Standards and Coordination
The Bluetooth Special Interest Group promulgates the Bluetooth specifications. It also facilitates the licensing of Bluetooth intellectual property. You can obtain the complete specification from the SIG at www.bluetooth.com.
RS-232C Serial Ports
The day you win the mega-lottery and instantly climb into wealth and social status, you may be tempted to leave your old friends to belch and scratch while drinking someone else’s beer. But it’s hard to leave old friends behind, particularly when you need someone to watch the house during your ’round-the-world cruise. So it is with the classic RS-232C port. It’s got so many bad habits it’s hard to talk about in polite company, but it’s just too dang useful to forget about.
The serial port is truly the old codger of computer interfaces, a true child of the ’60s. An industry trade group, the Electronics Industry Association (EIA) hammered out the official RS-232C specification in 1969, but the port had been in use for years at the time. It found ready acceptance on the first personal computer because no other electronic connection for data equipment was so widely used. The ports survive today because some folks still want to connect gear they bought in 1969 to their new computers.
Electrical Operation
RS-232C ports are asynchronous; they operate without a clock signal. But for two devices to communicate, they need at least a general idea of the rate at which to expect data. Consequently, you must set the speed of each RS-232C port before you begin communicating, and the speeds of any two connected ports must match.
You have quite a wide variety to choose from. The serial ports in computers generally operate at any speed in the odd-looking sequence that runs 150, 300, 600, 1200, 2400, 4800, 9600, 19,200, 38,400, 57,600, and 115,200 bits per second.
The RS-232C port moves data one byte at a time. To suit its asynchronous nature, each byte requires its own packing into a serial frame. In this form the typical serial port takes about a dozen bits to move a byte—a frame comprises one start bit, eight data bits, one parity bit, and two stop bits to mark the end of the frame. As a result, a serial port has overhead of about one-third of its potential peak data rate. A 9600 bit per second serial connection actually moves text at about 800 characters per second (6400bps).
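The arithmetic behind those figures is straightforward; the sketch below simply counts the frame bits described above.

    /* The throughput arithmetic from the text: 12 frame bits carry 8 data
       bits, so a 9600bps link moves 9600/12 = 800 characters per second,
       or 6400bps of payload. */
    #include <stdio.h>

    int main(void)
    {
        int line_bps    = 9600;
        int frame_bits  = 1 + 8 + 1 + 2;   /* start, data, parity, stop bits */
        int chars_per_s = line_bps / frame_bits;
        printf("%d chars/s (%d payload bps)\n", chars_per_s, chars_per_s * 8);
        return 0;
    }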
A basic RS-232C connection needs only three connections: one for sending data, one for receiving data, and a common ground. Most serial links also use hardware flow-control signals. The most common serial port uses eight separate connections.
RS-232C ports use single-ended signaling. Although this design simplifies the circuitry needed to make the ports, it also limits the potential range of a connection. Long cable runs are apt to pick up noise and blur high-data-rate signals. You can probably extend a 9600bps connection to a hundred feet or more. At a quarter mile, you’ll probably be down to 1200 or 300bps (slower than even cheap printers can type).
Because the RS-232C port originated in the data communications rather than computer industry, some of its terminology is different from that used to describe other ports. For example, the low and high logic states are termed space and mark in the RS-232C scheme. Space is the absence of a bit, and mark is the presence of a bit. On the serial line, a space is a positive voltage; a mark is a negative voltage.
In other words, when you’re not sending data down a serial line, it has an overall positive voltage on it. Data will appear as a series of negative-going pulses. The original design of the serial port specification called for the voltage to shift from a positive 12 volts to negative 12 volts. Because 12 volts is an uncommon potential in many computers, the serial voltage often varies from positive 5 to negative 5 volts.
Connectors
The physical manifestation of a serial port is the connector that glowers on the rear panel of your computer. It is where you plug your serial peripheral into your computer. And it can be the root of all evil—or so it will seem after a number of long evenings during which you valiantly try to make your serial device work with your computer, only to have text disappear like phantoms at sunrise. Again, the principal problem with serial ports is the number of options they allow designers. Serial ports can use either of two styles of connectors, each of which has two options in signal assignment. Worse, some manufacturers venture bravely in their own directions with the all-important flow-control signals. Sorting out all these options is the most frustrating part of serial port configuration.
25-Pin
The basic serial port connector is called a 25-pin D-shell. It earns its name from having 25 connections arranged in two rows that are surrounded by a metal guide that roughly takes the form of a letter D. The male variety of this connector—the one that actually has pins inside it—is normally used on computers. Most, but hardly all, serial peripherals use the female connector (the one with holes instead of pins) for their serial ports. Although both serial and parallel ports use the same style 25-pin D-shell connectors, you can distinguish serial ports from parallel ports because on most computers the latter use female connectors. Figure 11.8 shows the typical male serial port DB-25 connector that you’ll find on the back of your computer.
Although the serial connector allows for 25 discrete signals, only a few of them are ever actually used. Serial systems may involve as few as three connections. At most, computer serial ports use 10 different signals. Table 11.7 lists the names of these signals, their mnemonics, and the pins to which they are assigned in the standard 25-pin serial connector.
Note that in the standard serial cable, the signal ground (which is the return line for the data signals on pins 2 and 3) is separated from the chassis ground on pin 1. The chassis ground pin is connected directly to the metal chassis or case of the equipment, much like the extra prong of a three-wire AC power cable, and it provides the same protective function. It ensures that the cases of the two devices linked by the serial cable are at the same potential, which means you won’t get a shock if you touch both at the same time. As wonderful as this connection sounds, it is often omitted from serial cables. On the other hand, the signal ground is a necessary signal that the serial link cannot work without. You should never connect the chassis ground to the signal ground.
Nine-Pin
If nothing else, using a 25-pin D-shell connector for a serial port is a waste of at least 15 pins. Most serial connections use fewer than the complete 10 signals; some need as few as four connections with hardware handshaking, and three with software flow control. For the sake of standardization, the computer industry sacrificed the cost of the unused pins for years until a larger—or smaller, depending on your point of view—problem arose: space. A serial port connector was too big to fit on the retaining brackets of expansion boards along with a parallel connector. Because all the pins in the parallel connector had an assigned function, the serial connector met its destiny and got miniaturized.
Moving to a nine-pin connector allowed engineers to put connectors for both a serial port and a parallel port on the retaining bracket of a single expansion board. This was an important concern because all ports in early computers were installed on expansion boards. Computer makers could save the cost of an entire expansion board by putting two ports on one card. Later, after most manufacturers moved to putting ports on computer motherboards, the smaller port design persisted.
As with the 25-pin variety of serial connector, the nine-pin serial jack on the back of computers uses a male connector. Figure 11.9 shows the nine-pin male connector that’s used on some computers for serial ports.
Nine-pin connectors necessarily have different pin assignments than 25-pin connectors. Table 11.8 lists the signal assignments on the most common nine-pin implementation of the RS-232C port.
Other than the rearrangement of signals, the nine-pin and 25-pin serial connectors are essentially the same. All the signals behave identically, no matter the size of the connector on which they appear.
Signals
Serial communications is an exchange of signals across the serial interface. These signals involve not just data but also the flow-control signals that help keep the data flowing as fast as possible—but not too fast.
First, we’ll look at the signals and their flow in the kind of communication system for which the serial port was designed—linking a computer to a modem. Then we’ll examine how attaching a serial peripheral to a serial port complicates matters and what you can do to make the connection work.
Definitions
As with space and mark, RS-232C ports use other odd terminology. Serial terminology assumes that each end of a connection has a different type of equipment attached to it. One end has a data terminal connected to it. In the old days when the serial port was developed, a terminal was exactly that—a keyboard and a screen that translated typing into serial signals. Today, a terminal is usually a computer. For reasons known but to those who revel in rolling their tongues across excess syllables, the term Data Terminal Equipment is often substituted. To make matters even more complex, many discussions talk about DTE devices, which means exactly the same thing as data terminals.
The other end of the connection has a data set, which corresponds to a modem. Often engineers substitute the more formal name Data Communication Equipment or talk about DCE devices.
The distinction between data terminals and data sets (or DTE and DCE devices) is important. Serial communications were originally designed to take place between one DTE and one DCE, and the signals used by the system are defined in those terms. Moreover, the types of RS-232 serial devices you wish to connect determine the kind of cable you must use. First, however, let’s look at the signals; then we’ll consider what kind of cable you need to carry them.
Transmit Data
The serial data leaving the RS-232 port travels on what is called the Transmit Data line, which is usually abbreviated TXD. The signal on it comprises the long sequence of pulses generated by the UART in the serial port. The data terminal sends out this signal, and the data set listens to it.
Receive Data
The stream of bits going the other direction—that is, coming in from a distant serial port—goes through the Receive Data line (usually abbreviated RXD) to reach the input of the serial port’s UART. The data terminal listens on this line for the data signal coming from the data set.
Data Terminal Ready
When the data terminal is able to participate in communications—that is, it is turned on and in the proper operating mode—it signals its readiness to the data set by applying a positive voltage to the Data Terminal Ready line, which is abbreviated DTR.
Data Set Ready
When the data set is able to receive data—that is, it is turned on and in the proper operating mode—it signals its readiness by applying a positive voltage to the Data Set Ready line, which is abbreviated DSR. Because serial communications must be “two way,” the data terminal will not send out a data signal unless it sees the DSR signal coming from the data set.
Request to Send
When the data terminal is on and capable of receiving transmissions, it puts a positive voltage on its Request to Send line, usually abbreviated RTS. This signal tells the data set that it can send data to the data terminal. The absence of an RTS signal across the serial connection will prevent the data set from sending out serial data. This allows the data terminal to control the flow of the data set to it.
Clear to Send
The data set, too, needs to control the signal flow from the data terminal. The signal it uses is called Clear to Send, which is abbreviated CTS. The presence of the CTS signal in effect tells the data terminal that the coast is clear and the data terminal can blast data down the line. The absence of a CTS signal across the serial connection will prevent the data terminal from sending out serial data.
Carrier Detect
The serial interface standard shows its roots in the communication industry with the Carrier Detect signal, which is usually abbreviated CD. This signal gives a modem, the typical data set, a means of signaling to the data terminal that it has made a connection with a distant modem. The signal says that the modem or data set has detected the carrier wave of another modem on the telephone line. In effect, the carrier detect signal gets sent to the data terminal to tell it that communications are possible. In some systems, the data terminal must see the carrier detect signal before it will engage in data exchange. Other systems simply ignore this signal.
Ring Indicator
Sometimes a data terminal has to get ready to communicate even before the flow of information begins. For example, you might want to switch your communications program into answer mode so that it can deal with an incoming call. The designers of the serial port provided such an early warning in the Ring Indicator signal, which is usually abbreviated RI. When a modem serving as a data set detects ringing voltage—the low-frequency, high-voltage signal that makes telephone bells ring—on the telephone line to which it is connected, it activates the RI signal, which alerts the data terminal to what’s going on. Although useful in setting up modem communications, you can regard the ring indicator signal as optional because its absence usually will not prevent the flow of serial data.
Signal Ground
All the signals used in a serial port need a return path. The signal ground provides this return path. The single ground signal is the common return for all other signals on the serial interface. Its absence will prevent serial communications entirely.
Flow Control
Serial ports can use both hardware and software flow control. Hardware flow control involves the use of special control lines that can be (but don’t have to be) part of a serial connection. Your computer signals whether it is ready to accept more data by sending a signal down the appropriate wire. Software flow control involves the exchange of characters between computer and serial peripherals. One character tells the computer your peripheral is ready, and another warns that it can’t deal with more data. Both hardware and software flow control take more than one form. As a default, computer serial ports use hardware flow control (or hardware handshaking). Most serial peripherals do, too. In general, hardware flow control uses the Carrier Detect, Clear to Send, and Data Set Ready signals.
Software flow control requires your serial peripheral and computer to exchange characters or tokens to indicate whether they should transfer data. The serial peripheral normally sends out one character to indicate it can accept data and a different character to indicate that it is busy and cannot accommodate more. Two pairs of characters are often used: XON/XOFF and ETX/ACK.
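A minimal sketch of the XON/XOFF convention follows. It assumes the usual ASCII control characters (DC1, 0x11, for XON and DC3, 0x13, for XOFF); the byte source and the routines that talk to the peripheral are hypothetical placeholders.

    /* Minimal XON/XOFF sketch: XOFF from the peripheral pauses
       transmission, XON resumes it. */
    #include <stdbool.h>

    #define XON  0x11   /* ASCII DC1: peripheral ready for more data */
    #define XOFF 0x13   /* ASCII DC3: peripheral busy, stop sending  */

    extern int  read_from_peripheral(void);           /* hypothetical: -1 if no byte waiting */
    extern void send_to_peripheral(unsigned char b);  /* hypothetical */
    extern int  next_byte_to_print(void);             /* hypothetical: -1 when done */

    void pump_data(void)
    {
        bool clear_to_send = true;
        int b;

        while ((b = next_byte_to_print()) != -1) {
            /* Honor any flow-control characters the peripheral has sent,
               and wait here while transmission is held off. */
            int c;
            while ((c = read_from_peripheral()) != -1 || !clear_to_send) {
                if (c == XOFF)
                    clear_to_send = false;
                else if (c == XON)
                    clear_to_send = true;
            }
            send_to_peripheral((unsigned char)b);
        }
    }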
Cables
The design of the standard RS-232 serial interface anticipates that you will connect a data terminal to a data set. When you do, all the connections at one end of the cable that link them are carried through to the other end—pin for pin, connection for connection. The definitions of the signals at each end of the cable are the same, and the function and direction of travel (whether from data terminal to data set or the other way around) of each are well defined. Each signal goes straight through from one end to the other. Even the connectors are the same at either end. Consequently, a serial cable should be relatively easy to fabricate.
In the real world, nothing is so easy. Serial cables are usually much less complicated or much more complicated than this simple design. Unfortunately, if you plan to use a serial connection for a printer or plotter, you have to suffer through the more complex design.
Straight-Through Cables
Serial cables are often simpler than pin-for-pin connections from one end to the other because no serial link uses all 25 connector pins. Even with the complex handshaking schemes used by modems, only nine signals need to travel from the data terminal to the data set, computer to modem. (For signaling purposes, the two grounds are redundant—most serial cables do not connect the chassis ground.) Consequently, you need only make these nine connections to make virtually any data terminal–to–data set link work. Assuming you have a 25-pin D-shell connector at either end of your serial cable, the essential pins that must be connected are 2 through 8, 20, and 22. This is usually called a nine-wire serial cable because the connection to pin 7 uses the shield of the cable rather than a wire inside. With nine-pin connectors at either end of your serial cable, all nine connections are essential.
Not all systems use all the handshaking signals, so you can often get away with fewer connections in a serial cable. The minimal case is a system that uses software handshaking only. In that case, you need only three connections: Transmit Data, Receive Data, and the signal ground. In other words, you need only connect pins 2, 3, and 7 on a 25-pin connector or pins 2, 3, and 5 on a nine-pin serial connector (providing, of course, you have the same size connector at each end of the cable).
Although cables with an intermediate number of connections are often available, they are not sufficiently less expensive than the nine-wire cable to justify the risk and lack of versatility. Therefore, you should limit your choices to nine-wire cables for systems that use hardware handshaking or three-wire cables for those that you’re certain use only software flow control.
Manufacturers use a wide range of cable types for serial connections. For relatively low data rates and reasonable lengths of serial connections, you can get away with just about anything, including twisted-pair telephone wire. To ensure against interference, you should use shielded cable, which wraps a wire braid or aluminum-coated plastic film around inner conductors to prevent signals leaking out or in. The shield of the cable should be connected to the signal ground. (Ideally, the signal ground should have its own wire, and the shield should be connected to the chassis ground, but most folks just don’t bother.)
Adapter Cables
If you need a cable with a 25-pin connector at one end and a nine-pin connector at the other, you cannot use a straight-through design, even when you want to link a data terminal to a data set. The different signal layouts of the two styles of connectors are incompatible. After all, you can’t possibly link pin 22 on a 25-pin connector to a nonexistent pin 22 on a nine-pin connector.
This problem is not uncommon. Even though the nine-pin connector has become a de facto standard on computers, most other equipment, including serial plotters, printers, and modems, has stuck with the 25-pin standard. To get from one connector type to another, you need an adapter. The adapter can take the form of a small assembly with a different connector on each end or an adapter cable, typically from six inches to six feet long.
Crossover Cables
As long as you want to connect a computer serial port to a modem, you should have no problem with serial communications. You will be connecting a data terminal to a data set, exactly what engineers designed the serial system for. Simply sling a cable with enough conductors to handle all the vital signals between the computer and modem and—voilà—serial communications without a hitch. Try it, and you’re likely to wonder why so many people complain about the capricious nature of serial connections.
When you want to connect a plotter or printer to a computer through a serial port, however, you will immediately encounter a problem. The architects of the RS-232 serial system decided that both computers and these devices are data terminal (DTE) devices. The designations actually made sense, at least at the time. You were just as likely to connect a serial printer (such as a teletype) to a modem as you were a computer terminal. No one worried about connecting a printer to a computer because personal computers didn’t even exist back then.
When you connect a plotter or printer and your computer—or any two DTE devices—together with an ordinary serial cable, you will not have a communication system at all. Neither machine will know that the other one is even there. Each one will listen on the serial port signal line that the other is listening to, and each one will talk on the line that the other talks on. One device won’t hear a bit of what the other is saying.
The obvious solution to the problem is to switch some wires around. Move the Transmit Data wire from the computer to where the Receive Data wire goes on the plotter or printer. Route the computer’s Receive Data wire to the Transmit Data wire of the plotter or printer. A simple crossover cable does exactly that, switching the Transmit and Receive signals at one end of the connection.
Many of the devices that you plug into a computer are classed as DTE (or data terminals), just like the computer. All these will require a crossover cable. Table 11.9 lists many of the devices you might connect to your computer and whether they function as data terminals (DTE) or data sets (DCE).
Note that some people call crossover cables null modem cables. This is not correct. A null modem is a single connector used in testing serial ports. It connects the Transmit Data line to the Receive Data line of a serial port as well as crosses the handshaking connections within the connector as described earlier. Correctly speaking, a null modem cable is equipped with this kind of wiring at both ends. It will force both serial ports constantly on and prevent any hardware flow control from functioning at all. Although such a cable can be useful, it is not the same as a crossover cable. Substituting one for the other will lead to some unpleasant surprises—such as text dropping from sight from within documents as mysteriously and irrecoverably as D. B. Cooper.
UARTs
A serial port has two jobs to perform. It must repackage parallel data into serial form, and it must send power down a long wire with another circuit at the end, which is called driving the line.
Turning parallel data into serial is such a common electrical function that engineers created special integrated circuits that do exactly that. Called Universal Asynchronous Receiver/Transmitter chips, or UARTs, these chips gulp down a byte or more of data and stream it out a bit at a time. In addition, they add all the other accouterments of the serial signal—the start, parity, and stop bits. Because every practical serial connection is bidirectional, the UART works both ways, sending and receiving, as its name implies.
Because the UART does all the work of serializing your computer’s data signals, its operation is one of the limits on the performance of serial data exchanges. Computers have used three different generations of UARTs, each of which imposes its own constraints. Early computers used 8250 UARTs, and later machines shifted to the higher-speed 16450 UART. Both of these chips had one-byte buffers that were unable to keep up with normal communications when multitasking software came into use. The replacement was the 16550A UART (commonly listed as 16550AF and 16550AFN, with the last initials indicating the package and temperature rating of the chip), which has a 16-byte First In, First Out (FIFO) buffer.
To maintain backward compatibility with the 16450, the 16550 ignores its internal buffer until it is specifically switched on. Most communications programs activate the buffer automatically. Physically, the 16550 and 16450 will fit and operate in the same sockets, so you can easily upgrade the older chip to the newer one.
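Switching on the buffer amounts to writing one register. The sketch below assumes the conventional PC layout (a 16550-style UART at base address 0x3F8 with its FIFO Control Register at base + 2) and Linux-style x86 port I/O; the trigger level shown is one common choice rather than a requirement.

    /* Sketch: enabling the 16550's 16-byte FIFO through its FIFO Control
       Register. Needs root privileges for raw port access. */
    #include <sys/io.h>     /* outb, ioperm (x86 Linux) */

    #define COM1_BASE 0x3F8
    #define UART_FCR  (COM1_BASE + 2)

    int main(void)
    {
        if (ioperm(COM1_BASE, 8, 1) != 0)
            return 1;                        /* could not get port access */

        /* bit 0: enable FIFOs; bits 1-2: clear both FIFOs;
           bits 6-7: receive trigger level (0xC0 = 14 bytes). */
        outb(0x01 | 0x02 | 0x04 | 0xC0, UART_FCR);
        return 0;
    }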
Modern computers do not use separate UARTs. However, all the UART circuitry—usually exactly equivalent to that of the 16550A—is built into the circuitry of nearly all chipsets. To your software, the chipset acts exactly as if your serial ports were on separate expansion boards, just as they were in the first personal computers.
Register Function
The register at the base address assigned to each serial port is used for data communications. Bytes are moved to and from the UART using the microprocessor’s OUT and IN instructions. The next six addresses are used by other serial port registers. They are, in order, the Interrupt Enable register, the Interrupt Identification register, the Line Control register, the Modem Control register, the Line Status register, and the Modem Status register. Another register, called the Divisor Latch, shares the base address used by the Transmit and Receive registers and the next higher register used by the Interrupt Enable register. It is accessed by toggling a setting in the Line Control register.
This latch stores the divisor that determines the operating speed of the serial port. Whatever value is loaded into the latch is multiplied by 16. The resulting product is used to divide down the clock signal supplied to the UART chip to determine the bit rate. Because of the factor of 16 multiplication, the highest speed the serial port can operate at is limited to 1/16th the supplied clock (which is 1.8432MHz). Setting the latch value to its minimum, 1, results in a bit rate of 115,200.
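That divisor arithmetic maps directly onto the registers described above. The sketch below assumes the conventional PC layout (COM1 at 0x3F8, Line Control Register at base + 3 with the Divisor Latch Access Bit in bit 7, divisor bytes at base + 0 and base + 1) and Linux-style x86 port I/O.

    /* Sketch: programming the Divisor Latch for a desired bit rate.
       divisor = 1,843,200Hz / (16 x bit rate) = 115,200 / bit rate. */
    #include <sys/io.h>

    #define COM1_BASE 0x3F8
    #define UART_LCR  (COM1_BASE + 3)

    static void set_bit_rate(unsigned bps)
    {
        unsigned divisor = 115200 / bps;             /* 12 for 9600bps, 1 for 115,200 */
        unsigned char lcr = inb(UART_LCR);

        outb(lcr | 0x80, UART_LCR);                  /* set DLAB to expose the latch */
        outb(divisor & 0xFF, COM1_BASE + 0);         /* divisor low byte  */
        outb((divisor >> 8) & 0xFF, COM1_BASE + 1);  /* divisor high byte */
        outb(lcr & 0x7F, UART_LCR);                  /* clear DLAB again  */
    }

    int main(void)
    {
        if (ioperm(COM1_BASE, 8, 1) != 0)
            return 1;                                /* needs root privileges */
        set_bit_rate(9600);
        return 0;
    }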
Registers not only store the values used by the UART chip but also are used to report back to your system how the serial conversation is progressing. For example, the line status register indicates whether a character that has been loaded to be transmitted has actually been sent. It also indicates when a new character has been received.
Logical Interface
Your computer controls the serial port UART through a set of seven registers built in to the chip. Although your programs could send data and commands to the UART (and, through it, to your serial device) by using the hardware address of the registers on the chip, this strategy has disadvantages. It requires the designers of systems to allocate once and forever the system resources used by the serial port. The designers of the original IBM computer were loath to make such a permanent commitment. Instead they devised a more flexible system that allows your software to access ports by name. In addition, they worked out a way that port names would be assigned properly and automatically, even if you didn’t install ports in some predetermined order.
Port Names
The number of serial ports you can use in a computer varies with the operating system. Originally, personal computers could only use two, but in 1987 the designers of DOS expanded the possible port repertory to include COM3 and COM4. Under Windows 3.1, up to nine serial ports could be used. Windows versions since Windows 95 extend serial port support to 128.
Without special drivers, Windows recognizes four serial ports. When Windows checks your system hardware each time it boots up, it scans the ranges of addresses normally used by the UART chips for serial ports. Each UART has seven registers that control it, and these are usually identified by a base address, the input/output port used by the first of these registers. The usual computer design allows for four base addresses for a serial port.
Current Windows versions search the nominal base addresses for serial ports and assign their serial port drivers to those that are active. Devices out of the normal range—including the serial ports built in to internal modems—require their own drivers to match their hardware.
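The four conventional base addresses are 0x3F8, 0x2F8, 0x3E8, and 0x2E8 for COM1 through COM4. One common, if imperfect, way to check whether a UART answers at one of them is to write a pattern to the scratch register at base + 7 and read it back (the original 8250 lacked that register, so it would fail the test). The sketch below assumes Linux-style x86 port I/O; a real driver does considerably more.

    /* Sketch of a crude probe at the conventional serial base addresses. */
    #include <stdio.h>
    #include <sys/io.h>

    static const unsigned short com_base[] = { 0x3F8, 0x2F8, 0x3E8, 0x2E8 };

    int main(void)
    {
        for (int i = 0; i < 4; i++) {
            if (ioperm(com_base[i], 8, 1) != 0)
                return 1;                         /* needs root privileges */

            unsigned short scratch = com_base[i] + 7;
            outb(0x5A, scratch);                  /* write a test pattern */
            int present = (inb(scratch) == 0x5A); /* does it read back?   */
            printf("COM%d at 0x%03X: %s\n", i + 1, com_base[i],
                   present ? "responding" : "not found");
        }
        return 0;
    }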
Interrupts
Serial ports normally operate as interrupt-driven devices. That is, when they must perform an action immediately, they send a special signal called an interrupt to your computer’s microprocessor. In the traditional computer design, only two interrupts are available for serial ports, as listed in Table 11.10.
Systems with more than two serial ports (or with oddly assigned interrupts) have to share the two interrupts among their serial ports—one interrupt is often assigned to two ports. This sometimes results in problems, particularly when a mouse is connected to a serial port. However, because all new computers have a dedicated mouse port, this problem no longer occurs.
Parallel Ports
The defining characteristic of the parallel port design is implicit in its name. The port is “parallel” because it conducts its signals through eight separate wires—one for each bit of a byte of data—that are enclosed together in a single cable. The signal wires literally run in parallel from your computer to their destination—or at least they did. Better cables twist the physical wires together but keep their signals straight (and parallel).
In theory, having eight wires means you can move data eight times as fast through a parallel connection as through a single wire. All else being equal, simple math would make this statement true. Although a number of practical concerns make such extrapolations impossible, throughout its life the parallel port has been known for its speed. It beat its original competitor, the RS-232 port, hands down, outrunning the serial port’s 115.2Kbps maximum by factors from two to five, even in early computers. The latest incarnations of parallel technology put the data rate through the parallel connection at over 100 times faster than the basic serial port rate.
In simple installations (for example, when used for its original purpose of linking a printer to your computer), the parallel port is a model of installation elegance. Just plug in your printer, and the odds are it will work flawlessly—or that whatever flaws appear won’t have anything to do with the interconnection.
Despite such rave reviews, parallel ports are not trouble-free. All parallel ports are not created equal. A number of different designs have appeared during the brief history of the computer. Although new computers usually incorporate the latest, most versatile, and highest speed of these, some manufacturers skimp. Even when you buy a brand-new computer, you may end up with a simple printer port that steps back to the first generation of computer design.
A suitable place to begin this saga is to sort out the confusion of parallel port designs by tracing the parallel port’s origins. As it turns out, the history of the parallel port is a long one, older than even the personal computer, although the name and our story begin with its introduction.
History
Necessity isn’t just the mother of invention. It also spawned the parallel port. As with most great inventions, the parallel port arose from a problem that needed to be solved. When IBM developed its first computer, its engineers looked for a simplified way to link to a printer, something without the hassles and manufacturing costs of a serial port. The simple parallel connection, already used in a similar form by some printers, was an elegant solution. Consequently, IBM’s slightly modified version became standard equipment on the first computers. Because of its intended purpose, it quickly gained the “printer port” epithet. Not only were printers easy to attach to a parallel port, they were the only thing you could connect to these first ports at the time.
In truth, the contribution of computer-makers to the first parallel port was minimal. They added a new connector that better fit the space available on the computer. The actual port design was already being used on computer printers at the time. Originally created by printer-maker Centronics Data Computer Corporation and used by printers throughout the 1960s and 1970s, the connection was electrically simple, even elegant. It took little circuitry to add to a printer or computer, even in the days when designers had to use discrete components instead of custom-designed circuits. A few old-timers still cling to history and call the parallel port a Centronics port.
The computer parallel port is not identical to the exact Centronics design, however. In adapting it to the computer, IBM substituted a smaller connector. The large jack used by the Centronics design had 36 pins and was too large to put where IBM wanted it—sharing a card-retaining bracket with a video connector on the computer’s first Monochrome Display Adapter. In addition, IBM added two new signals to give the computer more control over the printer and adjusted the timing of the signals traveling through the interface. All that said, most Centronics-style printers worked just fine with the original computer.
At the time, the computer parallel port had few higher aspirations. It did its job, and did it well, moving data in one direction (from computer to printer) at rates from 50 to 150Kbps. The computer parallel port, or subtle variations of it, became ubiquitous if not universal. Any printer worth connecting to a computer used a parallel port (or so it seemed).
In 1987, however, IBM’s engineers pushed the parallel port in a new direction. The motive behind the change was surprising—not a desire to improve communication but rather a band-aid solution for a temporary problem (for which it was hardly ever used). The company decided to adopt 3.5-inch floppy disk drives for its new line of PS/2 computers at a time when all the world’s computer data was mired on 5.25-inch diskettes. The new computers made no provision for building in the bigger drives; IBM believed that the entire world would instantly switch over to the new disk format. People would need to transfer their data once and only once to the new format. To make the transfer possible, the company released its Data Migration Facility, a fancy name for a cable and a couple of disks. You used the cable to connect your old computer to your new PS/2 and the software on the disks to move files through the parallel port from the old machine and its disks to the new one.
Implicit in this design is the ability of the PS/2 parallel port to receive data as well as send it out, as to a printer. The engineers tinkered with the port design and made it work both ways, creating a bidirectional parallel port. Because of the design’s intimate connection with the PS/2, it is sometimes termed the PS/2 parallel port.
The Data Migration Facility proved to be an inspirational idea despite its singular shortcoming of working in only one direction. As notebook computers became popular, they also needed a convenient means to move files between machines. The makers of file transfer programs such as Brooklyn Bridge and LapLink knew a good connection when they saw it. By tinkering with parallel port signals, they discovered that they could make any parallel port operate in both directions and move data to and from computers.
The key to making bidirectional transfers on the old-fashioned one-way ports was to redefine signals. They redirected four of the signals in the parallel connector that originally had been designed to convey status information back from the printer to your computer. These signals already went in the correct direction. All that the software mavens did was to take direct control of the port and monitor the signals under their new definitions. Of course, four signals can’t make a byte. They were limited to shifting four bits through the port in the backward direction. Because four bits make a nibble, the new parallel port operating mode soon earned the name nibble mode.
This four-bits-at-a-time scheme had greater implications than just a new name. Half as many bits also means half the speed. Nibble mode operates at about half the normal parallel port rate—still faster than single-line serial ports but not full parallel speed.
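Reassembling a byte from two reverse-direction nibbles is trivial; the time cost lies in running the handshake twice. In the sketch below, read_status_nibble() is a hypothetical stand-in for that handshake, which is not reproduced here.

    /* Illustrative nibble-mode read: a byte arrives as two four-bit halves
       on the reverse-direction status lines, so every byte costs two trips. */
    #include <stdint.h>

    extern uint8_t read_status_nibble(void);   /* hypothetical: returns 0-15 */

    static uint8_t read_byte_nibble_mode(void)
    {
        uint8_t low  = read_status_nibble() & 0x0F;   /* first handshake  */
        uint8_t high = read_status_nibble() & 0x0F;   /* second handshake */
        return (uint8_t)(low | (high << 4));
    }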
If both sides of a parallel connection had bidirectional ports, however, data transfers ran at full speed both ways. Unfortunately, as manufacturers began adapting higher-performance peripherals to use the parallel port, what once was fast performance became agonizingly slow. Although the bidirectional parallel port more than met the modest data transfer needs of printers and floppy disk drives, it lagged behind other means of connecting hard disks and networks to computers.
Engineers at network adapter maker Xircom Incorporated decided to do something about parallel performance and banded together with notebook computer maker Zenith Data Systems to find a better solution. Along the way, they added Intel Corporation and formed a triumvirate called Enhanced Parallel Port Partnership. They explored two ways of increasing the data throughput of a parallel port. They streamlined the logical interface so that your computer would need less overhead to move each byte through the port. In addition, they tightly defined the timing of the signals passing through the port, minimizing wasted time and helping ensure against timing errors. They called the result of their efforts the Enhanced Parallel Port (EPP).
On August 10, 1991, the organization released its first description of what it thought the next generation of parallel ports should be and do. The organization continued to work on a specification until March 1992, when it submitted Release 1.7 to the Institute of Electrical and Electronic Engineers (IEEE) to be considered as an industry standard.
Although the EPP version of the parallel port could increase its performance by nearly tenfold, that wasn’t enough to please everybody. The speed potential made some engineers see the old parallel port as an alternative to more complex expansion buses such as the SCSI system. With this idea in mind, Hewlett-Packard joined with Microsoft to make the parallel port into a universal expansion standard called the Extended Capabilities Port (ECP). In November 1992, the two companies released the first version of the ECP specification, aimed at computers that use the ISA expansion bus. This first implementation added two new transfer modes to the EPP design—a fast two-way communication mode between a computer and its peripherals, and another two-way mode with performance further enhanced by simple integral data compression—and defined a complete software control system.
The heart of the ECP innovation was a protocol for exchanging data across a high-speed parallel connection. The devices at the two ends of each ECP transfer negotiate the speed and mode of data movement. Your computer can query any ECP device to determine its capabilities. For example, your computer can determine what language your printer speaks and set up the proper printer driver accordingly. In addition, ECP devices tell your computer the speed at which they can accept transmissions and the format of the data they understand. To ensure the quality of all transmissions, the ECP specification included error detection and device handshaking. It also allowed the use of data compression to further speed transfers.
On March 30, 1994, the IEEE Standards Board approved its parallel port standard, IEEE-1284-1994. The standard included all the basic modes and parallel port designs, including both ECP and EPP. It was submitted to the American National Standards Institute and approved as a standard on September 2, 1994.
The IEEE 1284 standard marked a watershed in parallel port design and nomenclature. The standard defined (or redefined) all aspects of the parallel connection, from the software interface in your computer to the control electronics in your printer. It divided the world of parallel ports in two: IEEE 1284-compatible devices, which are those that will work with the new interface, which in turn includes just about every parallel port and device ever made; and IEEE 1284-compliant devices, which are those that understand and use the new standard. This distinction is essentially between pre- and post-standardization ports. You can consider IEEE 1284-compatible ports to be “old technology” and IEEE 1284-compliant ports to be “new technology.”
Before IEEE 1284, parallel ports could be divided into four types: standard parallel ports, bidirectional parallel ports (also known as PS/2 parallel ports), enhanced parallel ports, and extended capabilities ports. The IEEE specification redefined the differences in ports, classifying them by the transfer mode they use. Although the terms are not exactly the same, you can consider a standard parallel port one that is able to use only nibble-mode transfers. A PS/2 or bidirectional parallel port from the old days is one that can also make use of byte-mode transfers. EPP and ECP ports are those that use EPP and ECP modes, as described by the IEEE 1284 specification.
EPP and ECP remain standards separate from IEEE 1284, although they have been revised to depend on it. Both EPP and ECP rely on their respective modes as defined in the IEEE specification for their physical connections and electrical signaling. In other words, IEEE 1284 describes the physical and electrical characteristics of a variety of parallel ports. The other standards describe how the ports operate and link to your applications.
Connectors
The best place to begin any discussion of the function and operation of the parallel port is the connector. After all, the connector is what puts the port to work. It is the physical manifestation of the parallel port, the one part of the interface and standard you can actually touch or hold in your hand. It is the only part of the interface that most people will ever have to deal with. Once you know the ins and outs of parallel connectors, you’ll be able to plug in the vast majority of computer printers and the myriad of other things that now suck signals from what was once the printer’s port.
Unfortunately, as with the variety of operating modes, the parallel port connector itself is not a single thing.
Parallel ports use three different connectors, called A, B, and C.
The A Connector
The A connector appears only on computers as the output of a parallel port. Technically, it is described as a female 25-pin D-shell connector. Engineers chose this particular connector pragmatically—it was readily available and was the smallest connector that could handle the signals required in a full parallel connection. After it became ubiquitous, the IEEE adopted it as its 1284-A connector. Figure 11.10 shows a conceptual view of the A connector.
Of the 25 contacts on this parallel port connector, 17 are assigned individual signals for data transfer and control. The remaining eight serve as ground returns. Under the IEEE 1284 specification, the definition of each signal on each pin is dependent on the operating mode of the port. Only the definitions change; the physical wiring inside your computer and inside the cables does not change—if it did, shifting modes would be far from trivial. The altered definitions change the protocol, which is the signal handshaking that mediates each transfer.
A single physical connector on the back of your computer can operate in any of these five modes, and the signal definitions and their operation will change accordingly. Table 11.11 lists these five modes and their signal assignments.
Along with standardized signal assignments, IEEE 1284 also gives us a standard nomenclature for describing the signals. In Table 11.11, as well as all following tables that refer to the standard, signal names prefaced with a lowercase n indicate that the signal goes negative when active (that is, the absence of a voltage means the signal is present).
Mode changes are negotiated between your computer and the printer or other peripheral connected to the parallel port. Consequently, both ends of the connection switch modes together so that the signal assignments remain consistent at both ends of the connection. For example, if you connect an older printer that only understands compatibility mode, your computer cannot negotiate any other operating mode with the printer. It will not activate its EPP or ECP mode, so your printer will never get signals it cannot understand. This negotiation of the mode ensures backward compatibility among parallel devices.
The B Connector
The parallel port input to printers is quite a different connector from that on your computer. In fact, the design predates personal computers, having first been used by a printer company, Centronics, which gave the connector and parallel port its alternate name (now falling into disuse). The design is a 36-pin ribbon connector (the contacts take the form of thin metal ribbons) in a D-shell. Figure 11.11 shows this connector as a jack that would appear on the back of a printer.
The assignment of signals to the individual pins of this connector has gone through three stages. The first standard was set by Centronics for its printers. In 1981, IBM altered this design somewhat by redefining several of the connections. Finally, in 1994, the IEEE published its standard assignments, which (like those of the A-connector) vary with operating mode.
The C Connector
Given a chance to start over with a clean slate and no installed base, engineers would hardly come up with the confusion of two different connectors with an assortment of different, sometimes-compatible operating modes. The IEEE saw the creation of the 1284 standard as such an opportunity, one they were happy to exploit. To eliminate the confusion of two connectors and the intrinsic need for adapters to move between them, they took the logical step: They created a third connector, IEEE 1284-C.
For the most part, the C connector is just the B connector with some of the air let out. Figure 11.12 shows a jack that you’d find on the back of equipment using C connectors.
Adapters
The standard printer cable for computers is an adapter cable. It rearranges the signals of the A connector to the scheme of the B connector. Ever since the introduction of the first computer, you have needed this sort of cable just to make your printer work. Over the years, these cables have become plentiful and cheap.
To cut costs, many makers of adapter cables group all the grounds together as a single common line so that you need only 18 instead of 25 conductors in the connecting cable. Cheap adapters, which do not meet the IEEE 1284 standard, use this approach.
A modern printer cable should contain a full 25 connections with the ground signals divided up among separate pins. A true IEEE 1284 printer cable is equipped with an A connector on one end and a B connector on the other, with the full complement of connections in between.
As new peripherals with the 1284-C connector become available, you’ll need to plug them into your computer. To attach your existing computer to a printer or other device using the C connector, you’ll need an adapter cable to convert the A connector layout to the C connector design. On the other hand, if your next computer or parallel adapter uses the C connector and you plan to stick with your old printer, you’ll need another variety of adapter—one that translates the C connector layout to that of the B connector.
Cable
The high-speed modes of modern parallel ports make them finicky. When your parallel port operates in EPP or ECP mode, cable quality becomes critical, even for short runs. Signaling speed across one of these interfaces can be in the megahertz range. The frequencies far exceed the reliable limits of even short runs of the dubious low-cost printer cables. Consequently, the IEEE 1284 specification precisely details a special cable for high-speed operation. Figure 11.13 offers a conceptual view of the construction of this special parallel data cable.
Unlike standard parallel wiring, the data lines in IEEE 1284 cables must be double-shielded to prevent interference from affecting the signals. Each signal wire must be twisted with its ground return. Even though the various standard connectors do not provide separate pins for each of these grounds, the ground wires must be present and run the full length of the cable.
The differences between old-fashioned “printer” cables and those that conform to the IEEE 1284 standard are substantial. Although you can plug in a printer with either a printer or IEEE 1284–compliant cable, devices that exploit the high-speed potentials of the EPP or ECP designs may not operate properly with a noncompliant cable. Often, even when a printer fails to operate properly, the cable may be at fault. Substituting a truly IEEE 1284–compliant cable will bring reluctant connections to life.
Electrical Operation
In each of its five modes, the IEEE 1284 parallel port operates as if it were some kind of completely different electronic creation. When in compatibility mode, the IEEE 1284 port closely parallels the operation of the plain-vanilla printer port of bygone days. It allows data to travel in one direction only, from computer to printer. Nibble mode gives your printer (or more likely, another peripheral) a voice and allows it to talk back to your computer. In nibble mode, data can move in either of two directions, although asymmetrically. Information flows faster to your printer than it does on the return trip. Byte mode makes the journey fully symmetrical.
With the shift to EPP mode, the parallel port becomes a true expansion bus. A new way of linking to your computer’s bus gives it increased bidirectional speed. Many systems can run their parallel ports 10 times faster in EPP mode than in compatibility, nibble, or byte mode. ECP mode takes the final step, giving control in addition to speed. ECP can do just about anything any other expansion interface (including SCSI) can do.
Because of these significant differences, the best way to get to know the parallel port is by considering each separately as if it were an interface unto itself. Our examination will follow from simple to complex, which also mirrors the history of the parallel port.
Note that IEEE 1284 deals only with the signals traveling through the connections of the parallel interface. It establishes the relationship between signals and their timing. It concerns itself neither with the data that is actually transferred, nor with the command protocols encoded in the data, nor with the control system that produces the signals. In other words, IEEE 1284 provides an environment under which other standards such as EPP and ECP operate. That is, ECP and EPP modes are not the ECP and EPP standards, although those modes are meant to be used by parallel ports operating under the respective standards.
Compatibility Mode
The least common denominator among parallel ports is the classic design that IBM introduced with its first computer. It was conceived strictly as an interface for the one-way transfer of information. Your computer sends data to your printer and expects nothing in return. After all, a printer neither stores information nor creates information on its own.
In conception, this port is like a conveyor that unloads ore from a bulk freighter or rolls coal out of a mine. The raw material travels in one direction. The conveyor mindlessly pushes out stuff and more stuff, perhaps creating a dangerously precarious pile, until its operator wakes up and switches it off before the pile gets much higher than his waist.
If your printer had unlimited speed or an unlimited internal buffer, such a one-way design would work. But like the coal yard, your printer has a limited capacity and may not be able to cart off data as fast as the interface shoves it out. The printer needs some way of sending a signal to your computer to warn about a potential data overflow. In electronic terms, the interface needs feedback of some kind—it needs to get information from the printer that your computer can use to control the data flow.
To provide the necessary feedback for controlling the data flow, the original Centronics port design and IBM’s adaptation of it both included several control signals. These were designed to allow your computer to monitor how things are going with your printer—whether data is piling up, whether the printer has sufficient paper or ribbon, and even whether the printer is turned on. Your computer can use this information to moderate the outflowing gush of data or to post a message warning you that something is wrong with your printer. In addition, the original parallel port included control signals sent from your computer to the printer to tell it when the computer wanted to transfer data and to tell the printer to reset itself. The IEEE 1284 standard carries all these functions into compatibility mode.
Strictly speaking, then, even this basic parallel port is not truly a one-way connection, although its feedback provisions were designed strictly for monitoring rather than data flow. For the first half of its life, the parallel port kept to this design. Until the adoption of IEEE 1284, this was the design you could expect for the port on your printer and, almost as likely, those on your computer.
Each signal flowing through the parallel port in compatibility mode has its own function in handling the transfer of data.
Data Lines
The eight data lines of the parallel interface convey data in all operating modes. In compatibility mode, they carry data from the host to the peripheral on connector pins 2 through 9. The higher-numbered pins carry the more significant bits of the digital code. To send data to the peripheral, the host puts a pattern of digital voltages on the data lines.
Strobe Line
The presence of signals on the data lines does not, in itself, move information from host to peripheral. As your computer gets its act together, the pattern of data bits may vary in the process of loading the correct values. No hardware can ensure that all eight will always pop to the correct values simultaneously. Moreover, without further instruction your printer has no way of knowing whether the data lines represent a single character or multiple repetitions of the same character.
To ensure reliable communications, the system requires a means of telling the peripheral that the pattern on the data lines represents valid information to be transferred. The strobe line does exactly that. Your computer pulses the strobe line to tell your printer that the bit-pattern on the data lines is a single valid character that the printer should read and accept. The strobe line gives its pulse only after the signals on the data lines have settled down. Most parallel ports delay the strobe signal by about half a microsecond to ensure that the data signals have settled. The strobe itself lasts for at least half a microsecond so that your printer can recognize it. (The strobe signal can last up to 500 microseconds.) The signals on the data lines must maintain a constant value during this period and slightly afterward so that your printer has a chance to read them.
The strobe signal is “negative going.” That is, a positive voltage (+5VDC) stays on the strobe line until your computer wants to send the actual strobe signal. Your computer then drops the positive voltage to near zero for the duration of the strobe pulse. The IEEE 1284 specification calls this signal nStrobe.
Busy Line
Sending data to your printer is a continuous cycle of setting up the data lines, sending the strobe signal, and putting new values on the data lines. The parallel port design typically requires about two microseconds for each turn of this cycle, allowing a perfect parallel port to dump out nearly half a million characters a second into your hapless printer. (As you will see, the actual maximum throughput of a parallel port is much lower than this.)
For some printers, coping with that data rate is about as daunting as trying to catch machine gun fire with your bare hands. Before your printer can accept a second character, its circuitry must do something with the one it has just received. Typically, the printer will need to move the character into its internal buffer. Although the character moves at electronic speeds, it does not travel instantaneously. Your printer needs to be able to tell your computer to wait for the processing of the current character before sending the next.
The parallel port’s busy line gives your printer the needed breathing room. Your printer switches on the busy signal as soon as it detects the strobe signal and keeps the signal active until it is ready to accept the next character. The busy signal can last for a fraction of a second (even as short as a microsecond), or your printer could hold it on indefinitely while it waits for you to correct some error. No matter how long the busy signal is on, it keeps your computer from sending out more data through the parallel port. It functions as the basic flow-control system.
Acknowledge Line
The final part of the flow-control system of the parallel port is the acknowledge line. It tells your computer that everything has gone well with the printing of a character or its transfer to the internal buffer. In effect, it is the opposite of the busy signal, telling your computer that the printer is ready rather than unready. Whereas the busy line says, “Whoa,” the acknowledge line says, “Giddyap!” The acknowledge signal is the opposite of the busy signal in another way: It is negative going whereas the busy signal is positive going. The IEEE 1284 specification calls this signal nAck.
When your printer sends out the acknowledge signal, it completes the cycle of sending a character. Typically the acknowledge signal on a conventional parallel port lasts about eight microseconds, stretching a single character cycle across the port to 10 microseconds. (IEEE 1284 specifies the length of nAck to be between 0.5 and 10 microseconds.) If you assume the typical length of this signal for a conventional parallel port, the maximum speed of the port works out to about 100,000 characters per second.
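To make the sequence concrete, the following sketch in C walks through one character cycle at the signal level. The helper routines are hypothetical placeholders for whatever mechanism actually drives the port hardware (direct register access is covered later in this chapter, under "Traditional Parallel Ports"), and the timing values follow the figures quoted above.

    #include <stdbool.h>

    /* Hypothetical hardware-access helpers, not real library calls;
       an actual implementation would drive the port registers. */
    static void put_data_lines(unsigned char value) { (void)value; /* drive pins 2-9 */ }
    static void set_nstrobe(bool high)              { (void)high;  /* drive the nStrobe line */ }
    static bool busy_asserted(void)                 { return false; /* read the busy line */ }
    static void wait_half_microsecond(void)         { /* platform-specific delay */ }

    /* One compatibility-mode character cycle, as described above. */
    void send_character(unsigned char c)
    {
        while (busy_asserted())        /* flow control: honor the busy signal        */
            ;
        put_data_lines(c);             /* set the bit pattern on the data lines      */
        wait_half_microsecond();       /* let the data signals settle (about 0.5 us) */
        set_nstrobe(false);            /* nStrobe is negative going: low means valid */
        wait_half_microsecond();       /* hold the strobe for at least 0.5 us        */
        set_nstrobe(true);             /* end the strobe; data must remain stable    */
        /* The printer now raises busy, then pulses nAck when it has
           digested the character, completing the cycle. */
    }

In a real computer, the port hardware and the BIOS or driver carry out these steps; the sketch only fixes the order and rough timing of the events.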
Select
In addition to transferring data to the printer, the basic parallel port allows your printer to send signals back to your computer so that your computer can monitor the operation of the printer. The original IBM design of the parallel interface includes three such signals that tell your computer when your printer is ready, willing, and able to do its job. In effect, these signals give your computer the ability to remotely sense the condition of your printer.
The most essential of these signals is select. The presence of this signal on the parallel interface tells your computer that your printer is online (that is, the printer is switched on and is in its online mode, ready to receive data from your computer). In effect, it is a remote indicator for the online light on your printer’s control panel. If this signal is not present, your computer assumes that nothing is connected to your parallel port and doesn’t bother with the rest of its signal repertory.
Because the rest state of a parallel port line is an absence of voltage (which would be the case if nothing were connected to the port to supply the voltage), the select signal takes the form of a positive signal (nominally +5VDC) that in compatibility mode under the IEEE 1284 specification stays active the entire period your printer is online.
Paper Empty
To print anything, your printer needs paper, and the most common problem that prevents your printer from doing its job is running out of paper. The paper empty signal warns your computer when your printer runs out. The IEEE 1284 specification calls this signal PError, for paper error, although it serves exactly the same function.
Paper empty is an information signal. It is not required for flow control because the busy signal more than suffices for that purpose. Most printers will assert their busy signals for the duration of the period they are without paper. Paper empty tells your computer the specific reason that your printer has stopped data flow. This signal allows your operating system or application to flash a message on your monitor to warn you to load more paper.
Fault
The third printer-to-computer status signal is fault, a catchall for warning of any other problems that your printer may develop—out of ink, paper jams, overheating, conflagrations, and other disasters. In operation, fault is actually a steady-state positive signal. It dips low (or off) to indicate a problem. At the same time, your printer may issue its other signals to halt the data flow, including busy and select. It never hurts to be extra sure. Because this signal is “negative going,” the IEEE specification calls it nFault.
Initialize Printer
In addition to the three signals your printer uses to warn of its condition, the basic parallel port provides three control signals that your computer can use to command your printer without adding anything to the data stream. Each of these three provides its own hard-wired connection for a specific purpose. These include one to initialize the printer, another to switch it to online condition (if the printer allows a remote control status change), and a final signal to tell the printer to feed the paper up one line.
The initialize printer signal helps your computer and printer keep in sync. Your computer can send a raft of different commands to your printer to change its mode of operation, change font, alter printing pitch, and so on. Each of the applications that share your printer might send out its own favored set of commands. And many applications are like sloppy in-laws that come for a visit and fail to clean up after themselves. The programs may leave your printer in some strange condition, such as set to print underscored boldface characters in agate-size type with a script typeface. The next program you run might assume some other condition and blithely print out a paycheck in illegible characters.
Initialize printer tells your printer to step back to ground zero. Just as your computer boots up fresh and predictably, so does your printer. When your computer sends your printer the initialize printer command, it tells the printer to boot up (that is, reset itself and load its default operating parameters with its startup configuration of fonts, pitches, typefaces, and the like). The command has the same effect as switching the printer off and back on again; it simply substitutes for a remote-control arm on your computer that duplicates your actions.
During normal operation, your computer puts a constant voltage on the initialize printer line. Removing the voltage tells your printer to reset. The IEEE 1284 specification calls this negative-going signal nInit.
Select Input
The signal that allows your computer to switch your printer online and offline is called select input. The IEEE 1284 specification calls it nSelectIn. It is active, forcing your printer online, when it is low or off. Switching it to high deselects your printer.
Not all printers obey this command. Some have no provisions for switching themselves on- and offline. Others have setup functions (such as a DIP switch) that allow you to defeat the action of this signal.
Auto Feed XT
At the time IBM imposed its print system design on the rest of the world, different printers interpreted the lowly carriage return in one of two ways. Some printers took it literally: carriage return means to move the printhead carriage back to its starting position on the left side of the platen. Other printers thought more like typewriters. Moving the printhead full left also indicated the start of a new line, so they obediently advanced the paper one line when they got a carriage return command. IBM, being a premier typewriter maker at the time, opted for this second definition.
To give printer developers flexibility, however, the IBM parallel port design included the Auto Feed XT signal to give your computer command of the printer’s handling of carriage returns. Under the IEEE 1284 specification, this signal is called nAutoFd. By holding this signal low or off, your computer commands your printer to act in the IBM and typewriter manner, adding a line feed to every carriage return. Making this signal high tells your printer to interpret carriage returns literally and only move the printhead. Despite the availability of this signal, most early computer printers ignored it and did whatever their setup configuration told them to do with carriage returns.
Nibble Mode
Early parallel ports used unidirectional circuitry for their data lines. No one foresaw the need for your computer to acquire data from your printer, so there was no need to add the expense or complication of bidirectional buffers to the simple parallel port. This tradition of single-direction design and operation continues to this day in the least expensive (which, of course, also means cheapest) parallel ports.
Every parallel port does, however, have five signals that are meant to travel from the printer to your computer. These include (as designated by the IEEE 1284 specification) nAck, Busy, PError, Select, and nFault. If you could suspend the normal operation of these signals temporarily, you could use four of them to carry data back from the printer to your computer. Of course, the information would flow at half speed, four bits at a time.
This means of moving data is the basis of nibble mode, so called because the computer community calls half a byte (the aforementioned four bits) a nibble. Using nibble mode, any parallel port can operate bidirectionally, running at full speed forward but only half speed in reverse.
Nibble mode requires that your computer take explicit command and control the operation of your parallel port. The port itself merely monitors all its data and monitoring signals and then relays the data to your computer. Your computer determines whether to regard your printer’s status signals as backward-moving data. Of course, this system also requires that the device at the other end of the parallel port (your printer or whatever) know that it has switched into nibble mode and understand what signals to put where and when. The IEEE 1284 specification defines a protocol for switching into nibble mode and how computer and peripherals handle the nibble-mode signals.
The process is complex, involving several steps. First, your computer must identify whether the peripheral connected to it recognizes the IEEE standard. If not, all bets are off for using the standard. Products created before IEEE 1284 was adopted needed special drivers that matched the port to a specific peripheral. Because the two were already matched, they knew everything they needed to know about each other without negotiation. The pair could work without understanding the negotiation process or even the IEEE 1284 specification. Using the specification, however, allows your computer and peripherals to do the matching without your intervention.
Once your computer and peripheral decide they can use nibble mode, your computer signals to the peripheral to switch to the mode. Before the IEEE 1284 standard, the protocol was proprietary to the parallel port peripheral. The standard gives all devices a common means of controlling the switchover.
After both your computer and parallel port peripheral have switched to nibble mode, the signals on the interface get new definitions. In addition, nibble mode itself operates in two modes or phases, and the signals on the various parallel port lines behave differently in each mode. These modes include reverse idle phase and reverse data transfer phase.
In reverse idle phase, the PtrClk signal (nAck in compatibility mode) operates as an attention signal from the parallel port peripheral. Activating this signal tells the parallel port to issue an interrupt inside your computer, signaling that the peripheral has data available to be transferred. Your computer acknowledges the need for data and requests its transfer by switching the HostBusy signal (nAutoFd in compatibility mode) to low or off. This switches the system to reverse data transfer phase. Your computer switches the HostBusy signal to high again after the completion of the transfer of a full data byte. When the peripheral has more data ready and your computer switches HostBusy back to low again, another transfer begins. If HostBusy switches to low without the peripheral having data available to send, the transition reengages reverse idle phase.
Because moving a byte from peripheral to computer requires two nibble transfers, each of which requires the same time as one byte transfer from computer to peripheral, reverse transfers in nibble mode operate at half speed at best. The only advantage of nibble mode is its universal compatibility. Even before the IEEE 1284 specification, it allowed any parallel port to operate bidirectionally. Because of this speed penalty, if your peripheral and parallel port let you choose the operating mode for bidirectional transfers, nibble mode is your least attractive choice.
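As a rough illustration, the sketch below assembles one byte from two nibble-mode reads of the printer status register (at an offset of one from the port's base address, as described later in this chapter under "Traditional Parallel Ports"). It assumes the commonly documented nibble mapping, with data bit 0 on nFault, bit 1 on Select, bit 2 on PError, and bit 3 on Busy (which reads inverted in the status register), and it omits the HostBusy/PtrClk handshaking that frames each nibble. The base address of 0x378 and the Linux-style inb() port access are likewise assumptions made for the example.

    #include <sys/io.h>   /* inb(); direct port access needs ioperm() privileges */

    #define LPT_BASE   0x378
    #define LPT_STATUS (LPT_BASE + 1)

    /* Read one nibble from the peripheral's status lines. */
    static unsigned char read_nibble(void)
    {
        unsigned char status = inb(LPT_STATUS);
        unsigned char nibble = 0;

        nibble |= (status >> 3) & 0x07;          /* nFault, Select, PError -> bits 0-2 */
        nibble |= ((status ^ 0x80) >> 4) & 0x08; /* Busy (inverted in register) -> bit 3 */
        return nibble;
    }

    /* Two half-speed transfers yield one byte. */
    unsigned char read_byte_nibble_mode(void)
    {
        unsigned char low  = read_nibble();      /* host would lower HostBusy first  */
        unsigned char high = read_nibble();      /* ...and again for the high nibble */
        return (unsigned char)(low | (high << 4));
    }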
Byte Mode
Unlike nibble mode, byte mode requires special hardware. The basic design for byte-mode circuitry was laid down when IBM developed its PS/2 line of computers and its Data Migration Facility. By incorporating bidirectional buffers on all eight data lines of the parallel port, IBM enabled each end of the connection to both send and receive information. Other than that change, the new design involved no other modifications to signals, connector pin assignments, or the overall operation of the port. Before the advent of the IEEE standard, these ports were known as PS/2 parallel ports or bidirectional parallel ports.
IEEE 1284 does more than put an official industry imprimatur on the IBM design, however. The standard redefines the bidirectional signals and adds a universal protocol for negotiating bidirectional transfers.
As with nibble mode, a peripheral in byte mode uses the PtrClk signal to trigger an interrupt in the host computer to advise that the peripheral has data available for transfer. When the computer services the interrupt, it checks the port's nDataAvail signal, a negative-going signal that indicates a byte is available for transfer when it goes low. The computer can then switch off the HostBusy signal to trigger the transfer and use the HostClk (nStrobe) signal to read the data. The computer raises the HostBusy signal again to indicate the successful transfer of the data byte. The cycle can then repeat for as many bytes as need to be sent.
Because byte mode is fully symmetrical, transfers occur at the same speed in either direction. The speed limit is set by the performance of the port hardware, the speed at which the host computer handles the port overhead, and the length of timing cycles set in the IEEE 1284 specification. Potentially the design could require as little as four microseconds for each byte transferred, but real-world systems peak at about the same rate as conventional parallel ports (100,000 bytes per second).
Enhanced Parallel Port Mode
When it was introduced, the chief innovation of the Enhanced Parallel Port (EPP) was its improved performance, thanks to a design that hastened the speed at which your computer could pack data into the port. The EPP design altered port hardware so that instead of using byte-wide registers to send data through the port, your computer could dump a full 32-bit word of data directly from its bus into the port. The port would then handle all the conversion necessary to repackage the data into four byte-wide transfers. The reduction in computer overhead and more efficient hardware design enabled a performance improvement by a factor of 10 in practical systems. This speed increase required more stringent specifications for printer cables. The IEEE 1284 specification does not get into the nitty-gritty of linking the parallel port circuitry to your computer, so it does not guarantee that a port in EPP mode will deliver all this speed boost. Moreover, the IEEE 1284 cable specs are not as demanding as the earlier EPP specs.
EPP mode of the IEEE 1284 specification uses only six signals in addition to the eight data lines for controlling data transfers. Three more connections in the interface are reserved for use by individual manufacturers and are not defined under the standard.
A given cycle across the EPP mode interface performs one of four operations: writing an address, reading an address, writing data, or reading data. The address corresponds to a register on the peripheral. The data operations are targeted on that address. Multiple data bytes may follow a single address signal as a form of burst mode.
nWrite
Data can travel both ways through an EPP connection. The nWrite signal tells whether the contents of the data lines are being sent from your computer to a peripheral or from a peripheral to your computer. When the nWrite signal is set to low, it indicates data is bound for the peripheral. When set to high, it indicates data sent from the peripheral.
nDStrobe
As with other parallel port transfers, your system needs a signal to indicate when the bits on the data lines are valid and accurate. EPP mode uses a negative-going signal called nDStrobe for this function during data operations. Although this signal serves the same function as the strobe signal on a standard parallel port, it has been moved to a different pin, the one used by the nAutoFd signal in compatibility mode.
nAStrobe
To identify a valid address on the interface bus, the EPP system uses the nAStrobe signal. This signal uses the same connection as nSelectIn during compatibility mode.
nWait
To acknowledge that it has properly received a transfer, a peripheral deactivates the negative-going nWait signal (making it a positive voltage on the bus). By holding the signal positive, the peripheral signals the host computer to wait. Making the signal negative again indicates that the peripheral is ready for another transfer.
Intr
A peripheral connected to the EPP interface can signal to the host computer that it requires immediate service by sending out the Intr signal. The transition between low and high states of this signal indicates a request for an interrupt (that is, the signal is “edge triggered”). EPP mode does not allocate a signal to acknowledge that the interrupt request was received.
nInit
The escape hatch for EPP mode is the nInit signal. When this signal is activated (making it low), it forces the system out of EPP mode and back into compatibility mode.
Extended Capabilities Port Mode
When operating in ECP mode, the IEEE 1284 port uses seven signals to control the flow of data through the standard eight data lines. ECP mode defines two data-transfer signaling protocols—one for forward transfers (from computer to peripheral) and one for reverse transfers (peripheral to computer)—and the transitions between them. Transfers are moderated by closed-loop handshaking, which guarantees that all bytes get where they are meant to go, even if the connection is temporarily disrupted.
Because all parallel ports start in compatibility mode, your computer and its peripherals must first negotiate with one another to arrange to shift into ECP mode. Your computer and its software initiate the negotiation (as well as manage all aspects of the data transfers). Following a successful negotiation to enter into ECP mode, the connection enters its forward idle phase.
HostClk
To transfer information or commands across the interface, your computer starts from the forward idle phase and puts the appropriate signals on the data lines. To signal to your printer or other peripheral that the values on the data lines are valid and should be transferred, your computer activates its HostClk signal, setting it to a logical high.
PeriphAck
The actual transfer does not take place until your printer or other peripheral acknowledges the HostClk signal by sending back the PeriphAck signal, setting it to a logical high. In response, your computer switches the HostClk signal to low. Your printer or peripheral then knows it should read the signals on the data lines. Once it finishes reading the data signals, the peripheral switches the PeriphAck signal to low. This completes the data transfer. Both HostClk and PeriphAck are back to their forward idle phase norms, ready for another transfer.
nPeriphRequest
When a peripheral needs to transfer information back to the host computer or to another peripheral, it makes a request by driving the nPeriphRequest signal low. The request is a suggestion rather than a command because only the host computer can initiate or reverse the flow of data. The nPeriphRequest signal typically causes an interrupt in the host computer to make this request known.
nReverseRequest
To allow a peripheral to send data back to the host or to another device connected to the interface, the host computer activates the nReverseRequest signal by driving it low, essentially switching off the voltage that otherwise appears there. This signals to the peripheral that the host computer will allow the transfer.
nAckReverse
To acknowledge that it has received the nReverseRequest signal and that it is ready for a reverse-direction transfer, the peripheral asserts its nAckReverse signal, driving it low. The peripheral can then send information and commands through the eight data lines and the PeriphAck signal.
PeriphClk
To begin a reverse transfer from peripheral to computer, the peripheral first loads the appropriate bits onto the data lines. It then signals to the host computer that it has data ready to transfer by driving the PeriphClk signal low.
HostAck
Your computer responds to the PeriphClk signal by switching the HostAck signal from its idle logical low to a logical high. The peripheral responds by driving PeriphClk high.
When the host accepts the data, it responds by driving the HostAck signal low. This completes the transfer and returns the interface to the reverse idle phase.
Data Lines
Although ECP mode uses the same eight data lines to transfer information as the other IEEE 1284 port modes do, it supplements them with an additional signal to indicate whether the data lines contain data or a command. The signal that carries this ninth bit of information changes with the direction of the transfer. When ECP mode transfers data from the computer host to a peripheral (that is, during a forward transfer), it uses the HostAck signal to specify a command or data. When a peripheral originates the data being transferred (a reverse transfer), it uses the PeriphAck signal to specify a command or data.
Logical Interface
Your computer controls each of its parallel ports through a set of three consecutive input/output ports. The typical computer sets aside three such sets of addresses for up to three parallel ports, although most systems provide the matching hardware for only one. The base addresses used by parallel ports include 03BC(hex), 0378(hex), and 0278(hex).
When Windows boots up, it scans these addresses and assigns a logical name to each. These names are LPT1, LPT2, and LPT3. The name is a contraction of Line Printer, echoing the original purpose of the port. The port with the name LPT1 can also use the alias PRN. You can use these names at the system command prompt to identify a parallel port and the printer connected to it.
The computer printer port was designed to be controlled by a software driver. Your computer's BIOS provides a rudimentary driver, but most advanced operating systems take direct hardware control of the parallel port through their own software drivers. Windows includes a parallel port driver of its own. You may also need to install drivers for any device that connects to your parallel port. For example, every printer requires its own driver (many of which, though not all, are built in to Windows).
Control
Even in its immense wisdom, a microprocessor can’t fathom how to operate a parallel port by itself. It needs someone to tell it how to move the signals around. Moreover, the minutiae of constantly taking care of the details of controlling a port would be a waste of the microprocessor’s valuable time. Consequently, system designers created help systems for your computer’s big brain. Driver software tells the microprocessor how to control the port, and port hardware handles all the details of port operation.
As parallel ports have evolved, so have these aspects of their control. The software that controls the traditional parallel port that’s built in to the firmware of your computer has given way to a complex system of drivers. The port hardware, too, has changed to both simplify operation and to speed it up.
These changes don’t follow the neat system of modes laid down by IEEE 1284. Instead, they have undergone a period of evolution in reaching their current condition.
Traditional Parallel Ports
In the original computer, each parallel port linked to the computer's microprocessor through three separate I/O ports, each controlling its own register. The address of the first of these registers serves as the base address of the parallel port. The other two addresses are the next higher in sequence. For example, when the first parallel port in a computer has a base address of 0378 (hex), the other two I/O ports assigned to it have addresses of 0379 (hex) and 037A (hex).
The register at the base address of the parallel port serves as a data latch, called the printer data register, which temporarily holds the values passed along to it by your computer’s microprocessor. Each of the eight bits of this port is tied to one of the data lines leading out of the parallel port connector. The correspondence is exact. For example, the most significant bit of the register connects to the most significant bit on the port connector. When your computer’s microprocessor writes values to the base register of the port, the register latches those values until your microprocessor sends newer values to the port.
Your computer uses the next register on the parallel port, corresponding to the next I/O port, to monitor what the printer is doing. Termed the printer status register, it carries messages from the printer back to your computer in the bits your microprocessor can read at this I/O port. The five most significant bits of this register directly correspond to five signals appearing in the parallel cable: Bit 7 indicates the condition of the busy signal; bit 6, acknowledge; bit 5, paper empty; bit 4, select; and bit 3, error. The remaining three bits of this register (bits 2, 1, and 0, the least significant bits) served no function in the original computer parallel port.
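As a small illustration of these bit assignments, the sketch below reads the printer status register and reports each signal by name. The base address of 0x378 and the Linux-style ioperm()/inb() port access are assumptions made for the example, and the busy bit is treated as inverted, which is how most port hardware presents it.

    #include <stdio.h>
    #include <sys/io.h>

    #define LPT_BASE   0x378
    #define LPT_STATUS (LPT_BASE + 1)

    int main(void)
    {
        if (ioperm(LPT_BASE, 3, 1) != 0) {   /* request access to the three ports */
            perror("ioperm");
            return 1;
        }

        unsigned char s = inb(LPT_STATUS);

        printf("busy:        %s\n", (s & 0x80) ? "no"   : "yes");      /* bit 7, usually inverted */
        printf("acknowledge: %s\n", (s & 0x40) ? "idle" : "asserted"); /* bit 6, nAck             */
        printf("paper empty: %s\n", (s & 0x20) ? "yes"  : "no");       /* bit 5, PError           */
        printf("selected:    %s\n", (s & 0x10) ? "yes"  : "no");       /* bit 4, Select           */
        printf("fault:       %s\n", (s & 0x08) ? "no"   : "yes");      /* bit 3, nFault           */
        return 0;
    }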
To send commands to your printer, your computer uses the third I/O port, offset two ports from the base address of the parallel port. The register there, called the printer control register, relays commands through its five least significant bits. Of these, four directly control corresponding parallel port lines. Bit 0 commands the strobe line; bit 1, the Auto Feed XT line; bit 2, the initialize line; and bit 3, the select line.
To enable your printer to send interrupts to command the microprocessor's attention, your computer uses bit 4 of the printer control register. Setting this bit to high causes the acknowledge signal from the printer to trigger a printer interrupt. During normal operation your printer, after it receives and processes a character, changes the acknowledge signal from a logical high to a low. With bit 4 set, that change on the acknowledge line triggers the hardware interrupt assigned to the port. In the normal course of things, this interrupt simply instructs the microprocessor to send another character to the printer.
All the values sent to the printer data register and the printer control register are put in place by your computer's microprocessor, and the chip must read and react to all the values packed into the printer status register. Your computer gets its instructions for operating the port from firmware that is part of your system's ROM BIOS. The routines coded for interrupt vector 017 (hex) carry out most of these functions. In the normal course of things, your applications call interrupt 017 (hex) after loading appropriate values into your microprocessor's registers, and the microprocessor relays the values to your printer. These operations are very microprocessor intensive. They can occupy a substantial fraction of the power of a microprocessor (particularly that of older, slower chips) during print operations.
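Putting the three registers together, the following sketch sends a short string through a traditional parallel port by direct register access. It is only an illustration of the mechanism described above: the base address of 0x378, the Linux-style ioperm()/outb() access, the treatment of the busy bit as inverted, and the absence of any timeout or error handling are all simplifying assumptions, and on a modern system the operating system's own driver normally owns the port.

    #include <stdio.h>
    #include <sys/io.h>
    #include <unistd.h>

    #define LPT_BASE    0x378
    #define LPT_DATA    (LPT_BASE + 0)   /* printer data register    */
    #define LPT_STATUS  (LPT_BASE + 1)   /* printer status register  */
    #define LPT_CONTROL (LPT_BASE + 2)   /* printer control register */

    static void lpt_send_byte(unsigned char c)
    {
        while (!(inb(LPT_STATUS) & 0x80))   /* wait out the busy signal (bit 7, inverted) */
            ;

        outb(c, LPT_DATA);                  /* latch the character on the data lines */
        usleep(1);                          /* let the data signals settle           */

        unsigned char ctrl = inb(LPT_CONTROL);
        outb(ctrl | 0x01, LPT_CONTROL);     /* bit 0: assert the strobe */
        usleep(1);                          /* hold the strobe briefly  */
        outb(ctrl & ~0x01, LPT_CONTROL);    /* end the strobe           */
    }

    int main(void)
    {
        if (ioperm(LPT_BASE, 3, 1) != 0) {
            perror("ioperm");
            return 1;
        }
        const char *message = "Hello from the parallel port\r\n";
        for (const char *p = message; *p; p++)
            lpt_send_byte((unsigned char)*p);
        return 0;
    }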
Enhanced Parallel Ports
Intel set the pattern for the Enhanced Parallel Port (EPP) by integrating the design into the 386SL chipset (which comprised a microprocessor, the 386SL itself, and a support chip, the 82360SL I/O subsystem, which together required only memory to make a complete computer). The EPP was conceived as a superset of the standard and PS/2 parallel ports. As with those designs, compatible transfers require the use of the three parallel port registers at consecutive I/O port addresses. However, EPP adds five new registers to the basic three. Although designers are free to locate these registers wherever they want because they are accessed using drivers, in the typical implementation these registers occupy the next five I/O port addresses in sequence.
EPP Address Register
The first new register (offset three from the base I/O port address) is called the EPP address register. It provides a direct channel through which your computer can specify addresses of devices linked through the EPP connection. By loading an address value in this register, your computer could select from among multiple devices attached to a single parallel port, at least once parallel devices using EPP addressing become available.
EPP Data Registers
The upper four ports of the EPP system interface (starting at offset four from the base port) link to the EPP data registers, which provide a 32-bit channel for sending data to the EPP data buffer. The EPP port circuitry takes the data from the buffer, breaks it into four separate bytes, and then sends the bytes through the EPP data lines in sequence. Substituting four I/O ports for the one used by standard parallel ports moves the conversion into the port hardware, relieving your system of the responsibility for formatting the data. In addition, in computers that have 32-bit data buses, your computer can write to the four EPP data registers simultaneously with a single 32-bit double-word in a single clock cycle. In lesser machines, the EPP specification also allows for byte-wide and word-wide (16-bit) write operations to the EPP data registers.
Unlike standard parallel ports, which require your computer’s microprocessor to shepherd data through the port, the Enhanced Parallel Port works automatically. It requires no other signals from your microprocessor after it loads the data in order to carry out a data transfer. The EPP circuitry itself generates the data strobe signal on the bus almost as soon as your microprocessor writes to the EPP data registers. When your microprocessor reads data from the EPP data registers, the port circuitry automatically triggers the data strobe signal to tell whatever device that’s sending data to the EPP connection that your computer is ready to receive more data. The EPP port can consequently push data through to the data lines with a minimum of transfer overhead. This streamlined design is one of the major factors that enables the EPP to operate so much faster than standard ports.
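The sketch below shows what this register arrangement looks like from the software side: one byte-wide write selects a register in the peripheral through the EPP address register at offset three, and a single 32-bit access to the EPP data registers at offset four lets the port hardware generate the four byte-wide transfers and their handshaking on its own. The base address of 0x378, the Linux-style outb()/outl() port access, and a port already negotiated into EPP mode are assumptions made for the example.

    #include <stdint.h>
    #include <sys/io.h>   /* outb(), outl(); requires ioperm() access to base..base+7 */

    #define LPT_BASE    0x378
    #define EPP_ADDRESS (LPT_BASE + 3)   /* EPP address register        */
    #define EPP_DATA    (LPT_BASE + 4)   /* first of the data registers */

    /* Write a 32-bit value to a register inside the EPP peripheral. */
    void epp_write(uint8_t device_register, uint32_t value)
    {
        outb(device_register, EPP_ADDRESS); /* port runs the address handshake itself    */
        outl(value, EPP_DATA);              /* port splits this into four byte transfers */
    }

    /* Read a 32-bit value back; the port reassembles the four bytes. */
    uint32_t epp_read(uint8_t device_register)
    {
        outb(device_register, EPP_ADDRESS);
        return inl(EPP_DATA);
    }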
Fast Parallel Port Control Register
Switching from standard parallel port to bidirectional to EPP operation requires only plugging values into one of the registers. Although manufacturers can use any design they want, needing only to alter their drivers to match, most follow the pattern set in the SL chips. Intel added a software-controllable fast parallel port control register as part of the chipset, corresponding to the unused bits of the standard parallel port printer control register.
Setting the most significant bit (bit 7) of the fast parallel port control register to high engages EPP operation. Setting this bit to low (the default) forces the port into standard mode. Another bit controls bidirectional operation. Setting bit 6 of the fast parallel port control register to high engages bidirectional operation. When low, bit 6 keeps the port unidirectional.
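If you follow that description literally, switching modes looks something like the sketch below, which sets or clears bits 7 and 6 of the printer control register at offset two. Treat it purely as an illustration: individual chipsets place and interpret these mode bits differently, and in practice the port driver or the system BIOS makes this setting for you. The base address of 0x378 and the Linux-style port access are, again, assumptions.

    #include <sys/io.h>

    #define LPT_BASE    0x378
    #define LPT_CONTROL (LPT_BASE + 2)

    /* Following the SL pattern described above: bit 7 selects EPP
       operation, bit 6 selects bidirectional operation. */
    void set_fast_port_mode(int enable_epp, int enable_bidirectional)
    {
        unsigned char ctrl = inb(LPT_CONTROL);

        ctrl = enable_epp           ? (ctrl | 0x80) : (ctrl & ~0x80);
        ctrl = enable_bidirectional ? (ctrl | 0x40) : (ctrl & ~0x40);

        outb(ctrl, LPT_CONTROL);
    }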
In most computers, an EPP doesn’t automatically spring to life. Simply plugging your printer into EPP hardware won’t guarantee fast transfers. Enabling the EPP requires a software driver that provides the link between your software and the EPP hardware.
Extended Capabilities Ports
As with other variations on the basic parallel port design, your computer controls an Extended Capabilities Port (ECP) through a set of registers. To maintain backward compatibility with products requiring access to a standard parallel port, the ECP design starts with the same trio of basic registers. However, it redefines the functions of these registers in each of the port's different operating modes.
The ECP design supplements the basic trio of parallel port registers with an additional set of registers offset at port addresses 0400 (hex) higher than the base registers. One of these, the extended control register, controls the operating mode of the ECP port.
As with other improved parallel port designs, ECP behaves exactly like a standard parallel port in its default mode. Your programs can write bytes to its data register (located at the port’s base address, just as with a standard parallel port) to send the bits through the data lines of the parallel connection. Switch to EPP or ECP mode, and your programs can write at high speed to a register as wide as 32 bits. The ECP design allows for transfers 8, 16, or 32 bits wide at the option of the hardware designer.
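In software, that mode switch typically amounts to rewriting a three-bit field in the extended control register, as in the sketch below. The register offset (the control register plus 0400 hex, or base + 0x402) and the mode encodings in bits 7 through 5 follow common ECP implementations rather than anything spelled out here, the base address of 0x378 with Linux-style port access is again an assumption, and normally the operating system's parallel port driver performs this step for you.

    #include <sys/io.h>

    #define LPT_BASE 0x378
    #define LPT_ECR  (LPT_BASE + 0x402)   /* extended control register */

    enum port_mode {                      /* commonly used encodings for bits 7-5 */
        MODE_STANDARD = 0x0,              /* compatibility (SPP) mode        */
        MODE_BYTE     = 0x1,              /* PS/2 bidirectional mode         */
        MODE_FIFO     = 0x2,              /* compatibility transfers via FIFO */
        MODE_ECP      = 0x3,              /* ECP mode                        */
        MODE_EPP      = 0x4,              /* EPP mode, where supported       */
    };

    void select_port_mode(enum port_mode mode)
    {
        unsigned char ecr = inb(LPT_ECR);
        ecr = (unsigned char)((ecr & 0x1F) | ((unsigned char)mode << 5));  /* replace bits 7-5 */
        outb(ecr, LPT_ECR);
    }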
To allow multiple devices to share a single parallel connection, the ECP design incorporates its own addressing scheme that allows your computer to separately identify and send data to up to 128 devices. When your computer wants to route a packet or data stream through the parallel connection to a particular peripheral, it sends out a channel address command through the parallel port. The command includes a device address. When an ECP parallel device receives the command, it compares the address to its own assigned address. If the two do not match, the device ignores the data traveling through the parallel connection until your computer sends the next channel address command through the port. When your computer fails to indicate a channel address, the data gets broadcast to all devices linked to the parallel connection.