Introduction to Principles of Computer Architecture: Organization of the Book and a Case Study on What Happened to Supercomputers

 
Organization of the Book
We explore the inner workings of computers in the chapters that follow. Chapter 2 covers the representation of data, which provides background for all of the remaining chapters. Chapter 3 covers methods for implementing computer arithmetic. Chapters 4 and 5 cover the instruction set architecture, which serves as a vehicle for understanding how the components of a computer interact. Chapter 6 ties the earlier chapters together in the design and analysis of a control unit for the instruction set architecture. Chapter 7 covers the organization of memory units and memory management techniques. Chapter 8 covers input, output, and communication. Chapter 9 covers advanced aspects of single-CPU systems (which might have more than one processing unit). Chapter 10 covers advanced aspects of multiple-CPU systems, such as parallel and distributed architectures, and network architectures. Finally, in Appendices A and B, we look into the design of digital logic circuits, which are the building blocks for the basic components of a computer.
Case Study: What Happened to Supercomputers?
[Note from the authors: The following contribution comes from Web page http://www.paralogos.com/DeadSuper created by Kevin D. Kissell at kevink@acm.org. Kissell’s Web site lists dozens of supercomputing projects that have gone by the wayside. One of the primary reasons for the near-extinction of supercomputers is that ordinary, everyday computers achieve a significant fraction of supercomputing power at a price that the common person can afford. The price-to-performance ratio for desktop computers is very favorable due to low costs achieved through mass market sales. Supercomputers enjoy no such mass markets, and continue to suffer very high price-to-performance ratios.
Following Kissell’s contribution is an excerpt from an Electronic Engineering Times article that highlights the enormous investment in everyday microprocessor development, which helps maintain the favorable price-to-performance ratio for low-cost desktop computers.]
The Passing of a Golden Age?
From the construction of the first programmed computers until the mid-1990s, there was always room in the computer industry for someone with a clever, if sometimes challenging, idea on how to make a more powerful machine. Computing became strategic during the Second World War, and remained so during the Cold War that followed. High-performance computing is essential to any modern nuclear weapons program, and a computer technology “race” was a logical corollary to the arms race. While powerful computers are of great value to a number of other industrial sectors, such as petroleum, chemistry, medicine, aeronautical, automotive, and civil engineering, the role of governments, and particularly the national laboratories of the US government, as catalysts and incubators for innovative computing technologies can hardly be overstated. Private industry may buy more machines, but it rarely risks buying those with single-digit serial numbers. The passing of Soviet communism and the end of the Cold War brought us a generally safer and more prosperous world, but it removed the raison d’être for many merchants of performance-at-any-price.
Accompanying these geopolitical changes were some technological and economic trends that spelled trouble for specialized producers of high-end computers. Microprocessors began in the 1970s as devices whose main claim to fame was that it was possible to put a stored-program computer on a single piece of silicon. Competitive pressures, and the desire to generate sales by obsoleting last year’s product, made for the doubling of microprocessor computing power every 18 months, Moore’s celebrated “law.” Along the way, microprocessor designers borrowed almost all the tricks that designers of mainframe and numerical supercomputers had used in the past: storage hierarchies, pipelining, multiple functional units, multiprocessing, out-of-order execution, branch prediction, SIMD processing, speculative and predicated execution. By the mid-1990s, research ideas were going directly from simulation to implementation in microprocessors destined for the desktops of the masses. Nevertheless, it must be noted that most of the gains in raw performance achieved by microprocessors in the preceding decade came, not from these advanced techniques of computer architecture, but from the simple speedup of processor clocks and quantitative increase in processor resources made possible by advances in semiconductor technology. By 1998, the CPU of a high-end Windows-based personal computer was running at a higher clock rate than the top-of-the-line Cray Research supercomputer of 1994.
It is thus hardly surprising that the policy of the US national laboratories has shifted from the acquisition of systems architected from the ground up to be supercomputers to the deployment of large ensembles of mass-produced microprocessor-based systems, with the ASCI project as the flagship of this activity. As of this writing, it remains to be seen if these agglomerations will prove to be sufficiently stable and usable for production work, but the preliminary results have been at least satisfactory. The halcyon days of supercomputers based on exotic technology and innovative architecture may well be over.
Invest or die: Intel’s life on the edge
By Ron Wilson and Brian Fuller
SANTA CLARA, Calif. — With about $600 million to pump into venture companies this year, Intel Corp. has joined the major leagues of venture-capital firms. But the unique imperative that drives the microprocessor giant to invest gives it influence disproportionate to even this large sum. For Intel, venture investments are not just a source of income; they are a vital tool in the fight to survive.
Survival might seem an odd preoccupation for the world’s largest semiconductor company. But Intel, in a way all its own, lives hanging in the balance. For every new generation of CPUs, Intel must make huge investments in process development, in buildings and in fabs, an investment too huge to lose.
Gordon Moore, Intel chairman emeritus, gave scale to the wager. “An R&D fab today costs $400 million just for the building. Then you put about $1 billion of equipment in it. That gets you a quarter-micron fab for maybe 5,000 wafers per week, about the smallest practical fab. For the next generation,” Moore said, “the minimum investment will be $2 billion, with maybe $3 billion to $4 billion for any sort of volume production. No other industry has such a short life on such huge investments.”
Much of this money will be spent before there is a proven need for the microprocessors the fab will produce. In essence, the entire $4 billion per fab is bet on the proposition that the industry will absorb a huge number of premium-priced CPUs that are only somewhat faster than the currently available parts. If for just one generation that didn’t happen (if everyone judged, say, that the Pentium II was fast enough, thank you), the results would be unthinkable.
“My nightmare is to wake up some day and not need any more computing power,” Moore said.
SUMMARY
Computer architecture deals with those aspects of a computer that are visible to a programmer, while computer organization deals with those aspects that are at a more physical level and are not made visible to a programmer. Historically, programmers had to deal with every aspect of a computer: Babbage with mechanical gears, and ENIAC programmers with plugboard cables. As computers grew in sophistication, the concept of levels of machines became more pronounced, allowing computers to have very different internal and external behaviors while managing complexity in stratified levels. The single most significant development that makes this possible is the stored-program computer, which is embodied in the von Neumann model. It is the von Neumann model that we see in most conventional computers today.
Further Reading
The history of computing is riddled with interesting personalities and milestones. (Anderson, 1991) gives a short, readable account of both during the last century. (Bashe et al., 1986) give an interesting account of the IBM machines. (Bromley, 1987) chronicles Babbage’s machines. (Ralston and Reilly, 1993) give short biographies of the more celebrated personalities. (Randell, 1982) covers the history of digital computers. A very readable Web-based history of computers by Michelle A. Hoyle can be found at http://www.interpac.net/~eingang/Lecture/toc.html. (SciAm, 1993) gives a readable account of the method of finite differences as it appears in Babbage’s machines, and of the working difference engine built by the Science Museum in London.
(Tanenbaum, 1999) is one of a number of texts that popularizes the notion of levels of machines.
Anderson, Harlan, Dedication address for the Digital Computer Laboratory at the University of Illinois, April 17, 1991, as reprinted in IEEE Circuits and Systems Society Newsletter, vol. 2, no. 1, pp. 3–6, (March 1991).
Bashe, Charles J., Lyle R. Johnson, John H. Palmer, and Emerson W. Pugh, IBM’s Early Computers, The MIT Press, (1986).
Bromley, A. G., “The Evolution of Babbage’s Calculating Engines,” Annals of the History of Computing, 9, pp. 113–138, (1987).
Ralston, A. and E. D. Reilly, eds., Encyclopedia of Computer Science, 3/e, van Nostrand Reinhold, (1993).
Randell, B., The Origins of Digital Computers, 3/e, Springer-Verlag, (1982).
Tanenbaum, A., Structured Computer Organization, 4/e, Prentice Hall, Englewood Cliffs, New Jersey, (1999).
PROBLEMS
1.1 Moore’s law, which is attributed to Intel founder Gordon Moore, states that computing power doubles every 18 months for the same price. An unrelated observation is that floating point instructions are executed 100 times faster in hardware than via emulation. Using Moore’s law as a guide, how long will it take for computing power to improve to the point that floating point instructions are emulated as quickly as their (earlier) hardware counterparts?
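One way to set up this calculation (a minimal sketch in Python, assuming the idealized model in which computing power after t months has grown by a factor of 2^(t/18), so the question becomes finding the t at which that factor reaches 100):

    import math

    # Idealized Moore's law model: computing power doubles every 18 months,
    # so after t months performance has improved by a factor of 2**(t / 18).
    # We solve for the t at which that factor reaches the 100x gap between
    # hardware and emulated floating point stated in the problem.
    doubling_period_months = 18
    hardware_speedup = 100

    t = doubling_period_months * math.log2(hardware_speedup)
    print(f"{t:.1f} months, or about {t / 12:.0f} years")
    # prints: 119.6 months, or about 10 years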
