STATE OF THE ART
Stan Augarten

ISBN 0-89919-195-9

State of the Art

INTRODUCTION


To see a world in a grain of sand,
And a Heaven in a wild flower,
Hold Infinity in the palm of your hand,
And Eternity in an hour.

               -William Blake


A revolution has been taking place in electronics. Spurred by the invention of the transistor in 1947 and of the integrated circuit (IC) in the late 1950s and early 1960s, computers and other electronic devices have become smaller, cheaper, more powerful, and more versatile. The fruits of this revolution are all around us in the form of hundreds of inventions based on the IC: personal computers, robots, digital watches, video cassette recorders, CAT scanners, the space shuttle, and, alas, smart bombs and cruise missiles. For better or worse, we have entered a new technological era.

  The star of the revolution is the IC chip, or microchip (all synonyms, although microchip is a journalistic popularization eschewed by the electronics industry; IC and chip are the preferred terms). And what is an IC? It is a small piece of silicon that has been engineered, by a process akin to contact printing, to manipulate electrical signals. It looks like nothing so much as a dull fleck of aluminum, rarely bigger than a quarter of an inch square, and often much smaller. Yet it is a phenomenon of electronic ingenuity, capable of storing hundreds of thousands of bits of information and executing millions of operations a second. Some chips, known as microprocessors, are even more powerful than the multimillion-dollar computers of the 1950s and 1960s.

  Silicon, the chief constituent of ICs, is the second most abundant substance on the surface of the earth, and, in combination with oxygen, the chief ingredient of sand - which makes Blake's lovely verse a fitting ode to the IC. Silicon is a semiconductor, a type of solid whose ability to conduct electricity lies somewhere between that of an insulator like rubber, which is an extremely poor carrier of electrons, and that of a conductor like copper, which is a very good carrier. Because of its semiconducting properties and its almost unlimited abundance, silicon can be endowed, at very low cost, with near-magical electrical properties. Thankfully, it is one element we will never run out of.

  State of the Art is the story of the IC - of how it was invented, how it works, how it is made, and how it is used. The story is almost entirely an American one, since the chip was invented in the United States and until recently was manufactured almost exclusively here. The semiconductor industry is centered on a stretch of land between San Francisco and San Jose, California; alive with intellectual energy, nouveau wealth, and capitalist ambition, the area has come to be known as Silicon Valley. Major IC companies are also located in Texas, Arizona, New York, Massachusetts, and other parts of the country. Nevertheless, other nations, particularly Japan, have recently begun producing chips in bulk, and they will no doubt eventually contribute to the ongoing development of the IC - one of the most important and, as the photos in this book demonstrate, beautiful inventions of the twentieth century.


The Language of Electronics


  Most electronic machines, including all computers, speak a common language: binary math, in which all numbers, no matter how large, are represented as combinations of ones and zeros. There are no other digits, and, surprisingly enough, no others are needed. In the binary system, a one is represented just as it is in the decimal system - as simply a 1 - but a two is written as 10, a three as 11, a four as 100, a five as 101, and a six as 110. At eight, the number lengthens to 1000, and at sixteen, it expands once again, to 10000. Seventeen is expressed as 10001, eighteen as 10010. Cumbersome for human use, binary notation is ideal for electronic equipment, because any number may be expressed as a string of on and off electrical impulses - a sort of Morse code for computers.
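These correspondences are easy to check with a short Python sketch (the `to_binary` helper is ours, for illustration only):

```python
# Print the binary form of the numbers discussed above, using Python's
# built-in bin() and stripping its "0b" prefix.
def to_binary(n: int) -> str:
    """Return the binary representation of a non-negative integer."""
    return bin(n)[2:]

for n in [1, 2, 3, 4, 5, 6, 8, 16, 17, 18]:
    print(n, "->", to_binary(n))  # e.g. 16 -> 10000, 18 -> 10010
```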

  In some ways, binary numbers are easier to use than decimal figures. Since there are only two digits in the binary system, addition involves no more than the toting up of ones and zeros, and the carrying forward of a binary digit, or bit, to the next highest power of two. (A byte, incidentally, is eight bits, enough information for a computer to characterize a letter or a decimal digit.) Multiplication is also a simple process, as any bit can be multiplied only by one or zero. Once again, the process is reduced to addition:

     111          7
   x 110        x 6
   -----        ---
     000         42
    111
   111
  ------
  101010


  Subtraction may be performed in the usual fashion or through an old mathematical trick known as the method of complements, in which numbers are transformed into their complements - their mirror images, so to speak - and added together. Since division may be carried out by toting up how many times one number can be subtracted from another, all arithmetical operations in binary math may be reduced to addition. That particular feature of the binary system goes far toward making computers possible, and it is joined by another mathematical coincidence of equal impact on the operation of electronic machines.
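A minimal Python sketch of subtraction by complements, assuming a fixed 8-bit word for illustration (this is the two's-complement variant used by modern machines; the function names are ours):

```python
WIDTH = 8                      # assumed fixed word width for this sketch
MASK = (1 << WIDTH) - 1

def complement_subtract(a: int, b: int) -> int:
    """Compute a - b using only addition: invert every bit of b, add one
    (forming its complement), add the result to a, and discard the carry."""
    twos_complement = (~b & MASK) + 1
    return (a + twos_complement) & MASK

def divide(a: int, b: int) -> int:
    """Division as repeated subtraction, as described above."""
    count = 0
    while a >= b:
        a = complement_subtract(a, b)
        count += 1
    return count

print(complement_subtract(42, 7))  # 35
print(divide(42, 6))               # 7
```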

  In the nineteenth century, a brilliant British mathematician by the name of George Boole devised a system of algebra, or mathematical logic, capable of determining whether a statement is true or false. The statement must first be reduced to a true or false proposition, then subjected to simple logical operations governed by truth tables - arrays of simple formulas that manipulate numbers in accordance with fixed rules. Since almost any problem can be reduced to a series of true or false, yes or no propositions, Boolean logic can be used to perform an almost endless variety of chores, including addition.
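The truth tables themselves can be generated mechanically; this Python sketch tabulates AND and OR for every combination of inputs:

```python
from itertools import product

# Truth tables for AND and OR, built by evaluating every combination of
# inputs - the "arrays of simple formulas" described above.
table = {(p, q): (p & q, p | q) for p, q in product([0, 1], repeat=2)}

print("p q | AND OR")
for (p, q), (a, o) in sorted(table.items()):
    print(f"{p} {q} |  {a}   {o}")
```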

  In the late 1930s, Claude E. Shannon, an uncommonly precocious graduate student at MIT, had a historic insight: he realized that the simple rules of Boolean logic made an ideal operating system for computers, then in the early stages of development. With its yes or no, true or false statements, Boolean logic was the ideal mathematical mate for binary math, whose ones and zeros could be made to stand for the black-and-white dichotomies of Boolean algebra. Shannon showed that both binary math and Boolean logic can be mimicked by electronic circuits arranged to shuttle on and off pulses along Boolean pathways. In Shannon's day, these circuits were made out of electromechanical gadgets known as relays, which physically opened and closed like trap doors.




  Regardless of what they're composed of, the circuits that carry out Boolean algebra are called logic gates. Logic gates are marvels of mathematical economy: only three types of gates, called AND, OR, and NOT, are needed for a computer to perform almost any logic or arithmetic operation, including the addition of one and one. Each gate has a specific function, such as converting a binary one into a zero or vice versa. In a very real sense, computers are composed of nothing more than logic gates stretched out to the horizon in a vast numerical irrigation system operating close to the speed of light.


How Computers Add. This is a half-adder, one of a computer's basic logic circuits. It consists of four logic gates which, in the drawing above, are adding one and one. An OR gate accepts two binary inputs: if either of them is one, the output is also one. An AND gate accepts two inputs: if both of them are one, the output is one; otherwise, it is zero. A NOT gate, also called an inverter, converts a single input into its opposite value (a one into a zero, and vice versa). A logic chip, or any IC with logic circuitry, may contain tens of thousands of such gates.
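The caption's four-gate half-adder can be sketched in Python, with each gate as a tiny function (the function names and wiring are ours, one standard way to build a half-adder from AND, OR, and NOT):

```python
# The three basic gates described above.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def half_adder(a: int, b: int):
    """Four gates: carry = a AND b; sum = (a OR b) AND NOT (a AND b)."""
    carry = AND(a, b)
    total = AND(OR(a, b), NOT(carry))
    return total, carry

print(half_adder(1, 1))  # (0, 1): one plus one is binary 10
```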

Transistors, or How to Put Mud to Work


  Composed of relays, the first digital computers were bulky, slow, expensive and noisy. Many unflattering things have been said about them. With all those relays clicking on and off in response to digital impulses, they sounded like chattering mechanical teeth. (Physicist Jeremy Bernstein described the Mark I, built at Harvard during World War II and one of the first large computers, as sounding "like a roomful of ladies knitting.") By the late 1940s, however, computers had become as quiet as rustling lilies, for by then they were being made out of tubes.

  Invented by the American Lee de Forest in 1906, the vacuum tube (technically the three-electrode valve, or triode) could amplify electric current as well as turn it on or off. Tubes were much faster and cheaper than relays, although not necessarily smaller, and they were also noiseless; but tubes had problems of their own. For instance, the first true electronic computer, ENIAC (for Electronic Numerical Integrator And Computer), christened at the University of Pennsylvania in 1945, had 18,000 tubes; they burned enough power to light a small town and gave off a great deal of heat. And when ENIAC blew a tube, as it did several times a day, finding and replacing the device was a technician's nightmare.

  The invention of the transistor two years after the birth of ENIAC eventually did away with the vacuum tube. The transistor is a solid-state version of the tube, able to turn on or off in a fraction of a second and, in the process, greatly amplify an incoming electrical signal. Unlike a tube, however, it does these things without the help of coiled wires, metal plates, glass capsules, or vacuums. For a transistor is made out of a semiconductor, typically silicon or germanium, that has been chemically contaminated, or doped, with impurities to allow it to carry current. One of its earliest forms, the junction transistor (the second picture in this book), looks like nothing more than a lump of mud.




  The transistor is the wheel, the steam engine, the wellspring of electronics. From the $10 digital watch to a $10-million mainframe computer capable of executing a hundred million operations a second, almost all electronic equipment is based on this humble device. Like most inventions, the transistor did not spring into the world full-blown but developed gradually. (For a full account of the creation of the transistor, see pp. 2-5.)

  The first transistor, the point-contact transistor (shown in the first picture), was invented by physicists John Bardeen and Walter Brattain at Bell Telephone Laboratories in 1947. Three years later, their colleague William Shockley developed the junction transistor, a vastly improved model that made the transistor commercially viable and launched the electronic revolution. For their pioneering work, all three scientists won the 1956 Nobel Prize in physics.

  To understand how the transistor and its descendant, the IC, work, we must first examine the atomic structure of silicon (the early transistors were fashioned out of germanium, but nowadays they are almost all made of silicon, which is somewhat easier to insulate electrically).

  It is the crystalline form of silicon, which has a lattice-like atomic structure, that is used to make transistors. Normally, there are four electrons in the outer shell of each silicon atom, but in the crystalline state each atom shares these four outermost electrons with its immediate neighbors. Therefore, each atom in a silicon crystal actually has eight, not four, electrons in its outer shell. In its natural condition, however, crystalline silicon conducts electricity poorly - every outer electron is locked into an interatomic bond - which is why doping is necessary.

  As the scientists at Bell Labs discovered, silicon can be doped with impurities to enable it to carry an electric charge. The most commonly used dopants are boron, with three electrons in its outer shell, and phosphorus, with five. When a phosphorus atom is inserted in a crystal of silicon under the right conditions, the newcomer will displace a silicon atom without disturbing other atoms in the vicinity. There will be a slight change in the crystal, however. That extra electron in the outer ring of the phosphorus atom won't be able to find a home in the crystal's interatomic bonds, and it will languish, like a wallflower, in the neighborhood of the phosphorus nucleus.

  If more phosphorus atoms are introduced into the crystal, more silicon atoms will be shoved out, and the crystal will gain still more free electrons. And electrons carry a negative charge. So if we now apply a negative voltage across the crystal, all those loose electrons will be propelled through the material, like spare change put to good use. (Voltage, by the way, is a measure of electromotive force. It is to electricity as pressure is to water.)

  In the case of boron, the other dopant, which has only three electrons in its outer shell, what's brought to the crystal is not an extra electron but a positively charged vacancy, or hole - the lack of an electron. When a positive voltage is applied, the hole passes through the crystal like a bubble through water.

  The first transistor, the point-contact model, had two principal drawbacks: it was somewhat unpredictable and was difficult to make. But those problems were overcome with the development of the junction, or bipolar, transistor. The junction transistor is a kind of electrical sandwich that comes in two forms: a central layer of boron-doped silicon between two layers of phosphorus-doped silicon, or the other way around. (Again, the first junction transistors were made out of germanium.)

  To understand how the transistor and the IC work, we must first grasp some definitions. Boron-doped silicon is known as p-type silicon, because it conducts positive ions. Phosphorus-doped silicon is called n-type silicon, because it mobilizes negatively charged electrons. A junction transistor consisting of a layer of p-type silicon between two layers of n-type silicon is referred to as an npn transistor. (There are also pnp transistors, but we won't discuss their operation here.) The inner layer is called the base, the outer layers the emitter and the collector. These terms are perfectly apropos, since the emitter issues electrons, the collector collects them, and the base supervises the whole affair.


Junction Transistor. A junction transistor is made up of layers of silicon that have been impregnated with extra electrons and positive ions. The drawing above shows an npn transistor: a slice of positively doped silicon, the base, between two layers of negatively doped silicon, the emitter and the collector. When a positive voltage is applied to the terminals, electrons rush from the emitter to the collector, while holes travel toward the ground. It is the voltage applied to the base that both controls and amplifies the current leaving the collector.



  To operate a junction transistor, we need only apply a positive voltage to the base and a higher positive voltage to the collector; the emitter must be wired to a ground (a circuit that dissipates electricity). Two things will happen when we turn on the electricity. The positive ions of the p-type base will be repelled by the positive voltage into the negatively charged emitter (but they won't penetrate the collector, because it is receiving an even higher voltage). Meanwhile, the electrons respond to a different drummer. Lured by the positive voltages of the base and the collector, they rush out of the emitter, through the base, and into the collector. As they flow through the device, their ranks swelled by the extra electrons in the doped emitter and collector, they become a huge horde, and push out of the transistor through the positive terminals attached to the base and the collector.

  In a well-designed transistor, nearly all the electrons complete the journey from the emitter to the collector, amplifying the current (the flow of electrons) applied to the base by at least a hundredfold. Amplifications, or gains, of as much as a thousand or more are possible under certain circumstances. Even a tiny boost in the base's voltage - again, the base is the controlling component - greatly amplifies the current.

  In general, transistors have two entirely different applications. In so-called analog devices - radios, televisions, and the like - they serve primarily as amplifiers (hence the term transistor radio). But in digital equipment - computers, calculators, video games, and so on - they also function as switches, turning on or off millions of times a second in response to the true or false, yes or no, conducting or nonconducting impulses of binary math. Incidentally, the transistor's amplifying properties serve it well by allowing it to be switched on with very little energy and yet still yield a high output.

The Lilliputian World of the IC


  A single, self-contained transistor is called a discrete component. Throughout the 1950s and early 1960s, electronic equipment was composed largely of discretes - transistors, resistors (which retard the flow of electricity), capacitors (which store it), and so on. Like cookies cut from molds, discretes were manufactured separately, packaged in their own containers, and soldered or wired together onto masonite-like circuit boards, which were then installed in computers, oscilloscopes, and other electronic equipment. Whenever an electronic device called for a transistor, a little tube of metal containing a pinhead-sized speck of silicon had to be soldered to a circuit board. The entire manufacturing process, from transistor to circuit board, was expensive and cumbersome.

  Enter the IC. Instead of packaging discretes in separate containers, engineers found a way to install any number of them on the same piece of doped silicon. The transistor, in other words, ate its tail. A circuit board made up of discretes is like a chessboard composed of large, separate squares that have been glued together; an IC, on the other hand, is like a tiny board that has been imprinted with a checkered pattern. Modern ICs only a fraction the size of a typical discrete contain hundreds of thousands of transistors. ICs are not only smaller than discretes, they are also much cheaper, more reliable, flexible, and faster.

  The making of ICs is perhaps the most precise and exacting process in industry. It is a slow, painstaking affair, prone to error at every step, and it begins with the creation of a cylinder of raw crystalline silicon. Such cylinders are usually grown to order by one of the many specialty firms catering to the semiconductor industry. The growing process resembles the making of candles, with cylinders that are from two to five inches in diameter and eighteen inches long (although sometimes they are as long as four feet) being pulled slowly out of vats of molten silicon.

  Once they have been delivered to the semiconductor firms, the cylinders are sliced with a diamond saw into wafers less than four thousandths of an inch thick and are polished to a mirror-smooth finish. A five-inch wafer, the largest now in use, is big enough for as many as five hundred chips.

  In the early days of the industry, designing ICs was almost a black art, the province of a single engineer or a small team of engineers working at a drafting table and making up the rules as they went along. But the business has come a long way since then, and those seat-of-the-pants procedures have given way to more or less formalized rules inscribed in college textbooks and company operations manuals.

  The first microprocessor, the Intel 4004 (p. 30), introduced in 1971, was designed by three men in less than a year; one of the first 32-bit microprocessors, the Intel iAPX 432, unveiled in 1981, consumed a hundred man-years of time and many millions of dollars. This is not to imply, by the way, that the design of chips today always requires this much time and expense; many ICs are easy to create, particularly with the help of computer-aided design (CAD) systems, which can simulate prospective ICs in much the same way that computers play war games or run analyses of the economy.

  Once the overall design of a chip has been validated by computer simulation, engineers divide the layout into small, easily manageable blocks of circuitry and refine and simulate them repeatedly. When these blocks have been perfected, and the chip design as a whole has been tested down to the last transistor - an enormous task in some cases, since advanced ICs today contain up to five hundred thousand components - the computer generates master blueprints.

  Because an IC may consist of as many as fifteen layers, each laid down in coordinated stages, the computer must turn out a blueprint for each stage. These blueprints (which don't really resemble blueprints, but are actually finely detailed color drawings) may range from four to five hundred times the actual size of the chip and enable engineers to check for errors. Some computers are even programmed to do their own checking.

  If the design works, the computer produces a tape of the IC's layout, which is then fed into a pattern generator. This machine creates a set of optical reticles ten times the size of the actual chip, one reticle per layer.



Making an IC. ICs are made out of razor-thin wafers of silicon etched with as many as five hundred chips. First, racks of wafers are rolled into extremely hot (about 2,000° F) ovens filled with steam or an oxygen-laden gas. In effect, the wafers are rusted, covered with a thin, electrically insulating layer of silicon dioxide (hereafter referred to as oxide) that forestalls short circuits. Next they are coated with a photoresist, an emulsion that reacts only to ultraviolet light.
  Then the first photomask is placed over the chip, aligned precisely, and exposed to ultraviolet light (a). (The example above shows only a portion of a chip.) The light hardens the exposed photoresist, while the soft, unexposed resist and the oxide beneath it - the area that lay under the dark portion of the mask - are etched away in an acid bath (b). The chip is covered with polycrystalline silicon - the thin crosshatched layer - and a photoresist, then exposed to ultraviolet light through a second mask (c). The unnecessary polycrystalline silicon is removed by acid, leaving an elevated, L-shaped bar (d).
  Phosphorus is then diffused into the lower features (e). The chip gets another coating of oxide and a photoresist, and a third mask, outlining the locations of the aluminum contacts, is placed over it. The chip is then subjected to a further dose of ultraviolet light (f). An acid bath washes away the soft, unexposed areas - those that were beneath the dark parts of the mask (g). A layer of aluminum is diffused over the IC. The chip is then covered with a photoresist and exposed to a fourth mask (h). A final acid bath removes the superfluous aluminum, leaving "wires" in the appropriate spots (i).
  The result is an MOS transistor (pp. 12-13), the components of which are easiest to make out in (e): the black tubs are the source and the drain, the thickly dotted bar is the gate. Before leaving the factory, the chips are tested individually by computer, and the faulty ones are marked. The wafers are sliced with a diamond scribe, the bad chips are discarded, and the good ones are wired into ceramic or plastic packages and sealed, tested again, and shipped to the user. If the production run involves a new chip, the yield of working ICs is usually less than 10 percent; otherwise, the yield is at least 50 percent. (Drawings reprinted by permission of Cambridge University Press.)




  It's at this point in the manufacturing process that the reduction to the infinitesimal begins. Through a photoreductive process known as step-and-repeat, the reticles are used to create photomasks, glass plates that are employed in the factory to make the chips. The first set of photomasks are the master plates, from which the masks used in the plant are made; there's a photomask for each layer, and each mask may contain as many as five hundred IC images. Increasingly, however, the reticles and master masks are dispensed with, and the tape is fed into an electron-beam mask-making machine that, controlled by a computer, fashions the working plates.

  An IC plant is a noisy cross between a hospital and a factory. Expensive electronic equipment is everywhere. Workers (usually women) are sheathed in white smocks, caps, and shoe covers. The walls are painted white. Air-scrubbing machines operate around the clock, because a single particle of dust can harm a chip irreparably. For this reason, IC plants are kept at least a hundred times cleaner than the average hospital. The early stages of the manufacturing process are heavily automated, the later ones labor-intensive and usually carried out overseas, where wages are lower. A few companies, particularly IBM, are experimenting with fully automated chip-making machines.

How Chips Store and Process Data


  To sum up the discussion so far: ICs are composed chiefly of transistors that switch on or off in accordance with a sort of Morse code that follows the dictates of Boolean logic. That, in a nutshell, is how computers and other digital devices operate. But how do such machines really work? How do they actually remember things and perform calculations?

  Let's begin with remembering. There are many forms of IC memories, but the two most widely used are random-access memory (RAM) and read-only memory (ROM). A RAM is like a scratch pad and is used to preserve the data and programs that a computer or other electronic device needs for its immediate operation. In a personal computer, for example, a RAM is that part of the machine's memory available to the operator for storing numbers or documents or programs.


Random-Access Memory. When binary pulses are sent to a RAM or a ROM, they are first decoded into row and column coordinates. Then all the memory cells in the designated row and column are turned on, but only one cell - the one at the intersection of the chosen row and column - is fully activated. In dynamic RAMs, the stored charges tend to leak away, and such chips therefore require regular refreshing. ROMs and static RAMs, on the other hand, retain their contents without refreshing.
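The row-and-column decoding the caption describes can be sketched in Python. The 4 x 4 grid and the split of address bits here are illustrative assumptions, not the layout of any real chip:

```python
# A toy 16-cell memory: 4 rows x 4 columns, addressed by 4 bits.
ROWS, COLS = 4, 4
memory = [[0] * COLS for _ in range(ROWS)]

def decode(address: int):
    """Split a 4-bit address into row and column coordinates,
    as the caption's decoder does."""
    row = address >> 2      # high two bits select the row
    col = address & 0b11    # low two bits select the column
    return row, col

def write(address: int, bit: int):
    row, col = decode(address)
    memory[row][col] = bit

def read(address: int) -> int:
    row, col = decode(address)
    return memory[row][col]

write(0b1011, 1)
print(read(0b1011))  # 1  (the cell at row 2, column 3)
```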



  A ROM, on the other hand, is like a slate of chiseled marble and stores information and instructions necessary for the machine's general operation. The program that tells a handheld calculator how to perform division is kept in a ROM, along with instructions for finding square roots and for carrying out other functions. A ROM cannot be altered by the user.




  Regardless of their memory capacities, RAMs and ROMs are essentially simple devices. Both chips are composed chiefly of memory cells arranged in a geometric grid like those found on graph paper - a layout that allows each cell to have its own coordinates, or address, and so to be accessed directly. In a RAM, a basic cell consists of a capacitor and a transistor; the capacitor stores the data (the presence of an electrical charge represents a one, its absence a zero), while the transistor, when turned on, releases the data to the processing chips of the host machine or enables new information to be written in.

  In a ROM, the only components in the cell are either a ground, which stands for a zero (electricity dissipates in a ground), or an open circuit, which represents a one (in which case, the computer is in effect reading the very current it has dispatched to the ROM). For the sake of convenience, the memory capacity of ICs is measured in units of K, which normally stands for a thousand but in electronics signifies 2¹⁰, or 1,024. Hence, a 1K chip can store up to 1,024 bits of data, a 64K chip 65,536 bits.
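The capacity arithmetic is easy to verify:

```python
# In electronics, K is 2 to the 10th power, not 1,000.
K = 2 ** 10
print(K)       # 1024 bits in a 1K chip
print(64 * K)  # 65536 bits in a 64K chip
```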

  The processing of the enormous number of charges stored in a computer's memory is performed by a special class of chips called microprocessors. A microprocessor is the central processor of a computer; it includes all the circuits that carry out the arithmetic and logic operations, reduced to a single IC. One of the most important and versatile inventions of recent times, the microprocessor makes data processing possible in even the smallest device. It is the microprocessor that has given rise to most of the sophisticated products of modern electronics: handheld calculators, home computers, video games, programmable videotape recorders, automatic bank tellers, industrial robots, and hundreds of other machines. (For a full account of the development of the microprocessor, see pp. 30-31 and 34-37.)

  Over the years the architecture of microprocessors has become somewhat standardized. Microprocessors usually contain five key components. First, an arithmetic and logic unit (ALU), made of transistors arranged in the form of Boolean logic gates, executes arithmetic and logic functions. Second, a bank of RAM-like parts called registers, made out of modified memory cells, stores data needed temporarily by the ALU. Third, a control unit, consisting of transistors in Boolean arrays, decodes data and implements programs stored in memory. Fourth, a network of interconnections called a data bus links the various parts of the chip to one another. And, finally, a clock times all the operations.
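The five components can be caricatured in a few lines of Python. The instruction set and register names here are invented for illustration and do not correspond to any real microprocessor:

```python
# A toy processor: an ALU, registers, a control unit (the decode step),
# a data bus (here, plain Python variables), and a clock (the loop).
def alu(op, a, b):
    """Arithmetic and logic unit: executes the operation the control unit selects."""
    if op == "ADD":
        return a + b
    if op == "AND":
        return a & b
    raise ValueError(f"unknown opcode: {op}")

registers = {"R0": 0, "R1": 0}   # the register bank

program = [
    ("LOAD", "R0", 7),           # put 7 in register R0
    ("LOAD", "R1", 6),           # put 6 in register R1
    ("ADD",  "R0", "R1"),        # R0 <- R0 + R1
]

for clock_tick, instruction in enumerate(program):  # the clock times each step
    opcode, dest, src = instruction                 # the control unit decodes
    if opcode == "LOAD":
        registers[dest] = src
    else:
        registers[dest] = alu(opcode, registers[dest], registers[src])

print(registers["R0"])  # 13
```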


Microprocessor/Microcontroller. A microprocessor contains all the circuits necessary to carry out arithmetic and logic operations, but it isn't a self-contained computer on a chip. It needs RAMs, ROMs, and input/output ICs to work. A microcontroller, by contrast, is a true single-chip computer, containing all the parts of a microprocessor as well as its own memory and input/output ports. Microcontrollers are also called microcomputers.




  A microprocessor is not, in and of itself, a computer on a chip. It must be used in conjunction with other ICs, particularly RAMs, ROMs, and input/output chips. But there is a class of chips, called microcontrollers, or microcomputers, that are truly computers on a chip. These ICs, which have been made possible by the relentless advance of microelectronic technology, include their own RAM, ROM, and input/output elements. They don't need other chips to assist them, although they are often linked with others to augment their power. Microcontrollers are commonly used in relatively simple devices like calculators, toys, games, and typewriters; they are also employed in cars, where they monitor the air, oil, and gas mixture in order to decrease pollution and enhance engine performance.

  By shunting on and off signals through millions of Boolean logic gates and the compact memory grids of RAMs and ROMs, microprocessors, microcontrollers, and other ICs can add, subtract, average, compare, contrast, differentiate, and otherwise manipulate binary numbers - all at fantastic rates. A bit can be written into or read out of a typical 16K RAM in as little as two hundred billionths of a second, and can be read out of the average 16K ROM in even less time. And one recently produced microprocessor, a 32-bit chip from Hewlett-Packard (p. 69), can multiply two 32-bit numbers in a mere 1.8 millionths of a second, making it roughly four thousand times faster than the 30-ton, 18,000-tube ENIAC, which took a snail-like five hundredths of a second to multiply two 10-digit decimal numbers.

  Not a year seems to go by without a quantum leap in the sophistication of ICs. Chips with a storage capacity of one million bits have already appeared (magnetic-bubble memories, an early version of which is shown on p. 21), as have others with almost half a million transistors (the 32-bit microprocessor mentioned above). Still others, like the Josephson junction depicted on p. 77, can switch on or off in as little as six trillionths of a second. The speed and complexity of ICs are not unlimited, of course, but in the opinion of at least three highly regarded scientists, I. E. Sutherland, Carver A. Mead, and T. E. Everhart, IC technology is still in its youth:
  There is every reason to believe that the integrated circuit revolution has run only half its course; the change in complexity of four to five orders of magnitude that has taken place during the past fifteen years appears to be only the first half of a potentially eight-order-of-magnitude development. There seem to be no fundamental obstacles to 10⁷-to-10⁸-device integrated circuits (Basic Limitations in Microcircuit Fabrication Technology, Rand Corp. Report R-1956-ARPA [Nov. 1976]).


  ICs with ten to a hundred million components? ICs whose basic operating units are not transistors but entire microprocessors, built by the millions into chips smaller than a thumbtack? Incredible as it may seem, such devices are a distinct, and utterly glorious, possibility.


Suggestions for Further Reading


  If you'd like to delve further into the history of the IC, the following books are good starting points: Ernest Braun and Stuart Macdonald, Revolution in Miniature (Cambridge University Press, 2nd ed., 1982); G. W. A. Dummer, Electronic Inventions and Discoveries (Pergamon, 2nd ed., 1978); Christopher Evans, The Micro Millennium (Viking, 1980); and Dirk Hansen, The New Alchemists: Silicon Valley and the Microelectronics Revolution (Little, Brown, 1982).

A Note on the Photographs


  Except for a handful, all the photos that follow were taken with a Zeiss Ultraphot or similar microscope camera, a sophisticated piece of equipment that employs a three-dimensional optical effect known as differential interference contrast. The brilliant colors are not, therefore, attributes of the chips themselves but a result of the photographic process. A strong light shone on an IC is refracted by the chip in such a way as to create an unpredictable palette of colors. And when that light is a single hue, like bright blue, the outcome is often quite stunning.

  Because this book is an overview, it is necessarily highly selective. Most of the historically significant chips are here, but some, like the first ROMs, were nowhere to be found. Others had to be excluded because the quality of the available photos was poor. Furthermore, by no means is the quantity of pictures from any one company a reflection on that firm's contribution to the development of the IC. Rather, the pictures were chosen on the basis of beauty, variety, and historical significance.




STATE OF THE ART
©Copyright Stan Augarten
This book is provided for general reference. The National Museum of American History and the Smithsonian Institution make no claims as to the accuracy or completeness of this work.
