STATE OF THE ART
Stan Augarten
To see a world in a grain of sand,
And a Heaven in a wild flower,
Hold Infinity in the palm of your hand,
And Eternity in an hour.
A revolution has been taking place in electronics. Spurred by the invention of the transistor in 1947 and of the integrated circuit (IC) in the late 1950s and early 1960s, computers and other electronic devices have become smaller, cheaper, more powerful, and more versatile. The fruits of this revolution are all around us in the form of hundreds of inventions based on the IC: personal computers, robots, digital watches, video cassette recorders, CAT scanners, the space shuttle, and, alas, smart bombs and cruise missiles. For better or worse, we have entered a new technological era.
The star of the revolution is the IC chip, or microchip (all synonyms, although microchip is a journalistic popularization eschewed by the electronics industry; IC and chip are the preferred terms). And what is an IC? It is a small piece of silicon that has been engineered, by a process akin to contact printing, to manipulate electrical signals. It looks like nothing so much as a dull fleck of aluminum, rarely bigger than a quarter of an inch square, and often much smaller. Yet it is a phenomenon of electronic ingenuity, capable of storing hundreds of thousands of bits of information and executing millions of operations a second. Some chips, known as microprocessors, are even more powerful than the multimillion-dollar computers of the 1950s and 1960s.
Silicon, the chief constituent of ICs, is the second most abundant substance on the surface of the earth, and, in combination with oxygen, the chief ingredient of sand - which makes Blake's lovely verse a fitting ode to the IC. Silicon is a semiconductor, a type of solid whose ability to conduct electricity lies somewhere between that of an insulator like rubber, which is an extremely poor carrier of electrons, and that of a conductor like copper, which is a very good carrier. Because of its semiconducting properties and its almost unlimited abundance, silicon can be endowed, at very low cost, with near-magical electrical properties. Thankfully, it is one element we will never run out of.
State of the Art is the story of the IC - of how it was invented, how it works, how it is made, and how it is used. The story is almost entirely an American one, since the chip was invented in the United States and until recently was manufactured almost exclusively here. The semiconductor industry is centered on a stretch of land between San Francisco and San Jose, California; alive with intellectual energy, nouveau wealth, and capitalist ambition, the area has come to be known as Silicon Valley. Major IC companies are also located in Texas, Arizona, New York, Massachusetts, and other parts of the country. Nevertheless, other nations, particularly Japan, have recently begun producing chips in bulk, and they will no doubt eventually contribute to the ongoing development of the IC - one of the most important and, as the photos in this book demonstrate, beautiful inventions of the twentieth century.
Most electronic machines, including all computers, speak a common language: binary math, in which all numbers, no matter how large, are represented as a combination of ones and zeros. There are no other digits, and, surprisingly enough, no others are needed. In the binary system, a one is represented just as it is in the decimal system - as simply a 1 - but a two is written as 10, a three as 11, a four as 100, a five as 101, and a six as 110. At eight, the number lengthens to 1000, and at sixteen, it expands once again, to 10000. Seventeen is expressed as 10001, eighteen as 10010. Cumbersome for human use, binary notation is ideal for electronic equipment, because any number may be expressed as a string of on and off electrical impulses - a sort of Morse code for computers.
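The progression above is easy to verify. As a small illustrative sketch (not part of the original text), Python's built-in bin() reproduces it:

```python
# bin() returns a string prefixed with "0b"; stripping the prefix
# leaves the binary notation described above
for n in [1, 2, 3, 4, 5, 6, 8, 16, 17, 18]:
    print(n, "->", bin(n)[2:])
```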
In some ways, binary numbers are easier to use than decimal figures. Since there are only two digits in the binary system, addition involves no more than the toting up of ones and zeros, and the carrying forward of a binary digit, or bit, to the next highest power of two. (A byte, incidentally, is eight bits, enough information for a computer to characterize a letter or a decimal digit.) Multiplication is also a simple process, as any bit can be multiplied by only one or zero. Once again, the process is reduced to addition:
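The carrying procedure and the reduction of multiplication to addition can be sketched in a few lines of Python (an illustration added here, not the book's own figure; the function names are invented):

```python
def binary_add(a: str, b: str) -> str:
    """Add two binary strings bit by bit, carrying a one into the
    next place, exactly as described above."""
    width = max(len(a), len(b))
    result, carry = [], 0
    # walk from the lowest-order bit upward, padding the shorter number
    for x, y in zip(a[::-1].ljust(width, "0"), b[::-1].ljust(width, "0")):
        total = int(x) + int(y) + carry
        result.append(str(total % 2))
        carry = total // 2
    if carry:
        result.append("1")
    return "".join(reversed(result))

def binary_mul(a: str, b: str) -> str:
    """Multiply by summing a shifted copy of a for each 1-bit in b:
    multiplication reduced entirely to addition."""
    total = "0"
    for i, bit in enumerate(reversed(b)):
        if bit == "1":
            total = binary_add(total, a + "0" * i)
    return total

print(binary_add("101", "110"))   # five plus six -> 1011 (eleven)
print(binary_mul("11", "101"))    # three times five -> 1111 (fifteen)
```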
Subtraction may be performed in the usual fashion or through an ancient mathematical trick known as "casting out nines," in which numbers are transformed into their complements - their mirror images, so to speak - and are added together. Since division may be carried out by toting up how many times one number can be subtracted from another, all arithmetical operations in binary math may be reduced to addition. That particular feature of the binary system goes far in making computers possible, and it is joined by yet another mathematical coincidence of equal impact on the operation of electronic machines.
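The complement trick has a direct binary analogue, the two's complement, which likewise turns subtraction into addition. A minimal sketch (the fixed eight-bit width is an assumption for illustration):

```python
def binary_sub(a: str, b: str, width: int = 8) -> str:
    """Subtract b from a by adding b's two's complement - the
    'mirror image' of b - so the operation reduces to addition."""
    x, y = int(a, 2), int(b, 2)
    mask = (1 << width) - 1
    complement = (~y + 1) & mask      # the mirror image of b
    return format((x + complement) & mask, "b")

print(binary_sub("110", "101"))   # six minus five -> 1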
In the nineteenth century, a brilliant British mathematician by the name of George Boole devised a system of algebra, or mathematical logic, capable of determining whether a statement is true or false. The statement must first be reduced to a true or false proposition, then subjected to simple logical operations governed by truth tables - arrays of simple formulas that manipulate numbers in accordance with fixed rules. Since almost any problem can be reduced to a series of true or false, yes or no propositions, Boolean logic can be used to perform an almost endless variety of chores, including addition.
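The truth tables themselves are tiny. As a sketch (with 1 standing for true and 0 for false, a convention assumed here rather than taken from the book), two of the basic operations look like this:

```python
from itertools import product

def AND(a, b): return a & b   # true only if both inputs are true
def OR(a, b):  return a | b   # true if either input is true

# print the full truth table for each operation
for name, op in [("AND", AND), ("OR", OR)]:
    print(name)
    for a, b in product([0, 1], repeat=2):
        print(f"  {a} {name} {b} = {op(a, b)}")
```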
In the late 1930s, Claude E. Shannon, an uncommonly precocious graduate student at MIT, had a historic insight: he realized that the simple rules of Boolean logic made an ideal operating system for computers, then in the early stages of development. With its yes or no, true or false statements, Boolean logic was the ideal mathematical mate for binary math, whose ones and zeros could be made to stand for the black-and-white dichotomies of Boolean algebra. Shannon showed that both binary math and Boolean logic can be mimicked by electronic circuits arranged to shuttle on and off pulses along Boolean pathways. In Shannon's day, these circuits were made out of electromechanical gadgets known as relays, which physically opened and closed like trap doors.
How Computers Add. This is a half-adder, one of a computer's basic logic circuits. It consists of four logic gates which, in the drawing above, are adding one and one. An OR gate accepts two binary inputs: if either of them is one, the output is also one. An AND gate accepts two inputs: if both of them are one, the output is one; otherwise, it is zero. A NOT gate, also called an inverter, converts a single input into its opposite value (a one into a zero, and vice versa). A logic chip, or any IC with logic circuitry, may contain tens of thousands of such gates.
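The half-adder's wiring can be traced in code. This sketch builds it from the same gates the caption names - an OR, two ANDs, and a NOT:

```python
def half_adder(a: int, b: int):
    """Add two bits using an OR gate, two AND gates, and a NOT
    (inverter), as in the half-adder drawing."""
    or_out  = a | b                       # OR gate
    carry   = a & b                       # first AND gate (the carry bit)
    sum_bit = or_out & (1 - carry)        # NOT the carry, then the second AND
    return sum_bit, carry

# adding one and one, as in the drawing: sum 0, carry 1
print(half_adder(1, 1))   # (0, 1)
```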
Composed of relays, the first digital computers were bulky, slow, expensive and noisy. Many unflattering things have been said about them. With all those relays clicking on and off in response to digital impulses, they sounded like chattering mechanical teeth. (Physicist Jeremy Bernstein described the Mark I, built at Harvard during World War II and one of the first large computers, as sounding "like a roomful of ladies knitting.") By the late 1940s, however, computers had become as quiet as rustling lilies, for by then they were being made out of tubes.
Invented by the American Lee de Forest in 1906, the vacuum tube (technically the three-electrode valve, or triode) could amplify electric current as well as turn it on or off. Tubes were much faster and cheaper than relays, although not necessarily smaller, and they were also noiseless; but tubes had problems of their own. For instance, the first true electronic computer, ENIAC (for Electronic Numerical Integrator And Computer), christened at the University of Pennsylvania in 1945, had 18,000 tubes; they burned enough power to light a small town and gave off a great deal of heat. And when ENIAC blew a tube, as it did several times a day, finding and replacing the device was a technician's nightmare.
The invention of the transistor two years after the birth of ENIAC eventually did away with the vacuum tube. The transistor is a solid-state version of the tube, able to turn on or off in a fraction of a second and, in the process, greatly amplify an incoming electrical signal. Unlike a tube, however, it does these things without the help of coiled wires, metal plates, glass capsules, or vacuums. For a transistor is made out of a semiconductor, typically silicon or germanium, that has been chemically contaminated, or doped, with impurities to allow it to carry current. One of its earliest forms, the junction transistor (the second picture in this book), looks like nothing more than a lump of mud.
A transistor built from a slice of positively doped silicon between two layers of negatively doped silicon is referred to as an npn transistor. (There are also pnp transistors, but we won't discuss their operation here.) The inner layer is called the base, the outer layers the emitter and the collector. These terms are perfectly apropos, since the emitter issues electrons, the collector collects them, and the base supervises the whole affair.
Junction Transistor. A junction transistor is made up of layers of silicon that have been impregnated with extra electrons and positive ions. The drawing above shows an npn transistor: a slice of positively doped silicon, the base, between two layers of negatively doped silicon, the emitter and the collector. When a positive voltage is applied to the terminals, electrons rush from the emitter to the collector, while holes travel toward the ground. It is the voltage applied to the base that both controls and amplifies the current leaving the collector.
To operate a junction transistor, we need only apply a positive voltage to the base and a higher positive voltage to the collector; the emitter must be wired to a ground (a circuit that dissipates electricity). Two things will happen when we turn on the electricity. The positive ions of the p-type base will be repelled by the positive voltage into the negatively charged emitter (but they won't penetrate the collector, because it is receiving an even higher voltage). Meanwhile, the electrons respond to a different drummer. Lured by the positive voltages of the base and the collector, they rush out of the emitter, through the base, and into the collector. As they flow through the device, their ranks swelled by the extra electrons in the doped emitter and collector, they become a huge horde, and push out of the transistor through the positive terminals attached to the base and the collector.
In a well-designed transistor, nearly all the electrons complete the journey from the emitter to the collector, amplifying the current (the flow of electrons) applied to the base by at least a hundredfold. Amplifications, or gains, of as much as a thousand or more are possible under certain circumstances. Even a tiny boost in the base's voltage - again, the base is the controlling component - greatly amplifies the current.
In general, transistors have two entirely different applications. In so-called analog devices - radios, televisions, and the like - they serve primarily as amplifiers (hence the term transistor radio). But in digital equipment - computers, calculators, video games, and so on - they also function as switches, turning on or off millions of times a second in response to the true or false, yes or no, conducting or nonconducting impulses of binary math. Incidentally, the transistor's amplifying properties serve it well by allowing it to be switched on with very little energy and yet still yield a high output.
A single, self-contained transistor is called a discrete component. Throughout the 1950s and early 1960s, electronic equipment was composed largely of discretes - transistors, resistors (which retard the flow of electricity), capacitors (which store it), and so on. Like cookies cut from molds, discretes were manufactured
separately, packaged in their own containers, and soldered or wired
together onto masonite-like circuit boards, which were then installed in
computers, oscilloscopes, and other electronic equipment. Whenever an electronic
device called for a transistor, a little tube of metal containing a
pinhead-sized speck of silicon had to be soldered to a circuit board. The
entire manufacturing process, from transistor to circuit board, was
expensive and cumbersome.
Making an IC. ICs are made out of razor-thin wafers of silicon etched with as many as
five hundred chips. First, racks of wafers are rolled into extremely hot
(about 2,000º F) ovens filled with steam or an oxygen-laden gas. In
effect, the wafers are rusted, covered with a thin, electrically
insulating layer of silicon dioxide (hereafter referred to as oxide) that
forestalls short circuits. Next they are coated with a photoresist, an
emulsion that reacts to ultraviolet light only.
It's at this point in the manufacturing process that the reduction to the infinitesimal begins. Through a photoreductive process known as step-and-repeat, the reticles - photographic plates bearing enlarged images of the circuit layers - are used to create photomasks, glass plates that are employed in the factory to make the chips. The first set of photomasks are the master plates, from which the masks used in the plant are made; there's a photomask for each layer, and each mask may contain as many as five hundred IC images. Increasingly, however, the reticles and master masks are dispensed with, and the design tape is fed into an electron-beam mask-making machine that, controlled by a computer, fashions the working plates.
An IC plant is a noisy cross between a hospital and a factory. Expensive electronic equipment is everywhere. Workers (usually women) are sheathed in white smocks, caps, and shoe covers. The walls are painted white. Air-scrubbing machines operate around the clock, because a single particle of dust can harm a chip irreparably. For this reason, IC plants are kept at least a hundred times cleaner than the average hospital. The early stages of the manufacturing process are heavily automated, the later ones labor-intensive and usually carried out overseas, where wages are lower. A few companies, particularly IBM, are experimenting with fully automated chip-making machines.
To sum up the discussion so far: ICs are composed chiefly of transistors that switch on or off in accordance with a sort of Morse code that follows the dictates of Boolean logic. That, in a nutshell, is how computers and other digital devices operate. But how do such machines really work? How do they actually remember things and perform calculations?
Let's begin with remembering. There are many forms of IC memories, but the two most widely used are random-access memory (RAM) and read-only memory (ROM). A RAM is like a scratch pad and is used to preserve the data and programs that a computer or other electronic device needs for its immediate operation. In a personal computer, for example, a RAM is that part of the machine's memory available to the operator for storing numbers or documents or programs.
Random-Access Memory. When binary pulses are sent to a RAM or a ROM, they are first decoded into row and column coordinates. Then all the memory cells in the designated row and column are turned on, but only one cell - the one at the intersection of the chosen row and column - is fully activated. In dynamic RAMs, the stored charges tend to leak away, and such chips therefore require regular refreshing. ROMs and static RAMs, on the other hand, retain their contents without refreshing.
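The row-and-column decoding the caption describes can be sketched as a toy memory grid. Everything here (the class name, the 4x4 size) is illustrative, not real hardware:

```python
class ToyRAM:
    """A toy memory: an address is decoded into row and column bits,
    and only the cell at their intersection is read or written."""

    def __init__(self, rows: int = 4, cols: int = 4):
        self.cells = [[0] * cols for _ in range(rows)]
        self.cols = cols

    def _decode(self, address: int):
        # high-order part selects the row, low-order part the column
        return divmod(address, self.cols)

    def write(self, address: int, bit: int) -> None:
        row, col = self._decode(address)
        self.cells[row][col] = bit

    def read(self, address: int) -> int:
        row, col = self._decode(address)
        return self.cells[row][col]

ram = ToyRAM()
ram.write(9, 1)        # address 9 decodes to row 2, column 1
print(ram.read(9))     # 1
```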
A ROM, on the other hand, is like a slate of chiseled marble and stores information and instructions necessary for the machine's general operation. The program that tells a handheld calculator how to perform division is kept in a ROM along with instructions for finding square roots and for carrying out other functions. A ROM cannot be altered by the user.
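In code, a ROM behaves like a fixed lookup table whose contents are set once, at manufacture, and never change. This sketch (the table of squares is purely illustrative) uses an immutable tuple to capture that property:

```python
# contents are "burned in" once; a tuple, being immutable, cannot be
# altered afterward - attempting SQUARE_ROM[5] = 0 raises a TypeError
SQUARE_ROM = tuple(n * n for n in range(16))

print(SQUARE_ROM[5])   # 25
```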
Microprocessor/Microcontroller. A microprocessor contains all the circuits necessary to carry out arithmetic and logic operations, but it isn't a self-contained computer on a chip. It needs RAMs, ROMs, and input/output ICs to work. A microcontroller, by contrast, is a true single-chip computer, containing all the parts of a microprocessor as well as its own memory and input/output ports. Microcontrollers are also called microcomputers.
A microprocessor is not, in and of itself, a computer on a chip. It must be used in conjunction with other ICs, particularly RAMs, ROMs, and input/output chips. But there is a class of chips, called microcontrollers, or microcomputers, that are truly computers on a chip. These ICs, which have been made possible by the relentless advance of microelectronic technology, include their own RAM, ROM, and input/output elements. They don't need other chips to assist them, although they are often linked with others so as to augment their power. Microcontrollers are commonly used in relatively simple devices like calculators, toys, games, and typewriters; they are also employed in cars, where they monitor the air, oil, and gas mixture in order to decrease pollution and enhance engine performance.
By shunting on and off signals through millions of Boolean logic gates and the compact memory grids of RAMs and ROMs, microprocessors, microcontrollers, and other ICs can add, subtract, average, compare, contrast, differentiate, and otherwise manipulate binary numbers - all at fantastic rates. A bit can be written into or read from a typical 16K RAM in as little as two hundred billionths of a second and can be read out of the average 16K ROM in even less time. And one recently produced microprocessor, a 32-bit chip from Hewlett-Packard (p. 69), can multiply two 32-bit numbers in a mere 1.8 millionths of a second, making it roughly four thousand times faster than the 30-ton 18,000-tube ENIAC, which took a snail-like five hundredths of a second to multiply two 10-digit decimal numbers.
Not a year seems to go by without a quantum leap in the sophistication of ICs. Chips with a storage capacity of one million bits have already appeared (magnetic-bubble memories, an early version of which is shown on p. 21), as have others with almost half a million transistors (the 32-bit microprocessor mentioned above). Still others, like the Josephson junction depicted on p. 77, can switch on or off in as little as six trillionths of a second. The speed and complexity of ICs are not unlimited, of course, but in the opinion of at least three highly regarded scientists, I. E. Sutherland, Carver A. Mead, and T. E. Everhart, IC technology is still in its youth:
There is every reason to believe that the integrated circuit revolution has run only half its course; the change in complexity of four to five orders of magnitude that has taken place during the past fifteen years appears to be only the first half of a potentially eight-order-of-magnitude development. There seem to be no fundamental obstacles to 10⁷-to-10⁸-device integrated circuits (Basic Limitations in Microcircuit Fabrication Technology, Rand Corp. Report R-1956-ARPA [Nov. 1976]).
ICs with ten to a hundred million components? ICs whose basic operating units are not transistors but entire microprocessors, built by the millions into chips smaller than a thumbtack? Incredible as it may seem, such devices are a distinct, and utterly glorious, possibility.
If you'd like to delve further into the history of the IC, the following books are good starting points: Ernest Braun and Stuart Macdonald, Revolution in Miniature (Cambridge University, 2nd ed., 1982); G. W. A. Dummer, Electronic Inventions and Discoveries (Pergamon, 2nd ed., 1978); Christopher Evans, The Micro Millennium (Viking, 1980); and Dirk Hansen, The New Alchemists: Silicon Valley and the Microelectronics Revolution (Little, Brown, 1982).
Except for a handful, all the photos that follow were taken with a Zeiss Ultraphot or similar microscope camera, a sophisticated piece of equipment that employs a three-dimensional optical effect known as differential interference contrast. The brilliant colors are not, therefore, attributes of the chips themselves but a result of the photographic process. A strong light shone on an IC is refracted by the chip in such a way as to create an unpredictable palette of colors. And when that light is a single hue, like bright blue, the outcome is often quite stunning.
Because this book is an overview, it is necessarily highly selective. Most of the historically significant chips are here, but some, like the first ROMs, were nowhere to be found. Others had to be excluded because the quality of the available photos was poor. Furthermore, by no means is the quantity of pictures from any one company a reflection on that firm's contribution to the development of the IC. Rather, the pictures were chosen on the basis of beauty, variety, and historical significance.
©Copyright Stan Augarten