The tiny speck of material called the microchip has enabled the computer to reach into every corner of society. As computers get “smarter,” the question becomes: Who’s in charge, us or them?
Historians determine whether a political event qualifies as a revolution rather than an uprising or coup by asking if it has fundamentally changed the lives of the people concerned and the world around them. By that criterion, there can be no doubt that a non-political revolution of historic dimensions is now underway.
It is difficult to give it an accurate name. “The computer revolution” is incomplete, and “the cybernetic revolution” is fuzzy. Though it does not cover the whole ground, it seems the nearest we can come to a definitive term is “the microchip revolution,” since microchips are the heart of both full-scale computers and the special-purpose microprocessors which control so many modern machines.
Whatever the revolution is called, it is clearly the real thing. A revolution alters the mentality of the people going through it and those born into it, and that can certainly be said of this one. A revolution is impossible to hide away from. It keeps looming up at you everywhere.
In little more than a dozen years, microtechnology has become a pervasive fact of life in industrialized countries. It affects us intimately: We carry tiny microchips around on our wrists in the form of quartz watches; we drive cars laced with computerized controls and fill those cars up at computerized gas pumps. Microcircuitry comes into play in many of our ordinary routines – buying food, watching television, making a phone call. It has eliminated some habits, such as going to the bank frequently to draw out cash, and created new ones, such as buying tickets in nationwide lotteries with gigantic prizes. It permits us to do things that were unimaginable a couple of decades ago, like recording television programs while we sleep.
“Civilization advances by extending the number of important operations we can perform without thinking of them,” Alfred North Whitehead wrote. If so, the accelerator of civilization today is a minuscule speck of silicon adapted to act as a switch to handle information encoded in electric currents. For a computer is basically a switching machine capable of making calculations at astronomical speeds and storing the results.
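To make the idea concrete, here is a minimal sketch, written in Python purely for illustration; the letter and the numbers are this writer’s own, not drawn from any particular machine. It shows how a character is held as a row of on/off switches, and how two numbers can be added using nothing but the simple logical operations such switches perform.

    # A character is stored as a pattern of on/off switches (bits).
    letter = "A"
    bits = format(ord(letter), "08b")      # "01000001" in standard ASCII
    print(letter, "is held in memory as", bits)

    # The same switches do arithmetic, using only elementary logic.
    def add_bits(a, b):
        """Add two non-negative integers with AND, XOR and shifts alone."""
        while b:                           # repeat until no carries remain
            carry = a & b                  # AND finds the columns that carry
            a = a ^ b                      # XOR adds each column, ignoring carries
            b = carry << 1                 # move the carries one column to the left
        return a

    print(add_bits(19, 23))                # prints 42, the same as 19 + 23

Everything a computer does, however grand, is built up from operations as humble as these, performed at enormous speed.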
Computers have been around for a long time, of course. The first general-purpose electronic digital computer came on stream in the United States in 1945. It was called the ENIAC (for Electronic Numerical Integrator and Computer). It weighed 30 tons, was 8 feet high and 80 feet long, and contained some 18,000 vacuum tubes which failed at an average of one every seven minutes. It cost US $487,000 to build back then when a buck was a buck.
The vacuum tubes were the switches that directed the traffic of information through the system. In the early 1960s manufacturers began replacing them with transistorized integrated circuit chips – microchips for short. The tubes had no more than half a dozen different functions; the chips went from having a few functions each when they were introduced to having hundreds of functions in the 1970s. They now have hundreds of thousands of functions, and there is no limit in sight to how much further they can be miniaturized. One measure of how far miniaturization has progressed is that all the circuitry in that 30-ton ENIAC could now be contained on a panel the size of a playing card.
The revolution has been one both of size and cost. For all their mind-boggling sophistication, microchips are essentially derived from sand, the world’s most common material. Steady improvements in methods of producing them have brought about a miraculous drop in the price of computers. According to one expert interviewed by Otto Friedrich in Time magazine, “If the automobile business had developed like the computer business, a Rolls-Royce would now cost $2.75 and run 3 million miles on one gallon of gas.”
Coupled with ingenious methods of adapting the calculating ability of computers to fields like graphics, word-processing and machine control, the reduction in size and cost has allowed them to spread into every corner of a modern economy. As they have done so, they have verified Robert MacIver’s observation that “technology is the most subtle and most effective engineer of social change.”
Robots and the quest for a better standard of living
Political revolutions traditionally have stripped a class of people – the aristocrats – of their previously unquestioned security. On the surface, the technological revolution in progress threatens to do the same to the class of blue-collar and clerical workers that once formed the backbone of the industrial society.
Entire skilled trades like hot-type printing and photo engraving have already been decimated. A recent report by the Economic Council of Canada forecast sharp reductions in employment throughout the Canadian goods-producing industries. It predicted that the number of machining and related jobs in Canadian industry would plummet from 273,000 in 1981 to fewer than 13,000 in 1995, just seven years from now.
This is because computerized machinery and equipment is inexorably taking over work that was formerly done by human beings. It is as if Czech playwright Karel Capek’s 1921 drama R.U.R. had been lifted off the stage and placed in reality. In it, Capek introduced the word “robot” and depicted a sterile world in which machines had robbed man of the satisfaction and dignity of work.
Robots like those envisaged by Capek – “mechanical men” complete with arms and fingers and memories to remind them what to do – now dominate the workload in many factories. Doubtless in the future a lot more of them will be found on the shop floors. Though these mobile devices embody the popular image of robots, they are not the only ones of their kind in our midst. A computer that follows blueprints like a machinist or makes up a newspaper page like a compositor could be described as a robot as well.
Computers that seem to be smarter than human beings
Capek’s play gives voice to a fear that is at least as old as the Industrial Revolution of the early 19th century. This is that technology will deprive masses of people of the means of procuring a livelihood, throwing them out into the cold without money or the hope of another job.
In R.U.R., the owner of the robot factory argues the case for what we now call productivity, saying, in effect, that the lowering of the price of goods due to mechanization creates the activity and purchasing power that keeps the economy turning over. This proposition is rejected in the play, but it has been proved true in real life. Labour-saving machinery and equipment has been coming on stream in Canada more or less steadily for 100 years now, and the number of jobs has risen with only a few interruptions. Increased productivity has contributed to a rising overall standard of living. In the past 30-odd years, the jobs eliminated in the goods-producing sector have been replaced by new jobs in the service industries.
An even more deep-seated fear surfaced in Capek’s play. In it, the robots turn on their human masters and start destroying them. The more “intelligent” they are, the more they display the human characteristic of belligerency. This vision of manlike monsters wreaking havoc on the human race is an age-old nightmare that has been enshrined in literature ever since Homer. It has cropped up many times in science fiction in reference to stationary computers, which, although they don’t look like human beings, give the appearance of thinking like them.
It is easy to fantasize about a ring of computers that can “talk” to one another conspiring to hold the world to ransom, or some such plot-line. In Stanley Kubrick’s film 2001: A Space Odyssey, HAL the computer does not approve of what the crew of the spaceship is doing, so he settles matters in his – or rather its – own way. It is all quite plausible when you are watching the film, because HAL can talk out loud in plain language. So can many computers in service today.
We tend to ascribe human qualities to computers because they display these qualities more than any other machine. They also manage to give the impression that, like HAL, they could outsmart a human being if they put their “minds” to it. A small desk-top computer can teach a person all kinds of things he or she didn’t know, not only imparting knowledge, but posing problems and asking hard questions. It can command users to do this or that while leading them through a program, and scold them (so it seems) when they hit the wrong keys. It can correct errors in spelling or arithmetic like an irritable school-marm. It can play chess, blackjack or poker, and regularly beat us at our own games.
The lexicon of computer science adds to their human aura. We talk about their “language,” and about how they “read” information into their “memories.” If another sort of machine doesn’t work properly, we merely say there’s something wrong with it. If a computer goes haywire, we describe it in words we would normally reserve for human beings: we say that it has made an error, or that it has failed.
When this occurs, we derive a sneaking satisfaction from it, as though a particularly uppity schoolmate had made a fool of herself in front of the whole classroom. Everybody has a funny story about entering into a correspondence with a company or government and having a computer send them idiotic replies. A wire service recently circulated a photo of a man standing beside a stack of 100 thick government documents which a computer had mailed out to him when he had only asked for one: Typical! We chuckle over gaffes like this, but our chuckles have a defensive ring, because we know that most of the time computers can do a lot of things quicker and more accurately than we can do them ourselves.
The “new illiterates” and why they shouldn’t worry
Consultants estimate that as many as one-third of all “information workers” among professionals, managers and clerical staff are “cyberphobes” who resent computers. They mistrust the things, especially when they are told they will have to use them in their work. And not without reason; as Murray Laver wrote in an article in Management Today: “Computers have faced many ordinary men and women with substantial and disturbing changes in their working lives. Organization changes may break up groups of colleagues which provide the basis of social life within a company or department. Working methods have frequently been altered in ways which supersede existing skills, devalue precious experience and reduce an individual’s sense of responsibility and achievement.”
Another source of cyberphobia, especially among middle-aged workers, is that their unfamiliarity with computers has turned them into the “new illiterates.” They hesitate to take training in computer use because they may be embarrassed to reveal how little they know. Not only do they not know how to work them, they do not know how they work; and a certain social stigma has become attached to not being able to chat easily about bits, bytes and boot programs. In his own unique description of a computer, columnist Russell Baker put this in perspective: “First, you have the hardware. This is pretty much like the brain housed in your skull. Do you know how your brain works? What the cerebellum does when the memory is activated? Of course not. And it doesn’t bother you, does it? So why go all to pieces because the computer is so complicated that only a Ph.D. from MIT can understand it?”
One doesn’t have to be a cyberphobe to feel a certain apprehension about the things computers can be made to do. We hear a lot of talk about “artificial intelligence,” though what is really meant is that computers can be programmed to make automatic choices among certain types of information or to set out optional choices for managers to take. Still, no doubt about it, they’re getting “smarter.” Isn’t it just possible they’ll get so smart they’ll be running everything?
“The real danger is not that computers will begin to think like men, but that men will begin to think like computers,” wrote columnist Sydney J. Harris. Whatever it is programmed to do, a computer employs a system of algebra, devised by the 19th-century British mathematician George Boole, which reduces propositions of any kind to mathematical terms. The solutions to problems put to a computer are thus completely rational. They may, however, be all too rational for human beings, who have a preference for solutions that are humane, moral and just.
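A sketch may make the point plainer. The little rule below is invented for this article and written in Python merely for convenience; it restates an everyday proposition in Boole’s terms, symbols that can only be true or false, and the machine’s answer follows with perfect, impartial rigour.

    # An everyday proposition reduced to Boolean terms of true and false.
    # The rule itself is made up purely for illustration.
    def may_board_flight(has_ticket, has_passport, is_on_watch_list):
        """(ticket AND passport) AND NOT on the watch list."""
        return (has_ticket and has_passport) and not is_on_watch_list

    print(may_board_flight(True, True, False))    # True: every condition is met
    print(may_board_flight(True, False, False))   # False: no passport, no boarding

The algebra has no opinion of its own; the rule is only as humane, moral or just as the person who wrote it.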
The computer could help us to understand ourselves
The great mistake of computer enthusiasts is to assume that, because these machines have such amazing capabilities, they are able to do anything. What they cannot do was pointed out in a recent speech by I.B. Scott, chairman of CP Rail. They do not, he said, have brainwaves: “They never sit up nights wondering ‘how come?’ or ‘what if?’ They never have hunches. Despite some progress in our search for artificial intelligence, only the human mind has the power to prove or disprove rules by trying to break them. And only the human mind has the instinct to try.”
Everybody got a scare in the stock market crash of October 1987, when it looked as if computers were doing things that should rightly be done by people. The machines had been programmed to sell when prices hit certain levels, and they kept on selling among themselves. As always, they were only reacting to their controls like any other machine – a car or an automatic washer. Still, the situation carried a frightening echo of Henry David Thoreau’s lamentation: “Lo! Men have become tools of their tools!”
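What happened can be pictured with a toy model. The trading rule and every figure below are invented for illustration; this is a sketch of the general mechanism, not a reconstruction of any actual program. Each simulated program is set to sell once the price falls to its trigger, and each sale pushes the price lower still, tripping the next trigger in line.

    # A toy model of threshold selling. All numbers are invented for illustration.
    price = 100.0
    triggers = [98.0, 95.0, 92.0, 90.0]   # sell orders waiting at these price levels
    price -= 3.0                          # an outside shock starts the slide

    selling = True
    while selling:
        selling = False
        for level in list(triggers):
            if price <= level:            # a program's trigger is reached...
                triggers.remove(level)
                price -= 4.0              # ...and its selling drives the price lower
                selling = True

    print(round(price, 1))                # the slide stops only when no triggers remain

Each program in the model behaves exactly as instructed; the cascade comes from the instructions feeding on one another, not from any will of the machines.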
The way to prevent men from becoming tools of their tools in future is simply to remember that a computer is a tool, one that is as likely to be wielded as badly as any other. Human nature being what it is, we should keep a running check on whether it is being wielded responsibly, because we “repeatedly enlarge our instrumentalities without improving our purpose,” as Will Durant wrote.
The computer was originally designed to blow people to bits more efficiently by doing artillery trajectory calculations quicker and more accurately for the U.S. Army, but World War II had ended before it could be applied to this purpose. Computers even now are employed for a great variety of military purposes. These include directing the nuclear missile systems that could bring about the end of the world.
At the same time, their wonderful powers are being put to the cause of pushing back the frontiers of knowledge in medicine and other fields of research. Through these research functions, they are enabling us to understand our world as we never have before. By freeing human beings from the drudgery of routine work, they are opening up new avenues of creativity. Lewis Mumford once said that every technological advance ever made has proved potentially dangerous because it has not been accompanied by advances in self-understanding. Just possibly, microtechnology might one day allow us more fully to understand ourselves.
One of the great abilities of the computer is to run through all the facts and figures pertaining to a situation and set out alternative courses of action. The computer presents an alternative in itself. Like any other tool, it can be used thoughtlessly or carelessly, or for evil ends. You can smash in a man’s head with a hammer, you can mangle your own thumb with it, or you can use it to build a house. You can make something excellent with it, or something mediocre. The computer is asking us: What will it be?