Chip of Fools

A review of The Second Self: Computers and the Human Spirit by Sherry Turkle.

Before rushing blindly into the computer age, we need to remind ourselves that events have already falsified most of the predictions about “postindustrial society” issued with such authoritative assurance over the last forty years. According to a long line of social forecasters, the expansion of the service sector of the economy, the growing proportion of white-collar workers in relation to blue-collar workers, and the development of increasingly sophisticated technologies will homogenize the class structure of advanced capitalist societies. In the middle-class society of the future, “professional, technical, and managerial people,” as Peter Drucker argued in 1958, will “become the largest group in the working population … larger in fact than the blue-collar people.”

Advanced technology, on this reasoning, will create an insatiable demand for trained personnel. In the 1950s and 1960s, those who predicted a “knowledge revolution” cited the growth of college enrollments after World War II and the upgrading of the educational credentials required for employment. The exponential increase of information, in their view, accounted not only for the decline of the working class and the rise of a “new middle class,” but for the poverty of the newly discovered “underclass” of blacks, Puerto Ricans, and other “culturally deprived” minorities who lacked the skills essential to effective competition in the labor market. The drive for equal opportunity in education, in many ways the heart of the liberal program in the ’50s and ’60s, assumed that once the underclass had been equipped with technical skills, it would gain access to steady jobs and find its way into the mainstream of American life.

Today the same arguments are advanced in favor of compulsory training in the use of computers. Yet the “postindustrial society” routinely invoked in such appeals has demonstrably failed to materialize. The prolonged academic depression, the surplus of college graduates, and the growing unemployment even among people with advanced degrees, make a mockery of Drucker’s optimistic assertion that “we cannot get enough educated people.” It now appears that employers use educational credentials not to certify technical competence but merely to screen out applicants, mostly from the working and lower classes, who lack what has aptly been called a white-collar union card—that is, a college degree. The educational system has not served as an equalizer. For reasons explained by Christopher Jencks and others, it has reinforced class distinctions instead of breaking them down. Nor has the United States become a middle-class society. Thanks to the work of Richard Parker, Andrew Levinson, Paul Blumberg, and Harry Braverman, among others, it is now clear that most white-collar workers perform essentially unskilled labor. Distinctions between white-collar jobs and blue-collar jobs have become increasingly arbitrary and misleading. Advanced industrial society rests not on the upgrading of skills but on the systematic destruction of skills—on the “degradation of labor,” in Braverman’s phrase.

FAR FROM upgrading the technical skills of the work force, advanced technology in the service of corporate capitalism makes skilled labor superfluous. It promotes an interchangeability of personnel, a rapid movement from one type of work to another, and most important of all, a growing concentration of the labor force in technically backward, labor-intensive, and often un-unionized sectors of the economy. The rapid growth of employment in the service sector, which allegedly proves that America has become a middle-class, “postindustrial” society, reflects, on the contrary, the proletarianization of the work force and the growth of the reserve army of labor, as Marx called it—the casually or irregularly employed workers who constitute an “inexhaustible reservoir of disposable labor power.” The same developments lead to high levels of unemployment, which can no longer be regarded as aberrational but have to be seen, as Braverman puts it, as a “necessary part of the working mechanisms of the capitalist mode of production.”

I dwell on all this because apologies for the computer and for intensified computer instruction in the schools rest on the familiar claim that technological innovations will create an abundance of skilled jobs, eliminate disagreeable jobs, and make life easy for everyone. Everything we know about technological “progress” indicates, on the contrary, that it promotes inequality and political centralization. It commends itself to the masters of American industry for that very reason. Whenever we hear that some new technology is “inevitable” we should consult the historical record, which shows that technical innovations usually appeal to industrialists not because they are inevitable or even because they make for greater productive efficiency, but because they consolidate the industrialist’s power over the work force. The triumph of industrial technology testifies not to the inexorable march of science, but to the defeat of working-class resistance.

It is a muddled, ahistorical view of the industrial revolution that dismisses this resistance as an attempt to “postpone the inevitable,” as J. David Bolter writes in his study of the coming “computer age.” It is equally muddled to argue that since the “computer age” is upon us, our best hope lies in “reforming the age of computers from within.” In the past, efforts to reform industrial technology from within, usually led by engineers, served merely to reinforce the lessons already driven home by workers’ resistance to the introduction of new technologies: that those technologies serve the interests of capital and that even those who design and manage the machines have little to say about the uses they are put to.

Over and over again, new technologies have reduced even the engineer’s work to a routine. What originates as a craft degenerates into a series of automatic operations performed more or less unthinkingly. Computer programming is no exception to this pattern. As Sherry Turkle writes:

In the course of the last decades programmers have watched their opportunities to exercise their expertise in a spontaneous way being taken away. Those who are old enough remember the time when things were different as a kind of golden age, an age when a programmer was a skilled artisan who was given a problem and asked to conceive of and craft a solution. … Today, programs are written on a kind of assembly line. The professional programmer works as part of a large team and is in touch with only a small part of the problem being worked on.

IN THE early days of the computer, according to Turkle, many people hoped that electronic technology could be captured by the counterculture. “Personal computers became symbols of hope for a new populism in which citizens would band together to run information resources and local government.” But things did not turn out that way. Computers encouraged centralization and bureaucracy. Instead of humanizing industry, the personal computer came to serve as an escape from industry for hobbyists and even for professional programmers seeking to achieve in the privacy of the home the control they could no longer exercise at work. Turkle reminds us that “people will not change unresponsive government or intellectually deadening work through involvement with their machines at home.” But personal computers offer the illusion of control in “one small domain,” if not in the larger world of work and politics. Sold to the public as a means of access to the new world of postindustrial technology, personal computers in fact provide escape from that world. They satisfy a need for mastery and control denied outlets elsewhere.

They provide other forms of escape as well. In interviews with people who use computers extensively, particularly with children, Turkle found that computers appeal to people who find their personal lives unmanageable, often to people afraid of being overwhelmed by uncontrollable emotions. “The greater the anxiety about being out of control, the greater the seduction of the material that offers the promise of perfect response.” Playing video games and solving problems on a home computer help to dissociate thought from feeling. They encourage a cool, detached, cerebral state of mind. They allow the operator to feel at once “swept away and in control.” The computer provides a lifelike response that can nevertheless be predicted and controlled. It is no wonder that many users find themselves becoming addicted to their computers. After contrasting early expectations about computer technology to the role it actually plays in people’s lives, Turkle concludes: “It would certainly be inappropriate to rejoice at the ‘holistic’ relationships that personal computers offer if it turns out that they serve as a kind of opiate.”

EXERTION of control over a machine often leads to the further step of identification with the machine—to a new conception of the self as a machine in its own right. Turkle’s principal contention is that technologies are “evocative,” changing the way we think about ourselves and about human nature. The image evoked by computers is the image of the machine-like self. In this sense, the computer is a “mirror of the mind”—not because it accurately imitates the operation of the mind (as we are often told) but because it satisfies the wish to believe that thought can divorce itself from emotion. For those who have entered most fully into the world of the computer, the prospect that men and women can become machines is a hopeful promise, not a threat. The promise finds its most highly developed expression in the Utopia of “artificial intelligence”—the “next step in evolution,” as Edward Fredkin of M.I.T. once proclaimed it. The theory of artificial intelligence rests on the premise that “thought does not need a unitary agent who thinks,” as Turkle puts it. Thought can dispense with the thinking self, in other words. It can thus overcome the emotional and bodily limitations that have encumbered humanity in the past. Theorists of artificial intelligence celebrate the mind’s clarity, as opposed to what one of them, Marvin Minsky, revealingly refers to as the “bloody mess of organic matter.”

These dreamers hope to create a new race of supermen freed from nature and from man’s oldest enemy, death. Listen to Fredkin’s contemptuous recital of human limitations.

Basically, the human mind is not most like a god or most like a computer. It’s most like the mind of a chimpanzee and most of what’s there isn’t designed for living in high society [sic] but for getting along in the jungle or out in the fields. … The mere idea that we have to be the best in the universe is kind of far-fetched. … The fact is, I think we’ll be enormously happier once our niche has limits to it. We won’t have to worry about carrying the burden of the universe on our shoulders as we do today. We can enjoy life as human beings without worrying about it.

The social vision implied by this kind of thinking is as regressive as the escapist psychology behind it. The psychology is the fantasy of total control, absolute transcendence of the limits imposed on mankind by its lowly origins. As for the social vision, it carries one step further the logic of industrialism, in which the centralization of decision-making in an educated elite frees the rest of us from the burden of political participation.

According to J. David Bolter, the computer promotes a new understanding of human limitations; and Fredkin’s statement might lend itself to just this sort of misinterpretation. Like Turkle, Bolter studies the computer’s imaginative impact. He too concludes that computers evoke a new image of man as a machine. But he offers a more encouraging reading of this “major change in sensibilities,” one reminiscent of those advanced by Marshall McLuhan and Alvin Toffler. The computer has undermined our “linear” conception of progress, Bolter thinks, and replaced it with a “finite world view.” It has weakened the old Faustian “concern with depth” and encouraged a concern with surfaces. It has devalued emotional intensity; but “if the computer age does not produce a Michelangelo and a Goethe, it is perhaps less likely to produce a Hitler or even a Napoleon.”

Bolter is not unaware of the computer’s “Utopian” appeal; but he thinks that other considerations “balance” the Faustian implications of computer technology. “We are becoming aware of our own temporal limitations.” But as Fredkin’s feverish reflections show so clearly, this acknowledgment of limitations, prompted by a comparison of the slow-moving human mind with the computer’s rapid calculations, does not mean what it appears to mean. Instead of accepting human limitations, theorists of artificial intelligence and other prophets of the new electronic age dream of overcoming them by creating a new race of machines, just as genetic engineers dream of redesigning the human body so as to free it from all the ills that flesh is heir to.

Like most people who write about computers, Bolter, a classicist with a master’s degree in computer science, has no interest in politics and no conception of the political context of computer technology. His claim that computers foster a sense of limits rests, in part, on the irrelevant observation that “computers figure largely in all facets of conservation and rational consumption.” Conservation and consumption are political issues, not technical issues, and computers in themselves will do nothing to bring about a rational allocation of scarce resources or a less exploitive attitude toward nature. Conservation runs counter to our entire system of large-scale capitalist enterprise. It demands small-scale production, political decentralization, and an abandonment of our consumer culture. It demands a change in the way we live, not a new technology, even a “revolutionary” technology. In any case, the revolutionary impact of information technology has been greatly exaggerated by students of “megatrends.” Like all technologies, the computer solves problems that are defined not by technology itself but by the prevailing social priorities. In a society based on the ruthless exploitation of natural resources and on the dream that man can “raise himself above the status that nature seems to have assigned him,” in Bolter’s own words, the computer will serve as another weapon in man’s war against nature. More prosaically, it will serve as a means of producing corporate profits. In a more democratic society, the computer might serve more constructive purposes.

Technology is a mirror of society, as Turkle insists, not a revolutionary force in its own right. It shows us ourselves as we are and as we would like to be; and what it reveals, in the case of the computer, is an unflattering image of the American at his most incorrigibly escapist, hoping to lose himself—in every sense of the term—in the cool precision of machines that know everything except everything pertaining to that “bloody mess of organic matter.”