
Technology Is Taking Over English Departments

The false promise of the digital humanities


The humanities are in crisis again, or still. But there is one big exception: digital humanities, which is a growth industry. In 2009, the nascent field was the talk of the Modern Language Association (MLA) convention: “among all the contending subfields,” a reporter wrote about that year’s gathering, “the digital humanities seem like the first ‘next big thing’ in a long time.” Even earlier, the National Endowment for the Humanities created its Office of Digital Humanities to help fund projects. And digital humanities continues to go from strength to strength, thanks in part to the Mellon Foundation, which has seeded programs at a number of universities with large grants—most recently, $1 million to the University of Rochester to create a graduate fellowship.

Despite all this enthusiasm, the question of what the digital humanities is has yet to be given a satisfactory answer. Indeed, no one asks it more often than the digital humanists themselves. The recent proliferation of books on the subject—from sourcebooks and anthologies to critical manifestos—is a sign of a field suffering an identity crisis, trying to determine what, if anything, unites the disparate activities carried on under its banner. “Nowadays,” writes Stephen Ramsay in Defining Digital Humanities, “the term can mean anything from media studies to electronic art, from data mining to edutech, from scholarly editing to anarchic blogging, while inviting code junkies, digital artists, standards wonks, transhumanists, game theorists, free culture advocates, archivists, librarians, and edupunks under its capacious canvas.”

Within this range of approaches, we can distinguish a minimalist and a maximalist understanding of digital humanities. On the one hand, it can be simply the application of computer technology to traditional scholarly functions, such as the editing of texts. An exemplary project of this kind is the Rossetti Archive created by Jerome McGann, an online repository of texts and images related to the career of Dante Gabriel Rossetti: this is essentially an open-ended, universally accessible scholarly edition. To others, however, digital humanities represents a paradigm shift in the way we think about culture itself, spurring a change not just in the medium of humanistic work but also in its very substance. At their most starry-eyed, some digital humanists—such as the authors of the jargon-laden manifesto and handbook Digital_Humanities—want to suggest that the addition of the high-powered adjective to the long-suffering noun signals nothing less than an epoch in human history: “We live in one of those rare moments of opportunity for the humanities, not unlike other great eras of cultural-historical transformation such as the shift from the scroll to the codex, the invention of movable type, the encounter with the New World, and the Industrial Revolution.”


The language here is the language of scholarship, but the spirit is the spirit of salesmanship—the very same kind of hyperbolic, hard-sell approach we are so accustomed to hearing about the Internet, or about Apple’s latest utterly revolutionary product. Fundamental to this kind of persuasion is the undertone of menace, the threat of historical illegitimacy and obsolescence. Here is the future, we are made to understand: we can either get on board or stand athwart it and get run over. The same kind of revolutionary rhetoric appears again and again in the new books on the digital humanities, from writers with very different degrees of scholarly commitment and intellectual sophistication.

In Uncharted, Erez Aiden and Jean-Baptiste Michel, the creators of the Google Ngram Viewer—an online tool that allows you to map the frequency of words in all the printed matter digitized by Google—talk up the “big data revolution”: “Its consequences will transform how we look at ourselves.... Big data is going to change the humanities, transform the social sciences, and renegotiate the relationship between the world of commerce and the ivory tower.” These breathless prophecies are just hype. But at the other end of the spectrum, even McGann, one of the pioneers of what used to be called “humanities computing,” uses the high language of inevitability: “Here is surely a truth now universally acknowledged: that the whole of our cultural inheritance has to be recurated and reedited in digital forms and institutional structures.”

If ever there were a chance to see the ideological construction of reality at work, digital humanities is it. Right before our eyes, options are foreclosed and demands enforced; a future is constructed as though it were being discovered. By now we are used to this process, since over the last twenty years the proliferation of new technologies has totally discredited the idea of opting out of “the future.” Everyone who ever swore to cling to typewriters, record players, and letters now uses word processors, iPods, and e-mail. There is no room for Bartlebys in the twenty-first century, and if a few still exist they are scorned. (Bartleby himself was scorned, which was the whole point of his preferring not to.) Extend this logic from physical technology to intellectual technology, and it seems almost like common sense to say that if we are not all digital humanists now, we will be in a few years. As the authors of Digital_Humanities write, with perfect confidence in the inexorability—and the desirability—of their goals, “the 8-page essay and the 25-page research paper will have to make room for the game design, the multi-player narrative, the video mash-up, the online exhibit and other new forms and formats as pedagogical exercises.”

In fact, the transition to some version of a post-verbal future is already taking place. Debates in the Digital Humanities includes a variety of blog posts, as well as more formal essays and articles, and one of these, Mark L. Sample’s “What’s Wrong with Writing Essays,” describes how digital humanities can be applied in pedagogical terms. Its great utility, in his view, is that it can do away with student writing. “Why must writing, especially writing that captures critical thinking, be composed of words? Why not images? Why not sound? Why not objects?” As an example of a digital-humanist artifact, he offers the anecdote that once, in a video-game class, “one student ‘mapped’ Sid Meier’s Pirates! (1991) onto a piece of driftwood. This ‘captain’s log,’ covered with screenshots and overlayed with axes measuring time and action, evokes the static nature of the game more than words ever can.... The wood says what words cannot.” In this vision, the very idea of language as the basis of a humane education—even of human identity—seems to give way to a post- or pre-verbal discourse of pictures and objects. Digital humanities becomes another name for the obsequies of humanism.

But it would be unfair to generalize from the obviously anti-humanistic manifestations of digital humanities to the entirety of the field itself, for the simple reason that the field has no common essence: it is not a species but at best a genus, comprising a wide range of activities that have little relationship with one another. At its most pragmatic, digital humanities has less to do with ways of thinking than with problems of university administration. The advent of the Internet has posed challenges to the institutions of academia just as it has to the music business, and one function of digital humanities is to address these issues. In “What Is Digital Humanities and What’s It Doing in English Departments?” Matthew Kirschenbaum writes, “the construction of ‘digital humanities’ ... increasingly serves to focus the anxiety and even outrage of individual scholars over their own lack of agency amid the turmoil in their institutions and professions.”

McGann, in his essay “Information Technology and the Troubled Humanities,” points to university press publishing as one site of this discontent. “In 1990 a university press would typically print 1000-1500 copies of an academic book. Today the number is 200-250 and dropping every year.” The obvious solution is to migrate the publication of monographs to the Internet, but the incentives for promotion and tenure continue to reward print publication over online work. “How to bring about the transition to online publication is the $64,000 question,” McGann observes. Other essays in Debates in the Digital Humanities and Defining Digital Humanities raise the problem of how to assign credit for things such as websites and computer programs, which are collaboratively authored, in a system geared to the single-author article or book. Others lament that there is no budget line in most departments for the kind of specialists—programmers, curators, librarians—needed to make a digital humanities project come together.

Underlying these administrative problems is a more basic question about the nature of humanistic work. A humanities culture that prizes thinking and writing will tend to look down on making and building as banausic—the kind of labor that can be outsourced to non-specialists. Digital humanities gains some of its self-confidence from the democratic challenge that it mounts to that old distinction. “Personally, I think Digital Humanities is about building things,” said Ramsay in a polarizing talk at the MLA convention in 2011, printed in Defining Digital Humanities. Unlike many theorists, however, he was willing to make this demand concrete: “Do you have to know how to code? I’m a tenured professor of digital humanities and I say ‘yes.’ ” Naturally, most humanities professors, even digital ones, do not know how to code, and Ramsay’s bluntness caused a backlash: “boy, did this get me in trouble,” he writes in a note to his essay. There is something admirable about this frankness: if digital humanities is to be a distinctive discipline, it should require distinctive skills.

But are they humanistic skills? Was it necessary for a humanist in the past five hundred years to know how to set type and publish a book? Moreover, is it practical for a humanities curriculum that can already stretch for ten years or more, from freshman year to Ph.D., to be expanded to include programming skills? (Not to mention the possibility that the kind of people who are drawn to English and art history may not be interested in, or good at, computer programming.) Like many questions in digital humanities, this one remains open. But the basic emphasis on teamwork and building, as opposed to solitary intellection, is common to all stripes of digital humanists. Digital_Humanities leaves no doubt that the future of the field belongs to democratic groups, not elitist individuals:

The myth of the humanities as the terrain of the solitary genius, laboring alone on a creative work, which, when completed, would be remarkable for its singularity—a philosophical text, a definitive historical study, a paradigm-shifting work of literary criticism—is, of course, a myth. Genius does exist, but knowledge has always been produced and accessed in ways that are fundamentally distributed, although today this is more true than ever.

Once again, the “of course” signals that we are in the realm of ideology. As an empirical matter, the solitary scholar laboring on a singular paradigm-shifting work is quite real. Mimesis is not a myth, and neither is Major Trends in Jewish Mysticism, or Philosophy in a New Key, or The Civilization of the Renaissance in Italy—you can go to the library and check them out (or, if that takes too long, download them). There is no contradiction between this fact and the idea that knowledge is “fundamentally distributed.” Scholarship is always a conversation, and every scholar needs books to write books. Humanistic scholarship has always been additive and collaborative even if it has not been in the strict sense collective. It is not immediately clear why things should change just because the book is read on a screen rather than a page.

This is not to say, of course, that traditional scholars, even “solitary geniuses,” cannot make use of digital tools. They already do: just about every scholarly book written today is written on a computer, and every query takes the form of an e-mail, and some advanced scholarly methods rely on exciting new technological tools. The translation of books into digital files, accessible on the Internet around the world, can be seen as just another practical tool like these, which facilitates but does not change the actual humanistic work of thinking and writing. Indeed, as McGann argues in his new essay collection A New Republic of Letters, the translation of the world’s libraries into digital form represents a major opportunity for a revival of philology, the most traditional kind of textual scholarship. So far, he points out, “we are not even close to developing browser interfaces to compare with the interfaces that have evolved in the past 500 years of print technology.”

If some digital humanists do see a contradiction between individual genius and digital practice, it is because more is at stake here than whether you read on the page or online. For the authors of Digital_Humanities, it is a truism that a change in the medium of knowledge means a change in the structure and even the essence of knowledge. Inspired by a naïve kind of historical materialism, they take for granted that the kind of knowledge available to the reader of a scroll is different from that of a codex, which is different from that of a printed book, which is different from that of a screen. That is why “digital humanists imagine the past and the future in ways that fundamentally transform the authoring practices of poets and historians, using new sets of tools, technologies, and design strategies. For digital humanists, authorship is rooted in the processes of design and the creation of the experiential, the social, and the communal.”

None of these assumptions are obviously correct. And all this is perhaps to state things the wrong way around: it is not that digital humanists have created their tools out of a commitment to the social, but that the tools of digital humanities are only suited to understanding things in the mass. Franco Moretti, the pioneering scholar of the history of the novel, gives a name to the kind of encounter with texts that machines facilitate: “distant reading,” he calls it. 

The New Critics gave us close reading, the engagement with the minute verbal nuances of a text, and the mode still thrives—Helen Vendler’s studies of Shakespeare’s sonnets and Dickinson’s poems are classic contemporary examples. But as Moretti argues in “Conjectures on World Literature,” one of the essays in his new collection, Distant Reading, close reading implies that certain texts are especially worthy of this kind of scrutiny—that is, it implies a canon. “If you want to look beyond the canon ... close reading will not do it,” he observes; you can closely read two hundred poems, but not twenty thousand poems. To analyze such a large quantity of texts, you need to “focus on units that are much smaller or much larger than the text: devices, themes, tropes—or genres and systems.” And this is the kind of concrete pattern-finding that computers specialize in. Distant reading is reading like a computer, not like a human being: as Moretti forthrightly says, “We know how to read texts, now let’s learn how not to read them.”
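What distant reading looks like in practice is worth seeing, if only because it makes the division of labor so plain. Here is a minimal sketch, in Python, of a machine "reading" a corpus too large for any human; the corpus directory and the list of theme-words are hypothetical stand-ins for the devices, themes, and tropes Moretti has in mind.

```python
from pathlib import Path
from collections import Counter
import re

# Hypothetical corpus: a directory of twenty thousand plain-text novels,
# far more than any human could close-read.
CORPUS_DIR = Path("corpus")

# "Units much smaller than the text": a few theme-words standing in for
# the devices, themes, and tropes a distant reader might track.
THEMES = {"orphan", "inheritance", "marriage", "ghost"}

counts = Counter()
for novel in CORPUS_DIR.glob("*.txt"):
    words = re.findall(r"[a-z']+", novel.read_text(encoding="utf-8").lower())
    counts.update(w for w in words if w in THEMES)

# The machine has "read" every text without understanding any of them.
for theme, n in counts.most_common():
    print(f"{theme}: {n} occurrences across the corpus")
```

Nothing in that loop requires, or rewards, understanding a single sentence of any novel it counts.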

Moretti gives an example of this kind of not-reading in an essay revealingly titled “Style, Inc.: Reflections on 7,000 Titles.” Rather than read every novel published in Britain between 1740 and 1850, a task that would fill a lifetime, he takes only the titles of all those novels, and uses a computer program to find patterns in the data. One such pattern emerges right away: over the period in question, the average length of a title decreases dramatically, from fifteen to twenty words at the beginning to six words at the end. This is owed especially to the disappearance of very long titles, which were conventional in early novels but became unfashionable.
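It is worth noting how little machinery such a finding requires. A minimal sketch of the tally, assuming a hypothetical file titles.csv with one novel per row in the form "year,title" (Moretti's actual data and program are not reproduced in the essay), might run:

```python
import csv
from collections import defaultdict

# Hypothetical input: one novel per row as "year,title", covering 1740-1850.
lengths_by_decade = defaultdict(list)

with open("titles.csv", newline="", encoding="utf-8") as f:
    for year, title in csv.reader(f):
        decade = (int(year) // 10) * 10
        lengths_by_decade[decade].append(len(title.split()))

# The pattern Moretti reports: average length falls from fifteen-to-twenty
# words at the start of the period toward six at the end.
for decade in sorted(lengths_by_decade):
    lengths = lengths_by_decade[decade]
    print(f"{decade}s: {sum(lengths) / len(lengths):.1f} words on average")
```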

By examining those long titles, Moretti shows that what changed was the function of the title itself. From a miniature summary of the book, it evolved into a catchy, attention-grabbing advertisement for the book. The proliferation of novels in a crowded market helps to explain this trend: “Titles allow us to see a larger literary field ... and the first thing we see in this larger field ... is the force of the market.” Moretti goes on to draw other salient conclusions from the data—for instance, about why the titles of sensationalistic novels tend to begin with “the” (The Vampyre, The Pirate, The Rebel) while pioneering feminist novels opt for “a” (A Hard Woman, A Daughter of Today). As he explains, “What the article ‘says’ is that we are encountering all these figures for the first time; we think we know what daughters and wives are, but we actually don’t, and must understand them afresh.”
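The first-word pattern is equally mechanical to extract. A sketch, extending the hypothetical titles.csv above with a genre column that a human reader would have to supply in advance:

```python
import csv
from collections import Counter, defaultdict

# Hypothetical input: "year,genre,title" rows, the genre labels
# ("sensational", "new woman", and so on) assigned by a human reader.
articles_by_genre = defaultdict(Counter)

with open("titles.csv", newline="", encoding="utf-8") as f:
    for year, genre, title in csv.reader(f):
        first_word = title.split()[0].lower() if title else ""
        if first_word in {"the", "a", "an"}:
            articles_by_genre[genre][first_word] += 1

# The tally can report that sensational titles favor "the" and
# "new woman" titles favor "a"; it cannot say why.
for genre, counts in articles_by_genre.items():
    print(genre, dict(counts))
```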

“Style, Inc.” presents as good a case for the usefulness of digital tools in the humanities as one can find in any of these new books. And yet its findings are not very exciting. It is striking that digital tools, no matter how powerful, are themselves incapable of generating significant new ideas about the subject matter of humanistic study. They aggregate data, and they reveal patterns in the data, but to know what kinds of questions to ask about the data and its patterns requires a reader who is already well-versed in literature. The computer can tell you that titles have shrunk (and you hardly need a computer to tell you that: the bulky eighteenth-century title is commonplace and a target of jokes even today), but it takes a scholar with a broad knowledge of literary history—that is, a scholar who has examined the insides and not just the outsides of literary and artistic works—to speculate about the reasons titles shrink, and why it matters. Likewise, you have to know about the nuances of the English language and the history of feminism in order to be able to explain why titles such as A Hard Woman appealed to the writers of “new woman” novels. Is a computerized list of titles necessary to generate such an insight, or could it be deduced simply by reading and thinking about some of the relevant books? Does the digital component of digital humanities give us new ways to think, or only ways to illustrate what we already know?

Certainly, if we ask the data unsophisticated or banal questions, we will get only unsophisticated and banal answers. That is the lesson of Uncharted, in which Erez Aiden and Jean-Baptiste Michel play tricks with the Google Ngram Viewer. In an odd but revealing moment, they quote a list of things that publications said about their invention when it launched, including this: “Mother Jones hailed it as ‘perhaps the greatest timewaster in the history of the Internet.’” “Hailed” does not seem like quite the right word here, but Aiden and Michel don’t care: what matters is not the quality of the attention but the fact that “the interwebs were atwitter, and the Twitter was abuzz.”

The Google Ngram Viewer allows the user to search all of Google Books for strings of characters. This sounds like a powerful tool, but as Aiden and Michel put it through its paces, it turns out once again that the digital analysis of literature tells us what we already know rather than leading us in new directions. It is not surprising to learn, for instance, that the incidence in print of the name of any given year is most common in that year itself, so that more books containing “1950” were published in 1950 than in any other year. One reason this is not surprising is that all books’ copyright pages include the year of publication; but Aiden and Michel ignore this fact, which tends to nullify their conclusions about the “forgetting curve.” Once again, meta-knowledge—knowledge about the conditions of the data you are manipulating—proves to be crucial for understanding anything a computer tells you. Ask a badly phrased question and you get a meaningless answer.
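The shape of the underlying data makes the copyright-page objection easy to see. Google distributes its ngram counts as downloadable files, one tab-separated line per ngram per year. The sketch below tallies how many volumes contain the string "1950" year by year; the file name is illustrative, and the four-column layout (ngram, year, match count, volume count) follows my understanding of the published version-2 export, which should be verified against an actual download.

```python
# Tally, year by year, how many volumes contain the string "1950",
# from a Google Books Ngram export file (layout assumed, not confirmed:
# ngram TAB year TAB match_count TAB volume_count).
TARGET = "1950"
volumes_by_year = {}

with open("googlebooks-eng-all-1gram-00001.txt", encoding="utf-8") as f:
    for line in f:
        ngram, year, match_count, volume_count = line.rstrip("\n").split("\t")
        if ngram == TARGET:
            volumes_by_year[int(year)] = int(volume_count)

# The peak falls in 1950 itself, which is less a discovery than an
# artifact: nearly every book printed in 1950 carries "1950" on its
# copyright page.
peak = max(volumes_by_year, key=volumes_by_year.get)
print(f'"{TARGET}" appears in the most volumes in {peak}')
```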

At another point Aiden and Michel use the Ngram Viewer to document the suppression of certain names in German-language books published between 1933 and 1945. They show that banned artists such as Chagall and Beckmann virtually disappear from German books under the Nazis, and then rebound spectacularly after the war, as interest in their work revives. This is another example of data illustrating a truism rather than discovering a truth. After all, we wouldn’t think to search for those names in that time period unless we knew what we were going to find, and why; and the same holds true for the other examples of censorship that Aiden and Michel cite—the word “Tiananmen” in Chinese after 1989, for instance. The faux naïveté of some of these digital tools, their proud innocence of prior historical knowledge and literary interpretation, is partly responsible for the thinness of their findings.

Indeed, Aiden and Michel write that when they posed the same question about artists’ names to “a scholar from Yad Vashem,” she was able to predict exactly “which names would appear at which end of the curve. We didn’t give her access to our data or to our results, and we didn’t even tell her why we were asking. All she got from us was the list of names. Nevertheless, her answers agreed with ours the vast majority of the time.” Of course they did: she was a scholar! Aiden and Michel do not seem to recognize that this example, far from making the case for the usefulness of Ngrams, completely destroys it, by turning them into fancy reiterations of conventional wisdom.

If computers cannot think better than human beings, digital humanists are left with the argument that at least they can think faster—the John Henry argument. In the essay “Developing Things: Notes toward an Epistemology of Building in the Humanities,” Ramsay and Geoffrey Rockwell observe, “Reading Foucault and applying his theoretical framework can take months or years of application. A web-based text analysis tool could apply its theoretical position in seconds.” Never mind that understanding that theoretical position will take more than seconds. Here are nicely encapsulated the two fundamental errors that theorizing about the revolutionary nature of digital humanities often commits. The first is the idea that thinking humanistically is a matter of taking a framework of ideas and “applying” it to a text or a work of art. The second is the idea that applying ideas in this way leads to an external, self-subsistent result, be it a theory or another book or a piece of driftwood with pictures on it.

Both of these errors derive from a false analogy between the humanities and the sciences. Humanistic thinking does not proceed by experiments that yield results; it is a matter of mental experiences, provoked by works of art and history, that expand the range of one’s understanding and sympathy. It makes no sense to accelerate the work of thinking by delegating it to a computer when it is precisely the experience of thought that constitutes the substance of a humanistic education. The humanities cannot take place in seconds. This is why the best humanistic scholarship is creative, more akin to poetry and fiction than to chemistry or physics: it draws not just on a body of knowledge, though knowledge is indispensable, but on a scholar’s imagination and sense of reality. Of course this work cannot be done in isolation, any more than a poem can be written in a private language. But just as writing a poem with a computer is no easier than writing one with a pen, so no computer can take on the human part of humanistic work, which is to feel and to think one’s way into different times, places, and minds.

The problem for the humanities—the institutional and budgetary problem—is that changed minds and expanded spirits are not the kinds of things that can be tabulated on bureaucratic reports. In humanistic study, quantification hits its limits (even if quantifiers refuse to recognize them). It is much easier to measure the means—books published, citations accumulated—than the ends. It is likely that digital humanities will only increase this kind of pressure on humanities departments. All those grants have to be accounted for somehow; the rhetoric of the digital, in the academy as in the marketplace, prides itself on being results-oriented. Thus the authors of Digital_Humanities conclude, “If the humanities are to thrive and not just exist in niches of privilege, they will have to visibly demonstrate the contributions to knowledge and society they are making in the digital era.” Here the populist language of ivory towers combines with the market language of productivity to create yet another menacing metric for the humanities.

The best thing that the humanities could do at this moment, then, is not to embrace the momentum of the digital, the tech tsunami, but to resist it and to critique it. This is not Luddism; it is intellectual responsibility. Is it actually true that reading online is an adequate substitute for reading on paper? If not, perhaps we should not be concentrating on digitizing our books but on preserving and circulating them more effectively. Are images able to do the work of a complex discourse? If not, and reasoning is irreducibly linguistic, then it would be a grave mistake to move writing away from the center of a humanities education.

These are the kinds of questions that humanists ought to be well equipped to answer. Indeed, they are just the newest forms of questions that they have been asking since the Industrial Revolution began to make our tools our masters. The posture of skepticism is a wearisome one for the humanities, now perhaps more than ever, when technology is so confident and culture is so self-suspicious. It is no wonder that some humanists are tempted to throw off the traditional burden and infuse the humanities with the material resources and the militant confidence of the digital. The danger is that they will wake up one morning to find that they have sold their birthright for a mess of apps. 

Adam Kirsch is a senior editor at The New Republic and a columnist for Tablet.