Imagine a new Library of Alexandria. Imagine an archive that contains all the natural and social sciences of the West—our source-critical, referenced, peer-reviewed data—as well as the cultural and literary heritage of the world's civilizations, and many of the world’s most significant archives and specialist collections. Imagine that this library is electronic and in the public domain: sustainable, stable, linked, and searchable through universal semantic catalogue standards. Imagine that it runs on open-source software, allowing legacy digital resources and new digital knowledge to be integrated in real time. Imagine that its Second Web capabilities allow universal searches of the bibliome.
Well, why not imagine this library? Realizing such a dream is no longer a question of technology. Remarkable electronic libraries are already being assembled. Google Books aims to catalogue about 16 million books. The nonprofit Internet Archive already has some 1 million volumes. Public expectations run ahead of even these efforts. To do research, only one in a hundred American college students turns first to their university catalogue. Over 80 percent turn first to Google.
It is clear that if a new Alexandria is to be built, it needs to be built for the long term, with an unwavering commitment to archival preservation and the public good. A true public good itself, it probably needs to be largely governmentally funded. And, while a global and cooperative venture, it needs to be hosted by one organization that is reputable, long-standing, and nonprofit, and that exists in a stable jurisdiction. The Library of Congress, the flagship institution of the world’s only surviving Enlightenment republic, comes to mind. There might be other possibilities, such as the New York Public Library, or the British Library, or a consortium of the world’s leading university libraries—UCLA, Harvard, Cambridge University, and so on.
In other words, the question for scholars and gatekeepers is not whether change is coming. It is whether they will be among the change-makers. And if not them, then who? Who else will ensure long-term conservation, and search capabilities that remain compatible across the bibliome and over time? Who else will ensure equality of access? This is not, ultimately, a challenge of technology, finances, or even laws, difficult though they are. It is a challenge of will and imagination.
Answering that challenge will require some soul-searching: Do we have the generosity to collaborate? Can we build legal, organizational, and financial structures that will preserve and order—but also share and disseminate—the learning of the world? Scholars have traditionally gated and protected knowledge, yet also shared and distributed it in libraries, schools, and universities. We have stood for a republic of learning that is wider than the ivory tower, and now is the time to do so again. We must stand up, as the Swedes say, for folkbildningsidealet, that profoundly democratic vision of universal learning and education.
We must first understand that the nature of the library is changing. Traditionally, libraries have been conceived as protective vessels in a world where information is scarce. Our iconic library stories are romances of destruction, decay, and amnesia. We tell tales of time, fire, and barbarians, and of heroic rescues of fragments of lost and esoteric knowledges. We still mourn Alexandria. We revere St. Catherine’s Monastery, the Vatican archives, and the Dead Sea Scrolls. We grieve over the Christians’ closing of the Academy of Athens, and we listen in horror to the tales of the fall of Constantinople, where in desperation the last Greek scholars lit the cannons with their manuscripts. Who among us has not lamented, with Aeneas Silvius (Pius II), that Homer and Plato have now “died a second death”? Boethius, the monks of Iona, the fleeing Byzantine humanists—these are our heroes and role models.
In other words, throughout history, libraries have depended on destruction. And today, in an era of electronic abundance, they still operate within an increasingly imaginary economy of scarcity—fragments, incunabula, manuscripts, rare books. They act as storehouses of pricey collectors’ items, painstakingly recorded sets of symbols, crafted sometimes by hand, sometimes in block print, and sometimes in movable type. Only very recently (remember the last printers’ strike in Britain) were any of these conjured up from the bowels of computers. Once, books were chained to the wall. Today, print is an afterthought: “Do you want a receipt with that?”
In today’s era of electronic abundance, how can libraries archive the dreams and experiences of humankind? What do we discard? And if a library can no longer be understood as a warehouse of treasures, a primitively accumulated Schatzkammer, what is it?
One way to understand this dilemma is to consider the choices faced by organizations such as the Harvard Library, the world’s largest university library, as it digitizes its material. Its some 16 million volumes rival those planned for Google Books. One collection took nearly 400 years to achieve; the other, less than a decade. Harvard's institutional culture dates back to 1638, and as late as the mid-nineteenth century, it was a stated duty of its Overseers to count its volumes. In those days, Harvard’s books were threatened by fire and water (as London’s booksellers’ wares crossed the Atlantic on clippers and schooners). Yet today, our sailing—or counting—skills mean little.
Like all good research libraries, Harvard's is hierarchically organized. The core reference section, with bibliographies, dictionaries, encyclopedias, library catalogues, and so on, is rapidly dematerializing as it moves into “the cloud.” So is much of the record of scholarship, especially in the natural sciences, and at least some of the record of the human experience.
But what about gray data, such as laboratory notebooks, lectures, conference proceedings, dissertations, data sets, and coursework? Is it not the task of libraries to preserve the processes as much as the products of knowledge? How else can we test it? Or indeed comprehend it historically? The papers of Newton, Darwin, Einstein, and Bohr can be (and indeed are being) reproduced in toto. But what about “big science”? The ATLAS detector of the Large Hadron Collider at CERN—that 27-kilometer underground circular tunnel used to search for the Higgs boson—takes 90 million measurements 600 million times a second, and these are analyzed by some 6,000 high-energy physicists. Worldwide, stored scientific data is approaching a petabyte and doubles every year. Even artisanal lab skills, once handed down by lore and practice, are now recorded on wikis. What is to be preserved? By whom? For how long? How do we process, calibrate, reorganize, analyze, and store our data? What do we do when our software, let alone our brains, cannot keep up? What do we do when bits degrade, software and hardware go extinct, and cyberspace turns out to be a decaying maze?
Scholars rightly argue that we cannot meaningfully analyze our peer-reviewed knowledge without also archiving its primary sources. But today’s knowledge quest is universal: Our primary sources encompass all the knowledge, hopes, and dreams of humanity. Our Alexandria was not burnt, our Byzantium still stands, and our Athenian academies are blossoming. And in addition to the near-infinitude of our scholarly endeavours and their materials, we want to preserve that which we have not yet incorporated into our learned canons: the near-extinct and the barely remembered, the oral traditions and the dying languages, the esoteric and the sacred—the reviled, even—and the persecuted. We want the Nazi state papers and the Lodz ghetto archives, the Soviet encyclopedias and the samizdat literature, the Maimonides commentaries and the Genizah fragments, the Ethiopians' church songs and their memories of the recent famines.
Next to the rare, well-studied cultic artefact—the letter by Jefferson, the Magna Carta—we also want ephemera: pamphlet literature, theater bills, immigrant broadsheets, and poetry workshops. And we are right to want ephemera. We have belatedly realized that humankind understands only poorly what will last through the ages. Think of John Clare, Emily Dickinson, or Barbara Pym. Or think of Isaac Bashevis Singer.
What if our next “peasant poet,” as John Clare was known, twitters? What if he writes a blog or shojo manga? What if he publishes via a desktop or vanity publisher? Will his output count as part of legal deposit material? What if there is a masterpiece being filmed in Bollywood? What if one among many Nigerian novelettes, which typically address a young heroine’s agonized choice between a village boy and a “big man,” turns out to be written by a Jane Austen? And even if none are, don’t we want to preserve them all, regardless, so that one day we can run larger studies on them, studies perhaps as yet unimaginable, because they depend on computer uses not yet invented?
Moreover, investigating very large datasets—whether texts, numbers, or images—is a job for consortia. It is beyond the capacity of any one library or university, especially if the data to be mined is raw and unorganized—such as digital satellite imagery, census data, survey responses, and the like. Such studies might engage not only university-affiliated scholars, but also the community.
You see the problem. What is the library, when the totality of experience approaches that which can be remembered? What is it when we no longer preserve only those fragments that time, fire, and barbarians have left us? When we are able to safeguard not merely remnants of our discourses on thought, memory, and images, but the thoughts, memories, and images themselves—complete? What do we do when we have not only the Lives of the Most Excellent Painters, Sculptors, and Architects, but also Vasari’s blog, wiki, Twitter, texts, emails, chatroom, Facebook, radio interviews, TV appearances, and electronic notebooks?
In 2008, the Web’s inventor, Tim Berners-Lee, reflecting on his topsy-turvy child, noted that the Web’s vast emergent properties are perhaps best modelled by biological concepts, such as plasticity, population dynamics, food chains, and ecosystems. But how do we conceive of the Web when this also means grasping its quasi-biological whole? Do libraries dream of electric sheep? For that matter, do electric sheep dream of libraries? Who will preserve? Who will be preserved? How will we tell the difference? Will Simfrog 2.0 be conscious? Will Second Life take on life—and if so, what will be its—and our—library?
There is also the question of access. As the Open Web movement has it, an old tradition and a new technology have converged to make possible an unprecedented public good. The “new technology” means that the marginal costs of electronic replicas are now nearly zero, triggering a gloriously chaotic disintermediation. Think only of Kindle/Amazon, Google Books, the Espresso Book Machine, or Mills & Boon’s e-books. But the role that the “old tradition” will play in this arrangement is less often discussed. Scholars publish without direct pay, for the sake of knowledge, with peer recognition and social utility as their reward. In practice, peer recognition reigns foremost. Most scholars are only mildly interested in widening their audiences. This matters, for scholars run archives and libraries, and they run them according to their lights. These institutions do a fine job collecting, but the truth is that their guardians mostly grant access to, well, fellow scholars.
When speeches are given, university representatives describe their mission as “producing, preserving, and propagating knowledge.” But in local-governance parlance, the purpose of university libraries is to serve their faculty and students, and, when feasible, scholars at peer institutions. In other words, university libraries typically define their constituencies as those scholars formally associated with their universities. Not even alumni are mentioned. The narrowness of these constituencies is worth stressing, because many people think that the great university libraries set out to serve the public. They do not, at least not directly.
This matters, because the public today is not the public of 50 years ago. Okies, hillbillies, sharecroppers, and mill workers may not have had the energy or learning to engage with scholars. But today’s public is educated and engaged. Indeed, it has proven this by participating in the collective knowledge projects that the technological rupture has enabled. The World Community Grid and SETI@home sign up volunteers’ computers; projects such as Wikipedia turn to the volunteers themselves. Through Folding@home, some 40,000 PlayStation 3 volunteers help Stanford scientists fold proteins. Through reCAPTCHA, amateurs help digitize The New York Times’ back catalogue. In the ESP project, the public has labelled some 50 million photographs to train computers to think. In Galaxy Zoo, some 160,000 people help astronomers at Johns Hopkins University and elsewhere classify galaxies, and in Africa@home, volunteers study satellite images of Africa to help the University of Geneva create useful modern maps. Conservation biology, a whole academic field, depends on amateur surveys, both outdoors and in historical collections. At Herbaria@home, for example, volunteers decipher herbaria held in British museums.
Yet much of this crowdsourcing, or mass voluntary participation, is just “grunt work”: basic lab-assistant–type work that often deals with image recognition. Scholars engage less with the “hive mind”—the public—when it comes to more complex or interpretative work. There are exceptions. For example, in Israel, the Rothschild family and others are pioneering a project to put the Dead Sea Scroll fragments on a public-domain website, thereby engaging with religious communities that have unparalleled language skills. But by and large, the scholarly community has not made its “core” research material available to the public: to choose a few examples, the House of Commons Parliamentary Papers, Historical Statistics of the United States Online, BMJ Clinical Evidence, Early English Books Online, the eHRAF Collection of Ethnography, the Index of Christian Art, ProQuest Dissertations and Theses, Index Islamicus, Frantext, Oxford Music Online, ARTstor, and Aluka. Try accessing these databases via Google instead of through your university account. It is a thought-provoking experience. Many make very clear indeed that they are commercially owned and thus closed to all but those able to pay eye-watering fees. And even university-controlled collections are expensive. Take the “Index of Christian Art,” assembled by Princeton since 1917. There are vast, learned—and poor—Christian communities worldwide. Should this magnificent assemblage of digitized photographs be limited to those able to pay $500 annual fees? It is free for Dumbarton Oaks fellows, but even a fellow's spouse is only allowed to see the electronic database if possessed of “appropriate academic qualifications in his or her own right.” So much for familial economies of scholarship, and for the rights of that generation of women who left college to get married, yet engaged with their husbands’ work. So much, too, for modern families—why make a gesture toward spouses, but not partners?
My examples of closed academic databases are random. I do not mean to single out anyone for special blame. But nor do I want to absolve anyone. The wider point is this: Few academic databases and research tools are in the public domain, even though the public has paid for them—through research grants, tax breaks, and donations. Nor is the higher-order academic commentary available to the public. It is arguably especially problematic that Ph.D. and M.A. theses are not in the public domain, given that these masterworks delineate those supposedly “appropriate” boundaries of access. In other words, the gate-openers remain hidden from those debarred from accessing that to which they open the gate. It is equally problematic that JSTOR, the splendid 1997 database of most twentieth-century scholarly articles in the social sciences and humanities, is off-limits to the public (although in fairness JSTOR’s hands are largely tied, since it and indeed other academic knowledge managers face near-impossible copyright laws).
And at least the academic databases have entered the digital realm. Academic monographs, although produced by digitized means, are then, in what is arguably an act of collective academic madness, turned into non-searchable paper products. Moreover, both academic articles and monographs are kept from the public domain for the author's lifetime plus 70 years. My own Ph.D., published in 1999, will come into the public domain in about 110 years, around 2120. And no matter what Congress might claim, I do not think my royalty earnings will amount to much for my grandchildren. I would rather reach out to fellow scholars and enthusiasts.
In any case, grandchildren’s rights are not the issue here. If they were, Congress would not have applied the same centuries-long lockup periods to out-of-print works whose copyright holders and publishing presses can no longer be found. The public does not even have allemansrätt, to use the Swedish medieval term for the right to roam, on those vast thought-lands that lie fallow and abandoned. Because of copyright, few dare to adopt these orphaned works into the public domain, no matter how central they are to scholarship, or how interesting to the general public. Few dare to re-issue them even in paper format. Additionally, restrictive fair-use rules mean that libraries that own a copy do not dare digitize it for the public domain, or even for their own constituencies. In the age of electronic reproduction, many books are legally enjoined to remain as few and as rare as Gutenberg Bibles.
As things stand, scholars sign over their copyrights to for-profit academic presses and journals. Sometimes, in violation of their contracts, they also post their works on their own websites. Publishers are not suing yet. It is a “don’t ask, don’t tell” standoff. But that is hardly ideal. It means that free public access to scholarship, insofar as it exists in fragments here and there, is based on a wholesale violation of copyright. And, in any case, self-archiving is inherently unstable and transient. The legal profession rightly worries about judgments based on since-vanished references, and those of us who work in twentieth-century history or the social sciences know the difficulties of citing ever-changing websites. Thus the new Alexandria falters, most immediately on copyright legislation and market failure.
The academic publishing market has bifurcated into a fragmented paper market for monographs and an oligopolistic electronic market, or cartel, for journals. The inflation rate for scholarly monographs is bad enough (and more academic books are published every year). But prices are hyper-inflating for commercial academic journals. Three firms, it is said in academic circles, control 85 percent of the periodicals market. Karl Marx and Adam Smith, both experts on the natural evil of monopoly, would nod knowingly on learning that an annual subscription to a scholarly journal can cost up to $25,000, and that the price per page for commercial journals is up to twelve times that for nonprofit ones. And this is not because the for-profit journals are better. In the field of economics, at least, the cost per citation is 16 times higher for commercial journals than for those published by scholarly societies. And this is only counting subscription fees. Additionally, a higher proportion of closed-access journals than open-access journals charge publication fees, and at the high end, they charge more than the most expensive open-access journal, PLoS, the Public Library of Science.
After all, there are no substitute goods, and the purchasers of the journals (university libraries, but ultimately university administrators) are not the consumers (the professors and students). Thus, publicly funded institutions first give away and then buy back their own research, research that they paid for in the first place. To add insult to injury, these for-profit journals are produced by unpaid, volunteer editors and peer reviewers. Here, too, labor is donated for free, by those same scholars who also sign over their copyrights for free. It is, shall we say, an unusual business model. The producer gives away a product that he then buys back after having helped the intermediary package it. It is no wonder that private-equity companies circle these publishing companies. It is no wonder, either, that these publishers work hard to ensure regulatory capture. Congress is the academic publishers’ most natural client and constituency, and—thanks to their alliance with Hollywood and the music industry—their success in locking up and rendering irrelevant the output of academic research has been nothing less than astonishing.
Robert Darnton, head librarian of Harvard and a renowned scholar, has rightly warned that what happened to journals will happen to books. The 2008 settlement between Google and the Book Rights Registry, after all, explicitly states that its purpose is “to maximize revenues.” And while the U.S. research libraries that participate in the Google digitizing project nominally retain a digital copy, they are banned from making this copy available even to their own members, let alone members of other participating libraries, or the general public. A recent Financial Times article agrees with Darnton, warning that, by means of the Book Rights Registry, Google and the publishing industry have created “an effective cartel,” with “significant barriers to entry.” New competitors are by default barred from scanning books, and even if they were not, “Google’s effective most-favoured-provider status” would stifle competition. An “effective monopoly provider” always eventually charges monopolistic and discriminatory prices, the Financial Times notes, “just as happened with academic journals in the past.”
Of course, there are signs of hope. Around 10 percent of Anglophone academic journals are now open access, and the “gold” ones are edited and peer-reviewed. Even scholars seeking only peer recognition are well advised to publish in them since, with prestige factors equalized, citation rates are significantly higher for open-access articles. As Kevin Guthrie of Ithaka has noted, however, as long as journal and university press brands continue to be used as a proxy for quality in tenure committees, the commercial stranglehold will remain. Yet this is unnecessary. After all, tenure committees read candidates’ work and canvass outside experts—the proxy is not really needed.
Other worthwhile initiatives aimed at opening up scholarship to the public are emerging too. Thanks to the pioneering efforts of Robert Darnton—efforts at times opposed by fellow giants in the field such as Anthony Grafton—the Faculty of Arts and Sciences at Harvard has begun to put its members’ forthcoming scholarly articles on a public-domain website, managed by the university library’s newly established Office for Scholarly Communications. The Association of College and Research Libraries is searching for solutions to the periodicals crisis. The National Institutes of Health, which direct some $29 billion per year toward biomedical research, stipulate that their 325,000 or so grantees must publish their NIH-supported research in PubMed Central. The UK’s largest biomedical research charity, the Wellcome Trust, encourages open access, and the seven UK Research Councils are “committed to the guiding principles that publicly funded research must be made available to the public and remain accessible for future generations.” Dutch universities are pioneers in this field, not least in how they cooperate with each other. Physicists have run an open-access pre-print archive for years, first at Los Alamos and now at Cornell. There is the Public Library of Science, the Open Knowledge Commons, OpenCourseWare, the Open Content Alliance, the Internet Archive, Creative Commons, the Budapest Open Access Initiative, and so on.
The great libraries of the West understand that they can no longer compete against each other as to who can warehouse the most treasures. But if the collectivities of libraries are to remain the guardians of our patrimony, as they must, how do they divide that task among themselves? Increasingly (and encouragingly), they agree that stewardship must be joint, cross-unit, and complementary—a mash-up, even. Innovations and ideas abound: joint rather than parallel collecting of duplicative materials, strengthening the Center for Research Libraries and other membership organizations, inter-library loan services, “joint-view” union catalogues, common licenses and joint negotiations for e-resources, coordinated collection development and storage protocols, and so on. These are matters of electronic knowledge management, and their operations are contested, via uneasy and shifting alliances between IT support and library staff. And critical questions of governance remain. How does one manage outsourcing, leases, and rents, while still ensuring permanent access to permanent content? In a mash-up, who takes what responsibility for materials being captured, curated, preserved, ordered, and delivered? Who plants the flag, asserting that we are here for centuries to come?
Yes, there are worthwhile initiatives to make scholarship public. But wider and deeper collective action is needed. We need a greater sense of urgency. We need more alliances, outreach and advocacy work. We need to embrace the neo-Gutenbergian shift, this disaggregating and democratizing rupture of time and space, whose profound cultural significance and depth none of us have yet fully grasped.
Why not a legal nudge—a presumption of open access along the lines of presumed organ-donor intent? Could copyright be revocable—a lease, rather than a sale? Could copyright be deemed to lapse automatically when it stops generating income? At the very least, shouldn't copyright have to be asserted and renewed in order to remain in force? A more public-minded policy at the university presses would make a great difference, too. The presses could, for example, release their back lists into the public domain. Could university libraries be more imaginative? Could we make alumni lifelong members? Could the materials held by the open universities in England and Israel become, well, open? Could we develop pay-per-view portals into scholarly resources that are invoiced monthly and electronically? And in doing so could we, ahem, lower prices? The Journal of Interdisciplinary History, for example, optimistically charges $10 for a book review, and the average price for a JSTOR article—if you are lucky enough to find one the publisher is willing to sell—is approximately $17. Compare that to iTunes! Could we digitize out-of-copyright books on demand and for a small fee, so that members of the public could “liberate” their chosen books? Could university catalogues be turned into blogs? That is to say, could university members—or the public—add commentaries and hyperlinks? After all, views could be switched between catalogue-only, university-affiliate commentary only, and open commentary. And today’s filters can remove defamatory or offensive comments. At the very least, if libraries are to continue in their traditional role as reliable repositories of our cultural memories and collective knowledge—that is, if libraries are to become the spiders in the Web—their catalogues need to provide reliable URLs, backed by long-term maintenance policies and institutional guarantees.
The alternative is to rely on Google’s search-engine algorithms, which is to say, on ephemeral beauty contests.
And can we not lobby better? Many in the open-access movement were disheartened by the British Library’s response to the 2006 Gowers Review of Intellectual Property (by the Treasury). The British Library pleaded for unpublished works to have “only” a copyright lasting for life plus 70 years. It asked for permission to copy old sound and film recordings, since the then-proposed extension of the 50-year music copyright to 95 years otherwise ensured the certain destruction of most of the British Library sound archive.
Could we also be tougher? Could we name and shame, tagging out-of-print works with a notice that “Congress/the EU/Parliament has banned this work from coming into the public domain”? Could academics put their own house in order? University teachers may not be able to put course materials into the public domain. But they can issue reading lists, and they can put their lectures on YouTube as well as summarize them—or ask students to summarize them—on Wikipedia. Each one of us, in our own station, can help to open up scholarship to the public.
We guardians need to do this for the public's sake and for our own. Right now, projects to open up scholarship mostly pertain to the natural sciences, and mostly concern present academic work. Twentieth-century scholarship in the humanities and the social sciences is lacking. Authored by academics hoping not for monetary gains but for renown among their peers and influence over the public, and financed by means of taxes and charitable gifts, this incomparable treasure trove is locked away from society by the Sonny Bono Copyright Term Extension Act of 1998 (also known as the “Mickey Mouse Protection Act”). It is an ironic fate—a second death, if you will—for the great refugee scholars of Europe. Think only of Erwin Panofsky, Gershom Scholem, Kurt Gödel, Marc Bloch, Ludwik Fleck, or Simone Weil.
Look at JSTOR (if you can). There you find the evidence-based, source-critical foundations of sociology, anthropology, geography, history, philosophy, classics, Oriental studies, theology, musicology, history of science and so on. They are all closed to the public. It is wonderful, of course, that high-energy physics and string theory are open to all. But is it not ironic that we have opened the gates only to that scholarship which few professors, let alone members of the public, have the cognitive capacity and appropriate training to grasp?
The opportunity costs for society are self-evident. But what about the opportunity cost for scholars? For example, the public has set itself the task of rewriting knowledge for the public domain through Wikipedia and the like. Should not these sites be hyperlinked with JSTOR? By excluding the public from their scholarly literature, academics make it impossible for amateurs to use sound research methodologies, critically examining evidence by cross-referencing and source analysis. Scholars then critique the public’s output for not being sufficiently academic. Academics commonly cite the occasionally wobbly scholarly standards of Wikipedia as proof that the public does not wish to pursue scholarship. Might it not instead prove that scholars do not let the public do so?
Forget, for the moment, about the morality of thus adding insult to injury. Consider instead the downside for the universities. Does not the professoriate take a reputational risk? After all, the web-tech community is working on how to verify information on the Web, or, as they put it, on “engineering layers of trust and provenance.” In the longer term, the question is not whether the Web will be scholarly in some perfectly meaningful sense. It is whether traditional twentieth-century scholarship in the humanities and the social sciences will be integrated into that emerging, increasingly cross-referenced, and ever more scholarly world of the Web. Or will what James Boyle has nicely termed our cultural agoraphobia—our undue skepticism of open networks—lead the universities to become bystanders in the new worlds of open-access knowledge?
If scholars continue to hide away and lock up their knowledge, do they not risk their own irrelevance? An immediately important debate, I think, is to be had over how academics fail to engage with their natural constituency (and former students): journalists, business leaders, lawyers, entrepreneurs, politicians, and civil servants. These people are the ruling classes, if you will. They are the ones who house and feed professors. Is it really in academics’ long-term interest not to let these well-educated and well-intentioned people so much as glance at, say, the Index of Christian Art? Is it really in their interest not to show the public their scholarly articles and academic monographs? What does this tell the public about whom academics consider clubbable? And how will that affect how the public thinks about, say, federal research grants, or top-up fees?
Half a millennium ago, at the dawn of the age of mechanical reproduction, German townsfolk were dazzled by the thought that, thanks to their newfangled printing presses, God’s word might now be put in the hands of the laity. There would be no need for intermediaries. God’s word would speak not through the clergy, but to each soul, no matter how humble its station on earth. Of course, the intermediaries struck back—the Counter-Reformation was arguably just that, a rebellion of intermediaries. Indeed, Ireland retained a Catholic censorship until its belated modernity a few decades ago. But the technological rupture of the printing press was such that disintermediation was inevitable over the longue durée. We became—and look closely at the word—Protestants.
Today, at the dawn of the age of electronic reproduction, the intermediaries are again striking back. The publishers are the most blatant and crude, of course. But academics are also intermediaries. And while they may not think of it this way, arguably they too are striking back. Then, as now, obstacles are imagined—and created. University libraries are closed shops, JSTOR remains blocked, theses are inaccessible, and academic monographs are available, if at all, only on paper and at prohibitive prices. For this sorry state of affairs, we should not only blame Hollywood and the music industry. The obstacles to a true and electronic Reformation are real, but they are perhaps also caused by the continuation of “business as usual”—perhaps ultimately founded in the mental difficulty that older folk have in imaginatively redrawing work practices, as well as in organizational and legal “silos.” Remember Henry Ford’s comment: “If I had asked my customers what they wanted, they would have asked for a better horse carriage.”
However, the research done in my field, the history of science, offers comfort in the morbid but accurate observation—ultimately traceable to Kuhnian theory—that “science marches ahead one funeral at a time.” Obstacles can delay, but not stop, a technological rupture of this magnitude. Excepting the odd Wykehamist or yeshiva boy, our children—always on, multi-tasking, mobile—will not engage with a body of scholarship their elders have incomprehensibly surrounded by barbed wire. But they will remain engaged in learning. The question is not whether there will be future scholars. It is how these future scholars will remember and integrate previous scholarship. And in pondering that, which means pondering our own scholarly legacy, it is worth remembering that “the generational war is the one war whose outcome is certain.”
(This article is in the public domain.)
Lisbet Rausing can be contacted at [email protected].