POLITICS MARCH 12, 2008
Even if you are a Wikipedia fan who thinks the site is
usually accurate, you can’t help but feel that there’s an implicit marker on
all the content: “Maybe this is correct.” That “maybe” is what sticks in the
craws of so many people. Teachers often insist that their students cannot cite
Wikipedia. Journalists and academics are embarrassed to admit they use it, and
most would not consider writing for it. But if your goal is to improve human
understanding, isn’t one of the world’s top websites a better outlet than University of Nebraska Press?
The old model of publishing non-fiction, borrowed largely
from academia, promised a straightforward result. You picked up an academic
journal to find the latest word on French tax farming in the 18th century. You
knew that what you were getting was tried, tested, vetted, and replicated, at
least as much as was humanly possible. You thought of the author as laying
small bricks for subsequent scientific advances. But this model of knowledge
accretion never had the accuracy it pretended to. If I had to guess whether
Wikipedia or the median refereed journal article on economics was more likely
to be true, after a not-so-long think I would opt for Wikipedia. This
comparison should give us pause.
The issues of trust and accuracy have come to the fore
lately with the “revelation” that a number of published autobiographies are little
more than fiction. Most notably, Margaret B. Jones’s acclaimed new memoir Love and Consequences was discovered
to be fraudulent; a week before that, Misha Defonseca’s European bestseller Misha: A Mémoire of the Holocaust Years,
first published in 1997, was admitted
to be a fabrication. Ms. Defonseca had told tales about running from the Nazis,
living with wolves (really), and searching for her deported parents across Europe. Last month the author confessed that most of the
story, including her identity, was simply made up.
Let’s examine the Defonseca case. The only news here is the
belief that the falsity of this story is in fact news. A quick look at Amazon.com shows that the first book
review--dated 2001--charges the following: “Uplifting
and entertaining though this story may be, it is impossible to tell how much of
it is true. Let's face it, no one has ever been brought up by wolves, beautiful
idea though it is. I would love to believe that wolves would take care of
children, bring them up and feed them, but they don't.” As of early March 2008,
48 out of 61 people found that review to be helpful, yet only now is the
fabrication a story for the mainstream media. On the academic side, doubts were raised about the book’s
veracity by potential blurbers even before it was published.
The earliest Wikipedia article I
can find on Defonseca appears on French Wikipedia, dated February 29, 2008; by the
end of that day, one of the revisers noted that the story is false. (By the way,
the longer Wikipedia page on literary hoaxes is impressive.) That
the mainstream media did not challenge the original story reflects one of the
most important and most pernicious biases of print media and TV. If something
isn’t reported, people assume it either didn’t happen or isn’t important. Since
just about everything is reported on the web--whether it is true or false--issues
of verification become more important than issues of omission. On the positive
side, we’re less likely to take a lot of claims for granted.
Defonseca’s fabrication joins a long list of hoaxes, along
with Clifford Irving’s fake biography of Howard Hughes, the so-called Hitler
Diaries, James Frey’s partially false memoir about addiction, James Macpherson’s
18th-century marketing of Ossian--and dare I mention the claim that Moses wrote
the Torah and thus authored the story of his own demise in Deuteronomy?
The sad truth is that
“non-fiction” has been unreliable from the beginning, no matter how finely
grained a section of human knowledge we wish to consider. For instance, in my own field, critics have tried to replicate the findings in
academic journal articles by economists using the initial data sets. Typically the
results of an article cannot be replicated even half of the time. Note
that the journals publishing these articles often use two or three referees--experts
in the area--and typically accept only 10 percent of submitted
papers. By the way, economics is often considered the most rigorous and the
most demanding of the social sciences.
You can knock down the reliability of published research another
notch by considering “publication bias.” Publication bias refers to how the editorial
process favors novel and striking results. Sometimes novel results will appear
to be true through luck alone, just because the numbers line up the right way,
even though the observed relationship would not hold up more generally. Articles
with striking findings are more likely to be published and later publicized,
whereas it is very difficult to publish a piece which says: “I studied two
variables and found they were not much correlated at all.” If you adjust for
this bias in the publication process, it turns out you should hardly believe
any of what you read. Claims of significance are put forward at a
disproportionately and misleadingly higher rate than claims of
non-significance. Brad DeLong and Kevin Lang once co-authored a piece on this
bias which they entitled appropriately: “Are All Economic Hypotheses False?”
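The mechanics of this "luck alone" problem can be sketched with a short simulation (the sample size, significance threshold, and publication rule below are illustrative assumptions, not figures from the essay). Every study here examines two variables that are truly unrelated, yet a predictable slice of them still clears the significance bar--and under a "publish only striking results" rule, those are exactly the ones that would appear in print:

```python
import random

def simulate_publication_bias(n_studies=1000, n_obs=30, seed=42):
    """Run many studies of two genuinely unrelated variables and count
    how many look 'statistically significant' through luck alone."""
    rng = random.Random(seed)
    # Critical |r| for a two-sided test at the 5 percent level with
    # n=30 observations (df=28) is roughly 0.361.
    critical_r = 0.361
    lucky = 0
    for _ in range(n_studies):
        # x and y are drawn independently: the true correlation is zero.
        x = [rng.gauss(0, 1) for _ in range(n_obs)]
        y = [rng.gauss(0, 1) for _ in range(n_obs)]
        mx, my = sum(x) / n_obs, sum(y) / n_obs
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        r = cov / (sx * sy)  # Pearson correlation of the sample
        if abs(r) > critical_r:
            lucky += 1  # a 'striking' finding--publishable, but false
    return lucky / n_studies

rate = simulate_publication_bias()
print(f"Share of null studies that look significant: {rate:.1%}")
```

Roughly one in twenty of these null studies will look significant; if journals publish mainly the striking results, the published record over-represents those false positives relative to the dull-but-true non-findings.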
The problems are compounded when we turn to non-fiction
books. The refereeing is much looser and more impressionistic, even at an
academic press. Who has time to verify each specific claim in a 400-page
manuscript? The reader is lucky if half of those pages passed a “sniff test”
from one or two qualified readers; usually the rest is taken on faith in the
author’s more general academic reputation. If the book was put out by a trade
publisher, most likely no refereeing was done at all. “I don’t think there is
any way you can fact-check every single book. It would be very insulting and
divisive in the author-editor relationship,” Nan Talese, the editor of James
Frey’s A Million Little Pieces, told
The New York Times.
Push the question one step further: What does journalistic
fact-checking consist of in the first place? Sometimes the fact-checker calls
up an interview source and asks him or her direct questions. Otherwise the
fact-checker sees if the stated claim can be found in some published book,
magazine, or perhaps in a refereed academic journal. Fact-checking can’t be any
more reliable than these underlying sources.
So, should the academic journal or the Wikipedia entry
receive more respect? Should we give literary critics tenure for sparkling
reviews on Amazon.com? Should The New York Times, on a given day,
simply link to the best of the web?
Sadly, the final lessons here are brutal. We cannot quite embrace
the wonderfully egalitarian world of knowledge on the web. Error, falsehood,
sloppy untruths, and just downright lies are found all too frequently and they threaten
to spread even further. That’s why we should defend institutions--such as
academia and the standard canons of traditional journalism--that promise full
fact-checking and tough standards of rigor. Yes, those institutions are very
often hypocritical. Everyone faces a deadline or a budget. Nonetheless, dropping
our stated loyalties to such institutions would be like removing our thumb from
the dike and letting the flood waters in.
I don’t mean this as a call to let up on vigilance. We
should criticize our truth-testing institutions and try to improve their
truth-tracking properties; of course, this can mean an active life in Wikipedia,
Amazon.com, and the blogosphere. But
in the final analysis the standards of mainstream institutions are necessary. We
should use the web to strengthen, rather than weaken, those procedures.
If you cannot imagine a worse alternative to the mainstream quest
for replication and fact-checking, just spend a few minutes perusing the “diet”
section of your local bookstore. Maybe the academics don’t know much more about
losing weight, but at least their standards--or at least the standards they pay
lip service to--offer us the promise of someday arriving at better knowledge.
Still, it’s clear that “published truth”--even if
“fact-checked”--is often a sorry disappointment. Of course, readers are mostly not
fighting a battle of methods or standing up for either science or traditional
journalism. They’d just like to ... um ... learn something. In that case, rush
over to Wikipedia or Amazon.com and
enjoy the fruits of human labor to your heart’s content.
And by the way, in the first draft of this essay I wrote
“MacPherson” rather than the “Macpherson” of the current published version. It was
Wikipedia that clued me into giving the spelling a second look. Academic
sources use both spellings (lower case seems to predominate); let’s see what The New Republic fact-checker will do
with this one--but Britannica Online Encyclopedia is on my side.
Tyler Cowen is a Professor of Economics at George Mason
University and he
co-writes a blog at http://www.marginalrevolution.com.