Googling Peer Review

Who is not ambivalent about peer review? On the one hand, it establishes a basic, reliable level of quality in argument and in evidence. On the other, it grinds everything down to a bland sameness. Peer review assures that professional standards are met, and also enforces orthodoxy. Anonymous peer review prevents intimidation; anonymous peer review allows irresponsible, spiteful criticism. Peer reviews can be extremely helpful; peer reviews can crush the spirit. They take forever, and they also present a significant burden of uncompensated labor.

I often enjoy doing peer review, as I get to read new and interesting stuff, and I almost always learn from it when my own work is reviewed. But just as often the criticisms reflect stubborn genre conventions more than free thought. Everyone sees reviews which just do not “get” it, and, more significantly, never will, because the intellectual apparatus isn’t compatible: a social historian who finds “linguistic” evidence unpersuasive; a cultural type who finds empiricism naive and “undertheorized”; a political historian who sees analysis of popular culture as trivial and pointless; an historian of popular culture weary of political history. In cases like this, no actual “review” takes place: rather, the author and his or her reviewer talk past each other.

Of course academic publishing was never supposed to be easy. It was always supposed to be rigorous, demanding, and difficult, and for good reason. Good work takes hard work: there’s nothing unreasonable about that. But there’s a strong sense in which peer review isn’t always about good work; it’s about familiar work, work which is asking questions to which the answers are always already known.

The practice’s origins lie in class privilege: “peer” has its origins in “peerage” and class exclusivity. As Rob Townsend recently pointed out, the AHR did not start doing “double blind” peer review until the mid-twentieth century. Before that, the editorial board handled things itself, ensuring a closed and clubby atmosphere.

It was an example of what Marx called “the illusion of scarcity.” Plenty of people had opinions about history, and expertise: “peers” kept the unwashed out.

Peer review also reflected the relative scarcity and expense of print media. Journals were expensive to produce and expensive to buy, and the vetting in peer review raised their value. Peer review also increases the tendency of journals to specialize in one kind of thing. How many of you have started a research project thinking not “is this interesting?” but rather “where could I publish this?” That’s the illusion of scarcity.

Earlier I argued that the era of scarcity in evidence was coming to a close, because so much previously hard-to-get material now exists online. Maybe it’s time for the era of scarcity in peer review to end as well. We ought to be able to rethink peer review in ways that make it more effective and less “clubby.”

The first objection modern academics make to changing peer review is that they don’t want to have to wade through a lot of junk to find something good. Very early on, that was everyone’s experience of the internet and digital technology: a very poor signal-to-noise ratio.

Google, the search engine that fixed that problem and remade the internet, is in effect a gigantic peer review. Google responds to your query by analyzing how many other people, your “peers,” found the same page useful.

When you enter search terms, Google looks for web pages that contain instances of those terms, and then from that set it looks for the pages most often linked to or referred to by others. In other words, it’s a massive form of peer review: it foregrounds web pages which others have vetted and cited.

And it’s extremely good at getting useful information in a hurry. If you enter “green breast of the new world” as a search term, you get pages devoted to The Great Gatsby, not outer-space-themed pornographic sites. The results are extremely specific. Google has peer-reviewed the web, but the peers are virtually every internet user in the English-speaking world. The more specific your search terms, the smaller and more specific the set of “peers.”

That’s a simplification: Google guards its methods very closely. But it describes the principle. Google turns peer reviewing into “crowdsourcing.”
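The link-counting principle can be sketched in a few lines of code. This is only a toy, PageRank-style iteration, not Google's actual (proprietary and far more elaborate) method, and the pages and links below are invented for illustration:

```python
# Toy link-based ranking: pages that many other pages link to
# accumulate a higher score. The pages and links are invented.
links = {
    "gatsby-essay": ["gatsby-notes", "fitzgerald-bio"],
    "gatsby-notes": ["gatsby-essay"],
    "fitzgerald-bio": ["gatsby-essay"],
    "unrelated-page": ["gatsby-essay"],
}

def rank(links, damping=0.85, iterations=50):
    """Repeatedly let each page pass a share of its score to the
    pages it links to; heavily linked-to pages end up on top."""
    pages = list(links)
    score = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = score[page] / len(outgoing)
            for target in outgoing:
                new[target] += damping * share
        score = new
    return score

scores = rank(links)
print(sorted(scores, key=scores.get, reverse=True))
```

Run over this invented graph, the page the others link to most ("gatsby-essay") comes out first: the crowd's citations do the vetting, which is the peer-review analogy in miniature.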

There has been a lot of talk about how to change peer review, usually involving online versions of what we have now: journals which use an editor, editorial board, and peer reviewers to “gatekeep.”

But why preserve a structure of gatekeeping that evolved in the early twentieth century, specifically for early-twentieth-century social and economic conditions?

We could certainly “crowdsource” peer review. Imagine going to a site where scholars in history and history-related fields post new work, work in progress, research findings, and queries. The site would be transparent to Google. You could enter a set of search terms and instantly get the results your fellow academics, searching the same terms, found most useful. That’s peer review in action. And the group of “peers” would be larger, more representative, more up to date, and less inclined to disciplinary orthodoxies. Such a site would require only minimal editing and minimal maintenance.1

Work at such a site could find a much larger audience than it would through publication in a prestigious journal. Let us imagine that journal “X” has eight to ten thousand subscribers, including libraries. That’s a lot: that’s a big journal. But of those eight thousand or more subscribers, how many actually read your article? How many read it and cite it? How many cite it, but never read it? You don’t really know.

As of 10/14/2010, just two weeks after my first post, this humble blog has had 1,352 visits. That’s tiny in proportion to the size of the internet, but large in academic terms. Google Analytics lets you track the usage of a website: the number of visitors, the way the visitors got to your page, the countries they visited from.

I can also tell which individual pages/postings drew the most attention, get a reading of the average time visitors spent on the site, and see which pages they spent the most time reading. It can also show which pages link to mine. While this is far from an accurate assessment of how the material on my blog is being used, it’s also far more information than I get about an article published in a journal. In many ways it would give a tenure review committee a much more accurate account of how someone’s work is used.
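The kind of aggregation an analytics tool performs is simple to illustrate. Here is a toy sketch, with invented visit records, of how raw page views become the per-page visit counts and average reading times described above:

```python
from collections import defaultdict

# Invented visit records: (page, seconds spent, referrer).
visits = [
    ("colored-me", 310, "google.com"),
    ("colored-me", 45, "twitter.com"),
    ("googling-peer-review", 290, "google.com"),
    ("colored-me", 600, "hnet.example.org"),
]

def summarize(visits):
    """Roll raw visit records up into per-page visit counts
    and average time spent on each page."""
    totals = defaultdict(lambda: {"count": 0, "seconds": 0})
    for page, seconds, _referrer in visits:
        totals[page]["count"] += 1
        totals[page]["seconds"] += seconds
    return {
        page: {"visits": t["count"],
               "avg_seconds": t["seconds"] / t["count"]}
        for page, t in totals.items()
    }

report = summarize(visits)
print(report)
```

A tenure committee would of course need more than counts and averages, but even this toy report says more about actual use than a journal's subscription figure does.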

So far, what I see on Google Analytics is not all that encouraging: the average visitor stays for just under five minutes. The most “academic” of the posts, “Colored Me,” runs to five pages. The number of people who spend time on page five is far lower than the number who view page one. But I’m inclined to say that’s my fault, not the fault of digital technology: I need to learn how to present scholarly material effectively on the web.

We all do. Sooner or later, the economics will catch up with the paper journals and with the structure of peer review. Peer review was born in a specific historical moment; it’s not the final product of human progress, objectively understood, and we don’t need to preserve it in its present form. We need to make sure “peer review” in the future works for our needs and the needs of the public that wants good history.

So I’m thinking: enough talking; I have to actually do it. In the next few days, I’m going to post a poll asking people to vote for which online project I should pursue. Then I’ll post the equivalent of a journal article online and see if I can manage to get it peer reviewed by crowdsourcing.

  1. Google has sort of already done this with “Google Scholar.” I’ve not found Google Scholar to be very useful or different, because it mostly searches databases like JSTOR, which have already been peer reviewed, and in effect it gives basically the same results for me as searching JSTOR directly. I believe that’s because there is not enough scholarly work online to search outside of the subscription databases.

