In a column on Friday, Mark Bittman discusses what he calls “what may be the most important agricultural study this year”, a multi-institution study finding that

“…longer rotations produced better yields of both corn and soy, reduced the need for nitrogen fertilizer and herbicides by up to 88 percent, reduced the amounts of toxins in groundwater 200-fold and didn’t reduce profits by a single cent. In short, there was only upside — and no downside at all — associated with the longer rotations. There was an increase in labor costs, but remember that profits were stable. So this is a matter of paying people for their knowledge and smart work instead of paying chemical companies for poisons.”

Noting that what he considers (possibly) the year’s most important agricultural study was overlooked by

“two of the leading science journals and even one of the study’s sponsors, the often hapless Department of Agriculture”, Bittman says “one might at least hope that the U.S.D.A. would trumpet the outcome. The agency declined to comment when I asked about it. One can guess that perhaps no one at the higher levels even knows about it, or that they’re afraid to tell Monsanto about agency-supported research that demonstrates a decreased need for chemicals…”

He then adds, “A conspiracy theorist might note that the journals Science and Proceedings of the National Academy of Sciences both turned down the study. It was finally published in PLOS One; I first read about it on the Union of Concerned Scientists Web site.”

After one colleague posted this on the ESA Agroecology Section’s Facebook Page, another colleague commented, “I thought it was highly inappropriate for him to hint that the Iowa State study was rejected by Science and PNAS because of a ‘conspiracy’ fueled by corporate influence. I don’t think Bittman is very well-versed in the harsh realities of scientific publishing…”

While, to be sure, Bittman presents no evidence of any such conspiracy or malfeasance, it occurs to me that not all of us appreciate the degree to which finding direct evidence of bias, or even establishing that such bias is or was absent, is not possible under our current system. Peer review, like democracy, has been called “the worst system except for all those others that have been tried from time to time” (though I tend to think that this is a bit too harsh on democracy, and a bit too soft on peer review). What follows is, for the readers’ edification, a brief, somewhat-annotated bibliography of some of the research on peer review that a half hour’s googling turns up:

  • A pithy quote comes from “Shortcomings of peer review in biomedical journals”: “Although mainly anecdotal, the evidence suggests that peer review is sometimes ineffective at identifying important research and even less effective at detecting fraud.”
  • A summary from a 1999 BMJ article “Evidence on peer review—scientific quality control or smokescreen?”: “Blinding reviewers to the author’s identity does not usefully improve the quality of reviews; Passing reviewers’ comments to their co-reviewers has no effect on quality of review; Reviewers aged under 40 and those trained in epidemiology or statistics wrote reviews of slightly better quality; Appreciable bias and parochialism have been found in the peer review system; Developing an instrument to measure manuscript quality is the greatest challenge” (emphasis added)

In that piece, Biagioli cites the following:

  • Tamber, Pritpal S. (2001) ‘BioMed Central: Taking a Fresh Look at Scholarly Publishing,’ Science Editor, 24(4): 121.
  • Mahoney, Michael (1977) ‘Publication Prejudices: An Experimental Study of Confirmatory Bias in the Peer Review System,’ Cognitive Therapy and Research, 1: 161–175.
  • Peters, Douglas and Stephen Ceci (1982) ‘Peer Review Practices of Psychological Journals: The Fate of Published Articles Submitted Again,’ Behavioral and Brain Sciences, 5: 187–195. In this infamous study, they “selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices. With fictitious names and institutions substituted for the original ones… the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier… only three [editors or reviewers] (8%) detected the resubmissions… [this allowed] nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as ‘serious methodological flaws.’”
  • Journal of the American Medical Association (1998), 280(3) (special issue on peer review).
  • Cole, Stephen, Leonard Rubin and Jonathan Cole (1978) Peer Review in the National Science Foundation, Washington, DC: National Academy of Sciences.
  • Roy, Rustum (1982) ‘Peer Review of Proposals — Rationale, Practice, and Performance,’ Bulletin of Science, Technology, and Society, 2: 402–422.

One piece found systematic, consistent evidence only for bias towards one’s own nominees:

This one’s a meta-analysis on effects of gender (finding a 7% bias effect favoring men):

One could go on and on, but an interesting summary paragraph echoes the first pithy quote:

  • “Opening up BMJ peer review: A beginning that should lead to complete transparency”*: “Peer review is slow, expensive, profligate of academic time, highly subjective, prone to bias, easily abused, poor at detecting gross defects, and almost useless for detecting fraud. Evidence to support all these statements can be found in a book by Stephen Lock, my predecessor as editor of the BMJ, three special issues of JAMA, and a forthcoming book. The benefits of peer review are harder to pin down, but it is probably more useful for improving what is eventually published than for sorting the wheat from the chaff.”

Speaking of going on and on, here are some others, with promising titles, that I didn’t get a chance to read:

And ending, perhaps, on a more “up” note?:

  • Herbert W. Marsh, Upali W. Jayasinghe and Nigel W. Bond (2011) ‘Gender differences in peer reviews of grant applications: A substantive-methodological synergy in support of the null hypothesis model,’ Journal of Informetrics, 5(1): 167–180, doi:10.1016/j.joi.2010.10.004: “…testing a null hypothesis model in relation to the effect of researcher gender on peer reviews of grant proposals, based on 10,023 reviews by 6233 external assessors of 2331 proposals from social science, humanities, and science disciplines… Utilizing multilevel cross-classified models, we show support for the null hypothesis model positing that researcher gender has no significant effect on proposal outcomes. Furthermore, these non-effects of gender generalize over assessor gender (contrary to a matching hypothesis), discipline, assessors chosen by the researchers themselves compared to those chosen by the funding agency, and country of the assessor. Given the large, diverse sample, the powerful statistical analyses, and support for generalizability, these results – coupled with findings from previous research – offer strong support for the null hypothesis model of no gender differences in peer reviews of grant proposals.”

* One notes that this piece is from 1999. I’m curious how this worked out for BMJ, but not, obviously, curious enough to Google-fu it right at this moment.
