[Insight-users] The Scientist : Is Peer Review Broken?

Luis Ibanez luis.ibanez at kitware.com
Sun Feb 19 21:24:24 EST 2006


http://www.the-scientist.com/2006/2/1/26/1/


"...Everyone, it seems, has a problem with peer review at top-tier
journals. The recent discrediting of stem cell work by Woo-Suk Hwang at
Seoul National University sparked media debates about the system's
failure to detect fraud. Authors, meanwhile, are lodging a range of
complaints: Reviewers sabotage papers that compete with their own,
strong papers are sent to sister journals to boost their profiles, and
editors at commercial journals are too young and invariably make
mistakes about which papers to reject or accept (see Truth or Myth?).
Still, even senior scientists are reluctant to give specific examples of
being shortchanged by peer review, worrying that the move could
jeopardize their future publications...."



"...THE RELIGION OF PEER REVIEW

Despite a lack of evidence that peer review works, most scientists (by
nature a skeptical lot) appear to believe in peer review. It's something
that's held "absolutely sacred" in a field where people rarely accept
anything with "blind faith," says Richard Smith, former editor of the
BMJ and now CEO of UnitedHealth Europe and board member of PLoS. "It's
very unscientific, really..."


"Indeed, an abundance of data from a range of journals suggests peer
review does little to improve papers. In one 1998 experiment designed to
test what peer review uncovers, researchers intentionally introduced
eight errors into a research paper. More than 200 reviewers identified
an average of only two errors. That same year, a paper in the Annals of
Emergency Medicine showed that reviewers couldn't spot two-thirds of the
major errors in a fake manuscript. In July 2005, an article in JAMA
showed that among recent clinical research articles published in major
journals, 16% of the reports showing an intervention was effective were
contradicted by later findings, suggesting reviewers may have missed
major flaws."


"Some critics argue that peer review is inherently biased, because
reviewers favor studies with statistically significant results. Research
also suggests that statistical results published in many top journals
aren't even correct, again highlighting what reviewers often miss.
"There's a lot of evidence to (peer review's) downside," says Smith.
"Even the very best journals have published rubbish they wish they'd
never published at all. Peer review doesn't stop that." Moreover, peer
review can also err in the other direction, passing on promising work:
Some of the most highly cited papers were rejected by the first journals
to see them."

"The literature is also full of reports highlighting reviewers'
potential limitations and biases. An abstract presented at the 2005 Peer
Review Congress, held in Chicago in September, suggested that reviewers
were less likely to reject a paper if it cited their work, although the
trend was not statistically significant. Another paper at the same
meeting showed that many journals lack policies on reviewer conflicts of
interest; fewer than half of the 91 biomedical journals say they have a
policy at all, and only three percent say they publish conflict
disclosures from peer reviewers. Still another study demonstrated that
only 37% of reviewers agreed on the manuscripts that should be
published. Peer review is a "lottery to some extent," says Smith."

"Different studies have shown conflicting results about whether signed
reviews improve the quality of what's sent back and detected only minor
effects, Schroter notes. One report presented at this year's Peer Review
Congress showed that, in a non-English-language journal, signed reviews
were judged superior in a number of factors, including tone and
constructiveness by two blinded editors. However, another study
published in BMJ in 1999 found that signed reviews were not any better
than anonymous comments, and asking reviewers to identify themselves
only increased the chance they would decline to participate.

Still, Schroter says the journal decided to introduce its policy of
signed reviews based on the logic that signed reviews might be more
constructive and helpful, and anecdotally, the editors at BMJ say that
is the case. JAMA's Rennie says he doesn't need research data to tell
him that signing reviews makes them better. "I've always signed every
review I've ever done," he says, "because I know if I sign something,
I'm more accountable." Juries are not anonymous, he argues, and neither
are people who write letters to the editor, so why are peer reviewers?
"I think it'll be as quaint in 20 years' time to have anonymous
reviewers as it would be to send anonymous letters to the editor," he
predicts.

But not all editors agree. Lawrence, for one, says he believes anonymity
helps reviewers stay objective. Others argue that junior reviewers might
become hesitant to conduct honest reviews, fearing negative comments
might spark repercussions from more senior-level authors. At Science,
reviewers submit one set of comments to editors, and a separate,
unsigned set of comments to authors - a system that's not going to
change anytime soon, says Kennedy. "I think candor flourishes when
referees know" that not all their comments will reach the authors, he
notes. Indeed, in another study presented at this year's Peer Review
Congress, researchers found that reviewers hesitated to identify
themselves to authors when recommending the study be rejected. Nature
journals let reviewers sign reviews, says Bernd Pulverer, editor of
Nature Cell Biology, but fewer than one percent do. "In principle"
signed reviews should work, he says, but the competitive nature of
biology interferes. "I would find it unlikely that a junior person would
write a terse, critical review for a Nobel Prize-winning author," he says.

However, since BMJ switched to a system of signed reviews, Smith says
there have been no "serious problems." Only a handful of reviewers
decided not to continue with the journal as a result, and the only
"adverse effect" reported by authors and reviewers involved authors
exposing reviewers' conflicts of interest, which is actually a "good
thing," Smith notes.

Another option editors are exploring is open publishing, in which
editors post papers on the Internet, allowing multiple experts to weigh
in on the results and incrementally improve the study. Having more sets
of eyes means more chances for improvement, and in some cases, the
debate over the paper may be more interesting than the paper itself,
says Smith. He argues that if everyone can read the exchange between
authors and reviewers, this would return science to its original form,
when experiments were presented at meetings and met with open debate.
The transition could transform peer review from a slow, tedious process
to a scientific discourse, Smith suggests. "The whole process could
happen in front of your eyes."

However, there are concerns about the feasibility of open reviews. For
instance, if each journal posted every submission it received, the
Internet would be flooded with data, some of which the media would
report. If a journal ultimately passed on a paper, who else would accept
it, given that the information's been made public? How could the
journals make any money? There's an argument for both closed and open
reviews, says Patrick Bateson, who led a Royal Society investigation
into science and the public interest, "and it's not clear what should be
done about it."


