
Section 6: The politics of statistics

Chapter 15: How journal articles get published
In my journal, anyone can make a fool of himself.
Rudolph Virchow (Silverman, 1998; p. 21)
Perhaps the most important thing to know about scientific publication is that the "best"
scientific journals do not publish the most important articles. This will be surprising to some
readers, and probably annoying to others (often editorial members of prestigious journals). I
could be wrong; this statement reflects my personal experience and my reading of the history
of medicine, but if I am correct, the implication for the average clinician is important: it will
not be enough to read the largest and most famous journals. For new ideas, one must look
elsewhere.
Peer review
The process of publishing scientific articles is a black box to most clinicians and to the public.
Unless one engages in research, one would not know all the human foibles that are involved.
It is a quite fallible process, but one that seems to have some merit nonetheless.
The key feature is "peer review." The merits of peer review are debatable (Jefferson et al.,
2002); indeed its key feature of anonymity can bring out the worst of what has been called
the psychopathology of academe (Mills, 1963). Let us see how this works.
The process begins when the researcher sends an article to the editor of a scientific journal;
the editor then chooses a few (usually 2–4) other researchers who usually are authorities in
that topic; those persons are the peer reviewers, and they are anonymous: the researcher
does not know who they are. These persons then write 1–3 pages of review, detailing specific
changes they would like to see in the manuscript. If the paper is not accurate, in their view,
or has too many errors, or involves mistaken interpretations, and so on, the reviewers can
recommend that it be rejected. The paper would then not be published by that journal, though
the researcher could try to send it to a different journal and go through the same process. If
the changes requested seem feasible to the editor, then the paper is sent back to the researcher
with the specific changes requested by peer reviewers. The researcher can then revise the
manuscript and send it back to the editor; if all or most of the changes are made, the paper
is then typically accepted for publication. Very rarely, reviewers may recommend acceptance
of a paper with no or very minor changes from the beginning.
This is the process. It may seem rational, but the problem is that human beings are
involved, and human beings are not, generally, rational. In fact, the whole scientific peer
review process is, in my view, quite akin to Winston Churchill's definition of democracy: it
is the worst system imaginable, except for all the others.
Perhaps the main problem is what one might call academic road rage. As is well known,
it is thought that anonymity is a major factor that leads to road rage among drivers of
automobiles. When I do not know who the other driver is, I tend to assume the worst about
him; and when he cannot see my face, nor I his, I can afford to be socially inappropriate
and aggressive, because facial and other physical cues do not impede me. I think the same
factors are in play with scientific peer review: routinely, one reads frustrated and angry
comments from peer reviewers; exclamation points abound; inferences about one's intentions as
an author are made based on pure speculation; one's integrity and research competence are
not infrequently questioned. Now sometimes the content that leads to such exasperation is
justifiable; legitimate scientific and statistical questions can be raised; it is the emotion and
tone which seem excessive.
Four interpretations of peer review
Peer review has become a matter of explicit discussion among medical editors, especially in
special issues of the Journal of the American Medical Association (JAMA). The result of this
public debate has been summarized as follows:
Four diering perceptions of the current refereeing process have been identied: ‘the
sieve (peer review screens worthy from unworthy submissions), the switch (a persistent
author can eventually get anything published, but peer review determines where),
the smithy (papers are pounded into new and better shapes between the hammer of
peer review and the anvil of editorial standards), and the shot in the dark (peer review
is essentially unpredictable and unreproducible and hence, in eect, random).’ It
is remarkable that there is little more than opinion to support these characterizations

of the gate-keeping process which plays such a critical role in the operation
of today’s huge medical research enterprise (‘peer review is the linch pin of science.’).
(Silverman, 1998; p. 27)
I tend to subscribe to the “switch” and “smithy” interpretations. I do not think that peer
review is the wonderful sieve of the worthy from the unworthy that so many assume, nor is
it simply random.
It is humanly irrational, however, and thus a troublesome “linchpin” for our science.
It is these human weaknesses that trouble me. For instance, peer reviewers often know
authors, either personally or professionally, and they may have a personal dislike for an
author; or if not, they may dislike the author's ideas, in a visceral and emotional way. (For all
we know, some may also have economic motivations, as some critics of the pharmaceutical
industry suggest [Healy, 2001].) How can we remove these biases inherent in anonymous peer
review? One approach would be to remove anonymity, and force peer reviewers to identify
themselves. But since all authors are peer reviewers for others, and all peer reviewers also write
their own papers as authors, editors would worry that they would no longer get complete and
direct critiques: a reviewer might fear retribution from an author who later serves as a
reviewer in turn. Not just paper publication, but grant funding – money, the lifeblood of a
person's employment in medical research – is subject to anonymous peer review, and thus
grudges expressed in later peer review could in fact lead to losing funding and
consequent economic hardship.
Who reviews the reviewers?
We see how far we have come from the neutral, objective ideals of science. The scientific peer
review process involves human beings of flesh and blood, who like and dislike each other,
and the dollar bill, here as elsewhere, has a pre-eminent role.
How good or bad is this anonymous peer review process? I have described the matter
qualitatively; are there any statistical studies of it? There are, in fact; one study, for example,
decided to "review the reviewers" (Baxt et al., 1998). All reviewers of the Annals of Emergency
Medicine received a fictitious manuscript, a purported placebo-controlled randomized
clinical trial of a treatment for migraine, in which 10 major and 13 minor statistical and scientific
errors were deliberately placed. (Major errors included no definition of migraine, absence of
any inclusion or exclusion criteria, and use of a rating scale that had never been validated or
previously reported. Also, the p-values reported for the main outcome were made up and did
not follow in any way from the actual data presented. The data demonstrated no difference
between drug and placebo, but the authors concluded that there was a difference.) Of about
200 reviewers, 15 recommended acceptance of the manuscript, 117 rejection, and 67 revision.
So about half of reviewers appropriately realized that the manuscript had numerous
flaws, beyond the amount that would usually allow for appropriate revision. Further, 68%
of reviewers did not realize that the conclusions written by the manuscript authors did not
follow from other results of the study.
If this is the status of scientific peer review, then one has to be concerned that many studies
are poorly vetted, and that some of the published literature (at least) is inaccurate either in
its exposition or its interpretation.
Mediocrity rewarded
Beyond the publication of papers that should not be published, the peer review process has
the problem of not publishing papers that should be published. In my experience both as an
author and as an occasional guest editor for scientific journals, when multiple peer reviews
bring up different concerns, it is impossible for authors to respond adequately to a wide range
of critiques, and thus difficult for editors to publish. In such cases, the problem, perhaps, is
not so much the content of the paper, but rather the topic itself. It may be too controversial,
or too new, and thus difficult for several peer reviewers to agree that it merits publication.
In my own writing, I have noticed that, at times, the most rejected papers are the most
enduring. My rule of thumb is that if a paper is rejected more than five times, then it is
either completely useless or utterly prescient. In my view, scientific peer review ousts poor
papers – but also great ones; the middling, comfortably predictable, tend to get published.
This brings us back to the claim at the beginning of this chapter, that the most prestigious
journals usually do not publish the most original or novel articles; this is because the peer
review process is inherently conservative. I do not claim that there is any better system, but
I think the weaknesses of our current system need to be honestly acknowledged.

One weakness is that scientific innovation is rarely welcomed, and new ideas are always
at a disadvantage against the old and staid. Again, non-researchers might have had a more
favorable illusion about science, that it encourages progress and new ideas and that it is
consciously self-critical. That is how it should be; but this is how it is, again in the words of Ronald
Fisher:
A scientic career is peculiar in some ways. Its raison d’etre is the increase of natural
knowledge. Occasionally, therefore, an increase of natural knowledge occurs. But this
is tactless, and feelings are hurt. For in some small degree it is inevitable that views
previously expounded are shown to be either obsolete or false. Most people, I think,
can recognize this and take it in good part if what they have been teaching for ten
years or so comes to need a little revision; but some undoubtedly take it hard, as a
blow to their amour propre, or even as an invasion of the territory they have come to
think of as exclusively their own, and they must react with the same ferocity as we can
see in the robins and chaffinches these spring days when they resent an intrusion into
their little territories. I do not think anything can be done about it. It is inherent in the
nature of our profession; but a young scientist may be warned and advised that when
he has a jewel to offer for the enrichment of mankind some certainly will wish to turn
and rend him.
(Salsburg, 2001; p. 51)
So this is part of the politics of science – how papers get published. It is another aspect of
statistics where we see numbers give way to human emotions, where scientific law is replaced
by human arbitrariness. Even with all these limitations, we somehow manage to see a
scientific literature that produces useful knowledge. The wise clinician will use that knowledge
where possible, while aware of the limitations of the process.
Chapter 16: How scientific research impacts practice
A drug is a substance that, when injected into a rat, produces a scientic paper.
Edgerton Y. Davis (Mackay, 1991; p. 69)
The almighty impact factor
Many practitioners may not know that there is a private company, Thomson Reuters, owner
of ISI (the Institute for Scientific Information), which calculates in a rather secretive fashion a
quantitative score that drives much scientific research. This score, called the impact factor (IF),
reflects how frequently papers are cited in the references of other papers. The more frequently
papers are cited, presumably the more "impact" they are having on the world of research and
practice. This calculation is relevant both for journals and for researchers. For journals, the
more its articles are cited, the higher its IF, the greater its prestige, which, as with all things in
our wonderfully capitalist world, translates into money: advertisers and subscribers flock to
the journals with the highest prestige, the greatest . . . impact. I participate in scientific journal
editorial boards, and I have heard editors describe quite explicitly and calmly how they want
to elicit more and more papers that are likely to have a high IF. Thus, given two papers that
might be equally valid and solid scientifically, with one being on a "sexy" topic that generates
much public interest, and another on a "non-sexy" topic, all other things being equal, the
editor will lean towards the article that will interest readers more. Now this is not in itself
open to criticism: we expect editors of popular magazines and newspapers to do the same;
my point is that many clinicians and the public see science as such a stuffy affair that they
may not realize that similar calculations go into the scientific publication process.
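For readers curious about the arithmetic behind the score: the standard two-year journal impact factor, as published in ISI's Journal Citation Reports, is the number of citations received in a given year to items a journal published in the previous two years, divided by the number of citable items it published in those two years. The sketch below illustrates the formula with entirely hypothetical numbers (no real journal's data are used):

```python
# A minimal sketch of the standard two-year journal impact factor.
# All figures below are hypothetical, for illustration only.

def impact_factor(citations_to_prior_two_years: int,
                  citable_items_prior_two_years: int) -> float:
    """IF for year Y = (citations in Y to items published in Y-1 and Y-2)
    / (number of citable items published in Y-1 and Y-2)."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# A journal that published 200 citable articles over the prior two years,
# and whose articles drew 700 citations this year, scores 3.5:
print(impact_factor(700, 200))  # 3.5
```

Note what the denominator does: a journal can raise its IF either by attracting more citations or by publishing fewer (but more citable) articles, which is one reason editors court "sexy" topics.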
The IF also matters to individual researchers. Just as baseball players have batting averages
by which their skills are judged, the IF is, in a way, a statistical batting average for medical
researchers. In fact, ISI ranks researchers and produces a top ten list of the most cited
scientific authors in each discipline. In psychiatry, for instance, the most cited author tends to be
the first author of large epidemiological studies. Why is he cited so frequently? Because every
time one writes a scientific article about depression, and begins with a generic statement such
as "Major depressive disorder is a common condition, afflicting 10% of the US population,"
that first author of the main epidemiological studies of mental illness frequency is likely to be
cited. Does such research move mountains? Not really. There is, no doubt, some relevance to
the IF and some correlation with the value of scientific articles. There are data to back up this
notion. Apparently, about 50% of scientific articles are never cited even once. The median
rate of citation is only 1–2 citations. Fifty to one hundred citations would put an article above
the 99th percentile, and over 100 citations is the hallmark of a "classic" paper (Carroll, 2006).
So IF captures something, but its correlation with quality research is not as strong or
as direct as one might assume. One analysis looked at 131 articles publishing randomized
