Peer review in science -- is it broken?


I'll be blunt: In the U.S., many of us are scientifically illiterate. Not only do we not understand the content of the scientific disciplines, but we also don't understand the cultures of the sciences. Those of us who read blogs, however, can have a front-row seat to the practice of science, as well as tons of good commentary on it. In particular, lately I've been drawn to posts that address ethics in science, technology, and medicine by examining the peer review process and research results that can't be replicated.

Some definitions for the laypeople among us: Peer review is the process by which research and scholarship are evaluated by other experts in one's field. The depth and breadth of formal peer review varies by field; for example, the number of reviewers, as well as whether or not they and article authors will remain anonymous to one another, differs across science, social science, and humanities disciplines. Informal peer review also takes place after research results are published in an article; others in the field weigh in with observations and experiences that question, critique, or support the authors' assertions.

When a researcher publishes study results that are particularly exciting or controversial, other scientists will usually try to replicate the studies to see if they get the same results as the original researcher. As a scientist, you want results that are replicable because, among other reasons, successful replications validate your research methodology and bolster your reputation.

On to the blog posts:

First up, epidemiologist Tara C. Smith of Aetiology reflects on why scientists have not been able to replicate research that connected autism with a vaccine commonly given to children in the U.S. Recently, it came to light that Andrew Wakefield, a scientist whose research suggested a link between the measles/mumps/rubella vaccine and autism, had falsified some data in his study. Smith comments:

This is truly incredible. Even being familiar with Wakefield's statements over the past decade about his research, and his complete denial about studies that have contradicted his own findings, it's still pretty shocking that he completely made up data, and then pushed it for ten years as children around the world became ill and even died in light of his research. It's even more disgusting in light of the fact that I doubt this new information will change many minds when it comes to vaccination--the meme has already spread too far to let a little thing like atrocious scientific misconduct rein it in now.

Want more information? Lee Kottner at Cocktail Party Physics also writes about the campaign of misinformation surrounding the MMR vaccine.

Janet Stemwedel alerts us to another case of scientific fraud, this time by anesthesiologist Scott S. Reuben, who fabricated data. She explains the risks Reuben took:

[O]nly Reuben seemed to be obtaining these robust results. Since we're talking about research that was used to inform treatment decisions -- where many different physicians were administering the treatment -- this should have set off alarm bells. It did set off alarm bells, of course, but the question is why it took so long. And I'm not sure there's a neat answer to the question of how long the scientific community ought to wait for one researcher's "robust result" to be replicated by others. This is one of those situations where scientists need to work out how to balance their excitement about potentially useful new results, and their default trust of the member of their professional community who produced the results, with their skepticism.

[. . .]

On what peer review ought to have uncovered, we should recall that what actually occurs in peer review may not match what non-scientists imagine. Peer reviewers are not, in most cases, trying to replicate the experiments reported, nor are they even likely to reread all the references cited. Rather, they're looking at the manuscript to see if there's a coherent scientific argument, if the data included support that argument, if the interpretation of results makes sense against the background of the other scientific knowledge in this scientific area, and whether there are any obvious mistakes.

For more on peer review, check out Zuska's post "The Question of Emotion in Speaking of 'Voodoo Correlations in Social Neuroscience.'" In it, Zuska considers the claims made in a soon-to-be-published paper that questions social neuroimaging researchers' methodology and particularly the ways they analyze data. Some scientists and bloggers have called into question the use of the word "voodoo" in the paper's title.

Referring to a particular objection by Tor Wager, Zuska writes:

Mr. Wager objects to language like "voodoo" because it creates an "emotional effect" that affects public perception. The implication here is that the type of language Mr. Wager approves of creates NO emotional effect and has NO affect on public perception. And yet this is crazy. Using language that is more traditionally associated with scientific discourse creates its own emotional effect - it creates the effect that the speaker is speaking without emotion, is completely rational and objective, has no vested interests or biases, and can be trusted, even when some or all of those statements are unfounded. The discourse of science most assuredly has an affect on public perception, and we would be crazy to pretend that we did not want to have such an effect.

This is an excellent point, and one that needs to be broadcast to scientists and laypeople alike. As scholars Donna Haraway and Sandra Harding have reminded us, science is embodied--it is practiced by people with perspectives, biases, and investments. Science is not a pure process of discovery; it is a construction of knowledge by people, for people. Because of this, peer review can be deeply flawed, depending on the individuals involved. Both the producers and the consumers of that knowledge always, always, always must keep this fact in mind.

What are your thoughts today on science, its critics, and its gatekeepers?

Leslie Madsen-Brooks develops learning experiences for K-12, university, and museum clients. She blogs at The Clutter Museum, Museum Blogging, and The Multicultural Toybox.
