Friday, December 15, 2006

Certain quotations in this NY Times article struck me as odd:
Historically, scientific journals have published only positive results — data showing one thing connected to another (like smoking to cancer). As a rule, they didn’t publish negative results (this drug didn’t cure that disease). Medical journals began publishing negative results a few years ago, but social science didn’t follow the trend. This is a problem. Not publishing negative results means that generations of researchers can waste time and money repeating the same studies and finding the same unpublishable results.

Then there’s publication bias. If, for example, a study found that welfare states have more terrorism than nonwelfare states, it would probably get published. If eight studies didn’t find a connection between welfare and terrorism, they probably wouldn’t make it into print, because technically, they didn’t find anything. So a survey of published literature might suggest that welfare and terrorism are linked, even though eight studies potentially proved otherwise. This could have serious implications.

“Social-science research results lead to huge, important decisions,” Lehrer says, about free trade, for example, or government spending on weapons. “That should require total transparency.” But since no one publishes negative results, Lehrer says, those important decisions are sometimes based on biased information. And there are data supporting his theory: early this year, a study found that the leading political-science journals are “misleading and inaccurate due to publication bias.”

“Everything we think we know may be wrong,” Lehrer says. “The correct results could be sitting in people’s file drawers because they can’t get them published.”

Why? J. David Singer, emeritus professor of world politics at the University of Michigan, says: “Folks in my generation are much too rigid, which means we can’t benefit from a nice piece of research that looks like it tells us nothing. These young social scientists are right to insist we start publishing negative results. Until we do, we’re just throwing away useful information.”

No one publishes negative results? I don't think this is true. Take the example: Does welfare cause terrorism? If someone came out with a study purporting to show that welfare causes terrorism, I have no doubt that the opposite finding would not be suppressed; scores of social scientists and journals would be highly motivated to publish studies showing that there is no relationship.

In the field of education, for example, there are all kinds of studies purporting to show that there is no relationship between school achievement and pretty much anything that you can name -- desegregation, charter schools, class size reductions, vouchers, No Child Left Behind, testing in general, academic tracking, bilingual education, school spending, neighborhood effects, teacher certification, and more. (I could cite studies for each of these points.) If someone came up with a solid study showing that there was no relationship between poverty and school achievement, that study would certainly be published, because it would be so provocative and interesting. It wouldn't be squelched on the theory that it was a mere "negative result."

The same seems to be true elsewhere. Social scientists are constantly coming out with studies purporting to show that there is no relationship between daycare and childhood aggression, or between divorce and adult wellbeing, or between the rise of the welfare state and illegitimacy, or between abstinence education and actual abstinence, or between the minimum wage and unemployment. Name any controverted subject in the social sciences, and you are almost guaranteed to find at least one study showing that there is no correlation between A and B.

All of that said, it is true that a lot of "negative results" wouldn't be published, but only because there's no reason to suspect a relationship in the first place. To return to education, it's interesting to find out that (according to some studies) funding may not have much effect on student achievement, precisely because the opposite is so often asserted. But no one would be interested in a study discussing any of the nearly infinite number of facts in the universe that never would have been expected to have any relationship to school achievement in the first place (e.g., levels of sunspots; the presence of El Niño; the number of frogs per capita in the community; etc., etc., etc.).

Similarly, in the hard sciences, I would imagine that medical journals are not interested in publishing 10,000 studies discussing all of the 10,000 substances that don't cure cancer (cayenne pepper, gravel, cranberry juice blended with apple juice, etc.). But they definitely are interested in studies showing, for example, that the combination of Plavix and aspirin often isn't effective in preventing heart attacks and strokes (as had previously been thought).

The interesting question is where to draw the line: Is a negative result worthwhile (because it disproves a relationship that was within the realm of plausible suspicion), or not worthwhile (because it is too obvious)? Maybe there are various disciplines that draw that line too restrictively, but it's not true that "no one publishes negative results."
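Going back to the welfare-and-terrorism example in the quoted article, the "file drawer" arithmetic is easy to see with a toy simulation. Here is a minimal sketch in Python, entirely hypothetical and not based on any of the studies mentioned above: assume welfare has no effect at all, simulate a thousand small studies, and let only the statistically "significant" ones into print.

```python
# Toy simulation of the "file drawer" effect, assuming a true null:
# welfare spending has no actual relationship to terrorism, but only
# "significant" studies get published. All numbers here are made up.
import random
import statistics

random.seed(0)

def run_study(n=50):
    """One simulated study: compare two groups of n observations drawn
    from the same distribution, so the true effect is zero."""
    group_a = [random.gauss(0, 1) for _ in range(n)]
    group_b = [random.gauss(0, 1) for _ in range(n)]
    effect = statistics.mean(group_a) - statistics.mean(group_b)
    se = (statistics.pvariance(group_a) / n + statistics.pvariance(group_b) / n) ** 0.5
    significant = abs(effect) > 1.96 * se  # crude two-sided test at roughly p < 0.05
    return effect, significant

all_studies = [run_study() for _ in range(1000)]
published = [e for e, sig in all_studies if sig]  # the journals' filter

print(f"all studies:       n={len(all_studies)}, mean |effect| = "
      f"{statistics.mean(abs(e) for e, _ in all_studies):.3f}")
print(f"published studies: n={len(published)}, mean |effect| = "
      f"{statistics.mean(abs(e) for e in published):.3f}")
```

Run as written, the handful of "significant" studies that survive the filter report effects several times larger than the average across all thousand, which is the distortion the article worries about. The exact numbers depend on the made-up sample sizes and threshold, and, as argued above, real journals are rarely so uniform a filter.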

1 Comment:

Anonymous said...

The point is: if you do a study, publish your results. That doesn't mean you should go out of your way to find negative results. No offense, but I think you're nitpicking. People often say "no one" when they mean "most people."

I'm constantly irritated at how disorganized and inaccessible scientific studies are today. Every study on a certain topic should be openly accessible and grouped with other similar studies -- especially if it's government-funded. I'm a big fan of PLoS, but more needs to be done to get science online.

5:37 PM  
