Friday, July 24, 2009

Why Most Published Research Findings Are False

A dramatic title for a paper published by John Ioannidis in PLoS Medicine. The argument is essentially a variant of the publication bias argument familiar from the economics meta-analysis literature. Imagine a field of study where there are no true relationships among the variables; for example, suppose no drugs actually have an effect on a particular disease. Different researchers test the effects of different drugs on different groups of people. If they set the significance level for a test of whether an effect is present at 10%, say, then about 1 in 10 researchers will find a statistically significant result purely by chance. If only the researchers who find something publish their results, the literature will claim an effect where none really exists. This is publication bias. A meta-analysis that treated all the drugs as equivalent and included all trials, or that controlled for publication bias, might answer the question more correctly.
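A quick simulation makes the mechanism concrete. This is only a minimal sketch, not anything from the paper: the number of trials, the sample sizes, and the use of a two-sample t-test are all assumed for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_trials = 1000    # assumed number of independent drug trials
n = 100            # assumed patients per arm
alpha = 0.10       # the 10% significance level from the example

significant = 0
for _ in range(n_trials):
    # No drug works: both arms are drawn from the same distribution,
    # so every "positive" result is a false positive.
    treated = rng.normal(loc=0.0, scale=1.0, size=n)
    control = rng.normal(loc=0.0, scale=1.0, size=n)
    _, p = stats.ttest_ind(treated, control)
    if p < alpha:
        significant += 1

print(f"{significant} of {n_trials} trials were 'significant'")
# Roughly 100 of 1000, i.e. about 1 in 10, despite no true effects.
# If only these trials get published, the literature reports effects
# that do not exist.
```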

But even if all the studies were published, should we conclude that the result was just noise, or that the one drug that seemed to have an effect actually had one? More tests of that drug - replication - will help settle the issue. Ioannidis argues that in many areas of medicine there are few true relationships, so published research exaggerates the likelihood that the reported effects exist.
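Ioannidis frames this in terms of the positive predictive value of a significant finding: the chance that a flagged relationship is real depends on the prior fraction of tested relationships that are true, the power of the study, and the significance level. Here is a minimal sketch of that calculation; the power and the priors are assumed values chosen for illustration.

```python
def ppv(prior, power=0.8, alpha=0.05):
    """P(relationship is real | significant result).

    prior: assumed fraction of tested relationships that are true.
    power: assumed probability of detecting a true relationship.
    alpha: significance level (false positive rate under the null).
    """
    true_pos = power * prior
    false_pos = alpha * (1 - prior)
    return true_pos / (true_pos + false_pos)

# If only 1 in 100 tested relationships is real, most significant
# findings are false even at alpha = 0.05.
for prior in (0.5, 0.1, 0.01):
    print(f"prior = {prior:>4}: PPV = {ppv(prior):.2f}")
# prior =  0.5: PPV = 0.94
# prior =  0.1: PPV = 0.64
# prior = 0.01: PPV = 0.14
```

The point of the exercise is that in fields where few tested relationships are true, even well-powered, correctly analyzed studies will mostly publish false positives.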

This is probably less of a problem in economics, where the focus is more on the size of an effect (despite McCloskey's claims), though Granger causality, unit root testing, spurious regressions, etc. are exceptions. But meta-analysis does show that the effect sizes in many published studies may be nearly meaningless, and that the true confidence intervals around those estimates are much wider than the ones formally reported by the researchers. Large samples and appropriate estimators are necessary for obtaining estimates that are meaningful at all.
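A simulation shows why conditioning on significance distorts reported effect sizes. Again this is a sketch under assumed numbers: the true effect, sample size, and number of studies are all made up for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

true_effect = 0.2   # assumed small true effect
n = 50              # assumed (small) sample per arm
alpha = 0.05

published, all_estimates = [], []
for _ in range(5000):
    treated = rng.normal(true_effect, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    diff = treated.mean() - control.mean()
    _, p = stats.ttest_ind(treated, control)
    all_estimates.append(diff)
    if p < alpha:
        published.append(diff)   # only significant results "publish"

print(f"true effect:           {true_effect}")
print(f"mean over all studies: {np.mean(all_estimates):.3f}")
print(f"mean over 'published': {np.mean(published):.3f}")
# With small samples, the significant-only mean is roughly double the
# true effect. A meta-analysis restricted to published studies inherits
# this bias, and the nominal confidence intervals around the pooled
# estimate are far too narrow.
```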
