In the most recent issue of Nature there is a short piece on the decline effect, which was discussed in a longer article in the New Yorker a few months back [1]. Succinctly, the decline effect is the phenomenon that the empirical support for certain scientific hypotheses declines over time. The New Yorker article gives several examples of this effect. One important example is the decline in empirical support for the effectiveness of certain medical drugs. For example, a study mentioned in the article shows that the demonstrated effectiveness of anti-depressants has decreased by as much as three-fold in the past decades. A more frivolous example is the decline in empirical support for E.S.P. – this peaked in the 1930s, with several research papers showing empirical support, but declined markedly in the next decade [2].
Several explanations have been proposed for the effect, and there are even scientists who are now doing research specifically on the decline effect! To me, the most reasonable explanation proposed consists of two parts: 1) scientists like hypotheses that are surprising but true; and 2) the criterion for the “truth” of a hypothesis (in many publication venues) is empirical support with 95% confidence. The second fact suggests that, when empirically testing a hypothesis that, a priori, has a 50% chance of being true, there is a 2.5% chance of getting a false positive (a 50% chance the hypothesis is false, times the 5% chance that a false hypothesis passes the test anyway). The first fact suggests that many hypotheses that are tested have an a priori chance of being true that is not much more than 50%. After all, if a hypothesis has, a priori, close to a 100% chance of being true, then it’s not very surprising, is it? Thus, these two facts together suggest there will be a non-negligible fraction of the hypotheses that scientists investigate for which empirical testing will give false positives.
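To make this concrete, here is a back-of-the-envelope sketch of the argument. The statistical power figure (the chance that a *true* hypothesis actually passes the test) is my own illustrative assumption; neither article gives one.

```python
# Back-of-the-envelope calculation: of the hypotheses that pass a 95%
# confidence test, what fraction are false positives?
# POWER is an illustrative assumption, not a figure from the articles.

ALPHA = 0.05  # significance threshold: the 95% confidence criterion
POWER = 0.80  # assumed chance a true hypothesis passes the test

def false_positive_fraction(prior: float) -> float:
    """Fraction of positive results that are false, given the a priori
    probability `prior` that the hypothesis is true."""
    false_positives = (1 - prior) * ALPHA  # false hypotheses that pass anyway
    true_positives = prior * POWER         # true hypotheses that pass
    return false_positives / (false_positives + true_positives)

for prior in (0.9, 0.5, 0.1):
    print(f"prior = {prior:.1f}: "
          f"{false_positive_fraction(prior):.1%} of positive results are false")

# prior = 0.9: 0.7% of positive results are false
# prior = 0.5: 5.9% of positive results are false
# prior = 0.1: 36.0% of positive results are false
```

Note how the false-positive fraction climbs as the prior drops: the more surprising the hypotheses a field chases, the larger the share of its “confirmed” results that are false, and hence ripe for later decline.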
One solution for the decline effect, proposed in both articles, is the creation of a kind of “Journal of Negative Results”, which would publish empirical results that contradict certain hypotheses. I heard this idea bandied about a lot during my grad student days, but I don’t think it will ever go very far. Where’s the glory in finding facts that are boring but true? Of course, there’s also the possibility of either 1) raising the 95% empirical support threshold necessary for publication; or 2) requiring empirical verification by many independent studies before allowing publication. Both of these ideas seem reasonable, but they could slow down the scientific process.
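A rough calculation, under the same illustrative assumptions as above (plus the additional, optimistic assumption that the studies are fully independent), shows both the benefit of requiring replication and its cost:

```python
# Rough effect of requiring k independent confirming studies before
# publication, under the same illustrative assumptions as above
# (ALPHA = 0.05, POWER = 0.80) plus the optimistic assumption that
# the studies are fully independent.

ALPHA = 0.05  # chance a false hypothesis passes a single study
POWER = 0.80  # assumed chance a true hypothesis passes a single study

for k in (1, 2, 3):
    p_false_survives = ALPHA ** k  # false hypothesis passes all k studies
    p_true_survives = POWER ** k   # true hypothesis passes all k studies
    print(f"k = {k}: a false hypothesis survives {p_false_survives:.4%} "
          f"of the time, but a true one only {p_true_survives:.1%}")

# k = 1: a false hypothesis survives 5.0000% of the time, but a true one only 80.0%
# k = 2: a false hypothesis survives 0.2500% of the time, but a true one only 64.0%
# k = 3: a false hypothesis survives 0.0125% of the time, but a true one only 51.2%
```

The false-positive rate drops geometrically with each required replication, but so does the chance that a genuinely true hypothesis survives the gauntlet; that is exactly the slowdown worry.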
Another idea (that was not proposed in either of the articles) is to raise the a priori support for a hypothesis. In Computer Science, we have a mathematical methodology that frequently allows us to prove at least parts of a hypothesis that we want to show is true. I think the decline effect is a nice justification for including at least some theoretical analysis in any paper in Computer Science. Sometimes experiments let us reach further than a purely theoretical study would. However, it seems important, in almost all cases, to provide at least some theoretical support for a hypothesis.
[1] The New Yorker rarely publishes articles on science, but when it does, the articles generally seem to be among the very best science writing out there.
[2] Believe it or not, according to the New York Times, a paper in “one of psychology’s most respected journals” recently claimed support for the existence of a certain type of ESP in college students. It remains to be seen whether there will be a decline effect in the support for this new hypothesis (hint: if I were a betting man, I wouldn’t bet on the hypothesis!).