Prominent academics are often astonished at the rapidity with which the blogosphere occasionally pounces on and dissects their research findings. In this case, it happened to Case and Deaton, authors of a recent much-publicized study entitled "Rising morbidity and mortality in midlife among white non-Hispanic Americans in the 21st century." The pouncing was done by Phil Cohen and - most prominently - by Andrew Gelman.
The TL;DR version is that the rising mortality in some of the subgroups spotlighted by Case and Deaton was partly an artifact of a composition effect - the average age within those subgroups rose over the observation period, which pushed up death rates for the aggregated subgroups even where age-specific rates changed much less. If you remove the composition effect, the mortality increase among these groups was considerably smaller.
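The mechanics of the composition effect are easy to see with a toy calculation. Here's a minimal sketch with made-up numbers (not Case and Deaton's actual data): age-specific death rates are held perfectly flat, but the age mix within the 45-54 bracket shifts older, and the aggregate rate for the bracket rises anyway.

```python
# Hypothetical age-specific death rates (deaths per person-year),
# held constant over time - no real mortality increase at any age.
rates = {"45-49": 0.004, "50-54": 0.006}

def aggregate_rate(shares):
    """Aggregate death rate = population-share-weighted average of
    the age-specific rates."""
    return sum(shares[age] * rates[age] for age in rates)

# Assumed population shares: the bracket skews younger early in the
# period, older later as a big cohort ages into its upper half.
mix_early = {"45-49": 0.55, "50-54": 0.45}
mix_late = {"45-49": 0.45, "50-54": 0.55}

early = aggregate_rate(mix_early)
late = aggregate_rate(mix_late)
print(f"early aggregate rate: {early:.5f}")   # 0.00490
print(f"late aggregate rate:  {late:.5f}")    # 0.00510
print(f"apparent increase: {100 * (late - early) / early:.1f}%")
```

With these (invented) numbers, the bracket-wide death rate rises about 4% even though no one's age-specific risk changed at all - which is exactly the kind of artifact an age adjustment removes.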
Anne Case responded with the consternation typical of researchers first encountering blog attacks:
Case said that she didn’t buy this argument. “We spent a year working on this paper, sweating out every number, sweating out over what we were doing, and then to see people blogging about it in real time — that's not the way science really gets done,” she said. “And so it’s a little hard for us to respond to all of the blog posts that are coming out.”
Academics are used to the cozy, staid world of academia. Responses are slow, polite, and vetted by third parties. Arguments happen in seminars, in office discussions, and at dinners. Disputes are resolved over a matter of years - when they are resolved at all. And never do intellectual adversaries take their case to the general public!
But academics are going to have to get used to blogs. The technological advances of the web have simply made it easier for crowds of outsiders to evaluate research in real time. How often that process produces the "wisdom of crowds", and how often it merely adds unhelpful noise, remains to be seen. Certainly we've seen the internet do both of those things at different times. But blog criticism of research looks like something that's here to stay, and academics whose work appears in the popular press will have to get used to it!
Blog discourse has some distinct advantages - above all, the speed of responses and the diversity of people who get involved in discussions. How often do you see two economists arguing with a sociologist and a political scientist/statistician? That's pretty cool! There is, however, a tendency for blog debates to become too antagonistic.
I think Andrew Gelman's latest salvo against Case and Deaton falls into this category a bit. He is put out that Case and Deaton have, so far, refused to issue a public mea culpa about what he sees as a major gotcha. Gelman writes up what he thinks such a mea culpa should say, and includes these bits of snark:
Had it not been for bloggers, we’d still be in the awkward situation of people trying to explain an increase in death rates which isn’t actually happening...We count ourselves lucky to live in an era in which mistakes can be corrected rapidly[.]
Gelman is dramatically overstating the importance of what he found! To say that the increase in death rates "isn't actually happening", first of all, is not quite right - Gelman's rough-and-ready composition adjustment removes all of the increase, but more careful examination shows that some portion of the increase remains.
Second, Gelman is kind of assuming that zero is the important benchmark for what constitutes an "increase". He makes sure to point out that the paper's main finding - that American white mortality increased a lot relative to various comparison groups - is not changed by the composition adjustment. But when he claims that the increase "didn't really happen", Gelman is treating "increase" as an absolute rather than a relative term.
Andrew, you're a stats guy. You know full well that people analyzing time-series data detrend stuff all the time. Measuring increases relative to a trend is totally standard practice!
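To make the relative-vs-absolute distinction concrete, here's a toy sketch with invented numbers. Suppose comparison groups' mortality falls steadily while the focal group's stays flat: in absolute terms the focal series doesn't increase at all, but measured against the shared downward trend it shows a large increase.

```python
# Made-up illustration: mortality indexed to 100 in the base year.
years = list(range(1999, 2014))

# Comparison groups decline roughly 2% per year (assumed).
comparison = [100 * (0.98 ** (y - 1999)) for y in years]

# Focal group is flat in absolute terms.
focal = [100.0 for _ in years]

# Deviation of the focal series from the comparison trend, in percent.
relative = [100 * (f - c) / c for f, c in zip(focal, comparison)]

print(f"absolute change in focal series: {focal[-1] - focal[0]:.1f}")
print(f"change relative to trend: {relative[-1] - relative[0]:.1f} points")
```

The absolute change is exactly zero, while the relative-to-trend gap grows to over 30 percentage points - a perfectly standard way to describe an "increase" against a benchmark, which is the sense in which the paper's finding survives the composition adjustment.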
So like many blog debates, this one ends up making a mountain out of a molehill. The composition effect was a useful and instructive observation, but it doesn't really change anything about the paper's result. And publicly demanding that the authors engage in an equally public mea culpa over such a non-issue is a little unrealistic. If it leads to rancor in the long term, that will be a shame.
I like what blogs have done for research, but I think we should work to make those discussions less about point-scoring and more about a cooperative, crowdsourced search for truth.