A number of articles have purported to show that obesity and other social problems spread through social networks. These articles, many by Christakis and Fowler, have appeared in the New England Journal of Medicine and other top medical journals. I find such results strange; they generally don't pass the sniff test for me. The causal mechanism is usually inferred to exist from the statistical significance of the results.
Russell Lyons, a mathematician, has now published a paper in the journal Statistics, Politics, and Policy that finds that the work by Christakis and Fowler is not so good. (HT: Pietro Poggi-Corradini). Here is the summary:
We begin by summarizing the major problems with C&F’s studies:
1. The data are not available to others.
2. The unavailable data are sparse for friendships.
3. The models used to analyze the sparse data contradict the data and the conclusions.
4. The method used to estimate the dubious models does not apply.
5. The statistical significance tests from the questionable estimates do not show
the proposed differences.
6. The wrongly proposed differences do not distinguish among homophily, environment, and induction.
7. Associations at a distance are better explained by homophily than by induction.
(Homophily is selection bias, that "people tend to associate with others like themselves.")
Or in other words: bad paper, meaningless results. It's not an easy article to follow (and neither was the original work by Christakis and Fowler). The point on statistical significance is pretty clear, though, and pretty deadly.
Lyons had a little trouble getting his article published:
We first submitted our critique to the New Engl. J. Med., but it was rejected
without peer review. The journal declined to give a reason when asked. We next submitted to BMJ, but it was again rejected without peer review. This journal did, however, volunteer that "We decided your paper was probably better placed in a more specialist journal." It is interesting to note that the same issue of BMJ that published Fowler and Christakis (2008a) also published the critique Cohen-Cole and Fletcher (2008a). The cover of that issue, in fact, was devoted to those two articles. In contrast to BMJ's decision, the general-interest online newsmagazine Slate published an article by Johns (2010) on our critique the same month we submitted our paper. A delightful coda is that a few months later, BMJ published an editorial by Schriger and Altman (2010) called "Inadequate post-publication review of medical research".
After these rejections by the New Engl. J. Med. and BMJ, we approached
three top journals who did not publish any of C&F’s studies, JAMA, Lancet, and Proc. Natl. Acad. Sci.. All were uninterested in our critique since they do not publish critiques of articles they did not originally publish. The section of J. Pers. Soc. Psychol. that published Cacioppo et al. (2009) does not publish critiques even of papers they have published, unless accompanied by new data.
Following on this educational venture, we submitted to a statistics journal
that specializes in reviews, Stat. Sci. Five months later they had 3 referee reports. The first two recommended publication after revisions (e.g., "an important critique" and "well worth publishing"), while the third, though agreeing with our critiques, said that C&F's work was insufficiently important to warrant publication of a critique in Stat. Sci. Two months after getting these reports, the editor made his decision: rejection, allowing for resubmission if we made the tone more neutral and changed the focus, perhaps to "editorial decision making standards in medical journals", as suggested by the third referee.
Methodological journals abound, but their cautions and recommendations
are largely ignored (Blalock, 1989). Indeed, “in a process well documented by
Blalock and Duncan, positivist sociology, like so many other professions, has tended to become immune to the recognition of flaws in its work” (Baldus, 1990). Given the above considerations, it may help to have a journal specifically devoted to critiques. This would not only allow others to know more about which studies are trustworthy, but could also have the salutary effect of encouraging researchers to pay extra attention to their methods lest they be publicly critiqued.