Tuesday, March 7, 2017

On Perverse Incentives and Replication in Science

Stephen Hsu has a nice blog post on this topic. He writes about this common pattern:

1. Study reports results which reinforce the dominant, politically correct, narrative.
2. Study is widely cited in other academic work, lionized in the popular press, and used to advance real world agendas.
3. Study fails to replicate, but no one (except a few careful and independent thinkers) notices.

#1 is spot-on for economics. Woe betide the researcher who bucks the dominant narrative. In economics, something else happens. Following the study, there are 20 piggy-back papers which test for the same results on other data. The original authors typically get to referee these papers, so if you're a young researcher looking for a publication, look no further. You've just guaranteed yourself the rarest of gifts -- a friendly referee who will likely go to bat for you. Just make sure your results are similar to theirs. If not, you might want to shelve your project, or else try 100 other specifications until you get something that "works". One trick I learned: you can bury a robustness check which overturns the main results deep in the paper, and your referee, emotionally invested in the benchmark result, for sure won't read that far.

Hsu then writes:


"one should be highly skeptical of results in many areas of social science and even biomedical science (see link below). Serious researchers (i.e., those who actually aspire to participate in Science) in fields with low replication rates should (as a demonstration of collective intelligence!) do everything possible to improve the situation. Replication should be considered an important research activity, and should be taken seriously"

That's exactly right. Most researchers in economics go their entire careers without criticizing anyone else in their field, except as an anonymous referee, where they tend to let out their pent-up aggression. Journals shy away from publishing comment papers, as I found out first-hand. In fact, many if not a majority of the papers published in top economics journals are probably wrong, and yet the field soldiers on like a drunken sailor. Often, many people "in the know" realize that big papers have fatal flaws, but they have every incentive not to point this out and create enemies, or to waste their time writing up something journals don't really want to publish (the editor doesn't want to piss off a colleague either). As a result, many of these false results end up getting taught to generations of students. Indeed, I was taught a number of these flawed papers as both an undergraduate and a grad student.

What can be done?

Well, it would be nice to make replication sexy. I'm currently working on a major replication/robustness project of the AER. In the first stage, we are checking whether results are replicable, using the same data sets and empirical specifications. In the second stage, we plan to think up a collection of robustness checks and out-of-sample tests of papers, and then create an online betting market about which papers will be robust. We plan to let the original authors bet on their own work.
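The post doesn't specify what mechanism such a betting market would use, but any standard prediction-market design would do. Purely as an illustration, here is a minimal sketch of Hanson's logarithmic market scoring rule (LMSR), where the going price of a "replicates" share can be read as the crowd's probability that a paper's results will hold up. All names and the liquidity parameter are hypothetical choices, not anything from the project itself:

```python
import math

class ReplicationMarket:
    """Toy LMSR market over a binary outcome: does the paper replicate?

    b is the liquidity parameter: larger b means prices move more
    slowly in response to trades. The value 100 is an arbitrary choice.
    """

    def __init__(self, b=100.0):
        self.b = b
        self.q_yes = 0.0  # outstanding "replicates" shares
        self.q_no = 0.0   # outstanding "fails to replicate" shares

    def _cost(self, q_yes, q_no):
        # LMSR cost function: C(q) = b * log(exp(q_yes/b) + exp(q_no/b))
        return self.b * math.log(math.exp(q_yes / self.b) +
                                 math.exp(q_no / self.b))

    def price_yes(self):
        # Instantaneous price of a "yes" share; doubles as the market's
        # implied probability of replication.
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy(self, outcome, shares):
        # A trader pays the change in the cost function to buy shares.
        before = self._cost(self.q_yes, self.q_no)
        if outcome == "yes":
            self.q_yes += shares
        else:
            self.q_no += shares
        return self._cost(self.q_yes, self.q_no) - before
```

An original author confident in their own work would buy "yes" shares, pushing the implied replication probability up; skeptics betting "no" would pull it back down. A fresh market starts at an implied probability of 0.5.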

Another long-term project is to make a journal ranking system which gives journals points for publishing comment papers. Adjustments could also be made for other journal policies, such as the extent to which a particular journal leeches off the academic community with high library subscription fees, submission fees, and long response times.
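A points-based ranking like this could be as simple as a scoring function that rewards comment papers and docks journals for fees and slow turnaround. The function below is only a sketch of the idea; every weight and input name is a hypothetical placeholder, not a proposed calibration:

```python
def journal_score(citations, comment_papers, total_papers,
                  subscription_fee, submission_fee, median_response_days,
                  w_comment=2.0, w_fee=0.001, w_delay=0.01):
    """Toy journal ranking: citations per paper, plus a bonus for
    publishing comment papers, minus penalties for leeching off the
    community (fees) and for slow response times.

    The weights w_comment, w_fee, and w_delay are arbitrary
    placeholders chosen for illustration only.
    """
    base = citations / max(total_papers, 1)
    comment_bonus = w_comment * comment_papers / max(total_papers, 1)
    fee_penalty = w_fee * (subscription_fee + submission_fee)
    delay_penalty = w_delay * median_response_days
    return base + comment_bonus - fee_penalty - delay_penalty
```

Under this toy scheme, two journals with identical citation counts would be separated by whether they publish comments, what they charge, and how long they sit on submissions.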

The AEA should also come out with a new journal split between review articles (which tend to be highly cited) and comment papers (which tend not to be). That way, the journal could do well and do good at the same time.

As an individual, you can help the situation by writing a comment paper (maybe light up somebody who isn't in your main field, like I did). You can also help by citing comment papers, and by rewarding them when you edit and serve as a referee. As an editor, do you really care more about your journal's citations than truth? You could also engage in playful teasing of colleagues who haven't written any comment papers as people who aren't doing their part to make economics a science. (You could also note that it's a form of soft corruption, but I digress...)


2 comments:

  1. I just wish more economists (and scientists in other fields, like mine) thought this way.

    In the early engineering literature, for example, papers that were read at meetings were published together with comments, which generally put the spotlight on strengths or weaknesses of the papers. Nowadays, with everything digital, it should be easy to do something similar.

    1. Me too. What field are you in? I think publishing papers along with comments would be smart...
