At least in some way. Don't get me wrong, there is a lot of great research out there. However, it has occurred to me that many people are much too trusting of published research, particularly when it is written by people from fancy universities with fancy letters behind their names and published in prestigious journals. I saw this recently during a very lively session on the Decline in US Manufacturing Growth and Productivity at the AEA meetings in Philadelphia several weeks ago. Several people asked David Autor why his results on the impact of China on US innovation were different from what other prominent researchers had found. (One of the answers, of course, is that there is little reason to believe the competing research, but I digress...) Similarly, one of my complaints about the otherwise excellent Trade Talks podcast with Chad Bown is that published results, particularly by prominent researchers, are generally taken at face value, with not enough discussion, in my view, of the potential caveats and shortcomings of the methodologies employed.
The reality is that science is difficult, and that Cowen's First Law (there is something wrong with everything!) applies to economic research.
Here's a moving video from Neil deGrasse Tyson which I mostly love. My only issue is with his description of science:
One of the great things about science is that it's an entire exercise in finding what is true. You have a hypothesis, you test it. I get a result. A rival of mine double checks it, because they think I might be wrong. They perform an even better experiment than I did, and they find out, "Hey, this experiment matches! Oh my gosh. We're on to something here!" And out of this rises a new, emergent truth.
This is a description of everything I wish science was! Perhaps it is an accurate description of the hard sciences (I'm skeptical), but it is not how the social sciences operate. In practice, when a top researcher has a major finding, other top researchers, with rare exceptions, do not check it. Occasionally, grad students or less prominent researchers will overturn the result, but they will find that journals simply aren't the least bit interested in publishing papers which reverse seminal papers. Thinking like an economist, this creates some rather perverse incentives. If you are a well-connected researcher in a prominent department, you are well incentivized to publish as much as possible. This means creating research which appears sophisticated, and it also means not pissing off the people who will judge your research. On the contrary, it implies that there are benefits to having a lot of close friends (what deGrasse Tyson calls your "rivals") in the profession. You don't accomplish this by pointing out that another researcher's results disappear when you control for latitude. As a result, many top researchers are in fact incentivized to crank out many low-quality papers with seemingly blockbuster results.
Part of the reason this system survives is that there is a culture of frowning on writing "comment papers"; the other reason is that there is, fortunately for the system, a willing population of "sheep", the "true believers", available to consume and believe this research.
In any case, on my ride back to Moscow from Philadelphia, I fired up Stata and took a second look at some of the research, published in a leading journal, which found that the China shock led to a huge increase in productivity and patenting in Europe. The thesis sounded quite dubious to me from the beginning. It turned out that including sectoral fixed effects -- a very basic control -- killed the results. If I were to write this up, the journal that published the original would never, in a million years, accept it. Secondly, although the original authors seem to me like fine people, economists traditionally behave in a way which is mafia-level shady (see the comments) when their research comes under attack. Partly, they have to do this: since the masses believe that most top research is correct, it is seen as a huge black mark on someone's reputation to have a paper overturned. If there were widespread knowledge that science is difficult and that most papers have flaws, this might not be so necessary. Then, perhaps, we could get a bit closer to Neil deGrasse Tyson's idealized view of science.
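For concreteness, here is a minimal sketch of what that kind of robustness check looks like in practice. It is written in Python rather than Stata, and the file name, variable names (log_patents, china_import_exposure, sector, year), and data are entirely hypothetical placeholders, not the original paper's code:

```python
# Minimal, hypothetical sketch of a fixed-effects robustness check.
# All names (firm_panel.csv, log_patents, china_import_exposure, sector, year)
# are placeholders, not the original study's data or code.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("firm_panel.csv")  # hypothetical firm-year panel

# Baseline: outcome regressed on import exposure with year fixed effects only,
# standard errors clustered by sector.
baseline = smf.ols(
    "log_patents ~ china_import_exposure + C(year)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["sector"]})

# Robustness check: add sectoral fixed effects, the "very basic control"
# mentioned above, and see whether the coefficient of interest survives.
with_sector_fe = smf.ols(
    "log_patents ~ china_import_exposure + C(year) + C(sector)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["sector"]})

print("baseline:        ", baseline.params["china_import_exposure"])
print("with sector FEs: ", with_sector_fe.params["china_import_exposure"])
```

The point of the exercise is simply to see whether the coefficient of interest survives the added fixed effects; in the case described above, it did not.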
Write up your results and get them published in a journal that publishes replications. I know a few. :-)
Haha. I might do this, but I've probably already pissed off enough people in the profession.
So true...! We need to rethink our approach towards research.
Perhaps, to some large extent certainly, but I do see a lot of criticism in economics, in seminars and discussions, as a PhD student at the University of Arizona department of finance.
But also, what about "...one funeral at a time" as a way to eventually get change and overturning? We have seen this in econ, at least over long periods of time.
I actually think the discussion in seminar rooms in economics tends to be very good. However, it doesn't transfer over to written research, and this is a problem.
Delete"One funeral at a time" is too slow.
And then, there is the possibility that the very same people whose seminal results you are overturning are also the reviewers of your paper when you submit it for publication or to a conference (especially if your topic is specialized, in a field with a small number of researchers).
This happened to me. A trick is to simply ask the editors not to assign the original authors as referees! Amazing what you can get when you ask.
Pierre Bourdieu's Homo Academicus.