Tuesday, February 6, 2018

On the Uses (and Abuses) of Economath: The Malthusian Models

Many American undergraduates in Economics who are interested in doing a Ph.D. are surprised to learn that the first year of an Econ Ph.D. feels much more like a Ph.D. in solving mathematical models by hand than like learning economics. Typically, there is very little reading or writing involved, but loads and loads of fast algebra are required. Why is it like this?

The first reason is that mathematical models are useful! Take the Malthusian model. All you need are four simple assumptions: (1) the birth rate is increasing in income, (2) the death rate is decreasing in income, (3) income per person is negatively related to population, and (4) the rate of technological growth is slow relative to population growth. With these, you can explain a lot of world history, and you arrive at the surprising conclusion that income in a Malthusian economy is determined solely by the birth and death rate schedules, and is uncorrelated with technology. Using this model, you can explain, for example, why incomes before 1800 were roughly stagnant for centuries despite improving technology (technological advance just resulted in more people; see the graph of income proxied by skeletal heights below). It also predicts why the Neo-Europes -- the US, Australasia, and the Southern Cone countries -- are rich: they were depopulated by disease, and then Europeans moved in with lots of land per person. It is a very simple, and yet powerful, model. And it makes (correct) predictions that many historians (e.g., Kenneth Pomeranz), scientists (e.g., Jared Diamond), and John Bates Clark-caliber economists (see below) get wrong.
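The classical Malthusian logic above is easy to sketch numerically. In this toy simulation (all functional forms and parameter values are my own illustrative choices, not taken from any paper), steady-state income is pinned down by where the birth and death schedules cross, so doubling technology changes the population level but leaves long-run income unchanged:

```python
def simulate(A, T=5000, y_bar=1.0):
    """Return steady-state (income, population) for a technology level A."""
    P = 1.0
    for _ in range(T):
        y = A / P ** 0.5        # assumption (3): income falls with population
        b = 0.02 * y / y_bar    # assumption (1): birth rate increasing in income
        d = 0.02 * y_bar / y    # assumption (2): death rate decreasing in income
        P *= 1.0 + b - d        # population adjusts; technology A is held fixed
    return A / P ** 0.5, P

y_lo, P_lo = simulate(A=1.0)
y_hi, P_hi = simulate(A=2.0)
# Both economies end up at subsistence income (b = d exactly at y = y_bar),
# but the higher-technology economy supports roughly four times the population.
```

Steady-state income solves b(y) = d(y), a condition in which technology does not appear at all; that is the whole "stagnant incomes despite improving technology" result in three lines.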



A second beneficial reason is signalling. This reason is not to be discounted given the paramount importance of signalling in all walks of life (still not sufficiently appreciated by all labor economists). Smart people do math. Even smarter people do even more complicated-looking math. I gratuitously put a version of the Melitz model in my job market paper, and when I interviewed, someone remarked that I was "really teched up!" Simple models are not something that serious grown-ups partake in. Other social science disciplines have their own versions of peacock feathers. In philosophy, people write in increasingly abstruse terms, using obscure language and jargon, going to enormous lengths to send as many readers as possible to the dictionary. Unfortunately, the Malthusian model above, while effective in terms of predictive power, is far too simple to play a beneficial signalling role, and as a result it would likely have trouble getting published if introduced today.

A third reason to use math is that it is easy to use math to trick people. Often, if you make your assumptions in plain English, they will sound ridiculous. But if you couch them in terms of equations, integrals, and matrices, they will appear more sophisticated, and the unrealism of the assumptions may not be obvious, even to people with Ph.D.'s from places like Harvard and Stanford, or to editors at top theory journals such as Econometrica. A particularly informative example is the Malthusian model proposed by Acemoglu, Johnson, and Robinson in the 2001 version of their "Reversal of Fortune" paper (model starts on the bottom of page 9). Note that Daron Acemoglu is widely regarded as one of the most brilliant economic theorists of his generation, is a tenured professor of Economics at MIT, was recently the editor of Econometrica (the top theory journal in all of economics), and was also awarded a John Bates Clark medal (the 2nd most prestigious medal in the profession) in large part for his work on this paper (and a closely related paper). Also keep in mind this paper was eventually published in the QJE, the top journal in the field. Very few living economists have a better CV than Daron Acemoglu. Thus, if we want to learn about how economath is used, we'll do best to start by learning from the master himself.

What's interesting about the Acemoglu et al. Malthusian model is that they take the same basic assumptions, assign a particular functional form to how population growth is influenced by income, and arrive at the conclusion that population density (which is proportional to technology) will be proportional to income! They use the model:

p(t+1) = rho*p(t) + lambda*(y-ybar) + epsilon(t),

where p(t+1) is population density at time t+1, p(t) is population density at time t, rho is a parameter (perhaps just less than 1), lambda is a parameter, y is income, ybar is the level of Malthusian subsistence income, and epsilon(t) is an error term. If you impose a steady state (p* and y*) and solve for p*, you get:

p* = lambda*(y* - ybar)/(1 - rho)

I.e., you get that population density is increasing in income, and thus that income per person should have been increasing throughout history. Thus, these guys from MIT were able to use mathematics and overturn one of the central predictions of the Malthusian model. It is no wonder, then, that Acemoglu was awarded a Clark medal for this work.
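As a quick sanity check, iterating the difference equation with income held fixed does converge to this closed-form steady state (the parameter values below are illustrative choices of my own, not AJR's):

```python
# Iterate p(t+1) = rho*p(t) + lam*(y - ybar) with y held fixed, and compare
# against the closed-form steady state lam*(y - ybar)/(1 - rho).
rho, lam, y, ybar = 0.98, 5.0, 1.2, 1.0

p = 0.0
for _ in range(2000):
    p = rho * p + lam * (y - ybar)

p_star = lam * (y - ybar) / (1 - rho)  # closed-form steady state (= 50 here)
```

With rho = 0.98 the deviation from p_star shrinks by a factor of 0.98 each period, so p is numerically indistinguishable from p_star after 2000 iterations.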

Except. This version doesn't necessarily fit the skeletal evidence above, although that evidence may be incomplete and imperfect (selection issues?). What exactly was the source of the difference between the classical Malthusian model and the "MIT" Malthusian model? The crucial assumption, unstated in words but there in Greek letters for anyone to see, was that income affects the level of population, but not the growth rate of population. Stated differently, this assumption means that a handful of individuals could and would out-reproduce the whole of China and India combined if they had the same level of income. (With rho less than one, say .98, the first term implies a contraction of millions of people in China/India. With income over subsistence, we then need to parameterize lambda to be large enough that overall population can still grow in China. But once we do this, we'll have the implication that even a very small population would have much larger absolute growth than China given the same income.) Obviously, this is quite a ridiculous assumption when stated in plain language. A population can grow by, at most, a few percent per year; 100 people can't have 3 million offspring. What this model does successfully is reveal how cloaking an unrealistic assumption in mathematics can make said assumption very hard to detect, even by tenured economics professors at places like MIT. Math in this case is used as little more than a literary device designed to fool the feebleminded. Fortunately, someone caught the flaw, and this model didn't make it into the published version in the QJE. Unfortunately, the published version still included the view that population density is a reasonable proxy for income in a Malthusian economy, which of course it is not. And the insight that Malthusian forces led to high incomes in the Neo-Europes was also lost.
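The hidden assumption is easy to see once you compute the model's implied absolute growth, p(t+1) - p(t) = lambda*(y - ybar) - (1 - rho)*p(t), for two populations of very different sizes at the same income (the numbers below are my own illustrative choices, not AJR's):

```python
rho, lam, ybar = 0.98, 5_000_000.0, 1.0
y = 1.2  # both populations enjoy the same income above subsistence

def absolute_growth(p):
    """People added per period under p(t+1) = rho*p(t) + lam*(y - ybar)."""
    return (rho * p + lam * (y - ybar)) - p

village = absolute_growth(100.0)           # roughly +1 million people per period
china = absolute_growth(1_000_000_000.0)   # roughly -19 million people per period
# A hundred villagers add about a million people per period, while the
# billion-person economy shrinks: income moves the level, not the growth rate.
```

Because the term lambda*(y - ybar) does not scale with p(t), absolute growth is the same for a village as for a continent, and the decay term (1 - rho)*p(t) actually makes large populations shrink faster. That is the ridiculous assumption, stated in arithmetic.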

Given that this paper then formed part of the basis of Acemoglu's Clark medal, I think we can safely conclude that people are very susceptible to bullshit when written in equations. More evidence will come later in the comments section, as, conditioned on getting hits, I suspect several people will be taken in by the AJR model, and will defend it vigorously. 

This episode shows some truth to Bryan Caplan's view that "The main intellectual benefit of studying economath ... is that it allows you to detect the abuse of economath."

Given the importance of signalling in all walks of life, and given the power of math not just to illuminate and to signal, but also to trick, confuse, and bewilder, it thus makes perfect sense that roughly 99% of the core training in an economics Ph.D. is in fact in math rather than economics.


Update: Sure enough, as I predicted above, we have a defender of the AJR model in the comments. He argues the AJR model shows why math clarifies, even while his posts unwittingly convey the opposite.

Above, I took issue with the steady-state relationship in the model and the fallacious assumption which yields it. The commenter points out, correctly, that outside of the steady state, the AJR model actually implies that there are two conflicting forces. But so what? My argument was about the steady state. If one fixes the wrong assumption, steady-state income in the Malthusian model will be equal to subsistence income, and thus the main argument for a correlation between population density and income outside of the steady state will also be shut down.

Second, the commenter unfairly smears Acemoglu & Co., writing that the real problem is not with their model, but that they didn't interpret their model correctly: "goes ahead in the empirical work to largely, in contrast to what their model says, take population density as a proxy for income!"

Thus I'd like to defend Acemoglu against this unfair smear. In preindustrial societies, there were vast differences in population densities between hunter-gatherer groups and agricultural societies, even though there were not vast income differences between the two. In fact, quite surprisingly, hunter-gatherer societies often look to have been richer despite working less (read Greg Clark), and despite far more primitive technology. Thus it is quite reasonable to assume, as AJR did, that differences in technology would likely swamp the differences due to other population shocks (A more important than epsilon). The Black Death might have doubled or tripled incomes, but settled agrarian societies might have population densities 1,000 times as large as primitive hunter-gatherer tribes. This isn't an airtight argument, but, given their model, I believe AJR's empirical extension is reasonable, particularly given that they provide a caveat. The problem is that their model is not reasonable.

The commenter goes on to argue that I've gotten AJR's conclusion backward: "You claim that the point they are making is 'population density will be a decent proxy for income in a Malthusian model.' The point they are making is explicitly the exact opposite: that 'caution is required in interpreting population density as a proxy for income per capita.'"

Huh? The first two lines of the abstract of the AJR paper read: "Among countries colonized by European powers during the past 500 years, those that were relatively rich in 1500 are now relatively poor. We document this reversal using data on urbanization patterns and population density, which, we argue, proxy for economic prosperity."

Seems clear here they are arguing for using it as a proxy.



Wednesday, January 24, 2018

Reminder: Most Published Research is Probably Wrong!

At least in some way. Don't get me wrong, there is a lot of great research out there. However, it has occurred to me that many people are much too trusting of published research, particularly when it is written by people from fancy universities with fancy letters behind their names and published in prestigious journals. I saw this recently during a very lively session on the Decline in US Manufacturing Growth and Productivity at the AEA meetings in Philadelphia several weeks ago. Several people asked David Autor why his results on the impact of China on US innovation were different from what other prominent researchers had found. (One of the answers, of course, is that there is little reason to believe the competing research, but I digress...) Similarly, one of my complaints about the otherwise excellent Trade Talks podcast with Chad Bown is that published results, particularly by prominent researchers, are generally taken at face value, with not enough discussion, in my view, of the potential caveats and shortcomings of the methodologies employed.

The reality is that science is difficult, and that Cowen's First Law (there is something wrong with everything!) applies to economic research.

Here's a moving video from Neil deGrasse Tyson which I mostly love. My only issue was his description of science:
One of the great things about science, is that it's an entire exercise in finding what is true.
You have a hypothesis, you test it. I get a result. A rival of mine double checks it, because they think I might be wrong. They perform an even better experiment than I did, and they find out, “Hey, this experiment matches! Oh my gosh. We’re on to something here!” And out of this rises a new, emergent truth.
This is a description of everything I wish science was! Perhaps it is an accurate description of the hard sciences (I'm skeptical), but this is not how the social sciences operate. In practice, when a top researcher has a major finding, other top researchers, with rare exceptions, do not check it. Occasionally, grad students or less prominent researchers will overturn the result, but they will find that journals simply aren't the least bit interested in publishing papers which reverse seminal papers. Thinking like an economist, this creates some rather perverse incentives. If you are a well-connected researcher in a prominent department, you are well-incentivized to publish as much as possible. This means creating research which appears sophisticated, and it also means not pissing off the people who will judge your research. On the contrary, it implies that there are benefits from having a lot of close friends (what deGrasse Tyson calls your "rivals") in the profession. You don't accomplish this by pointing out that another researcher's results disappear when you control for latitude. As a result, many top researchers are in fact incentivized to crank out many low-quality papers with seemingly blockbuster results.

Part of the way this system survives is that there is a culture frowning on writing "comment papers"; the other reason is that there is, fortunately for the existence of this system, a willing population of "sheep", the "true believers", available to consume and believe this research.

In any case, on my ride back to Moscow from Philadelphia, I fired up Stata and took a second look at some of the research which found that the China shock led to a huge increase in productivity and patenting in Europe, published in a leading journal. The thesis sounded quite dubious to me from the beginning. It turned out that including sectoral fixed effects -- a very basic control -- killed the results. If I were to write this up, the journal that published the original would never, in a million years, accept it. Second, although the original authors seem to me like fine people, economists traditionally behave in a way which is mafia-level shady (see the comments) when their research comes under attack. Partly, they have to do this: since the masses believe that most top research is correct, it is seen as a huge black mark on someone's reputation to have a paper overturned. If there were widespread knowledge that science is difficult and that most papers have flaws, this might not be so necessary. Then, perhaps, we could get a bit closer to Neil deGrasse Tyson's idealized view of science.
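The fixed-effects point is easy to demonstrate on simulated data. This is a toy illustration of the econometric mechanism, not a reanalysis of the actual paper: when a "treatment" varies mostly across sectors, and sectors differ in outcomes for unrelated reasons, a naive regression finds a large effect that vanishes once sector fixed effects are added.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sectors, n_firms = 40, 50
sector = np.repeat(np.arange(n_sectors), n_firms)

# Treatment is mostly a sector-level exposure plus some firm-level noise.
exposure = rng.normal(size=n_sectors)
x = exposure[sector] + 0.3 * rng.normal(size=n_sectors * n_firms)

# The outcome is driven by a sector-level confound correlated with exposure,
# not by x itself -- the true causal effect of x is zero by construction.
y = 2.0 * exposure[sector] + rng.normal(size=n_sectors * n_firms)

def ols_slope(a, b):
    """Bivariate OLS slope of b on a."""
    return np.cov(a, b)[0, 1] / np.var(a, ddof=1)

naive = ols_slope(x, y)  # large spurious slope (close to 2 in expectation)

# Demeaning within sector is equivalent to adding sector fixed effects.
def sector_mean(v):
    return (np.bincount(sector, weights=v) / n_firms)[sector]

within = ols_slope(x - sector_mean(x), y - sector_mean(y))
# within is close to zero: the "result" was entirely the sector confound.
```

The within-sector slope only uses firm-level variation, which here is pure noise, so the blockbuster coefficient evaporates with one extra line in the regression.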