Tuesday, October 30, 2018

Are Negative Interest Rates Expansionary? A Review of Eggertsson et al. (2017)

My answer is: yes, they are.

I recently assigned my students to write a referee report of Eggertsson et al., "Are Negative Nominal Rates Expansionary?" I chose the paper because I'm a huge fan of Eggertsson's work, the paper is well done, it has a nice synthesis of data and theory, and the topic is of central policy importance. The authors find that negative nominal interest rates may not be expansionary, and also, that under certain conditions, quite surprisingly, they may be contractionary.

I like the paper and see a clear contribution. However, I wish the authors had framed the paper slightly differently. My reading is that they show that negative reserve rates below the cost of hoarding cash can be contractionary under some conditions, and need other policies (negative lending rates for banks, Gesell taxes) to make the policy more expansionary. This is still an important and useful point -- I learned something from them. Yet, since not all of the key modeling assumptions are true, and since "negative interest rates" can include (1) negative rates above the cost of hoarding cash, (2) negative borrowing rates for banks, or (3) Gesell taxes on banks (this may seem far-fetched, but something in this spirit has, in fact, happened), I think the paper's actual result is narrower than "negative rates aren't expansionary and may be contractionary", and does not apply to the negative interest rates that we have actually seen.

Where does the contractionary result come from? It comes from assuming bank profits can affect intermediation costs. Since negative interest rates are a tax on banks, they hurt profitability. But how much did banks actually pay in taxes due to negative interest rates? Color me skeptical that it hurt profitability enough to have materially damaged intermediation. We are talking about a second- or third-order effect. And, to the extent that negative interest rates did increase loans, the impact on profitability would be uncertain. Lastly, the ECB had a tiered rate system, in which required reserves were still paid a positive interest rate, and only the marginal reserves over some threshold were taxed.

One concern is that the ECB also had a program (dubbed TLTRO -- can't central banks print some money and hire better PR people?) in which banks could borrow money at negative interest rates on the condition that they lend it out. Under that program, hundreds of billions of euros in loans were taken out, likely many because of the policy, and these "negative interest rates" would also have added several billion euros directly to bank profits over the last few years. The model in the paper does not consider this possibility.

Two additional key assumptions of the model are that banks obtain 100% of their financing from deposits (as many of my students noted) and that all bank profits are paid out to households. To their credit, the authors fully acknowledge that the first of these assumptions is important and not exactly true. In Sweden, for example, the deposit share is less than 50%. To the extent that financing comes from other sources, banks may actually care about the spread between the reserve rate and the loan rate, particularly if there is a cost of holding cash (say, 1-1.5%). In practice, after the financial crisis, banks like Bank of America cut their dividend payments to almost nothing. In this environment, banks will respond to negative interest rates by making more loans and lowering the interest rate on loans, and may even pay more dividends, boosting aggregate demand. Nothing in the paper contradicts this logic.

My intuition before reading this paper was that a negative interest rate would be an effective tax on excess reserves, and thus would make a bank want to lend out money instead of holding more reserves. This appears not to happen in the model because banks would hoard physical cash instead of reserves (I think the authors could have been more explicit about what banks do with their mountain of excess reserves when rates go negative -- the calibration exercise starts with few excess reserves). There is an implicit assumption that because deposit rates didn't drop (much) below zero, the cost of holding cash must also be close to zero, and, in particular, lower than the 0.4-0.7% negative interest rates that we've seen. My gut feeling is that the cost is likely closer to 1-1.5%, since banks did not, in fact, hoard physical cash. One reason they did not, however, is that central banks aren't stupid -- it seems at least some central banks take into account changes in the cash holdings of banks when setting the limits for negative-interest-free reserves (see box 2 of this BIS document on negative rates). In addition, there is a paper arguing that banks responded to negative interest rates by raising fees instead, effectively lowering deposit rates below zero -- a topic the authors do address (hat tip to a student).

Perhaps banks did not lower their deposit rates below zero not because there is no cost to holding cash, but because they didn't know how depositors would react, or believed the policy was temporary and were afraid of a hysteretic effect on depositors, or due to ongoing concerns with their quantities of bad long-term loans. It could also be the case that holding small amounts of cash is relatively costless, but once you have football fields of cash, suddenly you need to hire top-notch security and buy insurance, so the cost could increase non-linearly. The authors, once again, should be commended for being explicit that relaxing this assumption would alter the conclusions of the paper. Yet even so, I suspect there is a limit to the cost of hoarding cash.

I could go on here with theory. This is a one-period model. A negative headline rate probably sends a stronger signal about future low short-term rates (a commitment to irresponsibility) than keeping the rate at zero, and thus could influence longer maturities (as the evidence finds). It could also underscore a central bank's commitment to do "whatever it takes" to achieve inflation. The model is also of a closed economy -- negative interest rates could trigger capital outflows and a depreciated exchange rate, particularly if they had an impact on the yield curve.

It's also possible to introduce other factors which would imply negative nominal rates are expansionary. See this nice paper by Davide Porcellacchia (who we tried unsuccessfully to hire last year at NES), which argues that negative rates might still lead consumers to save less overall. 

Thus, theoretically, the result is ambiguous. (Isn't that always the case?) Thus, it comes down to empirics. 

I also had some minor quibbles with the empirics in the paper. At one point, they write, speaking of Denmark, that "the negative policy rate has not been transmitted to deposit rates." It actually looks to my eye like there were slight declines in the deposit rates each time the Danish central bank crossed into negative territory. The household deposit rate was cut roughly in half after the second cut in 2014. If you squint, it also looks like the corporate deposit rate dipped below zero at several points. 

Admittedly, in their evidence for Switzerland and Japan, going negative did appear to have, at best, a very minor impact on deposit rates (see below). However, for both Germany, and for the Euro Area as a whole, it looks to me like going negative might have had close to a 1-for-1 impact on deposit rates. Certainly, in at least several cases negative policy rates did translate into lower deposit rates. 

Even more important than deposit rates are lending rates (Figure below). Again, it does look like, in at least a few cases, a negative policy rate did translate into lower lending rates. In the Japanese case, this happened despite the fact that deposit rates did not fall.

It would also have been nice to look at surprise announcements of negative rates during ZLB episodes, and at how those announcements affected a variety of interest rates in the economy, including the rates at which banks borrow from each other and longer-term yields, as well as exchange rates and the stock market (admittedly, from what I've seen, this evidence looks mixed). The BIS found that negative interest rates passed through fully into money markets. Switzerland and Denmark did, after all, institute negative rates in part to stabilize their exchange rates.

I think more work needs to be done on this topic, but from the theory and the data, I don't see much that suggests that negative interest rates can't or didn't stimulate the economy. I am persuaded that negative reserve rates are best complemented with other policies, and I credit this paper with making the point. I believe central banks should do more to boost lending during liquidity trap periods, such as through a full program of quantitative targeting of loan levels, regulation, fines, negative interest rates, or subsidies for loans made. China in fact was said to do quantitative targeting during the Great Recession, and France and Germany did things like this during the Bretton Woods period. (See this nice paper by Eric Monnet.)

Overall, I'm a big believer in much more aggressive monetary policy at the zero lower bound than what we've seen. While I think a healthy degree of skepticism about new policy tools during liquidity trap periods is prudent (mistakes were probably made with the rollout and PR around QE, even if it does seem to have been stimulative), and while I also think papers like this one move the debate forward, I've also long been skeptical of the idea that monetary policy can't be effective at the ZLB. I worry that this paper may be misinterpreted to suggest that central banks shouldn't try negative interest rates at the ZLB, and should opt to do nothing instead -- despite the correct conclusion being that central banks should use negative reserve rates in conjunction with other policies. Something like this happened with people who read Paul Krugman only occasionally, but not in the round. They walked away believing that monetary policy had "shot its wad" -- in the words of a former president. Had they read every word Krugman wrote, they would have believed instead that only traditional monetary policy was ineffective, and that central banks could still stimulate the economy by credibly committing to higher future levels of inflation. I observed this when I worked in the Obama CEA, where some economists also believed that nothing more could be done on monetary policy, leading them not to want to recommend to the President that he fill his vacant FOMC seats. There's a case to be made that this was the single biggest policy mistake Obama made while in office. And that mistake, to a large extent, explains how we got to where we are.

Note: I should mention that many of the points above were motivated, directly or indirectly, by the insightful referee reports of my students! 

Tuesday, February 6, 2018

On the Uses (and Abuses) of Economath: The Malthusian Models

Many American undergraduates in economics who are interested in doing a Ph.D. are surprised to learn that the first year of an Econ Ph.D. feels much more like a Ph.D. in solving mathematical models by hand than like learning economics. Typically, there is very little reading or writing involved, but loads and loads of fast algebra are required. Why is it like this?

The first reason is that mathematical models are useful! Take the Malthusian model. All you need is four simple assumptions: (1) that the birth rate is increasing in income, (2) that the death rate is decreasing in income, (3) that income per person is negatively related to population, and (4) that the rate of technological growth is slow relative to population growth, and you can explain a lot of world history. The model leads you to the surprising conclusion that income in a Malthusian economy is determined solely by the birth and death rate schedules, and is uncorrelated with technology. Using this model, you can explain, for example, why incomes before 1800 were roughly stagnant for centuries despite improving technology (technological advance just resulted in more people; see the graph of income proxied by skeletal heights below). It also predicts why the Neo-Europes -- the US, Australasia, and the Southern Cone countries -- are rich: they were depopulated by disease, and then Europeans moved in with lots of land per person. It is a very simple, and yet powerful, model. And it makes (correct) predictions that many historians (e.g., Kenneth Pomeranz), scientists (e.g., Jared Diamond), and John Bates Clark-caliber economists (see below) get wrong.
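The logic is easy to check with a few lines of code. Below is a minimal simulation of the classical model, with functional forms and parameters I invented purely for illustration (they are not from any particular paper): because births and deaths respond to income, income converges to the level where the two schedules cross, no matter what the level of technology is.

```python
# A minimal sketch of the classical Malthusian model. Functional forms
# and parameters are invented for illustration only. The key feature:
# population GROWTH responds to income, and income falls with population.

def steady_income(A, P=0.25, alpha=0.5, steps=3000):
    """Simulate forward and return long-run income for technology level A."""
    for _ in range(steps):
        y = A * P ** (-alpha)          # (3) income falls with population
        births = 0.02 * y              # (1) birth rate increasing in income
        deaths = 0.02 / y              # (2) death rate decreasing in income
        P *= 1 + (births - deaths)     # population adjusts; (4) A held fixed
    return A * P ** (-alpha)

# Doubling or quadrupling technology leaves long-run income unchanged --
# it only raises the steady-state population:
for A in (1.0, 2.0, 4.0):
    print(A, round(steady_income(A), 3))
```

With these made-up schedules, income always converges to the level where births equal deaths (here, 1.0), regardless of A -- exactly the "income is uncorrelated with technology" prediction.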

A second beneficial reason is signaling. This reason is not to be discounted, given the paramount importance of signaling in all walks of life (still not sufficiently appreciated by all labor economists). Smart people do math. Even smarter people do even more complicated-looking math. I gratuitously put a version of the Melitz model in my job market paper, and when I interviewed, someone remarked that I was "really teched up!" Simple models are not something that serious grown-ups partake in. Other social science disciplines have their own versions of peacock feathers. In philosophy, people write in increasingly abstruse terms, using obscure language and jargon, going to enormous effort to choose words that send as many readers as possible to the dictionary. Unfortunately, the Malthusian model above, while effective in terms of predictive power, is far too simple to play a beneficial signaling role, and as a result would likely have trouble getting published if introduced today.

A third reason to use math is that it is easy to use math to trick people. Often, if you make your assumptions in plain English, they will sound ridiculous. But if you couch them in terms of equations, integrals, and matrices, they will appear more sophisticated, and the unrealism of the assumptions may not be obvious, even to people with Ph.D.'s from places like Harvard and Stanford, or to editors at top theory journals such as Econometrica. A particularly informative example is the Malthusian model proposed by Acemoglu, Johnson, and Robinson in the 2001 version of their "Reversal of Fortune" paper (model starts on the bottom of page 9). Note that Daron Acemoglu is widely regarded as one of the most brilliant economic theorists of his generation, is a tenured professor of Economics at MIT, was recently the editor of Econometrica (the top theory journal in all of economics), and was also awarded a John Bates Clark medal (the 2nd most prestigious medal in the profession) in large part for his work on this paper (and a closely related paper). Also keep in mind this paper was eventually published in the QJE, the top journal in the field. Very few living economists have a better CV than Daron Acemoglu. Thus, if we want to learn about how economath is used, we'll do best to start by learning from the master himself.

What's interesting about the Acemoglu et al. Malthusian model is that they take the same basic assumptions, assign a particular functional form to how population growth is influenced by income, and arrive at the conclusion that population density (which is proportional to technology) will be proportional to income! They use the model:

p(t+1) = rho*p(t) + lambda*(y-ybar) + epsilon(t),

where p(t+1) is population density at time t+1, p(t) is population density at time t, rho is a parameter (perhaps just less than 1), lambda is a parameter, y is income, ybar is the level of Malthusian subsistence income, and epsilon is an error term. If you impose a steady state (p* and y*) and solve for p*, you get:

p* = (lambda/(1-rho)) * (y* - ybar)

I.e., you get that population density is increasing in income, and thus that income per person should have been increasing throughout history. Thus, these guys from MIT were able to use mathematics and overturn one of the central predictions of the Malthusian model. It is no wonder, then, that Acemoglu was then awarded a Clark medal for this work.

Except. This version doesn't necessarily fit the skeletal evidence above, although that evidence may be incomplete and imperfect (selection issues?). What exactly was the source of the difference between the classical Malthusian model and the "MIT" Malthusian model? The crucial assumption, unstated in words but there in Greek letters for anyone to see, was that income affects the level of population, but not the growth rate of population. Stated differently, this assumption means that a handful of individuals could and would out-reproduce the whole of China and India combined if they had the same level of income. (With rho less than one, say, 0.98, the first term implies a contraction of millions of people in China/India. With income over subsistence, we then need to parameterize lambda to be large enough that overall population can still grow in China. But once we do this, we have the implication that even a very small population would have much larger absolute growth than China, given the same income.) Obviously, this is quite a ridiculous assumption when stated in plain language. A population can grow by, at most, a few percent per year. 100 people can't have 3 million offspring. What this model does successfully is reveal how cloaking an unrealistic assumption in mathematics can make said assumption very hard to detect, even by tenured economics professors at places like MIT. Math in this case is used as little more than a literary device designed to fool the feebleminded. Fortunately, someone caught the flaw, and this model didn't make it into the published QJE version. Unfortunately, the published version still included the view that population density is a reasonable proxy for income in a Malthusian economy, which of course it is not. And the insight that Malthusian forces led to high incomes in the Neo-Europes was also lost.
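To see the absurdity concretely, one can simply iterate the law of motion. The parameters below are invented purely for illustration (they are not AJR's calibration); the point is only that the income term lambda*(y - ybar) adds the same absolute number of people to any population, however small:

```python
# The AJR law of motion: p(t+1) = rho*p(t) + lam*(y - ybar).
# Parameters here are made up for illustration, not taken from the paper.
rho, lam, ybar = 0.98, 2_000_000, 1.0
y = 1.5                                  # income above subsistence

def next_pop(p):
    """One period of the AJR population dynamics at income y."""
    return rho * p + lam * (y - ybar)

# A 100-person village gains roughly a million people in one period,
# while a 100-million-person economy shrinks -- at the very same income.
for p in (100, 100_000_000):
    print(p, "->", round(next_pop(p)))
```

The income term is a flat addition of lam*(y - ybar) people per period, independent of p, which is exactly the "100 people out-reproduce China" implication stated in code.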

Given that this paper then formed part of the basis of Acemoglu's Clark medal, I think we can safely conclude that people are very susceptible to bullshit when written in equations. More evidence will come later in the comments section, as, conditioned on getting hits, I suspect several people will be taken in by the AJR model, and will defend it vigorously. 

This episode lends some truth to Bryan Caplan's view that "The main intellectual benefit of studying economath ... is that it allows you to detect the abuse of economath."

Given the importance of signaling in all walks of life, and given the power of math, not just to illuminate and to signal, but also to trick, confuse, and bewilder, it thus makes perfect sense that roughly 99% of the core training in an economics Ph.D. is in fact in math rather than economics.

Update: Sure enough, as I predicted above, we have a defender of the AJR model in the comments. He argues the AJR model shows why math clarifies, even while his posts unwittingly convey the opposite.

Above, I took issue with the steady-state relationship in the model and the fallacious assumption which yields it. The commenter points out, correctly, that outside of the steady state, the AJR model actually implies that there are two conflicting forces. But so what? My argument was about the steady state. If one fixes the wrong assumption, steady-state income in the Malthusian model will be equal to subsistence income, and thus the main argument for a correlation between population density and income outside of the steady state will also be shut down.

Second, the commenter unfairly smears Acemoglu & Co., writing that the real problem is not with their model, but that they didn't interpret their model correctly: "goes ahead in the empirical work to largely, in contrast to what their model says, take population density as a proxy for income!"

Thus I'd like to defend Acemoglu against this unfair smear. In preindustrial societies, there were vast differences in population densities between hunter-gatherer groups, and agricultural societies, even though there were not vast income differences between the two. In fact, quite surprisingly, hunter-gatherer societies often look to have been richer despite working less (read Greg Clark), and despite far more primitive technology. Thus it is quite reasonable to assume, as AJR did, that it is likely that differences in technology would swamp the differences in other population shocks (A more important than epsilon). The Black Death might have doubled or tripled incomes, but settled agrarian societies might have population densities 1000 times as large as primitive hunter-gatherer tribes. This isn't an airtight argument, but, given their model, I believe AJR's empirical extension is reasonable, particularly given that they provide a caveat. The problem is that their model is not reasonable.

The commenter goes on to argue that I've gotten AJR's conclusion backward: "You claim that the point they are making is "population density will be a decent proxy for income in a Malthusian model." The point they are making is explicitly the exact opposite: that "caution is required in interpreting population density as a proxy for income per capita." 

Huh? The first two lines of the abstract of the AJR paper read: "Among countries colonized by European powers during the past 500 years, those that were relatively rich in 1500 are now relatively poor. We document this reversal using data on urbanization patterns and population density, which, we argue, proxy for economic prosperity."

Seems clear here they are arguing for using it as a proxy.

Wednesday, January 24, 2018

Reminder: Most Published Research is Probably Wrong!

At least in some way. Don't get me wrong, there is a lot of great research out there. However, it has occurred to me that many people are much too trusting of published research, particularly when it is written by people from fancy universities with fancy letters behind their names and published in prestigious journals. I saw this recently during a very lively session on the Decline in US Manufacturing Growth and Productivity at the AEA meetings in Philadelphia several weeks ago. Several people asked David Autor why his results on the impact of China on US innovation were different from what other prominent researchers had found. (One of the answers, of course, is that there is little reason to believe the competing research, but I digress...) Similarly, one of my complaints about the otherwise excellent Trade Talks podcast with Chad Bown is that published results, particularly by prominent researchers, are generally taken at face value, with not enough discussion, in my view, of the potential caveats and shortcomings of the methodologies employed.

The reality is that science is difficult, and that Cowen's First Law (there is something wrong with everything!) applies to economic research.

Here's a moving video from Neil deGrasse Tyson which I mostly love. My only issue was his description of science:
One of the great things about science, is that it's an entire exercise in finding what is true.
You have a hypothesis, you test it. I get a result. A rival of mine double checks it, because they think I might be wrong. They perform an even better experiment than I did, and they find out, “Hey, this experiment matches! Oh my gosh. We’re on to something here!” And out of this rises a new, emergent truth.
This is a description of everything I wish science was! Perhaps it is an accurate description of the hard sciences (I'm skeptical), but this is not how the social sciences operate. In practice, when a top researcher has a major finding, other top researchers, with rare exceptions, do not check it. Occasionally, grad students or less prominent researchers will overturn the result, but they will find that journals simply aren't the least bit interested in publishing papers which reverse seminal papers. Thinking like an economist, this creates some rather perverse incentives. If you are a well-connected researcher in a prominent department, you are well-incentivized to publish as much as possible. This means creating research which appears sophisticated, and it also means not pissing off the people who will judge your research. On the contrary, it implies that there are benefits to having a lot of close friends (what deGrasse Tyson calls your "rivals") in the profession. You don't accomplish this by pointing out that another researcher's results disappear when you control for latitude. As a result, many top researchers are in fact incentivized to crank out many low-quality papers with seemingly blockbuster results.

This system survives partly because there is a culture frowning on writing "comment papers", and partly because there is, fortunately for the existence of this system, a willing population of "sheep", the "true believers", available to consume and believe this research.

In any case, on my ride back to Moscow from Philadelphia, I fired up Stata and took a second look at some of the research which found that the China shock led to a huge increase in productivity and patenting in Europe, published in a leading journal. The thesis sounded quite dubious to me from the beginning. It turned out that including sectoral fixed effects -- a very basic control -- killed the results. If I were to write this up, the journal that published it would never, in a million years, accept it. Secondly, although the original authors seem to me like fine people, traditionally, economists behave in a way which is mafia-level shady (see the comments) when their research comes under attack. Partly, they have to do this: since the masses believe that most top research is correct, having a paper overturned is seen as a huge black mark on someone's reputation. If there were widespread knowledge that science is difficult and most papers have flaws, this might not be so necessary. Then, perhaps, we could get a bit closer to Neil deGrasse Tyson's idealized view of science.
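For readers unfamiliar with the mechanics, here is a toy illustration of why sectoral fixed effects can kill a result, using synthetic data I made up (this is not the paper's actual data or specification): when an unobserved sector-level factor drives both the "shock" and the outcome, the naive cross-sectional regression finds a large coefficient, while demeaning by sector -- the fixed-effects "within" transformation -- drives it toward zero.

```python
import numpy as np

# Synthetic data, invented for illustration only: an unobserved
# sector-level factor drives both the measured "shock" and patenting,
# creating a spurious cross-sectional correlation between the two.
rng = np.random.default_rng(0)
n_sectors, n_firms = 20, 50
sector = np.repeat(np.arange(n_sectors), n_firms)

sector_factor = rng.normal(size=n_sectors)            # unobserved confounder
shock = sector_factor[sector] + 0.5 * rng.normal(size=sector.size)
patents = 2.0 * sector_factor[sector] + rng.normal(size=sector.size)

def ols_slope(x, y):
    """Bivariate OLS slope coefficient."""
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / (x @ x)

def within(g, v):
    """Subtract group means: the fixed-effects 'within' transformation."""
    return v - (np.bincount(g, v) / np.bincount(g))[g]

naive = ols_slope(shock, patents)                      # picks up the confounder
fe = ols_slope(within(sector, shock), within(sector, patents))

print(round(naive, 2), round(fe, 2))  # naive is large; fe is near zero
```

The naive slope mostly measures the sector factor's effect on patenting; once both variables are demeaned within sectors, the remaining variation is pure noise and the coefficient collapses.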