Friday, April 17, 2020

Debate Over Covid Fatality

So, first came this horrendous WSJ article by a Stanford couple claiming the true coronavirus death rate may be much lower than people think. They suggest that the real rate may be as low as .01%, and thus that maybe we should just let the disease run its course. After all, if just 20,000 to 40,000 Americans die, what is the harm?

Of course, now, with 37,000 Americans already dead from Covid-19 and the body count still rising daily (some 2,500 just yesterday), this estimate looks foolish. Not to say "I told you so", but the WSJ article looked problematic to me in real time, so I tweeted about it: https://twitter.com/TradeandMoney/status/1243215666297044995

In retrospect, my response looks prescient, in part because China's own government has since revised up its estimate of deaths in Wuhan.

Debraj Ray, an economist whom I respect a lot, and who is an editor of the American Economic Review, also posted some interesting thoughts about the Covid-19 death rate here, influenced by the Stanford couple's WSJ article. He argued that we should rethink lockdowns given the uncertainty over the fatality rate. I posted a response on his blog, which I'll repost here:


Hello Debraj. Thought-provoking post. I see some potential problems in your logic, however. Why are you ignoring the growth in deaths (or lack thereof) in New York before March 14th? If it's true that there were no deaths before then, why wouldn't it follow that cases before then were scarce? Of course, there were likely few deaths recorded before March 14th because there was little testing -- if you die of flu symptoms with no test for Covid available, it's likely you are not marked as dying of coronavirus. To the extent this is true, the growth rate of deaths is slower than what you have merely assumed. And if it turns out that deaths double every four days, then that would imply either that New York had its first case before Wuhan did, or that the death rate is even higher than 1%.

Lastly, Bhattacharya's stuff has already been debunked. The death rate in Vo was 1%. The Chinese data should not be taken at face value. The NBA is not a random sample of the US. The Stanford couple mention .01% as a plausible death rate. But some Italian provinces already have population death rates as high as .065%, and that is assuming a 100% infection rate, which can't be right, as these provinces are still recording masses of new cases despite a draconian quarantine. In other Italian towns, .1% of the population has already died, and many people have also tested negative.

After all this, I should mention that I actually agree the death rate could well be much less than 1%. Something in the range of .4-.6% seems plausible to me, though the virus spreads more easily than the flu since there is no vaccine. On the other hand, .1% seems ruled out by what we already know. This would explain why Italian hospitals were overwhelmed so early, and why they are not overwhelmed by the flu.

Also, you write that people have freaked out too much. Have you paid any attention, at all, to what has happened in Italy? The freak-out there has not prevented ICU units from becoming completely overwhelmed. They are essentially letting people over the age of 65 die for lack of ventilators. Do you really think this mass "freaking out" in Italy makes no sense?

Very Respectfully Submitted,
Doug Campbell

Debraj then provided a thoughtful response.


Hi Doug, thanks. I wish we knew the truth behind fatality rates. You are right that it could be as high as 0.4%. My post is not written with a sense of dramatic certainty. That said (1) could you give me references to the Vo fatality rate? (2) I don't know what you mean by trusting the "Chinese data" --- the data has to do with airlifts out of Wuhan where everyone was tested, (3) the flu rate is for a particular distribution of ages (say, the US distribution), and we need to correct for that age distribution when studying regions with a high proportion of elderly, and (4) please read carefully before commenting on my interpretation. My post is perfectly consistent with OVERALL deaths being higher by a factor of 3 or more. I don't know what you want to include under the term "freaking out," but I am not including the terrible sense of sorrow that people feel for losing their loved ones. I simply refer to one's *own* fear of being fatally sick. Thanks.

He agrees that a death rate as high as .4% is plausible. In response to his points:

    (1) I actually can't now find the Vo fatality rate that I had seen earlier when googling, although roughly 89 people tested positive in the town's first round of testing, and it had a high-profile early death: https://www.theguardian.com/commentisfree/2020/mar/20/eradicated-coronavirus-mass-testing-covid-19-italy-vo

   (2) Bhattacharya's first piece of evidence, using Chinese data, takes the fatality rate in Wuhan at face value, while making an adjustment to the infection rate. I was suspicious of the Chinese fatality rate. Lo and behold, under pressure, China has now revised up its fatality estimates for Wuhan by 50% -- and many people, myself included, suspect the true figure could be even higher than that. I won't try to claim I'm any sort of genius here -- it was obvious the Chinese data were fake. They still are. It was troubling to see the Stanford duo take them at face value.

   (3) I agree we need to adjust for ages. Also, in Italy, Wuhan, and Iran, the hospitals were clearly overwhelmed. But the death rate in Lombardy is now .12% of the population. Over two-thirds of the people tested there tested negative, and they only test high-risk people. In addition, they have undoubtedly missed a lot of deaths, and people are still dying. Add all this to the fact that their hospitals were overwhelmed by the end of February, and it's clear this is vastly more deadly than the flu. You had written: "That calls for a fatality rate around 0.1%". Even in New York City, the death rate is already around .1% of the population (that is, % of the population before the exodus), and over half of the people tested there have tested negative. And this came in the US, without any overwhelmed hospitals and with a likely similar population structure, if healthier and wealthier than average. Even if we assume that 20% of the New York City population has already been infected, we get a fatality rate of around .5%; if 25%, then .4%. Given the exodus of people out of the city, and a 60% negative test rate among those with severe symptoms or direct exposure, it's probably tough to argue that more than 25% of NYC residents could have been infected.

   (4) I still don't see a lot of evidence that people are freaking out way too much. For example, I just went to a church where old people were still packed en masse and kissing icons. At least some of New York's bars were reportedly packed before they were closed. Boris Johnson shook everyone's hands in the Covid ward. Rudy Gobert made sure to touch every microphone. People are protesting to open the economy back up. The draconian restrictions put into place have only modestly lowered the rate of infection. Even in Italy, six weeks of quarantine have only halved the daily rate of infections. To change people's behavior, you need them to "freak out" to a certain degree.

   In any case, I am also opposed to a prolonged shutdown. Only, my alternative is Vo-style ubiquitous testing, or at least a vigorous "test and trace" regime, plus some surveillance testing, ramped-up N95 mask production for wearing in public, and other kinds of surveillance (cops with temperature helmets, as in China), etc. Nevertheless, it's clear now that Covid-19 poses a significantly higher danger in terms of mortality than the flu does.
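The back-of-envelope fatality-rate arithmetic in point (3) can be sketched in a few lines. To be clear, the death share and infection shares below are the rough assumptions from this post, not official data:

```python
# Implied infection fatality rate (IFR) from a population-level death share
# and an assumed share of the population infected. All inputs are this
# post's back-of-envelope assumptions about NYC, not precise figures.

def implied_ifr(deaths_share_of_population, infected_share):
    """IFR = (deaths / population) / (infected / population)."""
    return deaths_share_of_population / infected_share

deaths_share = 0.001  # roughly 0.1% of NYC's pre-exodus population has died

for infected_share in (0.20, 0.25):
    ifr = implied_ifr(deaths_share, infected_share)
    print(f"If {infected_share:.0%} infected -> implied IFR of {ifr:.2%}")
```

With these assumptions, 20% infected implies an IFR of 0.5%, and 25% implies 0.4%, matching the numbers above.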








Friday, February 28, 2020

How to Respond to the Next Recession: Central Bank Fiscal Support

Now that it looks like the next recession may be coming sooner rather than later, it's time to lay out what I think is clearly the best path forward for monetary policy in a liquidity trap (we should be more creative than we have been). This is an issue I've been thinking about seriously since long before I worked in the Obama CEA in 2011 -- I wrote my undergraduate thesis on Japan's liquidity trap in 2003. And Japan is still in a liquidity trap. Some things never change...

Fiscal Support
My idea is simple: Fiscal Support from the Fed to the Treasury. (Some might call it "Fiscal QE" -- a name I oppose -- or "Treasury remittances", or simply seigniorage, although I see this as slightly different.) The Federal Reserve should print money, buy government bonds, and then -- the crucial second step -- return those bonds to the Treasury! That's it. (Well, in fact, there are several more steps I'll discuss below, but these are the key ones...) In fact, the Fed already returns the interest on the bonds it holds to the Treasury. Over the ten-plus years since the first QE, this has added up to a substantial amount of money -- a modest, often-ignored side benefit of the QE program (in addition to lowering US government borrowing costs, and thus the debt).

Mechanisms
Why would this work? I can see several channels. First, smaller deficits, or even surpluses, would encourage governments to do fiscal stimulus. (How many politicians do you know who couldn't figure out a way to spend a surplus?) In fact, most countries in liquidity traps run such large deficits that after a few years, all but the most ardent Keynesians/MMT people recommend fiscal austerity to counterbalance them. Fiscal stimulus is likely to carry a bit more "oomph" than QE alone, particularly since this is a policy with an unlimited ceiling.

Secondly, this QE + fiscal transfer is likely to be more effective even if the government doesn't enact fiscal stimulus today. Why? First, it is a bit more difficult for the central bank to unwind. In addition, if the Fed just wanted a temporary stimulus, they would just go with normal QE rather than tying their hands by transferring these bonds to the Treasury. Thus, it is a signal of looser future monetary policy, and also a signal that the Fed is serious about getting the economy on track. Recall, a key problem in a liquidity trap is that the central bank needs to "credibly promise to be irresponsible" -- to convince the public that it will generate higher inflation, even after the crisis is over. What better way to do that than with "permanent QE"? One thing to note is that the central bank will still be able to raise interest rates later on, or increase reserve requirements, and so I don't think this will, in fact, inhibit the Fed from responding to inflation later on.

A third mechanism, albeit less important, is that, even if fiscal policy is non-reactive today, it will have to be reactive at some point in the future. This implies that people will, at minimum, expect to be taxed less in the future, or have government spending decreased by less.

A fourth mechanism is the classic QE mechanism: purchasing long-term bonds will lower long-term interest rates mechanically. A fifth mechanism is that lower long-term interest rates imply a devaluation of the currency. However, in this case, I'll say full stop that these mechanisms are likely to be modest, but for good reasons -- fiscal policy can be expected to be expansionary and thus to raise future interest rates.

Fiscal Support + Communication: 
Were I a central banker, I would not just do this policy quietly (in which case it's likely no one would notice). I would also publicly encourage the government to engage in fiscal stimulus. And I would assure market participants that the policy will stay in place and be ratcheted up each month until, at a minimum, the inflation target has been hit (even better would be to raise the inflation target, or in fact to target the price level, or to switch to a 6% NGDP target, but this can be debated elsewhere). This is akin to telling the market you will do "whatever it takes". In fact, committing to potentially doing more at first will likely allow you to do less later on.

Alternative Options:
Of course, this is not the only policy option on the table. Short term interest rates are 1.5%. It makes sense to cut them first. However, the market is smart, and it probably realizes that the Fed only has a few arrows left in this quiver. The closer the Fed gets to zero without a credible plan for what to do next, the less effective each successive rate cut will be.

Next, the Fed could go negative. I think they should. This could be an effective policy, particularly if the Fed subsidizes new bank loans (a la TLTRO, the ECB's program) in addition to taxing excess reserves. To prevent a negative effect on total bank profits, the Fed could simultaneously pay banks a higher positive interest rate on required reserves while taxing some smaller fraction of excess reserves over a target threshold. Central banks could also tax large cash holdings of banks. I am not necessarily opposed to going this route. But I also think it will be problematic to pass negative interest rates along to depositors. I also suspect that negative interest rates, absent bank subsidies and taxes on cash holdings, could be problematic and may limit how low a central bank can go (although subsidies, of course, are unlimited). Also, since I'm a liberal who sees value in many forms of public spending, I would prefer more public spending to higher bank subsidies. (Alternatively, since I'm a capitalist who values my hard-earned money, I'd prefer receiving a tax cut to a bank subsidy.) Also, many smart economists think that negative interest rates might be contractionary. And are very large bank subsidies, if needed, politically feasible? I have my doubts. Still, if I were Fed Chair, I would include negative interest rates, and new loan subsidies in particular, in the arsenal. I still believe it is a crime that the Fed did not go all the way to zero in the last crisis.

The Fed can also use forward guidance. Of course they should, but talk is always cheap. If the economy and inflation recover, of course the Fed will raise interest rates, and markets know this. Also, both this policy and plain old vanilla QE, while beneficial overall, tend to reduce the spread between short-term and long-term interest rates. This reduces bank profits, and may make banks more likely to wait to make loans. The Fed can promise to keep interest rates at zero, or at minus one, for one or two years, but can it credibly do so for 5 or 10 years? Most of the FOMC members won't even be around in five years. And FOMC voting membership itself rotates every year.

This leads me to vanilla QE. Why not just do more QE? Well, yes, the Fed should. Also, after an announcement that X amount of bonds will be bought each month, if the Fed's growth and inflation targets are not met, the Fed should increase the monthly amount of bond purchases. This is what they did not do last time around, and what the BoJ has also refused to do. One argument against QE is that it was a public relations disaster. Many economists still believe it did not work. And, even though the Fed did QE last time, the economy was very slow to recover. Thus, if the Fed does merely resort to QE this time, they need to do it much more aggressively than before. They could, for example, promise to double the bond purchases every month until NGDP hits a certain target. They would also need a new way to market it. Instead of calling it just "QE", add a name that everyday bond-trader bros on Wall Street can relate to: "QE on steroids", or "QE on cocaine".

How about raising the inflation target too? Of course the Fed should. That the Fed continues to maintain a 2% inflation target (which it rarely even hits), now that we know what a problem the ZLB is, can be thought of as a badge of incompetence. However, the problem circa 2009-2017 was that, although the Fed had an inflation target, it did not appear to have any ability to actually hit its own target. This almost certainly damaged the Fed's credibility.

Objections:
This will never happen!  Response: It already has! In fact, the Fed already remits interest payments to the Treasury, along with other remittances associated with its operating profits. I've heard it has also remitted bond principal before, although I can't find a link and haven't seen any hard proof of this myself. (Update: A commenter on Twitter suggests that the Bank of Japan and Bank of Sweden return principal to their governments, but I cannot verify this.)

The ECB will never do this! Response: The ECB has also already implemented this before. The ship has sailed.

This will encourage governments to overspend. Response: That's a feature of the plan, not a bug. Indeed, if central banks do not do this, central governments likely will. The only thing stopping Trump from doing this himself is that he doesn't know he can. Indeed, this is the one drawback: if the Fed does it first, Trump will learn that it's possible, and likely do too much. However, as mentioned above, the Fed will still be able to raise interest rates and increase reserve requirements.

Would Fiscal Policy Actually Work? Response: Yes it would. I actually lean toward the low side on fiscal multiplier estimates. However, I also don't think there is any serious doubt that large fiscal stimulus is expansionary. If a government sends everyone a $10,000 check (financed by the printing press), spending and inflation will go up.

What if it fails to generate growth? Response: Let's say for a second it had no effect at all on the economy. In that case, the only effect would be to lower the national debt, with nothing lost! That's still a pretty good deal.

Conclusion:
The Fed has the power to create money. In a downturn, the Fed should simply print money, buy the bonds, and then return those bonds to the Treasury. For the very reason that this policy can be unlimited, it's likely to be more effective than any other option. This is simply an idea whose time has come.

PS: If any of you Macro theorists out there want to write this up into a "serious" paper, I'd be interested.






Monday, May 6, 2019

Journal Reporting Times, EJMR vs. Self-Reported Stats from Journals.

Methodology: I took the self-reported data I collected here (which come from ejmr here), and compared them to the official journal stats collected by Juan Carlos Suárez here.

Overall, the data line up fairly well.

Here is the correlation in journal first-response times, conditional on being sent out for review. The R-squared is a respectable .52, although AEJ: Micro is an outlier on EJMR, where it actually does better than what the official statistics suggest (with N=16, though...). Author-reported wait times are about two weeks longer on average, but on ejmr people round to the nearest month rather than the day, so some difference isn't surprising.





Here is the correlation between desk rejections, author vs. journal reported. The regression coefficient is close to 1, and the R-squared is .63. The intercept is -.072, as average reported desk-rejections are lower on ejmr.



Here's the Data:

Journal | Desk Reject Rate (EJMR) | Desk Reject Rate (Official) | First Response Time in Months, Conditional on Being Sent to Referees (EJMR) | (Official)
QJE | 61% | 66% | 1.5 | 1.5
JPE | 50% | 49% | 8.0 | 4.0
REStud | 34% | 49% | 4.5 | 3.4
ECTA | 23% | 32% | 3.6 | 3.4
AEJ: Macro | 21% | 38% | 2.9 | 3.4
AEJ: Applied | 37% | 45% | 2.5 | 2.7
Journal of Finance | 33% | 32% | 3.0 | 2.2
AER | 47% | 46% | 3.7 | 3.1
JEEA | 58% | 49% | 2.5 | 3.2
AEJ: Policy | 36% | 51% | 2.8 | 3.1
EJ | 45% | 55% | 3.7 | 3.6
AEJ: Micro | 25% | 38% | 3.4 | 4.5
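For anyone who wants to check the desk-rejection regression, here is a minimal sketch using the numbers transcribed from the table above (author-reported EJMR rates regressed on the official rates; the results should be close to, though may not exactly match, the figures quoted in the post, since the post's underlying data may differ slightly in rounding):

```python
# Simple OLS of EJMR desk-reject rates on official desk-reject rates,
# using the values from the table above (as fractions).

ejmr     = [0.61, 0.50, 0.34, 0.23, 0.21, 0.37, 0.33, 0.47, 0.58, 0.36, 0.45, 0.25]
official = [0.66, 0.49, 0.49, 0.32, 0.38, 0.45, 0.32, 0.46, 0.49, 0.51, 0.55, 0.38]

def ols(x, y):
    """OLS of y on x; returns (slope, intercept, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx, sxy ** 2 / (sxx * syy)

slope, intercept, r2 = ols(official, ejmr)
print(f"slope={slope:.2f}, intercept={intercept:.3f}, R^2={r2:.2f}")
```

Run on these twelve journals, the slope comes out close to 1, the intercept is negative (EJMR reports slightly lower desk-reject rates), and the R-squared is around .6.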

Monday, February 18, 2019

Ranking Economic Journals by Speed, Updated

I decided to update my previous ranking of economics journals by speed.

In the table below, # is the journal's rank by citations (may be 2 years old, I didn't update this), and then there is data on the acceptance rate, desk rejection rate, average time to acceptance, median time to acceptance, and the 25th and 75th percentiles (all in terms of months). Only journals with at least 5 observations were included. Data come from here.

There are some similarities with the previous ranking by speed. The QJE is still the fastest, and the JME still brings up the rear -- despite the fact I've doubled the sample size.

One note is that I just copy-pasted and ran my previous code, so if you notice any errors below, please let me know.

My big hesitation with doing this is that I think it may not be purely a good thing to incentivize journals to respond quickly. They tend to do this by desk-rejecting more papers, and I worry that desk rejections are often done on the basis of institutional affiliation, or perhaps after a less-than-fully-thorough examination of the paper. I may do an updated version which conditions on the paper going out to referees. Nevertheless, if you are contemplating which journal to submit a paper to, this may be helpful.
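The aggregation behind the table below is straightforward; here is a rough sketch of it (the journal names and report values in the example are made up purely for illustration, and this is my reconstruction of the procedure, not the original code):

```python
# For each journal, keep it only if it has at least 5 author reports, then
# compute the mean, median, and 25th/75th percentiles of reported
# times-to-first-decision (in months).
import statistics

def summarize(reports, min_n=5):
    out = {}
    for journal, times in reports.items():
        if len(times) < min_n:
            continue  # too few observations to report
        q1, med, q3 = statistics.quantiles(times, n=4, method="inclusive")
        out[journal] = {
            "n": len(times),
            "avg": round(statistics.mean(times), 2),
            "median": med,
            "p25": q1,
            "p75": q3,
        }
    return out

reports = {
    "Hypothetical Journal A": [0, 0, 1, 1, 2, 4],    # fast, QJE-like
    "Hypothetical Journal B": [4, 6, 6, 9, 11, 12],  # slow, JME-like
    "Hypothetical Journal C": [3, 5],                # dropped: N < 5
}
print(summarize(reports))
```

The N >= 5 filter mirrors the "at least 5 observations" rule mentioned above.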

# Journal Name Accept % Desk Reject % Avg. Time Median Time 25th Percent. (Months) 75th Percent. (Months) N =
1 Quarterly Journal of Economics 1% 61% 0.59 0 0 1 82
80 Journal of Economic Geography 0% 80% 0.67 0 0 1 5
12 Journal of the European Economic Association 3% 58% 1.16 0 0 2 31
81 Kyklos 8% 33% 1.33 1 0.5 2 12
94 Economics and Human Biology 43% 43% 1.50 1 0 3 7
74 Management Science 18% 45% 1.55 2 0 2 22
15 Review of Financial Studies 11% 37% 1.59 2 0 2 27
78 Demography 0% 80% 1.60 1 0 2 5
14 Journal of Human Resources 15% 54% 1.60 1 0 3 46
31 European Economic Review 28% 44% 1.72 1 0 3 50
6 American Economic Journal: Applied Economics 8% 37% 1.76 2 0 3 49
13 American Economic Journal: Economic Policy 8% 36% 1.83 2 0 3 36
100 China Economic Review 67% 33% 1.83 2 0.5 2 9
7 Journal of Finance 0% 33% 2.05 2 0 3 21
19 Journal of Financial Economics 24% 18% 2.06 2 1 3 17
27 Journal of Financial Intermediation 14% 29% 2.14 3 1 3 7
9 American Economic Review 7% 47% 2.16 1 0 4 91
46 Econometrics Journal 0% 50% 2.17 0.5 0 5 6
55 Review of Income and Wealth 17% 67% 2.17 0.5 0 4 6
38 Journal of Financial and Quantitative Analysis 29% 6% 2.18 2 1 3 17
37 Journal of Population Economics 22% 44% 2.22 3 0 4 9
76 Explorations in Economic History 56% 11% 2.22 2 2 3 9
32 Theoretical Economics 8% 0% 2.23 2 2 2 13
90 Economics Letters 45% 25% 2.24 2 1 2 114
42 Review of Finance 0% 8% 2.25 2.5 1.5 3 12
16 Economic Journal 16% 45% 2.26 2 0 4 56
34 Journal of Health Economics 13% 55% 2.32 2 1 4 31
5 American Economic Journal: Macroeconomics 13% 21% 2.33 2 1.5 3 24
62 Regional Science and Urban Economics 52% 14% 2.35 2 2 3 21
65 International Journal of Industrial Organization 7% 20% 2.47 2 1 4 15
91 Public Choice 14% 48% 2.48 1 0 3 29
35 Labour Economics 9% 27% 2.55 2.5 2 3 22
41 Journal of Law and Economics 0% 43% 2.57 3 1 4 14
61 Journal of Comparative Economics 27% 33% 2.63 2 0.5 4.5 15
71 European Journal of Political Economy 38% 23% 2.69 2 2 3 13
58 B.E. Journals in Economic Analysis & Policy 33% 7% 2.71 3 1 4 15
26 Journal of Urban Economics 22% 17% 2.72 3 1 4 18
30 American Economic Journal: Microeconomics 19% 25% 2.74 3 1 4 16
4 Econometrica 6% 23% 2.77 3 1 4 35
24 Journal of Applied Econometrics 9% 35% 2.78 2 0 5 23
57 Industrial and Labor Relations Review 27% 36% 2.82 2 1 5 11
64 The Journal of Law, Economics, and Organization 18% 27% 2.82 3 2 4 11
85 Energy Journal 38% 0% 2.88 3 2 3.5 8
36 World Bank Economic Review 8% 31% 2.92 2 2 4 13
11 Journal of Labor Economics 0% 32% 3.00 3 0 6 19
21 Journal of Public Economics 12% 27% 3.10 3 1 4 60
83 American Journal of Health Economics 43% 0% 3.14 3 2 4 7
3 Review of Economic Studies 4% 34% 3.16 3 0 5 73
23 Journal of Development Economics 17% 41% 3.16 3 1 4 54
63 Oxford Economic Papers 17% 42% 3.23 3 1 5 12
67 Journal of Banking and Finance 18% 18% 3.25 3 2 4 28
87 Southern Economic Journal 14% 14% 3.29 4 2 4 7
99 American Journal of Agricultural Economics 23% 0% 3.31 3 3 4 13
82 Journal of Macroeconomics 31% 25% 3.31 3 0.5 5.5 16
69 Economics and Politics 40% 0% 3.33 3 1 6 5
89 Journal of Economic History 17% 0% 3.33 3 2 4 6
93 Review of Economics of the Household 20% 10% 3.46 3 1 3 10
17 Journal of Economic Growth 0% 0% 3.50 2.5 2 3 10
92 Small Business Economics 25% 25% 3.50 3 1.5 5 8
10 Review of Economics and Statistics 2% 48% 3.51 2 0 5 65
56 Journal of Economic Behavior and Organization 30% 27% 3.67 4 0 6 33
25 Journal of Business and Economic Statistics 20% 10% 3.70 2.5 2 5 10
49 Canadian Journal of Economics 28% 6% 3.72 4 3 5 18
53 Economics of Education Review 50% 7% 3.79 4 2 5 14
51 Journal of Industrial Economics 13% 33% 3.81 4 1.5 5.5 15
88 World Development 39% 37% 3.92 3 1 6 49
33 Journal of Economic Theory (Elsevier) 21% 18% 3.94 4 3 5 33
22 RAND Journal of Economics 6% 21% 3.97 4 3 5 34
68 Macroeconomic Dynamics (Cambridge) 57% 21% 4.07 4 1 6 14
28 Experimental Economics 38% 8% 4.08 3 2 6 13
40 Journal of Environmental Economics and Management 24% 8% 4.15 3.5 2 5 25
45 Econometric Theory 0% 40% 4.20 4 0 5 5
84 Health Economics 8% 50% 4.21 3 3 6 12
44 Journal of Economic Dynamics and Control 50% 14% 4.22 3 1 4 22
54 Journal of Economics and Management Strategy 54% 8% 4.23 3 3 6 13
66 International Tax and Public Finance 14% 14% 4.29 3 2 7 7
60 B.E. Journal of Macroeconomics 63% 0% 4.33 4 2 6 8
75 Economic Theory 23% 27% 4.35 4 1 6 22
95 Energy Economics 55% 0% 4.42 4 3.5 6 11
98 Economics of Transition 71% 14% 4.50 4.5 2.5 6 7
47 Oxford Bulletin of Economics and Statistics 29% 36% 4.64 4 2 6 14
72 Environmental and Resource Economics 27% 47% 4.69 4 2.5 6.5 15
18 Journal of International Economics 18% 6% 4.71 3 3 6 17
43 Journal of International Money and Finance 40% 5% 4.75 3 2 7 20
97 Journal of Corporate Finance 14% 14% 4.75 3.5 3 6.5 7
73 Review of International Economics 30% 10% 4.91 3 3 6 10
52 Games and Economic Behavior 18% 18% 4.94 4 3 6 50
59 Economic Development and Cultural Change 27% 20% 5.00 4.5 3 6.5 15
48 Economic Inquiry 25% 25% 5.18 3 2 7 36
39 Journal of Economic Surveys 0% 40% 5.20 5 0 7 5
2 Journal of Political Economy 0% 50% 5.34 3 2 8 32
96 Empirical Economics 44% 13% 5.56 5.5 3.5 6 16
50 Economica 11% 53% 5.57 3 1 6 19
20 Journal of Money, Credit, and Banking 13% 13% 5.72 4.5 2.5 9 30
77 Journal of Public Economic Theory 27% 18% 5.86 6 3 9 11
79 The World Economy 40% 0% 6.25 5 3.5 8.5 5
70 Econometric Reviews 33% 0% 6.86 7 4 10 6
29 Journal of Econometrics 10% 10% 6.90 6.5 5 10 10
86 Journal of Empirical Finance 38% 0% 8.22 7 6 13 8
8 Journal of Monetary Economics 21% 11% 9.12 6 4 11 19

Tuesday, October 30, 2018

Are Negative Interest Rates Expansionary? A Review of Eggertsson et al. (2017)

My answer is yes, they are.

I recently assigned my students to write a referee report on Eggertsson et al., "Are Negative Nominal Interest Rates Expansionary?" I chose the paper because I'm a huge fan of Eggertsson's work, the paper is well done, it has a nice synthesis of data and theory, and the topic is of central policy importance. The authors find that negative nominal interest rates may not be expansionary, and also that under certain conditions, quite surprisingly, they may be contractionary.

I like the paper and see a clear contribution. However, I wish the authors would have framed the paper slightly differently. My reading is that they show that negative reserve rates below the cost of hoarding cash can be potentially contractionary under some conditions, and need other policies (negative lending rates for banks/Gesell taxes) to make the policy more expansionary. This is still an important and useful point -- I learned something from them. Yet, since not all of the key modeling assumptions are true, and since "negative interest rates" can include 1. negative rates above the cost of hoarding cash, 2. negative borrowing rates for banks, or 3. Gesell taxes on banks (may seem far-fetched, but something in this spirit has, in fact, happened), I think the paper's actual result is in fact narrower than "negative rates aren't expansionary and may be contractionary", and does not apply to the negative interest rates that we have seen.

Where does the contractionary result come from? It comes from assuming bank profits can affect intermediation costs. Since negative interest rates are a tax on banks, they hurt profitability. But how much did banks actually pay in taxes due to negative interest rates? Color me skeptical that it hurt profitability enough to have materially damaged intermediation. We are talking about a second- or third-order effect. And, to the extent negative interest rates did increase loans, the impact on profitability would be uncertain. Lastly, the ECB had a tiered rate system, in which required reserves were still paid a positive interest rate, and only the marginal reserves over some threshold were taxed.

One concern is that the ECB also had a program (dubbed TLTRO -- can't central banks print some money and hire better PR people?) where banks could borrow money at negative interest rates on the condition that they lend it out. In that case, hundreds of billions of Euros in loans were taken out, likely many caused by the policy, and these "negative interest rates" would also add several billion Euros directly to bank profits over the last few years. The model in the paper does not consider this possibility.

Two additional key assumptions of the model are that banks get 100% of their financing from deposits (as many of my students noted) and that all bank profits are paid to households. To their credit, the authors fully acknowledge that the first of these assumptions is important and not exactly true. In Sweden, for example, the deposit share is less than 50%. To the extent that financing comes from other sources, banks may actually care about the spread between the reserve rate and the loan rate, particularly if there is a cost of holding cash (say, 1-1.5%). In practice, after the financial crisis, banks like Bank of America cut their dividend payments to almost nothing. In this environment, banks will respond to negative interest rates by making more loans and lowering the interest rate on loans, and may even pay more dividends, boosting aggregate demand. Nothing in the paper contradicts this logic.

My intuition before reading this paper is that a negative interest rate will be an effective tax on excess reserves, and thus it would make a bank want to lend out money instead of having more reserves. This appears not to happen in the model because banks would hoard physical cash instead of reserves (I think the authors could have been more explicit on what banks do with their mountain of excess reserves when negative rates happen -- the calibration exercise starts with few excess reserves).  There is an implicit assumption that because deposit rates didn't drop (much) below zero, that the cost of holding cash must also be close to zero, and also, in particular, lower than the .4-.7% negative interest rates that we've seen. My gut feeling is that the level is likely to be closer to 1-1.5%, since banks did not, in fact, hoard physical cash. One reason they did not, however, is that central banks aren't stupid -- it seems at least some central banks take into account changes in cash holdings of banks when setting the limits for negative-interest free reserves (see box 2 of this BIS document on negative rates). In addition, there is a paper arguing banks responded to negative interest rates by raising fees instead, effectively lowering deposit rates below zero -- a topic the authors do address (hat tip to a student).

Perhaps banks did not lower their deposit rates below zero not because there is no cost to holding cash, but because they didn't know how depositors would react, or believed the policy was temporary and were afraid of a hysteretic effect on depositors, or due to ongoing concerns with their quantities of bad long-term loans. It could also be the case that holding small amounts of cash is relatively costless, but once you have football fields of cash, suddenly you need to hire top-notch security and buy insurance, so the cost could increase non-linearly. The authors, once again, should be commended for being explicit that this assumption would alter the conclusions of the paper. Yet, even so, I suspect there is a limit to the cost of hoarding cash.

I could go on here with theory. This is a one-period model. A negative headline rate probably sends a stronger signal about low future short-term rates (a credible commitment to be irresponsible) than holding the rate at zero, and could thus influence longer maturities (as the evidence finds). It could also underscore a central bank's commitment to do "whatever it takes" to achieve its inflation target. The model is also a closed economy -- negative interest rates could trigger capital outflows and a depreciated exchange rate, particularly if they had an impact on the yield curve.

It's also possible to introduce other factors under which negative nominal rates are expansionary. See this nice paper by Davide Porcellacchia (whom we tried, unsuccessfully, to hire last year at NES), which argues that negative rates might still lead consumers to save less overall.

Thus, theoretically, the result is ambiguous. (Isn't that always the case?) So it comes down to empirics.

I also had some minor quibbles with the empirics in the paper. At one point, they write, speaking of Denmark, that "the negative policy rate has not been transmitted to deposit rates." It actually looks to my eye like there were slight declines in the deposit rates each time the Danish central bank crossed into negative territory. The household deposit rate was cut roughly in half after the second cut in 2014. If you squint, it also looks like the corporate deposit rate dipped below zero at several points. 

Admittedly, in their evidence for Switzerland and Japan, going negative did appear to have, at best, a very minor impact on deposit rates (see below). However, for both Germany, and for the Euro Area as a whole, it looks to me like going negative might have had close to a 1-for-1 impact on deposit rates. Certainly, in at least several cases negative policy rates did translate into lower deposit rates. 

Even more important than deposit rates are lending rates (figure below). Again, it does look like, in at least a few cases, lower policy rates translated into lower lending rates. In the Japanese case, this happened even though deposit rates did not fall.





It would also have been nice to look at surprise announcements of negative rates, and examine how those announcements affected a variety of interest rates in the economy -- including the rates at which banks borrow from each other and longer-term yields -- as well as exchange rates and the stock market (admittedly, from what I've seen, this evidence looks mixed). The BIS found that negative interest rates passed through fully into money markets. Switzerland and Denmark did, after all, institute negative rates in part to stabilize their exchange rates.

I think more work needs to be done on this topic, but from the theory and the data, I don't see much to suggest that negative interest rates can't, or didn't, stimulate the economy. I am persuaded that negative reserve rates are best complemented with other policies, and I credit this paper with making that point. I believe central banks should do more to boost lending during liquidity-trap periods, such as through a full program of quantitative targeting of loan levels, regulation, fines, negative interest rates, or subsidies for loans made. China was in fact said to have done quantitative targeting during the Great Recession, and France and Germany did things like this during the Bretton Woods period. (See this nice paper by Eric Monnet.)

Overall, I'm a big believer in much more aggressive monetary policy at the zero lower bound than what we've seen. While I think a healthy degree of skepticism about new policy tools during liquidity-trap periods is prudent (mistakes were probably made with the rollout and PR around QE, even though it does seem to have been stimulative), and while I also think papers like this one move the debate forward, I've long been skeptical of the skepticism -- of the idea that monetary policy can't be effective at the ZLB. I worry that this paper may be misinterpreted to suggest that central banks shouldn't try negative interest rates at the ZLB, and should opt to do nothing instead, when the correct conclusion is that central banks should use negative reserve rates in conjunction with other policies. Something like this happened with people who read Paul Krugman occasionally, but not in the round. They walked away believing that monetary policy had "shot its wad" -- in the words of a former president. Had they read every word Krugman wrote, they would have understood that it was only traditional monetary policy that was ineffective, and that central banks could still stimulate the economy by credibly committing to higher future levels of inflation. I observed this when I worked in the Obama CEA, where some economists also believed that nothing more could be done on monetary policy, leading them not to want to recommend that the President fill his vacant FOMC seats. There's a case to be made that this was the single biggest policy mistake Obama made while in office. And that mistake, to a large extent, explains how we got to where we are.

Note: I should mention that many of the points above were motivated, directly or indirectly, by the insightful referee reports of my students! 



Tuesday, February 6, 2018

On the Uses (and Abuses) of Economath: The Malthusian Models

Many American undergraduates in economics who are interested in doing a Ph.D. are surprised to learn that the first year of an Econ Ph.D. feels much more like a Ph.D. in solving mathematical models by hand than like learning economics. Typically, there is very little reading or writing involved, but loads and loads of fast algebra are required. Why is it like this?

The first reason is that mathematical models are useful! Take the Malthusian Model. All you need is four simple assumptions: (1) the birth rate is increasing in income, (2) the death rate is decreasing in income, (3) income per person is negatively related to population, and (4) technological growth is slow relative to population growth. With these, you can explain a lot of world history, and you arrive at the surprising conclusion that income in a Malthusian economy is determined solely by the birth and death rate schedules, and is uncorrelated with technology. Using this model, you can explain, for example, why incomes before 1800 were roughly stagnant for centuries despite improving technology (technological advance just resulted in more people; see the graph of income proxied by skeletal heights below). It also predicts why the Neo-Europes -- the US, Australasia, and the Southern Cone countries -- are rich: they were depopulated by disease, and then Europeans moved in with lots of land per person. It is a very simple, and yet powerful, model. And it makes (correct) predictions that many historians (e.g., Kenneth Pomeranz), scientists (e.g., Jared Diamond), and John Bates Clark-caliber economists (see below) get wrong.
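These dynamics are easy to verify in a few lines of simulation. The parameterization below is my own toy example (the functional forms and numbers are illustrative, not from any particular paper): technology grows slowly, population responds to income through the birth and death schedules, and income per person ends up pinned down by those schedules rather than by the level of technology.

```python
def simulate_malthus(T=3000, g_tech=0.0005):
    """Toy Malthusian economy: y = A / P (diminishing returns to labor),
    birth rate increasing in income, death rate decreasing in income."""
    A, P = 1.0, 1.0            # technology and population
    incomes = []
    for _ in range(T):
        y = A / P              # income per person falls with population
        birth = 0.02 * y       # assumption (1): births rise with income
        death = 0.02 / y       # assumption (2): deaths fall with income
        P *= 1 + birth - death # population absorbs any income gain
        A *= 1 + g_tech        # assumption (4): slow technical progress
        incomes.append(y)
    return incomes, A

incomes, A = simulate_malthus()
# Income stays near the level where births equal deaths (y = 1 here),
# even though technology A has more than quadrupled over the run.
print(round(incomes[-1], 3), round(A, 1))
```

With zero technological growth, income converges exactly to the level where the birth and death schedules cross; with slow growth, it settles just above it while technology keeps compounding -- stagnant living standards despite centuries of technical progress.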



A second beneficial reason is signalling. This reason is not to be discounted, given the paramount importance of signalling in all walks of life (still not sufficiently appreciated by all labor economists). Smart people do math. Even smarter people do even more complicated-looking math. I gratuitously put a version of the Melitz model in my job market paper, and when I interviewed, someone remarked that I was "really teched up!" Simple models are not something that serious grown-ups partake in. Other social science disciplines have their own versions of peacock feathers. In philosophy, people write in increasingly obscure language and jargon, going to enormous effort to use words that force as many readers as possible to consult dictionaries. Unfortunately, the Malthusian model above, while effective in terms of predictive power, is far too simple to play a beneficial signalling role, and as a result it would likely have trouble getting published if introduced today.

A third reason to use math is that it is easy to use math to trick people. Often, if you make your assumptions in plain English, they will sound ridiculous. But if you couch them in terms of equations, integrals, and matrices, they will appear more sophisticated, and the unrealism of the assumptions may not be obvious, even to people with Ph.D.'s from places like Harvard and Stanford, or to editors at top theory journals such as Econometrica. A particularly informative example is the Malthusian model proposed by Acemoglu, Johnson, and Robinson in the 2001 version of their "Reversal of Fortune" paper (model starts on the bottom of page 9). Note that Daron Acemoglu is widely regarded as one of the most brilliant economic theorists of his generation, is a tenured professor of Economics at MIT, was recently the editor of Econometrica (the top theory journal in all of economics), and was also awarded a John Bates Clark medal (the 2nd most prestigious medal in the profession) in large part for his work on this paper (and a closely related paper). Also keep in mind this paper was eventually published in the QJE, the top journal in the field. Very few living economists have a better CV than Daron Acemoglu. Thus, if we want to learn about how economath is used, we'll do best to start by learning from the master himself.

What's interesting about the Acemoglu et al. Malthusian model is that they take the same basic assumptions, assign a particular functional form to how population growth is influenced by income, and arrive at the conclusion that population density (which is proportional to technology) will be proportional to income! They use the model:

p(t+1) = rho*p(t) + lambda*(y-ybar) + epsilon(t),

where p(t+1) is population density at time t+1, p(t) is population density at time t, rho is a parameter (perhaps just less than 1), lambda is a parameter, y is income, ybar is the level of Malthusian subsistence income, and epsilon is an error term. If you impose a steady state (p* and y*) and solve for p*, you get:

p* = lambda*(y* - ybar)/(1 - rho)

I.e., you get that population density is increasing in income, and thus that income per person should have been increasing throughout history. Thus, these guys from MIT were able to use mathematics to overturn one of the central predictions of the Malthusian model. It is no wonder, then, that Acemoglu was awarded a Clark medal for this work.

Except. This version doesn't necessarily fit the skeletal evidence above, although that evidence may be incomplete and imperfect (selection issues?). What exactly is the source of the difference between the classical Malthusian model and the "MIT" Malthusian model? The crucial assumption, unstated in words but there in Greek letters for anyone to see, is that income affects the level of population, but not the growth rate of population. Stated differently, this assumption means that a handful of individuals could and would out-reproduce the whole of China and India combined if they had the same level of income. (With rho less than one, say .98, the first term implies a contraction of millions of people in China/India. With income above subsistence, we then need lambda to be large enough that overall population can still grow in China. But once we do this, the model implies that even a very small population would have much larger absolute growth than China at the same income.) Obviously, this is quite a ridiculous assumption when stated in plain language. A population can grow by, at most, a few percent per year. 100 people can't have 3 million offspring. What this model does successfully is reveal how cloaking an unrealistic assumption in mathematics can make said assumption very hard to detect, even by tenured economics professors at places like MIT. Math in this case is used as little more than a literary device designed to fool the feebleminded. Fortunately, someone caught the flaw, and this model didn't make the published version in the QJE. Unfortunately, the published version still included the view that population density is a reasonable proxy for income in a Malthusian economy, which of course it is not. And the insight that Malthusian forces led to high incomes in the Neo-Europes was also lost.
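To make the problem concrete, here is a minimal numeric sketch of the law of motion above (the parameter values are mine, chosen purely for illustration): because income enters through the additive term lambda*(y - ybar), the income-driven change in population is identical for any population size.

```python
# AJR-style law of motion: p(t+1) = rho*p(t) + lambda*(y - ybar).
# Parameter values below are purely illustrative.
rho, lam, ybar = 0.98, 5e6, 1.0

def next_pop(p, y):
    return rho * p + lam * (y - ybar)

y = 1.05                                 # income 5% above subsistence
village, china = 100.0, 500_000_000.0    # 100 people vs. half a billion

# The income-driven term lam*(y - ybar) is the same regardless of size:
# the village of 100 gains roughly 250,000 people in one period...
print(next_pop(village, y) - village)
# ...while the China-sized population, at the same income, shrinks,
# because the decay term (1 - rho)*p dominates.
print(next_pop(china, y) - china)
```

At the same income, the model hands the village and the subcontinent the same absolute income-driven inflow -- exactly the assumption that sounds ridiculous once stated in plain language.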

Given that this paper then formed part of the basis of Acemoglu's Clark medal, I think we can safely conclude that people are very susceptible to bullshit when it is written in equations. More evidence will come later in the comments section, as, conditional on this post getting hits, I suspect several people will be taken in by the AJR model and will defend it vigorously.

This episode shows there is some truth to Bryan Caplan's view that "The main intellectual benefit of studying economath ... is that it allows you to detect the abuse of economath."

Given the importance of signaling in all walks of life, and given the power of math, not just to illuminate and to signal, but also to trick, confuse, and bewilder, it thus makes perfect sense that roughly 99% of the core training in an economics Ph.D. is in fact in math rather than economics.


Update: Sure enough, as I predicted above, we have a defender of the AJR model in the comments. He argues the AJR model shows why math clarifies, even while his posts unwittingly convey the opposite.

Above, I took issue with the steady-state relationship in the model and the fallacious assumption that yields it. The commenter points out, correctly, that outside of the steady state the AJR model implies two conflicting forces. But so what? My argument was about the steady state. If one fixes the wrong assumption, steady-state income in the Malthusian model will equal subsistence income, and the main argument for a correlation between population density and income outside of the steady state will also be shut down.

Second, the commenter unfairly smears Acemoglu & Co., writing that the real problem is not with their model, but that they didn't interpret their own model correctly: "goes ahead in the empirical work to largely, in contrast to what their model says, take population density as a proxy for income!".

Thus I'd like to defend Acemoglu against this unfair smear. In preindustrial societies, there were vast differences in population density between hunter-gatherer groups and agricultural societies, even though there were not vast income differences between the two. In fact, quite surprisingly, hunter-gatherer societies often look to have been richer despite working less (read Greg Clark), and despite far more primitive technology. Thus it is quite reasonable to assume, as AJR did, that differences in technology would swamp differences in other population shocks (A more important than epsilon). The Black Death might have doubled or tripled incomes, but settled agrarian societies might have population densities 1,000 times those of primitive hunter-gatherer tribes. This isn't an airtight argument, but, given their model, I believe AJR's empirical extension is reasonable, particularly since they provide a caveat. The problem is that their model is not reasonable.

The commenter goes on to argue that I've gotten AJR's conclusion backward: "You claim that the point they are making is "population density will be a decent proxy for income in a Malthusian model." The point they are making is explicitly the exact opposite: that "caution is required in interpreting population density as a proxy for income per capita." 

Huh? The first two lines of the abstract of the AJR paper read: "Among countries colonized by European powers during the past 500 years, those that were relatively rich in 1500 are now relatively poor. We document this reversal using data on urbanization patterns and population density, which, we argue, proxy for economic prosperity."

It seems clear here that they are arguing for using it as a proxy.



Wednesday, January 24, 2018

Reminder: Most Published Research is Probably Wrong!

At least in some way. Don't get me wrong, there is a lot of great research out there. However, it has occurred to me that many people are much too trusting of published research, particularly when it is written by people from fancy universities with fancy letters after their names and published in prestigious journals. I saw this recently during a very lively session on the Decline in US Manufacturing Growth and Productivity at the AEA meetings in Philadelphia several weeks ago. Several people asked David Autor why his results on the impact of China on US innovation were different from what other prominent researchers had found. (One of the answers, of course, is that there is little reason to believe the competing research, but I digress...) Similarly, one of my complaints about the otherwise excellent Trade Talks podcast with Chad Bown is that published results, particularly by prominent researchers, are generally taken at face value, with not enough discussion, in my view, of the potential caveats and shortcomings of the methodologies employed.

The reality is that science is difficult, and that Cowen's First Law (there is something wrong with everything!) applies to economic research.

Here's a moving video from Neil deGrasse Tyson which I mostly love. My only issue is with his description of science:
One of the great things about science, is that it's an entire exercise in finding what is true.
You have a hypothesis, you test it. I get a result. A rival of mine double checks it, because they think I might be wrong. They perform an even better experiment than I did, and they find out, “Hey, this experiment matches! Oh my gosh. We’re on to something here!” And out of this rises a new, emergent truth.
This is a description of everything I wish science were! Perhaps it is an accurate description of the hard sciences (I'm skeptical), but it is not how the social sciences operate. In practice, when a top researcher has a major finding, other top researchers, with rare exceptions, do not check it. Occasionally, grad students or less prominent researchers will overturn the result, but they will find that journals simply aren't the least bit interested in publishing papers which reverse seminal papers. Thinking like an economist, this creates some rather perverse incentives. If you are a well-connected researcher in a prominent department, you are well incentivized to publish as much as possible. This means creating research which appears sophisticated, and it also means not pissing off the people who will judge your research. On the contrary, it implies that there are benefits to having a lot of close friends (what deGrasse Tyson calls your "rivals") in the profession. You don't accomplish this by pointing out that another researcher's results disappear when you control for latitude. As a result, many top researchers are in fact incentivized to crank out many low-quality papers with seemingly blockbuster results.

Part of the way this system survives is that there is a culture which frowns on writing "comment papers". The other reason is that there is, fortunately for the existence of this system, a willing population of "sheep" -- the "true believers" -- available to consume and believe this research.

In any case, on my ride back to Moscow from Philadelphia, I fired up Stata and took a second look at some of the research, published in a leading journal, which found that the China shock led to a huge increase in productivity and patenting in Europe. The thesis sounded quite dubious to me from the beginning. It turned out that including sectoral fixed effects -- a very basic control -- killed the results. If I were to write this up, the journal that published the original would never, in a million years, accept my paper. Second, although the original authors seem to me like fine people, economists traditionally behave in a way that is mafia-level shady (see the comments) when their research comes under attack. Partly they have to: since the masses believe that most top research is correct, it is seen as a huge black mark on one's reputation to have a paper overturned. If it were widely understood that science is difficult and most papers have flaws, this might not be so necessary. Then, perhaps, we could get a bit closer to Neil deGrasse Tyson's idealized view of science.
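For readers curious what "including sectoral fixed effects killed the results" looks like mechanically, here is a generic, self-contained sketch with synthetic data (this is not the paper's actual dataset or specification -- just an illustration of the within transformation): a sector-level confounder drives both the "shock" and the outcome, producing a large pooled-OLS slope that vanishes once each sector's mean is subtracted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sectors, n_per = 20, 50
sectors = np.arange(n_sectors).repeat(n_per)

# Sector-level confounder drives BOTH the shock exposure and the outcome,
# so pooled OLS finds a spurious positive slope (the true effect is zero).
sector_effect = rng.normal(0, 1, n_sectors).repeat(n_per)
shock = sector_effect + rng.normal(0, 0.3, n_sectors * n_per)
outcome = 2.0 * sector_effect + rng.normal(0, 0.3, n_sectors * n_per)

def ols_slope(x, y):
    """Bivariate OLS slope: cov(x, y) / var(x)."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

def demean_by(group, v):
    """Within transformation: subtract each group's mean (group fixed effects)."""
    means = np.array([v[group == g].mean() for g in np.unique(group)])
    return v - means[group]

pooled = ols_slope(shock, outcome)                      # large spurious "effect"
within = ols_slope(demean_by(sectors, shock),
                   demean_by(sectors, outcome))         # ~0 with fixed effects
print(round(pooled, 2), round(within, 2))
```

The within transformation is numerically equivalent to including a dummy for each sector; when the headline correlation lives entirely at the sector level, it disappears.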