This title was written by Noah Smith in a recent Bloomberg column. He makes a lot of good points with which readers of this blog, focused on manufacturing, are already familiar. Manufacturing output isn't really booming. I would stress that, although trade is a big reason why manufacturing isn't so healthy, the number one thing the government could do to promote manufacturing is adopt looser monetary policy. This would help by growing the economy (manufacturing production and employment are strongly pro-cyclical) and by weakening the dollar. That's a far better way to improve the trade balance than bilateral trade negotiations or protectionism.
In any case, it's worth a read.
Thursday, March 30, 2017
Raise Rates to Raise Inflation
That's what Stephen Williamson is still arguing.
"recent data is consistent with the view that persistently low nominal interest rates do not increase inflation - this just makes inflation low. If a central bank is persistently undershooting its inflation target, the solution - the neo-Fisherian solution - is to raise the nominal interest rate target. Undergraduate IS/LM/Phillips curve analysis may tell you that central banks increase inflation by reducing the nominal interest rate target, but that's inconsistent with the implications of essentially all modern mainstream macroeconomic models, and with recent experience."
Wow. This is fantastic (fantastical?) stuff.
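For readers who want to see where the Neo-Fisherian sign comes from, the argument leans on the Fisher equation, which is standard textbook material rather than anything specific to Williamson's papers (the notation here is mine):

\[
i_t = r_t + \mathbb{E}_t[\pi_{t+1}]
\]

where \(i_t\) is the nominal interest rate, \(r_t\) the real rate, and \(\mathbb{E}_t[\pi_{t+1}]\) expected inflation. If you assume the real rate is pinned down by non-monetary factors in the long run, then a permanently higher nominal rate must eventually come with higher expected inflation. The conventional reading runs the causality the other way: the central bank raises \(i_t\), the real rate rises with it because prices are sticky, and demand and inflation fall. What follows is about which of those readings survives contact with the evidence.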
A couple of questions for Neo-Fisherians. Both central banks and market participants believe that higher interest rates slow growth and inflation. If, in fact, higher interest rates raise growth and inflation, then why don't interest rate hikes and contractions in the money supply lead to hyperinflation in a booming economy? After all, central banks respond to high inflation by raising rates, which they believe will lower inflation. If inflation instead goes higher, they will respond again with even tighter money.
Also, interest rate cuts demonstrably cause currencies to weaken. We know this from, among other things, announcement effects identified at high frequency. The evidence that real exchange rate movements affect the tradables sector is very strong, and it relies merely on the idea that relative prices matter, the central teaching of economics. If prices don't matter, then is all of economics wrong?
We also know that market participants and the financial sector at large believe that low interest rates are good for the economy and profits, and that tight money is bad. We know this from market reactions to monetary policy. A series of interest rate hikes would raise borrowing costs considerably, cause the dollar to appreciate (which affects inflation directly), cause the stock market to fall, and generally cause financial conditions to tighten. Thus, once again, to buy the Neo-Fisherian story, you have to believe that prices don't matter. In addition, you have to believe that markets aren't remotely efficient. That is, you have to give up the central tenets of mainstream economics. And since "the market" believes monetary policy has the conventional sign, Stephen Williamson could make a lot of money by starting a hedge fund and betting on the "wrong" sign.
Hey, it's worth pointing out that people can disagree about monetary policy. Williamson (2012) says QE is hyperinflationary, while, on the other hand, Williamson (2013) says QE is deflationary. On the third hand, Williamson (2013; blog post) says money is just plain neutral. What's interesting here is that these are all the same Williamson.
Thus it's worth peering into his intellectual journey. First, after QE, despite high unemployment and a weak economy, he repeatedly predicted that inflation would rise. When it didn't happen, he changed his mind, which is what one should do. Only, he couldn't concede that standard Keynesian liquidity trap analysis was largely correct. That would be equivalent to surrendering his army to the evil of evils, Paul Krugman. Much easier to venture into the wilderness and conclude, not that inflation stayed low despite low interest rates because the economy was still depressed and banks were sitting on newly printed cash, but rather that inflation was low because interest rates were low!
Fortunately, not all of Macro went in this direction; see Larry Christiano, a mainstream economist, discussing the Keynesian revival brought on by the Great Recession.
Tuesday, March 28, 2017
Is Academia Biased Against Stars?
David Card and Stefano DellaVigna recently came out with a working paper which says yes.
The (abbreviated) abstract:
We study editorial decision-making using anonymized submission data for four leading economics journals... We match papers to the publication records of authors at the time of submission and to subsequent Google Scholar citations.... Empirically, we find that referee recommendations are strong predictors of citations, and that editors follow the recommendations quite closely. Holding constant the referees' evaluations, however, papers by highly-published authors get more citations, suggesting that referees impose a higher bar for these authors, or that prolific authors are over-cited. Editors only partially offset the referees' opinions, effectively discounting the citations of more prolific authors in their revise and resubmit decisions by up to 80%. To disentangle the two explanations for this discounting, we conduct a survey of specialists, asking them for their preferred relative citation counts for matched pairs of papers. The responses show no indication that prolific authors are over-cited and thus suggest that referees and editors seek to support less prolific authors.
The interpretation of course hinges on whether you think more famous authors are likely to get cited more conditional on quality. To some extent, this must be true. You can't cite a paper you don't know about, and you're almost certainly more likely to know about a famous author's paper. This is part of the reason I started blogging -- as a marketing tool.
In any case, here's a second recent paper, courtesy of Andrew Gelman. "Daniele Fanelli and colleagues examined more than 3,000 meta-analyses covering 22 scientific disciplines for multiple commonly discussed bias patterns. Studies reporting large effect sizes were associated with large standard errors and large numbers of citations to the study, and were more likely to be published in peer-reviewed journals than studies reporting small effect sizes... Large effect sizes were ... associated with having ... authors who had at least one paper retracted." So, studies with large effects are cited more and are more likely to be retracted. In general, to get published in a top journal, you almost have to make a bold claim. But of course bold claims are also more likely to be wrong or inflated, and they also get cited more.
My prior is that any time "quality" is difficult to judge, it is only natural and rational to look for signals of quality, even blunt ones. I would rather be operated on by a surgeon with a degree from Harvard, drink a pricey bottle of wine if someone else is paying, and choose to eat in a restaurant which is full. I'd think twice before putting my money in a bank I've never heard of, or flying on an unknown airline. These are all perfectly rational biases. The more difficult it is to infer quality, the more people are going to rely on signals. An example of this is linemen and linebackers in the NFL -- since stats for these players are hard to come by, all-star linemen tend to be those drafted in the first or second round. This is much less so for the skill positions, where performance metrics are easier to come by. In the case of academia, I believe it can be enormously difficult to judge the relative quality of research. How do you compare papers on different topics using different methodologies? Add to this, top people might referee 50 papers a year. Neither they nor the editors have time to read papers in great detail. And referees frequently disagree with each other. I was recently told by an editor at one of the journals in the Card study that in several years of editing she had never seen a paper in which all the referees unanimously recommended accept. If referees always agreed, unanimous accepts should happen in roughly 5% of cases, about the acceptance rate. (Unfortunately, the Card/DellaVigna paper doesn't provide information on the joint distribution of referee responses. This is too bad, because one of the big gaps between R&R decisions and citations they find is that papers in which one referee recommends reject and the other straight accept are usually not given an R&R, and yet are well-cited. The other thing they don't look at is institutional affiliation, but I digress...) This all speaks to the idea that judging research is extraordinarily difficult. If so, then citations and referee and editorial decisions are likely to rely, at least in part, on relatively blunt signals of quality such as past publication record and institutional affiliation. It would be irrational not to.
So, why did the authors go the other way? I didn't find their use of the survey convincing. I suspect it had to do with personal experiences. The average acceptance rate at the QJE is 4%. That means top people like David Card get good papers (and I believe many of his papers are, in fact, very good) rejected at top journals all the time, despite high citations. He has a keen analytical mind, so it's reasonable for him to conclude, based on personal experience, that the rejections despite great citation counts are the result of some kind of bias, and perhaps they are. I once had a paper rejected 9 times. Of course, I don't believe it could possibly be the fault of the author. Much easier to believe the world is conspiring against me. Of course, I'm joking here, but the very low acceptance rates at top journals, combined with some atrocious accepted papers, probably feed this perception for everyone. Personally, I'd still prefer to be an NBER member and in a top 10 department with a weekly seminar series in my field (all of which would increase citations) and a large research budget, but hey, that's just me.
Update: I was interested in how correlated referee recommendations are, so I followed the reference in the Card paper to Welch (2014). Here is the key table. Conditional on one referee recommending "Must Accept", the second referee has a 7.7% chance of also recommending "Must Accept", vs. a 3.8% unconditional probability. If recommendations were perfectly correlated, that figure would be 100%. Even with one "Must Accept", chances are still better than even that the second recommendation will be reject or neutral. So, this is much closer to complete randomness than to referees being in agreement. The lesson I draw is not to read too much into a single rejection, and thus to wait a bit before moving down the journal rankings. In a way, this could bolster the arguments of Card and DellaVigna: if referee reports are this random, how can citations be a worse measure of quality? But I think this ignores the fact that referees will spend more time with a paper than potential citers will. Even if all authors read the papers they cite, how meticulously do they read the papers they don't cite? The difficulty of assessing paper quality once again implies that signals should play an important role, not to mention personal connections.
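As a back-of-the-envelope check on just how weak that agreement is, here is a short Python sketch, entirely my own calculation, that uses only the two numbers quoted above from Welch's table to back out the implied correlation between two referees' "Must Accept" indicators:

```python
# Implied correlation between two referees' "Must Accept" indicators,
# using only the figures quoted above from Welch (2014):
#   unconditional P(Must Accept) ~ 3.8%
#   P(Must Accept | other referee said Must Accept) ~ 7.7%
p = 0.038        # unconditional probability of a "Must Accept"
p_cond = 0.077   # conditional probability given the other referee's "Must Accept"

p_joint = p * p_cond        # P(both referees say "Must Accept")
cov = p_joint - p * p       # covariance of the two 0/1 indicators
var = p * (1 - p)           # variance of a Bernoulli(p) indicator
rho = cov / var             # correlation (both referees share the same marginal)

print(f"implied correlation: {rho:.3f}")  # roughly 0.04
```

A correlation of about 0.04 is consistent with the point above: on this margin, referee recommendations look much closer to independent draws than to agreement.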
Sunday, March 26, 2017
Corporations in the Age of Inequality: A Skeptic's Take
Nick Bloom contributes an article to the Harvard Business Review: "Corporations in the Age of Inequality". It's an interesting read, based on his "Firming Up Inequality" paper, which I also recommend. The story of inequality he and his coauthors tell is essentially technology based (IT and outsourcing), as they find that the rise in inequality is almost entirely driven by changes in between-firm inequality. They deserve credit for presenting an interesting set of facts.
However, while intriguing, I'm not yet totally convinced this is the key to understanding inequality. Macromon also had an excellent discussion of this research a while back. The key question she raised concerns the definition of a firm -- the authors use tax ID numbers, but the average number of tax IDs per corporation listed on the NYSE is 3.2. If it's just a story about a growing number of tax IDs per corporation, then it becomes much less interesting. A related problem: if a firm used to employ a janitor but then outsources that job to a different company, within-firm inequality could fall or stay flat even as the wage gap between the janitor and the others working in the same building grows. Thus, their decomposition doesn't necessarily teach us much about why inequality is increasing, and it is perfectly consistent with the institutions story I've told in a previous blog post (or many other stories one might dream up).
However, I did notice that the institutions story went unmentioned in their paper and in Bloom's article. As I wrote before, the insights of Piketty, Saez and coauthors seem to have had surprisingly little influence, particularly since the explanation seems perfectly adequate, and has a lot of explanatory power. (Actually, in his book, Piketty also barely mentions the institutions story, which is a major theme of some of his joint papers with Saez and coauthors...) This is where the lack of comment papers in economics is frustrating. Many top people apparently don't buy the institutions story. Why not? Perhaps they have good reasons to be skeptical. I'd like to know what they are. (I'd certainly like to know what Nick Bloom thinks...) The evidence looks quite compelling to me.
I took issue with this comment: "Since 1980, income inequality has risen sharply in most developed economies". As my blog readers know, income inequality has not risen dramatically in Germany, France, Japan, or Sweden according to Alvaredo et al. Thus, this comment threw me: "This means that the rising gap in pay between firms accounts for the large majority of the increase in income inequality in the United States. It also accounts for at least a substantial part in other countries, as research conducted in the UK, Germany, and Sweden demonstrates." Right, but the increases in inequality in Germany and Sweden have been quite minor relative to the US, and they are also associated with changes in top marginal tax rates. So what I'm hearing is that between-firm inequality isn't actually explaining much.
Bloom attributes rising inequality to "the rise of outsourcing, the adoption of IT, and the cumulative effects of winner-take-most competition". I agree that outsourcing could be driving the rise in measured firm-level inequality, although this could be happening even while having scant impact on overall inequality, particularly given that "One study found that from 1980 to 2008, the share of workers in Germany employed by temp agencies or cleaning, logistics, or security firms more than tripled, from roughly 2% to 7%." My question about the adoption of IT is going to sound familiar: don't the Germans, Swedes, and Japanese also have IT? And why did winner-take-most competition only happen in Reagan-Thatcher countries after 1980? I would point to the savage cuts in top marginal tax rates, as well as to other institutional changes (other tax changes, the declining importance of the minimum wage, and the declining influence of labor).
In any case, it's still a nice article and I found myself mostly agreeing with his policy recommendations. I particularly liked his focus on the role that luck plays in determining careers and income. If you take two people with the same skills, and put one at Google in 1999 and another at, say, General Motors, the one lucky enough to have chosen Google would have gotten wildly rich. This is another argument for higher top marginal tax rates, which I support.
Friday, March 24, 2017
The Saga of Currency Unions and Trade
One of the first full papers I wrote was on currency unions and trade. I was taking Alan Taylor's field course at UC Davis, which was essentially an Open-Economy Macro History course, and the famous Glick/Rose finding that currency unions double trade was on the syllabus. Not to be outdone, Robert Barro and coauthors then found that currency unions increase trade 7-fold and even 14-fold! This raised the prospect that Frenchmen might suddenly go out and buy a dozen or more Volkswagens instead of settling for just one after the adoption of the Euro. Another paper, published in the QJE, found that currency unions even raise growth, via trade. In that case, one can only imagine what the Greek economy would look like if it hadn't joined the Euro. No result, it seems, can be too fanciful.
Always a doubting Thomas, I was instantly skeptical. For one, a doubling of trade implies an impact more than 10 times larger than what has been measured for the Smoot-Hawley tariff, and we already knew the effect didn't operate via the impact of exchange rate volatility on trade, thanks to an excellent paper by Klein and Shambaugh. For two, countries don't enter into (or exit) currency unions randomly. Obvious omitted variables alert! It was only a question of what they were.
To his credit, Andrew Rose always provides his data online. This also helps citations, as the cost of extending or critiquing his papers is small. So, I fired up Stata, and after a solid 30 minutes, I discovered part of what was driving the seemingly magical effect. Roughly one-quarter of the CU changes involved countries that had former colonial relationships, many of them between the UK and its former colonies. It turns out that the impact of the "former colony" dummy in the gravity equation has been decaying slowly over time. This led to a more interesting insight -- that history matters for trade. In any case, many colonies had currency unions with the UK until the late 1960s or early 1970s. Thus, given the decay of trade for all former colonies, if you average trade before and after 1970, of course you find relatively less trade after. However, if you add a simple trend for trade between the UK and all its former colonies -- a rather mild control -- the magical "currency union" effect on trade disappears completely. Indeed, if you have two variables that exhibit trends, and you regress one on the other, you're almost guaranteed to find a significant correlation even if no relationship exists. I've since discovered that this is by far the most ubiquitous problem in empirical international trade, to the extent that I've been to conference sessions in which every paper regressed trending variables in levels on other trending variables. It doesn't necessarily mean that they were all wrong, but showing robustness to trends is a best practice, and plotting pre- and post-treatment trends can also help. The more relevant research question in this case is "How much does a currency union increase trade relative to trend?", particularly when there are already strong upward or downward pre-trends.
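To see why trends are so dangerous here, consider a generic simulation (my own toy example, not the Glick/Rose data): two series that each drift upward for unrelated reasons will look strongly "related" in a levels regression, and the apparent effect vanishes once a simple time trend is added as a control.

```python
# Toy illustration of the spurious-trend problem: two independent trending
# series, regressed on each other with and without a time-trend control.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 200
t = np.arange(T)

# Independent upward drifts plus noise -- no true relationship between x and y.
x = 0.05 * t + rng.normal(scale=1.0, size=T)
y = 0.03 * t + rng.normal(scale=1.0, size=T)

# Naive levels regression of y on x.
naive = sm.OLS(y, sm.add_constant(x)).fit()

# Same regression with a linear time trend included.
with_trend = sm.OLS(y, sm.add_constant(np.column_stack([x, t]))).fit()

print("no trend control:   coef on x = %.3f, t = %.1f"
      % (naive.params[1], naive.tvalues[1]))
print("with trend control: coef on x = %.3f, t = %.1f"
      % (with_trend.params[1], with_trend.tvalues[1]))
```

In the currency union setting, the analogue of that trend control is the slowly decaying trade between the UK and its former colonies.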
In any case -- back to currency unions. For country pairs other than former colonies, there were other problems. India and Pakistan dissolved their currency union in 1964, after which trade declined. However, there's an obvious problem here: this occurred because of a brutal border war, and these two countries hate each other to this day. A number of other examples were similar. You probably know less about what happened between Madagascar and Comoros in 1975. They dissolved a currency union, and trade indeed collapsed. But two minutes of googling told me that, in the same year, there was an ethnic cleansing episode in Mahajanga, targeted at "island" peoples. (Yes, Madagascar is also an island, but this was apparently targeted at people from smaller islands, including the Comorians...) Same thing with the "Liberation War" that resulted in the overthrow of Ugandan dictator Idi Amin -- and also, depending on your view, in the tragic end of a currency union. And on and on the examples went. Wars, ethnic cleansing episodes, financial crises, currency crises, banking crises, communist takeovers. These are what cause currency unions to end. And if you take out the most obvious cases where geopolitical events overshadow the change in CU status, there's not much left, and what is left doesn't suggest that CUs have any effect on trade. And this is before one even blows up the standard errors by adopting realistic assumptions about spatial correlation.
In any case, in 2015, I deleted this paper and my regression and code from my webpage, and assigned the paper to my brilliant undergraduate students. They alerted me to the fact that Glick and Rose had recently written a mea culpa, where the authors declared that they could no longer have confidence in the results. I wasn't cited. Indeed, they probably hadn't seen my paper. Despite this, I was worried about having pissed them off in the first place, and I also wanted to get some love in the final version, so I wrote them an email and thanked them for sharing their data online, congratulating them for having the intellectual integrity to admit that they were wrong (even if their new paper still suffered from many of the same issues, such as not controlling for trends).
This sounds like it should be the end of the story, correct? Particularly since the early evidence on the Euro and trade was not supportive. However, Glick and Rose changed their minds, and decided instead to double down on a positive, measurable impact of currency unions on trade. This time, they concluded that the Euro has increased trade by a smaller, but still magical, 50%. And I did get some love, although not in the form of a citation -- instead they thanked me in the acknowledgments! I'm still quite happy to have been acknowledged, even though, admittedly, I hadn't commented on the substance of their new paper.
In any case, Jeff Frankel at Harvard apparently used to assign his Ph.D. students a "search-and-destroy" mission on the original CU effect. Thus, thanks to the fact that Andrew Rose still provides his data online -- for which I'm grateful -- I've just assigned my students a similar mission on the new EMU result. We'll get to see whether they can overturn anything that the good referees at the European Economic Review may have overlooked. If I were a gambling man, and I am, I would put my money on my sharp undergraduates at the New Economic School over the academic publication process. If I were Croatia, or Greece, contemplating the relative merits of joining or staying in the Euro, I would write off the academic literature on this topic completely.
Monday, March 20, 2017
Robots and Inequality: A Skeptic's Take
Paul Krugman presents "Robot Geometry" based on Ryan Avent's "Productivity Paradox". It's more or less the skill-biased technological change hypothesis, repackaged. Technology makes workers more productive, which reduces demand for workers, as their effective supply increases. Workers still need to work, given a weak safety net, so they end up moving to low-productivity sectors with lower wages. Meanwhile, the low wages in these sectors make it inefficient to invest in new technology.
My question: Are Reagan-Thatcher countries the only ones with robots? My image, perhaps wrong, is that plenty of robots operate in Japan and Germany too, and both countries are roughly as technologically advanced as the US. But Japan and Germany haven't seen the same increase in inequality as the US and other Anglo countries after 1980 (graphs below). What can explain the dramatic differences in inequality across countries? Fairly blunt changes in labor market institutions, that's what. This is documented in the Temin/Levy "Treaty of Detroit" paper and in the oddly ignored series of papers by Piketty, Saez, and coauthors, which argue that changes in top marginal tax rates can largely explain the evolution of the top 1% share of income across countries. (Actually, it goes back further -- people who work in Public Economics have "always" known that pre-tax income is sensitive to tax rates...) They also show that the story of inequality is really a story of incomes at the very top -- changes in other parts of the income distribution are far less dramatic. This evidence is not suggestive of a story in which inequality is about the returns to skills, or computer usage, or the rise of trade with China.
With Lester Lusher, I've also waded into this debate (here and here). What we set out to do is show a variety of evidence all pointing to the conclusion that trade shocks have not caused a rise in inequality. First, we look at the two largest trade shocks in the US: the 1980s dollar bubble, and the late-1990s appreciation plus the rise of China. Perhaps surprisingly, neither disproportionately hurt the wages of low-wage workers in exposed sectors. Sectors more exposed to the shocks also did not experience increases in inequality. We also found no association between capital upgrading by sector and inequality, or between TFP growth and inequality. Then we looked at the cross-country evidence, and found that trade with China, trade deficits, trade levels, and changes in trade are not even correlated with top income shares. As a last bit, we tested the Piketty, Saez, and Stantcheva results out-of-sample. Although we found that a slightly different functional form and a dynamic model worked much better, their results hold up out-of-sample. Interestingly, even though this was just a small part of our paper, referees protested that the result was not new. They apparently see no value in subjecting previously published work to additional testing, even if it is seminal work on a major policy topic that is controversial enough that many top people (Krugman, Autor, Acemoglu, etc.) seem not to believe it. And this journal was second tier!
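For readers curious what an out-of-sample check like that involves mechanically, here is a generic sketch with hypothetical variable names (this is not our actual code or data, just the shape of the exercise): estimate the relationship between top income shares and top marginal tax rates on part of the panel, then see how well it predicts the held-out observations.

```python
# Generic sketch of an out-of-sample test on a country-year panel.
# Hypothetical data frame `df` with columns: country, year, top1_share, top_mtr.
import pandas as pd
import statsmodels.formula.api as smf

def out_of_sample_rmse(df: pd.DataFrame, train_mask: pd.Series) -> float:
    """Fit top income shares on top marginal tax rates in the training sample,
    then report the root mean squared prediction error on the held-out sample."""
    train, test = df[train_mask], df[~train_mask]
    model = smf.ols("top1_share ~ top_mtr", data=train).fit()
    pred = model.predict(test)
    return float(((test["top1_share"] - pred) ** 2).mean() ** 0.5)

# Example split: estimate on years before 2000, evaluate on 2000 onward.
# rmse = out_of_sample_rmse(df, df["year"] < 2000)
```

In practice one would presumably also want fixed effects and, as noted above, a dynamic specification, but the logic of the test is the same.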
In any case, there is a long history of constructing mathematical models to show how international trade or skill-biased technological change can influence the wage distribution in theory. However, it's not clear to me this literature has been very successful empirically, in the end. It seems to me that any theory meant to apply to all countries will be a theory that doesn't apply to any country. And somehow it always seems that the contributors to this literature are not aware that there is already a perfectly good explanation for rising inequality that has explanatory power internationally (and out-of-sample).
Lastly, why are wages growing slowly? Well, let's not forget that the Fed has raised interest rates 3 times already since the Taper, despite below-target inflation and slow wage growth. The labor market has been bad since 2007 (or, in fact, since 2000 -- see my longer explanation here). Wages tend to grow slowly in bad labor markets, and tight money will keep them from growing quickly. There's no need here to draw parallelograms.
Sunday, March 19, 2017
The Myth and Reality of Manufacturing in America
That's the title of a small pamphlet on manufacturing by Michael J. Hicks and Srikant Devaraj which argues that "Almost 88 percent of job losses in manufacturing in recent years can be attributable to productivity growth..."
However, the accounting exercise they conduct to get that number contains a conceptual flaw. Namely, they start in 2000 and ignore the role of productivity growth before that. Productivity growth, if anything, fell after 2000. If one does the same counterfactual job simulation from 1987 to 2000, one could say that without productivity growth, employment in the manufacturing sector would have increased 60%. In their exercise starting in 2000, they claim that, absent productivity growth, manufacturing employment would have declined just 12%. But the big question here is why the "no productivity growth" counterfactual suddenly stopped growing after 2000. What explains the difference in the trend? This is something they can't answer, since they decide not to look before 2000.
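To make the critique concrete, here is a sketch of the accounting identity these counterfactuals rely on (the index numbers below are made up for illustration and are not the Hicks/Devaraj figures): employment equals real output divided by labor productivity, so a counterfactual that freezes productivity at its base-year level simply tracks real output over the chosen window.

```python
# Sketch of the "jobs lost to productivity" accounting exercise:
#   employment index = output index / productivity index
# The counterfactual freezes productivity at its base-year level, so
# counterfactual employment just tracks real output.

def counterfactual_employment(output_index, productivity_index):
    """Return actual and constant-productivity employment indices."""
    base_prod = productivity_index[0]
    actual = [o / p for o, p in zip(output_index, productivity_index)]
    counterfactual = [o / base_prod for o in output_index]
    return actual, counterfactual

# Illustrative numbers only: output up 15%, productivity up 30% over the window.
actual, cf = counterfactual_employment([100, 115], [100, 130])
print(actual, cf)  # employment falls ~12% while the counterfactual rises 15%
```

The whole exercise therefore hinges on how fast output grows over the window you pick, which is exactly why starting in 2000, when real output growth stalled, produces such a different counterfactual than starting in 1987.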
Yet, somehow this pamphlet keeps getting cited by major news outlets.
An indication of the level of care in this article is that they cite "Acemoglu" as "Ocemoglu". Someone corrected this in the References, but left the citation in alphabetical order where the O should be.
There is a huge hole in Trump's promise to bring back US manufacturing jobs
That's the title of an article from Business Insider.
There is much I like about these articles. This one is better than most, as it at least plots data. However, I don't quite understand why they focus on robots and productivity gains rather than on the slowdown in the growth of real manufacturing output. Real manufacturing output from 2007 to 2017 has barely increased at all, which is surely a major factor in the decline of manufacturing employment. The article focuses on "manufacturing output hitting record highs in recent years, even as manufacturing employment continues its steady decline." Sorry, but a 1 or 2% increase in manufacturing output over a 10-year period is not a cause for celebration, even if we are now reaching "record highs". One could alternatively frame this performance as approaching "record lows" in terms of the 10-year growth rate of real output.
The other problem with the argument that booming productivity and robots have caused the decline in US manufacturing employment, aside from the fact that productivity growth hasn't increased, is that the period since 2000 has also seen the US share of manufacturing exports fall dramatically. Why would booming productivity lead to fewer exports and rising imports? Inquiring minds would like to know.
Yet another problem is that the composition of productivity growth has changed. Productivity for the median sector declined from 2000 to 2010, while the computer sector saw very rapid productivity growth amidst declining sales.
Friday, March 17, 2017
Links to Two Short Essays on Monetary Policy, One Good, One Bad
First, new FOMC member Neel Kashkari tells us why he dissented on this week's interest rate hike. It's a great essay. I laughed at "over the past five years, 100 percent of the medium-term inflation forecasts (midpoints) in the FOMC’s Summary of Economic Projections have been too high: We keep predicting that inflation is around the corner." That's some 20 forecasts all missed in the same direction. Impressive.
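Just to put a rough number on how unlikely that is for an unbiased forecaster, here is a one-line back-of-the-envelope calculation. It is mine, not Kashkari's, and it leans on a strong independence assumption (forecast errors across FOMC rounds are surely correlated), so treat it as illustrative only.

```python
# If each medium-term forecast miss were an independent coin flip (a strong
# assumption, since forecast errors are serially correlated), the chance of
# roughly 20 misses all landing on the same side would be:
p_all_same_side = 0.5 ** 20
print(f"{p_all_same_side:.7f}")  # about one in a million
```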
This is exactly the problem with monetary policy in the US, Europe, and Japan. Central banks keep predicting that inflation will return to target. When it doesn't, however, they don't respond by loosening policy or by updating their inflation forecasts. They just continue being predictably wrong. He makes a number of good points, including that the Fed shouldn't treat its target as though the risks are asymmetric, as if it were driving near a cliff. It should occasionally miss by going over 2%. If anything, the worry should be that if the US were hit by another negative shock, the Fed would have more trouble managing the economy at the zero lower bound. The asymmetric risk is an economy growing too slowly. What he doesn't mention is that this logic should imply a higher inflation target -- but I'll give him a pass on this. Yet another good point he makes is that the rest of the developed world is currently suffering from very low inflation. How likely is it that the US will soon suffer from an inflation spiral that can't be controlled by repeated interest rate increases, when other developed countries have inflation close to zero? Core inflation in the US is just 1.74%, he notes, up from 1.71% last year. We appear to be some ways away from an inflation spiral.
Another thing he doesn't do is discuss the fact that, not only has the Fed repeatedly missed its inflation forecasts in the past few years, but it has also under-performed its GDP growth forecasts. You'd think, if you wanted to drive down the center of the road, when you notice your car heading off onto the shoulder that you'd adjust the steering wheel back toward the center. The Fed instead has repeatedly decided not to do this, and then been surprised that the car doesn't return to the center of the road by itself. Many academics then argue that maybe it's because the steering wheel doesn't work. I say the problem isn't with the steering wheel, but that the driver refuses to adjust.
Thus I want to flag another short article, on Japan, which argues exactly this. Sayuri Shirai writes that Japan's non-standard monetary policies since it reached the zero lower bound have not been effective, and argues that Japan should thus raise interest rates. One tell that the author is overselling the ineffectiveness of MP at the ZLB is that although she credits Japan's MP easing with weakening the Yen after 2013, she writes "the yen’s depreciation has reduced the prices of exports but increased those of imports so that the net positive impact has not been as large as in the more distant past." How could a Yen devaluation simultaneously reduce the price of exports and increase import prices? Presumably, many of Japan's exports to the US, for example, are actually priced in dollars. In Yen terms, these prices are thus likely to rise substantially with a fall in the value of the Yen. The article doesn't back up these claims.
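To make the invoicing point concrete, here is a tiny illustration with made-up numbers (only the mechanics matter): if an export is invoiced in dollars and the dollar price barely moves, a yen depreciation raises the yen value of that export roughly one-for-one, rather than reducing it.

```python
# Illustrative only: a dollar-invoiced Japanese export under a yen depreciation.
usd_price = 20000.0   # sticky dollar price of the exported good
fx_before = 100.0     # yen per dollar before the depreciation
fx_after = 120.0      # yen per dollar after a ~20% yen depreciation

yen_value_before = usd_price * fx_before   # 2,000,000 yen
yen_value_after = usd_price * fx_after     # 2,400,000 yen
print(yen_value_before, yen_value_after)
# With the dollar price unchanged, the yen value of the export rises ~20%,
# which is the point made above about dollar-priced exports.
```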
The author also doesn't back up other claims, such as that the negative interest rate policy supposedly caused a rash of bad effects, such as "a deterioration in households’ sentiments" and "potential financial instability risk". If either of these were the case, it could easily be documented using high-frequency data from surprise announcements. But that isn't what people who look at the impact of MP announcements at the ZLB typically find. Interestingly, although much of the article is about negative interest rates, the author never mentions that Japanese rates are just -0.1%. Color me skeptical that a change of just 0.1% is really going to have the dramatic negative effects proposed in the article. Keep in mind that before the ZLB episode, the BOJ once raised interest rates 3.5% in a single year to fight inflation, and in the 1970s it cut interest rates by as much as 6% over a loosening cycle of 2-3 years to fight a recession. That's 60 times more medicine than a move from 0 to -0.1%. To provide a similar amount of ammunition now, an enormous balance sheet expansion would of course be in order. It's clearly foolish to conclude that, since things haven't gone better since Japan cut interest rates from 0 to -0.1%, Japan should now raise rates.
Aside from the fact that announcements have tended to move things in the right direction for Japan, the author also misses that many times, even as new data comes in which is below the BOJ's forecast, they've declined to take on more stimulative policies, occasionally disappointing markets and tightening monetary conditions. It's hard not to watch this and conclude that the Bank is not totally committed to its growth and inflation targets. A good analogy is your friend who states he wants to lose weight, but refuses to go to the gym more than once a month. Since going to the gym once a month isn't working, why go at all? That's what Shirai is telling us.
This is exactly the problem with monetary policy in the US, Europe, and Japan. Central banks keep predicting that inflation will return to target. When it doesn't, however, they don't respond by loosening policy, or by updating their inflation forecasts. They just continue being predictably wrong. He makes a number of good points, including that the Fed shouldn't treat their target as though the risks are asymmetric, like they are driving near a cliff. They should occasionally miss by going over 2%. If anything, the worry should be that if the US had a negative shock again, the Fed would have more trouble managing the economy at the zero lower bound. The asymmetric risk is an economy growing too slowly. What he doesn't mention is that this logic should imply a higher inflation target -- but I'll give him a pass on this. Yet another good point he makes is that the rest of the developed world is currently suffering from very low inflation. How likely is it that the US will soon suffer from an inflation spiral that can't be controlled by repeated interest rate increases? When other developed countries have inflation close to zero? Core inflation in the US is just 1.74, he notes, up from 1.71% last year. We appear to be some ways away from an inflation spiral.
Another thing he doesn't do is discuss the fact that, not only has the Fed repeatedly missed its inflation forecasts in the past few years, but it has also under-performed its GDP growth forecasts. You'd think, if you wanted to drive down the center of the road, when you notice your car heading off onto the shoulder that you'd adjust the steering wheel back toward the center. The Fed instead has repeatedly decided not to do this, and then been surprised that the car doesn't return to the center of the road by itself. Many academics then argue that maybe it's because the steering wheel doesn't work. I say the problem isn't with the steering wheel, but that the driver refuses to adjust.
Thus I want to flag another short article, on Japan, which argues exactly this. Sayuri Shirai writes that Japan's non-standard monetary policies since it reached the zero lower bound have not been effective, and argues that Japan should thus raise interest rates. One tell that the author is overselling the ineffectiveness of MP at the ZLB is that although she credits Japan's MP easing with weakening the Yen after 2013, she writes "the yen’s depreciation has reduced the prices of exports but increased those of imports so that the net positive impact has not been as large as in the more distant past." How could a Yen devaluation simultaneously reduce the price of exports and increase import prices? Presumably, many of Japan's exports to the US, for example, are actually priced in dollars. In Yen terms, these prices are thus likely to rise substantially with a fall in the value of the Yen. The article doesn't back up these claims.
The author also doesn't back up other claims, such as that the negative interest rate policy supposedly caused a rash of bad effects, such as "a deterioration in households’ sentiments" and "potential financial instability risk". If either of these were the case, it could easily be documented using high-frequency data from surprise announcements. But that isn't what people who look at the impact of MP announcements at the ZLB typically find. Interestingly, although much of the article is about the negative interest rates, the author never mentions that Japanese rates are just -.1%. Color me skeptical that a change of just .1% is really going to have these dramatic negative effects proposed in the article. Keep in mind that back before the ZLB episode, the BOJ once raised interest rates 3.5% in one year to fight inflation, and in the 1970s, they cut interest rates as much as 6% in a loosening cycle over 2-3 years to fight a recession. That's 60 times more medicine than a move from 0 to -.1%. To provide a similar amount of ammunition now, of course an enormous balance sheet expansion would be in order. It's clearly foolish to conclude that since things haven't gone better since Japan cut interest rates from 0 to -.1%, that Japan should now raise rates.
Aside from the fact that announcements have tended to move things in the right direction for Japan, the author also misses that, many times, even as new data has come in below the BOJ's forecasts, the Bank has declined to adopt more stimulative policies, occasionally disappointing markets and tightening monetary conditions. It's hard to watch this and not conclude that the Bank is not totally committed to its growth and inflation targets. A good analogy is a friend who says he wants to lose weight, but refuses to go to the gym more than once a month. Since going to the gym once a month isn't working, why go at all? That's what Shirai is telling us.
Wednesday, March 15, 2017
A Quick Argument Why Not to Raise Interest Rates
I teach today, so this will be quick. Why am I opposed to raising interest rates?
First, I should mention this is a debate we've had before. When the Fed decided to leave its balance sheet unchanged for all of 2009 and 2010 -- while the economy burned -- I thought it was a mistake. When Ben Bernanke raised the discount rate in 2010, I wondered if he had lost his marbles. The result was a 63-seat loss in the House for the Dems and a full decade of Tea-Baggers in Congress (and maybe more, depending on the extent to which gerrymandering affects local House elections, which will then affect the next redistricting). Given that the economy continued to grow slowly after that, I think I was justified.
Next, after QE2 at the end of 2010, I believed the Fed should have acted sooner than the end of 2012 with QE3, given that growth was slower than it had anticipated.
Then, I thought 2014 was too soon to taper, given that inflation was below target and GDP growth was below trend.
At the end of 2015, I also, not surprisingly, thought it was too soon to tighten. Once again, 2016 was a year of slow GDP growth and below-target inflation. So, in retrospect, I think that was the right call. The last two rate hikes, I believe, are also premature. Unemployment is now super low, and employment gains in the past several months do look solid. Thus it isn't crazy to hike now. But the unemployment rate and current headline inflation are not the only numbers the Fed should be looking at. Core PCE inflation is still pretty stable, below 1.8% in the last reading, and while headline inflation is now close to 2% and will likely move higher if current trends continue, this measure was also close to zero as recently as 2015, a year in which the Fed actually raised interest rates. The Fed shouldn't ignore that inflation has been below its target for most of the period since 2008, that the employment-to-population ratio for prime-aged workers, while trending in the right direction, still has a lot of room to recover, and that RGDP is some 20-25% below its long-run trend. Given that inflation has been below target for so long, why not let inflation run above target for a while to allow GDP to catch up with its previous trend?
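To get a sense of the magnitudes, here is a minimal sketch of the catch-up arithmetic; the assumed 1.5% average inflation rate and the eight-year window are illustrative placeholders, not a careful reading of the PCE data.

```python
import math

# Catch-up arithmetic with illustrative assumptions (not official data):
target = 0.02        # 2% inflation target
actual = 0.015       # assume inflation averaged roughly 1.5% since 2008
years_below = 8      # assume 8 years of undershooting
overshoot = 0.03     # a hypothetical 3% catch-up rate

# Price level relative to the 2% path after years of undershooting
price_gap = (1 + actual) ** years_below / (1 + target) ** years_below - 1
print(f"Price level vs. 2% path: {price_gap:.1%}")   # roughly -4%

# Years of 3% inflation needed to return to the 2% path
years_to_catch_up = (years_below * math.log((1 + target) / (1 + actual))
                     / math.log((1 + overshoot) / (1 + target)))
print(f"Years of 3% inflation to catch up: {years_to_catch_up:.1f}")  # about 4
```

Under these assumptions, the price level ends up roughly 4% below the 2% path, and about four years of 3% inflation would close the gap.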
And, add to this that our inflation target is too low. Clearly, the Fed has trouble managing the economy at the zero lower bound. A 2% inflation target means that any large shock will push us into a liquidity trap. This seems more-or-less obvious. Why not raise the target to 3%?
The only silver lining here for Democrats is that this hurts Trump and Republicans. But they've been pushing the Fed for tighter policy for years. One wonders if they will soon start to change their tune.
A Policy Proposal for US Manufacturing, and the Economy: Don't Raise the Federal Funds Rate
Doug Irwin shared a nice article in the WSJ which makes the point that trade deficits, by themselves, don't imply that trade is unfair. It's worth reading in full. Here is one part:
The Trump team is correct in believing that sometimes trade imbalances reflect government tampering, though in the capital markets. Messrs. Gagnon and Bergsten found that countries who buy foreign currency to suppress the value of their own tend to have larger trade surpluses.
In the past Chinese currency intervention and tight controls on capital inflows kept the yuan artificially cheap and its trade surpluses inflated. Brad Setser of the Council on Foreign Relations says Taiwan and South Korea have held down their currencies via currency intervention and by encouraging domestic investors to buy foreign assets and discouraging foreign investors from buying domestic assets.
These actions are legitimate cause for complaint. Yet finding an effective deterrent has eluded previous presidents and it’s not clear what Mr. Trump can do differently. Punishing China for manipulating its currency makes less sense now that China is trying to prop it up, rather than push it down as in the past.

Let me offer a solution, one that happens to address other, perhaps more serious, problems with the economy. The Fed should simply not raise interest rates as it is doing. In fact, it should probably be cutting them. If it did, the dollar would fall, which would help manufacturing and the trade balance. It would also help the US move back toward its long-run growth path, and help inflation move up to its ostensible target. Unemployment is now low, and while it isn't crazy to think the Fed should raise interest rates, the Fed should transition to a 3% (or 4%) inflation target to make future zero lower bound episodes less likely, and it should treat that target symmetrically. This is a simple pro-trade and pro-manufacturing policy that will anger almost no one and actually be good for the rest of the world economy, since it will spur faster growth. Manufacturing employment will grow on two counts -- one from faster growth, and a second from a weaker dollar. This is the obvious play to help manufacturing, and it's what I pushed for when I worked in the Obama administration.
The Peso has recovered. Good news for the future of NAFTA...
With the US dollar already a bit overvalued, and the US stock market also looking richly valued by historical measures, I need something to invest in each month. Thus, I decided to have a look at long-term government bonds in Mexico, and the value of the peso, to play the carry trade/uncovered interest parity puzzle (a rough sketch of the arithmetic is at the end of this post). Last I'd checked, the peso had depreciated a lot from its pre-election levels. Turns out, not any more. The peso is roughly back to pre-election levels. Looks like the conventional wisdom now is that nothing really will happen to NAFTA.
This is great. Ending NAFTA was a horrible idea, and Mexico has, at most, only a little to do with the decline of American manufacturing. It's also possible that trade with Mexico has helped soften the blow to American manufacturing, as cheap Mexican labor can sometimes be a complement in production for some sectors. This may be why most of the countries near China, with the exception of Japan, are not now in depressions. (In fact, someone needs to write a paper about this...) Below is the NAFTA trade balance ex-oil. You'll notice it has always been roughly balanced. Ending NAFTA would create winners and losers in the US, but you can bet the losers (at least the exporters, not the consumers) would scream louder than the winners would applaud.
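As promised above, here is a rough sketch of the carry trade/uncovered interest parity arithmetic; the yields and exchange rates below are hypothetical placeholders, not market quotes.

```python
# Carry trade vs. UIP with hypothetical numbers (not market quotes):
i_mx = 0.070      # assumed yield on long-term Mexican government bonds
i_us = 0.024      # assumed yield on US Treasuries of similar maturity
spot = 19.5       # assumed pesos per dollar today
spot_next = 19.8  # assumed pesos per dollar one year from now

# Return to a US investor who converts dollars to pesos, earns the Mexican
# yield for a year, and converts back to dollars.
carry_return = (1 + i_mx) * (spot / spot_next) - 1
print(f"Realized carry return: {carry_return:.2%}")

# Under uncovered interest parity, the peso would be expected to depreciate
# by roughly the interest differential, leaving no excess return on average.
uip_depreciation = (1 + i_mx) / (1 + i_us) - 1
print(f"Peso depreciation implied by UIP: {uip_depreciation:.2%}")

# The "UIP puzzle" is that high-interest currencies have historically tended
# to depreciate by less than this, so the carry trade has often paid off.
```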
Tuesday, March 14, 2017
The Real Exchange Rate and China's Savings
I attended the Forum for Research in Empirical International Trade (FREIT) in Yogyakarta, Indonesia this past week. Marc Muendler and I were debating the source of China's excess savings, over and above its high level of investment. China's high savings rate could have multiple causes (e.g., lack of a social safety net implies a need for precautionary savings, while countries that are growing fast, such as the Asian Tigers, have a clear tendency to save a lot), and various policies could be at play. However, in this post I merely want to lay out how an undervalued real exchange rate, which can be accomplished with a combination of capital controls and foreign reserve accumulation, could increase the rate of savings.
The short story is that an undervalued RER will increase exports and decrease imports, and this equals the gap between savings and investment by accounting identity.
First, why does a high savings rate matter? A high savings rate (over investment) means that China will be exporting both capital and goods to the US, and that it won't be importing many goods. This follows from an accounting identity: Savings (S) - Investment (I) = Exports (X) - Imports (M).
Imagine that, at first, S = I, which implies that X = M, while the RER is fairly valued. Now imagine that China devalues its RER by 40%. This change in relative prices means that exports will surge while imports will fall. This also implies that savings will exceed investment. What is the logic? Let's have a closer look at the accounting identities:
Y = C + I + G + X - M
(Where Y = output, C = consumption, I = Investment, G = Government spending, etc.)
Clearly, if the Yuan is weak, then, on one hand, China will export more, and on the other, it will import less. Thus, output will rise. That's straightforward. Total consumption (of domestically produced goods and of imports) may decrease by a little as imports become more expensive, but this effect is likely to be drowned out by changes in the trade balance. It could decrease by more if China also raises domestic interest rates to counter inflation, something which would cause imports to plunge further. How will savings increase? Savings is income not taxed or spent:
S = (Y - T - C) + (T - G)
Total savings equals private and public savings. Fast Chinese growth will also tend to boost T relative to G, and might also boost Y relative to C. If China undervalues its exchange rate, and also raises its domestic interest rates to contain inflation, C could fall, raising savings. There could also be a cultural explanation as Asian countries tend to have high levels of savings in general, although this doesn't explain why savings is larger than investment, or why savings rates have declined recently in Japan and Taiwan (or why growth pre-dated high savings rates in several of these countries). Trade doesn't show up here, but we can substitute in for Y from the equation above, and get:
S = (( C + I + G + X - M ) - T - C ) + ( T - G )
And now simplify to:
S - I = X - M
Savings minus investment equals exports minus imports. Exports and imports clearly depend on the real exchange rate, and thus so does the gap of savings over investment. I'm not always the biggest advocate for the use of mathematics in economics, but in some cases it can really help for clarification. The above algebra makes this result seem obvious and intuitive, but your average intelligent guy-in-a-bar wouldn't find a link between the exchange rate and savings intuitive or obvious without looking at the math.
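As a quick sanity check on the algebra, here is a toy numerical example; all of the numbers are made up purely for illustration.

```python
# Toy national accounts (illustrative numbers) to verify S - I = X - M.
C, I, G, X, M, T = 500.0, 300.0, 150.0, 250.0, 150.0, 180.0

Y = C + I + G + X - M               # output identity
private_saving = Y - T - C          # household and business saving
public_saving = T - G               # government saving
S = private_saving + public_saving  # total national saving

assert abs((S - I) - (X - M)) < 1e-9
print(f"S - I = {S - I:.0f}, X - M = {X - M:.0f}")  # both equal 100
```

If a weaker RER raises X and lowers M while investment is unchanged, the gap between savings and investment has to widen one-for-one, which is the point of this post.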
A more controversial assertion is that, because of hysteresis, an undervalued exchange rate yesterday could mean you'll have a larger tradables sector today, and thus more savings. On the other hand, in recent years, with weaker Chinese growth and looser monetary policy, the Chinese government has been trying to prop up the currency, although this doesn't necessarily mean that the Chinese currency is overvalued from the perspective of its tradable sector. But it does mean that this is a topic deserving of further study.
Monday, March 13, 2017
Robots on an Assembly Line...
Mercedes A Class Production Line
You'll notice, not a lot of human workers. When you see this, you might think -- "Aha! It's not foreigners who have taken our jobs, but robots." However, increases in productivity are not a new development for US manufacturing. In fact, labor productivity growth has been slower than usual for the median manufacturing sector since 2000.
Second point, labor-saving technology is *not* what is wrong with the US economy. Faster productivity growth in production means lower inflation, which means more stimulative US monetary policy (or at least it should). That means more GDP growth and more employment elsewhere.
Thus, this video doesn't imply that there isn't still *something* wrong with the US manufacturing sector -- and that something is not fast productivity growth. Still, it's fascinating to watch an assembly line.
Tuesday, March 7, 2017
On Perverse Incentives and Replication in Science
Stephen Hsu has a nice blog post on this topic. He writes about this common pattern:
1. Study reports results which reinforce the dominant, politically correct, narrative.
2. Study is widely cited in other academic work, lionized in the popular press, and used to advance real world agendas.
3. Study fails to replicate, but no one (except a few careful and independent thinkers) notices.
#1 is spot-on for economics. Woe be to her who bucks the dominant narrative. In economics, something else happens as well. Following the study, there are 20 piggy-back papers which test for the same results on other data. The original authors typically get to referee these papers, so if you're a young researcher looking for a publication, look no further. You've just guaranteed yourself the rarest of gifts -- a friendly referee who will likely go to bat for you. Just make sure your results are similar to theirs. If not, you might want to shelve your project, or else try 100 other specifications until you get something that "works". One trick I learned: you can bury a robustness check which overturns the main results deep in the paper, and your referee, who is emotionally invested in the benchmark result, for sure won't read that far.
Hsu then writes:
"one should be highly skeptical of results in many areas of social science and even biomedical science (see link below). Serious researchers (i.e., those who actually aspire to participate in Science) in fields with low replication rates should (as a demonstration of collective intelligence!) do everything possible to improve the situation. Replication should be considered an important research activity, and should be taken seriously"
That's exactly right. Most researchers in economics go their entire careers without criticizing anyone else in their field, except as an anonymous referee, where they tend to let out their pent-up aggression. Journals shy away from publishing comment papers, as I found out first-hand. In fact, many, if not a majority, of the papers published in top economics journals are probably wrong, and yet the field soldiers on like a drunken sailor. Often, many people "in the know" realize that big papers have fatal flaws, but have every incentive not to point this out and create enemies, or to waste their time writing up something which journals don't really want to publish (the editor doesn't want to piss a colleague off either). As a result, many of these false results end up getting taught to generations of students. Indeed, I was taught a number of these flawed papers as both an undergraduate and a grad student.
What can be done?
Well, it would be nice to make replication sexy. I'm currently working on a major replication/robustness project of the AER. In the first stage, we are checking whether results are replicable, using the same data sets and empirical specifications. In the second stage, we plan to think up a collection of robustness checks and out-of-sample tests of papers, and then create an online betting market about which papers will be robust. We plan to let the original authors bet on their own work.
Another long-term project is to make a journal ranking system which gives journals points for publishing comment papers. Adjustments could also be made for other journal policies, such as the extent to which a particular journal leeches off the academic community with high library subscription fees, submission fees, and long response times.
The AEA should also come out with a new journal split between review articles (which tend to be highly cited) and comment papers (which tend not to be). That way, it could do well and do good at the same time.
As an individual, you can help the situation by writing a comment paper (maybe light up somebody who isn't in your main field, like I did). You can also help by citing comment papers, and by rewarding comment papers when you edit and serve as a referee. As an editor, do you really care more about your journal's citations than truth? You could also engage in playful teasing of your colleagues who haven't written any comment papers as people who aren't doing their part to make economics a science. (You could also note that it's also a form of soft corruption, but I digress...)
Sunday, March 5, 2017
WSJ Article on Trump's Fed Vacancies
Given the importance of monetary policy, I've been waiting for signs about who Trump will appoint to the Fed. The media gave Obama's Fed picks very little attention. In Geithner's memoir, which was largely about Obama administration economic policy, he didn't even mention Obama's Fed choices; apparently he didn't consider them important. In my view, however, the Fed is crucial to getting the US economy back to normal growth. Will Trump pick hard-money or soft-money types? It's anybody's guess. The WSJ has some details, although it leaves out the short-listed candidates' views on monetary policy.
In any case, here are some key grafs for those who can't get past the WSJ's paywall:
President Donald Trump will be able to recast the Federal Reserve by filling three or more vacancies on its seven-member board of governors, and is leaning toward candidates with banking and financial world experience rather than academic economists.
After his campaign criticism of the central bank’s low-interest-rate policies, many observers speculated he would seek more “hawkish” candidates who would favor higher borrowing costs. But his choices may be driven less by these issues and more by their practical experience, judging from his early picks for other top economic policy posts in the administration—drawn from investment banking, private equity and business—and the pool of early contenders for the Fed jobs.
...
David Nason, an executive at General Electric Co.’s financing arm and former Treasury Department official during the 2008 financial crisis, emerged as a leading candidate ... Other potential candidates the administration has considered include former BB&T Corp. Chief Executive John Allison, and Rep. French Hill (R., Ark.), a former banker just re-elected to his second House term, these people said....
Mr. Trump’s views on monetary policy are unclear—he both praised and criticized the Fed’s low-rate policies during the campaign. He also promised to roughly double the pace of annual economic growth to 4%, which would be harder if the central bank boosts interest rates more aggressively to restrain inflation.
“There’s no reason to think Trump would go for one of these really predictable, hawkish academics who’s schooled in balancing inflation,” said Sarah Binder, a political-science professor at George Washington University who has studied the Fed’s relationship with Congress. “He seems to want people, as best we can tell, with a background in finance or a background in business that brings some sort of real world—from Trump’s perspective—appreciation for the good stuff that low rates can do,” she said.
The seven-member Fed board has two vacancies and will have a third when governor Daniel Tarullo steps down in April.
Yes, Monetary Policy Works in a Liquidity Trap, and Fiscal Stimulus Stimulates
Here's a nice short article in VoxEU by Benoit Mojon showing the decoupling of US and euro-area interest rates after the Taper Tantrum, and the increase in US interest rates after the election of Donald Trump raised the likelihood of fiscal stimulus.
Here are the key graphs:
The expected Trump fiscal stimulus increased interest rates in the US, and in Europe, though to a lesser extent. Longer term, however, European rates rose just as much. Interesting stuff.
Saturday, March 4, 2017
Ranking of Economists, by # of Twitter Followers
https://ideas.repec.org/top/top.person.twitter.html
Thursday, March 2, 2017
What's wrong with the US economy? A theory, in one blog post
Contrary to popular belief among my fellow liberal economists, the US economy is far from great. The graph below shows real US GDP relative to its long-run trend. It shows that US RGDP in 2016 was some 25% below that trend. While one might quibble a bit with the trend, it's clear that US economic growth has slowed considerably since 2000, with the bulk of the slowdown coming since 2008.
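For readers who want to reproduce this kind of trend comparison, here is a minimal sketch. The FRED series code, the pre-2000 window used to fit the trend, and the simple log-linear fit are all my assumptions about how such a chart is typically constructed, not necessarily how the graph above was made.

```python
# Minimal sketch: real US GDP vs. a log-linear trend fit on pre-2000 data.
# Assumes pandas_datareader is installed; GDPC1 is FRED's real GDP series.
import numpy as np
import pandas_datareader.data as web

gdp = web.DataReader("GDPC1", "fred", "1947-01-01")["GDPC1"].dropna()

# Fit a log-linear trend on the 1947-1999 sample (the cutoff is an assumption).
pre2000 = gdp[:"1999-12-31"]
t = np.arange(len(pre2000))
slope, intercept = np.polyfit(t, np.log(pre2000.values), 1)

# Extrapolate the trend over the full sample and compute the percent gap.
t_all = np.arange(len(gdp))
trend = np.exp(intercept + slope * t_all)
gap = gdp.values / trend - 1
print(f"Latest gap vs. pre-2000 trend: {gap[-1]:.1%}")
```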
How about US manufacturing? A friend who recently published a paper on manufacturing in a top-20 journal complained how hard it was to publish, despite it being one of his better papers. For some reason, the topic seems to bring out the crazy in a lot of economists. Why? I'm not sure, but we economists like to think we know something about manufacturing, and now, in the Age of Trump, many economists have a completely understandable desire to try to show that everything Trump says is wrong. Like shooting hair-spray at a flame, this makes the usual Dunning-Kruger effect even worse than normal when the topic turns to the decline of US manufacturing. The general feeling among many economists is that the decline in manufacturing employment is all about robots, productivity, and the sectoral shift toward services, and that people who think trade is part of the story must be ignorant of economics. I was once taken in by this view as well. However, as I laid out in my own research (and as Susan Houseman does here and here), the data do not support this assertion as an explanation for the relative decline in manufacturing output and exports, or for the decline in the level of manufacturing employment. Productivity growth in manufacturing since the mid-1990s has been, if anything, lackluster. Part of the confusion is that there was a long-term trend decline in manufacturing employment as a share of the population (even while the level was flat), which is due to productivity growth and sectoral shift. Yet, after the Asian Financial Crisis in 1997, the US dollar appreciated substantially, while Asian countries began hoarding dollar reserves. Add in the rise of China and China's WTO accession, plus the Bush tax cuts and 9/11 spending, and you have the perfect confluence of factors to cause a "Surprisingly Swift" collapse in the level of manufacturing employment.
The graph below, of real manufacturing output, does not support the thesis that the problem with US manufacturing employment is simply fast productivity growth. Why would fast productivity growth and the introduction of robots cause a large trade deficit and a decline in real output relative to trend? Inquiring minds would like to know.
However, while the collapse in manufacturing output and employment may have been a chief cause of the initial slowdown in overall GDP growth, GDP was only about 5% below trend in the 2000s. What has made things worse has been the policy reaction to the shock of the rise of China and the dollar appreciation. First, low interest rates spurred by savings flowing out of Asia, together with the carnage to manufacturing in the US, fed a housing bubble. Then, poor regulation of the shadow banking sector meant that the collapse of the housing bubble turned into a financial crisis. And then the liquidity trap, and the difficulty that central banks apparently have managing the economy at the ZLB, meant that this temporary shock had a long-term impact -- an impact that ultimately resulted in the Tea Party congress of 2010 to 2020, and the election of Trump. (Here's a slightly longer version of this old bedtime story.)
In the early days of the zero lower bound, many believed that monetary policy couldn't be effective. I also believed that rates couldn't go negative. Others simply believed that monetary policy was much less effective. However, since then we've learned that short-term rates can go negative, perhaps as low as -1.5%. We've also learned that monetary policy announcements still affect stock, bond, and exchange rate markets in ways consistent with monetary policy being effective. An example of this was when the Fed hiked the discount rate in early 2010, causing the dollar to appreciate -- and this was when the economy was still deeply depressed, precisely the opposite of what your Econ 1 textbook would recommend in this situation. I once asked Jeremy Stein, who later served on the FOMC, what in the world the Fed was thinking. He tried to tell me that this never happened.
What we've also witnessed during the past few years is that the Fed has repeatedly decided not to alter its monetary policy stance even as it marks down its own forecasts for economic growth and inflation. This can be seen in the graph below. After the first round of QE in 2008, the Fed did almost nothing again until the end of 2010 -- a gap of two years -- even though the economic recovery was much slower than it had anticipated. Then, even though growth again came in slower than anticipated in 2011 and 2012, it waited another two years -- until the end of 2012 -- before acting again. What the Fed has come to decide over time is not that it needs to do more stimulative monetary policy, but that the economy can't grow like it used to. And given that the Fed can always make the economy grow more slowly if it wants, this situation has developed not necessarily to America's advantage. Time to dust off Bernanke's "self-induced paralysis" thesis.