Monday, April 24, 2017

How Bad is Peer Review? Evidence From Undergraduate Referee Reports on the Currency Unions and Trade Lit

In a recent paper, Glick and Rose (2017) suggest that the Euro led to a staggering 50% increase in trade. To me, this sounded a bit dubious, particularly given my own participation in the earlier currency unions and trade literature (which I wrote up here; my own research on this subject is here). This literature includes papers by Robert Barro that imply that currency unions magically increase trade 10-fold, and a QJE paper which suggests that currency unions even increase growth. In my own eyes, the Euro has been a significant source of economic weakness for many European countries in need of more stimulative policies. (Aside from the difficulty of choosing one monetary policy for all, it also appears that MP has been too tight even for some of the titans of Northern Europe, including Germany. But that's a separate issue...)

Given my skepticism, I gave my sharp undergraduates at NES a seek-and-destroy mission on the Euro effect on trade. Indeed, my students found that the apparent large impact of the Euro, and of other currency unions, on trade is in fact sensitive to controls for trends, and is likely driven by omitted variables. One pointed out that the Glick and Rose estimation strategy implicitly assumes that the end of the Cold War had no impact on trade between East and West. Several of the Euro countries today, such as the former East Germany, were previously part of the Warsaw Pact. Any increase in trade between Eastern and Western European countries following the end of the Cold War would clearly bias the Glick and Rose (2017) results, which naively compare the entire pre-1999 trade history with trade after the introduction of the Euro. Indeed, Glick and Rose assume that the long history of European integration (including the Exchange Rate Mechanism) culminating in the EU had no effect on trade, but that switching to the Euro from merely fixed exchange rates resulted in a magical 50% increase. Several of my undergraduates pointed out that this effect goes away once a simple time trend control is added. Others noted that the authors clustered in only one direction, rather than the two or three directions one might naturally expect. In some cases, multi-way clustering reduced the t-scores substantially, although it didn't seem to be critical. One student reasoned that the preferred regression results from GR (2017) don't really suggest that CUs have a reliable impact on trade: the estimates from different CU episodes are wildly different -- GR found that some CUs contract trade by 80%, others have no statistically significant effect, some have a large effect, and still others have an effect that is simply too large to be believed (50-140%).
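To see the time-trend point concretely, here's a toy simulation (synthetic data and made-up parameters; nothing here uses the actual GR dataset): if trade is trending upward for reasons unrelated to the Euro, a naive before/after comparison attributes the accumulated trend to the currency union, while adding a simple linear trend control makes the "effect" vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1970, 2014)
t = (years - years[0]).astype(float)

# Synthetic log trade for a country pair: steady integration trend plus noise,
# with NO true Euro effect built in
log_trade = 0.02 * t + rng.normal(0.0, 0.05, len(t))
euro = (years >= 1999).astype(float)

# Naive before/after comparison: regress log trade on an intercept and the Euro dummy
X_naive = np.column_stack([np.ones_like(t), euro])
b_naive = np.linalg.lstsq(X_naive, log_trade, rcond=None)[0]

# The same regression with a simple linear time trend added
X_trend = np.column_stack([np.ones_like(t), euro, t])
b_trend = np.linalg.lstsq(X_trend, log_trade, rcond=None)[0]

print(f"'Euro effect' with no trend control: {b_naive[1]:.2f} log points")
print(f"'Euro effect' with a trend control:  {b_trend[1]:.2f} log points")
```

With these made-up numbers the naive dummy picks up roughly 0.4 log points of pure trend, while the trend-controlled estimate sits near zero.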
Many of my students noted that there is an obvious endogeneity problem at play -- countries don't decide to join or leave currency unions randomly -- and the authors did nothing to alleviate this concern. The currency union breakup between India and Pakistan is but one good example of the non-random nature of CU exits.

You'd think that a Ph.D.-holding referee for an academic journal which is still ranked in the Top 50 (Recursive, Discounted, last 10 years) might at least be able to highlight some of these legitimate issues raised by undergraduates. You might imagine that a paper which makes some of the errors above might not get published, especially if, indeed, star economists face bias in the publication system. You might also imagine that senior economists, tenured at Berkeley and at the Fed, might not make these kinds of mistakes -- mistakes that can be flagged by undergraduates (no matter how bright) -- in the first place. You'd of course be wrong.

The results reported and the assumptions used to get there are so bad that you get the feeling these guys could have gotten away with writing "Get me off your fucking mailing list" a hundred times to fill up space.

Before ending, I should note that I do support peer review, and also believe that economics research is incredibly useful when done well. But science is also difficult. This example merely highlights that academic economics still has plenty of room for improvement, and that a surprisingly large fraction of published research is probably wrong. I should also add that I don't mean to pick on this particular journal -- if a big name writes a bad article, it is only a question of which journal will accept it. This view of the world does suggest, however, that comment papers, replications, and robustness checks deserve to be valued more highly in the profession than they are at present. Much of the problem with this line of work also stems from an almost willful ignorance of history. Thus, it's also sad to see departments such as MIT scale back their economic history requirements in favor of more math. I don't see this pattern resulting in better outcomes.


Update: Andrew Rose responds in the comments. Good for him! Here I consider each of his points.

Rose wrote: "-Get them to explain how that they could add a time trend to regression (2), which is literally perfectly collinear with the time-varying exporter and importer fixed effects."

Sorry, but a Euro*year interactive trend, or, indeed, any country-pair trend, is not going to be even close to collinear with time-varying importer and exporter year fixed effects. The latter would be controlling for a general country trend, but not for trends in country-pair specific relationships. To be fair, regressing one trending variable on another with no control for trends is the most common empirical mistake people make when running panel regressions.
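For the skeptical, here's a quick numerical check on a toy three-country panel (hypothetical country codes, synthetic design matrix): adding a single pair-specific trend column to a full set of exporter-year and importer-year dummies strictly increases the rank of the design matrix, i.e. the trend is not collinear with the time-varying fixed effects.

```python
import numpy as np
from itertools import product

countries = ["FRA", "DEU", "ITA"]
years = range(2000, 2006)

# All directed country pairs (exporter != importer), observed every year
rows = [(e, i, y) for e, i, y in product(countries, countries, years) if e != i]

def dummies(keys):
    """One-hot encode a list of hashable keys into a dummy matrix."""
    uniq = sorted(set(keys))
    M = np.zeros((len(keys), len(uniq)))
    for r, k in enumerate(keys):
        M[r, uniq.index(k)] = 1.0
    return M

# Time-varying exporter and importer fixed effects: exporter-year and importer-year dummies
X_fe = np.hstack([
    dummies([(e, y) for e, i, y in rows]),
    dummies([(i, y) for e, i, y in rows]),
])

# A single pair-specific linear trend, here for the FRA->DEU pair only
pair_trend = np.array(
    [float(y - 2000) if (e, i) == ("FRA", "DEU") else 0.0 for e, i, y in rows]
).reshape(-1, 1)

rank_fe = np.linalg.matrix_rank(X_fe)
rank_full = np.linalg.matrix_rank(np.hstack([X_fe, pair_trend]))
print(rank_fe, rank_full)  # the rank goes up by one: the pair trend is NOT collinear
```

The intuition: within an exporter-year cell (say, France in 2003), the pair trend varies across importers, so no combination of exporter-year and importer-year dummies can reproduce it.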

Rose also wrote: "-Explain to them how time-varying exporter/importer fixed effects automatically wipe out all phenomena such as the effects of the cold war and the long history of European monetary integration."

Sorry, but that's also not the case. A France year dummy, to be concrete, won't do it. That would pick up trade between France and all other countries, including EU, EMU, and former Warsaw Pact countries. You'd need to put in a France*EU interactive dummy, for example. But such a dummy would kill the EMU dummy. Below, I plot the evolution of trade flows over time (dummies in the gravity equation) for (a) all of Western Europe, (b) Western European EU countries, and (c) the original entrants to the Euro Area (plus Greece). What you can see is that, while trade between EMU countries was on average higher after the Euro than before (which is what your method picks up), most of the increase had in fact happened by the early 1990s. Relative to 1998, trade even declined a bit. There's nothing here to justify pushing a 50% increase.

[Figure: trade flows over time, estimated as year dummies in a gravity equation, for (a) all of Western Europe, (b) Western European EU countries, and (c) the original Euro Area entrants plus Greece.]
Rose also indicated that my undergraduates should "Read the paper a little more carefully. For instance, consider: a) the language in the paper about endogeneity and b) Table 7, which explicitly makes the point about different currency unions."


Actually, let's do that. When I search for "endogeneity" in the article, the first hit I get is on page 8, where it is asserted that including country-pair fixed effects controls for endogeneity. Indeed, it does control for time-invariant endogeneity. But if countries, such as India and Pakistan, have changing relations over time (such as before and after partition), this won't help.

The second hit I get is footnote 7: "Our fixed‐effects standard errors are also robust. We do not claim that currency unions are formed exogenously, nor do we attempt to find instrumental variables to handle any potential endogeneity problem. For the same reason we do not consider matching estimation further, particularly given the sui generis nature of EMU." [the bold is mine.]

Actually, a correction here: my undergrads reported that the FE standard errors are actually clustered, but this is a minor point. You may not claim that currency unions are formed exogenously, but, as you admit, your regression results do nothing to reduce the endogeneity problem. And this despite the fact that I had already shared my own research with you (ungated version), which showed that your previous results were sensitive to omitting CU switches coterminous with wars, ethnic cleansing episodes, and communist takeovers.

Also, the "For the same reason" above is a bit strange. The sentence preceding it doesn't give a reason why you don't try to handle the endogeneity problem. So what is the reason? In fact, a matching-type estimator would be advisable here.

Lastly, in your discussion of Table 7, I see you note that it implies widely varying treatment effects of CUs on trade. But I like my undergraduates' interpretation of this as casting doubt on the whole exercise. Many of the individual results, including an 80% contraction for some currency unions, are simply not remotely plausible. The widely varying results are almost certainly due to wildly different endogeneity problems affecting each group of currency unions, not to wildly different treatment effects.




Monday, April 10, 2017

A Quick Theory of the Industrial Revolution (or, at least, an answer to 'Why Europe?')

Following a Twitter debate involving myself, Gabe Mathy, Pseudoerasmus, and Anton Howes on the theory that high wages in England induced labor-saving technologies and led to the Industrial Revolution, I thought I should lay out my own quick theory of why the Industrial Revolution happened in Britain, or at least, why in NW Europe. In short, this theory is too speculative to write an academic paper about (plus, it is not popular with economic historians, so it wouldn't get published), and I don't have enough time to write a book. Twitter doesn't provide enough space, so a blog post it will be.

As far as we know, the high-wage economy that persisted in Europe after the Black Death was somewhat of an anomaly for the pre-industrial world. Wages don't seem to have been as high in East Asia or India, while we actually don't know what wages were like in Africa. On the other hand, wages were even higher in many parts of the New World, which had been recently depopulated and where land-labor ratios were high. In any case, one can see the case for why economic agents would try to cut back on high labor costs through new technologies. This theory makes some sense, but I also see a few weaknesses. One is that the "wave of gadgets" which swept over England at this time included inventions like the flush toilet, which was not actually labor saving, alongside the invention of calculus and of Malthusian economics. Also, many of the big technologies invented were actually quite simple, and so effective that they would have been cost-effective to develop and implement at a wide range of relative prices and wages. A recent example of labor-saving technology being worthwhile even where labor is cheap is robots delivering mail in low-wage China.

Instead, I would focus on the other implication of high wages in a Malthusian world: fewer nutritional deficiencies, and higher human capital. This would include having consumers who can do more than simply buy necessities. High wages aren't enough on their own; rather, I would think it's the size of the human-capital-adjusted population one is in contact with that matters for technology growth. Thus, the Industrial Revolution could not have happened on a remote island with high wages, and would have been much less likely on continents with a North-South axis, a la Jared Diamond. It also means that the rise of inter-continental trade would make overall technological growth faster, as technologies could be shared. This latter part of the story is probably crucial, as cheap American cotton was bound to be combined with the idea of mechanized textile factory production, especially since the latter had already been invented. (Indeed, one can't even grow cotton in England!)

The above logic could also answer why the IR didn't happen immediately after the Black Death. All I would argue is that if human capital is important for growth, then what we should expect to have seen after the BD in Europe is a relative "Golden Age" with a lot of progress and advance. In fact, the Black Death was the beginning of the end of the Dark Ages in Europe, as the Renaissance, the Age of Exploration, the Protestant Reformation, the Scientific Revolution, the Enlightenment, and the US Declaration of Independence all happened in the centuries thereafter. The printing press was invented in the 1450s in high-wage Germany. Modern banking was invented in high-wage Renaissance Italy. Henry the Navigator had his formative years in the post-BD period of affluence in the 1410s. Brunelleschi demonstrated perspective in art in the early 1400s. Newton invented the calculus in the 1660s. The locus of technological change dramatically shifted to Europe -- before, it had been a backwater. Europe then began colonizing the rest of the world. And, as it did, stagnant Eurasian agriculture imported a wealth of new agricultural technologies from the New World. However, I think it's better to view the Industrial Revolution not as a special event by itself, but as one of a long line of major breakthroughs and accomplishments in the centuries after the Black Death, in which one sphere of European society after another was transformed. Productivity soared in book-making after the introduction of the printing press, only demand for books was much less elastic than demand for clothes and fashion, and so an Industrial Revolution could not depend on it. But the initial idea of mechanizing cotton textiles was probably not a much larger or more difficult technological or intellectual breakthrough than these other revolutions; it just happened to be far more consequential in economic terms due to the nature of the industry.

The key difference between Europe and the rest of the world was that European cities and people were filthy, and thus had high death rates, which kept living standards high. This theory can also explain why "Not Southern Europe": for Malthusian reasons, the post-Black Death high wages began to decline after around 1650 in the south. Note that this theory is also perfectly consistent with a cultural explanation for Britain's "wave of gadgets". A society in which half of the people suffer from protein deficiency is probably not going to be very vibrant culturally. Such a society may also develop institutions (the Inquisition) which place other barriers on development. Conversely, a society benefiting from high wages for Malthusian reasons might also be more likely to develop a culture and institutions conducive to economic growth.

In any case, this theory isn't completely new -- I've heard others, such as Brad DeLong, mention this as a possibility, but I haven't seen it explored in any detail. However, it won't be as popular with economists as the idea that growth is all about genetics. This idea was actually the first really big "Aha" moment I had after starting my Ph.D., as it popped right out of Greg Clark's excellent course on the Industrial Revolution, from where many of these ideas come. I eventually decided it was too speculative to write my dissertation on, so then switched to hysteresis in trade, and then to the collapse in US manufacturing employment. Maybe one day, post-tenure I'll return to growth...

Update: Pseudoerasmus points me to a very nice-looking paper by Kelly, Mokyr, and Ó Gráda, with a theory very similar to what I've written. They focus on England vs. France, and on the IR rather than on everything which happened after the Black Death, and they don't appear to include Crosby-Diamond-type effects, but I still approve.

Thursday, April 6, 2017

The Real Exchange Rates and Trade Literature

Noah Smith asks about the literature on the real exchange rate (RER) and trade.

My own main line of research is on RERs and manufacturing (also here, on workers) and on RER measurement, and I've also done work on the impact of currency unions on trade, or lack thereof.

The first and third of these papers do include a bit on trade, although in neither case is it the focus. The third paper shows that the class of Weighted-Average Relative (WAR) exchange rates does a better job predicting US trade balances than do traditional indices. The difference largely reflects the impact of increased trade with the People's Republic of China. (In truth, the WARP index was first discovered by Thomas, Fahle, and Marquez in a very under-rated paper...)



I've also been looking recently, while prepping for class, for a good paper on RERs and trade. In truth, the answer is that I couldn't really find good, modern work that takes identification seriously (if you know of some, please add it in the comments). Actually, when I began working on RERs and manufacturing employment, I could see that this literature had already become quite dated. Results were mixed, the best paper was still from before the era in which standard errors were clustered in one direction, much less two, and each paper implied something different (of the four most recent using US data, one said RERs had only a small impact, a second said the impact depended on the specification, a third said it had none, and a fourth said it was large), so the literature seemed to me in need of a modern treatment. However, I got a lot of pushback the first time I presented this paper that the topic had "already been done". I think part of this stems from a certain gullibility, and over-confidence among certain economists in their profession. If a paper is published in a good journal, then that's the end of the story. It must be true. Even if the paper was from 1991, and the results came from 600 data points and included few controls and barely any robustness checks. In truth, I suspect that probably more than half of published research is straight wrong. In addition, much of what is "right" might offer incomplete evidence, or still has plenty of room for improvement. A perfect paper probably does not exist, and so I see papers that subject already-published results to new tests as being helpful, and quite worthy of publication in top field journals. (This is almost certainly a minority opinion...)

One thing I probably overlooked is simply that the literature on RERs and trade isn't that great. I'm still contemplating doing a paper on this, although I'm sure I'd get a lot of similar pushback. ("This result is not new.") The big issue in this literature is identification. In theory, the correlation between the RER and trade could go in either direction. That's because something like the Asian Financial Crisis could cripple the industrial (tradable) sector and also lead to a severe currency depreciation. Thus, the raw correlation between trade and the RER could be negative, positive, or zero in the aggregate. I haven't seen anyone really try to solve the identification problem here. (Well, OK, there was an economic history paper that looked at countries which pegged to gold vs. silver, and at what happened when the relative price of gold to silver changed, but what it seemed to me to miss was that this also had implications for monetary policy, and so wasn't quite exogenous.)
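A toy simulation of this sign-flip problem (all parameters invented for illustration): even when a real depreciation truly boosts exports, a crisis shock that both depreciates the currency and cripples the tradable sector can make the raw correlation come out negative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Structural model (made-up parameters): exports truly RISE with a real
# depreciation (beta > 0), but a crisis shock both depreciates the currency
# and cripples the tradable sector.
beta = 0.5
crisis = rng.normal(0.0, 1.0, n)
depreciation = crisis + rng.normal(0.0, 1.0, n)          # partly crisis-driven
exports = beta * depreciation - 2.0 * crisis + rng.normal(0.0, 1.0, n)

raw_corr = np.corrcoef(depreciation, exports)[0, 1]
print(f"true causal effect of depreciation on exports: +{beta}")
print(f"raw correlation in the data: {raw_corr:.2f}")    # comes out negative
```

The raw correlation has the wrong sign precisely because the crisis shock moves both variables at once, which is why some identification strategy is needed before interpreting RER-trade correlations causally.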

In any case, the end conclusion is that this is very much a literature in need of improvement.


Tuesday, April 4, 2017

Some International Data on Industrial Robots

Via Adam Tooze on Twitter.  The picture quality is poor, but here it is below. The US in fact has fewer robots than Germany and Japan, and yet manufacturing employment has fallen by more.




Sunday, April 2, 2017

The Gold Standard, the Great Depression, and the Brilliance of NES Students...

Tomorrow's lecture is, in part, covering a homework on the Great Depression in which the students were asked to redo some of a classic Bernanke and James article on the Great Depression. It's a great article, although the empirics -- like all empirics from the early 1990s -- are quite dated. This actually makes it good to assign, in my view, as it makes it relatively easy for students to suggest improvements. The first time I assigned this, I had a student (A.A.) who wasn't quite happy with the limited information contained in a regression of Industrial Production on gold standard status. Thus, she created this. It makes the pattern clear, aided by the wonders of conditional formatting. There are two things this makes obvious: (1) the later you abandoned the gold standard, the worse you did. And (2) at first, if anything, the gold dead-enders were doing slightly better (or as well), but the longer they held on, the worse they did. It blew my mind. An absolutely brilliant table for an undergraduate to turn in for a two-page essay. Go back and have a look at the Bernanke/James tables and compare. There's no comparison. I have a feeling she, and many others I've had the good fortune to teach, will go on to do great things.


Thursday, March 30, 2017

Staying Rich without Manufacturing will be Hard

This title was written by Noah Smith in a recent Bloomberg column. He makes a lot of good points with which readers of this blog, focused as it is on manufacturing, are already familiar. Manufacturing output isn't really booming. I would stress that, although trade is a big reason why manufacturing isn't so healthy, the number one thing the government could do to promote manufacturing is adopt looser monetary policy. This would help by growing the economy (manufacturing production and employment are strongly pro-cyclical), and also by weakening the dollar. That's a much better way of improving the trade balance than bilateral trade negotiations or protectionism.

In any case, it's worth a read.

Raise Rates to Raise Inflation

That's what Stephen Williamson is still arguing.

"recent data is consistent with the view that persistently low nominal interest rates do not increase inflation - this just makes inflation low. If a central bank is persistently undershooting its inflation target, the solution - the neo-Fisherian solution - is to raise the nominal interest rate target. Undergraduate IS/LM/Phillips curve analysis may tell you that central banks increase inflation by reducing the nominal interest rate target, but that's inconsistent with the implications of essentially all modern mainstream macroeconomic models, and with recent experience."

Wow. This is fantastic (fantastical?) stuff.

A couple of questions for Neo-Fisherians. Both central banks and market participants believe that higher interest rates slow growth and inflation. If, in fact, higher interest rates raise growth and inflation, then why, in a booming economy, don't interest rate hikes and contractions in the money supply lead to hyperinflation? After all, central banks respond to high inflation by raising rates, which they believe will lower inflation. If, instead, inflation goes higher, they will respond again with even tighter money.
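To make the feedback-loop argument concrete, here's a deliberately cartoonish simulation (made-up parameters, not a serious model): give the central bank a conventional reaction function -- hike when inflation is above target -- but let hikes raise inflation, as the Neo-Fisherian sign implies, and inflation explodes rather than converging.

```python
# Cartoon Neo-Fisherian feedback loop (all parameters made up).
# Conventional reaction function: hike when inflation is above target.
# "Wrong" (Neo-Fisherian) sign: hikes RAISE inflation.
inflation, target = 3.0, 2.0
path = []
for _ in range(10):
    hike = 1.5 * (inflation - target)   # central bank tightens above target
    inflation += 0.5 * hike             # tightening raises inflation (assumed sign)
    path.append(round(inflation, 1))
print(path)  # inflation spirals away from target instead of converging
```

With the conventional sign (subtract instead of add), the same loop converges back to target; the explosive path only appears once you flip the sign of the policy effect.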

Also, interest rate cuts are demonstrated to cause currencies to weaken. We know this from, among other things, announcement effects using high-frequency identification. The evidence that real exchange rate movements impact the tradables sector is very strong, and relies merely on the theory that relative prices matter, the central teaching of economics. So if prices don't matter, is all of economics wrong?

We also know that market participants and the entire financial sector believe that low interest rates are good for the economy and profits, and that tight money is bad. We know this from market reactions to monetary policy. A series of interest rate hikes would raise borrowing costs considerably, cause the dollar to appreciate (which affects inflation directly), cause the stock market to fall, and generally cause financial conditions to tighten. Thus, once again, to buy the Neo-Fisherian story, you have to believe that prices don't matter. In addition, you have to believe that markets aren't remotely efficient. Thus, you have to give up the central tenets of mainstream economics. Since "the market" believes monetary policy has the conventional sign, Stephen Williamson could make a lot of money by starting a hedge fund and betting on the "wrong" sign.

Hey, it's worth pointing out that people can disagree about monetary policy. Williamson (2012) says QE is hyperinflationary, while, on the other hand, Williamson (2013) says QE is deflationary. On the third hand, Williamson (2013; blog post) says money is just plain neutral. What's interesting here is that these are all the same Williamsons.

Thus it's worth peering into his intellectual journey.  First, after QE, despite high unemployment and a weak economy, he repeatedly predicted that inflation would rise. When it didn't happen, he changed his mind, which is what one should do. Only, he couldn't concede that standard Keynesian liquidity trap analysis was largely correct. That would be equivalent to surrendering his army to the evil of evils, Paul Krugman. Much easier to venture into the wilderness, and instead conclude, not that inflation wasn't rising despite low interest rates because the economy was still depressed, and banks were just sitting on newly printed cash, but rather that inflation was low because interest rates were low!

Fortunately, not all of Macro went in this direction, as Larry Christiano, a mainstream economist, discusses the Keynesian Revival due to the Great Recession.



Tuesday, March 28, 2017

Is Academia Biased Against Stars?

David Card and Stefano DellaVigna recently came out with a working paper which says yes.

The (abbreviated) abstract:
We study editorial decision-making using anonymized submission data for four leading
economics journals... We match papers to the publication records of authors at the time of submission and to subsequent Google Scholar citations.... Empirically, we find that referee recommendations are strong predictors of citations, and that editors follow the recommendations quite closely. Holding constant the referees' evaluations, however, papers by highly-published authors get more citations, suggesting that referees impose a higher bar for these authors, or that prolific authors are over-cited. Editors only partially offset the referees' opinions, effectively discounting the citations of more prolific authors in their revise and resubmit decisions by up to 80%. To disentangle the two explanations for this discounting, we conduct a survey of specialists, asking them for their preferred relative citation counts for matched pairs of papers. The responses show no indication that prolific authors are over-cited and thus suggest that referees and editors seek to support less prolific authors.
The interpretation of course hinges on whether you think more famous authors are likely to get cited more conditional on quality. To some extent, this must be true. You can't cite a paper you don't know about, and you're almost certainly more likely to know about a famous author's paper. This is part of the reason I started blogging -- as a marketing tool.

In any case, here's a second recent paper courtesy Andrew Gelman. "Daniele Fanelli and colleagues examined more than 3,000 meta-analyses covering 22 scientific disciplines for multiple commonly discussed bias patterns. Studies reporting large effect sizes were associated with large standard errors and large numbers of citations to the study, and were more likely to be published in peer-reviewed journals than studies reporting small effect sizes... Large effect sizes were ...  associated with having ... authors who had at least one paper retracted." So, studies with large effects are cited more and more likely to be retracted. In general, to get published in a top journal, you almost have to make a bold claim. But of course bold claims are also more likely to be wrong or inflated, and also cited more. 

My prior on the situation is that any time "quality" is difficult to judge, it is only natural and rational to look for signals of quality, even if blunt. I would rather be operated on by a surgeon with a degree from Harvard, drink a pricey bottle of wine if someone else is paying, and choose to eat in a restaurant which is full. I'd think twice before putting my money in a bank I've never heard of, or flying on an unknown airline. These are all perfectly rational biases. The more difficult it is to infer quality, the more people are going to rely on signals. An example of this is linemen and linebackers in the NFL -- since stats for these players are hard to come by, all-star linemen tend to be those drafted in the first or second round. This is much less so for the skill positions, where performance metrics are easier to come by. In the case of academia, I believe it can be enormously difficult to judge the relative quality of research. How do you compare papers on different topics using different methodologies? Add to this that top people might referee 50 papers a year. Neither they nor the editors have time to read papers in great detail. And referees frequently disagree with each other. I was recently told by an editor at one of the journals in the Card study that in several years of editing she had never seen a paper in which all the referees unanimously recommended accept. If referees always agreed, that should happen in about 5% of cases -- roughly the acceptance rate. (Unfortunately, the Card/DellaVigna paper doesn't provide information on the joint distribution of referee responses. This is too bad, because one of the big gaps between R&R decisions and citations they find is that papers in which one referee recommends reject and the other straight accept are usually not given an R&R, and yet are well-cited. The other thing they don't look at is institutional affiliation, but I digress...) This all speaks to the idea that judging research is extraordinarily difficult.
If so, then citations and referee and editorial decisions are likely to rely, at least in part, on relatively blunt signals of quality such as past publication record and institutional affiliation. It would be irrational not to.

So, why did the authors go the other way? I didn't find their use of the survey convincing. I suspect it had to do with personal experiences. The average acceptance rate at the QJE is 4%. That means top people like David Card get good papers (and I believe many of his papers are, in fact, very good) rejected at top journals all the time, despite high citations. He has a keen analytical mind, so it's reasonable to conclude based on personal experience that the rejections despite great citation counts are the result of some kind of bias, and perhaps they are. I once had a paper rejected 9 times. Of course, I don't believe it could possibly be the fault of the author. Much easier to believe the world is conspiring against me. Of course, I'm joking here, but the very low acceptance rates at top journals, combined with some atrocious accepted papers, probably feed this perception by everyone. Personally, I'd still prefer to be an NBER member and in a top 10 department with a weekly seminar series in my field (all of which would increase citations) and a large research budget, but hey, that's just me.

Update: I was interested in how correlated referee recommendations are, so I followed the reference in the Card paper to Welch (2014). Here is the key table. Conditional on one referee recommending "Must Accept", the second referee has a 7.7% chance of also recommending "Must Accept", vs. a 3.8% unconditional probability. If recommendations were perfectly correlated, that conditional probability would be 100%. Even with one "Must Accept", chances are still better than even that the second recommendation will be reject or neutral. So referee recommendations are much closer to completely random than to agreement. The lesson I draw from this is not to read too much into a single rejection, and thus to wait before moving down in the ranks. In a way, this could bolster the arguments of Card and DellaVigna: referee reports are so random, how can citations be a worse measure of quality? But I think this ignores the fact that referees will spend more time with a paper than potential citers will. Even if all authors read the papers they cite, how meticulously do they read the papers they don't cite? I think the difficulty of assessing paper quality once again implies that signals should play an important role, not to mention personal connections.
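Plugging in the Welch (2014) numbers cited above (a back-of-the-envelope calculation, nothing more):

```python
# Numbers from Welch (2014), as cited in the post
p_uncond = 0.038   # unconditional P(a referee recommends "Must Accept")
p_cond = 0.077     # P(second referee says "Must Accept" | the first one did)

# Benchmarks: under independence the conditional probability would equal the
# unconditional one; under perfect agreement it would be 1.0
lift = p_cond / p_uncond
print(f"lift over pure independence: {lift:.1f}x "
      f"(perfect agreement would be {1.0 / p_uncond:.0f}x)")

# Implied probability that BOTH referees say "Must Accept"
p_both = p_uncond * p_cond
print(f"P(both say Must Accept): {p_both:.4f}")
```

So referee opinions are correlated (about twice the independence benchmark) but sit vastly closer to independence than to the 26x lift that unanimity would imply.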



Sunday, March 26, 2017

Corporations in the Age of Inequality: A Skeptic's Take

Nick Bloom contributes an article to the Harvard Business Review: "Corporations in the Age of Inequality". It's an interesting read, based on his "Firming Up Inequality" paper, which I also recommend. The story of inequality they tell is essentially technology based (IT and outsourcing), as they find that rising inequality is almost entirely driven by changes in between-firm inequality. They deserve credit for presenting an interesting set of facts.

However, while intriguing, I'm not yet totally convinced this is the key to understanding inequality. Macromon also had an excellent discussion of this research a while back. The key question she raised is about the definition of a firm -- the authors use tax ID numbers, but the average number of tax IDs per corporation listed on the NYSE is 3.2. If it's just a story about a growing number of tax IDs per corporation, then it becomes much less interesting. A related problem is outsourcing: if a firm used to employ a janitor but then outsources the job to a different company, within-firm inequality could fall or stay flat even as the wage gap between the janitor and others working in the same building widens. Thus, their decomposition doesn't necessarily teach us much about why inequality is increasing, and is perfectly consistent with the institutions story I've told in a previous blog post (or many other stories one might dream up).

However, I did notice that the institutions story went unmentioned in their paper and in Bloom's article. As I wrote before, the insights of Piketty, Saez, and coauthors seem to have had surprisingly little influence, particularly since their explanation seems perfectly adequate and has a lot of explanatory power. (Actually, in his book, Piketty also barely mentions the institutions story, which is a major theme of some of his joint papers with Saez and coauthors...) This is where the lack of comment papers in economics is frustrating. Many top people apparently don't buy the institutions story. Why not? Perhaps they have good reasons to be skeptical. I'd like to know what they are. (I'd certainly like to know what Nick Bloom thinks...) The evidence looks quite compelling to me.

I took issue with this comment: "Since 1980, income inequality has risen sharply in most developed economies". As my blog readers know, income inequality has not risen dramatically in Germany, France, Japan, or Sweden according to Alvaredo et al. Thus, this comment threw me: "This means that the rising gap in pay between firms accounts for the large majority of the increase in income inequality in the United States. It also accounts for at least a substantial part in other countries, as research conducted in the UK, Germany, and Sweden demonstrates." Right, but the increases in inequality in Germany and Sweden have been quite minor relative to the US, and are also associated with changes in top marginal tax rates. So what I'm hearing is that between-firm inequality isn't actually explaining much.


Bloom attributes rising inequality to "the rise of outsourcing, the adoption of IT, and the cumulative effects of winner-take-most competition". I agree that outsourcing could be driving the rise of measured between-firm inequality, even while having scant impact on overall inequality -- particularly given that "One study found that from 1980 to 2008, the share of workers in Germany employed by temp agencies or cleaning, logistics, or security firms more than tripled, from roughly 2% to 7%." My question about the adoption of IT is going to sound familiar: don't the Germans, Swedes, and Japanese also have IT? And why did winner-take-most competition only happen in Reagan-Thatcher countries after 1980? I would point instead to the savage cuts in top marginal tax rates, as well as to other institutional changes (other tax changes, the declining importance of the minimum wage, and the declining influence of labor).

In any case, it's still a nice article and I found myself mostly agreeing with his policy recommendations. I particularly liked his focus on the role that luck plays in determining careers and income. If you take two people with the same skills, and put one at Google in 1999 and another at, say, General Motors, the one lucky enough to have chosen Google would have gotten wildly rich. This is another argument for higher top marginal tax rates, which I support. 



Friday, March 24, 2017

The Saga of Currency Unions and Trade

One of the first full papers I wrote was on currency unions and trade. I was taking Alan Taylor's field course at UC Davis, which was essentially an Open-Economy Macro History course, and the famous Glick/Rose finding that currency unions double trade was on the syllabus. Not to be outdone, Robert Barro and coauthors then found that currency unions increase trade on a 7-fold and 14-fold basis! This raised the prospect that Frenchmen might suddenly go out and buy a dozen or more Volkswagens instead of settling for just one after the adoption of the Euro. Another paper, published in the QJE, found that currency unions even raise growth, via trade. In that case, one can only imagine what the Greek economy would look like if it hadn't joined the Euro. No result, it seems, can be too fanciful.

Always a doubting Thomas, I was instantly skeptical. For one, a doubling of trade implies an impact more than 10 times larger than what has been measured for the Smoot-Hawley tariff, and we already knew the effect didn't operate via the impact of exchange rate volatility on trade, thanks to an excellent paper by Klein and Shambaugh. For two, countries don't enter into (or exit) currency unions randomly. Obvious omitted variables alert! The only question was which ones.

To his credit, Andrew Rose always provides his data online. This also helps citations, as the cost of extending or critiquing his papers is small. So, I fired up Stata, and after a solid 30 minutes, I discovered part of what was driving the seemingly magical effect. Roughly one-quarter of the CU changes involved countries with former colonial relationships, many of them between the UK and its former colonies. It turns out that the impact of the "former colony" dummy in the gravity equation has been decaying slowly over time. This led to a more interesting insight -- that history matters for trade. In any case, many colonies had currency unions with the UK until the late 1960s/early 1970s. Thus, given the decay of trade for all former colonies, if you average trade before and after 1970, of course you find relatively less trade after. However, if you add a simple trend for trade between the UK and all its former colonies -- a rather mild control -- the magical "currency union" effect on trade disappears completely. Indeed, if you have two variables that exhibit trends, and you regress one on the other, you're almost guaranteed to find a significant correlation even if no relationship exists. I've since discovered that this is by far the most ubiquitous problem in empirical international trade, to the extent that I've been to conference sessions in which every paper regressed trending variables in levels on other trending variables. It doesn't necessarily mean they were all wrong, but showing robustness to trends is a best practice, and plotting pre- and post-treatment trends can also help. The more relevant research question in this case is "How much does a currency union increase trade relative to trend?" -- particularly when there are already strong upward or downward pre-trends.
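To see just how bad the trending-variables problem can get, here is a minimal simulation sketch (the magnitudes are my own illustrative choices, not the Glick-Rose data): two series that grow for entirely unrelated reasons produce a huge t-statistic in levels, which vanishes once both are detrended, which is equivalent to adding a simple time trend control.

```python
import math
import random

random.seed(0)

# Two series that trend for unrelated reasons: no causal link,
# only a shared passage of time (illustrative magnitudes).
T = 200
x = [0.10 * t + random.gauss(0, 1) for t in range(T)]
y = [0.05 * t + random.gauss(0, 1) for t in range(T)]

def ols_slope_tstat(x, y):
    """Bivariate OLS slope of y on x and its t-statistic."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    resid = [yi - a - b * xi for xi, yi in zip(x, y)]
    s2 = sum(e ** 2 for e in resid) / (n - 2)
    return b, b / math.sqrt(s2 / sxx)

b, t = ols_slope_tstat(x, y)
print(f"levels: slope = {b:.2f}, t-stat = {t:.1f}")  # wildly "significant"

def detrend(z):
    """Residuals from regressing a series on a linear time trend."""
    tt = list(range(len(z)))
    slope, _ = ols_slope_tstat(tt, z)
    mz, mt = sum(z) / len(z), sum(tt) / len(tt)
    return [zi - (mz + slope * (ti - mt)) for ti, zi in zip(tt, z)]

b2, t2 = ols_slope_tstat(detrend(x), detrend(y))
print(f"detrended: slope = {b2:.2f}, t-stat = {t2:.1f}")
```

In levels, the regression finds a strongly "significant" slope despite there being no relationship at all; after detrending, the t-statistic collapses to noise. That is the whole colonial-trade story in 30 lines.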

In any case -- back to currency unions. For country pairs other than colonies, there were other problems. India and Pakistan dissolved their currency union in 1964, after which trade declined. However, there's an obvious problem here: this occurred because of a brutal border war, and the two countries hate each other to this day. A number of other examples were similar. You probably know less about what happened between Madagascar and Comoros in 1975. They dissolved a currency union, and trade indeed collapsed. But two minutes of googling told me that, in the same year, there was an ethnic cleansing episode in Mahajanga, targeted at "island" peoples. (Yes, Madagascar is also an island, but this was apparently targeted at people from smaller islands, including the Comorians...) Same thing with the "Liberation War" that resulted in the overthrow of Ugandan dictator Idi Amin -- and also resulted in the tragic end of a currency union, depending on your view. And on and on the examples went. Wars, ethnic cleansing episodes, financial crises, currency crises, banking crises, communist takeovers. These are what cause currency unions to end. And if you take out the most obvious cases in which geopolitical events overshadow the change in CU status, there's not much left, and what is left doesn't suggest that CUs have any effect on trade. And this is before one even blows up the error terms by adopting realistic assumptions about spatial correlation.

In any case, in 2015, I deleted this paper and my regression and code from my webpage, and assigned the paper to my brilliant undergraduate students. They alerted me to the fact that Glick and Rose had recently written a mea culpa, where the authors declared that they could no longer have confidence in the results. I wasn't cited. Indeed, they probably hadn't seen my paper. Despite this, I was worried about having pissed them off in the first place, and I also wanted to get some love in the final version, so I wrote them an email and thanked them for sharing their data online, congratulating them for having the intellectual integrity to admit that they were wrong (even if their new paper still suffered from many of the same issues, such as not controlling for trends). 

This sounds like it should be the end of the story, correct? Particularly since the early evidence on the Euro and trade was not supportive. However, Glick and Rose changed their minds, and decided instead to double down on a positive, measurable impact of currency unions on trade. This time, they concluded that the Euro has increased trade by a smaller, but still magical, 50%. And I did get some love, although not in the form of a citation -- instead they thanked me in the acknowledgments! I'm still quite happy to have been acknowledged, even though, admittedly, I hadn't commented on the substance of their new paper.

In any case, Jeff Frankel at Harvard apparently used to assign his Ph.D. students a "search-and-destroy" mission on the original CU effect. Thus, thanks to the fact that Andrew Rose still provides his data online -- for which I'm grateful -- I've just assigned my students a similar mission on the new EMU result. We'll see if they can overturn anything that the good referees at the European Economic Review may have overlooked. If I were a gambling man, and I am, I would put my money on my sharp undergraduates at the New Economic School over the academic publication process. If I were Croatia, or Greece, contemplating the relative merits of joining/staying in the Euro, I would write off the academic literature on this topic completely.






Monday, March 20, 2017

Robots and Inequality: A Skeptic's Take

Paul Krugman presents "Robot Geometry" based on Ryan Avent's "Productivity Paradox". It's more-or-less the skill-biased technological change hypothesis, repackaged. Technology makes workers more productive, which reduces demand for workers, as their effective supply increases. Workers still need to work, with a bad safety net, so they end up moving to low-productivity sectors with lower wages. Meanwhile, the low wages in these sectors makes it inefficient to invest in new technology.

My question: Are Reagan-Thatcher countries the only ones with robots? My image, perhaps wrong, is that plenty of robots operate in Japan and Germany too, and both countries are roughly as technologically advanced as the US. But Japan and Germany haven't seen the same increase in inequality as the US and other Anglo countries since 1980 (graphs below). What can explain the dramatic differences in inequality across countries? Fairly blunt changes in labor market institutions, that's what. This is documented in the Temin/Levy "Treaty of Detroit" paper and in the oddly ignored series of papers by Piketty, Saez, and coauthors, which argue that changes in top marginal tax rates can largely explain the evolution of the top 1% share of income across countries. (Actually, it goes back further -- people who work in public economics have "always" known that pre-tax income is sensitive to tax rates...) They also show that the story of inequality is really a story of incomes at the very top -- changes in other parts of the income distribution are far less dramatic. This evidence is not suggestive of a story in which inequality is about the returns to skills, or computer usage, or the rise of trade with China.

With Lester Lusher, I've also waded into this debate (here and here). We set out to show a variety of evidence all pointing to the conclusion that trade shocks have not caused a rise in inequality. First, we looked at the two largest trade shocks in the US: the 1980s dollar bubble, and the late-1990s appreciation plus the rise of China. Perhaps surprisingly, neither disproportionately hurt the wages of low-wage workers in exposed sectors. Sectors more exposed to the shocks also did not experience increases in inequality. We also found no association between capital upgrading by sector and inequality, or between TFP growth and inequality. Then we looked at the cross-country evidence, and found that trade with China, trade deficits, trade levels, and changes in trade are not even correlated with top income shares. As a last bit, we tested the Piketty, Saez, and Stantcheva results out-of-sample. Although we found that a slightly different functional form and a dynamic model worked much better, their results hold up out-of-sample. Interestingly, even though this was just a small part of our paper, referees protested that the result was not new. They apparently see no value in subjecting previously published work to additional testing, even if it is seminal work on a major policy topic that is controversial enough that many top people (Krugman, Autor, Acemoglu, etc.) seem not to believe it. And this journal was second tier!

In any case, there is a long history of constructing mathematical models to show how international trade or skill-biased technological change can influence the wage distribution in theory. However, it's not clear to me this literature has been very successful empirically, in the end. It seems to me that any theory meant to apply to all countries will be a theory that doesn't apply to any country. And somehow it always seems that the contributors to this literature are not aware that there is already a perfectly good explanation for rising inequality that has explanatory power internationally (and out-of-sample).

Lastly, why are wages growing slowly? Well, let's not forget that the Fed has raised interest rates 3 times already since the Taper, despite below-target inflation and slow wage growth. The labor market has been bad since 2007 (or, in fact, since 2000 -- see my longer explanation here). Wages tend to grow slowly in bad labor markets, and tight money will keep them from growing quickly. There's no need here to draw parallelograms.



Sunday, March 19, 2017

The Myth and Reality of Manufacturing in America

That's the title of a small pamphlet on manufacturing by Michael J. Hicks and Srikant Devaraj which argues that "Almost 88 percent of job losses in manufacturing in recent years can be attributable to productivity growth..."

However, the accounting exercise they conduct to get that number contains a conceptual flaw. Namely, they start at 2000, and ignore the role of productivity growth before that. Productivity growth, if anything, fell after 2000. So, if one does a counterfactual job simulation from 1987 to 2000, one could say that without productivity growth, employment in the manufacturing sector would have increased 60%. In their exercise from 2000, they claim that, absent productivity growth, manufacturing employment would have declined just 12%. But the big question here is why did the "no productivity growth" counterfactual suddenly stop growing after 2000? What explains the difference in the trend? This is something they can't answer, since they decide not to look before 2000.
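The mechanics of such a counterfactual are simple enough to sketch. Employment is output divided by productivity, and the counterfactual freezes productivity at its base-year level -- so the answer depends entirely on which base year you pick. The index numbers below are invented purely for illustration, not the Hicks-Devaraj data.

```python
# Counterfactual: what would employment be if labor productivity had
# stayed at its base-year level? (All index numbers are made up.)
def counterfactual_employment(output_index, productivity_index):
    """Employment = output / productivity; the counterfactual
    freezes productivity at its first-period value."""
    actual = [o / p for o, p in zip(output_index, productivity_index)]
    frozen = [o / productivity_index[0] for o in output_index]
    return actual, frozen

# Stylized period: output up 40%, productivity up 60%.
output = [100, 110, 125, 140]
productivity = [100, 115, 135, 160]
actual, cf = counterfactual_employment(output, productivity)
print(f"actual employment change: {actual[-1] / actual[0] - 1:+.1%}")
print(f"counterfactual (frozen productivity): {cf[-1] / cf[0] - 1:+.1%}")
```

With these made-up numbers, actual employment falls 12.5% while the frozen-productivity counterfactual rises 40% -- and shifting the base year shifts which productivity growth gets "blamed", which is exactly the problem with starting the clock at 2000.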

Yet, somehow this pamphlet keeps getting cited by major news outlets. 

An indication of the level of care in this article is that they cite "Acemoglu" as "Ocemoglu". Someone corrected this in the References, but left the citation in alphabetical order where the O should be. 

There is a huge hole in Trump's promise to bring back US manufacturing jobs

That's the title of an article from Business Insider.

There is much I like about these articles. This one is better than most, as it at least plots data. However, I don't quite understand why they focus on robots and productivity gains, rather than the slowdown in the growth of real manufacturing output. Real manufacturing output from 2007 to 2017 has barely increased at all. This is for sure a major factor in the decline of manufacturing employment. The article focuses on "manufacturing output hitting record highs in recent years, even as manufacturing employment continues its steady decline." Sorry, but a 1 or 2% increase in manufacturing output over a 10 year period is not a cause for celebration, even if we are now reaching "record highs". One could alternatively frame this performance as approaching "record lows" in terms of the 10 year growth rate of real output. 

The other problem with the argument that booming productivity and robots have caused the decline in US manufacturing employment, aside from the fact that productivity growth hasn't increased, is that the period since 2000 has also seen the US share of manufacturing exports fall dramatically. Why would booming productivity lead to fewer exports and rising imports? Inquiring minds would like to know.

Yet another problem is that the composition of productivity growth has changed. Productivity for the median sector declined from 2000 to 2010, while the computer sector saw very rapid productivity growth amidst declining sales.

Friday, March 17, 2017

Links to Two Short Essays on Monetary Policy, One Good, One Bad

First, new FOMC member Neel Kashkari tells us why he dissented on this week's interest rate hike. It's a great essay. I laughed at "over the past five years, 100 percent of the medium-term inflation forecasts (midpoints) in the FOMC’s Summary of Economic Projections have been too high: We keep predicting that inflation is around the corner." That's some 20 forecasts, all missed in the same direction. Impressive.


This is exactly the problem with monetary policy in the US, Europe, and Japan. Central banks keep predicting that inflation will return to target. When it doesn't, however, they don't respond by loosening policy, or by updating their inflation forecasts. They just continue being predictably wrong. He makes a number of good points, including that the Fed shouldn't treat its target as though the risks are asymmetric, like they are driving near a cliff. They should occasionally miss by going over 2%. If anything, the worry should be that if the US were hit by another negative shock, the Fed would have more trouble managing the economy at the zero lower bound. The asymmetric risk is an economy growing too slowly. What he doesn't mention is that this logic should imply a higher inflation target -- but I'll give him a pass on this. Yet another good point he makes is that the rest of the developed world is currently suffering from very low inflation. How likely is it that the US will soon suffer from an inflation spiral that can't be controlled by repeated interest rate increases, when other developed countries have inflation close to zero? Core inflation in the US is just 1.74%, he notes, up from 1.71% last year. We appear to be some ways away from an inflation spiral.

Another thing he doesn't discuss is that the Fed has not only repeatedly missed its inflation forecasts in the past few years, but has also under-performed its own GDP growth forecasts. You'd think that if you wanted to drive down the center of the road, and you noticed your car heading off onto the shoulder, you'd adjust the steering wheel back toward the center. The Fed has instead repeatedly declined to do this, and then been surprised that the car doesn't return to the center of the road by itself. Many academics then argue that maybe it's because the steering wheel doesn't work. I say the problem isn't the steering wheel, but that the driver refuses to adjust.


Thus I want to flag another short article, on Japan, which argues exactly this. Sayuri Shirai writes that Japan's non-standard monetary policies since it reached the zero lower bound have not been effective, and argues that Japan should thus raise interest rates. One tell that the author is overselling the ineffectiveness of MP at the ZLB is that although she credits Japan's MP easing with weakening the Yen after 2013, she writes "the yen’s depreciation has reduced the prices of exports but increased those of imports so that the net positive impact has not been as large as in the more distant past." How could a Yen devaluation simultaneously reduce the price of exports and increase import prices? Presumably, many of Japan's exports to the US, for example, are actually priced in dollars. In Yen terms, these prices are thus likely to rise substantially with a fall in the value of the Yen. The article doesn't back up these claims.


The author also doesn't back up other claims, for instance that the negative interest rate policy supposedly caused a rash of bad effects, such as "a deterioration in households’ sentiments" and "potential financial instability risk". If either of these were the case, it could easily be documented using high-frequency data from surprise announcements. But that isn't what people who study the impact of MP announcements at the ZLB typically find. Interestingly, although much of the article is about negative interest rates, the author never mentions that Japanese rates are just -.1%. Color me skeptical that a change of just .1% is really going to produce the dramatic negative effects proposed in the article. Keep in mind that before the ZLB episode, the BOJ once raised interest rates 3.5% in one year to fight inflation, and in the 1970s, it cut interest rates by as much as 6% in a loosening cycle over 2-3 years to fight a recession. That's 60 times more medicine than a move from 0 to -.1%. To provide a similar amount of ammunition now, an enormous balance sheet expansion would of course be in order. It's clearly foolish to conclude that, since things haven't gone better since Japan cut interest rates from 0 to -.1%, Japan should now raise rates.

Aside from the fact that announcements have tended to move things in the right direction for Japan, the author also misses that, many times, even as new data came in below the BOJ's forecasts, the Bank declined to adopt more stimulative policies, occasionally disappointing markets and tightening monetary conditions. It's hard to watch this and not conclude that the Bank is not totally committed to its growth and inflation targets. A good analogy is your friend who says he wants to lose weight, but refuses to go to the gym more than once a month. Since going to the gym once a month isn't working, why go at all? That's what Shirai is telling us.

Wednesday, March 15, 2017

A Quick Argument Why Not to Raise Interest Rates

I teach today, so this will be quick. Why am I opposed to raising interest rates?

First, I should mention this is a debate we've had before. When the Fed decided to leave its balance sheet unchanged for all of 2009 and 2010 -- while the economy burned -- I thought it was a mistake. When Ben Bernanke raised the discount rate in 2010, I wondered if he had lost his marbles. The result was a 63-seat loss in the House for the Dems and a full decade of Tea-Baggers in Congress (and maybe more, depending on the extent to which gerrymandering affects local House elections, which then affect the next redistricting). Given that the economy continued to grow slowly after that, I think I was justified.

Next, after QE2 at the end of 2010, I believed the Fed should have acted sooner than the end of 2012 for QE3, given the fact that growth was slower than it anticipated.

Then, I thought 2014 was too soon to taper, given that inflation was below target and GDP growth was below trend.

At the end of 2015, I also, not surprisingly, thought it was too soon to tighten. Once again, 2016 was a year of slow GDP growth and below-target inflation, so, in retrospect, I think that was the right call. The last two rate hikes, I believe, are also premature. Unemployment is now super low, and employment gains over the past several months look solid, so it isn't crazy to hike now. But the unemployment rate and current headline inflation are not the only numbers the Fed should be looking at. Core PCE is still pretty stable, below 1.8% at the last reading, and while headline inflation is now close to 2% and will likely move higher if current trends continue, this measure was also close to zero as recently as 2015, a year in which the Fed actually raised interest rates. The Fed shouldn't ignore that inflation has been below its target for most of the period since 2008, that the employment-to-population ratio for prime-aged workers, while trending in the right direction, still has a lot of room to recover, and that RGDP is some 20-25% below its long-run trend. Given that inflation has been below target for so long, why not let inflation run above target for a while to allow GDP to catch up with its previous trend?

And, add to this that our inflation target is too low. Clearly, the Fed has trouble managing the economy at the zero lower bound. A 2% inflation target means that any large shock will push us into a liquidity trap. This seems more-or-less obvious. Why not raise the target to 3%?

The only silver lining here for Democrats is that this hurts Trump and Republicans. But they've been pushing the Fed for tighter policy for years. One wonders if they will soon start to change their tune.




A Policy Proposal for US Manufacturing, and the Economy: Don't Raise the Federal Funds Rate

Doug Irwin shared a nice article in the WSJ which makes the point that trade deficits, by themselves, don't imply that trade is unfair. Here is one part:

The Trump team is correct in believing that sometimes trade imbalances reflect government tampering, though in the capital markets. Messrs. Gagnon and Bergsten found that countries who buy foreign currency to suppress the value of their own tend to have larger trade surpluses. 
In the past Chinese currency intervention and tight controls on capital inflows kept the yuan artificially cheap and its trade surpluses inflated. Brad Setser of the Council on Foreign Relations says Taiwan and South Korea have held down their currencies via currency intervention and by encouraging domestic investors to buy foreign assets and discouraging foreign investors from buying domestic assets.
These actions are legitimate cause for complaint. Yet finding an effective deterrent has eluded previous presidents and it’s not clear what Mr. Trump can do differently. Punishing China for manipulating its currency makes less sense now that China is trying to prop it up, rather than push it down as in the past.
Let me offer a solution that happens to cure other, perhaps more serious, problems with the economy. The Fed should simply not raise interest rates as it is doing; in fact, it should probably be cutting them. If it did, the dollar would fall, which would help manufacturing and the trade balance. It would help the US move back toward its long-run growth path, and help inflation move up to its ostensible target. Unemployment is now low, and while it isn't crazy to think the Fed should raise interest rates, the Fed should transition to a 3% (or 4%) inflation target to make future zero-lower-bound episodes less likely, and it should treat that target symmetrically. This is a simple pro-trade and pro-manufacturing policy that would anger almost no one and would actually be good for the rest of the world economy, as it would spur faster growth. Manufacturing employment would grow on two counts -- one from faster growth, and a second from a weaker dollar. This is the obvious play to help manufacturing, and what I pushed for when I worked in the Obama administration.

The Peso has recovered. Good news for the future of NAFTA...

With the US dollar already a bit overvalued, and the US stock market also looking richly valued by historical measures, I need something to invest in each month. Thus, I decided to have a look at LT government bonds in Mexico, and the value of the peso, to play the carry trade/uncovered interest parity puzzle. Last I'd checked, the Peso had depreciated a lot from its pre-election levels. Turns out, not any more. The peso is roughly back to pre-election levels. Looks like the conventional wisdom now is that nothing really will happen to NAFTA. 




This is great. Ending NAFTA would be a horrible idea, and Mexico has, at most, only a little to do with the decline of American manufacturing. It's also possible that trade with Mexico has helped soften the blow to American manufacturing, as cheap Mexican labor can sometimes be a complement in production for some sectors. This may be why most of the countries near China, with the exception of Japan, are not now in depressions. (In fact, someone needs to write a paper about this...) Below is the NAFTA trade balance ex-oil. You'll notice it's always been roughly balanced. Ending NAFTA would create winners and losers in the US, but you can bet the losers (at least the exporters, not the consumers) would scream louder than the winners would applaud.









Tuesday, March 14, 2017

The Real Exchange Rate and China's Savings

I attended the Forum for Research in Empirical International Trade (FREIT) in Yogyakarta, Indonesia this past week. Marc Muendler and I were debating the source of China's excess savings, over and above its high level of investment. China's high savings rate could have multiple causes (e.g., lack of a social safety net implies a need for precautionary savings, while countries that are growing fast, such as the Asian Tigers, have a clear tendency to save a lot), and various policies could be at play. However, in this post I merely want to lay out how an undervalued real exchange rate, which can be accomplished with a combination of capital controls and foreign reserve accumulation, could increase the rate of savings.

The short story is that an undervalued RER will increase exports and decrease imports, and the resulting trade surplus equals the gap between savings and investment by accounting identity.

First, why does a high savings rate matter? A high savings rate (over investment) means that China will be exporting both capital and goods to the US, and that it won't be importing many goods. This is due to accounting identity: Savings (S) - Investment (I) = Exports (X) - Imports (M).

Imagine that, at first, S = I, which implies that X = M, while the RER is fairly valued. Now imagine that China devalues its RER by 40%. This change in relative prices means that exports will surge while imports will fall. This will also imply that savings will be greater than investment. What is the logic? Let's have a closer look at the accounting identities:

Y = C + I + G + X - M

(Where Y = output, C = consumption, I = Investment, G = Government spending, etc.)

Clearly, if the Yuan is weak, then, on one hand, China will export more, and on the other, it will import less. Thus, output will rise. That's straightforward. Total consumption (of domestically produced goods and of imports) may decrease by a little as imports become more expensive, but this effect is likely to be drowned out by changes in the trade balance. It could decrease by more if China also raises domestic interest rates to counter inflation, something which would cause imports to plunge further. How will savings increase? Savings is income not taxed or spent:

S = (Y - T - C) + (T - G)

Total savings equals private and public savings. Fast Chinese growth will also tend to boost T relative to G, and might also boost Y relative to C. If China undervalues its exchange rate, and also raises its domestic interest rates to contain inflation, C could fall, raising savings. There could also be a cultural explanation as Asian countries tend to have high levels of savings in general, although this doesn't explain why savings is larger than investment, or why savings rates have declined recently in Japan and Taiwan (or why growth pre-dated high savings rates in several of these countries). Trade doesn't show up here, but we can substitute in for Y from the equation above, and get:

S = (( C + I + G + X - M ) - T - C ) + ( T - G )

And now simplify to:

S - I  = X - M

Savings minus investment equals exports minus imports. Exports and imports clearly depend on the real exchange rate, and thus so does the gap of savings over investment. I'm not always the biggest advocate for the use of mathematics in economics, but in some cases it can really help for clarification. The above algebra makes this result seem obvious and intuitive, but your average intelligent guy-in-a-bar wouldn't find a link between the exchange rate and savings intuitive or obvious without looking at the math.
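The identity is easy to verify with a toy numerical example (the numbers below are made up, chosen only so the arithmetic is simple to follow):

```python
# Toy illustration of the national-accounts identity S - I = X - M.
# All figures are hypothetical, in arbitrary units.

def savings_minus_investment(C, I, G, X, M, T):
    """Compute S - I from the component identities and return the gap."""
    Y = C + I + G + X - M           # output identity: Y = C + I + G + X - M
    private_saving = Y - T - C      # income not taxed or consumed
    public_saving = T - G           # taxes minus government spending
    S = private_saving + public_saving
    return S - I

# Fairly valued RER: trade is balanced (X = M), so S - I = 0.
assert savings_minus_investment(C=60, I=25, G=15, X=20, M=20, T=18) == 0

# After a devaluation: exports surge, imports fall, and the savings gap
# equals the new trade surplus, X - M = 26 - 14 = 12.
gap = savings_minus_investment(C=60, I=25, G=15, X=26, M=14, T=18)
print(gap)  # 12
```

Note that nothing about individual thrift appears in the function: the savings gap moves one-for-one with the trade balance purely by construction, which is the point of the algebra above.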

A more controversial assertion is that, because of hysteresis, an undervalued exchange rate yesterday could mean you'll have a larger tradables sector today, and thus more savings. On the other hand, in recent years, with weaker Chinese growth and looser monetary policy, the Chinese government has been trying to prop up the currency, although this doesn't necessarily mean that the Chinese currency is overvalued from the perspective of its tradable sector. But it does mean that this is a topic deserving of further study.

Monday, March 13, 2017

Robots on an Assembly Line...

Mercedes A Class Production Line

You'll notice, not a lot of human workers. When you see this, you might think -- "Aha! It's not foreigners who have taken our jobs, but robots." However, increases in productivity are not a new development for US manufacturing. In fact, labor productivity growth has been slower than usual for the median manufacturing sector since 2000.

Second point: labor-saving technology is *not* what is wrong with the US economy. Faster productivity growth in production means lower inflation, which means more stimulative US monetary policy (or at least it should). That means more GDP growth and more employment elsewhere.

Thus, this video doesn't imply that there isn't still *something* wrong with the US manufacturing sector; it's just that the something is not fast productivity growth. Still, it's somehow fascinating to watch an assembly line.

Tuesday, March 7, 2017

On Perverse Incentives and Replication in Science

Stephen Hsu has a nice blog post on this topic. He writes about this common pattern:

1. Study reports results which reinforce the dominant, politically correct, narrative.
2. Study is widely cited in other academic work, lionized in the popular press, and used to advance real world agendas.
3. Study fails to replicate, but no one (except a few careful and independent thinkers) notices.

#1 is spot-on for economics. Woe be to her who bucks the dominant narrative. In economics, something else happens. Following the study, there are 20 piggy-back papers which test for the same results on other data. The original authors typically get to referee these papers, so if you're a young researcher looking for a publication, look no further. You've just guaranteed yourself the rarest of gifts -- a friendly referee who will likely go to bat for you. Just make sure your results are similar to theirs. If not, you might want to shelve your project, or else try 100 other specifications until you get something that "works". One trick I learned: You can bury a robustness check which overturns the main results deep in the paper, and your referee who is emotionally invested in the benchmark result for sure won't read that far.

Hsu then writes:


"one should be highly skeptical of results in many areas of social science and even biomedical science (see link below). Serious researchers (i.e., those who actually aspire to participate in Science) in fields with low replication rates should (as a demonstration of collective intelligence!) do everything possible to improve the situation. Replication should be considered an important research activity, and should be taken seriously"

That's exactly right. Most researchers in Economics go their entire careers without criticizing anyone else in their field, except as an anonymous referee, where they tend to let out their pent-up aggression. Journals shy away from publishing comment papers, as I found out first-hand. In fact, many, if not a majority, of the papers published in top economics journals are probably wrong, and yet the field soldiers on like a drunken sailor. Often, many people "in the know" realize that many big papers have fatal flaws, but have every incentive not to point this out and create enemies, or to waste their time writing up something which journals don't really want to publish (the editor doesn't want to piss a colleague off either). As a result, many of these false results end up getting taught to generations of students. Indeed, I was taught a number of these flawed papers as both an undergraduate and a grad student.

What can be done?

Well, it would be nice to make replication sexy. I'm currently working on a major replication/robustness project of the AER. In the first stage, we are checking whether results are replicable, using the same data sets and empirical specifications. In the second stage, we plan to think up a collection of robustness checks and out-of-sample tests of papers, and then create an online betting market about which papers will be robust. We plan to let the original authors bet on their own work.

Another long-term project is to make a journal ranking system which gives journals points for publishing comment papers. Adjustments could also be made for other journal policies, such as the extent to which a particular journal leeches off the academic community with high library subscription fees, submission fees, and long response times.

The AEA should also come out with a new journal split between writing review articles (which tend to be highly cited), and comment papers (which tend not to be). In that case, they could do both well and good.

As an individual, you can help the situation by writing a comment paper (maybe light up somebody who isn't in your main field, like I did). You can also help by citing comment papers, and by rewarding comment papers when you edit and serve as a referee. As an editor, do you really care more about your journal's citations than truth? You could also engage in playful teasing of your colleagues who haven't written any comment papers as people who aren't doing their part to make economics a science. (You could also note that it's a form of soft corruption, but I digress...)


Sunday, March 5, 2017

WSJ Article on Trump's Fed Vacancies

Given the importance of monetary policy, I've been waiting for signs about who Trump will appoint to the Fed. The media gave Obama's Fed picks very little attention. Geithner's memoir, which was largely about Obama administration economic policy, didn't even mention Obama's Fed choices; apparently he didn't consider them important. In my view, however, the Fed is crucial in getting the US economy back to normal growth. Will Trump pick hard- or soft-money types? It's anybody's guess. The WSJ has some details, although it leaves out the views on monetary policy of those on the short list.

In any case, here are some key grafs for those who can't get past the WSJ's paywall:

President Donald Trump will be able to recast the Federal Reserve by filling three or more vacancies on its seven-member board of governors, and is leaning toward candidates with banking and financial world experience rather than academic economists.

After his campaign criticism of the central bank’s low-interest-rate policies, many observers speculated he would seek more “hawkish” candidates who would favor higher borrowing costs. But his choices may be driven less by these issues and more by their practical experience, judging from his early picks for other top economic policy posts in the administration—drawn from investment banking, private equity and business—and the pool of early contenders for the Fed jobs.
...

David Nason, an executive at General Electric Co.’s financing arm and former Treasury Department official during the 2008 financial crisis, emerged as a leading candidate ... Other potential candidates the administration has considered include former BB&T Corp. Chief Executive John Allison, and Rep. French Hill (R., Ark.), a former banker just re-elected to his second House term, these people said.
...

Mr. Trump’s views on monetary policy are unclear—he both praised and criticized the Fed’s low-rate policies during the campaign. He also promised to roughly double the pace of annual economic growth to 4%, which would be harder if the central bank boosts interest rates more aggressively to restrain inflation.

“There’s no reason to think Trump would go for one of these really predictable, hawkish academics who’s schooled in balancing inflation,” said Sarah Binder, a political-science professor at George Washington University who has studied the Fed’s relationship with Congress. “He seems to want people, as best we can tell, with a background in finance or a background in business that brings some sort of real world—from Trump’s perspective—appreciation for the good stuff that low rates can do,” she said.

The seven-member Fed board has two vacancies and will have a third when governor Daniel Tarullo steps down in April.