The economics profession is not in disrepute. Macroeconomics is in disrepute. The micro stuff that people like myself and most of us do has contributed tremendously and continues to contribute. Our thoughts have had enormous influence. It just happens that macroeconomics, firstly, has been done terribly and, secondly, in terms of academic macroeconomics, these guys are absolutely useless, most of them.
Ouch. But not too different from lots of other opinions I've heard. "I went to a macro conference recently," a distinguished game theorist confided a couple of years back, sounding guilty about it. "I couldn't believe what these guys were doing." A decision theorist at Michigan once asked me, "What's the oldest model macro guys still use?" I offered the Solow model, but what he was really claiming was that macro, unlike other fields, is driven by fads and fashions rather than, presumably, hard data. Macro folks, meanwhile, often insist rather acerbically that there's actually no difference between their field and the rest of econ. Ed Prescott famously refuses to even use the word "macro", stubbornly insisting on calling his field "aggregate economics".
So who's right? What's the actual distinction between macro and "micro"? The obvious difference is the subject matter - macro is about business cycles and growth. But are the methods used actually any different? The boundary is obviously going to be fuzzy, and any exact hyperplane of demarcation will necessarily be arbitrary, but here are some of what I see as the relevant differences.
1. General Equilibrium vs. Game Theory and Partial Equilibrium
In labor, public, IO, and micro theory, you see a lot of Nash equilibria. In papers about business cycles, you rarely do - it's almost all competitive equilibrium. Karthik Athreya explains this in his book, Big Ideas in Macroeconomics:
Nearly any specification of interactions between individually negligible market participants leads almost inevitably to Walrasian outcomes...The reader will likely find the non-technical review provided in Mas-Colell (1984) very useful. The author refers to the need for large numbers as the negligibility hypothesis[.]
Macro people generally assume that there are too many companies, consumers, etc. in the economy for strategic interactions to matter. Makes sense, right? Macro = big. Of course there are some exceptions, like search-and-matching models of labor markets, where the surplus of a match is usually divided up by Nash bargaining. But overall, Athreya is right.
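To make the Nash bargaining exception concrete, here's a toy sketch. In the generalized Nash solution used in search-and-matching models, the worker's bargaining power just splits the match surplus proportionally. The function name and numbers below are mine, purely for illustration:

```python
# Hypothetical illustration of the Nash bargaining rule in
# search-and-matching models: the match surplus S is split so the
# worker receives a share beta (bargaining power) and the firm 1 - beta.

def nash_bargaining_split(surplus: float, beta: float) -> tuple[float, float]:
    """Split a match surplus between worker and firm.

    beta is the worker's bargaining power in [0, 1]; the generalized
    Nash solution gives the worker beta * surplus and the firm the rest.
    """
    if not 0.0 <= beta <= 1.0:
        raise ValueError("bargaining power must lie in [0, 1]")
    worker_share = beta * surplus
    firm_share = (1.0 - beta) * surplus
    return worker_share, firm_share

# Example: a match surplus of 10 with beta = 0.5 is split evenly.
print(nash_bargaining_split(10.0, 0.5))  # (5.0, 5.0)
```

The point is just that this is a two-player bargaining problem, not a Walrasian market - strategic interaction sneaks back into macro through the wage-setting channel.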
You also rarely see partial equilibrium in macro papers, at least these days. Robert Solow complained about this back in 2009. You do, however, see it somewhat in other fields, like tax and finance (and probably others).
2. Time-Series vs. Cross-Section and Panel
You see time-series methods in a lot of fields, but only in two areas - macro and finance - is it really the core empirical method. Look in a business cycle paper, and you'll see a lot of time-series moments - the covariance of investment and GDP, etc. Chris Sims, one of the leading empirical macroeconomists, won a Nobel mainly for pioneering the use of SVARs in macro. The original RBC model was compared to data (loosely) by comparing its simulated time-series moments side by side with the empirical moments - that technique still pops up in many macro papers, but not elsewhere.
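To see what "comparing simulated moments to empirical moments" looks like in practice, here's a toy version of the exercise. Everything below - the AR(1) process standing in for a model, the coefficients, the shock sizes - is made up for illustration; real calibration exercises use an actual DSGE model and HP-filtered data:

```python
# A toy version of the RBC-style moment-matching exercise: simulate
# series from a "model", compute the same time-series moments you'd
# compute from the data, and put them side by side. All parameters
# here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(rho: float, sigma: float, n: int) -> np.ndarray:
    """Simulate an AR(1) process x_t = rho * x_{t-1} + eps_t."""
    x = np.zeros(n)
    eps = rng.normal(0.0, sigma, n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + eps[t]
    return x

# "Model" output and investment: investment loads on output plus noise,
# so it comes out more volatile than output and positively correlated
# with it - the kind of moments an RBC paper would report.
gdp = simulate_ar1(rho=0.9, sigma=1.0, n=2000)
investment = 3.0 * gdp + rng.normal(0.0, 1.0, 2000)

moments = {
    "sd(gdp)": gdp.std(),
    "sd(inv)/sd(gdp)": investment.std() / gdp.std(),
    "corr(inv, gdp)": np.corrcoef(investment, gdp)[0, 1],
}
for name, value in moments.items():
    print(f"{name}: {value:.2f}")
```

In a real paper, the table would have two columns - model moments and data moments - and the argument is that the model "fits" if the columns look similar.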
Why are time-series methods so central to macro? It's just the nature of the beast. Macro deals with intertemporal responses at the aggregate level, so for a lot of things, you just can't look at cross-sectional variation - everyone is responding to the same big things, all at once. You can't get independent observations in cross section. You can look at cross-country comparisons, but countries' business cycles are often correlated (and good luck with omitted variables, too).
As an illustration, think about empirical papers looking at the effect of the 2009 ARRA stimulus. Nakamura and Steinsson - the best in the business - looked at this question by comparing different states, and seeing how the amount of money a state got from the stimulus affected its economy. They find a large effect - states that got more stimulus money did better, and the causation probably runs in the right direction. Nakamura and Steinsson conclude that the fiscal multiplier is relatively large - about 1.5. But as John Cochrane pointed out, this result might have happened because stimulus represents a redistribution of real resources between states - states that get more money today will not have to pay more taxes tomorrow, to cover the resulting debt (assuming the govt pays back the debt). So Nakamura and Steinsson's conclusion of a large fiscal multiplier is still dependent on a general equilibrium model of intertemporal optimization, which itself can only be validated with...time-series data.
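The shape of the cross-state exercise is easy to sketch. The code below regresses state output growth on relative stimulus spending; all the data is fabricated (I simulate it with an assumed multiplier), and this is nothing like Nakamura and Steinsson's actual instrumental-variables specification - it's just the skeleton of the idea:

```python
# A stylized version of a cross-state stimulus regression: regress
# state output growth on stimulus spending (both relative to the
# national average). The data here is simulated with an assumed
# "true" multiplier, purely to show the shape of the exercise.
import numpy as np

rng = np.random.default_rng(1)
n_states = 50
true_multiplier = 1.5  # assumed for the simulation, not an estimate

# Relative stimulus received and resulting relative growth, with noise.
spending = rng.normal(0.0, 1.0, n_states)
growth = true_multiplier * spending + rng.normal(0.0, 0.5, n_states)

# OLS with an intercept: growth = a + b * spending + error
X = np.column_stack([np.ones(n_states), spending])
coef, *_ = np.linalg.lstsq(X, growth, rcond=None)
print(f"estimated relative multiplier: {coef[1]:.2f}")
```

Cochrane's point, in this notation, is that `coef[1]` identifies a *relative* multiplier across states; mapping it to the aggregate multiplier requires the general-equilibrium model.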
In many "micro" fields, in contrast, you can probably control for aggregate effects, as when people studying the impact of a surge of immigrants on local labor markets use methods like synthetic controls to control for business cycle confounds. Micro stuff gets affected by macro stuff, but a lot of times you can plausibly control for it.
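The synthetic-control idea mentioned above can be sketched in a few lines: pick nonnegative donor weights summing to one so that the weighted donors track the treated unit before treatment, then compare paths afterward. The series below are made up, and with only two donors the weight is a single number, so a grid search suffices (real implementations solve a constrained optimization over many donors and predictors):

```python
# A minimal sketch of synthetic control with two donor units and
# invented pre-treatment data: find the convex combination of donors
# that best matches the treated unit before treatment.
import numpy as np

treated_pre = np.array([1.0, 1.2, 1.4, 1.6])   # treated unit, pre-treatment
donor_a_pre = np.array([0.8, 1.0, 1.2, 1.4])
donor_b_pre = np.array([1.4, 1.6, 1.8, 2.0])

def pre_fit_error(w: float) -> float:
    """Squared pre-treatment gap for weights (w, 1 - w) on donors A and B."""
    synthetic = w * donor_a_pre + (1.0 - w) * donor_b_pre
    return float(np.sum((treated_pre - synthetic) ** 2))

grid = np.linspace(0.0, 1.0, 101)
best_w = grid[np.argmin([pre_fit_error(w) for w in grid])]
print(f"weight on donor A: {best_w:.2f}")
```

The post-treatment gap between the treated unit and this weighted "synthetic" unit is then read as the treatment effect - which is exactly why the choice of donors and predictors matters so much.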
3. Few Natural Experiments, No RCTs
In many "micro" fields, you now see a lot of natural experiments (also called quasi-experiments). This is where you exploit a plausibly exogenous event, like Fidel Castro suddenly deciding to send a ton of refugees to Miami, to identify causality. There are few events that A) have big enough effects to affect business cycles or growth, and B) are plausibly unrelated to any of the other big events going on in the world at the time. That doesn't mean there are none - a big oil discovery, or an earthquake, probably does qualify. But they're very rare.
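The natural-experiment logic usually boils down to a difference-in-differences comparison: a treated unit and a control unit, before and after a plausibly exogenous event. The four numbers below are invented for illustration:

```python
# Difference-in-differences in one line: the change in the treated
# unit minus the change in the control unit. Valid as a causal
# estimate only under the parallel-trends assumption.
def diff_in_diff(treated_pre: float, treated_post: float,
                 control_pre: float, control_post: float) -> float:
    """Change in the treated unit minus the change in the control unit."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Example: both units trend up by 1.0, but the treated unit rises an
# extra 0.5 after the event - DiD attributes that 0.5 to the event.
print(diff_in_diff(10.0, 11.5, 8.0, 9.0))  # 0.5
```

The macro problem is that for aggregate shocks there is usually no untreated control unit - everyone gets hit at once.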
Chris Sims basically made this point in a comment on the "Credibility Revolution" being trumpeted by Angrist and Pischke. The archetypical example of a "natural experiment" used to identify the impact of monetary policy shocks - cited by Angrist and Pischke - is Romer & Romer (1989), which looks at changes in macro variables after Fed announcements. But Sims argues, persuasively, that these "Romer dates" might not be exogenous to other stuff going on in the economy at the time. Hence, using them to identify monetary policy shocks requires a lot of additional assumptions, and thus they are not true natural experiments (though that doesn't mean they're useless!).
Also, in many fields of econ, you now see randomized controlled trials. These are especially popular in development econ and in education policy econ. In macro, doing an RCT is not just prohibitively difficult, but ethically dubious as well.
So there we have three big - but not hard-and-fast - differences between macro and micro methods. Note that they all have to do with macro being "big" in some way - either lots of actors (#1), shocks that affect lots of people (#2), or lots of confounds (#3). As I see it, these differences explain why definitive answers are less common in macro than elsewhere - and why macro is therefore more naturally vulnerable to fads, groupthink, politicization, and the disproportionate influence of people with forceful, aggressive personalities.
Of course, the boundary is blurry, and it might be getting blurrier. I've been hearing about more and more people working on "macro-focused micro," i.e. trying to understand the sources of shocks and frictions instead of simply modeling the response of the economy to those shocks and frictions. The first time I heard that exact phrase was in connection with this paper by Decker et al. on business dynamism. Another example might be the people who try to look at price changes to tell how much sticky prices matter. Another might be studies of differences in labor market outcomes between different types of workers during recessions. I'd say the study of bubbles in finance also qualifies. This kind of thing isn't new, and it will never totally replace the need for "big" macro methods, but hopefully more people will work on this sort of thing now (and hopefully they'll continue to take market share from "yet another DSGE business cycle model" type papers at macro conferences). As "macro-focused micro" becomes more common, things like game theory, partial equilibrium, cross-sectional analysis, natural experiments, and even RCTs may become more common tools in the quest to understand business cycles and growth.
"I've been seeing more and more people working on "macro-focused micro," i.e. trying to understand the sources of shocks and frictions instead of simply modeling the response of the economy to those shocks and frictions."
Share some links? I hear there's a huge literature on this.
Great question. I added a few links. Not sure how to measure how many people are working on this - all literatures are always "huge". My impression from looking at the papers at the macro sessions at AEA over the last few years is that this is still a minority, but getting bigger. But I haven't looked at it systematically.
What about global macro vs. local (US) macro? Money and liquidity are bigger than the US. See e.g. www.helenerey.eu
""yet another DSGE business cycle model" type papers at macro conferences)."

Are DSGE business cycle model papers inadequate, intellectually lazy, or just saturated?
Not sure about intellectually lazy, but probably all inadequate and definitely saturated.
What model is a major alternative to the DSGE model?
Thanks!
Inadequate at best.
Given the giant duck in your accompanying picture, is this a suggestion that macroeconomists are a bunch of quacks?
As always, check the alt-text! :D
100 duck-sized horses vs. 1 horse-sized duck! I take the 100 tiny horses; that giant duck cannot stand for long on only 2 spindly legs.
Really I just don't like ducks, and I want to beat one up. :D
I think the use of synthetic control in macro will continue to grow. Effect of terrorism on growth, effect of EU membership on later joiners, effect of Chavez, etc.
Yeah. But synthetic controls are tough to do with countries, for the same reason all cross-country comparisons are difficult: A) you have to take a stand on the linkages between countries, and B) there are so many potentially important variables that it's hard to know which ones to use to construct the synthetic controls.
But yeah, it's better than normal controls, by far.
Plenty of game theory in macro... Russell Cooper, John Bryant. Business cycles are coordination games.
https://books.google.com/books/about/Coordination_Games.html?id=qJK5hlYxU_IC
Thanks!
I view economists as a continuum of measure 1, so that any topic is always being worked on by an infinite number of economists. But what really matters is the fraction...
Non-helpful answer to post title's question?
"We need to use Macroeconomics to accurately measure the size of Lucas and Prescott's moms."
I find it very difficult to understand why there's so much trouble in forming accurate macro models. The only solution I've found is to believe that economists are misled from day one of their education. They are given a pretty neoclassical basis for all of economics, and from there it is very difficult to get out of that cognitive dungeon.
I find it very easy to understand macroeconomics but I use my own model. It is based in mathematics, but it doesn't claim to be precise; rather it just provides things to look out for. It is easy to look at trends and predict what trouble they might cause.
Perhaps the solution is just too simple to appeal to economists. Perhaps economists just like complex things that make use of their knowledge and education. Perhaps I could upset the entire industry by publishing a proper working model with predictive ability;-)