Economics Rules notes with some approbation the rise of concern among applied economists, and especially labor economists, about causality. It fails, though, to observe that this newfound concentration has been accompanied, as Jeff Biddle and I show (History of Political Economy, forthcoming 2017), by diminished attention to model-building and to the use of models, which Rodrik rightly views as the centerpiece of economic research. He recognizes, however, that the “causation über alles” approach (my term, not Rodrik’s) has made research in labor economics increasingly time- and place-specific. To a greater extent than in model-based research, our findings are likely to be less broadly applicable than those in the areas that Rodrik warns about. Implicit in his views is the notion that the work of labor and applied micro-economists might be more broadly relevant if the concern with causation were couched in economic modeling. If we thought a bit more about the “how” rather than paying attention solely to the “what,” the geographical and temporal applicability of our research might be enhanced...
In the end, the basic idea of the book—that models are our stock in trade—is one that we need to pay more attention to in our research, our teaching, and our public professional personae. Without economic modeling, labor and other applied economists differ little from sociologists who are adept at using STATA.

Oooh, Hamermesh used the s-word! Harsh, man. Harsh.
Anyway, it's easy to dismiss rhetoric like this as old guys defending the value of their own human capital. If you came up in the 80s when an economist's main job was proving Propositions 1 and 2, and now all the kids want to do is diff-in-diff-in-diff, it's understandable that you could feel a bit displaced.
But Hamermesh does make one very good point here. Without a structural model, empirical results are only locally valid. And you don't really know how local "local" is. If you find that raising the minimum wage from $10 to $12 doesn't reduce employment much in Seattle, what does that really tell you about what would happen if you raised it from $10 to $15 in Baltimore?
That's a good reason to want a structural model. With a good one in hand, you can predict the effects of policies far away from the current state of the world.
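To make the local-validity worry concrete, here's a toy numeric sketch. Every number in it is made up for illustration (this is not a real labor-market model): the "true" employment response to the minimum wage is assumed nonlinear, flat at first and then falling steeply, so a slope estimated from a small hike badly overstates employment after a big one.

```python
def true_employment(w):
    """Hypothetical 'true' employment (thousands) at minimum wage w.
    Purely illustrative: flat at low wages, falling steeply at high ones."""
    return 100 - 0.1 * max(0.0, w - 8.0) ** 3

# A quasi-experiment identifies only the local effect of a $10 -> $12 hike:
local_slope = (true_employment(12) - true_employment(10)) / (12 - 10)

# Linear extrapolation of that local estimate to a $10 -> $15 hike...
extrapolated = true_employment(10) + local_slope * (15 - 10)

# ...versus what the (hypothetical) true response at $15 would actually be:
actual = true_employment(15)

print(f"local slope per $1:   {local_slope:.2f}")    # -> -2.80
print(f"linear extrapolation: {extrapolated:.2f}")   # -> 85.20
print(f"true value at $15:    {actual:.2f}")         # -> 65.70
```

The quasi-experiment isn't wrong about the $10-to-$12 effect; it just says nothing, by itself, about how far the estimate travels.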
In lots of sciences, it seems like that's exactly how structural models get used. If you want to predict how the climate will respond to an increase in CO2, you use a structural, microfounded climate model based on physics, not a simple linear model based on some quasi-experiment like a volcanic eruption. If you want to predict how fish populations will respond to an increase in pollutants, you use a structural, microfounded model based on ecology, biology, and chemistry, not a simple linear model based on some quasi-experiment like a past pollution episode.
That doesn't mean you don't do the quasi-experimental studies, of course. You do them in order to check to make sure your structural models are good. If the structural climate model gets a volcanic eruption wrong, you know you have to go back and reexamine the model. If the structural ecological model gets a pollution episode wrong, you know you have to rethink the model's assumptions. And so on.
If you want, you could call this approach "falsification", though really it's about finding good models as much as it's about killing bad ones.
Economics could, in principle, do the exact same thing. Suppose you want to predict the effects of labor policies like minimum wages, liberalization of migration, overtime rules, etc. You could make structural models, with things like search, general equilibrium, on-the-job learning, job ladders, consumption-leisure complementarities, wage bargaining, or whatever you like. Then you could check to make sure that the models agreed with the results of quasi-experimental studies - in other words, that they correctly predicted the results of minimum wage hikes, new overtime rules, or surges of immigration. Those structural models that got the natural experiments wrong would be considered unfit for use, while those that got them right would stay on the list of usable models. As time went on, more and more natural experiments would shrink the set of usable models, while methodological innovations would enlarge it.
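The "shrinking set of usable models" procedure can be sketched in a few lines of code. Everything here is a stand-in: each candidate model is just a function predicting the employment effect of a stylized policy experiment, and the experiments, effects, and tolerance are invented for illustration.

```python
# Hypothetical candidate structural models (stand-ins for search models,
# bargaining models, etc.), each mapping an experiment to a predicted
# employment effect.
candidate_models = {
    "competitive": lambda exp: -2.0 * exp["wage_hike"],
    "monopsony":   lambda exp: -0.1 * exp["wage_hike"],
    "search":      lambda exp: -0.3 * exp["wage_hike"],
}

# Stylized quasi-experimental results: (experiment, observed effect).
natural_experiments = [
    ({"wage_hike": 2.0}, -0.5),   # e.g. a $2 minimum wage hike
    ({"wage_hike": 1.0}, -0.2),   # e.g. a smaller hike elsewhere
]

TOLERANCE = 0.5  # how far a prediction may miss before a model is discarded

# Each new natural experiment shrinks the set of usable models.
usable = dict(candidate_models)
for experiment, observed in natural_experiments:
    usable = {
        name: model for name, model in usable.items()
        if abs(model(experiment) - observed) <= TOLERANCE
    }

print(sorted(usable))  # models that survived every natural experiment
```

In this toy run the "competitive" model badly misses the first experiment and is dropped, while the other two survive; adding more (invented) experiments would thin the set further.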
But in practice, I think what often happens in econ is more like the following:
1. Some papers make structural models, observe that these models can fit (or sort-of fit) a couple of stylized facts, and call it a day. Economists who like these theories (based on intuition, plausibility, or the fact that their dissertation adviser made the model) then use them for policy predictions forever after, without ever checking them rigorously against empirical evidence.
2. Other papers do purely empirical work, using simple linear models. Economists then use these linear models to make policy predictions ("Minimum wages don't have significant disemployment effects").
3. A third group of papers does empirical work, observes the results, and then makes one structural model per paper to "explain" the empirical result just found. These models are generally never used or seen again.
A lot of young, smart economists trying to make it in the academic world these days seem to write papers that fall into Group 3. This seems true in macro, at least, as Ricardo Reis shows in a recent essay. Reis worries that many of the theory sections these economists tack onto the end of fundamentally empirical papers are actually pointless:
[I have a] decade-long frustration dealing with editors and journals that insist that one needs a model to look at data, which is only true in a redundant and meaningless way and leads to the dismissal of too many interesting statistics while wasting time on irrelevant theories.

It's easy to see this pro-forma model-making as a sort of conformity signaling - young, empirically-minded economists going the extra mile to prove that they don't think the work of the older "theory generation" (who are now their advisers, reviewers, editors and senior colleagues) was for naught.
But what is the result of all this pro-forma model-making? To some degree it's just a waste of time and effort, generating models that will never actually be used for anything. It might also contribute to the "chameleon" problem, by giving policy advisers an effectively infinite set of models to pick and choose from.
And most worryingly, it might block smart young empirically-minded economists from using structural models the way other scientists do - i.e., from trying to make models with consistently good out-of-sample predictive power. If model-making becomes a pro-forma exercise you do at the end of your empirical paper, models eventually become a joke. Ironically, old folks' insistence on constant use of theory could end up devaluing it.
Paul Romer worries about this in his "mathiness" essay:
[T]he new equilibrium: empirical work is science; theory is entertainment. Presenting a model is like doing a card trick. Everybody knows that there will be some sleight of hand. There is no intent to deceive because no one takes it seriously.

In addition, there are paper groups 1 and 2 to think about - the purely theoretical and purely empirical papers. There seems to be a disconnect between the two. Pure theory papers rarely get checked against data, leaving the shelves stocked with models that support any and every conclusion. Meanwhile, pure empirical papers don't often get used as guides to finding good structural models; instead, their results are simply linearly extrapolated.
In other words, econ seems too focused on "theory vs. evidence" instead of using the two in conjunction. And when the two do get used together, it's often in a tacked-on, pro-forma sort of way, without any meaningful interplay between them. Of course, this is just my own limited experience, and there are whole fields - industrial organization, environmental economics, trade - that I have relatively little contact with. So I could be over-generalizing. Nevertheless, I see very few economists explicitly calling for the kind of "combined approach" to modeling that exists in other sciences - i.e., using evidence to continuously restrict the set of usable models.