Then there are natural experiments. These papers try to find some variation in economic variables that is "natural", i.e. exogenous, and look at the effect this variation has on other variables that we're interested in. For example, suppose you wanted to know the benefits of food stamps. This would be hard to identify with a simple correlation, because all kinds of things might affect whether people actually get (or choose to take) food stamps in the first place. But then suppose you found a policy that awarded food stamps to anyone under 6 feet in height, and denied them to anyone over 6 feet. That distinction is pretty arbitrary, at least in the neighborhood of the 6-foot cutoff. So you could compare people who are just over 6 feet with people who are just under, and see whether the latter do better than the former.
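As a concrete (and entirely hypothetical) illustration, here is a minimal sketch of that comparison on simulated data — all numbers, including the true effect, are invented for the example. It fits a line on each side of the 6-foot cutoff and compares the two fits at the threshold:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data for the hypothetical food-stamp policy: anyone under
# 72 inches (6 feet) gets food stamps; anyone over does not.
n = 10_000
height = rng.normal(69.0, 3.0, n)
gets_stamps = (height < 72.0).astype(float)

# Well-being depends smoothly on height (a confound) plus an assumed
# true food-stamp effect of +2.0, plus noise.
wellbeing = 0.5 * (height - 69.0) + 2.0 * gets_stamps + rng.normal(0.0, 1.0, n)

# Local linear RD: within a narrow bandwidth, fit a line on each side
# of the cutoff and compare the two fitted values at the cutoff itself.
cutoff, bandwidth = 72.0, 1.0

def fit_at_cutoff(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    return slope * cutoff + intercept

below = (height >= cutoff - bandwidth) & (height < cutoff)
above = (height >= cutoff) & (height < cutoff + bandwidth)
rd_effect = fit_at_cutoff(height[below], wellbeing[below]) \
          - fit_at_cutoff(height[above], wellbeing[above])
print(f"RD estimate of the food-stamp effect: {rd_effect:.2f}")
```

Note that a naive difference in means within the window would still be biased by the smooth height trend; fitting the trend separately on each side and evaluating at the cutoff is what removes it, which is the whole point of the design.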
That's called a "regression discontinuity design," and it's one kind of natural experiment, or "quasi-experimental design." It's not as controlled as a lab experiment or field experiment (there could be other policies that also have a cutoff of 6 feet!), but it's much more controlled than most alternatives, and it's more ecologically valid than a lab experiment and cheaper and less ethically fraught than a field experiment. There are two other methods typically called "quasi-experimental": instrumental variables and difference-in-differences.
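To illustrate the second of those, here is a minimal difference-in-differences sketch on simulated data — the groups, trend, and true effect are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: two groups, two periods. Only the treated group
# receives the policy in period 2; both groups share a common time trend.
n = 5_000
group_effect, time_trend, true_effect = 1.0, 2.0, 3.0

control_pre  = rng.normal(0.0, 1.0, n)
control_post = time_trend + rng.normal(0.0, 1.0, n)
treated_pre  = group_effect + rng.normal(0.0, 1.0, n)
treated_post = group_effect + time_trend + true_effect + rng.normal(0.0, 1.0, n)

# DiD: the change in the treated group minus the change in the control
# group nets out both the fixed group difference and the common trend.
did = (treated_post.mean() - treated_pre.mean()) \
    - (control_post.mean() - control_pre.mean())
print(f"DiD estimate: {did:.2f}")
```

The identifying assumption here is "parallel trends": absent the policy, the two groups would have moved together. That assumption is itself a small theory, which is relevant to the comments below.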
Recently, Joshua Angrist and Jorn-Steffen Pischke wrote a book called Mostly Harmless Econometrics in which they trumpet the rise of these methods. They followed it with a 2010 paper called "The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics." In the book's preface, the authors write:
[T]here is no arguing with the fact that experimental and quasi-experimental research designs are increasingly at the heart of the most influential empirical studies in applied economics.
This has drawn some fire from fans of structural econometrics, who don't like the implication that their own methods are not "harmless". In fact, Angrist and Pischke's preface makes it clear that they do think that "[s]ome of the more exotic [econometric methods] are needlessly complex and may even be harmful."
But when they say their methods are becoming dominant, Angrist and Pischke have the facts right. Two new survey papers demonstrate this. First, there is "The Empirical Economist's Toolkit: From Models to Methods", by Matthew Panhans and John Singleton, which deals with applied microeconomics. Panhans and Singleton write:
While historians of economics have noted the transition toward empirical work in economics since the 1970s, less understood is the shift toward "quasi-experimental" methods in applied microeconomics. Angrist and Pischke (2010) trumpet the wide application of these methods as a "credibility revolution" in econometrics that has finally provided persuasive answers to a diverse set of questions. Particularly influential in the applied areas of labor, education, public, and health economics, the methods shape the knowledge produced by economists and the expertise they possess. First documenting their growth bibliometrically, this paper aims to illuminate the origins, content, and contexts of quasi-experimental research designs[.]
Here are two of the various graphs they show:
The second recent survey paper is "Natural Experiments in Macroeconomics", by Nicola Fuchs-Schuendeln and Tarek Alexander Hassan, which demonstrates how natural experiments can be used in macro. As you might expect, it's a lot harder to find good natural experiments in macro than in micro, but even there, the technique appears to be making some inroads.
So what does all this mean?
Mainly, I see it as part of the larger trend away from theory and toward empirics in the econ field as a whole. Structural econometrics takes theory very seriously; quasi-experimental econometrics often does not. Angrist and Pischke write:
A principle that guides our discussion is that the [quasi-experimental] estimators in common use almost always have a simple interpretation that is not heavily model-dependent.
It's possible to view structural econometrics as sort of a halfway house between the old, theory-based economics and the new, evidence-based economics. The new paradigm focuses on establishing whether A causes B, without worrying too much about why. (Of course, you can use quasi-experimental methods to test structural models, at least locally - most econ models involve a set of first-order conditions or other equations that can be linearized or otherwise approximated. But you don't have to do that.) Quasi-experimental methods don't get rid of theory; what they do is to let you identify real phenomena without necessarily knowing why they happen, and then go looking for theories to explain them, if such theories don't already exist.
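The instrumental-variables version of "A causes B, without worrying too much about why" can be sketched on simulated data — the instrument, confounder, and coefficients below are all invented for the example. An instrument that shifts the treatment but touches the outcome only through it lets the simple Wald ratio recover the causal effect even when a hidden confounder biases the naive regression:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical structure: instrument Z shifts treatment A, but affects
# outcome B only through A. An unobserved confounder U moves both A and
# B, so the naive OLS slope of B on A is biased; the Wald ratio is not.
n = 100_000
z = rng.binomial(1, 0.5, n).astype(float)    # binary instrument
u = rng.normal(0.0, 1.0, n)                  # unobserved confounder
a = 0.8 * z + u + rng.normal(0.0, 1.0, n)    # treatment
b = 2.0 * a + 3.0 * u + rng.normal(0.0, 1.0, n)  # outcome; true effect = 2.0

# Naive OLS slope, biased upward because U raises both A and B:
ols = np.cov(a, b)[0, 1] / np.var(a)

# Wald estimator: reduced-form effect of Z on B over first-stage
# effect of Z on A.
wald = (b[z == 1].mean() - b[z == 0].mean()) \
     / (a[z == 1].mean() - a[z == 0].mean())
print(f"OLS: {ols:.2f}, IV (Wald): {wald:.2f}")
```

The Wald ratio has that "simple interpretation that is not heavily model-dependent" — but note that the exclusion restriction (Z affects B only through A) is an untestable assumption, a point the commenters below press on.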
I see this as potentially being a very important shift. The rise of quasi-experimental methods shows that the ground has fundamentally shifted in economics - so much that the whole notion of what "economics" means is undergoing a dramatic change. In the mid-20th century, economics changed from a literary to a mathematical discipline. Now it might be changing from a deductive, philosophical field to an inductive, scientific field. The intricacies of how we imagine the world must work are taking a backseat to the evidence about what is actually happening in the world.
The driver is information technology. This does for econ something similar to what the laboratory did for chemistry - it provides an endless source of data, and it allows (some) controls.
Now, no paradigm gets things completely right, and no set of methods is always and universally the best. In a paper called "Tantalus on the Road to Asymptopia," renowned skeptic (skepticonomist?) Ed Leamer cautions against careless, lazy application of quasi-experimental methods. And there are some things that quasi-experimental methods just can't do, such as evaluating counterfactuals far away from current conditions. The bolder the predictions you want to make, the more you need a theory of how the world actually works. (To make an analogy, it's useful to catalogue chemical reactions, but it's more generally useful to have a periodic table, a theory of ionic and covalent bonds, etc.)
But just because you want a good structural theory doesn't mean you can always produce one. In the mid-80s, Ed Prescott declared that theory was "ahead" of measurement. With the "credibility revolution" of quasi-experimental methods, measurement appears to have retaken the lead.
Update: I posted some follow-up thoughts on Twitter. Obviously there is a typo in the first tweet; "quasi-empirical" should have been "quasi-experimental".
Interesting post. Thanks.
Great post. A&P are puzzling, though. They claim their estimates "almost always have a simple interpretation that is not heavily model-dependent." The fact that they think their results aren't model dependent says a lot about them. After all, they choose a statistical functional form, usually linear, and assume that this accurately describes the relationship they're studying. They almost always uncover a local effect, though they rarely emphasize this when drawing big policy conclusions. In any given case, their approach is likely to be consistent with some economic models and not consistent with others, which means their results are very model dependent indeed.
Avoiding the use of an economic model for interpreting data does not free you from making assumptions. If you choose not to decide you still have made a choice! Ultimately the worst crime economists can commit is refusing to acknowledge that they are making a lot of assumptions when interpreting data; in this respect, A&P haven't made the field more "credible".
Indeed, as soon as you assume generalizability, that's an implicit model. "The world often works in relevant ways the same way it did here."
Ryan, I can't speak for A&P, but I don't think they think quasi-experimental work is atheoretical. I think they just think that a study doesn't have to commit to one theory.
Agreed, Ryan, and moreover, exclusion restrictions are a model. They aren't a parametric model, but they do involve assumptions about behavior.
Very cool. I think the "credibility revolution" is a positive development! I've always thought that theory should provide the questions, while empirical studies should provide the answers. If theory is completely ignored, you won't be asking the right questions even if you have a great quasi-experimental design. So far, I think most people have been asking good questions using these techniques. The danger is to take the results too seriously. If your study doesn't rely on a model, then you can't claim that your estimates, no matter how credible or significant, should have sweeping policy implications. All you can say is that X causes Y.
Interesting! I took my undergraduate economics degree at Cambridge back in the late 1960s. In the intervening decades I've watched the subject become so (pointlessly) mathematical that I doubt I would even have been accepted into the program any more. Happy to see that it is now moving back toward the real world!
Isn't this old news?
In the mid-80s, Ed Prescott declared that theory was "ahead" of measurement. With the "credibility revolution" of quasi-experimental methods, measurement appears to have retaken the lead.
When you're facing backwards and don't realize it, it's easy to confuse "ahead" and "behind".
More seriously, as Larry Summers pointed out (29 years ago!) in a companion piece (https://www.minneapolisfed.org/research/qr/qr1043.pdf ):
Prescott's interpretation of his title is revealing of his commitment to his theory. He does not interpret the phrase theory ahead of measurement to mean that we lack the data or measurements necessary to test his theory. Rather, he means that measurement techniques have not yet progressed to the point where they fully corroborate his theory. Thus, Prescott speaks of the key deviation of observation from theory as follows: "An important part of this deviation could very well disappear if the economic variables were measured more in conformity with theory. That is why I argue that theory is now ahead of business cycle measurement."
In all seriousness, how does "Theory is ahead of measurements" differ from "We are just making stuff up?"
Hasn't science has always been the quest to understand [develop theories about] what is at least observed, even if not precisely measured?
Hasn't economic theory of the last several decades, frex RBC Theory, been the very antithesis of science?
In all seriousness, how does "Theory is ahead of measurements" differ from "We are just making stuff up?"
To be honest, I've never understood how those were supposed to be different.
Data gathering is theory-laden. We gather data because we expect it to be relevant based on some theory. We don't always have the data available to test a theory (string theory is a contemporary example of some controversy), but the more seriously a theory is taken the more we are inclined to seek out relevant data. The takeaway line, which I think holds regardless of what you think of Prescott's actual theory (and Garrett Jones has convinced me it doesn't match the recent US economy), is "This feedback between theory and measurement is the way mature, quantitative sciences advance."
The specific area Prescott discussed in "Theory Ahead of Measurement" where the two diverge is labor elasticity. Sumner/Ambrosini discuss that here.
Yes. Science advances when you have an interplay - basically, when you alternate - between theory and evidence.
What quasi-experimental techniques do is to alter that relationship, make it a bit more like the natural sciences.
Very confusing. I kept reading "Angrist" as "Against", and couldn't figure out who kept being against Noah's proposition(s).
Note: Never read about economics on Friday evening when, as everyone knows (or should), the Abrahamic God says one should cleave unto Pizza and Beer. Zachariah...
Good post. Mercifully the field of economics is moving away from implausible assumptions and wild conjecture, and is now embracing evidence-based analysis.
"The new paradigm focuses on establishing whether A causes B, without worrying too much about why."
Care to explain how this sentence makes sense from an epistemological point of view? If you don't know WHY A "causes" B then you don't know that A "causes" B, only that if you control for one possible other variable (the "natural experiment") the two variables remain correlated. And of course there's absolutely no guarantee - or even a reason to believe - that next time A happens it will cause B again, or that if A happens in a different country, it will cause B again. So even if you've established (although you really haven't) that in 1972, in the counties of the state of Ohio, A caused B, you can't say that something similar to A will cause B in 2016 in the counties of the state of Alaska.
Angrist and Pischke just fetishize one particular approach that has worked well in the particular sub-sub-field in which they're active. Ok. But to go from there and insist that everyone else must do the same thing, even in different fields, even when they're asking different questions, is just silly and sort of egotistical.
If you don't know WHY A "causes" B then you don't know that A "causes" B, only that if you control for one possible other variable (the "natural experiment") the two variables remain correlated. And of course there's absolutely no guarantee - or even a reason to believe - that next time A happens it will cause B again, or that if A happens in a different country, it will cause B again.
This is just the problem of induction. It holds for any prediction of the future, including every calculation ever performed in physics and chemistry.
But to go from there and insist that everyone else must do the same thing, even in different fields, even when they're asking different questions, is just silly and sort of egotistical.
Well, that's how paradigms work. To get everyone on the same page, some good stuff has to get thrown away. That might be worth it, or it might not be. You've read Kuhn, right?
Have you read anything since Kuhn? E.g.: Kuhn's Structure of Scientific Revolutions at Fifty, Richards and Dalton, eds.
The most interesting thing about these trends for me is the interest that philosophers have taken in them. The main problem with all observational studies continues to be establishing causal relationships. Yet the basic problem with these studies is no different from a fully controlled, randomized experiment: omitted variable bias. The philosophers who take causality seriously have a lot to teach us here and it goes way beyond the quasi-experimental designs mentioned above.
They may have taken the "con" out of econometrics, but they have left behind the "tric".
Correlation is not causation.