Tuesday, February 28, 2017

Can't we all just get along? (Econometrics edition)


Some academic fights I understand, like the argument over whether to use sticky prices in DSGE models. Others I have trouble comprehending. One of these is the fight between champions of structural and quasi-experimental econometrics. Angrist and Pischke, the champions of the quasi-experimental approach, waste few opportunities to diss structural work, and the structural folks often fire back. What I don't get is: Why not just do both?

Each approach has its own strengths and unavoidable weaknesses. Francis Diebold explains them in a nerdy way in a recent blog post; I tried to explain them in a non-nerdy Bloomberg View post a year ago.

The strength of the structural approach, relative to the quasi-experimental approach, is that you can make much bigger, bolder predictions. With the quasi-experimental approach, you typically have a linear model, and you estimate the slope of that line around a single point in the space of observables. As we all remember from high school calculus, we can always do that as long as the function is differentiable:

[Image: a tangent line touching a curve at a single point]

But as you get farther from that point, extrapolation of the curve becomes less accurate. The curve curves. And just knowing the slope of that tangent line at that one point won't tell you how quickly your linear approximation becomes useless as you move away from that point. 
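
In Taylor-expansion terms: near a point x₀ where you have data, f(x) ≈ f(x₀) + f′(x₀)(x − x₀), and the error of that first-order approximation is roughly f″(ξ)(x − x₀)²/2 for some ξ between x₀ and x. The error grows with the square of the distance from x₀, and its size depends on a second derivative that the local estimate never touches.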

So this means that quasi-experimental methods have limited utility, but we can't really know how limited. Suppose we found that the minimum wage has a very small effect on jobs when it goes from $4.25 to $5.05. How much does that tell us about how bad a $7.50 minimum wage would be? Or a $12.75 one? In fact, if all we have is a quasi-experimental study, we don't actually know how much it tells us.
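
To see how fast this can bite, here's a purely illustrative numerical sketch in Python. The "true" employment function below is invented, as is every number in it; the point is only that a slope estimated on the $4.25-to-$5.05 range comes with no warning about where it stops working:

```python
# Purely illustrative: a made-up nonlinear "true" employment function.
# Nothing here is an estimate of anything real.

def employment(wage):
    # Hypothetical shape: nearly flat at low wages, with job losses
    # accelerating once the wage passes some threshold.
    return 100.0 - 0.1 * wage - 0.25 * max(wage - 6.0, 0.0) ** 2

# The "quasi-experimental" slope, estimated from the $4.25 -> $5.05 change:
slope = (employment(5.05) - employment(4.25)) / (5.05 - 4.25)

for target in (7.50, 12.75):
    linear = employment(4.25) + slope * (target - 4.25)
    print(f"${target:.2f}: linear extrapolation {linear:.1f}, "
          f"'true' value {employment(target):.1f}")
```

In this made-up world, the linear guess is off by about half a job per hundred at $7.50 and by more than eleven at $12.75, and nothing in the local estimate tells you which regime you're in.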

Quasi-experimental results come with basically no guide to their own external validity. You have to be Bayesian in order to apply them outside of the exact situation that they studied. You have to say "Well, if going from $4.25 to $5.05 wasn't that bad, I doubt going to $6.15 would be that much worse!" That's a prior.

If you want to believe that your model works far away from the data that you used to validate it, you need to believe in a structural model. That model could be linear or nonlinear, but "structural" basically means that you think it reflects factors that are invariant to conditions not explicitly included in the model. "Structural," in other words, means "the stuff that (you hope) is really going on."
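
To make that concrete with a deliberately toy example: a constant-elasticity labor demand curve L(w) = A · w^(−ε) is "structural" in this sense if you believe ε is a deep parameter that stays the same whether the minimum wage is $4.25 or $12.75. Grant that belief, and an estimate of ε from any wage range pins down employment at every wage; the extrapolation is licensed by the assumed invariance, not by the data.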

The weakness of structural modeling is that good structural models are really, really rare. Most real-world situations in economics are pretty complicated - there are a lot of ins, a lot of outs, a lot of what-have-you. When you make a structural model you assume a lot of things away, and you assume that you've correctly specified the parts you leave in. This can often leave you with a totally bullshit fantasy model. 

So just test the structural model, and if the data reject it, don't use it, right? Hahahahahahaha. That would kill almost all the models in existence, and no models means no papers means no jobs for econometricians. Also, even if you're being totally serious and scientific and intellectually honest, it's not even clear how harsh you want to be when you test an econ model - this isn't physics, where things fit the data to arbitrary precision. How good should we even expect a "good" model to be? 

But that's a side track. What actually happens is that lots of people just assume they've got the right model, fit it as best they can, and report the parameter estimates as if those are real things. Or as Francis Diebold puts it:
A cynical but not-entirely-false view is that structural causal inference effectively assumes a causal mechanism, known up to a vector of parameters that can be estimated. Big assumption. And of course different structural modelers can make different assumptions and get different results.
So with quasi-experimental econometrics, you know one fact pretty solidly, but you don't know how reliable that fact is for making predictions. And with structural econometrics, you make big bold predictions by making often heroic theoretical assumptions. 

(The bestest bestest thing would be if you could use controlled lab experiments to find reliable laws that hold in more complex environments, and use those to construct reliable microfounded models. But that's like wishing for a dragon steed. Keep wishing.)

So why not do both things? Do quasi-experimental studies. Make structural models. Make sure the structural models agree with the findings of the quasi-experiments. Make policy predictions using both the complex structural models and the simple linearized models, and show how the predictions differ. 
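
As a toy version of that program (using the same invented baseline as the sketch above, and a constant-elasticity "structural" form that is purely an assumption for illustration), you could calibrate the structural parameter to match the quasi-experimental slope and then report both extrapolations side by side:

```python
# Toy illustration: calibrate a hypothetical constant-elasticity labor demand
# curve L(w) = A * w**(-eps) to match a quasi-experimental slope estimated
# around w = $4.25, then compare its predictions with the naive linear
# extrapolation. All numbers are invented.

base_wage, base_jobs = 4.25, 99.575   # hypothetical baseline
qe_slope = -0.10                      # hypothetical quasi-experimental slope

# Match level and slope at the baseline, using dL/dw = -eps * L / w:
eps = -qe_slope * base_wage / base_jobs
A = base_jobs * base_wage ** eps

for target in (7.50, 12.75):
    linear = base_jobs + qe_slope * (target - base_wage)
    structural = A * target ** (-eps)
    print(f"${target:.2f}: linear {linear:.1f}, structural {structural:.1f}")
```

The two extrapolations agree near the data and drift apart farther out; reporting both makes the model dependence of the big predictions visible. (And if the assumed structural form is wrong, as it is relative to the first sketch's "true" function, the structural extrapolation can be just as misleading, which is why you check it against every quasi-experiment you can find.)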

What's wrong with this approach? Why should structural vs. quasi-experimental be an either-or? Why the academic food fight? If there's something that needs fighting in econ, it's the (now much rarer but still too common) practice of making predictions purely from theory without checking data at all.

13 comments:

  1. Anonymous 6:23 PM

    I think this is the right take. I have two observations that I don't see people make in these debates:

    (1) There's a pragmatic argument to be made for the quasi-experimental methods: they all rely on the "same" assumption for their internal validity, namely that the treatment is orthogonal to excluded factors. It is easier to train PhDs to critically evaluate this assumption (which we mostly do anyway, since it's the same assumption that allows us to put a causal interpretation on a regression). So within social science, there are more people who are capable of assessing and critiquing the Angrist-style models than the more complex structural models. This could produce better scientific knowledge overall, because it means that the papers that survive this criticism are more effectively vetted.

    (2) That being said, I find that most of these RD/IV/RCT papers implicitly use theory to extrapolate away from the region where they have a credible causal estimate. It's common to see arguments such as "my RD tells me that there is a positive effect of the treatment at the discontinuity, and because [insert loosely considered theoretical justification], it also holds away from the discontinuity." Structural models tend to be (necessarily) more explicit about the theoretical arguments that allow them to run counterfactual experiments. We need to be better as a field at pushing back on the ad hoc theorizing that fills the conclusions of your typical quasi-experimental paper.

  2. "This can often leave you with a totally bullshit fantasy model. So just test the structural model, and if the data reject it, don't use it right? Hahahahahaha."

    I nominate this excerpt as the finest three consecutive sentences to appear in an Econ blog so far in 2017. Noah back in old form. Great!

    Replies
    1. I second that!

    2. Anonymous 2:06 PM

      "This can often leave you with a totally bullshit fantasy model. So just test the structural model, and if the data reject it, don't use it right? Hahahahahaha."

      This. In theory, there should be some good structural stuff out there, but in practice, all I ever see in these papers is implausible assumptions buried under a sea of algebra. For some reason, they never test their model out of sample. Actually, not for some reason. It's because their models generally have great in-sample fit and are useless out of sample.

  3. "The debate is rhetorical. We know, from first principles, that any causal conclusion drawn from observational studies must rest on untested causal assumptions. Therefore, whatever relation an instrumental design bears to an ideal controlled experiment is just one such assumption and, to the extent that the "experimental" approach is valid, it is a routine exercise in structural economics."

    From Pearl, "Trygve Haavelmo and the Emergence of Causal Calculus," Econometric Theory, 2015.

    Very much recommended. Link: ftp://pike.cs.ucla.edu/pub/stat_ser/r391.pdf

  4. You seem to be getting close to Robert Lucas's views on calibration. Write down structural models. Use lots of estimates from related studies to pin down many of the key parameters.

  5. Anonymous12:54 PM

    My impression is that for the quasi-experimental crowd, it's just ok to say: I don't know and I can't possibly know until I have appropriate data to test my hypothesis.

  6. It would seem obvious that these things would feed each other, with structural theories prompting experimental tests, and experimental results providing indications of whether a theory is on the right track or missing a variable or three. Fights like this make me want to slap economists. More than I usually want to, that is.

  7. I find it amusing that fights within econometrics are often nastier and bloodier than those in other parts of economics, even though these fights do not obviously involve politics or ideology, although they can when the fighting is about specific models or issues such as labor supply elasticities or school choice, or whatever. But this sort of bitter fighting has been going on for a long time. Battles between frequentists and Bayesians have often been pretty personal and unpleasant, as have those between time-series and structural modelers, although the latter seem to have quieted down somewhat, with the contending parties managing a sort of division of labor over data so as to avoid each other. And, as noted above, there are also the calibrators versus pretty much everybody else.

    This kind of nasty fighting has been going on for more than 40 years at least. I remember being shocked as a grad student when I went to an econometrics session at a conference and saw people standing up and hurling personal insults along the lines of "You do not know what you are talking about!" over these disputes. But then the Bayesians and frequentists have been at it for a very long time, not to mention the old Fisher-Pearson fights, which are pushing a century in age.

    Barkley Rosser

  8. It is somewhat odd that the arguments among econometricians are so intense and can get so personal, but it has been going on for a long time, nearly a century at least, if not more. We have had Fisher vs. Pearson-Neyman. We have had Bayesians versus frequentists. We have had structural modelers versus time-series people. I was shocked 40 years ago when I first walked into an econometrics session at a conference and heard participants accusing each other of "not knowing what you are talking about" in raised voices. So it has been going on for some time, and most of these disputes remain at least somewhat unresolved.

    Barkley Rosser

  9. Anonymous 2:27 PM

    I don't have a Twitter account, so I can't post this there. But you might want to read David Frum's criticism of the book "Coming Apart":

    http://www.thedailybeast.com/articles/2012/02/06/charles-murray-book-review.html

    (Make sure to turn the ad-blocker on as the page is unreadable otherwise.)

  10. Anonymous 2:03 AM

    “Peter Drucker said ‘There’s a difference between doing things right and doing the right thing.’ Doing the right thing is wisdom, and effectiveness. Doing things right is efficiency. The curious thing is the righter you do the wrong thing the wronger you become. If you’re doing the wrong thing and you make a mistake and correct it you become wronger. So it’s better to do the right thing wrong than the wrong thing right. Almost every major social problem that confronts us today is a consequence of trying to do the wrong things righter.”
