Chris House, my old macro prof, has posted a rebuttal to my rebuttal of his original rebuttal of my article in The Week (pant pant pant). So I thought I should respond once more.
Chris begins his latest post with an elaboration of the scientific method, which I agree with. But in his earlier post, he wrote:
In many...areas in economics the theories aren’t rejected because...the theories simply don’t exist.

At first I thought that he was complaining about fields of economics that generate theories inductively - i.e., by first observing and then trying to think of a theory to explain those observations. I said "What's the problem with that?" But Chris doesn't have a problem with that, it turns out. My mistake! What he has a problem with, apparently, is that some areas of economics have no theories at all...ever. "The theories simply don't exist."
It would be interesting to know which areas of economics Chris thinks are completely devoid of theories. I have never, to my knowledge, encountered such a thing.
Next, Chris takes issue with my criticism of "moment matching":
[Noah] makes some surprising remarks about “moment matching.” It’s true that macroeconomists often evaluate their theories by comparing the statistical moments implied by the model with the analogous moments in the data but it’s hard to see this as a problem. Moment matching is the underpinning of almost all statistics. Estimating a mean and a variance is a special case of matching moments. So is OLS estimation (so is IV and basically every other econometric technique other than maximum likelihood estimation).

Chris may have missed my point here. By "moment matching" I was not referring to method-of-moments estimation. I was referring instead to the technique, common in the early RBC literature, of making eyeball comparisons of a small handful of simulated moments to their empirical counterparts, waving one's hands, and saying "OK, we sort of get these ones right, but not these other ones." I have seen this done in seminars.
Is this kind of thing legitimate? Sure, of course! But if you stop there - if you are satisfied with semi-quantitative eyeball comparisons like that - you haven't really validated your model against empirical data. You've only passed a very easy first test. You really ought to do more. So I was criticizing not the practice of "eyeball moment matching" itself, but the idea that you can just stop there and not subject your model to more stringent tests before declaring it a success.
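To make the distinction concrete, here's a toy sketch in Python. All the numbers are made up and neither series comes from a real model; the point is just the difference between eyeballing a table of moments and asking whether the gaps are small relative to sampling error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: think of "data" as a detrended output series and
# "model_sim" as a series simulated from some RBC-style model. Neither is
# output from an actual model here.
data = rng.normal(0.0, 1.0, 200)
model_sim = rng.normal(0.1, 1.3, 10_000)

def autocorr1(x):
    """First-order autocorrelation of a series."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

# The "eyeball" step: print a few moments side by side and squint.
for name, fn in [("mean", np.mean), ("std", np.std), ("autocorr(1)", autocorr1)]:
    print(f"{name:12s} data {fn(data):+.3f}   model {fn(model_sim):+.3f}")

# One more stringent step (of many possible): bootstrap the data moment and
# ask whether the model moment even falls inside its sampling interval.
boot = [np.std(rng.choice(data, size=len(data), replace=True))
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"std: model {np.std(model_sim):.3f} vs. data 95% CI [{lo:.3f}, {hi:.3f}]")
```

Even the bootstrap check is a low bar, of course. The point is that "looks sort of close" shouldn't be where validation ends.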
Some macroeconomists, it seems to me, have not always been as eager to compare their models to data as Chris says.
[Noah also] questions whether quantification is valuable when...the theory is rejected. The way I look at it, the parameters are inputs into the models and will likely have relevance in many settings. Suppose you have a labor supply / labor demand model and you want to use it to analyze an increase in the minimum wage...The predicted change in unemployment will be a function of the labor supply elasticity and the labor demand elasticity. You can certainly estimate these parameters without testing the model...[T]he parameter estimates retain their usefulness even if the model is rejected. (emphasis mine)

I put some parts of this in bold because I think these parts are wrong. If your model is a bad model, then it's very hard to know when and where your estimated parameter values will be useful.
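For concreteness, here's a back-of-the-envelope version of the calculation Chris is describing, using the textbook competitive labor market model; the elasticity numbers are hypothetical.

```python
# With labor supply elasticity eps_s and labor demand elasticity eps_d, a
# binding minimum wage raised pct_dw above equilibrium creates excess labor
# supply of roughly (eps_s + |eps_d|) * pct_dw, to a first-order approximation.
eps_s = 0.3    # labor supply elasticity (assumed, for illustration)
eps_d = -0.4   # labor demand elasticity (assumed, for illustration)
pct_dw = 0.10  # a 10% minimum wage increase above the equilibrium wage

excess_supply_pct = (eps_s + abs(eps_d)) * pct_dw
print(f"predicted excess labor supply: {excess_supply_pct:.1%} of employment")
```

Chris is right that the prediction is just a function of the two elasticities. The question is whether the numbers you plug in can be trusted outside the model that produced them.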
For example, suppose you're trying to estimate a consumption Euler equation under the assumption of Constant Absolute Risk Aversion (CARA) utility. But suppose that people actually have Constant Relative Risk Aversion (CRRA). In that case, your estimate of the coefficient of absolute risk aversion will not be usable in other contexts, because it is not a structural parameter. Alternatively, suppose that many consumers are hand-to-mouth consumers who do not obey an Euler equation at all; in other words, your model is misspecified. In that case, your estimate of the coefficient of absolute risk aversion will be heavily biased, and if you use it in other contexts, you'll be using a bad number. In either of these cases, the data will also reject the Euler equation model itself. The rejection of the model will be a strong hint that the parameter estimates are not usable in any context.
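Here's a tiny numeric illustration of the CARA-under-CRRA case; the CRRA coefficient is made up.

```python
# Under CRRA utility u(c) = c**(1 - gamma) / (1 - gamma), absolute risk
# aversion is -u''(c) / u'(c) = gamma / c, which falls as consumption rises.
# A researcher who wrongly assumes CARA (one constant alpha for everyone)
# recovers, at best, gamma / c for the typical consumption level in the
# sample -- a number with no meaning at other consumption levels.
gamma = 2.0  # hypothetical "true" CRRA coefficient, chosen for illustration

for c in [1.0, 2.0, 5.0, 10.0]:
    alpha_local = gamma / c  # the "constant" a CARA fit would recover locally
    print(f"consumption {c:5.1f} -> locally implied CARA alpha {alpha_local:.2f}")
```

Whatever "constant" you estimate depends on where in the consumption distribution your data happen to sit, which is exactly what it means for the parameter not to be structural.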
To take another example, suppose I make a macro model in which the erratic breaths of Skippy the Giant Flying Mongoose keep firms from changing their prices. A key parameter in my model is the strength of Skippy's breath. When I estimate the full model, I obtain a quantitative estimate for the breath strength parameter. The data reject my overall model. OK, fine. But I also have another, completely different model in which firms' price changes matter! So I can plug the Skippy breath strength parameter estimate from before into that other model, and use that value to figure out how often firms change their prices, right? Well, maybe not. Because actually, it turns out that firms don't change their prices in response to the breath of an imaginary flying mongoose (sorry Skippy!).
Basically, if your parameter isn't a structural parameter - if it isn't "a real thing", in some sense - then that's probably going to cause your model to fail to match the data (at least if you have good enough data). And if other parts of your model are misspecified, that will probably affect your estimates of the parameter in question. Now, Chris is right that the converse is not necessarily true - your model can be misspecified but your parameter might be structural anyway, and your estimate of it in the context of the bad model might still be a good estimate.
It just seems somewhat unlikely.
Chris next addresses my point about "puzzles":
Noah’s claim that empirical puzzles are easy to come by is simply wrong. Noah’s (intentionally) facetious example would only be a puzzle if we had some reason to believe the theory a priori.

But that's what I tried to say earlier. "Puzzles" are only "puzzles" when a theory seems so convincing and truth-y that we scratch our heads in wonderment at the fact that it doesn't fit the data. Skippy is not a priori believable enough for his absence to warrant a "puzzle" (sorry Skippy!).
But a priori is a subjective thing. It really means "from one's prior beliefs". Finding a "puzzle" can just mean convincing a bunch of people in your field that a theory is really really plausible even though it doesn't match what we see. That may not be an easy thing to do, but it's not exactly a great scientific discovery either, at least to someone who doesn't share your priors.
Anyway, Chris concludes his post with the example of heliocentrism. Early versions of heliocentric theory made big mistakes, but the basic concept was sound. So eventually astronomers found a heliocentric model that worked great. And yes...eventually, we may find a DSGE model that explains the business cycle really well. I definitely don't count it out. Actually, this exact same analogy was made a couple years ago by Matt Yglesias.
So maybe the current macro paradigm will eventually pay off big.
But OK, suppose for a moment - just imagine - that somewhere, on some other planet, there was a group of alien macroeconomists who made a bunch of theories that were completely wrong, and were not even close to anything that could actually describe the business cycles on that planet. And suppose that the hypothetical aliens kept comparing their nonsense theories to data, and they kept getting rejected by the data, but the aliens still found the nonsense theories very cool and very a priori convincing, and they kept at it, finding "puzzles", estimating parameter values, making slightly different nonsense models, etc., in a neverending cycle of brilliant non-discovery.
Now tell me: In principle, how should those aliens tell the difference between their situation, and our own? That's the question that I think we need to be asking, and that a number of people on the "periphery" of macro are now asking out loud.