Chris House, my old macro prof, has posted a rebuttal to my rebuttal of his original rebuttal of my article in The Week (pant pant pant). So I thought I should respond once more.
Chris begins his latest post with an elaboration of the scientific method, which I agree with. But in his earlier post, he wrote:
In many...areas in economics the theories aren’t rejected because...the theories simply don’t exist.

At first I thought that he was complaining about fields of economics that generate theories inductively - i.e., by first observing and then trying to think of a theory to explain those observations. I said "What's the problem with that?" But Chris doesn't have a problem with that, it turns out. My mistake! What he has a problem with, apparently, is that some areas of economics have no theories at all...ever. "The theories simply don't exist."
It would be interesting to know which areas of economics Chris thinks are completely devoid of theories. I have never, to my knowledge, encountered such a thing.
Next, Chris takes issue with my criticism of "moment matching":
[Noah] makes some surprising remarks about “moment matching.” It’s true that macroeconomists often evaluate their theories by comparing the statistical moments implied by the model with the analogous moments in the data but it’s hard to see this as a problem. Moment matching is the underpinning of almost all statistics. Estimating a mean and a variance is a special case of matching moments. So is OLS estimation (so is IV and basically every other econometric technique other than maximum likelihood estimation).

Chris may have missed my point here. By "moment matching" I was not referring to method-of-moments estimation. I was referring instead to the technique, common in the early RBC literature, of making eyeball comparisons of a small handful of simulated moments to their empirical counterparts, waving one's hands, and saying "OK, we sort of get these ones right, but not these other ones." I have seen this done in seminars.
Is that a legitimate thing to do? Sure, of course! But if you stop there - if you are satisfied with semi-quantitative eyeball comparisons like that - you haven't really validated your model against empirical data. You've only passed a very easy first test. You really ought to do more. So I was criticizing not the practice of "eyeball moment matching" itself, but the idea that you can just stop there and not subject your model to more stringent tests before declaring it a success.
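To make the distinction concrete, here's a toy version of the weak test I'm talking about. This is purely illustrative: the AR(1) processes below are stand-ins, not any actual RBC model, and all the numbers are made up.

```python
# Purely illustrative: a toy AR(1) stands in for the "model", another AR(1)
# stands in for the "data". Nothing here comes from any real paper.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(rho, sigma, n):
    """Simulate n periods of x_t = rho * x_{t-1} + eps_t, eps_t ~ N(0, sigma^2)."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal(0.0, sigma)
    return x

data = simulate_ar1(rho=0.9, sigma=1.0, n=2000)   # stand-in "empirical" series
model = simulate_ar1(rho=0.8, sigma=1.1, n=2000)  # series simulated from the "model"

def moments(x):
    return {"std": round(x.std(), 2),
            "autocorr(1)": round(np.corrcoef(x[:-1], x[1:])[0, 1], 2)}

print("data :", moments(data))
print("model:", moments(model))
# The weak test: stare at the two rows, declare "close enough" for the moments
# you like, shrug at the ones you miss, and stop. No formal test, no standard
# errors, no comparison against competing models.
```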
Some macroeconomists, it seems to me, have not always been as eager to compare their models to data as Chris says.
Chris continues:
[Noah also] questions whether quantification is valuable when...the theory is rejected. The way I look at it, the parameters are inputs into the models and will likely have relevance in many settings. Suppose you have a labor supply / labor demand model and you want to use it to analyze an increase in the minimum wage...The predicted change in unemployment will be a function of the labor supply elasticity and the labor demand elasticity. You can certainly estimate these parameters without testing the model...[T]he parameter estimates retain their usefulness even if the model is rejected. (emphasis mine)

I put some parts of this in bold because I think these parts are wrong. If your model is a bad model, then it's very hard to know when and where your estimated parameter values will be useful.
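To see the sort of exercise Chris has in mind, here's a back-of-envelope sketch. The linearization and the elasticity numbers are mine and purely hypothetical, not anything from his post:

```python
# Textbook competitive labor market, linearized: after a binding minimum wage
# hike of dw percent, labor supplied rises by roughly eta_s * dw and labor
# demanded falls by roughly |eta_d| * dw. All numbers are hypothetical.

def excess_supply_pct(wage_increase_pct, supply_elasticity, demand_elasticity):
    """First-order percentage gap between labor supplied and labor demanded."""
    return (supply_elasticity + abs(demand_elasticity)) * wage_increase_pct

# Hypothetical estimates: eta_s = 0.3, eta_d = -0.5.
print(excess_supply_pct(10.0, 0.3, -0.5))  # 10% hike -> 8% excess supply
```

The predicted gap is nothing but a function of the two estimated elasticities, which is exactly why those estimates had better be structural.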
For example, suppose you're trying to estimate a consumption Euler equation under the assumption of Constant Absolute Risk Aversion (CARA) utility. But suppose that people actually have Constant Relative Risk Aversion (CRRA). In that case, your estimate of the coefficient of absolute risk aversion will not be usable in other contexts, because it is not a structural parameter. Alternatively, suppose that many consumers are hand-to-mouth consumers who do not obey an Euler equation at all; in other words, your model is misspecified. In that case, your estimate of the coefficient of absolute risk aversion will be heavily biased, and if you use it in other contexts, you'll be using a bad number. In either of these cases, the data will also reject the Euler equation model itself. The rejection of the model will be a strong hint that the parameter estimates are not usable in any context.
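Here's the first failure mode in miniature, a toy calculation of my own (not from either post): the truth is CRRA with gamma = 2, and we stubbornly fit the CARA form anyway.

```python
# Toy illustration with made-up numbers: true marginal utility is CRRA,
# u'(c) = c**(-gamma), but we insist on fitting the CARA form,
# log u'(c) = const - alpha * c. The best-fit alpha then depends on which
# range of consumption we happen to observe, the signature of a
# non-structural parameter.
import numpy as np

gamma = 2.0  # true coefficient of relative risk aversion

def best_fit_cara_alpha(c):
    """OLS fit of log u'(c) = const - alpha * c to the true CRRA values."""
    log_mu = -gamma * np.log(c)          # log of true marginal utility
    slope, _ = np.polyfit(c, log_mu, 1)  # slope and intercept
    return -slope

poor = np.linspace(0.5, 1.5, 100)   # sample of low-consumption households
rich = np.linspace(2.0, 4.0, 100)   # sample of high-consumption households

print(best_fit_cara_alpha(poor))    # roughly 2.1 on one sample...
print(best_fit_cara_alpha(rich))    # ...roughly 0.7 on the other. Some "constant".
```

Carry the "alpha" from one sample over to the other and you're carrying a bad number; the only warning sign was that the CARA model didn't fit.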
To take another example, suppose I make a macro model in which the erratic breaths of Skippy the Giant Flying Mongoose keep firms from changing their prices. A key parameter in my model is the strength of Skippy's breath. When I estimate the full model, I obtain a quantitative estimate for the breath strength parameter. The data reject my overall model. OK, fine. But I also have another, completely different model in which firms' price changes matter! So I can plug the Skippy breath strength parameter estimate from before into this second model, and use that value to figure out how often firms change their prices, right? Well, maybe not. Because actually, it turns out that firms don't change their prices in response to the breath of an imaginary flying mongoose (sorry Skippy!).
Basically, if your parameter isn't a structural parameter - if it isn't "a real thing", in some sense - then that's probably going to cause your model to fail to match the data (at least if you have good enough data). And if other parts of your model are misspecified, that will probably affect your estimates of the parameter in question. Now, Chris is right that the converse is not necessarily true - your model can be misspecified but your parameter might be structural anyway, and your estimate of it in the context of the bad model might still be a good estimate.
It just seems somewhat unlikely.
Chris next addresses my point about "puzzles":
Noah’s claim that empirical puzzles are easy to come by is simply wrong. Noah’s (intentionally) facetious example would only be a puzzle if we had some reason to believe the theory a priori.

But that's what I tried to say earlier. "Puzzles" are only "puzzles" when a theory seems so convincing and truth-y that we scratch our heads in wonderment at the fact that it doesn't fit the data. Skippy is not a priori believable enough for his absence to warrant a "puzzle" (sorry Skippy!).
But a priori is a subjective thing. It really means "from one's prior beliefs". Finding a "puzzle" can just mean convincing a bunch of people in your field that a theory is really really plausible even though it doesn't match what we see. That may not be an easy thing to do, but it's not exactly a great scientific discovery either, at least to someone who doesn't share your priors.
Anyway, Chris concludes his post with the example of heliocentrism. Early versions of heliocentric theory made big mistakes, but the basic concept was sound, so eventually astronomers found a heliocentric model that worked great. And yes...eventually, we may find a DSGE model that explains the business cycle really well. I definitely don't count it out. Actually, this exact same analogy was made a couple years ago by Matt Yglesias.
So maybe the current macro paradigm will eventually pay off big.
But OK, suppose for a moment - just imagine - that somewhere, on some other planet, there was a group of alien macroeconomists who made a bunch of theories that were completely wrong, and were not even close to anything that could actually describe the business cycles on that planet. And suppose that the hypothetical aliens kept comparing their nonsense theories to data, and they kept getting rejected by the data, but the aliens still found the nonsense theories very cool and very a priori convincing, and they kept at it, finding "puzzles", estimating parameter values, making slightly different nonsense models, etc., in a neverending cycle of brilliant non-discovery.
Now tell me: In principle, how should those aliens tell the difference between their situation, and our own? That's the question that I think we need to be asking, and that a number of people on the "periphery" of macro are now asking out loud.
Noah, I don't know why you keep hating on the promising Skippy the Giant Flying Mongoose model, which has already given us a wealth of novel policy insights about the relationship between mongooses and monetary velocity. So what if we use unmeasurable free parameters to fit data to theory curves? Falsifiability just sets you up for potential failure and hurt feelings. Correlation is the next best thing to causation. The artful truth is that you have to know which model to curve fit to any given combination of political bias and policy question, which is why policymakers will need macroeconomist expertise for the foreseeable future.
Also I concur with Dr. House that a scientist is someone who goes through the motions of talking like a scientist. And we can all learn from Kepler that if we cling to our a priori assumptions long enough, we may develop a model that validates them. We in the Skippy camp firmly believe that time will prove that we were the Keplers of macro.
Chris House:
[T]he parameter estimates retain their usefulness even if the model is rejected. (emphasis mine)
Noah:
"I put some parts of this in bold because I think these parts are wrong. If your model is a bad model, then it's very hard to know when and where your estimated parameter values will be useful."
Absolutely. House's claim is truly bizarre.
If your model is wrong, there will be no mapping from any parameter in the model to any real-world entity, and the parameter values will be pure gibberish. In fact, the model will not even have a domain of applicability. Things get even worse when those model parameters are given names that have meanings in the real world, and too many people, including far too many economists, suffer from the delusion that the naming itself somehow magically creates a mapping from the variable in the model to real-world entities with the same name.
Chris House outlines the scientific method... which is excellent; so few people teach it these days. I'll quote the most important step:
4. Repeat
Well, OK, maybe that isn't the most important step, because as Chris rightly points out, all the steps are important. However, in macroeconomics you don't see too many people going back and repeating the experiment to see if they got the same result. Repetition is the thing that physicists, chemists, and engineers do and macroeconomists don't - and the fact that it so often does not get done is exactly what makes that step important.
What's all this talk about mongooses?
http://i.imgur.com/smFP9cQ.jpg
Another interesting post.
Economists often debate the relationship between economics and science. While these debates are interesting to an outsider, they are also frustrating. Here are a few points from a different perspective.
I have no doubt that economics could be studied using a scientific approach. However, it’s not clear whether it is. Many economic theories are difficult to prove true or false. Two questions arise from this.
First, what status should we give to economic theories which have not been proven true or false, particularly when these theories have implications for policy advice? Economists appear to assume that their theories are true until they are proven false. That puts the onus on the rest of society (i.e. non-economists) to work out for ourselves whether the theories are true or false in order to protect ourselves from the implications of the policy advice. That’s too difficult and, whatever else it is, it’s not science.
Second, what criteria do economists use to reject a theory as false? It’s not clear to the rest of us. The absence of such criteria is a matter of ethics. Paul Krugman, amongst others, makes this point regularly from inside the profession. However, I don’t think I have ever read any thoughts from Krugman or other economists on how these ethical concerns might be addressed.
A separate issue relates to the nature of economic science. When economists discuss economics as a science, they almost always draw analogies with physics and astronomy. However, economics is nothing like physics and astronomy. It is more like evolution, biology and medicine, with further analogies to meteorology and seismology in the area of forecasting, and analogies with engineering when policy recommendations are implemented.
Charles Darwin is one of the most important scientists in history. All he really did was to wander about making detailed notes of the diversity of nature and to ask himself how this diversity could have come about. He didn’t develop a mathematical model of his findings. He didn’t make any predictions other than to say that nature would continue to evolve. As for the analogy with medicine, I seem to remember that Keynes once said that economists should be more like dentists, i.e. solve practical problems rather than develop ideologies or try to predict the future of a complex and evolving system.
Many of the least credible aspects of economics arise from the pursuit of the physics analogy rather than the evolution/medicine analogy. Darwin revelled in diversity – economists assume that everything is the same as everything else, e.g. representative agents. Medical doctors study all of the organs of the body – economists make sweeping assumptions, e.g. banks don’t matter. Worse, economists often appear to look down on people who do study how banks and businesses actually work. Medical doctors have a code of ethics – economists don’t. Engineers focus on the risks associated with their designs, and on the limits of their designs. They ask themselves how their designs will work in practice – not just in theory. Again, economists come off badly in this comparison.
I can't resist the heliocentric bait. Some relevant points:
1) From Copernicus to Kepler, thousands of person-years of work were not put into heliocentric models. In fact, almost no one read De Revolutionibus... (search amazon.com for "the book that nobody read": http://amzn.to/MlgRer )
The analogy is total nonsense.
2) It is absolutely not at all true that after Copernicus and before Kepler, geocentric models outperformed heliocentric models. There is no basis for this claim.
3) It is absolutely not true that Ptolemaic models were fiddled until they fit the data. Everyone thinks they know this happened, but the claim is not based on any primary source (not a book, a pamphlet, notes, a personal letter, or a diary by a Ptolemaic astronomer). Nor do any of the extremely numerous secondary sources that contain the assertion cite a primary source (which might be lost by now, but no one has ever claimed to have seen one).
4) In fact, in Copernicus's time astronomers used Ptolemy's model -- the model written down over a millennium earlier. There was a modification -- the transition from the Julian to the Gregorian calendar, which is a new estimate of a parameter. This claim absolutely *is* based on a primary source (notes to self). The astronomer was this guy named Copernicus.
Kepler my ass. Ptolemy had a model which yielded remarkably accurate predictions centuries and centuries out of sample. The fact that a model which fit huge amounts of data remarkably well now seems to be fundamentally wrong does not imply that models which don't fit data well shall in the future be seen to be fundamentally right.
In any case, the argument based on the case of Kepler is, in fact, based on myths, not history. False claims of fact.
I might add I read it long ago from Mankiw (who was discussing the rational expectations revolution long before new Keynesian DSGE). Oh and recently from Summers (that huge fan of macro orthodoxy).
I wrote about this at even more boring length (googling I found that our host linked to me -- thanks).
http://rjwaldmann.blogspot.it/2011/04/is-it-true-that-ptolomaic-models-gave.html
http://rjwaldmann.blogspot.it/2011/04/more-on-ptolemy-and-copernicus-it-is.html
Even when you do fully scientific-method-approved experiments of the sort impossible in economics, you can still be fooling yourself. E.g., Michio Kaku's flea story:
A scientist once trained a flea to jump whenever he rang a bell. Using a microscope, he then anesthetized one of the flea's legs and rang the bell again. The flea still jumped.
The scientist then anesthetized another leg and rang the bell. The flea still jumped.
Eventually, the scientist anesthetized more and more legs, each time ringing the bell, and each time recording that the flea jumped.
Finally, the flea had only one leg left. When the scientist anesthetized the last leg and rang the bell, he found to his surprise that the flea no longer jumped.
Then the scientist solemnly declared his conclusion, based on irrefutable scientific data: Fleas hear through their legs!
Kepler was able to formulate his laws because of the painstaking accumulation of data by Tycho Brahe. We have too many economists who fancy themselves as Kepler and not enough willing to play the role of Brahe.
On the matter of Copernicus, the latest Scientific American has a fascinating article on how the original Copernicus model failed to perform better than Tycho Brahe's model, which involved a combination of geocentrism and heliocentrism. By the time Newton got going, more followed Copernicus than Brahe, but in fact it was not until 1830 that the final definitive empirical evidence established the superiority of Copernican heliocentrism over Tychonic combined centrism.
And while we are at it, what does the Giant Flying Mongoose have over the Giant Flying Spaghetti Monster? After all, those people putting up monuments worshipping the latter in public parks are economists, even if they are a bit peripheral... :-).
Barkley Rosser
Many econometric papers that attempt to prove or state causality do not really have an economic THEORY per se behind them...having a model that of course has certain assumptions does not mean the results have a theory. And no, you can't create theories ad hoc unless you have certain causal assumptions behind them. So he was exactly right.
Noah:
"...the aliens still found the nonsense theories very cool and very a priori convincing, and they kept at it, finding "puzzles", estimating parameter values, making slightly different nonsense models, etc., in a neverending cycle of brilliant non-discovery."
At least when physicists working in string theory pretend not to be theoretical mathematicians, they don't get the ability to make policy recommendations based on their work.
I disagree with the idea that there's no problem with generating theories inductively as you describe. What you get when you attempt to observe without any acknowledged theory isn't an unbiased view of the world but observations guided by whatever folk theories or other preconceptions you have.
Sure, but that's equally the case when you start deductively (i.e. start from theory). What matters isn't where you start from, it's where you end up. The important part is the alternating iteration of observation and explanation. Eventually, if all goes well, you converge to the right theory.
If you're not explicit about where you're starting from or you assume you're just starting from the data, then it's a lot harder to back up and see where you went wrong when you run into something that contradicts what your theory suggests. Pretending that you're only starting with data just makes that more difficult.
If the scientific method can't converge any prior to the truth, then it's one of those cases where there's just insufficient data, and backing up and finding a better starting point won't help.
It's also possible to just be interpreting the data incorrectly because of theoretical preconceptions. More data could make that more clear but it's not absolutely necessary.
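A toy illustration of the convergence being debated in this thread, assuming the best case (a well-specified model): two observers with opposite priors about a coin's bias see the same data and end up with nearly the same posterior. Misspecification, the worry raised above, is what breaks this.

```python
# Toy Bayesian updating sketch: conjugate Beta priors on a coin's bias,
# updated on the same shared data. All numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
true_p = 0.7
flips = rng.random(500) < true_p                 # 500 Bernoulli(0.7) draws
heads, tails = int(flips.sum()), int((~flips).sum())

# Beta(a, b) priors: a skeptic (mass near 0.2) and a believer (mass near 0.8).
for a, b in [(2, 8), (8, 2)]:
    post_mean = (a + heads) / (a + b + heads + tails)
    print(f"prior Beta({a},{b}) -> posterior mean {post_mean:.3f}")
# Both posterior means land near the true 0.7 despite opposite starting points.
```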
DeleteBrlliant illustration in the end!
- A PhD student