Saturday, August 29, 2015

The macro/micro validity tradeoff


Michael Lind wrote an article recently suggesting that universities abolish the social sciences. He unfairly credits me with the term "mathiness", which of course is Paul Romer's thing. But anyway, I tweeted the article (though I disagree with it pretty strongly), and that provoked an interesting discussion with Ryan Decker.

When economists defend the use of mathematical modeling, they often argue - as Ryan does - that mathematical modeling is good because it makes you lay out your assumptions clearly. If you lay out your assumptions clearly, you can think about how plausible they are (or aren't). But if you hide your assumptions behind a fog of imprecise English words, you can't pin down the assumptions and therefore you can't evaluate their plausibility.

True enough. But here's another thing I've noticed. Many economists insist that the realism of their assumptions is not important - the only important thing is that at the end of the day, the model fits the data of whatever phenomenon it's supposed to be modeling. This is called an "as if" model. For example, maybe individuals don't have rational expectations, but if the economy behaves as if they do, then it's OK to use a rational expectations model.

So I realized that there's a fundamental tradeoff here. The more you insist on fitting the micro data (plausibility), the less you will be able to fit the macro data ("as if" validity). I tried to write about this earlier, but I think this is a cleaner way of putting it: There is a tradeoff between macro validity and micro validity.

How severe is the tradeoff? It depends. For example, in physical chemistry, there's barely any tradeoff at all. If you use more precise quantum mechanics to model a molecule (micro validity), it will only improve your modeling of chemical reactions involving that molecule (macro validity). That's because, as a positivist might say, quantum mechanics really is the thing that is making the chemical reactions happen.

In econ, the tradeoff is often far more severe. For example, Smets-Wouters type macro models fit some aggregate time-series really well, but they rely on a bunch of pretty dodgy assumptions to do it. Another example is the micro/macro conflict over the Frisch elasticity of labor supply.
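
As a toy illustration of that last conflict (ballpark numbers assumed for illustration, not actual estimates): micro studies tend to report a Frisch elasticity well below one, while representative-agent business cycle calibrations typically need a value several times larger to generate realistic swings in aggregate hours. A minimal sketch:

```python
import numpy as np

# Toy labor-supply rule (assumed for illustration):
#   log(hours_t) = frisch * log(wage_t) + noise_t
rng = np.random.default_rng(0)
T = 500
log_wage = 0.02 * rng.standard_normal(T)        # ~2% wage fluctuations

def hours_volatility(frisch, noise_sd=0.005):
    """Standard deviation of log hours implied by a given Frisch elasticity."""
    log_hours = frisch * log_wage + noise_sd * rng.standard_normal(T)
    return log_hours.std()

micro_frisch = 0.5   # ballpark of what micro studies tend to report
macro_frisch = 2.5   # ballpark of what RBC-style calibrations tend to need

print("hours volatility, micro-estimated elasticity:", round(hours_volatility(micro_frisch), 4))
print("hours volatility, macro-calibrated elasticity:", round(hours_volatility(macro_frisch), 4))
# The micro-valid parameter produces far smaller hours fluctuations than the
# macro-fitted one: with this setup you can match the micro evidence or the
# aggregate volatility, but not both with a single value.
```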

Why is the macro/micro validity tradeoff often so severe in econ? I think this happens when an entire theoretical framework is weak - i.e., when there are basic background assumptions that people don't question or tinker with, that are messing up the models.

For example, suppose our basic model of markets is that prices and quantities are set based purely on norms. People charge - and pay - what their conscience tells them they ought to, and people consume - and produce - the amount of stuff that people think they ought to, in the moral sense. 

Now suppose we want to explain the price and quantity consumed of strawberries. Microeconomists measure people's norms about how much strawberries ought to cost, and how many strawberries people ought to eat. They do surveys, they do experiments, they look for quasi-experimental shifts that might be expected to create shifts in these norms. They get estimates for price and quantity norms. But they can't match the actual prices and quantities of strawberries. Not only that, they can't match other macro facts, like the covariance of strawberry prices with weather in strawberry-growing regions. (A few microeconomists even whisper about discarding the idea of norm-driven prices, but these heretics are harshly ridiculed on blogs and around the drink table at AEA meetings.)

So the macroeconomists take a crack at it. They make up a class of highly mathematical models that involve a lot of complicated odd-sounding mechanisms for the creation of strawberry-related norms. These assumptions don't look plausible at all, and in fact we know that some of them aren't realistic - for example, the macro people assume that weather creates new norms that then spread from person to person, which is something people have never actually observed happening. But anyway, after making these wacky, tortured models, the macro people manage to fit the facts - their models fit the observed patterns of strawberry prices and strawberry consumption, and other facts like the dependence on weather.

Now you get to choose. You can accept the macro models, with all of their weird assumptions, and say "The economy works as if norms spread from the weather", etc. etc. Or you can believe the micro evidence, and argue that the macro people are using implausible assumptions, and frame the facts as "puzzles" - the "strawberry weather premium puzzle" and so on. You have a tradeoff between valuing macro validity and valuing micro validity. 
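
To make the thought experiment concrete, here is a minimal sketch (every number and mechanism below is made up for illustration). The data come from a hidden supply-and-demand process in which weather shifts supply; the "micro-valid" norms model plugs in surveyed norm values and misses the price-weather covariance; the "macro-valid" norms-contagion model fits free parameters to the aggregate data and matches it "as if" the contagion mechanism were real:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden "true" world (unknown to both camps): supply and demand, where
# bad weather shifts supply in, raising the price of strawberries.
T = 200
weather = rng.standard_normal(T)                  # +1 = good growing weather
demand_shock = 0.2 * rng.standard_normal(T)
price = 2.0 - 0.8 * weather + demand_shock

# Micro-valid norms model: surveys say the "fair" price is about 2.0,
# regardless of weather, so the prediction is a constant.
pred_price_micro = np.full(T, 2.0)

# Macro-valid norms-contagion model: weather is assumed to spawn new norms
# that spread, with free parameters fitted to the aggregate price data.
X = np.column_stack([np.ones(T), weather])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
pred_price_macro = X @ coef

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

print("price-weather correlation in the data:  ", round(corr(price, weather), 2))
print("  ...implied by the micro-valid model:   0.0 (constant prediction)")
print("  ...implied by the macro-fitted model: ", round(corr(pred_price_macro, weather), 2))
# The fitted model matches the weather covariance "as if" norm contagion were
# real; the micro-calibrated one is plausible agent by agent but misses the
# aggregate facts. Neither one contains the true supply-and-demand mechanism.
```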

But the real reason you have this tradeoff is that you have big huge unchallenged assumptions in the background governing your entire model-making process. By focusing on norms you ignore production costs, consumption utility, etc. You can tinker with the silly curve-fitting assumptions in the macro model all you like, but it won't do you any good, because you're using the wrong kind of model in the first place. 

So when we see this kind of tradeoff popping up a lot, I think it's a sign that there are some big deep problems with the modeling framework. 

What kind of big deep problems might there be in business cycle models? Well, people or firms might not have rational expectations. They might not act as price-takers. They might not be very forward-looking. Norms might actually matter a lot. Their preferences might be something very weird that no one has thought of yet. Or several of the above might be true.

But anyway, until we figure out what the heck is, as a positivist might say, really going on in economies, we're going to have to choose between having plausible assumptions and having models that work "as if" they're true.

28 comments:

  1. There are other ways of looking at it. Suppose some economic policy is suggested to improve some specific industry or sector of the economy. Suppose after some detailed analysis of the proposal, economists find that such a policy would necessarily cause the industry to collapse if people had rational expectations. Now imagine that they also find that if instead you model people as not having rational expectations, but some non-rational style of expectations - meaning they are consistently fooled or duped into thinking a certain way - THEN the policy is successful. Would you recommend such a policy?

    I wouldn't. Perhaps in the short term agents will behave in this way, but in the long term can you really be confident that people would not figure out how the system really works? If your policy is only sustainable so long as people do not form rational expectations, is it really a wise policy?

    Evolutionary economics is the way to go; it can capture the short-term dynamics that explain the short-term deviations.

  2. Anonymous 9:43 PM

    You should not write about stuff you have no clue about. Or at least study before doing it. Whenever you write about methodology or statistics, it just seems like a kid talking.

    Replies
    1. you sHould not WrItE abOUt StUFF You hAVe No cLUe AbOUt. Or aT lEASt StUdY BEForE DoiNg it WheneveR YoU wrItE abouT MetHODOlOgy oR STAtiStiCs iT jUsT seeMS lIKE a kId tALkinG

  3. Anonymous 9:55 PM

    "If you lay out your assumptions clearly, you can think about how plausible they are (or aren't)...........................True enough."

    You have to be kidding, Noah. It may cause modellers to think about how plausible their assumptions are - however, it doesn't have them discard aforesaid assumptions, going by the plenitude of numbskull models that have been published.

    Henry.

  4. Anonymous 10:48 PM

    When the raison d'etre of the model is to provide justification for your favored policies, there is no tradeoff. It's all good, if not great. Like-minded colleagues will go with the flow, and everyone else - well, they're the enemy, right?

    Don't make it more complicated by trying to explain it using reason.

    Occam's Razor works well here.

  5. I'm not so sure the tradeoff is really between macro and micro validity as much as it is between solvability and micro validity. Good macro models don't just fit facts; they also tell stories and present mechanisms. So a macro model often has some realistic micro part at its core. But, when you write a macro model to formalize a particular mechanism, there are a million micro choices you must make. Do I assume agents have heterogeneous CRRA parameters? heterogeneous time-discount factors? etc. We may have some sense from micro studies of what would be most accurate in each case, but we simply cannot solve a model with thirty different types of heterogeneity. In many macro models, adding more micro realism might improve the macro fit, but it's just not feasible. I don't believe there exists some magical micro framework that suddenly allows us to write down macro models that fit micro and macro data perfectly.
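
A back-of-the-envelope version of that solvability point (grid sizes assumed purely for illustration):

```python
# Curse-of-dimensionality arithmetic: even a coarse 5-point grid per
# dimension of household heterogeneity blows up the number of agent
# types you would have to track when solving the model.
points_per_dim = 5
for n_dims in (1, 3, 5, 10, 30):
    print(f"{n_dims:>2} heterogeneity dimensions -> {points_per_dim ** n_dims:,} agent types")
# 30 dimensions gives roughly 9.3e20 types - far beyond anything solvable -
# which is why micro realism gets rationed even when we suspect it matters.
```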

  6. "Smets-Wouters type macro models fit some aggregate time-series really well, but they rely on a bunch of pretty dodgy assumptions to do it." Interesting statement what are the dodgy assumptions? RE? The thing that the I keep being surprised about from the learning literature is how quickly you typically converge to RE.
    And what does it mean it fits the tine series really well? The methodology is that you assume the model is true and back out the shocks to exactly match the data. You may argue that this methodology is unscientific. It is designed to judge one DSGE model to another, but goodness of fit is by construction is not an issue.
    The longer I have been in this profession the more I am convinced I cannot judge a priori what a reasonable versus an unreasonable assumption is. I am definitely an as if guy with assumptions. But there are 2 caveats. I am a big believer in as if when it comes to positive statements. But matching the data does not mean the model is useful for policy in the sense of welfare losses. The second you can worry that DSGE models do not really try to match the data and the validity of assumptions becomes more important.
    Ultimately data without models I find hollow. I am sometimes skeptical of models, but without having a model to explain the data I really don't think much has been learned. Without a model we know something appears to have been true in the past. But we don't know why it was true then or whether it will be true tomorrow.
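
A minimal sketch of that "back out the shocks" point, with a toy one-equation model standing in for an actual DSGE (everything here is assumed for illustration): whatever parameter value you pick, the implied shocks make the model reproduce the observed series exactly, so in-sample fit cannot discriminate between parameterizations.

```python
import numpy as np

rng = np.random.default_rng(1)
data = np.cumsum(rng.standard_normal(50))   # any observed aggregate series

# Toy "model": y_t = a * y_{t-1} + shock_t.  For any assumed value of a,
# back out the shocks that make the model match the data period by period.
for a in (0.2, 0.5, 0.9):
    shocks = data[1:] - a * data[:-1]       # implied "structural" shocks
    fitted = np.empty_like(data)
    fitted[0] = data[0]
    for t in range(1, len(data)):
        fitted[t] = a * fitted[t - 1] + shocks[t - 1]
    print(f"a = {a}: max |fitted - data| = {np.max(np.abs(fitted - data)):.1e}")
# Every value of a fits the observed series essentially exactly, so the fit
# itself says nothing about which parameterization (or model) is right.
```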

    Replies
    1. Here's a partial list of weird assumptions in a standard macro model: http://noahpinionblog.blogspot.com/2013/05/what-can-you-do-with-dsge-model.html

  7. Most macro models do not make the price-taking assumption. Among other things...

    To add: you probably want to go after the way that price stickiness is modeled.

    Also, I doubt putting in "weird preferences" is gonna help much. Those models are already overly complicated. Usually you can't even solve them, only calibrate them. Going with even the slightest "weirdness", like non-separability or non-CES, makes a complete mess of it. You can't even meaningfully calibrate it. In a way, if the results of your model rely on super-funky preferences, they're probably bunk.

    So it actually needs to go simpler. Roger Farmer's work is sort of like that (his models can be complicated, but the basic insights can be expressed in a simple version).

  9. Anonymous 10:26 AM

    Isn't this the essence of Debreu-Mantel-Sonnenschein? That you can model a whole bunch of different stuff at the micro level and still get the same result at the macro level?

    Replies
    1. DMS is more about the possibility of multiple equilibria, so in a way that's an even more basic problem than the ones outlined here.

  10. Anonymous 10:32 AM

    A useful distinction can be made based on the goals of a model. We can differentiate between explanatory models and predictive models. If I want to predict something, it doesn't really matter what I put in the model, whether it is realistic or not, etc., so long as it ends up matching the data and yields accurate predictions. For example, I can accurately predict the height of an unseen twin by treating it as the effect of the height of the known twin even though there is actually no direct causal connection. It doesn't matter that the assumptions are wrong or even silly; the model does what it is supposed to do. On the other hand, we often build models in the hopes of helping us understand the mechanisms that underlie some phenomenon. In this case, including aspects that are not believed to be true is deeply problematic. In sum, your invalid macro models may be good for prediction, but they cannot help us understand the underlying mechanisms of the macroeconomy.
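
A minimal sketch of the twin example (all numbers assumed): a "model" that treats one twin's height as the cause of the other's predicts well, even though the real driver is a shared factor the model never mentions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hidden truth: both twins' heights come from a shared factor plus noise;
# neither twin's height causes the other's.
n = 10_000
shared = rng.normal(170, 7, size=n)          # common genes/environment (cm)
twin_a = shared + rng.normal(0, 2, size=n)
twin_b = shared + rng.normal(0, 2, size=n)

# "Wrong" but predictive model: treat twin A's height as the input that
# determines twin B's height, and fit a line.
slope, intercept = np.polyfit(twin_a, twin_b, 1)
pred_b = intercept + slope * twin_a
rmse = float(np.sqrt(np.mean((pred_b - twin_b) ** 2)))

print(f"prediction error (RMSE): {rmse:.1f} cm, versus ~7.3 cm from guessing the mean")
# Accurate predictions, causally empty: fine if prediction is the goal,
# useless if you want to know what actually determines height.
```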

    Replies
    1. I think this might be a way of saying the same thing, actually.

  11. "How to make a film," by a guy who watches films.

    Replies
    1. Doesn't that describe every film critic in existence?

    2. "Philosophy of science is about as useful to scientists as ornithology is to birds" -- Richard Feynman.

      What he missed though was that birds are precluded from using ornithology because their brains are too tiny!*

      (*I have to give credit where it's due)

  12. rayward 12:53 PM

    Why do people buy Chrysler cars? Why do people live in Cleveland? Why do people buy suits that are too small and make them look like Pee-Wee Herman? Human behavior is predictably unpredictable. If the path to wealth is to buy low and sell high, why do so many buy high and sell low? I frequently hear investors say they buy Amazon stock because it's safe. What? I also hear investors say that they'd be wealthy if only they had bought lots of Apple stock. I hate roller coasters because I'm afraid of heights and I'm afraid the descent won't end well. Economists were once ridiculed for being two-handed. Now they are ridiculed for being one-handed. It's complicated. Pity the poor soul who chooses to be a two-handed economist. He'd be right, but he wouldn't have many friends. I'd recommend that he buy better-fitting suits and let Pee-Wee be Pee-Wee. He'll look better for it even if he has no friends.

  13. Anonymous 3:24 PM

    Congratulations! You've just articulated what other social scientists have been saying about economics (albeit in different words, but same general logic) for decades now!

  14. Addendum to "Anonymous 3:24pm" in reply to Noah Smith (8/29/2015) on the micro/macro validity trade-off:

    And, I would add, Noah is also articulating here what **many economists** have been saying about macroeconomic theory for decades as well.

    For example, take a look at the "basic issues" that have been listed at the top of the following two sites for a number of years:

    ACE Research Area: Agent-Based Macroeconomics
    http://www2.econ.iastate.edu/tesfatsi/amulmark.htm

    Verification and Empirical Validation of Agent-Based Models
    http://www2.econ.iastate.edu/tesfatsi/EmpValid.htm

    P.S. Why do so many bloggers hide behind "anonymous"?

  15. There's no trade-off. Either the micro description is more accurate and thus better, or less accurate and thus worse.

    If a model built of inaccuracies seems to function accurately, that's probably because it has only been tested for a cherry-picked time period. The other, much rarer possibilities are that all the inaccuracies are unimportant, or that the important ones reliably balance each other out.

  16. "...mathematical modeling is good because it makes you lay our your assumptions clearly."

    I suggest you read Philip Mirowski if you suffer under the illusion that some such statement is a credible description of economics.

  17. Metatone 8:23 AM

    I think the crucial complaint is not the use of weird assumptions in complicated prediction models. The "simplifying assumption" is with us in models in many fields and is largely uncontroversial as a means of trading a bit of accuracy for tractability.

    The real problem is that the *assumptions* (even when weird) are all too frequently used by economists, many of them famous and respected, as the basis for normative prescriptions about how our society should be organised. That's why it matters if the assumptions are plainly false - it's not just about the models.

    Of course, if the models have big and obvious blind spots, people will worry about the weird assumptions too. And that's the other problem in this whole setup. In other fields, where uncertainty lives in the models, you tend to use a bunch of models and try to find useful ways to distinguish between conflicting possibilities. And many economists in central banks and the like do some of this. Unfortunately, the loudest voices in the economics academy - those with the greatest influence over tenure and publishing, and often the most input into the mainstream (WSJ/FT/thinktank/politician) policy nexus - do so much less often.

  18. Anonymous 12:12 PM

    The situation you described is symptomatic of the need for a renormalization group analysis.

    So parameterize your model in some fashion. Then, symbolically do the following:

    fit the model on the micro data, and
    fit the model on some coarsening of the micro data.

    The parameter values you find are different, and this gives rise to a renormalization group flow.

    Analyzing this flow lets you figure out which parts of your model are relevant and which parts are irrelevant, as well as the phase transitions and exponents around those critical points. This is the information that is somewhat model-independent.
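
A minimal sketch of the "fit the same model at several coarsenings and watch the parameters move" idea, with an assumed AR(1) toy standing in for a real model: the fitted persistence parameter flows as the micro data are block-averaged.

```python
import numpy as np

rng = np.random.default_rng(3)

# High-frequency "micro" series with simple AR(1) dynamics.
T, rho = 200_000, 0.98
shocks = rng.standard_normal(T)
x = np.empty(T)
x[0] = shocks[0]
for t in range(1, T):
    x[t] = rho * x[t - 1] + shocks[t]

def ar1_estimate(series):
    """OLS estimate of rho in y_t = rho * y_{t-1} + e_t."""
    y, lag = series[1:], series[:-1]
    y, lag = y - y.mean(), lag - lag.mean()
    return float(lag @ y / (lag @ lag))

# Refit the same one-parameter model after coarsening the data
# (block-averaging over k periods) and watch the parameter move.
for k in (1, 10, 50, 250):
    coarse = x[: T - T % k].reshape(-1, k).mean(axis=1)
    print(f"coarsening k = {k:>3}: fitted persistence = {ar1_estimate(coarse):.3f}")
# The "same" parameter takes different values at different scales; how it
# flows with the coarsening, and which features survive, is the part that
# is (somewhat) model-independent.
```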

  19. "If you use more precise quantum mechanics to model a molecule (micro validity), it will only improve your modeling of chemical reactions involving that molecule (macro validity)."

    That may be true, but often scale factors start to dominate after a while. In terms of micro vs macro perhaps it's possible that scale comes into play. How does quantum mechanics help you when building a bridge? Or for that matter, how does the internal structure of the proton help you with chemistry? Sometimes the micro properties don't really carry over to the macro (and vice versa). Gas molecules don't have pressure or temperature, and particle physics doesn't generally help much with thermodynamics.

    Of course, I'm just being a modern jackass by bullshitting... I program FPGAs for a living and develop signal processing algorithms. So maybe I got some of that wrong, but I'm pretty sure at least SOME of it is correct. (I can tell you for certain that I have not yet had occasion to turn to quantum theory, even though the parts I program rely on QM to function.)

  20. Actually, some philosophy of science would be very useful here.

    Theories have various virtues. Simplicity, elegance, fruitfulness for further theorizing, mathematical tractability. Maybe economic models have these in spades. I don’t know, I’m not an economist, I’m a philosopher.

    But one big virtue of theories is that they can predict observations. You have some starting conditions, you apply a theory, and you predict your observations. If that’s what you observe, it’s a plus for your theory. If it’s not what you observe, then you have falsified your theory (Popper) or at least you have a strike against it (Quine). This is what makes the theory empirical.

    What Noah is saying is that if we make the starting conditions (micro assumptions) more plausible and accurate, then apply the theory, we get WORSE macro predictions. That just means the theory in question is a failure at the main virtue of a theory, which is predicting observations. Obviously, if I am allowed to assume things that are not true, I can make nearly any theory fit nearly any data.

    So yes, “big deep problems with the modeling framework” sounds pretty accurate.

    Replies
    1. You saw my Feynman quote above? And the response to it (from a philosopher of physics actually)? You might like this (I did).

  21. I think you are being unfair here. It's not, and never has been, the goal of mathematical precisification to check that the assumptions are literally true of the world.

    I mean, consider an undoubtedly successful mathematical model: that of Newtonian physics (with the kind of standard assumptions about frictionless planes made in the appropriate places). Physics has benefited hugely from formalizing the less organized collection of methods and calculations that came before. Going from Galileo's observations of the duration of actual falls to the model of objects falling with constant acceleration in a vacuum was a huge leap forward, even though it consisted primarily in pretending air didn't exist.

    Also, it was precisely the work done on formalizing and transforming Newtonian mechanics into a variational form that paved the highway we used to understand quantum mechanics. Yet Newtonian mechanics almost makes a point of representing the world in exactly the way quantum mechanics tells us it can't work (definite position/momentum against an immutable, observer-independent backdrop of space and time). Newtonian mechanics can't even be said to approximately describe the truths we have learned in quantum mechanics, as QM's linearity ensures that there are plentiful macroscopic superpositions over large distances.

    Nothing in looking at the assumptions of successful models and comparing them to empirical observations or the underlying truth lets us discern their value. It's the choice of assumptions that strip away complexity we have learned doesn't substantially affect predictions (in a broad swath of contexts) that matters.

    You are right that something is rotten in the use of models in economics. The benefit of using simple, precisely stated mathematical models primarily arises from our ability to accurately learn when various models are likely to yield reasonable results. Used correctly, the most important thing we learn from some macro model is what range of inputs and what kind of additional assumptions/machinery render its predictions inaccurate. Not only is that the kind of information that helps us choose the right tool to make predictions, it points the way to modifications that can render the model more accurate and powerful.

    Unfortunately, it seems the incentives in economic academia disfavor this kind of learning. A good sense of the limits of a model starts as a vague sense that is virtually impossible to publish (only in light of future improvements in the model can we offer formal results characterizing the circumstances in which a model will be accurate). Between the motivated desire to wield tools that support your preferred outcomes and the temptation to use existing models as black boxes to describe more and more remote circumstances, it's not too surprising that the problems you mention arise.

    It's not that the macro models should use assumptions that are valid in terms of micro. Rather, it's that the macro models need to use assumptions that we have good reason to believe will provide an accurate approximation over a substantial range of circumstances. That confidence should come from both theoretical manipulations informing us of what kinds of deviations from reality will have a substantial effect and empirical results verifying the model's accuracy in a range of circumstances.

    Unfortunately, macro models seem sufficiently complicated (and applicable in a narrow enough set of circumstances) that verification against reality doesn't provide sufficient pressure to overcome the unfortunate incentives to use these models badly.
