Why are microfoundations useful? The usual answer is that "microfoundations make models immune to the Lucas Critique." The idea is that the rules of individual behavior don't change when policy changes, so basing our models purely on the rules of individual behavior will allow us to predict the effects of government policies. Actually, I'm not sure this really works. For example, most microfounded models rely on utility functions with constant parameters - these are the "tastes" that Bob Lucas and other founders of modern macro believed to be fundamental and unchanging. But I'd be willing to bet that different macro policies can change people's risk aversion. If that's the case, then using microfoundations doesn't really answer the Lucas Critique.
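To see the worry concretely, here is a toy simulation (mine, with every number invented) of a "deep" taste parameter that shifts with the policy regime:

```python
# Toy illustration (all numbers invented): if a "deep" taste parameter
# like risk aversion responds to the policy regime, a model that holds it
# fixed fails the very critique it was built to answer.
import numpy as np

rng = np.random.default_rng(0)

def household_savings_rate(risk_aversion):
    # Stylized behavioral rule: more risk-averse households save more.
    return 0.05 + 0.02 * risk_aversion

def mean_savings_rate(policy_volatility, n=10_000):
    # Hypothetical link: living under volatile policy raises risk aversion.
    risk_aversion = 2.0 + 5.0 * policy_volatility + rng.normal(0, 0.1, n)
    return household_savings_rate(risk_aversion).mean()

calm = mean_savings_rate(policy_volatility=0.0)    # regime used for estimation
stormy = mean_savings_rate(policy_volatility=0.5)  # new policy regime

print(f"savings rate, calm regime:   {calm:.3f}")
print(f"savings rate, stormy regime: {stormy:.3f}")
# A model calibrated to the calm regime, holding risk aversion fixed,
# would predict the calm number under the new regime too -- and be off
# by the whole difference.
```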
A better reason to use microfoundations, in my opinion, is that they probably lead to better models. "Better," of course, means "more useful for predicting the future." If our models predict future aggregate macro variables (GDP, etc.) based solely on the past values of those variables, we'll almost certainly be using less information than is available; if we figure out how economic actors are making their decisions, we will have a lot more information. More information = better model. And there are all kinds of ways to observe and model individual behavior - survey data, lab experiments, etc.
So why would we want to use models that don't have microfoundations? Here is Simon Wren-Lewis' answer:
[S]uppose there is in fact more than one valid microfoundation for a particular aggregate model. In other words, there is not just one, but perhaps a variety of particular worlds which would lead to this set of aggregate macro relationships. (We could use an analogy, and say that these microfoundations were observationally equivalent in aggregate terms.) Furthermore, suppose that more than one of these particular worlds was a reasonable representation of reality. (Among this set of worlds, we cannot claim that one particular model represents the real world and the others do not.) It would seem to me that in this case the aggregate model derived from these different worlds has some utility beyond just one of these microfounded models. It is robust to alternative microfoundations.
In these circumstances, it would seem sensible to go straight to the aggregate model, and ignore microfoundations...

I don't really like this answer. Presumably, there is some set of sets of microfoundations that leads to Aggregate Relationship A and some other set of sets of microfoundations that leads to Aggregate Relationship B. How do you choose which set is better? Well, you could look at survey data and lab experiments to figure out which microfoundations are really in effect. But if you can do that, why do you care about "robustness to alternative microfoundations" in the first place? And if you can't choose which microfoundations are better, why does "robustness to alternatives" matter?
Pretty much any model, in economics or physics or whatever, has a bunch of possible microfoundations that could give rise to it. That fact alone does not make microfoundations less important, since presumably some microfoundations are actually happening, and others aren't!
Here are Paul Krugman's answers to why we might not need microfoundations:
1. Even in microeconomics, we don’t insist on using models built up from maximizing behavior all the time. Exhibit A: supply and demand!...
2. Relatedly, as a practical matter intellectual scratch-pads — approximate version of what we really believe, but stripped down to be tractable — are what one uses for applied economic analysis all the time. If I want to ask what the effects of some shock will be, it rarely makes sense to demand that the analysis always go all the way back to the intertemporal choices of optimizing agents.

Hmm. I think that incorporating microfoundations into a model is different than starting from microfoundations when applying that model.
3. In the hard sciences, when dealing with complex systems people have often used higher-level, aggregative concepts that seem to work empirically long before they have a full derivation of effects from the underlying laws of physics...Why, then, do some economists think that concepts like the IS curve or the multiplier are illegitimate because they aren’t necessarily grounded in optimization from the ground up?

I think this is where the Lucas Critique comes in. The Phillips Curve is the famous example of why aggregate relationships might not be useful without understanding the microfoundations. That doesn't make aggregate-only models useless, but it should make people cautious about using them.
4. And when making such comparisons between economics and physical science, there’s yet another point: what we call “microfoundations” are not like physical laws. Heck, they’re not even true. Maximizing consumers are just a metaphor, possibly useful in making sense of behavior, but possibly not. The metaphors we use for microfoundations have no claim to be regarded as representing a higher order of truth than the ad hoc aggregate metaphors we use in IS-LM or whatever; in fact, we have much more supportive evidence for Keynesian macro than we do for standard micro.

I think that this is the real argument against microfoundations as they are currently used in macro. Basically, Krugman is saying that the "microfoundations" we now use really deserve to have quotes around them, because they actually don't describe individual behavior.
In other words, our current microfoundations are mostly just garbage.
If this is true - and I think that the evidence overwhelmingly says that it is! - it means that our modern "microfounded" macro models are no more useful than aggregate-only models. The logic should be obvious. Using wrong descriptions of how people behave may or may not yield aggregate relationships that really do describe the economy. But the presence of the incorrect microfoundations will not give the aggregate results a leg up over models that simply started with the aggregates.
In other words, if you put garbage in, you may or may not get garbage out, but why bother putting the garbage in in the first place?
(Note: if you started to angrily type out the reply "But all models are wrong!", please refer to my 2nd Principle for Arguing With Economists. You are wrong.)
When I look at the macro models that have been constructed since Lucas first published his critique in the 1970s, I see a whole bunch of microfoundations that would be rejected by any sort of empirical or experimental evidence (on the RBC side as well as the Neo-Keynesian side). In other words, I see a bunch of crappy models of individual human behavior being tossed into macro models. This has basically convinced me that the "microfounded" DSGE models we now use are only occasionally superior to aggregate-only models. Macroeconomists seem to have basically nodded in the direction of the Lucas critique and in the direction of microeconomics as a whole, and then done one of two things: either A) gone right on using aggregate models, while writing down some "microfoundations" to please journal editors, or B) drawn policy recommendations directly from incorrect models of individual behavior.
Brad DeLong puts this rather more pithily:
I now have the most bizarre image in my mind:
A seminar at the Library of Alexandria in 300 A.D., with an astronomer trying to provide micro foundations in the form of calculations of how large their wings must be and how fast their wings must beat for the angels to push the planets on their tracks through the quintessential spheres…

Thus it seems to me that the microfoundations revolution has not really gotten us very far yet. I would be willing, of course, to be convinced otherwise.
So what to do? The answer is clear: macroeconomists should continue using aggregate relationships for now, and try to check these against the best microfoundations available. But in the meantime, recognize that these aggregate models will have severe limitations until microeconomists come up with better explanations of individual behavior. Which, of course, they are working on.
But note that there is also a political danger here. Macroeconomists who desire a certain policy conclusion - for example, that fiscal stimulus never works - may be tempted to continue to use bad microfoundations that support that conclusion, even when microeconomists have found something better. This is something the profession should work to avoid, by actively recognizing that microfoundations that fit the micro data are inherently preferable to those that do not.
Update: Paul Krugman, commenting, has a good point about what kind of predictions we should expect from economic models. Big qualitative predictions ("quantitative easing will cause runaway inflation") are more important than precise quantitative ones ("GDP growth will be 1.7% next quarter"). I agree, of course. When I say models should "predict the future," this is really what I mean.
Update 2: Richard Serlin points out that aggregation is a huge challenge for microfounded models, since complex systems often have chaotic properties. Very true. But that doesn't mean that just observing the aggregate will give you more information! What you need to handle complexity and chaos is a ton of computing power and some agent-based modeling, as is done in weather forecasting. This can provide a very important check on non-agent-based models that make simplifying assumptions in order to aggregate individual agents.
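For readers who want to see what "agent-based" means in practice, here is a minimal sketch (all parameters invented, nothing calibrated to data) in which a GDP-like aggregate is simulated directly from heterogeneous individual rules, with no aggregate consumption function posited anywhere:

```python
# Minimal agent-based sketch (illustrative parameters only): heterogeneous
# consumption rules aggregate into a GDP-like series; the aggregate is an
# output of the simulation, not an assumption.
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_periods = 100_000, 40

income = rng.lognormal(mean=0.0, sigma=0.5, size=n_agents)
mpc = rng.uniform(0.3, 0.95, size=n_agents)  # heterogeneous propensities to consume

gdp = []
for t in range(n_periods):
    common = rng.normal(0.0, 0.02)                   # aggregate shock
    idiosyncratic = rng.normal(0.0, 0.01, n_agents)  # agent-level shocks
    income *= 1.0 + common + idiosyncratic
    gdp.append((mpc * income).sum())                 # aggregate consumption

growth = np.diff(np.log(gdp))
print(f"mean growth {growth.mean():.4f}, volatility {growth.std():.4f}")
```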
Update 3: Peter Dorman is of the opinion that the reason our current microfoundations are crappy is that the entire framework by which microeconomics is now done - equilibrium analysis and optimizing behavior - does not describe reality. I'm not willing to go that far (and besides, what about game theory?), but if he's right, it would certainly strengthen my case substantially.
Update 4: Andrew Gelman and Peter Dorman are basically on the "our current microfoundations suck, and we should get better ones" bandwagon.
Engineers do not use the micro-foundations of fundamental particles when calculating the bending of a beam.
Agent-based systems of hundreds of thousands of interacting agents operating under different preferences, with limited information and subject to appropriate budget constraints, are more likely to get you a realistic model of the economy than trying to model the whole economy with a few perfect stochastic differential equations. Any good model should be robust to errors in the preferences of the individual agents.
Huh? Paragraph does not follow from first sentence.
Indeed, engineers do not always build their very useful and robust models and tools from "micro-foundations" such as particle mechanics or statistical foundations for thermodynamics. Instead they use "macro" models that work very, very well and have real-world applicability. So then why does your paragraph seem to support the notion that better results are achieved from much more complex models?
My modelling text starts by saying that the best model is the simplest one that provides sufficiently accurate and precise results for the task at hand. (Here, "simplest" means "least work".) A robust aggregate model would be better than any microfounded model for applied macro on this criterion.
Robustness means precisely that it *doesn't matter* which set of microfoundations you use: you will get the same aggregated model, obviating the need to do surveys to establish which particular set of microfoundations is closer to "the truth".
Robustness would be a very useful result, unless you really like doing surveys over and over again. I haven't noticed such a predilection in economists - quite the reverse.
Sorry, I rarely drop in on these 'theory' blogs, but a quick question:
Do you guys ever deal with the critiques that indicate that micro and macroeconomics are inherently contradictory? It has always seemed to me intuitively obvious and there are some more 'formalised' critiques out there (SMD theorem etc.). But even intuitively, it seems perfectly obvious: you cannot have a model that goes from the individual up and then have this coincide with a model that goes from Society or The Economy down. They're quite obviously not going to 'meet up'. You either begin from aggregation or you don't.
Take one or the other. But you cannot have your cake and eat it. It seems to be a case of chasing a ghost, really...
In physics there is a search for a grand unified theory. Basically you've got quantum mechanics, which works very well in one domain (very small), and then general relativity that describes another (very large). (In the 'middle' you've got 'simplified' Newtonian mechanics and old electromagnetics that work very, very well for most purposes.)
None of this prevents physics from making discoveries and progressing. Though physicists search for a more complete truth they do not stop advancing within their fields simply because they cannot build their foundations on other principles.
"Note: if you started to angrily type out the reply "But all models are wrong!", please refer to my 2nd Principle for Arguing With Economists. You are wrong."
Also this -- and the linked-to 'proof' -- is a strawman attack. When people who actually study economics claim that all economic models are wrong, they do so for three (or more?) reasons (none of which have to do with point-scoring based on Quantum Physics etc.):
(1) That closed-case economic models cannot actually work because they necessarily include indeterminate variables within them. Since economics -- or at least microeconomics -- is a theory of human action, you always need to posit a model of Man. This may well be philosophically and theoretically invalid. (This is the Austrian argument).
(2) There is a critique currently coming out of an economics department in Greece from ex-game theorists that you cannot logically model a multi-sector economy while at the same time keeping the dimension of time. The critique continues that any model that tries to avoid these two conditions is a priori too unrealistic to be useful in understanding the actual economy.
(3) That modeling is itself largely invalid in economics because the economy has too much institutional heterogeneity (among other things). Thus attempts to model are seen as largely attempts to impose restrictive, non-empirical limits on what one should understand about a functional economy and this blinds the economist from really coming to terms with various institutional and economic changes that are taking place. (Strong case for this to be made in relation to recent evolutions within the banking sector).
So, I wouldn't be so confident (and so condescending) if I were you. There are certain critiques out there that are put forward that have a tremendous amount of merit to them. Which is probably why they're largely ignored by the profession.
I think the Wren-Lewis answer is philosophically a good one, and is standard in many-body physics/"complex systems." There are two halves to the claim: (a) models with slightly different microfoundations can yield wildly different aggregate behavior (e.g., a liquid vs. a gas as you heat slightly past 100 °C), (b) a broad range of microfoundations can yield very similar behavior (all solids are roughly alike). (Both (a) and (b) are generically true for any complex dynamical system; see the notion of "fixed points" etc.) Because any scientific model of human behavior is going to have some amount of uncertainty in it, and because that uncertainty propagates and multiplies as you aggregate behavior, the task of deriving The One True Macro from The One True Micro is presumably not feasible and one has to be a pluralist about models.
(A different way of saying this, I think from Philip Anderson's "More is Different" paper, is that evidently the behavior of aggregates of objects can be described much more precisely than you'd expect if you took the precision of a microscopic description and propagated it upwards.)
This is a fairly standard point but see, e.g., an old post of mine, http://glassbottomblog.blogspot.com/2010/09/emergence-and-limits.html, as well as the Batterman ref. cited there.
Thanks for the link and for putting me in such good company.
ReplyDeletePardon the naive question:
Why not try to derive a microfoundation or two from well-working macro models plus some sense of how present microfoundations are wrong? Then try operating it/them independently to see if they continue to generate reasonable results over time, and then see if there are any unexpected testable implications for macro ... rinse and repeat.
@ Wayne K.
ReplyDelete"Why not try to derive a microfoundation or two from well-working macro models plus some sense of how present microfoundations are wrong?"
That's sort of what the post-Keynesian school does (although I'm reluctant to call what they use 'models' as that term is currently understood in economics). They take aggregation first and then try to build empirics into these aggregations (e.g. how firms actually respond to higher demand etc.).
Whole different approach. And one not very conducive to the 'physics-modeling' that has been so prevalent in the discipline since the turn of the 20th century.
Another important point would be that microfoundations are only constituted relative to some other, "macroscopic" theory. For instance, why stop at individual behavior for macroeconomic microfoundations? Why not delve into the biological or physical roots of that behavior? At some level, we don't know what things the universe is made of (i.e. What are quarks or electrons or strings made of? What are those things made of? etc.). So with microfoundations the buck has to stop somewhere. But where? The whole thing reeks of fatuous cultural construction.
But fuck, I don't know.
"Because any scientific model of human behavior is going to have some amount of uncertainty in it, and because that uncertainty propagates and multiplies as you aggregate behavior, the task of deriving The One True Macro from The One True Micro is presumably not feasible and one has to be a pluralist about models."
Perfectly said. Shouldn't that single sentence be game, set and match on the subject?
Hey Noah,
As someone who has studied physics, do you see any useful analogy to be made between the development of thermodynamics and a (possible) robust theory of macroeconomics? After all, thermodynamics describes its realm of aggregate phenomena incredibly well and was derived before any connection to the underlying microphysics was made: as you know, the derivations of the various thermodynamic relationships from statistical physics do not really rely on the precise form of the individual particles' energy, etc., and so the theory is robust to its "microfoundations".
BTW, any recommendations, books or otherwise, for learning about basic macro for someone with a physics PhD (from your undergrad alma mater, apparently)?
@ Anonymous
Economics has long ripped off thermodynamics, as the eminent historian of economic science Philip Mirowski has shown:
http://www.amazon.com/More-Heat-than-Light-Perspectives/dp/0521426898/ref=sr_1_1?ie=UTF8&qid=1330821901&sr=8-1
It has, in my opinion and in the opinion of Mirowski, been a terrible marriage altogether. This is because where physics began to work with some extraordinary contradictions in the first half of the 20th century, neoclassical economics (which was based on a similar framework) simply suppressed these contradictions.
"For example, most microfounded models rely on utility functions with constant parameters"
Such models assume that tastes are exogenous, that they fall onto the transactors like manna from heaven. Once it is recognized that tastes are learned, such models are not immune to the Lucas critique. A change in regime that causes consumers to change their choices of consumption bundles exposes them to new information about the degree of utility that such alternate bundles provide that they had not been aware of before. This new information is likely to change their preferences. Therefore when the old regime is reinstated, their preferences are likely to be different. In other words, preferences are not immune to changes in regime. Therefore models using them are not immune to the Lucas critique.
The same is true with learning-by-doing production functions.
Re: thermodynamics and economics...
There are a number of techniques and concepts in use in economics that come from thermodynamics (and some people called "econophysicists" who want to draw even more parallels), but I don't really think it works very well, because the economy doesn't work like a gas.
One thing to realize is, aggregate thermodynamic relationships are emergent properties. If we knew of some emergent properties in the macroeconomy, it would be a different story...
@ Noah Smith
Mirowski's argument is that ALL neoclassical economics is implicitly based on thermodynamic principles. The 19th century developers, according to him, copied what the physicists were doing at the time. So, deep within the structure of neoclassical economics you have buried the structure of 19th century energy physics.
I think Mirowski is right. And I think for that reason, among others, neoclassical economics is wrong.
The Lucas Critique, I think, seemed to mean something different, when first presented, than it apparently does now. In the 1970s, many of the young assistant and associate professors in macro at a school like Michigan had come into Economics to work on computer-based simulation models, composed of simultaneous equations. I remember having a Teaching Fellow, who explained that he was working on a four equation model of aggregate auto demand, as a sectoral component of Michigan's big model. Otto Eckstein's Data Resources, Inc. (DRI) was a prime purveyor of the results of such modeling exercises, and there was a market for their work.
Those models did attempt to work directly with aggregate concepts, but they did not work well at all. They ran aground very quickly, and were not just poor predictors of the short-term, but likely to generate absurdities. One fundamental problem was that a linear model could not simulate the non-linear economy; another fundamental problem was that economic behavior is a product of continuous choices through time -- it doesn't just start down a fixed course, like a Newtonian clock; pretty soon the young Allen Sinai at DRI was just making up subjectively plausible scenarios, and dressing them up as computer printouts -- Wizard of Oz style.
It was in the context of the puzzlement and disappointment of academics over the failure of simulation modeling that the Lucas Critique had its full force. Paralleling that, of course, was a certain loss of innocence among the Keynesian stars, who had composed the Kennedy-Johnson Council of Economic Advisors, and the elevation of the Solow growth model.
I suppose it must have seemed promising, at first, and turning econ departments into boot camps for the Lucas "Let's do hard math" program had its advantages in socializing a professional cadre, but, to me on the outside, resurrecting the failed classical model amid endless exercises in intertemporal optimization, just seems like a mind-numbing waste.
What's wrong with Keynes' attempt to create an economics of aggregates isn't that his macro-economic Sky needs a mathematical Atlas to hold it up on His Shoulders. The problem is that no one knows how to relate to the abstract aggregates.
Macroeconomics is about money, not aggregates. You have to study money and banking and finance in all its institutional gore. (And, no I'm not intending to endorse MMT) Money as the universal signalling and score-keeping device is what drives otherwise diverse economic behavior into aggregate regularity.
The policy problem is one of correctly establishing the micro-economic context of macro policy: the micro effects of macro conditions and policy. No one has to hold up the macro sky; you do have to explain the macro "weather", in terms of micro effects (not causes). It isn't macro that needs micro foundations; micro needs a macro sky, a macro meteorology.
Right now, it seems like you have one camp, insisting that the economy is always close to micro perfection, and trying to explain everything aggregate in a way that maintains the illusion of perfection -- even to the point of suggesting that mass, involuntary unemployment is really spontaneous vacation-taking.
And, the other camp is dedicated to the idea of micro-imperfection, in the form of pandemic price-sluggishness, which, in its way can be just as absurd and ignorant.
It wouldn't be so bad, if economists made themselves irrelevant to substantive policy debate by this nonsense, but, instead, they neuter the popular will, by leaving politicians and statesmen and voters with no usable framework to identify conflicts of interest or the consequences of policy.
"Macroeconomics is about money, not aggregates. You have to study money and banking and finance in all its institutional gore. (And, no I'm not intending to endorse MMT) Money as the universal signalling and score-keeping device is what drives otherwise diverse economic behavior into aggregate regularity."
I think you just did ;) But you're absolutely right, of course.
Seriously though. The work has already been done -- and it has been done well.
http://www.amazon.com/Monetary-Economics-Integrated-Approach-Production/dp/0230301843/ref=sr_1_1?ie=UTF8&qid=1330829409&sr=8-1
And most of the 'theory' blogs are tailing behind big time. All the financial blogs etc. (which I write for) are so far ahead in this regard it's not even funny. On them there's stimulating debate, looks at new approaches, arguments about what is valid in the old theories etc.
Then you come on the 'theory' blogs (usually via Krugman) and it's all navel-gazing about freshwater/saltwater and silly old debates that should have never taken place because the classical garbage should have been dropped down the memory hole in 1936.
Sorry for posting so much here today, but I thought I should intervene to some degree because I'm probably not going to come back at any point. The stuff in academia these days smells musty and irrelevant. The 'trail-blazing' argument today is that we should essentially regress to a model that is now over 80 years old (IS-LM) and that was wrong to begin with (its founder rejected it in the late '70s).
Meanwhile, most of the finance community together with leading economic analysts are adopting the Godley/post-Keynesian approach through the sectoral balances of aggregate demand model. Some of them are getting it through MMT (yeah, academia IS going to have to deal with that in the coming years, trust me) while some of them are getting it via Martin Wolf and the Financial Times.
I'm going to stop posting here now because I don't want to come across as 'hogging' space. But seriously, it looks to me like academic economics is going to become largely irrelevant if the most cutting-edge thing they can offer is an 80-year-old model that its author explicitly rejected.
Buck up, guys. Seriously.
Congratulations on the Krugman link. I note that a lot of little predictions can be decisive too.
The risk that people will stick to microfoundations which have been proven not to be approximately true is not theoretical. Macroeconomists have definitely been doing that for decades -- in particular, choosing parameters which make it easier for their model to give the same variances and covariances as macro data rather than those which fit direct data on the variables which are the range and domain of the function parametrised by the parameter.
Importantly, this is not just true of estimates using micro data. For many macro models, the intertemporal elasticity of consumption crudely estimated with macro data should be one. Those models survived crude macro estimates that the parameter is 0.1. The lesson was to not calculate the implied regression coefficient of consumption on r when calculating moments to confront with "stylised facts". This was vastly more dishonest than ignoring what microeconometricians were doing.
I think that after some decades of debate, macroeconomists have agreed that, to be published, macro models must fit this fact, so log utility is acceptable only if there are liquidity-constrained consumers. But note how many top economists claimed that we "know" there are no liquidity constrained consumers.
I don't do scare quotes. I used quotes because I am quoting Robert Lucas without distorting his use of the word by removing necessary context.
Pity for the rich reminds me of a Seinfeld episode where Jerry and Elaine are boarding a plane, and Jerry's ticket is just upgraded to First Class. Rather than give the upgraded ticket to poor Elaine, who has never flown First Class, he keeps it for himself. He reasons that if he sits in coach he will hate it because he knows how much better First Class is and what he'll be missing, but if Elaine stays in coach, she won't miss what she has never known.
And now we're seeing this played out every day from traders: "without my outsized bonus for creating a highly speculative financial product that will blow up in 2 years, my life is ruined. No summer cottage in the Hamptons, send the girls to public schools (yuck) rather than the private schools. I'll have to drive the same three cars (BMW, Mercedes, Bugatti) I did last year. And it's all because those poor people didn't pay their mortgages like they promised."
Economists should never use examples from physics. They always misinterpret the essence of fundamental laws and the role of statistics. In physics, statistics plays a central role and allows quantitative judgment of hypotheses. In economics, statistics is used for data description, not model selection. Otherwise, all economic models would have been rejected.
The example with qualitative predictions is also wrong. Nobody can answer the question: where is the boundary between enough and not enough QE for hyperinflation? Currently, it is not enough. The same logic works for any level which does not fit the qualitative prediction. The problem is also how much QE will destroy the economy before it gets hyperinflation. Let's give $10,000,000 to everyone. Will the economy survive?
This is a kind of Lucas critique - where is the bifurcation point of a given economic system?
Robert:
It seems we are in agreement about most of this stuff.
A big thing is what you referred to a few posts ago. If you're going to start out micro and aggregate up to a very complicated reality, then it's super hard to do without making extremely strong, simplifying, and unrealistic assumptions, and that's where the microfoundations models can be very unrealistic and bad for understanding, predicting, and policy.
This issue is in finance too. One of the biggest, and certainly loudest, critics of microfounded finance models is Robert Haugen. Haugen now runs an investment services firm, but he was previously a professor at UC Irvine, and is #17 on a list of finance's most prolific authors (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1355675).
Haugen's criticism is that the aggregate of very complicated, highly interacted, micro behavior can be better understood if you just observe the behavior of that aggregate, rather than trying to understand it, predict it, and make good policy from modeling micro unit behavior and then aggregating up.
In finance, then, you can understand and model the behavior of financial asset markets just by looking at how those markets behave over time, and creating a model to fit that observed behavior. This will get you a much more accurate, realistic, and useful model, than if you make very simplifying assumptions about individual behavior and interaction so that it's tractable to aggregate up to the market as a whole.
So in other words, a model of aggregates can be much more realistic and accurate in describing the behavior of those aggregates because you aren't forced to make extremely unrealistic simplifying assumptions about micro units in order to make aggregating them tractable.
In Haugen's own words:
Chaos aficionados sometimes use the example of smoke from a cigarette rising from an ashtray. The smoke rises in an orderly and predictable fashion in the first few inches. Then the individual particles, each unique, begin to interact. The interactions become important. Order turns to complexity. Complexity turns to chaotic turbulence... ("The New Finance", 2004, 3rd Edition, page 122)
How then to understand and predict the behavior of an interactive system of traders and their agents?
Not by taking a micro approach, where you focus on the behaviors of individual agents, assume uniformity in their behaviors, and mathematically calculate the collective outcome of these behaviors.
Aggregation will take you nowhere.
Instead take a macro approach. Observe the outcomes of the interaction – market-pricing behaviors. Search for tendencies after the dynamics of the interactions play themselves out.
View, understand, and then predict the behavior of the macro environment, rather than attempting to go from assumptions about micro to predictions about macro... (page 123)
For more on this see: http://richardhserlin.blogspot.com/2009/04/induction-deduction-and-model-is-only.html
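A minimal sketch of the "macro approach" Haugen describes above — synthetic data and invented parameters, purely illustrative — in which the aggregate's own dynamics are fitted directly, with no model of the traders underneath:

```python
# Fit the aggregate series itself with a simple AR(1); no assumptions are
# made about the individual agents generating it. Synthetic data stand in
# for an observed aggregate (e.g., log market returns).
import numpy as np

rng = np.random.default_rng(2)

true_phi = 0.6
x = np.zeros(500)
for t in range(1, len(x)):
    x[t] = true_phi * x[t - 1] + rng.normal(0.0, 1.0)

# Least-squares estimate of persistence from the aggregate alone.
phi_hat = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
forecast = phi_hat * x[-1]
print(f"estimated persistence {phi_hat:.3f}, one-step forecast {forecast:.3f}")
```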
there may be a need for some deep thinking about these issues, but the margin is too narrow anyways...
1. Economics is very different from physics (as you note) in that standard concepts of causality are trickier in economics: if agents anticipate a certain event, their actions will occur before the event even though their actions may be caused by it. I think this is the basic cause of misunderstanding between economists and physicists (and others). Physicists don't understand our concern with microfoundations if they don't understand this simple thing: (some) economic agents look into the future and this makes mechanical models suspicious.
2. There are circumstances where mechanical models work --- and by that I mean models without a forward-looking agent somewhere. Whenever expectations about the future are not terribly important. If the future is perceived as impossibly bleak (end of the world, we're all going to die with the Lehman brothers, etc.), then maybe mechanical models will do just fine, and your IS-LM model probably doesn't need to be terribly sophisticated to get the basics right about the effects of the multiplier --- some rough number for the mpc is probably good enough.
3. Another example where microfoundations may not matter very much is where the aggregation process nullifies it, as it were. For instance, if back in 2004 you were not buying American or Spanish houses, you'd have been wrong for 3 years, because millions of backward-looking buyers were irrationally extrapolating the present into the future. In this example, the correct microfoundations are (probably) 1 forward-looking agent for 9 backward-looking agents.
4. Related examples: the effect of taxes on labor supply and saving. It's so difficult to predict future taxes that it's probably not worth trying; no wonder that Ricardian equivalence has no empirical basis. Who would have thought that an "audacity of hope" president would extend cowboy tax cuts for the rich? This is really a Knight-Keynes uncertainty situation I'm describing here. Your microfoundations would have to be incredibly smart to deal with this problem.
5. Another example, off the top of my head, is where externalities are overwhelmingly strong. Krugman's stuff on the formation of cities is a good example where you don't need forward-looking agents to make a prediction, adaptive expectations do just as well, but you can have them in if you want, it's "robust."
All this doesn't sound as smart as I thought it might ...
I think those who are looking for the appropriate use of/analogy from thermodynamics should read Ian Wright's paper Implicit Microfoundations for Macroeconomics as a matter of urgency. "Implicit Microfoundations" are those generated by examining the constraints on agents rather than their context-free individual behaviour. And sorry Noah, the economy really does work like a gas :)
ReplyDeleteThanks for the mention Noah. I think looking at the behavior of the aggregates and looking at the behavior of the underlying micro units are both potentially very valuable, and can be combined synergistically. I get the impression, though, that some economists won't even consider studying and modeling the aggregates as a whole.
I was also thinking that meteorology (and climatology) was a good example.
Have macroeconomists completely missed complexity theory? The whole field seems to assume that you can model the behaviour of the system by modelling its parts. Why should this be true?
ReplyDeleteRobert: "
Importantly, this is not just true of estimates using micro data. For mant macro models, the intertemporal elasticity of consumption crudely estimated with macro data should be one. Those medels survived crude macro estimates that the parameter is o.1. The lesson was to not calculate the implied regression coefficient of consumption on r when claculating moments to confront with "stylised facts". This was vastly more dishonest than ignoring what microeconmetricians were doing."
Could you please clarify this paragraph? There were enough typos that I couldn't follow it.
"The example with qualitative predictions is also wrong. Nobody can answer the question where is the boundary between enough and not enough QE for hyperinflation? Currently, is not enough. Same logic works for any level which does not fit the qualitative prediction. The problem is also how much of QE will destroy the economy before it gets hyperinflation? Let's give $10,000,000 to everyone. Will the economy survive? "
Please note that Krugman has been addressing fears of inflation (let alone hyperinflation) for a while; you might want to read his blog posts.
Noah, can you please specify whether the problem is with macroeconomics or microeconomics? Because if by garbage you mean constrained optimization, full information, and so on then that same "garbage" is used to derive supply and demand and a whole bunch of other results in Micro theory. So if we accept your reasoning we should discard about 2/3 of what is taught in upper level microeconomics and simply posit aggregate (at the market level) relationships.
Macroeconomists can only use the microfoundation tools that microeconomists provide!
Because if by garbage you mean constrained optimization, full information, and so on then that same "garbage" is used to derive supply and demand and a whole bunch of other results in Micro theory. So if we accept your reasoning we should discard about 2/3 of what is taught in upper level microeconomics and simply posit aggregate (at the market level) relationships.
I am not willing to toss constrained optimization. Peter Dorman seems to be.
My point is that the micro behavior assumed by models like RBC models, etc. is very unrealistic and probably wouldn't be used by any serious microeconomists to describe real consumer behavior.
I also don't like many of the features of RBC models. But it seems to me that the micro behavior assumed is pretty standard in traditional micro. Constrained optimization (of which rational expectations is a form), market clearing, etc. Fortunately a new research effort has begun to incorporate labor and capital market frictions in models with endogenous technological change and study the implications for business cycles. But this would not have been feasible without the development first of search theory in the late 1970s and early 1980s. The assumptions of the first RBC models simply were based on the micro toolbox of the time. So you micro guys should also do some soul-searching (and I say this jokingly)!
ReplyDeleteI was thinking largely the same thing as Dornan, see:
http://richardhserlin.blogspot.com/2012/03/microfoundations-models-square-peg.html
With regards to game theory, you need to keep the same kind of things in mind: a model is only as good as its interpretation. I remember in my first PhD game theory class, the professor was a visiting game theory expert, and I asked him: in these models we assume that the first person does what's best given what the second person is doing (and vice versa), but an optimizing person who doesn't know what the second person is doing may select something very different, and with both optimizing blindly we may end up with something very different than the Nash equilibrium. And even in long-run equilibrium we may not end up at the Nash equilibrium. He replied that game theorists understand this and consider it. How many really do (in public), I'm not sure.
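A quick toy check of the worry described above, using standard Matching Pennies payoffs (illustrative code, not anyone's published model): two players who naively best-respond to each other's last move cycle forever and never reach the game's mixed-strategy Nash equilibrium.

```python
# Matching Pennies: the row player wants actions to match, the column
# player wants them to mismatch. Simultaneous naive best responses cycle.
import numpy as np

payoff_row = np.array([[1, -1],
                       [-1, 1]])  # row player's payoffs

def best_response(own_payoffs, opponent_action):
    # own_payoffs[i, j]: payoff from own action i against opponent action j
    return int(np.argmax(own_payoffs[:, opponent_action]))

a, b = 0, 1  # arbitrary starting actions
history = []
for _ in range(8):
    a, b = (best_response(payoff_row, b),     # row responds to column's last move
            best_response(-payoff_row.T, a))  # column's payoffs are minus row's
    history.append((a, b))

print(history)  # cycles (1,1) -> (1,0) -> (0,0) -> (0,1) -> ... indefinitely,
                # never settling at the 50/50 mixed Nash equilibrium
```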
Ok maybe "largely" is too strong wrt Dorman, let's say, something similar to some of what he said.
ReplyDeleteWhile the thermo-economic anaolgy may not be perfect, from the point of view of the evolution of an academic discipline they are quite analagous.
Microfoundations appear (to a PhD Chemical Engineer) as the economic analog of statistical mechanics and macroeconomics as the economic analog of classical thermodynamics.
When designing a distillation column, a chemical engineer would never explicitly use statistical mechanics. Typically, classical thermodynamics would be used with ideal behavior modified by empirically derived activity coefficients. Likewise, diffusion in the pores of catalyst particles would not be modeled as a random walk. An empirically derived diffusion coefficient would be used instead.
While the connection between stat mech and classical thermo is usually discussed in terms of partition functions, the more useful connection is in using stat mech to estimate the macroscopic (aggregative) activity coefficients or transport parameters (viscosity, thermal conductivity, diffusion coefficient, etc.).
There are some tractable problems where stat mech can yield useful results. For instance, the transport of gases can be estimated fairly well from kinetic theory (low density) or Chapman-Enskog theory (high density) and activity coefficients can be estimated using group-contribution methods such as UNIFAC. However, the temptation to apply these micro-based models to more complex systems (generally with non-independent particles) has led many astray.
I see this as folks requiring microfoundations to be built up all the way to macro models. It should be possible in certain cases that may have wide applicability, but the complexity is usually not worth the effort.
And, in contrast to what many others have said here, aggregation does not necessarily result in instability. It is true for certain cases, but those tend to be the exception rather than the rule.
Onsager may have quibbled with the notion that economic actors using their expectations to motivate behavior has no analog in thermodynamics.
It should also be noted that from my experience, physicists tend to use statistical mechanics and thermodynamics as synonyms, whereas stat mech is a small subset of thermo.
Noah
the comments you have gotten have been super informative
""1. Even in microeconomics, we don’t insist on using models built up from maximizing behavior all the time. Exhibit A: supply and demand!...""
What? Did you ever get past an introductory economics class?
The standard intermediate micro textbook, Hal Varian's, spends the first 6 chapters showing how the downward-sloping demand function is an outcome of the maximizing behavior of 'rational' agents. Chapters 19-23 do the same for supply functions and profit-maximizing firms.
Unless, of course, you want to accuse Hal Varian of not being a 'serious' microeconomist...
Update 2
Yes. That is the way to go. I've always thought that ignoring the aggregation problem IS the big problem. Disaggregate and understand. It's like treating a human body as one single entity instead of dividing it into organs and structural components. Pretty dumb. And you miss out on everything that is interesting about it.
"Its like treating a human body as one single entity instead of dividing it in to organs and structural components. Pretty dumb. And you miss out on everything that is interesting about it."
On the contrary. The "reductionist" method, looking at the individual pieces of the body, has had great success... but the "holistic" method has had some massive successes too.
If you're doing ecology, you can just look at the human body as one single entity and the details are pretty unimportant most of the time.
"Holistic" and "reductionist" methods are complementary. It is harder to get good holistic theories, and hard to extend them, but when you do get them (example: theory of evolution) they can be far, far more powerful predictively than the theories you get from reductionist research. Keynes discovered some sound holistic theories in economics.
"most microfounded models rely on utility functions with constant parameters - these are the "tastes" that Bob Lucas and other founders of modern macro believed to be fundamental and unchanging. But I'd be willing to bet that different macro policies can change people's risk aversion."

Why invoke something so technical? Tastes themselves can change. Economists never study where tastes/preferences come from, but people in certain businesses definitely do.
"The usual answer is that "microfoundations make models immune to the Lucas Critique.""

Surely they're not the only kind that are immune to the Lucas Critique. If you take Y = C(Y−T) + I + G, then the people behind C respond to government behaviour.
"A better reason to use microfoundations, in my opinion, is that they probably lead to better models. "Better," of course, means "more useful for predicting the future.""

Evidence that this is true?
A related comment -- and maybe this is what you meant by "more information" -- with little agents running around you can get distributional information, whereas with nameless macro forces that might be less believable.
In this debate analogies with physics theories (gravity, thermodynamics) keep being thrown about but I think there's one important insight that's missing, possibly because physicists only know about it since about 1970, and economics appears to be stuck trying to mimic 19th century physics (and no, the new idea I'm talking about is not from quantum mechanics or general relativity, I'm talking about classical complex systems and statistical physics which someone like Poincare back in the 1890s might have been entirely comfortable with).
In particular, your
"Pretty much any model, in economics or physics or whatever, has a bunch of possible microfoundations that could give rise to it. That fact alone does not make microfoundations less important, since presumably some microfoundations are actually happening, and others aren't!"
is off - empirically - in the theory of collective phenomena. Microscopic behaviour is an object of study in its own right, not because it is essential for understanding collective behaviour (which it isn't).
In the physics theory of collective phenomena there's a concept of 'universality classes'. As it turns out, it doesn't matter what your complex system is composed of. All that matters is two things: the effective dimensionality of the space the system lives in (basically, the number of 'nearest neighbours' an elementary entity of the model interacts with - which is why the physics of thin films or layered materials is different from physics in the bulk) and the dimensionality of the 'relevant state variable' of the elementary entities. And then you can get predictions of so-called 'critical exponents' which describe the appearance of long-range correlations in collective phenomena (what we call 'phases of matter' or, more generally and by analogy, just 'phases' of the system under study). What this means is that systems with widely different 'microfoundations' can belong to the same universality class, so characterising the universality class by empirical measurement tells you precious little about the microfoundations.
For example, it turns out that the phases of a fluid (liquid/gas) and the phases of a ferromagnet (magnetised/demagnetised) are in the same universality class. But the microfoundations couldn't be more different.
What this also means is that by studying the collective phenomena you can't deduce much about the microfoundations of the system. The atomic theory of fluids wasn't established by studying the liquid/gas phase transition; indeed, we now know it probably couldn't have been.
I'm not saying that this is true of economics, just that maybe economists should stop trying to philosophize about the Lucas critique by appeal to a physics worldview of 100 years ago.
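For readers who want the quantitative version of the universality claim above, here is a short sketch in standard statistical-mechanics notation; the exponent values for the 3D Ising class are quoted from memory and should be treated as approximate:

```latex
% Near a critical point, observables follow power laws in the reduced
% temperature, with exponents fixed by the universality class alone.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
With reduced temperature $t = (T - T_c)/T_c$, the order parameter,
susceptibility, and correlation length scale as
\begin{align}
  m &\sim (-t)^{\beta}, &
  \chi &\sim |t|^{-\gamma}, &
  \xi &\sim |t|^{-\nu}.
\end{align}
For the 3D Ising universality class, which contains both the liquid--gas
transition and uniaxial ferromagnets,
\begin{equation}
  \beta \approx 0.326, \qquad \gamma \approx 1.237, \qquad \nu \approx 0.630,
\end{equation}
even though the microscopic Hamiltonians could hardly be more different.
\end{document}
```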
"In other words, our current microfoundations are mostly just garbage.
If this is true - and I think that the evidence overwhelmingly says that it is!"
Thank you, Noah. I've been pushing this point for several years. I convinced you -- that's a success. :-)
It is worth noting that while "microfounded" models are nice, there's simply no particular reason to prefer them. If a macro model works empirically, it doesn't matter whether you understand the micro behavior which creates it -- what matters is the empirical evidence full stop.
Classic example from biology:
The theory of evolution by natural selection was known to be correct, and was extremely useful, for *decades* before its "microfoundation" (DNA) was discovered.
"Because if by garbage you mean constrained optimization, full information, and so on then that same "garbage" is used to derive supply and demand and a whole bunch of other results in Micro theory. So if we accept your reasoning we should discard about 2/3 of what is taught in upper level microeconomics and simply posit aggregate (at the market level) relationships. "
I say yes, get rid of it. And I'm going to use the example of supply and demand.
Example one. Empirically, supply and demand do not reliably determine price! Merchants WILL price their goods at other-than-profit-optimizing levels, except in certain auction-driven commodity markets. Actually, "cost of inputs plus markup" is a standard pricing mechanism.
Example two. Empirically, demand is not independent of supplier behavior! It's driven largely by advertising, publicity, and product differentiation. This means the construction of supply and demand curves doesn't even make sense.
Microeconomics as we know it is the theory of coal sales.
It applies to a few other commodities. But its foundations are completely wrong for markets driven by price-setting sellers and advertising. They're also completely wrong for markets driven by price-setting buyers and contractors submitting sealed bid packets. Neither of these really behaves in a "supply and demand" fashion.
Now, I'm sure someone will come back and say "But the work of so-and-so analyzes this situation". Yeah. And is that taught in freshman micro? Does any macroeconomist know about it?
I've read Richard Thaler's book, Misbehaving, and would guess that microfoundations that assume all actors within a market act in a completely rational manner fall short of describing reality. Furthermore, I wouldn't be surprised to hear that the individual behaviors, once understood, still contribute to the formation of a system, which itself exhibits patterns and features that are not directly attributable to individual behavior but represent a kind of emergent property of the system. In that case, we may find that any attempt to construct a complete model will depend on developing technology suitable to the task, such as quantum computers. Of course, that is not to say that the development of useful models is out of reach. But complete understanding of the forces at work may remain elusive.