Monday, April 09, 2012

Weak defenses of the Lucas/Prescott Program

In the 1970s, Robert Lucas perceived that there was a big problem in macroeconomics. Models that didn't allow for human beings to adjust their behavior couldn't be used for policy, because if you tried to use them, people would alter their behavior until the models no longer worked. This is known as the "Lucas Critique". The solution, Lucas said, was to explicitly model the behavior of human beings, and to only use macro models that took this behavior into account. This is called the "microfoundations" approach. Readers of this blog will know that I am a fan of the general idea.

But in economics, as in many sciences, simply tearing down existing theories doesn't satisfy people. You have to replace the old with the new, or people will just go on using the old. The first research program that came along and tried to answer the Lucas Critique was the "Real Business Cycle" program. This program, spearheaded by researchers such as Ed Prescott, made use of a new modeling approach called "DSGE". It also incorporated Robert Lucas' "Rational Expectations Hypothesis". Lucas didn't invent all of this stuff, but since A) it was invented in response to his Critique, B) he invented some of it, and C) he seemed to sign off on the parts he didn't invent, I feel justified in calling this new research program the "Lucas/Prescott Program" (as in the title of this post).

Anyway, as I've mentioned before, DSGE took over the macro field, and Rational Expectations became nearly as common. Now, in the aftermath of the crisis, both of these are coming under increasing skepticism from some economists, but especially from the general public. In a recent blog post, Justin Fox expresses this skepticism and mentions some defenses of the Lucas Program (including one by Lucas):
Nobelist Robert Lucas says we're never going to have "a set of models that forecasts sudden falls in the value of financial assets." There's a certain logic to that. Lucas again: "If an economist had a formula that could reliably forecast crises a week in advance, say, then that formula would become part of generally available information and prices would fall a week earlier." But I don't think it's logically impossible to be able to judge when asset markets are at greater risk of trouble than normal. And I wouldn't be too certain that the various classes of macroeconomic models (DSGE models among them) that have evolved from Lucas's rational expectations work in the 1970s are the best possible tools for making such judgments. You could describe what today's macroeconomists are doing as the forward march of science: they're revising and adding to their theories in light of new evidence. But sometimes scientists hit dead ends. Ever heard of phlogiston? 
I'm not saying the DSGE theorists should give up — their work may turn out to be of great value. I'm just saying that policy makers and others who want to understand the short-term movements of the economy should keep their options open. And those who educate macroeconomists should be more open to different methods as well...My view of the Great Recession was very much shaped by an October 2008 phone call with Eichengreen: "I doubt that we'll be able to avoid double-digit unemployment," he told me. "But I'm still confident we can avoid 24% unemployment like in 1933." The U.S. unemployment rate hit 10% for one month in 2009, but didn't go past it. It was a very good forecast, expressed at just about the right level of precision, based mostly on historical experience and off-the-cuff judgment, without a DSGE model in sight. There's more than one valid way to do macroeconomics.
The first issue here is something I've addressed before, so I'll be brief. First off, a "financial crisis" is not necessarily the same thing as a fall in asset prices. If you knew 6 months ahead of time that the financial system was going to break down on some specific day, it's probably true that asset prices would fall as soon as the knowledge was obtained. But it would still be better to have that model, in order to prepare for the consequences of the financial system's collapse.

Second of all, models can make policy-contingent predictions without violating the weak form of the Efficient Markets Hypothesis. If you tell a policymaker, "The financial system will crash 6 months from now, UNLESS you take such-and-such an action," only the policymaker knows what he or she will do. It's private, not public, information. Again, it's better to have the model.

Third of all, market efficiency may not hold. See Abreu & Brunnermeier (2003) for an example of a market in which everyone knows that a crash is coming before the crash comes, but can't make money off of that knowledge. Lucas doesn't know that this kind of thing is impossible; hence, he is making unsupported assertions.

Now onto the second critique. Fox's story about Barry Eichengreen's successful predictions regarding the recent recession is just one data point, it's true. Eichengreen might have just gotten lucky. But the larger issue is that DSGE models have so far proven themselves to be essentially useless at forecasting the macroeconomy, relative to the judgment-based forecasts of people like Eichengreen.

Now, there are two reasons why we might value a macroeconomic model. One is forecasting ability. The other is policy advice. If existing DSGE models are crappy at the former, might they not be useful for the latter? In fact, they might be, and their supporters insist that they are. But with little consensus in the macro profession on how to choose which model applies to the economy at which time, we find ourselves at any given time with a dizzying array of contradictory models instead of one model that we can trust. 

Hence, Fox is exactly right when he says that DSGE models "may turn out to be of great value." As far as we can tell, this has not yet happened. The Lucas/Prescott Program has not yet panned out as hoped, nor are there good logical reasons why other programs couldn't possibly do better. Sometimes scientists are forced to live through long periods of doubt, where old certitudes have crumbled but new ones have not yet arrived to take their place. We appear to be living through such a time.

Note: Let no one interpret this post as an attack on Lucas as an economist. I like Lucas! The Lucas Critique was spot-on. I agree with Lucas' push for microfoundations. And I think the Lucas Islands Model is neato.


  1. ivansml5:11 PM

    And the point is ... what exactly? That macro is hard and we should keep an open mind with respect to other approaches and methodologies? That's a trivial truth that most macroeconomists would probably agree with, at least to some extent - it's not like you will be burned at the stake if you publish a non-DSGE paper (e.g. that Abreu & Brunnermeier paper wasn't published in some obscure heterodox journal, but in Econometrica). But "keeping an open mind" means that you are willing to consider the merits of those alternative approaches, not necessarily work on them yourself, so the burden of proof is still with proponents of the alternative.

    And any such alternative must start with an honest appraisal of current DSGE-based theories (i.e. do your homework and study what you are criticizing). Unfortunately that's usually not the case and critics often rely on straw-men and caricatures, as if DSGE macro stopped evolving 30 years ago. That may explain some of the hostility.

    Just to illustrate - following the latest Krugman/Keen kerfuffle, I tried to read Keen's paper that started the whole thing. A few pages in, without further discussion, he dismisses the whole neoclassical/DSGE approach because it relies on "equilibrium", and that's a bad thing, as promptly demonstrated by a quote from Fisher. Yet the quote clearly shows that Fisher's notion of an equilibrium is a static one (steady state), whereas the modern sequential/recursive equilibrium concept in DSGE models is a dynamic one, allowing for fluctuations and endogenous dynamics. Thus 1) Fisher's quote is irrelevant and 2) Keen either doesn't know basic things about modern macro, or he knows but lies about it anyway. In either case, I don't see much point in reading further.

    1. And the point is ... what exactly?

      That the microfounded, hypothetically structural approach adopted by macro theorists in response to the Lucas Critique has not demonstrably increased our understanding of the macroeconomy.

      the burden of proof is still with proponents of the alternative

      So you think that the currently dominant approach has satisfied its "burden of proof", such that other approaches should be labeled "alternatives"? That's where we disagree.

    2. Anonymous10:35 PM


      Funny, not really addressing any of the substantive critiques of DSGE.

      1. Local dynamics around steady state do not generally qualify as "dynamics" in any science outside of economics. Keen 1 DSGE 0.

      2. Capital aggregation: DSGE FAIL.

      3. Representative consumer: DSGE FAIL (I am aware of the limited use of more than one household type in some DSGE papers addressing very limited questions, with mixed results.)

      I think you need to read the literature better before you jump on others. Perhaps you may find some enlightenment.

    3. ivansml6:02 AM

      So you think that the currently dominant approach has satisfied its "burden of proof", such that other approaches should be labeled "alternatives"? That's where we disagree.
      I don't think it's necessary to argue about labels; there is a dominant methodology and the rest is "alternative", pretty much by definition. All I was saying is that 1) the dominant paradigm is less rigid than its critics think, and 2) if the alternative really is better, it should be able to convince us by evidence and superior insights, not by whining, metadebates about history and caricatures of DSGE models.

      BTW, as you are generally critical of mainstream macro, what do you consider as most promising alternatives?

      @ Anon 07:35

      Thank you for a prompt illustration of what I was writing about.

      1. Local dynamics around a steady state are still different from being stuck at that steady state, so Keen is still wrong. And if you like, you can build DGE ("E" stands for equilibrium) models with complicated, even chaotic deterministic dynamics (see this short survey by Benhabib).

      2. You forgot to mention the SMD theorem :) Seriously, proclaiming "capital aggregation" doesn't prove anything unless you can demonstrate that reswitching and all that are empirically relevant phenomena.

      3. Some DSGE models use a representative agent, others do not. I think it's you who needs to study the literature - models with heterogeneity have been around for more than two decades, and have been applied to a range of topics. If you claim they've had mixed results, perhaps you should elaborate, instead of repeating the same old (and wrong) criticisms all over again.

    4. Anonymous2:35 PM


      No, just repeating the same insinuation proves nothing other than you are being obtuse.

      1. Oh, so now you agree that real dynamics come only from deterministic models--yes, very useful for understanding the real world!!! Local dynamics are not useful dynamics at all.

      2. You display ignorance here. I was fully expecting you to come up with this absurd boilerplate reply. Even without the reswitching issue, aggregation of capital is problematic--just Google Franklin Fisher and educate yourself.

      3. This is the classic bait and switch. DSGE models with heterogeneity have been around for a long time, but they have been largely used to answer some narrow puzzles, and the results have been mixed. You are just asserting--show me some DSGE models with heterogeneity that are widely cited in current debates and then we are talking. Pointing to some obscure paper that has no implications is not going to cut it. Sorry.

    5. ivansml6:31 PM

      @Anon 11:35

      1. No, I'm saying that the tools of macroeconomic theory allow you to do both, depending on what makes more sense for the problem you are studying.

      2. First of all, general equilibrium allows you to have multiple firms / sectors, and there are papers that do that (e.g. Long & Plosser (1983), or applied CGE models).

      But all right, the aggregate production function (just like the Gorman-form representative consumer) exists only under special circumstances. So what? You cannot interpret any model literally; they're all false anyway. The question is, in the context of a specific economic question, whether abstracting from distributional issues (allocation of wealth across consumers, allocation of capital across firms) misses something important.

      So if there is a bunch of papers showing _specifically_ how accounting for the distribution of capital across firms shatters the foundations of mainstream macroeconomics, now would be a good time to cite them. Otherwise, I don't care very much.

      3. Economics is so specialized nowadays that any single paper will be dealing with just a narrow puzzle. But take for example Constantinides & Duffie (1996). They have shown how in a setting with idiosyncratic shocks and incomplete markets, cross-sectional dispersion in marginal utilities (which, with incomplete markets, are not equalized) can influence asset prices and explain equity premium.

      The idea wasn't entirely new, but they put together a model which illustrated it nicely, indicated that it's really time-variation in the dispersion that matters for the equity premium, and thus motivated subsequent empirical work that looked at disaggregated consumption data. I'm not an expert on asset pricing, so I can't say definitively how much this line of argument is accepted as an explanation for the equity premium (e.g. Cochrane sounds somewhat skeptical in his survey), but it was definitely an influential paper, with over 800 citations on Google Scholar (176 on Web of Science).
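      The Jensen's-inequality channel at work here can be shown with a toy calculation (my own illustration with made-up numbers, not the Constantinides & Duffie model itself): with CRRA utility, cross-sectional dispersion in consumption pushes average marginal utility above the marginal utility of average consumption, which is how uninsurable idiosyncratic risk can matter for asset prices.

```python
# Toy illustration (hypothetical numbers), NOT the actual
# Constantinides-Duffie model: with CRRA utility u'(c) = c^(-gamma),
# dispersion in the cross-section of consumption raises the average
# marginal utility above the marginal utility of average consumption,
# because u' is convex (Jensen's inequality).
gamma = 3.0                      # risk aversion (assumed)
consumptions = [0.7, 1.0, 1.3]   # cross-section of household consumption

avg_mu = sum(c ** -gamma for c in consumptions) / len(consumptions)
mu_of_avg = (sum(consumptions) / len(consumptions)) ** -gamma

print(avg_mu > mu_of_avg)  # True: dispersion raises average marginal utility
```

      Time-variation in that dispersion (higher in bad times) is what does the work in their equity-premium argument.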

  2. I thought the Vox link was going to show DSGE performing worse than something else (the abstract said it performed better than the other forecasts they compared), but instead it said everything performed badly. Doesn't sound like that contradicts Lucas/EMH much at all!

    1. If a model does not forecast better than judgment, why do we need the model?

      (Also, we should suspect that the model was subject to publication bias, and that its forecasting performance will go down in the future. In practice this always happens.)

  3. There are two types of financial crisis - one a fall in asset prices and the other a simple bank run / liquidity crisis unrelated to a fall in asset prices. The pure bank run can be solved by a central bank that acts as lender of last resort.

    While economists may never be able to pinpoint exactly when an asset bubble will burst they should be able to identify that an asset bubble is developing and be able to recommend policies to stop it.

    1. The pure bank run can be solved by a central bank that acts as lender of last resort.

      I'd like to believe this is the case...

    2. Anonymous10:17 PM

      In practice, it is impossible to distinguish illiquidity from insolvency. A drying-up of liquidity reduces asset prices and vice versa.

      Liquidity measures, such as bid-ask spreads and the spreads between on-the-run and off-the-run securities, are highly correlated with credit and other risk spreads, especially when there is a crisis.

      That said, the LOLR function is important--we just have to live with the fact that when we bail out the system, some insolvent entities are made solvent, thus enriching their owners.

  4. If existing DSGE models are crappy at the former, might they not be useful for the latter?

    you are being too generous. 1- Models should contain falsifiable assumptions. 2- The absence of the ability to forecast financial panics means that they also do not generate useful advice to prevent them (how do I know I've prevented them if they aren't even supposed to exist). 3- Economists need to adopt Occam's Razor: a lot of economists are running around trying to generate novel theories about unemployment and inflation (when good old sticky wage and price models fit the data fine).

    In my undergraduate abstract algebra textbook there is a quote from a mathematician that his theorems were never sullied by practical applications (math as art). Fortunately he did not live long enough to see his theory applied to electrical engineering. I am fine with econ models as art, as long as we admit that's what they are and no more. Like all pure research, it might eventually lead to something. But if we are serious about models that tell us about policy, see 1-3 above.

    1. models should contain falsifiable assumptions


      The absence of the ability to forecast financial panics means that they also do not generate useful advice to prevent them

      Disagree. I think you are making a logical error. Read this post to see why:

    2. poor word choice: I meant the absence of the ability to forecast financial panics because the models do not contain any behavioral assumptions or structural variables that lead to financial panics (or policy variables that let me prevent them; in your post, Case 3: asteroids do not exist). A Brownian motion, for example, is useless because it's agnostic about all structural variables. IMO people use rational expectations and the EMH to assume away all the dirty stuff. And that's fine, in many contexts. But my point is more basic: if I have simplified my model and assumed away asteroids, the forecast may indeed be *exactly the same* as that from a model with asteroids and a policy response to prevent them, but one is ultimately useful for policy predictions and one is not.

    3. Victor Matheson9:06 AM

      To follow up on Noah with a brief example, think of a DSGE model as a finely calibrated machine (or a Rube Goldberg mechanism if you prefer). Now take away or change one part of the machine and see what happens. That's your typical DSGE paper.

      You may not be able to predict when that part of the machine is going to change in the real world economy, but if your model is correct, your model can give you an idea of what will happen if that part of the machine changes in real life.

      Truthfully, I'm not sure the DSGE models work particularly well, and as full disclosure, I was a student of Prescott at the University of Minnesota. But the lack of ability to predict financial crises is not among the reasons to condemn them.

    4. You may not be able to predict when that part of the machine is going to change in the real world economy, but if your model is correct, your model can give you an idea of what will happen if that part of the machine changes in real life.

      Sure, but if I have a menu of thousands of such machines, how do I know which one to use as my "policy laboratory"?

      But the lack of ability to predict financial crises is not among the reasons to condemn them.

      Well, it's a reason to say that they have failed to explain the phenomena that we most want them to explain...maybe no model can do it. But we might as well keep looking!

  5. Noah, the defense with which you credit Lucas is a dishonest defense. I've never heard anybody talk about such specific predictions, save right-wing economists' strawmen.

    What I have heard is the simple fact that the 'reforms' proposed by them, and implemented with their support made the world financial system catastrophically unstable, and people like (the entire right-wing econosphere) had no f-ing clue until looooooooooooong after it had happened.

    1. Noah, the defense with which you credit Lucas is a dishonest defense. I've never heard anybody talk about such specific predictions, save right-wing economists' strawmen.

      Actually, it's more reasonable than it sounds, and Fox misses this in his post. Suppose at date T our best models reveal that the risk of a financial crisis, going forward, is higher than we had realized. Asset prices will fall in response to that news, since more risk is bad. So Lucas' defense is not actually limited to predictions about specific dates, but also extends to increased risk.

      However, my critiques of Lucas' defense still hold.

    2. "Suppose at date T our best models reveal that the risk of a financial crisis, going forward, is higher than we had realized."

      Noah, do you understand that the situation was the *exact opposite*? Why are you making a supposition which both supports those who failed, and is contradictory to reality?

      The 'best models' were predicting sunny skies and fair weather, and then the ship capsized.

  6. Michael Harris12:59 AM


    The interminable and often unreasonable "Macro Wars" stuff going on in the blogosphere has brought to mind two quotes. Both are micro in context rather than macro, but I don't know why that should matter.

    1. "A college graduate in engineering can predict that a badly designed bridge will fall down, and why; a college graduate in chemistry can predict that a badly designed compound will blow up, and why. A college graduate in economics should be able to predict that a badly designed tax on gasoline will hurt society, and why. Too often, I think you will agree, he cannot."

    [Deirdre McCloskey, writing as Donald, The Applied Theory of Price.]

    2. "The probability of collapse rapidly converged on 1.0; only the timing was uncertain."

    [Australian agricultural economist Bob Richardson, commenting after the fact on the collapse of the Wool Reserve Price Scheme in the late 1980s. Richardson's words, while written with hindsight, were consistent with what economists were saying at the time.]

    Williamson's mega-comments-thread (in which you were a participant) saw him repeatedly say "We're doing science!", as opposed to KrugmanDeLongSummers and all the saltwater hand-wavers, who were presumably doing political posturing hidden behind ad hoc equations. When asked for a model criterion ("What are your criteria for assessing an economic model?") he responds "It's useful for the purpose for which it was designed."

    Huh? That's nicely self-serving, and allows Williamson plenty of wiggle-room. But your post above seems to be about a not-so-subtle sub-text of the freshwater Lucas-Prescott school, which is "Don't expect us to be useful! As soon as we say anything useful, the model agents will change their behaviour in response to our insight and our model will stop being useful."

    Sheesh. I don't expect an ecologist to be able to forecast in detail when a given ecosystem will collapse, or when a species will go extinct. I don't expect a limnologist to be able to forecast precisely when a lake or river will turn eutrophic. Accordingly, I am OK that macroeconomists can't forecast with precision when a collapse in asset prices (or consumption demand, or employment) will occur. Those turning points are damned hard to get right in advance, if not impossible.

    But I do expect an ecologist to be able to pinpoint changes that will raise the probability of a collapse or an extinction (or the hastening of one). Same with a limnologist and eutrophication. At least pinpoint whether this set of conditions makes outcome X more likely or less likely. In serious cases, indicate whether pressures are consistently building that make it close to inevitable, as with the Wool Reserve Price Scheme.

    That doesn't seem like an unreasonable professional expectation to have of economists, does it?

    Yet we (well, macroeconomists, of whom I am not really one) seem to be truly awful at this, and the "useful for the purpose for which it was designed" freshwater crew seem to regard the purpose of their models as convincing themselves that various matters are now "settled", so that dissenters can be dismissed in blog comment threads, or written off as rogue economists-turned-newspaper-columnists who should be disregarded.

    1. What's f*cking irritating is that nobody was ragging on economists for not predicting that company X or asset Y would experience a collapse at time T. Nobody except people making excuses for right-wing macro failures.

      And Krugman has regular updates on various economists who've spent the last few years working hard to maintain their 'dead wrong' streak.

  7. I'm not sure I understand a 'rational expectations' model that claims that all those rational choosers occasionally produce an irrational collective situation - which would seem to violate the methodological individualism the theory is based on. In the face of that situation, our choice is between a miracle and the null set: the miracle that announcing the irrational situation in advance would prevent it from coming to its conclusion, or the inability to say anything at all about the situation because -- it's irrational! This strikes me as a first-order double bind that vitiates the idea that this is a foundation - if "foundation" still has its commonsense meaning. Rather, it is one method of training people to interpret one small subset of economies, those with quasi-capitalist features. It is a heuristic. And not a very good one.

  8. "Now, there are two reasons why we might value a macroeconomic model. One is forecasting ability. The other is policy advice. If existing DSGE models are crappy at the former, might they not be useful for the latter?"

    DSGE has failed on both counts with stagnating median incomes, large amounts of private debt, poverty outside of China (who quite clearly don't rate neoclassical economics) pretty much at a standstill & rising in Africa, and of course the crisis they couldn't predict and can't model.

  9. Good post as always. Just one thing.

    I'm not sure if it's really fair to describe the "rational expectations hypothesis" as "Lucas' rational expectations hypothesis". True, Lucas was the first macroeconomist to really seriously incorporate the idea of RE into macro models, and to develop ideas such as his surprise supply function/islands parable/signal extraction etc., which led to influential work such as Sargent and Wallace's policy ineffectiveness proposition, but Muth (1961) marked the formulation of Rational Expectations.

    Also Keuzenkamp (1991) apparently suggests Tinbergen formulated the basic idea of RE 30 years earlier, but I have never checked this out.

    1. Muth's formulation was a bit different, and - more importantly - he didn't insist that it always held.

  10. Michael Harris6:13 AM

    I have a feeling that the "islands parable" was due to Phelps, no? I haven't checked, but my recollection was that it was his idea.

    1. Oh, I don't know. Never read Phelps! That's cool, though.

    2. Michael Harris5:13 PM

      Noah, see

      "In a 1969 paper, Phelps sketched an economy of widely separated "islands" in which workers have to decide whether to accept the local market wage or to move on. Even in an equilibrium scenario, workers on an island with an appreciably inferior wage will get on the boat to try another island, suffering voluntary unemployment during their search."

  11. Noah, a serious question for someone who left the world of academic papers behind for a career in investment management over 30 years ago. For those like you who say they are a fan of the general idea of micro foundations, how do you respond to the Sonnenschein, Debreu, Mantel theorem? Back when I was the young guy studying this stuff I thought that pretty much answered the question of how useful it could be. Personally, I don't care for models that can have multiple solutions. Clearly, the academic profession has ignored this result for the most part. Why? Is there something inherently flawed in the theorem? Thanks.

    1. The theorem is correct. It means that you can't always use a representative agent. That doesn't mean you can't have microfounded models, though.

    2. I don't know about that. To me it means that a micro-founded model is, at the end of the day, a heuristic. Maybe of some use, but I would note that on Wall St., where this stuff is done to make money as opposed to explaining the world, really nobody uses any kind of RBC-based model, though there are some New Keynesian models that are DSGE-oriented. I think of DSGE as more of a technique than as something consistent with a particular theory. Anyway, I think SDM raises some legitimate issues for micro-founded approaches. At least if you are in search of truth and beauty.

  12. Noah, it's not just that the Lucas/Prescott/Sargent Program hasn't "delivered" yet - it actually will never be able to "deliver" - and I try to explain WHY in an article on my blog:

  13. "Lucas again: "If an economist had a formula that could reliably forecast crises a week in advance, say, then that formula would become part of generally available information and prices would fall a week earlier."

    It may not be easy for an economist to know when the market and financial system are going to crash within a week, or a year. But:

    1) A stock can have the same market value of, say, $100/share in situation (a), where you know for certain the stock will be worth $102 next year, and in situation (b), where you think there is probability p that the stock will be worth $40 next year and probability (1-p) that it will be worth $120 next year. For some p, (a) and (b) are both worth the same amount, so an economist can know we're in situation (b), or a type-(b) situation, and that that's very risky and bad for the country, without there being an abnormally large profit opportunity.
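    To make that arithmetic concrete (a risk-neutral sketch using the example's hypothetical numbers, not anyone's actual model), the crash probability p that makes the two situations' expected payoffs equal is pinned down directly:

```python
# Situation (a) pays $102 for sure; situation (b) pays $40 with
# probability p and $120 with probability (1 - p). Under risk
# neutrality, both price identically when expected payoffs match.
sure_payoff = 102.0
crash_payoff, boom_payoff = 40.0, 120.0

# Solve p * 40 + (1 - p) * 120 = 102 for p:
p = (boom_payoff - sure_payoff) / (boom_payoff - crash_payoff)
print(p)  # 0.225

expected_b = p * crash_payoff + (1 - p) * boom_payoff
print(round(expected_b, 6))  # 102.0
```

    With risk aversion the exact p consistent with equal prices would differ, but the point stands: identical prices can conceal very different crash risks.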

    2) A lot of smart people did know that we were cruising for a bruising, and got out of the market, but the very expert and well-informed in finance and economics control only so much money, and can push prices only so far toward efficiency. As I wrote in a 2006 letter in The Economists' Voice:

    ...One reason which was missing, at least explicitly, and which I have not seen yet in the literature, at least explicitly, is that a smart rational investor is limited in how much of a mispriced stock he will purchase or sell by how undiversified his portfolio will become. For example, suppose IBM is currently selling for $100, but its efficient, or rational informed, price is $110. It must be remembered that the rational informed price is what the stock is worth to the investor when added in the appropriate proportion to his properly diversified portfolio of other assets. Such a savvy investor will purchase more IBM as it only costs $100, but as soon as he purchases more IBM, IBM becomes worth less to him per share, because it becomes increasingly risky to put so much of his money in the IBM basket. By the time this investor has purchased enough IBM that it constitutes 20 percent of his portfolio, the stock may have become so risky that it’s worth less than $100 to him for an additional share. At that point he may have only purchased enough IBM stock to push the price to $100.02, far short of its efficient market price of $110. Thus, if the rational and informed investors do not hold or control enough—a large enough proportion of the wealth invested in the market—they may not be able to come close to pushing prices to the efficient level.
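    The letter's diversification mechanism can be sketched numerically (the linear risk penalty and every figure below are made up for illustration; they do not come from the letter):

```python
# Sketch of limited arbitrage through diversification costs
# (hypothetical numbers). A mean-variance investor's reservation
# price for one MORE share falls as the stock's portfolio weight w
# grows; the linear penalty stands in for the variance cost of an
# increasingly concentrated position.

def reservation_price(w, fundamental=110.0, risk_penalty=55.0):
    """Value of an additional share to an investor who already
    holds a fraction w of wealth in the stock."""
    return fundamental - risk_penalty * w

print(reservation_price(0.0))            # 110.0: full fundamental value at first
print(round(reservation_price(0.20), 6)) # 99.0: below the $100 ask at a 20% weight
```

    Buying stops where the reservation price meets the market price, so a handful of informed investors close only a sliver of the $10 gap, just as the letter argues.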


  14. Anonymous8:50 PM

    Your comment about the potential benefit of models being to provide a signal about when the possibility of crisis is heightened (but not necessarily exactly when it will occur) resonated very strongly with me.

    To me it seems we can't really get to this point, though, unless we recognize that market participants and economists have imperfect knowledge both about how fundamentals will unfold (even beyond a probability distribution, more along the lines of Knightian uncertainty), and about how those fundamentals will affect prices (in part because of the higher-order expectations Brunnermeier has worked on). So in economist-speak, not only do we not know the future X's (in a way in line with what Lucas worked on with imperfect information), but we also do not know the betas (which REH rules out). This concept seems to be in line with the writings of George Soros on financial markets, and the work of Frydman and Goldberg on Imperfect Knowledge Economics. I am curious if you are familiar with either.

  15. Let no one mistake this comment as other than an attack on Lucas as an economist. In "Econometric Policy Evaluation: A Critique" Lucas made no claim of originality. He wasn't just being modest. The paper starts with a literature review in which Lucas quoted many papers which made the point. Someone who was there (Larry Summers) noted that when the paper was presented, the general view was that everyone knew that.

    Lucas has made an extremely influential contribution to the study of the history of thought -- he has somehow convinced people that before Lucas, macroeconomics was dominated by idiots (such as Samuelson, Solow and Marschak -- I mean really, is it plausible?).

    You kids jump from the Lucas critique to DSGE. Back in the day, there was a period of fascination with the Lucas supply function. It is now agreed that this was a totally silly idea, not just wrong but silly.

    The Lucas growth model is due to Uzawa.

    Lucas did some interesting work on General equilibrium with multiple agents based on Lawrence Weiss's demonstration that results with the Lucas supply function depend on the assumption of symmetric information. No huge deal but a contribution.

    The DSGE model was presented by Arrow and Debreu in the early 50s. The original contributions of Kydland and Prescott were two. The first was to make critical, totally implausible assumptions, such as that there is a representative consumer -- this makes a huge difference (as was well known as a topic in first-year graduate micro). It is totally implausible. The implications are totally false. It is not an advance. The second was to claim that a very simple DSGE model with parameters supported by long-term trends or micro data gives implications similar to the data. This is, as has been noted in this blog, a totally incorrect claim. This was not an advance either.

    You mention in passing that DSGE models might give useful policy guidance. You know and hint that this is just because "might" makes right. So *might* astrology.

    In contrast, the theory of phlogiston fit the facts and made it possible to predict the results of experiments. I see no basis for a comparison of the scientific status of research on phlogiston and DSGE macro. Nor was phlogiston a dead end -- Lavoisier's experiments sure seem to be attempts to measure the amount of phlogiston in mercury. The model was strongly rejected by the data, as the measured amount was negative. Science advances when models are tested and rejected -- provided they aren't assumed to be useful approximations even when predictions based on the models are contradicted by the data.

    1. I've actually never heard of the Lucas growth model (is it the same as the Uzawa 2-sector growth model?)

      Also, I've never heard of the Lucas supply function. What is it? Do I need to know?

      Lucas has made an extremely influential contribution to the study of the history of thought -- he has somehow convinced people that before Lucas, macroeconomics was dominated by idiots

      How do you think he managed to do that? I don't know nearly enough of this history, but I'm very interested in it.

    2. Michael Harris (2:01 AM)

      Noah, as an interested amateur in "history of thought" and someone who once taught enough undergraduate macro to become very aware of how much I didn't know:

      - the Lucas "surprise" supply function was his very early contribution to explaining business cycles with rational expectations. In it, prices would change and agents faced a signal-extraction problem: they couldn't tell whether any given price change they observed was part of an absolute price-level change (i.e. inflation) or a relative price change. Price changes due to inflation should not have real effects, but relative price changes would. This resulted in a monetary-driven business cycle model in which policy ineffectiveness held for all but very short-term monetary "surprises". It was a key moment in pre-real-business-cycle new classical macro, but it's basically been abandoned as far as I can tell. You could catch up on it quickly through books like Steve Sheffrin's on rational expectations, and Kevin Hoover's book on new classical macro is probably worth delving into. Also:

      - the best history-of-thought (in modern macro) treatments I know of are in the various books by Snowdon and Vane (and occasional others). They have, for example, books of interviews with eminent macroeconomists from across the spectrum, and typically have an opening chapter that lays out the historical development of ideas. I picked up a recent book of theirs, but haven't had a chance to work through it (I think it's the updated version of an older one). (I'd also recommend the earlier and argumentative interviews book by Klamer and Colander.)

      - as I see Robert Waldmann's point, it's that the "history of macro" has, from current perspectives, been misleadingly collapsed into a very stylised path going something like:
      -- neoclassical/Keynesian synthesis (1950s-60s)
      -- vertical (expectations-augmented) Phillips Curve (due to Friedman, and maybe Phelps if one remembers to include him, late 1960s)
      -- Lucas critique (mid 1970s)
      -- and, voila, DSGE! (1980s and beyond)
      The problem seems to be that it neglects (and so trivialises) important things that happened along the way, and as Robert W notes, it has the effect of suggesting that everything before Lucas (particularly before Lucas-critique Lucas) is silly and dismissible. By contrast, the macro that I learned (badly, most likely) stressed a "microfoundations" debate stretching through the '50s and into the '70s that included names like Patinkin, Clower and Leijonhufvud. There's a nice summary treatment of this literature by an actual GE theorist, E. Roy Weintraub, who looked at the microfoundations debate from two GE perspectives: Walrasian and Cournotian. But by the time that book came out, the whole microfoundations debate seemed to have shifted to being a response to the Lucas critique.

      I wouldn't suggest you dive into all these readings now. (Finish that damn dissertation.) But if in your new teaching job you get some macro courses to teach, you have an excuse to do some browsing into new -- but old! -- areas.
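      [Ed.: the signal-extraction mechanism Michael describes above reduces to a one-line linear projection, and a sketch may help. All numbers below are illustrative assumptions of mine: an agent sees a local price change p = z + u, where z is the aggregate (inflation) shock and u a relative shock, and optimally attributes only the fraction var(u)/(var(z)+var(u)) of it to relative causes.]

      ```python
      # Toy sketch of the signal-extraction logic in the Lucas "surprise"
      # supply function. Variances and the supply slope b are assumptions.

      var_z = 1.0                       # variance of aggregate (monetary) shocks
      var_u = 1.0                       # variance of relative-demand shocks
      theta = var_u / (var_z + var_u)   # E[u | p] = theta * p  (signal extraction)
      b = 0.5                           # output response to a perceived relative shock

      def supply_response(price_change, theta=theta):
          """Output moves only with the *inferred* relative component."""
          return b * theta * price_change

      # More monetary noise shrinks theta, so the same price surprise moves
      # output less -- the policy-ineffectiveness flavor of the model:
      theta_noisy = var_u / (10.0 + var_u)
      print(theta, theta_noisy)
      print(supply_response(2.0), supply_response(2.0, theta_noisy))
      ```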

    3. This is very very very interesting stuff. If I had the time, I'd write a history-of-thought book about what happened to macro in the 70s and 80s. Maybe someday...