Thursday, April 13, 2017

Ricardo Reis defends macro


I really like this defense of macroeconomics by Ricardo Reis. He makes it clear that he's sort of playing devil's advocate here:
While preparing for this article, I read many of the recent essays on macroeconomics and its future. I agree with much of what is in them, and benefit from having other people reflect about economists and the progress in the field. But to join a debate on what is wrong with economics by adding what is wronger with economics is not terribly useful. In turn, it would have been easy to share my thoughts on how macroeconomic research should change, which is, unsurprisingly, in the direction of my own research. I could have insisted that macroeconomics has over-relied on rational expectations even though there are at least a couple of well developed, tractable, and disciplined alternatives. I could have pleaded for research on fiscal policy to move away from the over-study of what was the spending of the past (purchases) and to focus instead on the spending that actually dominates the government budget today (transfers). Going more methodological, I could have elaborated on my decade-long frustration dealing with editors and journals that insist that one needs a model to look at data, which is only true in a redundant and meaningless way and leads to the dismissal of too many interesting statistics while wasting time on irrelevant theories. However, while easy, this would not lead to a proper debate.
Reis goes on to defend academic macro from some of the main recent criticisms, including:

  • Macro relies on representative agents
  • Macro ignores inequality
  • Macro ignores finance
  • Macro ignores data and focuses mainly on theory

He gives a sampling of 8 job market papers by recent, highly successful candidates, and a sampling of recent articles in the Journal of Monetary Economics. This actually seems like a pretty stringent test of the criticisms to me - job market papers are probably weighted toward theory, for signaling purposes, while the JME has a reputation as a very (small-c) conservative macro journal.

But as Reis shows, modern macro papers generally don't fit the caricature described above. There's lots of heterogeneity in the models, a fair amount of attention to inequality and distributional concerns, plenty of finance, and lots and lots of data.

Reis is right; a lot of these criticisms are now out of date. That doesn't mean they were never right, though. There was a time when macro models mostly did use representative agents, when financial sectors were rarely modeled, and when calibration served as the only empirical test of many models. The point is not that the critics were full of it, but that macroeconomists were aware of the problems and moved their field in the direction it needed to go. Macro is a dynamic field, not a hidebound one.

And Reis himself shows that macroeconomists - at least, many of them - know of a number of areas where the field still needs to improve. He wants to move away from exclusive reliance on rational expectations, and stop forcing authors to stick in theory sections when they're not really needed.

This all sounds great to me. Personally, I'm particularly happy about the increase in "micro-focused macro". Arlene Wong's JMP, which Reis references, is a great example of this. Very cool stuff. Basically, finding out more micro facts about the effects of business cycles will help guide new theories and (ideally) discipline old ones.

But one problem still nags at me, which Reis doesn't really address. Why didn't macro address some of these problems earlier - i.e., before the crisis? For example, why was finance so ignored? Sure, there were some macro models out there that included finance - even a few prominent ones - but most researchers modeling the business cycle didn't feel a need to put financial sectors in their models. Another example is the zero lower bound, and its importance for monetary policy. A few macroeconomists were definitely clued into this years before the crisis, but they seem to have been a less-influential minority, mostly confined to international macro. In the runup to the crisis, macro researchers were generally not sounding the alarm about the danger from financial shocks, and after the recession hit and rates went to zero, many leading macroeconomists still dismissed the idea of fiscal stimulus.

Fixing problems quickly is great, but it's also important to ask why the problems were there in the first place.

One possible answer is sociological - as Paul Romer tells it, it was largely the fault of Robert Lucas and Thomas Sargent for bullying the profession into adopting bad models (or the fault of their early critics like Solow for bullying them into becoming bullies).

I don't know how true that story is. But I do think there's another potential explanation that's much more about methodology and less about personalities. One thing I still notice about macro, including the papers Reis cites, is the continued proliferation of models. Almost every macro paper has a theory section. Because it takes more than one empirical paper to properly test a theory, this means that theories are being created in macro at a far greater rate than they can be tested.

That seems like a problem to me. If you have an infinite collection of models sitting on the shelves, how does theory inform policy? If policy advisers have an endless list of models to choose from, how do they pick which one to use? It seems like a lot of the time it'll come down to personal preference, intuition, or even ideology. A psychologist once joked that "theories are like toothbrushes...everyone has one, and nobody wants to use anyone else's." A lot of times macro seems like that. Paul Pfleiderer calls this the "chameleon" problem.

It seems to me that if you want to make a field truly empirical, you don't just need to look at data - you need to use data to toss out models, and model elements like the Euler equation. Reis' suggestion that journal editors stop forcing young authors to "waste time on irrelevant theories" seems like one very good way to reduce the problem of model proliferation. But I also think macro people in general could stand to be more proactive about using new data to critically reexamine canonical assumptions (and a few do seem to be doing this, so I'm encouraged). That seems like it'll raise the chances that the macro consensus gets the next crisis right before it happens, rather than after.

9 comments:

  1. Do the models reliably predict? If not, does any of that other stuff matter?

    Replies
    1. I think once a model doesn't reliably predict, and it is acknowledged that it doesn't reliably predict, it becomes a moral argument by its proponents and something to shape our economy to fit.

    2. "Do the models reliably predict?"

      More important than whether they predict is whether they explain the differences between successful growth and failed economies. Econometrics will give you lots of "good predictions" most years, based on theories that say that 1929 is impossible and that otherwise perform almost, but not quite, as well as predicting that this year's growth will be the same as last year's.

      A better test is, do the models say we were doing everything right from 1920-1932 in the US and do they say that South Korea did everything wrong from 1940-1980 or so? Any theory that holds that US 1920-1932 policy is better than South Korea 1940-1980 policy, or even better than US 1933-1945 (when we tripled in a dozen years) needs to be thrown out with prejudice.

      An economic theory that predicts well "except when you need it" should be viewed in the same light as the darkly comic US strategic grain reserve policy (it contains cash and not grain - grain reserves are there unless you need them).

  2. I believe new methods in academic macro do not address the problem of predicting the next financial crisis. I think the solution to predicting the next crisis is Michael Burry-style analysis, which academic macro clearly doesn't do.

  3. Imagine if you write 100 = 1 + 1 + 1 + ... + 1 - how many significant digits you are going to have! Thousands of economists will toil for decades analyzing the possibilities.


  4. Blah Blah Blah I'm just a dilettante, probably spouting nonsense, but I have come around quite a bit on DSGE and how useful it is for theory and for discussing those theories. I am even more convinced that Agent-Based Models (ABMs) will eventually supplant the DSGE paradigm for talking about policy (rather than theory), because they are superior models of the real world. An easy logical basis for this statement is as follows:

    Every DSGE model can be translated/rewritten as an Agent-Based Model (ABM).
    In other words, every DSGE model is a special case of the possible ABMs.

    Any important economic interaction that can be modeled by a DSGE model can be integrated into an ABM.
    Potentially important economic interactions that cannot be modeled within the DSGE framework can also be integrated into ABMs.

    Therefore ABMs are generally superior in their ability to replicate real-world aspects of the economy.

    Even though ABMs are superior as models, that doesn't mean they are superior for the purposes for which models are used. Interpreting the results of an ABM is more difficult than interpreting a DSGE model because of the nonlinearities allowed within the model. If something is true only within a set of initial conditions or a limited range of instances, then it is much more difficult to tell a coherent story or derive fundamental rules about an economic system. Linear approximations are adept at constraining the outputs of a model in such a way that it is much easier for people to interpret the results. Generalized axioms can be developed because linearity restricts the complexity of the outcomes. However, the danger lies in cases where a DSGE model gives evidence for theories that are only true because the relevant outputs are smoothed away by the requirement of linearity.

    In the long run, ABMs can be an important part of discerning whether the rules, axioms, stories, or lessons learned from a DSGE model are actually relevant to the real world or merely an artifact of confining a model to linearity.
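    To make the commenter's point concrete, here is a minimal sketch of my own (not from the comment, and far simpler than any research ABM), assuming only the Python standard library: in an agent-based model, agents interact directly, and aggregate behavior emerges from those interactions rather than from a linearized system of equations.

```python
import random

def exchange_abm(n_agents=100, n_rounds=10_000, seed=0):
    """Toy agent-based model: agents start with equal wealth and make
    random one-unit pairwise transfers. Aggregate wealth is conserved,
    but the wealth distribution that emerges is an aggregate outcome
    no single representative agent would exhibit."""
    rng = random.Random(seed)
    wealth = [10] * n_agents
    for _ in range(n_rounds):
        giver, taker = rng.sample(range(n_agents), 2)
        if wealth[giver] > 0:  # transfers stop at zero wealth
            wealth[giver] -= 1
            wealth[taker] += 1
    return wealth

wealth = exchange_abm()
print("total wealth:", sum(wealth))  # conserved at 100 * 10 = 1000
print("spread:", min(wealth), "to", max(wealth))
```

    Even this trivial rule set produces a dispersed wealth distribution from identical initial conditions, the kind of emergent, nonlinear outcome the comment argues a linearized model smooths away.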

    Replies
    1. Anonymous, 9:11 AM

      Superiority in replicating the "real world" (I think "the data" is a better term, by the way) is not necessarily a good thing. A vector autoregression (VAR) does a very good job of replicating the data, and an overfitted VAR does an even better job.

      Lastly, there is no "L" in "DSGE" -- that is, they are not, by definition, linear. It is true that many developers of DSGE models use linearization techniques to solve them, but far from all. If a model displays interesting nonlinearities, they are most often explored.
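      The overfitting point is easy to demonstrate with a toy example of my own (a univariate autoregression rather than a full VAR, purely for illustration): when fits with different lag lengths are nested on the same observations, adding lags can only improve the in-sample fit, whether or not the extra lags reflect anything real.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a simple AR(1) process: y_t = 0.5 * y_{t-1} + noise.
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.5 * y[t - 1] + rng.normal()

def ar_insample_mse(y, p, max_lag=50):
    """Least-squares AR(p) fit, returning in-sample mean squared error.
    All fits share the same target window (observations max_lag..end),
    so models with different p are nested and directly comparable."""
    target = y[max_lag:]
    X = np.column_stack([y[max_lag - k - 1 : len(y) - k - 1]
                         for k in range(p)])
    coefs, *_ = np.linalg.lstsq(X, target, rcond=None)
    return float(np.mean((target - X @ coefs) ** 2))

mse = {p: ar_insample_mse(y, p) for p in (1, 5, 50)}
print(mse)  # in-sample MSE never rises as lags are added
```

      The true process has a single lag, yet the 50-lag fit matches the sample at least as well; that is exactly why in-sample replication is a weak criterion for a model.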

  5. What is REALLY wrong with macro
    Comment on Noah Smith on ‘Ricardo Reis defends macro’

    Ricardo Reis asks in a recent paper “Is something really wrong with macroeconomics?” and gives the answer “everything is wrong with macroeconomics.” But there is no need to worry because “Every hour of my workday is spent identifying where our knowledge falls short and how can I improve it.” And then he goes on to show that most of the heterodox critique of macro is unjustified.

    This is indeed the case but from the silliness of heterodox critique does NOT follow by simple reversion that Orthodoxy is — with some obvious corrections, of course — on the right track. Heterodoxy traditionally misses the decisive point as already Hahn noted: “The enemies, on the other hand, have proved curiously ineffective and they have very often aimed their arrows at the wrong targets.” This is why Orthodoxy is chirpy.

    Among the wrong targets is the whole issue of prediction/forecasting and of political bias.#1 The sole criterion for judging a theory/model is material and formal consistency. And the simple fact of the matter is that standard macro is provably false on both counts. Therefore, it is scientifically worthless.

    The silliness of traditional heterodox critique has the paradoxical consequence of guaranteeing the survival of Orthodoxy. Ricardo Reis plays this easy game by frankly admitting undeniable blunders, by countering misplaced critique at great length, by cheerfully promising betterment for the future, and by giving the impression that all is under control and on the right track: “Not a single one of these bright young minds that are the future of macroeconomics writes the papers that the critics claim are what all of macroeconomic research is like today.”

    The vacuousness of the whole orthodox-heterodox wrestling circus ensues from the fact that macro is axiomatically false. Methodologically speaking, axiomatically false is the death sentence for a paradigm. Traditional Heterodoxy, though, never spotted the lethal defects at the core but exhausted itself with the repetitive debunking of non-lethal defects on the surface. The ultimate failure of traditional Heterodoxy is that it never had a superior alternative to offer: “The problem is not just to say that something might be wrong, but to replace it by something — and that is not so easy.” (Feynman)

    There is only ONE effective critique of a false paradigm and this is a new paradigm. The rest is hand waving.

    What is REALLY wrong with macro is that it is microfounded. The Walrasian axiom set is given with: “HC1 economic agents have preferences over outcomes; HC2 agents individually optimize subject to constraints; HC3 agent choice is manifest in interrelated markets; HC4 agents have full relevant knowledge; HC5 observable outcomes are coordinated, and must be discussed with reference to equilibrium states.” (Weintraub)

    The Walrasian axiom set contains three NONENTITIES (HC2, HC4, HC5) and is forever unacceptable. The false microfoundations have to be fully replaced by true macrofoundations.#2 This is what a paradigm shift is all about.

    Ricardo Reis defends the indefensible.

    Egmont Kakarot-Handtke

    #1 See ‘ICYMI Prediction/Forecasting’
    http://axecorg.blogspot.de/2016/10/icymi-predictionforecasting.html
    #2 See ‘From false micro to true macro: the new economic paradigm’
    http://axecorg.blogspot.de/2016/11/from-false-micro-to-true-macro-new.html

    Replies
    1. RE: HC1-HC5.

      Personally I don't see HC2 and HC4 as being huge problems (well, it depends on exactly how HC4 is applied - for example, is a Hobson's choice a failure of HC4 or a failure of something else?). I see huge problems with HC5. The biggest issue is not whether error is created but whether the error term is mathematically convergent or divergent.

      The other big question is, what is being claimed? For example, much of conservative economic theory can be fixed if we simply say "We are not trying to solve for the maximum GDP growth rate, but rather for an economy that meets our criteria of 'fairness'". Then their theories can lead straight into Great Depression II and still be a successful outcome, by their definitions and values.
