Now, of course, it is easy to point out a bunch of flaws in this approach. How will the set of models be chosen? How different are the models from each other? How would the inclusion of an utterly spurious model be detected and prevented? Is there publication bias at work? How can the database's users know what it means to give the models various weights? And so on. But in general, I think that this database is a good thing to do. Anything that makes model uncertainty explicit is good, and I think Wieland et al. have executed this well.
However, I would like to point out a specific error that I think Wieland et al. may be committing here. In fact, Robert Waldmann has already pointed it out, in reference to the modeling approach used by the Bank of England. (Robert Waldmann does not write with a layperson audience in mind, so let me paraphrase instead of quote.) Basically, the Bank of England has what it calls a "core" model, which is a microfounded DSGE model that contains only parameters that are assumed to be policy-invariant - in other words, a model Bob Lucas would approve of as "structural". It then lumps that "core" model in with a bunch of other non-structural, ad-hoc type models to make a "hybrid" model. Finally, it uses the "hybrid" model for policy analysis.
Waldmann's point is this: Why bother with the "core" model in the first place? As soon as you lump it in with stuff that isn't "structural," it becomes pointless. If you're going to have a non-structural model, just use a non-structural model and don't bother with the DSGE thing.
The same problem applies to the database constructed by Wieland et al. It appears to contain a mix of microfounded, potentially "structural" DSGE models and non-structural, ad-hoc models. But these two types of models don't mix.
As I always say, there are two things we might want from macro models: 1. forecasting, and 2. policy analysis. If all we want to do is forecast, we don't need to worry about whether our models are policy-invariant. In fact, making them policy-invariant will probably make them worse at forecasting, since it will severely shrink the number of variables in the model. So if forecasting is what we want, we don't need the structural models, and might as well toss them out, since they will probably just add noise to the consensus forecast.
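(To make the "adds noise" point concrete, here is a minimal simulation sketch - purely illustrative, with made-up error variances and nothing taken from the actual Wieland et al. database - in which one high-variance model is folded into an equal-weight consensus forecast.)

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000
truth = rng.normal(size=T)  # the series being forecast (in deviations)

# Hypothetical forecast-error scales: three decent ad-hoc models versus one
# "structural" model that forecasts much more noisily. Numbers are invented.
ad_hoc = [truth + rng.normal(scale=0.5, size=T) for _ in range(3)]
structural = truth + rng.normal(scale=2.0, size=T)

def mse(forecast):
    """Mean squared forecast error against the true series."""
    return float(np.mean((forecast - truth) ** 2))

print("consensus MSE, ad-hoc models only:  ", round(mse(np.mean(ad_hoc, axis=0)), 3))
print("consensus MSE, structural included: ", round(mse(np.mean(ad_hoc + [structural], axis=0)), 3))
```

With independent errors and equal weights, the consensus error variance is the average of the individual error variances divided by the number of models, so folding in one sufficiently noisy model raises it rather than lowers it.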
But if policy analysis is what we want, then we need to answer the Lucas Critique. In this case, we don't want the ad-hoc models. We want only the policy-invariant models. The ad-hoc models should be tossed out. (Actually, in practice, it's hard or impossible to know if a parameter is really structural. But let's put this issue aside for now.)
So the full Wieland database is not the right mix of models for policy analysis, and it is probably not the right mix of models for forecasting either. To really use this database as it ought to be used, one should be able to easily toggle an "only structural models" option.
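Here is a rough sketch of what such a toggle could look like; the field names ("structural", "weight", etc.) and the example entries are hypothetical, not the schema of the actual database.

```python
# Hypothetical entries: a name, a point forecast, a user-chosen weight,
# and a flag marking whether the model is microfounded/"structural".
models = [
    {"name": "ad_hoc_VAR",    "forecast": 1.8, "weight": 0.4, "structural": False},
    {"name": "NK_DSGE",       "forecast": 1.2, "weight": 0.3, "structural": True},
    {"name": "old_Keynesian", "forecast": 2.1, "weight": 0.3, "structural": False},
]

def consensus(models, structural_only=False):
    """Weighted-average forecast, optionally restricted to structural models."""
    pool = [m for m in models if m["structural"]] if structural_only else models
    total = sum(m["weight"] for m in pool)
    return sum(m["weight"] * m["forecast"] for m in pool) / total

print(consensus(models))                        # full database: use for forecasting
print(consensus(models, structural_only=True))  # structural subset: use for policy analysis
```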
Update: Waldmann has his own critique of the Wieland approach, which is different from (and partly contradicts!) mine.
What about the DSGE-VAR approach?
http://www.ecb.int/events/pdf/conferences/schorfheide.pdf
It kind of merges the two approaches in a meaningful way.
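(For readers who don't click through: as I understand the DSGE-VAR idea of Del Negro and Schorfheide, the DSGE model is not used on its own but as a prior for an otherwise unrestricted VAR, with a hyperparameter $\lambda$ setting how much weight the DSGE restrictions get. Roughly,

$$\tilde{\Phi}(\lambda) = \left(\lambda T\,\Gamma^{*}_{XX}(\theta) + X'X\right)^{-1}\left(\lambda T\,\Gamma^{*}_{XY}(\theta) + X'Y\right),$$

where $\Gamma^{*}(\theta)$ are moments implied by the DSGE model with parameters $\theta$ and $X, Y$ are the data: $\lambda \to \infty$ recovers the VAR approximation of the DSGE model, while $\lambda = 0$ gives a purely ad-hoc VAR. In that sense it does interpolate between the two camps discussed above.)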
Thanks for the link. I very much agree with one thing you wrote here: "(Actually, in practice ...". Thus I very strongly disagree with something else you wrote: "But let's put this issue aside for now.)". I am typing on an iPad, so I will be brief (no applause or sighs of relief are needed). I see no reason to believe that the so-called "structural" models are policy-invariant. In fact, I am absolutely sure that they are vulnerable to the Lucas critique.
From the first sentence in parentheses, I am also sure you agree. The fact that a model is internally consistent and includes only rational maximizing agents does not imply that it is policy-invariant. This would be true if the assumed objective functions and constraints corresponded to the actual real-world objective functions and constraints - that is, it would be true if the model weren't a model but rather the truth, the whole truth, and nothing but the truth.
A model in which we assume that agents have different desires than their true desires can fit the data OK for one policy regime, but not at all for another. This is obvious. A simple example: I observe the sexes of legally married couples and conclude that gay people aren't interested in marriage. My forecast of the effect of policy changes will be way off. I think this is a perfectly fair comparison to, say, estimates of the effect of fiscal stimulus on employment based on models in which the labor market clears and on labor supply elasticities estimated from cross-sectional data.
Such models are (finally) out of fashion. However, I think the empirical failures of New Keynesian models are different in degree but not in kind. I confess ignorance as usual and report my guess as to what is in the literature I haven't kept up with.
As far as I know, consistent microfounded models can fit moments of the data using roughly one free parameter for every phenomenon to be explained. As far as I know, the interaction of the models and the data is about what one would expect if the models were fundamentally wrong. If they are, then they are not useful for policy analysis.
The reason I care about forecasting performance is not that I aim to forecast, but that it is the only possible way to determine whether a model has any scientific value at all. Policy invariance is just one of the desirable features a model might have. I see no reason to seriously entertain the conjecture that existing microfounded macro models (very definitely including the ones on which I am currently working) have any desirable features.
Isn't "et al" short for "et alii"? I think "et. al." is one period too many.
You make a good point, TED.