So far, we don't seem to have gotten a heck of a lot of a return from the massive amount of intellectual capital that we have invested in making, exploring, and applying [DSGE] models. In principle, though, there's no reason why they can't be useful.
One of the areas I cited was forecasting. In addition to the studies I cited by Refet Gurkaynak, many people have criticized macro models for missing the big recession of 2008Q4-2009. For example, in this blog post, Volker Wieland and Maik Wolters demonstrate how DSGE models failed to forecast the big recession, even after the financial crisis itself had happened:
This would seem to be a problem.
But it's worth noting that, since the 2008 crisis, the macro profession does not seem to have dropped DSGE like a dirty dishrag. Instead, what most business cycle theorists seem to have done is simply to add financial frictions to the models. Which, after all, kind of makes sense; a financial crisis seems to have caused the big recession, and financial crises were the big obvious thing that was missing from the most popular New Keynesian DSGE models.
So, there are a lot of smart macroeconomists out there. Why are they not abandoning DSGE? Many "sociological" explanations are possible, of course - herd behavior, sunk cost fallacy, hysteresis and heterogeneous human capital (i.e. DSGE may be all they know how to do), and so on. But there's also another possibility, which is that maybe DSGE models, augmented by financial frictions, really do have promise as a technology.
This is the position taken by Marco Del Negro, Marc P. Giannoni, and Frank Schorfheide of the New York Fed. In a 2013 working paper, they demonstrate that a certain DSGE model was able to forecast the big post-crisis recession.
The model they use is a combination of two existing models: 1) the famous and popular Smets-Wouters (2007) New Keynesian model that I discussed in my last post, and 2) the "financial accelerator" model of Bernanke, Gertler, and Gilchrist (1999). They find that this hybrid financial New Keynesian model is able to predict the recession pretty well as of 2008Q3! Check out these graphs (red lines are 2008Q3 forecasts, dotted black lines are real events):
I don't know about you, but to me that looks pretty darn good!
I don't want to downplay or pooh-pooh this result. I want to see this checked carefully, of course, with some tables that quantify the model's forecasting performance, including its long-term forecasting performance. I will need more convincing, as will the macroeconomics profession and the world at large. And forecasting is, of course, not the only purpose of macro models. But this does look really good, and I think it supports my statement that "in principle, there is no reason why [DSGEs] can't be useful."
Remember, sometimes technologies take a long time to mature. People thought machine guns were a joke after they failed to help the French in the Franco-Prussian War of 1870-71. But after World War I, nobody was laughing anymore.
However, I do have an observation to make. The Bernanke et al. (1999) financial-accelerator model has been around for quite a while. It was certainly around well before the 2008 crisis. And we had certainly had financial crises before, as had many other countries. Why was the Bernanke model not widely used to warn of the economic dangers of a financial crisis? Why was it not universally used for forecasting? Why are we only looking carefully at financial frictions after they blew a giant gaping hole in the world economy?
It seems to me that it must have to do with the scientific culture of macroeconomics. If macro as a whole had demanded good quantitative results from its models, then people would not have been satisfied with the pre-crisis finance-less New Keynesian models, or with the RBC models before them. They would have said "This approach might work, but it's not working yet, let's keep changing things to see what does work." Of course, some people said this, but apparently not enough.
Instead, my guess is that many people in the macro field were probably content to use DSGE models for storytelling purposes, and had little hope that the models could ever really forecast the actual economy. With low expectations, people didn't push to improve the existing models as hard as they might have. But that is just my guess; I wasn't really around.
So to people who want to throw DSGE in the dustbin of history, I say: You might want to rethink that. But to people who view the Del Negro paper as a vindication of modern macro theory, I say: Why didn't we do this back in 2007? And are we condemned to "always fight the last war"?
Update: Mark Thoma has some very good thoughts on why we didn't use this sort of model pre-2008, even though we had the chance.
Update 2: Some commenters and Twitter people have been suggesting that the authors tweaked ("calibrated") the parameters of the model in order to produce the impressive results seen above. The authors say in the paper (p. 13, section 3.1) that they did not do this; rather, they estimated the model using only data before 2008Q3.
Which is good, because calibrating parameters to produce better forecasts is definitely something you are not supposed to do!! There is a difference between "fitting" and "pseudo-out-of-sample forecasting". The red lines seen in the picture above are labeled "forecasts". To do a "pseudo-out-of-sample forecast", you train (fit) the model using only data before 2008Q3, and then you produce a forecast and compare it with the post-2008Q3 data to see how good your forecast was. You should never fiddle with the model parameters to make the "forecast" come out better!
From Section 3.1 of the paper it seems fairly clear that Del Negro et al. did not make this mistake. But I think the authors should explain the forecasting procedure itself in greater detail in the next iteration of the working paper...just in case readers worry about this.
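To make the "fitting" vs. "forecasting" distinction concrete, here is a toy sketch of the pseudo-out-of-sample procedure described above. The series and the AR(1) model are made up for illustration and have nothing to do with the paper's actual model; the point is only the mechanics: fit on data before a cutoff, forecast past it, and only then look at the held-out data.

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.zeros(100)
for t in range(1, 100):                      # simulate a toy AR(1) series
    y[t] = 0.8 * y[t - 1] + rng.normal()

cutoff = 80                                  # everything after this is "the future"
train = y[:cutoff]

# Fit the AR(1) coefficient by OLS on the training sample ONLY.
phi = np.linalg.lstsq(train[:-1, None], train[1:], rcond=None)[0][0]

# Iterated multi-step forecast from the last in-sample observation.
horizon = len(y) - cutoff
forecast = np.empty(horizon)
last = train[-1]
for h in range(horizon):
    last = phi * last
    forecast[h] = last

# Only now compare against the held-out data; the parameters are never
# touched again after seeing it.
rmse = np.sqrt(np.mean((forecast - y[cutoff:]) ** 2))
print(f"phi = {phi:.2f}, pseudo-OOS RMSE = {rmse:.2f}")
```

Re-fitting `phi` after peeking at the post-cutoff data, and reporting the improved fit as a "forecast", would be exactly the cheating described above.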
In some sense it seems inevitable that macro would "always fight the last war", for all sorts of reasons; call them the Lucas critique or what you like. However, this seems to be true only in the limit; in the current case we have fought, and especially won, so few wars that this problem SHOULD be manageable right now.
The Franco-Prussian War was in 1870-71, not 1873. But you knew that, right?
I think the model predicted 1870-71.
Add frictions to your model to predict the last crisis? Seems great, right? And Larry Summers loves it: http://delong.typepad.com/sdj/2013/04/reconstructing-macroeconomics-exchange-mervyn-king-ben-bernanke-olivier-blanchard-axel-weber-larry-summers.html
Well, I can come up with investment allocation models that could give me market-smashing gains over the past x years. In fact I have done precisely that, and I've been tracking the outcomes, and the market is smashing me (at least for now). Overfitting.
I predict that each new set of frictions that are added to DSGE to explain the last crisis will fail in the face of future crises. Too many oversimplifying and wrong micro-assumptions. As you noted yesterday, autoregressions have performed better, and my sense is that that will remain the case, because autoregressions are not (badly) micro-founded. Until we get proper behavioural microfoundations we're barking at a wall.
The La'O and Bigio paper and Brunnermeier's work are also promising models.
Just a former math student over here, but:
Assume that the state of the economy can be expressed in terms of state variables expressed as probability distributions of numerical quantities. Further assume that there exists an (unknown and unknowable) oracle function which, given the existing state and numerical parameters describing the economy and policy choices, will accurately predict the probability distributions of the state of the economy at a future point in time.
1) All attempts at modelling economic outcomes are approximations to the oracle function
2) The better two attempts each are at approximating the oracle function, the closer those two attempts must be to each other (in some metric in which the triangle inequality applies).
3) All useful attempts at modelling the oracle must have some underlying algebraic equivalence.
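Point (2) above is just the triangle inequality. In symbols (the metric $d$ and the oracle $f^{*}$ are my notation, not the commenter's):

```latex
% If models A and B are both within \epsilon of the oracle f^*,
% they are within 2\epsilon of each other:
d(A, B) \;\le\; d(A, f^{*}) + d(f^{*}, B) \;\le\; \epsilon + \epsilon \;=\; 2\epsilon
```

So two independently successful models cannot disagree by much, which is what lends point (3) its plausibility.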
'to me that looks pretty darn good!'
I don't know about you, but they don't compare their model's predictions against any other forecasting model, so how can we possibly say? No RMSEs, or other criteria to assess how well this model forecasts relative to other models. It all looks pretty ad hoc to me, bolting financial frictions onto a real model. We have plenty of econometric models that can handle finance relatively well. The old school Cowles Commission models a la Fair, for a start. Not cutting-edge stuff, but they sort of work if you think that finance is important.
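For what it's worth, the comparison the commenter is asking for is mechanically simple. A minimal sketch with made-up numbers (not the paper's data): benchmark a model's forecasts against a naive constant forecast by RMSE.

```python
import numpy as np

# Made-up illustrative numbers, NOT the paper's data.
actual   = np.array([ 0.5, -1.2, -2.0, -1.0,  0.3])  # "realized" values
model_fc = np.array([ 0.4, -0.9, -1.8, -0.8,  0.5])  # candidate model's forecasts
naive_fc = np.full(5, actual[0])                      # naive benchmark: repeat the last observed value

def rmse(forecast, outcome):
    """Root mean squared forecast error."""
    return float(np.sqrt(np.mean((forecast - outcome) ** 2)))

print("model RMSE:", rmse(model_fc, actual))
print("naive RMSE:", rmse(naive_fc, actual))  # the model "wins" if its RMSE is lower
```

Reporting a table of such RMSEs against a few standard benchmarks (a VAR, a random walk, the Blue Chip consensus) is the usual way to substantiate a forecasting claim.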
"Why was the Bernanke model not widely used to warn of the economic dangers of a financial crisis?"
How much we should be worried about financial crises, according to the Bernanke et al./Christiano et al. models, depends on the standard deviation of risk shocks. Maybe before 2008Q3, this standard deviation was estimated to be small. Del Negro et al. show the model can predict subsequent inflation and growth pretty well, *conditional* on the large 2008Q3 shock; this doesn't mean the model would have warned you that Lehman-sized shocks could occur, if you had estimated it on pre-2008Q3 data. (Maybe it would have. I don't know.)
Maybe before 2008Q3, this standard deviation was estimated to be small. Del Negro et al. show the model can predict subsequent inflation and growth pretty well, *conditional* on the large 2008Q3 shock; this doesn't mean the model would have warned you that Lehman-sized shocks could occur, if you had estimated it on pre-2008Q3 data.
OK, let's assume that's true. I think there are two basic arguments you could then make.
Argument 1: "Financial DSGE models couldn't anticipate the size of the crisis shock before 2008Q3."
I think this might be correct. Maybe not, since there were lots of other countries to observe, and many had had big financial crises before 2008. But if you restrict yourself to U.S. data, then yes.
BUT, notice that standard DSGE models failed to predict not only the size of the 2008 shock, but also the size and duration of the recession that followed. The ability in 2008Q3 to predict a big recession and long stagnation would still have been very important! Compare the 2008Q3 forecasts of the models Wieland samples with the forecasts from the Del Negro paper. Big and important difference!
Argument 2: "Since pre-2008 financial shocks were small, no one thought it was important to make models in which financial shocks played a big role."
Though I think this probably does describe the thinking of a bunch of economists pre-2008, I think it's completely wrong to think this way. There were financial crises in many countries before 2008; even if you don't calibrate your model on the data from those countries, you should use those countries' experiences to motivate the inclusion of financial frictions in your model. Also there was the Great Depression...
I agree on both counts.
"BUT, notice that standard DSGE models failed to predict not only the size of the 2008 shock, but also the size and duration of the recession that followed. The ability in 2008Q3 to predict a big recession and long stagnation would still have been very important! Compare the 2008Q3 forecasts of the models Wieland samples with the forecasts from the Del Negro paper. Big and important difference!"
Noah, what would old, dusty, pre-DSGE models have predicted, if you plugged in the biggest financial crash in 70 years? A massive recession?
I don't fully agree that DSGE + financial frictions is a success story. If you look at recent papers with financial frictions (for example some work by Gertler, Kiyotaki, Dib, etc.) you will see that they introduce financial frictions on the supply side. They do this by modeling bank expropriation, which is used to introduce a budget constraint on the bank and in this way create a friction on the supply side. But this expropriation and other related stuff in the model is not directly observable. So you are free to calibrate it as you wish, or as the first person who published a paper on the topic wished to. And if you are free to calibrate it as you wish, you can explain anything you want by choosing the parameters appropriately.
So you think they tweaked the parameters to make the pseudo-out-of-sample forecast fit the data?
That would be cheating, if they did. Inexcusably poor understanding of "forecasting" vs. "fitting" at best, academic dishonesty at worst.
I hope this is not what they did.
I am no expert on this (so I could well be wrong), but my impression is that their modeling approach requires calibrating a bunch of unobservable variables, so there is nothing that restrains them in how they do it. So I don't think it's cheating; it's part of the whole exercise of trying to calibrate your model. If you don't observe some variable, you still need to put some value on it, and all the other magnitudes in your model will depend on what value you chose. But I've not read those papers too carefully...
They estimated their model (using a Bayesian prior). They didn't calibrate it. And they fitted it to match the pre-2008Q3 data only...so the forecasts you see are "pseudo-out-of-sample" (i.e., they pretend that 2008Q3 and everything after it was the "future", for the purpose of seeing how well the model did after being "trained" on pre-2008Q3 data).
This is what they should have done, so I don't think they fudged it.
Using a Bayesian estimation procedure does not necessarily imply that there was no calibration involved. If your priors turn out to be important (I haven't spotted a prior-posterior comparison in the paper), i.e. you put a lot of weight on the priors, it's just another fancy way of calibrating the parameters...
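The commenter's point in one line (notation mine, not the paper's): Bayes' rule gives

```latex
p(\theta \mid Y) \;\propto\; p(Y \mid \theta)\, p(\theta)
% If the prior p(\theta) is (nearly) a point mass at some \theta_0,
% the posterior sits at \theta_0 almost regardless of the data Y.
```

so with a sufficiently tight prior the data barely move the estimate, and "Bayesian estimation" collapses into calibration by another name. That is why a prior-posterior comparison matters for judging the forecasting claim.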
Yes, that's right...I predict that this will be a big Achilles heel of the new "good at forecasting" New Keynesian DSGE model literature, including Smets-Wouters and these financial-accelerator models.
As to whether the prior was fiddled until the forecasts came out right...if Del Negro et al. did that, they know it was bullshit, so my "Update 2" will serve as a gentle reminder... ;-)
Wow, you were impressed! I am feeling very, very cranky, partly as I feel abandoned on anti-DSGE Island. I haven't read the paper, so my comment isn't worth reading.
That said, I will now imagine a superficially convincing but invalid article with that title and abstract.
1) Recall the absolutely key fundamental insight that "making predictions is difficult, especially about the future" (Yogi Berra). What we have is, 4 years after the event, a working paper in which a DSGE model manages to fit the now long-known data. Fitting is not forecasting. It is easy to be wise after the fact.
2) I claim DSGE models generally need one arbitrary constant per phenomenon to explain. In this case it is the correlation between a quality premium and subsequent output growth. Scientific success occurs when the number of stylized facts fit, or correct predictions made, exceeds the number of degrees of freedom. A failed approach is neck and neck with the data. DSGE seems not to have pulled away with this paper.
3) The number of exogenous unexplained random variables keeps growing. 2008 was extraordinary, because a downturn that was large, but not unprecedented in the post-WWII era, was associated with huge yield spreads. I am pretty sure that they are ascribed to something unobservable. In Bernanke Gertler Gilchrist 1999, quality premia change endogenously with GDP. Ascribing 2008 to a financial shock will not shock anyone. The question is whether we can predict such financial events. A model with exogenous shifts in the variance of ability across entrepreneurs does not do this.
4) Is there anything added by the D or the GE?
In Bernanke Gertler Gilchrist 1999, the effects of investment decisions by other firms on sales by this firm, and so on the interest rate it must pay to borrow, are modelled. There is something genuinely general equilibrium there. I concede that it is a good paper in the (fairly recent) theoretical macro literature (I think this may be my only such concession, except for related papers by overlapping sets of authors).
But you can get a good fit to recent investment with an accelerator and an even better fit if you toss in "loan officer report on lending conditions"
Consumption is always well modelled as a backward looking smoothed series of aggregate disposable income. Real output is well modelled as aggregate demand. I think inflation is well modelled with an adaptive expectations augmented Phillips curve.
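One textbook way to write the adaptive-expectations-augmented Phillips curve the comment refers to (a standard sketch, not the commenter's exact specification):

```latex
\pi_t = \pi^{e}_t - \alpha\,(u_t - u^{*}) + \varepsilon_t,
\qquad
\pi^{e}_t = \lambda\,\pi_{t-1} + (1-\lambda)\,\pi^{e}_{t-1},
\quad 0 < \lambda \le 1
```

Here expected inflation $\pi^{e}_t$ is a backward-looking geometric average of past inflation, matching the "backward looking smoothed series" flavor of the models described above.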
Models which are, by now, ancient fit the data very well, certainly needing no more fiddling than the DSGE model, which fits OK.
What is the point of DSGE? The idea is that a micro-founded model makes useful predictions following extraordinary events (so just looking at what happened last time isn't OK). That sure didn't work for DSGE in 2009. Or also that they work across changes in policy regimes.
So a multiplier-accelerator Phillips curve model would make the same predictions whether or not the Fed were attempting extreme non-standard monetary policy. Uh, it did. And it fit the data. Fancier models made it possible to fantasize about manipulating expected inflation and therefore getting higher investment than one would guess with an accelerator. The linked study shows that this was a step away from reality.
Fitting is not forecasting. It is easy to be wise after the fact.
Yes, sure. BUT, keep in mind that pseudo-out-of-sample forecasting, while not the same thing as true out-of-sample forecasting, is also not the same thing as fitting.
I claim DSGE models generally need one arbitrary constant per phenomenon to explain...The number of exogenous unexplained random variables keeps growing...DSGE seems not to have pulled away with this paper.
I'd say instead that it's not clear that DSGE has pulled away. A new parameter has been added to explain a previously ignored phenomenon. But that doesn't mean that the model will then fail to "pull away" by explaining more phenomena in the future without the addition of yet more parameters. I'd say that the good pseudo-out-of-sample fit of this model is encouraging.
The question is whether we can predict such financial events.
No, I do not think that that is the question. I think that is A question. I think that another, also-very-important question is whether given a financial shock, we can predict the real economy's response to that shock.
I think inflation is well modelled with an adaptive expectations augmented Phillips curve.
Didn't Hall (2011) show that this was not the case?
Hmm, I can't resist arguing about grammar. "DSGE seems not to *have* pulled away" is in some past tense. I didn't make a claim about what will "then" happen in the future. Well, uh, tenses, grammar. I stand by my claim.
I think that pseudo out of sample forecasting can be substantially the same as fitting.
The ways this can happen are
A) One tries something, finds it doesn't fit pseudo-out-of-sample, then tries something else. This is fitting. The computer isn't doing all of the work, but the process is trying different things till one gets a good fit.
B) Pseudo-explaining an ad hoc reduced-form model which fits the data. First fit. Then figure out a model which implies the coefficients of the reduced-form model. In this case, step 1 is: look for dramatic extraordinary behavior of time series in 2008 (many are well known and were discussed a lot in 2008-9, especially including huge quality premia).
Step 2: toss one in a VAR along with the DSGE state from some sort of filter. This is fitting, not forecasting.
Step 3: motivate the variable as an indicator of an otherwise hard-to-observe rare shock, and again filter so that the deep disturbance in the problematic period is basically that newly introduced shock.
I think we have a case of B, and that step 2 is here.
My thought from 4/16/12:
"'The other difference between this model and SWπ is the use of observations on the Baa-ten-year Treasury rate spread, which captures distress in financial markets.'
is a red flag. How is this incorporated ??? I think it is an ad hoc add on based on regressing outcomes on DSGE forecasts and the spread."
Didn't Hall (2011) show that this was not the case?
Kids these days keep up with the literature. I don't know which of the two Hall (2011)s you are citing (and have read neither).
I think "The Long Slump" NBER and AER. If so, the issue is that the Phillips curve is a curve and we are at an unusual point with low inflation and high unemployment. My view is that there is strong downward nominal rigidity so that the slope of the Phillips curve (graphed wage inflation on the y axis) is very low at low inflation rates.
One tries something, finds it doesn't fit pseudo-out-of-sample, then tries something else. This is fitting.
Yes. Depending on how much you let yourself do this, it could be exactly the same, or only slightly the same.
is a red flag. How is this incorporated ??? I think it is an ad hoc add on based on regressing outcomes on DSGE forecasts and the spread."
Yeah, it's not a structural model, is it? But if I were trying to forecast, I'd use stuff like that too, and say to hell with structural-ness.
My view is that there is strong downward nominal rigidity so that the slope of the Phillips curve (graphed wage inflation on the y axis) is very low at low inflation rates.
So Bernanke didn't use his own model? He certainly saw nothing on the horizon that troubled him. At least, judging by what he said publicly.
In Nov 2008, Michael Biggs made a pretty good prediction of the plunge and subsequent rebound in output based entirely on the expected path of the "credit impulse". No need for fancy DSGE models, nor even for knowledge of the size of any pending fiscal response; just an awareness that private debt flows are the major driver of economic activity when that debt stock is almost 3x bigger than GDP or public debt. See fig. 6, here:
This was a true before-the-fact prediction , unlike the after-the-fact fudgefest described here.
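For reference, the "credit impulse" the commenter means is, as I understand it, the change in the flow of new credit scaled by GDP; sketched here from memory, so check Biggs's original paper for the exact definition:

```latex
\text{credit impulse}_t
\;=\; \frac{\Delta D_t - \Delta D_{t-1}}{\text{GDP}_t}
\;=\; \frac{\Delta^{2} D_t}{\text{GDP}_t}
```

where $D_t$ is the stock of private-sector debt. The idea is that demand growth tracks the *second* difference of debt, which is why output can rebound even while the debt stock is still shrinking.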
Why do you think that linear models have the capability to model the complex nature of the economy, which is punctuated by non-linearities? All you are doing, as Willem Buiter said, is to "strip the model of its non-linearities" and map them "into well behaved additive stochastic disturbances".
Noah, I still wonder why you would support this kind of modelling on any grounds, given the assumptions made.
The basic model assumes the economy would be in perfect equilibrium if a certain set of conditions held. Of course, this is empirically baseless, as those conditions have never held in the real world in which we happen to live. Then you start adding ad hoc supplementary assumptions to make the models fit the data.
Instead, it seems to me that the scientific way of doing this is to start with a basic model that has the real world as a possible state, and then improve upon that model.
What is currently being developed by most economists seems remarkably like a Ptolemaic project.
So what's your take on this kind of position? You don't need to model using obviously false microeconomic assumptions; you can start modelling at a higher level, like classes, at least as long as the state of micro is as poor as it currently is. Then you model things in a way that has the world as a possible state, and your microfoundations most certainly will need to match what we observe on planet Earth.
What is currently being developed by most economists seems remarkably like a Ptolemaic project.
Yes. Definitely. But note that Ptolemaic models are pretty good at forecasting! Better than Copernicus' heliocentric circular-orbit model (though less parsimonious). Sure, Ptolemaic models were massively misspecified. But they were damn good at predicting eclipses!
Someday, hopefully, we'll get macro models that are not horribly misspecified, but in the meantime, we need to do forecasting as best we can, so...keep adding epicycles!
That kinda seems like a reasonable position. However, we know that, eventually, a Ptolemaic system will get it wrong. In astronomy, the only penalty was not being able to explain the Venusian phases, but in economics the price has been, and will continue to be, absurdly high.
Also, by the way, heliocentric models with circular orbits also used epicycles to explain away their forecasting problems; they just needed fewer than the geocentric model!
That kinda seems like a reasonable position. However, we know that, eventually, a Ptolemaic system will get it wrong. In astronomy, the only penalty was not being able to explain the Venusian phases, but in economics the price has been, and will continue to be, absurdly high.
Yes. BUT, the problem is that if there is a conceptual breakthrough equivalent to heliocentrism, we haven't found it yet. Or maybe we've found it but we don't recognize its importance.
Sure. But we know that those microeconomic assumptions are wrong; yet most economists dismiss any attempt to model the economy without making use of them (I'm not saying that models by non-mainstream economists such as those of MMT or PK are right, I'm just saying they're entirely dismissed for being incompatible with a bunch of false assumptions).
I kinda feel Marx's approach of starting by modelling using classes is closer to a scientific approach than what is being used today (of course, Marxian ideology is wrong).
But we know that those microeconomic assumptions are wrong; yet most economists dismiss any attempt to model the economy without making use of them
On which I agree with you:
"...keep adding epicycles!"
I know that pointing out mistakes makes me a tiresome pedant. Nevertheless, I must point out that the "adding epicycles" metaphor is not based on sound history. The alleged additions never actually happened, except in the minds of heliocentric moderns.
Noah: "Yes. Definitely. But note that Ptolemaic models are pretty good at forecasting! Better than Copernicus' heliocentric circular-orbit model (though less parsimonious). Sure, Ptolemaic models were massively misspecified. But they were damn good at predicting eclipses!"
But the whole point of this controversy is that the DSGE models were lousy at predicting the crash! You're excusing unreality in the model on the strength of excellent predictive power that doesn't exist.
You'll have far more success with your models if you start discussing modelling complex systems with major meteorological departments.
Your current mathematical tools are comical. The DSGE model is limited. If Newtonian physics resulted in a model of equal usefulness, then it wouldn't have made it out of the eighteenth century!
Actually, finance departments often talk to meteorologists... ;-)
Agh, this is such a frustrating debate. The reason why we had the Great Recession is not because of the financial crisis; it's because of the demand that was lost from the housing collapse. There's no logical reason why a financial crisis should destine a country to years of sluggish growth. Dean Baker has made this point like a thousand times: http://www.cepr.net/index.php/op-eds-&-columns/op-eds-&-columns/revisiting-the-second-great-depression-and-other-fairy-tales
So why are the theoreticians drawing the wrong conclusion? We don't need more financial frictions in macro models; we need the models to better monitor imbalances, so that we can prevent bubbles from occurring in the first place. Macroeconomists set the bar too low if the only thing they can do is predict a financial crisis right before the crisis happens.
The reason why we had the Great Recession is not because of the financial crisis; it's because of the demand that was lost from the housing collapse.
Maybe Dean Baker is wrong?
Where is he wrong? What is the mechanism that connects "financial crisis" to "five-year slump"?
Perhaps I am being ignorant, but isn't the mechanism simultaneous deleveraging?
OK, I can definitely think of many mechanisms. But suppose I couldn't.
Would that show that Dean Baker is right?
Nobody is proving anything to anybody. That's the way economics works, unfortunately.
But suppose you did say "simultaneous deleveraging." I would then ask: what is it about a financial crisis that causes the gov't sector to deleverage?
Joe: "The reason why we had the Great Recession is not because of the financial crisis; it's because of the demand that was lost from the housing collapse."
Noah: "Maybe Dean Baker is wrong?"
Read your Krugman, Noah. He's repeatedly pointed out that the right has worked very, very hard to ruin the response to the crash as much as possible: limiting the size of the stimulus, making sure that the quality of the stimulus was wrong, and ('50 Little Hoovers') making sure that there was a massive cut in state and local level spending, which IIRC dwarfed the federal stimulus.
Thx Barry. I'd really like to push Noah on this point. The reason why we have a depressed economy is that the gov't failed to adequately fill the demand that was lost when the housing bubble collapsed. How is the financial crisis (or financial frictions in general) in any way related to this failure?
Here is a forecast any human being could have knocked out with Ray Fair's macro model.
Equation 11, firm output (real GDP). Estimated up until 2007q1; dynamic forecast after that.
Here's the same exercise estimated to 2008q3, dynamic forecast from 2009q4. The RMSE drops from .016 to .012.
So the model has overestimated growth during the Great Recession, but you would expect it might, given that the estimation period is 1954-2008ish. This is without any historical data covering a large recession. Overall, the traditional macro model forecasts a considerable recession with reasonable accuracy.
Cowles-era economics seems to have performed well before and during the crisis without needing any ad hoc/ ex post facto adjustments. This seems to back up Robert J. Gordon's claim that 1978-era macro has done better than modern macro during the crisis.
Can the finance-augmented DSGEs beat these predictions? Can they beat simple autoregressive models?
I don't know because the authors don't benchmark their models against other methods. This to me seems unsatisfactory.
Don't get me wrong, a lot of the DSGE stuff is cutting edge and worthwhile as an academic exercise in and of itself. However, for practical policy evaluation and forecasting purposes, we probably have existing models that will continue to outperform DSGE models for a while yet.
I am actually quite impressed as well. I would be even more impressed if they produced some estimates about when the output gap will go back to zero and when (if?) unemployment goes back to around 5%. If those estimates are in the ballpark of what ends up happening, then I would say there is some there there.
I think Mark Thoma is 100% correct in his analysis!
production networks and financial frictions
Thanks, Anon. Which one of La'O's papers do you like the best, in this area?
This survey on forecasting with DSGE models
by some of the same authors makes similar points about forecasting in the Great Recession, and compares DSGE forecasts with the Blue Chip (BC) consensus. Given that consensus forecasts are typically better than any individual forecast, and the professional forecasters in BC combine model forecasts and judgement (which is usually thought to improve on pure model-based forecasts), this seems like a pretty competitive benchmark.
The Fair model is probably one of the best of the large old-style Keynesian models. Its GDP growth forecasts (table 4) during the Great Recession are not too spectacular, I'd think
(though again, it would be nice to also know more about the uncertainty bands surrounding these forecasts and whether or not, say, the 70% or 90% bands would have included actual outcomes starting from September 2008).
La'O and her coauthors, like Marios Angeletos, represent another line of DSGE research that could completely change our perception of demand shocks. Thinking in particular of papers like,
and for a search/matching frictions based approach to aggregate demand,
As for achieving some sort of synthesis between DSGE models (with heterogeneous agents) and ABMs in terms of modeling decision making, I think the sparsity-based bounded rationality of Gabaix may be promising,
We appear to be on exactly the same page...I saw Gabaix present his sparsity stuff at a conference, and thought it was quite cool. And I really liked the Angeletos and La'O paper a lot.
Thanks for the link to the forecasting survey!
I note that the Fair model, and the financial-frictions-modified DSGE, forecast well with 2012 vintage data.
How well do they go with real time data?
As Daniels points out, the Fair model didn't do too well with real time data available at the time forecasts were made.
Poor real time data quality may be an issue for all forecast approaches. This is the Orphanides critique I guess.
Hey guys, this is very nice and remarkable. Very good result, despite being limited to two economic series, GDP and inflation. The rest of the series are not explained well, and there is room for improvement.
However, I would like to mention that the authors in the paper condition their forecast on the federal funds rate and the spread of the 4th quarter, which, as far as I know, should be available only in January 2013 because it is an average of the 3 previous months of 2009. Unless they don't use the actual rate and spread; the latter, though, in times of crisis is very volatile over the three months, since the central banks keep reducing the rate, and hence the spread, often. This means that the policy maker can estimate the GDP series either with considerable lag (3 months), which is little gain in terms of time over actual estimates given by statistical divisions, or with considerable uncertainty due to the volatility of the Fed rate in times of credit crunch.
I'm late here, Noah, but I'm a bit disappointed by your willingness to get behind a model with such limited support. You have previously referenced "curve fitting" in a pejorative manner, yet this is what you do here.
I'd actually be more interested in whether or not the mechanics of the model match up with the mechanics of the crisis in the real world, than whether its behaviour looks more or less like a series of data points which are likely to be highly vulnerable to chance events. This is why I prefer something like Keen's model, even though the cycles are not as close to the data (afaik).
I'm late here Noah, but I'm a bit disappointed by your willingness to get behind a model with such limited support.
I'm not "behind" it - I still think it's probably 99% misspecified, and that even the "curve fitting" success will turn out to be cherry-picked. But I like the emphasis on matching DSGE models to data. Even curve fitting is better than just herping derp.
This is why I prefer something like Keen's model, even though the cycles are not as close to the data
But the mechanics of Keen's models look absolutely nothing like real-world crises!