Sunday, July 15, 2012

Steve Williamson explains modern macro


...Well, not actually all of modern macro. Just the way that modern macroeconomists define the "business cycle". But since the "business cycle" is the phenomenon that macroeconomists want to explain, this is really key, and it's something that doesn't get talked about a lot. So go read Steve Williamson; the post is really quite excellent, and you will learn more about modern macro from reading it than from days of reading a graduate textbook. After you are done reading, come back and continue this post.

What is a "business cycle"? It's not immediately obvious what it is. We have this idea that sometimes the economy does well, and sometimes it doesn't do well. Sometimes jobs are hard to find and don't seem to pay very well, sometimes jobs are easy to find and just throw perks at you. Sometimes lots of new buildings are going up, sometimes not many are. Etc.

Unlike seasons, these business "cycles" seem not to all last the same amount of time, or come at regular intervals. So maybe the "cycle" is really just randomness. Sometimes something happens to make the economy go well, sometimes something happens to make the economy not go well.

BUT, here's the thing...the economy seems to do steadily better and better over time. Only rarely do things get so bad that we actually produce less stuff than the year before. So people often think of the economy as containing a "trend" - some underlying force making us do better and better - and a "cycle" - some random thing that makes the economy do even better than the trend, or even worse, for a short while before going away.

The Hodrick-Prescott "Filter" (or H-P Filter) does not clean your water supply. It is a method for turning a time-series - say, GDP - into a "cycle", by subtracting out the "trend". If you are a "business cycle theorist", what you do for a living is basically this:

Step 1: Subtract out a "trend"; what remains is the "cycle".

Step 2: Make a theory to explain the "cycle" that you obtained in Step 1.

The H-P Filter is just a method for doing Step 1. You take a jagged time-series and you smooth it out, and you call the smoothed-out series the "trend". That's it. Whatever is left you call the "cycle", and you make theories to try to explain that "cycle".
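
(For the curious: formally, the H-P filter picks the "trend" τ_t for a data series y_t by solving a penalized least-squares problem. This is the standard textbook definition, added here for reference; it isn't in Steve's post.)

$$\min_{\{\tau_t\}} \; \sum_{t=1}^{T} (y_t - \tau_t)^2 \;+\; \lambda \sum_{t=2}^{T-1} \big[(\tau_{t+1} - \tau_t) - (\tau_t - \tau_{t-1})\big]^2$$

The first term punishes the "trend" for straying from the data; the second punishes it for bending. The knob λ controls the trade-off.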

But how much do you smooth? That's a really key question! If you smooth a lot, the "trend" becomes log-linear, meaning that any departure of GDP from a smooth exponential growth path - the kind of growth path of the population of bacteria in a fresh new petri dish - is called a "cycle". But if you don't smooth very much, then almost every bend and dip in GDP is a change in the "trend", and there's almost no "cycle" at all. In other words, YOU, the macroeconomist, get to choose how big of a "cycle" you are trying to explain. The size of the "cycle" is a free parameter.
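
To make that concrete, here's a minimal sketch in Python (my own toy example, using the hpfilter function from statsmodels on fake random-walk "GDP" data; nothing below comes from Steve's post or from Prescott):

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

# Fake log-GDP: a deterministic trend plus accumulated random shocks.
np.random.seed(0)
T = 200  # quarters
log_gdp = 0.005 * np.arange(T) + np.cumsum(0.01 * np.random.randn(T))

# lamb is the smoothing parameter (the lambda above). 1600 is the
# conventional choice for quarterly data; a bigger lamb means a smoother
# "trend" and hence a bigger "cycle" left over to explain.
for lamb in (10, 1600, 100000):
    cycle, trend = hpfilter(log_gdp, lamb=lamb)
    print(f"lambda = {lamb:>6}: std of extracted 'cycle' = {cycle.std():.4f}")
```

Same data, three different-sized "business cycles." That's the free parameter at work.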

Now, let's think about those explanations. One such explanation is the original RBC ("real business cycle") model, invented by Finn Kydland and the same Prescott who co-invented the Hodrick-Prescott Filter. This model won the pair a Nobel Prize in 2004. I've criticized the RBC model, but let's forget about that criticism for now. How did Prescott show that his model explained the business cycle? What he did was this: First, he chose some values for the parameters in the RBC model that seemed reasonable to him ("calibration"). Then, he simulated an economy with the RBC model, and measured the size of the simulated fluctuations that it produced. Finally, he compared the size of those fluctuations with the size of the "cycle" that he got out of an H-P Filter, and decided that the two were pretty close in size. Thus, he concluded, the "cycles" of economic activity that we see in the real world could be generated by the RBC model, and hence the RBC model was a good one.
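
Schematically, that validation exercise boils down to something like the sketch below (again my own toy example: the AR(1) shock process is a crude stand-in for the actual RBC machinery, and every parameter value here is made up):

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

np.random.seed(1)
T = 200  # quarters of fake data

# "Real world": a toy random-walk-with-drift stand-in for log GDP.
data = 0.005 * np.arange(T) + np.cumsum(0.01 * np.random.randn(T))

# "Model": output driven by a persistent AR(1) "technology" shock.
# rho and sigma are "calibrated" by eye -- exactly the judgment call at issue.
rho, sigma = 0.95, 0.007
z = np.zeros(T)
for t in range(1, T):
    z[t] = rho * z[t - 1] + sigma * np.random.randn()
simulated = 0.005 * np.arange(T) + z

# Run BOTH series through the same H-P filter, then compare moments.
data_cycle, _ = hpfilter(data, lamb=1600)
sim_cycle, _ = hpfilter(simulated, lamb=1600)
print(f"data cycle std:  {data_cycle.std():.4f}")
print(f"model cycle std: {sim_cycle.std():.4f}")
# If those two numbers look "pretty close", declare the model a success.
```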

Now, I've criticized this method of validating models (which is called "moment matching"). But let's put aside that criticism for now, and think about the H-P Filter. Remember that we get to choose how much to smooth the time-series. The less we smooth, the smaller the "business cycle" becomes. So, up to a point, by choosing how much to smooth, we can choose to make the business cycle as big or as small as we like!

So what Prescott did was:

A) Chose how big of a "business cycle" he wanted to model,

B) Built a model of business cycles that produced fluctuations of about the size he chose in step A, and

C) Claimed to have explained the business cycle.

Now this may sound like a big fat hoax, but it's not quite. The amount of smoothing in the H-P filter is a free parameter, but if you choose it too big or too small, people will be skeptical. After all, we have other measures of recessions, like the "NBER recessions". If you smooth so much or so little that your "cycles" don't coincide at least roughly with those recessions, people won't buy your theory. They will say "Aww come on, really?" And in fact, some people responded that way to the RBC theory when it came out. But a critical mass of people gave it credence, which is why it became the basis of most subsequent models of the business cycle, and won a Nobel Prize.

What I want to point out here is how many judgment calls there are in modern macro. There is the judgment call of how big you think the "business cycle" is compared to the "trend". There is the judgment call of the parameters you think are reasonable to stick into your model. And there is the judgment call of whether you think the simulated fluctuations produced by your model are "close enough" in size to the real fluctuations. Actually, there are more judgment calls I haven't even talked about, such as the judgment call of whether a "shock" (a random thing that causes "cycles") is "structural" or not (see here if you want to learn more about that).

Now, some modern macroeconomists will tell you that all these judgment calls are fine. A theorist's conclusions, they will tell you, follow from their assumptions. Judgment calls are just assumptions. The job of a theorist, they will tell you, is to make a theory that is internally consistent. The purpose of the mathematics is to show that the conclusions - the policy recommendations, the forecasts - flow logically from the assumptions, or judgment calls. As for which judgment calls are appropriate, well, that is what academics spend their time arguing about, using their common sense to guide them.

Doesn't it seem to you that this way of doing "science" is a little too vulnerable to cultural/political biases among the body of practicing macroeconomists? It seems that way to me. But maybe I am wrong.

56 comments:

  1. It's kind of like applying linear interpolation to random numbers until you get a shape you want, and then putting the real work into making it seem complicated.

  2. How the model works is not that important. Calibration is not a bad idea in itself (although I doubt that RBC is the best use that can be made of that approach).
    What's important is the question the model is trying to answer, and the answer it delivers.

    If I recall correctly, RBC asks whether cycles occur "naturally" in a free-market economy, and the answer is yes, and it is optimal to let cycles occur without bothering with fiscal or monetary policies.

    The problem is not that the answer is debatable, it is that it's the wrong question.
    The question should be "are cycles socially acceptable?", or more precisely, "to what extent can cycles be socially acceptable?"
    One may consider such a question politically biased. Fair enough. But one must admit that not asking it is even more biased.

    Replies
    1. "If I recall correctly, RBC asks whether cycles occur "naturally" in a free-market economy, and the answer is yes."

      Well, yes, but only if you assume "nature" continually creates highly-persistent random shocks to "productivity" (both positive and negative). That is, if you assume that nature creates random positive and negative movements, then positive and negative movements occur naturally. That makes the whole thing sound really fishy.

      But, I don't think you've quite worded the RBC "question" correctly. The question is probably more like, "Given that I assume that economic activity can be broken up into trends and cycles, can I create a model that creates similar cycles?" And the answer is, "To an extent."

      The part not being mentioned in this post is that it's not just one business cycle that's being explained. That "cycle" is also being broken up into its GDP components: consumption, investment, etc., which also have their "cycles." So the question isn't really whether or not it can match the main business cycle. In a sense, that part is often entirely fudged by picking stochastic shocks of hand-picked variances, frequencies, and persistence until the two match.

      It's not so much, "Hey! My model matches the business cycle!" as it is, "I've created a business cycle by tailoring a random-shock data-generating process to match the observed cycles as I've defined them via my de-trending procedure. Given that, when I break down that cycle into its components (consumption, investment, etc.), do the components in my model match the components as measured by the US gov't?" And here, what's being measured is the relative size of each of these components' variances and their correlations.

      Some of them, like the comparisons of consumption and investment to each other (and to GDP as a whole), are pretty decent. Other factors, like unemployment, were really lackluster in that original model. The early international models (ie, more than one country), had/have problems as well regarding cross-country correlations.

      In essence, they said, "Given that we've purposely re-created simulated economic fluctuations that mimic observed fluctuations, can a model built on micro behavior re-create a few of the general stylized facts we observe?"

      The answer wasn't great, but it was pretty good...enough for people to keep playing with it.

      But, it does indeed suck for many predictions. The model's fluctuations are fundamentally based on random shocks. By definition you can't predict randomness. Fancier models have endogenous shocks (i.e., changes in monetary or fiscal policy, etc.), but the majority of the underlying cycle is still just a random data-generating process.

      If the question is, "given that a big thing just happened, can we model how people will react?" then the model does OK, but not great. But if the question is, "Why do big things suddenly happen?" these sorts of models won't tell you the answer because they essentially just say, "well, we assume that big random things sometimes happen."

  3. Based on your description (only), it seems like the metric for quality is only a generic fit between the statistics of fluctuations of empirical time series (after smoothing) and the time series generated by the model. You don't say how the fit is measured; it could be looking at the spectrum or just some scalar summary value. Unless the fit measurement is very powerful, this seems like a very weak metric to me.

    Every theory is going to have some free parameters. The norm needs to be that the behavior of the model is dictated much more by matching the data than by the free parameters. This does not seem to be true in the case you describe (again, unless the fit metric is much better than it sounds).

    From other descriptions of RBC I have gotten the impression that its practitioners don't identify actual sources of shocks in the data. Often the descriptions make it sound like they are just reverse engineering putative shocks from the output behavior, and don't (or can't) validate those with economic data. In fact often the putative shocks sound extremely unsubstantiated. Your description is consistent with this but I'd like to know if my interpretation is correct.

    If RBC theorists are serious about doing science, they'd have to validate their model output against empirical data (time series) GIVEN empirical inputs. In other words the timing and magnitudes of the results have to match up with some empirical source of shocks.

    If they do that over some reasonable period (20 years?), the number of free parameters you describe should not be a serious problem, since they'll have quite strong metrics for degree of fit.

    Conversely, if they don't, they aren't, in my opinion, trying to do scientific work; they are practicing fancy rhetoric. If they *can't* (too hard for one reason or another) then they should pick a different problem or a sub-case of this problem that's more tractable -- and if they won't, my opinion is the same.

    I thought I was used to the low standards of economic discourse, but your description brings them back into focus for me in a way that still has the power to shock...

    Replies
    1. Anonymous 5:05 PM

      "If RBC theorists are serious about doing science, they'd have to validate their model output against empirical data (time series) GIVEN empirical inputs. In other words the timing and magnitudes of the results have to match up with some empirical source of shocks."

      Isn't what you described what VAR modelling essentially is? And by the way, I don't think RBC is actually much of modern macro, given that I didn't even get taught it in my post-graduate macro course...

    2. You don't say how the fit is measured, it could be looking at the spectrum or just some scalar summary value. Unless the fit measurement is very powerful, this seems like a very weak metric to me.

      Yep. They just eyeball it. The sample variance of x will be 2.2 (after some normalization), the model will yield a variance of 1.7, and people will say "See, my model gets it pretty close." I'm not even kidding. I have seen this in macro seminars dozens of times; it is the standard procedure.

      From other descriptions of RBC I have gotten the impression that its practitioners don't identify actual sources of shocks in the data. Often the descriptions make it sound like they are just reverse engineering putative shocks from the output behavior, and don't (or can't) validate those with economic data. In fact often the putative shocks sound extremely unsubstantiated. Your description is consistent with this but I'd like to know if my interpretation is correct.

      Your interpretation is 100% correct.

      I thought I was used to the low standards of economic discourse, but your description brings them back into focus for me in a way that still has the power to shock...

      Sorry, but it had to be done...

    3. Anonymous 12:21 PM

      Again, let me just stress that I have literally never seen this in any macro seminar or lecture I have attended, and RBC is only one chapter in my advanced macro textbook, which wasn't even taught to me.

  4. With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.

    -John von Neumann

  5. I wish I could "Like" Absalon's comment.

    Anyways, Noah, I have two noobish questions from someone with a weak econometrics background:

    What is the advantage of using the HP filter over, say, de-trending via estimating a time trend and subtracting it (and seasonally adjusting using dummies)?

    Also, is the HP filter the preferred method for cycle identification?

    Thanks!

  6. @Jefftopia

    The line from Von Neumann is one of my favorites. I also like Von Neumann's line to the effect that you never understand mathematics, you just get used to it, and Box's comment about all models (theories) being wrong but some being useful. Add Axel Oxenstierna's "Do you not know, my son, with how little wisdom the world is governed?" (1648) and you pretty much exhaust my supply of useful quotes.

  7. Sumner asked a great question: why are there no mini recessions?

    The other point is that I can fit a distribution to the time between UE peaks (or troughs) that fits *very* well the same distribution used for failure analysis in engineering, and the parameters suggest that there is an ageing process: the failure rate increases with time and then something "breaks." Hmmm, but what...

    Replies
    1. Anonymous 2:33 PM

      You cannot detect anything smaller than your scale of resolution. Most meaningful economic data is on the scale of quarters; some is on the scale of months. You can't go smaller than that.

  8. Orange14 8:43 PM

    I used to think Shakespeare's line, "The first thing we do, let's kill all the lawyers," was the quote for our times. I'm rapidly developing the firm belief that we should substitute "economists" for "lawyers" and we might be better off.

    @Absalon - I love the Oxenstierna quote! I just finished reading C.V. Wedgwood's book on the Thirty Years' War and feel that I'm reliving some of it right now.

  9. @Orange14

    If we are going to talk Shakespeare then there is one I like about confronting challenges/threats head on:

    King Henry The Fourth, Part 2
    Act II. Scene III. Warkworth. Before Northumberland's Castle.


    "But I must go and meet with danger there,
    Or it will seek me in another place,
    And find me worse provided."

  10. Blue Aurora 2:16 AM

    According to Dr. Michael Emmett Brady, the standard decision theory upon which modern economics is implicitly or explicitly based, and which is taught at business schools and economics departments, is Subjective Expected Utility. Mainstream models tend to be based upon S.E.U., which itself is a creation of Frank Ramsey and Leonard J. Savage.

    Dr. Daniel Ellsberg's decision theory and his eponymous paradox point a way to a more general theory of decision-making - and to J.M. Keynes. For a paper that criticizes mainstream decision theory, I recommend this paper by Dr. Carlo Zappia.

    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2103094

    Incidentally, that working paper above cites Dr. Brady's paper on George Boole and J.M. Keynes's approach to probability, which itself alludes to a more general theory of decision-making.

    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1546726

    Replies
    1. Anonymous 12:39 PM

      The easiest way to identify someone who doesn't know much about economics is if they keep dwelling on "what ______ REALLY said/meant," where _____ is your choice of dead economist.

    2. Blue Aurora 9:42 PM

      Dr. Michael Emmett Brady is anything but ignorant. He has a good grasp of the decision theory literature, and the history of economic thought.

  11. I think Jed's comment is the critical one: making judgements is an essential part of the business of doing science, so the key question is really whether the judgements are good ones.

  12. Anonymous 5:36 AM

    To be fair, Noah, RBC theorists also look at correlations between macro variables at business cycle frequencies. It is hard to change the correlations significantly by tampering with the HP parameter, especially if one wants to match a couple of them. At the very least, positive correlations will not become negative (or vice versa) just because you pick a smoothing parameter of your own choosing. Then, as you mention, NBER dating also imposes quite a bit of structure, so it's not really as ad hoc as it may seem.

    Replies
    1. You're right! On the other hand, RBC theories have big problems matching a lot of those correlations...

    2. Anonymous 11:47 AM

      Sure! Yet it still makes the procedure a bit more "scientific". :)

  13. The HP-filter is a good illustration of Stigler's law of eponymy. Similar filters had been suggested decades earlier, e.g. Leser 1960: http://www.jstor.org/stable/2983845

  14. Anonymous 7:53 AM

    It is a mistake to single out BigG spending from other corporate spending. Corporations are groups of individuals that invest for the good of the group. BigG is a group of all individuals in a nation, that invest in the nation for the good of the group. A corporation is accountable to its shareholders. A nation is accountable to its citizens.

    The BigG super-corporation incurs social costs when unemployment is too high. The BigG super-corporation incurs lower profits when workers are idle and not producing goods and services. All finances are within the BigG so money out of the pocket of BigG used to provide public goods and services contributes to BigG assets, social goods and revenue stream (through taxes). BigG can maximize GDP by hiring slack labor during periods of high unemployment and can promote private sector growth by being mindful of crowding out at low rates of unemployment.

    It does not matter if business cycles are natural or not. What matters is having policies in place, such as infrastructure banks and automatic stabilizers, to address worker dislocation for the good of the whole economy. When BigG is captured by special interests, policy is misdirected into maximizing short-term profits of a few elites at the expense of the whole.
    -jonny bakho

  15. 1. None of this business cycle stuff comes close to passing the Feynman smell test for being a science.

    I will say this especially about Prescott: it is not science unless it can make predictions. Thus, his work isn't science.

    2. All other factors being equal, economic growth is positive because a few of us are smart enough to learn and to lead the rest. This is just a variant of the "Great Man" theory of history, but is self-evidently true and has been known throughout the ages, for "Where there is no vision, the people perish."

    3. Given the randomness in the change of inputs, variations in growth are implicit.

    4. Economics has never given serious thought to the insights of Planck on the energy loss of charged particles and why we don't see such losses when a child swings in a playground. What appears to be smooth is in fact always erratic, on closer inspection.

    Col. John Boyd did a marvelous job of explaining how these principles of energy loss apply to human life, culture, and society, but who here regularly reads or considers what he has to say?

    Similarly, economists have never given serious thought to Metcalfe's law, which states that the value of a human network is proportional to the square of the number of connected users of the system (n²), a law first formulated in this form by George Gilder in 1993. Yet this simple law explains why big law firms are more powerful than small ones and why the Euro was a death trap for Greece, Spain, Italy, etc., all of whom have firms whose networks are smaller and less powerful than those of Germany and of firms within Germany. No economist has found a way to measure the macro effect of death or movement, yet I live in a major midwest city whose decline can be marked to when its most important citizen up and moved to New York, taking his network with him (and literally de-wiring the place).

    5. Since we built the Pyramids, the only question has been, How do we finance that project?

    6. Confidence, on which finance and all other forward contracts depend, and which is the key driver of what appears to be "cycles," is likely totally or nearly totally independent of economics.

    If you had coffee with a good historian, he would tell you that the state of mind in the South about Slavery was a negative headwind to Confidence starting in 1820, at the latest.

    The best history book on WWI, Tuchman's The Guns of August, begins with a detailed study of the 40 years before the war (from the mid-1870s until August 1914), during which everyone in Europe knew that Germany was going to start a war that was bound to be long and horrible because of modern technology. Talk about a headwind to investment!!!

    Bugliosi's Reclaiming History: The Assassination of President John F. Kennedy begins with a very thorough and complete discussion of how the nuts who say there was a conspiracy to kill Kennedy fundamentally destroyed the confidence of the American people. Any economist who has not read Bugliosi's 1,600 pages will have a very hard time understanding the headwinds against which the government and economy now have to operate. No tools exist for measuring either the scope and extent of the damage or the strength of the headwind we now face, but attend any Tea Party rally or watch Fox News and you will quickly see the force and power of Bugliosi's insights.

    Last, let us not forget the Greeks, who taught us that "Character is fate." When at least 50% of your large firms are run and managed by sociopaths (Boddy, The Corporate Psychopaths Theory of the Global Financial Crisis), is the irrational world in which we find ourselves "fate"?

  16. Anonymous 11:33 AM

    Here's my (not quite perfect) analogy for RBC:

    You're a guy who likes to build robots. You watch some film of tightrope walkers. You become fascinated with how they wobble and flail and regain their balance after a misstep. You wonder if you can build a robot that does that. So, you make one and set it up on a tightrope in your lab. You make it a little too perfect, in the sense that it's entirely still when perfectly balanced. You know that real people wobble a little bit; their hips and arms sway a teeny bit even when perfectly balanced. Since you're interested in the movements while recovering from something "bad" (a misstep, a gust of wind, etc.), you decide you need to first "cancel out" the normal small wobbles and movements real people make when you measure their wobbles after they lose their balance. This is essentially what the HP filter does. It's subtracting out that base level of wobbles. Noah has done a good job of explaining the significance of that.

    Back to our robot. So, to see if your robot is any good, you shake the tightrope in a way that mimics what a gust of wind or a misstep may do to a human tightrope walker. You see how your robot flails as it regains balance and you compare that to the video of humans. It looks a little off, but, for the most part, the movements look fairly "human." You declare it a success. Word spreads, you're praised, and you become famous. Other people are impressed by the work and try to improve on it.

    Now, someone comes up to you and says, "Hey, this guy is going to try to do a tightrope walk tomorrow, will he fall?"

    You CAN'T answer that! You can say, "If you tell me the size and direction of the gust of wind that knocks him off-balance, or which way he'll misstep, I can tell you how he'll likely recover (or not recover) based on how my robot reacted," but that's it. You can't predict those missteps or those gusts of wind because, in your setup, you are the one purposely creating them when you grab and shake the tightrope. You're doing nothing to predict the what, when, why, or how of the "natural" disturbances in the real world.

    For people interested in reactions, this is fine. For the tightrope walker, though, this is of little use and little comfort. It's a fancy way of saying, "I dunno." Anything could happen. You can say a bit about how people will react if something does happen, but not whether or not something will happen.

    Replies
    1. Anonymous 1:33 PM

      Very nicely done!

    2. Well, it might be a better analogy if A) we didn't know whether wind even existed and couldn't detect evidence of the existence of wind, B) we couldn't see the tightrope at all, and C) our robot needed lots of superhuman powers and some weird programmed-in limitations... ;)

  17. Anonymous 12:50 PM

    You could use a simple quadratic or cubic trend instead of the HP filter and you would still get pretty much the same features of the business cycle. While there are issues in the RBC framework to be criticized, I don't think the main argument posted is one of them.

    Replies
    1. No matter what kind of filter you use, you still have to take a stand on what is the "cycle" and what is the "trend", and that in my opinion is the issue; there is simply no agreement on cycle vs. trend, and no agreed-upon way to empirically separate the two.

  18. How does your criticism not apply to the validation of ANY theory through statistics? Why should an estimator minimize the sum of squared errors (ordinary least squares)? Isn't that a subjective decision? Why use a 95% confidence interval? Why not 10% or 1%? I am not taking a position on whether this is right or wrong. But I am beginning to suspect that your issues with macro are personal. Why else would you criticize macroeconomists for practices that are common among most economists and other scientists?

    Replies
    1. Anonymous 2:38 PM

      One word: robustness. You should get the same answer regardless of the method: OLS, 3-stage nonlinear LS, AIC, etc. A standard should not be more than a convention.

    2. 1. Who says that Kydland and Prescott's results are not robust to different smoothing techniques?
      2. Even robustness tests involve a limited number of estimators with "desirable" properties. And of course there is no robustness for why results should be significant at the x versus y level. It is a subjective call.

      Finally, the Hodrick-Prescott filter appeared in a 1997 paper. The Kydland-Prescott RBC model appeared in the 1982 paper "Time to Build and Aggregate Fluctuations". So I am not sure that this is how Prescott smoothed the series. In any case it wouldn't matter, because his model was evaluated based on how well it matched not only the behavior of the cyclical component of the series but also of the trend. Things that a macroeconomist would have known!

    3. How does your criticism not apply to the validation of ANY theory through statistics? Why should an estimator minimize ordinary least squares? Isn't that a subjective decision? Why use a 95% confidence interval? Why not a 10% or a 1%?

      Naturally, all science involves judgment calls. That doesn't mean that any amount and type of judgment call in any theory is perfectly OK. The scientific standards of a profession are set by people's opinions, and people can disagree about how much judgment they think is acceptable to put in theories. If I think that RBC (and DSGE in general) contains too many judgment calls, that's my opinion!

      Why else would you critisize macroeconomists for practices that are common for most economists and other scientists?

      Are these practices common in other sciences?

      Finally, the Hodrick-Prescott filter appeared in a 1997 paper. The Kydland-Prescott RBC model appeared in the 1982 paper "Time to Build and Aggregate Fluctuations".

      Suggestion: Go read Williamson's post. It will inform you that the H-P filter, which was developed in 1980 or earlier, was not published for quite some time.

      In any case it wouldn't matter because his model was evaluated based on how well it matched not only the behavior of the cyclical component of the series but also of the trend.

      No. The trend was just assumed in the model, and the assumption was based on observed data.

    4. Noah,

      you make three accusations:

      1. That KP (1982) could smooth the series so as to fit the behavior of the cyclical component (free parameter). Here is a quote from page 1359:

      "The test of the theory is whether there is a set of parameters for which the model's co-movements for both the smoothed series and the deviations from the smoothed series are quantitatively consistent with the observed behavior of the corresponding series for the U.S. post-war economy."

      The comparison of the behavior of the simulated smoothed series with the ones derived from the smoothing technique then proceeds on page 1366. So where is the free parameter?

      2. You accuse macroeconomists of judgment calls. As I have pointed out, all empirical tests involve judgment calls (this is the common practice I mentioned). If eyeballing or minimizing a loss function (HP-filter) are not OK, but minimizing least squares and using 95% confidence intervals are, then I would like to know the basis for that judgment. Unless your call has to do with the personality or political beliefs of the people who introduced these techniques.

      3. You claim that these judgment calls lead to political biases in Macro. Really? So what about the deterrent effect of the death penalty? I can think of at least three studies with different conclusions depending on methodology, and one can claim the same thing about econometrics in general (in fact many people have). There is no more political bias in the HP-filter itself than there is in any statistical estimator. People who use it to detrend series do not arrive at "conservative" conclusions more than they arrive at "liberal" conclusions.

      5. On the point about calibration--Noah, you didn't give any discussion of the alternative to calibration and moment matching, which is outright estimation by maximum likelihood. This technique picks the parameters that make it most likely to observe the real-world data assuming that the model is correct, but then also spits out a probability with which to test the overall likelihood that this is the correct model. It would have been worth noting that economists like Lucas and Prescott rejected this method precisely because their models were producing likelihoods much too small to be credible.

      CA: you are right that the choice of a 95% confidence level is a judgment call. There is a trade-off between statistical size and power, and it just comes down to what type of error you are more interested in avoiding. But your criticism misses the point--when you explicitly declare a confidence level, you are setting an objective criterion that the model can be measured by and that scientists can debate and compare across models. When you just say that your simulated data series qualitatively looks like the actual time series, which is all we are doing when we say that some of the simulated moments "match," you are making a subjective statement that can't be measured and debated, nor compared across alternative models.

    6. It would have been worth noting that economists like Lucas and Prescott rejected this method precisely because their models were producing likelihoods much too small to be credible.

      Yup, I have mentioned this before! I don't have the actual numbers from those failed tests though...I'd love it if someone could dig those up.

    7. Matthew, I agree. But what does this have to do with the HP-filter? Moreover, methods of fit more formal than eyeballing have been developed, for example by Watson (1993).

    8. Just to clarify, Watson (1993) suggests formal measures of fit for calibrated models.
      For the criteria of selecting the smoothing parameter lambda see for example
      http://www.econ.upf.edu/docs/papers/downloads/588.pdf
      http://discovery.ucl.ac.uk/18641/1/18641.pdf

  19. And thus why people like Williamson shouldn't look down their noses at people like Evan Soltas.

    Btw, I found this much more readable than Williamson's post, which took a long time to get to something of interest to the lay reader (though in fairness the lay reader was probably not the intended audience).

  20. This post makes a very important point. Judgment calls in macro mean the results of theoretical analysis express mainly the prejudices of the analysts. This is clear from the plain fact that the results always correspond to the ideology. Personally, I choose the results I want first, then find the assumptions from which they follow. I have repeatedly confessed this on the web. Actually, I haven't had any theory papers accepted since writing those posts.

    It is easy to avoid the H-P filter parameter judgment call. One can write a whole model of both the stationary and non-stationary stochastics and run both the data and the simulated data through the same H-P filter. This is the only proper procedure if one is interested in the stationary part. Insufficient smoothing of the trend eliminates variance in both the historical data and the simulated output, and fails to make a low-variance stationary theoretical dynamic correspond to what's left of the cycle in the data after the trend has been removed.

    Any other approach is obviously fraudulent.

    Replies
    1. "One can write a whole model of both the stationary and non-stationary stochastics and run both the data and the simulated data through the same H-P filter."

      This is exactly what KP did!

  21. "Doesn't it seem to you that this way of doing "science" is a little too vulnerable to cultural/political biases among the body of practicing macroeconomists? It seems that way to me. But maybe I am wrong."

    Maybe you are wrong. But I feel the same way. The main thing is that we learned that the record number of Stephen Williamson posts in a row that don't digress at some point into Krugman bashing is definitely 2, and that record is unlikely to be broken any time soon.

  22. It's also great to know from SW's post that we are already at optimum output for the economy.

    As an unemployed person I guess that should give me comfort.

    Replies
    1. Anonymous 8:24 PM

      And misrepresenting other people's views (he never said that) will not increase your likelihood of becoming employed.

  23. Anon, I haven't misrepresented his views. This is what he says:

    "Whether real GDP is above or below some trend measure, or above or below CBO potential output is currently irrelevant to how the Fed should think about "maximum employment." We're there."

    http://newmonetarism.blogspot.com/2012/07/hp-filters-and-potential-output.html

  24. Anonymous 8:37 PM

    He was referring to the limitations of monetary policy. If the Fed has exhausted its arsenal, then the trend indeed represents the best the economy can do from the Fed's perspective. Nowhere did he say that this level of economic activity is "optimal" in any sense. And of course he did not claim that we shouldn't seek alternative policies that are, however, beyond the Fed's control.

  25. Anon, there's no "of course." There are people who believe we should do nothing but let the market sort it out. I don't see anything in this post showing that SW differs from that view.

  26. For a phenomenological explanation of the business cycle, as opposed to a what? post-phenomenological theory? see Money as Debt II: http://www.youtube.com/watch?v=lsmbWBpnCNk He goes into (his theory of) the origins of the business cycle about 44 minutes in, but watch the whole movie.

    Replies
    1. Oops. Messed up the link. Cut and paste:

      http://www.youtube.com/watch?v=lsmbWBpnCNk

  27. What gets me here is the selective (non-)use of the Lucas Critique. It seems to apply only when Keynesian theories are in question, but by my reckoning this kind of post-hoc parameter selection based on observed correlations in past data is exactly what Lucas was criticising.

    Replies
    1. That bothers me too. I think of the Lucas Critique as a gun that only fires left.

      2. This is because the HP-filter has no theory behind it whatsoever. It derives the smoothed series by penalizing the smoothed observations both for deviating too much from the actual observations and for changing their growth rate too much from one period to the next. It is a reasonable way that someone can go about it, but it is by no means the only one. It is much better than the 10-year moving averages that were used in the past.

    3. I understand the H-P filter is purely an econometric exercise and that is fine. But when we are applying this to RBC, I feel the Lucas Critique applies.

  28. Anonymous 3:14 PM

    Couple remarks.
    1. Log-linear growth is a good approximation for GDP. For GDP per capita, data from 1870 show a roughly linear growth line and quite a different story of fluctuations. The inclusion of a population term in GDP is difficult to justify since different countries have different population time histories.
    2. There is a joke from physicists that one can describe any process with a countable number of parameters. The joke serves in place of a formal denial.

  29. You're right of course; this isn't science, this is just futzing about with math.

  30. Oh. And if you apply the H-P filter on a grand scale, getting the log-linear curve, you discover a *major* problem.

    No exponential process can continue forever in the real world.

    So, when do we hit the big wall? Global warming will probably do that if we don't stop it. If not, some other consequence of unlimited population growth will do it to us.

    Ecology studies boom-bust cycles. Without the added complication of predation, they're exponential on the up curve (exponential population growth) and almost vertical on the down curve (starvation happens very, very quickly).

    It seems to me that the cycles of economics tend to look exactly like ecological boom-bust cycles. There are boom-bust cycles on multiple scales in ecology, too, including ones on ecosystem scale (the bust side is a mass extinction).

    If you want to explain economic boom-bust cycles, I suggest looking for insights in ecology, which is run like an actual science. One insight is that there are often a number of critical elements which must all be present in sufficient quantities to allow growth; overuse of one causes a sudden collapse when it runs out.

    Sounds like an economic cycle to me.
