This is a guest post by "Dr. Phil of Economics", which is the pseudonym of an econ grad student whose identity is probably not that hard to figure out if you look hard enough.
The title is a reference to an internet meme popular on the support group website Econ Job Market Rumors. For those of you who don't speak meme, it's how you'd type "Time series econometrics no good :p" if you were typing quickly and carelessly. Thus, we can expect this post to be a critique of time-series econometrics, at least the way it is commonly employed in the economics literature. After reading it several times, I'm fairly sure that's what it actually is. Anyway, I'll let Phil take it from here:
***
Time Series Econometrics No Giod P
Thank you for joining us today. I’m Dr. Phil of Economics and I want you to take a deep breath, find a safe place, sit right down, and think hard about how you’ve been using your time series econometrics. I want you to take a good long look in the mirror and ask yourself if you’ve been responsible with your implicit assumptions and if you’ve been honest with yourself about the strength of your proxies.
The Lucas Critique: A Refresher
What’s the Lucas Critique? In 1976, Economics Nobel Laureate (1995) Robert Lucas, Jr. told the lot of us to hold our horses when it comes to drawing policy recommendations from econometric conclusions. The intuition behind the argument is easy enough: historical relationships are determined jointly by measured variables and institutions. It’s erroneous to assume that changing the institutions will have no effect on the relationship between the variables. The rise of microfoundations in macroeconomics was a response to, inter alia, Lucas’s warning.
It’s high time we faced facts here, people. The Lucas Critique is really just the beginning of the problem. With all due respect to the esteemed Robert Lucas, implicit policy parameters lurk in the left-hand variables just as much as in the right. It’s time to get real and consider that the act of measuring something might well inflate its importance, justly or unjustly.
What do I mean by this? It’s the econometric version of dietary gluten. Think back ten years ago. Did you have the first idea what gluten was in 2004? Did you count your carbs in 1994? Did you wring your hands over NGDP in 1984? Or how about stratospheric ozone in 1974? These things all existed, but the act of measuring them made them more than merely relevant: their measurement created categories. It isn’t just “trade” anymore, it’s either “domestic” or “international” trade. Right or wrong, national income accounting (to pick a handy example) encourages analysts to consider national borders as if they have some special economic significance. And if the pencil pushers quantify and measure it, you can bet someone’s career is eventually going to be on the line for it, one way or another.
Your Dependent Variable Matters
Responsibility is what separates adults from children. One of your duties as a parent is to let your kids know about the dangers of convenience sampling. You already know this, but what you may not know is that when you choose your dependent variable, you’re doing so for very similar reasons to pollsters who hang out on street corners, accosting passersby. The data are there and easy to get. Why not just go ahead and use ‘em?
“Why not” is because data are not “given” by an impartial spectator. Data are gathered by folks just like you or me or Stevie Nicks, folks who had to make the hard choices of what to record, of how frequently to record them, and of how to market them. Even something as seemingly foundational as GDP is still the result of deliberate choice. Some forgotten (just kidding: it was Simon Kuznets, a household name if you’re in the fold) economist wanted a nice shorthand way to measure national production, so BAM, we get national income accounting. A few years later, GDP is adopted as an official metric. Before long, political leaders are running contests with each other using the darn thing as a finishing line in a race that never ends.
I don’t mean to pick on GDP or how it’s metastasized from metric to target. We all know about this, and we all accept it without too much fuss. Endogeneity is a non-diversifiable risk. Have the integrity to accept that and the courage to continue in the face of adversity. This is a safe place; I won’t judge you, but you have to conquer your fears before your fears conquer you. But let’s face facts here, people. The more you pretend that the act of measuring something won’t alter the thing being measured, the more you’re going to look like a fool. You have to respect you before anyone else will.
Your Sample Frequency Matters
Measurement frequency is a choice too, and it’s a choice that can unfortunately make irrelevant results look meaningful. Let me tell you a story of something I saw once. I won’t mention the name of the article or the journal so that I don’t embarrass the authors, but let’s just say that the primary focus of the journal is one part law and two parts economics, and the piece made a sad attempt to demonstrate price fixing in retail gasoline markets in some Canadian backwater (forgive the pleonasm). After some questionable data composition, the authors concluded that the local cartel was a rousing success to the tune of 2 cents (Canadian) a gallon. Now, I’m a curious guy. So I asked myself, “Phil, is two cents a gallon really all that much?” Like a curious cat on a hot tin roof, I pulled up a website that lets you watch real-time updates to retail gasoline prices. Can you guess what I found? Those prices were jumping around like a one-legged cat trying to bury turds on a waffle iron. Now it’s true that in the peer-reviewed study I looked at, the authors adequately defended their decision to take weekly price snapshots (there were practical constraints), but think about what you’re doing to your family and your profession when your treatment yields an effect that gets snowed in by ordinary noise at a quick, casual glance.
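To see how easily that happens, here is a toy simulation (Python; not the paper's data, and the base price, noise level, and markup are just assumptions) of a two-cent effect sitting inside routine price swings:

```python
# Toy simulation (not the paper's data): a 2-cent markup buried in ordinary
# price noise when you only take periodic snapshots. The noise level, base
# price, and sample size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_snapshots = 104          # two years of weekly snapshots
noise_sd = 5.0             # cents; routine swings between snapshots
markup = 2.0               # cents; the alleged cartel premium
base = 130.0               # cents; arbitrary competitive price level

competitive = base + rng.normal(0, noise_sd, n_snapshots)
cartel = base + markup + rng.normal(0, noise_sd, n_snapshots)

estimated_markup = cartel.mean() - competitive.mean()
typical_swing = np.std(np.diff(cartel))

print(f"estimated markup: {estimated_markup:.2f} cents")
print(f"typical snapshot-to-snapshot swing: {typical_swing:.2f} cents")
# The markup is real by construction, yet it is a fraction of the routine
# variation you can watch on any price-tracking website.
```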
Even if you don’t follow the empirical economics literature, you’ve seen this yourself every dang time you turn on a report about the stock market. Those Dow Jones Industrial Average numbers you hear? If the closing bell has already rung, that’s just the index as it happens to be once trading stops for the day. When a reporter tells you the Dow is up 100 points at the end of the day, how do you interpret it? Is that a lot? Should you believe the tag line that follows? It’s almost never “the stock market moved today”, it’s always “the stock market moved today on the release of some report you didn’t think to care about until we just now mentioned it.”
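If you want a quick sanity check before you believe the tag line, here is one back-of-the-envelope way to ask "is that a lot?" (the index level and the recent daily moves below are placeholders, not real quotes):

```python
# Back-of-the-envelope check: is a 100-point move "a lot"? Compare it with the
# index level and with recent daily moves. The level and the moves below are
# made-up placeholders, not real quotes.
import numpy as np

index_level = 16500.0      # assumed closing level, for illustration only
headline_move = 100.0      # the reported point change

recent_daily_moves = np.array([-60.0, 120.0, -15.0, 80.0, -140.0,
                                30.0, -95.0, 55.0, -20.0, 110.0])  # points

pct_move = 100 * headline_move / index_level
typical_move = recent_daily_moves.std(ddof=1)

print(f"move as a share of the index: {pct_move:.2f}%")
print(f"move relative to recent daily swings: {headline_move / typical_move:.1f} sd")
# A move of roughly one standard deviation of recent daily noise hardly needs
# a news release to explain it.
```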
Settle down now. Collect your thoughts before you do something that will hurt somebody. It’s bad, but it’s not that bad.
Your Model Assumptions Really Matter
Even modest little ARIMA requires that you get your dummy right. But how do you know for sure if it’s the consumer durables report or the U3 numbers or the forward guidance or whatever that’s actually driving the change in your output variable, which might just be plain old statistical noise anyway? Just because someone writes down “this is the treatment effect” don’t make it so. Just because some loud television personality barks it at you doubly don’t make it so (take it from me). So what do we do? We look to theory to guide us. We think there’s some good reason for interest rates to be linked to the amount of money in circulation (to pick something totally non-controversial…) and there you go: M1 is our big independent variable of interest. We’ve got a model that tells us what to look for.
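Here is a minimal sketch of that kind of specification, using statsmodels on simulated stand-in data; the variable names, the announcement-dummy timing, and the (1,0,1) order are illustrative assumptions, not a recommended model:

```python
# Minimal sketch (illustrative only) of the kind of specification described:
# an ARIMA-style model of an interest rate with M1 growth and a report-release
# dummy as regressors, via statsmodels. The data are simulated stand-ins and
# the (1,0,1) order is arbitrary, not a recommendation.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
n = 200
m1_growth = rng.normal(0.3, 0.1, n)                   # pretend money growth
report_day = (np.arange(n) % 20 == 0).astype(float)   # dummy: release dates
rate = pd.Series(2.0 - 1.5 * m1_growth + 0.1 * report_day
                 + rng.normal(0, 0.1, n))

exog = pd.DataFrame({"m1_growth": m1_growth, "report_day": report_day})
result = SARIMAX(rate, exog=exog, order=(1, 0, 1), trend="c").fit(disp=False)

print(result.params)   # is the report_day coefficient signal or noise?
```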
What else does the model do? The model gives you something to compare your results to. In a laboratory setting, researchers take great pains to set up a control group that is statistically identical to the treatment group with the single exception of the one element under study. The world is not a laboratory, so to answer one of the cardinal questions of econometrics, “compared to what?”, you better darn well make yourself a convincing counterfactual. Once you futz past all the gassy accusations, this is one of the big reasons Macro No Giod P: the underlying Platonic capital-T Truth alternative state of the world is forever a flickering shadow on the wall of our cave. But hey, that’s the best we’ve got, so just go with it, okay? Integrity means admitting your shortcomings and still making the best of a tough situation.
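For flavor, here is one common way to manufacture that counterfactual, sketched on simulated data under purely illustrative assumptions: fit the model on the pre-event window, forecast the no-treatment path, and see whether the observed series escapes the forecast band:

```python
# Sketch of a model-based counterfactual (simulated data, illustrative only):
# fit on the pre-event window, forecast the "no treatment" path, then check
# whether the observed post-event values wander outside the forecast band.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(2)
n, t0 = 120, 100                                  # t0 = first post-event period
y = pd.Series(np.cumsum(rng.normal(0.1, 1.0, n)))
y.iloc[t0:] += 3.0                                # pretend "treatment" shift

pre_fit = SARIMAX(y.iloc[:t0], order=(1, 1, 0)).fit(disp=False)
forecast = pre_fit.get_forecast(steps=n - t0)
band = forecast.conf_int(alpha=0.05)

observed = y.iloc[t0:].values
outside = (observed < band.iloc[:, 0].values) | (observed > band.iloc[:, 1].values)
print(f"post-event observations outside the 95% band: {outside.mean():.0%}")
# The counterfactual is only as convincing as the model that generated it.
```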
That’s a whole bushel of low-hanging fruit right there, maybe more than you want to chow down on if you haven’t already thought about it before. Just try to remember that there are a few questions you need to answer for your econometrics to be worth a tinker’s damn. First and foremost: who cares? Why is the thing you measure interesting? Second: are your results salient? That price fixing paper I reviewed had “statistically significant” results, but that term of art is very different from its colloquial use. I assure you that if you want the cold shoulder on Valentine’s Day, all you have to tell your statistician sweetie is that he’s your statistically significant other. The two cent price fixing scandal was statistically significant (enough to get past a few friendly referees anyway), but it was hardly salient. Third: please reject alternative hypotheses. I’d venture a weak guess that the reason my kind host here has no ahpinion is because this last one is spectacularly difficult, even with the finest GARCH specifications and discontinuity analyses available.
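To make the significance-versus-salience distinction concrete, here is a toy calculation (every number invented) in which a two-cent difference is comfortably "significant" and still trivial next to everyday variation:

```python
# Toy illustration of "significant but not salient": with enough observations,
# a 2-cent difference clears p < 0.05 even though it is tiny relative to
# everyday price variation. All numbers here are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 2000
control = rng.normal(130.0, 5.0, n)      # cents, no cartel
treated = rng.normal(132.0, 5.0, n)      # cents, alleged 2-cent markup

t_stat, p_value = stats.ttest_ind(treated, control)
effect = treated.mean() - control.mean()

print(f"p-value: {p_value:.4f}")         # comfortably "statistically significant"
print(f"effect: {effect:.2f} cents against a 5-cent everyday standard deviation")
# Significance answers "is it distinguishable from zero?",
# not "is it big enough for anyone to care?"
```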
Honesty is a virtue. Have the honesty to admit the limits of your time series methods. You owe it to the people who love you, and that includes me.
You can follow Dr. Phil of Economics on Twitter at @DrPhilofEconomi
No relation to Dr. Phil McGraw or Peteski Productions.
Wait a minute, Deirdre McCloskey is not a graduate student.
OH SNAP
That's racist.
+1
ReplyDelete"Even if you don’t follow the empirical economics literature"
ReplyDeleteWhat kind of people are these heathens you imagine here?
Those of us who know that following "empirical economics literature" means only having to read one or two papers a decade?
I always thought that the p in "macro no giod p" was a contraction of OP, or original poster.
My background is statistics rather than econometrics, but it's even way worse than what you describe. See Prof. Judea Pearl's paper on the critique of six econometrics textbooks and you'll know what I mean.
Got link?
ftp://ftp.cs.ucla.edu/pub/stat_ser/r395.pdf, but as Chris Auld points out, some are better: http://chrisauld.com/2013/10/08/remarks-on-chen-and-pearl-on-causality-in-econometrics-textbooks/
Incoherent.
Noah's guest blogger does not believe in correlation. What a doubly inflated arrogance?!
Regarding the following: "After some questionable data composition, the authors concluded that the local cartel was a rousing success to the tune of 2 cents (Canadian) a gallon."
ReplyDeleteIf the paper is the one I think it is, the number is 1.75 cents per LITRE, which translates to roughly 7 cents per GALLON -- which is a lot more than "2 cents (Canadian) a gallon".
The difference between our Fuel City and Exxon is 14¢ and 1/4 mile. I'd like to see the rest of the data before ruling on salient.
DeleteThis essay certainly understands part of the problem, needs to go much
ReplyDeletefurther in relating the fundamentals of the data and its limitations to
economics as a discipline.
Economists should read David Hackett Fischer's "Historians' Fallacies" because economics data is a subset of historical data. Historians are far more sophisticated than economists in their interpretations of their data and have major advantages in doing so.
Correlation is all that can be extracted from historical data, models and sophisticated math notwithstanding. In any complex system, there are many causes for each effect. In open complex systems, new causes may not be obvious. In evolving complex systems, the causes of effects will change with time due to feedback effects.
As there are an effectively infinite number of correlations to be had from an open, evolving, complex system, the problem of interpretation is an infinite block for economics. Thus, extracting the 'cause and effect' signal from an open, evolving, complex system such as the economy is impossible.
Science has only progressed by using experiments that isolate one independent variable at a time.
I think economics is in a very primitive state, e.g. why are prices the major dependent variable? Prices supposedly reflect the flow of information and the decisions made, which are the more fundamental things, with prices only a proxy that has a complex and changing relationship to them. There may well be better proxies for any of them than have been detected.
It is a puzzle to me why economics is so highly valued by policy makers as compared to history. Historians are greatly superior in their ability to extract meaning from historical data, having the advantages of so many very different sources with which to do their cross-correlations, and all actions done by humans individually and in their institutions. Those are well-studied from many points of view.
In contrast, all concepts in economics must be derived from the correlational data and are therefore human interpretations, hypotheses about how to best interpret the data. There are few related areas of science for economics research to use to become more than a search for interpretations that bolster and extend these human-consensus hypotheses.
This process is quite different from non-experimental sciences such as astronomy or geology, whose fundamental concepts are grounded in the deepest experimental science and confirmed from many arenas of experimental science.
This primitive state is reflected in the fact that many reputable economists persist in delusion, as one can select sets of economists, often overlapping even at a single point in time, to support and oppose nearly any real-world action. Based entirely on their profound economic understandings, of course.
Human understandings are indeed models, and all models are limited by the quality, epistemological and practical, of their data. It should not surprise us that some models are more useful than others, nor that economics models aren't generally very useful.
Using a lot of words to say nothing does not change the meaning.
Next time I want to take apart some dodgy econometrics, I know who to ask.