(Part 1 of my ongoing review of Big Ideas in Macroeconomics, by Kartik Athreya, can be found here.)
Is Macroeconomics a science?
In Section 1.2.1 of Big Ideas in Macroeconomics - yes, the sections are numbered like that - Kartik Athreya writes:
I will not go into the sterile - and crashingly boring - discussion of whether economics is a science or not, since relabeling it would change neither the questions we asked nor how we approached them.

This is a little odd, since he spends the entire section discussing exactly that question. His answer to the question of "Is econ a science?" is basically: "No, and that's fine."
For example, he writes:
My view is that a part of what we do is "organized storytelling, in which we use extremely systematic tools of data analysis and reasoning, sometimes along with more extra-economic means, to persuade others of the usefulness of our assumptions and, hence, of our conclusions...This is perhaps not how one might describe "hard sciences"...
[E]conomics is replete with "observational equivalence" whereby two (or more) sets of assumptions match a given set of data equally well. The paucity of data [with the power] to winnow the set of assumptions...is a huge problem.
One important difference between economics and physical sciences is that we economists have a hard time verifying the closeness of our standard assumptions to reality...Economists also lack axioms that closely approximate conditions in the real world the way that, for example, Newtonian axioms for projectile motion seem to. Seen this way, it is actually the collection of assumptions that we "like most" which constitutes our understanding of the world.
There is a second, even more substantial difference from the physical sciences: for most important macroeconomic questions, macroeconomists cannot conduct controlled experiments.

To recap, Athreya sees the following major differences between macroeconomics and "hard sciences":
1. Unlike hard scientists, macroeconomists must spend considerable effort persuading people of the likability of assumptions. The assumptions that macroeconomists "like most" are the ones around which consensus forms.
2. Because of uninformative data, macroeconomics can't offer the kind of robust, definitive answers to real-world questions that hard scientists can often provide.
That is basically my view of how things stand as well (though I'm more annoyed by the situation than Athreya is). By most people's definition, this would mean that macroeconomics isn't a "science". Athreya says that discussing that simple fact is "crashingly boring"...but one thing Athreya's book clearly demonstrates is that he is not a man who is easily bored. More likely, Athreya realizes that our society has an unfortunate tendency to sneer and look down its nose at any academic discipline that is not labeled a "science". He realizes that relabeling macro a "non-science", while seemingly a pointless semantic exercise, would cause the field to lose prestige in the eyes of the public.
I do think that the cultural importance of the "science" label is counterproductive. It's not the fault of historians, for example, that Francis Bacon's method offers only limited clues to history's mysteries.
But unlike Athreya, I'm deeply bothered by this "persuasion" thing. Athreya says macroeconomists' job is to persuade others of the "usefulness" of assumptions - basically, that macroeconomists are sort of halfway between scientists and lawyers. But persuasion, when employed on a mass scale, can turn the "wisdom of crowds" into the madness of herd behavior. People have an instinct for social conformity. And we tend to be overconfident in our beliefs, especially about complicated and difficult subjects.
Suppose - and I'm pulling this purely from my lower digestive tract here - that the data tell us that it's 63% likely that recessions are caused by financial market disruptions, and 37% likely that recessions are caused by productivity slowdowns. That's the best the data can do for us - there's no "falsification" here. The rational thing is for everyone to hold that 63/37 split in their minds as a Bayesian belief. But the rational thing is incredibly hard for real humans to do. Our brains urge us to turn our belief into a 100/0 thing - to decide that one assumption is right and the other is wrong. After we make up our mind, we try to persuade other people to do the same. Herd behavior and conformity work their magic. Political bias and other personal biases may come into play. Consensus forms. But the consensus has a big chance of being dead wrong.
Which is better - a firm consensus with a 37% chance of being wrong, or a distribution of beliefs with large confidence intervals, centered on the best possible guess? It depends, I guess. Sometimes, when you need quick action, and when human indecision is holding you back, the firm consensus might be better. But for the slow, ponderous task of figuring out how the world works, I'd prefer the latter. Consensus too often sends us down blind alleys.
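To make the trade-off concrete, here's a toy calculation using the made-up 63/37 split from above (nothing here comes from real data): we can compare the expected Brier score (expected squared forecast error) of holding the calibrated 63/37 belief against collapsing it to a firm 100/0 consensus.

```python
# Toy illustration with made-up numbers: expected Brier score of a
# calibrated 63/37 belief vs. a collapsed 100/0 "consensus" belief,
# when the financial-disruption story is in fact right 63% of the time.

def expected_brier(q: float, p: float) -> float:
    """Expected squared error of forecast q when the event occurs with probability p."""
    return p * (1 - q) ** 2 + (1 - p) * q ** 2

p = 0.63                                # chance the financial-disruption story is right
calibrated = expected_brier(0.63, p)    # hold the 63/37 split as a Bayesian belief
consensus = expected_brier(1.00, p)     # collapse to certainty

print(f"calibrated belief: {calibrated:.4f}")   # 0.2331
print(f"firm consensus:    {consensus:.4f}")    # 0.3700
```

The calibrated belief has the lower expected error, which is the sense in which the distribution of beliefs "wins" for the slow task of figuring out how the world works - though, as noted above, the firm consensus may still win when quick action matters more than accuracy.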
I think a lot of people instinctively agree with me, and this is behind the calls for macroeconomists to be more "humble", or to "admit their ignorance", etc. It's probably one reason that macro doesn't often reach a general consensus. But I kind of worry that the amount of consensus that does get reached is too much.
But I feel like Athreya doesn't fully share my distrust of consensus. Section 1.3.1.5 of Big Ideas (yes, the book has subsubsubsections) is entitled "It Takes a Model to Beat a Model". This is a line I've heard before from a number of macroeconomists. The idea that models "beat" other models seems to indicate a preference for 100/0-type beliefs, at the individual level or at the group level.
Mathematics in Macroeconomics
Big Ideas has some other bits of philosophy that might interest people. Chapter 4 has a section (Subsubsection 4.2.4, to be precise) defending the use of mathematics in macroeconomics. The main reasons for math, Athreya says, are that it allows us to be quantitative (which we need in order to make policy), and it allows us to be precise about our meaning. That's pretty similar to what I wrote in this post a while back. And I still think these are good reasons to use math whenever possible.
Athreya makes an important point: Without math, discussions about theory often degenerate into arguments about "what Keynes really meant", or "what Hayek really meant", etc. Like prophets and philosophers, "literary" economists will inevitably get reinterpreted, misinterpreted, and conflictingly interpreted. Spend some time talking with those "heterodox" economists who believe that econ should be a purely literary field, and you'll see what I mean.
But Athreya goes further. He doesn't just defend the use of math in macro; he frowns heavily on any economic discussion that doesn't use math. From Subsubsubsection 4.2.4.2:
The unwillingness [of economists] to couch things in [mathematical] terms (usually for fear of "losing something more intangible") has, in the past, led to a great deal of essentially useless discussion.
The plaintive expressions of "fear of losing something intangible" are concessions to the forces of muddled thinking.

That's an interesting claim coming from a book that contains no equations...
But anyway, I think Athreya overlooks something important, which is the role of non-mathematical discussion in idea generation. Even when you use math to make a model, you don't start with the equations - you start by thinking about the concepts. Sometimes you need a lot of thinking before you come up with concepts that are interesting enough to model formally. Sometimes that thinking can't be done by just one person, and instead requires a discussion between people.
Sometimes, hidden among the reams of useless discussion, is a great idea that ends up turning into a great mathematical model. If you always start with the math, you can crank out models, but I think you might miss some of those deeper, bigger insights.
Anyway, in Part 3, I'll discuss the main "Big Idea" in Big Ideas: Arrow-Debreu equilibrium. Stay tuned!
Noah, thanks for this series. It's off to a great start. This was a shocking line from Athreya:
"Seen this way, it is actually the collection of assumptions that we 'like most' which constitutes our understanding of the world."
(!!!)
I had no idea it was that bad.
http://informationtransfereconomics.blogspot.com/2013/04/the-philosophical-motivations.html
The lack of controlled experiments in macro is a red herring. Astronomy does just fine. The main problem with the lack of controlled experiments is that the empirical work appears to be done with axioms economists "like best" per the quote above. That is the difference. Astronomy done with axioms people "liked best" gave us astrology.
http://informationtransfereconomics.blogspot.com/2014/05/a-starry-eyed-aside-on-methodology.html
The lack of controlled experiments in macro is a red herring. Astronomy does just fine.
This, this. There are other examples in the natural sciences: geology, paleontology, and meteorology -- admittedly not as empirically successful, but still evincing the feasibility of observational science in general.
I would also question the extent of paucity of macroeconomic data since, after all, national series come from huge sets of aggregated sectoral and regional input-output data. To be sure, the total economy is a fundamental level for many reasons (monetary policy, for starters); but there's still a lot of exploitable spatial heterogeneity left out there for clever quasi-experimental designs. I guess this is an area where people like Ethan Kaplan or Suresh Naidu will end up eating the old guard's lunch.
Much of the microdata behind aggregate series is not accessible to researchers, or has high access costs. But there is a lot of work along this line. See, e.g., http://www.aeaweb.org/econwhitepapers/white_papers/John_Haltiwanger.pdf and Haltiwanger's entire CV. In my view, this is the future of macro. Note, though, that it is not incompatible with the DSGE approach, which can be generalized to allow for a lot of heterogeneity.
Computational limitations used to be a hurdle for the spread of microdata macro, but that is increasingly not the case. The main remaining hurdle consists of problems inherent in the use of administrative data: confidentiality issues (for which there is a large body of law, most of it probably necessary), slow data release cycles, and general data messiness.
In astronomy, the data sets are much bigger and the system dynamics much simpler. To see how vexing complexity is, try predicting earthquakes.
Astronomy does not really have data as limited as economics'. We have a good understanding of the fundamental laws of physics, which we can apply to astronomy. For example, the laws of thermodynamics apply in outer space just as much as they do on Earth. In other words, the "assumptions" in astronomy are derived from physics, which has plenty of data.
On the other hand, we do not have a good understanding of the fundamental laws of preferences and choice. If we did, then the only difficulty in macro would be aggregation.
@Joshua Yes, exactly. If your assumptions lack firm grounding, then it doesn't matter how much or how little data you have.
@Krzys Earthquake prediction is in a better situation since the underlying physics assumptions are reasonably well grounded. We can call it a complex or unpredictable system and mean something by it (e.g. materials failure can be unpredictable in the lab). We can't really call an economic system complex or unpredictable, because we don't have the underlying assumptions on which to base that claim -- its complexity and unpredictability would have to be assumed from our ignorance. Put another way, earthquake prediction is provably hard. We can't prove economics is unpredictable yet.
Ok, but what about other relatively data-scarce, mostly observational sciences like medicine or paleontology that also cannot straightforwardly derive their core theories directly from well-established experimental sciences?
As Jason points out, this trope of constantly blaming the appalling state of modern macro on unproven "complexity spirits" smacks of begging the question and is getting old. Of course science is hard: we would have figured it out millennia ago otherwise. An altogether different matter, though, is whether a given phenomenon, like earthquake fractures or the three-body problem, is provably hard to measure/predict, even approximately. And macro is remarkably silent on this.
To be fair, macro does have some hardness theorems: the Lucas critique, for one; or the more recent work on the NP-completeness of market clearing. But on concrete questions of statistical power and information-theoretic model inference, the discipline falls remarkably short. Cosma Shalizi tried for some time to convince macroeconomists to formalize and quantify what they mean by "we don't have enough data for our complicated models", to no avail so far.
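To give a flavor of the kind of quantification I mean, here's a back-of-the-envelope calculation (every number here is hypothetical, just for illustration): with 60 observations and plausible macro volatility, how many standard errors separate two competing trend-growth stories?

```python
# Hypothetical numbers throughout: how sharply can ~60 observations
# distinguish two stories about trend growth?
import math

sigma = 2.0   # assumed std. dev. of annual output-growth shocks, in pct points
n = 60        # number of observations
gap = 0.5     # hypothetical gap between the two trend-growth stories, in pct points

se = sigma / math.sqrt(n)   # standard error of the estimated mean growth rate
z = gap / se                # how many standard errors the gap amounts to

print(f"standard error: {se:.2f} pct points; gap = {z:.1f} standard errors")
```

Under these made-up numbers the gap comes out to roughly two standard errors: borderline at best. That is the sort of power calculation one could actually write down and argue about, instead of waving at "not enough data".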
But hey: why change the status quo? Once you're in, all it takes to be a "macroeconomist" today is to play around on the blackboard with some half-baked frictions, log-linearize around the steady state, and match some dubious moments or, if feeling adventurous, "confirm" your empirical win with some poorly reasoned IV on 60 data points over the weekend... Plus you get to wear a tie unironically (since, chances are, you'll be male), be quoted in the WSJ, etc.
@Jason But, with better data, we would have better foundations. All foundations come from data; otherwise we are just human brains clouded in darkness and silence. Astronomers can do well without lots of astronomical data because they have lots of data on similar phenomena from physics. If macro had more data, then, like physicists, economists could develop good macro models based on consistent foundations.
@Ryan,
Thank you. John Haltiwanger definitely gets it. The "manifesto" you linked should be required reading in every econometrics course [1]:
Here is the vision. A social scientist or policy analyst [...] is investigating the impact of the “great” recession and anemic recovery (as of September 2010) on businesses and workers. The analyst begins by exploring the latest aggregate data showing economy-wide, sector-level and broad regional-level variation in terms of business productivity, output, capital investment, prices, wages, employment, unemployment and population. The data on employment changes can be decomposed into hiring, quits, layoffs, job postings, job creation and destruction. The data on unemployment can be decomposed into gross worker flows tracking flows into and out of unemployment. The data on workers is linked to measures from household data tracking income, consumption, wealth, consumer finances and household composition. The data are high frequency (monthly or quarterly) and timely (data for the most recent quarter or month). The data are available not only for the present time period but historically for several decades permitting analysis of both secular trends and cyclical variation.
[...] The analyst can conduct empirical studies at the economy-wide, broad sectoral and broad regional level with data broken down by all of these dimensions. In addition, the analyst can drill down to the individual and firm level creating a longitudinal matched employer-employee data set with all of this information at the micro level. This permits panel data analysis using rich cross sectional and time variation data tracking the outcomes of businesses, workers and households. These outcomes can be tracked at the very detailed location (Census block or track) and detailed characteristics level. The drilled down data aggregates to the national key indicators that receive so much attention.
The analyst can ascertain, for example, is it really the case that it is small, young businesses that normally would be creating jobs given their productivity and profitability who can’t get credit that is accounting for the anemic recovery as of September 2010. The analyst could track what type of financing has especially decreased relative to other economic recoveries. The analyst could analyze the impact of policy interventions historically and how they have or have not had influence on different types of businesses and in turn on the workers employed by these businesses.
[1] http://www.aeaweb.org/econwhitepapers/white_papers/John_Haltiwanger.pdf
@Jason
The earthquake example shows you that even if you knew the underlying dynamics perfectly (which we don't for econ), we still couldn't understand the aggregate behavior.
The medicine example is almost too perfect here. You should acquaint yourself with the work of John Ioannidis.
Math is extremely overrated in the economics profession; I would treat economics as a philosophical field rather than a scientific/mathematical one. Knowing the history of economic thought is way more valuable than being good at mathematical modelling and econometrics. Didn't Piketty say that most economists are economically illiterate due to their narrow focus on rigorous mathematics (or something like that)?
I agree with the overrated bit, especially with regard to, e.g., DSGE. I do think math serves one invaluable purpose Athreya articulates:
The unwillingness to couch things in [mathematical] terms (usually for fear of "losing something more intangible") has, in the past, led to a great deal of essentially useless discussion.
Math can be a useful backstop against BS :)
Math is just a precise language. I get annoyed with math mysticism like this. I think Noah has it right here: math is not enough (you need to think about the concepts first), but it really does help with clarity.
Having conceptual clarity is invaluable, but math is extremely useful in helping you ask better questions of data. Philosophical fields often degenerate into empty arguments over what the prophet really meant. A particularly pathetic waste of time.
"macroeconomists are sort of halfway between scientists and lawyers"
But that's a good thing, right? Lawyers often have to argue a reality that is low probability (say 37% innocent) against the higher probability (63% guilty). I suspect lawyers might even be better Bayesians by disposition than most physicists. So what better mix for economists? You seem to be "bothered" by persuasion and the status quo, but you don't really offer an alternative.
In any case, thanks for taking so many posts to work through Athreya's book ... nice to see it discussed in depth here.
A lawyer dressing up in a scientist's clothes isn't halfway between a lawyer and a scientist. They are a charlatan.
Macroeconomics works with accounting totals. It's hard for me to imagine that abstracted "entities" like accounting totals could be causing anything. Except, perhaps, when people modify their behaviour en masse in response to disseminated "news" about those accounting totals.
ReplyDeleteAnyway, in Part 3, I'll discuss the main "Big Idea" in Big Ideas: Arrow-Debreu equilibrium.
When a local bookshop went bust I got Mas-Colell et al. for a few euros. That has all the Arrow-Debreu I'll ever want. So please tell me: is there anything at all in Big Ideas that I won't find in Mas-Colell? Or does Big Ideas in Macro consist entirely of well-worn ideas in Micro?
It has references to interesting papers. I'm not sure if you'll find those citations in Mas-Colell. But I think if you're reading Mas-Colell, you won't learn anything new from Athreya's book.
Thanks, I don't think I'll be buying Athreya's book unless another bookshop bites the dust. (One does not simply read Mas-Colell....)
DeleteI think that discussions on maths in economics tend to miss out the big problem with actually *using* maths in economics.
This problem is intractability.
Quite often, one is constructing a model and the maths simply becomes far too complex to admit a tractable analytic solution. Of course, one can use approximations or simulations, but these can rarely be generalised as much as an analytic result and are not easily used in other theories.
This means that there is a prejudice in favour of tractable, analytic results over approximated or simulated results. This can skew the models which are actually chosen. Even worse, it is hard to argue against the tractable results if the alternative you are suggesting is intractable, particularly with the level of maths-worship suggested by Athreya.
I would argue that this is important because it is systematic. Rational Expectations is simpler (more simplistic?) than the alternatives, but it gained momentum at least partly because it is easily manipulated (i.e. tractable). In micro, the interest in stochastic evolutionary game theory is based around a very specialised view of stochastic shocks. Alternative views have made little progress because their formulations are less tractable.
See the literature. Computational macro occupies most of it these days, in some form or another--that is, most macro models are solved/simulated in a computer. It's probably not accurate to say that "there is a prejudice in favour of tractable, analytic results."
Complexity is your enemy when dealing with limited data sets. Trying to make more "realistic" (meaning complex) models is a road to nowhere: tractable and simple models are the only way.
If a model is too hard to solve, then it is probably unclear what will happen to various variables of interest after a shock. For example, suppose my agents have some complicated utility functions, so, after the labor tax is increased, it is difficult to determine whether aggregate labor supply goes up or down. Or, maybe it is difficult to determine whether average productivity will go up or down. In that case, there are multiple forces at work, and I can predict whatever I want without solving my model; I can say "this force will drive down productivity when you increase taxes" or I can say "this force will drive up productivity when you increase taxes". My model (literary or mathematical) is untestable and not useful.
DeleteNoah's right that non-mathematical models can be useful for exploring ideas. But, ultimately, you can't get much policy advice or concrete predictions without a solvable mathematical model (solvable analytically or through calibration and simulation).
Ryan Decker - Fair point, but I would point out that a lot of the building blocks of these models are based around "tractable" elements. Furthermore, this trend is rather late in the day - economics is built on 70+ years of analytical solutions, which inform the foundations of the discipline and ignore the "hard stuff". Analytic solutions, incidentally, still rule the world in micro, and I suspect there is still quite a lot of that in macro.
Joshua Weiss - I'm not sure of your point here. My point is that there is a systematic bias in favour of *analytical* solutions to models, and any models that cannot be *analytically* solved tend to get thrown out (or at least given lower weight).
A tractable model may give you nice, precise predictions, but if they are wrong then they are pretty useless. Being vaguely right is better than being precisely wrong.
Anonymous - You are definitely right about the bias. My point is that, whether you use math or not, the bias is there. If your literary model can give you useful results (beyond idea generation that is only indirectly useful), then you should be able to write it down mathematically and solve it. If you can't, then I'm skeptical that your logic is sound. It is almost always more difficult to spot logical flaws or assumptions when math is not used. Of course, this does not mean that a non-math model is bad. It just means that, when confronted with it, I'll say "let's try to write it mathematically and see if it still makes sense."
DeleteIt's a social science because it deals with people and how they interact with each other and their environment. Hard science deals with things and ho they interact with other things.
That's irrelevant. People are things, too.
Another way to think about this: Economics is not a science, but one can still accumulate data, think about it, apply mathematics, theorize, and propose solutions in a rational and scientific way.
The problem in macro is extremely hard. The data are effectively too noisy for us to resolve the signal. The problem is that, instead of trying to extract the little information that is available, current macro develops lots of strategies to avoid any confrontation with data. The Lucas critique is a prime example of a useful idea that degenerated into an avoidance strategy. Instead, we have an orgy of mathematical complexity in service of glorified curve fitting.
ReplyDelete"There is a second, even more substantial difference from the physical sciences: for most important macroeconomic questions, macroeconomists cannot conduct controlled experiments."
Historical geology has this constraint, and manages to be pretty science-y.
Again, how well are the earthquake predictions going?
So you are saying geology isn't a science?
No, I am saying both geology and econ are sciences.
Krzys, geologists may not be able to predict the exact timing of earthquakes, but they do know (roughly) where they are most likely to occur, and at what frequency and magnitude. They understand why and how they happen. This means that people can at least prepare for them or be aware that they're a possibility in a given area.
DeleteI don't see how economists have achieved anything comparable in macro.
Excusing macro as "hard" is like excusing the hubris of Marxists and their dialectic systems because modeling future history is hard. Suppose Freud had expressed the Oedipus complex as
Oe = Dm - Fc
where
Oe = degree of Oedipal predilection
Dm = the son's sexual desire for his mother
Fc = fear of castration by his father
Does the use of mathematical language, by itself, legitimize unfalsifiable theories and tautologies by giving them "precision of meaning"? How much weight should we give to equations made of unmeasurable free parameters like utils, propensities, and expectations?
That's such a great example of the pointlessness of attaching mathematical magnitudes to unquantifiable, incommensurable properties.
As someone whose economics degree is more than four decades old (but who earned a living from the subject for most of my career), I've found it increasingly difficult to keep up with what counts as "progress" in the discipline. It seems to me that the parts that are mathematical are not very useful, and the parts that are useful are not very mathematical.
In science there is the concept of "significant digits". Don't report a result to the nearest hundredth if your ruler only measures to 64ths. Economics uses math, and in doing so ignores the reality that it should report results with zero significant digits.
The debate here will be used by future philosophers of science as a case study: this is what it looks like inside the old paradigm, right before the shift. The poor creatures have no idea.
Meanwhile, complex systems analysis correctly predicted the path of Superstorm Sandy. But that's an entire field working under a productive model. The tiny minority in econ doing the same is just now picking up steam.
What would a macro comment thread be without "complex systems" mysticism? Nowhere, that's where.
DeleteIt's very well using maths to try and be precise and clear about your ideas, but if the properties in the equations are not directly observable (utility, TFP) then I don't see how your ideas are actually linked to the real world. In other words, you're being precise about absolutely nothing.