Friday, April 07, 2017

Intuitionism vs. empiricism


A few weeks ago, I disagreed with a Russ Roberts post about empirical economics. I wrote that if you don't rely on empirical evidence or formal theory, you're just going to end up relying on intuition gleaned from half-remembered simplistic theory and carelessly gathered evidence:
[One] option [for making policy predictions] is to rely on casual intuition - not really theory, but a sort of general gestalt idea about how the world works. If we're of a free-market sort of persuasion, our casual intuition would tell us that minimum wage is government interference in the economy, and that this is bound to turn out badly. Russ seems to be advocating for this... 
As I see it, the fourth option is by far the worst of the bunch. Theories can be wrong, stylized facts can be illusions, and empirical studies can lack external validity. But where does casual intuition even come from? It comes from a mix of half-remembered theory, half-remembered stylized facts, received wisdom, personal anecdotal experience, and political ideology. In other words, it's a combination of A) low-quality, adulterated versions of the other approaches, and B) motivated reasoning.  
If we care about accurate predictions, motivated reasoning is our enemy. And why use low-quality, adulterated versions of theory and empirics when you can use the real things?
In a recent episode of EconTalk, Russ demonstrates this "intuitionist" approach, as applied to the question of the minimum wage. Russ is interviewing Andrew Gelman about problems with statistics in empirical research. Andrew explains things like p-hacking and the "garden of forking paths" (his colorful term for data-dependent testing decisions), which make reported confidence bands smaller than they should be and thus lead to a bunch of false positives.
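To make Andrew's point concrete, here's a toy simulation (all numbers invented for illustration, not taken from any real study) of how the garden of forking paths inflates false positives: the true effect is zero, but an analyst who tries ten different subsamples and reports the best p-value "finds" significance far more often than the nominal 5 percent:

```python
# Toy illustration of data-dependent testing ("the garden of forking
# paths"). The true effect is zero; a pre-specified test rejects ~5% of
# the time, but picking the best of several subsample tests rejects far
# more often. All numbers here are invented for the sketch.
import math
import random

def p_value_mean_zero(sample):
    """Two-sided p-value for H0: mean = 0, using a normal approximation."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    z = mean / math.sqrt(var / n)
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

random.seed(0)
trials, n, n_paths = 2000, 100, 10
prespecified_hits = forked_hits = 0
for _ in range(trials):
    data = [random.gauss(0.0, 1.0) for _ in range(n)]  # no true effect
    if p_value_mean_zero(data) < 0.05:
        prespecified_hits += 1
    # "forking paths": try 10 different subsamples, keep the smallest p
    best_p = min(p_value_mean_zero(random.sample(data, n // 2))
                 for _ in range(n_paths))
    if best_p < 0.05:
        forked_hits += 1

print("pre-specified test false-positive rate:", prespecified_hits / trials)
print("best-of-10-subsamples false-positive rate:", forked_hits / trials)
```

The pre-specified test rejects about as often as advertised; the best-of-ten procedure rejects several times as often, even though there's nothing to find.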

Russ uses Andrew's explanations as a reason to be even more skeptical of empirical results that don't match his intuition. Because I can't find an official transcript, here is an unofficial transcript I just made of the relevant portion (Disclaimer: I don't know the standard rules for making transcripts of things; I edited out pauses and other minor stuff):
ROBERTS: But let's take the minimum wage. Does an increase in the minimum wage affect employment, job opportunities for low-skilled workers - a hugely important issue, it's a big, real question. And there's a lot of smart people on both sides of this issue who disagree, and who have empirical work to show that they're right, and you're wrong. And each side feels smug that its studies are the good studies. And I reject your claim, that I have to accept that it's true or not true. I mean, I'm not sure which...Where do I go, there? I don't know what to do! I mean, I do know what to do, which is, I'm going to rely on something other than the latest statistical analysis, because I know it's noisy, and full of problems, and probably has been p-hacked. I am going to rely on basic economic logic, the incentives that I've seen work over and over and over again, and at my level of empirical evidence that the minimum wage isn't good for low-income people is the fact that firms ship jobs overseas to save money, they put in automation to save money, and I assume that when you impose a minimum wage they're going to find ways to save money there too. So it's not a made-up religious view, I have evidence for it, but it's not statistical. So, what do I do there?
I think several things are noteworthy about Russ' monologue here.

First, Russ claims that minimum wage studies are "full of problems" and have "probably been p-hacked". As far as I can tell, he doesn't have any evidence for this, nor can he name what these problems are. He doesn't seem to know, or even to have checked, how precise the reported results in any of these studies are in the first place - did the authors report p-values of 0.048, or of 0.0001?

As for p-hacking and data-dependent testing, the basic test of a minimum wage hike's effect on employment is pretty universal and is known and decided upon before the data comes in (including things like subsamples and controls). So while some analyses in any minimum wage study are vulnerable to p-hacking, the basic test of employment isn't really that vulnerable.

So Russ seems to have interpreted Andrew to mean that all empirical studies are p-hacked, and therefore all unreliable. But I doubt Andrew wanted to convey a message of "Don't pay attention to data, because all hypothesis tests are irrevocably compromised by p-hacking and data-dependent testing, so just go with your intuition."

Second, Russ' preferred method of analysis is exactly the "intuitionism" I described above. Russ states his intuition thus: Because companies do lots of things to try to lower costs, minimum wage must be bad for low-income people. But that intuition is not nearly as rich as even a very simple Econ 101 supply-and-demand theory. In the simple theory, if the elasticities of labor supply and demand are low, the minimum wage has only a very small negative effect on employment - consistent with the empirical consensus.
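The back-of-envelope arithmetic behind that point is simple: when the wage floor binds, employment sits on the labor demand curve, so the percentage employment change is roughly the demand elasticity times the percentage wage change. A sketch (the elasticity values below are illustrative assumptions, not estimates from any study):

```python
# Back-of-envelope sketch of the Econ 101 logic: for a binding wage
# floor, %change in employment ~ (labor demand elasticity) x (%change
# in wage). Elasticity values are invented for illustration.
def employment_change_pct(wage_change_pct, demand_elasticity):
    """Approximate % change in employment for a binding wage floor."""
    return demand_elasticity * wage_change_pct

hike = 10.0  # a 10% minimum wage increase
for eps in (-1.0, -0.3, -0.1):
    print(f"demand elasticity {eps:+.1f} -> "
          f"employment change ~ {employment_change_pct(hike, eps):+.1f}%")
```

With an elasticity of -0.1, a 10% minimum wage hike costs about 1% of affected employment - small enough to be hard to distinguish from zero in noisy data.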

And if you model a firm, you find that there are a number of ways companies can save money in response to a minimum wage increase other than firing workers. They can raise prices. They can reduce markups. They can reduce the salaries of higher-paid workers. There are all kinds of margins for cost minimization not included in Russ' intuition, but which could explain the empirical consensus.

Also, a simple monopsony model shows that cost minimization can make a minimum wage raise employment rather than lower it. A monopsonistic company lowers costs by paying workers less and producing an inefficiently small quantity. Minimum wages, by raising costs, actually increase efficiency and raise employment in that simple model, which would also explain the empirical consensus.

But Russ' intuition doesn't include simple monopsony models, multiple margins of cost reduction, or inelastic supply and demand curves. The intuition is just one piece of a theory. That's why I think intuitionism, as a method for understanding the world, is strictly dominated by a combination of formal theory and empirical evidence.

Finally, by declaring empirical economics to be useless, Russ is condemning the majority of modern economics research. I'm not sure if Russ realizes this, but empirical, statistical research is most of what economists actually do nowadays:


Theory papers were down to less than 30 percent of top journal papers in 2011, and the trend has probably continued since then. By dismissing empirics, Russ is dismissing most of what's in the AER and the QJE. He's dismissing most of the work that economics professors are doing, day in and day out. He may not realize it, but he's claiming that the field has turned en masse toward bankrupt, useless endeavors.

That is a bold, aggressive claim. It's a much stronger indictment of the economics profession than anything ever written by Paul Krugman, or Paul Romer, or Brad DeLong, or any of those folks. I don't know if Russ realizes this.


Update

Russ responds in the comments. He notes that a slightly abridged transcript of the Gelman interview can be found at this link.

David Childers notes in the comments that correcting for publication bias generally results in smaller estimates of the disemployment effects of minimum wage.
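For intuition on why such corrections shrink the estimates, here's a toy version of Gelman's "statistical significance filter" (all numbers invented for the sketch): studies estimate a small true effect with noise, but only statistically significant estimates get published, so the published average exaggerates the effect:

```python
# Toy "statistical significance filter" (all numbers invented): studies
# estimate a small true effect with noise, but only estimates with
# |z| > 1.96 get published; the published average overstates the effect.
import random

random.seed(1)
true_effect = -0.1  # assumed small true disemployment elasticity
se = 0.15           # assumed standard error of each study's estimate

published = [est for est in
             (random.gauss(true_effect, se) for _ in range(20000))
             if abs(est / se) > 1.96]

mean_published = sum(published) / len(published)
print(f"true effect:             {true_effect:+.3f}")
print(f"mean published estimate: {mean_published:+.3f}")
```

Inverting a filter like this, as the publication-bias corrections do, pulls the estimate back toward the (smaller) truth.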

25 comments:

  1. Anonymous 11:05 PM

    Lots of good evidence here. Shame it goes against my intuition, so I'll have to discard it. Good try though!

  2. I thought this was a particularly funny example for the problems Gelman was trying to point out. It's absolutely true that publication and specification decisions based on p-values can induce a distorted representation of the weight of the evidence for any particular proposition, and so it is proper to somewhat discount published numbers and to weight the evidence based in part on theory. Gelman talks about the process as applying a "statistical significance filter" to the signal, and in some very cool recent work by Isaiah Andrews and Max Kasy, they show that this description as a "filter" means one can "invert" the filter to recover debiased estimates (not the original signal, of course, since significance mining entails strict loss of information). See http://scholar.harvard.edu/files/kasy/files/publicationbiasmain.pdf

    The interesting thing is, they apply it to meta-analysis of the minimum wage and do find evidence of notable bias; however this bias is in favor of stronger unemployment effects, exactly the opposite of the direction Russ conjectures. Theory can be valuable for disciplining noise and bias which exist in the research process, but it's important to incorporate it systematically and in a way which permits the evidence to have an impact.

    Replies
    1. The interesting thing is, they apply it to meta-analysis of the minimum wage and do find evidence of notable bias; however this bias is in favor of stronger unemployment effects, exactly the opposite of the direction Russ conjectures.

      Yeah. I know of the study Menzie Chinn referenced, by Doucouliagos and Stanley. Do you know of any others?

    2. I have to say I don't, except in the Andrews & Kasy paper, which reanalyzes the studies in the meta-analysis by Belman and Wolfson (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2705499), mostly as an application for their new estimators, which attempt to correct meta-analyses for publication bias. Not being a labor economist, I don't have a good enough sense of the details, or of how representative the selection of studies or estimates is, to fully account for precisely what is being measured. One of the main issues with this kind of study is accounting for effect heterogeneity: there's no reason to believe that all minimum wage changes have the same effect, or that all research designs are estimating the same thing, so at best one is getting a summary of a distribution of conceptually different objects. From a policy perspective, it's also not clear which of these elasticities we care most about, so that may reweight the evidence. But all of this can (and should) be dealt with systematically, at least to the point of providing appropriate measures of the variability of our estimates.

    3. One of the main issues with this kind of study is accounting for effect heterogeneity: there's no reason to believe that all minimum wage changes have the same effect, or that all research designs are estimating the same thing, so at best one is getting a summary of a distribution of conceptually different objects.

      Yep. Agree that this is the main problem with these studies. Any conclusion that "minimum wage hikes don't decrease employment" is going to be false outside of some domain that we don't really know the boundaries of.

  3. Anonymous 8:50 AM

    This seems like one of those cases, which fascinate me, where it is almost a meta-argument: a debate about which method to use. One method uses criteria A to make judgments. Another method uses criteria B. Both methods are valid and both criteria are virtuous and good, but the judgments they produce conflict. What are the criteria for choosing between two valid, virtuous first principles? In moral philosophy this is a truly vexed question, but in econ it seems like the deciding factor should be which method makes the most accurate empirical predictions. Which is why I like the put-your-money-where-your-mouth-is approach. Time for some high-profile public wagers. We have some cases where minimum wage policies are being implemented. I would like to see these debates include some specific predictions.

    Replies
    1. Agree. But I did predict that the Seattle minimum wage hike would have only small or zero measurable disemployment effects, and that prediction turned out to be right!

  4. Honestly, I'm convinced at this point that learning macro modeling just makes you dumb. You can create a model that tells you anything. Just look at all the neo-Fisherian stuff that's getting published now. A century of previous theory and intuition has been eliminated in favor of the exact opposite recommendation.

    If we don't have empirical research, are we even a science anymore? Being able to justify any silly policy with a theoretical macro model is a sign of academic hubris. Some models really need to die. With data we can at least tell which ones deserve to live.

  5. You keep saying that Russ is calling empirical research "useless" or that he "dismisses" all empirical studies. I haven't seen him say, write, or even insinuate that anywhere. His original post on the issue was quite clear. He's advocating to be more careful and less confident in empirical research.

    The fundamental problem I see with a lot of empirical studies is the confidence with which results or conclusions are presented. At least with theory, we generally know it's theory. With empirical data, we get lulled into believing results are facts...until we see an empirical study saying something completely different.

    Maybe I have it wrong, but I think Russ is just trying to say slow down and be careful when using empirical data as the be all, end all. Stop putting words in Russ' mouth or at least show us where Russ is saying the things you say he is saying. So far, the blog post and your paraphrased transcript didn't say at all what you say it said.

    Replies
    1. Yes, if you take his sentiment and don't read what he actually proposes, you might believe him that he just wants us to be more humble, gosh gee whillickers.

      Now if you actually read the interview, he says stupid things like "I have evidence for it. But it's not statistical." My only charitable interpretation is that he is so dumb and/or out of touch that he thinks other economists allow any given regression to move their prior in the same way, rather than actually evaluating the validity of the study...since he has no ability to do that, he just assumes nobody else can.

    2. No, if you listen to the actual podcasts about these topics, that isn't what he thinks. He thinks they don't change their priors much at all. He's saying people dismiss studies they disagree with. He's talking about being upfront about this and being humble about it.

  6. I'm sorry, but Russ Roberts has been playing this tune for ages now and it's become tiresome. The solution to statistical debates is not to throw up your hands, quote Hayek, and then substitute your ideology. It's to learn statistics and figure out where those you disagree with went wrong.

  7. Many thanks for exploring the implications of what I was suggesting in the conversation with Andrew Gelman.

    First, I am well aware (partly from you!) of the direction economics has taken in recent years toward becoming a branch of applied statistics. I think you also know that what the profession calls "theory" is not particularly my cup of tea, either.

    Second, I am well aware of monopsony theory and the implications for employment in the face of a minimum wage. I don't know of any evidence that suggests it is anything more than a theoretical curiosity good for exam questions.

    Third, while a firm facing an increase in the minimum wage might choose to try to raise prices, I do think firms react on all margins and, in particular, will try to use less of something that gets more expensive. I sense and observe that firms often do this in other contexts (firms move factories overseas, add automated checkout, etc.) when wages rise; I don't see why the minimum wage would be any different.

    Fourth, the elasticity of demand is the relevant empirical measure, and I think it is totally legitimate to support the minimum wage because of a belief that the elasticity is "small." Small is not zero, however - and zero is a number I hear occasionally invoked as a reason to support raising the minimum wage, as if that's a free lunch, at least for low-skilled workers. I would add that "small" still means that the least-skilled of the low-skilled population bears the brunt of the impact, and I find that particularly painful.

    Fifth, the comment made by "The Donk" has captured my view pretty well. I would only add that because of my skepticism, I think all purveyors of sophisticated statistical analysis should present some measure of robustness/sensitivity alongside their findings. The work of Ed Leamer provides some specifics. The fact that his suggestions have been widely ignored is a sad statement about the state of our profession. When attending empirical presentations I now ask how many regressions were run to generate the estimate in the table and why is it that the one being presented is the one we are to take as true and not the others that are not being presented. I encourage others to take this approach when consuming empirical findings at workshops and conferences.

    Sixth, I especially appreciate you quoting me with an attempt to capture what I was actually saying. Happy to let you know that we have always produced what we call highlights of every EconTalk episode and that in recent years they are very close to transcripts. The Gelman one is here: http://www.econtalk.org/archives/2017/03/andrew_gelman_o.html

    Looking forward to continuing this conversation.

    Replies
    1. >I think all purveyors of sophisticated statistical analysis should present some measure of robustness/sensitivity alongside their findings.

      As Dario Sidhu notes above, it is amazing how the host of the most popular podcast on economics is so deeply disconnected from modern economics as practiced that he doesn't know that sensitivity analyses have become increasingly present and (arguably excessively) demanded by reviewers.

      As for Leamer's specific suggestion of extreme bounds analysis back in 1983, there's a reason why modern studies don't do that, and it's discussed in Angrist and Pischke JEP 2010.

      It's the use of weaselly phrases like this that I find especially frustrating and deceptive: implying we all somehow ignored Leamer for decades when we didn't (there's an entire JEP issue celebrating modern reactions to his '83 paper, so come on), or saying "I don't know of any evidence..." as a way of implying that there is no evidence, when in reality there's some evidence of monopsony power in a couple of studied labor markets (which apparently he has not bothered to seek out).

      These are all things that someone who actually keeps his finger on the pulse of the discipline he purports to represent might know of.

    2. What information, on its own, does the number of regressions run give you? And do you apply this approach uniformly to both results that defy your intuition and results that confirm your biases?

    3. Howdy, Russ! Thanks for stopping by!



      Yep. And I'm saying: "With neither of those things, what have we really got left?". I just don't trust intuitionism at all.

      Second, I am well aware of monopsony theory and the implications for employment in the face of a minimum wage. I don't know of any evidence that suggests it is anything more than a theoretical curiosity good for exam questions.

      What evidence would you accept, though? Any evidence would be statistical in nature, would it not? And if we can't trust statistics...

      I do think firms react on all margins and in particular, will try to use less of something that gets more expensive...I don't see why the minimum wage would be any different.

      Well, the available evidence mostly says that they do do this, just not very much.

      "small" still means that the least-skilled of the low-skilled population bears the brunt of the impact and I find that particularly painful.

      Yep. BUT, note that measured elasticities are generally small for low-education, low-wage workers, and zero for higher-wage workers.

      I think all purveyors of sophisticated statistical analysis should present some measure of robustness/sensitivity alongside their findings.

      Can you point me to a minimum wage study that did not do this, or did it less than you'd like?

      The Gelman one is here: http://www.econtalk.org/archives/2017/03/andrew_gelman_o.html

      Ahh, thanks very much!! I was looking for that.

    4. Weird. The first thing I was replying to got cut off. I was replying to this:

      First, I am well aware (partly from you!) of the direction economics has taken in recent years toward becoming a branch of applied statistics. I think you also know that what the profession calls "theory" is not particularly my cup of tea, either.

      With this:

      "Yep. And I'm saying: "With neither of those things, what have we really got left?". I just don't trust intuitionism at all."

  8. I am trying to reply to the commenter bravely going by the name of "?" who has replied to my longish response.

    I am happy to know that sensitivity analysis is increasingly required. I don't know how widely it is used, but I'm happy to hear that it is more frequent. I just pulled up "The China Shock" by Autor, Dorn, and Hanson (http://www.ddorn.net/papers/Autor-Dorn-Hanson-ChinaShock.pdf), one of the most influential articles in recent years. It has sentences like:

    "Given average an- nual earnings of approximately $40,000 per worker, a reduction of employment by 0.78 percentage points (sum of columns 1 and 2) per $1,000 dollars of imports per worker lowers earnings per adult by approximately $312 per year (0.0078 × 40,000)."

    Or Figure 7, which looks at the increase in government benefits from a $1000 per worker increase in Chinese imports and comes up with the number $57.73. Very precise. I don't see any attempt to convey to the reader how reliable that number is or how sensitive it is to the many assumptions that went into it.

    I think this is standard operating procedure in many (most?) articles. Certainly when the media gets hold of timely studies, they only report point estimates.

    I would be interested in an example of a popular highly-cited article on a controversial topic that provides some measure of sensitivity to specification and other key assumptions.

    Replies
    1. Many do, Russ. You should read the literature and see how many do, instead of assuming they don't!

      How about the Seattle Minimum Wage Study? They do a lot of alternate specifications. http://evans.uw.edu/policy-impact/minimum-wage-study

      That's the kind of thing Leamer likes.

    2. "The China Shock" is not a terribly influential article in and of itself, as it is just a literature review summarizing a body of research findings published in several papers that do display robustness checks and sensitivity analysis. The influential paper you're thinking of is "The China Syndrome," which contains tons of robustness checks.

    3. Sometimes all we need is Adam Smith-style analysis. We don't need statistical analysis to tell us that a higher minimum wage will decrease the demand for labor. As for predicting the economy 10 years in the future, I think you have to do deductive analysis.

  9. I have this faint memory of you and delong having a pizza challenge. Was there a conclusion?

    Replies
    1. Yep! I lost the bet, and bought DeLong pizza last year. But that means I won my countervailing bet with Patrick Chovanec, on which I have yet to collect... :-)

  10. Anonymous 3:38 AM

    I'm just some moron, but I'm pretty sure you can't do controlled experiments in economics (at least for macro). So it's just for fun then right?

  11. Jonathan Andrews 11:13 AM

    Thank you all for this. My knowledge of both economics and statistics is limited to just a little more than I teach my pre-university students, but it does seem to me that much more is claimed for statistical research than it necessarily merits. Of course there is the jelly bean problem and the garden of forking paths, and what seems never to be mentioned is the quality and reliability of the specification of a regression model (ohhh, let's try log-linear). It seems to me skepticism is necessary, but it can become a counsel of despair. If finding a reliable statistical model is so hard, what can we do (well, you lot in the profession)?
    I think Russ may well find his skepticism convenient to support his views, but I really feel, when I listen to EconTalk, that our host is genuinely trying to understand stuff he doesn't understand.
    Economics is a tough study, and I doubt we can learn very much to help us make the world a better place, but that doesn't matter; I think progress comes obliquely, and the process of trying to figure out the economy is likely to help us in ways we don't expect.
