Sunday, May 15, 2016

Russ Roberts on politicization, humility, and evidence


The Wall Street Journal has a very interesting interview with Russ Roberts about economics and politicization. Lots of good stuff in there, and one thing I disagree with. Let's go through it piece by piece!

1. Russ complains about politicization of macroeconomic projections:
He cites the Congressional Budget Office reports calculating the effect of the stimulus package...The CBO gnomes simply went back to their earlier stimulus prediction and plugged the latest figures into the model. “They had of course forecast the number of jobs that the stimulus would create based on the amount of spending,” Mr. Roberts says. “They just redid the estimate. They just redid the forecast."
I wouldn't be quite so hard on the CBO. It's their job to forecast the effect of policy, and it's also their job to evaluate the impact of policy after the fact; they have to choose a model to do either one. And of course they're going to choose the same model for both, even if that makes the evaluation job just a repeat of the forecasting job. I do wish, however, that the CBO would try several alternative models and show how the estimates differ across them. That would be better than what they currently do.
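Just to make that concrete, here's a rough sketch of what reporting a range across models might look like. Every number and model label below is invented for illustration; none of it is the CBO's actual figures or assumptions.

# Purely hypothetical illustration: score the same spending package under a few
# made-up assumptions about jobs created per billion dollars of spending.
spending_billions = 800                      # size of the package, in billions (illustrative)

jobs_per_billion = {                         # assumed jobs per $1 billion of spending (made up)
    "Large-multiplier model": 12_000,
    "Medium-multiplier model": 6_000,
    "Small-multiplier model": 1_500,
}

estimates = {name: spending_billions * jobs for name, jobs in jobs_per_billion.items()}

for name, jobs in estimates.items():
    print(f"{name:25s} {jobs / 1e6:4.1f} million jobs")

low, high = min(estimates.values()), max(estimates.values())
print(f"Range across models: {low / 1e6:.1f} to {high / 1e6:.1f} million jobs")

The point is just that the headline becomes a range across models rather than a single number.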

I think a better example of politicization of policy projections was given not by Russ, but by Kyle Peterson, who wrote up the interview for the WSJ. Peterson cited Gerald Friedman's projection of the impact of Bernie Sanders' spending plans. Friedman, too, could have incorporated model uncertainty and explored the sensitivity of his projections to his key modeling assumptions. And unlike the CBO, he didn't have a deadline, and no one made him come up with a single point estimate to feed to the media. And some of the people who defended Friedman's paper from criticism definitely turned it into a political issue.

So I think Russ is on point here. There's lots of politicization of policy projections.


2. Peterson (the interviewer) cites a recent survey by Haidt and Randazzo, showing politicization of economists' policy views. This is really interesting. Similar surveys I've seen in the past haven't shown a lot of politicization. A more rigorous analysis found a statistically significant amount of politicization, though the size of the effect didn't look that large to me. So I'd like to see the numbers Haidt and Randazzo get. Anyway, it's an interesting ongoing debate.


3. Russ highlights the continuing intellectual stalemate in macroeconomics:
The old saw in science is that progress comes one funeral at a time, as disciples of old theories die off. Economics doesn’t work that way. “There’s still Keynesians. There’s still monetarists. There’s still Austrians. Still arguing about it. And the worst part to me is that everybody looks at the other side and goes ‘What a moron!’ ” Mr. Roberts says. “That’s not how you debate science.”
Russ is right. But it's very important to draw a distinction between macroeconomics and other fields here. The main difference isn't in the methods used (although there are some differences there too), it's in the type of data used to validate the models. Unlike most econ fields, macro relies mostly on time-series and cross-country data, both of which are notoriously unreliable. And it's very hard, if not impossible, to find natural experiments in macro. That's why none of the main "schools" of macro thought have been killed off yet. In other areas of econ, there's much more data-driven consensus, especially recently. 

I think it's important to always make this distinction in the media. Macro is econ's glamour division, unfortunately, so it's important to remind people that the bulk of econ is in a very different place.


4. Russ makes a great point about econ and the media:
If economists can’t even agree about the past, why are they so eager to predict the future? “All the incentives push us toward overconfidence and to ignore humility—to ignore the buts and the what-ifs and the caveats,” Mr. Roberts says. “You want to be on the front page of The Wall Street Journal? Of course you do. So you make a bold claim.” Being a skeptic gets you on page A9.
Absolutely right. The media usually hypes bold claims. It also likes to report arguments, even where none should exist. This is known as "opinions on the shape of the Earth differ" journalism. This happens in fields like physics - people love to write articles with headlines like "Do we need to rewrite general relativity?". But in physics that's harmless and fun, because the people who make GPS systems are going to keep on using general relativity. In econ, it might not be so harmless, because policy is probably more influenced by public opinion, and public opinion can be swayed by the news.


5. Russ makes another good point about specification search:
Modern computers spit out statistical regressions so fast that researchers can fit some conclusion around whatever figures they happen to have. “When you run lots of regressions instead of just doing one, the assumptions of classical statistics don’t hold anymore,” Mr. Roberts says. “If there’s a 1 in 20 chance you’ll find something by pure randomness, and you run 20 regressions, you can find one—and you’ll convince yourself that that’s the one that’s true.”...“You don’t know how many times I did statistical analysis desperately trying to find an effect,” Mr. Roberts says. “Because if I didn’t find an effect I tossed the paper in the garbage.”
Yep. This is a big problem, and probably a lot bigger than in the past, thanks to technology. Most of science, not just econ, is grappling with this problem. It's not just social science, either - bio is having similar issues.
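To see how easily this happens, here's a quick, purely illustrative simulation: regress an outcome that is pure noise on 20 unrelated noise variables, one at a time, and count how often at least one of them comes up "significant" at the 5% level.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_obs, n_x, n_trials = 100, 20, 1000

trials_with_a_hit = 0
for _ in range(n_trials):
    y = rng.normal(size=n_obs)                # outcome: pure noise
    X = rng.normal(size=(n_obs, n_x))         # 20 unrelated "explanatory" variables
    # run 20 separate simple regressions and collect the p-values
    pvals = [stats.linregress(X[:, j], y).pvalue for j in range(n_x)]
    if min(pvals) < 0.05:                     # did anything look "significant"?
        trials_with_a_hit += 1

print(f"At least one 'significant' regressor in {trials_with_a_hit / n_trials:.0%} of trials")

With 20 independent tests you'd expect at least one false positive in roughly 1 - 0.95^20, or about 64%, of trials. That's Russ's point: run enough regressions and "finding something" is close to guaranteed.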


6. Russ calls for more humility on the part of economists:
Roberts is saying that economists ought to be humble about what they know—and forthright about what they don’t...When the White House calls to ask how many jobs its agenda will create, what should the humble economist say? “One answer,” Mr. Roberts suggests, “is to say, ‘Well we can’t answer those questions. But here are some things we think could happen, and here’s our best guess of what the likelihood is.” That wouldn’t lend itself to partisan point-scoring. The advantage is it might be honest.
I agree completely. People are really good at understanding point estimates, but bad at understanding confidence intervals, and really bad at understanding confidence intervals that arise from model uncertainty. "Humility" is just a way of saying that economists should express more uncertainty in public pronouncements, even if their political ideologies push them toward projecting an attitude of confident certainty. A "one-handed economist" is exactly what we have too many of these days. Dang it, Harry Truman!
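To unpack "confidence intervals that arise from model uncertainty" a little: if you admit you're not sure which model is right, your error band has to cover the disagreement between models, not just the sampling noise within your favorite one. Here's a toy sketch of that idea - every number in it is invented, and the crude weighted pooling below is just one simple way to combine models, not a standard procedure.

import numpy as np

rng = np.random.default_rng(1)

# Toy example: three competing models of a policy's effect, each with its own
# point estimate, its own within-model uncertainty, and a subjective weight.
models = [
    {"name": "Model A", "mean": 2.0, "sd": 0.5, "weight": 0.5},
    {"name": "Model B", "mean": 0.5, "sd": 0.4, "weight": 0.3},
    {"name": "Model C", "mean": 3.5, "sd": 0.6, "weight": 0.2},
]

for m in models:
    lo, hi = m["mean"] - 1.96 * m["sd"], m["mean"] + 1.96 * m["sd"]
    print(f'{m["name"]}: 95% band [{lo:.1f}, {hi:.1f}]')

# Pool simulated predictions across models in proportion to their weights,
# then read off the middle 95% of the pooled draws.
draws = np.concatenate(
    [rng.normal(m["mean"], m["sd"], size=int(100_000 * m["weight"])) for m in models]
)
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"Pooled across models: 95% band [{lo:.1f}, {hi:.1f}]")

The pooled band comes out much wider than any single model's band, which is the whole point - and also why it's so tempting to pick one model and report only its tidy interval.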


7. Russ does say one thing I disagree with pretty strongly:
Economists also look for natural experiments—instances when some variable is changed by an external event. A famous example is the 1990 study concluding that the influx of Cubans from the Mariel boatlift didn’t hurt prospects for Miami’s native workers. Yet researchers still must make subjective choices, such as which cities to use as a control group. 
Harvard’s George Borjas re-examined the Mariel data last year and insisted that the original findings were wrong. Then Giovanni Peri and Vasil Yasenov of the University of California, Davis retorted that Mr. Borjas’s rebuttal was flawed. The war of attrition continues. To Mr. Roberts, this indicates something deeper than detached analysis at work. “There’s no way George Borjas or Peri are going to do a study and find the opposite of what they found over the last 10 years,” he says. “It’s just not going to happen. Doesn’t happen. That’s not a knock on them.”
It might be fun and eyeball-grabbing to report that "opinions on the shape of the Earth differ," but that doesn't mean it's a good thing. Yes, it's always possible to find That One Guy who loudly and consistently disagrees with the empirical consensus. That doesn't mean there's no consensus. In the case of immigration, That One Guy is Borjas, but just because he's outspoken and consistent doesn't mean we need to give his opinion or his papers anywhere close to the same weight we give to the many researchers and studies that find the opposite.


Anyway, it's a great interview write-up, and I'd like to see the full transcript. Overall, I'm in agreement with Russ, but I'll continue to try to convince him of the power of empirical research!

27 comments:

  1. As Angry Paul writes, there is the "center left" and there is the "left," and they want different things.

    The Friedman episode was especially bad for the "center left."

    Friedman, a Clinton supporter, specialized in health care but wrote a paper on macro that was not commissioned by the Sanders campaign. Their policy guy pointed to it once in a media report, which was a mistake. But it wasn't like Sanders was tweeting about it or that they had it linked on their campaign website. They weren't promoting the paper. It got a couple of media mentions months after it was written.

    The reason it became widely known was b/c of Hillary supporters who kept talking about it, like today. And they seem to do it as a way of avoiding talking about the real issues.

    The heavy-handed CEA letter was a great example of politicization and campaign smearing.

    This just seems like trolling and makes me dislike Smith, Krugman, DeLong, PGL, etc very, very much.

  2. Joe T., 7:59 PM

    Gah, Russ Roberts, climate denier (he says "skeptic").

    After listening to his interview of Piketty, where he kept banging him with questions that indicated Roberts never knew and couldn't imagine there would be any problems with inequality, I'd say he needs quite a bit of ideology purging.

  3. I should probably not comment on this, but part of what has me going is that the one time I saw him he was as about as far from being humble as any economist I have ever seen. So, this may be the old pot calling the kettle black, or it takes one to know one. Yes, I agree that in general the profession could use more humility (and probably me as well), but Russ does not seem like the most appropriate spokesperson for this, triggering a lot of sniggering from at least some of us observers.

    There is also the problem that apparently most of the examples that Peterson brought up are ones that Russ has gone on about elsewhere. And, frankly, it does seem that most of the complaints are about people or groups that seem to be pushing or making or verifying left or liberal positions that get eviscerated or held up to scorn. Were there any in there that are supposedly badly done work by anybody clearly in his ideological camp, and he is most definitely very ideological? I did not see any.

    In that regard it is notable that while it was Peterson who brought it up we had the spectacle again of the much-debated Gerald Friedman study, with Russ just making a wisecrack, perhaps a witty and appropriate one. But why not one about some of the wacko studies about supposed effects of tax cuts and so on?

    Oh, and to Peter, while everybody describes Friedman as a Clinton supporter, the basis of this is that he once donated some money to her apparently some time ago. Maybe that makes him her supporter, but from all I hear from People Who Should Know But Will Remain Unnamed, it is an open secret that Friedman has been a Bernie supporter in fact for some time, although it has been convenient to leave this story in place that he is really a Hillary supporter. Maybe this "open secret" is actually false, but I would warn anybody against being too certain about, or repeating as definite fact, that he is a Hillary supporter.

    Barkley Rosser

    Replies
    1. Let me note that despite Russ's certainty that all economists are just ideological ranters, presumably thereby justifying his own conduct without admitting that is what he is doing, I can name some who have been willing to criticize those who are more or less in their own ideological camp.

      Thus, there have been plenty of people on the left who have criticized Piketty for either theoretical issues or less frequently on empirical grounds.

      And in the case of Friedman, while some of the "left" critics of Friedman have been people at least defending Hillary, if not supporting her (Krugman, the Romers), many of whom have been roundly denounced by Bernie Bros as Wall Street tools trying to get a job in her admin, some have been supporters of Bernie, with Peter Dorman on Econospeak among the most recent, although he largely complained that Friedman has not provided his model for anybody to really study, which is true, of course.

    2. "In that regard it is notable that while it was Peterson who brought it up we had the spectacle again of the much-debated Gerald Friedman study, with Russ just making a wisecrack, perhaps a witty and appropriate one. But why not one about some of the wacko studies about supposed effects of tax cuts and so on?"

      As in those Laugher Curves! Nicely put all around.

  4. Noah, what's "confidence intervals that arise from model uncertainty"?

    Replies
    1. Model uncertainty should widen the error bands of predictions of the effects of policy. If the world doesn't work the way you think, your prediction will be wrong; hence, predictions should allow for this if possible.

    2. Among those studying this have been Brock and Durlauf out of Wisconsin.

    3. I presume he means something like a situation where you have some prior distribution over the things that would be called models/approaches in a standard economics book.

      I would call the whole thing a model (hence my confusion) but I'm probably the one out of contact with the usual terminology.

    4. The reason I asked is because unpacking "uncertainty" is high on the agenda of methodology right now. Confidence intervals express one specific form of uncertainty resulting from data being derived from samples rather than the entire population (including future periods) to which we want to generalize. CI's can be critiqued on the arbitrariness of the 95% cutoff, but however they're drawn the idea is a good one.

      The problem, however, is that sampling uncertainty has been elevated in a lot of research methodology to uncertainty per se. There are many other sources: potential data error and model uncertainty are two biggies. Publication protocols are in trouble at the moment because they've allowed sampling to obscure the others.

      These are qualitatively different types of uncertainty. Frequentist methods have some justification for sampling, but what can they say about the other types of uncertainty? OK, sometimes we have detailed knowledge of potential measurement error and can construct a distribution, but usually not. I haven't seen B&D on their attempt to quantify model uncertainty, but I can't imagine how this can be done in a general way, especially since there are multiple levels and dimensions of model selection. If there's an insight I'm missing, it would be nice if someone (Barkley?) could convey it.

      I realize this is a tangent, and I should not abuse the comment thread. Too late for that... But this technical issue bears on the bigger question of how economists should present their work to the general public.

      The solution, IMO, is not to have some fancier type of error band around your results, but to make the contingent character of your work explicit: *if* you make these modeling assumptions, here's what you get. Of course, this is going to require more sophistication on the part of journalists and politicians, so it's not so easy. But at least we should try to get the researchers on board.

    5. If the world doesn't work the way you think, your prediction will be wrong ...

      Sure it will. But how are you going to quantify that with a confidence interval? You can put a distribution on your hyperparameters and integrate over that - in which case, I think you technically have a "credible" interval, not a confidence interval - but if the world doesn't work the way you think, it is your model that is wrong, not its parameters. How do you propose to integrate over the space of all models? Trying to quantify what you don't know how to quantify is the problem you were trying to solve in the first place!

    6. Peter D.,

      Not sure exactly what Noah has in mind, but my memory of the Brock-Durlauf stuff, done some years ago and pubbed in a big paper by Brookings, was to lay out some competing models and then run a bunch of bootstrapping tests on all of them, thereby in effect artificially creating a data set on which one can impose confidence bounds. Obviously these will be wider than for just one model, and obviously this approach can be criticized on various grounds, which I shall not go on about. But that is the sort of thing they have done, whatever Noah may have in mind in this regard.

    7. Thanks, Bark. That sounds like a worthwhile enterprise on the part of B&D, but it doesn't cope with the larger questions. (1) It's very labor intensive! You've got to set up all these models and run the procedure. It's not going to happen very often. (2) It's still going to be limited to a subset of potential models. Model selection is not just about choosing variables; it's also about modeling strategy, what equations to impose (in structural models), restrictions, selection of proxies, etc. I can't imagine how all of this can be concatenated.

    8. Yes on all counts, Peter, and we do not see it being done on any sort of systematic basis, although some of the central banks may have some of their junior wonks trudging through these kinds of exercises without it being publicized too much.

  5. There is a serious incentive problem with economists advising the public or policy makers with humility.

    There are, of course, the usual rewards of attention and credibility for sounding definite and confident, while humility only inspires greater confidence in listeners knowledgeable enough to distinguish reasonable humility and caution from mere wishy-washiness and reluctance to make hard decisions.

    The realities of politics make the situation even worse. An economic advisor who doesn't sound sure and certain in the media opens a politician up to attacks. The role of visible economic advisors is quite similar to that of a witch doctor: by confidently assuring the people that some incomprehensible ritual actions will dispel any curse, he offers hope and stops people from wondering if the curse is the chief's fault.

    Even private advice from a humble economist runs the risk of the details of that advice leaking out and the other side pouncing on the fact that "even this administration's own economists doubt the theory behind the president's actions." After all the people


    Saddest of all is that, from an electoral perspective, the leader who flips a coin to decide what economic interventions to boldly pursue probably does better than the careful leader who doesn't want to make the problem any worse and so waits on further information. Most economic downturns correct themselves relatively quickly, and any evidence that your intervention made the problem worse, not better, will be far too complex for the public, or the politician themselves, to digest.

    Perhaps in private, strictly off the record, extremely well-trusted economists are recruited by insightful leaders to offer tempered information, but I'm unsure.

  6. Noah sez:
    3. ...But it's very important to draw a distinction between macroeconomics and other fields here. The main difference isn't in the methods used (although there are some differences there too), it's in the type of data used to validate the models.

    It might also be in large part that ideological billionaires are funding a large part of the Austrian and monetarist camps, at places like GMU and UChicago. The weak data may be enabling this practice.

  7. A lot of this comes off sounding like Hayek's complaints about "scientism," something Roberts focuses on in his book about Adam Smith's Theory of Moral Sentiments.
    Roberts is a master at finding a "kinder, gentler" way of pushing the main elements of Koch brand Libertarianism.

  8. "Unlike most econ fields, macro relies mostly entirely on time-series and cross-country data"

    False.

    "In other areas of econ, there's much more data-driven consensus..."

    False. No more consensus there than in macro.

    "Macro is econ's glamour division, unfortunately, so it's important to remind people that the bulk of econ is in a very different place."

    b.s. Write about what you know.

    Replies
    1. You want me to go through a bunch of macro papers and highlight how many use time-series and cross-country data as opposed to other data types? Really? You know full well it's most of them. :-)

      As for the "glamour division", here is Chris House on the subject:
      https://orderstatistic.wordpress.com/2014/01/25/is-macro-giving-economics-a-bad-rap/

    2. Anonymous, 6:56 PM

      I come to Noahpinion for Noah's take on interesting topics. I stay for Stephen Williamson's trolling in the comments.

  9. BTW, while I criticized some elements of what Russ has to say and think that Noah is way too enthusiastic about this self-serving and somewhat hypocritical interview, I shall grant that some of it is on the money. Indeed, this business of having to make strong statements to get media attention is accurate and important. He ties that to the humility issue, which is partly correct, although again, I find him to be a weak spokesperson for humility among economists.

    Replies
    1. Todd Kreider, 10:37 PM

      Russ Roberts certainly looks humble when next to Stiglitz, Krugman, and Piketty.

  10. The CBO does now provide results from multiple models whenever they engage in "dynamic scoring" - they model macroeconomic feedback separately from their main analysis and then provide estimates with and without the feedback. See for example their latest report on the effects of repealing the ACA. https://www.cbo.gov/sites/default/files/114th-congress-2015-2016/reports/50252-Effects_of_ACA_Repeal.pdf

    They've also released alternate models with respect to politics: their baseline analysis has to be based on current law, but they release "alternate" baselines when Congress is likely to change the law as they go (e.g. repeatedly renewing a popular program that is technically set to expire but everyone thinks it'll be renewed).

    They MUST release a single "main" estimate because Congress has created rules for themselves like "You can only use this legislative procedure for a bill that doesn't increase the deficit," and then you need to pick one forecast to judge whether or not that rule is being broken. But -- between dynamic scoring and alternate baselines -- perhaps we're seeing a new willingness at the CBO to deviate from the One Model To Rule Them All.

  11. Anonymous, 8:26 PM

    I have listened to a lot of Russ Roberts's podcasts and even exchanged emails with him. Unfortunately, he always likes to bring up "incentives matter" (duh!), government is inefficient, free markets are the solution to everything, 2008 was the government's fault, climate change data is dubious (he can go on for hours poking holes in the analysis and seems to enjoy it, though recently he has been changing his tune a bit and reluctantly accepts that climate change might be a possibility), etc.

    What's incredible is that none of his topics or podcasts EVER talk about how to solve the problem from a practical point of view. The answer to every problem is get rid of the government or regulation. This is the exact reason why I enjoy Noah, whose approach is fresh - he learns from data and presents objective solutions, and sometimes approaches that can actually be implemented :).

  12. Anonymous, 3:58 PM

    I pretty much agree with Noah in his responses here.

    The effect of immigration on workers seems like it is dominating the posts recently, and there's good reason for that. In this case I'm not much for empiricism. I don't think the empirical results matter to the people that believe they are affected, and the insistence that we use the empirical results to Trump the feelings of the voting public is ill advised, I believe.

    Rather than make this a racial issue, let's just make it one about fairness and the law. I really don't think there are a lot of people in the US that are against legal immigration. I've yet to really meet a person that didn't believe in legal immigration. Perhaps I reflexively avoid such people...

    Anyway, I don't think it helps the situation by invalidating the feelings of the people in this regard with data. Illegal is illegal. It happens because we allow it to happen. It happens because everyone with power knows that their lives are actually bettered by illegal immigration.

    The solution is pretty simple: raise the minimum wage so that nobody feels the need to blame immigrants for their problems.
