[J]udging by the tendency of those writing economic papers to follow the latest fashion, a “herd” would be [the] best [collective noun to describe economists]. This year the hot technique is machine learning, using big data...
Economists are prone to methodological crazes...[N]ew methods also bring new dangers; rather than pushing economics forward, crazes can lead it astray, especially in their infancy...
A paper by Angus Deaton, a Nobel laureate and expert data digger, and Nancy Cartwright, an economist at Durham University, argues that randomised control trials, a current darling of the discipline, enjoy misplaced enthusiasm...
Machine learning is still new enough for the backlash to be largely restricted to academic eye-rolling. As its main piece of evidence for the faddishness of economics, the article presents the following graph:
To me, this graph (which is just for NBER working papers) shows the opposite of what the article claims. Looking at the chart, I see a bunch of more-or-less monotonically increasing lines. Remember that the y-axis here is percent of total papers, so if these techniques are fads, we'd expect these lines to mean-revert. Instead, almost all the lines just go up and up for 15 to 30 years. To me, that says most of these things are not overhyped fads - at least, not yet.
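The fad-vs-trend distinction above can be made concrete with a toy sketch (the data here are made up for illustration, not taken from the NBER chart): a "fad" series rises and then falls back well below its peak, while a "trend" series keeps climbing. One crude diagnostic is simply whether the series ends far below its maximum.

```python
# Hypothetical share-of-papers series, in percent (illustrative only).
fad   = [1, 3, 6, 9, 7, 5, 3, 2]    # rises, then mean-reverts
trend = [1, 2, 3, 5, 6, 8, 9, 10]   # more-or-less monotone increase

def reverted(series, tolerance=0.5):
    """Crude mean-reversion check: does the series end well below its peak?"""
    return series[-1] < tolerance * max(series)

print(reverted(fad))    # the fad ends far below its peak -> True
print(reverted(trend))  # the trend ends at its peak -> False
```

By this (admittedly rough) criterion, nearly every line on the Economist's chart looks like the second series, not the first.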
There are two possible exceptions. Lab experiments saw a brief downturn for a few years starting around 2002, though they soon resumed their upward climb and are now well above their 2001 peak. DSGE models have been declining slowly since 2010, though they are still strongly up over the last decade.
Given the seeming non-faddishness of the lines on this chart, a better hypothesis would seem to be that these new techniques are driven by new technology. The internet and computerization have made it much easier to collect, transfer, and analyze data. Processing power and software packages like Dynare have made it much easier to numerically solve DSGE models. These are factors that the Economist article does not consider.
If new technology, not academic herd behavior, is responsible for most of the methodological trends of the last 30 years, it implies that the changes are here to stay. New technology doesn't go away (unless you live in an RBC model, which we don't). It's possible that the boom in empirical methods in general is working through a backlog of old theories that were not testable until recently, and that the empirical wave will subside once that task is complete. But that's very different from empirical techniques being fads.
(I do think there is a possibility that DSGE is somewhat of a fad, and that the decline in the last 5 years is a new trend instead of a blip. This is partly because of definitions. OLG models are dynamic general equilibrium models, and many are stochastic, but they aren't called "DSGE". But I also think DSGE might decline because theory in general is declining.)
In any case, the Economist article does not marshal any strong arguments that machine learning has been overdone. Its only actual evidence comes from the book Weapons of Math Destruction. That book is about how algorithmic decision-making can have unintended, morally dubious consequences for society. It has little to do with the question of whether machine learning techniques are useful for econometrics. The book itself is important and well-written, but the Economist article's reference to it seems random and out of place.
As for RCTs, the Economist's argument against them comes entirely from the famous paper by Angus Deaton and Nancy Cartwright. It will be interesting to see whether this argument eventually stems the tide of RCT usage. But I highly doubt that RCTs will go away any time soon, since for many questions there is simply no other technique in existence that can provide credible answers. RCTs have, importantly, never gone away in medicine.
So I don't think the Economist article gives us much reason to believe that machine learning and RCTs are faddish. Yes, it's true that economists (like everyone) don't generally use new tools optimally when they first come out, and learn better ways to use them as time goes on. Yes, it's true that methodologies can influence which questions get asked (the "streetlight" problem), and that open-minded economists should try to break out of the mental boxes their methodologies create. But it's not yet appropriate to conclude that new empirical techniques represent fleeting fads, as opposed to real progress.