Comments on Noahpinion: Bayesian vs. Frequentist: Is there any "there" there? (Noah Smith)

I agree with walt. When a Bayesian talks about a "real probability distribution" and "continued measurement," he/she IS a frequentist, at least a frequentist in my understanding.

One impression of mine is that Bayesians tend to be more aggressive than frequentists, and frequentists tend to talk in a humbler way.

This is understandable since, as the author of this blog (Smith) said in the post, the Bayesian approach is the one likely to be used when the data are not great. It is no surprise that a person who is trying to "get something from nothing" tends to be ambitious and aggressive.

But ultimately I guess there might be some differences in brain structure between frequentists and Bayesians. Anyone interested in testing this, using MRI for example?

— Fj_Du (2013-09-30 13:14)

One area where the "crappy data" issue becomes extremely important is pharmaceutical clinical trials. People tend to think that there are two possible outcomes of a trial: a) the medication was shown to work, or b) the medication was shown to be ineffective. In fact, there is a third possible outcome: c) the trial neither proves nor disproves the hypothesis that the drug works. In practice, outcome (c) is very common. For some indications, it is the most common outcome.

This leads to charges that pharma companies intentionally hide trials with negative results. They don't publish all their trials! But it turns out to be really hard to get a journal to accept a paper that basically says, "we ran this trial but didn't learn anything."

I forget the exact numbers, but for the trials used to get approval of Prozac, it was something like 2-3 trials with positive outcomes and 8-10 "failed" trials, i.e. trials which couldn't draw a conclusion one way or the other. This is common in psychiatric medicine. It's hard to consistently identify who has a condition, it's hard to determine whether the condition improved during the trial, and many patients get better on their own, with no treatment at all (at least temporarily).

— David (2013-02-03 07:06)
CA,

What you said about referees challenging your assumptions, asking "what about this?" and "what about that?", rings true to me. You have experienced it as part of what sounds like responsible peer review of scholarly articles. I have experienced exactly what you describe when presenting statistical analysis and recommendations for business and manufacturing work. If my results are contrary to precedent, I need to be prepared with examples of the different specifications I used, to establish robustness, or be willing to run this or that to convince others that my findings are valid. Subject-matter knowledge helps a lot in responding effectively. Yes, I realize that what I just said seems to justify suspicion of frequentists and validate a Bayesian approach! But that will remain a problem, whether Bayesian or frequentist, when applying quantitative analysis to human behavior, especially complex dynamic systems.

There seems to be an underlying mistrust of statistical analysis expressed in some of these comments, e.g. the Mark Twain quote, or even the supposedly greater honesty of Bayesians and their priors. Statisticians are not inherently deceitful, nor are we practitioners of pseudo-science. I've used probability models for estimating hazard rates and time to failure in mobile phone manufacturing (a Weibull distribution, I think). It works. Or rather, it is sufficiently effective to be useful in a very practical way.

Economic models are a different matter than cell phones or disk-drive reliability. Exogenous influences are important, and human behavior is difficult to quantify.

— Ellie K (2013-01-29 08:30)
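The Weibull reliability model mentioned above is straightforward to sketch. Here is a minimal illustration, assuming SciPy is available and using simulated failure times rather than real manufacturing data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated hours-to-failure for 200 components, drawn from a Weibull
# with shape 1.5 (an increasing hazard, i.e. wear-out behavior).
failures = rng.weibull(1.5, size=200) * 1000.0

# Fit a two-parameter Weibull by maximum likelihood (location fixed at 0).
shape, loc, scale = stats.weibull_min.fit(failures, floc=0)

# Hazard rate h(t) = f(t) / S(t); for a Weibull it rises with t when shape > 1.
t = np.array([100.0, 500.0, 1000.0])
hazard = stats.weibull_min.pdf(t, shape, loc, scale) / stats.weibull_min.sf(t, shape, loc, scale)

print(f"fitted shape = {shape:.2f}, scale = {scale:.0f} hours")
print("hazard at t = 100, 500, 1000 h:", np.round(hazard, 5))
```

A fitted shape above 1 signals wear-out (rising hazard); below 1, infant mortality — exactly the kind of practical distinction that drives maintenance and warranty decisions.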
To deal with infinities, Jaynes recommends starting with finite models and taking them to the limit. There is wisdom there. :)

— Min (2013-01-29 01:14)

Thanks for the post. As a statistician, I think it's nice to see these issues being discussed. However, I think a lot of what has been written, both in the post and in the comments, is based on a few misconceptions. I think Andrew Gelman's comment did a nice job (as usual) of addressing some of them. To me, his most important point, and the one that I would have raised had he not done so, is this:

"...non-Bayesian work in areas such as wavelets, lasso, etc., are full of regularization ideas that are central to Bayes as well. Or, consider work in multiple comparisons, a problem that Bayesians attack using hierarchical models. And non-Bayesians use the false discovery rate, which has many similarities to the Bayesian approach (as has been noted by Efron and others)."

The idea of "shrinkage" or "borrowing strength" is so pervasive in statistics (at least among those who know what they are doing) that it frequently blurs practical distinctions between Bayesian and non-Bayesian analyses. A key compromise is empirical Bayes, a favorite strategy of some of our most famous luminaries. Commenter Min mentioned a "Hegelian synthesis"; empirical Bayes is one such synthesis, and reference priors are another.

Which brings me to another important point.
In the post and in the comments, it is assumed that priors are necessarily invented by the analyst, and implied that rigor in this regard is impossible. This is completely wrong. There is a long literature on "reference" priors, which are meant to be default choices when the analyst is unwilling to use subjective priors. An overlapping idea is "non-informative" priors, which are non-informative in a precise and mathematically well-defined sense (actually several different senses, depending on the details).

Also, I want to note that Bayes procedures can be proven superior to standard frequentist procedures, even when evaluated using frequentist criteria. This is related to shrinkage, empirical Bayes, and all the rest. Look up "admissibility" or "James-Stein" on Wikipedia to get a sense of why.

Finally, the statement "If Bayesian inference was clearly and obviously better, frequentist inference would be a thing of the past" misses a lot of historical context. Nobody knew how to fit non-trivial Bayesian models until 1990 brought us the Gibbs sampler. This is not a matter of computing power, as some have suggested; the issue was more fundamental.

The great Brad Efron wrote a piece called "Why Isn't Everyone a Bayesian?" back in 1986. Despite not being a Bayesian, he doesn't come up with a particularly compelling answer to his own question (http://www.stat.duke.edu/courses/Spring08/sta122/Handouts/EfronWhyEveryone.pdf). One last bit of recommended reading is a piece by Bayarri and Berger (http://www.isds.duke.edu/~berger/papers/interplay.pdf), who take another stab at this question.

— Unknown (2013-01-28 20:11)

In practice, everyone's a statistical pragmatist nowadays anyway. See this paper by Robert Kass: http://arxiv.org/abs/1106.2895

— Anonymous (2013-01-28 18:19)

A minor technical point on the discussion of Shalizi: infinity can actually make things worse for Bayesians, particularly in infinite-dimensional spaces. It is an old and well-known result of Diaconis and Freedman that if the support is discontinuous and one is using an infinite-sided die, one may not get convergence. The true answer may be 0.5, but Bayesian methods might converge on an oscillation between 0.25 and 0.75, for example.

However, it is true that this depends on having a prior that is "too far off." If one starts out within the continuous portion of the support that contains 0.5, one will converge.

— rosserjb@jmu.edu (2013-01-28 15:31)

Whenever I've done Bayesian estimation of macro models (using Dynare/IRIS or whatever), the estimates hug the priors pretty tightly, so it's really not that different from calibration.

— Anonymous (2013-01-28 15:02)

Forgot to mention that, however, referees do "demand" more robustness tests when the results go against their priors (e.g. against what theory predicts). So I am somewhat sympathetic to what you are saying.

— CA (2013-01-28 13:10)
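The admissibility point a few comments up — that shrinkage (James-Stein) estimators beat the maximum-likelihood estimator on frequentist risk when three or more means are estimated at once — can be checked with a short simulation. A sketch with arbitrary made-up means, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# p true means, each observed once with unit-variance Gaussian noise.
p = 10
theta = np.linspace(-2.0, 2.0, p)  # arbitrary true means
reps = 20_000
x = theta + rng.standard_normal((reps, p))

# MLE: the observations themselves. James-Stein: shrink toward zero.
norms2 = np.sum(x ** 2, axis=1, keepdims=True)
js = (1.0 - (p - 2) / norms2) * x

# Average total squared error over many repetitions (frequentist risk).
risk_mle = np.mean(np.sum((x - theta) ** 2, axis=1))
risk_js = np.mean(np.sum((js - theta) ** 2, axis=1))

print(f"risk of MLE: {risk_mle:.2f} (theory says exactly p = {p})")
print(f"risk of James-Stein: {risk_js:.2f} (smaller for any theta when p >= 3)")
```

The dominance holds for every choice of theta, which is what makes the MLE inadmissible here; shrinking toward zero acts like an implicit prior.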
Scanned quickly to see if someone had commented on your picture at the beginning. Ha ha, "In The Beginning"! My reptile brain rang up Einstein immediately and his comment that God does not roll dice. I don't know jack about statistics. Is God a Bayesian or a frequentist?

— Anonymous (2013-01-28 12:58)

"How long did these authors work to make their empirical model turn out just right so they could publish a great finding?"

David, yes, I agree. I too am guilty of trying to find whether any specification would work. BUT when I write the paper I present all reasonable specifications based on theory. And if the results change when, say, I drop a variable, I try to explain why (e.g. that I have only a few observations and the variable I dropped is correlated with another). In other words, I present the information and let the reader decide. Yet despite trying to be thorough, I still usually get suggestions from referees asking about this or that. If I add a lag, the referee wants to know why. If I detrend, the referee wants to know why I used this filter and not a different one. It is near impossible to publish in a decent journal if you have tried only a couple of specifications (even though you may present only a couple if your results are robust).

— CA (2013-01-28 12:48)

CA, you are correct that they should come from a theory, but in practice (I speak from experience) it is never that clean. Here is another way of seeing this. There are some empirical studies whose results fit a nice story but upon closer inspection are not robust (e.g. throw in an extra lag or drop one variable and the results fundamentally change). How long did these authors work to make their empirical model turn out just right so they could publish a great finding? These authors are effectively finding, through trial and error, empirical results that fit their own "priors." At least Bayesians are up front about what they are doing. This is what appeals to me about the Bayesian approach.

I don't know enough to answer your second question, but my sense is that computational costs are now very low for doing Bayesian forecasts. For example, though I generally work with frequentist time-series methods, I can easily set up a Bayesian vector autoregression using RATS software.

— David Beckworth (2013-01-28 12:20)

"Frequentists effectively do the same thing as Bayesians, but pretend otherwise. They build empirical models and throw variables in and out based on some implicit prior which never gets reported in their write-ups."

This is what I understood to be one of Silver's main points: be up front about your "priors," assumptions, and context. Kind of like when bloggers and/or journalists say "full disclosure."
The history is that the frequentist tradition wanted to be more mathematical and objective, whereas the Bayesians argued that one always has biases and it's better to be conscious of what they are.

— Peter (2013-01-28 11:36)

David, my understanding is that the variables frequentists throw in come, or at least should come, from a consistent theoretical model. Do Bayesians provide a justification for their priors? Also, in your opinion, are the forecasts of the Bayesian models enough better to justify the computational costs?

— CA (2013-01-28 11:34)

Noah, probably repeating some other comments, but here goes.

First, Bayesian methods have been around a long time but have only recently taken off because computing costs have come down. It would be interesting to see whether there are now proportionally more papers using Bayesian methods since computing costs fell, and whether that trend is increasing or stabilizing.

Second, what Matthew Martin said above. Frequentists effectively do the same thing as Bayesians, but pretend otherwise. They build empirical models and throw variables in and out based on some implicit prior which never gets reported in their write-ups.

Third, my understanding is that Bayesian models generally provide better forecasts than frequentist models (not that either are spectacular).

— David Beckworth (2013-01-28 10:47)

Statistics is not science, yet scientists use this unscientific technique to test their theories as part of the scientific method. That makes no sense, but hey, if Mark Twain said so, it must be true!

— Stupidity-catcher (2013-01-28 10:46)

Noah, statistics is not science; statistics are lies, the worst kind of lies (Mark Twain: lies, damned lies, and statistics).

Someone above, praising the Bayesian view, forgot to read the 2007 prospectus, which actually reads:

"2 Recent Successes

Macro policy modeling

• The ECB and the New York Fed have research groups working on models using a Bayesian approach and meant to integrate into the regular policy cycle. Other central banks and academic researchers are also working on models using these methods."

http://sims.princeton.edu/yftp/EmetSoc607/AppliedBayes.pdf, page 6

Thank you very much, but that track record says it all about statistics. No thanks.

From the entire Western experience, statistics show only one thing about statistics: that statisticians lie, always selecting what they count, blah, blah, blah, and how they analyze it, so as to come up with the conclusion they were determined to reach.

— Anonymous (2013-01-28 09:54)

I liked Nate Silver's book. An unexpectedly good read.
I received the book as a gift and didn't buy it myself, and I expected it to be a dry read, but it wasn't at all.

I think the book has the potential for broad appeal, and when you consider that books revolving around topics like statistical analysis and Bayesian inference are usually pompous, inaccessible, and dull beyond belief, I think Nate should be commended for writing something that makes these concepts accessible to a wide audience.

— Lulz4l1f3 (2013-01-28 08:49)

The statement that frequentist statistics (maximum likelihood) is the same as Bayesian inference with a flat prior is true only in specific textbook examples, and only if the supposed Bayesian reports only the posterior mean.

A significant difference between Bayesian and frequentist statistics is their conception of the state of knowledge once the data are in. The Bayesian has a whole posterior distribution, which describes uncertainties as well as means.

— Anonymous (2013-01-28 08:48)

Min is right. Bayesian probability has been around formally for at least 350 years, and the philosophical idea since before the days of Aristotle.

I can tell you from experience that Bayesian probability is way more important in the areas of quality control and environmental-impact measurement. Noah touches on the reason: you can't assume your process is unchanging, or that new pollutants haven't entered the environment. You have to assume they can change and eventually will. Essentially, every sample out of spec has to be treated as evidence of a changed process.

— Cameron Hoppe (2013-01-28 08:41)

(Note: copy and paste with an iPad doesn't work.)

Noah, the example Cosma gives isn't a failing of Bayesian techniques, but rather of the fact that a bad model gives bad results, no matter how much data one has.

— Barry (2013-01-28 07:17)

Noah, please google "uninformative prior".

— Barry (2013-01-28 07:11)

"Basically, because Bayesian inference has been around for a while - several decades, in fact"

How about centuries? ;)

The frequentist view was a reaction against the Bayesian view, which came to be perceived as subjective. What we are seeing now is a Bayesian revival. Since this is an economics blog, let me highly recommend Keynes's book "A Treatise on Probability". Keynes was not a mainstream Bayesian, but he grappled with the problems of Bayesianism. Because the frequentist view was so dominant for much of the 20th century, there is a disconnect between modern Bayesianism and earlier writers such as Keynes. From what I have seen in recent discussions, it seems that modern Bayesians have gone back to simple prior distributions, something that both Keynes and I. J. Good rejected, in different ways. Perhaps we will see some Hegelian synthesis.
(Moi, I think that we will come to realize that neither Bayesian nor Fisherian statistics can deliver what they promise.)

— Min (2013-01-28 06:14)

I really think people should take a look at this text by Chris Sims: http://sims.princeton.edu/yftp/EmetSoc607/AppliedBayes.pdf

Also, the open letter by Kruschke is worth your while: http://www.indiana.edu/~kruschke/AnOpenLetter.htm

— Douglas Araujo (2013-01-28 05:58)

I think a key missing element was discovered by Mandelbrot in his research on fractals, which I will get to. But first: what if the events cannot be precisely measured? In that case a frequentist interpretation of "proof" is in principle impossible, and we then become Bayesian, using subjective data and whatever additional data we deem relevant to form a "prior." In a review of Mandelbrot's The Misbehavior of Markets, Taleb offers an interesting formulation of something he says "seems to have fooled behavioral finance researchers." He writes: "A simple implication of the confusion about risk measurement applies to the research-papers-and-tenure-generating equity premium puzzle. It seems to have fooled economists on both sides of the fence (both neoclassical and behavioral finance researchers). They wonder why stocks, adjusted for risks, yield so much more than bonds and come up with theories to 'explain' such anomalies. Yet the risk-adjustment can be faulty: take away the Gaussian assumption and the puzzle disappears. Ironically, this simple idea makes a greater contribution to your financial welfare than volumes of self-canceling trading advice." The PDF of the review is here: http://www.fooledbyrandomness.com/mandelbrotandhudson.pdf

So my question is: should we move beyond Bayesian and frequentist when looking at probabilities, and look at fractals? Otherwise we omit what Mandelbrot called "roughness." In other words, research focuses too much on smoothness, bell curves, and the golden mean; if we look at roughness in far more detail, will we be able to provide greater insight into the matter at hand?

— Colin Lewis (2013-01-28 05:19)
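Taleb's "take away the Gaussian assumption" point lends itself to a quick numerical check. A sketch assuming SciPy, with a Student-t on 3 degrees of freedom as a generic fat-tailed stand-in (not a claim about actual return distributions):

```python
from scipy import stats

# Probability of a 5-standard-deviation daily move under two models
# calibrated to the same standard deviation.
sigma = 1.0
move = 5.0 * sigma

p_gauss = stats.norm.sf(move, scale=sigma)

# Student-t with df = 3; its sd is scale * sqrt(df / (df - 2)),
# so match sigma by setting scale = sigma / sqrt(df / (df - 2)).
df = 3
scale_t = sigma / (df / (df - 2)) ** 0.5
p_t = stats.t.sf(move, df, scale=scale_t)

print(f"P(move > 5 sigma): Gaussian {p_gauss:.2e}, t(3) {p_t:.2e}")
print(f"fat tails make the event roughly {p_t / p_gauss:.0f}x more likely")
```

Risk adjustments built on the Gaussian number treat such moves as near-impossible; under the fat-tailed model they are merely rare — the mechanism behind the "puzzle disappears" claim.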