Comments on Noahpinion: "Why I like Frequentism" (Noah Smith)

Following the common interpretation of the Bayes factor (http://en.wikipedia.org/wiki/Bayes_factor#Interpretation), we can easily get to the same sort of three-valued logic: if K < 1:3, there is substantial evidence (substantial in the sense of having some existence) against the hypothesis; if K > 3:1, there is substantial evidence for the hypothesis; and if K is in between, there is no substantial evidence either way.

The real problem with frequentist hypothesis testing is that the underlying logic of the test is subtle and widely misunderstood. Bayesian methods are conceptually easy to understand (and hence hard to misunderstand) but hard to apply, whereas frequentist methods are easy to apply but conceptually hard to understand (and hence easy to misinterpret).

It is easy to find examples where the test is not biased in favour of H0, especially where someone is arguing in favour of H0 (e.g. "no significant global warming since 1998"). Most people don't understand that a lack of a statistically significant trend in global temperatures does not imply that there has been no warming (or even that warming has not continued at the same rate).
-- Dikran Marsupial, 2014-07-30
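[The three-valued reading of the Bayes factor described in the comment above can be sketched in a few lines. This is a minimal sketch using the conventional "substantial" cutoffs of 3:1 and 1:3 from the interpretation scale linked there; the function name is invented for illustration.]

```python
def interpret_bayes_factor(K):
    """Three-valued verdict from a Bayes factor K = P(data | H1) / P(data | H0).

    Thresholds follow the conventional "substantial" cutoffs (3:1 and 1:3)
    from the interpretation scale linked in the comment above.
    """
    if K > 3:
        return "substantial evidence for H1"
    if K < 1 / 3:
        return "substantial evidence against H1"
    return "no substantial evidence either way"
```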
E.T. Jaynes, "Probability Theory: The Logic of Science" is surely the most entertaining way to learn Bayesian methods.

I prefer using the Bayes factor as a way of comparing two hypotheses, with no bias toward either, given the data. One then uses AIC or even BIC to see whether the difference between the two hypotheses is large enough. There are again three possible outcomes: H0 is better; H1 is better; or the two have equivalent explanatory power.
-- David B. Benson, 2014-07-29

Noah has had this discussion with Gelman before (http://andrewgelman.com/2013/01/28/economists-argue-about-bayes/), and Eli riffed off it with Socrates (http://rabett.blogspot.co.uk/2013/02/on-priors-bayesians-and-frequentists.html).

IEHO you need to have a pretty good idea about the answer to find a useful prior.
-- EliRabett, 2014-07-29

Come on, I expected a response or two. I read this blog for the comments (love the anonymity and smart readers). Am I wrong?
-- RaStudent, 2014-07-26

That's a useful point, Noah. I look at it that I'm more likely to take seriously people who find results that contradict their expectations. Most of us tend to (consciously or not) weight things that support what we like to believe. Some of us make a real effort to stay neutral, but few of us actually raise the bar on what we want to see.
That way the normal human tendency to see what we prefer is often enough to lead us into error. It works the other way 'round, too, of course: many of us simply cannot see what we don't wish to. Supposedly, that's one thing statistics was invented to deal with. Being married to a statistician, however, I'm well aware that the human need to bend evidence to support a desired idea is perfectly able to handle statistics. That's why you need to see the raw data whenever you can. After all: trust, but verify...
-- JohnR, 2014-07-25

"It wasn't literally correct, but it was impressionistically correct."

That may be one of the best descriptions of a lot of economic work that I've ever seen...
-- JohnR, 2014-07-25

There are more practical issues pointing toward Bayesian or frequentist approaches. Two negatives for frequentism:

1. Frequentist hypothesis testing isn't robust against modeling errors. You ask: "What is the probability of getting this data if H0 is true?" The answer, if the model is at all complex, is "almost zero". H0 is not a single number (mean zero) but a complex model.

2.
Frequentism gives little insight into error bars.

Pro-Bayesian: sampling the posterior gives deep insight into the remaining uncertainty, given the data.

Anti-Bayesian: the prior is completely made up -- "replacing ignorance with fiction".
-- Jonathan Goodman, 2014-07-25

"Now, of course, the Frequentist social conventions are weak, inadequate defenses against subjectivism and noise. They have drawbacks, like discouraging the reporting of negative results. And they are subject to being gamed by unscrupulous researchers. But at least they are something."

The last sentence needs tweaking:

"Say what you will about the social conventions of frequentism, dude, at least it's an ethos."

http://www.youtube.com/watch?v=J41iFYO0NQA
-- marcel, 2014-07-25

IMHO the Bayesian prior serves to quantify the researcher's bias, whereas the frequentist may go on acting as if they have none, all the while it lingers under the surface, undetected by the formal calculus. That Bayesianism is subjective is a virtue, because it acknowledges the way most people actually form judgements and pulls all the subjectivity out into the open.
-- Anonymous, 2014-07-25
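[The point in the comment above -- that the prior states the researcher's bias openly, where the formal calculus can see it -- can be illustrated with a minimal conjugate Beta-Binomial sketch. The prior parameters and variable names here are invented for illustration: two researchers declare opposite biases as priors, and the data visibly pulls both toward the same answer.]

```python
def posterior_mean(a, b, heads, n):
    """Posterior mean of a coin's heads-probability under a Beta(a, b) prior
    after observing `heads` heads in `n` flips (conjugate update:
    posterior is Beta(a + heads, b + n - heads))."""
    return (a + heads) / (a + b + n)

# Two openly stated biases: prior means 0.1 and 0.9 respectively.
skeptic = posterior_mean(1, 9, heads=60, n=100)
optimist = posterior_mean(9, 1, heads=60, n=100)

# After 100 flips, both posterior means sit near the empirical 0.6:
# the declared biases are visible in the calculation but mostly washed out.
```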
Frequentists' greatest triumph was when they proved that all swans are white.
-- Anonymous, 2014-07-25

You may underestimate the problem of publication bias. If you believe the meta-analyses of minimum-wage research starting with Card & Krueger's AER article, virtually all the findings of major job loss (3% per 10% minimum-wage increase) are GIGO. Another econ area where publication bias seems serious is the spillover effects from foreign direct investment.
-- Kenneth Thomas, 2014-07-25

Thanks!
-- Tom Brown, 2014-07-25

Fisher did lots of statistics that are associated with frequentist methods, but his writings were somewhat ambiguous as to what his actual philosophical position on statistical method was. I would have chosen Neyman and/or Pearson. In any case it's not a terrible choice. I even liked your picture choice for the Star Trek brain worms blog. It wasn't literally correct, but it was impressionistically correct.
-- Darf Ferrara, 2014-07-24

It was here:
http://noahpinionblog.blogspot.com/2013/04/the-reason-macroeconomics-doesnt-work.html
-- Jason Smith, 2014-07-24

BTW, for what it's worth, my very amateurish impression is that the current score (which you can take to be over blog writers, since that's the only way I encounter economists) is about 1. I have very little sense that this "kind of integrity" has been adopted by economists. I never see blog writer X do multiple posts on all the things that might be wrong with interpreting the evidence as supporting his hypothesis. Quite the opposite, actually... no "leaning over backwards" whatsoever. Lol. Of course I'm sure there's much I'm unaware of too.
-- Tom Brown, 2014-07-24

Noah, if I rewrite your Feynman quote above slightly:

"It's a kind of integrity amongst economists, a principle of economic thought that corresponds to a kind of utter honesty -- a kind of leaning over backwards. For example, if you're evaluating a hypothesis, you should report everything that you think might make it invalid -- not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you've eliminated by some other set of results, and how they worked -- to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them.
You must do the best you can -- if you know anything at all wrong, or possibly wrong -- to explain it."

On a scale of 0 to 10, how well can those modified paragraphs actually be applied to economists? (Give your guess for an average score over whatever group you like: economists who write blogs, all economists, whatever.) Now, ideally, over this same group, where should that number be, in your opinion? Offhand it seems to me like it should be 10, but perhaps you've got a good reason why it shouldn't.
-- Tom Brown, 2014-07-24

Oops, I responded below rather than here.
-- Student, 2014-07-24

I should obviously state, then, that most of the problems I solve on a daily basis are much more easily solved correctly with Bayesian analysis. There are certainly scenarios where problems are trivial in either method but very difficult in the other. But the misuse of frequentist p-values is rampant; most studies based on p-values of 5% are probably wrong (http://arxiv.org/abs/1407.5296).

I think there is a place and a time for both methods. One should solve each problem with the simplest method possible, but not simpler (to paraphrase someone else's paraphrasing of Einstein).
A significant fraction of frequentist analyses out there in academia today are just plain wrong. That's not to say that their conclusions are wrong or that the effect isn't real; it's just to say that most of the time the authors have no real understanding of their error bars, which causes them to underestimate their p-values and overestimate their significance.

Gelman's stats package Stan is gaining popularity because of its simplicity for the user in building a proper Bayesian model. If someone writes a package that allows a user to properly build an arbitrary frequentist model, it will also be widely used.
-- Anonymous, 2014-07-24

Thanks Noah: this is all new to me, and I find it very interesting. BTW, yesterday Jason Smith referenced you when he wrote "macro data is uninformative" (http://informationtransfereconomics.blogspot.com/2014/07/is-monetary-policy-best.html?showComment=1406165233306#c6485428165505949258). Can you direct me to where you may have discussed that?
-- Tom Brown, 2014-07-24

First: above, I meant "technically, frequentist hypothesis tests do NOT test hypotheses, ..."

I would say the most widely used are Bayes factors, the deviance information criterion, and probability intervals based on the marginal posterior. However, you could conduct hypothesis testing just like frequentists do, if you wanted, by evaluating the posterior a certain way. The only difference would be that frequentists treat the probabilities as fixed long-run constants, while Bayesians treat the probabilities as parameters conditional on the data.
That aside, you could use non-informative priors, compute pseudo p-values, and conduct hypothesis testing to conform to the frequentist conventions if you really wanted to. I just don't see what's so great about doing that. I mean, significance at the 95% level is OK, but at 94% it's a non-finding? I don't see much benefit in that.
-- Student, 2014-07-24

This is just as broad as it is long; there are also problems that are trivial to solve from a frequentist approach but for which it is extremely difficult to construct well-behaved priors. See Stone's paradox, for example.
-- Phil Koop, 2014-07-24

Personally, I find the later essay by Gelman and Shalizi more illuminating (http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf).

This essay casts considerable doubt on the claim that Gelman does not in fact identify with "the other side". I think it would be more accurate to say that Gelman and Shalizi advocate Bayesian models to be employed by a "hypothetico-deductive" (aka "frequentist") modeler. In their own words: "the value of Bayesian inference as an approach for obtaining statistical methods with good frequency properties."

You might come to the same conclusion by reading the exchange between Mayo and Gelman, but reading Mayo always gives me a headache.
-- Phil Koop, 2014-07-24

Why I don't like frequentism: in almost all scientific applications, frequentist methods are harder to do correctly than Bayesian analyses.
In most cases, this leads to systematically incorrect results. An extremely simple example is fitting a straight line to data where the spread (noise) in the x-values is comparable to the width of the real linear signal. In a frequentist approach you cannot get the right answer without marginalizing over the noise distribution of the x-values. In fact you won't even be close; the slope will be wrong by 20% or more unless you do an extremely difficult integral over the distribution that the x-values are drawn from. And this integral is extremely difficult even if you assume only a simple Gaussian prior. If you throw this problem into a Bayesian analysis, it is incredibly simple to marginalize over any arbitrary prior and get the right answer for the slope. I don't like frequentism because most people do it wrong, and doing it right is harder than doing a Bayesian analysis.
-- Anonymous, 2014-07-24

If you're going to write a post praising frequentism, Ronald Fisher is as good an illustration as any. Works for me.
-- Karl Dickman, 2014-07-24

What do you see as being the most widely used standards for hypothesis testing in Bayesian inference, just out of curiosity?
-- Noah Smith, 2014-07-24
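[The straight-line example in the "Why I don't like frequentism" comment above can be sketched numerically. This is a minimal simulation, not the commenter's analysis: when the x-noise variance equals the signal variance, the naive least-squares slope is attenuated by about half -- even worse than the "20% or more" mentioned. The fix shown is the classical errors-in-variables attenuation correction, standing in for the full marginalization over the x distribution, and it assumes the x-noise variance is known (here, 1.0).]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_slope = 2.0

x_true = rng.normal(0.0, 1.0, n)           # real covariate
x_obs = x_true + rng.normal(0.0, 1.0, n)   # measured with noise as wide as the signal
y = true_slope * x_true + rng.normal(0.0, 0.1, n)

# Naive least squares on the observed x: the slope is attenuated by roughly
# var(x_true) / (var(x_true) + var(x_noise)) = 1/2 here.
naive_slope = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)

# Accounting for the known x-noise variance (the classical errors-in-variables
# correction) recovers a slope close to the true value of 2.0.
corrected_slope = np.cov(x_obs, y)[0, 1] / (np.var(x_obs, ddof=1) - 1.0)
```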