Tuesday, June 14, 2016

The pool player analogy is silly

In a lot of debates about economic methodology, someone will bring up Milton Friedman's "pool player" analogy. The pool player analogy was part of Milton Friedman's rationale for modeling the behavior of economic agents (consumers, firms, etc.) as the optimization of some objective function. Unfortunately, the analogy is A) not that good in the first place, and B) frequently misapplied to make excuses for models that don't match data.

Here's the original analogy:
Consider the problem of predicting the shots made by an expert billiard player. It seems not at all unreasonable that excellent predictions would be yielded by the hypothesis that the billiard player made his shots as if he knew the complicated mathematical formulas that would give the optimum directions of travel, could estimate accurately by eye the angles, etc., describing the location of the balls, could make lightning calculations from the formulas, and could then make the balls travel in the direction indicated by the formulas. Our confidence in this hypothesis is not based on the belief that billiard players, even expert ones, can or do go through the process described; it derives rather from the belief that, unless in some way or other they were capable of reaching essentially the same result, they would not in fact be expert billiard players.  
It is only a short step from these examples to the economic hypothesis that under a wide range of circumstances individual firms behave as if they were seeking rationally to maximize their expected returns (generally if misleadingly called “profits”) and had full knowledge of the data needed to succeed in this attempt; as if, that is, they knew the relevant cost and demand functions, calculated marginal cost and marginal revenue from all actions open to them, and pushed each line of action to the point at which the relevant marginal cost and marginal revenue were equal. Now, of course, businessmen do not actually and literally solve the system of simultaneous equations in terms of which the mathematical economist finds it convenient to express this hypothesis, any more than leaves or billiard players explicitly go through complicated mathematical calculations or falling bodies decide to create a vacuum. The billiard player, if asked how he decides where to hit the ball, may say that he “just figures it out” but then also rubs a rabbit’s foot just to make sure; and the businessman may well say that he prices at average cost, with of course some minor deviations when the market makes it necessary. The one statement is about as helpful as the other, and neither is a relevant test of the associated hypothesis.
Actually, I've always thought that this is kind of a bad analogy, even if it's used the way Friedman intended. Using physics equations to explain pool is either too much work, or not enough.

Suppose the pool player is so perfect that he makes all his shots. In that case, using physics equations to predict what he does is a pointless waste of time and effort. All you need is a map of the pockets. Now you know where the balls go. No equations required! Actually, even that's too much...since in most pool games it doesn't matter which balls go in which pockets, you don't even need a map, you just need to know one fact: he gets them all in. It's a trivial optimization problem.

But if really good pool players made 100% of their shots, there wouldn't be pool tournaments. It would be no fun, because whoever went first would always win. But in fact, there are pool tournaments. So expert pool players do, in fact, miss. They don't quite optimize. So if you want to predict which pool player wins a tournament, or why they miss a shot, you need more than just a simple balls-in-pockets optimization model. And you probably need more than physics - you could use psychology to predict strategic mistakes, biology to predict how arms and hands slightly wobble, and complex physics to predict how small random non-homogeneities in the table and air will cause random deviations from an intended path. 
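To get a feel for this, here's a toy sketch (my own illustration, with made-up numbers for pocket width and aiming wobble): even a small amount of angular noise in the stroke makes the miss probability climb quickly with shot distance.

```python
import math

def miss_probability(distance_m, wobble_sigma_deg, pocket_half_width_m=0.05):
    """P(miss) for a straight shot: the ball misses when the angular aiming
    error exceeds the pocket's angular half-width as seen from the cue ball.
    Assumes zero-mean Gaussian aiming error (all numbers are made up)."""
    tolerance_rad = math.atan(pocket_half_width_m / distance_m)
    sigma_rad = math.radians(wobble_sigma_deg)
    # P(|error| > tolerance) for a zero-mean Gaussian with std dev sigma_rad
    return 1.0 - math.erf(tolerance_rad / (sigma_rad * math.sqrt(2)))

for d in (0.5, 1.0, 2.0):
    print(f"{d:.1f} m shot, 1.5 deg wobble: P(miss) = {miss_probability(d, 1.5):.3f}")
```

Even this crude model predicts that experts sink short shots almost surely but still miss a noticeable fraction of long ones - which is all you need to explain why tournaments aren't decided by whoever breaks first.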

The point is, if you use an optimization model to represent the behavior of someone who doesn't actually optimize, you're going to get incorrect results.

Of course, the pool player analogy wasn't Friedman's whole argument - the next paragraph is critical:
Confidence in the maximization-of-returns hypothesis is justified by evidence of a very different character. This evidence is in part similar to that adduced on behalf of the billiard-player hypothesis - unless the behavior of businessmen in some way or other approximated behavior consistent with the maximization of returns, it seems unlikely that they would remain in business for long. Let the apparent immediate determinant of business behavior be anything at all - habitual reaction, random chance, or whatnot. Whenever this determinant happens to lead to behavior consistent with rational and informed maximization of returns, the business will prosper and acquire resources with which to expand; whenever it does not, the business will tend to lose resources and can be kept in existence only by the addition of resources from outside. The process of “natural selection” thus helps to validate the hypothesis - or, rather, given natural selection, acceptance of the hypothesis can be based largely on the judgment that it summarizes appropriately the conditions for survival. 
That turns out to just be wrong. There are plenty of theoretical ways that non-profit-maximizing agents can stay around forever. Also, there are always new people and new companies being born and entering the system - there's a sucker born every minute, so as long as suckers drop out at some finite rate, there's some homeostatic equilibrium with a nonzero number of suckers present. And finally, this argument obviously doesn't work for consumers, who don't die if they make bad decisions.

So Friedman's analogy was not a great one even on its own terms. Sometimes consumers, firms, and other agents don't perfectly optimize. Sometimes that's important. So you might want to model the ways in which they don't perfectly optimize.

But actually, everything in this post up to now has been a relatively minor point. There's a much bigger reason why the pool player analogy is bad, especially when it comes to macro - it gets chronically misused.

In pool, we know the game, so we know what's being optimized - it's "balls in pockets". But in the economy, we don't know the objective function - even if people optimize, we don't automatically know what they optimize. Studying the economy is more like studying a pool player when you have no idea how pool works.

In economic modeling, people often just assume an objective function for one agent or another, throw that into a larger model, and then look only at some subset of the model's overall implications. But that's throwing away data. For example, many models have consumer preferences that lead to a consumption Euler equation, but the model-makers don't bother to test if the Euler equation correctly describes the real relationship between interest rates and consumption. They don't even care.
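For concreteness: with standard time-separable preferences, the consumption Euler equation these models imply takes the familiar form

$$u'(c_t) = \beta\,(1 + r_{t+1})\,\mathbb{E}_t\!\left[u'(c_{t+1})\right],$$

which says that expected consumption growth should line up with the real interest rate in a specific way. Testing the assumption directly would mean checking whether measured consumption growth and real interest rates actually satisfy this relationship - and that's precisely the check that gets skipped.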

If you point this out, they'll often bring up the pool player analogy. "Who cares if the Euler equation matches the data?", they'll say. "All we care about is whether the overall model matches those features of the data that we designed it to match."

This is obviously throwing away a ton of data. And in doing so, it dramatically lowers the empirical bar that a model has to clear. You're essentially tossing a ton of broken, wrong structural assumptions into a model and then calibrating (or estimating) the parameters to match a fairly small set of things, then declaring victory. But because you've got the structure wrong, the model will fail and fail and fail as soon as you take it out of sample, or as soon as you apply it to any data other than the few things it was calibrated to match.

Use broken pieces, and you get a broken machine.
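As a minimal sketch of how this goes wrong (my own toy example, not any real macro model): give a "model" enough free parameters to hit the calibration targets exactly, then take it out of sample. Here the true process is linear, but the calibrated model is a quadratic forced through three noisy observations.

```python
def fit_quadratic(points):
    """Exact quadratic through three points (Lagrange interpolation):
    three free parameters, three calibration targets -> perfect in-sample fit."""
    (x0, y0), (x1, y1), (x2, y2) = points
    def p(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
              + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
              + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return p

def true_process(x):
    """What the world actually does: plain linear growth."""
    return 2.0 * x

calibration_data = [(0, 0.1), (1, 2.2), (2, 3.9)]  # three noisy observations
model = fit_quadratic(calibration_data)            # hits all three exactly

for x, y in calibration_data:
    print(f"in sample, x={x}: model={model(x):.2f}, data={y:.2f}")
print(f"out of sample, x=10: model={model(10):.2f}, truth={true_process(10):.2f}")
```

In sample the fit is perfect - "declaring victory" - but the wrong structure (a quadratic standing in for a linear process) makes the out-of-sample prediction badly wrong.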

This kind of model-making isn't really like assuming an expert player makes all his shots. It's more like watching an amateur pool player until he makes three shots in a row, and then concluding he's an expert.

Dani Rodrik, when he talks about these issues, says that unrealistic assumptions are only bad if they're "critical" assumptions - that is, if changing them would change the model substantially. It's OK to have non-critical assumptions that are unrealistic, just like a car will still run fine even if the cup-holder is cracked. That sounds good. In principle I agree. But in practice, how the heck do you know in advance which assumptions are critical? You'd have to go check them all, by introducing alternatives for each and every one (actually every combination of assumptions, since model features tend to interact). No one is actually going to do that. It's a non-starter. 

The real solution, as I see it, is not to put any confidence in models with broken pieces. The dream of having a structural model of the macroeconomy - one that we can trust to be invariant to policy regime changes, one that we can confidently apply to new situations - is a good dream, it's just a long way off. We don't understand most of the structure yet. If you ask me, I think macroeconomists should probably focus their efforts on getting solid, reliable models of each piece of that structure - figure out how consumer behavior really works, figure out how investment costs really work, etc. That's what "macro-focused micro" is really about, I think.

So let's put Friedman's pool player analogy to rest.


Chris House (who was the first person to ever introduce me to the pool player analogy) has a response to this post. But as far as I can tell, he merely restates Friedman's (flawed) logic without addressing the main points of my post. 


  1. I'm totally with you on the main points. It also worries me how much we rely on the "models in our mind" without knowing the full details of the model, or which assumptions are "critical" and which are not. I guess the job of the social scientist is to eliminate models that make bad predictions, and offer new ones that improve predictions. But to do that we need to be serious about the limits of observation, and focus as much as possible on experimentation, in the lab or the field.

    My earlier views on this topic http://www.fresheconomicthinking.com/2013/10/economists-do-it-with-models-is-one-of.html

  2. Anonymous 7:49 PM

    I think you may not be playing enough pool. What's being referred to is how pool players decide where to strike the cue ball, and which ball to hit in what direction. A map of the pockets isn't nearly sufficient to predict the shots that would be taken. But once you have a map of the balls after the break and a map of the pockets, you might be able to do a reasonable job of predicting the shots throughout the game. That may not help you predict who wins, but if you wanted to understand why good pool players try the shots they try, a good place to start would be to assume they're using geometry and physics to choose the right angles, and then insert some random error into their actual shots. That would probably be enough to generate quite a few losses. Once random error proves insufficient, then you move on to more systematic errors.

    Seems to me that a similar procedure should be applied to most economics. Start with optimization, move to optimization with trivial errors, and end with behavioral biases and non-optimizing behavior. But the first thing is always to get the basic "physics" and state of the world right; if you don't have a map of the positions of the balls initially, you're not going to be able to predict anything. I'm not convinced that much of macro is even at that point yet, so jumping straight to inserting biases may be putting the cart before the horse.

    Also, can't resist this: https://www.youtube.com/watch?v=cdVRvYnqj6w

    1. That sounds about right to me.

  3. This post is silly. These days the pool player argument isn't used as a defense of the notion that people optimize at all (which is what you seem to be attacking, bizarrely) - it's a defense of using mathematics at all. Because a common criticism is that people "don't have mathematical equations on their brain when interacting in a market place, so using complex equation is stupid" - and that is a really really stupid argument.

    In the same way, a pool player isn't solving quadratic equations when deciding where to pot the ball - but at the same time, if the only information a scientist had to predict the most likely trajectory of the cue ball and all the other balls was the location of the cue and other balls, at some point he would certainly need to use some kind of quadratic formula. This has nothing to do with optimization; even if you account for margins of error, randomness, and bias, there is simply no way you can do this without using mathematical equations to predict the optimal path (assuming it is a good pool player).

    My second problem is that I've never seen the pool cue argument as a defense of incorrectly specified objective functions in macroeconomic models. That seems like a strawman to me.

    My third problem is this: " not to put any confidence in models with broken pieces". Unless I'm misunderstanding what you mean by a 'broken piece', this seems obviously impossible, and will forever be impossible.

  4. I'd add that 99% of people are not good at playing pool, at least at a professional level, and those who can play are most likely the result of experience, situation, and some luck, besides knowing the basic physics of the game - just like economic actors.

  5. Fantastic post, Noah. Epistemology is crucial, no doubt. And there is plenty of bad epistemology in economics.

    1. Anonymous 2:49 AM

      There is nothing less crucial than epistemology.

  6. Side note: Friedman distinguishes between "returns" and "profits." What difference is he trying to invoke? Is it just non-financial compensation?

  7. "The real solution, as I see it, is not to put any confidence in models with broken pieces"

    Models are never entirely right, the map is never the territory. Models are information theoretically useful to varying degrees of accuracy. You can still have a degree of confidence when they get things wrong.

    The pool analogy has little to do with models and more to do with natural selection.

    The real solution is to make models compete for accuracy at predicting complex unseen data. Machine learning communities regularly set up competitions where the data to predict is unseen. Don't put zero confidence in flawed models; just measure the right amount of confidence to use.

    Also I think I posted this here before: http://www.yudkowsky.net/rational/technical

    Modeling for rational agents also has an additional benefit. A system designed to cater to rational agents allows people to increase their welfare simply by acting more rationally. Otherwise, people will need to act with a kind of second-order rationality, where they take into account and compensate for all the ways the system expects them to be irrational.

  8. We think Dani Rodrik agreed with Friedman (without noticing, of course) on the "critical assumptions" point. You know that your critical assumptions are correct if you can make good predictions using them. If your predictions are correct, that must mean that your theory has something worthwhile in it. But if your predictions fail, then your "critical assumption" must be wrong.
    (Of course, usually it is hard to notice which prediction works, especially in macro).

    Regarding the pool player analogy: we think Friedman's point was not to describe the best player's game. Rather, his intention was to predict how the final state of a system will change when a change is introduced into it. (The pool player analogy is not very good at illustrating that point, though.)
    Our guess is that the ontology assumed by Friedman, Lucas and many others is that decision makers behave adaptively. But then, how could we make predictive (and therefore refutable) theories? That question refers to the methodology, and the F53 answer is to build final-state equilibrium models and test their implications.

    You should probably take a look at Lucas (1986), "Adaptive Behavior and Economic Theory". Here is a quote from there: "We use economic theory to calculate how certain variations in the situation are predicted to affect behavior, but these calculations obviously do not reflect or usefully model the adaptive process by which subjects have themselves arrived at the decision rules they use. Technically, I think of economics as studying decision rules that are steady states of some adaptive process, decision rules that are found to work over a range of situations and hence are no longer revised as more experience accumulates."

    So, it's important to emphasize the difference between the PROCESS of decision and the FINAL STATE of that process. Friedman's pool player refers to the second of them (as a predictive rather than descriptive theory).

    Greetings from Argentina.
    Diego Weisman and Santiago Hermo.

    1. Um, but in a lot of these models that final state is only asymptotically approached, never actually observed in reality. So, asymptotically people may use ratex, but in fact they never actually do as they are always in a transition to that state, and they are using some sort of adaptive ex in their effort to get there. Just a big mirage, but, hey, let us assume we are there for the purposes of our nice DSGE models.

      Barkley Rosser

  9. So Friedman's analogy was not a great one even on its own terms. Sometimes consumers, firms, and other agents don't perfectly optimize.

    We know something vastly stronger than this: if individuals are rational, then firms can't optimize. Holmström's theorem says that in a team production problem, there aren't any incentive systems which are simultaneously (a) budget-balancing, (b) Pareto-optimal, and (c) Nash equilibria.

    Of course, most economic production happens in firms....

  10. I think what you're missing is which pool player stays near the top of the rankings the longest.

  11. "If you ask me, ..."

    That's just it. We're not asking you. What position are you in to tell us how to do our research? What's your advice? "Find out how things really work." What a revelation.

    1. Actually several times, macroeconomists have come up to me and asked "So what do you think we should be doing?" I started feeling guilty when people asked me that, because who am I to tell people how to do their jobs? But I thought about it for a while, and this post is basically my answer to their question.

      As for you, is someone clamping your eyelids open and forcing you to read my blog while Ode to Joy plays in the background? Because otherwise, if you care so little what I think, I can't see why you keep reading every post I write and taking the time to post in the comments! :-)

    2. Actually, I'm told Dr. Ludovico's new mandatory blog reading program for the St. Louis Fed staff is both draconian and inhuman.

  12. Noah,

    So, are you against using utility functions and optimization methods as descriptions of human behavior? What if they include evidence from behavioral economics? If not, what else would you suggest?

  13. Kartoffel 5:53 PM

    I can't quite understand your first objection - add a random factor to the pool shots and you could easily use the optimization model (with optimization under uncertainty).

  14. I think the strongest objection is that we don't know what people are trying to optimise most of the time, and there is no reason to believe that they are not changing their goals as they (imperfectly) assess the possibilities. One approach to this sort of issue is to treat it as a very big jigsaw puzzle - forget models, look at bits (not in experiments with 50 students, but in the field), gather and refine as much historical data as possible, test them against each other, join bits together, and occasionally stand back to look at the result and argue about the frame.

    That's what historians and biologists and climatologists and sociologists do. It's what economics did in part up to 1930 or so. It might start doing it again after the next 4 or 5 big failures...

  15. Anonymous 1:45 PM

    It parallels Anselm's ontological "proof" for the existence of God and is about as stupid. Further evidence that modern economics has been a school of rhetoric masquerading as science.

  16. i always understood friedman's point to be that models don't need to look like reality to describe reality well. much like a map doesn't have to reflect every detail of a city accurately to be useful.

    in linguistics, noam chomsky talked about a "language acquisition device" to model how people learn languages. i don't think anyone ever thought that there is an actual device to learn languages in our brains.

    in physics, we know that rutherford's model of the atom looks nothing like an actual atom, but it's still useful to understand what atoms are.

  17. to me, friedman's analogy simply means that a model doesn't have to look like reality to be useful. a map of the new york subway is not a good representation of the actual city, but it's very helpful to get around.

    in linguistics, noam chomsky talked about a language acquisition device to describe how people learn to speak. i don't think anyone has ever thought that this device actually exists.

    in physics, rutherford's model of the atom is not accurate, but it helps us conceptualize what's going on.

    of course, if the model doesn't work, all this discussion is moot, because it makes no difference whether the model was meant to look like reality or not. it's only when a model works that it makes sense to ask if the knowledge it gives us and the method by which it was discovered are legitimate.

  18. Heuristics. This is how almost all pool players operate.

    And sometimes they are wrong.

    Is this the right way to create economic theory? I'm not a fan of Friedman. In fact, I think he was a total quack who ruined the idea and field of economics for generations of open minds.

  19. Anonymous 10:25 AM

    I've always been told that Friedman's argument was a garbled account of ideas he learned from Armen Alchian:


    To me, Alchian's argument is closer to Stiglitz than what Friedman wants, but you know.

  20. The real problem Friedman brought on himself was selecting pocket billiards (pool) rather than three cushion billiards (largely out of favor in the US but wildly popular in parts of Europe and Korea). There is a much better physics analogy with TCB, as opposed to pool: every shot has a purer physics base, compared to pool, which is more about strategy (running a rack and setting up the break ball for the next rack). In pool, expert players usually have much larger runs than TCB players.

    Back in the days when I frittered away time in the pool hall in grad school, I much preferred billiards to pool. But then I'm just a chemist and not an economist.

  21. Friedman is arguing that you use the model's predictions to test the model, not its assumptions. He would absolutely agree with you that bad assumptions should be tossed *IF* they lead to bad models that lead to bad predictions. Friedman was reacting to studies that sought to refute the marginal school by surveying business leaders and asking them if they optimized. Friedman was saying those surveys told you nothing. I agree the pool player analogy was not all that great, but his point was not what you are making it out to be exactly. He was arguing that such models of optimization were useful if they led to refutable hypotheses. Remember, he was a big fan of Karl Popper and was trying to make econ adhere to Popper's rules for doing science. I think you got his analogy right, but his reason for writing it wrong.

    1. Anonymous 4:17 AM

      Friedman liked Popper, but didn't follow him. You can tell, because everything you just said about philosophy of science was completely anti-Popper. Popper believed that the description of scientific theories - their "metaphysics" - mattered.

      Friedman was a _logical positivist_ like his co-author Jimmie Savage and the other early Bayesians. He just wasn't a particularly good one (at the philosophy, he was great in practice).

  22. Your point that macro should focus on specific areas first seems good. So is your point about what the objective function is for macro, though a good candidate for the objective function is to maximize the present value of all future GDP of the economy (national? or world? or state/region? or city?).

    But the pool player actions are a better analogy than most others -- he acts, on each shot, as if he was doing all the calculations. " acceptance of the hypothesis can be based largely on the judgment that it summarizes appropriately the conditions for survival." << (Milton, on maximization-of-returns hypothesis)
    Friedman was more concerned with real goods & services producing businesses and returns, often not including financial investors without explicitly excluding them.

    Noah, you say: " There are plenty of theoretical ways that non-profit-maximizing agents can stay around forever." But the paper referenced talks about noise-traders, and explicitly notes that such noise-traders might be able to make higher returns due to accepting the increased noise-trade risk that they themselves cause. My point being that the paper highlights how it might be possible for the noise trader to make ... higher returns, higher than the explicitly serious arbitrager investor.

    That makes the noise trader look more like a return maximizer, in practice, than the serious investor. So the actual (theoretical) results from the paper confirm acceptance of the max-return hypothesis, even for the noise trader. So you're wrong to claim (Uncle) Milton is wrong.

    tiny typo should be 'see' instead of 'he': "watching an amateur pool player until you he makes three shots in a row," <<

  23. Excellent article. I completely understand your frustration with economic models that completely ignore the more recent teachings of the behavioral school and keep harping on the fallacy of the consumer as the rational optimizer.

  24. What do you think about "The Mistrust of Science"?


    Science/Empiricism is after all, a learned thing, contrary to your priors ;).

  25. After reading this and pondering it for far too long, I wonder if the Swiss Cheese model used in accident causation with some tweaking might be a better analogy for models with multiple broken parts.

  26. Charrua 12:42 PM

    In the following link, there is a nice piece of work using data to solve a ten-players-and-a-ball optimization problem: http://grantland.com/features/the-toronto-raptors-sportvu-cameras-nba-analytical-revolution/
    What's even more interesting is that the guys who did the model then reflect on why it's hard for players to follow the strategy the model suggests: physical limitations, habits, etc.
    This seems to me a good method: look at the data at the most granular level possible, work out if and how the players are optimizing, and discover what previously unseen limits or needs are driving behavior.

  27. Here is my take on optimization and the pool player analogy:

    In a model you find what the optimum is for the agents (people). The people may not go through the math you did in the model to find that optimum. They may use rules of thumb. They may use non-verbal, non-conscious, algorithms built into their brains by evolution, and combined with the data of personal experience. But, if those ways are pretty effective, then they may usually find the optimum, or, on average, get very close.

    The math tells you what the optimal pool shot is. The good pool player uses evolutionarily developed algorithms combined with experience to devise his shot. But the point is, if he's a professional pool player, the optimal shot will be a very good approximation of how he actually hits it, and so it's a very valuable thing to use in a model, and in devising related policies.

    You don't care that much, at least for many important questions and real-world applications and policies, whether people do it exactly like the optimization in the model, but you do care that the optimization leads to something close to the end decisions people will actually make. And it looks like, based on many of the things I have heard Friedman say in his life, he didn't care about this second part, as it often makes his libertarian philosophy look horrible to the vast majority of people, and that gets you nowhere in a democracy (a big reason libertarians are very derisive toward, and hostile to, democracy).

    And, just in general, when you do make highly unrealistic assumptions in a model, you, of course, think intelligently about what you can apply to the real world, and what you can't, and how it will differ when you relax the assumptions. You don't just automatically literally assume the real world will behave exactly the same, or very close. Many obviously know this, but ignore it to make their ideology look better, or their hard won modeling expertise more valued, expertise that gets them tremendous monetary, prestige, and power rewards. And this approach can be very effective for them, as their mathematical papers are incomprehensible to 99%+ of the population, and often to more than 90% even of other PhD economists who are not in their specialty.

    The big problem I see with Friedman's pool player analogy is that it's often applied when the pool player makes shots far from the optimum that comes from the formulas. For example, when the pool player has incorrect information that is highly material - as when someone with an adversarial incentive fools him into thinking that he optimizes by avoiding the holes and instead trying to get all of the balls touching the far end. And we see this kind of successful fooling all the time, sadly, in economics (and politics), from sales of horrendously bad and risky annuity plans to seniors, to for-profit universities, to CT scans that are unnecessary, harmful, and priced at 10 times what they are available for elsewhere. In such cases, the models have severe problems when interpreted fully literally to reality and government policy.

    If the pool player uses a set of genetically programmed algorithms, calibrated by experience, that are very different from applying Newtonian equations, but they do lead to extremely similar shots, then it's a fine analogy. But in many extremely important cases that kind of thing is far from the truth.

    1. There's a good chess analogy.

      A computer chess program finds its moves in a very different way than a human grandmaster, but the move selection will typically be very similar. There's just more than one way to find the optimum, or something very close. So the move found by a computer chess algorithm will be a very good approximation of the behavior of a grandmaster, in results if not methods, but a terrible approximation of the behavior of the median person. In one case, a pretty literal interpretation of a chess-program-based model makes sense, in at least some ways. In the other case, it doesn't, and your interpretations to reality, and to real-world policy, have to be very much modified and thought out, with your real-world knowledge (Bayesian priors) in mind.

    2. Also, a must-read paper for any economist who discusses Friedman's pool player analogy is the 2014 "Chameleons" paper by Stanford economist Paul Pfleiderer. Here's a taste:

      "An absolutely literal interpretation of Friedman’s claim that models should be judged only by their predictions leads to nonsense. As I will argue below, the only way the extreme Friedman view can be supported requires that we come to any prediction problem with completely agnostic Bayesian priors about the world. For example, assume that we are interested in economic models designed to predict the U.S. inflation rate in year T based on information known in year T-1. Let us take literally the notion that we should judge these models only by the success of their predictions and we should completely ignore the realism of these models’ assumptions. In this context we would simply be looking for the model with the best inflation prediction record. If we allow any assumptions to be made no matter how fanciful they are, we can use any data that are available at T-1 to predict the inflation rate in period T.

      The number of models we could develop that make (within sample) extremely accurate predictions is extremely large if not infinite. Friedman, of course, recognizes that within-sample fit is meaningless in this context and states that the goal is to obtain “valid and meaningful (i.e., not truistic) predictions about phenomena not yet observed.” However, if we wait a few years and use the new out-of-sample data to “test” all of our contending inflation prediction models, it is not the case that the one “true” model will emerge victorious and all the “false” models will crash and burn. We will, of course, find that the performance of most of the models that we identified as stellar within-sample performers is severely degraded in the out-of-sample period, but an extremely large number will still survive. Many of the surviving ones will no doubt be based on fanciful and unrealistic assumptions. For example, we may find that in one of the many surviving models the inflation rate in year T is determined by a simple function of the average height of male Oscar award winners in year T-1. Of course, I am well aware this is argumentum ad absurdum, but I am taking Friedman’s claim that we should ignore the realism of the assumptions quite literally…

      …Since little or nothing in the Oscar Award model is in accord with our background knowledge about the world, we reject it. It does not pass through any sensible real world filter. This is because our Bayesian prior on the Oscar Award model is effectively zero and a track record of prediction successes does essentially nothing to change that prior. This Bayesian prior is based on our knowledge of the world and the only way to give the Oscar Award model any standing would be to ignore our knowledge of the world and have a “completely uniform” or “agnostic” Bayesian prior…

      …We cannot avoid the need to run models through the real-world filter. The literal interpretation of Friedman’s claims cannot be taken as an argument for allowing chameleons to bypass these filters.
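    Pfleiderer's out-of-sample point is easy to see in a quick simulation. The sketch below is my own (the thresholds and counts are made up for illustration, not from the paper): we generate a large number of pure-noise "models," keep the ones with stellar in-sample fit, and then count how many still pass a loose out-of-sample test by sheer luck.

```python
import numpy as np

rng = np.random.default_rng(0)

n_models = 100_000           # candidate "models" (pure noise predictors)
n_in, n_out = 20, 20         # in-sample and out-of-sample "years"

target = rng.normal(size=n_in + n_out)            # the series to predict
preds = rng.normal(size=(n_models, n_in + n_out)) # each row: one model's predictions

def row_corr(a, b):
    """Correlation of each row of a with vector b."""
    a = a - a.mean(axis=1, keepdims=True)
    b = b - b.mean()
    return (a @ b) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b))

# "Stellar" in-sample performers: high absolute correlation by chance.
in_corr = row_corr(preds[:, :n_in], target[:n_in])
survivors = np.abs(in_corr) > 0.5

# Of those, how many still look decent out of sample, purely by luck?
out_corr = row_corr(preds[survivors][:, n_in:], target[n_in:])
still_ok = np.abs(out_corr) > 0.3

print(survivors.sum(), "stellar in-sample models")
print(still_ok.sum(), "still pass out of sample by luck")
```

    With enough candidate models, a nontrivial number of pure-noise predictors survive the out-of-sample test, which is exactly why a near-zero prior on the Oscar Award model matters.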



  28. Anonymous 8:47 AM

    In computer science this problem is called overfitting: when you calibrate a model to specific samples, it fixates on those samples and loses relevance to the wider world.
    The only solution is more knowledge with which to constrain the model, or more data.
    Overfitting becomes more likely the more free parameters the model has relative to the number of data points.
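    The parameters-versus-data-points point can be shown in a few lines. This is a minimal sketch of my own, using polynomial regression as a stand-in model: a degree-9 polynomial on 10 noisy points fits the training data almost perfectly but generalizes worse than a straight line.

```python
import numpy as np

rng = np.random.default_rng(1)

# True relationship is y = x; we observe 10 noisy training points.
x_train = np.linspace(0, 1, 10)
y_train = x_train + rng.normal(scale=0.1, size=10)
x_test = np.linspace(0, 1, 100)
y_test = x_test  # noiseless truth, for measuring generalization

results = {}
for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    results[degree] = (train_mse, test_mse)
    print(f"degree {degree}: train MSE {train_mse:.2e}, test MSE {test_mse:.2e}")
```

    The degree-9 model has as many parameters as data points, so it interpolates the noise exactly; its training error is near zero while its test error is far larger. That is the overfitting the commenter describes.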

  29. Billiards is a really great game.

  30. Good points in the post, but I have an issue with this passage, which seems to misunderstand the way in which you might use a physics-based pool-playing model:

    >> "Suppose the pool player is so perfect that he makes all his shots. In that case, using physics equations to predict what he does is a pointless waste of time and effort...you know where the balls go. No equations required!"

    The above is true only if all you care about is the ultimate result, i.e. where the balls end up. But that's asking the wrong question of the model. The fact that the pool player seeks to get balls in pockets (and largely succeeds in doing so) is an assumption of the model rather than a prediction.

    You're far more likely to use the pool-player model to answer these questions: "given that the red ball will end up in the corner pocket, at what angle does the player need to strike the white ball? And how much would that angle change if we move the red ball 6 inches to the left?"

    The economic equivalent to those questions is: "given that a firm seeks to maximize profit, how will it price its product(s)? How will those prices change if demand changes?"
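    That pricing question has a standard textbook answer, which can be sketched concretely (the numbers below are my own illustrative choices, not from the comment): a monopolist facing linear inverse demand P = a − bQ with constant marginal cost c sets marginal revenue equal to marginal cost, giving Q* = (a − c)/(2b) and P* = (a + c)/2, so a demand shift (a rising) shifts the predicted price.

```python
def monopoly_price(a, b, c):
    """Profit-maximizing price for inverse demand P = a - b*Q
    and constant marginal cost c (standard MR = MC condition)."""
    q_star = (a - c) / (2 * b)   # MR: a - 2bQ = c
    return a - b * q_star        # equals (a + c) / 2

# Baseline demand vs. a positive demand shock (a rises from 100 to 120).
print(monopoly_price(100, 2, 20))  # 60.0
print(monopoly_price(120, 2, 20))  # 70.0
```

    As with the pool angles, the model's value is in these comparative statics, not in predicting the already-assumed goal of profit maximization.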

    In that light, Friedman's analogy seems perfectly appropriate to me.

  31. Gordon 11:41 AM

    If you use a model in which expert pool players know exactly where to hit the ball, your model would never explain what is happening when players hit safety shots. Your model might assign probabilities to the ball not actually going in the pocket, but it would be a complete black swan to see a ball not aimed at a pocket at all, aimed instead at making the opponent's next shot very awkward. (Perhaps expert pool players rarely find themselves playing such shots, but expert snooker players certainly do, and an over-confident pool model would probably assume it can simply be adjusted to model snooker.)

  32. The more you think about it, the greater the distance between your arrow and Friedman's intended target gets, so that can only mean one thing: you're overthinking it.