In a Crooked Timber post about cyborgs, Chris Bertram writes:
I expect someone will be along to explain how...contracts [requiring employees to get cyborg modifications] would be win-win.

Matt Yglesias drops by in the comments to write:
It seems pretty obvious how they would be win-win: They’d be agreed to voluntarily by two mentally competent adults.

Actually, this is a common misconception, so I thought I'd write a quick post to correct it. Basic Econ 101 does not imply that voluntary contracts are mutually beneficial to the people who enter into them.
The misconception springs from some solid intuition. In general, people who are free to do what they want, do do what they want. Maybe sometimes they don't realize what they want, or are subject to compulsions like addiction, but in general, free people only make deals that they want to make.
BUT, it doesn't follow that contracts are mutually beneficial. The reason is that there is uncertainty in the world.
Suppose that there's a deal that has a 60% chance of being to my benefit, and a 40% chance of being to my loss (assume equal benefit and loss here, just for simplicity). If I'm a rational person, and not too risk-averse, I would do that deal. But that still leaves a 40% chance that I'll lose out on the deal.
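Just to make the arithmetic concrete, here's a quick Python sketch of that deal. (The payoff of 1 unit is an arbitrary choice for illustration; the point is only that the ex ante expected value is positive while a big chunk of the ex post outcomes are losses.)

```python
import random

random.seed(0)

P_WIN = 0.6   # probability the deal benefits you
PAYOFF = 1.0  # assume equal-sized benefit and loss, for simplicity

# Ex ante: the expected value of the deal, before the dice are rolled.
expected_value = P_WIN * PAYOFF + (1 - P_WIN) * (-PAYOFF)
print(f"ex ante expected value: {expected_value:+.2f}")  # +0.20, so a rational person takes it

# Ex post: what actually happens, simulated over many identical deals.
outcomes = [PAYOFF if random.random() < P_WIN else -PAYOFF for _ in range(10_000)]
losses = sum(1 for o in outcomes if o < 0) / len(outcomes)
print(f"fraction of deals that turned out badly: {losses:.2f}")  # roughly 0.40
```

A rational, risk-neutral person says yes to this deal every time, and about four times in ten they end up worse off for it.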
This is what's known as the difference between ex ante and ex post. Econ 101 says that people only make deals that are to their benefit ex ante. But that still leaves a lot of room for people to lose out ex post. And ex post is more important, since it's the real thing that actually happens to people, whereas ex ante is just what we guess will happen. (As a commenter points out, insurance contracts are a really good illustration of this principle. Would you buy health insurance if you knew you weren't going to have any health problems? Would your insurer sell you insurance if they knew you were going to get sick?)
Of course, all this doesn't mean the government needs to step in and stop people from taking risks. It just means that you can't infer outcomes from people's decisions.
Now just for fun, and because I don't like writing short blog posts, let's move out of the Econ 101 world, and introduce two advanced concepts: 1) asymmetric information, and 2) Knightian Uncertainty.
In a world of asymmetric information, one party to a deal may know something that the other party doesn't. For example, suppose you and I are considering making the deal in the above example. You think that the deal gives you a 60% chance of benefiting and a 40% chance of losing out. So, by your best guess, this deal is worth it ex ante. And so you're willing to do the deal.
But suppose I have information you don't (of which you are entirely unaware). Suppose I know that in reality, you have only a 30% chance of benefiting from the deal and a 70% chance of losing out. If you knew what I knew, you'd never agree to the deal.
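The gap between your estimate and my private knowledge is easy to put in numbers. A small sketch, using the same hypothetical payoffs as before:

```python
PAYOFF = 1.0  # equal-sized benefit and loss, as in the earlier example

def expected_value(p_win: float, payoff: float = PAYOFF) -> float:
    """Expected value of a symmetric win/lose deal with win probability p_win."""
    return p_win * payoff + (1 - p_win) * (-payoff)

believed = expected_value(0.60)  # your estimate: positive, so you agree
actual = expected_value(0.30)    # my private knowledge: sharply negative

print(f"the deal looks worth {believed:+.2f} to you, but is really worth {actual:+.2f}")
```

You sign because the deal looks worth +0.20 to you; I sign because I know it's really worth -0.40 to you (and, in a zero-sum deal, +0.40 to me).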
Now, if we could do 100 such deals, you'd eventually realize that I systematically had better information than you, and you'd become wary and stop making deals with me (as in George Akerlof's "lemons" model of asymmetric information). But in the real world, conditions are changing all the time - today I might have information you don't, and you might have information I don't. Thus, not only is there asymmetric information, but there's uncertainty (called "Knightian Uncertainty" after Frank Knight) about how likely it is that there's asymmetric information.
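Here's a sketch of why repetition protects you in the Akerlof "lemons" world: over 100 deals, the observed win rate drifts toward the true 30%, far from the 60% you believed, and the gap becomes hard to miss. (The probabilities are the hypothetical ones from the example above.)

```python
import random

random.seed(1)

TRUE_P_WIN = 0.30      # the odds I secretly know you face
BELIEVED_P_WIN = 0.60  # the odds you think you face

# Simulate 100 repeated deals, as in the text.
wins = sum(1 for _ in range(100) if random.random() < TRUE_P_WIN)
empirical = wins / 100

print(f"believed win rate: {BELIEVED_P_WIN:.2f}, observed win rate: {empirical:.2f}")
```

After enough deals, the observed rate gives you away as systematically misinformed, so you wise up and stop trading with me. The trouble, as the next paragraph notes, is that in a changing world the deals are never repeated under the same conditions, so this learning never gets a chance to work.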
This allows people to be swindled again and again, as new kinds of asymmetric info keep popping up and falling into different hands. The swindlers may change, but the swindling will never stop, no matter how rational people are or how much experience they get. This is the basis of what George Akerlof and Robert Shiller call "Phishing for Phools".
This is why we might want the government to step in and force people to divulge their private information. Econ 101 does say that better information all around can't possibly worsen the outcome of deals. I know of no "economic efficiency" argument for allowing people to try to swindle other people.
Anyway, bottom line: Even in a perfectly rational, perfectly free world, voluntary contracts are not always mutually beneficial to the people who enter the contracts. And in a realistic, ever-changing, uncertain world, some kinds of contracts might be mutually beneficial less than 50% of the time. (Of course, if you allow for people to be irrational and unfree, things get even worse. And this post doesn't even mention things like externalities, which throw a further wrench into the system.)