Ambiguous games and a well-heeled wizard
Some idle thoughts on Ellsberg games motivated by an interesting JMP
First, a thought experiment
A cartoonish premise
It’s somewhat important not to skip ahead too far in this post. Do your best to proceed line-by-line. I’ll do my best not to ramble.
Suppose you were presented with the following scenario. A cash-rich cartoon wizard has invited you to play several games of chance.
Game setup
As the amicable little wizard explains, there are two identical containers labeled R and A, as indicated in Figure 2.
R holds exactly 50 red balls and 50 blue balls. A holds exactly 100 balls such that each ball is either red or blue, but in unknown proportions.
The victory conditions
In a moment, the wizard explains, you will have the opportunity to play a few games of chance.
Game 1: Draw two balls with replacement from container R. You win if and only if the two balls share the same color (i.e. both are red or both are blue).
Game 2: Draw two balls with replacement from container A. You win if and only if the two balls share the same color (i.e. both are red or both are blue).
What’s at stake
For each game, if you win, you receive $300. If you lose, you receive nothing.
As a result, you may now realize, you could walk away from these two games with any of the following payouts: $0, $300, $600.
Now there’s just one additional quirk. It’s not certain whether you get to play the game.
Before starting each game, our wizard friend explains, you must write a dollar number on a piece of paper and put it in a box.
The wizard will, randomly and fairly, pick a number between $0 and $300. Then there are a few possible outcomes:
If the number drawn by the wizard is greater than or equal to the number you put in the box, you'll skip the game and instead be paid the number the wizard drew (which is at least as large as the number you wrote)
If the number drawn by the wizard is less than the number you put in the box, you’ll play the game. At this point you may win either $0 or $300.
It may or may not be clear at this point, but the number you write on the paper should be the payout at which you're personally indifferent between playing the game and walking away with the money. You could think of it as the minimum price at which you're content to walk away from the game: your certainty equivalent.
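One round of this pricing step can be sketched in a few lines of Python. This is a sketch, not the wizard's exact procedure: I assume the draw is uniform over whole-dollar amounts, and I use the standard Becker–DeGroot–Marschak payment rule, where a draw at or above your written number pays you the drawn amount.

```python
import random

def pricing_round(your_number: int, play_game, payout: int = 300):
    """One round of the wizard's pricing step (a BDM-style mechanism).

    The wizard draws a uniform whole-dollar amount in [0, payout].
    If the draw is at or above your written number, you are paid the
    drawn amount and skip the game; otherwise you play the game,
    winning either $payout or $0.
    """
    draw = random.randint(0, payout)
    if draw >= your_number:
        return draw  # walk away with the wizard's number
    return payout if play_game() else 0

# Example: pricing a fair coin-flip game (like Game 1) at $150.
result = pricing_round(150, play_game=lambda: random.random() < 0.5)
```

Under this payment rule, writing your true certainty equivalent is the optimal strategy: writing a lower number sometimes sells the game for less than it's worth to you, and writing a higher number sometimes forces you to play when you'd rather take the cash.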
Try it out
Grab a pen and a piece of paper. Or you could write on your hand, use the Notes app on your phone, or skip this part altogether. I’m not here to tell you what to do.
What numbers would you pick? What is your certainty equivalent for Game 1? For Game 2? How do these compare?
Thinking about it analytically
Game 1
This game is fairly simple. There are four equally probable outcomes:
25% chance: Red, then blue → loss
25% chance: Blue, then red → loss
25% chance: Red, then red → win
25% chance: Blue, then blue → win
So we can assess a 50% probability of winning $300 and a 50% probability of winning $0, for an expected payout of $150. If I had risk-neutral preferences, I might write $150 on my paper.
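If you'd rather simulate than enumerate, a quick Monte Carlo sketch (using the container contents from the setup above) agrees with the 50% figure:

```python
import random

def play_game_1(trials: int = 100_000) -> float:
    """Simulate Game 1: draw twice with replacement from container R
    (50 red, 50 blue); you win if both draws share a color."""
    container_r = ["red"] * 50 + ["blue"] * 50
    wins = sum(
        random.choice(container_r) == random.choice(container_r)
        for _ in range(trials)
    )
    return wins / trials

p_win = play_game_1()
print(f"Estimated win probability: {p_win:.3f}")  # ~0.500
print(f"Expected payout: ${300 * p_win:.0f}")     # ~$150
```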
Game 2
This game is less simple. We don’t actually know the parameters of the probability distribution we draw from.
But we can build toward the probability of a win.
Suppose b represents the quantity of blue balls in container A. We can say the probability of drawing a blue ball on any given draw (recall we sample with replacement) is just b / 100. Relatedly, we can call the probability of drawing a red ball (100-b)/100.
So the probability of drawing two balls of the same color is represented by the expression (b/100)^2 + ((100-b)/100)^2.
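Spelling this expression out in code (nothing here beyond the formula above) makes the pattern easy to see:

```python
def p_same_color(b: int) -> float:
    """Probability that two draws with replacement share a color,
    given b blue balls out of 100 in container A."""
    p_blue = b / 100
    p_red = (100 - b) / 100
    return p_blue**2 + p_red**2

for b in (0, 25, 50, 75, 100):
    print(f"b = {b:3d}: P(win) = {p_same_color(b):.3f}")

# The even 50/50 split is the worst case:
worst = min(p_same_color(b) for b in range(101))
print(f"Minimum win probability: {worst}")  # 0.5, at b = 50
```

The win probability is 100% at either extreme (b = 0 or b = 100) and bottoms out at exactly 50% when the container is split evenly.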
As shown in the plot above, although we don’t know the ball composition of container A, it is simply not possible for this composition to be worse than the composition in Game 1.
In fact, the win probability gets higher, reaching as high as 100%, as we approach the more uneven extremes of possibility for container A. The 50% win probability outlined in Game 1 is the worst possible case.
Revisiting our original choices
If you participated in the thought experiment and wrote down numbers to put in the imaginary box, how did they compare?
If you were behaving rationally (at least according to one understanding of the word), your certainty equivalent should be higher for Game 2 than for Game 1. That is, it should be harder to convince you to walk away from Game 2 than Game 1. If able to play one game and one game only, you should exhibit a preference for Game 2.
However, if you showed a preference for Game 1, you’re not alone. The majority of people prefer Game 1!
Making sense of this
Attribution first
In putting this post together, I've in some ways awkwardly reproduced an experiment run by Brian Jabarian of the Paris School of Economics in his job market paper.
What are we measuring?
The key point of this experiment is to gauge the extent to which people exhibit ambiguity aversion. Jabarian follows in the tradition of Ellsberg, using a very similar ball-and-urn scenario. (Unrelated note: Daniel Ellsberg has lived an extremely interesting life, and his Wikipedia page is well worth a read.)
Ambiguity in this context means something different from risk, although the two are easy to mix up.
There are risky scenarios that are not ambiguous, at least in the sense that you know the probability distribution of relevant events. Slot machines are risky but mechanistically transparent and mostly unambiguous. Game 1 here is very similar: you know the composition of the container, but there is still risk. We have an extensive literature on risk aversion, and to some extent the term has slipped into common usage.
By contrast, you can imagine ambiguity as the lack of information about the distribution of plausible outcomes. The parameters determining risk are fuzzy. In this case, Game 2 — testing container A — has an ambiguous distribution of payouts. But by Jabarian’s clever construction, it bears no more risk than Game 1 and possibly higher payouts.
Ambiguity aversion and risk aversion can move in opposite directions for a given treatment or demographic. For example, we can point to NBER research in which women are more risk-averse than men, but less ambiguity-averse than men. We can also observe that ambiguity-averse investors actually absorb more risk in their portfolios, seemingly as a consequence of insufficient diversification.
The main result of Jabarian’s paper is to demonstrate that a clear majority (~55%) of subjects prefer the less ambiguous, lower-expected-payout option afforded by Game 1 over the more ambiguous, higher-expected-payout option in Game 2. And the average effect is surprisingly large.
Now, there are a lot of things that can explain that behavior.
One of the reasons we might see this seemingly irrational behavior from subjects is their failure to understand the experiment. For instance, we might reasonably guess, the intuition behind Figure 4 may not be obvious to everyone. Jabarian actually tested this by nudging participants into recognizing that uneven distributions of red vs. blue balls are better. When all of the options are equally ambiguous, the average subject accurately ranks expected payoffs. But when brought back into the context of a simply risky option against a risky, ambiguous option, subjects fall back into the ‘incorrect’ ordering. So we can’t easily point to poor comprehension. Besides, there’s other work indicating the persistence of ambiguity aversion in highly competent subgroups.
Another reason people show some of this ambiguity aversion is an aversion to complexity. There’s a pretty extensive literature on complexity aversion itself. As Jabarian and others have found, complexity aversion is tightly correlated with ambiguity aversion at the individual level. That is, if you demonstrate an ‘irrational’ avoidance of complex problems (e.g. compound lotteries), you’ll likely avoid ambiguous problems as well. Given how closely related these behaviors are in empirical research, we might choose to treat ambiguity as a special case of complexity.
From what I can tell, we don’t entirely know how to explain the results we observe empirically. Real-world observations defy models of behavior, and the psychological determinants of ambiguity aversion seem a bit muddy.
What do we do with this?
People exhibit an irrational aversion to ambiguity. Those of us who avoid ambiguity pay a price to do so.
I know at least a few people who, having recognized an instinctive aversion to risk, attempt to lean on more logical expected-utility frameworks. I have not encountered anyone who has come to terms with their ambiguity aversion (or related complexity aversion).
It often feels to me that certainty and simplicity have become excessively fashionable. I have written before about my dislike for benchmarks, heuristics, and playbooks — varying flavors of rule-based scaffolding we use to hide from ambiguity.
From my perspective, if most of us will pay to avoid ambiguity, then we might reasonably expect ambiguous opportunities (e.g. in investing, in career paths) to be less rivalrous and therefore less costly to pursue on average. Given the extent of correlation, we might say similar things about complex opportunities. And if this is true, then the more ambiguous and/or complex an opportunity, the more easily we can find excess returns.