January 25, 2005


Games people play (The Economist, Jan 20th 2005)

Dr Kurzban and Dr Houser were interested in the outcomes of what are known as public-goods games. In their particular case they chose a game that involved four people who had never met (and who interacted via a computer) making decisions about their own self-interest that involved assessing the behaviour of others. Each player was given a number of virtual tokens, redeemable for money at the end of the game. A player could keep some or all of these tokens. Any not kept were put into a pool, to be shared among group members. After the initial contributions had been made, the game continued for a random number of turns, with each player, in turn, being able to add to or subtract from his contribution to the pool. When the game ended, the value of the pool was doubled, and the new, doubled value was divided into four equal parts and given to the players, along with the value of any tokens they had held on to. If everybody trusts each other, therefore, they will all be able to double their money. But a sucker who puts all his money into the pool when no one else has contributed at all will end up with only half what he started with.
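The payoff rule described above is simple enough to sketch in code. This is a minimal illustration of the doubling-and-splitting arithmetic, with an assumed endowment of 10 tokens per player (the article does not say how many tokens each player actually received):

```python
# Sketch of the public-goods payoff rule: each player keeps some tokens,
# the pooled tokens are doubled, and the doubled pool is split equally.
# The endowment of 10 is an assumption for illustration.

def payoffs(contributions, endowment=10):
    """Final payoff for each player given their contributions to the pool."""
    pool = sum(contributions)
    share = 2 * pool / len(contributions)  # pool doubled, split equally
    return [endowment - c + share for c in contributions]

# Everyone trusts everyone: all four players double their money.
print(payoffs([10, 10, 10, 10]))  # -> [20.0, 20.0, 20.0, 20.0]

# One sucker contributes everything while the others contribute nothing:
# he ends with half what he started with, and the free-riders profit.
print(payoffs([10, 0, 0, 0]))     # -> [5.0, 15.0, 15.0, 15.0]
```

The second case is exactly the "sucker" outcome the article describes: the lone contributor's 10 tokens become a 20-token pool, of which he gets back only 5.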

This is a typical example of the sort of game that economists investigating game theory revel in, and both theory and practice suggest that a player can take one of three approaches in such a game: co-operate with his opponents to maximise group benefits (but at the risk of being suckered), free-ride (ie, try to sucker co-operators) or reciprocate (ie, co-operate with those who show signs of being co-operative, but not with free-riders). Previous investigations of such strategies, though, have focused mainly on two-player games, in which strategy need be developed only in a quite simple context. The situation Dr Kurzban and Dr Houser created was a little more like real life. They wanted to see whether the behavioural types were clear-cut in the face of multiple opponents who might be playing different strategies, whether those types were stable, and whether they had the same average pay-off.

The last point is crucial to the theory of evolutionarily stable strategies. Individual strategies are not expected to be equally represented in a population. Instead, they should appear in proportions that equalise their pay-offs to those who play them. A strategy can be advantageous when rare and disadvantageous when common. The proportions in the population when all strategies are equally advantageous represent the equilibrium.

And that was what happened. The researchers were able to divide their subjects very cleanly into co-operators, free-riders and reciprocators, based on how many tokens they contributed to the pool, and how they reacted to the collective contributions of others. Of 84 participants, 81 fell unambiguously into one of the three categories. Having established who was who, they then created “bespoke” games, to test whether people changed strategy. They did not. Dr Kurzban and Dr Houser were thus able to predict the outcomes of these games quite reliably. And the three strategies did, indeed, have the same average pay-offs to the individuals who played them—though only 13% were co-operators, 20% free-riders and 63% reciprocators.

Nature does not select.

Posted by Orrin Judd at January 25, 2005 1:35 PM

"A strategy can be advantageous when rare..."

Who dares, wins.

Or, buy low and sell high. Were people buying in 2001/2002? They should have been. Just today, the Journal had an interesting article about two companies in Milwaukee that manufacture huge machines for mining. They were both bankrupt and virtually shuttered in the late 90s, but are thriving today. A purchase in 2001 would have yielded well over a 100% return today.

And consider Tom Brady - his 60 yard TD pass on first down was a beautiful thing.

Posted by: jim hamlen at January 25, 2005 3:27 PM

I joined through the internet in a similar experiment in July 2004. It had a twist in that you could punish other players (in my games, for the cost of 1 token I could punish others 3 tokens). Link (Dutch only): http://www.vpro.nl/wetenschap/experiment/index.shtml.
The researchers were Arno Riedl from CREED and Martijn Egas from IBED. The results were presented in WORKSHOP REPORT: COOPERATION AND SOCIAL NORMS IN HUMANS AND OTHER PRIMATES, October 1, 2004.
Main conclusion: In an online experiment with hundreds of players, Egas and Riedl varied the costs and effectiveness of punishing shirkers, to investigate the effects on the level of cooperation. They report that when punishment costs increase and/or punishing effectiveness decreases, the level of cooperation remarkably decreases. Apparently, punishing is not a sheer altruistic deed, but people take into account their own losses when punishing.

Posted by: Daran at January 25, 2005 3:31 PM

OK, we've had our fun with Darwin but now I've got a serious question. The way I read this is that there is a group characteristic (specifically, the mix of strategies) being selected for. How does this square with the classical view that the target of selection is the individual? It seems to me that in order to do this, you would have to show that for an individual whose inherited strategy is above or below equilibrium, that individual becomes reproductively disadvantaged relative to others in their group who have different strategies. But it also seems to me that the individual's disadvantage would be borne by everyone else in the group, since the disadvantage is a matter of ratio -- if he's out of whack so are they. Are we saying that the target of selection is a group?

Posted by: joe shropshire at January 25, 2005 4:23 PM


No, the target of selection is a gene, not an individual or group.

Posted by: Mike Earl at January 25, 2005 9:01 PM


As per the story, there is no selection.

Posted by: oj at January 25, 2005 10:43 PM

Ok, selfish gene theory. Even so, second question: how does selection pressure produce a stable equilibrium among three different strategies? I'm assuming that means at least two genes (one gene would yield two at most.) It's been a while since I took process control, but a stable equilibrium requires a negative feedback mechanism of some kind. But selection pressure is a type of positive feedback: over time, it multiplies favorable characteristics, and extinguishes unfavorable ones. In other words, I understand why reciprocators predominate, but I don't understand why cooperators and free-riders persist. Have at it, or at me, at your convenience.

Posted by: joe shropshire at January 25, 2005 11:16 PM


Because it doesn't matter which you choose, none are selected for or against.

Posted by: oj at January 25, 2005 11:21 PM


Well, I'd have to see more math to figure why the equilibrium is where it is for this case. I've heard of computer simulations in which the reciprocators eventually did squeeze everybody else out, but I'd be willing to believe the percentages given might be stable with some complications (eg, occasional misunderstandings).

I can give you a couple of simpler examples:

1. Gender. If males predominate, females have a reproductive advantage, and vice versa. At a 50/50 split, there's no advantage to either and that's the stable point.

2. Sickle-cell anemia. The gene for this is wonderful if you only have one copy and you're the only person with it; you're resistant to malaria and there's no penalty. As it spreads, it becomes rapidly more dangerous because the chance of mating with someone who is also a carrier increases; it will spread until the advantage of being malaria-resistant exactly cancels the risk of marrying another carrier and producing doomed offspring. This will depend on how common malaria is; as expected, we see that this gene is not uncommon (but not 100%, either), and it is basically absent from groups who had no real risk of malaria.
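Mike's sickle-cell story is the textbook case of heterozygote advantage, and the equilibrium can be sketched with standard population genetics. The fitness values below are illustrative assumptions, not measured numbers: with fitnesses w(AA) = 1 - s (malaria risk), w(AS) = 1, and w(SS) = 1 - t (anemia), the sickle allele settles at frequency q* = s / (s + t).

```python
# Heterozygote-advantage equilibrium under overdominance.
# s = fitness cost of malaria susceptibility for non-carriers (AA),
# t = fitness cost of sickle-cell disease for double carriers (SS).
# Both are illustrative assumptions for the sketch.

def sickle_equilibrium(s, t):
    """Equilibrium frequency of the sickle allele: q* = s / (s + t)."""
    return s / (s + t)

print(sickle_equilibrium(0.1, 0.8))  # modest malaria risk -> rare allele
print(sickle_equilibrium(0.3, 0.8))  # heavy malaria risk -> commoner allele
print(sickle_equilibrium(0.0, 0.8))  # no malaria -> the allele vanishes (0.0)
```

This matches Mike's point exactly: the allele rises with malaria risk but never fixes (since t > 0), and where malaria is absent (s = 0) it disappears.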

Posted by: Mike Earl at January 25, 2005 11:53 PM


50-50 isn't the historic equilibrium point

Posted by: oj at January 26, 2005 7:29 AM


I don't follow you.

Posted by: Mike Earl at January 26, 2005 10:45 AM

Men and women don't distribute equally in nature.

Posted by: oj at January 26, 2005 11:00 AM

Population, or births?

Posted by: Mike Earl at January 26, 2005 2:49 PM

births--the ratio is something like 105 male births per 100 female.

Posted by: oj at January 26, 2005 5:41 PM

OJ - The validity of Mike's point has absolutely nothing whatsoever to do with what the exact equilibrium percentage is. It could be 44-56 or whatever and his point about how the equilibrium emerges and is maintained still works exactly. The point is simply that beyond a certain proportion of males in the population, it's better to have female offspring because they'll be drenched in potential mates; they'll have their pick. Male offspring, in contrast, will be reproductively disadvantaged. In the aggregate, populations that have the right split (whether it's exactly 50-50 isn't the point) have reached the optimum and so further selective pressures cease.
Responding to another poster, notice the relevant mechanism is in fact negative feedback, not positive.
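Tom's negative-feedback point is Fisher's classic sex-ratio argument, and it can be sketched in a few lines. Since every child has exactly one mother and one father, total male and total female reproductive success are equal, so the expected payoff per son scales with how scarce males are (the specific ratios below are just illustrative inputs):

```python
# Fisher's sex-ratio argument: the relative reproductive value of a son
# versus a daughter depends on the population's current sex ratio.

def relative_value_of_son(frac_male):
    """Expected matings per male relative to per female."""
    frac_female = 1.0 - frac_male
    return frac_female / frac_male

for p in (0.3, 0.5, 0.7):
    print(p, relative_value_of_son(p))
# Males rare (0.3): a son is worth more than a daughter (~2.33).
# Even split (0.5): sons and daughters are worth the same (1.0).
# Males common (0.7): a son is worth less (~0.43).
```

Whenever one sex is over-represented, parents producing the rarer sex gain, pushing the ratio back; that is the negative feedback that pins the equilibrium (wherever exactly it sits) in place.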

Posted by: Tom at January 27, 2005 8:01 AM


But there aren't.

Posted by: oj at January 27, 2005 9:41 AM

Mike and Tom: thanks. I didn't think of sex distribution but should have. (And Tom's right: the split doesn't have to be 50/50 numerically, it's whatever equalizes the expected payout.) But: that's also an example of pure competition. What I don't see is where you get differential pressure, in the right directions, in a cooperative game where everybody shares the pot. For example: suppose we know in advance that the equilibrium is 20/20/60 and then suppose we have a population that drifts out, to say 21/21/58. We know the game's no longer optimal so payouts decrease. But the thing is, everybody's payout decreases. Where's the signal being generated to drive back to equilibrium? If anybody's got a worked example they can link to that would be great -- all I'm finding out there is general discussion. Thanks in advance.

Posted by: joe shropshire at January 27, 2005 12:29 PM


The payouts for a game may be equal, but not all games have the same set of participants; the assumption is more than one game per generation.

Posted by: Mike Earl at January 27, 2005 9:53 PM

Joe - Here's an example of how the equilibrium is maintained: Say cooperators become a very large fraction of the population. Then there are immense benefits to being a free rider, because the pot is so large. Free-riders then have a reproductive advantage compared to cooperators (they get all the benefits of the cooperation without the costs of contributing to it), so they gradually increase over time. As far as the opposite story, that's not so obvious. It's probably something like this: As the fraction of free riders becomes higher, the incentive to develop defensive measures against them rises, thus diminishing their payoff. E.g., if there's one mugger in the world we'll tolerate him, but if there are 500 million we'll raise an army and start fighting them. Somewhere in between the payout to being a mugger is just equal to that of being a cooperator.
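Tom's story can be turned into a toy frequency-dependent model. All the payoff numbers below are assumptions invented for the sketch, not anything from the study: free-riders earn more when cooperators are plentiful (big pot) and less as collective defences rise with free-rider frequency, and a simple replicator step grows whichever strategy currently pays better.

```python
# Toy frequency-dependent payoffs: f is the fraction of free-riders.
# All constants are illustrative assumptions.

def payoff_cooperator(f):
    """Payoff to a cooperator; taken as constant for the sketch."""
    return 1.0

def payoff_free_rider(f):
    pot_benefit = 2.0 * (1.0 - f)  # bigger pot when cooperators abound
    defence_cost = 2.0 * f         # defensive measures scale with f
    return pot_benefit - defence_cost

# Replicator dynamics: the strategy with the higher payoff grows.
f = 0.1
for _ in range(2000):
    gap = payoff_free_rider(f) - payoff_cooperator(f)
    f += 0.1 * gap * f * (1.0 - f)

print(round(f, 3))  # -> 0.25, where the two payoffs are exactly equal
```

The population settles at f = 0.25, the frequency where being a mugger pays exactly as well as being a cooperator: the frequency-dependence of the payoffs supplies the negative feedback Joe was asking about, even though the selection acts on individuals.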

Posted by: Tom at January 28, 2005 9:22 AM


Then why are there so many kleptocracies in the world?

Posted by: oj at January 28, 2005 10:05 AM

OJ - Because we're living in an environment radically different from the evolutionary one. The technology has changed rather faster than the genes. I.e., there's a difference between chasing giraffes on the African savannah and living in modern Havana or Pyongyang.

Posted by: Tom at January 28, 2005 8:32 PM

PS - Or Paris, or Washington...

Posted by: Tom at January 28, 2005 8:35 PM

Ah, yes, the 'we've magically broken free of evolution' argument....

Posted by: oj at January 28, 2005 8:44 PM

Not at all. Just that evolution happens rather more slowly than social changes like technology, etc.

Posted by: Tom at January 29, 2005 8:37 AM

Yes, intelligence has set us free from the unintelligent force that shaped all other life in the Universe.

Posted by: oj at January 29, 2005 9:44 AM