Monday, November 12, 2012

If Institutions Matter a Little, They Matter a Lot

A strong correlation between cooperation and membership in international institutions is not enough to establish that international institutions cause cooperation.   If we're to claim that institutions matter, we need to at least identify mechanisms by which institutions might promote cooperation among actors who would otherwise be disinclined to cooperate with one another.  The mere fact that such mechanisms can be articulated does not itself tell us whether the correlation is causal, but it lends a certain measure of plausibility to causal interpretations that would otherwise be lacking.

Indeed, scholars have identified a variety of such mechanisms, from raising reputation costs to solving coordination problems to monitoring compliance and thereby overcoming information problems.  But even committed neo-liberals will generally grant that these arguments merely identify ways in which institutions provide a little push that can make the difference when (and only when) states almost meet the conditions under which cooperation would occur in an anarchic world.  And if that's all that institutions do, then they can't really matter all that much, can they?

Actually, yes.  

If you grant that international institutions matter at the margins, you've already conceded that they make a big difference to the overall level of cooperation we can expect to observe in the international system.  Look below the fold for an explanation.

As so much of this literature does, let's focus on the iterated prisoner's dilemma.  The standard story is as follows.  In each of an infinite number of stages, \(A\) and \(B\) must simultaneously decide whether to cooperate with one another or to defect.  If they both cooperate, they both receive \(3\).  If they both defect, they both receive \(2\).  If one cooperates while the other defects, the cooperator receives \(1\) while the defector receives \(4\).

In one-shot play, we expect mutual defection.  Neither \(A\) nor \(B\) has any incentive to cooperate if the other is expected to defect (since \(2>1\)) nor even if the other is expected to cooperate (since \(4>2\)).  But, as has long been recognized, mutual cooperation can be sustained in iterated play.
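The one-shot logic is easy to verify mechanically.  A minimal sketch in Python (the payoff table is from above; the dictionary and function names are mine):

```python
# Stage-game payoffs from the post: (my move, their move) -> my payoff.
# C = cooperate, D = defect.
PAYOFF = {
    ("C", "C"): 3,
    ("D", "D"): 2,
    ("C", "D"): 1,  # sucker's payoff
    ("D", "C"): 4,  # temptation payoff
}

def best_response(their_move):
    """Return the move that maximizes my stage payoff against their_move."""
    return max(("C", "D"), key=lambda my_move: PAYOFF[(my_move, their_move)])

# Defection is a best response to either move, so one-shot play yields (D, D).
print(best_response("C"))  # D, since 4 > 3
print(best_response("D"))  # D, since 2 > 1
```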

Suppose, for example, that each side adopts a Grim Trigger strategy, whereby they initially cooperate and in each subsequent round cooperate if and only if their opponent has never defected.

Applying a common discount factor of \(\delta \in (0,1)\) to payoffs one period in the future, the expected payoff of adopting a Grim Trigger strategy, conditional upon the other side likewise adopting a Grim Trigger strategy, is \(3 + 3\delta + 3\delta^2 + \ldots\), which is equivalent to \(\displaystyle \frac{3}{1-\delta}\).

Compare this to a single deviation.  If either player is going to cheat, they want to do so immediately.  There's nothing to be gained, in this setup, by waiting a while before defecting.  If they defect, they receive \(4\) in the current stage, but starting in the next period, they will forever after receive \(2\).  The net payoff then for this deviation from Grim Trigger is \(4 + \delta\displaystyle\frac{2}{1-\delta}\).

When is  \(\displaystyle \frac{3}{1-\delta}\) better than  \(4 + \delta\displaystyle\frac{2}{1-\delta}\)?  Whenever \(\delta \geq 0.5\).
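That cutoff can be recovered numerically.  A quick sketch (the function names are mine, not standard) compares the two payoff streams over a grid of discount factors:

```python
def grim_trigger_payoff(delta, reward=3.0):
    """Discounted value of mutual cooperation forever: reward / (1 - delta)."""
    return reward / (1.0 - delta)

def deviation_payoff(delta, temptation=4.0, punishment=2.0):
    """Defect now for the temptation payoff, then mutual defection forever."""
    return temptation + delta * punishment / (1.0 - delta)

# Cooperation is sustainable wherever the Grim Trigger stream weakly exceeds
# the deviation stream; scanning delta recovers the delta >= 0.5 threshold.
threshold = min(d / 1000 for d in range(1, 1000)
                if grim_trigger_payoff(d / 1000) >= deviation_payoff(d / 1000))
print(threshold)  # 0.5
```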

That's the standard take.  A long shadow of the future can induce cooperation.  Nothing new here.

Now suppose that \(A\) and \(B\) belong to some international institution, and that membership in this institution somehow raises the benefit of cooperating while creating a cost for defecting.  But let's assume that these effects are very modest -- say an additional \(0.05\) for cooperation and a loss of \(0.05\) for defecting.

The standard PD payoffs have no inherent meaning; they are not measured on any natural scale.  We treat them as cardinal for the purpose of computing expected payoffs, but the particular numbers \(4\), \(3\), \(2\), and \(1\) are essentially arbitrary.  The assumption that an actor receives a benefit of \(0.05\) whenever they cooperate, and incurs a cost of \(0.05\) whenever they defect, is therefore best interpreted relative to the other payoffs: as reputation effects that are \(5\%\) as large in magnitude as the difference between mutual cooperation and mutual defection, or between the temptation payoff and mutual cooperation, and so on.

Adopting a Grim Trigger strategy against an opponent who is likewise adopting a Grim Trigger strategy now yields a payoff of \(\displaystyle \frac{3.05}{1-\delta}\).  A single-period deviation from Grim Trigger now brings a payoff of \(3.95 + \delta\displaystyle\frac{1.95}{1-\delta}\).  The former is greater than the latter so long as \(\delta\geq 0.45\).  Thus, when \(0.45 \leq \delta < 0.5\), Grim Trigger can sustain mutual cooperation between states that would have been disinclined to cooperate in the absence of the institution.
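More generally, solving \(\frac{R}{1-\delta} = T + \delta\frac{P}{1-\delta}\) for \(\delta\) gives a critical discount factor of \(\frac{T-R}{T-P}\), where \(R\), \(T\), and \(P\) are the reward, temptation, and punishment payoffs.  A quick check in Python (the \(0.05\) adjustment, labeled eps below, is the assumption from above):

```python
def critical_delta(reward, temptation, punishment):
    """Smallest discount factor at which Grim Trigger deters a one-shot
    deviation: solve reward/(1-d) = temptation + d*punishment/(1-d) for d."""
    return (temptation - reward) / (temptation - punishment)

eps = 0.05  # assumed institutional bonus for cooperating / cost of defecting
print(critical_delta(3.0, 4.0, 2.0))  # 0.5 without the institution
print(critical_delta(3.0 + eps, 4.0 - eps, 2.0 - eps))  # roughly 0.45 with it
```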

How important is that difference?  Well, it really depends on what you assume about the distribution of \(\delta\).  Less technically, it depends on how many states are moderately patient.  Among states that care a great deal about the future, cooperation is likely to prevail even in the absence of institutions and norms.  Among states that are exceptionally impatient, institutions and norms aren't going to be enough to overcome the short-run temptation to defect.  But I don't think many of us think we live in a world of extremes.

Suppose we assume that \(\delta\) is normally distributed, say with a mean of \(0.5\) and a standard deviation of \(0.1\), implying that roughly \(68\%\) of states will value future payoffs between \(40\%\) and \(60\%\) as much as current ones, and roughly \(95\%\) of states will value future payoffs between \(30\%\) and \(70\%\) as much as current ones.  Those numbers are, of course, arbitrary.  But they are useful for comparison.  In this case, cooperation that would otherwise not occur will be observed in about \(19\%\) of cases.
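Under those assumed distributional parameters, the \(19\%\) figure is just the normal probability mass between the two thresholds, which Python's standard library can compute directly:

```python
from statistics import NormalDist

# Assumed distribution of discount factors across states (from the post).
delta_dist = NormalDist(mu=0.5, sigma=0.1)

# Mass of states with 0.45 <= delta < 0.5: those that cooperate only
# because of the institution's 0.05 nudge to the payoffs.
share = delta_dist.cdf(0.5) - delta_dist.cdf(0.45)
print(round(share, 3))  # roughly 0.191, i.e. about 19% of cases
```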

All of this is very stylized, of course.  We have no strong theoretical reason for setting the payoffs in the PD to \(4\), \(3\), \(2\), and \(1\).  Similarly, one might argue that even \(0.05\) is too big an adjustment.  And we really don't have any way of knowing exactly how much states tend to value the future, or how much variation there is in that regard.

But, even with those caveats, there's an important takeaway point here.  Even relatively small differences in short-term payoffs can have a fairly significant impact on behavior, because small differences add up over time.  If you're willing to grant that international institutions can make cooperation just a little bit more attractive, and defection just a little bit less attractive, then you've implicitly granted that international institutions can make the difference between whether states cooperate or not in a fairly large number of cases, at least provided that you think that most states are moderately concerned about the future.

And the same exact argument can be made with respect to norms.


  1. Thanks, Phil. I like this - simple and elegant. Bookmarked for future undergrad course reading assignment. As homework, could make them work out what happens if players use a TFT strategy.

    About the payoffs, I like to tell students they could use emoticons if they so choose, e.g. "most preferred = :), least preferred = :(, etc" to drive home the point that these are representations; numbers are just more convenient.

    1. Glad you got something out of it, TT.

      Good point about emoticons, at least for ordinal payoffs. :)