Saturday, March 24, 2012

Breaking Down Fearon 1995

This is the first in a semi-regular series on prominent applications of game theory to the study of international conflict.\(^1\) My goal is to clarify the main contribution of the piece. I'll do so first by offering a brief synopsis before going through the key claims in more detail. Along the way, I'll try to note some of the important implications and points of common confusion. Assuming that format makes sense to you all, I'll do the same with future articles.

We begin with what I consider to be the single most important contribution to the study of international conflict in the last 25 years: Fearon's Rationalist Explanations for War.

Brief Synopsis

There are three important points made in this article. (And, yes, all three are important.)

First, if we believe war is a means to an end, (edit: and if we limit our attention to unitary actors), rather than an end unto itself, then war is always ex post inefficient. That means that both sides (again, conceived of as unitary actors) would be better off if they agreed to divide up the contested good according to the expected outcome of war rather than fighting a costly war before ultimately arriving at the same outcome. Thus, if we are to explain why war occurs, we cannot simply focus on what it is that states disagree over. We must ask:

Why did they choose war as a particular means of resolving their disagreement, rather than negotiation?

Second, there are three logically consistent answers to the previous question. (Though note that Powell later proved that one is really a special case of another.) These are: information problems, commitment problems, and problems of issue indivisibility, though Fearon argues that issue indivisibility is probably the least important in practice. (Again, note that this assumes we can treat states as unitary actors. Many authors, including Fearon himself, have shown that war need not be ex post inefficient from the perspective of individual leaders if we consider its implications for their domestic political standing.)

Third, where information problems exist (as they likely do in any real world situation), we cannot expect ordinary diplomatic communication to resolve them. The problem is not merely that people do not know things that they need to know. It is that there are strong incentives to keep secrets secret.

The Details of the Argument

For the technical portion of these posts, my target audience is those who have some familiarity with basic game theory and formal notation, but limited experience reproducing proofs from contemporary articles. Absolute novices aren't likely to find much of use here. Nor are those who are already at the point where they can work through advanced models on their own. If you feel I'm pitching these posts at either too high or too low a level, given this aim, please do let me know.

I'm now going to try to walk you through modified proofs for Fearon's three claims. My approach will be similar to Fearon's in most respects, but in some cases, I'll proceed a little differently than he did in order to make the proof easier to follow.

Let's start with the first claim.

We have two states, A and B. They have opposing interests with respect to some set of issues, which might concern the distribution of territory or the nature of one side's regime type or some specific policy or policies one of the governments is pursuing. Let \(x\) be the share of this good that would be allocated to A under a negotiated agreement and let \(1-x\) be the share that would be allocated to B. Further, assume that the utility A derives from an outcome in which they receive \(x\) of the good is simply \(x\).

If the two fight a war, let us assume that each side expects A to acquire \(w\) of the good, leaving B with \(1-w\).\(^2\) Moreover, each side will incur some loss of utility associated with incurring the costs of war. Let \(c_A\) and \(c_B\) denote the amount of utility to be lost by A and B, respectively.

Thus, we have
\begin{array}{lcc}
 & u_A & u_B\\
\mbox{Negotiation} & x & 1-x\\
\mbox{War} & w-c_A & 1-w-c_B.\\
\end{array}

Any \(x \in [w-c_A, w+c_B]\) ensures that both A and B at least weakly prefer negotiation to war. This is what is meant by the claim that war is ex post inefficient.

Suppose, for example, that we assume the canonical ultimatum bargaining protocol. That is, suppose that A proposes a particular value of \(x\), and a negotiated agreement enters into force if and only if B agrees to A's proposal.

Provided that \(x \leq w+c_B\), it must be true that B will in fact accept A's terms, since B's utility for war would be no greater than that of a negotiated agreement.

To see this, note that \(u_B(\mbox{accept}) \geq u_B(\mbox{reject})\) is equivalent to \(1-x \geq 1 - w - c_B\), which, solving for \(x\), gives us \(x \leq w + c_B.\)

A thus knows that setting \(x=w+c_B\) will ensure a negotiated agreement. So too would any \(x\) less than \(w+c_B\), but of course A would not ask for less than the most she can get B to give up. That is, if A is looking for an agreement, A is obviously going to try to get the best possible agreement. And that means setting \(x=w+c_B\).

Note that if A does in fact choose to set \(x=w+c_B\), then since A's utility for a negotiated agreement is simply equal to the value of \(x\) that defines that agreement, A will receive precisely \(w+c_B\). If, in contrast, A chose some \(x> w+c_B\), A would provoke a war and thus receive \(w - c_A\). It doesn't matter exactly what value of \(x\) A chooses in this case, since war results when A demands too much, whether it's a little too much or a lot too much.

The question then becomes whether A prefers receiving \(x\) when \(x=w+c_B\) or prefers receiving her war payoff. Since \(w+c_B > w - c_A\) is equivalent to \(c_A + c_B > 0\), which must be true, it follows trivially that A has no incentive to set \(x>w+c_B\). Thus, war would never occur in equilibrium.

I have, of course, so far assumed that A and B agree upon the likely outcome of war, and that A knows how much utility B would suffer in war. These are wildly unrealistic assumptions. But the point is, we now see that the mere presence of a disagreement is not enough to explain why war would occur. Nor is the observation that the anarchic international system allows states to choose to go to war any time they please. Nor is a positive expected utility from war sufficient to explain why war would occur. Thus, with this ultra-simplistic model, Fearon demonstrated that extant accounts of war are unpersuasive, because they fail to explain why states do not negotiate.
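To make the complete-information logic concrete, here is a minimal numerical sketch of the ultimatum protocol. This is my own illustration, not anything from Fearon's article, and the parameter values are arbitrary; the point is just to verify that A's optimal demand is \(x = w + c_B\), that B accepts it, and that war therefore never occurs.

```python
# Complete-information ultimatum bargaining.
# B accepts a demand x iff 1 - x >= 1 - w - c_B, i.e., iff x <= w + c_B.
# Parameter values are arbitrary (chosen to be exact in binary floating point).
w, c_A, c_B = 0.5, 0.125, 0.25

def u_A(x):
    """A's payoff: x if B accepts the demand, the war payoff w - c_A otherwise."""
    return x if x <= w + c_B else w - c_A

# Search a fine grid of possible demands for A's best choice.
grid = [i / 10000 for i in range(10001)]
best_x = max(grid, key=u_A)

print(best_x)          # 0.75, i.e., exactly w + c_B
print(u_A(best_x))     # 0.75 > w - c_A = 0.375, so A strictly prefers to settle
```

Raising the demand any further flips B to war and drops A's payoff to \(w-c_A\), which is why the search lands exactly on \(w+c_B\).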

Let me be clear.

Fearon is not making a normative claim. He did not say that war should not occur. He did not claim that war will not occur. He did not even claim that war is irrational, as I have seen many people say that he did. Rather, he said that we cannot explain war without explaining why not negotiation. The reason is that war is ex post inefficient. But no one, including Fearon, believes that we actually live in a world that fits the assumptions of the preceding analysis. In the world you and I actually live in, there are indeed reasons why states would, at times, go to war. So please, please, please, stop saying that Fearon said war is irrational. That's not the point here.

To see why information problems could cause war, let's suppose that A does not know the size of \(c_B\). Substantively, this means that we are assuming that A does not know how much subjective loss of utility B will suffer in war. This would be the case if A did not know how resolved B was.

Note that I've been very careful to refer to \(c_B\) as the loss of utility associated with incurring the costs of war, not simply the costs of war. That's more than just splitting hairs. If B cares a lot about the issue at stake, B might feel that the loss of 10,000 lives is entirely acceptable. If B cares very little about the issue, 100 fatalities may be far too many. When we say that A does not know the size of \(c_B\), we do not simply mean that A cannot predict how many fatalities B will suffer, or what price B will pay financially. Maybe those things factor in as well. Maybe they don't. The point is, you don't need to believe that it is hard to forecast how many fatalities will be incurred by each side in order to believe that there is significant uncertainty over the size of \(c_B\).

I am not assuming any uncertainty on behalf of B regarding \(c_A\) because the size of \(c_A\) is irrelevant to B. When faced with the one and only decision B must make, all that matters is what B gets from accepting (\(1-x\)) and what B gets from rejecting (\(1-w-c_B\)). There are other bargaining protocols where this would not be the case. I'll discuss that more later. But for our current purposes, we can safely ignore the fact that both sides are likely to be uncertain about one another's resolve and focus simply on one-sided incomplete information.

Though Fearon provided a more general proof, for ease of exposition, I'm going to make a strong assumption about the precise nature of A's uncertainty. Specifically, I'm going to assume that A believes that \(c_B \sim U(0, \overline{c}_B]\). That is, I assume that A believes that \(c_B\) is equally likely to take on every positive value up to, and including, some maximum possible value that is denoted \(\overline{c}_B\).

There is no particular reason to believe that as an empirical matter states assign equal probability to all possible values. But, like I said, Fearon proved the same result more generally, so there's no reason to worry that I've distorted things by making this assumption. And the uniform distribution has some nice properties.

A no longer faces a simple decision of whether to ask for the best possible terms to which B will agree or to deliberately provoke a war. Rather, A now faces what is known as the risk-return tradeoff. The larger the value of \(x\) that A selects, the better off A is, in the event that B accepts. But, of course, B is less likely to accept.

Generically, B accepts A's terms if and only if \(u_B(\mbox{accept}) \geq u_B(\mbox{reject})\), which, again, is equivalent to \(1 - x \geq 1 - w - c_B\). Above, we solved this for \(x\). Let's instead now isolate \(c_B\) by rewriting the preceding inequality as \(c_B \geq x - w\).

A cannot know for sure whether this inequality holds for any given value of \(x\). But since A knows that \(c_B\) is distributed uniformly, A can readily infer the probability that this inequality holds. Specifically, A knows that B accepts with probability \(\displaystyle \frac{\overline{c}_B - x+w}{\overline{c}_B}\). This is simply the proportion of possible values of \(c_B\) that lie above \(x-w\). Given our assumption that \(c_B\) is uniformly distributed, the proportion of possible values that meet some criterion is the probability that \(c_B\) takes on a value meeting that criterion.

Thus, A faces the following optimization problem:
\max_x u_A(x) \quad \mbox{where} \quad u_A(x) = \int_{0}^{x-w} (w - c_A) \, \frac{1}{\overline{c}_B} \, dc_B + \int_{x-w}^{\overline{c}_B} x \, \frac{1}{\overline{c}_B} \, dc_B.

Again, having assumed that \(c_B\) is distributed uniformly makes this an easy problem to solve. We can rewrite \(u_A(x)\) as
\displaystyle \left(\frac{x-w}{\overline{c}_B}\right)(w-c_A) + \left(\frac{\overline{c}_B - x + w}{\overline{c}_B}\right)x.

That is, A's expected utility for any given \(x\) reflects both A's war payoff and the value of having x accepted, where each of those two possible outcomes is weighted by its probability of occurring. To figure out what \(x\) A will choose, assuming that A is behaving optimally, we look for the value of \(x\) that maximizes this function.

And to find the value of \(x\) that maximizes \(u_A(x)\), we set \(\frac{\partial u_A(x)}{\partial x}\) equal to 0 and solve for \(x\). This yields
\displaystyle \frac{w-c_A}{\overline{c}_B} + \frac{\overline{c}_B - 2x + w}{\overline{c}_B} = 0,
which simplifies to
2w-c_A + \overline{c}_B = 2x,
or simply \(x = w + \displaystyle \frac{\overline{c}_B-c_A}{2}\).

Intuitively, A demands more from B the better A expects to do in a war against B, and the higher the loss of utility B might suffer when incurring the costs of war, while demanding less as A's own loss of utility associated with incurring the costs of war increases.

The probability of war is simply the probability that B rejects A's terms. The probability that B rejects is simply the proportion of possible values of \(c_B\) that lie below \(x-w\), which is \(\displaystyle \frac{x-w}{\overline{c}_B}\). Plugging in the value of \(x\) we just identified, this simplifies to \(\displaystyle\frac{\overline{c}_B-c_A}{2\overline{c}_B}\).
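The risk-return tradeoff can be checked numerically. The following sketch is my own (not Fearon's), with arbitrary parameter values; it maximizes A's expected utility by brute force and confirms that the maximizer matches the closed form \(x = w + \frac{\overline{c}_B-c_A}{2}\), and that the implied probability of war matches \(\frac{\overline{c}_B-c_A}{2\overline{c}_B}\).

```python
# One-sided incomplete information: A believes c_B ~ U(0, cbar].
# Parameter values are arbitrary (chosen to be exact in binary floating point).
w, c_A, cbar = 0.5, 0.125, 0.5

def eu_A(x):
    """A's expected utility: the war payoff weighted by the rejection
    probability, plus x weighted by the acceptance probability."""
    p_reject = min(max((x - w) / cbar, 0.0), 1.0)  # share of c_B values below x - w
    return p_reject * (w - c_A) + (1 - p_reject) * x

# Brute-force maximization over a fine grid of demands.
grid = [i / 100000 for i in range(100001)]
x_star = max(grid, key=eu_A)
p_war = (x_star - w) / cbar

print(x_star)   # 0.6875, i.e., w + (cbar - c_A)/2
print(p_war)    # 0.375, i.e., (cbar - c_A)/(2*cbar)
```

Note that A's optimal demand accepts a strictly positive risk of war: shaving the demand further would buy safety at too high a price in expected terms.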

What does this tell us?

First, if A and B agree about the likely distributional outcome of war, and the sole source of uncertainty is B's resolve, then the likely outcome of war has no relationship with the equilibrium probability of war. That is, if war results from information problems, and those information problems solely concern resolve, then the distribution of material capabilities should have no impact on the probability of war. That will strike most students of IR as a strange result, and indeed it is easy to show that the distribution of material capabilities would matter a great deal if we make slightly different assumptions. I'll return to that in future posts. The point here isn't that capabilities don't matter. It's that our assumptions do. When we assume uncertainty about resolve, we are definitely telling a different story than when we assume uncertainty about the likely outcome of war. As I said, I'll talk more about that later.

The other thing to note is that A tolerates a lower risk of war with her optimal proposal when she finds the prospect of war less tolerable. That's quite intuitive. She also tolerates a lower risk of war when the largest possible value of \(c_B\) is itself not that large -- i.e., when A is not all that uncertain about B's resolve. This too should strike you as relatively intuitive.

One more point before proceeding -- I've seen a number of scholars claim that Fearon proved that observable factors don't influence the probability of war. Not so. He proved that if everything was observable, war would not occur. Therefore, observable factors cannot, in some sense, be considered the fundamental cause of war. But that does not mean that, in a world of incomplete information, observable factors play no role in determining the likelihood of war. If observable factors, such as the presence of natural resources, influence the subjective loss of utility actors would suffer when incurring the costs of war, then they absolutely would influence the equilibrium probability of war. The presence of natural resources would not, itself, constitute an explanation for why the actors did not negotiate. But that's a very different claim than saying the presence of natural resources wouldn't affect the probability of war in a world of incomplete information.\(^3\)

Okay, that's all I've got to say about information problems for now. Much more to come later in the series.

Let's turn to commitment problems. Specifically, I'm going to discuss preventive wars. Fearon also discusses first-strike (or offensive) advantages, but most of the subsequent literature on commitment problems has focused on preventive wars arising due to shifting power, so I'll stick to that. I will, however, use a slightly different formulation. One that will clarify some of the common points of confusion about this argument (or so I hope).

Suppose that we have an iterated game. In each period, A can either attack or attempt negotiations. If A attacks, there will be a war. If A attempts negotiations, A chooses what terms to propose to B, as above. If B accepts, the agreement enters into force, and remains in place until the end of the period. If B rejects, there will be a war, the same as if A had attacked outright. In neither case is the war treated as game-ending. Rather, wars establish a temporary distribution of the good, which will be revisited at the start of the next period. They also, critically, slow the growth of the defeated state. Note that we are not assuming here that preventive wars eliminate rivals. Some have considered models where that is indeed the case, and this has led people to mistakenly assume that the argument requires such to be true. As we'll see in a minute, it does not.\(^4\)

To keep things simple, I'll focus on the 2 period case. But everything I say here holds more generally.

Let \(\underline{w}_2\) be the share of the good that A will control during period 2 if A and B fight a war in period 2 without having fought a war in period 1. Let \(\overline{w}_2\) be the share of the good that A will control in period 2 if A and B fight a war in period 2 after having also fought a war in period 1. Finally, let \(w_1\) be the share of the good that A will control in period 1 if A and B fight a war in period 1.

Assume that \(\underline{w}_2 < \overline{w}_2 < w_1\). This implies that B is rising relative to A, but B's rise will be slowed by war in period 1. It will not, however, be forestalled. Even if A fights B in period 1, A will still be in a worse position in period 2. I'm treating that as inevitable. I do so not because I believe that wars cannot possibly forestall the rise of rivals, but because I want to clearly demonstrate that commitment problems can cause wars even if we assume that preventive wars do nothing more than delay the inevitable.

Both A and B discount the payoffs from period 2 at a common rate. More formally, the players receive payoffs that reflect the outcome of period 1 plus the outcome of period 2, where the latter is multiplied by \(\delta\), and where \(0 < \delta < 1\). The following table provides the payoffs for A and B for each of the four possible outcomes (they appear in the following order: negotiation in both periods; war in 1 followed by a negotiated agreement in 2; agreement in period 1 followed by war in 2; and war in both periods).

\begin{array}{cc}
u_A & u_B\\
x_1 +\delta x_2 & 1-x_1 + \delta(1-x_2)\\
w_1-c_A +\delta x_2 & 1-w_1-c_B + \delta(1-x_2)\\
x_1 +\delta(\underline{w}_2-c_A) & 1-x_1 + \delta(1-\underline{w}_2-c_B)\\
w_1-c_A +\delta(\overline{w}_2-c_A) & 1-w_1-c_B + \delta(1-\overline{w}_2-c_B)\\
\end{array}

Assume we have complete information. Not because that's particularly realistic, but because this allows us to highlight that commitment problems are themselves sufficient to cause war.\(^5\)

Start with period 2. It is straightforward to show that B accepts if and only if \(x_2\leq w_2+c_B\), where \(w_2\) here is used generically. It is similarly straightforward to establish that A always strictly prefers to set \(x_2\) precisely equal to \(w_2+c_B\).

This illustrates the first important result. Fearon was pretty clear about this, but I've seen people get this wrong, so it's worth stressing. There is no particular reason to expect war to be any more likely to occur after a shift in power has taken place. The model tells us that we always expect a negotiated agreement to be reached in period 2. If shifting power causes war, it causes preventive wars -- i.e., wars that occur in advance of anticipated shifts in power.

Now consider period 1.

Knowing that she will set \(x_2=w_2+c_B\) in period 2 either way -- where the relevant value of \(w_2\) is \(\underline{w}_2\) if period 1 was peaceful and \(\overline{w}_2\) if it was not -- A prefers reaching an agreement in period 1 (which by now you should anticipate would involve A setting \(x_1=w_1+c_B\)) to attacking if and only if

x_1 + \delta(\underline{w}_2+c_B) \geq w_1 - c_A + \delta(\overline{w}_2+c_B),

which, after substituting in the appropriate value for \(x_1\), becomes

w_1 + c_B + \delta(\underline{w}_2 + c_B) \geq w_1 - c_A + \delta(\overline{w}_2 + c_B).

We can rewrite this as

c_A + c_B \geq \delta(\overline{w}_2 - \underline{w}_2).

That is, if the total loss of utility associated with fighting a single war is greater than the discounted stakes of allowing B's growth to continue unchecked, then we'll get peace in both periods. But if the total loss of utility associated with incurring the costs of war is sufficiently low relative to the difference between what A would be able to get if A fights a war against B now and what A would receive in the future if A does nothing to slow B's growth, then a preventive war would occur in equilibrium.
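Here is a numerical sketch of the two-period logic, again my own illustration with arbitrary values rather than anything from the article. Since A extracts \(x_2 = w_2 + c_B\) in period 2 regardless of history, A simply compares the two payoff streams, and fights preventively exactly when the costs fall short of the discounted stakes of the shift.

```python
# Two-period commitment problem under complete information.
# w2_low:  A's period-2 war share if period 1 was peaceful (power shifts fully).
# w2_high: A's period-2 war share if A fought in period 1 (B's rise was slowed).
# Values are arbitrary and satisfy w2_low < w2_high < w1.
w1, w2_low, w2_high = 0.7, 0.3, 0.6
c_A, c_B, delta = 0.05, 0.05, 0.9

# In period 2, A extracts x2 = w2 + c_B whichever history occurred, so A compares:
u_peace = (w1 + c_B) + delta * (w2_low + c_B)    # settle in period 1, settle in 2
u_war = (w1 - c_A) + delta * (w2_high + c_B)     # fight in period 1, settle in 2

preventive_war = u_war > u_peace
condition = c_A + c_B < delta * (w2_high - w2_low)   # the inequality from the text

print(preventive_war, condition)   # True True: costs are small relative to the shift
```

With these numbers the war costs total 0.1 while the discounted stakes of the shift are 0.27, so A attacks even though, by construction, fighting merely slows B's rise rather than preventing it.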

Note that, as Fearon says, such wars are not driven by fears of being attacked in the future. The outcome a preventive war seeks to prevent is a peaceful outcome. The point here is that not all peaceful outcomes are created equal. What A seeks to prevent is having to settle for a peaceful agreement that allocates the disputed good according to an unfavorable distribution of power. By limiting the growth of B's power, A can ensure a brighter future for herself. Note also, as I said above, that preventive war occurs here despite the fact that, by assumption, preventive wars do not in fact prevent B from gaining in power relative to A. They merely limit the degree to which A's position erodes over time.

What does this mean?

Well, suppose you've heard a lot of rhetoric about China's rise. Maybe you've heard a lot of people point out, ever so adroitly, that China hasn't fired a shot against the armed forces of another state in several decades. That China has done everything it can to demonstrate that its intentions are peaceful. What does this tell us about the prospects of war between the US and China?

Perhaps nothing.

There are other reasons to be skeptical about a future war between the US and China. The two are deeply interdependent upon one another economically, for one thing. Moreover, many analysts believe that China cannot continue growing at this pace, and will never actually threaten to surpass the United States. I'm not saying you should expect a war between the two. All I'm saying is that it is really important to recognize that the mechanism by which shifts in power produce wars does not depend upon the rising state explicitly intending to use force to overthrow the dominant power. The question of whether China intends to fight the US or not isn't one that should interest us much.

Finally, we have issue indivisibility.

This one is really straightforward. If meaningful negotiations aren't possible, then we no longer need to answer the question of why war and not negotiation. Simple enough.

Fearon argues, however, that this is probably not much of a concern in practice. I happen to agree, but this post is already long enough. My goal for this post is to help people appreciate the nuance of Fearon's argument, not to argue about whether issues really are divisible. I may say more about issue indivisibility at some point in the future. I may not. But for now, I'm moving on.

Finally, let's consider Fearon's claim about the irrelevance of diplomatic statements in situations where information problems would otherwise create a risk of war.

Suppose that prior to A selecting a value of \(x\), B could announce whether he would or would not accept \(x=w + \displaystyle \frac{\overline{c}_B-c_A}{2}\). Note that this is the offer that B knows that A would otherwise make, given A's beliefs about the distribution of \(c_B\).

For diplomacy to reduce the risk of war, A would have to believe that B would be more likely to claim that he would reject such terms when he is in fact of a type that is willing to do so than when he is not.

If A were to believe such a thing, then A would select a smaller value of \(x\) after observing B claim to be resolved. Since B weakly prefers smaller values of \(x\) to larger ones, he has a clear incentive to signal that he is resolved.

In Fearon's formulation, there is nothing to dissuade less resolved types from mimicking the behavior of more resolved types. Thus, he concludes that diplomatic statements cannot prevent war.
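To see why a best-responding A cannot believe such claims, consider the following sketch. It is my own construction, not Fearon's formal model, and `d` is a hypothetical concession A would grant if she believed a claim of resolve. Whatever the size of that concession, every type of B gains at least weakly by claiming to be resolved, so the claim cannot carry information about \(c_B\).

```python
# Costless diplomacy: suppose A would shave her demand by d after hearing
# "I am resolved." d is a hypothetical concession; all values are arbitrary.
w, c_A, cbar, d = 0.5, 0.125, 0.5, 0.05

base_demand = w + (cbar - c_A) / 2   # the demand A makes absent any signal

def u_B(demand, c_B):
    """B's payoff facing a demand x: accept if 1 - x >= 1 - w - c_B, else fight."""
    return (1 - demand) if c_B >= demand - w else (1 - w - c_B)

# Compare each type's payoff when A believes the claim (demand - d) vs. not.
types = [i * cbar / 10 for i in range(1, 11)]   # a spread of possible resolve levels
gains = [u_B(base_demand - d, cB) - u_B(base_demand, cB) for cB in types]

print(all(g >= 0 for g in gains))   # True: every type weakly gains by claiming resolve
```

Since unresolved types gain just as surely as resolved ones, the message pools: hearing "I am resolved" leaves A's beliefs about \(c_B\) exactly where they started.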

As I've discussed before, there are various reasons why many have been unpersuaded by this claim. In short, there may be disincentives for mimicking the behavior of more resolved types, even if the physical act of sending a diplomatic message does not itself carry a direct cost.


In my view, this article revolutionized the field of international relations. Yet its impact has been less than it probably should have been, in part, I think, because a lot of people didn't understand exactly what Fearon said, what he didn't say, and what the implications are of what he did actually say. I'll have more to say about the implications of his argument in future posts, but I hope that this helps clarify what he did and did not say.

Please do let me know if you found this useful, and, more importantly, whether there are ways I could change the format of these posts to make them more useful.

1. This is also the first post in which I've tried Mathjax. From this point forward, any mathematical notation that appears on this blog should be less of an eyesore. I haven't yet figured out everything Mathjax can do, so please bear with me while I get up to speed. 

2. Fearon argues that we can think of war as a costly lottery, such that A acquires full control of the good with probability \(p\) and B acquires full control of the good with probability \(1-p\). In that case, A's utility for war, before we take the costs into account, would simply be \(p(1) + (1-p)(0) = p\) and B's would be \(p(0) + (1-p)(1) = 1-p\). We need not assume that wars are absolute, however. All we need assume is that there is a common expectation about the likely outcome of war. In this sense, \(w\) can be thought of as the reduced-form representation of the continuation value of some unstated subgame. 

3. See also this post, which is unfortunately a bit ugly, as I hadn't yet discovered Mathjax. 

4. See also this post by Scott Wolford, which makes the same point. 

5. See Wolford, Reiter and Carrubba 2011 for analysis of a model that simultaneously allows for the presence of both information and commitment problems. 


  1. Very good post.

    One thing that may or may not concern you: when I read the post in Google Reader the math didn't come through. It had the TeX code instead.

  2. Thanks, Kindred.

    That's too bad about the math. Didn't realize that. I'll see if there's something I can do about that.

  3. I won't pretend that I've followed every aspect of this.

    But re preventive war: what conceivable peaceful outcome, in the real world, would be so unfavorable to the US that it would decide to fight a preventive war vs. China to forestall that outcome? I'm very hard pressed to think of one.

    In other words, I can't think of a scenario in which "the total loss of utility" incurred in a US-China war would be less than the "loss of utility" involved in whatever consequences might flow from not fighting a war. (Esp. assuming, as I do and as seems eminently reasonable to assume given among other things the norm vs. conquest [e.g. Fazal 2007], that China is not bent on a classic expansionist course of conquest.)
    T.M. Fazal. State Death. Princeton U.P.

    (p.s. I'm planning to be off the computer for a day or so, thus it may take a while for me to pick up any response.)

  4. Hey, LFC.

    Given how costly a war between the two would be, I really don't see it as likely. I tried to make that clear. My point was that when we discuss preventive war, it may be something of a distraction to talk about whether the rising state wants a war with the declining state. Our theoretical understanding of preventive war suggests that's not the real point, and the preventive wars we've seen historically (starting with the classic example of Sparta and Athens) do not necessarily indicate that such factors play an important role.

  5. I fully understood the point about the intentions of the rising state not being necessarily an important factor (according to the model). I was also not suggesting you saw US-China war as likely. That wasn't what I was trying to get at.

    Unfortunately, due to my own limitations, I don't think I can express what I was trying to get at in a way that would conduce to fruitful discussion, so I'll leave it at that. (At least for now.)

  6. I apologize if I misunderstood your question.

    If it is true that norms have changed the way Fazal and Goldstein and others have argued, then the loss of utility associated with the costs of war would be much higher, and we'd expect fewer wars. You obviously don't need a fancy model to tell you that, but it is perhaps worth noting that such arguments are in no way incompatible with these types of models. I don't know if that speaks in any way to the point you were trying to make, but I just thought I'd mention it.

    The reason I do not focus on norms-based arguments is not that the theoretical models that I rely upon to make sense of war leave no room for them, but because I prefer not to assume the presence of factors that add no additional explanatory power. I have yet to see anyone tell me what I would expect to observe if such norms existed that I could not account for without invoking norms. Since we would expect to see a decreased incidence of conquest even in the absence of norms, I am reluctant to conclude that norms explain the decreased incidence of conquest.

  7. Thanks, this is quite interesting.

    I will look at the linked article (and perhaps try to address it in a post I am planning to put up in early April).


    I don't think Goldstein btw is all that keen on norms arguments. In 'Winning the War on War' his basic approach is (in my words, not his): "there are a whole bunch of explanatory factors and norms are one of them but causation is complicated here and we can't really untangle the causes. Peacekeeping seems esp. important though b/c among other things it has a rough temporal fit w/ the decline of war post-1990, so that's what I'm going to emphasize." Perhaps this is being unfair to him; the book, after all, is intended for a general audience, not mainly a scholarly one. In any event Joshua has not responded substantively to my review of his book on my blog (though he has read the review), so I don't know what he would say to this particular point.

    1. Good point. I read him as arguing that peacekeeping best accounts for the decline of civil wars but to account for the decline in interstate wars, we would need to focus more heavily on other factors, including norms. But you are right that he is careful to acknowledge how difficult it is to sort out the causal issues, and I should have been clearer about that in my comment.

      I should also say, with respect to my own views, that my reluctance to focus on factors that do not clearly add additional explanatory power should not be treated as an assertion that such factors do not exist. I do in fact believe that norms exist. I also believe that psychology and culture and lots of other factors that you'll rarely see me emphasize on this blog play a role in shaping events. I'm just not sure how large their role is, and I'm very much interested in figuring out just how much of what we observe we could explain if we stick with the admittedly unrealistic assumptions of the sparse models I discuss so often.

      Put differently, I think it is important to figure out what the baseline is, what we'd see if the world *was* really simple. And, currently, I don't think we've got a great understanding of that. So I focus my attention there. But that doesn't mean I actually believe we live in such a world.

  8. I just downloaded the Gartzke/Rohner article. In glancing at it -- haven't read it yet -- I see they cite Zacher 2001 and Fazal 2007 at n. 19. Unfortunately they cite these works for a line about the effect of US hegemony on deterring territorial expansion. Neither one has much if anything to do w the effect of US hegemony on deterring expansion. I can't say this gives me a lot of confidence in the rest of the article. However, I will read it (the parts that I can understand, that is).

    1. I think their argument stands on its own, independent of their characterization of Zacher or Fazal. But I see your point.

  9. Just like the article itself, this post is an oldie but a goodie. I return to it every semester when I prepare to teach this to my conflict class.

  10. Hi Phil! You should add that this applies only to UNITARY rational actors. As Fearon shows himself (1994) there can be private benefits to war for domestic actors which provide other rational explanations for war.

    1. Hey Hein! You're absolutely right. I'll clarify that.