A Partial Solution to Continuous Blotto
Kostyantyn Mazur*

September 15, 2017
Abstract
This paper analyzes the structure of mixed-strategy equilibria for Colonel Blotto games, where the outcome on each battlefield is a polynomial function of the difference between the two players' allocations. This paper severely reduces the set of strategies that needs to be searched to find a Nash equilibrium. It finds that there exists a Nash equilibrium where both players' mixed strategies are discrete distributions, and it places an upper bound on the number of points in the supports of these discrete distributions.

* Kostyantyn Mazur, Tandon School of Engineering, New York University, 6 Metrotech Center, Brooklyn, NY; email: [email protected]

I would like to acknowledge Dr. Laurent Mathevet (Department of Economics, New York University) and Dr. Edward Miller (Department of Mathematics, Tandon School of Engineering, New York University) for their inputs for this paper.

Introduction
Colonel Blotto is a two-player game whose prototypical version has two colonels fighting a battle with each other on multiple fronts. In an economics context, the colonels might be, for instance, two firms competing simultaneously in multiple geographical areas, or two political candidates seeking to "win" as many places as possible. Each colonel has only a fixed number of troops (or resources) to distribute among the fronts, and whichever colonel assigns more troops on a front wins that front. Both colonels want to win as many fronts as possible. If one colonel knew how the other would arrange the troops, then this colonel would simply arrange to win all but one front by just one soldier. Thus, each colonel must follow a mixed strategy; that is, deciding at random how to arrange the troops. Of course, it matters from which distribution that randomly determined pure strategy is drawn.

This paper studies Blotto on two battlefields where the outcome on a battlefield depends continuously on the advantage in allocations on that battlefield, a topic that has received scant attention to date. The motivation for this is that outnumbering one's opponent by one soldier does not guarantee victory. Having one more soldier is an advantage, but the battle might still go the other way. The bigger the advantage, the more likely the bigger army is to win. Thus, it is reasonable to have the probability of winning be a continuous function of the two numbers of troops (instead of the function used in the classical Blotto game: 1 for the player with the bigger allocation, 0 for the other player). The goal is to maximize the expected value of the number of battlefields won. As expectation is a linear function, the probabilities of winning each battlefield can be re-interpreted as expected numbers of battlefields won. For example, a 57% chance of winning would mean an expected value of 0.57 battlefields. This interpretation is all the more useful because it can also accommodate varying degrees of victory.
After all, not all battles are clear wins for one side or the other. It is also reasonable to subtract 0.5 from the function, so that a positive result on a battlefield represents an expected victory (the bigger the positive result, the bigger or more likely the victory), and a negative result, correspondingly, represents an expected defeat. Thus, an expected result of 0.57 battlefields becomes an expected result of 0.07 battlefields better than equality on that battlefield. This subtraction in effect turns a constant-sum game into a zero-sum game. This expected result can be written as a function of the difference of allocations on the battlefield, and this function can be called the outcome function.

If the two players had equal resources, then, at least if the outcome function were an odd function, this game would have a simple solution: both players can play any strategy at all, and the result will be zero. Any gain on one battlefield would be compensated by a loss of equal size on the other battlefield. Therefore, it is necessary to introduce some unfairness into the game, either by allowing the outcome function not to be odd, or by giving one player an advantage in resources. The model of this paper accommodates both approaches.

It is already known that a game like this has a Nash equilibrium; Glicksberg's theorem [7] guarantees it. This paper shows that if the outcome function is polynomial, then not only is there a Nash equilibrium, but there is one where both players choose from only a small number of pure strategies. Actually finding an equilibrium exactly is a difficult matter.

This type of game can model two firms simultaneously competing on two markets, where one firm happens to be larger.
It can also model two political parties competing in two districts (by spending campaign money) if (for instance) the number of seats for a party in a district is proportional to the vote percentage for that party in that district, something that classical win/loss Blotto cannot do. It can model winner-take-all rules also, because there is no guarantee of winning a district just by outspending the opponent. It is true that the exact outcome function is unlikely to be a polynomial, but it can be approximated by a polynomial fit through several points. The more points the polynomial goes through, the closer the result is to the Nash equilibrium, but the harder it is to evaluate the game.

In a zero-sum game, a Nash equilibrium represents perfect strategies for both players, in the sense that each player's strategy (if played as in the equilibrium) guarantees that player at least the equilibrium result. Those two combined guarantees rule out the possibility of either player seeking a better result, since the opponent's equilibrium strategy renders that impossible regardless of what the player may do. To the extent that a (zero-sum) Colonel Blotto game reflects an economic competition correctly, a Nash equilibrium to it gives perfect strategies for both players. Normally, with a continuous range of options, a Nash equilibrium looks like a continuous distribution, and all such distributions together constitute an infinite-dimensional space. This paper shows that a specific class of continuous Blotto games are exceptions to this, by having Nash equilibria that can be obtained by searching only a finite-dimensional space of strategies, each of which has a simple form.

The general approach that this paper uses is a series of reductions and transformations of the set of mixed strategies, which are represented as probability-density functions.
First, any pure strategy of either player is represented by just a single number, x for Player 1 and y for Player 2. (Let Player 1 be the player with a resource advantage, if there is one.) The payoff matrix (call it E) based on this number has infinitely many rows and columns (Player 1 choosing the row and Player 2 choosing the column). However, of those infinitely many columns (or rows), only N + 1 (where N is the degree of the outcome function) are linearly independent, essentially because the columns are polynomials of degree at most N in x, Player 1's choice of pure strategy. This means that E maps the space of all functions to a finite-dimensional space. So, there is only an (N + 1)-dimensional space of functions g (probability-density functions for Player 2's strategies) on which all the Eg are distinct: for any other function g̃, there is a g in that space with Eg̃ = Eg. But if f is the probability-density function of Player 1's mixed strategy (and g is the same for Player 2), then the expected result is f^T Eg. If Eg = Eg̃, then f^T Eg = f^T Eg̃ regardless of what f is, so every strategy for Player 2 is payoff equivalent to one of the g's in the (N + 1)-dimensional space. This works just as well if the two players are reversed.

In other words, there is an (N + 1)-dimensional space of equivalence classes of strategies (for either player), where any two strategies in the same equivalence class are payoff equivalent. Each of these equivalence classes can be described as an (N + 1)-dimensional vector. That is true for both players. What is more, the function that reduces the probability-density function of a strategy to the vector of the strategy's equivalence class can be made to be linear. Therefore, it preserves convex combinations. Thus, the vectors of all the equivalence classes are convex combinations of the vectors of the equivalence classes of the pure strategies. The equivalence classes of the pure strategies form a curve in (N + 1)-dimensional space.
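As a quick numerical illustration of this rank claim (my own sketch, not part of the paper), one can discretize both players' pure strategies, build the resulting finite payoff matrix, and check that its rank never exceeds N + 1:

```python
# Numerical illustration (not from the paper): for a polynomial outcome
# function r of degree N, the discretized payoff matrix
# E[i][j] = r(x_i - y_j) + r(a - x_i + y_j) has rank at most N + 1.

def payoff_matrix(r, n, a, m):
    xs = [i * (n + a) / (m - 1) for i in range(m)]
    ys = [j * n / (m - 1) for j in range(m)]
    return [[r(x - y) + r(a - x + y) for y in ys] for x in xs]

def matrix_rank(M, tol=1e-9):
    # Gaussian elimination with partial pivoting; counts the pivots.
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    rank = r0 = 0
    for c in range(cols):
        if r0 == rows:
            break
        piv = max(range(r0, rows), key=lambda i: abs(M[i][c]))
        if abs(M[piv][c]) < tol:
            continue
        M[r0], M[piv] = M[piv], M[r0]
        for i in range(r0 + 1, rows):
            factor = M[i][c] / M[r0][c]
            for jc in range(c, cols):
                M[i][jc] -= factor * M[r0][jc]
        r0 += 1
        rank += 1
    return rank

r = lambda z: z ** 3 - 2 * z                 # degree N = 3
E = payoff_matrix(r, n=1.0, a=0.25, m=12)    # a 12-by-12 sample of E
print(matrix_rank(E))                        # at most N + 1 = 4
```

Here the rank actually comes out below the bound, since for this odd-degree r the cubic terms cancel between the two battlefields.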
However, Carathéodory's Theorem says that each convex combination of points in (N + 1)-dimensional space can be described as a convex combination of at most N + 2 of the points. This means that every equivalence class that contains any strategies contains one that is a convex combination of only finitely many (at most N + 2) pure strategies, or, equivalently, contains a discrete distribution with at most N + 2 components. Wherever any Nash-equilibrium strategy may be, its equivalence class contains a strategy with this type of discrete distribution, which is payoff equivalent to the Nash-equilibrium strategy (and thus is a Nash-equilibrium strategy too). (Here, T stands for transpose, and the multiplication f^T Eg should be interpreted as if it were a usual matrix multiplication, with an integral replacing the summation. A convex combination is a weighted average with nonnegative weights; here, the weights are given by the probability-density function of the mixed strategy.)

Finding Nash equilibria for Blotto games is the topic of many papers, each covering a specific variation of the conditions; the problem was first proposed by Borel in [5] (translated into English as [6]). Things that have been varied include the aim of the players (maximizing expected value or the probability of winning a majority), whether the allocations can be varied discretely or continuously, the number of battlefields fought on, the numbers of troops each colonel has access to (and whether one colonel has more troops than the other), the relative values of the fronts (where the aim is to maximize the total value won), whether the game is zero-sum or not, and even the nature of the resource constraints. These include Weinstein's [20], Hart's [10], Gross and Wagner's [9], Roberson and Kvasov's [17], Hortala-Vallve and Llorente-Saguer's [11], Schwartz, Loiseau, and Sastry's [18], Thomas's [19], Kovenock and Roberson's [12], and Macdonell and Mastronardi's [13].
Macdonell and Mastronardi's paper has a similar result to this paper, in that it also has two battlefields with unequal resources, and a set of Nash equilibria that always includes a discrete distribution.

This paper explores yet another dimension of Blotto, where the result depends on the degree of victory. This form of Blotto has also been explored, although not as much as traditional win/loss Blotto. Unfortunately, those papers only search for pure strategies, and if there is none, the only conclusion that can be drawn from using their methods is exactly that: "No pure-strategy Nash equilibrium exists". One example is Blackett's [4], which only has a necessary condition for the existence of a pure-strategy Nash equilibrium. Blackett allows for any outcome function. Golman and Page's [8] has a one-parameter family of outcome functions, and allows many battlefields and dependencies between battlefield outcomes, but Golman and Page find that in most of the cases they study, pure-strategy equilibria do not exist. Osório's [16] does succeed in finding a pure-strategy Nash equilibrium, but that paper restricts itself to a narrow class of outcome functions. This paper is a complement to the pure-strategy searches, in that it serves as a bound on how "impure" the strategies that need to be considered are. Mixed-strategy equilibria for this form of Blotto have been explored in [15], but that paper limits its outcome function to a finite-parameter set. (A component of a discrete-distribution mixed strategy is one of the pure strategies that is played with positive probability according to the mixed strategy.)

However, a simpler version of the method of Beale and Heselden [1], which is also a simpler version of the algorithm in Behnezhad, Dehghani, Derakhshan, HajiAghayi, and Seddighin [2], is applicable here.
Player 1 takes an integer L (the larger L, the better the approximation, but the longer it will take to calculate), and treats both players' possible allocations as if they were required to be integer multiples of 1/L of that player's resources. Then, Player 1 considers the probabilities of playing each strategy as variables, adds an extra variable for the payoff, and sets up inequalities to reflect that the probabilities must be nonnegative and sum to 1, and that the expected payoff against any of Player 2's strategies be no lower than the payoff variable. Player 1 seeks to maximize the payoff variable subject to these inequalities, and uses a linear programming model to do this. This has the drawback of possibly producing increasingly complicated equilibrium strategies as L goes up, which are proven here not to be necessary in the case of a polynomial outcome function.

The knowledge that there exist Nash-equilibrium strategies that are discrete distributions, with an upper bound on the number of components, makes it easier to search for a Nash equilibrium, analytically or numerically. One possible approach for this is to group each player's possible mixed strategies by the number of components, and by which "edge" strategies, if any, were used as components. Then, in each group, the parameters that the player has control of are: which pure strategies are the components, and all but one of the probabilities of playing the components. For each pair of groups (one group for each player), the critical points of the payoff can be found by setting the partial derivatives of the payoff with respect to all parameters (for both players) to zero and solving the resulting system of equations. (The outcome function can be written as P(x̃ − ỹ), where x̃ and ỹ are the two players' allocations, and ∂²[P(x̃ − ỹ)]/∂x̃² and ∂²[P(x̃ − ỹ)]/∂ỹ² both equal P''(x̃ − ỹ), so the concavity of the outcome function is the same with respect to both players' allocations.)
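The linear program above can be handed to any LP solver. As a dependency-free sketch (my substitution for the LP step, not the paper's method), fictitious play on the discretized matrix game approximates the same equilibrium:

```python
# Sketch of approximating the discretized game (my substitution: the
# paper's LP step is replaced by fictitious play, which needs no solver
# library; the sample r, n, a, L below are arbitrary choices).

def discretized_game(r, n, a, L):
    # Both players restricted to integer multiples of 1/L of their resources.
    xs = [i * (n + a) / L for i in range(L + 1)]
    ys = [j * n / L for j in range(L + 1)]
    return [[r(x - y) + r(a - x + y) for y in ys] for x in xs]

def fictitious_play(E, rounds=20000):
    # Each round, both players best-respond to the opponent's history.
    m, k = len(E), len(E[0])
    row_counts, col_counts = [0] * m, [0] * k
    row_payoff = [0.0] * m   # cumulative payoff of each row vs. column history
    col_payoff = [0.0] * k   # cumulative payoff of each column vs. row history
    i = j = 0
    for _ in range(rounds):
        row_counts[i] += 1
        col_counts[j] += 1
        for jj in range(k):
            col_payoff[jj] += E[i][jj]
        for ii in range(m):
            row_payoff[ii] += E[ii][j]
        i = max(range(m), key=lambda ii: row_payoff[ii])   # maximizer
        j = min(range(k), key=lambda jj: col_payoff[jj])   # minimizer
    return ([c / rounds for c in row_counts], [c / rounds for c in col_counts])

r = lambda z: -(z - 0.1) ** 2     # a sample polynomial outcome function
E = discretized_game(r, n=1.0, a=0.2, L=10)
p, q = fictitious_play(E)
value = sum(p[i] * E[i][j] * q[j] for i in range(len(p)) for j in range(len(q)))
print(round(value, 3))
```

The empirical frequencies p and q converge to an equilibrium of the discretized game, exhibiting exactly the drawback noted above: their supports can grow with L.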
(An edge strategy is a pure strategy that is at a boundary of the pure-strategy space; here, this would mean placing all resources on one battlefield or the other. The probability of playing the remaining component is determined by the fact that the probabilities have a sum of 1, and so it is not a parameter.)
The Nash equilibrium is one of the critical points, and a point can be checked for being a Nash equilibrium by checking that neither player can improve the result by changing to a pure strategy. For efficiency, the critical points from the lowest-parameter groups should be checked first, before the systems for the higher-parameter groups get solved, as these are the groups that yield the simplest systems of equations.

There are also ways to make this algorithm faster, by reducing the upper bound on the number of pure-strategy components that could be required. The extensions concern themselves primarily with this. One of the extensions takes advantage of the symmetry between the two battlefields, while another one uses the theorem proven in [14] to take advantage of the continuity of the pure strategies. Together, these extensions make a brute-force approach (described in the extensions) feasible in practice for low degrees of the polynomial used as the outcome function.
A two-field continuous Blotto game is defined by an ordered triple (n, a, r), where n is Player 2's resources, a is Player 1's advantage in resources, and r is the outcome function. That is, Player 1's resources are n + a, and Player 1 chooses a number x̃ ∈ [0, n + a]. Player 2 chooses a number ỹ ∈ [0, n]. These numbers are called allocations to battlefield 1. The allocation to battlefield 2 is n + a − x̃ for Player 1 and n − ỹ for Player 2. Thus, each player's allocations sum to that player's resources.

On each battlefield, the outcome is r(z), where z is the difference in allocations between Player 1 and Player 2 on that battlefield. That is, z = x̃ − ỹ on battlefield 1 and z = a − x̃ + ỹ on battlefield 2.

Player 1's payoff is r(x̃ − ỹ) + r(a − x̃ + ỹ), and correspondingly, Player 2's payoff is the opposite, which is −r(x̃ − ỹ) − r(a − x̃ + ỹ); this makes continuous Blotto a zero-sum game. Player 1 seeks to maximize the expected value of Player 1's payoff, while Player 2 seeks to minimize the expected value of Player 1's payoff.

The outcome function represents the dependence of the degree of victory (or the probability of victory) on a battlefield on the advantage in resources on that battlefield, where a battlefield could (for instance) mean a district in an election, or one of two markets over which two firms simultaneously compete.

(Two remarks on the group-based search described earlier: if both players' groups have zero parameters (like the group of one-component strategies with an "all-on-battlefield-2" component), then every point is critical. And there are no special cases for boundary extrema (from the point of view of one player), because these are already accounted for by being in a different group, either one with fewer components, or one using more "edge" strategies.)
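A direct transcription of these definitions (with a sample r, n, a, and allocations of my choosing):

```python
# Direct transcription (illustrative; the specific r, n, a and the
# allocations are my sample choices) of the payoff definitions above.

def payoffs(r, n, a, x_t, y_t):
    """Both players' payoffs; x_t in [0, n+a] and y_t in [0, n] are the
    allocations to battlefield 1."""
    p1 = r(x_t - y_t) + r(a - x_t + y_t)
    return p1, -p1        # Player 2's payoff is the opposite: zero-sum

r = lambda z: 0.5 * z - z ** 3    # a sample polynomial outcome function
n, a = 1.0, 0.3
p1, p2 = payoffs(r, n, a, x_t=0.8, y_t=0.4)
print(p1 + p2)    # 0.0 -- zero-sum by construction

# Swapping both players' battlefield allocations leaves the payoff
# unchanged, since the two battlefield outcomes merely trade places.
q1, q2 = payoffs(r, n, a, n + a - 0.8, n - 0.4)
```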
Theorem.
In any continuous Blotto game where r is a polynomial, there exists a (mixed-strategy) Nash equilibrium in which both players' strategies are distributions with support on at most N + 2 points, where N is the degree of r.

It was already known that this game would have a Nash equilibrium; the new result is that there exists one with this form. This theorem is proven by first establishing that, if r is a polynomial, then the payoff matrix has rank not greater than N + 1. That allows a change of coordinates that leaves both players with only an at most (N + 1)-dimensional strategy space, each point in which corresponds to many mixed strategies that are exactly equivalent to each other. In the new coordinates, the mixed strategies still form the convex hull of the pure strategies, and that makes each mixed strategy have a representation as a convex combination of only at most N + 2 pure strategies. This equality in the new coordinates corresponds to equivalence as mixed strategies. Wherever a Nash equilibrium may be, both players can choose an equivalent strategy with only finitely many components.

The reduction of the strategy space to discrete distributions will be illustrated with a relatively simple example, r(z) = −z². It should be noted that when actually applying this result, these steps are not needed; they only exist to show that, in fact, there is a Nash equilibrium with a discrete distribution.

Symmetrization of the Battlefields

The players' possible strategies can equally well be written in terms of deviations from the even-split strategy. That is, instead of choosing x̃, Player 1 can be considered as choosing x, defined as x̃ − (n + a)/2, and similarly, Player 2 can be considered as choosing y, defined as ỹ − n/2.
From the fact that x̃ ∈ [0, n + a], it follows that x ∈ [−(n + a)/2, (n + a)/2], and from the fact that ỹ ∈ [0, n], it follows that y ∈ [−n/2, n/2].

Player 1's payoff, denoted by R(x, y), is r(x − y + a/2) + r(−x + y + a/2). The proof of this is in Appendix 1, as the proof of Lemma 1. With r(z) = −z², R(x, y) = −2(x − y)² − a²/2 (as demonstrated in Calculation 1 in Appendix 2).

The goal of this change of variables is simply to treat the two battlefields symmetrically. Another useful property is that either player exchanging allocations to the two battlefields results in that player's variable (x or y) simply changing sign when written in this form (because, since Player 1's allocations add to n + a, subtracting (n + a)/2 from each allocation leaves 0 for Player 1's new sum, and similarly, since Player 2's allocations add to n, subtracting n/2 from each allocation leaves 0 for Player 2's new sum).

Player 1's resource advantage gives Player 1 two benefits, namely the ability to have higher allocations and a wider range of strategies. This remains true after the change in coordinates, except that the advantages are decoupled from each other: the ability to have higher allocations is now in the form of the function R, while the wider range of strategies still manifests itself as a wider interval.

The addition of a constant to every payoff does not change which strategy pairs are Nash equilibria, and neither does multiplying every payoff by a positive constant. As such, if the payoff were −(x − y)² instead of −2(x − y)² − a²/2, that would not change the location of the Nash equilibria. Since the function −(x − y)² is easier to manipulate, it shall be used as the example for R(x, y) in the rest of this paper.
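The algebra above can be spot-checked numerically; this sketch (mine, with an arbitrary value of a) verifies the identity R(x, y) = −2(x − y)² − a²/2 at random points:

```python
# Numerical spot-check (my sketch; a is arbitrary) of the identity above:
# with r(z) = -z**2, the payoff R(x, y) = r(x - y + a/2) + r(-x + y + a/2)
# collapses to -2*(x - y)**2 - a**2/2.

import random

a = 0.7
r = lambda z: -z ** 2
R = lambda x, y: r(x - y + a / 2) + r(-x + y + a / 2)

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    assert abs(R(x, y) - (-2 * (x - y) ** 2 - a ** 2 / 2)) < 1e-12
print("identity verified")
```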
Let Player 1 and Player 2 both play mixed strategies. Player 1's mixed strategies are distributions of x, with support in [−(n+a)/2, (n+a)/2]. Let f(x) be the probability-density function of Player 1's mixed strategy. Similarly, Player 2's mixed strategies are distributions of y with support in [−n/2, n/2]. Let g(y) be the probability-density function of Player 2's mixed strategy.

For functions f_a and f_b with domain [−(n+a)/2, (n+a)/2], let f_a · f_b = ∫_{−(n+a)/2}^{(n+a)/2} f_a(x) f_b(x) dx. For functions g_a and g_b with domain [−n/2, n/2], let g_a · g_b = ∫_{−n/2}^{n/2} g_a(y) g_b(y) dy.

Let E be the linear transformation, from the functions with domain [−n/2, n/2] to the functions with domain [−(n+a)/2, (n+a)/2], such that (Eg)(x) = ∫_{−n/2}^{n/2} R(x, y) g(y) dy.

Player 1's expected payoff in this game is the weighted average of all the possible results (with f(x) and g(y) being the weights), which is

∫_{−(n+a)/2}^{(n+a)/2} ∫_{−n/2}^{n/2} R(x, y) f(x) g(y) dy dx,

and this can also be written as f · (Eg). (This is proven in Lemma 2 in Appendix 1.) If R(x, y) = −(x − y)², then

f · (Eg) = ∫_{−(n+a)/2}^{(n+a)/2} ∫_{−n/2}^{n/2} (−(x − y)²) f(x) g(y) dy dx.

One way to analyze a linear transformation is to find its matrix, given a basis. This will be done with R(x, y) = −(x − y)². The basis used will be the orthonormal basis of polynomials (the Legendre polynomials rescaled to [−(n+a)/2, (n+a)/2] and normalized)

f_0(x) = √(1/(n+a)),
f_1(x) = √(12/(n+a)³) x,
f_2(x) = √(180/(n+a)⁵) x² − √(5/(4(n+a))),
f_3(x) = √(2800/(n+a)⁷) x³ − √(63/(n+a)³) x,
f_4(x) = √(44100/(n+a)⁹) x⁴ − √(2025/(n+a)⁵) x² + √(81/(64(n+a))),
···

for Player 1, and the orthonormal basis of polynomials

g_0(y) = √(1/n),
g_1(y) = √(12/n³) y,
g_2(y) = √(180/n⁵) y² − √(5/(4n)),
g_3(y) = √(2800/n⁷) y³ − √(63/n³) y,
g_4(y) = √(44100/n⁹) y⁴ − √(2025/n⁵) y² + √(81/(64n)),
···

for Player 2. (See Calculation 3 in Appendix 2 for the process of obtaining it.) Using these bases, f can be written as the column vector (f · f_0, f · f_1, f · f_2, ...)^T, and g can be written as the column vector (g · g_0, g · g_1, g · g_2, ...)^T. As for the matrix of E, its entry in the i-th row and j-th column should be the i-th component of Eg_j, which is just f_i · (Eg_j). Thus, f · (Eg) is the product of the row vector (f · f_0, f · f_1, f · f_2, ...), the matrix with entries f_i · (Eg_j), and the column vector (g · g_0, g · g_1, g · g_2, ...)^T. (This is true because f · f_s is also the s-th component of f: if f = Σ_{s≥0} c_s f_s for some constants c_s, then f · f_s = Σ_{t≥0} c_t (f_t · f_s) = c_s, as the f_t are orthonormal.)

Using f_i · (Eg_j) = ∫_{−(n+a)/2}^{(n+a)/2} ∫_{−n/2}^{n/2} (−(x − y)²) f_i(x) g_j(y) dy dx, the only nonzero entries of the matrix of E work out to be

f_0 · (Eg_0) = −(1/12) ((n+a)^(5/2) n^(1/2) + (n+a)^(1/2) n^(5/2)),
f_1 · (Eg_1) = (1/6) (n+a)^(3/2) n^(3/2),
f_0 · (Eg_2) = −(1/√180) (n+a)^(1/2) n^(5/2),
f_2 · (Eg_0) = −(1/√180) (n+a)^(5/2) n^(1/2).

Note the all-zero columns and rows: Calculation 2 shows that, beyond the top 3-by-3 corner, all columns and rows are, in fact, all zero. The fact that only the top 3-by-3 corner of the matrix has any nonzero entries means that f · (Eg) only depends on the first three rows and columns,
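The reconstructed basis values can be sanity-checked numerically; the following sketch (my own, using a simple midpoint rule) verifies that the first three f_s are orthonormal:

```python
# Numerical sanity check (my sketch, simple midpoint rule) that the
# first three reconstructed basis polynomials are orthonormal on
# [-(n+a)/2, (n+a)/2]; the values n, a are arbitrary.

import math

n, a = 1.0, 0.4
c = (n + a) / 2

f = [
    lambda x: (n + a) ** -0.5,
    lambda x: math.sqrt(12) * (n + a) ** -1.5 * x,
    lambda x: math.sqrt(180) * (n + a) ** -2.5 * x ** 2
              - math.sqrt(5 / 4) * (n + a) ** -0.5,
]

def integrate(h, lo, hi, steps=20000):
    # Composite midpoint rule; plenty accurate for low-degree polynomials.
    dx = (hi - lo) / steps
    return sum(h(lo + (k + 0.5) * dx) for k in range(steps)) * dx

for i in range(3):
    for j in range(3):
        dot = integrate(lambda x: f[i](x) * f[j](x), -c, c)
        assert abs(dot - (1.0 if i == j else 0.0)) < 1e-6
print("orthonormal")
```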
so f · (Eg) equals the product of the row vector (f · f_0, f · f_1, f · f_2), the top 3-by-3 corner of the matrix of E, and the column vector (g · g_0, g · g_1, g · g_2)^T.

The same is true in general: f · (Eg) still equals the product of the (infinite) coordinate vector of f, the matrix with entries f_i · (Eg_j), and the coordinate vector of g, and this product can be truncated to at most N + 1 dimensions (rows and columns), where N is the degree of r as a polynomial. (f_0, f_1, f_2, ··· and g_0, g_1, g_2, ··· are exactly the same orthonormal polynomials as they were in the R(x, y) = −(x − y)² case.) This statement is proven in Appendix 1 as Lemma 4.

Let the old coordinates refer to the coordinates where Player 1's strategy is represented by its probability-density function f, and Player 2's strategy is represented by its probability-density function g, and let the new coordinates be (f · f_0, f · f_1, f · f_2, ···, f · f_N) for Player 1 and (g · g_0, g · g_1, g · g_2, ···, g · g_N) for Player 2, where N is the degree of r as a polynomial.

While the strategy space in the new coordinates is finite-dimensional if r is a polynomial, its boundaries are rather difficult to find directly. However, tools from convexity theory allow an easier description of the strategy space. Every mixed strategy is a convex combination of pure strategies, in that if g is a mixed strategy, then g(y) = ∫_{−n/2}^{n/2} δ(t_y − y) g(t_y) dt_y, where δ(t_y − y) is the Dirac delta function applied to t_y − y, which is the probability-density function of a random variable that always takes on the value t_y. Since the transformation to the new coordinates is linear, it follows that every mixed strategy in the new coordinates is likewise a convex combination of the pure strategies.
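The entries just derived can likewise be verified by numerical integration (again my own sketch; the tolerance is loose because of the midpoint rule):

```python
# Numerical verification (my sketch; loose tolerance from the midpoint
# rule) of the four nonzero entries derived above for R(x, y) = -(x - y)**2,
# and that every other entry of the top 4-by-4 block vanishes.

import math

n, a = 1.0, 0.4
cx, cy = (n + a) / 2, n / 2

def basis(width):
    return [
        lambda x: width ** -0.5,
        lambda x: math.sqrt(12) * width ** -1.5 * x,
        lambda x: math.sqrt(180) * width ** -2.5 * x ** 2
                  - math.sqrt(5 / 4) * width ** -0.5,
        lambda x: math.sqrt(2800) * width ** -3.5 * x ** 3
                  - math.sqrt(63) * width ** -1.5 * x,
    ]

f, g = basis(n + a), basis(n)
R = lambda x, y: -(x - y) ** 2

def entry(i, j, steps=200):
    # Midpoint-rule approximation of the double integral f_i . (E g_j).
    dx, dy = 2 * cx / steps, 2 * cy / steps
    total = 0.0
    for p in range(steps):
        x = -cx + (p + 0.5) * dx
        for q in range(steps):
            y = -cy + (q + 0.5) * dy
            total += f[i](x) * R(x, y) * g[j](y)
    return total * dx * dy

expected = {
    (0, 0): -((n + a) ** 2.5 * n ** 0.5 + (n + a) ** 0.5 * n ** 2.5) / 12,
    (1, 1): (n + a) ** 1.5 * n ** 1.5 / 6,
    (0, 2): -(n + a) ** 0.5 * n ** 2.5 / math.sqrt(180),
    (2, 0): -(n + a) ** 2.5 * n ** 0.5 / math.sqrt(180),
}
for i in range(4):
    for j in range(4):
        assert abs(entry(i, j) - expected.get((i, j), 0.0)) < 1e-3
print("matrix structure confirmed")
```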
In fact, the coefficients that show the mixed strategy is a convex combination of the pure strategies are exactly the same as the coefficients that do this in the old coordinates.

Thus, the convex hull of the pure strategies in the new coordinates contains all the mixed strategies. If r is a polynomial of degree N, the pure strategies are part of an (at most) (N + 1)-dimensional space. Lemma 7 in Appendix 1 shows that every point in the convex hull of an (N + 1)-dimensional set is a convex combination of at most N + 2 of the set's points, so every mixed strategy is a convex combination of N + 2 or fewer pure strategies. This includes whatever the Nash equilibrium is.

In the example with R(x, y) = −(x − y)², the strategies, in the new coordinates, are (f · f_0, f · f_1, f · f_2) for Player 1 and (g · g_0, g · g_1, g · g_2) for Player 2. The pure strategies are, in the new coordinates (as shown in Calculation 4 in Appendix 2),

( √(1/(n+a)), √(12/(n+a)³) x, √(180/(n+a)⁵) x² − √(5/(4(n+a))) )

for Player 1, and

( √(1/n), √(12/n³) y, √(180/n⁵) y² − √(5/(4n)) )

for Player 2. This can be viewed as a parametric representation of the curve containing all pure strategies (in the new coordinates).
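The Carathéodory reduction invoked here is constructive; the following sketch (my implementation of the standard argument, not code from the paper) repeatedly removes support points along an affine dependence until at most d + 1 remain:

```python
# Constructive sketch (my implementation of the standard argument, not
# the paper's code) of the Caratheodory reduction: a convex combination
# of many points in R^d is rewritten to use at most d + 1 of them.

def null_vector(A):
    # One nonzero solution v of A v = 0 for a wide matrix (rows < cols).
    rows, cols = len(A), len(A[0])
    A = [row[:] for row in A]
    pivots, r0 = [], 0
    for c in range(cols):
        piv = None
        for i in range(r0, rows):
            if abs(A[i][c]) > 1e-12:
                piv = i
                break
        if piv is None:
            continue
        A[r0], A[piv] = A[piv], A[r0]
        for i in range(rows):
            if i != r0 and A[i][c] != 0.0:
                factor = A[i][c] / A[r0][c]
                for jc in range(cols):
                    A[i][jc] -= factor * A[r0][jc]
        pivots.append(c)
        r0 += 1
    free = next(c for c in range(cols) if c not in pivots)
    v = [0.0] * cols
    v[free] = 1.0          # the free coordinate guarantees a positive entry
    for row, c in enumerate(pivots):
        v[c] = -A[row][free] / A[row][c]
    return v

def caratheodory(points, weights):
    points, weights = list(points), list(weights)
    d = len(points[0])
    while len(points) > d + 1:
        # Rows: the d coordinates, plus an all-ones row (affine dependence).
        A = [[p[k] for p in points] for k in range(d)] + [[1.0] * len(points)]
        lam = null_vector(A)
        # Largest step keeping every weight nonnegative; one weight hits 0.
        t = min(w / l for w, l in zip(weights, lam) if l > 1e-12)
        weights = [w - t * l for w, l in zip(weights, lam)]
        keep = [i for i, w in enumerate(weights) if w > 1e-12]
        points = [points[i] for i in keep]
        weights = [weights[i] for i in keep]
    return points, weights

# Ten points on the plane curve (x, x**2), mixed uniformly:
pts = [(x / 10, (x / 10) ** 2) for x in range(10)]
wts = [0.1] * 10
new_pts, new_wts = caratheodory(pts, wts)
print(len(new_pts))   # at most d + 1 = 3
```

In the paper's setting, d = N + 1, so the same procedure yields the N + 2 bound.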
The Nash-equilibrium strategies (for both players) must lie within the convex hull of this curve, and Calculation 5 in Appendix 2 reduces the strategy space further, to a one-dimensional one (and actually finds the Nash equilibrium from there, although it may be easier to determine the strategy space in terms of the old coordinates, and then find the Nash equilibrium; after all, in the old coordinates, these strategies have simple descriptions).

This is fewer than the 4 strategies (which imply a 7-parameter family of strategies: 4 strategies, 4 coefficients, minus 1 for the coefficients always summing to 1) allowed by Lemma 7 in Appendix 1 (and the overall theorem allows 5 strategies, which give 9 parameters). Moreover, in each individual mixed strategy, the two strategies have fixed weights (1/2 each), so in fact, one parameter suffices for naming each mixed strategy that could be a Nash equilibrium. Are there always going to be fewer parameters describing the space in which a Nash equilibrium must lie? The answer is yes, and some of the extensions concern themselves with this.

R(x, y) = r(x − y + a/2) + r(−x + y + a/2), so R is invariant under exchanging x − y for y − x. Therefore, R is an even function of x − y, which means that its degree in x − y is even. Thus, if r(z) has degree N where N is odd, then R(x, y) cannot have degree N. At most, R(x, y) has degree N − 1. That reduces the maximum number of components that might be required to describe a Nash-equilibrium strategy by one if N is odd, so if N is odd, there is a Nash-equilibrium strategy with only N + 1 components (or fewer), rather than N + 2.

Removing the Odd Coordinates

The equivalence of the two battlefields means that only symmetrical strategies need to be considered, where a symmetrical strategy is one that, for whatever probability (or probability density) it assigns to a particular pure strategy, assigns the same probability (or probability density) to the same strategy with the battlefields reversed. As exchanging allocations turns a pure strategy x into pure strategy −x (or y into −y), if Player 1 exchanges allocations in every pure strategy of which Player 1's mixed strategy f(x) is composed, that turns Player 1's strategy into f(−x). A similar allocation exchange for Player 2 turns g(y) into g(−y). That makes any symmetrical strategy be represented by an even function.

A strategy can be symmetrized by taking its even part, f_e(x) = (f(x) + f(−x))/2 (or g_e(y) = (g(y) + g(−y))/2). Since this is a mixture of two mixed strategies with coefficients adding to 1, it is likewise a mixed strategy. If one player plays a symmetrical strategy, then the other player can symmetrize without affecting the result, because a symmetrical strategy is invariant to the opponent exchanging allocations (and symmetrizing is mixing the original strategy with the same strategy with exchanged allocations). If Player 1 playing strategy f and Player 2 playing strategy g is a Nash equilibrium, then none of the steps of the following cycle: Player 1 symmetrizing, then Player 2 symmetrizing, then Player 1 removing the symmetrization, then Player 2 removing the symmetrization, improves Player 1's payoff. That means that each step keeps the payoff the same, so there is no effect on the payoff from both players symmetrizing from a Nash equilibrium.
Furthermore, if a deviation from f_e succeeded against g_e (in improving the result relative to Player 1 playing f_e and Player 2 playing g_e), the symmetrization of the deviation would succeed equally well, and that symmetrization would also succeed against g; but that is impossible, because Player 1 playing strategy f and Player 2 playing strategy g is a Nash equilibrium. So the deviation does not succeed for Player 1, and a deviation by Player 2 would similarly not succeed. Thus, if Player 1 playing strategy f and Player 2 playing strategy g is a Nash equilibrium, so is Player 1 playing strategy f_e and Player 2 playing strategy g_e. More details can be found in Lemma 12 in Appendix 1 and its proof.

Lemma 10 in Appendix 1 confirms that f_0, f_1, f_2, ··· and g_0, g_1, g_2, ··· are all even or odd functions, and therefore, the new coordinates need only include the components corresponding to even-function polynomials, which are polynomials with only even powers; the components of an even function corresponding to odd-function polynomials are zero. That reduces the dimension of the mixed-strategy space to (N + 2)/2 (or to (N + 1)/2 if N is odd), this being the number of nonnegative even integers less than or equal to the degree of R. As symmetrizing is a linear operator, these mixed strategies are all and only the convex combinations of the symmetrized pure strategies. (A symmetrized pure strategy at x, or y, is a mixed strategy of x and −x, or correspondingly, y and −y, each with probability 1/2.) That means that there is a Nash-equilibrium strategy that is a convex combination of only (N + 4)/2 (or (N + 3)/2 if N is odd) symmetrized pure strategies.

In the new coordinates, the strategies were (f · f_0, f · f_1, f · f_2, ···, f · f_N) for Player 1 and (g · g_0, g · g_1, g · g_2, ···, g · g_N) for Player 2. This consideration reduces the strategy space to (f · f_0, f · f_2, f · f_4, ···, f · f_{2⌊N/2⌋}) for Player 1 and (g · g_0, g · g_2, g · g_4, ···, g · g_{2⌊N/2⌋}) for Player 2.
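The symmetrization step is easy to state in code; a minimal sketch (mine) for discrete strategies in deviation form:

```python
# Minimal sketch (mine) of the symmetrization step for a discrete mixed
# strategy written in deviation form: each component x is mixed with its
# mirror image -x, each receiving half of the original probability.

def symmetrize(strategy):
    """strategy: dict mapping pure strategy x (deviation form) -> probability."""
    sym = {}
    for x, prob in strategy.items():
        sym[x] = sym.get(x, 0.0) + prob / 2
        sym[-x] = sym.get(-x, 0.0) + prob / 2
    return sym

f = {0.3: 0.5, -0.1: 0.25, 0.0: 0.25}
fe = symmetrize(f)
print(sorted(fe.items()))   # equal mass on x and -x for every x
```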
(The subscript of the last entry is $N$ if $N$ is even, or $N-1$ if $N$ is odd. The functions with even subscripts are even, and the functions with odd subscripts are odd: one term of $f_s$, as a polynomial in $x$, is a nonzero multiple of $x^s$, and therefore, as $f_s$ was already shown to be even or odd, it must have the parity of $x^s$. However, $x^s$ is even exactly when $s$ is even, and odd exactly when $s$ is odd.)

5.1.3 Removing the 0-th Coordinate

When Carath\'eodory's Theorem was used, the dimension of the curve of pure strategies equaled the number of parameters. However, this curve actually has a dimension one lower than the number of parameters, because $f \cdot f_0$ and $g \cdot g_0$ are constants. That is because
$$f \cdot f_0 = \int_{-(n+a)/2}^{(n+a)/2} f(x) f_0(x)\, dx = \int_{-(n+a)/2}^{(n+a)/2} f(x) \frac{1}{\sqrt{n+a}}\, dx = \frac{1}{\sqrt{n+a}} \int_{-(n+a)/2}^{(n+a)/2} f(x)\, dx = \frac{1}{\sqrt{n+a}}$$
(because $\int_{-(n+a)/2}^{(n+a)/2} f(x)\, dx = 1$), and similarly for $g \cdot g_0$. This reduces the true dimension of either player's strategy space by one.

5.1.4 Extending Carath\'eodory's Theorem

Another extension is to make use of the fact that the set of all pure strategies is a polynomial curve. Carath\'eodory's Theorem assumes the worst-case scenario, namely that the points of which the convex hull is taken are discrete. That is not the case here; the locations of the pure strategies can be varied continuously. So, if the curve is $M$-dimensional, then, instead of the usual $M+1$ points on the curve being required to express any point of the convex hull, only $\frac{M+1}{2}$ points are needed, as each component contributes 2 to the dimension of the space of strategies: 1 for the probability of playing that pure strategy, plus 1 for the location of the pure strategy. That this is indeed possible is shown as the corollary of the theorem in [14]. A number of points that is a half-integer is to be interpreted as the next higher integer number of points, with the first point fixed at its lowest possible value.
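The reduction behind Carath\'eodory's Theorem (carried out in detail in Lemmas 5 and 6 of Appendix 1) is constructive, and can be sketched in code. The implementation below is only illustrative: `null_vector` and `caratheodory_reduce` are hypothetical helper names, the Gaussian elimination is a bare-bones version with a fixed tolerance, and the example points are arbitrary.

```python
def null_vector(A):
    # One nonzero solution d of A d = 0, for an m x k matrix with k > m,
    # via Gauss-Jordan elimination with one free variable set to 1.
    m, k = len(A), len(A[0])
    A = [row[:] for row in A]
    pivots, r = [], 0
    for col in range(k):
        piv = next((i for i in range(r, m) if abs(A[i][col]) > 1e-12), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        A[r] = [a / A[r][col] for a in A[r]]
        for i in range(m):
            if i != r:
                factor = A[i][col]
                A[i] = [a - factor * b for a, b in zip(A[i], A[r])]
        pivots.append(col)
        r += 1
        if r == m:
            break
    free = next(c for c in range(k) if c not in pivots)
    d = [0.0] * k
    d[free] = 1.0
    for row, col in zip(A, pivots):
        d[col] = -row[free]
    return d

def caratheodory_reduce(points, coeffs):
    # Repeatedly drop one point from a convex combination until at most
    # N + 1 points remain, as in the proofs of Lemmas 5 and 6 (Appendix 1).
    points, coeffs = [p[:] for p in points], coeffs[:]
    N = len(points[0])
    while len(points) > N + 1:
        # Affine dependence: sum(d_s) = 0 and sum(d_s * v_s) = 0.
        A = [[p[i] for p in points] for i in range(N)] + [[1.0] * len(points)]
        d = null_vector(A)
        if not any(ds < -1e-12 for ds in d):
            d = [-ds for ds in d]
        # b = min over d_s < 0 of -c_s / d_s keeps all coefficients >= 0
        # and zeroes the coefficient at the minimizing index t.
        b, t = min((-c / ds, s) for s, (c, ds) in enumerate(zip(coeffs, d))
                   if ds < -1e-12)
        coeffs = [c + b * ds for c, ds in zip(coeffs, d)]
        points.pop(t)
        coeffs.pop(t)
    return points, coeffs
```

Reducing a five-point combination in the plane, for example, returns at most three points whose weighted average is unchanged.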
(A fixed first point is "half a point" in the sense that it contributes only 1 to the dimension of the space of strategies, instead of 2.)

5.1.5 The Total Effect

These dimension-reducing extensions are not mutually exclusive; indeed, all of them can be applied, in the order in which they were presented. After the odd coordinates were removed, the strategy space became $(f \cdot f_0,\ f \cdot f_2,\ f \cdot f_4,\ \dots,\ f \cdot f_{2\lfloor N/2 \rfloor})$ for Player 1 and $(g \cdot g_0,\ g \cdot g_2,\ g \cdot g_4,\ \dots,\ g \cdot g_{2\lfloor N/2 \rfloor})$ for Player 2, and after the 0-th coordinate was removed, the strategy space became only $\lfloor N/2 \rfloor$-dimensional. The allowance of 2 parameters per component then means that only $\frac{\lfloor N/2 \rfloor + 1}{2}$ symmetrized pure strategies are required. Thus, only $\frac{N+2}{4}$ (if $N$ is even) or $\frac{N+1}{4}$ (if $N$ is odd) symmetrized pure strategies are required to find the Nash equilibrium.

This low number allows the use of an otherwise-infeasible brute-force algorithm to find an approximate Nash equilibrium. First, find the payoff when both players play symmetrized pure strategies, in terms of which symmetrized pure strategies the players play. Then, restrict Player 1's symmetrized pure strategies to $\left\{0, \frac{n+a}{L}, \frac{2(n+a)}{L}, \dots, n+a\right\}$, for some integer $L$. Do the same for Player 2, which gives the set $\left\{0, \frac{n}{L}, \frac{2n}{L}, \dots, n\right\}$. For (symmetrized) mixed strategies, express them as convex combinations of symmetrized pure strategies, and restrict the coefficients to the set of integer multiples of $\frac{1}{L}$ (with these coefficients still summing to 1). Then, for each mixed strategy of Player 1 and for each symmetrized pure strategy of Player 2 (for both players, using only the pure strategies in the restricted sets), compute the expected payoff. For each mixed strategy of Player 1, take the lowest result obtained over Player 2's pure strategies. Over all the mixed strategies, the highest such minimum gives the approximate Nash-equilibrium strategy for Player 1.
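This search can be sketched directly. The sketch below assumes a concrete stand-in outcome polynomial $r(z) = z^2$ and an illustrative grid over the symmetrized pure strategies; the helper names (`sym_payoff`, `weight_vectors`, `approx_equilibrium`) and the grid parameterization are hypothetical, and the payoff is evaluated on the fly rather than precomputed.

```python
def r(z):
    # Stand-in outcome polynomial; the paper allows any polynomial r.
    return z ** 2

def payoff(x, y, a):
    # Player 1's payoff in shifted coordinates: r(x - y + a/2) + r(-x + y + a/2).
    return r(x - y + a / 2) + r(-x + y + a / 2)

def sym_payoff(x, y, a):
    # Both players play symmetrized pure strategies: x mixed with -x, y with -y.
    return sum(payoff(sx, sy, a) for sx in (x, -x) for sy in (y, -y)) / 4.0

def weight_vectors(k, L):
    # All length-k tuples of nonnegative integers summing to L
    # (the mixed-strategy coefficients are these integers divided by L).
    if k == 1:
        yield (L,)
        return
    for first in range(L + 1):
        for rest in weight_vectors(k - 1, L - first):
            yield (first,) + rest

def approx_equilibrium(n, a, L):
    # Grids sampling the symmetrized pure strategies (an illustrative choice).
    xs = [i * (n + a) / (2 * L) for i in range(L + 1)]  # Player 1
    ys = [j * n / (2 * L) for j in range(L + 1)]        # Player 2
    best_value, best_mix = None, None
    for w in weight_vectors(len(xs), L):
        mix = [wi / L for wi in w]
        # Worst case over Player 2's symmetrized pure strategies.
        worst = min(sum(c * sym_payoff(x, y, a) for c, x in zip(mix, xs))
                    for y in ys)
        if best_value is None or worst > best_value:
            best_value, best_mix = worst, mix
    return best_value, best_mix
```

For example, `approx_equilibrium(1.0, 0.5, 2)` returns Player 1's best guaranteed payoff over the restricted mixed strategies, together with the maximin coefficients; increasing `L` refines the approximation, at combinatorial cost.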
Reverse the players (so now Player 2 selects a mixed strategy, while Player 1 selects a symmetrized pure strategy) to get an approximate Nash-equilibrium strategy for Player 2. (The only mixed strategies to be used here are those that are convex combinations of up to $\frac{N+2}{4}$, if $N$ is even, or of up to $\frac{N+1}{4}$, if $N$ is odd, symmetrized pure strategies.)

5.2 Other Possible Extensions

The main result need not be limited to the class of games examined here; for instance, it would also apply if the choices of allocations for both players were discrete, or if the two battlefields did not have the same outcome function, or if the outcome function (on either battlefield) were any polynomial of degree $N$ or less in both $x$ and $y$ (not just a polynomial of degree $N$ or less in $x-y$), or any combination of these. However, not all of the above extensions apply in all of these cases.

Polynomials are not the only kind of function that gives a payoff matrix of finite rank. The main theorem might be modified to apply to other functions producing payoff matrices of finite rank, perhaps functions like $P(z)\sin(z)$ or $P(z)\sinh(z)$, where $P(z)$ is a polynomial.

With more than two battlefields, the strategy spaces still have an orthonormal basis, this time of polynomials in several variables. Might a Blotto game with the same polynomial outcome function on each battlefield also have a Nash equilibrium with a discrete distribution? If so, how many components does this discrete distribution have? Perhaps the methods of this paper can be used to answer these questions.

Modeling a competition using Blotto with a polynomial outcome function has two advantages over modeling the same competition with Blotto where the outcome on every battlefield is a win or a loss. One advantage is that a polynomial is a better fit to the actual outcome function than a staircase function, especially if the staircase function is required to have its jump at zero (which it should if the battlefield is to be fair).
This is true even if the battlefields actually do have a winner-take-all rule, because a resource advantage does not guarantee victory.

The other advantage is the existence of a Nash-equilibrium strategy that is a discrete distribution with relatively few components. This leaves only a finite-dimensional space of possible Nash equilibria, which can be searched for saddle points. This is also a good reason to minimize the dimension of the space in which Nash equilibria exist, so that the objects on which no small perturbation of strategy improves the result are as low-dimensional as possible; ideally, these should be points; hence the extensions. Once such an equilibrium strategy is found, it is easy to implement with a random-number generator. The same cannot be said of continuous distributions in general, because simulating such a distribution requires knowing how to invert the cumulative distribution function, and this inverse might not have a convenient form.
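For example, once a discrete equilibrium strategy is in hand, playing it takes a single call to a random-number generator. The support and probabilities below are hypothetical, purely to illustrate:

```python
import random

# Hypothetical discrete equilibrium strategy: allocate `point` troops to the
# first battlefield with the given probability (the rest go to the second).
support = [0.0, 2.5, 5.0]
probs = [0.25, 0.50, 0.25]

def play(rng):
    # One draw from the discrete mixed strategy; no inverse CDF is needed.
    return rng.choices(support, weights=probs, k=1)[0]

rng = random.Random(0)
draws = [play(rng) for _ in range(10000)]
```

Over many draws, the empirical frequencies approach the prescribed probabilities, which is all that playing the mixed strategy requires.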
Lemma 1 (Effect of the shift of coordinates on the payoff). Player 1's payoff is $r\left(x - y + \frac{a}{2}\right) + r\left(-x + y + \frac{a}{2}\right)$.

Proof. Player 1's payoff is:
$$r(\tilde{x} - \tilde{y}) + r(a - \tilde{x} + \tilde{y}) = r\left(\left(x + \tfrac{n+a}{2}\right) - \left(y + \tfrac{n}{2}\right)\right) + r\left(a - \left(x + \tfrac{n+a}{2}\right) + \left(y + \tfrac{n}{2}\right)\right)$$
$$= r\left(x + \tfrac{n+a}{2} - y - \tfrac{n}{2}\right) + r\left(a - x - \tfrac{n+a}{2} + y + \tfrac{n}{2}\right) = r\left(x - y + \tfrac{a}{2}\right) + r\left(-x + y + \tfrac{a}{2}\right)$$

Lemma 2.
Player 1's expected payoff, which is $\int_{-(n+a)/2}^{(n+a)/2} \int_{-n/2}^{n/2} R(x,y) f(x) g(y)\, dy\, dx$, equals $f \cdot (Eg)$.

Proof. Player 1's expected payoff can be simplified as follows:
$$\int_{-(n+a)/2}^{(n+a)/2} \int_{-n/2}^{n/2} R(x,y) f(x) g(y)\, dy\, dx = \int_{-(n+a)/2}^{(n+a)/2} f(x) \int_{-n/2}^{n/2} R(x,y) g(y)\, dy\, dx = \int_{-(n+a)/2}^{(n+a)/2} f(x)\, (Eg)(x)\, dx = f \cdot (Eg)$$
and thus equals $f \cdot (Eg)$.

Lemma 3. If $g$ is a function, there is exactly one decomposition of $g$ into $g_{\|} + g_{\perp}$, such that $g_{\|}$ is a polynomial with degree at most $M$, and $g_{\perp}$ is orthogonal to all polynomials with degree at most $M$.

Proof sketch. Use Gram-Schmidt orthogonalization to get an orthonormal basis of the polynomials with degree at most $M$. Then project $g$ onto the space of polynomials with degree at most $M$, for instance by adding the projections of $g$ onto the spans of each individual vector of the basis. This is $g_{\|}$. What is left is orthogonal to all polynomials with degree at most $M$.

This is the only decomposition: if there were two distinct ones, subtract one from the other, which gives an alternative way to write 0, say $0 = 0_{\|} + 0_{\perp}$. Then $0_{\|} = -0_{\perp}$, so $0_{\|}$ is both a polynomial of degree at most $M$ and orthogonal to all polynomials of degree at most $M$, including itself. That makes $0_{\|} = 0$ and $0_{\perp} = 0$, so there is no alternative decomposition.

Proof.
As the polynomials with degree at most $M$ form an $(M+1)$-dimensional vector space, this vector space must have an orthonormal basis (which can be constructed by the Gram-Schmidt process, starting from the non-orthonormal basis $\{1, y, y^2, \dots, y^M\}$). Let $\{b_0, b_1, b_2, \dots, b_M\}$ be this orthonormal basis. Then $g_{\|} = \sum_{t=0}^{M} (g \cdot b_t)\, b_t$, and $g_{\perp} = g - g_{\|}$. Indeed, for any $b_u$ with $u \in \{0, 1, 2, \dots, M\}$:
$$b_u \cdot g_{\perp} = b_u \cdot (g - g_{\|}) = b_u \cdot g - b_u \cdot \sum_{t=0}^{M} (g \cdot b_t)\, b_t = b_u \cdot g - \sum_{t=0}^{M} (g \cdot b_t)(b_u \cdot b_t) = b_u \cdot g - g \cdot b_u = 0$$
(As $\{b_0, b_1, b_2, \dots, b_M\}$ is an orthonormal basis, all terms of $\sum_{t=0}^{M} (g \cdot b_t)(b_u \cdot b_t)$ vanish except the term with $t = u$, and if $t = u$, then $b_t \cdot b_u = 1$.)

The decomposition is unique, because if there were two decompositions $g = g_{\|1} + g_{\perp 1} = g_{\|2} + g_{\perp 2}$, then:
$$(g_{\|1} + g_{\perp 1}) - (g_{\|2} + g_{\perp 2}) = 0, \quad (g_{\|1} - g_{\|2}) + (g_{\perp 1} - g_{\perp 2}) = 0, \quad (g_{\|1} - g_{\|2}) = -(g_{\perp 1} - g_{\perp 2})$$
However, this means that $g_{\|1} - g_{\|2}$ is both a polynomial with degree at most $M$ and orthogonal to all polynomials with degree at most $M$. In particular, $g_{\|1} - g_{\|2}$ is orthogonal to itself, and the only polynomial orthogonal to itself is the zero polynomial. The same is true of $g_{\perp 1} - g_{\perp 2}$. Thus $g_{\|1} = g_{\|2}$ and $g_{\perp 1} = g_{\perp 2}$, so the two decompositions were the same.

Lemma 4 (Finite Rank of $E$). If $r(z)$ is a polynomial function of $z$ with degree $N$, if $f_0, f_1, \dots, f_M$ are orthonormal and are all polynomials in $x$ with degree not greater than $M$, and if $g_0, g_1, \dots, g_M$ are orthonormal and are all polynomials in $y$ with degree not greater than $M$, then, for some $M \le N$, $f \cdot (Eg)$ can be expressed as
$$\begin{pmatrix} f \cdot f_0 & f \cdot f_1 & \cdots & f \cdot f_M \end{pmatrix} \begin{pmatrix} f_0 \cdot (Eg_0) & f_0 \cdot (Eg_1) & \cdots & f_0 \cdot (Eg_M) \\ f_1 \cdot (Eg_0) & f_1 \cdot (Eg_1) & \cdots & f_1 \cdot (Eg_M) \\ \vdots & \vdots & \ddots & \vdots \\ f_M \cdot (Eg_0) & f_M \cdot (Eg_1) & \cdots & f_M \cdot (Eg_M) \end{pmatrix} \begin{pmatrix} g \cdot g_0 \\ g \cdot g_1 \\ \vdots \\ g \cdot g_M \end{pmatrix}$$

Proof sketch.
Let $M$ be the degree of $R(x,y)$ as a polynomial in $x-y$. Decompose $f \cdot (Eg)$ into $f_{\|} \cdot (Eg_{\|}) + f_{\perp} \cdot (Eg_{\|}) + f_{\|} \cdot (Eg_{\perp}) + f_{\perp} \cdot (Eg_{\perp})$. The last two terms are zero, because $R(x,y)$ is a polynomial of degree $M$ in both $x$ and $y$, and at each $x$, applying $E$ to $g_{\perp}$ amounts to taking the dot product of $R$ and $g_{\perp}$, which is zero. $Eg$ is a polynomial in $x$ with degree at most $M$ (as $R(x,y)$ is such, and as $Eg = \int_{-n/2}^{n/2} R(x,y) g(y)\, dy$ is an integral with respect to $y$), and $f_{\perp}$ is orthogonal to it, so the second term is zero also. Thus $f \cdot (Eg) = f_{\|} \cdot (Eg_{\|})$.

Decompose $f$ according to an orthonormal basis $\{f_0, f_1, \dots, f_M\}$, and decompose $g$ according to an orthonormal basis $\{g_0, g_1, \dots, g_M\}$. If $f = f_{s_x}$ for some $s_x \in \{0, 1, \dots, M\}$ and $g = g_{s_y}$ for some $s_y \in \{0, 1, \dots, M\}$, then the equation ($f \cdot (Eg)$ equaling the expression) holds. Thus, using linear combinations, the equation holds for $f = f_{\|}$ and $g = g_{\|}$, because both sides are linear in $f$ and in $g$. However, for all $s \in \{0, 1, \dots, M\}$, $f_{\perp} \cdot f_s = 0$ and $g_{\perp} \cdot g_s = 0$, so, for $f = f_{\perp}$ or $g = g_{\perp}$, both sides of the equation are zero, so it holds again. Thus, again using linear combinations, the equation holds for $f$ and $g$ in general.

Proof.
$$Eg = \int_{-n/2}^{n/2} R(x,y)\, g(y)\, dy = \int_{-n/2}^{n/2} \left( r\left(x-y+\tfrac{a}{2}\right) + r\left(-x+y+\tfrac{a}{2}\right) \right) g(y)\, dy = \int_{-n/2}^{n/2} \left( r\left(x-y+\tfrac{a}{2}\right) + r\left(-(x-y)+\tfrac{a}{2}\right) \right) g(y)\, dy$$
As $a$ is a constant, $r\left(x-y+\frac{a}{2}\right) + r\left(-(x-y)+\frac{a}{2}\right)$ is a polynomial in $x-y$ with degree not more than $N$. Let $P(x-y) = r\left(x-y+\frac{a}{2}\right) + r\left(-(x-y)+\frac{a}{2}\right)$, and let $M$ be the degree of $P$. This means that $M$ is a nonnegative integer not greater than $N$.
Thus, $R(x,y) = P(x-y)$. Decompose $g$ into $g_{\|} + g_{\perp}$, such that $g_{\|}$ is a polynomial with degree at most $M$ and $g_{\perp}$ is orthogonal to all polynomials with degree at most $M$, and decompose $f$ similarly into $f_{\|} + f_{\perp}$. Then:
$$f \cdot (Eg) = f \cdot \int_{-n/2}^{n/2} P(x-y)\, g(y)\, dy = f \cdot \int_{-n/2}^{n/2} P(x-y) \left( g_{\|}(y) + g_{\perp}(y) \right) dy = f \cdot \left( \int_{-n/2}^{n/2} P(x-y)\, g_{\|}(y)\, dy + \int_{-n/2}^{n/2} P(x-y)\, g_{\perp}(y)\, dy \right)$$
$P$ is a polynomial in $x-y$ with degree at most $M$, which means that, regardless of the value of $x$, $g_{\perp}$ is orthogonal to $P$. That means that $\int_{-n/2}^{n/2} P(x-y)\, g_{\perp}(y)\, dy = 0$ for all values of $x$ (in $\left[-\frac{n+a}{2}, \frac{n+a}{2}\right]$). Thus:
$$f \cdot (Eg) = f \cdot \int_{-n/2}^{n/2} P(x-y)\, g_{\|}(y)\, dy = \left( f_{\|} + f_{\perp} \right) \cdot \int_{-n/2}^{n/2} P(x-y)\, g_{\|}(y)\, dy = f_{\|} \cdot \int_{-n/2}^{n/2} P(x-y)\, g_{\|}(y)\, dy + f_{\perp} \cdot \int_{-n/2}^{n/2} P(x-y)\, g_{\|}(y)\, dy$$
As $f_{\perp}$ is orthogonal to all polynomials in $x$ of degree not more than $M$, and as $\int_{-n/2}^{n/2} P(x-y)\, g_{\|}(y)\, dy$ is itself a polynomial in $x$ of degree not more than $M$, the last term is zero. Thus:
$$f \cdot (Eg) = f_{\|} \cdot \int_{-n/2}^{n/2} P(x-y)\, g_{\|}(y)\, dy = f_{\|} \cdot \int_{-n/2}^{n/2} R(x,y)\, g_{\|}(y)\, dy = f_{\|} \cdot (Eg_{\|})$$
As the polynomials in $x$ (defined on $\left[-\frac{n+a}{2}, \frac{n+a}{2}\right]$) with degree not more than $M$ constitute an $(M+1)$-dimensional space, they have a basis with $M+1$ elements (for example, $\{1, x, x^2, \dots, x^M\}$). Moreover, applying the Gram-Schmidt construction guarantees the existence of an orthonormal basis of the polynomials of degree not more than $M$, and this basis still has $M+1$ elements. Let $\{f_0, f_1, f_2, \dots, f_M\}$ be this orthonormal basis, or any other orthonormal basis.
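The Gram-Schmidt construction invoked here is easy to carry out numerically. The sketch below (with midpoint quadrature in place of exact integration, a stand-in interval, and hypothetical helper names) builds an orthonormal polynomial basis and uses it to compute the decomposition $g = g_{\|} + g_{\perp}$ of Lemma 3:

```python
def dot(u, v, lo=-0.5, hi=0.5, steps=2000):
    # Midpoint-rule approximation of  u . v = \int u(y) v(y) dy.
    h = (hi - lo) / steps
    return sum(u(lo + (i + 0.5) * h) * v(lo + (i + 0.5) * h)
               for i in range(steps)) * h

def orthonormal_basis(M):
    # Gram-Schmidt on 1, y, ..., y^M under the dot product above.
    basis = []
    for s in range(M + 1):
        p = lambda y, s=s: y ** s
        for b in basis:
            c = dot(p, b)
            p = lambda y, p=p, b=b, c=c: p(y) - c * b(y)
        norm = dot(p, p) ** 0.5
        basis.append(lambda y, p=p, norm=norm: p(y) / norm)
    return basis

def decompose(g, M):
    # g = g_par + g_perp, with g_par a polynomial of degree <= M and
    # g_perp orthogonal to every polynomial of degree <= M (Lemma 3).
    basis = orthonormal_basis(M)
    coeffs = [dot(g, b) for b in basis]
    g_par = lambda y: sum(c * b(y) for c, b in zip(coeffs, basis))
    g_perp = lambda y: g(y) - g_par(y)
    return g_par, g_perp
```

Decomposing, say, $g(y) = y^3$ with $M = 1$ yields a $g_{\perp}$ whose dot products with $1$ and $y$ vanish (up to quadrature error), as the lemma requires.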
The polynomials in $y$ (defined on $\left[-\frac{n}{2}, \frac{n}{2}\right]$) with degree not more than $M$ similarly have an orthonormal basis, $\{g_0, g_1, g_2, \dots, g_M\}$. Using these orthonormal bases, $f_{\|}$ decomposes as $\sum_{s_x=0}^{M} (f_{\|} \cdot f_{s_x})\, f_{s_x}$ (as can be checked by computing the dot products of each $f_{s_x}$ with both $f_{\|}$ and its decomposition). Similarly, $g_{\|}$ decomposes as $\sum_{s_y=0}^{M} (g_{\|} \cdot g_{s_y})\, g_{s_y}$. Thus:
$$f \cdot (Eg) = f_{\|} \cdot (Eg_{\|}) = \left( \sum_{s_x=0}^{M} (f_{\|} \cdot f_{s_x})\, f_{s_x} \right) \cdot E\left( \sum_{s_y=0}^{M} (g_{\|} \cdot g_{s_y})\, g_{s_y} \right) = \left( \sum_{s_x=0}^{M} (f_{\|} \cdot f_{s_x})\, f_{s_x} \right) \cdot \left( \sum_{s_y=0}^{M} (g_{\|} \cdot g_{s_y})\, (Eg_{s_y}) \right)$$
$$= \sum_{s_x=0}^{M} \sum_{s_y=0}^{M} (f_{\|} \cdot f_{s_x})\, (g_{\|} \cdot g_{s_y}) \left( f_{s_x} \cdot (Eg_{s_y}) \right)$$
As $f_0, f_1, f_2, \dots, f_M$ are all polynomials with degree at most $M$, and as $f_{\perp}$ is orthogonal to all polynomials with degree at most $M$, it follows that $f_{\perp} \cdot f_{s_x} = 0$. Thus,
$$f \cdot f_{s_x} = (f_{\|} + f_{\perp}) \cdot f_{s_x} = (f_{\|} \cdot f_{s_x}) + (f_{\perp} \cdot f_{s_x}) = (f_{\|} \cdot f_{s_x}) + 0 = f_{\|} \cdot f_{s_x}$$
For the same reason, $g \cdot g_{s_y} = g_{\|} \cdot g_{s_y}$.
Thus:
$$f \cdot (Eg) = \sum_{s_x=0}^{M} \sum_{s_y=0}^{M} (f \cdot f_{s_x})\, (g \cdot g_{s_y}) \left( f_{s_x} \cdot (Eg_{s_y}) \right) = \begin{pmatrix} f \cdot f_0 & \cdots & f \cdot f_M \end{pmatrix} \begin{pmatrix} f_0 \cdot (Eg_0) & \cdots & f_0 \cdot (Eg_M) \\ \vdots & \ddots & \vdots \\ f_M \cdot (Eg_0) & \cdots & f_M \cdot (Eg_M) \end{pmatrix} \begin{pmatrix} g \cdot g_0 \\ \vdots \\ g \cdot g_M \end{pmatrix}$$
where $f_0, f_1, \dots, f_M$ are orthonormal and are all polynomials with degree not greater than $M$ (as they were defined to be), and where $g_0, g_1, \dots, g_M$ are orthonormal and are all polynomials with degree not greater than $M$ (again, as they were defined to be).

Lemma 5.
Let $v_1, \dots, v_{N+2}$ be points in $\Re^N$. Then, if $v$ is a convex combination of these points, $v$ can also be expressed as a convex combination of at most $N+1$ of them, not necessarily the same ones for every choice of $v$.

Proof sketch. Consider $v$ as a convex combination of exactly $N+2$ points, $v_1, v_2, \dots, v_{N+2}$. The differences between the $N+2$ points span a space of no more than $N$ dimensions, so $N$ of the differences suffice to span this space. That allows one of $v_2 - v_1$, $v_3 - v_1$, ..., $v_{N+2} - v_1$ to be expressed in terms of the rest, or, equivalently, allows 0 to be written as a nontrivial linear combination of these differences. Add this alternative representation of 0 to $v$, which leaves the sum of the coefficients untouched (because each difference has 0 as its sum of coefficients). In fact, the same is true for adding any multiple of this representation of 0 to $v$. Some multiples will keep all the coefficients nonnegative; some will not. Small multiples will keep all the coefficients nonnegative (unless one was zero to begin with, but then that term can simply be removed). Find the edges of the region (of multiples of the zero) that keeps all the coefficients nonnegative. On either edge of the region, one or more coefficients are zero, and the rest are nonnegative. Now remove whichever terms have the zero coefficient, and the result is a convex combination of $N+1$ points or fewer.

Proof.
Let $v = \sum_{s=1}^{N+2} c_s v_s$, where $\sum_{s=1}^{N+2} c_s = 1$ and $0 \le c_s \le 1$ for all $s \in \{1, 2, \dots, N+2\}$ (in other words, $v$ is a convex combination of the $v_s$). If $v_1$ is not the zero vector, then subtract $v_1$ from all the $v_s$ (including $v_1$ itself) and also from $v$; if $v$ was a convex combination of the $v_s$, then it is still a convex combination of the translated $v_s$, with the same choices of $c_s$, and conversely, if $v$ is now a convex combination of the $v_s$, it was a convex combination of the original $v_s$, again with the same choices of $c_s$.

Let $S$ be the space spanned by the $v_s$; this may or may not be $\Re^N$. Regardless, it is possible to select $N$ vectors from the $v_s$ that span $S$. Add $v_1$ to this set (or, if $v_1$ is already in, add another of the $v_s$ instead). Let $v_{N+2}$ be the one of the $v_s$ that is not in this set; if it is not $v_{N+2}$, then reassign the labels of the $v_s$ so that the vector outside the set is indeed $v_{N+2}$ (and so that $v_1$ remains 0).

Given that $v_1, \dots, v_{N+1}$ span $S$, it follows that $v_{N+2} = \sum_{s=1}^{N+1} d_s v_s$ for some real numbers $d_s$. Equivalently, if $d_{N+2}$ is set to $-1$, then $\sum_{s=1}^{N+2} d_s v_s = 0$. As $v_1 = 0$, it can be further specified that $d_1 = -\sum_{s=2}^{N+2} d_s$, so that the sum of all the $d_s$ is zero.

Thus, not only does $v = \sum_{s=1}^{N+2} c_s v_s$, but also $v = \sum_{s=1}^{N+2} c_s v_s + b \left( \sum_{s=1}^{N+2} d_s v_s \right)$ for any real number $b$. Simplifying the right side of this equation yields $v = \sum_{s=1}^{N+2} (c_s + b d_s) v_s$, with $\sum_{s=1}^{N+2} (c_s + b d_s) = \sum_{s=1}^{N+2} c_s + b \sum_{s=1}^{N+2} d_s = 1 + b \cdot 0 = 1$; so, provided that all the coefficients of $\sum_{s=1}^{N+2} (c_s + b d_s) v_s$ are nonnegative, this expression gives a different way to write $v$ as a convex combination of $v_1, \dots, v_{N+2}$.

All that remains is to select a value of $b$ that leaves all coefficients nonnegative but sets at least one of them to zero. There is at least one negative $d_s$ (for instance, $d_{N+2} = -1$). Over all $s$ where $d_s$ is negative, let $t$ be the one for which $-\frac{c_s}{d_s}$ is lowest, and set $b = -\frac{c_t}{d_t}$.
Then $b$ is a nonnegative number, because $c_t \ge 0$ and $d_t < 0$. Thus, for any $s$ where $d_s \ge 0$, $c_s + b d_s \ge 0$; and if $d_s < 0$, then
$$c_s + b d_s = c_s + \left(-\tfrac{c_t}{d_t}\right) d_s \ge c_s + \left(-\tfrac{c_s}{d_s}\right) d_s = 0.$$
Thus, all coefficients are nonnegative. The coefficient with index $t$ is zero, because $c_t + b d_t = c_t + \left(-\frac{c_t}{d_t}\right) d_t = 0$. Thus, this value of $b$ makes $\sum_{s=1}^{N+2} (c_s + b d_s) v_s$ a representation of $v$ as a convex combination of all of $v_1, \dots, v_{N+2}$, with at least one coefficient equal to 0. Remove this term, and the result is a convex combination of at most $N+1$ of $v_1, \dots, v_{N+2}$.

Lemma 6.
Let $v_1, \dots, v_M$ be points in $\Re^N$, with $M > N$. Then, if $v$ is a convex combination of these points, $v$ can also be expressed as a convex combination of at most $N+1$ of them, not necessarily the same ones for every choice of $v$.

Proof sketch. For a convex combination of more than $N+2$ points, consider just the first $N+2$ points, with rescaled coefficients, so that the rescaled coefficients sum to 1. That is a convex combination of $N+2$ points, so, by the previous lemma, it can be rewritten as a convex combination of no more than $N+1$ of those $N+2$ points. Scale the $N+1$ coefficients back, so that their sum is the same as the sum of the $N+2$ original coefficients, and replace the part of the convex combination corresponding to the first $N+2$ points with this new expression. That is still a representation of $v$ as a convex combination of points, but this time of only $M-1$ points. As long as there are $N+2$ or more points, this procedure can remove one, so repeat this procedure until there are only $N+1$ or fewer points.

Proof.
Let $v = \sum_{s=1}^{M} c_s v_s$, where $\sum_{s=1}^{M} c_s = 1$ and $0 \le c_s \le 1$ for all $s \in \{1, 2, \dots, M\}$ (in other words, $v$ is a convex combination of the $v_s$). Also, let at least one of $c_1, c_2, \dots, c_{N+2}$ be nonzero. (Reshuffle the points if necessary to make it so; there is at least one point with a nonzero coefficient.) Now:
$$v = \sum_{s=1}^{M} c_s v_s = \left( \sum_{s=1}^{N+2} c_s v_s \right) + \left( \sum_{s=N+3}^{M} c_s v_s \right) = \frac{\sum_{u=1}^{N+2} c_u}{\sum_{u=1}^{N+2} c_u} \left( \sum_{s=1}^{N+2} c_s v_s \right) + \left( \sum_{s=N+3}^{M} c_s v_s \right) = \left( \sum_{u=1}^{N+2} c_u \right) \left( \sum_{s=1}^{N+2} \frac{c_s}{\sum_{u=1}^{N+2} c_u} v_s \right) + \left( \sum_{s=N+3}^{M} c_s v_s \right)$$
As $\sum_{s=1}^{N+2} \frac{c_s}{\sum_{u=1}^{N+2} c_u} v_s$ is a convex combination of $v_1, \dots, v_{N+2}$ (its coefficients are nonnegative and sum to 1), by Lemma 5 it is a convex combination of at most $N+1$ vectors chosen from $v_1, \dots, v_{N+2}$. Let $w_1, \dots, w_{N+1}$ be these $N+1$ vectors. Thus, for some nonnegative numbers $d_t$ summing to 1, $\sum_{s=1}^{N+2} \frac{c_s}{\sum_{u=1}^{N+2} c_u} v_s = \sum_{t=1}^{N+1} d_t w_t$. Therefore:
$$\left( \sum_{u=1}^{N+2} c_u \right) \left( \sum_{s=1}^{N+2} \frac{c_s}{\sum_{u=1}^{N+2} c_u} v_s \right) + \left( \sum_{s=N+3}^{M} c_s v_s \right) = \left( \sum_{u=1}^{N+2} c_u \right) \left( \sum_{t=1}^{N+1} d_t w_t \right) + \left( \sum_{s=N+3}^{M} c_s v_s \right) = \sum_{t=1}^{N+1} \left( d_t \sum_{u=1}^{N+2} c_u \right) w_t + \left( \sum_{s=N+3}^{M} c_s v_s \right)$$
All the coefficients sum to 1: in the first sum, the coefficients sum to $\sum_{t=1}^{N+1} \left( d_t \sum_{u=1}^{N+2} c_u \right) = \left( \sum_{u=1}^{N+2} c_u \right) \sum_{t=1}^{N+1} d_t = \sum_{u=1}^{N+2} c_u = \sum_{s=1}^{N+2} c_s$, and in the second term, the coefficients sum to $\sum_{s=N+3}^{M} c_s$, for an overall sum of $\sum_{s=1}^{M} c_s$, which is 1. Also, all the coefficients are nonnegative (because $\sum_{u=1}^{N+2} c_u$ is nonnegative). Thus, this is a convex combination of $w_1, \dots, w_{N+1}, v_{N+3}, \dots, v_M$, and this list includes only $M-1$ points. Repeat this procedure until there are $N+2$ points left (or fewer, in which case the lemma statement is true); at that point, Lemma 5 (or yet another run of this procedure, with the sum from $N+3$ to $M$ having no terms and therefore being zero) can be applied for a final reduction to at most $N+1$ points.

Lemma 7.
Let $h$ be a continuous function from $\left[-\frac{n}{2}, \frac{n}{2}\right]$ to $\Re^N$. Then, if $v = \int_{-n/2}^{n/2} f(x) h(x)\, dx$ with $\int_{-n/2}^{n/2} f(x)\, dx = 1$ and $f(x) \ge 0$ for all $x$ in $\left[-\frac{n}{2}, \frac{n}{2}\right]$ (in other words, if $v$ is in the convex hull of the path $h$), then $v$ is a convex combination of at most $N+1$ values of the range of $h$ (in other words, of at most $N+1$ points on the path $h$).

Proof sketch. By the previous lemma, a convex combination of finitely many points in an $N$-dimensional space can be reduced to a convex combination of only $N+1$ of those points. For a point $v$ that is a convex combination of infinitely many points on the path, divide the integral into $M$ pieces. In each piece, keep $f(x)$ exactly as it was, but approximate $h(x)$ by the value of $h$ at the left (lowest-$x$) endpoint. As $M$ gets big, these approximations approach $v$. Each approximation is a convex combination of finitely many points, so the previous lemma shows that each approximation is a convex combination of only $N+1$ points; write each approximation this way. Then at least some of the approximations form a convergent sequence, in the sense that the first points of the convex combinations converge, as do the second points, and so on through the $(N+1)$-st points, and so do the corresponding coefficients. For each point and coefficient, take the limit of the corresponding point or coefficient along the convergent subsequence, and this serves as a representation of $v$ as a convex combination of $N+1$ points.

Proof.
Divide $\left[-\frac{n}{2}, \frac{n}{2}\right]$ into $M$ equal-size intervals $I_0 = \left[-\frac{n}{2}, -\frac{n}{2} + \frac{n}{M}\right]$, $I_1 = \left[-\frac{n}{2} + \frac{n}{M}, -\frac{n}{2} + \frac{2n}{M}\right]$, ..., $I_{M-1} = \left[-\frac{n}{2} + \frac{(M-1)n}{M}, \frac{n}{2}\right]$. Then:
$$v = \int_{-n/2}^{n/2} f(x)\, h(x)\, dx = \sum_{s=0}^{M-1} \int_{I_s} f(x)\, h(x)\, dx = \sum_{s=0}^{M-1} \int_{I_s} f(x) \begin{pmatrix} h_1(x) \\ \vdots \\ h_N(x) \end{pmatrix} dx = \begin{pmatrix} \sum_{s=0}^{M-1} \int_{I_s} f(x)\, h_1(x)\, dx \\ \vdots \\ \sum_{s=0}^{M-1} \int_{I_s} f(x)\, h_N(x)\, dx \end{pmatrix}$$
(with $h_1(x), \dots, h_N(x)$ being the components of $h(x)$).

Let $v_t = \sum_{s=0}^{M-1} \int_{I_s} f(x)\, h_t(x)\, dx$; in other words, $v_t$ is the $t$-th component of $v$. Then, because $f(x)$ is nonnegative:
$$v_t \le \sum_{s=0}^{M-1} \int_{I_s} f(x) \left( \max_{b \in I_s} h_t(b) \right) dx$$
In other words, if $h_t(x)$ on each interval is increased to the maximum of $h_t$ on that interval (which exists because $h_t$ is continuous), then $v_t$ does not decrease. A similar statement gives a lower bound for $v_t$:
$$v_t \ge \sum_{s=0}^{M-1} \int_{I_s} f(x) \left( \min_{b \in I_s} h_t(b) \right) dx$$
Let $b_{s,t,\max} \in I_s$ be such that $h_t(b_{s,t,\max}) = \max_{b \in I_s} h_t(b)$; in other words, $b_{s,t,\max}$ is a $b$-value achieving that maximum, which exists because $h_t$ is continuous and the maximum is taken over a closed interval. Similarly, let $b_{s,t,\min} \in I_s$ be such that $h_t(b_{s,t,\min}) = \min_{b \in I_s} h_t(b)$.
Thus (writing $I_s = \left[-\frac{n}{2} + \frac{sn}{M}, -\frac{n}{2} + \frac{(s+1)n}{M}\right]$ for the $s$-th interval):
$$v_t \le \sum_{s=0}^{M-1} \int_{I_s} f(x)\, h_t(b_{s,t,\max})\, dx \quad \text{and} \quad v_t \ge \sum_{s=0}^{M-1} \int_{I_s} f(x)\, h_t(b_{s,t,\min})\, dx$$
The difference between the two bounds is:
$$\sum_{s=0}^{M-1} \int_{I_s} f(x)\, h_t(b_{s,t,\max})\, dx - \sum_{s=0}^{M-1} \int_{I_s} f(x)\, h_t(b_{s,t,\min})\, dx = \sum_{s=0}^{M-1} \int_{I_s} f(x) \left( h_t(b_{s,t,\max}) - h_t(b_{s,t,\min}) \right) dx$$
Since $h_t$ is a continuous function on a closed interval (namely $\left[-\frac{n}{2}, \frac{n}{2}\right]$), it is uniformly continuous. Therefore, for any $\epsilon > 0$, there exists a $\delta > 0$ such that, for all $x_1$ and $x_2$ in $\left[-\frac{n}{2}, \frac{n}{2}\right]$, if $|x_1 - x_2| < \delta$, then $|h_t(x_1) - h_t(x_2)| < \epsilon$. For each $\epsilon$, select such a $\delta$, and choose any $M > \frac{n}{\delta}$. For any such $M$, $-\frac{n}{2} + \frac{sn}{M}$ and $-\frac{n}{2} + \frac{(s+1)n}{M}$ are only $\frac{n}{M}$ apart, which is less than $\frac{n}{n/\delta}$, or, equivalently, less than $\delta$. That means that $b_{s,t,\min}$ and $b_{s,t,\max}$ are less than $\delta$ apart, as they are both in $I_s$ and the endpoints of this interval are less than $\delta$ apart. Thus $|b_{s,t,\min} - b_{s,t,\max}| < \delta$, and thus $|h_t(b_{s,t,\min}) - h_t(b_{s,t,\max})| < \epsilon$. So, for any $\epsilon$, it is possible, just by making $M$ big enough, to ensure:
$$\left| \sum_{s=0}^{M-1} \int_{I_s} f(x) \left( h_t(b_{s,t,\max}) - h_t(b_{s,t,\min}) \right) dx \right| \le \sum_{s=0}^{M-1} \int_{I_s} f(x) \left| h_t(b_{s,t,\max}) - h_t(b_{s,t,\min}) \right| dx \le \sum_{s=0}^{M-1} \int_{I_s} f(x)\, \epsilon\, dx = \epsilon \int_{-n/2}^{n/2} f(x)\, dx = \epsilon$$
Thus,
$$\lim_{M \to \infty} \sum_{s=0}^{M-1} \int_{I_s} f(x) \left( h_t(b_{s,t,\max}) - h_t(b_{s,t,\min}) \right) dx = 0,$$
which means that the bounds on $v_t$ approach each other.
Therefore, both bounds, as well as anything always at or between them, must converge to $v_t$. One thing guaranteed to be at or between the bounds is the left sum for $v_t = \sum_{s=0}^{M-1} \int_{I_s} f(x)\, h_t(x)\, dx$, which is
$$\sum_{s=0}^{M-1} h_t\!\left(-\tfrac{n}{2} + \tfrac{sn}{M}\right) \int_{I_s} f(x)\, dx$$
where $I_s = \left[-\frac{n}{2} + \frac{sn}{M}, -\frac{n}{2} + \frac{(s+1)n}{M}\right]$. Recombining these left sums over the components gives:
$$\begin{pmatrix} \sum_{s=0}^{M-1} h_1\!\left(-\frac{n}{2} + \frac{sn}{M}\right) \int_{I_s} f(x)\, dx \\ \vdots \\ \sum_{s=0}^{M-1} h_N\!\left(-\frac{n}{2} + \frac{sn}{M}\right) \int_{I_s} f(x)\, dx \end{pmatrix} = \sum_{s=0}^{M-1} \left( \int_{I_s} f(x)\, dx \right) \begin{pmatrix} h_1\!\left(-\frac{n}{2} + \frac{sn}{M}\right) \\ \vdots \\ h_N\!\left(-\frac{n}{2} + \frac{sn}{M}\right) \end{pmatrix} = \sum_{s=0}^{M-1} \left( \int_{I_s} f(x)\, dx \right) h\!\left(-\tfrac{n}{2} + \tfrac{sn}{M}\right)$$
This is actually a convex combination of the $h\!\left(-\frac{n}{2} + \frac{sn}{M}\right)$, as the coefficients of the $h\!\left(-\frac{n}{2} + \frac{sn}{M}\right)$ are $\int_{I_s} f(x)\, dx$, which are all nonnegative and which sum to:
$$\sum_{s=0}^{M-1} \int_{I_s} f(x)\, dx = \int_{-n/2}^{n/2} f(x)\, dx = 1$$
As such, by Lemma 6, the left sum for $v$ (which is $\sum_{s=0}^{M-1} \left( \int_{I_s} f(x)\, dx \right) h\!\left(-\frac{n}{2} + \frac{sn}{M}\right)$) is a convex combination of at most $N+1$ of the $h\!\left(-\frac{n}{2} + \frac{sn}{M}\right)$.
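These left sums can be seen converging numerically. In the sketch below (for a hypothetical path $h(x) = (x, x^2)$ with a uniform density on a stand-in interval $\left[-\frac{1}{2}, \frac{1}{2}\right]$; all names are illustrative), each left sum is a convex combination of points on the path, and the sums approach $v = \left(0, \frac{1}{12}\right)$ as $M$ grows:

```python
def h(x):
    # A path in R^2 (so N = 2): h(x) = (x, x^2).
    return (x, x * x)

def f(x):
    # A probability density on [-1/2, 1/2] (uniform, for simplicity).
    return 1.0

def left_sum(M, lo=-0.5, hi=0.5):
    # Sum over subintervals of (integral of f over the subinterval) times
    # h evaluated at the left endpoint, as in the proof of Lemma 7.
    width = (hi - lo) / M
    total, v = 0.0, [0.0, 0.0]
    for s in range(M):
        left = lo + s * width
        weight = f(left) * width   # = \int_{I_s} f, exactly, since f is constant
        total += weight
        p = h(left)
        v[0] += weight * p[0]
        v[1] += weight * p[1]
    return v, total
```

The weights are nonnegative and `total` is 1, so each left sum is a convex combination of $M$ points on the path; Lemma 6 then compresses it to at most $N + 1 = 3$ points.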
If it has fewer than $N+1$ points, add more points to it with coefficient 0 to make it a convex combination of exactly $N+1$ points. Equivalently, for some nonnegative $c_{q,M}$ (where $q$ goes from 1 to $N+1$) that sum to 1, and where the $x_{q,M}$ are selected from the $-\frac{n}{2} + \frac{sn}{M}$ (for the same $M$),
$$\sum_{s=0}^{M-1} \left( \int_{I_s} f(x)\, dx \right) h\!\left(-\tfrac{n}{2} + \tfrac{sn}{M}\right) = \sum_{q=1}^{N+1} c_{q,M}\, h(x_{q,M})$$
Also, as $M$ gets large, the left sum for $v$ approaches $v$, because each of its components was already shown to approach the corresponding component of $v$. Therefore,
$$\lim_{M \to \infty} \sum_{q=1}^{N+1} c_{q,M}\, h(x_{q,M}) = v$$
The sequence of vectors $(c_{1,M}, \dots, c_{N+1,M}, x_{1,M}, \dots, x_{N+1,M})$ has a subsequence that converges in all components. The reason is that there is a subsequence in which the first components converge (the first components being bounded); from there, there is a sub-subsequence in which the second components converge; and this process can be repeated for each component. Let $\{M_p\}_{p=1}^{\infty}$ be the choices of $M$ in this convergent subsequence. That means that all the $\lim_{p \to \infty} c_{q,M_p}$ and all the $\lim_{p \to \infty} x_{q,M_p}$ exist. As such, the expression
$$\sum_{q=1}^{N+1} \left( \lim_{p \to \infty} c_{q,M_p} \right) \begin{pmatrix} h_1\!\left( \lim_{p \to \infty} x_{q,M_p} \right) \\ \vdots \\ h_N\!\left( \lim_{p \to \infty} x_{q,M_p} \right) \end{pmatrix}$$
contains no non-existent limits, so it can be simplified using the usual rules for limits to
$$\lim_{p \to \infty} \sum_{q=1}^{N+1} c_{q,M_p} \begin{pmatrix} h_1(x_{q,M_p}) \\ \vdots \\ h_N(x_{q,M_p}) \end{pmatrix}, \quad \text{which is} \quad \lim_{p \to \infty} \sum_{q=1}^{N+1} c_{q,M_p}\, h(x_{q,M_p})$$
Given that it is already known that $\lim_{M \to \infty} \sum_{q=1}^{N+1} c_{q,M}\, h(x_{q,M}) = v$, it follows that $\lim_{p \to \infty} \sum_{q=1}^{N+1} c_{q,M_p}\, h(x_{q,M_p}) = v$. This means that
$$v = \sum_{q=1}^{N+1} \left( \lim_{p \to \infty} c_{q,M_p} \right) \begin{pmatrix} h_1\!\left( \lim_{p \to \infty} x_{q,M_p} \right) \\ \vdots \\ h_N\!\left( \lim_{p \to \infty} x_{q,M_p} \right) \end{pmatrix}$$
As long as each (as $q$ varies)
$$\begin{pmatrix} h_1\!\left( \lim_{p \to \infty} x_{q,M_p} \right) \\ \vdots \\ h_N\!\left( \lim_{p \to \infty} x_{q,M_p} \right) \end{pmatrix}$$
is a point on the path $h$, this is a representation of $v$ as a convex combination of at most $N+1$ points on the path $h$: the $c_{q,M_p}$ sum to $1$, and thus so do their limits as $p$ approaches infinity, and the $c_{q,M_p}$ are all nonnegative, and thus so are their limits as $p$ approaches infinity. Indeed, since all the $h_t$ are continuous, it follows that
$$\begin{pmatrix}h_0\!\left(\lim_{p\to\infty} x_{q,M_p}\right)\\ \vdots\\ h_N\!\left(\lim_{p\to\infty} x_{q,M_p}\right)\end{pmatrix} = h\!\left(\lim_{p\to\infty} x_{q,M_p}\right).$$
The limit $\lim_{p\to\infty} x_{q,M_p}$ exists (since the $M_p$ were selected to make it exist) and is in $\left[-\frac n2,\frac n2\right]$ (since all the $x_{q,M_p}$ are in that interval), so $h\!\left(\lim_{p\to\infty} x_{q,M_p}\right)$ is indeed on the path $h$. Thus, $v$ indeed is a convex combination of at most $N+1$ points on the path $h$.

Lemma 8.
For any function $h$, let $h_-$ be the function defined as $h_-(z) = h(-z)$. Then, $(Eg)_- = E(g_-)$. In other words, if $g$ were graphed with $y$ on the horizontal axis, and if $Eg$ were graphed with $x$ on the horizontal axis, then a reflection of $g$ across the vertical axis causes $Eg$ to likewise reflect across the vertical axis.

Proof.
$$(E(g_-))(x) = \int_{-\frac n2}^{\frac n2}\left(r\!\left(\tfrac{x-y+a}{2}\right)+r\!\left(\tfrac{-x+y+a}{2}\right)\right)g_-(y)\,dy = \int_{-\frac n2}^{\frac n2}\left(r\!\left(\tfrac{x-y+a}{2}\right)+r\!\left(\tfrac{-x+y+a}{2}\right)\right)g(-y)\,dy.$$
Substituting $u = -y$:
$$\int_{-\frac n2}^{\frac n2}\left(r\!\left(\tfrac{x-y+a}{2}\right)+r\!\left(\tfrac{-x+y+a}{2}\right)\right)g(-y)\,dy = -\int_{\frac n2}^{-\frac n2}\left(r\!\left(\tfrac{x+u+a}{2}\right)+r\!\left(\tfrac{-x-u+a}{2}\right)\right)g(u)\,du$$
$$= \int_{-\frac n2}^{\frac n2}\left(r\!\left(\tfrac{x+u+a}{2}\right)+r\!\left(\tfrac{-x-u+a}{2}\right)\right)g(u)\,du = \int_{-\frac n2}^{\frac n2}\left(r\!\left(\tfrac{(-x)-u+a}{2}\right)+r\!\left(\tfrac{-(-x)+u+a}{2}\right)\right)g(u)\,du.$$
As the choice of the letter used for the variable of integration does not matter:
$$\int_{-\frac n2}^{\frac n2}\left(r\!\left(\tfrac{(-x)-u+a}{2}\right)+r\!\left(\tfrac{-(-x)+u+a}{2}\right)\right)g(u)\,du = \int_{-\frac n2}^{\frac n2}\left(r\!\left(\tfrac{(-x)-y+a}{2}\right)+r\!\left(\tfrac{-(-x)+y+a}{2}\right)\right)g(y)\,dy.$$
Thus:
$$(E(g_-))(x) = \int_{-\frac n2}^{\frac n2}\left(r\!\left(\tfrac{(-x)-y+a}{2}\right)+r\!\left(\tfrac{-(-x)+y+a}{2}\right)\right)g(y)\,dy,$$
$$((Eg)_-)(x) = (Eg)(-x) = \int_{-\frac n2}^{\frac n2}\left(r\!\left(\tfrac{(-x)-y+a}{2}\right)+r\!\left(\tfrac{-(-x)+y+a}{2}\right)\right)g(y)\,dy.$$
This implies that $(Eg)_- = E(g_-)$, as these two functions of $x$ agree on every value of $x$.

Lemma 9 (Effect of $E$ on even and odd parts of $g$).
For all functions $h$, let $h_e = \frac{h+h_-}{2}$ and $h_o = \frac{h-h_-}{2}$, with $h_-(z) = h(-z)$, as defined in Lemma 8. (In other words, $h_e$ is the "even part" of $h$, and $h_o$ is the "odd part" of $h$.) Then, $(Eg)_e = E(g_e)$ and $(Eg)_o = E(g_o)$.

Proof.
$$(Eg)_e = \frac{Eg+(Eg)_-}{2} = \frac{Eg+E(g_-)}{2} = \frac{E(g+g_-)}{2} = E\!\left(\frac{g+g_-}{2}\right) = E(g_e),$$
and, similarly:
$$(Eg)_o = \frac{Eg-(Eg)_-}{2} = \frac{Eg-E(g_-)}{2} = \frac{E(g-g_-)}{2} = E\!\left(\frac{g-g_-}{2}\right) = E(g_o).$$

Lemma 10. The orthonormal basis of Player 1's strategies $\{f_0(x), f_1(x), f_2(x), \cdots\}$ contains only even functions and odd functions. Likewise, the orthonormal basis of Player 2's strategies $\{g_0(y), g_1(y), g_2(y), \cdots\}$ also contains only even functions and odd functions.

Proof. Let $w$ be $x$ for Player 1 or $y$ for Player 2, let $\nu$ be $\frac{n+a}{2}$ for Player 1 or $\frac n2$ for Player 2, and for all nonnegative integers $s$, let $h_s$ be $f_s$ for Player 1 or $g_s$ for Player 2. In either case, to prove this lemma, it suffices to show that the orthonormal basis $\{h_0(w), h_1(w), h_2(w), \cdots\}$ contains only even functions and odd functions. For either player, this is the orthonormal basis (using the dot product $h_a\cdot h_b = \int_{-\nu}^{\nu} h_a(x)h_b(x)\,dx$) generated from the basis $\{1, w, w^2, \cdots, w^N\}$ by the Gram-Schmidt process.

Suppose that this is not the case; that is, suppose that, for some nonnegative integer $s$, $h_s$ is neither even nor odd. Of all the $s$ where $h_s$ is neither even nor odd, choose the lowest. This $s$ is not $0$, because $h_0 = \sqrt{\frac{1}{2\nu}}\cdot 1$ is an even function. For $s > 0$,
$$h_s = \frac{w^s-\sum_{j=0}^{s-1}\left((w^s\cdot h_j)h_j\right)}{\sqrt{\left(w^s-\sum_{j=0}^{s-1}\left((w^s\cdot h_j)h_j\right)\right)\cdot\left(w^s-\sum_{j=0}^{s-1}\left((w^s\cdot h_j)h_j\right)\right)}}$$
(obtained by reducing $w^s$ to its component perpendicular to all of $h_0, h_1, h_2, \cdots, h_{s-1}$, and by normalizing the result by dividing by the magnitude). This is defined, because, for every polynomial $h$, $h\cdot h = \int_{-\nu}^{\nu}(h(w))^2\,dw \ge$
$0$, and is only zero when $h$ is the zero polynomial; $w^s-\sum_{j=0}^{s-1}\left((w^s\cdot h_j)h_j\right)$ is not the zero polynomial, or else $w^s$ would be a linear combination of $h_0, h_1, \cdots, h_{s-1}$, or, equivalently, a linear combination of $1, w, w^2, \cdots, w^{s-1}$, but this is not the case.

Now, each $h_j$ is either an even function or an odd function, as is $w^s$. If exactly one of $w^s$ and $h_j$ is an even function (the other one being odd), then $w^s\cdot h_j = \int_{-\nu}^{\nu} w^s h_j(w)\,dw = 0$, as this is the integral of an odd function over a symmetric interval. Thus, for the sum $\sum_{j=0}^{s-1}\left((w^s\cdot h_j)h_j\right)$, if $w^s$ is even, then every term in this sum is even, and, similarly, if $w^s$ is odd, then every term in this sum is odd. It follows that $w^s-\sum_{j=0}^{s-1}\left((w^s\cdot h_j)h_j\right)$ itself is either an even function or an odd function; but $h_s$ is $w^s-\sum_{j=0}^{s-1}\left((w^s\cdot h_j)h_j\right)$ divided by a (nonzero) constant, so $h_s$ is either even or odd, which contradicts the assumption.

From the contrary assumption leading to a contradiction, the orthonormal basis $\{h_0(w), h_1(w), h_2(w), \cdots\}$ contains only even functions and odd functions. It thus follows that the orthonormal basis of Player 1's strategies $\{f_0(x), f_1(x), f_2(x), \cdots\}$ contains only even functions and odd functions, and that the orthonormal basis of Player 2's strategies $\{g_0(y), g_1(y), g_2(y), \cdots\}$ also contains only even functions and odd functions.

Lemma 11. If $f$ and $g$ are functions, then $f_o\cdot(Eg_e) = 0$, and similarly, $f_e\cdot(Eg_o) = 0$ (where the subscripts $e$ and $o$ are as defined in Lemma 9).

Proof.
$$f_o\cdot(Eg_e) = f_o\cdot((Eg)_e)\quad\text{(by Lemma 9)}\quad = \int_{-\frac{n+a}{2}}^{\frac{n+a}{2}} f_o(x)\,(Eg)_e(x)\,dx.$$
This is an integral of an odd function (as $f_o(x)(Eg)_e(x)$, being the product of an odd function and an even function, is odd) over a symmetric interval, and, therefore, it is zero.
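Lemma 11 can be spot-checked by quadrature. This sketch is not from the paper: the values $n = 2$, $a = 1$, the cubic $r(z) = -z^3$ from the calculations in Appendix 2, and the test functions $f$ and $g$ are illustrative assumptions of mine.

```python
import numpy as np

# Numerical sketch (not from the paper) of Lemma 11: with the kernel of E
# built from r(z) = -z**3, the odd part of f is orthogonal to E applied to
# the even part of g.  n = 2, a = 1 and the test functions are my choices.
n, a = 2.0, 1.0
r = lambda z: -z**3
K = lambda x, y: r((x - y + a) / 2) + r((-x + y + a) / 2)   # kernel of E

x = np.linspace(-(n + a) / 2, (n + a) / 2, 801)   # Player 1's interval
y = np.linspace(-n / 2, n / 2, 801)               # Player 2's interval
dx, dy = x[1] - x[0], y[1] - y[0]

f = np.exp(x)                    # arbitrary function on Player 1's interval
g = y**2 + y                     # arbitrary function on Player 2's interval
f_o = (f - f[::-1]) / 2          # odd part of f (the grid is symmetric)
g_e = (g + g[::-1]) / 2          # even part of g

Eg_e = K(x[:, None], y[None, :]) @ (g_e * dy)     # (E g_e)(x) by quadrature
dot = float(np.sum(f_o * Eg_e) * dx)              # f_o . (E g_e)
print(dot)                                        # vanishes up to rounding
```

Because the grids are symmetric about $0$, the odd-times-even cancellation happens exactly in the discrete sum, so the printed value is tiny even at modest resolution.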
Similarly:
$$f_e\cdot(Eg_o) = f_e\cdot((Eg)_o)\quad\text{(by Lemma 9)}\quad = \int_{-\frac{n+a}{2}}^{\frac{n+a}{2}} f_e(x)\,(Eg)_o(x)\,dx.$$
This is an integral of an odd function (as $f_e(x)(Eg)_o(x)$, being the product of an even function and an odd function, is odd) over a symmetric interval, and, therefore, it is zero.

[Footnote to the proof of Lemma 10: each $h_j$ is either even or odd; if $h_j$ is even, then $(w^s\cdot h_j)h_j$ is even, being a constant multiple of $h_j$; if $h_j$ is odd, then $(w^s\cdot h_j)h_j = 0\cdot h_j = 0$, which is also an even function.]

Lemma 12 (Even-function equilibrium). If Player 1 playing strategy $f$ and Player 2 playing strategy $g$ is a Nash equilibrium, then so is Player 1 playing strategy $f_e$ and Player 2 playing strategy $g_e$.

Proof. Let $f$ and $g$ be strategies that can be played ($f$ by Player 1 and $g$ by Player 2); that is, let $\int_{-\frac{n+a}{2}}^{\frac{n+a}{2}} f(x)\,dx = 1$ and $\int_{-\frac n2}^{\frac n2} g(y)\,dy = 1$, and let $f$ and $g$ be nonnegative on their domains: $f$ on $\left[-\frac{n+a}{2},\frac{n+a}{2}\right]$, and $g$ on $\left[-\frac n2,\frac n2\right]$. First, it is indeed possible for Player 1 to play strategy $f_e$, as:
$$1 = \int_{-\frac{n+a}{2}}^{\frac{n+a}{2}} f(x)\,dx = \int_{-\frac{n+a}{2}}^{\frac{n+a}{2}}\left(f_e(x)+f_o(x)\right)dx = \int_{-\frac{n+a}{2}}^{\frac{n+a}{2}} f_e(x)\,dx + \int_{-\frac{n+a}{2}}^{\frac{n+a}{2}} f_o(x)\,dx = \int_{-\frac{n+a}{2}}^{\frac{n+a}{2}} f_e(x)\,dx + 0$$
(as $f_o$ is an odd function), so $\int f_e = 1$; and as:
$$f_e(x) = \frac{f(x)+f_-(x)}{2} = \frac{f(x)}{2}+\frac{f(-x)}{2} \ge$$
$$\frac02+\frac02 = 0\quad\left(\text{as } f(x)\ge 0 \text{ for } x\in\left[-\frac{n+a}{2},\frac{n+a}{2}\right]\right).$$
It is also possible for Player 2 to play strategy $g_e$, for the same reason. As $f_o\cdot(Eg_e) = 0$ (by Lemma 11), it follows that:
$$f\cdot(Eg_e) = (f_e+f_o)\cdot(Eg_e) = \left(f_e\cdot(Eg_e)\right)+\left(f_o\cdot(Eg_e)\right) = f_e\cdot(Eg_e)+0 = f_e\cdot(Eg_e).$$
Similarly, as $f_e\cdot(Eg_o) = 0$ (by Lemma 11), it follows that:
$$f_e\cdot(Eg) = f_e\cdot(E(g_e+g_o)) = f_e\cdot(Eg_e+Eg_o) = \left(f_e\cdot(Eg_e)\right)+\left(f_e\cdot(Eg_o)\right) = f_e\cdot(Eg_e)+0 = f_e\cdot(Eg_e).$$
Let $f$ by Player 1 and $g$ by Player 2 now be a Nash equilibrium. Thus, by definition of the Nash equilibrium:
$$f\cdot(Eg) \ge f_e\cdot(Eg) = f_e\cdot(Eg_e) = f\cdot(Eg_e) \ge f\cdot(Eg),$$
which makes all these quantities equal: $f\cdot(Eg)$, $f_e\cdot(Eg)$, $f_e\cdot(Eg_e)$, and $f\cdot(Eg_e)$. As shown earlier, it is also the case that $f_e\cdot(Eg)$, $f_e\cdot(Eg_e)$, and $f\cdot(Eg_e)$ (but not necessarily $f\cdot(Eg)$) would be equal even without the assumption that $f$ by Player 1 and $g$ by Player 2 is a Nash equilibrium.

Now, let Player 1 play strategy $f_e$, and let Player 2 play strategy $g_e$. Then, if Player 1 modifies Player 1's strategy to $\tilde f$, the result is $\tilde f\cdot(Eg_e)$. Then:
$$\tilde f\cdot(Eg_e) = (\tilde f)_e\cdot(Eg_e)\quad\text{where }(\tilde f)_e\text{ is the even part of }\tilde f$$
$$= (\tilde f)_e\cdot(Eg) \le f\cdot(Eg)\quad\text{(as $f$ by Player 1 and $g$ by Player 2 is a Nash equilibrium)}\quad = f_e\cdot(Eg_e).$$
That means that Player 1 cannot benefit from modifying strategy $f_e$ while Player 2 plays strategy $g_e$. If, instead, Player 2 modifies Player 2's strategy to $\tilde g$, the result is $f_e\cdot(E\tilde g)$. Then:
$$f_e\cdot(E\tilde g) = f_e\cdot(E(\tilde g)_e)\quad\text{where }(\tilde g)_e\text{ is the even part of }\tilde g$$
$$= f\cdot(E(\tilde g)_e) \ge f\cdot(Eg)\quad\text{(as $f$ by Player 1 and $g$ by Player 2 is a Nash equilibrium)}\quad = f_e\cdot(Eg_e).$$
That means that Player 2 cannot benefit from modifying strategy $g_e$ while Player 1 plays strategy $f_e$. Therefore, $f_e$ by Player 1 and $g_e$ by Player 2 is a Nash equilibrium.

Calculation.
If $r(z) = -z^3$, then:
$$R(x,y) = r\!\left(\tfrac{x-y+a}{2}\right)+r\!\left(\tfrac{-x+y+a}{2}\right) = -\left(\tfrac{x-y+a}{2}\right)^3-\left(\tfrac{-x+y+a}{2}\right)^3$$
$$= -\left(\tfrac{x-y}{2}\right)^3-3\left(\tfrac{x-y}{2}\right)^2\left(\tfrac a2\right)-3\left(\tfrac{x-y}{2}\right)\left(\tfrac a2\right)^2-\left(\tfrac a2\right)^3-\left(\tfrac{-x+y}{2}\right)^3-3\left(\tfrac{-x+y}{2}\right)^2\left(\tfrac a2\right)-3\left(\tfrac{-x+y}{2}\right)\left(\tfrac a2\right)^2-\left(\tfrac a2\right)^3$$
$$= -6\left(\tfrac{x-y}{2}\right)^2\left(\tfrac a2\right)-2\left(\tfrac a2\right)^3 = -\tfrac{3a}{4}(x-y)^2-\tfrac{a^3}{4}.$$

Calculation. If $R(x,y) = -(x-y)^2$, then this calculation shows that if $g$ is orthogonal to $1$, $y$, and $y^2$ (which means that $\int_{-\frac n2}^{\frac n2} g(y)\,dy$, $\int_{-\frac n2}^{\frac n2} y\,g(y)\,dy$, and $\int_{-\frac n2}^{\frac n2} y^2 g(y)\,dy$ are all zero), or if $f$ is orthogonal to $1$, $x$, and $x^2$ (which means that $\int_{-\frac{n+a}{2}}^{\frac{n+a}{2}} f(x)\,dx$, $\int_{-\frac{n+a}{2}}^{\frac{n+a}{2}} x\,f(x)\,dx$, and $\int_{-\frac{n+a}{2}}^{\frac{n+a}{2}} x^2 f(x)\,dx$ are all zero), then $f\cdot(Eg) = 0$.

If $g$ is orthogonal to all of $1$, $y$, and $y^2$:
$$Eg = \int_{-\frac n2}^{\frac n2}-(x-y)^2\,g(y)\,dy = \int_{-\frac n2}^{\frac n2}-\left(x^2-2xy+y^2\right)g(y)\,dy$$
$$= -x^2\int_{-\frac n2}^{\frac n2} g(y)\,dy + 2x\int_{-\frac n2}^{\frac n2} y\,g(y)\,dy - \int_{-\frac n2}^{\frac n2} y^2 g(y)\,dy = -x^2(0)+2x(0)-(0) = 0.$$
Thus, if $g$ is orthogonal to $1$, to $y$, and to $y^2$, then $Eg = 0$.
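A quick numerical sketch of this step (not from the paper; $n = 2$ is an illustrative choice of mine): the degree-3 orthogonal polynomial from the Gram-Schmidt calculation in this appendix is orthogonal to $1$, $y$, and $y^2$, and $E$ sends it (numerically) to zero.

```python
import numpy as np

# Numerical sketch (not from the paper): with R(x, y) = -(x - y)**2, a g
# orthogonal to 1, y, and y**2 is annihilated by E.  n = 2 is my choice;
# g(y) = y**3 - (3/5)(n/2)**2 y is orthogonal to all three moments.
n = 2.0
y = np.linspace(-n / 2, n / 2, 2001)

def integral(vals, t):
    """Trapezoid rule (written out to avoid NumPy version differences)."""
    return float(np.sum((vals[:-1] + vals[1:]) / 2) * (t[1] - t[0]))

g = y**3 - (3 / 5) * (n / 2)**2 * y
moments = [integral(g, y), integral(y * g, y), integral(y**2 * g, y)]
print(moments)                         # all three are ~0

xs = np.linspace(-2.0, 2.0, 9)         # arbitrary points at which to evaluate Eg
Eg = np.array([integral(-(xi - y)**2 * g, y) for xi in xs])
print(float(np.max(np.abs(Eg))))       # ~0: E annihilates such a g
```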
This automatically implies that $f\cdot(Eg) = \int_{-\frac{n+a}{2}}^{\frac{n+a}{2}} f(x)\cdot 0\,dx = 0$.

If, instead, $f$ is orthogonal to all of $1$, $x$, and $x^2$:
$$f\cdot(Eg) = \int_{-\frac{n+a}{2}}^{\frac{n+a}{2}}\int_{-\frac n2}^{\frac n2}\left(-(x-y)^2\right)f(x)g(y)\,dy\,dx = \int_{-\frac{n+a}{2}}^{\frac{n+a}{2}}\int_{-\frac n2}^{\frac n2}\left(-x^2+2xy-y^2\right)f(x)g(y)\,dy\,dx$$
$$= \left(\int_{-\frac n2}^{\frac n2}-g(y)\,dy\right)\left(\int_{-\frac{n+a}{2}}^{\frac{n+a}{2}} x^2 f(x)\,dx\right)+\left(\int_{-\frac n2}^{\frac n2} y\,g(y)\,dy\right)\left(2\int_{-\frac{n+a}{2}}^{\frac{n+a}{2}} x\,f(x)\,dx\right)+\left(\int_{-\frac n2}^{\frac n2}-y^2 g(y)\,dy\right)\left(\int_{-\frac{n+a}{2}}^{\frac{n+a}{2}} f(x)\,dx\right) = 0.$$
The last step is valid, because the $x$-integrals are all zero, because of the orthogonality of $f$ to $1$, $x$, and $x^2$. Thus, if $f$ is orthogonal to $1$, to $x$, and to $x^2$, then $f\cdot(Eg) = 0$.

Calculation. From the polynomials $1, x, x^2, \cdots$, an orthonormal basis can be constructed by using the Gram-Schmidt process. This process is the same for both players, but the results are different, because the dot products are different for the two players. The letter $\nu$ should be interpreted as $\frac{n+a}{2}$ for Player 1 and $\frac n2$ for Player 2. Thus, for both players, $h_a\cdot h_b = \int_{-\nu}^{\nu} h_a(x)h_b(x)\,dx$. To ease computation:
$$x^A\cdot x^B = \int_{-\nu}^{\nu} x^A x^B\,dx = \int_{-\nu}^{\nu} x^{A+B}\,dx = \frac{\nu^{A+B+1}}{A+B+1}-\frac{(-\nu)^{A+B+1}}{A+B+1} = \frac{\nu^{A+B+1}}{A+B+1}\left(1-(-1)^{A+B+1}\right) = \frac{\nu^{A+B+1}}{A+B+1}\cdot\begin{cases}0, & A+B+1\text{ even}\\ 2, & A+B+1\text{ odd.}\end{cases}$$
Thus, if $A+B$ is even (so $A+B+1$ is odd), $x^A\cdot x^B = \left(\frac{2}{A+B+1}\right)\nu^{A+B+1}$, while, if $A+B$ is odd (so $A+B+1$ is even), $x^A\cdot x^B = 0$.

The first (non-normalized) vector of the basis is $1$. The second vector of the basis is $x-\left(\frac{x\cdot 1}{1\cdot 1}\right)\cdot$
$1$, which is the component of $x$ orthogonal to $1$. As $x\cdot 1 = 0$, this is just $x$.

The third vector is $x^2-\left(\frac{x^2\cdot 1}{1\cdot 1}\right)1-\left(\frac{x^2\cdot x}{x\cdot x}\right)x$, which is orthogonal to both $1$ and $x$:
$$x^2-\left(\frac{x^2\cdot 1}{1\cdot 1}\right)1-\left(\frac{x^2\cdot x}{x\cdot x}\right)x = x^2-\frac{\frac23\nu^3}{2\nu}-\frac{0}{\frac23\nu^3}\,x = x^2-\frac13\nu^2.$$
The fourth vector is (again, to be orthogonal to $1$, $x$, and $x^2-\frac13\nu^2$):
$$x^3-\left(\frac{x^3\cdot 1}{1\cdot 1}\right)1-\left(\frac{x^3\cdot x}{x\cdot x}\right)x-\left(\frac{x^3\cdot\left(x^2-\frac13\nu^2\right)}{\left(x^2-\frac13\nu^2\right)\cdot\left(x^2-\frac13\nu^2\right)}\right)\left(x^2-\frac13\nu^2\right) = x^3-0-\frac{\frac25\nu^5}{\frac23\nu^3}\,x-0 = x^3-\frac35\nu^2x,$$
where the first and third subtracted terms vanish because $x^3\cdot 1 = 0$ and $x^3\cdot\left(x^2-\frac13\nu^2\right) = 0$ (and the denominators are nonzero, since the dot product of a nonzero polynomial with itself is the integral of the square of a nonzero polynomial, and hence is itself nonzero). Thus, the fourth vector is $x^3-\frac35\nu^2x$.

The fifth vector is (to be orthogonal to the first four):
$$x^4-\left(\frac{x^4\cdot 1}{1\cdot 1}\right)1-\left(\frac{x^4\cdot x}{x\cdot x}\right)x-\left(\frac{x^4\cdot\left(x^2-\frac13\nu^2\right)}{\left(x^2-\frac13\nu^2\right)\cdot\left(x^2-\frac13\nu^2\right)}\right)\left(x^2-\frac13\nu^2\right)-\left(\frac{x^4\cdot\left(x^3-\frac35\nu^2x\right)}{\left(x^3-\frac35\nu^2x\right)\cdot\left(x^3-\frac35\nu^2x\right)}\right)\left(x^3-\frac35\nu^2x\right).$$
Here $x^4\cdot 1 = \frac25\nu^5$, $x^4\cdot x = 0$, $x^4\cdot\left(x^3-\frac35\nu^2x\right) = 0$, $x^4\cdot\left(x^2-\frac13\nu^2\right) = \frac27\nu^7-\frac13\nu^2\cdot\frac25\nu^5 = \frac{16}{105}\nu^7$, and
$$\left(x^2-\frac13\nu^2\right)\cdot\left(x^2-\frac13\nu^2\right) = x^2\cdot x^2-\frac23\nu^2\,(x^2\cdot 1)+\frac19\nu^4\,(1\cdot 1) = \frac25\nu^5-\frac49\nu^5+\frac29\nu^5 = \frac8{45}\nu^5,$$
so the fifth vector is
$$x^4-\frac{\frac25\nu^5}{2\nu}-\frac{\frac{16}{105}\nu^7}{\frac8{45}\nu^5}\left(x^2-\frac13\nu^2\right) = x^4-\frac15\nu^4-\frac67\nu^2\left(x^2-\frac13\nu^2\right) = x^4-\frac67\nu^2x^2+\frac3{35}\nu^4.$$
Thus, the first five non-normalized vectors are $1$, $x$, $x^2-\frac13\nu^2$, $x^3-\frac35\nu^2x$, and $x^4-\frac67\nu^2x^2+\frac3{35}\nu^4$. Normalizing them requires dividing each of them by its magnitude. Thus, the first vector is
$$\frac{1}{\sqrt{1\cdot 1}} = \frac{1}{\sqrt{2\nu}} = \sqrt{\frac12}\,\nu^{-1/2}.$$
The second vector is
$$\frac{x}{\sqrt{x\cdot x}} = \frac{x}{\sqrt{\frac23\nu^3}} = \sqrt{\frac32}\,\nu^{-3/2}\,x.$$
The third vector is
$$\frac{x^2-\frac13\nu^2}{\sqrt{\left(x^2-\frac13\nu^2\right)\cdot\left(x^2-\frac13\nu^2\right)}} = \frac{x^2-\frac13\nu^2}{\sqrt{\frac8{45}\nu^5}} = \sqrt{\frac{45}8}\,\nu^{-5/2}\,x^2-\sqrt{\frac58}\,\nu^{-1/2}.$$
The fourth vector is
$$\frac{x^3-\frac35\nu^2x}{\sqrt{\left(x^3-\frac35\nu^2x\right)\cdot\left(x^3-\frac35\nu^2x\right)}} = \frac{x^3-\frac35\nu^2x}{\sqrt{\frac27\nu^7-\frac{12}{25}\nu^7+\frac6{25}\nu^7}} = \frac{x^3-\frac35\nu^2x}{\sqrt{\frac8{175}\nu^7}} = \sqrt{\frac{175}8}\,\nu^{-7/2}\,x^3-\sqrt{\frac{63}8}\,\nu^{-3/2}\,x.$$
The fifth vector is
$$\frac{x^4-\frac67\nu^2x^2+\frac3{35}\nu^4}{\sqrt{\left(x^4-\frac67\nu^2x^2+\frac3{35}\nu^4\right)\cdot\left(x^4-\frac67\nu^2x^2+\frac3{35}\nu^4\right)}} = \frac{x^4-\frac67\nu^2x^2+\frac3{35}\nu^4}{\sqrt{\frac29\nu^9-\frac{24}{49}\nu^9+\frac{12}{175}\nu^9+\frac{72}{245}\nu^9-\frac{24}{245}\nu^9+\frac{18}{1225}\nu^9}}.$$
This simplifies to:
$$\frac{x^4-\frac67\nu^2x^2+\frac3{35}\nu^4}{\sqrt{\frac{128}{11025}\nu^9}} = \sqrt{\frac{11025}{128}}\,\nu^{-9/2}\,x^4-\sqrt{\frac{2025}{32}}\,\nu^{-5/2}\,x^2+\sqrt{\frac{81}{128}}\,\nu^{-1/2}$$
(where $\nu$ is $\frac{n+a}{2}$ for Player 1 and $\frac n2$ for Player 2).

Calculation. The pure strategy with $x = t$, with $t\in[-\nu,\nu]$, where $\nu$ is $\frac{n+a}{2}$ for Player 1 and $\frac n2$ for Player 2, is represented by the Dirac delta "function" $\delta(x-t)$. For any function $h$, the following holds:
$$\delta(x-t)\cdot h = \int_{-\nu}^{\nu}\delta(x-t)h(x)\,dx = \int_{-\nu}^{\nu}\delta(x-t)h(t)\,dx = h(t)\int_{-\nu}^{\nu}\delta(x-t)\,dx = (h(t))(1) = h(t).$$
This means that the pure strategies for either player in the new coordinates are $(f_0(t), f_1(t), f_2(t))$ for Player 1 and $(g_0(t), g_1(t), g_2(t))$ for Player 2, which, equivalently, are
$$\begin{pmatrix}\sqrt{\frac12}\left(\frac{n+a}{2}\right)^{-1/2}\\[3pt]\sqrt{\frac32}\left(\frac{n+a}{2}\right)^{-3/2}t\\[3pt]\sqrt{\frac{45}8}\left(\frac{n+a}{2}\right)^{-5/2}t^2-\sqrt{\frac58}\left(\frac{n+a}{2}\right)^{-1/2}\end{pmatrix}\text{ for Player 1 and }\begin{pmatrix}\sqrt{\frac12}\left(\frac n2\right)^{-1/2}\\[3pt]\sqrt{\frac32}\left(\frac n2\right)^{-3/2}t\\[3pt]\sqrt{\frac{45}8}\left(\frac n2\right)^{-5/2}t^2-\sqrt{\frac58}\left(\frac n2\right)^{-1/2}\end{pmatrix}\text{ for Player 2.}$$

Calculation. The curve
$$\begin{pmatrix}\sqrt{\frac12}\left(\frac{n+a}{2}\right)^{-1/2}\\[3pt]\sqrt{\frac32}\left(\frac{n+a}{2}\right)^{-3/2}t\\[3pt]\sqrt{\frac{45}8}\left(\frac{n+a}{2}\right)^{-5/2}t^2-\sqrt{\frac58}\left(\frac{n+a}{2}\right)^{-1/2}\end{pmatrix}$$
of Player 1's pure strategies is symmetrical over the plane $x_1 = 0$. That is, if $(x_0, x_1, x_2)$ is on the curve, then so is $(x_0, -x_1, x_2)$, which can be reached simply by switching the sign of $t$.
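The symmetry just described, and the orthonormality of the basis behind these coordinates, can be sanity-checked numerically. This sketch is not from the paper: $\nu = 1.5$ (that is, $\frac{n+a}{2}$ with $n = 2$, $a = 1$) and the evaluation point $t = 0.7$ are illustrative assumptions of mine.

```python
import numpy as np

# Numerical sketch (not from the paper): orthonormality of the first three
# basis functions on [-nu, nu], and the symmetry of the pure-strategy curve
# t -> (f0(t), f1(t), f2(t)): replacing t by -t flips only the middle entry.
# nu = 1.5 and t = 0.7 are illustrative choices of mine.
nu = 1.5
f0 = lambda t: np.sqrt(1 / 2) * nu**-0.5 * np.ones_like(np.asarray(t, dtype=float))
f1 = lambda t: np.sqrt(3 / 2) * nu**-1.5 * np.asarray(t, dtype=float)
f2 = lambda t: np.sqrt(45 / 8) * nu**-2.5 * np.asarray(t, dtype=float)**2 - np.sqrt(5 / 8) * nu**-0.5

t = np.linspace(-nu, nu, 4001)
dt = t[1] - t[0]
basis = [f0(t), f1(t), f2(t)]
G = np.array([[float(np.sum(u * v) * dt) for v in basis] for u in basis])
print(np.round(G, 2))                  # approximately the identity matrix

point = np.array([f0(0.7), f1(0.7), f2(0.7)], dtype=float)
mirror = np.array([f0(-0.7), f1(-0.7), f2(-0.7)], dtype=float)
print(point, mirror)                   # equal except for the sign of the middle entry
```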
It follows that the convex hull of thiscurve is also symmetric over the plane x = 0, for the same reason: inany convex combination of pure strategies, switching the sign of t in all thecomponents switches the sign of the third component, but leaves the othertwo components as they were.Similarly, the convex hull of the curve of Player 2’s pure strategies, whichis the convex hull of the curve (cid:16)q (cid:17) (cid:0) n (cid:1) − (cid:16)q (cid:17) (cid:0) n (cid:1) − t (cid:16)q (cid:17) (cid:0) n (cid:1) − t − (cid:16)q (cid:17) (cid:0) n (cid:1) − is symmetrical over the plane y = 0.This means that for every choice of f · f and f · f Player 1 can make,Player 1 can always set f · f to be zero without leaving the convex hull, anda similar statement holds for Player 2.Player 1’s expected payoff f · ( Eg ), which is f · f f · f f · f T (cid:0) − (cid:1) (cid:0) n + a (cid:1) (cid:0) n (cid:1) + (cid:0) − (cid:1) (cid:0) n + a (cid:1) (cid:0) n (cid:1) (cid:16) − √ (cid:17) (cid:0) n + a (cid:1) (cid:0) n (cid:1) (cid:0) − (cid:1) (cid:0) n + a (cid:1) (cid:0) n (cid:1) (cid:16) − √ (cid:17) (cid:0) n + a (cid:1) (cid:0) n (cid:1) g · g g · g g · g f · f f · f f · f T (cid:0) − (cid:1) (cid:0) n + a (cid:1) (cid:0) n (cid:1) + (cid:0) − (cid:1) (cid:0) n + a (cid:1) (cid:0) n (cid:1) (cid:16) − √ (cid:17) (cid:0) n + a (cid:1) (cid:0) n (cid:1) (cid:16) − √ (cid:17) (cid:0) n + a (cid:1) (cid:0) n (cid:1) (cid:0) − (cid:1) (cid:0) n + a (cid:1) (cid:0) n (cid:1) g · g g · g g · g or as (cid:18) f · f f · f (cid:19) T (cid:0) − (cid:1) (cid:0) n + a (cid:1) (cid:0) n (cid:1) + (cid:0) − (cid:1) (cid:0) n + a (cid:1) (cid:0) n (cid:1) (cid:16) − √ (cid:17) (cid:0) n + a (cid:1) (cid:0) n (cid:1) (cid:16) − √ (cid:17) (cid:0) n + a (cid:1) (cid:0) n (cid:1) (cid:18) g · g g · g (cid:19) + (cid:18) − (cid:19) (cid:18) n + a (cid:19) (cid:16) n (cid:17) ( f · f ) ( g · g )(as can be verified by carrying out the matrix multiplications). 
Either player can set the term $\frac43\left(\frac{n+a}{2}\right)^{3/2}\left(\frac n2\right)^{3/2}(f\cdot f_1)(g\cdot g_1)$ to zero without affecting the other term, by setting $f\cdot f_1$ or $g\cdot g_1$ to zero. It follows that in any Nash equilibrium, Player 1 setting $f\cdot f_1$ to zero, Player 2 setting $g\cdot g_1$ to zero, and both players doing this, yield the same expected payoff. Thus, Player 1 setting $f\cdot f_1$ to zero gives no new options to Player 2, as Player 2 could simply have "simulated" those options by setting $g\cdot g_1$ to zero. Thus, the result of both $f\cdot f_1$ and $g\cdot g_1$ being set to zero is another Nash equilibrium.

Thus, Player 1's strategy space can be reduced to the convex hull of
$$\begin{pmatrix}\sqrt{\frac12}\left(\frac{n+a}{2}\right)^{-1/2}\\[3pt]\sqrt{\frac{45}8}\left(\frac{n+a}{2}\right)^{-5/2}t^2-\sqrt{\frac58}\left(\frac{n+a}{2}\right)^{-1/2}\end{pmatrix},$$
and Player 2's strategy space can be reduced to the convex hull of
$$\begin{pmatrix}\sqrt{\frac12}\left(\frac n2\right)^{-1/2}\\[3pt]\sqrt{\frac{45}8}\left(\frac n2\right)^{-5/2}t^2-\sqrt{\frac58}\left(\frac n2\right)^{-1/2}\end{pmatrix},$$
with $t$ going from $-\frac{n+a}{2}$ to $\frac{n+a}{2}$ for Player 1 and from $-\frac n2$ to $\frac n2$ for Player 2, the interval in which the pure strategies fall. This interval can be cut in half, to $\left[0,\frac{n+a}{2}\right]$ for Player 1 and to $\left[0,\frac n2\right]$ for Player 2, because the negative values of $t$ contribute no points to either curve that were not already contributed by a positive value of $t$, and thus they contribute no new points to the convex hull of that curve. This makes $t^2$ a bijective function of $t$ on the halved interval, so $T_x = \left(\frac{n+a}{2}\right)^{-2}t^2$ for Player 1 and $T_y = \left(\frac n2\right)^{-2}t^2$ for Player 2 can serve as the parameters for these curves.
Both $T_x$ and $T_y$ are in $[0,1]$. In terms of $T_x$, Player 1's strategy space can be reduced to the convex hull of
$$\begin{pmatrix}\sqrt{\frac12}\left(\frac{n+a}{2}\right)^{-1/2}\\[3pt]\sqrt{\frac{45}8}\left(\frac{n+a}{2}\right)^{-1/2}T_x-\sqrt{\frac58}\left(\frac{n+a}{2}\right)^{-1/2}\end{pmatrix},$$
and Player 2's strategy space can be reduced to the convex hull of
$$\begin{pmatrix}\sqrt{\frac12}\left(\frac n2\right)^{-1/2}\\[3pt]\sqrt{\frac{45}8}\left(\frac n2\right)^{-1/2}T_y-\sqrt{\frac58}\left(\frac n2\right)^{-1/2}\end{pmatrix}.$$
These are line segments (although they would not be line segments if $r$ were of higher degree than 3), so they are their own convex hulls. After factoring, these segments become
$$\sqrt{\frac12}\left(\frac{n+a}{2}\right)^{-1/2}\begin{pmatrix}1\\ \sqrt{\frac{45}4}\,T_x-\sqrt{\frac54}\end{pmatrix}\text{ for Player 1 and }\sqrt{\frac12}\left(\frac n2\right)^{-1/2}\begin{pmatrix}1\\ \sqrt{\frac{45}4}\,T_y-\sqrt{\frac54}\end{pmatrix}\text{ for Player 2.}$$
As these reduced spaces are in the new coordinates, Player 1's payoff is
$$\left(\sqrt{\frac12}\left(\frac{n+a}{2}\right)^{-1/2}\begin{pmatrix}1\\ \sqrt{\frac{45}4}\,T_x-\sqrt{\frac54}\end{pmatrix}\right)^{T}
\begin{pmatrix}
-\frac23\left(\frac{n+a}{2}\right)^{5/2}\left(\frac n2\right)^{1/2}-\frac23\left(\frac{n+a}{2}\right)^{1/2}\left(\frac n2\right)^{5/2} & -\frac{4}{3\sqrt5}\left(\frac{n+a}{2}\right)^{1/2}\left(\frac n2\right)^{5/2}\\
-\frac{4}{3\sqrt5}\left(\frac{n+a}{2}\right)^{5/2}\left(\frac n2\right)^{1/2} & 0
\end{pmatrix}
\left(\sqrt{\frac12}\left(\frac n2\right)^{-1/2}\begin{pmatrix}1\\ \sqrt{\frac{45}4}\,T_y-\sqrt{\frac54}\end{pmatrix}\right)$$
$$= \frac12\left(\frac{n+a}{2}\right)^{-1/2}\left(\frac n2\right)^{-1/2}\left[-\frac23\left(\left(\frac{n+a}{2}\right)^{5/2}\left(\frac n2\right)^{1/2}+\left(\frac{n+a}{2}\right)^{1/2}\left(\frac n2\right)^{5/2}\right)\right.$$
$$\left.-\frac{4}{3\sqrt5}\left(\frac{n+a}{2}\right)^{5/2}\left(\frac n2\right)^{1/2}\left(\sqrt{\frac{45}4}\,T_x-\sqrt{\frac54}\right)-\frac{4}{3\sqrt5}\left(\frac{n+a}{2}\right)^{1/2}\left(\frac n2\right)^{5/2}\left(\sqrt{\frac{45}4}\,T_y-\sqrt{\frac54}\right)\right]$$
(by multiplying out the matrix product and ignoring any zero entries; the bracketed expression collapses, the constant terms cancelling, to)
$$= -\left(\frac{n+a}{2}\right)^{2}T_x-\left(\frac n2\right)^{2}T_y.$$
This is a decreasing function in both $T_x$ and $T_y$, so Player 1 should choose $T_x = 0$ and Player 2 should choose $T_y = 1$. Player 1's choice corresponds to $t = 0$, and hence, to the pure strategy $0$. Player 2's choice corresponds to $t = \frac n2$, and hence, to the equal mixture of the pure strategies $\frac n2$ and $-\frac n2$.
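The concluding equilibrium can also be checked directly. This sketch is not from the paper: $n = 2$ and $a = 1$ are illustrative values of mine, and the payoff is taken as $R(x,y) = -(x-y)^2$. No pure deviation by Player 1 beats playing $0$ against Player 2's equal mixture of $\pm\frac n2$, and Player 2's payoff-minimizing pure responses to $0$ are exactly $\pm\frac n2$.

```python
import numpy as np

# Numerical sketch (not from the paper): checking the concluding equilibrium
# for R(x, y) = -(x - y)**2.  n = 2, a = 1 are illustrative choices.  Player 1
# plays the pure strategy 0; Player 2 mixes n/2 and -n/2 with equal weight.
n, a = 2.0, 1.0
R = lambda x, y: -(x - y)**2

# Player 1's payoff from any pure deviation x against Player 2's mixture:
xs = np.linspace(-(n + a) / 2, (n + a) / 2, 1001)
payoff_1 = 0.5 * R(xs, n / 2) + 0.5 * R(xs, -n / 2)
best_x = float(xs[np.argmax(payoff_1)])
print(best_x)                          # the best response is x = 0

# Player 1's payoff when Player 2 plays a pure y against x = 0; Player 2
# wants this as small as possible:
ys = np.linspace(-n / 2, n / 2, 1001)
payoff_2 = R(0.0, ys)
worst_y = float(ys[np.argmin(payoff_2)])
print(worst_y)                         # minimized at an endpoint, y = n/2 or -n/2
```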