Optimal ordering of transmissions for computing Boolean threshold functions
Hemant Kowshik
CSL and Department of ECE
University of Illinois Urbana-Champaign
Email: [email protected]

P. R. Kumar
CSL and Department of ECE
University of Illinois Urbana-Champaign
Email: [email protected]
Abstract—We address a sequential decision problem that arises in the computation of symmetric Boolean functions of distributed data. We consider a collocated network, where each node's transmissions can be heard by every other node. Each node has a Boolean measurement and we wish to compute a given Boolean function of these measurements. We suppose that the measurements are independent and Bernoulli distributed. Thus, the problem of optimal computation becomes the problem of optimally ordering nodes' transmissions so as to minimize the total expected number of bits.

We solve the ordering problem for the class of Boolean threshold functions. The optimal ordering is dynamic, i.e., it could potentially depend on the values of previously transmitted bits. Further, it depends only on the ordering of the marginal probabilities, but not on their exact values. This provides an elegant structure for the optimal strategy. For the case where each node has a block of measurements, the problem is significantly harder, and we conjecture the optimal strategy.
I. INTRODUCTION

Most sensor network applications are typically interested only in computing some relevant function of the correlated data at distributed sensors. For example, one might want to compute the mean temperature for environmental monitoring, or the maximum temperature in fire alarm systems. On the other hand, sensor nodes are severely limited in terms of power and bandwidth, and are generating enormous quantities of data. Thus, we seek efficient in-network computation and communication strategies for the function of interest.

Computing and communicating functions of distributed data presents several challenges. On the one hand, the wireless medium being a broadcast medium, nodes have to deal with interference from other transmissions. On the other hand, nodes can exploit these overheard transmissions, and the structure of the function to be computed, to achieve a more efficient description of their own data. Moreover, the strategy for computation may benefit from interactive information exchange between nodes.

We consider a collocated network where each node's transmissions can be heard by every other node. At most one node is allowed to transmit successfully at any time. Each node has a Boolean variable and we focus on the specific problem of symmetric Boolean function computation. We adopt a deterministic formulation of the problem of function computation, allowing zero error. We suppose that node measurements are independent and distributed according to given marginal Bernoulli distributions. In this paper, we focus on optimal strategies for Boolean threshold functions, which are equal to 1 if and only if the number of nodes with measurement 1 reaches a certain threshold. The set of admissible strategies includes all interactive strategies, where a node may exchange several messages with other nodes.

In the case where each node has a single bit, the communication problem is rendered trivial, since it is optimal for the transmitting node to simply indicate its bit value. Thus, it only remains to determine the optimal ordering of nodes' transmissions so as to minimize the expected number of bits exchanged. For the class of Boolean threshold functions, we present a simple indexing policy for ordering the transmissions and prove its optimality. The optimal policy is dynamic, possibly depending on the previously transmitted bits. Further, the optimal policy depends only on the ordering of the marginal probabilities, but surprisingly not on their values.

The problem of optimally ordering transmissions of nodes is a sequential decision problem and can indeed be solved by dynamic programming. However, this would require solving the dynamic program for all thresholds and all probability distributions, which is computationally hard.

(This material is based upon work partially supported by AFOSR under Contract FA9550-09-0121, NSF under Contract No. CNS-07-21992, and USARO under Contract Nos. W911NF-08-1-0238 and W-911-NF-0710287. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the above agencies.)
We avoid this, and establish a more insightful solution, in the form of a simple rule defining the optimal policy.

In Section III, we formulate the problem of single instance computation, and derive the resulting dynamic programming equation. We then propose the indexing policy and present a detailed proof of optimality, by induction on the number of nodes in the network. In Section IV, we consider the extension to the case of block computation, where each node has a block of measurements and we are allowed block coding. This problem is significantly harder, and we conjecture the structure of an optimal multi-round policy, building on the optimal policy for single instance computation.

II. RELATED WORK

The problem of worst-case block function computation with zero error was formulated in [1]. The authors identify two classes of symmetric functions, namely type-sensitive functions, exemplified by Mean, Median and Mode, and type-threshold functions, exemplified by Maximum and Minimum. The maximum rates for computation of type-sensitive and type-threshold functions in random planar networks are shown to be Θ(1/log n) and Θ(1/log log n) respectively, where n is the number of nodes. If we impose a probability distribution on the node measurements, one can show that the average case complexity of computing type-threshold functions is Θ(1) [2].

In this paper, we require that every node must compute the function. This approach naturally allows the use of tools from communication complexity [3], where we seek to find the minimum number of bits that must be exchanged between two nodes to achieve worst-case zero-error computation of a function of the node variables. The problem of worst-case Boolean function computation was first considered in [4], where the complexity of the Boolean AND function was shown to be log₂ 3 bits per instance. In [9], nodes have i.i.d. measurements, and a central agent wishes to know the identities of the nodes with the k largest values.
One is allowed questions of the type "Is X ≥ t?", to which the central agent receives the list of all nodes which satisfy the condition. Under this framework, the optimal recursive strategy of querying the nodes is found. A key difference in our formulation is that we are only allowed to query particular nodes, and not all nodes at once.

III. OPTIMAL ORDERING FOR SINGLE INSTANCE COMPUTATION
Consider a collocated network with nodes 1 through n, where each node i has a Boolean measurement X_i ∈ {0, 1}. The X_i's are independent of each other and drawn from Bernoulli distributions with P(X_i = 1) =: p_i. Without loss of generality, we assume that p_1 ≤ p_2 ≤ ··· ≤ p_n.

We address the following problem. Every node wants to compute the same function f(X_1, X_2, ..., X_n) of the measurements. We seek to find communication schemes which achieve correct function computation at each node, with minimum expected total number of bits exchanged. Throughout this paper, we consider the broadcast scenario where each node's transmission can be heard by every other node. We also suppose that collisions do not convey information, thus restricting ourselves to collision-free strategies as in [1]. This means that for the kth bit b_k, the identity of the transmitting node T_k depends only on the previously broadcast bits b_1, b_2, ..., b_{k−1}, while the value of the bit it sends can depend arbitrarily on all previous broadcast bits as well as its own measurement X_{T_k}.

First, we note that since each node has exactly one bit of information, it is optimal to set b_k = X_{T_k}. Indeed, consider any other choice b′_k = g(b_1, ..., b_{k−1}, X_{T_k}). If b′_k does not depend on X_{T_k}, the remaining nodes can reconstruct b′_k since they already know b_1, ..., b_{k−1}, and the transmission itself could be avoided; otherwise b′_k reveals exactly X_{T_k}. Thus the only freedom available is in choosing the transmitting node T_k as a function of b_1, b_2, ..., b_{k−1}. We call this the ordering problem. Thus, by definition, the order can dynamically depend on the previous broadcast bits. In this paper, we address the ordering problem for a class of Boolean functions, namely threshold functions.
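The collision-free broadcast model can be made concrete with a small simulation; the sketch below is ours, for illustration only (names such as run_protocol are not from the paper). A strategy is simply a map from the previously broadcast bits to the identity of the next transmitter, and since b_k = X_{T_k}, the transcript is common knowledge at every node:

```python
def run_protocol(x, next_node):
    """Simulate a collision-free broadcast protocol.

    x         : tuple of Boolean measurements; x[i] is node i's bit.
    next_node : maps the tuple of previously broadcast bits to the index
                of the next transmitter, or None to stop.
    Each transmitter broadcasts its own bit unchanged (b_k = X_{T_k}),
    so the returned transcript is common knowledge at every node.
    """
    transcript = []
    while (i := next_node(tuple(transcript))) is not None:
        transcript.append(x[i])
    return transcript

# Example: computing OR (threshold q = 1) of three bits.  The order here is
# static, but transmission stops as soon as the function value is determined;
# in general next_node may pick different nodes depending on the past bits.
def or_strategy(bits, n=3):
    if 1 in bits or len(bits) == n:
        return None              # the OR is already determined
    return len(bits)             # next node in line
```

Here run_protocol((0, 1, 0), or_strategy) returns [0, 1]: once node 2 broadcasts a 1, node 3 never needs to transmit.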
Notation: The set of measurements of nodes 1 through n is denoted by (X_1, X_2, ..., X_n), which is abbreviated as X^n. In the sequel, we will use X^n_{−i} to denote the set of measurements (X_1, ..., X_{i−1}, X_{i+1}, ..., X_n). As a natural extension, we use X^n_{−(i,j)} to denote the set of measurements (X_1, ..., X_{i−1}, X_{i+1}, ..., X_{j−1}, X_{j+1}, ..., X_n), where i < j.

A. Optimal ordering for computing Boolean threshold functions

Definition 1 (Boolean threshold functions):
A Boolean threshold function Π_q(X_1, X_2, ..., X_n) is defined as

Π_q(X_1, X_2, ..., X_n) = 1 if Σ_i X_i ≥ q, and 0 otherwise.

Given the function Π_{n−k}(X^n), the ordering problem can indeed be solved using dynamic programming. Let C(Π_{n−k}(X^n)) denote the minimum expected number of bits required to compute Π_{n−k}(X^n). The dynamic programming equation is

C(Π_{n−k}(X^n)) = min_i { 1 + p_i C(Π_{n−k−1}(X^n_{−i})) + (1 − p_i) C(Π_{n−k}(X^n_{−i})) }.   (III-A)

However, solving this equation is computationally complex. Further, it is unclear at the outset if the optimal strategy will depend only on the ordering of the p_i's, or on their particular values. This makes the explicit solution of (III-A) for all n, k and (p_1, p_2, ..., p_n) notoriously hard. We present a very simple characterization of the optimal strategy for each n and 0 ≤ k ≤ n − 1, which does not depend on the exact values of the p_i's, but only depends on their ordering.

To begin with, we argue that solving the ordering problem for Boolean threshold functions is equivalent to solving the following problem for each n and k: in the optimal strategy for computing Π_{n−k}(X_1, X_2, ..., X_n), determine which node must transmit first. Indeed, if T(1) is the first node to transmit under the optimal strategy, then, depending on whether X_{T(1)} = 0 or X_{T(1)} = 1, the rest of the nodes would need to compute Π_{n−k}(X^n_{−T(1)}) or Π_{n−k−1}(X^n_{−T(1)}). Since we have solved the problem for all n and k, we can determine which node should transmit next in either case.

Theorem 1: In order to compute the Boolean threshold function Π_{n−k}(X^n), it is optimal for node k + 1 to transmit first, for each n and all 0 ≤ k ≤ n − 1, where p_1 ≤ p_2 ≤ ... ≤ p_n.
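Theorem 1 and the dynamic programming equation can be checked numerically on small instances. The following sketch (function names are ours, not the paper's) computes C by brute-force recursion over all dynamic orderings, and verifies that always letting node k + 1 transmit first achieves the minimum:

```python
import random
from functools import lru_cache

@lru_cache(maxsize=None)
def C(probs, q):
    """Minimum expected number of bits to compute 1{sum_i X_i >= q} over all
    dynamic orderings, via the dynamic programming equation.  `probs` is a
    sorted tuple of marginals P(X_i = 1); q is the threshold."""
    n = len(probs)
    if q <= 0 or q > n:          # function value is already determined
        return 0.0
    return min(1 + p * C(probs[:i] + probs[i + 1:], q - 1)
                 + (1 - p) * C(probs[:i] + probs[i + 1:], q)
               for i, p in enumerate(probs))

def index_policy(probs, q):
    """Expected bits when node k+1 always transmits first (k = n - q),
    i.e. the ordering prescribed by Theorem 1 for computing Pi_{n-k}."""
    n = len(probs)
    if q <= 0 or q > n:
        return 0.0
    i = n - q                    # 0-based index of node k+1
    rest = probs[:i] + probs[i + 1:]
    return (1 + probs[i] * index_policy(rest, q - 1)
              + (1 - probs[i]) * index_policy(rest, q))

# The index policy matches the dynamic program on random sorted instances.
random.seed(1)
for _ in range(50):
    probs = tuple(sorted(round(random.random(), 3) for _ in range(5)))
    for q in range(1, 6):
        assert abs(C(probs, q) - index_policy(probs, q)) < 1e-9
```

For example, for the OR function (q = 1) of two nodes with p = (0.2, 0.8), both quantities equal 1 + 0.8·0 + 0.2·1 = 1.2: the node more likely to hold a 1 speaks first.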
Proof:

Define C̃(Π_{n−k}(X^n)) := C(Π_{n−k}(X^n)) − 1, and define

T_{m,k,i}(X^m) := p_{k+1} C̃(Π_{m−k−1}(X^m_{−(k+1)})) + (1 − p_{k+1}) C̃(Π_{m−k}(X^m_{−(k+1)})) − p_i C̃(Π_{m−k−1}(X^m_{−i})) − (1 − p_i) C̃(Π_{m−k}(X^m_{−i})).   (1)

T_{m,k,i} is the difference between the expected number of bits when node k + 1 transmits first and when node i transmits first. Further, define

S(1)_{m,k,i}(X^m) := (p_{k+1} − p_i) C(Π_{m−k−1}(X^m_{−(k+1,i)})) + (1 − p_{k+1}) C̃(Π_{m−k}(X^m_{−(k+1)})) − (1 − p_i) C̃(Π_{m−k}(X^m_{−i}))   (2)

S(2)_{m,k,i}(X^m) := (p_i − p_{k+1}) C(Π_{m−k−1}(X^m_{−(i,k+1)})) + p_{k+1} C̃(Π_{m−k−1}(X^m_{−(k+1)})) − p_i C̃(Π_{m−k−1}(X^m_{−i}))   (3)

We do not yet have an interpretation for S(1)_{m,k,i} and S(2)_{m,k,i}. However, we will use these expressions in the sequel.

We establish the above theorem by induction on the number of nodes n. However, we need to load the induction hypothesis. Consider the following induction hypothesis.

(a) T_{m,k,i}(X^m) ≤ 0 for 0 ≤ k ≤ m − 1, 1 ≤ i ≤ m.
(b) S(1)_{m,k,i}(X^m) ≤ 0 for 0 ≤ k ≤ m − 1, k + 2 ≤ i ≤ m.
(c) S(2)_{m,k,i}(X^m) ≤ 0 for 0 ≤ k ≤ m − 1, 1 ≤ i ≤ k.

Observe that part (a) immediately establishes that node k + 1 should transmit first in computing Π_{m−k}(X^m). The basis step for m = 1, k = 0 is trivially true. Suppose the induction hypothesis is true for all m ≤ n. We now proceed to prove the hypothesis for m = n + 1.

Lemma 1:
For fixed k and i ≥ k + 2, we have S(1)_{n+1,k,i}(X^{n+1}) ≤ 0.

Proof:
See Appendix A.
Lemma 2:
For fixed k and i ≤ k, we have S(2)_{n+1,k,i}(X^{n+1}) ≤ 0.

Proof:
See Appendix B.

Lemmas 1 and 2 establish the induction step for parts (b) and (c) of the induction hypothesis. We now proceed to show the induction step for part (a).

Lemma 3:
For fixed k and i ≥ k + 2, we have T_{n+1,k,i}(X^{n+1}) ≤ S(1)_{n+1,k,i}(X^{n+1}).

Proof:
See Appendix C.
Lemma 4:
For fixed k and i ≤ k, we have T_{n+1,k,i}(X^{n+1}) ≤ S(2)_{n+1,k,i}(X^{n+1}).

Proof:
See Appendix D.

Using Lemmas 3 and 4 together with Lemmas 1 and 2, we see that T_{n+1,k,i}(X^{n+1}) ≤ 0 for all 0 ≤ k ≤ n and i ≠ k + 1. For the case i = k + 1, we trivially have T_{n+1,k,k+1} = 0. This establishes the induction step for part (a), and completes the proof of the Theorem. ✷

IV. OPTIMAL ORDERING FOR BLOCK COMPUTATION
We now shift attention to the case where we allow nodes to accumulate a block of N measurements, and thus achieve improved efficiency by using block codes. We consider the class of all interactive strategies for computation, where the kth bit can depend arbitrarily on all previously broadcast bits. We require that all nodes compute the function with zero error for the block. We present a conjecture for the optimal strategy based on the insight gained from the single instance solution.

Conjecture 1: In order to compute the Boolean threshold function Π_{n−k}(X^n), it is optimal for node k + 1 to transmit its block first, for each n and all 0 ≤ k ≤ n − 1, where p_1 ≤ p_2 ≤ ... ≤ p_n.

Observe that after node k + 1 transmits, for the instances where X_{k+1} = 0, we need to compute Π_{n−k}(X^n_{−(k+1)}), and for the instances where X_{k+1} = 1, we need to compute Π_{n−k−1}(X^n_{−(k+1)}). Thus the conjectured strategy can be recursively applied, yielding an interactive multi-round strategy. However, proving the optimality of this strategy is significantly harder. For worst case block computation, the lower bound is established using fooling sets [5]. Adapting this idea to the probabilistic scenario remains an interesting challenge for the future.
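For intuition, the single-instance policy that the conjecture builds on can be simulated per instance: applying Theorem 1 recursively determines the transmission order bit by bit, and the empirical average number of bits converges to the dynamic-programming value. The sketch and its names are ours; it does not implement block coding itself, only the recursive ordering:

```python
import random

def simulate(x, probs, q):
    """Bits used on one realization x when the node at index n - q (node k+1,
    probs sorted ascending) transmits first, applied recursively."""
    n = len(x)
    if q <= 0 or q > n:          # threshold decision already determined
        return 0
    i = n - q
    rest_x, rest_p = x[:i] + x[i + 1:], probs[:i] + probs[i + 1:]
    return 1 + simulate(rest_x, rest_p, q - 1 if x[i] else q)

random.seed(3)
probs, q, trials = (0.1, 0.3, 0.6, 0.9), 2, 200000
avg = sum(simulate(tuple(int(random.random() < p) for p in probs), probs, q)
          for _ in range(trials)) / trials
# avg is close to the exact expected cost 1 + 0.6*1.17 + 0.4*2.1 = 2.542
```

The per-instance recursion mirrors the observation above: after the first node's bit, the remaining nodes face a threshold problem on one fewer node, with the threshold reduced by one exactly when the transmitted bit was 1.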
V. CONCLUDING REMARKS

We have considered a sequential decision problem that arises in the context of optimal computation of Boolean threshold functions in collocated networks. For single instance computation, we show that the optimal strategy has an elegant structure, which depends only on the ordering of the marginal probabilities, and not on their exact values. The extension to the case of block computation is harder and remains a challenge for the future. It is also interesting to extend this result to the case of correlated measurements.

REFERENCES

[1] A. Giridhar and P. R. Kumar. Computing and communicating functions over sensor networks. IEEE Journal on Selected Areas in Communications, 23(4):755–764, April 2005.
[2] H. Kowshik and P. R. Kumar. Zero-error function computation in sensor networks. In Proceedings of the 48th IEEE Conference on Decision and Control (CDC), December 2009.
[3] E. Kushilevitz and N. Nisan. Communication Complexity. Cambridge University Press, 1997.
[4] R. Ahlswede and Ning Cai. On communication complexity of vector-valued functions. IEEE Transactions on Information Theory, 40:2062–2067, 1994.
[5] H. Kowshik and P. R. Kumar. Optimal strategies for computing Boolean functions in collocated networks. In Proceedings of the Information Theory Workshop, Cairo, January 2010.
[6] A. Orlitsky and J. R. Roche. Coding for computing. IEEE Transactions on Information Theory, 47:903–917, 2001.
[7] N. Ma, P. Ishwar, and P. Gupta. Information-theoretic bounds for multi-round function computation in collocated networks. In IEEE International Symposium on Information Theory (ISIT), 2009.
[8] Yosi Ben-Asher and Ilan Newman. Decision trees with Boolean threshold queries. Journal of Computer and System Sciences, 51, 1995.
[9] K. J. Arrow, L. Pesotchinsky, and M. Sobel. On partitioning of a sample with binary-type questions in lieu of collecting observations. Journal of the American Statistical Association, 76(374):402–409, June 1981.
APPENDIX
A. Proof of Lemma 1

For i ≥ k + 2, we have

S(1)_{n+1,k,i}(X^{n+1})
= (p_{k+1} − p_i) C(Π_{n−k}(X^{n+1}_{−(k+1,i)})) + (1 − p_{k+1}) C̃(Π_{n−k+1}(X^{n+1}_{−(k+1)})) − (1 − p_i) C̃(Π_{n−k+1}(X^{n+1}_{−i}))
= (p_{k+1} − p_i) [1 + p_k C(Π_{n−k−1}(X^{n+1}_{−(k,k+1,i)})) + (1 − p_k) C(Π_{n−k}(X^{n+1}_{−(k,k+1,i)}))]
  + (1 − p_{k+1}) [p_k C(Π_{n−k}(X^{n+1}_{−(k,k+1)})) + (1 − p_k) C(Π_{n−k+1}(X^{n+1}_{−(k,k+1)}))]
  − (1 − p_i) [p_k C(Π_{n−k}(X^{n+1}_{−(k,i)})) + (1 − p_k) C(Π_{n−k+1}(X^{n+1}_{−(k,i)}))]   (4)
= p_k [(p_{k+1} − p_i) C(Π_{n−k−1}(X^{n+1}_{−(k,k+1,i)})) + (1 − p_{k+1}) C̃(Π_{n−k}(X^{n+1}_{−(k,k+1)})) − (1 − p_i) C̃(Π_{n−k}(X^{n+1}_{−(k,i)}))]
  + (1 − p_k) S(1)_{n,k−1,i−1}(X^{n+1}_{−k})
≤ p_k [(p_{k+1} − p_i) C(Π_{n−k−1}(X^{n+1}_{−(k,k+1,i)})) + (1 − p_{k+1}) C̃(Π_{n−k}(X^{n+1}_{−(k,k+1)})) − (1 − p_i) C̃(Π_{n−k}(X^{n+1}_{−(k,i)}))]   (5)
= p_k [(p_{k+1} − p_i) C(Π_{n−k−1}(X^{n+1}_{−(k,k+1,i)})) + (1 − p_{k+1}) C̃(Π_{n−k}(X^{n+1}_{−(k,k+1)}))
  − (1 − p_i)[p_{k+1} C(Π_{n−k−1}(X^{n+1}_{−(k,k+1,i)})) + (1 − p_{k+1}) C(Π_{n−k}(X^{n+1}_{−(k,k+1,i)}))]]   (6)
= p_k (1 − p_{k+1}) [C̃(Π_{n−k}(X^{n+1}_{−(k,k+1)})) − p_i C(Π_{n−k−1}(X^{n+1}_{−(k,k+1,i)})) − (1 − p_i) C(Π_{n−k}(X^{n+1}_{−(k,k+1,i)}))]
≤ 0.   (7)

Equation (4) follows from the optimal ordering for computing Π_{n−k}(X^{n+1}_{−(k+1,i)}), Π_{n−k+1}(X^{n+1}_{−(k+1)}) and Π_{n−k+1}(X^{n+1}_{−i}), which is true by the induction hypothesis for m ≤ n; in each case node k is the optimal first transmitter. The inequality (5) follows from the induction hypothesis that S(1)_{n,k−1,i−1}(X^{n+1}_{−k}) ≤ 0. Equation (6) and the final inequality (7) follow from the optimal ordering for computing Π_{n−k}(X^{n+1}_{−(k,i)}) and Π_{n−k}(X^{n+1}_{−(k,k+1)}) respectively: in (6), node k + 1 is the optimal first transmitter, while in (7) the bracketed term is nonpositive because letting node i transmit first is one admissible choice, so C(Π_{n−k}(X^{n+1}_{−(k,k+1)})) ≤ 1 + p_i C(Π_{n−k−1}(X^{n+1}_{−(k,k+1,i)})) + (1 − p_i) C(Π_{n−k}(X^{n+1}_{−(k,k+1,i)})). ✷

B. Proof of Lemma 2

For i ≤ k, we have

S(2)_{n+1,k,i}(X^{n+1})
= (p_i − p_{k+1}) C(Π_{n−k}(X^{n+1}_{−(i,k+1)})) + p_{k+1} C̃(Π_{n−k}(X^{n+1}_{−(k+1)})) − p_i C̃(Π_{n−k}(X^{n+1}_{−i}))
= (p_i − p_{k+1}) [1 + p_{k+2} C(Π_{n−k−1}(X^{n+1}_{−(i,k+1,k+2)})) + (1 − p_{k+2}) C(Π_{n−k}(X^{n+1}_{−(i,k+1,k+2)}))]
  + p_{k+1} [p_{k+2} C(Π_{n−k−1}(X^{n+1}_{−(k+1,k+2)})) + (1 − p_{k+2}) C(Π_{n−k}(X^{n+1}_{−(k+1,k+2)}))]
  − p_i [p_{k+2} C(Π_{n−k−1}(X^{n+1}_{−(i,k+2)})) + (1 − p_{k+2}) C(Π_{n−k}(X^{n+1}_{−(i,k+2)}))]   (8)
= p_{k+2} S(2)_{n,k,i}(X^{n+1}_{−(k+2)})
  + (1 − p_{k+2}) [(p_i − p_{k+1}) C(Π_{n−k}(X^{n+1}_{−(i,k+1,k+2)})) + p_{k+1} C̃(Π_{n−k}(X^{n+1}_{−(k+1,k+2)})) − p_i C̃(Π_{n−k}(X^{n+1}_{−(i,k+2)}))]
≤ (1 − p_{k+2}) [(p_i − p_{k+1}) C(Π_{n−k}(X^{n+1}_{−(i,k+1,k+2)})) + p_{k+1} C̃(Π_{n−k}(X^{n+1}_{−(k+1,k+2)})) − p_i C̃(Π_{n−k}(X^{n+1}_{−(i,k+2)}))]   (9)
= (1 − p_{k+2}) [(p_i − p_{k+1}) C(Π_{n−k}(X^{n+1}_{−(i,k+1,k+2)})) + p_{k+1} C̃(Π_{n−k}(X^{n+1}_{−(k+1,k+2)}))
  − p_i [p_{k+1} C(Π_{n−k−1}(X^{n+1}_{−(i,k+1,k+2)})) + (1 − p_{k+1}) C(Π_{n−k}(X^{n+1}_{−(i,k+1,k+2)}))]]   (10)
= (1 − p_{k+2}) p_{k+1} [C̃(Π_{n−k}(X^{n+1}_{−(k+1,k+2)})) − p_i C(Π_{n−k−1}(X^{n+1}_{−(i,k+1,k+2)})) − (1 − p_i) C(Π_{n−k}(X^{n+1}_{−(i,k+1,k+2)}))]
≤ 0.   (11)

Equation (8) follows from the optimal ordering for computing Π_{n−k}(X^{n+1}_{−(i,k+1)}), Π_{n−k}(X^{n+1}_{−(k+1)}) and Π_{n−k}(X^{n+1}_{−i}), which follows from the induction hypothesis for m ≤ n; in each case node k + 2 is the optimal first transmitter. The inequality (9) follows from the induction hypothesis that S(2)_{n,k,i}(X^{n+1}_{−(k+2)}) ≤ 0. Equation (10) and the final inequality (11) follow from the optimal ordering for computing Π_{n−k}(X^{n+1}_{−(i,k+2)}) and Π_{n−k}(X^{n+1}_{−(k+1,k+2)}) respectively, the latter because letting node i transmit first is one admissible choice. ✷

C. Proof of Lemma 3

First, we observe that

T_{n+1,k,i}(X^{n+1}) − S(1)_{n+1,k,i}(X^{n+1}) = p_{k+1} C̃(Π_{n−k}(X^{n+1}_{−(k+1)})) − p_i C̃(Π_{n−k}(X^{n+1}_{−i})) − (p_{k+1} − p_i) C(Π_{n−k}(X^{n+1}_{−(k+1,i)})).

Thus it is enough to show that

p_{k+1} C̃(Π_{n−k}(X^{n+1}_{−(k+1)})) − p_i C̃(Π_{n−k}(X^{n+1}_{−i})) ≤ (p_{k+1} − p_i) C(Π_{n−k}(X^{n+1}_{−(k+1,i)}))

for i ≥ k + 2. Indeed,

p_{k+1} C̃(Π_{n−k}(X^{n+1}_{−(k+1)})) − p_i C̃(Π_{n−k}(X^{n+1}_{−i}))
= p_{k+1} [p_{k+2} C(Π_{n−k−1}(X^{n+1}_{−(k+1,k+2)})) + (1 − p_{k+2}) C(Π_{n−k}(X^{n+1}_{−(k+1,k+2)}))]
  − p_i [p_{k+1} C(Π_{n−k−1}(X^{n+1}_{−(k+1,i)})) + (1 − p_{k+1}) C(Π_{n−k}(X^{n+1}_{−(k+1,i)}))]   (12)
= p_{k+1} [p_{k+2} C(Π_{n−k−1}(X^{n+1}_{−(k+1,k+2)})) − p_i C(Π_{n−k−1}(X^{n+1}_{−(k+1,i)}))]
  + p_{k+1} (1 − p_{k+2}) C(Π_{n−k}(X^{n+1}_{−(k+1,k+2)})) − p_i (1 − p_{k+1}) C(Π_{n−k}(X^{n+1}_{−(k+1,i)}))
≤ p_{k+1} [(1 − p_i) C(Π_{n−k}(X^{n+1}_{−(k+1,i)})) − (1 − p_{k+2}) C(Π_{n−k}(X^{n+1}_{−(k+1,k+2)}))]
  + p_{k+1} (1 − p_{k+2}) C(Π_{n−k}(X^{n+1}_{−(k+1,k+2)})) − p_i (1 − p_{k+1}) C(Π_{n−k}(X^{n+1}_{−(k+1,i)}))   (13)
= (p_{k+1} − p_i) C(Π_{n−k}(X^{n+1}_{−(k+1,i)})).

Equation (12) follows from the optimal order for computing Π_{n−k}(X^{n+1}_{−(k+1)}) and Π_{n−k}(X^{n+1}_{−i}), whose optimal first transmitters are nodes k + 2 and k + 1 respectively. The inequality in (13) follows from the induction hypothesis T_{n,k,i−1}(X^{n+1}_{−(k+1)}) ≤ 0. ✷

D. Proof of Lemma 4

First, we observe that

T_{n+1,k,i}(X^{n+1}) − S(2)_{n+1,k,i}(X^{n+1}) = (1 − p_{k+1}) C̃(Π_{n−k+1}(X^{n+1}_{−(k+1)})) − (1 − p_i) C̃(Π_{n−k+1}(X^{n+1}_{−i})) − (p_i − p_{k+1}) C(Π_{n−k}(X^{n+1}_{−(i,k+1)})).

Thus it is enough to show that

(1 − p_{k+1}) C̃(Π_{n−k+1}(X^{n+1}_{−(k+1)})) − (1 − p_i) C̃(Π_{n−k+1}(X^{n+1}_{−i})) ≤ (p_i − p_{k+1}) C(Π_{n−k}(X^{n+1}_{−(i,k+1)}))

for i ≤ k. Indeed,

(1 − p_{k+1}) C̃(Π_{n−k+1}(X^{n+1}_{−(k+1)})) − (1 − p_i) C̃(Π_{n−k+1}(X^{n+1}_{−i}))
= (1 − p_{k+1}) [p_k C(Π_{n−k}(X^{n+1}_{−(k,k+1)})) + (1 − p_k) C(Π_{n−k+1}(X^{n+1}_{−(k,k+1)}))]
  − (1 − p_i) [p_{k+1} C(Π_{n−k}(X^{n+1}_{−(i,k+1)})) + (1 − p_{k+1}) C(Π_{n−k+1}(X^{n+1}_{−(i,k+1)}))]   (14)
= (1 − p_{k+1}) [(1 − p_k) C(Π_{n−k+1}(X^{n+1}_{−(k,k+1)})) − (1 − p_i) C(Π_{n−k+1}(X^{n+1}_{−(i,k+1)}))]
  + p_k (1 − p_{k+1}) C(Π_{n−k}(X^{n+1}_{−(k,k+1)})) − p_{k+1} (1 − p_i) C(Π_{n−k}(X^{n+1}_{−(i,k+1)}))
≤ (1 − p_{k+1}) [p_i C(Π_{n−k}(X^{n+1}_{−(i,k+1)})) − p_k C(Π_{n−k}(X^{n+1}_{−(k,k+1)}))]
  + p_k (1 − p_{k+1}) C(Π_{n−k}(X^{n+1}_{−(k,k+1)})) − p_{k+1} (1 − p_i) C(Π_{n−k}(X^{n+1}_{−(i,k+1)}))   (15)
= (p_i − p_{k+1}) C(Π_{n−k}(X^{n+1}_{−(i,k+1)})).

Equation (14) follows from the optimal order for computing Π_{n−k+1}(X^{n+1}_{−(k+1)}) and Π_{n−k+1}(X^{n+1}_{−i}), whose optimal first transmitters are nodes k and k + 1 respectively. The inequality in (15) follows from the induction hypothesis T_{n,k−1,i}(X^{n+1}_{−(k+1)}) ≤ 0. ✷