Maximum Selection and Sorting with Adversarial Comparators and an Application to Density Estimation
Jayadev Acharya, Moein Falahatgar, Ashkan Jafarpour, Alon Orlitsky, Ananda Theertha Suresh
Jayadev Acharya, EECS, MIT, [email protected]
Moein Falahatgar, ECE, UCSD, [email protected]
Ashkan Jafarpour, Yahoo Labs, [email protected]
Alon Orlitsky, ECE & CSE, UCSD, [email protected]
Ananda Theertha Suresh, Google Research, [email protected]
September 18, 2018
Abstract
We study maximum selection and sorting of n numbers using pairwise comparators that output the larger of their two inputs if the inputs are more than a given threshold apart, and output an adversarially-chosen input otherwise. We consider two adversarial models: a non-adaptive adversary that decides on the outcomes in advance based solely on the inputs, and an adaptive adversary that can decide on the outcome of each query depending on previous queries and outcomes.

Against the non-adaptive adversary, we derive a maximum-selection algorithm that uses at most 2n comparisons in expectation, and a sorting algorithm that uses at most 2n ln n comparisons in expectation. These numbers are within small constant factors from the best possible. Against the adaptive adversary, we propose a maximum-selection algorithm that uses Θ(n log(1/ǫ)) comparisons to output a correct answer with probability at least 1 − ǫ. The existence of this algorithm affirmatively resolves an open problem of Ajtai, Feldman, Hassidim, and Nelson [AFHN15].

Our study was motivated by a density-estimation problem where, given samples from an unknown underlying distribution, we would like to find a distribution in a known class of n candidate distributions that is close to the underlying distribution in ℓ1 distance. Scheffe's algorithm [DL01] outputs a distribution at an ℓ1 distance at most 9 times the minimum and runs in time Θ(n^2 log n). Using maximum selection, we propose an algorithm with the same approximation guarantee but run time of Θ(n log n).

1 Introduction

Maximum selection and sorting are fundamental operations with wide-spread applications in computing, investment, marketing [AMPP09], decision making [Thu27, Dav63], and sports. These operations are often accomplished via pairwise comparisons between elements, and the goal is to minimize the number of comparisons. For example, one may find the largest of n elements by first comparing two elements and then successively comparing the larger one to a new element.
This simple algorithm takes n − 1 comparisons, which is optimal. Similarly, merge sort sorts n elements using fewer than n log n comparisons, close to the information-theoretic lower bound of log n! = n log n − O(n).

In many applications the comparators may err, and several noisy-comparison models have been studied. One model assumes that when x and y are compared, x is selected as the larger with probability x/(x + y). Observe that the comparison is correct with probability max{x, y}/(x + y) ≥ 1/2. This model was studied, e.g., in [NOS12]. Another model assumes that the output of any comparator gets reversed with probability less than 1/2. Algorithms applying this model for maximum selection were proposed in [AGHB+94] and for ranking in [KK07, BM08].

We consider a third model where, unlike the previous models, the comparison outcome can be adversarial. If the numbers compared are more than a threshold ∆ apart, the comparison is correct, while if they differ by at most ∆, the comparison is arbitrary, and possibly even adversarial.

This model can be partially motivated by physical observations. Measurements are regularly quantized and often adulterated with some measurement noise. Quantities with the same quantized value may therefore be incorrectly compared. In psychophysics, the Weber-Fechner law [Ekm59] stipulates that humans can distinguish between two physical stimuli only when their difference exceeds some threshold (known as the just noticeable difference). And in sports, a judge or a home-team advantage may, even adversarially, sway the outcome of a game between two teams of similar strength, but not between teams of significantly different strengths. Our main motivation for the model derives from the important problem of density estimation and distribution learning.

∗ Part of this paper appeared in [AJOS14].
In a typical PAC-learning setup [Val84, KMR+94], we are given samples from an unknown distribution p in a known distribution class P and would like to find, with high probability, a distribution p̂ ∈ P such that ‖p̂ − p‖1 < δ. One standard approach proceeds in two steps [DL01].

1. Offline, construct a δ-cover of P: a finite collection P_δ ⊆ P of distributions such that for any distribution p ∈ P, there is a distribution q ∈ P_δ such that ‖p − q‖1 < δ.

2. Using the samples from p, find a distribution in P_δ whose ℓ1 distance to p is close to the ℓ1 distance of the distribution in P_δ that is closest to p.

These two steps output a distribution whose ℓ1 distance from p is close to δ. Surprisingly, for several common distribution classes, such as Gaussian mixtures, the number of samples required by this generic approach matches the information-theoretically optimal sample complexity, up to logarithmic factors [DK14, SOAJ14, DKK+16].

The Scheffe algorithm finds a distribution in P_δ with a small ℓ1 distance from p. It takes every pair of distributions in P_δ and uses the samples from p to decide which of the two distributions is closer to p. It then declares the distribution that "wins" the most pairwise closeness comparisons to be the nearly-closest to p. As shown in [DL01], with high probability, the Scheffe algorithm yields a distribution that is at most 9 times further from p than the distribution in P_δ with the lowest ℓ1 distance from p, plus a diminishing additive term; hence it finds a distribution that is roughly 9δ away from p. Since this algorithm compares every pair of distributions in P_δ, it uses a number of comparisons quadratic in |P_δ|. In Section 6, we use maximum-selection results to derive an algorithm with the same approximation guarantee but with a number of comparisons linear in |P_δ|.

The paper is organized as follows. In Section 2 we define the problem and introduce the notation, in Section 3 we summarize the results, in Section 4 we derive simple bounds and describe the performance of simple algorithms, and in Section 5 we present our main maximum-selection algorithms. The relation between the density-estimation problem and our comparison model is discussed in Section 6, and in Section 7 we discuss sorting with adversarial comparators.
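As an illustration of the Scheffe test described above, here is a minimal sketch over discrete distributions represented as Python dicts. The function names and the finite-support setting are our own simplifications, not the paper's formulation:

```python
from collections import Counter

def scheffe_test(p, q, samples, support):
    # Scheffe test for one pair: on A = {x : p(x) > q(x)}, the candidate
    # whose probability of A is closer to the empirical frequency of A wins.
    A = [x for x in support if p.get(x, 0.0) > q.get(x, 0.0)]
    counts = Counter(samples)
    emp_A = sum(counts[x] for x in A) / len(samples)
    p_A = sum(p.get(x, 0.0) for x in A)
    q_A = sum(q.get(x, 0.0) for x in A)
    return p if abs(p_A - emp_A) <= abs(q_A - emp_A) else q

def scheffe_tournament(candidates, samples, support):
    # Quadratic Scheffe algorithm: run every pairwise test and output
    # the candidate with the most wins.
    wins = [0] * len(candidates)
    for i in range(len(candidates)):
        for j in range(i + 1, len(candidates)):
            winner = scheffe_test(candidates[i], candidates[j], samples, support)
            wins[i if winner is candidates[i] else j] += 1
    return candidates[max(range(len(candidates)), key=wins.__getitem__)]
```

Replacing the all-pairs loop by the maximum-selection algorithms of Section 5 is exactly what reduces the comparison count from quadratic to linear in the number of candidates.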
2 Notation

Practical applications call for sorting or selecting the maximum of not just numbers, but rather of items with associated values: for example, finding the person with the highest salary, the product with the lowest price, or the sports team most capable of winning. Associate with each item i a real value x_i, and let X def= {x_1, . . . , x_n} be the multiset of values. In maximum selection, we use noisy pairwise comparisons to find an index i such that x_i is close to the largest element x∗ def= max{x_1, . . . , x_n}.

Formally, a faulty comparator C takes two distinct indices i and j, and if |x_i − x_j| > ∆, outputs the index associated with the higher value, while if |x_i − x_j| ≤ ∆, outputs either i or j, possibly adversarially. Without loss of generality, we assume that ∆ = 1. Then,

    C(i, j) = { arg max{x_i, x_j}       if |x_i − x_j| > 1,
              { i or j (adversarially)  if |x_i − x_j| ≤ 1.

It is easier to think just of the numbers, rather than the indices. Therefore, informally we will simply view the comparators as taking two real inputs x_i and x_j, and outputting

    C(x_i, x_j) = { max{x_i, x_j}             if |x_i − x_j| > 1,    (1)
                  { x_i or x_j (adversarially) if |x_i − x_j| ≤ 1.

We consider two types of adversarial comparators: non-adaptive and adaptive.

• A non-adaptive adversarial comparator has complete knowledge of X and the algorithm, but must fix its outputs for every pair of inputs before the algorithm starts.

• An adaptive adversarial comparator not only has access to the algorithm and the inputs, but is also allowed to adaptively decide the outcomes of the queries, taking into account all the previous queries made by the algorithm.

A non-adaptive comparator can be naturally represented by a directed graph with n nodes representing the n indices. There is an edge from node i to node j if the comparator declares x_i to be larger than x_j, namely, C(x_i, x_j) = x_i.
Figure 1 is an example of such a comparator, where for simplicity we show only the values 0, 1, 1, 2, and not the indices. Note that by definition, C(2, 0) = 2, but for all the other pairs, the outputs can be decided by the comparator. In this example, the comparator declares the node with value 2 as the "winner" against the right node with value 1, but as the "loser" against the left node, also with value 1. Among the two nodes with value 1, it arbitrarily declares the left one as the winner. An adaptive adversary reveals the edges one by one as the algorithm proceeds.
Figure 1: Comparator for four inputs with values {0, 1, 1, 2}
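The directed-graph view of a non-adaptive comparator can be made concrete as follows. This is a sketch with ∆ = 1; the near-tie outcomes below are one arbitrary choice in the spirit of Figure 1 (the figure's exact edges are not fully recoverable here), while the comparison of the values 2 and 0 is forced to be truthful:

```python
class NonAdaptiveComparator:
    """Non-adaptive adversarial comparator over fixed values (threshold 1).

    Near-tie outcomes are fixed in advance in a table, which is exactly
    the directed-graph representation: tie_winners[{i, j}] = i means
    there is an edge from node i to node j, i.e., C(i, j) = i."""

    def __init__(self, values, tie_winners):
        self.values = values
        self.tie_winners = tie_winners
        self.queries = 0

    def compare(self, i, j):
        self.queries += 1
        xi, xj = self.values[i], self.values[j]
        if abs(xi - xj) > 1:              # far apart: must answer truthfully
            return i if xi > xj else j
        return self.tie_winners[frozenset((i, j))]   # adversarial edge

# Values 0, 1, 1, 2 at indices 0..3, as in Figure 1.
vals = [0, 1, 1, 2]
edges = {
    frozenset({0, 1}): 1,
    frozenset({0, 2}): 2,
    frozenset({1, 2}): 1,   # left 1 beats right 1
    frozenset({1, 3}): 1,   # the 2 loses to the left 1 ...
    frozenset({2, 3}): 3,   # ... but beats the right 1
}
C = NonAdaptiveComparator(vals, edges)
```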
We refer to each comparison as a query. The number of queries an algorithm A makes for X = {x_1, . . . , x_n} is its query complexity, denoted by Q^A_n. Our algorithms are randomized and Q^A_n is a random variable. The expected query complexity of A for the input X is

    q^A_n def= E[Q^A_n],

where the expectation is over the randomness of the algorithm. Let C_non(X), or simply C_non, be the set of all non-adaptive adversarial comparators, and let C_adpt be the set of all adaptive adversarial comparators. The maximum expected query complexity of A against non-adaptive adversarial comparators is

    q^{A,non}_n def= max_{C ∈ C_non} max_X q^A_n.    (2)

Similarly, the maximum expected query complexity of A against adaptive adversarial comparators is

    q^{A,adpt}_n def= max_{C ∈ C_adpt} max_X q^A_n.

We evaluate an algorithm by how close its output is to x∗, the maximum of X.

Definition 1.
A number x is a t-approximation of x∗ if x ≥ x∗ − t. The t-approximation error of an algorithm A over n inputs is

    E^A_n(t) def= Pr(Y_A(X) < x∗ − t),

the probability that A's output Y_A(X) is not a t-approximation of x∗.¹ For an algorithm A, the maximum t-approximation error for the worst non-adaptive adversary is

    E^{A,non}_n(t) def= max_{C ∈ C_non} max_X E^A_n(t),

and similarly for the adaptive adversary,

    E^{A,adpt}_n(t) def= max_{C ∈ C_adpt} max_X E^A_n(t).

For the non-adaptive adversary, the minimum t-approximation error of any algorithm is

    E^non_n(t) def= min_A E^{A,non}_n(t),

and similarly for the adaptive adversary,

    E^adpt_n(t) def= min_A E^{A,adpt}_n(t).

Since adaptive adversarial comparators are stronger than non-adaptive ones, for all t, E^adpt_n(t) ≥ E^non_n(t). The next example shows that E^non_3(t) ≥ 1/3 for all t < 1.

Example 2. E^non_3(t) ≥ 1/3 for all t < 1. Consider X = {0, 1, 1} and a comparator whose three pairwise outcomes form a cycle. By symmetry, no algorithm can differentiate between the three inputs, hence any algorithm will output 0 with probability 1/3.

¹ This is a slight abuse of notation, suppressing X.

3 Previous and new results
In Section 4.1 we lower bound E^non_n(t) as a function of t. In Lemma 3, we show that for all t < 1, E^non_n(t) ≥ 1 − 1/n; namely, for some X, approximating the maximum to within less than one is equivalent to guessing a random x_i as the maximum. In Lemma 4, we modify Example 2 and show that for all t < 2, any algorithm has t-approximation error close to 1/2.

In Section 4.2 we consider two simple algorithms: the complete tournament, denoted compl, and sequential selection, denoted seq. Algorithm compl compares all the possible input pairs, and declares the input with the most number of wins as the maximum. We show the simple result that compl almost surely outputs a 2-approximation of x∗. We then consider the algorithm seq that compares a pair of inputs, discards the loser, and compares the winner with a new input. We show that even under random selection of the inputs, there exist inputs such that, with high probability, seq cannot provide a constant approximation to x∗.

We then consider more advanced algorithms. The knock-out algorithm, at each stage, pairs the inputs at random, and keeps the winners of the comparisons for the next stage. We design a slight modification of this algorithm, denoted ko-mod, that achieves a 3-approximation with error probability at most ǫ, even against adaptive adversarial comparators. We note that [AFHN15] proposed a different algorithm with similar performance guarantees.

Motivated by quick-sort, we propose a quick-select algorithm q-select that outputs a 2-approximation with zero error probability. It has an expected query complexity of at most 2n against the non-adaptive adversary. However, in Example 12, we see that this algorithm requires n(n − 1)/2 queries against the adaptive adversary.

This leaves the question of whether there is a randomized algorithm for 2-approximation of x∗ with O(n) queries against the adaptive adversary. In fact, [AFHN15] pose this as an open question. We resolve this problem by designing an algorithm comb that combines quick-select and knock-out.
We prove that comb outputs a 2-approximation with probability of error at most ǫ, using O(n log(1/ǫ)) queries. We summarize the results in Table 1.

algorithm | notation | approximation | q^{A,non}_n | q^{A,adpt}_n
complete tournament | compl | E^{compl,adpt}_n(2) = 0 | n(n − 1)/2 | n(n − 1)/2
deterministic upper bound [AFHN15] | - | E^{A,adpt}_n(2) = 0 | - | O(n^{3/2})
deterministic lower bound [AFHN15] | - | E^{A,adpt}_n(2) = 0 | - | Ω(n^{4/3})
sequential | seq | E^{seq,non}_n(log n / log log n − 1) → 1 | n − 1 | n − 1
modified knock-out | ko-mod | E^{ko-mod,adpt}_n(3) < ǫ | - | < n + 2 log^4 n · ⌈(2/ǫ) ln(1/ǫ)⌉^2
quick-select | q-select | E^{q-select,adpt}_n(2) = 0 | < 2n | n(n − 1)/2
knock-out and quick-select combination | comb | E^{comb,adpt}_n(2) < ǫ | - | O(n log(1/ǫ))

Table 1: Maximum selection algorithms

We note that while we focus on randomized algorithms, [AFHN15] also studied the best possible trade-offs for deterministic algorithms. They designed a deterministic algorithm for 2-approximation of the maximum using only O(n^{3/2}) queries. Moreover, they proved that no deterministic algorithm with fewer than Ω(n^{4/3}) queries can output a 2-approximation of x∗ in the adaptive adversarial model.

4 Simple results
In Lemmas 3 and 4 we prove lower bounds on the error probability of any algorithm that provides a t-approximation of x∗, for t < 1 and for 1 ≤ t < 2 respectively. We show the following two lower bounds.

• E^non_n(t) ≥ 1 − 1/n for all 0 ≤ t < 1.
• E^non_n(t) ≥ 1/2 − 1/n for all 1 ≤ t < 2.

These lower bounds, stated for odd n, can be applied to even n by adding an extra input that is smaller than all the others and loses to everyone.

Lemma 3.
For all 0 ≤ t < 1 and odd n, E^non_n(t) ≥ 1 − 1/n.

Proof.
Let (x_1, x_2, . . . , x_n) be an unknown permutation of (1, 0, . . . , 0) with n − 1 zeros. Consider an adversary that ensures each input wins exactly (n − 1)/2 of its comparisons; Figure 2 shows such a tournament for n = 5.

Figure 2: Tournament for Lemma 3 when n = 5

We want a lower bound on the performance of any randomized algorithm. By Yao's principle, we may consider only deterministic algorithms over a uniformly chosen permutation of the inputs; namely, only one of the coordinates is 1, and the remaining ones are 0, which is less than 1 − t. If we fix any such comparison graph (as in the figure above) and permute the inputs, the algorithm cannot distinguish the 1 from the 0's, and outputs a 0 with probability 1 − 1/n.

Lemma 4.
For all 1 ≤ t < 2 and odd n, E^non_n(t) ≥ 1/2 − 1/n.

Proof.
Let m = (n − 1)/2, and let (x_1, x_2, . . . , x_n) be an unknown permutation of (2, 1, . . . , 1, 0, . . . , 0) with m ones and m zeros. Suppose the adversary ensures that the 2 loses against all the 1's, so that every input wins exactly (n − 1)/2 of its comparisons. As in Lemma 3, the algorithm cannot distinguish the inputs, and for t < 2 it errs whenever it outputs a 0, which happens with probability m/n = 1/2 − 1/(2n) ≥ 1/2 − 1/n.

Figure 3: Tournament for Lemma 4 when n = 5

In this section, we analyze two familiar maximum-selection algorithms, the complete tournament and sequential selection. We discuss their strengths and weaknesses, and show that there is a trade-off between the query complexity and the approximation guarantees of these two algorithms. Another well-known algorithm for maximum selection is the knock-out algorithm, and we discuss a variant of it in Section 5.1.
As its name evinces, a complete tournament involves a match between every pair of teams. Using this metaphor, we compare all n(n − 1)/2 input pairs, and the input winning the maximum number of times is declared the output. If two or more inputs end up with the highest number of wins, any of them can be declared the output. The algorithm is formally stated in compl.

input: X
compare all input pairs in X, count the number of times each input wins
output: an input with the maximum number of wins

Algorithm compl - Complete tournament

The next lemma shows that this algorithm gives a 2-approximation against both adversaries. The result, although weaker than the deterministic guarantees of [AFHN09], is illustrative and useful in the algorithms proposed later.
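The complete tournament can be sketched directly from the pseudocode. The comparator below resolves every near tie in favor of the smaller value; it is one simple non-adaptive adversary of our own choosing:

```python
def compl(X, compare):
    # Complete tournament: query all n(n-1)/2 pairs of indices and return
    # the index with the most wins (ties broken arbitrarily).
    n = len(X)
    wins = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            wins[compare(i, j)] += 1
    return max(range(n), key=wins.__getitem__)

def smaller_wins_ties(X):
    # Non-adaptive adversary: truthful when values differ by more than 1,
    # otherwise the smaller value is declared the winner.
    def compare(i, j):
        if abs(X[i] - X[j]) > 1:
            return i if X[i] > X[j] else j
        return i if X[i] <= X[j] else j
    return compare

X = [0.0, 1.0, 1.9, 2.5, 2.6]
winner = compl(X, smaller_wins_ties(X))
```

On this input the winner is not the true maximum 2.6, but by Lemma 5 its value is always at least max(X) − 2.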
Lemma 5. q^{compl,adpt}_n = n(n − 1)/2 and E^{compl,adpt}_n(2) = 0.

Proof. The number of queries is clearly n(n − 1)/2. To show that E^{compl,adpt}_n(2) = 0, note that if y < x∗ − 2, then any z that y wins over satisfies z ≤ y + 1 < x∗ − 1, and therefore x∗ also beats z. Since x∗ additionally wins over y, it wins over more inputs than y, and hence y cannot be the output of the algorithm. It follows that the input with the maximum number of wins is a 2-approximation of x∗.

compl is deterministic, and after n(n − 1)/2 queries it outputs a 2-approximation of x∗. If the comparators are noiseless, we can simply compare the inputs sequentially, discarding the loser at each step, thus requiring only n − 1 comparisons to find x∗. One might hope for a deterministic 2-approximation algorithm with linear query complexity in the adversarial setting as well. As mentioned earlier, [AFHN15] showed it is not achievable: they proved that any deterministic 2-approximation algorithm requires Ω(n^{4/3}) queries, and they also showed a strictly superlinear lower bound on any deterministic constant-approximation algorithm. On the other hand, they designed a deterministic 2-approximation algorithm using O(n^{3/2}) queries.

Sequential selection first compares a random pair of inputs, and at each successive step compares the winner of the last comparison with a randomly chosen new input. It outputs the final remaining input. This algorithm uses n − 1 queries.

input: X
choose a random y ∈ X and remove it from X
while X is not empty
    choose a random x ∈ X and remove it from X
    y ← C(x, y)
end while
output: y

Algorithm seq - Sequential selection

The next lemma shows that even against non-adaptive adversarial comparators, the algorithm cannot output a constant approximation of x∗.

Lemma 6.
Let s = log n / log log n. For all t < s, E^{seq,non}_n(t) ≥ 1 − 1/log log n.

Proof.
Assume that s, log n, and log log n are integers, and let

    x_i = s      for i = 1,
          s − 1  for i = 2, . . . , r,
          s − 2  for i = r + 1, . . . , r^2,
          ...
          m      for i = r^{s−m−1} + 1, . . . , r^{s−m},
          ...
          0      for i = r^{s−1} + 1, . . . , r^s,

where r = log n, so that r^s = n. Consider the following non-adaptive adversarial comparator:

    C(x_i, x_j) = { max{x_i, x_j}  if |x_i − x_j| > 1,    (3)
                  { min{x_i, x_j}  if |x_i − x_j| ≤ 1.

The sequential algorithm takes a random permutation of the inputs. It then starts by comparing the first two elements, and then sequentially compares the winner with the next element, and so on. Let L_j be the location in the permutation where an input with value j appears for the last time. The next two observations follow from the construction of the inputs and of the comparators, respectively.

Observation 1. The value j appears at least (log n − 1) times more often than the value j + 1.

Observation 2. For the adversarial comparator defined in (3), if L_0 > L_1 > · · · > L_s, then seq outputs 0.

As a consequence of Observation 1, in a random permutation of the inputs, L_j > L_{j+1} with probability at least 1 − 1/log n. By the union bound, L_0 > L_1 > · · · > L_s with probability at least 1 − s/log n = 1 − 1/log log n. By Observation 2, seq therefore outputs 0 with probability at least 1 − 1/log log n.

5 Algorithms
In the previous section we saw that the complete tournament, compl, always outputs a 2-approximation but has quadratic query complexity, while sequential selection, seq, has linear query complexity but a poor approximation guarantee. A natural question is whether the benefits of these two algorithms can be combined to obtain bounded error with linear query complexity. In this section, we propose algorithms with linear query complexity and approximation guarantees that compete with the best possible, i.e., x∗. We propose three algorithms, with varying performance guarantees:

• Modified knock-out, described in Section 5.1, has linear query complexity and with high probability outputs a 3-approximation of x∗ against both the adaptive and non-adaptive adversaries.

• Quick-select, described in Section 5.2, outputs a 2-approximation of x∗ (against both adversaries). It also has linear expected query complexity against non-adaptive adversarial comparators.

• Knock-out and quick-select combination, described in Section 5.3, has linear query complexity, and with high probability outputs a 2-approximation of x∗ even against adaptive adversarial comparators.

We now go over these algorithms in detail.

5.1 Modified knock-out

For simplicity, in this section we assume that log n is an integer. The knock-out algorithm derives its name from knock-out competitions where the tournament is divided into log n successive rounds. In each round the inputs are paired at random and the winners advance to the next round; therefore, in round i there are n/2^{i−1} inputs. The winner at the end of log n rounds is declared the maximum.

Under our adversarial model, at each round of the knock-out algorithm the largest remaining input decreases by at most one. Therefore, the knock-out algorithm finds at least a (log n)-approximation of x∗. Analyzing the precise approximation error of the knock-out algorithm appears to be difficult; however, simulations suggest that for large n there exist inputs for which it fails to output a constant approximation of x∗ with positive constant probability. The problem with the knock-out algorithm is that if, at any of the log n rounds, many inputs are within 1 of the largest input at that round, there is a fair chance that the largest input will be eliminated. If this elimination happens in several rounds, we end up with a number significantly smaller than x∗.

To circumvent the problem of discarding large inputs, we select a specified number of inputs at each round and save them for the very end, thereby ensuring that at every round, if the largest input is eliminated, then an input within 1 of it has been saved. We then perform a complete tournament on these saved inputs. The algorithm is explained in ko-mod.

input: X
pair the inputs of X randomly, let X′ be the winners
output: X′

Algorithm ko-sub - Subroutine for ko-mod and comb
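A round of ko-sub can be sketched as follows; the handling of an odd-sized multiset (letting the unpaired input advance) is our assumption, since the pseudocode leaves that case implicit:

```python
import random

def ko_sub(X, compare):
    # One knock-out round: pair the inputs of X at random and keep the
    # winners. compare(x, y) returns the value declared larger.
    X = list(X)
    random.shuffle(X)
    winners = [compare(X[k], X[k + 1]) for k in range(0, len(X) - 1, 2)]
    if len(X) % 2 == 1:          # assumption: an unpaired input advances
        winners.append(X[-1])
    return winners

def adversary(x, y):
    # Near ties (difference at most 1) are resolved toward the smaller value.
    return max(x, y) if abs(x - y) > 1 else min(x, y)
```

Whatever the pairing, the largest remaining value drops by at most 1 per round: the maximum either wins its match or loses a near tie to a value within 1 of it.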
In Theorem 7, we show that ko-mod has 3-approximation error less than ǫ. We first explain the algorithm, and then state the result. Let n_1 = ⌈(2/ǫ) ln(1/ǫ) · log n⌉. At each round, we add n_1 of the remaining inputs, chosen at random, to the multiset Y, and run the knock-out subroutine ko-sub on the multiset X. When |X| ≤ n_1, we perform a complete tournament on X ∪ Y and declare its winner the output.

input: X, ǫ
Y = ∅, n_1 = ⌈(2/ǫ) ln(1/ǫ) · log n⌉
while |X| > n_1
    randomly choose n_1 inputs from X and copy them to Y
    X ← ko-sub(X)
end while
output: compl(X ∪ Y)

Algorithm ko-mod - Modified knock-out algorithm

We show that, with probability greater than 1 − ǫ, an input that is a 1-approximation of x∗ remains in X ∪ Y. Since the complete tournament outputs a 2-approximation of its maximum input, ko-mod outputs a 3-approximation of x∗ with probability greater than 1 − ǫ.

Theorem 7.
For n ≥ 2, we have

    q^{ko-mod,adpt}_n < n + 2 log^4 n · ⌈(2/ǫ) ln(1/ǫ)⌉^2   and   E^{ko-mod,adpt}_n(3) < ǫ.

Proof. The number of comparisons made by the calls to ko-sub is at most n/2 + n/4 + n/8 + · · · < n. Observe that ko-sub is called m def= ⌈log(n/n_1)⌉ times. Let X_i be the multiset X at the start of the i-th call to ko-sub, and let X_{m+1} and Y_{m+1} be the multisets X and Y right before calling compl. Then

    |X_{m+1} ∪ Y_{m+1}| ≤ |X_{m+1}| + |Y_{m+1}|
                        ≤ n_1 + Σ_{i=1}^m (|Y_{i+1}| − |Y_i|)
                        ≤ n_1 + m·n_1
                        = (⌈log(n/n_1)⌉ + 1) · ⌈(2/ǫ) ln(1/ǫ) · log n⌉
                        ≤ 2 log^2 n · ⌈(2/ǫ) ln(1/ǫ)⌉,

where the last inequality follows as n ≥ 2 and n_1 is an integer. Since the complete tournament is quadratic in the input size, the total number of queries is at most n + 2 log^4 n · ⌈(2/ǫ) ln(1/ǫ)⌉^2.

Next, we bound the error of ko-mod. Let

    X∗ def= {x ∈ X : x ≥ x∗ − 1}

be the multiset of all inputs that are at least x∗ − 1. For i ≤ m + 1, let X∗_i = X_i ∩ X∗ and Y∗_{m+1} = Y_{m+1} ∩ X∗. Let α_i def= |X∗_i| / |X_i| and α = max{α_1, α_2, . . . , α_m}. We show that with high probability |X∗_{m+1} ∪ Y∗_{m+1}| ≥ 1, i.e., some input in X_{m+1} ∪ Y_{m+1} belongs to X∗. In particular, we show that with probability 1 − ǫ, for large α, |Y∗_{m+1}| > 0, and for small α, x∗ ∈ X_{m+1}. Observe that

    Pr(x∗ ∉ X_{m+1}) = Σ_{i=1}^m Pr(x∗ ∉ X_{i+1} | x∗ ∈ X_i) · Pr(x∗ ∈ X_i)
                     ≤ Σ_{i=1}^m Pr(x∗ ∉ X_{i+1} | x∗ ∈ X_i)
                  (a)≤ Σ_{i=1}^m (|X∗_i| − 1)/(|X_i| − 1)
                     ≤ Σ_{i=1}^m α_i
                     ≤ αm,

where (a) follows since at round i, ko-sub randomly pairs the inputs and only inputs in X∗_i \ {x∗} are able to eliminate x∗. Next we bound Pr(|Y∗_{m+1}| = 0). At round i, the probability that no input of X∗ is picked for Y is

    (|X_i| − |X∗_i| choose n_1) / (|X_i| choose n_1) ≤ (1 − |X∗_i|/|X_i|)^{n_1} = (1 − α_i)^{n_1}.

Therefore,

    Pr(|Y∗_{m+1}| = 0) ≤ Π_{i=1}^m (1 − α_i)^{n_1} ≤ min_i (1 − α_i)^{n_1} = (1 − α)^{n_1}.

As a result,

    Pr(|X∗_{m+1} ∪ Y∗_{m+1}| = 0) = Pr(|X∗_{m+1}| = 0 ∧ |Y∗_{m+1}| = 0)
        ≤ Pr(x∗ ∉ X_{m+1} ∧ |Y∗_{m+1}| = 0)
        ≤ max_α min{Pr(x∗ ∉ X_{m+1}), Pr(|Y∗_{m+1}| = 0)}
        ≤ max_α min{αm, (1 − α)^{n_1}}
     (a)≤ max{αm, (1 − α)^{n_1}} evaluated at α = ǫ/(2 log n)
        = max{ǫm/(2 log n), (1 − ǫ/(2 log n))^{n_1}}
     (b)< ǫ,

where (a) follows since the first argument of the min increases and the second decreases with α, and (b) follows since m ≤ log n and n_1 = ⌈(2/ǫ) ln(1/ǫ) · log n⌉.

So far, we have shown that with probability 1 − ǫ there exists a 1-approximation of x∗ in X_{m+1} ∪ Y_{m+1}. By Lemma 5, compl outputs a 2-approximation of the maximum of its input. Consequently, with probability 1 − ǫ, ko-mod outputs a 3-approximation of x∗.

In Appendix A, we show that ko-mod cannot output better than a 3-approximation of x∗ with constant probability. We end this subsection with the following open question.

Open Question 8.
What is the best approximation that the simple knock-out algorithm can achieve?

5.2 Quick-select

Motivated by quick-sort, we propose a quick-select algorithm q-select that at each round compares all the inputs with a random pivot, to provide stronger performance guarantees against the non-adaptive adversary.

input: X
pick a pivot x_p ∈ X at random
compare x_p with all other inputs in X
let Y ⊆ X \ {x_p} be the multiset of inputs that beat x_p
output: if Y ≠ ∅ output Y, otherwise output {x_p}

Algorithm qs-sub - Subroutine for q-select and comb

input: X
while |X| > 1
    X ← qs-sub(X)
end while
output: the unique input in X

Algorithm q-select - Quick-select

We show that q-select provides a 2-approximation with no error against both the adaptive and non-adaptive adversaries. To see this, observe that x∗ can only be eliminated if a 1-approximation of x∗ is chosen as the pivot, and therefore only inputs that are 2-approximations of x∗ survive.

Lemma 9. E^{q-select,adpt}_n(2) = 0.

Proof. If the output is x∗, the lemma holds. Otherwise, x∗ was discarded either when it was chosen as a pivot or when it was compared with a pivot. Let x_p be the pivot when x∗ is discarded; hence x_p ≥ x∗ − 1. By the algorithm's definition, all the surviving inputs are at least x_p − 1 ≥ x∗ − 2.

We next show that the expected query complexity of q-select against a non-adaptive adversary is at most 2n. This result follows from the observation that the non-adaptive adversary fixes the comparison graph from the start, and hence a random pivot wins against half of the inputs in expectation. This idea is made rigorous in the proof of Lemma 10. We finally consider an example for which q-select requires n(n − 1)/2 queries against the adaptive adversary.

Lemma 10. q^{q-select,non}_n < 2n.

Proof. Recall that the non-adaptive adversary can be modeled as a complete directed graph where each node is an input and there is an edge from x to y if C(x, y) = x. Let in(x) be the in-degree of x. At each round the algorithm chooses a pivot x_p at random and compares it to all the remaining inputs. Keeping the winners, max{in(x_p), 1} inputs remain for the next round. As a result, we have the following recursion for non-adaptive adversaries:

    q^{q-select}_n = E[Q^{q-select}_n] = n − 1 + (1/n) Σ_{i=1}^n E[Q^{q-select}_{in(x_i)}] = n − 1 + (1/n) Σ_{i=1}^n q^{q-select}_{in(x_i)}.
By (2),

    q^{q-select,non}_n = max_{C ∈ C_non} max_X q^{q-select}_n    (4)
        = max_{C ∈ C_non} max_X [ n − 1 + (1/n) Σ_{i=1}^n q^{q-select}_{in(x_i)} ]
        ≤ n − 1 + (1/n) Σ_{i=1}^n max_{C ∈ C_non} max_X q^{q-select}_{in(x_i)}
        = n − 1 + (1/n) Σ_{i=1}^n q^{q-select,non}_{in(x_i)},

where the inequality follows as the maximum of sums is at most the sum of maximums. We prove by strong induction that q^{q-select,non}_n ≤ 2(n − 1). The claim clearly holds for n = 1. Suppose it holds for all n′ < n; then

    q^{q-select,non}_n ≤ n − 1 + (1/n) Σ_{i=1}^n q^{q-select,non}_{in(x_i)}
        ≤ n − 1 + (1/n) Σ_{i=1}^n 2 · in(x_i)
        = n − 1 + (2/n) · n(n − 1)/2
        = 2(n − 1),

where the equality follows since the in-degrees sum to n(n − 1)/2.

Lemma 10 shows that q^{q-select,non}_n < 2n. Next, we show a naive concentration bound for the query complexity of q-select. By Markov's inequality, for a non-adaptive adversary,

    Pr(Q^{q-select}_n > 4n) ≤ 1/2.

Let k be an integer multiple of 4, and suppose we run q-select, allowing kn queries. Within every 4n queries, q-select terminates with probability at least 1/2. Therefore,

    Pr(Q^{q-select}_n > kn) ≤ 2^{−k/4}.

This naive bound is exponential in k. The next lemma shows a tighter, super-exponential concentration bound on the query complexity of the algorithm beyond its expectation. We defer the proof to Appendix B.

Lemma 11.
Let k′ = max{e, k/2}. For a non-adaptive adversary,

    Pr(Q^{q-select}_n > kn) ≤ e^{−(k − k′) ln k′}.

While q-select has linear expected query complexity under the non-adaptive adversarial model, the following example, suggested to us by Jelani Nelson [Nel15], shows that it has quadratic query complexity against an adaptive adversary.
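Before turning to the adaptive example, qs-sub and q-select can be sketched as follows. This is a value-based sketch (the paper's comparators act on indices), run here against the fixed smaller-wins-ties adversary used earlier:

```python
import random

def qs_sub(X, compare):
    # One quick-select round: pick a random pivot, compare it with every
    # other input, and keep the inputs that beat the pivot; if nobody
    # beats the pivot, keep only the pivot.
    p = random.randrange(len(X))
    Y = [x for k, x in enumerate(X) if k != p and compare(x, X[p]) == x]
    return Y if Y else [X[p]]

def q_select(X, compare):
    # Quick-select: repeat qs-sub until a single input remains.
    X = list(X)
    while len(X) > 1:
        X = qs_sub(X, compare)
    return X[0]

def adversary(x, y):
    # Near ties are resolved toward the smaller value.
    return max(x, y) if abs(x - y) > 1 else min(x, y)
```

By Lemma 9, the output is always within 2 of the maximum, whatever the comparator's tie choices and the random pivots.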
Example 12.
Let X = {0, 0, . . . , 0}. At each round, the adversary declares the pivot to be smaller than all the other inputs. Consequently, only the pivot is eliminated, and the query complexity is n(n − 1)/2.

5.3 Knock-out and quick-select combination

ko-mod has the benefit of reducing the number of inputs exponentially at each round, thereby maintaining linear query complexity, but it guarantees only a 3-approximation. On the other hand, q-select guarantees a 2-approximation with zero error, but requires n(n − 1)/2 queries on some inputs against the adaptive adversary. In comb we combine the benefits of these algorithms and avoid their shortcomings. By carefully repeating qs-sub, we try to reduce the number of inputs by a constant fraction at each round while keeping the largest element in the remaining set. If the number of inputs is not reduced by that fraction, most of them must be close to each other; in that case, repeating ko-sub a sufficient number of times and keeping the inputs with the highest number of wins guarantees a reduction of the input size without worsening the approximation error. Our final algorithm comb provides a 2-approximation of x∗ even against the adaptive adversarial comparator, and has linear query complexity, thereby resolving an open question of [AFHN15].

input: X, ǫ
β₁ = 9, β₂ = 25, i = 0
while |X| > 1
    i = i + 1 (i is the round)
    n_i = |X|
    run X ← qs-sub(X) for ⌊β₁ log(1/ǫ)⌋ times
    X_i = X
    if |X_i| > (2/3) n_i
        run ko-sub on fixed X for ⌊β₂ (6/5)^i log(1/ǫ)⌋ times
        if there exists an input with > (3/4) ⌊β₂ (6/5)^i log(1/ǫ)⌋ wins
            let X be the multiset of inputs with > (3/4) ⌊β₂ (6/5)^i log(1/ǫ)⌋ wins
        else
            let X be an input with the highest number of wins
end while
output: X

Algorithm comb - Knock-out and quick-select combination

We begin the analysis of the algorithm with a few simple lemmas.
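Before the analysis, the comb pseudocode can be transcribed as a sketch. The constants β₁ = 9, β₂ = 25, the (6/5)^i repetition growth, and the 3/4 win threshold follow the reconstruction above and should be treated as indicative rather than authoritative; qs_sub is as in Section 5.2:

```python
import random
from math import floor, log2

def qs_sub(X, compare):
    # One quick-select round (Section 5.2).
    p = random.randrange(len(X))
    Y = [x for k, x in enumerate(X) if k != p and compare(x, X[p]) == x]
    return Y if Y else [X[p]]

def comb(X, eps, compare):
    beta1, beta2 = 9, 25
    L = log2(1 / eps)
    X = list(X)
    i = 0
    while len(X) > 1:
        i += 1
        n_i = len(X)
        for _ in range(floor(beta1 * L)):            # quick-select phase
            if len(X) > 1:
                X = qs_sub(X, compare)
        if len(X) > (2 / 3) * n_i:                   # not reduced enough:
            reps = floor(beta2 * (6 / 5) ** i * L)   # knock-out phase on fixed X
            wins = [0] * len(X)
            for _ in range(reps):
                idx = list(range(len(X)))
                random.shuffle(idx)
                for a, b in zip(idx[::2], idx[1::2]):
                    w = a if compare(X[a], X[b]) == X[a] else b
                    wins[w] += 1
            keep = [k for k in range(len(X)) if wins[k] > 0.75 * reps]
            if keep:
                X = [X[k] for k in keep]
            else:
                X = [X[max(range(len(X)), key=wins.__getitem__)]]
    return X[0]
```

When all inputs are spaced more than 1 apart, every comparison is forced to be truthful and the sketch returns the exact maximum.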
Lemma 13.
At each round, |X| is reduced by at least a third, i.e., n_{i+1} ≤ (2/3) n_i.

Proof. If at some round |X_i| ≤ (2/3) n_i, then the lemma holds and the algorithm does not call ko-sub. On the other hand, if ko-sub is called, then since each input wins on average at most half of its queries, by Markov's inequality at most 2/3 of the inputs win more than 3/4 of their queries. As a result, at round i, at least a 1/3 fraction of the inputs in X are eliminated.

Recall that X∗ = {x ∈ X : x ≥ x∗ − 1}. The next lemma shows that choosing an input inside X∗ as a pivot guarantees a 2-approximation of x∗. The proof is similar to that of Lemma 9 and is omitted.

Lemma 14. If x∗ ∈ X, then at a call to qs-sub either x∗ survives or a pivot from X∗ is chosen, and in the latter case only inputs that are 2-approximations of x∗ survive.

We showed that at each round, comb reduces |X| by at least a third. As a result, the number of inputs decreases exponentially and the total number of queries is linear in n. We also show that if x∗ is eliminated at some round, then at that round an input from X∗ has been chosen as a pivot with high probability. Using Lemma 14, this implies that comb outputs a 2-approximation of x∗ with high probability.

Theorem 15. q^{comb, adpt}_n = O(n log(1/ε)) and E^{comb, adpt}_n(2) < ε.

Proof. We start by analyzing the query complexity of comb. By Lemma 13, n_i ≤ n (2/3)^{i−1}. Therefore, the total number of queries at round i is at most

n (2/3)^{i−1} β1 log(1/ε) + n (2/3)^{i−1} β2 (6/5)^i log(1/ε),

where the first term accounts for the calls to qs-sub and the second term for the calls to ko-sub. Adding the query complexities of all the rounds,

q^{comb, adpt}_n ≤ n log(1/ε) Σ_{i=1}^{∞} ( β1 (2/3)^{i−1} + β2 (6/5)^i (2/3)^{i−1} ) ≤ n (3β1 + 6β2) log(1/ε) = O(n log(1/ε)).

We now analyze the approximation guarantee of comb. We show that at least one of the following events happens with probability greater than 1 − ε.

• comb outputs x∗.
• An input inside X∗ is chosen as a pivot at some round.

Let X∗_i def= X_i ∩ X∗ and α_i def= |X∗_i| / |X_i|. We consider the following two cases separately.

• Case 1: There exists an i such that |X_i| > (2/3) n_i and α_i > 1/6.
• Case 2: For all i, either |X_i| ≤ (2/3) n_i or α_i ≤ 1/6.

First we consider Case 1. We show that in this case a pivot from X∗ is chosen with probability > 1 − ε. Observe that at round i, |X| starts at n_i < (3/2)|X_i| and gradually decreases. On the other hand, in all the ⌊β1 log(1/ε)⌋ calls to qs-sub, |X ∩ X∗| is at least |X∗_i| = α_i |X_i|. Therefore, in all the calls to qs-sub at round i,

|X ∩ X∗| / |X| ≥ α_i |X_i| / ((3/2) |X_i|) = (2/3) α_i.

Let E be the event that no pivot from X∗ is chosen at round i. As a result,

Pr(E) ≤ (1 − (2/3) α_i)^{⌊β1 log(1/ε)⌋} ≤ (8/9)^{9 log(1/ε)} ≤ ε.

Therefore, in Case 1, with probability at least 1 − ε, a pivot from X∗ is chosen.

We now consider Case 2. By Lemma 14, during the calls to qs-sub either x∗ survives or an input from X∗ is chosen as a pivot. Therefore, we can lose x∗ without choosing a pivot from X∗ only if at some round i, |X_i| > (2/3) n_i and x∗ wins fewer than 3/4 of its queries during the calls to ko-sub.

Recall that in Case 2, if |X_i| > (2/3) n_i then α_i ≤ 1/6. Observe that x∗ wins against a random input in X_i with probability greater than 1 − α_i, which is at least 5/6. Let E′_i be the event that x∗ wins fewer than 3/4 of its queries at round i. By the Chernoff bound,

Pr(E′_i) ≤ exp( −⌊β2 (6/5)^i log(1/ε)⌋ · D(3/4 ‖ 5/6) ) ≤ ε (1/2)^i,

where D(p ‖ q) def= p ln(p/q) + (1 − p) ln((1 − p)/(1 − q)) is the Kullback-Leibler divergence between Bernoulli random variables with parameters p and q, respectively. Assuming ε < 1/2, the total probability of missing x∗ without choosing a pivot from X∗ is at most

Σ_{i=1}^{∞} Pr(E′_i) ≤ Σ_{i=1}^{∞} ε (1/2)^i < ε.

So far we showed that with probability > 1 − ε, either x∗ survives or an input inside X∗ is chosen as a pivot. The theorem follows from Lemma 14.

Application to density estimation
Our study of maximum selection with adversarial comparators was motivated by the following density-estimation problem.

Given a known set P_δ = {p_1, …, p_n} of n distributions and k samples from an unknown distribution p, output a distribution p̂ ∈ P_δ such that for a small constant C,

||p̂ − p||_1 ≤ C · min_{p′∈P_δ} ||p′ − p||_1 + o_k(1).

This problem was studied in [DL01], who showed that for n = 2, the scheffe-test, described below in pseudocode, takes k samples and with probability 1 − ε outputs a distribution p̂ ∈ P_δ such that

||p̂ − p||_1 ≤ 3 · min_{p′∈P_δ} ||p′ − p||_1 + √(10 log(1/ε) / k). (5)

input: distributions p_1 and p_2, k i.i.d. samples of unknown distribution p
let S = {x : p_1(x) > p_2(x)}
let p_1(S) and p_2(S) be the probability masses that p_1 and p_2 assign to S
let μ_S be the frequency of samples in S
output: if |p_1(S) − μ_S| ≤ |p_2(S) − μ_S| output p_1, otherwise output p_2
Algorithm scheffe-test - Scheffe test for two distributions

scheffe-test provides a factor-3 approximation with high probability. The algorithm, as stated in its pseudocode, requires computing p_i(S), which can be hard since the distributions are not restricted. However, as noted in [SOAJ14], the algorithm can be made to run in time linear in k by computing only the probabilities that the known distributions assign to the observed samples.

[DL01] also extended scheffe-test to n > 2. Their proposed algorithm for n > 2 runs scheffe-test for each pair of distributions in P_δ and outputs the distribution with the maximum number of wins, where a distribution wins if it is the output of scheffe-test. This algorithm is referred to as the Scheffe tournament. They showed that it finds a distribution p̂ ∈ P_δ such that

||p̂ − p||_1 ≤ 9 · min_{p′∈P_δ} ||p′ − p||_1 + o_k(1),

and the running time is clearly Θ(n² k), quadratic in the number of distributions.

[MS08] showed that the optimal coefficients for the Scheffe algorithms are indeed 3 and 9 for n = 2 and n > 2, respectively, however still running in time Θ(n²). They also proposed a linear-time algorithm, but it requires a preprocessing step that runs in time exponential in n. Scheffe's method has been used recently to obtain nearly sample-optimal algorithms for learning Poisson binomial distributions [DDS12] and Gaussian mixtures [DK14, SOAJ14].

We now describe how our noisy comparison model can be applied to this problem to yield a linear-time algorithm with the same estimation guarantee as the Scheffe tournament. Our algorithm uses the Scheffe test as a subroutine. Given a sufficient number of samples, k = ω(log n), the small term on the RHS of (5) vanishes and scheffe-test outputs

p_i if ||p_i − p||_1 < (1/3) ||p_j − p||_1,
p_j if ||p_j − p||_1 < (1/3) ||p_i − p||_1,
unknown otherwise.

Let x_i = −log_3 ||p_i − p||_1. Then, analogously to the maximum selection with adversarial noise in (1), scheffe-test outputs

max{x_i, x_j} if |x_i − x_j| > 1, unknown otherwise.

Given a fixed multiset of samples, the tournament results are fixed, hence this setup is identical to the non-adaptive adversarial comparators. In particular, with probability 1 − ε, our quick-select algorithm can find p̂ ∈ P_δ such that

||p̂ − p||_1 ≤ 9 · min_{p′∈P_δ} ||p′ − p||_1,

with running time Θ(nk). Next, we consider the combination of scheffe-test and q-select in more detail.

Theorem 16.
The combination of the scheffe-test and q-select algorithms, with probability 1 − ε, results in p̂ such that

||p̂ − p||_1 ≤ 9 · min_{p′∈P_δ} ||p′ − p||_1 + 4 √(10 log((n choose 2)/ε) / k).

Proof. Let p∗ def= argmin_{p′∈P_δ} ||p′ − p||_1. Using (5), for each p_i and p_j in P_δ, with probability 1 − ε/(n choose 2), scheffe-test outputs p̂ such that

||p̂ − p||_1 ≤ 3 · min_{p′∈{p_i, p_j}} ||p′ − p||_1 + √(10 log((n choose 2)/ε) / k). (6)

By the union bound, (6) holds for all p_i and p_j with probability at least 1 − ε. Similar to Lemma 9, if p∗ is eliminated, then at some round q-select has chosen a pivot p′ such that

||p′ − p||_1 ≤ 3 ||p∗ − p||_1 + √(10 log((n choose 2)/ε) / k).

After choosing p′ as a pivot, any distribution p″ that survives satisfies

||p″ − p||_1 ≤ 3 ||p′ − p||_1 + √(10 log((n choose 2)/ε) / k) ≤ 9 ||p∗ − p||_1 + 4 √(10 log((n choose 2)/ε) / k).

We now consider sorting with noisy comparators. The comparator model is the same as before, and the goal is to approximately sort the inputs in decreasing order.

Consider an algorithm A for sorting the inputs. The output of A is denoted by Y_A(X) def= (Y_1, Y_2, …, Y_n), a particular ordering of the inputs. Similar to the maximum-selection problem, the t-approximation error is

E^A_n(t) def= Pr( max_{i,j : i>j} (Y_i − Y_j) > t ),

i.e., the probability of some Y_i appearing after Y_j in Y_A while Y_i − Y_j > t. Note that our definitions of E^{A,non}_n(t), E^{A,adpt}_n(t), q^{A,adpt}_n, and q^{A,non}_n remain the same as before.

In the following, we first revisit the complete tournament, with a small modification for the sorting problem, and show that under the adaptive adversarial model it has zero 2-approximation error and query complexity (n choose 2). We then discuss the quick-sort algorithm q-sort and show that it also has zero 2-approximation error, but with improved query complexity against the non-adaptive adversary. We apply the known bounds on the running time of the general quick-sort algorithm with n distinct inputs to find the query complexity of q-sort.

The algorithm is similar to compl in Section 4.2.1, and we refer to it as compl-sort. The only difference is in the output of the algorithm.

input: X
compare all input pairs in X, count the number of times each input wins
output: the inputs in the order of their number of wins, breaking ties randomly
Algorithm compl-sort - Complete tournament

The following lemma and its proof are similar to Lemma 5, and we therefore skip the proof.
Lemma 17. q^{compl-sort, adpt}_n = (n choose 2) and E^{compl-sort, adpt}_n(2) = 0.

Next, we discuss an algorithm with improved query complexity.
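That algorithm is quick-sort, discussed next. A minimal sketch under the threshold-1 comparator model (the in-threshold answers are randomized here purely for illustration; the analysis allows arbitrary answers):

```python
import random

def noisy_compare(a, b):
    """Comparator with threshold 1: returns the larger input when
    |a - b| > 1, otherwise an arbitrary (here random) answer."""
    return max(a, b) if abs(a - b) > 1 else random.choice((a, b))

def q_sort(xs, compare):
    """Quick-sort into decreasing order using a possibly-noisy comparator."""
    if len(xs) <= 1:
        return list(xs)
    i = random.randrange(len(xs))
    pivot, rest = xs[i], xs[:i] + xs[i + 1:]
    bigger, smaller = [], []
    for x in rest:
        # one query per element; the declared winner decides the side
        (bigger if compare(x, pivot) == x else smaller).append(x)
    return q_sort(bigger, compare) + [pivot] + q_sort(smaller, compare)

random.seed(0)
out = q_sort(list(range(30)), noisy_compare)
# no later element exceeds an earlier one by more than 2
assert all(out[i] >= out[j] - 2
           for i in range(len(out)) for j in range(i + 1, len(out)))
```

The final assertion mirrors the zero 2-approximation error property proved below: whatever the comparator answers inside the threshold, no element can end up ahead of another that exceeds it by more than 2.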
Quick-sort is a well-known algorithm, here denoted by q-sort. The expected query complexity of quick-sort with noiseless comparisons and distinct inputs is

f(n) def= 2n ln n − (4 − 2γ)n + 2 ln n + O(1), (7)

where γ is Euler's constant [MH96]. Note that f(n) is a convex function of n.

In the rest of this section we study the error guarantee of quick-sort and its query complexity in the presence of noise. In Lemma 18 we show that the error guarantee of quick-sort under our noise model is the same as that of the complete tournament, i.e., it sorts the inputs with zero 2-approximation error. Then, in Lemma 19, we show that the expected query complexity of quick-sort with non-adaptive adversarial noise is at most its expected query complexity in the noiseless model.

Lemma 18. E^{q-sort, adpt}_n(2) = 0.

Proof. The proof is by contradiction. Suppose x_i > x_j + 2 but x_j appears before x_i in the output of quick-sort. Then there must have been a pivot x_p such that C(x_i, x_p) = x_p while C(x_j, x_p) = x_j, which requires x_p ≥ x_i − 1 and x_j ≥ x_p − 1. Since x_i > x_j + 2, no such pivot exists.

Quick-sort chooses a pivot randomly to divide the set of inputs into smaller sets. The optimal pivot for noiseless quick-sort is known to be the median of the inputs, which balances the sizes of the remaining sets. In fact, it is easy to show that if we choose the median of the inputs as the pivot, the query complexity of quick-sort reduces to less than n log n. Observe that in the non-adaptive adversarial model, the probability of obtaining balanced sets after choosing a pivot only increases. Accordingly, in the next lemma we show that the expected query complexity of quick-sort in the presence of noise is upper bounded by f(n).

Lemma 19. q^{q-sort, non}_n = f(n), and the maximum is achieved when the queries are noiseless and the inputs are distinct.

Proof. Let in(x) and out(x) be the in-degree and out-degree of node x in the complete tournament, respectively.
For the noiseless comparator with distinct inputs, the in-degrees and out-degrees of the inputs are permutations of (0, 1, …, n−1). A comparator C ∈ C^non maximizing Q^{q-sort}_n is one whose complete-tournament in-degrees and out-degrees are permutations of (0, 1, …, n−1). Let q_n = q^{q-sort, non}_n. We have the following recursion for quick-sort, similar to (4):

q_n ≤ n − 1 + (1/n) Σ_{i=1}^{n} ( q_{out(x_i)} + q_{in(x_i)} ). (8)

By induction, we show that the solution to (8) is bounded above by f(n), a convex function of n. The induction holds for n = 0, 1, and 2. Now suppose the induction hypothesis holds for all i < n. Since f(n) is a convex function of n and Σ_i in(x_i) = Σ_i out(x_i) = n(n−1)/2, the right-hand side of (8) is maximized when the in-degrees and out-degrees take their extreme values, i.e., when they are permutations of (0, 1, …, n−1). Hence

q_n ≤ n − 1 + (1/n) Σ_{i=1}^{n} ( f(i − 1) + f(n − i) ),

and the solution to this recursion is f(n), given in (7). Hence q_n is bounded above by f(n), and equality holds when the in-degrees and out-degrees are permutations of (0, 1, …, n−1).

[MH92] showed that the probability that the query complexity of quick-sort exceeds (1 + ε) times its expected query complexity is n^{−2ε ln ln n + O(ln ln ln n)}. Observe that under the non-adaptive adversarial model, the chance of a random pivot cutting the set of inputs into balanced sets only increases. As a result, one can show that the analysis in [MH92] carries over directly. In particular, Lemmas 2.1 and 2.2 in [MH92], which are the basis of their analysis, remain valid under our non-adaptive adversarial model. Therefore, their tight concentration bound for quick-sort also applies to our non-adaptive adversarial model.

Acknowledgment
We thank Jelani Nelson for introducing us to the problem’s adaptive adversarial model.
References

[AFHN09] Miklós Ajtai, Vitaly Feldman, Avinatan Hassidim, and Jelani Nelson. Sorting and selection with imprecise comparisons. In Automata, Languages and Programming, pages 37–48. Springer, 2009.

[AFHN15] Miklós Ajtai, Vitaly Feldman, Avinatan Hassidim, and Jelani Nelson. Sorting and selection with imprecise comparisons. ACM Trans. Algorithms, 12(2):19:1–19:19, November 2015.

[AGHB+94] Micah Adler, Peter Gemmell, Mor Harchol-Balter, Richard M. Karp, and Claire Kenyon. Selection in the presence of noise: The design of playoff systems. In Proceedings of the Fifth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), Arlington, Virginia, pages 564–572, 1994.

[AJOS14] Jayadev Acharya, Ashkan Jafarpour, Alon Orlitsky, and Ananda Theertha Suresh. Sorting with adversarial comparators and application to density estimation. In Proceedings of the 2014 IEEE International Symposium on Information Theory (ISIT), 2014.

[AMPP09] Gagan Aggarwal, S. Muthukrishnan, Dávid Pál, and Martin Pál. General auction mechanism for search advertising. In Proceedings of the 18th International Conference on World Wide Web, pages 241–250. ACM, 2009.

[BM08] Mark Braverman and Elchanan Mossel. Noisy sorting without resampling. In Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), San Francisco, California, pages 268–276, 2008.

[BT52] Ralph Allan Bradley and Milton E. Terry. Rank analysis of incomplete block designs: The method of paired comparisons. Biometrika, 39(3-4):324–345, 1952.

[Dav63] Herbert Aron David. The Method of Paired Comparisons, volume 12. Defence Technical Information Center Document, 1963.

[DDS12] Constantinos Daskalakis, Ilias Diakonikolas, and Rocco A. Servedio. Learning Poisson binomial distributions. In Proceedings of the 44th Symposium on Theory of Computing (STOC), New York, NY, pages 709–728, 2012.

[DK14] Constantinos Daskalakis and Gautam Kamath. Faster and sample near-optimal algorithms for proper learning mixtures of Gaussians. In Proceedings of the 27th Annual Conference on Learning Theory (COLT), 2014.

[DKK+16] Ilias Diakonikolas, Gautam Kamath, Daniel Kane, Jerry Li, Ankur Moitra, and Alistair Stewart. Robust estimators in high dimensions without the computational intractability. arXiv preprint arXiv:1604.06443, 2016.

[DL01] Luc Devroye and Gábor Lugosi. Combinatorial Methods in Density Estimation. Springer-Verlag, New York, 2001.

[Ekm59] Gösta Ekman. Weber's law and related functions. The Journal of Psychology, 47(2):343–352, 1959.

[Hen89] Pascal Hennequin. Combinatorial analysis of quicksort algorithm. Informatique théorique et applications, 23(3):317–333, 1989.

[KK07] Richard M. Karp and Robert Kleinberg. Noisy binary search and its applications. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), New Orleans, Louisiana, pages 881–890, 2007.

[KMR+94] Michael Kearns, Yishay Mansour, Dana Ron, Ronitt Rubinfeld, Robert E. Schapire, and Linda Sellie. On the learnability of discrete distributions. In Proceedings of the Twenty-Sixth Annual ACM Symposium on Theory of Computing (STOC), Montréal, Québec, Canada, pages 273–282, 1994.

[Knu98] Donald Ervin Knuth. The Art of Computer Programming: Sorting and Searching, volume 3. Pearson Education, 1998.

[MH92] Colin McDiarmid and Ryan Hayward. Strong concentration for quicksort. In Proceedings of the Third Annual ACM/SIGACT-SIAM Symposium on Discrete Algorithms (SODA), Orlando, Florida, pages 414–421, 1992.

[MH96] Colin McDiarmid and Ryan B. Hayward. Large deviations for quicksort. Journal of Algorithms, 21(3):476–507, 1996.

[MS08] Satyaki Mahalanabis and Daniel Stefankovic. Density estimation in linear time. In Proceedings of the 21st Annual Conference on Learning Theory (COLT), pages 503–512, 2008.

[Nel15] Jelani Nelson. Personal communication, 2015.

[NOS12] Sahand Negahban, Sewoong Oh, and Devavrat Shah. Iterative ranking from pair-wise comparisons. In Advances in Neural Information Processing Systems 25 (NIPS), Lake Tahoe, Nevada, pages 2483–2491, 2012.

[Sch47] Henry Scheffé. A useful convergence theorem for probability distributions. The Annals of Mathematical Statistics, 18:434–438, 1947.

[SOAJ14] Ananda Theertha Suresh, Alon Orlitsky, Jayadev Acharya, and Ashkan Jafarpour. Near-optimal-sample estimators for spherical Gaussian mixtures. In Advances in Neural Information Processing Systems (NIPS), pages 1395–1403, 2014.

[Thu27] Louis L. Thurstone. A law of comparative judgment. Psychological Review, 34(4):273, 1927.

[Val84] Leslie G. Valiant. A theory of the learnable. In Proceedings of the 16th Annual ACM Symposium on Theory of Computing (STOC), Washington, DC, pages 436–445, 1984.
A For all t < 3, ko-mod cannot output a t-approximation

The next example shows that the modified knock-out algorithm cannot achieve better than a 3-approximation of x∗.

Example 20.
Suppose n − 2 is a multiple of 3 and n is a large number. Let X be a random permutation of the multiset

{0, 0, …, 0, 1, 1, …, 1, 2, 2, …, 2, 3, 0∗},

with (n − 2)/3 copies each of 0, 1, and 2. The input 0∗ has value zero but is marked with a ∗, since it is going to behave differently from the other 0s. Let the adversarial comparator be such that all 0s except 0∗, and all 2s, lose to all 1s, and 3 loses to all 2s. By the properties of the comparator, any 2 will defeat all zeros, including 0∗. In order to prove our main claim, we make the following arguments and show that each of them holds.

• Pr(the input with value 3 is not present in the final multiset) > 1/4.
• Pr(the input 0∗ is present in the final multiset) > 1/3.
• With high probability, the fraction of 1s in the final multiset is close to 1.

Before proving each argument, we show why the above statements suffice to prove our claim. Consider the final multiset; with high probability it consists mainly of 1s, with a small number of 0s and 2s. Moreover, with probability greater than (1/4) × (1/3), the input with value 3 has been removed before reaching the final multiset while 0∗ has survived. Therefore, if we run Algorithm compl on the final multiset, the input 0∗ will have the largest number of wins and will be declared the output of the algorithm. Hence for all t < 3 we have E^{ko-mod, non}_n(t) > constant. Note that we did not try to optimize this constant.

We now show why each of the arguments above is true. Note that the calculations below are in expectation; the corresponding concentration bounds are straightforward but not discussed, as they are outside the scope of this paper. We also assume that n is sufficiently large.

Lemma 21.
With high probability, the fraction of 1s in the final multiset is close to 1, and the fractions of 0s and 2s are very small.

Proof. We calculate the expected fractions of 0s, 1s, and 2s at each step. Let f_i(j) be the fraction of inputs with value j at the end of step i. At each step, we lose a 1 if and only if two 1s are paired with each other. So we have the following recursion:

f_{i+1}(1) = 2 · f_i(1) · ( f_i(1)/2 + 1 − f_i(1) ),

where the factor 2 on the RHS of the recursion is due to the fact that each step reduces the number of inputs by half. Starting with f_1(1) = 1/3, we get the values {1/3, 5/9, 65/81, 12610/13122 ≈ 0.96, …} for the f_i(1). We can see that the fraction approaches 1 very fast. More precisely, the fraction of 0s decreases quadratically, since their only chance of survival is to be paired among themselves. As a result, after a few steps the fraction of 0s is extremely small, and thereafter the only chance of survival for the 2s is to be paired among themselves, so their fraction also decreases quadratically. As a result, with high probability the final multiset consists mostly of 1s.

Lemma 22.
Pr(the input with value 3 is not present in the final multiset) > 1/4.

Proof.
The input with value 3 is removed when it is compared against one of the 2s. It has a slight chance of surviving if it is chosen randomly to be kept in the output. Thus the probability of the input 3 being removed from the multiset in the first round is

Pr(input 3 is removed in the first round) = ((n − 2)/3) / (n − 1) · (1 − n_1/n) > 1/4,

where n_1 = ⌈(1/ε) ln(1/ε) log n⌉ is the size of the final multiset.

Lemma 23. Pr(the input 0∗ is present in the final multiset) > 1/3.

Proof.
Similar to the argument made in the proof of Lemma 21, we have the following recursion for f_i(2):

f_{i+1}(2) = 2 · f_i(2) · ( f_i(2)/2 + 1 − f_i(2) − f_i(1) ).

Thus f_1(2) = 1/3, f_2(2) = 1/3, f_3(2) = 5/27, f_4(2) = 85/2187, and the fraction decreases quadratically afterwards. Since 0∗ is removed only when it is paired with a 2,

Pr(0∗ surviving) = (1 − 1/3)(1 − 1/3)(1 − 5/27)(1 − 85/2187) ··· > 1/3,

proving the lemma.

B Proof of Lemma 11
Abbreviate Q^{q-select}_n by Q_n. As in the proof of the Chernoff bound, for all λ > 0,

Pr(Q_n > kn) ≤ E[e^{λ Q_n}] / e^{kλn}. (9)

Let λ = (1/n) ln k′ and Φ(i) def= E[e^{λ Q_i}]. We prove by induction that Φ(i) ≤ e^{k′λi}. The induction holds for i = 0. Similar to (4), we have the following recursion for Φ(n):

Φ(n) ≤ e^{λ(n−1)} (1/n) Σ_{j=1}^{n} Φ(in(x_j)) ≤ e^{λn} (1/n) Σ_{j=1}^{n} Φ(in(x_j)).

Since in(x_j) < n, using the induction hypothesis,

e^{λn} (1/n) Σ_{j=1}^{n} Φ(in(x_j)) ≤ e^{λn} (1/n) Σ_{j=1}^{n} e^{k′λ · in(x_j)}. (10)

Observe that e^{k′λ · in(x_j)} is a convex function of in(x_j) and Σ_{j=1}^{n} in(x_j) = n(n−1)/2. As a result, the RHS of (10) is maximized when the in-degrees take their extreme values, i.e., any permutation of (0, 1, …, n−1):

e^{λn} (1/n) Σ_{j=1}^{n} e^{k′λ · in(x_j)} ≤ e^{λn} (1/n) Σ_{j=0}^{n−1} e^{k′λj} = (e^{λn}/n) · (e^{k′λn} − 1)/(e^{k′λ} − 1).

Combining the above equations,

Φ(n) ≤ (e^{λn}/n) · (e^{k′λn} − 1)/(e^{k′λ} − 1).

Similarly, by induction, for 1 ≤ i < n,

Φ(i) ≤ (e^{λi}/i) · (e^{k′λi} − 1)/(e^{k′λ} − 1).

In Lemma 24 we show that for 1 ≤ i ≤ n,

(e^{λi}/i) · (e^{k′λi} − 1)/(e^{k′λ} − 1) ≤ e^{k′λi}. (11)

Therefore, Φ(i) ≤ e^{k′λi} for 1 ≤ i ≤ n, and in particular Φ(n) ≤ e^{k′λn}. Substituting E[e^{λQ_n}] = Φ(n) in (9),

Pr(Q_n > kn) ≤ e^{k′λn} / e^{kλn} = e^{k′ ln k′} / e^{k ln k′} = e^{−(k − k′) ln k′}.

This proves the lemma. We now prove (11). Recall that k′ = max{e, k/2} and λ = (1/n) ln k′.

Lemma 24.
For all 1 ≤ i ≤ n, (e^{λi}/i) · (e^{k′λi} − 1)/(e^{k′λ} − 1) ≤ e^{k′λi}.

Proof. It suffices to show that for all 0 < t ≤ n,

f(t) def= (e^{λt}/t) · (1 − e^{−k′λt})/(e^{k′λ} − 1) ≤ 1.

Observe that

lim_{t→0} f(t) = k′λ/(e^{k′λ} − 1) ≤ 1.

On the other hand,

f(n) = (e^{λn}/n) · (1 − e^{−k′λn})/(e^{k′λ} − 1) ≤ (k′/n) · 1/(e^{k′ ln k′ / n} − 1) ≤ (k′/n) · n/(k′ ln k′) = 1/ln k′ ≤ 1,

where the last step uses e^u − 1 ≥ u and k′ ≥ e. Next, we show that f(t) is convex. One can show that

ln( (1 − e^{−u}) / u )

is a convex function of u. As a result, ln( (1 − e^{−k′λt}) / t ) is a convex function of t. Observe that ln e^{λt} is also convex. Therefore,

ln( (1 − e^{−k′λt}) / t ) + ln e^{λt}

is convex. As a result, the logarithm of f(t) is convex, and therefore f(t) is convex.

We showed that f(t) is convex, lim_{t→0} f(t) ≤ 1, and f(n) ≤ 1. Therefore, for all 0 < t ≤ n, f(t) ≤ 1, proving the lemma.
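As a numeric sanity check of Lemma 24 (the parameter grid below is an arbitrary choice, not from the paper), one can evaluate f(t) on (0, n]:

```python
import math

def f(t, n, kprime):
    """f(t) = e^{lam*t} * (1/t) * (1 - e^{-k'*lam*t}) / (e^{k'*lam} - 1),
    with lam = ln(k') / n, as in the proof of Lemma 24."""
    lam = math.log(kprime) / n
    return (math.exp(lam * t) * (1 - math.exp(-kprime * lam * t))
            / (t * (math.exp(kprime * lam) - 1)))

# Lemma 24 asserts f(t) <= 1 for 0 < t <= n; check a grid of (n, k') pairs
for n in (10, 100, 1000):
    for kprime in (math.e, 5.0, 50.0):
        assert all(f(j * n / 100, n, kprime) <= 1 + 1e-12
                   for j in range(1, 101))
```

A grid check of course does not replace the convexity argument above; it only confirms the inequality at sampled points for a few parameter settings.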