A Finite-Time Analysis of Multi-armed Bandits Problems with Kullback-Leibler Divergences
Odalric-Ambrym Maillard
INRIA Lille Nord-Europe, France
[email protected]

Rémi Munos
INRIA Lille Nord-Europe, France
[email protected]

Gilles Stoltz
École normale supérieure∗, Paris & HEC Paris, France
[email protected]

Abstract
We consider a Kullback-Leibler-based algorithm for the stochastic multi-armed bandit problem in the case of distributions with finite supports (not necessarily known beforehand), whose asymptotic regret matches the lower bound of Burnetas and Katehakis (1996). Our contribution is to provide a finite-time analysis of this algorithm; we get bounds whose main terms are smaller than the ones of previously known algorithms with finite-time analyses (like UCB-type algorithms).
The stochastic multi-armed bandit problem, introduced by Robbins (1952), formalizes the problem of decision-making under uncertainty, and illustrates the fundamental tradeoff that appears between exploration, i.e., making decisions in order to improve the knowledge of the environment, and exploitation, i.e., maximizing the payoff.
Setting.
In this paper, we consider a multi-armed bandit problem with finitely many arms indexed by A, where each arm a ∈ A is associated with an unknown and fixed probability distribution ν_a over [0,1]. The game is sequential and goes as follows: at each round t ≥ 1, the player first picks an arm A_t ∈ A and then receives a stochastic payoff Y_t drawn at random according to ν_{A_t}. He only gets to see the payoff Y_t.

For each arm a ∈ A, we denote by μ_a the expectation of its associated distribution ν_a and we let a⋆ be any optimal arm, i.e., a⋆ ∈ argmax_{a∈A} μ_a. We write μ⋆ as a short-hand notation for the largest expectation μ_{a⋆} and denote the gap of the expected payoff μ_a of an arm a ∈ A to μ⋆ as Δ_a = μ⋆ − μ_a. In addition, the number of times each arm a ∈ A is pulled between the rounds 1 and T is referred to as N_T(a),

    N_T(a) = Σ_{t=1}^T I{A_t = a}.

The quality of a strategy will be evaluated through the standard notion of expected regret, which we recall now. The expected regret (or simply regret) at round T ≥ 1 is defined as

    R_T = E[ T μ⋆ − Σ_{t=1}^T Y_t ] = E[ T μ⋆ − Σ_{t=1}^T μ_{A_t} ] = Σ_{a∈A} Δ_a E[N_T(a)],    (1)

where we used the tower rule for the first equality. Note that the expectation is with respect to the random draws of the Y_t according to the ν_{A_t} and also to the possible auxiliary randomizations that the decision-making strategy is resorting to.

The regret measures the cumulative loss resulting from pulling sub-optimal arms, and thus quantifies the amount of exploration required by an algorithm in order to find a best arm, since, as (1) indicates, the regret scales with the expected number of pulls of sub-optimal arms. Since the formulation of the problem by Robbins (1952), the regret has been a popular criterion for assessing the quality of a strategy.

∗ CNRS – École normale supérieure, Paris – INRIA, within the project-team CLASSIC
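The last equality in (1) rewrites the regret as a weighted sum of the expected numbers of pulls; the underlying identity between the per-round and per-arm summations actually holds pathwise, for any sequence of pulls, and can be checked numerically. A minimal sketch (the uniformly random arm-selection rule and the function name are ours, used only as a placeholder for this check):

```python
import random

def pseudo_regret_identity(mu, T, seed=0):
    """Check pathwise: sum_t (mu_star - mu_{A_t}) = sum_a Delta_a N_T(a).

    The uniformly random arm selection below is a placeholder policy; the
    identity holds for any sequence of pulls."""
    rng = random.Random(seed)
    mu_star = max(mu)
    counts = [0] * len(mu)          # N_T(a) for each arm a
    lhs = 0.0                       # accumulates mu_star - mu_{A_t} round by round
    for _ in range(T):
        a = rng.randrange(len(mu))  # placeholder policy: pick an arm uniformly
        counts[a] += 1
        lhs += mu_star - mu[a]
    rhs = sum((mu_star - mu_a) * n for mu_a, n in zip(mu, counts))
    return lhs, rhs
```

Both returned values coincide (up to floating-point accumulation), which is exactly why bounding the E[N_T(a)] of the sub-optimal arms suffices.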
Known lower bounds.  Lai and Robbins (1985) showed that for some (one-dimensional) parametric classes of distributions, any consistent strategy (i.e., any strategy not pulling sub-optimal arms more than a sub-polynomial number of times) will nonetheless asymptotically pull in expectation any sub-optimal arm a at least

    E[N_T(a)] ≥ ( 1/K(ν_a, ν⋆) + o(1) ) log(T)

times, where K(ν_a, ν⋆) is the Kullback-Leibler (KL) divergence between ν_a and ν⋆; it measures how close the distributions ν_a and ν⋆ are from a theoretical information perspective.

Later, Burnetas and Katehakis (1996) extended this result to some classes of multi-dimensional parametric distributions and proved the following generic lower bound: for a given family P of possible distributions over the arms,

    E[N_T(a)] ≥ ( 1/K_inf(ν_a, μ⋆) + o(1) ) log(T),  where  K_inf(ν_a, μ⋆) = inf{ K(ν_a, ν) : ν ∈ P, E(ν) > μ⋆ },

with the notation E(ν) for the expectation of a distribution ν. The intuition behind this improvement is to be related to the goal that we want to achieve in bandit problems; it is not detecting whether a distribution is optimal or not (for this goal, the relevant quantity would be K(ν_a, ν⋆)), but rather achieving the optimal rate of reward μ⋆ (i.e., one needs to measure how close ν_a is to any distribution ν ∈ P whose expectation is at least μ⋆).

Known upper bounds.
Lai and Robbins (1985) provided an algorithm based on the KL divergence, which has been extended by Burnetas and Katehakis (1996) to an algorithm based on K_inf; it is asymptotically optimal since the number of pulls of any sub-optimal arm a satisfies

    E[N_T(a)] ≤ ( 1/K_inf(ν_a, μ⋆) + o(1) ) log(T).

This result holds for finite-dimensional parametric distributions under some assumptions, e.g., the distributions having a finite and known support or belonging to a set of Gaussian distributions with known variance. Recently, Honda and Takemura (2010a) extended this asymptotic result to the case of distributions P with support in [0,1] and such that μ⋆ < 1; the key ingredient in this case is that K_inf(ν_a, μ⋆) is equal to

    K_min(ν_a, μ⋆) = inf{ K(ν_a, ν) : ν ∈ P, E(ν) ≥ μ⋆ }.

Motivation.
All the results mentioned above provide asymptotic bounds only. However, any algorithm is only used for a finite number of rounds and it is thus essential to provide a finite-time analysis of its performance. Auer et al. (2002) initiated this work by providing an algorithm (UCB1) based on a Chernoff-Hoeffding bound; it pulls any sub-optimal arm, till any time T, at most (8/Δ_a²) log T + 1 + π²/3 times in expectation. This bound depends on Δ_a = μ⋆ − μ_a but not on K_inf(ν_a, μ⋆), which can be seen to be larger than 2Δ_a². The UCB-V algorithm of Audibert et al. (2009) takes the empirical variances into account and satisfies E[N_T(a)] ≤ 10(σ_a²/Δ_a² + 2/Δ_a) log T for any time T (where σ_a² is the variance of arm a); it improves over UCB1 in case of arms with small variance. Other variants include the MOSS algorithm by Audibert and Bubeck (2010) and Improved UCB by Auer and Ortner (2010).

However, all these algorithms only rely on one moment (for UCB1) or two moments (for UCB-V) of the empirical distributions of the obtained rewards; they do not fully exploit the empirical distributions. As a consequence, the resulting bounds are expressed in terms of the means μ_a and variances σ_a² of the sub-optimal arms and not in terms of the quantity K_inf(ν_a, μ⋆) appearing in the lower bounds. The numerical experiments reported in Filippi (2010) confirm that these algorithms are less efficient than those based on K_inf.

Our contribution.
In this paper we analyze a K_inf-based algorithm inspired by the ones studied in Lai and Robbins (1985), Burnetas and Katehakis (1996), Filippi (2010); it indeed takes into account the full empirical distribution of the observed rewards. The analysis is performed (with explicit bounds) in the case of Bernoulli distributions over the arms. Less explicit but finite-time bounds are obtained in the case of finitely supported distributions (whose supports do not need to be known in advance). Finally, we pave the way for handling the case of general finite-dimensional parametric distributions. These results improve on the ones by Burnetas and Katehakis (1996), Honda and Takemura (2010a) since finite-time bounds (implying their asymptotic results) are obtained; and on Auer et al. (2002), Audibert et al. (2009) since the dependency of the main term scales with K_inf(ν_a, μ⋆). The proposed K_inf-based algorithm is also more natural and more appealing than the one presented in Honda and Takemura (2010a).

Recent related works.
Since our initial submission of the present paper, we became aware of two papers that tackle problems similar to ours. First, a revised version of Honda and Takemura (2010b, personal communication) obtains finite-time regret bounds (with prohibitively large constants) for a randomized (less natural) strategy in the case of distributions with finite supports (also not known in advance). Second, another paper at this conference (Garivier and Cappé, 2011) also deals with the K-strategy which we study in Theorem 3; they however do not obtain second-order terms in closed forms as we do, and later extend their strategy to exponential families of distributions (while we extend our strategy to the case of distributions with finite supports). On the other hand, they show how the K-strategy can be extended in a straightforward manner to guarantee bounds with respect to the family of all bounded distributions on a known interval; these bounds are suboptimal but improve on the ones of UCB-type algorithms.

Let X be a Polish space; in the next sections, we will consider X = {0,1} or X = [0,1]. We denote by P(X) the set of probability distributions over X and equip P(X) with the distance d induced by the norm ∥·∥ defined by

    ∥ν∥ = sup_{f∈L} | ∫_X f dν |,

where L is the set of Lipschitz functions over X, taking values in [−1,1] and with Lipschitz constant smaller than 1.
Kullback-Leibler divergence:  For two elements ν, κ ∈ P(X), we write ν ≪ κ when ν is absolutely continuous with respect to κ and denote in this case by dν/dκ the density of ν with respect to κ. We recall that the Kullback-Leibler divergence between ν and κ is defined as

    K(ν, κ) = ∫_X (dν/dκ) log(dν/dκ) dκ  if ν ≪ κ;  and  K(ν, κ) = +∞ otherwise.    (2)
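For finitely supported distributions, definition (2) reduces to a finite sum over the support points, with the density dν/dκ given by the ratio of the masses. A minimal sketch (the dict representation and the function name are ours, for illustration only):

```python
import math

def kl_divergence(nu, kappa):
    """K(nu, kappa) as in (2) for finitely supported distributions,
    represented as dicts mapping support points to probability masses."""
    res = 0.0
    for x, p in nu.items():
        if p == 0.0:
            continue                  # convention 0 log 0 = 0
        q = kappa.get(x, 0.0)
        if q == 0.0:
            return math.inf           # nu is not absolutely continuous w.r.t. kappa
        res += p * math.log(p / q)    # (d nu / d kappa)(x) = p / q on the support
    return res
```

For instance, two Bernoulli distributions β(0.5) and β(0.25) have divergence 0.5 log 2 + 0.5 log(2/3) ≈ 0.1438, and the divergence is +∞ as soon as ν charges a point outside the support of κ.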
Empirical distribution:  We consider a sequence X_1, X_2, ... of random variables taking values in X, independent and identically distributed according to a distribution ν. For all integers t ≥ 1, we denote the empirical distribution corresponding to the first t elements of the sequence by

    ν̂_t = (1/t) Σ_{s=1}^t δ_{X_s}.

Non-asymptotic Sanov's Lemma:  The following lemma follows from a straightforward adaptation of Dinwoodie (1992, Theorem 2.1 and comments on page 372). Details of the proof are provided in the appendix.

Lemma 1  Let C be an open convex subset of P(X) such that Λ(C) = inf_{κ∈C} K(κ, ν) < ∞. Then, for all t ≥ 1,

    P_ν{ ν̂_t ∈ C̄ } ≤ e^{−t Λ(C)},

where C̄ is the closure of C.

This lemma should be thought of as a deviation inequality. The empirical distribution ν̂_t converges (in distribution) to ν. Now, if (and only if) ν is not in the closure of C, then Λ(C) > 0 and the lemma indicates how unlikely it is that ν̂_t lies in this set C̄ not containing the limit ν. The probability of interest decreases at a geometric rate, which depends on Λ(C).

In this section, we start with the case of Bernoulli distributions. Although this case is a special case of the general results of Section 4, we provide here a complete and self-contained analysis, where, in addition, we are able to provide closed forms for all the terms in the regret bound. Note however that the resulting bound is slightly worse than what could be derived from the general case (for which more sophisticated tools are used). This result is mainly provided as a warm-up.
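As a sanity check of Lemma 1 in the Bernoulli case developed below: for ν = β(p) and the open convex set C = {β(q) : q > x} with x > p, one has Λ(C) = K(β(x), β(p)), and the lemma reduces to the classical Chernoff bound P(p̂_t ≥ x) ≤ e^{−t K(β(x), β(p))}. A small numerical sketch comparing the exact binomial tail with this rate (function names are ours):

```python
import math
from math import comb

def kl_bern(p, q):
    """K(beta(p), beta(q)) between Bernoulli distributions (natural logarithm)."""
    res = 0.0
    if p > 0:
        res += p * math.log(p / q)
    if p < 1:
        res += (1 - p) * math.log((1 - p) / (1 - q))
    return res

def binom_upper_tail(t, p, x):
    """Exact P(hat p_t >= x) for the empirical mean of t i.i.d. beta(p) draws."""
    k_min = math.ceil(t * x - 1e-9)  # guard against floating-point round-up of t*x
    return sum(comb(t, k) * p**k * (1 - p)**(t - k) for k in range(k_min, t + 1))
```

For instance, with p = 0.5, x = 0.75 and t = 20, the exact tail is about 0.0207 while the bound e^{−20 K(β(0.75), β(0.5))} is about 0.073: the deviation inequality holds, with the announced geometric rate.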
We denote by B the subset of P([0,1]) formed by the Bernoulli distributions; it corresponds to B = P({0,1}). A generic element of B will be denoted by β(p), where p ∈ [0,1] is the probability mass put on 1. We consider a sequence X_1, X_2, ... of independent and identically distributed random variables, with common distribution β(p); for the sake of clarity we will index, in this subsection only, all probabilities and expectations with p. For all integers t ≥ 1, we denote by

    p̂_t = (1/t) Σ_{s=1}^t X_s

the empirical average of the first t elements of the sequence.

The lemma below follows from an adaptation of Garivier and Leonardi (2010, Proposition 2). The details of the adaptation (and simplification) can be found in the appendix.

Lemma 2  For all p ∈ [0,1], all ε > 1, and all t ≥ 1,

    P_p( ∪_{s=1}^t { s K( β(p̂_s), β(p) ) > ε } ) ≤ 2e ⌈ε log t⌉ e^{−ε}.

In particular, for all random variables N_t taking values in {1, ..., t},

    P_p{ N_t K( β(p̂_{N_t}), β(p) ) > ε } ≤ 2e ⌈ε log t⌉ e^{−ε}.

Another immediate fact about Bernoulli distributions is that for all p ∈ (0,1), the mappings

    K_{p,·}: q ∈ (0,1) ↦ K( β(p), β(q) )  and  K_{·,p}: q ∈ [0,1] ↦ K( β(q), β(p) )

are continuous and take finite values. In particular, we have, for instance, that for all ε ≥ 0 and all p ∈ (0,1), the set

    { q ∈ [0,1] : K( β(p), β(q) ) ≤ ε }

is a closed interval containing p. This property still holds when p ∈ {0,1}.

--------------------------------------------------------------------------
Parameters: a non-decreasing function f : N → R

Initialization: pull each arm of A once.

For rounds t + 1, where t ≥ |A|:
– compute for each arm a ∈ A the quantity

      B⁺_{a,t} = max{ q ∈ [0,1] : N_t(a) K( β(μ̂_{a,N_t(a)}), β(q) ) ≤ f(t) },

  where μ̂_{a,N_t(a)} = (1/N_t(a)) Σ_{s≤t: A_s=a} Y_s;
– in case of a tie, pick an arm with largest value of μ̂_{a,N_t(a)};
– pull any arm A_{t+1} ∈ argmax_{a∈A} B⁺_{a,t}.
--------------------------------------------------------------------------
Figure 1: The K-strategy.

We consider the so-called K-strategy of Figure 1, which was already considered in the literature, see Burnetas and Katehakis (1996), Filippi (2010). The numerical computation of the quantities B⁺_{a,t} is straightforward (by convexity of K in its second argument, using iterative methods) and is detailed therein. Before proceeding, we denote by σ_a² = μ_a(1 − μ_a) the variance of each arm a ∈ A (and take the short-hand notation σ⋆² for the variance of an optimal arm).
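In particular, the maximum defining B⁺_{a,t} in Figure 1 can be computed by bisection: q ↦ K(β(p), β(q)) vanishes at q = p and increases on [p, 1], so the upper confidence bound is the rightmost point where the constraint still holds. A minimal sketch (function names and the clipping constant are ours):

```python
import math

def kl_bern(p, q):
    """K(beta(p), beta(q)) between Bernoulli distributions (natural logarithm)."""
    eps = 1e-15                      # clip q away from {0, 1}, where K may be infinite
    q = min(max(q, eps), 1 - eps)
    res = 0.0
    if p > 0:
        res += p * math.log(p / q)
    if p < 1:
        res += (1 - p) * math.log((1 - p) / (1 - q))
    return res

def b_plus(mu_hat, n, level, tol=1e-9):
    """B+ = max{ q in [0,1] : n * K(beta(mu_hat), beta(q)) <= level } by bisection."""
    lo, hi = mu_hat, 1.0             # the constraint holds at q = mu_hat, where K = 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if n * kl_bern(mu_hat, mid) <= level:
            lo = mid                 # still inside the confidence interval
        else:
            hi = mid
    return lo
```

As expected of an upper confidence bound, B⁺ shrinks toward μ̂ as the number of samples n grows (for a fixed level f(t)).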
Theorem 3  When μ⋆ ∈ (0,1), for all non-decreasing functions f : N → R₊ such that f(1) ≥ 1, the expected regret R_T of the strategy of Figure 1 is upper bounded by the infimum, as the (c_a)_{a∈A} describe (0, +∞), of the quantities

    Σ_{a∈A} Δ_a ( (1+c_a) f(T) / K( β(μ_a), β(μ⋆) )
        + 4e Σ_{t=|A|}^{T−1} ⌈f(t) log t⌉ e^{−f(t)}
        + (1+c_a)² / ( 4 c_a² Δ_a² ( min{σ_a², σ⋆²} )² ) I{μ_a ∈ (0,1)} + 3 ).

For μ⋆ = 0, its regret is null. For μ⋆ = 1, it satisfies R_T ≤ 2(|A| − 1).

An adequate choice of f is f(t) = log( (et) log⁴(et) ), which is non-decreasing, satisfies f(1) ≥ 1, and is such that the second term in the sum above is bounded (by a basic result about so-called Bertrand's series). Now, as the constants c_a in the bound are parameters of the analysis (and not of the strategy), they can be optimized. For instance, with the choice of f(t) mentioned above, taking each c_a proportional to (log T)^{−1/3} (up to a multiplicative constant that depends on the distributions ν_a) entails the regret bound

    Σ_{a∈A} Δ_a log T / K( β(μ_a), β(μ⋆) ) + ε_T,

where it is easy to give an explicit and closed-form expression of ε_T; in this conference version, we only indicate that ε_T is of order (log T)^{2/3}, but we do not know whether the order of magnitude of this second-order term is optimal.

Proof:
We first deal with the case where μ⋆ ∉ {0,1} and introduce an additional notation. In view of the remark at the end of Section 3.1, for all arms a and rounds t, we let B⁻_{a,t} be the element of [0,1] such that

    { q ∈ [0,1] : N_t(a) K( β(μ̂_{a,N_t(a)}), β(q) ) ≤ f(t) } = [ B⁻_{a,t}, B⁺_{a,t} ].    (3)

As (1) indicates, it suffices to bound N_T(a) for all suboptimal arms a, i.e., for all arms such that μ_a < μ⋆. We will assume in addition that μ_a > 0; the case of suboptimal arms with μ_a = 0 will be handled separately.

Step 1: A decomposition of the events of interest.  For t ≥ |A|, when A_{t+1} = a, we have in particular, by definition of the strategy, that B⁺_{a,t} ≥ B⁺_{a⋆,t}. On the event

    { A_{t+1} = a } ∩ { μ⋆ ∈ [B⁻_{a⋆,t}, B⁺_{a⋆,t}] } ∩ { μ_a ∈ [B⁻_{a,t}, B⁺_{a,t}] },

we therefore have, on the one hand, μ⋆ ≤ B⁺_{a⋆,t} ≤ B⁺_{a,t} and, on the other hand, B⁻_{a,t} ≤ μ_a ≤ μ⋆; that is, the considered event is included in { μ⋆ ∈ [B⁻_{a,t}, B⁺_{a,t}] }. We thus proved that

    { A_{t+1} = a } ⊆ { μ⋆ ∉ [B⁻_{a⋆,t}, B⁺_{a⋆,t}] } ∪ { μ_a ∉ [B⁻_{a,t}, B⁺_{a,t}] } ∪ { μ⋆ ∈ [B⁻_{a,t}, B⁺_{a,t}] }.

Going back to the definition (3), we get in particular the inclusion

    { A_{t+1} = a } ⊆ { N_t(a⋆) K( β(μ̂_{a⋆,N_t(a⋆)}), β(μ⋆) ) > f(t) }
        ∪ { N_t(a) K( β(μ̂_{a,N_t(a)}), β(μ_a) ) > f(t) }
        ∪ ( { N_t(a) K( β(μ̂_{a,N_t(a)}), β(μ⋆) ) ≤ f(t) } ∩ { A_{t+1} = a } ).

Step 2: Bounding the probabilities of two elements of the decomposition.
We consider the filtration (F_t), where for all t ≥ 1 the σ-algebra F_t is generated by A_1, Y_1, ..., A_t, Y_t. In particular, A_{t+1} and thus all N_{t+1}(a) are F_t-measurable. We denote by τ_{a,1} the deterministic round at which a was pulled for the first time and by τ_{a,2}, τ_{a,3}, ... the rounds t ≥ |A| + 1 at which a was then played; since for all k ≥ 2,

    τ_{a,k} = min{ t ≥ |A| + 1 : N_t(a) = k },

we see that {τ_{a,k} = t} is F_{t−1}-measurable. Therefore, for each k ≥ 1, the random variable τ_{a,k} is a (predictable) stopping time. Hence, by a well-known fact in probability theory (see, e.g., Chow and Teicher 1988, Section 5.3), the random variables X̃_{a,k} = Y_{τ_{a,k}}, where k = 1, 2, ..., are independent and identically distributed according to ν_a. Since on {N_t(a) = k} we have the rewriting μ̂_{a,N_t(a)} = μ̃_{a,k}, where

    μ̃_{a,k} = (1/k) Σ_{j=1}^k X̃_{a,j},

and since for t ≥ |A| + 1 one has N_t(a) ≥ 1, Lemma 2 yields, for all t ≥ |A| + 1,

    P{ N_t(a) K( β(μ̂_{a,N_t(a)}), β(μ_a) ) > f(t) } ≤ 2e ⌈f(t) log t⌉ e^{−f(t)}.

A similar argument shows that for all t ≥ |A| + 1,

    P{ N_t(a⋆) K( β(μ̂_{a⋆,N_t(a⋆)}), β(μ⋆) ) > f(t) } ≤ 2e ⌈f(t) log t⌉ e^{−f(t)}.

Step 3: Rewriting the remaining terms.
We therefore proved that

    E[N_T(a)] ≤ 1 + 4e Σ_{t=|A|}^{T−1} ⌈f(t) log t⌉ e^{−f(t)}
        + Σ_{t=|A|}^{T−1} P( { N_t(a) K( β(μ̂_{a,N_t(a)}), β(μ⋆) ) ≤ f(t) } ∩ { A_{t+1} = a } ),

and we deal now with the last sum. Since f is non-decreasing, it is bounded by

    Σ_{t=|A|}^{T−1} P( K_t ∩ { A_{t+1} = a } ),  where  K_t = { N_t(a) K( β(μ̂_{a,N_t(a)}), β(μ⋆) ) ≤ f(T) }.

Now,

    Σ_{t=|A|}^{T−1} P( K_t ∩ { A_{t+1} = a } ) = E[ Σ_{t=|A|}^{T−1} I{A_{t+1} = a} I_{K_t} ] = E[ Σ_{k≥2} I{τ_{a,k} ≤ T} I_{K_{τ_{a,k}−1}} ].

We note that, since N_{τ_{a,k}−1}(a) = k − 1, we have

    K_{τ_{a,k}−1} = { (k−1) K( β(μ̃_{a,k−1}), β(μ⋆) ) ≤ f(T) }.

All in all, since τ_{a,k} ≤ T implies k ≤ T − |A| + 1 (as each arm is played at least once during the first |A| rounds), we have

    E[ Σ_{k≥2} I{τ_{a,k} ≤ T} I_{K_{τ_{a,k}−1}} ] ≤ E[ Σ_{k=2}^{T−|A|+1} I_{K_{τ_{a,k}−1}} ] = Σ_{k=2}^{T−|A|+1} P{ (k−1) K( β(μ̃_{a,k−1}), β(μ⋆) ) ≤ f(T) }.    (4)

Step 4: Bounding the probabilities of the latter sum via Sanov's lemma.
For each γ > 0, we define the convex open set

    C_γ = { β(q) ∈ B : K( β(q), β(μ⋆) ) < γ },

which is a non-empty set (it contains β(μ⋆)) and is open by continuity of the mapping K_{·,μ⋆} defined after the statement of Lemma 2, when μ⋆ ∈ (0,1); its closure satisfies

    C̄_γ ⊆ { β(q) ∈ B : K( β(q), β(μ⋆) ) ≤ γ }.

In addition, since μ_a ∈ (0,1), we have K( β(q), β(μ_a) ) < ∞ for all q ∈ [0,1]; thus, for all γ > 0, the condition Λ(C_γ) < ∞ of Lemma 1 is satisfied. Denoting

    θ_a(γ) = inf{ K( β(q), β(μ_a) ) : β(q) ∈ B such that K( β(q), β(μ⋆) ) ≤ γ },

we get by the indicated lemma that for all k ≥ 1,

    P{ K( β(μ̃_{a,k}), β(μ⋆) ) ≤ γ } ≤ P{ β(μ̃_{a,k}) ∈ C̄_γ } ≤ e^{−k θ_a(γ)}.

Now, since (an open neighborhood of) β(μ_a) is not included in C̄_γ as soon as 0 < γ < K(β(μ_a), β(μ⋆)), we have that θ_a(γ) > 0 for such values of γ. To apply the obtained inequality to the last sum in (4), we fix a constant c_a > 0 and denote by k₀ the following upper integer part,

    k₀ = ⌈ (1+c_a) f(T) / K( β(μ_a), β(μ⋆) ) ⌉,

so that f(T)/k ≤ K( β(μ_a), β(μ⋆) )/(1+c_a) < K( β(μ_a), β(μ⋆) ) for k ≥ k₀; hence,

    Σ_{k=2}^{T−|A|+1} P{ (k−1) K( β(μ̃_{a,k−1}), β(μ⋆) ) ≤ f(T) } ≤ Σ_{k=1}^T P{ K( β(μ̃_{a,k}), β(μ⋆) ) ≤ f(T)/k }
        ≤ k₀ − 1 + Σ_{k=k₀}^T exp( −k θ_a( f(T)/k ) ).

Since θ_a is a non-increasing function,

    Σ_{k=k₀}^T exp( −k θ_a( f(T)/k ) ) ≤ Σ_{k=k₀}^T exp( −k θ_a( K( β(μ_a), β(μ⋆) )/(1+c_a) ) ) ≤ Γ_a(c_a),

where

    Γ_a(c_a) = [ 1 − exp( −θ_a( K( β(μ_a), β(μ⋆) )/(1+c_a) ) ) ]^{−1}.

Putting all pieces together, we thus proved so far that

    E[N_T(a)] ≤ (1+c_a) f(T) / K( β(μ_a), β(μ⋆) ) + 1 + 4e Σ_{t=|A|}^{T−1} ⌈f(t) log t⌉ e^{−f(t)} + Γ_a(c_a),

and it only remains to deal with Γ_a(c_a).

Step 5: Getting an upper bound in closed form for Γ_a(c_a).
We will make repeated uses of Pinsker's inequality: for p, q ∈ [0,1],

    K( β(p), β(q) ) ≥ 2 (p − q)².

In what follows, we use the short-hand notation

    Θ_a = θ_a( K( β(μ_a), β(μ⋆) ) / (1+c_a) )

and therefore need to upper bound 1/(1 − e^{−Θ_a}). Since for all u ≥ 0 one has e^{−u} ≤ 1 − u + u²/2, we get

    Γ_a(c_a) ≤ 1/( Θ_a (1 − Θ_a/2) ) ≤ 2/Θ_a  for Θ_a ≤ 1,  and  Γ_a(c_a) ≤ 1/(1 − e^{−1}) ≤ 2  for Θ_a ≥ 1.

It thus only remains to lower bound Θ_a in the case when it is smaller than 1. By the continuity properties of the Kullback-Leibler divergence, the infimum in the definition of θ_a is always achieved; we therefore let μ̃ be an element of [0,1] such that

    Θ_a = K( β(μ̃), β(μ_a) )  and  K( β(μ̃), β(μ⋆) ) = K( β(μ_a), β(μ⋆) ) / (1+c_a);

it is easy to see that we have the ordering μ_a < μ̃ < μ⋆. By Pinsker's inequality, Θ_a ≥ 2(μ̃ − μ_a)², and we now lower bound the latter quantity. We use the short-hand notation g(p) = K( β(p), β(μ⋆) ) and note that the thus defined mapping g is convex and differentiable on (0,1); its derivative equals

    g′(p) = log( (1−μ⋆)/μ⋆ ) + log( p/(1−p) )

for all p ∈ (0,1) and is therefore non-positive for p ≤ μ⋆. By the indicated convexity of g, using a sub-gradient inequality, we get g(μ̃) − g(μ_a) ≥ g′(μ_a)(μ̃ − μ_a), which entails, since g′(μ_a) < 0,

    μ̃ − μ_a ≥ ( g(μ̃) − g(μ_a) ) / g′(μ_a) = ( c_a/(1+c_a) ) · g(μ_a) / ( −g′(μ_a) ),    (5)

where the equality follows from the fact that, by definition of μ̃, we have g(μ̃) = g(μ_a)/(1+c_a). Now, since g′ is differentiable as well on (0,1) and takes the value 0 at μ⋆, a Taylor equality entails that there exists ξ ∈ (μ_a, μ⋆) such that

    −g′(μ_a) = g′(μ⋆) − g′(μ_a) = g″(ξ) (μ⋆ − μ_a),  where  g″(ξ) = 1/ξ + 1/(1−ξ) = 1/( ξ(1−ξ) ).

Therefore, by convexity of τ ↦ 1/( τ(1−τ) ), we get that

    1/( −g′(μ_a) ) ≥ min{ μ_a(1−μ_a), μ⋆(1−μ⋆) } / (μ⋆ − μ_a).

Substituting this into (5) and using again Pinsker's inequality to lower bound g(μ_a) ≥ 2(μ⋆ − μ_a)², we have proved

    μ̃ − μ_a ≥ ( 2c_a/(1+c_a) ) (μ⋆ − μ_a) min{ μ_a(1−μ_a), μ⋆(1−μ⋆) }.

Putting all pieces together, we thus proved that

    Γ_a(c_a) ≤ (1+c_a)² / ( 4 c_a² (μ⋆ − μ_a)² ( min{ μ_a(1−μ_a), μ⋆(1−μ⋆) } )² ) + 2;

bounding the maximum of the two quantities by their sum concludes the main part of the proof.

Step 6: The cases μ⋆ ∈ {0,1} and/or μ_a = 0.  When μ⋆ = 1, then μ̂_{a⋆,N_t(a⋆)} = 1 for all t ≥ |A| + 1, so that B⁺_{a⋆,t} = 1 for all t ≥ |A| + 1. Thus, an arm a is played after round t ≥ |A| + 1 only if B⁺_{a,t} = 1 and μ̂_{a,N_t(a)} = 1 (in view of the tie-breaking rule of the considered strategy). But this means that a is played as long as it gets payoffs equal to 1 and stops being played when it receives the payoff 0 for the first time. Hence, in this case, the sum of payoffs is at least T − 2(|A| − 1) and the regret R_T = E[ T μ⋆ − (Y_1 + ... + Y_T) ] is therefore bounded by 2(|A| − 1). When μ⋆ = 0, a Dirac mass over 0 is associated with all arms and the regret of all strategies is equal to 0.

We consider now the case μ⋆ ∈ (0,1) and μ_a = 0, for which the first three steps go through; only in the upper bound of step 4 did we use the fact that μ_a > 0. But in this case, we have a deterministic bound on (4). Indeed, since K( β(0), β(μ⋆) ) = −log(1−μ⋆), we have k K( β(0), β(μ⋆) ) ≤ f(T) if and only if

    k ≤ f(T) / ( −log(1−μ⋆) ) = f(T) / K( β(μ_a), β(μ⋆) ),

which improves on the general bound exhibited in step 4.

Remark 1
Note that Step 5 in the proof is specifically designed to provide an upper bound on Γ_a(c_a) in the case of Bernoulli distributions. In the general case, getting such an explicit bound seems more involved.

Before stating and proving our main result, Theorem 9, we introduce the quantity K_inf and list some of its properties.

K_inf and its level sets

We now introduce the key quantity needed to generalize the previous algorithm to handle the case of distributions with finite support. To that end, we introduce P_F([0,1]), the subset of P([0,1]) that consists of distributions with finite support.

Definition 4
For all distributions ν ∈ P_F([0,1]) and μ ∈ [0,1], we define

    K_inf(ν, μ) = inf{ K(ν, ν′) : ν′ ∈ P_F([0,1]) s.t. E(ν′) > μ },

where E(ν′) = ∫_[0,1] x dν′(x) denotes the expectation of the distribution ν′.

We now remind some useful properties of K_inf. Honda and Takemura (2010b, Lemma 6) can be reformulated in our context as follows.

Lemma 5
For all ν ∈ P_F([0,1]), the mapping K_inf(ν, ·) is continuous and non-decreasing in its argument μ ∈ [0,1]. Moreover, the mapping K_inf(·, μ) is lower semi-continuous on P_F([0,1]) for all μ ∈ [0,1].

The next two lemmas bound the variation of K_inf, respectively in its first and second arguments. (For clarity, we denote the expectations with respect to ν by E_ν.) Their proofs are both deferred to the appendix. We denote by ∥·∥₁ the ℓ₁-norm on P([0,1]) and recall that the ℓ₁-norm of ν − ν′ corresponds to twice the distance in variation between ν and ν′.

Lemma 6
For all μ ∈ (0,1) and for all ν, ν′ ∈ P_F([0,1]), the following holds true.
– In the case when E_ν[ (1−μ)/(1−X) ] ≥ 1, then |K_inf(ν, μ) − K_inf(ν′, μ)| ≤ M_{ν,μ} ∥ν − ν′∥₁, for some constant M_{ν,μ} > 0.
– In the case when E_ν[ (1−μ)/(1−X) ] < 1, the fact that |K_inf(ν, μ) − K_inf(ν′, μ)| ≥ α K_inf(ν, μ) for some α ∈ (0,1) entails that ∥ν − ν′∥₁ ≥ (1−μ) / ( (2/α)( (2/α) − 1 ) ).

Lemma 7
We have that for any ν ∈ P_F([0,1]), provided that μ > μ − ε ≥ E(ν), the following inequalities hold true:

    ε/(1−μ) ≥ K_inf(ν, μ) − K_inf(ν, μ−ε) ≥ 2ε².

Moreover, the first inequality is also valid when E(ν) ≥ μ > μ − ε or μ ≥ E(ν) ≥ μ − ε.

--------------------------------------------------------------------------
Parameters: a non-decreasing function f : N → R

Initialization: pull each arm of A once.

For rounds t + 1, where t ≥ |A|:
– compute for each arm a ∈ A the quantity

      B⁺_{a,t} = max{ q ∈ [0,1] : N_t(a) K_inf( ν̂_{a,N_t(a)}, q ) ≤ f(t) },

  where ν̂_{a,N_t(a)} = (1/N_t(a)) Σ_{s≤t: A_s=a} δ_{Y_s};
– in case of a tie, pick an arm with largest value of μ̂_{a,N_t(a)};
– pull any arm A_{t+1} ∈ argmax_{a∈A} B⁺_{a,t}.
--------------------------------------------------------------------------
Figure 2: The K_inf-strategy.

Level sets of K_inf:  For each γ > 0 and μ ∈ (0,1), we define

    C_{μ,γ} = { ν′ ∈ P_F([0,1]) : K_inf(ν′, μ) < γ }
            = { ν′ ∈ P_F([0,1]) : ∃ ν′_μ ∈ P_F([0,1]) s.t. E(ν′_μ) > μ and K(ν′, ν′_μ) < γ }.

We detail a property in the following lemma, whose proof is also deferred to the appendix.
Lemma 8
For all γ > 0 and μ ∈ (0,1), the set C_{μ,γ} is a non-empty open convex set. Moreover, its closure satisfies

    C̄_{μ,γ} ⊇ { ν′ ∈ P_F([0,1]) : K_inf(ν′, μ) ≤ γ }.

The K_inf-strategy and a general performance guarantee

For each arm a ∈ A and round t with N_t(a) ≥ 1, we denote by ν̂_{a,N_t(a)} the empirical distribution of the payoffs obtained till round t when picking arm a, that is,

    ν̂_{a,N_t(a)} = (1/N_t(a)) Σ_{s≤t: A_s=a} δ_{Y_s},

where for all x ∈ [0,1], we denote by δ_x the Dirac mass at x. We define the corresponding empirical averages as

    μ̂_{a,N_t(a)} = E( ν̂_{a,N_t(a)} ) = (1/N_t(a)) Σ_{s≤t: A_s=a} Y_s.

We then consider the K_inf-strategy defined in Figure 2. Note that the use of maxima in the definitions of the B⁺_{a,t} is justified by Lemma 5.

As explained in Honda and Takemura (2010b), the computation of the quantities K_inf can be done efficiently in this case, i.e., when we consider only distributions with finite supports. This is because, in the computation of K_inf, it is sufficient to consider only distributions with the same support as the empirical distributions (up to one point). Note that the knowledge of the support of the distributions associated with the arms is not required.
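As a complement, the quantities K_inf admit a convex dual representation for distributions supported on [0,1] (see Honda and Takemura, 2010b): K_inf(ν, μ) = max over λ ∈ [0, 1/(1−μ)] of E_ν[ log(1 − λ(X − μ)) ], whose objective is concave in λ. The sketch below evaluates it by a simple grid maximization; the dual form is recalled from the cited reference, while the code itself and its function name are ours, for illustration only:

```python
import math

def k_inf(support, probs, mu, grid=100000):
    """Numerically evaluate K_inf(nu, mu) for nu finitely supported on [0, 1],
    via the dual form: max over lam in [0, 1/(1-mu)] of E_nu[log(1 - lam (X - mu))]."""
    if sum(p * x for x, p in zip(support, probs)) >= mu:
        return 0.0                   # E(nu) >= mu: the infimum in Definition 4 is 0
    lam_max = 1.0 / (1.0 - mu)
    best = 0.0
    for i in range(grid):            # lam stays strictly below lam_max so that all
        lam = lam_max * i / grid     # logarithms remain finite even if 1 is in the support
        best = max(best, sum(p * math.log(1.0 - lam * (x - mu))
                             for x, p in zip(support, probs)))
    return best
```

In the Bernoulli case this recovers the closed form used in Section 3: for instance, k_inf([0.0, 1.0], [0.7, 0.3], 0.5) ≈ 0.0823, which equals K(β(0.3), β(0.5)).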
Theorem 9  Assume that ν⋆ is finitely supported, with expectation μ⋆ ∈ (0,1) and with support denoted by S⋆. Let a ∈ A be a suboptimal arm such that μ_a > 0 and ν_a is finitely supported. Then, for all c_a > 0 and all

    0 < ε < min{ Δ_a, c_a/( 2(1+c_a) ) (1−μ⋆) K_inf(ν_a, μ⋆) },

the expected number of times the K_inf-strategy, run with f(t) = log t, pulls arm a satisfies

    E[N_T(a)] ≤ 1 + (1+c_a) log T / K_inf(ν_a, μ⋆) + 1/( 1 − e^{−Θ_a(c_a,ε)} )
        + (1/ε) log( 1/(1−μ⋆+ε) ) Σ_{k=1}^T (k+1)^{|S⋆|} e^{−kε²} + 1/( 2(Δ_a − ε)² ),

where

    Θ_a(c_a, ε) = θ_a( log T / k₀ + ε/(1−μ⋆) )  with  k₀ = ⌈ (1+c_a) log T / K_inf(ν_a, μ⋆) ⌉,

and, for all γ > 0,

    θ_a(γ) = inf{ K(ν′, ν_a) : ν′ s.t. K_inf(ν′, μ⋆) < γ }.

As a corollary, we get (by taking some common value c for all the c_a) that for all c > 0,

    R_T ≤ Σ_{a∈A} Δ_a (1+c) log T / K_inf(ν_a, μ⋆) + h(c),

where h(c) < ∞ is a function of c (and of the distributions associated with the arms), which is however independent of T. As a consequence, we recover the asymptotic results of Burnetas and Katehakis (1996), Honda and Takemura (2010a), i.e., the guarantee that

    limsup_{T→∞} R_T / log T ≤ Σ_{a∈A} Δ_a / K_inf(ν_a, μ⋆).

Of course, a sharper optimization can be performed by carefully choosing the constants c_a, which are parameters of the analysis; similarly to the comments after the statement of Theorem 3, we would then get a dominant term with a constant factor 1 instead of 1 + c as above, plus an additional second-order term. Details are left to a journal version of this paper.

Proof:
By arguments similar to the ones used in the first step of the proof of Theorem 3, we have

    { A_{t+1} = a } ⊆ { μ⋆ − ε ≤ μ̂_{a,N_t(a)} } ∪ { μ⋆ − ε > B⁺_{a⋆,t} } ∪ { μ⋆ − ε ∈ [ μ̂_{a,N_t(a)}, B⁺_{a,t} ] };

indeed, on the event { A_{t+1} = a } ∩ { μ⋆ − ε > μ̂_{a,N_t(a)} } ∩ { μ⋆ − ε ≤ B⁺_{a⋆,t} }, we have

    μ̂_{a,N_t(a)} < μ⋆ − ε ≤ B⁺_{a⋆,t} ≤ B⁺_{a,t}

(where the last inequality is by definition of the strategy). Before proceeding, we note that

    { μ⋆ − ε ∈ [ μ̂_{a,N_t(a)}, B⁺_{a,t} ] } ⊆ { N_t(a) K_inf( ν̂_{a,N_t(a)}, μ⋆ − ε ) ≤ f(t) },

since K_inf is a non-decreasing function in its second argument and K_inf( ν, E(ν) ) = 0 for all distributions ν. Therefore,

    E[N_T(a)] ≤ 1 + Σ_{t=|A|}^{T−1} P{ μ⋆ − ε ≤ μ̂_{a,N_t(a)} and A_{t+1} = a } + Σ_{t=|A|}^{T−1} P{ μ⋆ − ε > B⁺_{a⋆,t} }
        + Σ_{t=|A|}^{T−1} P{ N_t(a) K_inf( ν̂_{a,N_t(a)}, μ⋆ − ε ) ≤ f(t) and A_{t+1} = a };

now, the two sums with the events "and A_{t+1} = a" can be rewritten by using the stopping times τ_{a,k} introduced in the proof of Theorem 3; more precisely, by mimicking the transformations performed in its step 3, we get the simpler bound

    E[N_T(a)] ≤ 1 + Σ_{k=2}^{T−|A|+1} P{ μ⋆ − ε ≤ μ̃_{a,k−1} } + Σ_{t=|A|}^{T−1} P{ μ⋆ − ε > B⁺_{a⋆,t} }
        + Σ_{k=2}^{T−|A|+1} P{ (k−1) K_inf( ν̃_{a,k−1}, μ⋆ − ε ) ≤ f(T) },    (6)

where the ν̃_{a,s} and μ̃_{a,s} are respectively the empirical distributions and empirical expectations computed on the first s elements of the sequence of random variables X̃_{a,j} = Y_{τ_{a,j}}, which are i.i.d. according to ν_a.
Step 1: The first sum in (6) is bounded by resorting to Hoeffding's inequality, whose application is legitimate since $\mu^\star - \mu_a - \varepsilon > 0$:
\[
\sum_{k=2}^{T-|\mathcal{A}|+1} P\Big\{ \mu^\star-\varepsilon < \widetilde\mu_{a,k-1} \Big\}
= \sum_{k=1}^{T-|\mathcal{A}|} P\Big\{ \mu^\star-\mu_a-\varepsilon < \widetilde\mu_{a,k} - \mu_a \Big\}
\leq \sum_{k=1}^{T-|\mathcal{A}|} e^{-2k(\mu^\star-\mu_a-\varepsilon)^2}
\leq \frac{e^{-2(\mu^\star-\mu_a-\varepsilon)^2}}{1 - e^{-2(\mu^\star-\mu_a-\varepsilon)^2}}
\leq \frac{1}{2(\mu^\star-\mu_a-\varepsilon)^2}\,,
\]
where we used for the last inequality the general upper bounds provided at the beginning of step 5 in the proof of Theorem 3.

Step 2: The second sum in (6) is bounded by first using the definition of $B^+_{a^\star,t}$; then, by decomposing the event depending on the values taken by $N_t(a^\star)$; and finally, by using the fact that on $\{N_t(a^\star)=k\}$, we have the rewriting $\widehat\nu_{a^\star,N_t(a^\star)} = \widetilde\nu_{a^\star,k}$ and $\widehat\mu_{a^\star,N_t(a^\star)} = \widetilde\mu_{a^\star,k}$; more precisely,
\begin{align*}
\sum_{t=|\mathcal{A}|}^{T-1} P\Big\{ \mu^\star-\varepsilon > B^+_{a^\star,t} \Big\}
&\leq \sum_{t=|\mathcal{A}|}^{T-1} P\Big\{ N_t(a^\star)\, K_{\inf}\big(\widehat\nu_{a^\star,N_t(a^\star)},\, \mu^\star-\varepsilon\big) > f(t) \Big\} \\
&= \sum_{t=|\mathcal{A}|}^{T-1} \sum_{k=1}^{t} P\Big\{ N_t(a^\star)=k \ \text{and} \ k\, K_{\inf}\big(\widetilde\nu_{a^\star,k},\, \mu^\star-\varepsilon\big) > f(t) \Big\} \\
&\leq \sum_{k=1}^{T} \sum_{t=|\mathcal{A}|}^{T-1} P\Big\{ k\, K_{\inf}\big(\widetilde\nu_{a^\star,k},\, \mu^\star-\varepsilon\big) > f(t) \Big\}\,.
\end{align*}
Since $f = \log$ is increasing, we can rewrite the bound, using a Fubini--Tonelli argument, as
\begin{align*}
\sum_{t=|\mathcal{A}|}^{T-1} P\Big\{ \mu^\star-\varepsilon > B^+_{a^\star,t} \Big\}
&\leq \sum_{k=1}^{T} \sum_{t=|\mathcal{A}|}^{T-1} P\Big\{ f^{-1}\Big( k\, K_{\inf}\big(\widetilde\nu_{a^\star,k},\, \mu^\star-\varepsilon\big) \Big) > t \Big\} \\
&\leq \sum_{k=1}^{T} E\Big[ f^{-1}\Big( k\, K_{\inf}\big(\widetilde\nu_{a^\star,k},\, \mu^\star-\varepsilon\big) \Big)\, \mathbb{I}\big\{ K_{\inf}(\widetilde\nu_{a^\star,k},\, \mu^\star-\varepsilon) > 0 \big\} \Big]\,.
\end{align*}
Now, Honda and Takemura (2010a, Lemma 13) indicates that, since $\mu^\star-\varepsilon \in [0,1)$, for all $\nu \in \mathcal{P}_F\big([0,1]\big)$,
\[
K_{\inf}\big(\nu,\, \mu^\star-\varepsilon\big) \leq \log\big( 1/(1-\mu^\star+\varepsilon) \big) \stackrel{\text{def}}{=} K_{\max}\,;
\]
we define $Q = \big\lceil K_{\max}/\varepsilon^2 \big\rceil$ and introduce the following sets $(V_q)_{1 \leq q \leq Q}$:
\[
V_q = \Big\{ \nu \in \mathcal{P}_F\big([0,1]\big) : \ (q-1)\,\varepsilon^2 < K_{\inf}\big(\nu,\, \mu^\star-\varepsilon\big) \leq q\,\varepsilon^2 \Big\}\,.
\]
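The last two inequalities of Step 1 are the elementary facts that $\sum_{k \geq 1} e^{-kx} = e^{-x}/(1-e^{-x})$ and that $e^{-x}/(1-e^{-x}) = 1/(e^x-1) \leq 1/x$ (since $e^x \geq 1+x$), applied with $x = 2(\mu^\star-\mu_a-\varepsilon)^2$. A quick numerical sanity check (Python; the function name is ours):

```python
import math

def tail_sum(x, terms=200_000):
    """Partial sum of sum_{k>=1} e^{-k x}, x > 0 (converged once x*terms >> 1)."""
    return math.fsum(math.exp(-k * x) for k in range(1, terms + 1))

for x in (0.005, 0.02, 0.5, 2.0):          # x plays the role of 2*(gap - eps)^2
    closed_form = math.exp(-x) / (1 - math.exp(-x))
    assert abs(tail_sum(x) - closed_form) < 1e-9   # geometric series
    assert closed_form <= 1 / x                    # since e^x >= 1 + x
```

The same two facts are behind the term $1/\big(2(\Delta_a-\varepsilon)^2\big)$ appearing in the statement of the theorem.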
A peeling argument (and by using that $f^{-1} = \exp$ is increasing as well) entails, for all $k \geq 1$,
\begin{align*}
E\Big[ f^{-1}\big( k\, K_{\inf}(\widetilde\nu_{a^\star,k},\, \mu^\star-\varepsilon) \big)\, \mathbb{I}\big\{ K_{\inf}(\widetilde\nu_{a^\star,k},\, \mu^\star-\varepsilon) > 0 \big\} \Big]
&= \sum_{q=1}^{Q} E\Big[ f^{-1}\big( k\, K_{\inf}(\widetilde\nu_{a^\star,k},\, \mu^\star-\varepsilon) \big)\, \mathbb{I}\big\{ \widetilde\nu_{a^\star,k} \in V_q \big\} \Big] && (7) \\
&\leq \sum_{q=1}^{Q} P\big\{ \widetilde\nu_{a^\star,k} \in V_q \big\}\, f^{-1}\big( k q \varepsilon^2 \big) \\
&\leq \sum_{q=1}^{Q} P\Big\{ K_{\inf}\big(\widetilde\nu_{a^\star,k},\, \mu^\star-\varepsilon\big) > (q-1)\,\varepsilon^2 \Big\}\, f^{-1}\big( k q \varepsilon^2 \big)\,, && (8)
\end{align*}
where we used the definition of $V_q$ to obtain each of the two inequalities. Now, by Lemma 7, when $E\big(\widetilde\nu_{a^\star,k}\big) < \mu^\star-\varepsilon$, which is satisfied whenever $K_{\inf}\big(\widetilde\nu_{a^\star,k},\, \mu^\star-\varepsilon\big) > 0$, we have
\[
K_{\inf}\big(\widetilde\nu_{a^\star,k},\, \mu^\star-\varepsilon\big)
\leq K_{\inf}\big(\widetilde\nu_{a^\star,k},\, \mu^\star\big) - 2\varepsilon^2
\leq K\big(\widetilde\nu_{a^\star,k},\, \nu^\star\big) - 2\varepsilon^2\,,
\]
where the last inequality is by mere definition of $K_{\inf}$. Therefore,
\[
P\Big\{ K_{\inf}\big(\widetilde\nu_{a^\star,k},\, \mu^\star-\varepsilon\big) > (q-1)\,\varepsilon^2 \Big\}
\leq P\Big\{ K\big(\widetilde\nu_{a^\star,k},\, \nu^\star\big) > (q+1)\,\varepsilon^2 \Big\}\,.
\]
We note that for all $k \geq 1$,
\[
P\Big\{ K\big(\widetilde\nu_{a^\star,k},\, \nu^\star\big) > (q+1)\,\varepsilon^2 \Big\} \leq (k+1)^{|\mathcal{S}^\star|}\, e^{-k(q+1)\varepsilon^2}\,,
\]
where we recall that $\mathcal{S}^\star$ denotes the finite support of $\nu^\star$ and where we applied Corollary 12 of the appendix. Now, (8) then yields, via the choice $f = \log$ and thus $f^{-1} = \exp$, that
\[
E\Big[ f^{-1}\big( k\, K_{\inf}(\widetilde\nu_{a^\star,k},\, \mu^\star-\varepsilon) \big)\, \mathbb{I}\big\{ K_{\inf}(\widetilde\nu_{a^\star,k},\, \mu^\star-\varepsilon) > 0 \big\} \Big]
\leq \sum_{q=1}^{Q} (k+1)^{|\mathcal{S}^\star|}\, \underbrace{e^{-k(q+1)\varepsilon^2}\, e^{kq\varepsilon^2}}_{=\,e^{-k\varepsilon^2}}
= Q\,(k+1)^{|\mathcal{S}^\star|}\, e^{-k\varepsilon^2}\,.
\]
Substituting the value of $Q$, we therefore have proved that
\[
\sum_{t=|\mathcal{A}|}^{T-1} P\Big\{ \mu^\star-\varepsilon > B^+_{a^\star,t} \Big\}
\leq \frac{1}{\varepsilon^2}\, \log\!\Big( \frac{1}{1-\mu^\star+\varepsilon} \Big)\, \sum_{k=1}^{T} (k+1)^{|\mathcal{S}^\star|}\, e^{-k\varepsilon^2}\,.
\]

Step 3: The third sum in (6) is first upper bounded by means of Lemma 7, which states that
\[
K_{\inf}\big(\widetilde\nu_{a,k-1},\, \mu^\star\big) - \frac{\varepsilon}{1-\mu^\star} \leq K_{\inf}\big(\widetilde\nu_{a,k-1},\, \mu^\star-\varepsilon\big)
\]
for all $k \geq 2$, and by using $f(t) \leq f(T)$; this gives
\[
\sum_{k=1}^{T-|\mathcal{A}|} P\Big\{ k\, K_{\inf}\big(\widetilde\nu_{a,k},\, \mu^\star-\varepsilon\big) \leq f(t) \Big\}
\leq \sum_{k=1}^{T-|\mathcal{A}|} P\bigg\{ k\, K_{\inf}\big(\widetilde\nu_{a,k},\, \mu^\star\big) \leq f(T) + \frac{k\,\varepsilon}{1-\mu^\star} \bigg\}
= \sum_{k=1}^{T-|\mathcal{A}|} P\Big\{ \widetilde\nu_{a,k} \in \mathcal{C}_{\mu^\star,\gamma_k} \Big\}\,,
\]
where $\gamma_k = f(T)/k + \varepsilon/(1-\mu^\star)$ and where the set $\mathcal{C}_{\mu^\star,\gamma_k}$ was defined in Section 4.1. For all $\gamma > 0$, we let
\[
\theta_a(\gamma) = \inf\Big\{ K(\nu',\nu_a) : \ \nu' \in \mathcal{C}_{\mu^\star,\gamma} \Big\}
= \inf\Big\{ K(\nu',\nu_a) : \ \nu' \in \overline{\mathcal{C}}_{\mu^\star,\gamma} \Big\}
\]
(where the second equality follows from the lower semi-continuity of $K$) and aim at bounding $P\big\{ \widetilde\nu_{a,k} \in \mathcal{C}_{\mu^\star,\gamma_k} \big\}$.

As shown in Section 4.1, the set $\mathcal{C}_{\mu^\star,\gamma}$ is a non-empty open convex set. If we prove that $\theta_a(\gamma)$ is finite for all $\gamma > 0$, then all the conditions required to apply Lemma 1 will be satisfied, and we get the upper bound
\[
\sum_{k=1}^{T-|\mathcal{A}|} P\Big\{ \widetilde\nu_{a,k} \in \mathcal{C}_{\mu^\star,\gamma_k} \Big\} \leq \sum_{k=1}^{T-|\mathcal{A}|} e^{-k\,\theta_a(\gamma_k)}\,.
\]
To that end, we use the fact that $\nu_a$ is finitely supported. Now, either the probability of interest is null and we are done; or it is not null, which implies that there exists a possible value of $\widetilde\nu_{a,k}$ that is in $\mathcal{C}_{\mu^\star,\gamma}$; since this value is a distribution with a support included in the one of $\nu_a$, it is absolutely continuous with respect to $\nu_a$ and hence the Kullback--Leibler divergence between this value and $\nu_a$ is finite; in particular, $\theta_a(\gamma)$ is finite.

Finally, we bound the $\theta_a(\gamma_k)$ for values of $k$ larger than
\[
k_0 = \bigg\lceil \frac{(1+c_a)\, f(T)}{K_{\inf}(\nu_a,\mu^\star)} \bigg\rceil\,;
\]
we have that for all $k \geq k_0$, in view of the bound put on $\varepsilon$,
\[
\gamma_k \leq \gamma_{k_0} = \frac{f(T)}{k_0} + \frac{\varepsilon}{1-\mu^\star}
< \frac{K_{\inf}(\nu_a,\mu^\star)}{1+c_a} + \frac{c_a/2}{1+c_a}\, K_{\inf}(\nu_a,\mu^\star)
= \frac{1+c_a/2}{1+c_a}\, K_{\inf}(\nu_a,\mu^\star)\,. \qquad (9)
\]
Since $\theta_a$ is non-increasing, we have
\[
\sum_{k=1}^{T-|\mathcal{A}|} e^{-k\,\theta_a(\gamma_k)} \leq k_0 - 1 + \sum_{k=k_0}^{T-|\mathcal{A}|} e^{-k\,\theta_a(\gamma_{k_0})} \leq k_0 - 1 + \frac{1}{1-e^{-\Theta_a(c_a,\varepsilon)}}\,,
\]
provided that the quantity $\Theta_a(c_a,\varepsilon) = \theta_a\big(\gamma_{k_0}\big)$ is positive, which we prove now.

Indeed, for all $\nu' \in \mathcal{C}_{\mu^\star,\gamma_{k_0}}$, we have by definition and by (9) that
\[
K_{\inf}(\nu',\mu^\star) - K_{\inf}(\nu_a,\mu^\star) < \gamma_{k_0} - K_{\inf}(\nu_a,\mu^\star) < - \frac{c_a/2}{1+c_a}\, K_{\inf}(\nu_a,\mu^\star)\,.
\]
In the case where $E_{\nu_a}\big[(1-\mu^\star)/(1-X)\big] > 1$, we have, first by application of Pinsker's inequality and then by Lemma 6, that
\[
K\big(\nu',\nu_a\big) \geq \frac{\|\nu'-\nu_a\|^2}{2}
\geq \frac{\big( K_{\inf}(\nu_a,\mu^\star) - K_{\inf}(\nu',\mu^\star) \big)^2}{2\,M^2_{\nu_a,\mu^\star}}
\geq \frac{c_a^2\, \big( K_{\inf}(\nu_a,\mu^\star) \big)^2}{8\,(1+c_a)^2\, M^2_{\nu_a,\mu^\star}}\,;
\]
since, again by Pinsker's inequality, $K_{\inf}(\nu_a,\mu^\star) \geq (\mu_a-\mu^\star)^2/2 > 0$, we have exhibited a lower bound independent of $\nu'$ in this case. In the case where $E_{\nu_a}\big[(1-\mu^\star)/(1-X)\big] \leq 1$, we apply the second part of Lemma 6, with $\alpha_a = (c_a/2)/(1+c_a)$, and get
\[
K\big(\nu',\nu_a\big) \geq \frac{\|\nu'-\nu_a\|^2}{2}
\geq \frac{1}{2} \Bigg( \frac{(1-\mu^\star)\, K_{\inf}(\nu_a,\mu^\star)}{(2/\alpha_a)\big( (2/\alpha_a) - 1 \big)} \Bigg)^{\!2} > 0\,.
\]
Thus, in both cases we found a positive lower bound independent of $\nu'$, so that the infimum over $\nu' \in \overline{\mathcal{C}}_{\mu^\star,\gamma_{k_0}}$ of the quantities $K(\nu',\nu_a)$, which precisely equals $\theta_a\big(\gamma_{k_0}\big)$, is also positive. This concludes the proof.

Conclusion.
We provided a finite-time analysis of the (asymptotically optimal) $K_{\inf}$--strategy in the case of finitely supported distributions. One could think that the extension to the case of general distributions is straightforward. However, this extension appears somewhat difficult (at least when using the current definition of $K_{\inf}$) for the following reasons: (1) Step 2 in the proof uses the method of types, which would require some extension of the non-asymptotic version of Sanov's theorem to this case. (2) Step 3 requires both $\theta_a(\gamma) < \infty$ for all $\gamma > 0$ and $\theta_a(\gamma) > 0$ for $\gamma < K_{\inf}(\nu_a,\mu^\star)$, which does not seem to always be the case for general distributions. Exploring other directions for such extensions is left for future work; for instance, histogram-based approximations of general distributions could be considered.

Acknowledgements.
The authors wish to thank Peter Auer and Daniil Ryabko for insightful discussions. They acknowledge support from the French National Research Agency (ANR) under grant EXPLO/RA ("Exploration--exploitation for efficient resource allocation") and by the PASCAL2 Network of Excellence under EC grant no. 506778.

References

J.-Y. Audibert, R. Munos, and C. Szepesvári. Exploration--exploitation trade-off using variance estimates in multi-armed bandits. Theoretical Computer Science, 410:1876–1902, 2009.

J.-Y. Audibert and S. Bubeck. Regret bounds and minimax policies under partial monitoring. Journal of Machine Learning Research, 11:2635–2686, 2010.

P. Auer and R. Ortner. UCB revisited: Improved regret bounds for the stochastic multi-armed bandit problem. Periodica Mathematica Hungarica, 61(1–2):55–65, 2010.

P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2–3):235–256, 2002.

A.N. Burnetas and M.N. Katehakis. Optimal adaptive policies for sequential allocation problems. Advances in Applied Mathematics, 17(2):122–142, 1996.

Y. Chow and H. Teicher. Probability Theory. Springer, 1988.

I.H. Dinwoodie. Mesures dominantes et théorème de Sanov. Annales de l'Institut Henri Poincaré – Probabilités et Statistiques, 28(3):365–373, 1992.

S. Filippi. Stratégies optimistes en apprentissage par renforcement. PhD thesis, Télécom ParisTech, 2010.

A. Garivier and O. Cappé. The KL-UCB algorithm for bounded stochastic bandits and beyond. In Proceedings of COLT, 2011.

A. Garivier and F. Leonardi. Context tree selection: A unifying view. arXiv:1011.2424, 2010.

J. Honda and A. Takemura. An asymptotically optimal bandit algorithm for bounded support models. In Proceedings of COLT, pages 67–79, 2010a.

J. Honda and A. Takemura. An asymptotically optimal policy for finite support models in the multiarmed bandit problem. arXiv:0905.2776, 2010b.

T.L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4–22, 1985.

H. Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58:527–535, 1952.

Appendix beyond the COLT page limit
A conference version of this paper was published in the Proceedings of the Twenty-Fourth Annual Conference on Learning Theory (COLT'11); this appendix details some material which was alluded to in this conference version but could not be published therein because of the page limit.
A.1 Proof of Lemma 2
We only provide it for the convenience of the readers, since it is similar to the one presented in Garivier and Leonardi (2010, Proposition 2) or in Garivier and Cappé (2011); it was however somewhat simplified by noting that the proof technique used leads to a maximal inequality, as stated in Lemma 2, and not only to an inequality for a self-normalized average, as stated in the original reference.
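Lemma 2 and its proof below work with the divergence $K\big(\beta(q), \beta(p)\big) = q\log(q/p) + (1-q)\log\big((1-q)/(1-p)\big)$ between Bernoulli distributions, and exploit that $q \mapsto K\big(\beta(q),\beta(p)\big)$ is increasing on $(p,1)$, so that a unique level $p_\varepsilon$ with $K\big(\beta(p_\varepsilon),\beta(p)\big) = \varepsilon$ exists. A small illustration (Python; `bernoulli_kl` and `p_eps` are our names, the latter computed by bisection rather than in closed form):

```python
import math

def bernoulli_kl(q, p):
    """K(beta(q), beta(p)), with the usual conventions at q in {0, 1}."""
    if q == 0.0:
        return math.log(1 / (1 - p))
    if q == 1.0:
        return math.log(1 / p)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def p_eps(p, eps, tol=1e-12):
    """Unique q in (p, 1) with K(beta(q), beta(p)) = eps, for eps < log(1/p)."""
    lo, hi = p, 1.0 - 1e-15
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bernoulli_kl(mid, p) < eps:
            lo = mid
        else:
            hi = mid
    return lo

# Monotonicity on (p, 1), on which the peeling step of the proof relies.
p = 0.25
kls = [bernoulli_kl(0.3 + 0.01 * i, p) for i in range(70)]
assert all(a < b for a, b in zip(kls, kls[1:]))
```

The bisection in `p_eps` is valid precisely because of this monotonicity.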
Proof:
The result is straightforward in the cases $p = 0$ or $p = 1$, since then $\widehat p_s = p$ almost surely; in the rest of the proof, we therefore only consider the case where $p \in (0,1)$. Actually, we will show that
\[
P_p\Bigg( \bigcup_{s=1}^{t} \bigg\{ s\, K\Big( \beta\big(\widehat p_s\big),\, \beta(p) \Big) > \varepsilon \ \text{and} \ \widehat p_s > p \bigg\} \Bigg)
\leq e\, \big\lceil \varepsilon \log t \big\rceil\, e^{-\varepsilon}\,,
\]
and the desired result will follow by symmetry and a union bound.

Step 1: A martingale. For all $\lambda > 0$, we consider the log-Laplace transform
\[
\psi_p(\lambda) = \log E_p\big[ e^{\lambda X_1} \big] = \log\big( (1-p) + p\, e^{\lambda} \big)\,,
\]
with which we define the martingale
\[
W_s(\lambda) = \exp\big( \lambda (X_1 + \cdots + X_s) - s\, \psi_p(\lambda) \big)\,.
\]

Step 2: A peeling argument. We introduce $t_0 = 1$ and $t_k = \lfloor \gamma^k \rfloor$, for some $\gamma > 1$, and denote by $K = \big\lceil (\log t)/(\log \gamma) \big\rceil$ an upper bound on the number of elements in the peeling.

We also note that, by continuity of the Kullback--Leibler divergence in the case of Bernoulli distributions, for all $\varepsilon > 0$, there exists a unique element $p_\varepsilon \in (p,1)$ such that $K\big(\beta(p_\varepsilon), \beta(p)\big) = \varepsilon$; this element satisfies that $K\big(\beta(q), \beta(p)\big) > \varepsilon$ and $q > p$ entails $q > p_\varepsilon$. Denoting by $\varepsilon_k = \varepsilon/t_k$, a union bound using the described peeling then yields
\begin{align*}
P_p\Bigg( \bigcup_{s=1}^{t} \Big\{ s\, K\big( \beta(\widehat p_s),\, \beta(p) \big) > \varepsilon \ \text{and} \ \widehat p_s > p \Big\} \Bigg)
&\leq \sum_{k=1}^{K} P_p\Bigg( \bigcup_{s=t_{k-1}}^{t_k} \Big\{ s\, K\big( \beta(\widehat p_s),\, \beta(p) \big) > \varepsilon \ \text{and} \ \widehat p_s > p \Big\} \Bigg) \\
&\leq \sum_{k=1}^{K} P_p\Bigg( \bigcup_{s=t_{k-1}}^{t_k} \Big\{ K\big( \beta(\widehat p_s),\, \beta(p) \big) > \frac{\varepsilon}{t_k} \ \text{and} \ \widehat p_s > p \Big\} \Bigg) \\
&= \sum_{k=1}^{K} P_p\Bigg( \bigcup_{s=t_{k-1}}^{t_k} \Big\{ \widehat p_s > p_{\varepsilon_k} \Big\} \Bigg)
= \sum_{k=1}^{K} P_p\Bigg( \bigcup_{s=t_{k-1}}^{t_k} \Big\{ X_1 + \cdots + X_s - s\, p_{\varepsilon_k} > 0 \Big\} \Bigg)\,.
\end{align*}
Now, the variational formula for Kullback--Leibler divergences shows that for all $k$, there exists a $\lambda_k$ such that
\[
\varepsilon_k = K\big( \beta(p_{\varepsilon_k}),\, \beta(p) \big) = \lambda_k\, p_{\varepsilon_k} - \psi_p(\lambda_k)\,;
\]
actually, a straightforward calculation shows that $\lambda_k = \log\big( p_{\varepsilon_k}(1-p) \big) - \log\big( p\,(1-p_{\varepsilon_k}) \big) > 0$. Hence,
\begin{align*}
\sum_{k=1}^{K} P_p\Bigg( \bigcup_{s=t_{k-1}}^{t_k} \Big\{ X_1 + \cdots + X_s - s\, p_{\varepsilon_k} > 0 \Big\} \Bigg)
&= \sum_{k=1}^{K} P_p\Bigg( \bigcup_{s=t_{k-1}}^{t_k} \Big\{ \exp\big( \lambda_k (X_1 + \cdots + X_s) - \lambda_k\, s\, p_{\varepsilon_k} \big) > 1 \Big\} \Bigg) \\
&= \sum_{k=1}^{K} P_p\Bigg( \bigcup_{s=t_{k-1}}^{t_k} \Big\{ \exp\big( \lambda_k (X_1 + \cdots + X_s) - s\, \psi_p(\lambda_k) \big) > e^{s\,\varepsilon_k} \Big\} \Bigg) \\
&\leq \sum_{k=1}^{K} P_p\Bigg( \bigcup_{s=t_{k-1}}^{t_k} \Big\{ W_s(\lambda_k) > e^{t_{k-1}\,\varepsilon_k} \Big\} \Bigg)
\leq \sum_{k=1}^{K} e^{-t_{k-1}\,\varepsilon_k} \leq K\, e^{-\varepsilon/\gamma}\,,
\end{align*}
where in the last steps we resorted to Doob's maximal inequality and used that $t_{k-1}\,\varepsilon_k = \varepsilon\, t_{k-1}/t_k \geq \varepsilon/\gamma$.

Step 3: Choosing $\gamma$. The obtained bound equals, by substituting the value of $K$ and by choosing $\gamma = \varepsilon/(\varepsilon-1)$,
\[
K\, e^{-\varepsilon/\gamma} = \big\lceil (\log t)/(\log \gamma) \big\rceil\, e^{-\varepsilon+1}
= \Bigg\lceil \frac{\log t}{\log\big( \varepsilon/(\varepsilon-1) \big)} \Bigg\rceil\, e^{-\varepsilon+1}\,;
\]
the proof is concluded by noting that $\varepsilon \mapsto \log\big( \varepsilon/(\varepsilon-1) \big) - 1/\varepsilon$ is decreasing (its derivative is negative), with limit $0$ at $+\infty$; hence $\log\big(\varepsilon/(\varepsilon-1)\big) \geq 1/\varepsilon$ and the bound above is at most $\lceil \varepsilon \log t \rceil\, e^{-\varepsilon+1}$, as claimed.

A.2 Details of the adaptation leading to Lemma 1
The exact statement of Dinwoodie (1992, Theorem 2.1 and comments on page 372) is the following.
Lemma 10 (Non-asymptotic Sanov’s lemma)
Let $\mathcal{C}$ be an open convex subset of $\mathcal{P}(\mathcal{X})$ such that
\[
\Lambda(\mathcal{C}) = \inf_{\kappa \in \mathcal{C}} K(\kappa, \nu) < \infty\,.
\]
Then, for all $t \geq 1$,
\[
P_\nu\big\{ \widehat\nu_t \in \mathcal{C} \big\} \leq e^{-t\,\Lambda(\mathcal{C})}\,.
\]

We show how it entails Lemma 1. Let $\mathcal{C}$ be an open convex subset of $\mathcal{P}(\mathcal{X})$ and let $\overline{\mathcal{C}}$ be its closure. We denote by
\[
\mathcal{C}_\delta = \big\{ \nu' \in \mathcal{P}(\mathcal{X}) : \ d(\nu', \mathcal{C}) < \delta \big\}
\]
the $\delta$--open neighborhood of $\mathcal{C}$; we have $\overline{\mathcal{C}} \subseteq \mathcal{C}_\delta$ for all $\delta > 0$. Therefore, by the lemma above, since $\Lambda(\mathcal{C}_\delta) \leq \Lambda(\mathcal{C}) < \infty$,
\[
P_\nu\big\{ \widehat\nu_t \in \overline{\mathcal{C}} \big\} \leq P_\nu\big\{ \widehat\nu_t \in \mathcal{C}_\delta \big\} \leq e^{-t\,\Lambda(\mathcal{C}_\delta)}\,.
\]
We pick, for each integer $n \geq 1$, an element $\kappa_n$ such that $\Lambda\big(\mathcal{C}_{1/n}\big) \geq K(\kappa_n, \nu) - 1/n$; by Dinwoodie (1992, proof of Proposition 1.1), the sequence of the $\kappa_n$ admits a converging subsequence $\kappa_{\varphi(n)}$, whose limit point $\kappa_\infty$ belongs to $\overline{\mathcal{C}}$ and which satisfies
\[
K(\kappa_\infty, \nu) \leq \liminf_{n \to \infty} K(\kappa_n, \nu) = \liminf_{\delta \to 0} \Lambda\big(\mathcal{C}_\delta\big)\,.
\]
Therefore, by taking limits in the above inequality, we have proved the desired inequality,
\[
P_\nu\big\{ \widehat\nu_t \in \overline{\mathcal{C}} \big\} \leq e^{-t\,K(\kappa_\infty,\nu)} \leq e^{-t\,\Lambda(\overline{\mathcal{C}})}\,.
\]

A.3 Useful properties of $K_{\inf}$ and its level sets

Proof of Lemma 6: We resort to the formulation of $K_{\inf}$ in terms of a convex optimization problem, as introduced in Honda and Takemura (2010b); more precisely, it is shown therein that
\[
K_{\inf}(\nu, \mu) = \max\bigg\{ E_\nu\Big[ \log\big( 1 + \lambda(\mu - X) \big) \Big] : \ \lambda \in \big[ 0,\, 1/(1-\mu) \big] \bigg\} \qquad (10)
\]
(where $X$ denotes a random variable distributed according to $\nu$), as well as the following alternative. The optimal value $\lambda_\nu$ of the parameter $\lambda$ indexing the set is equal to $1/(1-\mu)$ if and only if $E_\nu\big[ (1-\mu)/(1-X) \big] \leq 1$, and lies in $\big[0,\, 1/(1-\mu)\big)$ if $E_\nu\big[ (1-\mu)/(1-X) \big] > 1$.

For all $\lambda \in \big[0,\, 1/(1-\mu)\big]$, we now introduce the function
\[
\varphi_\lambda : \ x \in [0,1] \longmapsto \log\big( 1 + \lambda(\mu - x) \big)\,,
\]
which is always continuous on $[0,1)$ and also at $x = 1$ when $\lambda < 1/(1-\mu)$. In the latter case, $\varphi_\lambda$ is bounded; since it is decreasing, it is easy to get a uniform bound: for all $x$,
\[
\big| \varphi_\lambda(x) \big| \leq \big| \varphi_\lambda(0) \big| + \big| \varphi_\lambda(1) \big| = \log \frac{1+\lambda\mu}{1-\lambda(1-\mu)} \stackrel{\text{def}}{=} M_\lambda\,.
\]
It then follows that for all $\lambda \in \big[0,\, 1/(1-\mu)\big)$,
\[
E_\nu\big[ \varphi_\lambda(X) \big] - E_{\nu'}\big[ \varphi_\lambda(X) \big] \leq M_\lambda\, \|\nu - \nu'\|\,. \qquad (11)
\]
In the case when $\lambda_\nu < 1/(1-\mu)$, we have from the variational formulation (10) that
\[
K_{\inf}(\nu,\mu) - K_{\inf}(\nu',\mu) \leq E_\nu\big[ \varphi_{\lambda_\nu}(X) \big] - E_{\nu'}\big[ \varphi_{\lambda_\nu}(X) \big] \leq M_{\lambda_\nu}\, \|\nu - \nu'\|\,.
\]
Thus, the constant $M_{\nu,\mu}$ in the statement of the lemma corresponds to our quantity $M_{\lambda_\nu}$ in this case.

We now consider the case where $\lambda_\nu = 1/(1-\mu)$. By (11) and the variational formulation (10), we have that for all $\lambda \in \big[0,\, 1/(1-\mu)\big)$,
\[
K_{\inf}(\nu,\mu) - K_{\inf}(\nu',\mu) \leq K_{\inf}(\nu,\mu) - E_{\nu'}\big[ \varphi_\lambda(X) \big]
= \Big( K_{\inf}(\nu,\mu) - E_\nu\big[ \varphi_\lambda(X) \big] \Big) + \Big( E_\nu\big[ \varphi_\lambda(X) \big] - E_{\nu'}\big[ \varphi_\lambda(X) \big] \Big)\,.
\]
The second difference is bounded according to (11); the first difference is bounded by concavity of $\lambda \mapsto \varphi_\lambda(x)$, for all $x$: writing $\lambda$ as the convex combination of $0$ and $1/(1-\mu)$ with respective weights $1-\lambda(1-\mu)$ and $\lambda(1-\mu)$, we get
\[
E_\nu\big[ \varphi_\lambda(X) \big] \geq \big( 1 - \lambda(1-\mu) \big)\, E_\nu\big[ \varphi_0(X) \big] + \lambda(1-\mu)\, E_\nu\big[ \varphi_{1/(1-\mu)}(X) \big] = \lambda(1-\mu)\, K_{\inf}(\nu,\mu)\,,
\]
since $\varphi_0$ is the null function and $\lambda_\nu = 1/(1-\mu)$. Putting all pieces together, we have proved that for all $\lambda \in \big[0,\, 1/(1-\mu)\big)$,
\[
K_{\inf}(\nu,\mu) - K_{\inf}(\nu',\mu) \leq \big( 1 - \lambda(1-\mu) \big)\, K_{\inf}(\nu,\mu) + M_\lambda\, \|\nu - \nu'\|\,. \qquad (12)
\]
We recall that by assumption, $K_{\inf}(\nu,\mu) - K_{\inf}(\nu',\mu) \geq \alpha\, K_{\inf}(\nu,\mu)$ with $\alpha \in (0,1)$. The choice $\lambda = (1-\alpha/2)/(1-\mu)$, which indeed lies in $\big(0,\, 1/(1-\mu)\big)$, is such that $1-\lambda(1-\mu) = \alpha/2$ and
\[
M_\lambda = \log \frac{1+\lambda\mu}{1-\lambda(1-\mu)} = \log \frac{2\,(1+\lambda\mu)}{\alpha} \leq \frac{2\lambda}{\alpha}\,,
\]
so that (12) entails
\[
\alpha\, K_{\inf}(\nu,\mu) \leq \frac{\alpha}{2}\, K_{\inf}(\nu,\mu) + \frac{2\lambda}{\alpha}\, \|\nu - \nu'\|\,,
\]
and finally
\[
\|\nu - \nu'\| \geq \frac{\alpha^2\, K_{\inf}(\nu,\mu)}{4\lambda} = \frac{\alpha^2\, (1-\mu)\, K_{\inf}(\nu,\mu)}{4\,(1-\alpha/2)} = \frac{(1-\mu)\, K_{\inf}(\nu,\mu)}{(2/\alpha)\big( (2/\alpha) - 1 \big)}\,,
\]
which concludes the proof.

Proof of Lemma 7: In Honda and Takemura (2010b), it is shown that, in this case, $K_{\inf}(\nu,\mu)$ is differentiable in $\mu \in \big( E(\nu),\, 1 \big)$, with
\[
\frac{1}{1-\mu} \geq \frac{\partial}{\partial \mu} K_{\inf}(\nu,\mu) \geq \frac{\mu - E(\nu)}{\mu\,(1-\mu)}\,. \qquad (13)
\]
We apply this result to the rewriting
\[
K_{\inf}(\nu,\mu) - K_{\inf}(\nu,\mu-\varepsilon) = \int_{\mu-\varepsilon}^{\mu} \frac{\partial}{\partial \mu} K_{\inf}(\nu,u)\, \mathrm{d}u\,,
\]
which already gives one part of the bound (the one involving $\varepsilon/(1-\mu)$). For the lower bound, we note that by assumption $1 - E(\nu) \geq 1 - (\mu-\varepsilon)$ and that $u(1-u) \leq 1/4$ for all $u \in [0,1]$, so that
\[
\frac{u - E(\nu)}{u\,(1-u)} \geq 4\,\big( u - (\mu-\varepsilon) \big)\,.
\]
Integrating the bound concludes the main part of the proof.

Now, to see that the first inequality in the statement is always valid, we need to consider the case when $E(\nu) \geq \mu$, for which the statement is trivial since then $K_{\inf}(\nu,\mu) = 0$, and the case when $\mu > E(\nu) \geq \mu - \varepsilon$. But in the latter case, it is shown in Honda and Takemura (2010b, Lemma 6, case 2) that
\[
K_{\inf}(\nu,\mu) \leq \frac{\mu - E(\nu)}{1-\mu} \leq \frac{\varepsilon}{1-\mu}\,,
\]
which concludes the proof.

Proof of Lemma 8:
First, $\mathcal{C}_{\mu,\gamma}$ is non-empty, as it always contains $\delta_\mu$, the Dirac mass at $\mu$. The fact that $\mathcal{C}_{\mu,\gamma}$ is convex follows from the convexity of $K$ in the pair of probability distributions that it takes as an argument. Indeed, for all $\alpha \in [0,1]$ and $\nu', \nu'' \in \mathcal{C}_{\mu,\gamma}$, denoting by $\nu'_\mu, \nu''_\mu$ some distributions such that the defining conditions in $\mathcal{C}_{\mu,\gamma}$ are satisfied, we have that
\[
E\big( \alpha \nu'_\mu + (1-\alpha) \nu''_\mu \big) \geq \mu
\quad \text{and} \quad
K\big( \alpha \nu' + (1-\alpha) \nu'',\ \alpha \nu'_\mu + (1-\alpha) \nu''_\mu \big)
\leq \alpha\, K\big( \nu', \nu'_\mu \big) + (1-\alpha)\, K\big( \nu'', \nu''_\mu \big) < \gamma\,.
\]
We now prove that $\mathcal{C}_{\mu,\gamma}$ is an open set. With each $\nu' \in \mathcal{C}_{\mu,\gamma}$, we associate a distribution $\nu'_\mu$ satisfying the defining constraints in $\mathcal{C}_{\mu,\gamma}$; by choosing
\[
\alpha = \frac{1}{2}\Big( 1 - \mu \big/ E\big(\nu'_\mu\big) \Big) \in \big( 0,\, 1/2 \big]\,,
\]
we have that the open set formed by the distributions $(1-\alpha)\nu' + \alpha\nu''$, where $\nu'' \in \mathrm{B}(\nu', 1)$, is included in $\mathcal{C}_{\mu,\gamma}$, where $\mathrm{B}(\nu', 1)$ denotes the ball with center $\nu'$ and radius $1$ in the norm $\|\cdot\|$ over $\mathcal{P}(\mathcal{X})$. Indeed, we have, on the one hand,
\[
E\big( (1-\alpha)\nu'_\mu + \alpha\nu'' \big) \geq (1-\alpha)\, E\big(\nu'_\mu\big) = \frac{E\big(\nu'_\mu\big) + \mu}{2} > \mu\,,
\]
and, on the other hand, by convexity of the Kullback--Leibler divergence,
\[
K\big( (1-\alpha)\nu' + \alpha\nu'',\ (1-\alpha)\nu'_\mu + \alpha\nu'' \big) \leq (1-\alpha)\, K\big( \nu', \nu'_\mu \big) < (1-\alpha)\,\gamma < \gamma\,.
\]
To prove the desired inclusion, we first note that in the case of $\mathcal{P}_F\big([0,1]\big)$, Honda and Takemura (2010b) show that one has the rewriting
\[
K_{\inf}(\nu,\mu) = \min\Big\{ K(\nu,\nu') : \ \nu' \in \mathcal{P}_F\big([0,1]\big) \ \text{s.t.} \ E(\nu') \geq \mu \Big\}\,;
\]
in particular, the infimum is achieved with this new formulation. Hence,
\[
\mathcal{C}_{\mu,\gamma} = \Big\{ \nu' \in \mathcal{P}_F\big([0,1]\big) : \ \exists\, \nu'_\mu \in \mathcal{P}_F\big([0,1]\big) \ \text{s.t.} \ E\big(\nu'_\mu\big) \geq \mu \ \text{and} \ K\big(\nu',\nu'_\mu\big) < \gamma \Big\}\,.
\]
Consider now $\nu' \in \mathcal{P}_F\big([0,1]\big)$ such that $K_{\inf}(\nu',\mu) \leq \gamma$, that is, such that there exists $\nu'_\mu \in \mathcal{P}_F\big([0,1]\big)$ with $E\big(\nu'_\mu\big) \geq \mu$ and $K\big(\nu',\nu'_\mu\big) \leq \gamma$. Now, the distributions
\[
\nu'_n = \Big( 1 - \frac{1}{n} \Big)\, \nu' + \frac{1}{n}\, \delta_1\,,
\]
thanks to the associated distributions
\[
\nu'_{\mu,n} = \Big( 1 - \frac{1}{n} \Big)\, \nu'_\mu + \frac{1}{n}\, \delta_1\,,
\]
all belong to $\mathcal{C}_{\mu,\gamma}$, as, similarly to the above argument,
\[
E\big( \nu'_{\mu,n} \big) \geq \mu + \frac{1-\mu}{n} > \mu
\quad \text{and} \quad
K\big( \nu'_n,\, \nu'_{\mu,n} \big) \leq \Big( 1 - \frac{1}{n} \Big)\, K\big( \nu',\, \nu'_\mu \big) < \gamma\,.
\]
In addition, we have by construction that the $\nu'_n$ converge to $\nu'$; hence, $\nu' \in \overline{\mathcal{C}}_{\mu,\gamma}$.

A.4 The method of types
Let $X_1, X_2, \ldots$ be a sequence of random variables that are i.i.d. according to a distribution denoted by $\nu$. In this subsection, we will index all probabilities and expectations by $\nu$. For all $k \geq 1$, we denote by $\mathcal{E}_k$ the set of possible values (the so-called types) of the empirical distribution
\[
\widehat\nu_k = \frac{1}{k} \sum_{j=1}^{k} \delta_{X_j}\,.
\]
If $\nu$ has a finite support denoted by $\mathcal{S}$, then the cardinality $|\mathcal{E}_k|$ of $\mathcal{E}_k$ is bounded by $(k+1)^{|\mathcal{S}|}$.

Lemma 11 In the case where $\nu$ has a finite support, for all $k \geq 1$ and $\kappa \in \mathcal{E}_k$,
\[
P_\nu\big\{ \widehat\nu_k = \kappa \big\} \leq e^{-k\,K(\kappa,\nu)}\,.
\]

Corollary 12 In the case where $\nu$ has a finite support, for all $k \geq 1$ and all $\gamma > 0$,
\[
P\Big\{ K\big( \widehat\nu_k,\, \nu \big) > \gamma \Big\}
= \sum_{\kappa \in \mathcal{E}_k} \mathbb{I}_{\{K(\kappa,\nu) > \gamma\}}\, P_\nu\big\{ \widehat\nu_k = \kappa \big\}
\leq \sum_{\kappa \in \mathcal{E}_k} \mathbb{I}_{\{K(\kappa,\nu) > \gamma\}}\, e^{-k\,K(\kappa,\nu)}
\leq |\mathcal{E}_k|\, e^{-k\gamma}
\leq (k+1)^{|\mathcal{S}|}\, e^{-k\gamma}\,.
\]
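For small $k$, both the counting bound $|\mathcal{E}_k| \leq (k+1)^{|\mathcal{S}|}$ and Lemma 11 can be checked exactly by enumeration: the types are the $|\mathcal{S}|$-compositions of $k$ (there are $\binom{k+|\mathcal{S}|-1}{|\mathcal{S}|-1}$ of them), and $P_\nu\{\widehat\nu_k = \kappa\}$ is an explicit multinomial probability. A sketch (Python; the helper names are ours):

```python
import math
from math import comb

def compositions(k, m):
    """All m-tuples of nonnegative integers summing to k: the types of E_k."""
    if m == 1:
        yield (k,)
        return
    for first in range(k + 1):
        for rest in compositions(k - first, m - 1):
            yield (first,) + rest

def multinomial(counts):
    out, total = 1, 0
    for c in counts:
        total += c
        out *= comb(total, c)
    return out

def kl_type(counts, probs, k):
    """K(kappa, nu) for the type kappa = counts/k (convention 0 log 0 = 0)."""
    return sum((c / k) * math.log((c / k) / p)
               for c, p in zip(counts, probs) if c > 0)

probs = [0.2, 0.3, 0.5]   # a distribution nu with support size |S| = 3
k = 6
types = list(compositions(k, len(probs)))
assert len(types) == comb(k + len(probs) - 1, len(probs) - 1) <= (k + 1) ** len(probs)

for counts in types:      # Lemma 11, checked term by term
    p_type = multinomial(counts) * math.prod(p ** c for p, c in zip(probs, counts))
    assert p_type <= math.exp(-k * kl_type(counts, probs, k)) + 1e-12
```

Note that the bound of Lemma 11 is an equality for types putting all mass on a single support point, e.g., $\kappa = (k, 0, \ldots, 0)/k$.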