Randomized Block Subgradient Methods for Convex Nonsmooth and Stochastic Optimization
Qi Deng∗ ([email protected])   Guanghui Lan† ([email protected])   Anand Rangarajan∗ ([email protected])

Abstract
Block coordinate descent methods and stochastic subgradient methods have been extensively studied in optimization and machine learning. By combining randomized block sampling with stochastic subgradient methods based on dual averaging ([22, 36]), we present stochastic block dual averaging (SBDA), a novel class of block subgradient methods for convex nonsmooth and stochastic optimization. SBDA requires only a block of subgradients and updates only blocks of variables, and hence has significantly lower iteration cost than traditional subgradient methods. We show that the SBDA-based methods exhibit the optimal convergence rate for convex nonsmooth stochastic optimization. More importantly, we introduce randomized stepsize rules and block sampling schemes that are adaptive to the block structures, which significantly improves the convergence rate with respect to the problem parameters. This is in sharp contrast to recent block subgradient methods applied to nonsmooth deterministic or stochastic optimization ([3, 24]). For strongly convex objectives, we propose a new averaging scheme that makes the regularized dual averaging method optimal, without having to resort to any accelerated schemes.
Introduction

In this paper, we mainly focus on the following convex optimization problem:
$$\min_{x\in X}\ \phi(x),\qquad(1)$$
where the feasible set $X$ is embedded in the Euclidean space $\mathbb{R}^N$ for some integer $N > 0$. Letting $N_1, N_2, \ldots, N_n$ be $n$ positive integers such that $\sum_{i=1}^n N_i = N$, we assume $X$ can be partitioned as $X = X_1\times X_2\times\ldots\times X_n$, where each $X_i\subseteq\mathbb{R}^{N_i}$. We write $x\in X$ as $x = (x^{(1)}, x^{(2)}, \ldots, x^{(n)})$, where $x^{(i)}\in X_i$. The objective $\phi(x)$ consists of two parts: $\phi(x) = f(x) + \omega(x)$. We stress that both $f(x)$ and $\omega(x)$ can be nonsmooth. $\omega(x)$ is a convex function with block separable structure: $\omega(x) = \sum_{i=1}^n\omega_i(x^{(i)})$, where each $\omega_i: X_i\to\mathbb{R}$ is convex and relatively simple. In composite optimization or regularized learning, the term $\omega(x)$ imposes solutions with certain preferred structures. Common examples of $\omega(\cdot)$ include the $\ell_1$ norm or squared $\ell_2$ norm regularizers. $f(x)$ is a general convex function. In many important statistical learning problems, $f(x)$ has the form $f(x) = \mathbb{E}_\xi[F(x,\xi)]$, where $F(x,\xi)$ is a convex loss function of $x\in X$ with $\xi$ representing sampled data. When it is difficult to evaluate $f(x)$ exactly, as in batch learning or sample average approximation (SAA), $f(x)$ is approximated with finite data. First, a large number of samples $\xi_1, \xi_2, \ldots, \xi_m$ are drawn, and then $f(x)$ is approximated by $\tilde f(x) = \frac1m\sum_{i=1}^m F(x,\xi_i)$, with the alternative problem
$$\min_{x\in X}\ \tilde\phi(x) := \tilde f(x) + \omega(x).\qquad(2)$$
However, although classic first order methods can provide accurate solutions to (2), the major drawback of these approaches is their poor scalability to large data.

∗ Department of Computer and Information Science and Engineering, University of Florida, FL, 32611
† Department of Industrial and Systems Engineering, University of Florida, FL, 32611
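For concreteness, the block-structured composite objective of (2) can be sketched in a few lines of Python. The partition sizes, the hinge-type loss, and the $\ell_1$ regularizer below are illustrative choices for this sketch, not prescribed by the paper.

```python
import numpy as np

def make_blocks(sizes):
    """Return (start, end) index pairs for blocks of sizes N_1, ..., N_n."""
    ends = np.cumsum(sizes)
    starts = ends - np.asarray(sizes)
    return list(zip(starts, ends))

def omega(x, blocks, lam=0.1):
    # Block-separable l1 regularizer: omega(x) = sum_i omega_i(x^{(i)}).
    return sum(lam * np.abs(x[s:e]).sum() for s, e in blocks)

def f_tilde(x, A, b):
    # Sample-average approximation of f with a nonsmooth hinge-type loss:
    # f_tilde(x) = (1/m) * sum_i F(x, xi_i), with sample xi_i = (A[i], b[i]).
    return np.maximum(0.0, 1.0 - b * (A @ x)).mean()

def phi_tilde(x, A, b, blocks):
    return f_tilde(x, A, b) + omega(x, blocks, lam=0.1)

blocks = make_blocks([2, 3, 1])        # N = 6 partitioned into n = 3 blocks
x = np.zeros(6)
A = np.ones((4, 6))
b_vec = np.array([1.0, -1.0, 1.0, -1.0])
print(phi_tilde(x, A, b_vec, blocks))  # hinge loss at x = 0 is 1.0, omega = 0
```

The block index pairs let each $\omega_i$ and each block of a subgradient be addressed independently, which is the structural property the SBDA methods exploit.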
First order deterministic methods require full information of the (sub)gradient and scan through the entire dataset many times, which is prohibitive for applications where scalability is paramount. In addition, due to the statistical nature of the problem, solutions with high precision may not even be necessary.

To address these difficulties, stochastic methods, namely stochastic (sub)gradient descent (SGD) and block coordinate descent (BCD), have received considerable attention in the machine learning community. Both confer new advantages in the trade-off between speed and accuracy. Compared to deterministic, full (sub)gradient methods, they are easier to implement, have much lower computational complexity in each iteration, and often exhibit sufficiently fast convergence while obtaining practically good solutions.

SGD was first studied in [29] in the 1950s, with the emphasis mainly on solving strongly convex problems; specifically, it only needs the gradient/subgradient on a few data samples while iteratively updating all the variables. In the approach of online learning or stochastic approximation (SA), SGD directly works on the objective (1) and obtains convergence independent of the sample size. While early work emphasized asymptotic properties, recent work investigates the complexity analysis of convergence. Many works ([21, 13, 33, 26, 6, 2, 8]) investigate optimal SGD under various conditions. Proximal versions of SGD, which explicitly incorporate the regularizer $\omega(x)$, have been studied, for example, in [15, 4, 5, 36].

The study of BCD also has a long history. BCD was initiated in [18, 19], but the application of BCD to linear systems dates back even earlier (see, for example, the Gauss–Seidel method in [7]). It works on the approximated problem (2) and makes progress by reducing the original problem into subproblems using only a single block coordinate of the variables at a time.
Recent works [23, 28, 30, 17] study BCD with random sampling (RBCD) and obtain non-asymptotic complexity rates. For the regularized learning problem (2), RBCD on the dual formulation has been proposed in [31, 11, 32]. Although most of the work on BCD focuses on smooth (composite) objectives, some recent work ([3, 37, 35, 39]) seeks to extend the realm of BCD in various ways. The works in [24, 3] discuss (block) subgradient methods for nonsmooth optimization. Combining the ideas of SGD and BCD, the works in [3, 37, 35, 39, 27] employ sampling of both features and data instances in BCD.

In this paper, we propose a new class of block subgradient methods, namely stochastic block dual averaging (SBDA), for solving nonsmooth deterministic and stochastic optimization problems. Specifically, SBDA consists of a new dual averaging step incorporating the average of all past (stochastic) block subgradients, together with variable updates involving only block components. We bring together two strands of research: the dual averaging algorithm (DA) [36, 22], studied for nonsmooth optimization, and randomized coordinate descent (RCD) [23], employed for smooth deterministic problems. Our main contributions are the following:

• Two types of SBDA are proposed for different purposes. For regularized learning, we propose SBDA-u, which performs uniform random sampling of blocks. For more general nonsmooth learning problems, we propose SBDA-r, which applies an optimal sampling scheme with improved convergence. Compared with existing subgradient methods for nonsmooth and stochastic optimization, both SBDA-u and SBDA-r have significantly lower iteration cost when the computation of block subgradients and block updates is convenient.

• We contribute a novel scheme of randomized stepsizes and optimized sampling strategies which are truly adaptive to the block structures.
Selecting block-wise stepsizes and optimal block sampling have been critical issues for speeding up BCD on smooth regularized problems; see [23, 25, 31, 28] for some recent advances. For nonsmooth or stochastic optimization, the works most closely related to ours are [3, 24], which do not apply block-wise stepsizes. To the best of our knowledge, this is the first time block subgradient methods with block-adaptive stepsizes and optimized sampling have been proposed for nonsmooth and stochastic optimization.

• We provide new theoretical guarantees of convergence for the SBDA methods. SBDA obtains the optimal rate of convergence for general convex problems, matching the state-of-the-art results in the literature on stochastic approximation and online learning. More importantly, SBDA exhibits a significantly improved convergence rate with respect to the problem parameters. When the regularizer $\omega(x)$ is strongly convex, our analysis provides a simple way to make the regularized dual averaging methods in [36] optimal. We show that an aggressive weighting is sufficient to obtain $O(1/T)$ convergence, where $T$ is the iteration count, without the need for any accelerated schemes. This appears to be a new result for simple dual averaging methods.

Related work
Extending BCD to the realm of nonsmooth and stochastic optimization has been of interest lately. Efficient subgradient methods for a class of nonsmooth problems have been proposed in [24]. However, to compute the stepsize, the block version of this subgradient method requires computation of the entire subgradient and knowledge of the optimal value; hence, it may not be efficient in a more general setting. The methods in [3, 24] employ stepsizes that are not adaptive to the block selection, and therefore have suboptimal bounds compared to our work. For SA or online learning, SBDA applies double sampling of both blocks and data. A similar approach has also been employed for new stochastic methods in some very recent work ([3, 39, 35, 27, 37]). It should be noted here that if the assumptions are strengthened, namely in the batch learning formulation, and if $\tilde\phi$ is smooth, it is possible to obtain a linear convergence rate $O(e^{-T})$. Nesterov's randomized block coordinate methods [23, 28] consider different stepsize rules and block sampling, but only for smooth objectives with possibly nonsmooth regularizers. Recently, nonuniform sampling in BCD has been addressed in [25, 38, 16] and shown to have advantages over uniform sampling. Although our work discusses block-wise stepsizes and nonuniform sampling as well, we stress the nonsmooth objectives that appear in deterministic and stochastic optimization. The proposed algorithms employ very different proof techniques, thereby obtaining different optimized sampling distributions.

Outline of the results
We introduce two versions of SBDA that are appropriate in different contexts. The first algorithm, SBDA with uniform block sampling (SBDA-u), works for a class of convex composite functions, namely those where $\omega(x)$ is handled explicitly in the proximal step. When $\omega(x)$ is a general convex function, for example the sparsity regularizer $\|x\|_1$, we show that SBDA-u obtains the convergence rate of $O\big(\sqrt n\sum_{i=1}^n M_i\sqrt{D_i}\,/\sqrt T\big)$, which improves the rate of $O\big(\sqrt n\,\sqrt{\sum_{i=1}^n M_i^2}\cdot\sqrt{\sum_{i=1}^n D_i}\,/\sqrt T\big)$ by SBMD. Here $\{M_i\}$ and $\{D_i\}$ are parameters associated with the blocks of coordinates, to be specified later. When $\omega(x)$ is a strongly convex function, by using a more aggressive averaging scheme to be specified later, SBDA-u obtains the optimal rate of $O\big(n\sum_i M_i^2/(\lambda T)\big)$, matching the result of SBMD. In addition, for general convex problems in which $\omega(x) = 0$, we propose a variant of SBDA with nonuniform random sampling (SBDA-r), which achieves an improved convergence rate of $O\Big(\big(\sum_{j=1}^n M_j^{2/3}D_j^{1/3}\big)^{3/2}\big/\sqrt T\Big)$. These complexity results are summarized in Table 1.

Algorithm | Objective                  | Complexity
SBDA-u    | convex composite           | $O\big(\sqrt n\sum_{i=1}^n M_i\sqrt{D_i}\,/\sqrt T\big)$
SBDA-u    | strongly convex composite  | $O\big(n\sum_i M_i^2/(\lambda T)\big)$
SBDA-r    | convex nonsmooth           | $O\big(\big(\sum_{j=1}^n M_j^{2/3}D_j^{1/3}\big)^{3/2}/\sqrt T\big)$

Table 1: Iteration complexity of the SBDA algorithms.
Structure of the Paper
The paper proceeds as follows. Section 2 introduces the notation used in this paper. Section 3 presents and analyzes SBDA-u. Section 4 presents SBDA-r and discusses optimal sampling and its convergence. Experimental results demonstrating the performance of SBDA are provided in Section 6. Section 7 draws conclusions and comments on possible future directions.
Notation

Let $\mathbb{R}^N$ be a Euclidean vector space and $N_1, N_2, \ldots, N_n$ be $n$ positive integers such that $N_1 + \ldots + N_n = N$. Let $I$ be the identity matrix in $\mathbb{R}^{N\times N}$ and $U_i$ be an $N\times N_i$ matrix such that $I = [U_1\, U_2\,\ldots\, U_n]$. For each $x\in\mathbb{R}^N$, we have the decomposition $x = U_1x^{(1)} + U_2x^{(2)} + \ldots + U_nx^{(n)}$, where $x^{(i)}\in\mathbb{R}^{N_i}$. Let $\|\cdot\|_{(i)}$ denote the norm on $\mathbb{R}^{N_i}$, and $\|\cdot\|_{(i),*}$ be the induced dual norm. We define the norm $\|\cdot\|$ on $\mathbb{R}^N$ by $\|x\|^2 = \sum_{i=1}^n\|x^{(i)}\|_{(i)}^2$ and its dual norm $\|\cdot\|_*$ by $\|x\|_*^2 = \sum_{i=1}^n\|x^{(i)}\|_{(i),*}^2$. Let $d_i: X_i\to\mathbb{R}$ be a distance generating function with modulus $\rho$ with respect to $\|\cdot\|_{(i)}$; that is, $d_i(\cdot)$ is continuously differentiable and strongly convex:
$$d_i(\alpha x + (1-\alpha)y)\le \alpha d_i(x) + (1-\alpha)d_i(y) - \tfrac{\rho}{2}\alpha(1-\alpha)\|x-y\|_{(i)}^2,\quad x, y\in X_i,\ i = 1,2,\ldots,n.$$
Let us assume there exists a solution $x^*\in X$ to problem (1), and that
$$d_i\big(x^{*(i)}\big)\le D_i < \infty,\quad i = 1,2,\ldots,n.\qquad(3)$$
Without loss of generality, we assume $d_i(\cdot)$ is nonnegative, and write
$$d(x) = \sum_{i=1}^n d_i\big(x^{(i)}\big)\qquad(4)$$
for simplicity. Furthermore, we define the Bregman divergence associated with $d_i(\cdot)$ by
$$V_i(z,x) = d_i(x) - d_i(z) - \langle\nabla d_i(z), x - z\rangle,\quad z, x\in X_i,$$
and $V(z,x) = \sum_{i=1}^n V_i\big(z^{(i)}, x^{(i)}\big)$.

We denote $f(x) = \mathbb{E}_\xi[F(x,\xi)]$, let $G(x,\xi)$ be a subgradient of $F(x,\xi)$, and let $g(x) = \mathbb{E}_\xi[G(x,\xi)]\in\partial f(x)$ be a subgradient of $f(x)$. Let $g^{(i)}(\cdot)$ and $G^{(i)}(x,\xi)$ denote their $i$-th block components, for $i = 1,2,\ldots,n$. Throughout the paper, we assume the (stochastic) block coordinate subgradients of $f$ satisfy
$$\|g^{(i)}(x)\|_{(i),*}^2 = \big\|\mathbb{E}\big[G^{(i)}(x,\xi)\big]\big\|_{(i),*}^2 \le \mathbb{E}\big[\|G^{(i)}(x,\xi)\|_{(i),*}^2\big] \le M_i^2,\quad\forall x\in X,\qquad(5)$$
for $i = 1,2,\ldots,n$. Note that although we make assumptions on the stochastic objective, the following analysis and conclusions naturally extend to deterministic optimization.
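The prox-function machinery above is easy to instantiate. The following minimal sketch uses the Euclidean choice $d_i(x) = \frac12\|x\|^2$ (strongly convex with $\rho = 1$), for which the Bregman divergence $V_i$ reduces to half the squared Euclidean distance; the names and values are illustrative.

```python
import numpy as np

def d(x):
    # Euclidean prox function d_i(x) = (1/2)||x||^2, nonnegative as assumed.
    return 0.5 * np.dot(x, x)

def grad_d(x):
    return x

def bregman(z, x):
    # V_i(z, x) = d_i(x) - d_i(z) - <grad d_i(z), x - z>
    return d(x) - d(z) - np.dot(grad_d(z), x - z)

z = np.array([1.0, -2.0])
x = np.array([0.5, 0.0])
v = bregman(z, x)
# For the Euclidean case V_i(z, x) = (1/2)||x - z||^2, so the strong-convexity
# lower bound V_i(z, x) >= (rho/2)||x - z||^2 holds with equality (rho = 1).
print(v, 0.5 * np.dot(x - z, x - z))   # prints 2.125 2.125
```

With a non-Euclidean $d_i$ (e.g., entropy on a simplex block), `bregman` is unchanged while `d` and `grad_d` are swapped, which is the flexibility the analysis relies on.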
To see this, we can simply take $g(x)\equiv G(x,\xi)$ and $f(x)\equiv F(x,\xi)$ for any $\xi$.

Before introducing the main convergence properties, we first summarize several useful results in the following lemmas. Lemmas 1, 2, and 3 slightly generalize the results in [34, 14], [22], and [13], respectively; their proofs are left to the Appendix.

Lemma 1.
Let $f(\cdot)$ be a lower semicontinuous convex function and $d(\cdot)$ be defined by (4). If $z = \arg\min_x\Psi(x) := f(x) + d(x)$, then
$$\Psi(x)\ge\Psi(z) + V(z,x),\quad\forall x\in X.$$
Moreover, if $f(x)$ is $\lambda$-strongly convex with respect to the norm $\|\cdot\|_{(i)}$, and $x = z + U_iy\in X$ where $y\in X_i$, $z\in X$, then
$$\Psi(x)\ge\Psi(z) + V(z,x) + \frac{\lambda}{2}\|y\|_{(i)}^2,\quad\forall x\in X.$$

Lemma 2.
Let $\Psi: X\to\mathbb{R}$ be convex, block separable, and strongly convex with modulus $\rho_i$ w.r.t. $\|\cdot\|_{(i)}$, $\rho_i > 0$, $1\le i\le n$, and let $g\in\mathbb{R}^N$. If $x\in\arg\min_{x\in X}\{\Psi(x)\}$ and $z\in\arg\min_{x\in X}\big\{\langle U_ig^{(i)}, x\rangle + \Psi(x)\big\}$, then
$$\langle U_ig^{(i)}, x\rangle + \Psi(x) \le \langle U_ig^{(i)}, z\rangle + \Psi(z) + \frac{1}{2\rho_i}\|g^{(i)}\|_{(i),*}^2.$$

Lemma 3. If $f$ satisfies assumption (5), and $x = z + U_iy\in X$ where $y\in X_i$, $x\in X$, then
$$f(z)\le f(x) + \langle g^{(i)}(x), y\rangle + 2M_i\|y\|_{(i)}.\qquad(6)$$

SBDA with uniform sampling (SBDA-u)

In this section, we describe uniformly randomized SBDA (SBDA-u) for the composite problem (1). We consider the formulation proposed in [36], since it incorporates the regularizers for composite problems. The main update of the DA algorithm has the form
$$x_{t+1} = \arg\min_{x\in X}\Big\{\sum_{s=1}^t\langle G_s, x\rangle + t\,\omega(x) + \beta_t d(x)\Big\},\qquad(7)$$
where $\{\beta_t\}$ is a parameter sequence, $G_s$ is shorthand for $G(x_s,\xi_s)$, and $d(x)$ is a strongly convex proximal function. When $\omega(x) = 0$, this reduces to a version of Nesterov's primal-dual subgradient method [22].

Let $\bar G = \sum_{s=0}^t\alpha_sU_{i_s}G^{(i_s)}(x_s,\xi_s)$, where $\{\alpha_t\}$ is a sequence of positive values and $\{i_t\}$ is a sequence of sampled indices. The main iteration step of SBDA has the form
$$x_{t+1}^{(i_t)} = \arg\min_{x\in X_{i_t}}\Big\{\langle\bar G^{(i_t)}, x\rangle + l_t^{(i_t)}\omega_{i_t}(x) + \gamma_t^{(i_t)}d_{i_t}(x)\Big\},\qquad(8)$$
and $x_{t+1}^{(i)} = x_t^{(i)}$ for $i\ne i_t$.

We highlight two important aspects of the proposed iteration (8). Firstly, the update in (8) incorporates the past randomly sampled block (stochastic) subgradients $\{G^{(i_t)}(x_t,\xi_t)\}$, rather than the full (stochastic) subgradients. Meanwhile, the update of the primal variable is restricted to the same block $i_t$, leaving the other blocks untouched. Such block decomposition significantly reduces the iteration cost of the dual averaging method when the block-wise operation is convenient. Secondly, (8) employs a novel randomized stepsize sequence $\{\gamma_t\}$ with $\gamma_t\in\mathbb{R}^n$.
More specifically, $\gamma_t$ depends not only on the iteration count $t$, but also on the block index $i_t$. $\{\gamma_t\}$ satisfies the assumptions
$$\gamma_t^{(j)} = \gamma_{t-1}^{(j)},\ j\ne i_t,\qquad\text{and}\qquad \gamma_t^{(j)}\ge\gamma_{t-1}^{(j)},\ j = i_t.\qquad(9)$$
The most important aspect of (9) is that stepsizes can be specified for each block of coordinates, thereby allowing for aggressive descent. As will be shown later, the rate of convergence, in terms of the problem parameters, can be significantly reduced by properly choosing these control parameters. In addition, we allow the sequence $\{\alpha_t\}$ and the related $\{l_t\}$ to be variable, hence offering the opportunity of different averaging schemes in composite settings. To summarize, the full SBDA-u method is described in Algorithm 1.

Input: convex composite function $\phi(x) = f(x) + \omega(x)$, a sequence of samples $\{\xi_t\}$;
initialize $\alpha_{-1}\in\mathbb{R}$, $\gamma_{-1}\in\mathbb{R}^n$, $l_{-1} = 0_n$, $\bar G = 0_N$, $x_0 = \arg\min_{x\in X}\sum_{i=1}^n\gamma_{-1}^{(i)}d_i\big(x^{(i)}\big)$;
for $t = 0, 1, \ldots, T-1$ do
    sample a block $i_t\in\{1,2,\ldots,n\}$ with uniform probability $1/n$;
    set $\gamma_t^{(i)}$, $i = 1,2,\ldots,n$, according to (9);
    set $l_t^{(i_t)} = l_{t-1}^{(i_t)} + \alpha_t$ and $l_t^{(j)} = l_{t-1}^{(j)}$ for $j\ne i_t$;
    update $\bar G$: $\bar G = \bar G + \alpha_tU_{i_t}G^{(i_t)}(x_t,\xi_t)$;
    update $x_{t+1}^{(i_t)} = \arg\min_{x\in X_{i_t}}\big\{\langle\bar G^{(i_t)}, x\rangle + l_t^{(i_t)}\omega_{i_t}(x) + \gamma_t^{(i_t)}d_{i_t}(x)\big\}$; $x_{t+1}^{(j)} = x_t^{(j)}$ for $j\ne i_t$;
end
Output: $\bar x = \Big[\sum_{t=1}^T\big(\alpha_{t-1} - \tfrac{n-1}{n}\alpha_t\big)x_t\Big]\Big/\sum_{t=1}^T\big(\alpha_{t-1} - \tfrac{n-1}{n}\alpha_t\big)$;

Algorithm 1:
Uniformly randomized stochastic block dual averaging (SBDA-u) method.

The following theorem establishes an important relation used to analyze the convergence of SBDA-u. Throughout the analysis we assume the simple function $\omega(x)$ is strongly convex with modulus $\lambda$, where $\lambda\ge 0$.

Theorem 4.
In Algorithm 1, if the sequence $\{\gamma_t\}$ satisfies assumption (9), then for any $x\in X$ we have
$$\sum_{t=1}^T\Big(\alpha_{t-1} - \frac{n-1}{n}\alpha_t\Big)\mathbb{E}[\phi(x_t) - \phi(x)] \le \alpha_0\frac{n-1}{n}[\phi(x_0) - \phi(x)] + \sum_{i=1}^n\mathbb{E}\big[\gamma_{T-1}^{(i)}\big]d_i(x) + \sum_{i=1}^n\frac{5M_i^2}{n}\sum_{t=0}^{T-1}\mathbb{E}\bigg[\frac{\alpha_t^2}{\gamma_{t-1}^{(i)}\rho + l_{t-1}^{(i)}\lambda}\bigg].\qquad(10)$$

Proof.
Firstly, to simplify the notation, when there is no ambiguity we use the terms $\omega_i(x)$ and $\omega_i(x^{(i)})$, $d_i(x)$ and $d_i(x^{(i)})$, $V_i(x,y)$ and $V_i(x^{(i)}, y^{(i)})$ interchangeably. In addition, we denote $\omega_{i^c}(x) = \omega(x) - \omega_i(x)$, and define the auxiliary function
$$\Psi_t(x) = \begin{cases}\sum_{s=0}^t\alpha_s\big[F(x_s,\xi_s) + \langle G_s, x - x_s\rangle_{(i_s)} + \omega_{i_s}(x)\big] + \sum_{i=1}^n\gamma_t^{(i)}d_i\big(x^{(i)}\big), & t\ge 0,\\[2pt] \sum_{i=1}^n\gamma_t^{(i)}d_i\big(x^{(i)}\big), & t = -1.\end{cases}\qquad(11)$$
It can easily be seen from the definition that $x_{t+1}$ is the minimizer of $\min_{x\in X}\Psi_t(x)$. Moreover, by the assumption on $\{\gamma_t\}$, we obtain
$$\Psi_t(x) - \Psi_{t-1}(x)\ge\alpha_t\big[F(x_t,\xi_t) + \langle G_t, x - x_t\rangle_{(i_t)} + \omega_{i_t}(x)\big],\quad t = 0,1,2,\ldots\qquad(12)$$
Applying Lemma 3 and property (12) at $x = x_{t+1}$, we have
$$\begin{aligned}
\phi(x_{t+1}) &\le f(x_t) + \langle g_t, x_{t+1} - x_t\rangle + 2M_{i_t}\|x_{t+1} - x_t\|_{(i_t)} + \omega(x_{t+1})\\
&= F(x_t,\xi_t) + \langle G_t, x_{t+1} - x_t\rangle_{(i_t)} + 2M_{i_t}\|x_{t+1} - x_t\|_{(i_t)} + f(x_t) - F(x_t,\xi_t) + \langle g_t - G_t, x_{t+1} - x_t\rangle_{(i_t)} + \omega(x_{t+1})\\
&\le \frac{1}{\alpha_t}\underbrace{\bigg[\Psi_t(x_{t+1}) - \Psi_{t-1}(x_{t+1}) + \frac{\gamma_{t-1}^{(i_t)}\rho + l_{t-1}^{(i_t)}\lambda}{2}\|x_{t+1} - x_t\|_{(i_t)}^2\bigg]}_{\Delta_1} + f(x_t) - F(x_t,\xi_t) + \omega_{i_t^c}(x_{t+1})\\
&\quad + \underbrace{\langle g_t - G_t, x_{t+1} - x_t\rangle_{(i_t)} - \frac{\gamma_{t-1}^{(i_t)}\rho + l_{t-1}^{(i_t)}\lambda}{2\alpha_t}\|x_{t+1} - x_t\|_{(i_t)}^2 + 2M_{i_t}\|x_{t+1} - x_t\|_{(i_t)}}_{\Delta_2}.
\end{aligned}$$
We proceed by taking care of $\Delta_1$ and $\Delta_2$ separately. We first bound $\Delta_1$. Applying Lemma 1 to $\Psi = \Psi_{t-1}$, with $x_t$ being its minimizer, at the point $x = x_{t+1}$, we obtain
$$\Psi_{t-1}(x_{t+1})\ge\Psi_{t-1}(x_t) + \sum_{i=1}^n\gamma_{t-1}^{(i)}V_i(x_t, x_{t+1}) + \frac{l_{t-1}^{(i_t)}\lambda}{2}\|x_t - x_{t+1}\|_{(i_t)}^2.\qquad(13)$$
In view of (13) and the fact that $V_i(x_t,x_{t+1})\ge\frac\rho2\|x_{t+1} - x_t\|_{(i)}^2$, we obtain the upper bound $\Delta_1\le\Psi_t(x_{t+1}) - \Psi_{t-1}(x_t)$.

On the other hand, from the Cauchy–Schwarz inequality we have $\langle g_t - G_t, x_{t+1} - x_t\rangle_{(i_t)}\le\|g_t - G_t\|_{(i_t),*}\,\|x_{t+1} - x_t\|_{(i_t)}$. Then
$$\Delta_2\le\big(\|g_t - G_t\|_{(i_t),*} + 2M_{i_t}\big)\|x_{t+1} - x_t\|_{(i_t)} - \frac{\gamma_{t-1}^{(i_t)}\rho + l_{t-1}^{(i_t)}\lambda}{2\alpha_t}\|x_{t+1} - x_t\|_{(i_t)}^2.$$
Maximizing the right-hand side over $\|x_{t+1} - x_t\|_{(i_t)}$, we obtain
$$\Delta_2\le\frac{\alpha_t\big(\|g_t - G_t\|_{(i_t),*} + 2M_{i_t}\big)^2}{2\big(\gamma_{t-1}^{(i_t)}\rho + l_{t-1}^{(i_t)}\lambda\big)}.$$
In view of these bounds on $\Delta_1$ and $\Delta_2$, and the fact that $\omega_{i_t^c}(x_t) = \omega_{i_t^c}(x_{t+1})$, we have
$$\alpha_t\phi(x_{t+1})\le\Psi_t(x_{t+1}) - \Psi_{t-1}(x_t) + \alpha_t\big[f(x_t) - F(x_t,\xi_t) + \omega_{i_t^c}(x_t)\big] + \frac{\alpha_t^2\big(\|g_t - G_t\|_{(i_t),*} + 2M_{i_t}\big)^2}{2\big(\gamma_{t-1}^{(i_t)}\rho + l_{t-1}^{(i_t)}\lambda\big)}.\qquad(14)$$
Summing up the above for $t = 0,1,\ldots,T-1$, and observing that $\Psi_{-1}(x_0)\ge 0$ and $d_i(x)\ge 0$ ($1\le i\le n$), we obtain
$$\sum_{t=0}^{T-1}\alpha_t\phi(x_{t+1})\le\Psi_{T-1}(x_T) + \sum_{t=0}^{T-1}\frac{\alpha_t^2\big(\|g_t - G_t\|_{(i_t),*} + 2M_{i_t}\big)^2}{2\big(\gamma_{t-1}^{(i_t)}\rho + l_{t-1}^{(i_t)}\lambda\big)} + \sum_{t=0}^{T-1}\alpha_t\big[f(x_t) - F(x_t,\xi_t) + \omega_{i_t^c}(x_t)\big].\qquad(15)$$
Due to the optimality of $x_T$, for any $x\in X$ we have
$$\begin{aligned}
\Psi_{T-1}(x_T)\le\Psi_{T-1}(x) &= \sum_{t=0}^{T-1}\alpha_t\Big[f(x_t) + \frac1n\langle g_t, x - x_t\rangle + \omega_{i_t}(x)\Big] + \sum_{i=1}^n\gamma_{T-1}^{(i)}d_i(x) + \sum_{t=0}^{T-1}\alpha_t\Big[\langle G_t, x - x_t\rangle_{(i_t)} - \frac1n\langle g_t, x - x_t\rangle\Big]\\
&\le \sum_{t=0}^{T-1}\alpha_t\Big[\frac{n-1}{n}f(x_t) + \frac1n f(x) + \omega_{i_t}(x)\Big] + \sum_{i=1}^n\gamma_{T-1}^{(i)}d_i(x) + \sum_{t=0}^{T-1}\alpha_t\Big[\langle G_t, x - x_t\rangle_{(i_t)} - \frac1n\langle g_t, x - x_t\rangle\Big],
\end{aligned}\qquad(16)$$
where the last inequality follows from the convexity of $f$: $\langle g_t, x - x_t\rangle\le f(x) - f(x_t)$.

Putting (15) and (16) together yields
$$\sum_{t=0}^{T-1}\alpha_t\phi(x_{t+1})\le\sum_{t=0}^{T-1}\alpha_t\Big[\frac{n-1}{n}\phi(x_t) + \frac1n\phi(x)\Big] + \sum_{i=1}^n\gamma_{T-1}^{(i)}d_i(x) + \sum_{t=0}^{T-1}\frac{\alpha_t^2\big(\|g_t - G_t\|_{(i_t),*} + 2M_{i_t}\big)^2}{2\big(\gamma_{t-1}^{(i_t)}\rho + l_{t-1}^{(i_t)}\lambda\big)} + \delta_T,\qquad(17)$$
where $\delta_T$ is defined by
$$\delta_T = \sum_{t=0}^{T-1}\alpha_t\Big[\langle G_t, x - x_t\rangle_{(i_t)} - \frac1n\langle g_t, x - x_t\rangle + f(x_t) - F(x_t,\xi_t)\Big] + \sum_{t=0}^{T-1}\alpha_t\Big[\omega_{i_t^c}(x_t) - \frac{n-1}{n}\omega(x_t) + \omega_{i_t}(x) - \frac1n\omega(x)\Big].\qquad(18)$$
In (17), subtracting $\sum_{t=0}^{T-1}\alpha_t\phi(x)$, and then $\frac{n-1}{n}\sum_{t=1}^T\alpha_t[\phi(x_t) - \phi(x)]$, on both sides, one has
$$\sum_{t=1}^T\Big(\alpha_{t-1} - \frac{n-1}{n}\alpha_t\Big)[\phi(x_t) - \phi(x)]\le\frac{n-1}{n}\alpha_0[\phi(x_0) - \phi(x)] + \sum_{i=1}^n\gamma_{T-1}^{(i)}d_i(x) + \delta_T + \sum_{t=0}^{T-1}\frac{\alpha_t^2\big(\|g_t - G_t\|_{(i_t),*} + 2M_{i_t}\big)^2}{2\big(\gamma_{t-1}^{(i_t)}\rho + l_{t-1}^{(i_t)}\lambda\big)}.\qquad(19)$$
Now let us take expectations on both sides of (19). Firstly, taking the expectation with respect to $i_t$, for $t = 0,1,\ldots,T-1$, we have $\mathbb{E}_{i_t}\big[\langle G_t, x^* - x_t\rangle_{(i_t)}\big] = \frac1n\langle G_t, x^* - x_t\rangle$ and $\mathbb{E}_{i_t}\big[\omega_{i_t^c}(x_t)\big] = \omega(x_t) - \mathbb{E}_{i_t}[\omega_{i_t}(x_t)] = \frac{n-1}{n}\omega(x_t)$. Moreover, by the assumptions $\mathbb{E}_{\xi_t}[F(x_t,\xi_t)] = f(x_t)$ and $\mathbb{E}_{\xi_t}[G(x_t,\xi_t)] = g(x_t)$, together with definition (18), we see that $\mathbb{E}[\delta_T] = 0$. In addition, we have $\big(\|g_t - G_t\|_{(i_t),*} + 2M_{i_t}\big)^2\le 2\big(\|g_t - G_t\|_{(i_t),*}^2 + 4M_{i_t}^2\big)$, and the expectation $\mathbb{E}_{\xi_t}\big[\|g_t - G_t\|_{(i_t),*}^2\big]\le\mathbb{E}_{\xi_t}\big[\|G_t\|_{(i_t),*}^2\big]\le M_{i_t}^2$. Furthermore, since $\xi_t$ is independent of $\gamma_{t-1}$ and $l_{t-1}$, we have
$$\mathbb{E}\bigg[\frac{\alpha_t^2\big(\|g_t - G_t\|_{(i_t),*} + 2M_{i_t}\big)^2}{2\big(\gamma_{t-1}^{(i_t)}\rho + l_{t-1}^{(i_t)}\lambda\big)}\bigg]\le\mathbb{E}\bigg[\frac{\alpha_t^2\big(\mathbb{E}_{\xi_t}\big[\|g_t - G_t\|_{(i_t),*}^2\big] + 4M_{i_t}^2\big)}{\gamma_{t-1}^{(i_t)}\rho + l_{t-1}^{(i_t)}\lambda}\bigg]\le\mathbb{E}\bigg[\frac{5M_{i_t}^2\alpha_t^2}{\gamma_{t-1}^{(i_t)}\rho + l_{t-1}^{(i_t)}\lambda}\bigg] = \sum_{i=1}^n\mathbb{E}\bigg[\frac{5M_i^2\alpha_t^2}{n\big(\gamma_{t-1}^{(i)}\rho + l_{t-1}^{(i)}\lambda\big)}\bigg].$$
Using these results in (19), we obtain (10). ∎

In Theorem 4 we presented general convergence properties of SBDA-u for both stochastic convex and strongly convex objectives. It should be noted that the right-hand side of (10) involves expectations, since both $\{\gamma_t\}$ and $\{l_t\}$ can be random. In the sequel, we derive more specialized convergence rates for both cases. Let us take $x = x^*$ and use assumption (3) throughout the analysis.

Convergence rate when ω(x) is a simple convex function

Firstly, we consider a constant stepsize policy and assume that $\gamma_t^{(i)}$ depends on $i$ and on the iteration number $T$. More specifically, let $\alpha_t\equiv 1$ and $\gamma_t^{(i)}\equiv\beta_i$ for some $\beta_i > 0$, $1\le i\le n$, $0\le t\le T$. Then $\mathbb{E}\big[\alpha_t^2/(\gamma_{t-1}^{(i)}\rho)\big] = \frac{1}{\rho\beta_i}$ for $1\le i\le n$, and hence
$$\sum_{t=1}^T\mathbb{E}[\phi(x_t) - \phi(x^*)]\le(n-1)[\phi(x_0) - \phi(x^*)] + n\sum_{i=1}^n\beta_iD_i + \frac{5T}{\rho}\sum_{i=1}^n\frac{M_i^2}{\beta_i}.$$
Let us choose $\beta_i = M_i\sqrt{\frac{5T}{n\rho D_i}}$, for $i = 1,2,\ldots,n$, to optimize the above bound. We obtain an upper bound on the error term:
$$\sum_{t=1}^T\mathbb{E}[\phi(x_t) - \phi(x^*)]\le(n-1)[\phi(x_0) - \phi(x^*)] + 2\sqrt{\frac{5Tn}{\rho}}\sum_{i=1}^n M_i\sqrt{D_i}.$$
If we use the averaged point $\bar x = \sum_{t=1}^T x_t/T$ as the output, we obtain the expected optimization error
$$\mathbb{E}[\phi(\bar x) - \phi(x^*)]\le\frac{n-1}{T}[\phi(x_0) - \phi(x^*)] + \frac{2\sqrt{5n}\sum_{i=1}^n M_i\sqrt{D_i}}{\sqrt\rho\sqrt T}.$$
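Putting the pieces together, the constant-stepsize instance of SBDA-u analyzed above can be sketched in a few lines. This is a minimal sketch under simplifying assumptions: unconstrained Euclidean blocks ($X_i = \mathbb{R}^{N_i}$, $d_i(x) = \frac12\|x\|^2$, $\rho = 1$), $\omega = 0$, $\alpha_t\equiv 1$, and constant block stepsizes; the toy quadratic oracle below is illustrative and not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sbda_u(G, blocks, gamma, T):
    """Constant-stepsize SBDA-u sketch: omega = 0, d_i(x) = (1/2)||x||^2."""
    N = blocks[-1][1]
    n = len(blocks)
    g_bar = np.zeros(N)          # running weighted sum of sampled block subgradients
    x = np.zeros(N)              # x_0 = argmin of sum_i gamma_i d_i(x^{(i)}) = 0
    avg = np.zeros(N)
    for t in range(T):
        i = rng.integers(n)      # uniform block sampling, probability 1/n
        s, e = blocks[i]
        g = G(x)                 # stochastic subgradient; only block i_t is used
        g_bar[s:e] += g[s:e]     # dual averaging restricted to block i_t
        # prox step on block i_t: argmin <g_bar^{(i)}, x> + gamma_i*(1/2)||x||^2
        x[s:e] = -g_bar[s:e] / gamma[i]
        avg += x
    return avg / T               # simple averaging (alpha_t = 1)

# Toy problem: f(x) = (1/2)||x - c||^2 observed through noisy gradients.
c = np.array([1.0, -1.0, 2.0, 0.5])
G = lambda x: (x - c) + 0.01 * rng.standard_normal(c.size)
blocks = [(0, 2), (2, 4)]
x_bar = sbda_u(G, blocks, gamma=np.array([50.0, 50.0]), T=5000)
print(np.round(x_bar, 1))        # should be close to c
```

Only the sampled block of the primal variable and of the dual average changes per iteration, which is the source of the low iteration cost emphasized in the text.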
In addition, we can also choose varying stepsizes without knowing the iteration number $T$ in advance. Differing from traditional stepsize policies, where $\gamma_t$ is usually associated only with $t$, here $\{\gamma_t^{(i)}\}$ is a random sequence dependent on both $t$ and $i_t$. In order to establish the convergence rate with such a randomized $\gamma_t$, we first state a useful technical result.

Lemma 5.
Let $p$ be a real number with $0 < p < 1$, and let $\{a_t\}$ and $\{b_t\}$ be sequences of nonnegative numbers satisfying the relation $a_t = p\,b_t + (1-p)a_{t-1}$, $t = 1,2,\ldots$. Then
$$\sum_{s=0}^t a_s\le\sum_{s=1}^t b_s + \frac{a_0}{p}.$$
We first let $\alpha_t\equiv 1$, and define $\{\gamma_t\}$ recursively as
$$\gamma_t^{(i)} = \begin{cases}u_i\sqrt{t+1}, & i = i_t,\\ \gamma_{t-1}^{(i)}, & i\ne i_t,\end{cases}$$
for some $u_i > 0$, $i = 1,2,\ldots,n$, $t = 0,1,2,\ldots,T$. From this definition, we obtain
$$\mathbb{E}\bigg[\frac{1}{\gamma_t^{(i)}}\bigg] = \frac1n\cdot\frac{1}{u_i\sqrt{t+1}} + \frac{n-1}{n}\mathbb{E}\bigg[\frac{1}{\gamma_{t-1}^{(i)}}\bigg].$$
Observing the fact that $\sum_{\tau=1}^t\frac{1}{\sqrt\tau}\le\int_0^{t+1}\frac{dx}{\sqrt x}\le 2\sqrt{t+1}$, and applying Lemma 5 with $p = \frac1n$, $a_t = \mathbb{E}\big[1/\gamma_{t-1}^{(i)}\big]$ and $b_t = \frac{1}{u_i\sqrt t}$, we have
$$\sum_{\tau=0}^t\mathbb{E}\bigg[\frac{1}{\gamma_{\tau-1}^{(i)}}\bigg]\le\frac{1}{u_i}\sum_{\tau=1}^t\frac{1}{\sqrt\tau} + \frac{n}{\gamma_{-1}^{(i)}}\le\frac{2\sqrt{t+1}}{u_i} + \frac{n}{\gamma_{-1}^{(i)}},$$
and hence
$$\sum_{t=0}^{T-1}\mathbb{E}\bigg[\frac{1}{\gamma_{t-1}^{(i)}\rho}\bigg]\le\frac1\rho\bigg[\frac{2\sqrt T}{u_i} + \frac{n}{\gamma_{-1}^{(i)}}\bigg],\quad i = 1,2,\ldots,n.\qquad(20)$$
In view of (20) and Theorem 4, we obtain
$$\sum_{i=1}^n\mathbb{E}\big[\gamma_{T-1}^{(i)}\big]d_i(x^*) + \sum_{i=1}^n\frac{5M_i^2}{n}\sum_{t=0}^{T-1}\mathbb{E}\bigg[\frac{1}{\gamma_{t-1}^{(i)}\rho}\bigg]\le\sum_{i=1}^n u_i\sqrt T D_i + \sum_{i=1}^n\frac{5M_i^2}{n\rho}\bigg[\frac{2\sqrt T}{u_i} + \frac{n}{\gamma_{-1}^{(i)}}\bigg].$$
Choosing $u_i = M_i\sqrt{\frac{10}{n\rho D_i}}$, we have
$$\mathbb{E}[\phi(\bar x) - \phi(x^*)]\le\frac{n-1}{T}[\phi(x_0) - \phi(x^*)] + \sum_{i=1}^n\frac{5nM_i^2}{\rho\gamma_{-1}^{(i)}T} + \frac{2\sqrt{10n}\sum_{i=1}^n M_i\sqrt{D_i}}{\sqrt\rho\sqrt T}.$$
We summarize the results in the following corollary:
Corollary 6.
In Algorithm 1, let $T > 0$, let $\bar x$ be the averaged point $\bar x = \sum_{t=1}^T x_t/T$, and let $\alpha_t\equiv 1$.

1. If $\gamma_t^{(i)} = M_i\sqrt{\frac{5T}{n\rho D_i}}$ for $t = 0,1,2,\ldots,T-1$, $i = 1,2,\ldots,n$, then
$$\mathbb{E}[\phi(\bar x) - \phi(x^*)]\le\frac{(n-1)[\phi(x_0) - \phi(x^*)]}{T} + \frac{2\sqrt{5n}\sum_{i=1}^n M_i\sqrt{D_i}}{\sqrt\rho\sqrt T};$$

2. If $\gamma_t^{(i)} = \begin{cases}M_i\sqrt{\frac{10(t+1)}{n\rho D_i}} & \text{if } i = i_t,\\ \gamma_{t-1}^{(i)} & \text{otherwise,}\end{cases}$ for $t = 0,1,2,\ldots,T-1$, and $\gamma_{-1}^{(i)} = M_i\sqrt{\frac{10}{n\rho D_i}}$, $i = 1,2,\ldots,n$, then
$$\mathbb{E}[\phi(\bar x) - \phi(x^*)]\le\frac{n-1}{T}[\phi(x_0) - \phi(x^*)] + \sum_{i=1}^n\frac{5nM_i^2}{\rho\gamma_{-1}^{(i)}T} + \frac{2\sqrt{10n}\sum_{i=1}^n M_i\sqrt{D_i}}{\sqrt\rho\sqrt T}.$$
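The two stepsize rules of Corollary 6 can be sketched as follows. The proportionality constants are folded into a single `scale` parameter, since the exact constants matter for the theory but not for illustrating the structure of the schedules; all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def constant_gammas(M, D, T, rho=1.0, scale=1.0):
    # Rule 1: gamma_i proportional to M_i * sqrt(T / (n*rho*D_i)), fixed in t.
    n = len(M)
    return scale * M * np.sqrt(T / (n * rho * D))

def adaptive_gammas(M, D, T, rho=1.0, scale=1.0):
    # Rule 2: gamma grows like u_i * sqrt(t+1), but only on the sampled block,
    # so the stepsize sequence is itself random, as emphasized in the text.
    n = len(M)
    u = scale * M / np.sqrt(n * rho * D)
    gamma = u.copy()                       # gamma_{-1}^{(i)} = u_i
    history = []
    for t in range(T):
        i = rng.integers(n)                # uniform block sampling
        gamma[i] = u[i] * np.sqrt(t + 1)   # update block i_t only
        history.append(gamma.copy())
    return np.array(history)

M = np.array([4.0, 1.0, 0.25])
D = np.array([1.0, 1.0, 4.0])
print(constant_gammas(M, D, T=100))
hist = adaptive_gammas(M, D, T=100)
# Each block's stepsize is nondecreasing in t, as required by assumption (9).
print(np.all(np.diff(hist, axis=0) >= -1e-12))   # prints True
```

Blocks with large $M_i$ (large subgradients) receive proportionally larger $\gamma^{(i)}$, i.e., more conservative moves, which is exactly how the block-adaptive rule tightens the bound relative to a single global stepsize.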
Corollary 6 provides both constant and adaptive stepsizes, and SBDA-u obtains a rate of convergence of $O(1/\sqrt T)$ for both, which matches the optimal rate for nonsmooth stochastic approximation [see (2.48) in [21]]. In the context of nonsmooth deterministic problems, it also matches the convergence rate of the subgradient method. However, it is more interesting to compare this with the convergence rate of block coordinate methods [see, for example, Corollary 2.2, part b) in [3]]. Ignoring the higher order terms, their convergence rate reads
$$O\Bigg(\frac{\sqrt{\sum_{i=1}^n M_i^2}\cdot\sqrt{n\sum_{i=1}^n D_i}}{\sqrt T}\Bigg).$$
Although the rate $O(1/\sqrt T)$ is unimprovable, it can be seen (using the Cauchy–Schwarz inequality) that
$$\sum_{i=1}^n M_i\sqrt{D_i}\le\sqrt{\sum_{i=1}^n M_i^2}\,\sqrt{\sum_{i=1}^n D_i},$$
with equality holding if and only if the ratio $M_i^2/D_i$ is equal to some positive constant, $1\le i\le n$. However, if this ratio differs widely across coordinate blocks, SBDA-u obtains a much tighter bound. To see this point, consider sequences $\{M_i\}$ and $\{D_i\}$ such that $k$ items in $\{M_i\}$ are $O(\tilde M)$ for some integer $k$, $0 < k\ll n$, while the rest are $o(1/n)$, and $D_i$ is uniformly bounded by $\tilde D$, $1\le i\le n$. Then the constant in SBDA-u is $O(\sqrt n\,k\tilde M\sqrt{\tilde D})$ while the one in SBMD is $O(n\sqrt k\,\tilde M\sqrt{\tilde D})$, which is $\sqrt{n/k}$ times larger.

Convergence rate when ω(x) is strongly convex

In this section, we investigate the convergence of SBDA-u when $\omega(x)$ is strongly convex with modulus $\lambda > 0$. More specifically, we consider two averaging schemes and stepsize selections. In the first approach, we apply a simple averaging scheme similar to [36]. By setting $\alpha_t\equiv 1$, all the past stochastic block subgradients are weighted equally.
In the second approach, we apply a more aggressive weighting scheme, which puts more weight on the later iterates.

To prove the convergence of SBDA-u when $\omega(x)$ is strongly convex, we introduce in the following lemma a useful "coupling" property of Bernoulli random variables:

Lemma 7.
Let $r$, $r_1$, $r_2$ be i.i.d. samples from $\mathrm{Bernoulli}(p)$, $0 < p < 1$, let $a, b > 0$, and let $x$ be any value such that $0\le x\le a$. Then
$$\mathbb{E}\bigg[\frac{1}{r_1x + r_2(a-x) + b}\bigg]\le\mathbb{E}\bigg[\frac{1}{ra + b}\bigg].\qquad(21)$$
In the next corollary, we derive the specific convergence rates for strongly convex problems.

Corollary 8.
In Algorithm 1, suppose $\omega(x)$ is strongly convex with modulus $\lambda > 0$.

1. If $\alpha_t\equiv 1$, $\gamma_t^{(i)} = \lambda/\rho$ for $t = 0,1,2,\ldots,T-1$, and $\bar x = \sum_{t=1}^T x_t/T$, then
$$\mathbb{E}[\phi(\bar x) - \phi(x^*)]\le\frac{(n-1)[\phi(x_0) - \phi(x^*)] + (n\lambda/\rho)\sum_{i=1}^n d_i(x^*)}{T} + \frac{5n\big(\sum_{i=1}^n M_i^2\big)\log(T+1)}{\lambda T}.$$

2. If $\alpha_t = n + t$ for $t = 0,1,2,\ldots$, $\alpha_{-1} = 0$, $\gamma_t^{(i)} = \lambda(2n+T)/\rho$ for $t = 0,1,2,\ldots,T-1$, and $\bar x = \sum_{t=1}^T t\,x_t\big/\sum_{t=1}^T t$, then
$$\mathbb{E}[\phi(\bar x) - \phi(x^*)]\le\frac{2n(n-1)[\phi(x_0) - \phi(x^*)] + 2n(2n+T)(\lambda/\rho)\sum_{i=1}^n d_i(x^*)}{T(T+1)} + \frac{10n\big(\sum_{i=1}^n M_i^2\big)}{\lambda(T+1)}\bigg[n + \frac{(n+1)\log T}{T}\bigg].$$

Proof.
In part 1, let $\alpha_t\equiv 1$ and $\gamma_t^{(i)}\equiv\lambda/\rho$. It can be observed that $l_{t-1}^{(i)}\sim\mathrm{Binomial}\big(t, \frac1n\big)$ for $t\ge 0$, so we have
$$\mathbb{E}\bigg[\frac{1}{l_{t-1}^{(i)}\lambda + \gamma_{t-1}^{(i)}\rho}\bigg] = \sum_{j=0}^t\binom{t}{j}\Big(\frac1n\Big)^j\Big(\frac{n-1}{n}\Big)^{t-j}\frac{1}{\lambda(j+1)} = \frac{n}{\lambda(t+1)}\sum_{j=0}^t\binom{t+1}{j+1}\Big(\frac1n\Big)^{j+1}\Big(\frac{n-1}{n}\Big)^{t-j} = \frac{n}{\lambda(t+1)}\bigg[1 - \Big(\frac{n-1}{n}\Big)^{t+1}\bigg]\le\frac{n}{\lambda(t+1)}.$$
Observing the fact that $\sum_{t=0}^{T-1}\frac{1}{t+1}\le 1 + \log T\le\log(T+1) + 1$, we obtain
$$\mathbb{E}[\phi(\bar x) - \phi(x^*)]\le\frac{(n-1)[\phi(x_0) - \phi(x^*)] + (n\lambda/\rho)\sum_{i=1}^n d_i(x^*)}{T} + \frac{5n\big(\sum_{i=1}^n M_i^2\big)\log(T+1)}{\lambda T}.$$
In part 2, let $\alpha_t = n + t$ for $t = 0, 1, 2, \ldots$, $\alpha_{-1} = 0$, and $\gamma_t^{(i)} = \lambda(2n+T)/\rho$. For any fixed $i$ and $t \ge 0$, let $r_s = \mathbf{1}\{i_s = i\}$, so that the $r_s \sim \mathrm{Bernoulli}(1/n)$ are i.i.d. In addition, introduce a sequence of ghost i.i.d. samples $\{r'_s\}_{0 \le s \le T}$, independent of $\{r_s\}$ and identically distributed. For $t \ge 0$,
\[
E\left[\frac{1}{l_t^{(i)}\lambda + \gamma_t^{(i)}\rho}\right] = E\left[\frac{1}{\lambda\sum_{s=0}^{t} r_s(n+s) + \lambda(2n+T)}\right] \le E\left[\frac{1}{\lambda\sum_{s=0}^{\lceil t/2\rceil - 1}\left\{r_s(n+s) + r_{t-s}(n+t-s)\right\} + \lambda(2n+t)}\right] \le E\left[\frac{1}{\lambda(2n+t)\left(\sum_{s=0}^{\lceil t/2\rceil - 1} r'_s + 1\right)}\right] \le \frac{n}{\lambda(2n+t)\max\{\lceil t/2\rceil, 1\}}, \tag{22}
\]
where the second inequality follows from the independence of $\{r_s\}$ and $\{r'_s\}$ and the coupling property in Lemma 7, and the last one from the binomial computation in part 1. It can be seen that the conclusion in (22) holds when $t = -1$ as well. Hence
\[
\sum_{t=0}^{T-1} E\left[\frac{\alpha_t^2}{\gamma_{t-1}^{(i)}\rho + l_{t-1}^{(i)}\lambda}\right] \le \sum_{t=0}^{T-1}\frac{n(n+t)^2}{\lambda(2n+t-1)\max\{\lceil (t-1)/2\rceil, 1\}} \le \sum_{t=0}^{T-1}\frac{n(n+t)}{\lambda\max\{\lceil (t-1)/2\rceil, 1\}} = \frac{n(2n+1)}{\lambda} + \sum_{t=2}^{T-1}\frac{n(n+t)}{\lambda\lceil (t-1)/2\rceil} \le \frac{n(2n+1)}{\lambda} + \sum_{t=2}^{T-1}\frac{2n(n+t)}{\lambda(t-1)} \le \frac{n(2n+1) + 2nT}{\lambda} + \frac{2n(n+1)}{\lambda}\left(1 + \int_{1}^{T}\frac{1}{x}\,dx\right) \le \frac{2n}{\lambda}\left(2n + 2 + T + (n+1)\log T\right).
\]
Let $\bar{x} = \frac{\sum_{t=1}^{T} t x_t}{\sum_{t=1}^{T} t}$ be the weighted average point; then
\[
E[\phi(\bar{x}) - \phi(x^*)] \le \frac{n(n-1)[\phi(x_0) - \phi(x^*)] + 2n(2n+T)(\lambda/\rho)\sum_{i=1}^n d_i(x^*)}{T(T+1)} + \frac{10n\left(\sum_{i=1}^n M_i^2\right)}{\lambda(T+1)}\left[\frac{2n + (n+1)\log T}{T}\right].
\]

For nonsmooth and strongly convex objectives, we have presented two options for selecting $\{\alpha_t\}$ and $\{\gamma_t\}$. These results seem to provide new insight into the dual averaging approach as well. To see this, consider SBDA-u with $n = 1$. In the first scheme, with $\alpha_t \equiv 1$, the convergence rate of $O(\log T/T)$ is similar to the one in [36]. The second scheme of Corollary 8 shows that regularized dual averaging methods can be made optimal simply by equipping them with a more aggressive averaging scheme. Our observation thus suggests an alternative with rate $O(1/T)$ to the more complicated accelerated schemes ([6, 2]). Such a result seems new to the world of simple averaging methods, and is on par with the recent discoveries for stochastic mirror descent methods ([20, 3, 8, 26, 12]).

SBDA with Nonuniform Sampling

In this section we consider the general nonsmooth convex problem in which $\omega(x) = 0$ or $\omega(x)$ is lumped into $f(\cdot)$:
\[
\min_{x \in X} \phi(x) = f(x),
\]
and present a variant of SBDA in which the block coordinates are sampled non-uniformly. More specifically, we assume the block coordinates are sampled i.i.d. from a discrete distribution $\{p_i\}_{1 \le i \le n}$ with $0 < p_i < 1$ and $\sum_{i=1}^n p_i = 1$. We describe in Algorithm 2 the nonuniformly randomized stochastic block dual averaging method (SBDA-r).

Input: convex function $f$, sequence of samples $\{\xi_t\}$, distribution $\{p_i\}_{1 \le i \le n}$;
initialize $\alpha_0 \in \mathbb{R}$, $\gamma_{-1} \in \mathbb{R}^n$, $\bar{G} = 0 \in \mathbb{R}^N$, and $x_0 = \arg\min_{x \in X} \sum_{i=1}^n \frac{\gamma_{-1}^{(i)}}{p_i} d_i(x^{(i)})$;
for $t = 0, 1, \ldots, T-1$ do
    sample a block $i_t \in \{1, 2, \ldots, n\}$ with probability $\mathrm{Prob}(i_t = i) = p_i$;
    set $\gamma_t^{(i)}$, $i = 1, 2, \ldots, n$;
    receive sample $\xi_t$ and update $\bar{G}$: $\bar{G} = \bar{G} + \frac{\alpha_t}{p_{i_t}} U_{i_t} G^{(i_t)}(x_t, \xi_t)$;
    update $x_{t+1}^{(i_t)} = \arg\min_{x \in X_{i_t}} \left\{ \left\langle \bar{G}^{(i_t)}, x \right\rangle + \frac{\gamma_t^{(i_t)}}{p_{i_t}} d_{i_t}(x) \right\}$;
    set $x_{t+1}^{(j)} = x_t^{(j)}$, $j \ne i_t$;
end
Output: $\bar{x} = \left(\sum_{t=0}^{T} \alpha_t x_t\right) / \left(\sum_{t=0}^{T} \alpha_t\right)$;

Algorithm 2: Nonuniformly randomized stochastic block dual averaging (SBDA-r) method

In the next theorem, we present the main convergence property of SBDA-r, which expresses the bound on the expected optimization error as a joint function of the sampling distribution $\{p_i\}$ and the sequences $\{\alpha_t\}$, $\{\gamma_t\}$.
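To make the scheme concrete, here is a minimal sketch of Algorithm 2 for the Euclidean setup $d_i(x) = \frac{1}{2}\|x\|^2$ and $X_i = \mathbb{R}^{N_i}$, so the block prox step has a closed form. We take $\alpha_t = 1$ and an anytime stepsize of the form $\gamma_t^{(i)} = \sqrt{t+1}\,u_i$ applied to the sampled block (cf. the rule derived in the sequel). All names and constants are ours and purely illustrative, not the paper's implementation:

```python
import numpy as np

def sbda_r(subgrad, x0, blocks, p, u, T, rng):
    """Sketch of SBDA-r with d_i(x) = 0.5*||x||^2 on X_i = R^{N_i}.

    subgrad(x, t) returns a (stochastic) subgradient of f at x;
    blocks is a list of index arrays partitioning the coordinates;
    p are the block sampling probabilities, u the stepsize scales.
    """
    x = x0.copy()
    G = np.zeros_like(x0)      # importance-weighted dual average of subgradients
    gamma = u.copy()           # gamma_{-1}^{(i)} = u_i
    x_sum = np.zeros_like(x0)  # for the averaged output
    for t in range(T):
        i = rng.choice(len(blocks), p=p)
        idx = blocks[i]
        gamma[i] = np.sqrt(t + 1.0) * u[i]       # update sampled block's stepsize
        G[idx] += subgrad(x, t)[idx] / p[i]      # alpha_t = 1, weighted by 1/p_i
        # block prox step: argmin_z <G_i, z> + (gamma_i/p_i) * 0.5*||z||^2
        x[idx] = -G[idx] * p[i] / gamma[i]
        x_sum += x
    return x_sum / T

# usage: minimize f(x) = ||x - e||_1 over R^4, two blocks, uniform sampling
rng = np.random.default_rng(0)
blocks = [np.array([0, 1]), np.array([2, 3])]
subgrad = lambda x, t: np.sign(x - 1.0)
xbar = sbda_r(subgrad, np.zeros(4), blocks,
              p=np.array([0.5, 0.5]), u=np.array([1.0, 1.0]), T=4000, rng=rng)
assert np.abs(xbar - 1.0).max() < 0.25
```

The averaged iterate approaches the minimizer $(1, 1, 1, 1)$ of $\|x - e\|_1$ at the expected $O(1/\sqrt{T})$ rate.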
Theorem 9. In Algorithm 2, let $\{x_t\}$ be the generated solutions and $x^*$ the optimal solution, let $\{\alpha_t\}$ be a sequence of positive numbers, and let $\{\gamma_t\}$ be a sequence of vectors satisfying assumption (9). Let $\bar{x} = \frac{\sum_{t=0}^T \alpha_t x_t}{\sum_{t=0}^T \alpha_t}$ be the average point; then, for any $x \in X$,
\[
E[f(\bar{x}) - f(x)] \le \frac{1}{\sum_{t=0}^T \alpha_t}\left\{ \sum_{t=0}^{T}\sum_{i=1}^{n} E\left[\frac{\alpha_t^2 \|G_t\|_{(i),*}^2}{2\rho\gamma_{t-1}^{(i)}}\right] + \sum_{i=1}^{n} \frac{E[\gamma_T^{(i)}]}{p_i} d_i(x) \right\}. \tag{23}
\]

Proof.
For the sake of simplicity, denote $A_t = \sum_{\tau=0}^{t}\alpha_\tau$ for $t = 0, 1, \ldots$. Based on the convexity of $f$, we have $f\left(\frac{\sum_{t=0}^T \alpha_t x_t}{A_T}\right) \le \frac{\sum_{t=0}^T \alpha_t f(x_t)}{A_T}$ and $f(x_t) \le f(x) + \langle g_t, x_t - x\rangle$ for $x \in X$. Then
\[
A_T[f(\bar{x}) - f(x)] \le \sum_{t=0}^{T}\alpha_t\langle g_t, x_t - x\rangle = \underbrace{\sum_{t=0}^{T}\frac{\alpha_t}{p_{i_t}}\left\langle U_{i_t}G_t^{(i_t)}, x_t - x\right\rangle}_{\Delta_1} + \underbrace{\sum_{t=0}^{T}\alpha_t\left\langle g_t - \frac{1}{p_{i_t}}U_{i_t}G_t^{(i_t)}, x_t - x\right\rangle}_{\Delta_2}. \tag{24}
\]
It suffices to provide precise bounds on the expectations of $\Delta_1$ and $\Delta_2$ separately. We define the auxiliary function
\[
\Psi_t(x) = \sum_{s=0}^{t}\frac{\alpha_s}{p_{i_s}}\left\langle U_{i_s}G_s^{(i_s)}, x\right\rangle + \sum_{i=1}^{n}\frac{\gamma_t^{(i)}}{p_i}d_i(x^{(i)}), \quad t \ge 0; \qquad \Psi_{-1}(x) = \sum_{i=1}^{n}\frac{\gamma_{-1}^{(i)}}{p_i}d_i(x^{(i)}).
\]
Thus
\[
\Psi_t(x_{t+1}) = \min_x \Psi_t(x) \ge \min_x\left\{\sum_{s=0}^{t}\frac{\alpha_s}{p_{i_s}}\left\langle U_{i_s}G_s^{(i_s)}, x\right\rangle + \sum_{i=1}^{n}\frac{\gamma_{t-1}^{(i)}}{p_i}d_i(x^{(i)})\right\} = \min_x\left\{\frac{\alpha_t}{p_{i_t}}\left\langle U_{i_t}G_t^{(i_t)}, x\right\rangle + \Psi_{t-1}(x)\right\}. \tag{25}
\]
The inequality follows from property (9). Next, using (25) and Lemma 2, we obtain
\[
\frac{\alpha_t}{p_{i_t}}\left\langle U_{i_t}G_t^{(i_t)}, x_t\right\rangle \le \Psi_t(x_{t+1}) - \Psi_{t-1}(x_t) + \frac{\alpha_t^2}{2\rho p_{i_t}\gamma_{t-1}^{(i_t)}}\|G_t\|_{(i_t),*}^2.
\]
Summing up the above inequality for $t = 0, \ldots, T$, we have
\[
\sum_{t=0}^{T}\frac{\alpha_t}{p_{i_t}}\left\langle U_{i_t}G_t^{(i_t)}, x_t\right\rangle \le \Psi_T(x_{T+1}) - \Psi_{-1}(x_0) + \sum_{t=0}^{T}\frac{\alpha_t^2}{2\rho p_{i_t}\gamma_{t-1}^{(i_t)}}\|G_t\|_{(i_t),*}^2. \tag{26}
\]
Moreover, by the optimality of $x_{T+1}$ in solving $\min_x \Psi_T(x)$, for all $x \in X$ we have
\[
\Psi_T(x_{T+1}) \le \sum_{t=0}^{T}\frac{\alpha_t}{p_{i_t}}\left\langle U_{i_t}G_t^{(i_t)}, x\right\rangle + \sum_{i=1}^{n}\frac{\gamma_T^{(i)}}{p_i}d_i(x). \tag{27}
\]
Putting (26) and (27) together, and using the fact that $\Psi_{-1}(x_0) \ge 0$, we obtain
\[
\Delta_1 \le \sum_{i=1}^{n}\frac{\gamma_T^{(i)}}{p_i}d_i(x) + \sum_{t=0}^{T}\frac{\alpha_t^2}{2\rho p_{i_t}\gamma_{t-1}^{(i_t)}}\|G_t\|_{(i_t),*}^2.
\]
For each $t$, taking expectation w.r.t. $i_t$, we have
\[
E\left[\frac{\alpha_t^2}{2\rho p_{i_t}\gamma_{t-1}^{(i_t)}}\|G_t\|_{(i_t),*}^2\right] = E\left[E_{i_t}\left[\frac{\alpha_t^2}{2\rho p_{i_t}\gamma_{t-1}^{(i_t)}}\|G_t\|_{(i_t),*}^2\right]\right] = \sum_{i=1}^{n}E\left[\frac{\alpha_t^2\|G_t\|_{(i),*}^2}{2\rho\gamma_{t-1}^{(i)}}\right].
\]
As a consequence, one has
\[
E[\Delta_1] \le \sum_{i=1}^{n}\frac{E[\gamma_T^{(i)}]}{p_i}d_i(x) + \sum_{t=0}^{T}\sum_{i=1}^{n}E\left[\frac{\alpha_t^2\|G_t\|_{(i),*}^2}{2\rho\gamma_{t-1}^{(i)}}\right]. \tag{28}
\]
In addition, taking the expectation with respect to $i_t$ and $\xi_t$, and noting that $E_{\xi_t, i_t}\left[\frac{1}{p_{i_t}}U_{i_t}G_t^{(i_t)}\right] - g_t = E_{\xi_t}[G_t] - g_t = 0$, we obtain
\[
E[\Delta_2] = 0. \tag{29}
\]
In view of (28) and (29), we obtain the claimed bound (23) on the expected optimization error.

Block Coordinate Sampling and Analysis
In view of Theorem 9, the obtained upper bound can be conceived as a joint function of the probability mass $\{p_i\}$ and the control sequences $\{\alpha_t\}$, $\{\gamma_t\}$. Firstly, throughout this section, let $x = x^*$ and assume
\[
\alpha_t = 1, \quad t = 0, 1, 2, \ldots. \tag{30}
\]
Naturally, we can choose the distribution and stepsizes by minimizing the bound:
\[
\min_{\{\gamma_t\}, p} L(\{\gamma_t\}, p) = \sum_{t=0}^{T}\sum_{i=1}^{n}E\left[\frac{M_i^2}{2\rho\gamma_{t-1}^{(i)}}\right] + \sum_{i=1}^{n}\frac{E[\gamma_T^{(i)}]}{p_i}D_i. \tag{31}
\]
This is a joint problem in two groups of variables. Let us first discuss how to choose $\{\gamma_t\}$ for any fixed $p$. Assume $p_i$ has the form
\[
p_i = \frac{M_i^a D_i^b}{C_{a,b}}, \quad i = 1, 2, \ldots, n,
\]
where $a, b \ge 0$ and $C_{a,b} = \sum_{i=1}^{n}M_i^a D_i^b$. We derive two stepsize rules, depending on whether the iteration number $T$ is known or not. First, assume $\gamma_t^{(i)} = \beta_i$ for some constant $\beta_i$, $i = 1, 2, \ldots, n$, $t = -1, 0, \ldots, T$. The equivalent problem in $p$, $\beta$ has the form
\[
\min_{p,\beta} L(p, \beta) = \sum_{i=1}^{n}\frac{(T+1)M_i^2}{2\rho\beta_i} + \sum_{i=1}^{n}\frac{\beta_i D_i}{p_i}. \tag{32}
\]
By optimizing w.r.t. $\beta$, we obtain the optimal solution
\[
\gamma_t^{(i)} = \beta_i = \sqrt{\frac{(1+T)\,p_i M_i^2}{2\rho D_i}}. \tag{33}
\]
In addition, we can also select stepsizes without assuming knowledge of the iteration number $T$. Let us denote
\[
\gamma_t^{(i)} = \begin{cases}\sqrt{t+1}\, u_i & \text{if } i = i_t,\\ \gamma_{t-1}^{(i)} & \text{otherwise},\end{cases} \tag{34}
\]
for some unspecified $u_i > 0$, $1 \le i \le n$. Applying Lemma 5 with $a_t = E\left[1/\gamma_{t-1}^{(i)}\right]$ and $b_t = \frac{1}{u_i\sqrt{t}}$, we have
\[
\sum_{t=0}^{T}E\left[\frac{1}{\gamma_{t-1}^{(i)}}\right] \le \sum_{t=1}^{T}\frac{1}{u_i\sqrt{t}} + \frac{1}{\gamma_{-1}^{(i)}p_i} \le \frac{2\sqrt{T+1}}{u_i} + \frac{1}{\gamma_{-1}^{(i)}p_i}.
\]
In view of the above analysis, we can relax the problem to the following:
\[
\min_{p,u}\sum_{i=1}^{n}\left[\frac{M_i^2\sqrt{T+1}}{\rho u_i} + \frac{u_i\sqrt{T+1}}{p_i}D_i + \frac{M_i^2}{2\rho\gamma_{-1}^{(i)}p_i}\right].
\]
Note that the third term above is $o(\sqrt{T})$ and hence can be ignored for the sake of simplicity. Thus we have the approximate problem
\[
\min_{p,u}\sum_{i=1}^{n}\left[\frac{M_i^2\sqrt{T+1}}{\rho u_i} + \frac{u_i\sqrt{T+1}}{p_i}D_i\right], \tag{35}
\]
to which a similar analysis applies and yields $u_i = \sqrt{\frac{p_i M_i^2}{\rho D_i}}$, and hence the second stepsize rule
\[
\gamma_t^{(i)} = \begin{cases}\sqrt{\frac{(t+1)\,p_i M_i^2}{\rho D_i}} & \text{if } i = i_t,\\ \gamma_{t-1}^{(i)} & \text{otherwise},\end{cases} \quad t \ge 0. \tag{36}
\]
We have now established the relation between the sampling probabilities and the optimized stepsizes, and are ready to discuss specific choices of the probability distribution. Firstly, the simplest choice is
\[
p_i = \frac{1}{n}, \quad i = 1, 2, \ldots, n, \tag{37}
\]
under which SBDA-r reduces to the uniform sampling method SBDA-u, with stepsizes entirely similar to the ones derived earlier. However, the above analysis shows that by choosing the sampling distribution properly, a further improved convergence rate can be obtained. Next we show how to obtain the optimal sampling and stepsize policies by solving the joint problem (31). We first state an important property in the following lemma.

Lemma 10.
Let $S_n$ be the $n$-dimensional simplex. The optimal solution $(x^*, y^*)$ of the nonlinear problem
\[
\min_{x \in \mathbb{R}^n_{++},\ y \in S_n}\ \sum_{i=1}^{n}\left[\frac{a_i}{x_i} + \frac{x_i b_i}{y_i}\right],
\]
where $a_i, b_i > 0$, $1 \le i \le n$, is given by
\[
y_i^* = (a_i b_i)^{1/3}W \quad \text{and} \quad x_i^* = \left(\frac{a_i^2}{b_i}\right)^{1/3}\sqrt{W}, \quad i = 1, 2, \ldots, n, \quad \text{where } W = \left(\sum_{i=1}^{n}(a_i b_i)^{1/3}\right)^{-1}.
\]

Applying Lemma 10 to the problem (32), we obtain the optimal sampling probability
\[
p_i = \frac{M_i^{2/3}D_i^{1/3}}{C}, \quad i = 1, 2, \ldots, n, \tag{38}
\]
where $C$ is the normalizing constant. This is also the optimal probability for problem (35). In view of these results, we obtain the specific convergence rates in Corollary 11 below.
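As a quick sanity check on the closed form in Lemma 10 (our reconstruction of the garbled exponents), the following sketch compares it against a brute-force grid search for $n = 2$; all names are ours:

```python
import numpy as np

def closed_form(a, b):
    """Lemma 10 minimizer: y_i* = (a_i b_i)^(1/3) W, x_i* = (a_i^2/b_i)^(1/3) sqrt(W),
    with W = 1 / sum_j (a_j b_j)^(1/3)."""
    W = 1.0 / np.sum((a * b) ** (1.0 / 3))
    return (a ** 2 / b) ** (1.0 / 3) * np.sqrt(W), (a * b) ** (1.0 / 3) * W

def objective(x, y, a, b):
    return np.sum(a / x + x * b / y)

a, b = np.array([1.0, 4.0]), np.array([2.0, 0.5])
x_star, y_star = closed_form(a, b)
f_star = objective(x_star, y_star, a, b)

# the optimal value should equal 2 * (sum_i (a_i b_i)^(1/3))^(3/2)
C = np.sum((a * b) ** (1.0 / 3))
assert abs(f_star - 2 * C ** 1.5) < 1e-9

# no grid point over the simplex and a box of positive x does better
for y1 in np.linspace(0.02, 0.98, 25):
    y = np.array([y1, 1.0 - y1])
    for x1 in np.linspace(0.05, 5.0, 40):
        for x2 in np.linspace(0.05, 5.0, 40):
            assert objective(np.array([x1, x2]), y, a, b) >= f_star - 1e-9
```

The identity $L(x^*, y^*) = 2\left(\sum_i (a_i b_i)^{1/3}\right)^{3/2}$ checked here is exactly what produces the $C^{3/2}$ factor in the rates below.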
Corollary 11. In Algorithm 2, let $\alpha_t = 1$, $t \ge 0$. Denote $C = \sum_{j=1}^{n}M_j^{2/3}D_j^{1/3}$, with block coordinates sampled from distribution (38). Then:

1. If $\gamma_t^{(i)} = \sqrt{\frac{1+T}{2\rho C}}\,M_i^{4/3}D_i^{-1/3}$ for $t \ge -1$, $i = 1, 2, \ldots, n$, then
\[
E[f(\bar{x}) - f(x^*)] \le \frac{\sqrt{2}}{\sqrt{\rho}}\cdot\frac{C^{3/2}}{\sqrt{T+1}}. \tag{39}
\]
2. If $\gamma_{-1}^{(i)} = \sqrt{\frac{1}{\rho C}}\,M_i^{4/3}D_i^{-1/3}$ and
\[
\gamma_t^{(i)} = \begin{cases}\sqrt{\frac{t+1}{\rho C}}\,M_i^{4/3}D_i^{-1/3} & \text{if } i = i_t,\\ \gamma_{t-1}^{(i)} & \text{otherwise},\end{cases} \quad t \ge 0,\ i = 1, 2, \ldots, n,
\]
then
\[
E[f(\bar{x}) - f(x^*)] \le \frac{C^{3/2}}{\sqrt{\rho}}\left[\frac{2}{\sqrt{T+1}} + \frac{n}{2(T+1)}\right]. \tag{40}
\]

Proof.
It remains to plug the values of $\{\gamma_t\}$ and $p$ back into $L(\cdot, \cdot)$.

It is interesting to compare the convergence properties of SBDA-r with those of SBDA-u and SBMD. SBDA with uniform sampling of block coordinates yields only a suboptimal dependence on the multiplicative constants. Nevertheless, the rate can be further improved by employing optimal nonuniform sampling. To develop further intuition, we relate the two rates of convergence with the help of Hölder's inequality:
\[
\left[\sum_{i=1}^{n}M_i^{2/3}D_i^{1/3}\right]^{3/2} = \left[\sum_{i=1}^{n}\left(M_i\sqrt{D_i}\right)^{2/3}\cdot 1\right]^{3/2} \le \left[\left(\sum_{i=1}^{n}M_i\sqrt{D_i}\right)^{2/3}\left(\sum_{i=1}^{n}1\right)^{1/3}\right]^{3/2} = \left(\sum_{i=1}^{n}M_i\sqrt{D_i}\right)\cdot\sqrt{n}.
\]
The inequality is tight if and only if $M_i\sqrt{D_i} = c$ for some constant $c > 0$ and all $1 \le i \le n$. In addition, we compare SBDA-r with a nonuniform version of SBMD$^a$, which obtains the rate
\[
O\left(\frac{\sqrt{\sum_{i=1}^{n}M_i^2}\cdot\sum_{i=1}^{n}\sqrt{D_i}}{\sqrt{T}}\right),
\]
assuming blocks are sampled from the distribution $p_i \propto \sqrt{D_i}$. Again applying Hölder's inequality, we have
\[
\left[\sum_{i=1}^{n}M_i^{2/3}D_i^{1/3}\right]^{3/2} \le \left[\left(\sum_{i=1}^{n}M_i^2\right)^{1/3}\left(\sum_{i=1}^{n}\sqrt{D_i}\right)^{2/3}\right]^{3/2} = \sqrt{\sum_{i=1}^{n}M_i^2}\cdot\sum_{i=1}^{n}\sqrt{D_i}.
\]
In conclusion, SBDA-r, equipped with an optimized block sampling scheme, obtains the best iteration complexity among all the block subgradient methods discussed.
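The two Hölder bounds relating the three rate constants can be checked numerically; a short sketch with $M_i$, $D_i$ drawn at random:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    n = int(rng.integers(1, 9))
    M = rng.uniform(0.1, 10.0, n)          # block subgradient bounds
    D = rng.uniform(0.1, 10.0, n)          # block prox-function bounds
    sbda_r = np.sum(M ** (2 / 3) * D ** (1 / 3)) ** 1.5
    sbda_u = np.sum(M * np.sqrt(D)) * np.sqrt(n)
    sbmd = np.sqrt(np.sum(M ** 2)) * np.sum(np.sqrt(D))
    assert sbda_r <= sbda_u + 1e-9 and sbda_r <= sbmd + 1e-9

# tightness of the first bound when M_i * sqrt(D_i) is constant
D = rng.uniform(0.1, 10.0, 5)
M = 1.0 / np.sqrt(D)
lhs = np.sum(M ** (2 / 3) * D ** (1 / 3)) ** 1.5
assert abs(lhs - np.sum(M * np.sqrt(D)) * np.sqrt(5)) < 1e-9
```

The SBDA-r constant never exceeds either of the other two, and the first bound is attained exactly in the balanced case, as stated above.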
$^a$ See Corollary 2.2, part a) of [3].

Experiments

In this section, we examine the theoretical advantages of SBDA through several preliminary experiments. For all the algorithms compared, we estimate the parameters and tune the best stepsizes using separate validation data. We first investigate the performance of SBDA on nonsmooth deterministic problems by comparing it against other nonsmooth algorithms: SM1 and SM2 are subgradient mirror descent methods with stepsizes $\gamma_t \propto 1/\sqrt{t}$ and $\gamma_t \propto 1/\|g(x_t)\|$, respectively; SGD is stochastic mirror descent; and SDA is a stochastic subgradient dual averaging method.

Figure 1: Tests on $\ell_1$ regression.

We study the problem of robust linear regression ($\ell_1$ regression) with the objective $\phi(x) = \frac{1}{m}\sum_{i=1}^{m}\left|b_i - a_i^T x\right|$. The optimal solution $x^*$ and each $a_i$ are generated from $N(0, I_{n\times n})$. In addition, we define a scaling vector $s \in \mathbb{R}^n$ and a diagonal matrix $S$ such that $S_{ii} = s_i$. We let $b = (AS)x^* + \sigma$, where $A = [a_1, a_2, \ldots, a_m]^T \in \mathbb{R}^{m\times n}$ and the noise $\sigma \sim N(0, \rho I)$ for a small noise level $\rho$. We set $m = n = 5000$.

We plot the optimization objective against the number of passes over the dataset in Figure 1, for four different choices of $s$. In the first test case (leftmost subfigure), we let $s = [1, 1, \ldots, 1]^T$, so that the columns of $A$ have uniform scaling. We find that SBDA-u and SBDA-r have slightly better performance than the other algorithms, while all methods behave very similarly. In the next three cases, $s$ is generated from the distribution $p(x; a) = a(1-x)^{a-1}$, $0 \le x \le 1$, $a > 0$, with increasingly large values of $a$ (the first being $a = 1$). Employing a large $a$ ensures that the bounds on the norms of the block subgradients follow a power law. We observe that the stochastic methods outperform the deterministic methods, and the SBDA-based algorithms have comparable and often better performance than the SGD algorithms. In particular, SBDA-r exhibits the best performance, which clearly shows the advantage of SBDA with the nonuniform sampling scheme.

Next, we examine the performance of SBDA for online learning and stochastic approximation. We conduct simulated experiments on the problem $\phi(x) = E_{a,b}\left[(b - \langle La, x\rangle)^2\right]$, where the aim is to fit a linear regression under a linear transform $L$. The transform matrix $L \in \mathbb{R}^{n\times n}$ is generated as follows: we first sample a matrix $\tilde{L}$ whose entries are $\tilde{L}_{i,j} \sim N(0, 1)$; $L$ is then obtained from $\tilde{L}$ by randomly rescaling a fraction of the rows by a factor $\rho$.
To obtain the optimal solution $x^*$, we first generate a random vector from the distribution $N(0, I_{n\times n})$ and then truncate each coordinate to $[-1, 1]$. Simulated samples are generated according to $b = \langle La, x^*\rangle + \varepsilon$, where $\varepsilon$ is small Gaussian noise. We let $n = 200$, and generate 3000 independent samples for training and 10000 independent samples for testing.

To compare the performance of these algorithms under various conditions, we vary the parameter $\rho$ over a decreasing sequence of values starting from $\rho = 1$. As can be seen from the above, $\rho$ affects the estimation of the block-wise parameters $\{M_i\}$. In Figure 2, we show the objective function averaged over 20 runs. The experimental results show the advantages of SBDA over SBMD. When $\rho = 1$, SBDA-u, SBDA-r, and SBMD have the same theoretical convergence rate, and they exhibit similar performance. However, as $\rho$ decreases, the "importance" of the rescaled blocks diminishes, and we find that SBDA-u and SBDA-r both outperform SBMD. Moreover, SBDA-r performs best, suggesting the advantage of our proposed stepsize and sampling schemes, which are adaptive to the block structure. These observations lend empirical support to our theoretical analysis.

Our next experiment considers online $\ell_1$-regularized linear regression (Lasso):
\[
\min_{w \in \mathbb{R}^n} E_{(y,x)}\left[\left(y - w^T x\right)^2\right] + \lambda\|w\|_1. \tag{41}
\]
While linear regression has been well studied in the literature, recent work is interested in efficient regression algorithms under different adversarial circumstances [1, 9, 10].
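Our stochastic methods for (41) rely on importance-weighted estimates of coordinate gradients built from partially observed features, as detailed below; the unbiasedness of such an estimator can be checked exactly by summing over the sampling distribution. A small sketch (names ours; we use the $\frac{1}{2}$-scaled least-squares loss $\frac{1}{2}(w^Tx - y)^2$, whose $i$-th partial derivative is $(w^Tx - y)x^{(i)}$):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
x = rng.normal(size=n)            # feature vector of one instance
w = rng.normal(size=n)            # current iterate
y = rng.normal()                  # label
p = rng.uniform(0.1, 1.0, n)
p /= p.sum()                      # sampling distribution over features

i = 3                             # coordinate whose gradient we estimate
true_grad_i = (w @ x - y) * x[i]

# exact expectation over j ~ p of the estimator G^(i) = (1/p_j) x_i x_j w_j - y x_i
expected = sum(p[j] * ((1.0 / p[j]) * x[i] * x[j] * w[j] - y * x[i])
               for j in range(n))
assert abs(expected - true_grad_i) < 1e-12
```

Since the $1/p_j$ weight cancels the sampling probability, the expectation reduces term by term to $(w^Tx - y)x^{(i)}$.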
Figure 2: Tests on linear regression. Left to right: $\rho = 1$ followed by successively smaller values.

(a) Test on covtype dataset. (b) Test on mnist dataset.
Figure 3: Tests on online lasso with limited budgets.

Under the assumption of limited budgets, the learner only partially observes the features of each incoming instance, but is allowed to choose the sampling distribution over the features. In addition, we explicitly enforce the $\ell_1$ penalty, expecting to learn a sparse solution that effectively reduces the testing cost. To apply stochastic methods, we estimate the stochastic coordinate gradient of the least-squares loss. For the sake of simplicity, we assume that for each input instance $(y, x)$, two features $(i_t, j_t)$ are revealed. If we sample one coordinate $j_t$ from some distribution $\{p_j\}$, then $\frac{1}{p_{j_t}}w^{(j_t)}x^{(j_t)}$ is an unbiased estimator of $w^T x$. Hence the value $G^{(i_t)} = \frac{1}{p_{j_t}}x^{(i_t)}x^{(j_t)}w^{(j_t)} - y\,x^{(i_t)}$ is an unbiased estimator of the $i_t$-th coordinate gradient.

We adapt both SBMD and SBDA-u to this problem and conduct experiments on the datasets covtype and mnist (digits "3 vs 5"). We also implement MD (composite mirror descent) and DA (the regularized dual averaging method). All methods use the same total number of features during training; however, SBMD and SBDA-u obtain features sampled from a uniform distribution, while MD and DA have "unfair" access to the full feature vectors and therefore enjoy the advantage of lower variance. We plot in Figures 3a and 3b the optimization error and the sparsity patterns with respect to the penalty weight $\lambda$ on the two datasets. It can be seen that SBDA-u has comparable and often better optimization accuracy than SBMD. In addition, the sparsity patterns for different values of $\lambda$ show that SBDA-u is very effective in enhancing sparsity: more efficient than SBMD and MD, and comparable to DA, which does not operate under such budget constraints.

Conclusion

In this paper we introduced SBDA, a new family of block subgradient methods for nonsmooth and stochastic optimization, based on a novel extension of dual averaging methods.
We specialized SBDA-u for regularized problems with nonsmooth or strongly convex regularizers, and SBDA-r for general nonsmooth problems. We proposed novel randomized stepsizes and optimal sampling schemes that are truly block adaptive, thereby obtaining a set of sharper bounds. Experiments demonstrate the advantage of SBDA methods over subgradient methods on nonsmooth deterministic and stochastic optimization. In the future, we will extend SBDA to an important class of regularized learning problems consisting of a finite sum of differentiable losses. On such problems, recent work [31, 32] shows efficient BCD convergence at a linear rate. The works [39, 35] propose randomized BCD methods that sample both primal and dual variables; however, both methods apply conservative stepsizes that take the maximum of the block Lipschitz constants. It would be interesting to see whether our techniques of block-wise stepsizes and nonuniform sampling can be applied in this setting as well to obtain improved performance.
Proof of Lemma 1
Proof.
The first part comes from [34]. Let $g(z)$ denote any subgradient of $f$ at $z$. Since $f(x)$ is strongly convex, we have
\[
f(x) \ge f(z) + \left\langle U_i g^{(i)}(z), x - z\right\rangle + \frac{\lambda}{2}\|x - z\|_{(i)}^2.
\]
By the definition of $z$ and the optimality condition, we have $g^{(i)}(z) = -\nabla_i d(z)$. Thus
\[
f(x) + \left\langle \nabla_i d(z), x - z\right\rangle \ge f(z) + \frac{\lambda}{2}\|x - z\|_{(i)}^2.
\]
It remains to apply the definitions $x = z + U_i y$ and $V(z, x) = d(x) - d(z) - \langle\nabla d(z), x - z\rangle$.

Proof of Lemma 2
Proof.
Let $h(y) = \max_{x \in X}\{\langle y, x\rangle - \Psi(x)\}$. Since $\Psi(\cdot)$ is strongly convex and separable, $h(\cdot)$ is convex and differentiable, and its $i$-th block gradient $\nabla_i h(\cdot)$ is $(1/\rho_i)$-Lipschitz smooth. Moreover, we have $\nabla h(0) = x$ by the definition of $x$. Thus
\[
h\left(-U_i g^{(i)}\right) \le h(0) + \left\langle x, -U_i g^{(i)}\right\rangle + \frac{1}{2\rho_i}\|g\|_{(i),*}^2.
\]
It remains to plug in the definitions of $h(\cdot)$, $z$, and $x$.

Proof of Lemma 3
Proof.
By the convexity of $f(\cdot)$, we have $f(z) \le f(x) + \langle g(z), z - x\rangle$. In addition,
\[
\langle g(z), z - x\rangle = \langle g(x), z - x\rangle + \langle g(z) - g(x), z - x\rangle = \left\langle g^{(i)}(x), y\right\rangle + \left\langle g^{(i)}(z) - g^{(i)}(x), y\right\rangle \le \left\langle g^{(i)}(x), y\right\rangle + \left\|g^{(i)}(z) - g^{(i)}(x)\right\|_{(i),*}\cdot\|y\|_{(i)}.
\]
The second equation follows from the relation between $x$, $y$, $z$, and the last inequality from the Cauchy-Schwarz inequality. Finally, the conclusion directly follows from (5).
Proof of Lemma 5
Proof.
Let $A_t = \sum_{s=0}^{t}a_s$ and $B_t = \sum_{s=1}^{t}b_s$. By assumption, the sequence satisfies the recursion $a_t = p\,b_t + (1-p)a_{t-1}$ for $t \ge 1$; unrolling it gives
\[
a_t = p\sum_{s=1}^{t}(1-p)^{t-s}b_s + (1-p)^t a_0.
\]
Summing over $t = 0, \ldots, T$ and exchanging the order of summation,
\[
A_T \le p\sum_{s=1}^{T}b_s\sum_{u=0}^{\infty}(1-p)^u + a_0\sum_{t=0}^{\infty}(1-p)^t = B_T + \frac{a_0}{p},
\]
where we used the nonnegativity of $\{b_s\}$ and the identity $\sum_{s=0}^{\infty}(1-p)^s = \frac{1}{p}$.
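The summation bound can be checked by simulating the recursion $a_t = p\,b_t + (1-p)a_{t-1}$ directly; a quick sketch (with $b_t = 1/(u\sqrt{t})$ as in the stepsize application, names ours):

```python
import numpy as np

# Check Sum_{t=0}^T a_t <= B_T + a_0/p for the recursion a_t = p*b_t + (1-p)*a_{t-1}
for p in (0.1, 0.5, 0.9):
    T, u, a0 = 200, 2.0, 3.0
    b = [1.0 / (u * np.sqrt(t)) for t in range(1, T + 1)]
    a, total = a0, a0
    for t in range(1, T + 1):
        a = p * b[t - 1] + (1 - p) * a   # the recursion a_t = p b_t + (1-p) a_{t-1}
        total += a                        # accumulates A_T = sum_{t=0}^T a_t
    assert total <= sum(b) + a0 / p + 1e-9
```

The bound holds with room to spare for every tested $p$, matching the unrolled geometric-series argument above.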
Proof of Lemma 7

Proof. Let $r_1, r_2, r_3 \sim \mathrm{Bernoulli}(p)$ with $r_1, r_2$ independent, $b > 0$, $0 < p < 1$, and $0 \le x \le a$. Then
\[
E\left[\frac{1}{r_1 x + r_2(a - x) + b}\right] = \frac{(1-p)^2}{b} + \frac{p(1-p)}{a - x + b} + \frac{p(1-p)}{x + b} + \frac{p^2}{a + b} \le \frac{(1-p)^2}{b} + \frac{p(1-p)}{b} + \frac{p(1-p)}{a + b} + \frac{p^2}{a + b} = \frac{1-p}{b} + \frac{p}{a + b} = E\left[\frac{1}{r_3 a + b}\right].
\]
To see the inequality, let $f(x) = \frac{A}{x + c} + \frac{B}{a - x + c}$ with $A, B > 0$; $f(\cdot)$ is convex on $[0, a]$, so $\max_{x \in [0,a]}f(x) = \max\{f(0), f(a)\}$, and here, with $A = B = 1$ and $c = b$, we have $f(0) = f(a) = \frac{1}{b} + \frac{1}{a+b}$.
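Since the coupling inequality of Lemma 7 involves only finitely many outcomes, it can be verified exactly; a short sketch over a grid of parameters (function names are ours):

```python
import numpy as np

def lhs(p, a, b, x):
    """Exact E[1/(r1*x + r2*(a-x) + b)] for r1, r2 iid Bernoulli(p)."""
    return ((1 - p) ** 2 / b
            + p * (1 - p) / (a - x + b)
            + p * (1 - p) / (x + b)
            + p ** 2 / (a + b))

def rhs(p, a, b):
    """Exact E[1/(r3*a + b)] for r3 ~ Bernoulli(p)."""
    return (1 - p) / b + p / (a + b)

for p in np.linspace(0.05, 0.95, 10):
    for a in (0.5, 1.0, 5.0):
        for b in (0.1, 1.0, 3.0):
            for x in np.linspace(0.0, a, 11):
                assert lhs(p, a, b, x) <= rhs(p, a, b) + 1e-12
```

Note that equality holds at the endpoints $x = 0$ and $x = a$, consistent with the convexity argument in the proof.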
Proof of Lemma 10

Proof.
Let $(x^*, y^*)$ be the optimal solution of $\min_{x,y}L(x, y, a, b)$. We consider two subproblems. Firstly, $x^* = \arg\min_x L(x, y^*, a, b)$. Since $\frac{a_i}{x_i} + \frac{x_i b_i}{y_i^*} \ge 2\sqrt{\frac{a_i b_i}{y_i^*}}$, at optimality
\[
\frac{a_i}{x_i^*} = \frac{x_i^* b_i}{y_i^*}. \tag{42}
\]
On the other hand, $y^*$ is the minimizer of the problem $\min_y L(x^*, y, a, b)$. Applying the Cauchy-Schwarz inequality to $L(x^*, y, a, b)$ together with the constraint $\sum_{i=1}^{n}y_i = 1$, we obtain
\[
\sum_{i=1}^{n}\frac{x_i^* b_i}{y_i} = \sum_{i=1}^{n}\frac{x_i^* b_i}{y_i}\cdot\sum_{i=1}^{n}y_i \ge \left(\sum_{i=1}^{n}\sqrt{x_i^* b_i}\right)^2.
\]
At optimality the equality holds, i.e., for some scalar $C > 0$,
\[
\frac{x_i^* b_i}{y_i^*} = C y_i^*, \quad i = 1, 2, \ldots, n. \tag{43}
\]
It remains to solve the equations (42) and (43) with the simplex constraint on $y$.

References

[1] N. Cesa-Bianchi, S. Shalev-Shwartz, and O. Shamir. Efficient learning with partially observed attributes. The Journal of Machine Learning Research (JMLR), 12:2857-2878, 2011.
[2] X. Chen, Q. Lin, and J. Pena. Optimal regularized dual averaging methods for stochastic optimization. In
Advances in Neural Information Processing Systems (NIPS) 25, 2012.
[3] C. D. Dang and G. Lan. Stochastic block mirror descent methods for nonsmooth and stochastic optimization. SIAM Journal on Optimization, 25(2):856-881, 2015.
[4] J. Duchi and Y. Singer. Efficient online and batch learning using forward backward splitting. Journal of Machine Learning Research (JMLR), 10:2899-2934, 2009.
[5] J. C. Duchi, S. Shalev-Shwartz, Y. Singer, and A. Tewari. Composite objective mirror descent. In The 23rd Conference on Learning Theory (COLT), 2010.
[6] S. Ghadimi and G. Lan. Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization I: a generic algorithmic framework. SIAM Journal on Optimization, 22(4):1469-1492, 2012.
[7] L. A. Hageman and D. M. Young. Applied Iterative Methods. Courier Corporation, 2012.
[8] E. Hazan and S. Kale. Beyond the regret minimization barrier: optimal algorithms for stochastic strongly-convex optimization. The Journal of Machine Learning Research (JMLR), 15(1):2489-2512, 2014.
[9] E. Hazan and T. Koren. Linear regression with limited observation. In Proceedings of the 29th International Conference on Machine Learning (ICML), 2012.
[10] D. Kukliansky and O. Shamir. Attribute efficient linear regression with data-dependent sampling. arXiv preprint arXiv:1410.6382, 2014.
[11] S. Lacoste-Julien, M. Jaggi, M. Schmidt, and P. Pletscher. Block-coordinate Frank-Wolfe optimization for structural SVMs. In Proceedings of the 30th International Conference on Machine Learning (ICML), 2013.
[12] S. Lacoste-Julien, M. Schmidt, and F. Bach. A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method. arXiv preprint arXiv:1212.2002, 2012.
[13] G. Lan. An optimal method for stochastic composite optimization. Mathematical Programming, 133(1-2):365-397, 2012.
[14] G. Lan, Z. Lu, and R. D. Monteiro. Primal-dual first-order methods with O(1/epsilon) iteration-complexity for cone programming. Mathematical Programming, 126(1):1-29, 2011.
[15] J. Langford, L. Li, and T. Zhang. Sparse online learning via truncated gradient. Journal of Machine Learning Research (JMLR), 10:719-743, 2009.
[16] Y. T. Lee and A. Sidford. Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems. In Proceedings of the 54th Annual IEEE Symposium on Foundations of Computer Science, pages 147-156, 2013.
[17] Z. Lu and L. Xiao. On the complexity analysis of randomized block-coordinate descent methods. Mathematical Programming, pages 1-28, 2014.
[18] Z. Q. Luo and P. Tseng. On the convergence of a matrix splitting algorithm for the symmetric monotone linear complementarity problem. SIAM Journal on Control and Optimization, 29(5):1037-1060, 1991.
[19] Z. Q. Luo and P. Tseng. On the convergence of the coordinate descent method for convex differentiable minimization. Journal of Optimization Theory and Applications, 72(1):7-35, Jan. 1992.
[20] A. Nedic and S. Lee. On stochastic subgradient mirror-descent algorithm with weighted averaging. SIAM Journal on Optimization, 24(1):84-107, 2014.
[21] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574-1609, Jan. 2009.
[22] Y. Nesterov. Primal-dual subgradient methods for convex problems. Mathematical Programming, 120(1):221-259, 2009.
[23] Y. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341-362, 2012.
[24] Y. Nesterov. Subgradient methods for huge-scale optimization problems. Mathematical Programming, 146(1-2):275-297, 2014.
[25] Z. Qu and P. Richtárik. Coordinate descent with arbitrary sampling, I: Algorithms and complexity. arXiv preprint arXiv:1412.8060, 2014.
[26] A. Rakhlin, O. Shamir, and K. Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 449-456, 2012.
[27] S. Reddi, A. Hefny, C. Downey, A. Dubey, and S. Sra. Large-scale randomized-coordinate descent methods with non-separable linear constraints. arXiv preprint arXiv:1409.2617, 2014.
[28] P. Richtárik and M. Takáč. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Mathematical Programming, 144(1-2):1-38, 2014.
[29] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400-407, 1951.
[30] S. Shalev-Shwartz and A. Tewari. Stochastic methods for l1-regularized loss minimization. The Journal of Machine Learning Research (JMLR), 12:1865-1892, 2011.
[31] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. arXiv preprint arXiv:1209.1873, 2012.
[32] S. Shalev-Shwartz and T. Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. arXiv preprint arXiv:1309.2375, 2013.
[33] Y. Singer and J. C. Duchi. Efficient learning using forward-backward splitting. In Advances in Neural Information Processing Systems (NIPS) 22, pages 495-503, 2009.
[34] P. Tseng. On accelerated proximal gradient methods for convex-concave optimization. Submitted to SIAM Journal on Optimization, 2008.
[35] H. Wang and A. Banerjee. Randomized block coordinate descent for online and stochastic optimization. arXiv preprint arXiv:1407.0107, 2014.
[36] L. Xiao. Dual averaging methods for regularized stochastic learning and online optimization. The Journal of Machine Learning Research (JMLR), 11:2543-2596, 2010.
[37] Y. Xu and W. Yin. Block stochastic gradient iteration for convex and nonconvex optimization. arXiv preprint arXiv:1408.2597, 2014.
[38] P. Zhao and T. Zhang. Stochastic optimization with importance sampling. arXiv preprint arXiv:1401.2753, 2014.
[39] T. Zhao, M. Yu, Y. Wang, R. Arora, and H. Liu. Accelerated mini-batch randomized block coordinate descent method. In