The Annals of Statistics
© Institute of Mathematical Statistics, 2009
LASSO-TYPE RECOVERY OF SPARSE REPRESENTATIONS FOR HIGH-DIMENSIONAL DATA
By Nicolai Meinshausen and Bin Yu
University of Oxford and University of California, Berkeley
The Lasso is an attractive technique for regularization and variable selection for high-dimensional data, where the number of predictor variables p_n is potentially much larger than the number of samples n. However, it was recently discovered that the sparsity pattern of the Lasso estimator can only be asymptotically identical to the true sparsity pattern if the design matrix satisfies the so-called irrepresentable condition. The latter condition can easily be violated in the presence of highly correlated variables. Here we examine the behavior of the Lasso estimators if the irrepresentable condition is relaxed. Even though the Lasso cannot recover the correct sparsity pattern, we show that the estimator is still consistent in the ℓ2-norm sense for fixed designs under conditions on (a) the number s_n of nonzero components of the vector β_n and (b) the minimal singular values of design matrices that are induced by selecting small subsets of variables. Furthermore, a rate of convergence result is obtained on the ℓ2 error with an appropriate choice of the smoothing parameter. The rate is shown to be optimal under the condition of bounded maximal and minimal sparse eigenvalues. Our results imply that, with high probability, all important variables are selected. The set of selected variables is a meaningful reduction of the original set of variables. Finally, our results are illustrated with the detection of closely adjacent frequencies, a problem encountered in astrophysics.
1. Introduction.
The Lasso was introduced by [29] and has since proven to be very popular and well studied [18, 35, 41, 42]. Some reasons for the popularity might be that the entire regularization path of the Lasso can be computed efficiently [11, 25], and that the Lasso is able to handle more predictor
Received December 2006; revised December 2007. Supported by DFG (Deutsche Forschungsgemeinschaft). Supported in part by a Guggenheim fellowship and Grants NSF DMS-06-05165 (06-08), NSF DMS-03-036508 (03-05) and ARO W911NF-05-1-0104 (05-07).
AMS 2000 subject classifications.
Primary 62J07; secondary 62F07.
Key words and phrases.
Shrinkage estimation, lasso, high-dimensional data, sparsity.
This is an electronic reprint of the original article published by the Institute of Mathematical Statistics in The Annals of Statistics, 2009, Vol. 37, No. 1, 246–270. This reprint differs from the original in pagination and typographic detail.
variables than samples and produces sparse models which are easy to interpret. Several extensions and variations have been proposed [5, 21, 36, 40, 42].

1.1. Lasso-type estimation.
The Lasso estimator, as introduced by [29], is given by

    β̂^λ = argmin_β ‖Y − Xβ‖²_{ℓ2} + λ‖β‖_{ℓ1},    (1)

where X = (X_1, . . . , X_p) is the n × p matrix whose columns consist of the n-dimensional fixed predictor variables X_k, k = 1, . . . , p. The vector Y contains the n-dimensional set of real-valued observations of the response variable. The distribution of Lasso-type estimators has been studied in Knight and Fu [18]. Variable selection and prediction properties of the Lasso have been studied extensively for high-dimensional data with p_n ≫ n, a frequently encountered challenge in modern statistical applications. Some studies, for example, Bunea, Tsybakov and Wegkamp [2], Greenshtein and Ritov [13] and van de Geer [34], have focused mainly on the behavior of prediction loss. Much recent work aims at understanding the Lasso estimates from the point of view of model selection, including Candes and Tao [5], Donoho, Elad and Temlyakov [10], Meinshausen and Bühlmann [23], Tropp [30], Wainwright [35], Zhao and Yu [41], Zou [42]. For the Lasso estimates to be close to the model selection estimates when the data dimensions grow, all the aforementioned papers assumed a sparse model and used various conditions that require the irrelevant variables to be not too correlated with the relevant ones. Incoherence is the terminology used in the deterministic setting of Donoho, Elad and Temlyakov [10], and "irrepresentability" is used in the stochastic setting (linear model) of Zhao and Yu [41]. Here we focus exclusively on the properties of the estimate of the coefficient vector under squared error loss and try to understand the behavior of the estimate under a relaxed irrepresentable condition (hence we are in the stochastic or linear model setting).
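To make estimator (1) concrete, the following is a minimal numerical sketch (our own illustration, not the authors' code): cyclic coordinate descent for the objective ‖Y − Xβ‖²_{ℓ2} + λ‖β‖_{ℓ1}, in which each coordinate update is an exact soft-thresholding step. All data and parameter values below are illustrative.

```python
import numpy as np

def lasso_cd(X, Y, lam, n_iter=500):
    """Minimize ||Y - X b||_2^2 + lam * ||b||_1 by cyclic coordinate
    descent (didactic sketch, not an optimized solver)."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)            # ||X_k||^2 for each column
    for _ in range(n_iter):
        for k in range(p):
            # residual with the contribution of coordinate k removed
            r = Y - X @ b + X[:, k] * b[k]
            z = X[:, k] @ r
            # exact minimizer in b_k: soft-thresholding at lam / 2
            b[k] = np.sign(z) * max(abs(z) - lam / 2.0, 0.0) / col_sq[k]
    return b

# Tiny simulated instance of the linear model: sparse beta, Gaussian noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
beta = np.zeros(10)
beta[[0, 1]] = [2.0, -1.5]
Y = X @ beta + 0.1 * rng.normal(size=100)
b_hat = lasso_cd(X, Y, lam=10.0)
```

For a very large penalty every coefficient is shrunk exactly to zero, while for moderate λ the truly nonzero coefficients are recovered up to a small shrinkage bias.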
The aim is to see whether the Lasso still gives meaningful models in this case. More discussion of the connections with other works will be covered in Section 1.5, after notions are introduced to state explicitly what the irrepresentable condition is, so that the discussions are clearer.

1.2. Linear regression model.
We assume a linear model for the observations of the response variable Y = (Y_1, . . . , Y_n)^T,

    Y = Xβ + ε,    (2)

where ε = (ε_1, . . . , ε_n)^T is a vector containing independently and identically distributed noise with ε_i ∼ N(0, σ²) for all i = 1, . . . , n. The assumption of Gaussianity could be relaxed and replaced with exponential tail bounds on the noise if, additionally, predictor variables are assumed to be bounded. When there is a question of nonidentifiability for β, for p_n > n, we define β as

    β = argmin_{β : EY = Xβ} ‖β‖_{ℓ1}.    (3)

The aim is to recover the vector β as well as possible from noisy observations Y. For the equivalence between ℓ0- and ℓ1-sparse solutions see, for example, Donoho [8], Donoho and Elad [9], Fuchs [12], Gribonval and Nielsen [14], Tropp [30, 31].

1.3. Recovery of the sparsity pattern and the irrepresentable condition.
There is empirical evidence that many signals in high-dimensional spaces allow for a sparse representation. As an example, wavelet coefficients of images often exhibit exponential decay, and a relatively small subset of all wavelet coefficients allow a good approximation to the original image [17, 19, 20]. For conceptual simplicity, we assume in our regression setting that the vector β is sparse in the ℓ0-sense and many coefficients of β are identically zero. The corresponding variables have thus no influence on the response variable and could be safely removed. The sparsity pattern of β is understood to be the sign function of its entries, with sign(x) = 0 if x = 0, sign(x) = 1 if x > 0 and sign(x) = −1 if x < 0. The sparsity pattern of a vector might thus look like

    sign(β) = (+1, −1, 0, 0, +1, +1, −1, +1, 0, 0, . . .),

distinguishing whether variables have a positive, negative or no influence at all on the response variable. It is of interest whether the sparsity pattern of the Lasso estimator is a good approximation to the true sparsity pattern. If these sparsity patterns agree asymptotically, the estimator is said to be sign consistent [41].

Definition 1 (Sign consistency). An estimator β̂^λ is sign consistent if and only if

    P{sign(β) = sign(β̂^λ)} → 1    as n → ∞.

It was shown independently in Zhao and Yu [41] and Zou [42] in the linear model case and in [23] in a Gaussian graphical model setting that sign consistency requires a condition on the design matrix. The assumption was termed neighborhood stability in Meinshausen and Bühlmann [23] and irrepresentable condition in Zhao and Yu [41]. Let

    C = n^{−1} X^T X.

The dependence on n is neglected notationally.
Definition 2 (Irrepresentable condition). Let K = {k : β_k ≠ 0} be the set of relevant variables and let N = {1, . . . , p} \ K be the set of noise variables. The sub-matrix C_{HK} is understood as the matrix obtained from C by keeping rows with index in the set H and columns with index in K. The irrepresentable condition is fulfilled if

    ‖C_{NK} C_{KK}^{−1} sign(β_K)‖_{ℓ∞} < 1.

In Zhao and Yu [41], an additional strong irrepresentable condition is defined, which requires that the above elements are not merely smaller than 1 but are uniformly bounded away from 1. Zhao and Yu [41], Zou [42] and Meinshausen and Bühlmann [23] show that the Lasso is sign consistent only if the irrepresentable condition holds.
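The irrepresentable condition of Definition 2 is easy to check numerically for a given covariance matrix C and coefficient vector β. The sketch below (function name and example matrices are our own illustration) evaluates ‖C_{NK} C_{KK}^{−1} sign(β_K)‖_{ℓ∞}, and also constructs a small positive definite design that violates the condition because the noise variable is too correlated with the relevant ones.

```python
import numpy as np

def irrepresentable_ok(C, beta):
    """Evaluate || C_{NK} C_{KK}^{-1} sign(beta_K) ||_inf < 1
    (Definition 2) for covariance C and coefficient vector beta."""
    K = np.flatnonzero(beta != 0)        # relevant variables
    N = np.flatnonzero(beta == 0)        # noise variables
    if N.size == 0:
        return True
    v = C[np.ix_(N, K)] @ np.linalg.solve(C[np.ix_(K, K)], np.sign(beta[K]))
    return bool(np.max(np.abs(v)) < 1.0)

# Equicorrelated design: the condition holds here (value 0.8/1.4 < 1).
p, rho = 5, 0.4
C = rho * np.ones((p, p)) + (1 - rho) * np.eye(p)
beta = np.array([1.0, 1.0, 0.0, 0.0, 0.0])

# A positive definite design violating the condition (value 1.2 > 1):
# the noise variable has correlation 0.6 with both relevant variables.
C_bad = np.array([[1.0, 0.0, 0.6],
                  [0.0, 1.0, 0.6],
                  [0.6, 0.6, 1.0]])
beta_bad = np.array([1.0, 1.0, 0.0])
```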
Proposition 1 (Sign consistency).
Assume that the irrepresentable condition or neighborhood stability is not fulfilled. Then there exists no sequence λ = λ_n such that the estimator β̂^λ is sign consistent.

It is worth noting that a slightly stronger condition has been used in Tropp [30, 31] in a deterministic study of the Lasso's model selection properties, where 1 − ‖C_{NK} C_{KK}^{−1}‖_∞ is called the ERC (exact recovery coefficient). A positive ERC implies the irrepresentable condition for all β values. In practice, it might be difficult to verify whether the condition is fulfilled. This led various authors to propose interesting extensions to the Lasso [22, 39, 42]. Before giving up on the Lasso altogether, however, we want to examine in this paper in what sense the original Lasso procedure still gives sensible results, even if the irrepresentable condition or, equivalently, neighborhood stability is not fulfilled.

1.4. ℓ2-consistency. The aforementioned studies showed that if the irrepresentable condition is not fulfilled, the Lasso cannot select the correct sparsity pattern. In this paper we show that the Lasso selects in these cases the nonzero entries of β and some, not too many, additional zero entries of β, under conditions more relaxed than the irrepresentable condition. The nonzero entries of β are in any case included in the selected model. Moreover, the size of the estimated coefficients allows one to separate the few truly nonzero and the many zero coefficients. However, we note that in extreme cases, when the variables are linearly dependent, even these relaxed conditions will be violated. In these situations, it is not sensible to use the ℓ2-metric on β to assess the Lasso. Our main result shows the ℓ2-consistency of the Lasso, even if the irrepresentable condition is violated. To be precise, an estimator is said to be ℓ2-consistent if

    ‖β̂ − β‖_{ℓ2} → 0    as n → ∞.
(4)

Rates of convergence results will also be derived and, under the condition of bounded maximal and minimal sparse eigenvalues, the rate is seen to be optimal. An ℓ2-consistent estimator is attractive, as important variables are chosen with high probability and falsely chosen variables have very small coefficients. The bottom line will be that even if the sparsity pattern of β cannot be recovered by the Lasso, we can still obtain a good approximation.

1.5. Related work.
Prediction loss for high-dimensional regression under an ℓ1-penalty has been studied for the quadratic loss function in Greenshtein and Ritov [13] and for general Lipschitz loss functions in van de Geer [34]. With a focus on aggregation, similarly interesting results are derived in Bunea, Tsybakov and Wegkamp [3]. Both van de Geer [34] and Bunea, Tsybakov and Wegkamp [3] obtain impressive results for random design and sharp bounds for the ℓ1-distance between the vector β and its Lasso estimate β̂^λ. In the current manuscript, we focus on the ℓ2-estimation loss on β. As a consequence, we can derive consistency in the sense of (4) under the condition that s_n log p_n / n → 0 as n → ∞ (ignoring log n factors). An implication of our work is thus that the sparsity s_n is allowed to grow almost as fast as the sample size if one is interested in obtaining convergence in the ℓ2-norm. In contrast, the results in [3, 34] require s_n = o(√n) to obtain convergence in the ℓ1-norm. The recent independent work of Zhang and Huang [38] shows that the subspace spanned by the variables selected by the Lasso is close to an optimal subspace. The results also imply that important variables are chosen with high probability, and provide a tight bound on the ℓ2-distance between the vector β and its Lasso estimator. A "partial Riesz condition" is employed in [38], which is rather similar to our notion of incoherent design, defined further below in (6). We would like to compare the results of this manuscript briefly with results in Donoho [8] and Candes and Tao [5], as both of these papers derive bounds on the ℓ2-norm distance between β and β̂ for ℓ1-norm constrained estimators. In Donoho [8] the design is random and the random predictor variables are assumed to be independent. The results are thus not directly comparable to the results derived here for general fixed designs.
Nevertheless, results in Meinshausen and Bühlmann [23] suggest that the irrepresentable condition is fulfilled with high probability for independently normally distributed predictor variables. The results in Donoho [8] can thus not directly be used to study the behavior of the Lasso under a violated irrepresentable condition, which is our goal in the current manuscript. Candes and Tao [5] study the properties of the so-called "Dantzig selector," which is very similar to the Lasso, and derive bounds on the ℓ2-distance between the vector β and the proposed estimator β̂. The results are derived under the condition of a
Uniform Uncertainty Principle (UUP), which was introduced in Candes and Tao [4]. The UUP is related to our assumptions on sparse eigenvalues in this manuscript. A comparison between these two assumptions is given after the formulation (10) of the UUP. The bounds on the ℓ2-distance between the true coefficient vector β and its Lasso estimator (obtained in the current manuscript) or, respectively, the "Dantzig selector" (obtained in [5]) are quite similar in nature. This comes maybe as no surprise, since the formulation of the "Dantzig selector" is quite similar to that of the Lasso [24]. However, it does not seem straightforward to translate the bounds obtained for the "Dantzig selector" into bounds for the Lasso estimator and vice versa. We also employ somewhat different conditions, because there could be situations of design matrices arising in statistical practice where the dependence between the predictors is stronger than what is allowed by the UUP, but which would satisfy our condition of "incoherent design," to be defined in the next section. It would certainly be of interest to study the connection between the Lasso and the "Dantzig selector" further, as the solutions share many similarities. Final note: a recent follow-up work [1] provides similar bounds as in this paper for both the Lasso and Dantzig selectors.
2. Main assumptions and results.
First, we introduce the notion of sparse eigenvalues, which will play a crucial role in providing bounds for the convergence rates of the Lasso estimator. Thereafter, the assumptions are explained in detail and the main results are given.

2.1. Sparse eigenvalues.
The notion of sparse eigenvalues is not new and has been used before [8]; we merely intend to fix notation. The m-sparse minimal eigenvalue of a matrix is the minimal eigenvalue of any m × m-dimensional submatrix.

Definition 3. The m-sparse minimal eigenvalue and m-sparse maximal eigenvalue of C are defined as

    φ_min(m) = min_{β : ‖β‖_{ℓ0} ≤ ⌈m⌉} (β^T C β) / (β^T β)    and    φ_max(m) = max_{β : ‖β‖_{ℓ0} ≤ ⌈m⌉} (β^T C β) / (β^T β).    (5)

The minimal eigenvalue of the unrestricted matrix C is equivalent to φ_min(p). If the number of predictor variables p_n is larger than the sample size, p_n > n, this eigenvalue is zero, as φ_min(m) = 0 for any m > n. A crucial factor contributing to the convergence of the Lasso estimator is the behavior of the smallest m-sparse eigenvalue, where the number m of variables over which the minimal eigenvalue is computed is of roughly the same order as the sparsity s_n, or the number of nonzero components, of the true underlying vector β.

2.2. Sparsity multipliers and incoherent designs.
As apparent from the interesting discussion in Candes and Tao [5], one cannot allow arbitrarily large "coherence" between variables if one still hopes to recover the correct sparsity pattern. Assume that there are two vectors β and β̃ so that the signal can be represented by either vector, Xβ = Xβ̃, and both vectors are equally sparse, say ‖β‖_{ℓ0} = ‖β̃‖_{ℓ0} = s_n, and are not identical. We have no hope of distinguishing between β and β̃ in such a case: if indeed Xβ = Xβ̃ and β and β̃ are not identical, it follows that the minimal sparse eigenvalue φ_min(2 s_n) = 0 vanishes, as X(β − β̃) = 0 and ‖β − β̃‖_{ℓ0} ≤ 2 s_n. If the minimal sparse eigenvalue of a selection of 2 s_n variables is zero, we have no hope of recovering the true sparse underlying vector from noisy observations. To define our assumption about sufficient conditions for recovery, we need the definition of incoherent design. As motivated by the example above, we would need a lower bound on the minimal eigenvalue of at least 2 s_n variables, where s_n is again the number of nonzero coefficients. We now introduce the concepts of sparsity multiplier and incoherent design to make this requirement a bit more general, as minimal eigenvalues are allowed to converge to zero slowly. A design is called incoherent in the following if minimal sparse eigenvalues are not decaying too fast, in a sense made precise in the definition below. For notational simplicity, let in the following φ_max = φ_max(s_n + min{n, p_n}) be the maximal eigenvalue of a selection of at most s_n + min{n, p_n} variables. At the cost of more involved proofs, one could also work with the maximal eigenvalue of a smaller selection of variables instead. Even though we do not assume an upper bound for the quantity φ_max, it would not be very restrictive to do so in the p_n ≫ n setting. To be specific, assume multivariate normal predictors.
If the maximal eigenvalue of the population covariance matrix, which is induced by selecting 2n variables, is bounded from above by an arbitrarily large constant, it follows by Theorem 2.13 in Davidson and Szarek [7] or Lemma A3.1 in Paul [26] that the condition number of the induced sample covariance matrix observes a Gaussian tail bound. Using an entropy bound for the possible number of subsets when choosing n out of p_n variables, the maximal eigenvalue of a selection of 2 min{n, p_n} variables is thus bounded from above by some constant, with probability converging to 1 for n → ∞, under the condition that log p_n = o(n^κ) for some κ <
1, and the assumption of a bounded φ_max, even though not needed, is thus maybe not overly restrictive. As the maximal sparse eigenvalue is typically growing only very slowly as a function of the number of variables, the focus will be on the decay of the smallest sparse eigenvalue, which is a much more pressing problem for high-dimensional data.
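For small p, the sparse eigenvalues of Definition 3 can be computed by brute force over all m × m principal submatrices (by eigenvalue interlacing, the extrema over supports of size at most m are attained at size exactly m). The following sketch is purely illustrative; the function name and example matrix are our own.

```python
import numpy as np
from itertools import combinations

def sparse_eigs(C, m):
    """phi_min(m) and phi_max(m) of Definition 3 by brute force over
    all m x m principal submatrices of C.  Exponential in p, so only
    meant for small illustrative examples."""
    p = C.shape[0]
    lo, hi = np.inf, -np.inf
    for S in combinations(range(p), m):
        w = np.linalg.eigvalsh(C[np.ix_(S, S)])   # ascending eigenvalues
        lo, hi = min(lo, w[0]), max(hi, w[-1])
    return lo, hi

# Equicorrelated 4 x 4 example with rho = 0.5: any 2 x 2 submatrix has
# eigenvalues 1 - rho and 1 + rho, so phi_min(2) = 0.5, phi_max(2) = 1.5.
C = 0.5 * np.ones((4, 4)) + 0.5 * np.eye(4)
phi2 = sparse_eigs(C, 2)
```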
Definition 4 (Incoherent designs). A design is called incoherent if there exists a positive sequence e_n, the so-called sparsity multiplier sequence, such that

    liminf_{n→∞} e_n φ_min(e_n s_n) / φ_max(s_n + min{n, p_n}) ≥ 18.    (6)

Our main result will require incoherent design. The constant 18 could quite possibly be improved upon. We will assume in the following that the multiplier sequence is the smallest one satisfying (6). Below, we give some simple examples under which the condition of incoherent design is fulfilled.

2.2.1. Example: block designs.
The first example is maybe not overly realistic but gives, hopefully, some intuition for the condition. A "block design" is understood to have the structure

    n^{−1} X^T X = diag{Σ(1), Σ(2), . . . , Σ(d)},    (7)

where the matrices Σ(1), . . . , Σ(d) are of dimension b(1), . . . , b(d), respectively. The minimal and maximal eigenvalues over all d sub-matrices are denoted by

    φ_min^block := min_k min_{u ∈ R^{b(k)}} (u^T Σ(k) u) / (u^T u),    φ_max^block := max_k max_{u ∈ R^{b(k)}} (u^T Σ(k) u) / (u^T u).

In our setup, all constants are allowed to depend on the sample size n. The question arises if simple bounds can be found under which the design is incoherent in the sense of (6). The blocked sparse eigenvalues are trivial lower and upper bounds, respectively, for φ_min(u) and φ_max(u) for all values of u. Choosing e_n such that e_n s_n = o(n), the condition (6) of incoherent design then requires e_n φ_min(e_n s_n) ≫ φ_max(s_n + min{n, p_n}). Using φ_min(e_n s_n) ≥ φ_min^block and φ_max ≤ φ_max^block, it is sufficient that e_n can be chosen with e_n ≥ 18 φ_max^block / φ_min^block. Together with the requirement e_n s_n = o(n), the condition of incoherent design is fulfilled if, for n → ∞,

    s_n = o(n / c_n),    (8)

where the condition number c_n is given by

    c_n := φ_max^block / φ_min^block.    (9)

Under increasingly stronger assumptions on the sparsity, the condition number c_n can thus grow almost as fast as √n, while still allowing for incoherent design.

2.2.2. More examples of incoherent designs.
Consider two more examples of incoherent design:

• The condition (6) of incoherent design is fulfilled if the minimal eigenvalue of a selection of s_n log n variables is vanishing slowly enough for n → ∞ so that

    φ_min{s_n log n} ≫ (log n)^{−1} φ_max(s_n + min{p_n, n}).

• The condition is also fulfilled if the minimal eigenvalue of a selection of n^{α/2} s_n variables is vanishing slowly enough for n → ∞ so that

    φ_min(n^{α/2} s_n) ≫ n^{−α/2} φ_max.

These results can be derived from (6) by choosing the sparsity multiplier sequences e_n = log n and e_n = n^{α/2}, respectively. Some more scenarios of incoherent design can be seen to satisfy (6).

2.2.3. Comparison with the uniform uncertainty principle.
Candes and Tao [5] use a Uniform Uncertainty Principle (UUP) to discuss the convergence of the so-called Dantzig selector. The UUP can only be fulfilled if the minimal eigenvalue of a selection of s_n variables is bounded from below by a constant, where s_n is again the number of nonzero coefficients of β. In the original version, a necessary condition for the UUP is

    φ_min(s_n) + φ_min(2 s_n) + φ_min(3 s_n) > 2.    (10)

At the same time, a bound on the maximal eigenvalue is a condition for the UUP in [5],

    φ_max(s_n) + φ_max(2 s_n) + φ_max(3 s_n) < 4.    (11)

This UUP condition is different from our incoherent design condition. In some sense, the UUP is weaker than incoherent design, as the minimal eigenvalues are calculated over only 3 s_n variables. In another sense, the UUP is quite strong as it demands, in form (10) and assuming s_n ≥ 2, that all pairwise correlations between variables be less than 1/3! The condition of incoherent design is weaker, as the eigenvalue can be bounded from below by an arbitrarily small constant (as opposed to the large value implied by the UUP). Sparse eigenvalues can even converge slowly to zero in our setting. Taking the example of block designs from further above, incoherent design allowed the condition number (9) to grow almost as fast as √n. In contrast, if the sparsity s_n is larger than the maximal block size, the UUP requires that the condition number c_n be bounded from above by a positive constant. Using its form (10) and the corresponding bound (11) for the maximal eigenvalue, it implies specifically that c_n ≤ 2, which is clearly stricter than the condition (8).
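Returning to the block designs of Section 2.2.1, the quantities φ_min^block, φ_max^block and the condition number c_n of (9) can be illustrated numerically: for a block-diagonal C, the spectrum is the union of the block spectra, so the blocked eigenvalues bound all sparse eigenvalues. A small sketch (the particular blocks are our own example):

```python
import numpy as np

# Two illustrative blocks Sigma(1), Sigma(2) of a block design (7).
S1 = np.array([[1.0, 0.8],
               [0.8, 1.0]])          # eigenvalues 0.2 and 1.8
S2 = np.array([[1.0, -0.3],
               [-0.3, 1.0]])         # eigenvalues 0.7 and 1.3
C = np.zeros((4, 4))
C[:2, :2], C[2:, 2:] = S1, S2

# Blocked extreme eigenvalues and the condition number c_n of (9).
phi_block_min = min(np.linalg.eigvalsh(S1)[0], np.linalg.eigvalsh(S2)[0])
phi_block_max = max(np.linalg.eigvalsh(S1)[-1], np.linalg.eigvalsh(S2)[-1])
c_n = phi_block_max / phi_block_min

# The full spectrum of the block-diagonal C is the union of the block
# spectra, so its extremes coincide with the blocked values.
w = np.linalg.eigvalsh(C)
```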
2.2.4. Incoherent designs and the irrepresentable condition.
One might ask in what sense the notion of incoherent design is more general than the irrepresentable condition. At first, it might seem like we are simply replacing the strict irrepresentable condition by a similarly strong condition on the design matrix. Consider first the classical case of a fixed number p_n of variables. If the covariance matrix C = C_n is converging to a positive definite matrix for large sample sizes, the design is automatically incoherent. On the other hand, it is easy to violate the irrepresentable condition in this case; for examples, see Zou [42]. The notion of incoherent design is only a real restriction in the high-dimensional case with p_n > n. Even then, it is clear that the notion of incoherence is a relaxation of the irrepresentable condition, as the irrepresentable condition can easily be violated even though all sparse eigenvalues are bounded well away from zero.

2.3. Main result for high-dimensional data (p_n > n). We first state our main result.
Theorem 1 (Convergence in ℓ2-norm). Assume the incoherent design condition (6) with a sparsity multiplier sequence e_n. If λ ∝ σ e_n √(n log p_n), there exists a constant M > 0 such that, with probability converging to 1 for n → ∞,

    ‖β − β̂^{λ_n}‖²_{ℓ2} ≤ M σ² (s_n log p_n / n) · e_n² / φ²_min(e_n s_n).    (12)

A proof is given in Section 3. It can be seen from the proofs that nonasymptotic bounds could be obtained with essentially the same results. If we choose the smallest possible multiplier sequence e_n, one obtains not only the required lower bound e_n ≥ 18 φ_max / φ_min(e_n s_n) from (6) but also an upper bound e_n ≤ K φ_max / φ_min(e_n s_n). Plugging this into (12) yields the probabilistic bound, for some positive M,

    ‖β − β̂^{λ_n}‖²_{ℓ2} ≤ M σ² (s_n log p_n / n) · φ²_max / φ⁴_min(e_n s_n).

It is now easy to see that the convergence rate is essentially optimal as long as the relevant eigenvalues are bounded.
Corollary 1.
Assume that there exist constants 0 < κ_min ≤ κ_max < ∞ such that

    liminf_{n→∞} φ_min(s_n log n) ≥ κ_min    and    limsup_{n→∞} φ_max(s_n + min{n, p_n}) ≤ κ_max.    (13)

Then, for λ ∝ σ √(n log p_n), there exists a constant M > 0 such that, with probability converging to 1 for n → ∞,

    ‖β − β̂^{λ_n}‖²_{ℓ2} ≤ M σ² s_n (log p_n) / n.

The proof of this follows from Theorem 1 by choosing a constant sparsity multiplier sequence, for example, 20 κ_max / κ_min. The rate of convergence achieved is essentially optimal. Ignoring the log p_n factor, it corresponds to the rate that could be achieved with maximum likelihood estimation if the true underlying sparse model were known. It is perhaps also worthwhile to make a remark about the penalty parameter sequence λ and its, maybe unusual, reliance on the sparsity multiplier sequence e_n. If both the relevant minimal and maximal sparse eigenvalues in (6) are bounded from below and above, as in Corollary 1 above, the sequence e_n is simply a constant. Any deviation from the usually optimal sequence λ ∝ σ √(n log p_n) occurs thus only if the minimal sparse eigenvalues are decaying to zero for n → ∞, in which case the penalty parameter is increased slightly. The value of λ can be computed, in theory, without knowledge of the true β. Doing so in practice would not be a trivial task, however, as the sparse eigenvalues would have to be known. Moreover, the noise level σ would have to be estimated from data, a difficult task for high-dimensional data with p_n > n. From a practical perspective, we mostly see the results as implying that the ℓ2-distance can be small for some value of the penalty parameter λ along the solution path.

2.4. Number of selected variables.
As a result of separate interest, it is perhaps noteworthy that bounds on the number of selected variables are derived in the proof of Theorem 1. For the setting of Corollary 1 above, where a constant sparsity multiplier can be chosen, Lemma 5 implies that, with high probability, at most O(s_n) variables are selected by the Lasso estimator. The selected subset is hence of the same order of magnitude as the set of "truly nonzero" coefficients. In general, with high probability, no more than e_n s_n variables are selected.

2.5. Sign consistency with two-step procedures.
It follows from our results above that the Lasso estimator can be modified to be sign consistent in a two-step procedure, even if the irrepresentable condition is relaxed. All one needs is the assumption that the nonzero coefficients of β are "sufficiently" large. One possibility is hard-thresholding of the obtained coefficients, neglecting variables with very small coefficients. This effect has already been observed empirically in [33]. Other possibilities include soft-thresholding and relaxation methods such as the Gauss–Dantzig selector [5], the relaxed Lasso [22] with an additional thresholding step, or the adaptive Lasso of Zou [42].
Definition 5 (Hard-thresholded Lasso estimator). Let, for each x ∈ R^p, the quantity 1{|x| ≥ c} be a p_n-dimensional vector which is, componentwise, equal to 1 if |x_k| ≥ c and 0 otherwise. For a given sequence t_n, the hard-thresholded Lasso estimator β̂_{ht,λ} is defined as

    β̂_{ht,λ} = β̂^λ 1{|β̂^λ| ≥ σ t_n √(log p_n / n)}.

The sequence t_n can be chosen freely. We start with a corollary that follows directly from Theorem 1, stating that the hard-thresholded Lasso estimator (unlike the un-thresholded estimator) is sign consistent under regularity assumptions that are weaker than the irrepresentable condition needed for sign consistency of the ordinary Lasso estimator.

Corollary 2 (Sign consistency by hard thresholding).
Assume the incoherent design assumption (6) holds and the sparsity of β fulfills s_n = o(t_n² e_n^{−2}) for n → ∞. Assume furthermore

    min_{k : β_k ≠ 0} |β_k| ≫ σ t_n √(log p_n / n),    n → ∞.

Under the choice λ ∝ σ e_n √(n log p_n), the hard-thresholded Lasso estimator of Definition 5 is then sign consistent and

    P{sign(β̂_{ht,λ}) = sign(β)} → 1    as n → ∞.

The proof follows from the results of Theorem 1. The bound (12) on the ℓ2-distance, derived from Theorem 1, trivially gives the identical bound on the squared ℓ∞-distance between β̂^λ and β. The result follows by observing that 1/φ_max = O(1) and the fact that the ℓ∞ error is of smaller order than the lower bound on the size of the nonzero β's, due to the assumptions of incoherent design and s_n = o(t_n² e_n^{−2}). When choosing a suitable value of the cut-off parameter t_n, one is faced with a trade-off. Choosing larger values of the cut-off t_n places a stricter condition on the minimal nonzero value of β, while smaller values of t_n relax this assumption, yet require the vector β to be sparser. The result mainly implies that sign consistency can be achieved with the hard-thresholded Lasso estimator under much weaker consistency requirements than with the ordinary Lasso estimator. As discussed previously, the ordinary Lasso estimator is only sign consistent if the irrepresentable condition or, equivalently, neighborhood stability is fulfilled [23, 41, 42]. This is a considerably stronger assumption than the incoherence assumption above. In either case, a similar assumption on the rate of decay of the minimal nonzero components is needed. In conclusion, even though one cannot achieve sign consistency in general with just a single Lasso estimation, it can be achieved in a two-stage procedure.
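The second stage of the procedure, the hard thresholding of Definition 5, can be sketched as follows. The input coefficients below are made-up placeholders standing in for a first-stage Lasso fit: the two large coefficients play the role of the truly nonzero ones, and the tiny ones are spurious selections that the threshold σ t_n √(log p_n / n) removes.

```python
import numpy as np

def hard_threshold(beta_hat, sigma, t_n, n, p_n):
    """Hard-thresholded Lasso estimator of Definition 5: keep a
    coefficient only if it exceeds sigma * t_n * sqrt(log(p_n) / n)."""
    cut = sigma * t_n * np.sqrt(np.log(p_n) / n)
    return beta_hat * (np.abs(beta_hat) >= cut)

# Made-up first-stage output: large "true" coefficients plus small
# spurious ones; thresholding recovers the correct sign pattern.
beta_hat = np.array([1.9, -1.4, 0.03, 0.0, -0.02])
beta_ht = hard_threshold(beta_hat, sigma=1.0, t_n=1.0, n=100, p_n=5)
```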
3. Proof of Theorem 1.
Let β^λ be the estimator in the absence of noise, that is, β^λ = β̂^{λ,0}, where β̂^{λ,ξ} is defined as in (15). The ℓ2-distance can then be bounded by

    ‖β̂^λ − β‖²_{ℓ2} ≤ 2‖β̂^λ − β^λ‖²_{ℓ2} + 2‖β^λ − β‖²_{ℓ2}.

The first term on the right-hand side represents the variance of the estimation, while the second term represents the bias. The bias contribution follows directly from Lemma 2 below. The bound on the variance term follows from Lemma 6 below.
De-noised response.
Before starting, it is useful to define a de-noised response. Define, for 0 ≤ ξ ≤ 1,

    Y(ξ) = Xβ + ξε.    (14)

We can regulate the amount of noise with the parameter ξ. For ξ = 0, only the signal is retained. The original observations with the full amount of noise are recovered for ξ = 1. Now consider, for 0 ≤ ξ ≤ 1, the estimator β̂^{λ,ξ},

    β̂^{λ,ξ} = argmin_β ‖Y(ξ) − Xβ‖²_{ℓ2} + λ‖β‖_{ℓ1}.    (15)

The ordinary Lasso estimate is recovered under the full amount of noise, so that β̂^{λ,1} = β̂^λ. Using the notation from the previous results, we can write for the estimate in the absence of noise β̂^{λ,0} = β^λ. The definition of the de-noised version of the Lasso estimator will be helpful for the proof, as it allows us to characterize the variance of the estimator.

3.1. Part I of proof: bias. Let K be the set of nonzero elements of β, that is, K = {k : β_k ≠ 0}. The cardinality of K is again denoted by s = s_n. For the following, let β^λ be the estimator β̂^{λ,0} in the absence of noise, as defined in (15). The solution β^λ can, for each value of λ, be written as β^λ = β + γ^λ, where

    γ^λ = argmin_{ζ ∈ R^p} f(ζ).    (16)

The function f(ζ) is given by

    f(ζ) = n ζ^T C ζ + λ Σ_{k ∈ K^c} |ζ_k| + λ Σ_{k ∈ K} (|β_k + ζ_k| − |β_k|).    (17)

The vector γ^λ is the bias of the Lasso estimator. We first derive a bound on the ℓ2-norm of γ^λ.

Lemma 1.
Assume incoherent design as in (6) with a sparsity multiplier sequence e_n. The ℓ2-norm of γ^λ, as defined in (16), is then bounded, for sufficiently large values of n, by

‖γ^λ‖_ℓ2 ≤ 17.5 (λ/n) √s_n / φ_min(e_n s_n). (18)
Proof.
We write in the following γ instead of γ^λ, for notational simplicity. Let γ(K) be the vector with coefficients γ_k(K) = γ_k 1{k ∈ K}; that is, γ(K) is the bias of the truly nonzero coefficients. Analogously, let γ(K^c) be the bias of the truly zero coefficients, with γ_k(K^c) = γ_k 1{k ∉ K}. Clearly, γ = γ(K) + γ(K^c). The value of the function f(ζ), as defined in (17), is 0 if we set ζ = 0. For the minimizer γ^λ it follows hence that f(γ^λ) ≤ 0. As n ζ^T C ζ ≥ 0 for every ζ, this implies

‖γ(K^c)‖_ℓ1 = Σ_{k ∈ K^c} |γ_k| ≤ | Σ_{k ∈ K} (|β_k + γ_k| − |β_k|) | ≤ ‖γ(K)‖_ℓ1. (19)

As γ(K) has at most s_n nonzero entries, it follows that ‖γ(K)‖_ℓ1 ≤ √s_n ‖γ(K)‖_ℓ2 ≤ √s_n ‖γ‖_ℓ2 and hence, using (19),

‖γ‖_ℓ1 ≤ 2 √s_n ‖γ‖_ℓ2. (20)

This result will be used further below. We use now again that f(γ^λ) ≤ 0 [as ζ = 0 yields the upper bound f(ζ) = 0]. Using the previous result that ‖γ(K)‖_ℓ1 ≤ √s_n ‖γ‖_ℓ2, and ignoring the nonnegative term λ‖γ(K^c)‖_ℓ1, it follows that

n γ^T C γ ≤ λ √s_n ‖γ‖_ℓ2. (21)

Consider now the term γ^T C γ. Bounding this term from below and plugging the result into (21) will yield the desired upper bound on the ℓ2-norm of γ. Let |γ_(1)| ≥ |γ_(2)| ≥ ··· ≥ |γ_(p)| be the ordered entries of γ. Let u_n, for n ∈ N, be a sequence of positive integers, to be chosen later, and define the set of the "u_n largest coefficients" as U = {k : |γ_k| ≥ |γ_(u_n)|}. Define, analogously to above, the vectors γ(U) and γ(U^c) by γ_k(U) = γ_k 1{k ∈ U} and γ_k(U^c) = γ_k 1{k ∉ U}. The quantity γ^T C γ can be written as γ^T C γ = ‖a + b‖²_ℓ2, where a := n^{-1/2} X γ(U) and b := n^{-1/2} X γ(U^c). Then

γ^T C γ = ‖a + b‖²_ℓ2 ≥ (‖a‖_ℓ2 − ‖b‖_ℓ2)². (22)

Before proceeding, we need to bound the norm ‖γ(U^c)‖_ℓ2 as a function of u_n. Assume for the moment that the ℓ1-norm ‖γ‖_ℓ1 is identical to some ℓ > 0. Then it holds for every k = 1, …, p that |γ_(k)| ≤ ℓ/k. Hence

‖γ(U^c)‖²_ℓ2 ≤ ‖γ‖²_ℓ1 Σ_{k = u_n + 1}^p k^{-2} ≤ (4 s_n ‖γ‖²_ℓ2) / u_n, (23)

having used the result (20) from above that ‖γ‖_ℓ1 ≤ 2 √s_n ‖γ‖_ℓ2. As γ(U) has by definition only u_n nonzero coefficients,

‖a‖²_ℓ2 = n^{-1} ‖X γ(U)‖²_ℓ2 ≥ φ_min(u_n) ‖γ(U)‖²_ℓ2 ≥ φ_min(u_n) (1 − 4 s_n / u_n) ‖γ‖²_ℓ2, (24)

having used (23) and ‖γ(U)‖²_ℓ2 = ‖γ‖²_ℓ2 − ‖γ(U^c)‖²_ℓ2. As γ(U^c) has at most min{n, p} nonzero coefficients and using again (23),

‖b‖²_ℓ2 = n^{-1} ‖X γ(U^c)‖²_ℓ2 ≤ φ_max ‖γ(U^c)‖²_ℓ2 ≤ φ_max (4 s_n / u_n) ‖γ‖²_ℓ2. (25)

Using (24) and (25) in (22), together with φ_max ≥ φ_min(u_n),

γ^T C γ ≥ φ_min(u_n) ‖γ‖²_ℓ2 (1 − 3 √( s_n φ_max / (u_n φ_min(u_n)) ))². (26)

Choosing for u_n the sparsity multiplier sequence, as defined in (6), times the sparsity s_n, so that u_n = e_n s_n, it holds that

s_n φ_max / (e_n s_n φ_min(e_n s_n)) ≤ φ_max / (e_n φ_min(e_n² s_n)) < 1/18,

since φ_min(e_n² s_n) ≤ φ_min(e_n s_n). Thus the right-hand side in (26) is bounded from below by (1/17.5) φ_min(e_n s_n) ‖γ‖²_ℓ2, since (1 − 3/√18)² ≥ 1/17.5. Using the last result together with (21), which says that γ^T C γ ≤ n^{-1} λ √s_n ‖γ‖_ℓ2, it follows that for large n,

‖γ‖_ℓ2 ≤ 17.5 (λ/n) √s_n / φ_min(e_n s_n),

which completes the proof. □

Lemma 2.
Under the assumptions of Theorem 1, the bias ‖γ^λ‖_ℓ2 is bounded by

‖γ^λ‖²_ℓ2 ≤ (17.5)² σ² (s_n log p_n / n) (e_n² / φ²_min(e_n² s_n)).

Proof.
This is an immediate consequence of Lemma 1. Plugging the penalty sequence λ ∝ σ e_n √(n log p_n) into (18), the result follows by the inequality φ_min(e_n² s_n) ≤ φ_min(e_n s_n), having used that, by its definition in (6), e_n is necessarily larger than 1. □

3.2. Part II of proof: variance. The proof of the variance part needs two steps. First, a bound on the variance is derived which is a function of the number of active variables. In a second step, the number of active variables is bounded, taking into account also the bound on the bias derived above.
Variance of restricted OLS.
Before considering the Lasso estimator, a trivial bound is shown for the variance of a restricted OLS estimation. Let θ̂^M ∈ R^p be, for every subset M ⊆ {1, …, p} with |M| ≤ n, the restricted OLS-estimator of the noise vector ε,

θ̂^M = (X_M^T X_M)^{-1} X_M^T ε. (27)
First, we bound the ℓ2-norm of this estimator. The result is useful for bounding the variance of the final estimator, based on the derived bound on the number of active variables.

Lemma 3.
Let m_n be a sequence with m_n = o(n) and m_n → ∞ for n → ∞. If p_n → ∞, it holds with probability converging to 1 for n → ∞ that

max_{M : |M| ≤ m_n} ‖θ̂^M‖²_ℓ2 ≤ 2σ² (log p_n / n) (m_n / φ²_min(m_n)).

The ℓ2-norm of the restricted estimator θ̂^M is thus bounded uniformly over all sets M with |M| ≤ m_n.
It follows directly from the definition of θ̂^M that, for every M with |M| ≤ m_n,

‖θ̂^M‖²_ℓ2 ≤ (1 / (n² φ²_min(m_n))) ‖X_M^T ε‖²_ℓ2. (28)

It remains to be shown that, for n → ∞, with probability converging to 1,

max_{M : |M| ≤ m_n} ‖X_M^T ε‖²_ℓ2 ≤ 2σ² m_n n log p_n.

As ε_i ∼ N(0, σ²) for all i = 1, …, n, it holds with probability converging to 1 for n → ∞, by Bonferroni's inequality, that max_{k ≤ p_n} |X_k^T ε| is bounded from above by σ √(2 n log p_n). Hence, with probability converging to 1 for n → ∞,

max_{M : |M| ≤ m_n} ‖X_M^T ε‖²_ℓ2 ≤ m_n max_{k ≤ p_n} |X_k^T ε|² ≤ 2σ² n m_n log p_n, (29)

which completes the proof. □

Variance of estimate is bounded by restricted OLS variance.
We show that the variance of the Lasso estimator can be bounded by the variance of restricted OLS estimators, using bounds on the number of active variables.
Lemma 4.
If, for a fixed value of λ, the number of active variables of the de-noised estimators β̂^{λ,ξ} is, for every 0 ≤ ξ ≤ 1, bounded by m, then

sup_{0 ≤ ξ ≤ 1} ‖β̂^{λ,0} − β̂^{λ,ξ}‖_ℓ2 ≤ max_{M : |M| ≤ m} ‖θ̂^M‖_ℓ2. (30)
The key to the proof is that the solution path of β̂^{λ,ξ}, as ξ increases from 0 to 1, can be expressed piecewise in terms of restricted OLS solutions. It will be obvious from the proof that it is sufficient to show the claim for ξ = 1 in the term on the right-hand side of (30).

The set M(ξ) of active variables is the set with maximal absolute gradient,

M(ξ) = {k : |G_k^{λ,ξ}| = λ}.

Note that the estimator β̂^{λ,ξ} and also the gradient G_k^{λ,ξ} are continuous functions in both λ and ξ [11]. Let 0 = ξ_1 < ξ_2 < ··· < ξ_{J+1} = 1 be the points of discontinuity of M(ξ). At these locations, variables either join the active set or are dropped from the active set.

Fix some j with 1 ≤ j ≤ J. Denote by M_j the set of active variables M(ξ) for ξ ∈ (ξ_j, ξ_{j+1}). We show in the following that the solution β̂^{λ,ξ} is, for all ξ in the interval (ξ_j, ξ_{j+1}), given by

∀ ξ ∈ (ξ_j, ξ_{j+1}): β̂^{λ,ξ} = β̂^{λ,ξ_j} + (ξ − ξ_j) θ̂^{M_j}, (31)

where θ̂^{M_j} is the restricted OLS estimator of noise, as defined in (27). The local effect of increased noise (a larger value of ξ) on the estimator is thus to shift the coefficients of the active set of variables along the least squares direction.

Once (31) is shown, the claim follows by piecing together the piecewise linear parts and using continuity of the solution as a function of ξ to obtain

‖β̂^{λ,0} − β̂^{λ,1}‖_ℓ2 ≤ Σ_{j=1}^J ‖β̂^{λ,ξ_j} − β̂^{λ,ξ_{j+1}}‖_ℓ2 ≤ max_{M : |M| ≤ m} ‖θ̂^M‖_ℓ2 Σ_{j=1}^J (ξ_{j+1} − ξ_j) = max_{M : |M| ≤ m} ‖θ̂^M‖_ℓ2.

It thus remains to show (31).
A necessary and sufficient condition for β̂^{λ,ξ} with ξ ∈ (ξ_j, ξ_{j+1}) to be a valid solution is that, for all k ∈ M_j with nonzero coefficient β̂_k^{λ,ξ} ≠ 0, the gradient is equal to λ times the negative sign,

G_k^{λ,ξ} = −λ sign(β̂_k^{λ,ξ}), (32)

that for all variables k ∈ M_j with zero coefficient β̂_k^{λ,ξ} = 0 the gradient is equal in absolute value to λ,

|G_k^{λ,ξ}| = λ, (33)

and that for variables k ∉ M_j not in the active set,

|G_k^{λ,ξ}| < λ. (34)

These conditions are a consequence of the requirement that the subgradient of the loss function contains 0 for a valid solution.
Note that the gradient of the active variables in M_j is unchanged if we replace ξ ∈ (ξ_j, ξ_{j+1}) by some ξ′ ∈ (ξ_j, ξ_{j+1}) and replace β̂^{λ,ξ} by β̂^{λ,ξ} + (ξ′ − ξ) θ̂^{M_j}. That is, for all k ∈ M_j,

(Y(ξ) − X β̂^{λ,ξ})^T X_k = {Y(ξ′) − X(β̂^{λ,ξ} + (ξ′ − ξ) θ̂^{M_j})}^T X_k,

as the difference of the two sides is equal to (ξ′ − ξ){(ε − X θ̂^{M_j})^T X_k}, and (ε − X θ̂^{M_j})^T X_k = 0 for all k ∈ M_j, as θ̂^{M_j} is the OLS estimator of ε, regressed on the variables in M_j. Equalities (32) and (33) are thus fulfilled for this solution and it remains to show that (34) also holds. For sufficiently small values of ξ′ − ξ, inequality (34) is clearly fulfilled for continuity reasons. Note that if |ξ′ − ξ| is large enough that inequality (34) becomes an equality for some variable k ∉ M_j, then the set of active variables changes and thus either ξ′ = ξ_{j+1} or ξ′ = ξ_j. We have thus shown that the solution β̂^{λ,ξ} can, for all ξ ∈ (ξ_j, ξ_{j+1}), be written as

β̂^{λ,ξ} = β̂^{λ,ξ_j} + (ξ − ξ_j) θ̂^{M_j},

which proves (31) and thus completes the proof. □

A bound on the number of active variables.
A decisive part of the variance of the estimator is determined by the number of selected variables. Instead of directly bounding the number of selected variables, we derive bounds for the number of active variables. As any variable with a nonzero regression coefficient is also an active variable, these bounds lead trivially to bounds for the number of selected variables. Let A^λ be the set of active variables,

A^λ = {k : |G_k^λ| = λ}.

Let A^{λ,ξ} be the set of active variables of the de-noised estimator β̂^{λ,ξ}, as defined in (15). The number of selected variables (variables with a nonzero coefficient) is at most as large as the number of active variables, as any variable with a nonzero estimated coefficient has to be an active variable [25].
For λ ≥ σ e_n √(n log p_n), the maximal number sup_{0 ≤ ξ ≤ 1} |A^{λ,ξ}| of active variables is bounded, with probability converging to 1 for n → ∞, by

sup_{0 ≤ ξ ≤ 1} |A^{λ,ξ}| ≤ e_n s_n.
Let R^{λ,ξ} be the residuals of the de-noised estimator (15), R^{λ,ξ} = Y(ξ) − X β̂^{λ,ξ}. For any k in the set A^{λ,ξ} of active variables,

|X_k^T R^{λ,ξ}| = λ. (35)

Adding up, it follows that for all 0 ≤ ξ ≤ 1,

|A^{λ,ξ}| λ² = ‖X_{A^{λ,ξ}}^T R^{λ,ξ}‖²_ℓ2. (36)

The residuals can, for all values 0 ≤ ξ ≤ 1, be written as R^{λ,ξ} = X(β − β̂^{λ,ξ}) + ξε. Equality (36) can now be transformed into the inequality

|A^{λ,ξ}| λ² ≤ (‖X_{A^{λ,ξ}}^T X(β − β̂^{λ,ξ})‖_ℓ2 + ξ ‖X_{A^{λ,ξ}}^T ε‖_ℓ2)² (37)
≤ (‖X_{A^{λ,ξ}}^T X(β − β̂^{λ,ξ})‖_ℓ2 + ‖X_{A^{λ,ξ}}^T ε‖_ℓ2)². (38)

Denote by m̃ the supremum of |A^{λ,ξ}| over all values of 0 ≤ ξ ≤ 1. Using the same argument as in the derivation of (29), the term sup_{0 ≤ ξ ≤ 1} ‖X_{A^{λ,ξ}}^T ε‖²_ℓ2 is of order O_p(m̃ n log p_n), as long as p_n → ∞ for n → ∞. For sufficiently large n it holds thus, using λ ≥ σ e_n √(n log p_n), that

sup_{0 ≤ ξ ≤ 1} ( ‖X_{A^{λ,ξ}}^T ε‖²_ℓ2 / (m̃ λ²) )^{1/2} ≤ η

for any η > 0. Dividing by λ², (38) then implies, with probability converging to 1,

m̃ ≤ sup_{0 ≤ ξ ≤ 1} ( λ^{-1} ‖X_{A^{λ,ξ}}^T X(β − β̂^{λ,ξ})‖_ℓ2 + η √m̃ )². (39)

Now turning to the right-hand side, it trivially holds for any value of 0 ≤ ξ ≤ 1 that |A^{λ,ξ}| ≤ min{n, p}. On the other hand, X(β − β̂^{λ,ξ}) = X_{B^{λ,ξ}} (β − β̂^{λ,ξ})_{B^{λ,ξ}}, where B^{λ,ξ} := A^{λ,ξ} ∪ {k : β_k ≠ 0}, as the difference vector β − β̂^{λ,ξ} has nonzero entries only in the set B^{λ,ξ}. Thus

‖X_{A^{λ,ξ}}^T X(β − β̂^{λ,ξ})‖²_ℓ2 ≤ ‖X_{B^{λ,ξ}}^T X_{B^{λ,ξ}} (β − β̂^{λ,ξ})_{B^{λ,ξ}}‖²_ℓ2.

Using additionally |B^{λ,ξ}| ≤ s_n + min{n, p}, it follows that

‖X_{A^{λ,ξ}}^T X(β − β̂^{λ,ξ})‖²_ℓ2 ≤ n² φ²_max ‖β − β̂^{λ,ξ}‖²_ℓ2.

Splitting the difference β − β̂^{λ,ξ} into (β − β^λ) + (β^λ − β̂^{λ,ξ}), where β^λ = β̂^{λ,0} is again the population version of the Lasso estimator, it holds for any η > 0, with probability converging to 1 for n → ∞, that

m̃ ≤ ( n λ^{-1} φ_max ‖β − β^λ‖_ℓ2 + n λ^{-1} φ_max sup_{0 ≤ ξ ≤ 1} ‖β̂^{λ,0} − β̂^{λ,ξ}‖_ℓ2 + η √m̃ )². (40)

Using Lemmas 3 and 4, the variance term n² λ^{-2} φ²_max sup_{0 ≤ ξ ≤ 1} ‖β̂^{λ,0} − β̂^{λ,ξ}‖²_ℓ2 is bounded by O_p{ n λ^{-2} m̃ log p_n φ²_max / φ²_min(m̃) }. Define, implicitly, a sequence

λ̃ = σ √(n log p_n) (φ_max / φ_min(m̃)).

For any sequence λ with lim inf_{n → ∞} λ/λ̃ > 0, the term n² λ^{-2} φ²_max sup_{0 ≤ ξ ≤ 1} ‖β̂^{λ,0} − β̂^{λ,ξ}‖²_ℓ2 is then of order o_p(m̃). Using furthermore the bound on the bias from Lemma 1, it holds with probability converging to 1 for n → ∞, for any sequence λ with lim inf_{n → ∞} λ/λ̃ > 0 and any η > 0, that

m̃ ≤ ( n λ^{-1} φ_max ‖β − β^λ‖_ℓ2 + 2η √m̃ )² ≤ ( 17.5 φ_max √s_n / φ_min(e_n s_n) + 2η √m̃ )².

Choosing η = 0.013 implies, for an inequality of the form a² ≤ (x + 2ηa)², that a ≤ (18/17.5) x. Hence, choosing this value of η, it follows from the inequality above that, with probability converging to 1 for n → ∞,

m̃ ≤ 18² φ²_max s_n / φ²_min(e_n s_n) = e_n s_n ( 18² φ²_max / (e_n φ²_min(e_n s_n)) ) ≤ e_n s_n,

having used the definition of the sparsity multiplier in (6). We can now see that the requirement on λ, namely lim inf_{n → ∞} λ/λ̃ > 0, is fulfilled if λ ≥ σ e_n √(n log p_n), which completes the proof. □

Finally, we use Lemmas 3, 4 and 5 to show the bound on the variance of the estimator.
Lemma 6.
Under the conditions of Theorem 1, with probability converging to 1 for n → ∞,

‖β^λ − β̂^λ‖²_ℓ2 ≤ 2σ² (s_n log p_n / n) (e_n / φ²_min(e_n s_n)).

The proof follows immediately from Lemmas 3 and 4 when inserting the bound on the number of active variables obtained in Lemma 5.
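The role of the de-noised response Y(ξ) = Xβ + ξε in the argument can be illustrated numerically. The sketch below (with an illustrative coordinate-descent Lasso solver and an arbitrary penalty level; none of this is the paper's code) computes the noiseless solution β^λ = β̂^{λ,0} and the ordinary estimate β̂^{λ,1}, and checks the bias–variance split ‖β̂^λ − β‖² ≤ 2‖β̂^λ − β^λ‖² + 2‖β^λ − β‖² used at the start of the proof.

```python
import numpy as np

def lasso_cd(X, y, lam, iters=300):
    """Coordinate descent for the criterion ||y - X b||^2 + lam * ||b||_1."""
    b = np.zeros(X.shape[1])
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(X.shape[1]):
            r_j = y - X @ b + X[:, j] * b[j]      # partial residual with b_j excluded
            c = X[:, j] @ r_j
            b[j] = np.sign(c) * max(abs(c) - lam / 2.0, 0.0) / col_sq[j]
    return b

rng = np.random.default_rng(1)
n, p, sigma = 80, 20, 0.5
beta = np.zeros(p)
beta[:2] = 1.0
X = rng.standard_normal((n, p))
eps = sigma * rng.standard_normal(n)
lam = 2 * sigma * np.sqrt(2 * n * np.log(p))      # illustrative penalty level

def fit(xi):
    """Lasso fit on the de-noised response Y(xi) = X beta + xi * eps, as in (14)."""
    return lasso_cd(X, X @ beta + xi * eps, lam)

beta_noiseless = fit(0.0)                         # beta^lambda: pure bias, no noise
beta_full = fit(1.0)                              # ordinary Lasso estimate
bias = np.linalg.norm(beta_noiseless - beta)
variance = np.linalg.norm(beta_full - beta_noiseless)
total = np.linalg.norm(beta_full - beta)
# triangle inequality combined with (a + b)^2 <= 2 a^2 + 2 b^2:
decomposition_holds = total ** 2 <= 2 * variance ** 2 + 2 * bias ** 2 + 1e-10
```

The bias term is strictly positive for any λ > 0, reflecting the shrinkage γ^λ analyzed in Part I of the proof.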
4. Numerical illustration: frequency detection.
Instead of extensive numerical simulations, we would like to illustrate a few aspects of Lasso-type variable selection when the irrepresentable condition is not fulfilled. We are not making claims that the Lasso is superior to other methods for high-dimensional data. We merely want to draw attention to the fact that (a) the Lasso might not be able to select the correct variables but (b) comes nevertheless close to the true vector in an ℓ2-sense.

An illustrative example is frequency detection. It is of interest in some areas of the physical sciences to accurately detect and resolve frequency components; two examples are variable stars [27] and the detection of gravitational waves [6, 32]. A nonparametric approach is often most suitable for fitting the involved periodic functions [15]. However, we assume here for simplicity that the observations Y = (Y_1, …, Y_n) at time points t = (t_1, …, t_n) are of the form

Y_i = Σ_{ω ∈ Ω} β_ω sin(2πω t_i + φ_ω) + ε_i,

where Ω contains the set of fundamental frequencies involved, and ε_i, for i = 1, …, n, is independently and identically distributed noise with ε_i ∼ N(0, σ²). To simplify the problem even more, we assume that the phases are known to be zero, φ_ω = 0 for all ω ∈ Ω. Otherwise, one might like to employ the Group Lasso [37], grouping together the sine and cosine parts of identical frequencies.

It is of interest to resolve closely adjacent spectral lines [16] and we will work in this setting in the following. We choose for the experiment n = 200 evenly spaced observation times. There are supposed to be two closely adjacent frequencies ω_1 and ω_2, with coefficients β_{ω_1} = β_{ω_2} = 1. As we have the information that the phase is zero for all frequencies, the predictor variables are given by all sine functions with frequencies evenly spaced between 1/200 and 1/2, with a spacing of 1/600 between adjacent frequencies.

In the chosen setting, the irrepresentable condition is violated for the frequency ω_m = (ω_1 + ω_2)/2. Even in the absence of noise, this resonance frequency is included in the Lasso estimate for all positive penalty parameters, as can be seen from the results further below. As a consequence of the violated irrepresentable condition, the largest peak in the periodogram is in general obtained for the resonance frequency. In Figure 1 we show the periodogram [28] under a moderate noise level σ = 0.2. The periodogram shows the amount of energy in each frequency and is defined through the function

∆E(ω) = Σ_i Y_i² − Σ_i (Y_i − Ŷ(ω)_i)²,

where Ŷ(ω) is the least squares fit of the observations Y, using only the sine and cosine functions with frequency ω as two predictor variables. There is clearly a peak at frequency ω_m. As can be seen in the close-up around ω_m, it is not immediately obvious from the periodogram that there are two frequencies at ω_1 and ω_2. As said above, the irrepresentable condition is violated for the resonance frequency and it is of interest to see which frequencies are picked up by the Lasso estimator.

The results are shown in Figures 2 and 3. Figure 3 highlights that the two true frequencies are with high probability picked up by the Lasso. The resonance frequency is also selected with high probability, no matter how the penalty is chosen. This result could be expected, as the irrepresentable condition is violated and the estimator can thus not be sign-consistent. We expect from the theoretical results in this manuscript that the coefficient of the falsely selected resonance frequency is very small if the penalty parameter is chosen correctly. And it can indeed be seen in Figure 2 that the coefficients of the true frequencies are much larger than the coefficient of the resonance frequency for an appropriate choice of the penalty parameter.
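The periodogram function ∆E(ω) above can be computed directly by least squares. The following sketch (the signal frequency and the frequency grid are illustrative choices, not the exact settings of the figures) recovers the frequency of a noiseless test signal as the maximizer of ∆E.

```python
import numpy as np

def delta_E(y, t, omega):
    """Energy gain Delta E(omega): total energy minus residual energy after a
    least squares fit on the sine and cosine functions with frequency omega."""
    D = np.column_stack([np.sin(2 * np.pi * omega * t),
                         np.cos(2 * np.pi * omega * t)])
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    resid = y - D @ coef
    return np.sum(y ** 2) - np.sum(resid ** 2)

n = 200
t = np.arange(n, dtype=float)                 # evenly spaced observation times
omega0 = 0.05                                 # illustrative signal frequency
y = np.sin(2 * np.pi * omega0 * t)            # noiseless signal for clarity

grid = np.arange(1, 100) / 1000.0             # candidate frequencies 0.001, ..., 0.099
energies = np.array([delta_E(y, t, w) for w in grid])
peak = grid[np.argmax(energies)]              # maximizer of the periodogram
```

Since the residual sum of squares is nonnegative, ∆E(ω) never exceeds the total energy Σ Y_i², and a perfect fit at the true frequency attains it.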
Fig. 1.
The energy log ∆E(ω) for a noise level σ = 0.2 is shown on the left for a range of frequencies ω. A close-up of the region around the peak is shown on the right. The two frequencies ω_1 and ω_2 are marked with solid vertical lines, while the resonance frequency (ω_1 + ω_2)/2 is shown with a broken vertical line.

These results reinforce our conclusion that the Lasso might not be able to pick up the correct sparsity pattern, but delivers nevertheless useful approximations, as falsely selected variables enter only with a very small coefficient; this behavior is typical and expected from the results of Theorem 1. Falsely selected coefficients can thus be removed in a second step, either by thresholding variables with small coefficients or by using other relaxation techniques. In any case, it is reassuring to know that all important variables are included in the Lasso estimate.
5. Concluding remarks.
It has recently been discovered that the Lasso cannot recover the correct sparsity pattern in certain circumstances, not even asymptotically for p_n fixed and n → ∞. This sheds a little doubt on whether the Lasso is a good method for identification of sparse models for both low- and high-dimensional data.

Here we have shown that the Lasso can continue to deliver good approximations to sparse coefficient vectors β, in the sense that the ℓ2-difference ‖β − β̂^λ‖_ℓ2 vanishes for large sample sizes n, even if it fails to discover the correct sparsity pattern. The conditions needed for a good approximation in the ℓ2-sense are weaker than the irrepresentable condition needed for sign consistency. We pointed out that the correct sparsity pattern can be recovered in a two-stage procedure when the true coefficients are not too small. The first step consists of a regular Lasso fit. Variables with small absolute coefficients are then removed from the model in a second step.

We derived possible scenarios under which ℓ2-consistency in the sense of (4) can be achieved as a function of the sparsity of the vector β, the number of samples and the number of variables. Under the condition that sparse minimal eigenvalues are not decaying too fast in some sense, the requirement for ℓ2-consistency is (ignoring log n factors)

(s_n log p_n) / n → 0, n → ∞.

Fig. 2. An example where the Lasso is bound to select wrong variables, while being a good approximation to the true vector in the ℓ2-sense. Top row: the noise level σ increases from left to right. For one run of the simulation, paths of the estimated coefficients are shown as a function of the square root √λ of the penalty parameter. The actually present signal frequencies ω_1 and ω_2 are shown as solid lines, the resonance frequency as a broken line, and all other frequencies as dotted lines. Bottom row: the shaded areas contain, for 90% of all simulations, the regularization paths of the signal frequencies (region with solid borders), the resonance frequency (area with broken borders) and all other frequencies (area with dotted boundaries). The path of the resonance frequency displays reverse shrinkage, as its coefficient gets, in general, smaller for smaller values of the penalty. As expected from the theoretical results, if the penalty parameter is chosen correctly, it is possible to separate the signal and resonance frequencies for sufficiently low noise levels by just retaining large and neglecting small coefficients. It is also apparent that the coefficient of the resonance frequency is small for a correct choice of the penalty parameter, but very seldom identically zero.

The rate of convergence is actually optimal with an appropriate choice of the tuning parameter λ and under the condition of bounded maximal and minimal sparse eigenvalues. This rate is, apart from logarithmic factors in p_n and n, identical to what could be achieved if the true sparse model were known. If ℓ2-consistency is achieved, the Lasso is selecting all "sufficiently large" coefficients, and possibly some other unwanted variables.
"Sufficiently large" means here that the squared size of the coefficients is decaying more slowly than the rate n^{-1} s_n log p_n, again ignoring logarithmic factors in the sample size. The number of variables can thus be narrowed down considerably with the Lasso in a meaningful way, keeping all important variables. The size of the reduced subset can be bounded, with high probability, by the number of truly important variables times a factor that depends on the decay of the sparse eigenvalues. This factor is often simply the squared logarithm of the sample size. Our conditions are similar in spirit to those in the related works mentioned above, but expand the ground to cover possibly cases with more dependent predictors than the UUP. These results support that the Lasso is a useful model identification method for high-dimensional data.

Fig. 3. The top row shows the ℓ2-distance between β and β̂^λ separately for the signal frequencies (solid blue line), the resonance frequency (broken red line) and all other frequencies (dotted gray line). It is evident that the distance is quite small for all three categories simultaneously if the noise level is sufficiently low (the noise level is again increasing from left to right). The bottom row shows, on the other hand, the average number of selected variables (with nonzero estimated regression coefficient) in each of the three categories as a function of the penalty parameter. It is impossible to choose the correct model, as the resonance frequency is always selected, no matter how low the noise level and no matter how the penalty parameter is chosen. This illustrates that sign consistency does not hold if the irrepresentable condition is violated, even though the estimate can be close to the true vector β in the ℓ2-sense.

Acknowledgments.
We would like to thank Noureddine El Karoui and Debashis Paul for pointing out interesting connections to random matrix theory. Some results of this manuscript have been presented at the Oberwolfach workshop "Qualitative Assumptions and Regularization for High-Dimensional Data." Finally, we would like to thank the two referees and the Associate Editor for their helpful comments, which have led to an improvement over our previous results.
REFERENCES

[1] Bickel, P., Ritov, Y. and Tsybakov, A. (2008). Simultaneous analysis of Lasso and Dantzig selector. Ann. Statist. To appear.
[2] Bunea, F., Tsybakov, A. and Wegkamp, M. (2007). Aggregation for Gaussian regression. Ann. Statist.
[3] Bunea, F., Tsybakov, A. and Wegkamp, M. (2006). Sparsity oracle inequalities for the Lasso. Electron. J. Statist.
[4] Candes, E. and Tao, T. (2005a). Decoding by linear programming. IEEE Trans. Inform. Theory.
[5] Candes, E. and Tao, T. (2005b). The Dantzig selector: Statistical estimation when p is much larger than n. Ann. Statist.
[6] Cornish, N. and Crowder, J. (2005). LISA data analysis using Markov chain Monte Carlo methods. Phys. Rev. D.
[7] Davidson, K. and Szarek, S. (2001). Local operator theory, random matrices and Banach spaces. In Handbook on the Geometry of Banach Spaces (W. B. Johnson and J. Lindenstrauss, eds.) 317–366. North-Holland, Amsterdam. MR1863696
[8] Donoho, D. (2006). For most large underdetermined systems of linear equations, the minimal ℓ1-norm solution is also the sparsest solution. Comm. Pure Appl. Math.
[9] Donoho, D. and Elad, M. (2003). Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1-minimization. Proc. Natl. Acad. Sci. USA.
[10] Donoho, D., Elad, M. and Temlyakov, V. (2006). Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inform. Theory.
[11] Efron, B., Hastie, T., Johnstone, I. and Tibshirani, R. (2004). Least angle regression. Ann. Statist.
[12] Fuchs, J. (2005). Recovery of exact sparse representations in the presence of bounded noise. IEEE Trans. Inform. Theory.
[13] Greenshtein, E. and Ritov, Y. (2004). Persistence in high-dimensional predictor selection and the virtue of over-parametrization. Bernoulli.
[14] Gribonval, R. and Nielsen, M. (2003). Sparse representations in unions of bases. IEEE Trans. Inform. Theory.
[15] Hall, P., Reimann, J. and Rice, J. (2000). Nonparametric estimation of a periodic function. Biometrika.
[16] Hannan, E. and Quinn, B. (1989). The resolution of closely adjacent spectral lines. J. Time Ser. Anal.
[17] Joshi, R., Crump, V. and Fischer, T. (1995). Image subband coding using arithmetic coded trellis coded quantization. IEEE Transactions on Circuits and Systems for Video Technology.
[18] Knight, K. and Fu, W. (2000). Asymptotics for Lasso-type estimators. Ann. Statist.
[19] LoPresto, S., Ramchandran, K. and Orchard, M. (1997). Image coding based on mixture modeling of wavelet coefficients and a fast estimation-quantization framework. In Proc. Data Compression Conference.
[20] Mallat, S. (1989). A theory for multiresolution signal decomposition: The wavelet representation. IEEE Trans. Pattern Anal. Machine Intelligence.
[21] Meier, L., van de Geer, S. and Bühlmann, P. (2008). The group Lasso for logistic regression. J. Roy. Statist. Soc. Ser. B.
[22] Meinshausen, N. (2007). Relaxed Lasso. Comput. Statist. Data Anal.
[23] Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the Lasso. Ann. Statist.
[24] Meinshausen, N., Rocha, G. and Yu, B. (2007). A tale of three cousins: Lasso, L2Boosting and Dantzig. Ann. Statist.
[25] Osborne, M., Presnell, B. and Turlach, B. (2000). On the Lasso and its dual. J. Comput. Graph. Statistics.
[26] Paul, D. (2007). Asymptotics of sample eigenstructure for a large-dimensional spiked covariance model. Statist. Sinica.
[27] Pojmanski, G. (2002). The All Sky Automated Survey. Catalog of variable stars. I. 0h–6h quarter of the southern hemisphere. Acta Astronomica.
[28] Scargle, J. (1982). Studies in astronomical time series analysis. II. Statistical aspects of spectral analysis of unevenly spaced data. Astrophysical J.
[29] Tibshirani, R. (1996). Regression shrinkage and selection via the Lasso. J. Roy. Statist. Soc. Ser. B.
[30] Tropp, J. (2004). Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inform. Theory.
[31] Tropp, J. (2006). Just relax: Convex programming methods for identifying sparse signals in noise. IEEE Trans. Inform. Theory.
[32] Umstätter, R., Christensen, N., Hendry, M., Meyer, R., Simha, V., Veitch, J., Vigeland, S. and Woan, G. (2005). LISA source confusion: Identification and characterization of signals. Classical and Quantum Gravity.
[33] Valdés-Sosa, P., Sánchez-Bornot, J., Lage-Castellanos, A., Vega-Hernández, M., Bosch-Bayard, J., Melie-García, L. and Canales-Rodríguez, E. (2005). Estimating brain functional connectivity with sparse multivariate autoregression. Philos. Trans. Roy. Soc. B: Biological Sciences.
[34] van de Geer, S. (2006). High-dimensional generalized linear models and the Lasso. Ann. Statist.
[35] Wainwright, M. (2006). Sharp thresholds for high-dimensional and noisy recovery of sparsity. Available at arXiv:math.ST/0605740.
[36] Yuan, M. and Lin, Y. (2006a). Model selection and estimation in regression with grouped variables. J. Roy. Statist. Soc. Ser. B.
[37] Yuan, M. and Lin, Y. (2006b). Model selection and estimation in regression with grouped variables. J. Roy. Statist. Soc. Ser. B.
[38] Zhang, C.-H. and Huang, J. (2006). The sparsity and bias of the Lasso selection in high-dimensional linear regression. Ann. Statist.
[39] Zhang, H. and Lu, W. (2007). Adaptive-Lasso for Cox's proportional hazards model. Biometrika.
[40] Zhao, P. and Yu, B. (2004). Stagewise Lasso. J. Machine Learning Research.
[41] Zhao, P. and Yu, B. (2006). On model selection consistency of Lasso. J. Machine Learning Research.
[42] Zou, H. (2006). The adaptive Lasso and its oracle properties. J. Amer. Statist. Assoc.

Department of Statistics
University of Oxford
1 South Parks Road
Oxford OX1 3TG
United Kingdom
E-mail: [email protected]