Balancing Sparsity and Rank Constraints in Quadratic Basis Pursuit
Çağdaş Bilen*, Gilles Puy†, Rémi Gribonval* and Laurent Daudet‡
*INRIA, Centre Inria Rennes - Bretagne Atlantique, 35042 Rennes Cedex, France.
†Inst. of Electrical Eng., Ecole Polytechnique Federale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland
‡Institut Langevin, CNRS UMR 7587, UPMC, Univ. Paris Diderot, ESPCI, 75005 Paris, France
Abstract—We investigate methods that simultaneously enforce sparsity and low-rank structure in a matrix, as often employed for sparse phase retrieval or phase calibration problems in compressive sensing. We propose a new approach for analyzing the trade-off between the sparsity and low-rank constraints in these methods, which not only helps to provide guidelines for adjusting the weights between the aforementioned constraints, but also enables new simulation strategies for evaluating performance. We then provide simulation results for phase retrieval and phase calibration, both to demonstrate the consistency of the proposed method with other approaches and to evaluate how the performance changes with different weights for the sparsity and low-rank constraints.
Index Terms—Compressed sensing, blind calibration, phase estimation, phase retrieval, lifting
I. INTRODUCTION
Compressed sensing is a theoretical and numerical framework to sample sparse signals at lower rates than required by the Nyquist-Shannon theorem [1]. More precisely, a K-sparse source vector x ∈ C^N is sampled by a number M of linear measurements

y_i = m_i′ x,  i = 1, ..., M,   (1)

where m_1, ..., m_M ∈ C^N are known measurement vectors, and (·)′ denotes the conjugate transpose operator. A problem related to compressive sensing recovery is the phase retrieval problem, which occurs in imaging techniques such as optical interferometric imaging for astronomy. In this problem, one only has access to the magnitude of the measurements, z_i = |y_i| = √(m_i′ x x′ m_i), i = 1, ..., M, where m_1, ..., m_M are vectors of the Fourier basis. Reconstructing the original signal from such magnitude measurements is a phase retrieval problem and is more challenging than simply recovering x from y_i. Nevertheless, Candès et al. have recently shown [2], [3] that x can be recovered exactly by solving a convex optimization problem with a number of measurements M > N essentially proportional to N. Instead of directly looking for a signal vector x, the method relies on finding a positive semi-definite matrix X ≜ xx′ of rank one such that |y_i|² = m_i′ X m_i for all i = 1, ..., M. When estimating X, the measurement constraint becomes linear, and Candès et al. propose to solve the following convex optimization, called PhaseLift (PL), to recover X:

X̂_PL = arg min_Z Tr(Z)   (2)
subject to Z ⪰ 0, |y_i|² = m_i′ Z m_i, i = 1, ..., M.

(This work was partly funded by the Agence Nationale de la Recherche (ANR), project ECHANGE (ANR-08-EMER-006) and by the European Research Council, PLEASE project (ERC-StG-2011-277906). LD is on a joint affiliation between Univ. Paris Diderot and Institut Universitaire de France.)

The trace norm
Tr(·) favors the selection of low-rank matrices among all those satisfying the constraints. Let us acknowledge that this phase retrieval problem was also previously studied theoretically in, e.g., [4], [5], but a larger number of measurements is needed for reconstruction of the original signal with the technique therein (M ∝ N² instead of M ∝ N). Note also that several simple iterative algorithms, such as the one described in [6], have been proposed to estimate the signal x from magnitude measurements; however, there is in general no guarantee that such algorithms converge.

When the measured vector x is sparse, a modification of this so-called PhaseLift approach was proposed by Ohlsson et al. [7], [8]. This new approach, called Compressive Phase Retrieval via Lifting (CPRL) or Quadratic Basis Pursuit [9], consists in solving the problem in (2) with the addition of a cost term that penalizes non-sparse matrices. This extra term allows them to reduce the number of magnitude measurements needed to accurately recover sparse signals. The convex optimization becomes

X̂_CPRL = arg min_Z Tr(Z) + λ‖Z‖₁   (3)
subject to Z ⪰ 0, |y_i|² = m_i′ Z m_i, i = 1, ..., M,

where λ > 0. The authors also provide bounds for guaranteed recovery of this method using a generalization of the restricted isometry property. Note that conditions on the number of measurements for accurate reconstruction of sparse signals by this approach, when the measurements are drawn randomly from the normal distribution, are also available in [10].

Recently we have shown that the Quadratic Basis Pursuit approach can be extended to solve a whole different class of problems, namely phase calibration in compressive sensing [11]. The phase calibration problem is defined as signal recovery when the measurements are contaminated with unknown phase shifts, as in

y_{i,ℓ} = e^{jθ_i} m_i′ x_ℓ,  i = 1, ..., M,  θ_i ∈ [0, 2π).   (4)

In this case, we can define the cross measurements g_{i,k,ℓ}, for i = 1, ...
, M and k, ℓ = 1, ..., L, as

g_{i,k,ℓ} ≜ y_{i,k} y_{i,ℓ}′   (5)
  = e^{jθ_i} m_i′ x_k x_ℓ′ m_i e^{−jθ_i}   (6)
  = m_i′ X_{k,ℓ} m_i,  X_{k,ℓ} ≜ x_k x_ℓ′ ∈ C^{N×N},   (7)

and we can also define the joint signal matrix X ∈ C^{LN×LN},

X ≜ [x_1; ⋮; x_L][x_1′ ⋯ x_L′] = xx′ = [X_{1,1} ⋯ X_{1,L}; ⋮ ⋱ ⋮; X_{L,1} ⋯ X_{L,L}],   (8)

which is rank-one, hermitian, positive semi-definite, and sparse when the input signals x_ℓ are sparse. Therefore the joint matrix X can be recovered with the semi-definite program Phase-Cal:

X̂ = arg min_Z f_λ(Z)   (9)
subject to Z ⪰ 0, g_{i,k,ℓ} = m_i′ Z_{k,ℓ} m_i, i = 1, ..., M, k, ℓ = 1, ..., L,

where

f_λ(Z) ≜ Tr(Z) + λ‖Z‖₁.   (10)

It can be noted that when L = 1 the optimization problem in (9) becomes identical to (3), even though the originating problems are completely different.

An important parameter in both (3) and (9) is λ, which determines the weight between the sparsity and low-rank structure constraints. It was recently suggested in [12] that the joint use of a sparsity-inducing objective function, i.e. the ℓ₁-norm, along with a low-rank-inducing objective function, i.e. the trace norm, does not necessarily improve the recovery performance, and that for each example one of the constraints is sufficient. However, it is not known for which examples each norm is more suitable. Therefore, an ambiguity related to λ is the range of values of λ that leads to the best recovery performance in different problems. For real-valued systems, the bounds on M and λ for CPRL recovery are investigated in [10], where sufficient conditions for perfect recovery are given as (assuming ‖x‖₂ = 1 without loss of generality)

λ > √K ‖x‖_∞ + 1,  λ < N/2  and  M > C λ² log N,

where C is a constant. However, similarly to the bounds shown for compressive sensing, these bounds are far from tight, and experimental results suggest that there is large room for improvement.

In this report we propose a new approach to numerically determine the range of values for the parameter λ in quadratic basis pursuit problems.
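As a quick numerical sanity check of the lifting construction above, the following numpy sketch (the variable names are our own, not from the paper) verifies that the cross measurements g_{i,k,ℓ} = y_{i,k} y_{i,ℓ}′ are independent of the unknown phases θ_i and depend linearly on the lifted blocks X_{k,ℓ} = x_k x_ℓ′; with k = ℓ it reduces to the magnitude-only (PhaseLift) identity |y_i|² = m_i′ X m_i.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, L = 12, 6, 2

# L complex input signals and M complex measurement vectors (dense toy data)
xs = rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))
ms = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
theta = rng.uniform(0, 2 * np.pi, M)        # unknown phase shift per measurement

# y_{i,l} = e^{j theta_i} m_i' x_l, as in eq. (4); ' is the conjugate transpose
y = np.exp(1j * theta)[:, None] * (ms.conj() @ xs.T)

# g_{i,k,l} = y_{i,k} * conj(y_{i,l}): the e^{j theta_i} factors cancel (eq. (6))
for i in range(M):
    for k in range(L):
        for l in range(L):
            g = y[i, k] * np.conj(y[i, l])
            X_kl = np.outer(xs[k], xs[l].conj())    # lifted block X_{k,l} = x_k x_l'
            assert np.isclose(g, np.vdot(ms[i], X_kl @ ms[i]))  # g = m_i' X_{k,l} m_i

# with k = l this is the PhaseLift identity |y_i|^2 = m_i' X m_i, linear in X
X = np.outer(xs[0], xs[0].conj())
assert np.allclose(np.abs(ms.conj() @ xs[0]) ** 2,
                   np.real(np.array([np.vdot(m, X @ m) for m in ms])))
```

Note that `np.vdot` conjugates its first argument, which matches the conjugate-transpose convention used throughout the paper.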
The proposed approach is derived analytically from the quadratic basis pursuit formulation by taking advantage of the convex nature of the problem. It is shown that the proposed approach gives empirically consistent results with quadratic basis pursuit, while providing bounds on the parameter λ for best recovery performance that lead to interesting insights for the phase calibration and sparse phase retrieval problems.

Algorithm 1
P-Cal-λ: Determine if perfect recovery of x is possible and find upper and lower bounds on λ

1: Set recovery ← false
2: Set λ_low ← 0, λ_up ← ∞
3: Perform optimization to find D̂_0 given x, m_1, ..., m_M
4: if G(D̂_0) ≤ 0 then
5:   return (recovery, λ_low, λ_up)
6: end if
7: Perform optimization to find D̂_{-1} given x, m_1, ..., m_M
8: if G(D̂_{-1}) ≤ 0 then
9:   return (recovery, λ_low, λ_up)
10: else
11:   λ_low ← 1/G(D̂_{-1})
12: end if
13: Perform optimization to find D̂_1 given x, m_1, ..., m_M
14: if G(D̂_1) < 0 then
15:   λ_up ← −1/G(D̂_1)
16: end if
17: if λ_low < λ_up then
18:   recovery ← true
19: end if
20: return (recovery, λ_low, λ_up)

II. AN ALGORITHM TO DETERMINE THE BOUNDS ON λ

Instead of finding theoretical bounds on λ as in [10], we propose to numerically determine the range of values of λ for which perfect recovery is guaranteed given x. An algorithm to determine whether perfect recovery is possible, as well as the upper and lower bounds on the parameter λ for given x and [m_1, ..., m_M], is shown in Algorithm 1 (P-Cal-λ). The term D̂_p in Algorithm 1 represents the result of the optimization

D̂_p ≜ arg min_Z G(Z)   (11)
subject to m_i′ Z_{k,ℓ} m_i = 0, i = 1, ..., M, k, ℓ = 1, ..., L
Tr(Z) = p
Z = E [a b′; b C] E′, a ∈ ℝ, b ∈ C^{LN−1}
C ⪰ 0, C ∈ C^{(LN−1)×(LN−1)},

where E is defined by the eigen-decomposition of X ≜ xx′ such that X = EΛE′. The function G(·) is defined as

G(Z) ≜ ‖Z_{Ω⊥_X}‖₁ + Real{⟨sign(X), Z_{Ω_X}⟩},   (12)

where the function sign(·), operating on every element of the matrix, is

sign(Z)_{i,j} ≜ { Z_{i,j}/|Z_{i,j}| if Z_{i,j} ≠ 0;  0 if Z_{i,j} = 0.   (13)

In order to clarify how Algorithm 1 is derived, let us first define the matrix subspaces Ω_X and Ω⊥_X as

Ω_X = {Z ∈ C^{LN×LN} | Z_{i,j} = 0 if X_{i,j} = 0}
Ω⊥_X = {Z ∈ C^{LN×LN} | Z_{i,j} = 0 if X_{i,j} ≠ 0},

and let Z_{Ω_X} and Z_{Ω⊥_X} indicate the projections of the matrix Z onto Ω_X and Ω⊥_X respectively.

Theorem 1:
For a given x = [x_1′ ... x_L′]′ ∈ C^{LN} and X ≜ xx′ having the eigen-decomposition X = EΛE′, the result X̂ of the optimization Phase-Cal is equal to X if and only if all of the following conditions are satisfied:

C1: if G_1 < 0, then λ < −1/G_1   (14)
C2: G_{-1} > 0   (15)
C3: λ > 1/G_{-1}   (16)
C4: G_0 > 0   (17)

where G_p = G(D_p) and D_p is defined as

D_p ≜ arg min_Z G(Z)   (18)
subject to m_i′ Z_{k,ℓ} m_i = 0, i = 1, ..., M, k, ℓ = 1, ..., L
Tr(Z) = p
Z = E [a b′; b C] E′, a ∈ ℝ, b ∈ R(C)
C ⪰ 0, C ∈ C^{(LN−1)×(LN−1)},

given that R(C) represents the range of the matrix C and

Z_{k,ℓ} ≜ [ Z_{(k−1)N+1,(ℓ−1)N+1} ⋯ Z_{(k−1)N+1,ℓN} ; ⋮ ⋱ ⋮ ; Z_{kN,(ℓ−1)N+1} ⋯ Z_{kN,ℓN} ].   (19)

In order to prove Theorem 1 we shall first establish a few observations. Let us define the cone S_X such that

S_X = {A | X + cA ⪰ 0, ∃ c > 0}.   (20)

Lemma 1:
For a given x = [x_1′ ... x_L′]′ ∈ C^{LN} and X = xx′ having the eigen-decomposition X = EΛE′, the matrix

∆ ≜ E [a b′; b C] E′,  b ∈ C^{LN−1}, C ∈ C^{(LN−1)×(LN−1)},

is in S_X if and only if C ⪰ 0, a ∈ ℝ and b ∈ R(C), where R(·) represents the range of the matrix.

Proof of Lemma 1:
Let us assume that ∆ ∈ S_X; then by definition ∃ c_0 ∈ ℝ₊ such that ∀u ∈ C, ∀v ∈ C^{LN−1}, ∀c ∈ (0, c_0],

[u′ v′] (X + c E [a b′; b C] E′) [u; v] ≥ 0   (21)
⇒ [u′ v′] (Λ + c [a b′; b C]) [u; v] ≥ 0   (22)
⇒ [u′ v′] [‖x‖² + ca, c b′; c b, c C] [u; v] ≥ 0   (23)
⇒ |u|²(‖x‖² + ca) + c u′ b′v + c u v′b + c v′Cv ≥ 0.   (24)

The first necessary condition for (24) is that

‖x‖² + ca ≥ 0, ∀c ∈ (0, c_0]  ⇒  a ≥ −‖x‖²/c_0,  a ∈ ℝ.   (25)

Following (24), we have

(‖x‖² + ca) |u + c b′v/(‖x‖² + ca)|² − c²|b′v|²/(‖x‖² + ca) + c v′Cv ≥ 0   (26)
⇒ v′Cv − c v′bb′v/(‖x‖² + ca) ≥ 0, ∀v, ∀c ∈ (0, c_0]   (27)
⇒ C − c bb′/(‖x‖² + ca) ⪰ 0, ∀c ∈ (0, c_0].   (28)

The second and third necessary conditions implied by (28) are

C ⪰ 0   (29)
b′v = 0, ∀v satisfying Cv = 0  ⇒  b ∈ R(C).   (30)

More strict conditions can be derived assuming C has the eigen-decomposition C = FΛ_C Λ_C′ F′, Λ_C = Diag(√λ_{C,1}, ..., √λ_{C,LN−1}), and, without loss of generality, representing b as

b = FΛ_C s,  s ∈ C^{LN−1}   (31)
  = FΛ_C s_{Ω_C},  Ω_C ≜ {a | a_i = 0 if λ_{C,i} = 0}   (32)
⇒ C − c bb′/(‖x‖² + ca) = FΛ_CΛ_C′F′ − c FΛ_C s_{Ω_C} s′_{Ω_C} Λ_C′F′/(‖x‖² + ca) ⪰ 0   (33)
⇒ v̂′v̂ − (c/(‖x‖² + ca)) |v̂′ s_{Ω_C}|² ≥ 0, ∀v ∈ C^{LN−1},  v̂ ≜ Λ_C′F′v   (34)
⇒ 1 − (c/(‖x‖² + ca)) |v̂′ s_{Ω_C}/|v̂||² ≥ 0, ∀v ∈ C^{LN−1}   (35)
⇒ 1 − (c/(‖x‖² + ca)) ‖s_{Ω_C}‖² ≥ 0   (36)
⇒ ‖s_{Ω_C}‖² ≤ a + ‖x‖²/c, ∀c ∈ (0, c_0].   (37)

Given c_0, the three necessary conditions are a ∈ ℝ, C ⪰ 0 and b ∈ R(C), as shown in (25), (29) and (30). It can also be shown that, given ∆, a constant c_0 can be chosen to ensure ∆ ∈ S_X considering the conditions in (25) and (37), provided that a ∈ ℝ, C ⪰ 0 and b ∈ R(C).

Lemma 2:
The matrix X is not the global (or local) minimum of the optimization Phase-Cal if and only if ∃∆ ∈ C^{LN×LN} satisfying all three of the following conditions:

C1: m_i′ ∆_{k,ℓ} m_i = 0, i = 1, ..., M, k, ℓ = 1, ..., L   (38)
C2: ∆ ∈ S_X   (39)
C3:
Tr(∆) + λG(∆) ≤ 0   (40)

where ∆_{k,ℓ} is defined similarly to (19).

Corollary 1:
The matrix X is the global minimum of the optimization Phase-Cal if and only if
Tr(∆) + λG(∆) > 0, ∀∆ satisfying C1 and C2.   (41)

Proof of Lemma 2: If X is not the global minimum of the optimization Phase-Cal, then by definition ∃W ⪰ 0, W ≠ X, such that

f_λ(W) ≤ f_λ(X)   (42)
g_{i,k,ℓ} = m_i′ W_{k,ℓ} m_i, i = 1, ..., M, k, ℓ = 1, ..., L.   (43)

Using (42) and the convexity of the function f_λ,

f_λ(X + c∆) ≤ f_λ(X),  0 < c ≤ 1,  ∆ ≜ W − X.   (44)

Considering that X satisfies the measurements (g_{i,k,ℓ} = m_i′ X_{k,ℓ} m_i, i = 1, ..., M, k, ℓ = 1, ..., L), (43) easily leads us to C1, such that

m_i′ (W − X)_{k,ℓ} m_i = 0, i = 1, ..., M, k, ℓ = 1, ..., L   (45)
⇒ m_i′ ∆_{k,ℓ} m_i = 0,  0 < c ≤ 1.   (46)

Note that (44), (46) and the fact that X + c∆ = (1−c)X + cW ⪰ 0 for 0 < c ≤ 1 (hence ∆ ∈ S_X) establish C1, C2 and C3; the converse direction follows from the same relations.

Proof of Theorem 1: Following Corollary 1, in order to have X as the unique solution to the optimization Phase-Cal, we must have

Tr(∆) + λG(∆) > 0   (60)
∀∆ satisfying m_i′ ∆_{k,ℓ} m_i = 0, i = 1, ..., M, k, ℓ = 1, ..., L,  ∆ ∈ S_X,

or equivalently

Tr(∆) + λG(∆) > 0   (61)
∀∆ satisfying m_i′ ∆_{k,ℓ} m_i = 0, i = 1, ..., M, k, ℓ = 1, ..., L,
∆ = E [a b′; b C] E′, a ∈ ℝ, b ∈ R(C), C ⪰ 0, C ∈ C^{(LN−1)×(LN−1)},

as suggested by Lemma 1. Since, given a ∆ satisfying (61), c∆ with c > 0 also satisfies (61), Tr(∆) can be fixed without loss of generality. Therefore (61) can be considered in three cases:

• Case 1: Tr(∆) = 1. (61) can be satisfied for all Tr(∆) > 0 provided that

Tr(∆) + λG(∆) > 0   (62)
∀∆ s.t. m_i′ ∆_{k,ℓ} m_i = 0, i = 1, ..., M, k, ℓ = 1, ..., L,  Tr(∆) = 1,
∆ = E [a b′; b C] E′, a ∈ ℝ, b ∈ R(C), C ⪰ 0, C ∈ C^{(LN−1)×(LN−1)},

which is satisfied if λG(D_1) > −1. Therefore, if G(D_1) < 0 we must have λ < −1/G(D_1), and if G(D_1) ≥ 0 no limitation on λ is needed (C1).

• Case 2: Tr(∆) = −1. Similarly to Case 1, (61) can be satisfied for all Tr(∆) < 0 provided that λG(D_{-1}) > 1. Consequently, λG(D_{-1}) > 1 only if G(D_{-1}) > 0 (C2) and λ > 1/G(D_{-1}) (C3).
• Case 3: Tr(∆) = 0. As in Cases 1 and 2, (61) can be satisfied for all Tr(∆) = 0 provided that λG(D_0) > 0. As a result, λG(D_0) > 0 only if G(D_0) > 0 (C4).

Combining all three cases, (61) can be satisfied given that C1, C2, C3 and C4 are satisfied. Similarly, it can be shown that the conditions C1, C2, C3 and C4 are sufficient for (61) to be true, which concludes the proof of Theorem 1.

Remark 1: Defining the sets S_D and S_D̂ as

S_D = {A | A = E [a b′; b C] E′, a ∈ ℝ, b ∈ R(C), C ⪰ 0, C ∈ C^{(LN−1)×(LN−1)}}   (63)
S_D̂ = {A | A = E [a b′; b C] E′, a ∈ ℝ, b ∈ C^{LN−1}, C ⪰ 0, C ∈ C^{(LN−1)×(LN−1)}}   (64)

we can observe that D_p ∈ S_D, D̂_p ∈ S_D̂ and S_D ⊂ S_D̂. As a result, we can conclude that:

1) G(D̂_p) = G(D_p) if D̂_p ∈ S_D;
2) if D̂_p ∉ S_D, then G(D̂_p) ≤ G(D_p), and the bounds on λ computed through Theorem 1 with D̂_p can only be tighter than or equal to the bounds obtained with D_p.

The optimization problem defined in (18), for a given set of m_i, the transform E and the constant p ∈ ℝ, is difficult to handle due to the non-linear nature of the constraints, specifically those introduced by the requirement b ∈ R(C). In order to simplify the optimization, one can omit this criterion and instead solve (11). Since the resulting bounds will be tighter, as explained above, the results are guaranteed to be valid for determining a viable range of λ for perfect reconstruction.

Following Theorem 1 and Remark 1, it is straightforward to show that, for a given set of sparse input signals and measurement matrix, Algorithm 1 (P-Cal-λ) can be used to determine whether perfect recovery is possible, as well as the upper and lower bounds on the parameter λ.
III. EXPERIMENTAL RESULTS

In order to demonstrate the performance of the proposed algorithm P-Cal-λ, the upper and lower bounds on the parameter λ in the optimization method Phase-Cal have been estimated for different numbers of input signals, L ∈ {1, 3, 6}, each of size N = 100. The measurement vectors and the non-zero entries of the input signals are randomly generated from an i.i.d. normal distribution. The positions of the K non-zero coefficients of the input signals x_ℓ are chosen uniformly at random in {1, ..., N}. The number of non-zero entries, K, of each input signal x_ℓ and the number of measurements, M, are varied such that the performance is observed at five sparsity levels ρ ≜ K/M, for one under-complete and one over-complete set of measurements (δ ≜ M/N, with δ < 1 and δ > 1 respectively). In order to observe the bounds on λ for perfect recovery, the optimization in (11) is performed for p = 1, −1, 0, and the bounds on λ are computed as described in Algorithm 1 (P-Cal-λ) using 10 independently generated input signals x. The lowest upper bound and the highest lower bound among these 10 experiments are selected as the viable range of λ for a given ρ and δ.

Fig. 1: (P-Cal-λ) The lower bound on λ, (a)-(b), and the estimated probability of perfect recovery, (c)-(d), for N = 100 with respect to ρ ≜ K/M and L ∈ {1, 3, 6}; panels (a) and (c) correspond to the under-complete case (δ < 1), panels (b) and (d) to the over-complete case (δ > 1). The solid lines indicate the results obtained through P-Cal-λ, whereas the dashed lines in (c)-(d) indicate the empirical probability of recovery presented in [13], obtained by Phase-Cal.

The maximum lower bound on λ among these 10 experiments is shown as a function of ρ in Figures 1a-1b for the under-complete and over-complete cases respectively.
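The bookkeeping that turns the three optimal values G_p = G(D̂_p), p ∈ {0, −1, 1}, into the recovery flag and the bounds on λ can be sketched as follows (a minimal sketch of the decision logic of Algorithm 1 and Theorem 1; the function name and inputs are our own stand-ins for the outputs of a semi-definite solver):

```python
import math

def pcal_lambda_bounds(G0, Gm1, G1):
    """Decision logic of Algorithm 1 (P-Cal-lambda), given the precomputed
    optimal values G_p = G(D_p) for p = 0, -1 and 1 (conditions C1-C4)."""
    recovery, lam_low, lam_up = False, 0.0, math.inf
    if G0 <= 0:               # C4 (G_0 > 0) violated: recovery impossible
        return recovery, lam_low, lam_up
    if Gm1 <= 0:              # C2 (G_{-1} > 0) violated: recovery impossible
        return recovery, lam_low, lam_up
    lam_low = 1.0 / Gm1       # C3: lambda > 1/G_{-1}
    if G1 < 0:
        lam_up = -1.0 / G1    # C1: lambda < -1/G_1
    if lam_low < lam_up:
        recovery = True
    return recovery, lam_low, lam_up

# e.g. G_0 > 0, G_{-1} = 0.5, G_1 >= 0: recovery for every lambda > 2
assert pcal_lambda_bounds(1.0, 0.5, 0.1) == (True, 2.0, math.inf)
```

Consistent with the observations below, whenever G_1 ≥ 0 this logic returns λ_up = ∞, i.e. no upper bound on λ.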
It can be observed from Figures 1a and 1b that the benefit of increasing the number of input signals mainly appears when M > N, as recovery is then possible for a broader range of λ and ρ. For the significant majority of the simulations, λ is found to have no upper bound (G_1 ≥ 0), and therefore the upper bounds are not shown. For most of the simulations that resulted in a feasible range of λ for perfect recovery, the optimization result D̂_p is observed to be in S_D, which affirms that the bounds found on λ for each simulation are tight.

The probabilities of recovering the signals, empirically estimated as the percentage of successful recoveries during these simulations, are displayed in Figures 1c-1d. Such probabilities are often displayed in phase transition diagrams in compressive sensing recovery scenarios when evaluating different algorithms, as in [11], [13] for the optimization Phase-Cal. In order to demonstrate that the proposed approach accurately estimates the performance, the probabilities of recovery of Phase-Cal as reported in [13] (which are consistent with the results in [11]) are also shown in Figures 1c-1d. It can be observed that the reported probabilities of both methods closely match in every simulation scenario.

IV. CONCLUSIONS

We have proposed a novel approach for the evaluation of convex minimization methods used in phase retrieval and phase calibration problems. The proposed method, P-Cal-λ, not only provides an alternative approach to evaluating the performance of the discussed optimization methods (CPRL and Phase-Cal), but also helps to find tight bounds on the optimization parameter for perfect recovery.

For the evaluation of the performance of an optimization method such as Phase-Cal or CPRL, using the approach P-Cal-λ has several advantages compared to Monte Carlo simulations performed by directly evaluating the optimization method itself.
Firstly, the P-Cal-λ algorithm determines not only the possibility of successful recovery with the optimization, but also the bounds on the parameter λ for ensuring perfect recovery when it is possible. Secondly, it provides a better way to deal with convergence issues in practical simulations. When the optimization method Phase-Cal (or CPRL) is directly performed in simulations, a perfectly accurate result may not be reached within a limited time due to slow convergence, even though perfect reconstruction would have been possible with a relaxed time constraint. However, when P-Cal-λ is used to evaluate the performance, early termination of the optimization most often affects the accuracy of the bounds on λ, but not the accuracy of determining whether perfect recovery is possible or not. Lastly, even though the orders of computational complexity of the optimization approaches in (9) and (11) are comparable, the algorithm P-Cal-λ can be performed quickly in many cases for which recovery is not possible. This is due to the fact that finding a point that results in a negative objective function (rather than the point minimizing it) is sufficient for determining unsuccessful recovery (for the optimizations in lines 3 and 7 of P-Cal-λ).

(The codes for the MATLAB® implementation of the proposed method are provided at http://hal.inria.fr/docs/00/96/02/72/TEX/Calcodesv2.0.rar)

The experimental results on the bounds of the parameter λ show that this parameter can be chosen to be very large to maximize the chances of perfect recovery.
Furthermore, the fact that there is no upper bound on λ in almost all of the simulated scenarios suggests that the same recovery performance can be reached without minimizing the trace in Phase-Cal (and CPRL), i.e. that minimizing the ℓ₁-norm alone is sufficient. This observation is consistent with the analysis provided in [12], which states that for the recovery of a given signal only one of the components (the trace and the ℓ₁-norm) of the objective function in Phase-Cal (and CPRL) is needed. Our results simply show that this component is almost always the ℓ₁-norm. The performance of ℓ₁-norm-only optimization is reported in [14], which further supports this conclusion. Furthermore, our experiments also showed that minimizing the ℓ₁-norm leads to faster convergence (in terms of the number of iterations) than minimizing the objective function with both the trace and the ℓ₁-norm.

REFERENCES

[1] D. L. Donoho, "Compressed Sensing," Information Theory, IEEE Transactions on, vol. 52, no. 4, pp. 1289-1306, 2006.
[2] E. J. Candès, Y. C. Eldar, T. Strohmer, and V. Voroninski, "Phase Retrieval via Matrix Completion," Imaging Sciences, SIAM Journal on (to appear), 2011, arXiv:1109.0573v2.
[3] E. J. Candès, T. Strohmer, and V. Voroninski, "PhaseLift: Exact and Stable Signal Recovery from Magnitude Measurements via Convex Programming," Communications on Pure and Applied Mathematics (to appear), 2011, arXiv:1109.4499v1.
[4] R. Balan, P. Casazza, and D. Edidin, "On signal reconstruction without phase," Applied and Computational Harmonic Analysis, vol. 20, pp. 345-356, May 2006.
[5] R. Balan, B. G. Bodmann, P. G. Casazza, and D. Edidin, "Fast algorithms for signal reconstruction without phase," in Wavelets XII, Proc. of SPIE, vol. 6701, (San Diego, California, USA), pp. 67011L.1-67011L.9, August 2007.
[6] R. W. Gerchberg and W. O. Saxton, "A practical algorithm for the determination of the phase from image and diffraction plane pictures," Optik, vol. 35, no. 2, pp. 237-246, 1972.
[7] H.
Ohlsson, A. Y. Yang, R. Dong, and S. S. Sastry, "Compressive Phase Retrieval from Squared Output Measurements via Semidefinite Programming," arXiv preprint arXiv:1111.6323, 2011.
[8] H. Ohlsson, A. Y. Yang, R. Dong, and S. S. Sastry, "CPRL - An Extension of Compressive Sensing to the Phase Retrieval Problem," in Neural Information Processing Systems (NIPS), 2012.
[9] H. Ohlsson, A. Yang, R. Dong, and S. Sastry, "Quadratic Basis Pursuit," in Signal Processing with Adaptive Sparse Structured Representations (SPARS) Workshop, (Lausanne, Switzerland), 2013.
[10] X. Li and V. Voroninski, "Sparse Signal Recovery from Quadratic Measurements via Convex Programming," arXiv preprint arXiv:1209.4785, pp. 1-15, 2012.
[11] C. Bilen, G. Puy, R. Gribonval, and L. Daudet, "Blind Phase Calibration in Sparse Recovery," 2013.
[12] S. Oymak, A. Jalali, M. Fazel, Y. C. Eldar, and B. Hassibi, "Simultaneously Structured Models with Application to Sparse and Low-rank Matrices," arXiv preprint arXiv:1212.3753, 2012.
[13] C. Bilen, G. Puy, R. Gribonval, and L. Daudet, "Convex Optimization Approaches for Blind Sensor Calibration using Sparsity," ArXiv e-prints, Aug. 2013, arXiv:1308.5354v1.
[14] C. Bilen, G. Puy, R. Gribonval, and L. Daudet, "Convex Optimization Approaches for Blind Sensor Calibration using Sparsity."