Multiplicative Iteration for Nonnegative Quadratic Programming
X. Xiao† and D. Chen∗
School of Securities and Futures, Southwestern University of Finance and Economics, Sichuan, China, 610041

∗Correspondence to: School of Securities and Futures, Southwestern University of Finance and Economics, Sichuan, China, 610041. E-mail: [email protected]
†E-mail: [email protected]
SUMMARY

In many applications, it makes sense to solve least squares problems with nonnegativity constraints. In this article, we present a new multiplicative iteration that monotonically decreases the value of the nonnegative quadratic programming (NNQP) objective function. The new algorithm has a simple closed form and is easily implemented on a parallel machine. We prove the global convergence of the new algorithm and apply it to image super-resolution and color image labelling problems. The experimental results demonstrate the effectiveness and broad applicability of the new algorithm.
KEY WORDS: Nonnegative Constraints; Multiplicative Iteration; NNQP; NNLS
1. INTRODUCTION

Numerical problems with nonnegativity constraints on the solution are pervasive throughout science, engineering and business. These constraints usually come from physical grounds, corresponding to amounts and measurements, such as solutions associated with image restoration and reconstruction [6, 16, 18, 26] and chemical concentrations [3]. Nonnegativity constraints very often arise in least squares problems, i.e. nonnegative least squares (NNLS):

$$\operatorname*{argmin}_x F(x) = \operatorname*{argmin}_x \|Ax - b\|^2 \quad \text{s.t. } x \ge 0. \tag{1}$$

The problem can be stated equivalently as the following nonnegative quadratic programming (NNQP):

$$\operatorname*{argmin}_x F(x) = \operatorname*{argmin}_x \frac{1}{2} x^T Q x - x^T h \quad \text{s.t. } x \ge 0. \tag{2}$$

NNLS (1) and NNQP (2) have the same unique solution. In this article, we assume $Q = A^T A \in \mathbb{R}^{n \times n}$ is a symmetric positive definite matrix and $h = A^T b \in \mathbb{R}^{n \times 1}$.

Since the first NNLS algorithm was introduced by Lawson and Hanson [15], researchers have developed many different techniques to solve (1) and (2), such as active set methods [3, 5, 15], interior point methods [2], and iterative approaches [14]. Among these methods, gradient projection methods are known as the most efficient for problems with simple constraints [19]. In this paper, we develop a new multiplicative gradient projection algorithm for the NNQP problem. Other similar research can be found in the literature [4, 9, 23].
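To make the equivalence between (1) and (2) concrete, here is a minimal NumPy check (all names and sizes are illustrative; a scaled variant $\frac{1}{2}\|Ax-b\|^2$ of (1) is used) that the two objectives differ only by the constant $\frac{1}{2} b^T b$, so they share the same minimizer:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

Q = A.T @ A          # symmetric positive definite when A has full column rank
h = A.T @ b

x = rng.random(5)    # an arbitrary nonnegative test point
f_nnls = 0.5 * np.sum((A @ x - b) ** 2)          # objective (1), scaled by 1/2
f_nnqp = 0.5 * x @ Q @ x - x @ h                 # objective (2)
assert np.isclose(f_nnls, f_nnqp + 0.5 * b @ b)  # they differ by a constant only
```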
In the paper [9], the authors studied a special NNLS problem with nonnegative matrix $Q$ and vector $h$, and proposed an algorithm called the image space reconstruction algorithm (ISRA). The corresponding multiplicative iteration is

$$x_i \leftarrow x_i \left[ \frac{h_i}{(Qx)_i} \right]. \tag{3}$$

Proofs of the convergence of ISRA can be found in [1, 10, 12, 20]. More recently, Lee and Seung generalized the idea of ISRA to the problem of non-negative matrix factorization (NMF) [16]. For a general matrix $Q$ and vector $h$ with both negative and positive entries, the authors of [23] proposed another multiplicative iteration,

$$x_i \leftarrow x_i \left[ \frac{h_i + \sqrt{h_i^2 + 4 (Q^+ x)_i (Q^- x)_i}}{2 (Q^+ x)_i} \right].$$

In [4], Brand and Chen also introduced a multiplicative iteration,

$$x_i \leftarrow x_i \left[ \frac{(Q^- x)_i + h_i^+}{(Q^+ x)_i + h_i^-} \right],$$

where $Q^+ = \max(Q, 0)$, $Q^- = \max(-Q, 0)$, $h^+ = \max(h, 0)$, $h^- = \max(-h, 0)$, and "max" is the element-wise comparison of two matrices or vectors. Both of the above algorithms are proved to converge monotonically to the global minimum of the NNQP objective function (2).

In this paper, we present a new iterative NNLS algorithm along with its convergence analysis. We prove that the quality of the approximation improves monotonically, and that the iteration is guaranteed to converge to the global optimal solution. The focus of this paper is the theoretical proof of the monotone convergence of the new algorithm; we leave the comparison with other NNLS algorithms for future research. The remainder of this paper is organized as follows. Section 2 presents the new multiplicative NNLS algorithm and proves that it monotonically decreases the NNQP objective function. In Section 3, we discuss two applications of the new algorithm to image processing problems, namely image super-resolution and color image labelling. Finally, in Section 4, we conclude by summarizing the main advantages of our approach.

2. MULTIPLICATIVE ITERATION AND ITS CONVERGENCE ANALYSIS

In this section we derive the multiplicative iteration and discuss its convergence properties. Consider the NNQP problem (2),

$$\operatorname*{argmin}_x F(x) = \operatorname*{argmin}_x \frac{1}{2} x^T Q x - x^T h \quad \text{s.t. } x \ge 0,$$

where $Q = A^T A \in \mathbb{R}^{n \times n}$ is positive definite and $h = A^T b \in \mathbb{R}^n$. The proposed multiplicative iteration for solving (2) is

$$x_i \leftarrow x_i \left[ \frac{2 (Q^- x)_i + h_i^+ + \delta}{(|Q| x)_i + h_i^- + \delta} \right], \tag{4}$$

where $|Q| = \operatorname{abs}(Q) = Q^+ + Q^-$, and the constant stabilizer $0 < \delta \ll 1$ guarantees that the iteration is monotonically convergent; we discuss how to choose $\delta$ later in this section. If all the entries of $Q$ and $h$ are nonnegative, the multiplicative update (4) reduces to ISRA [9].
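A minimal dense NumPy sketch of update (4) under the standing assumptions ($Q$ symmetric positive definite, positive initial guess); the function name and the fixed iteration count are illustrative choices, not part of the paper:

```python
import numpy as np

def nnqp_multiplicative(Q, h, x0, delta=1e-16, iters=1000):
    """Update (4): x_i <- x_i * (2(Q^- x)_i + h_i^+ + delta) / ((|Q| x)_i + h_i^- + delta)."""
    Qm = np.maximum(-Q, 0.0)     # Q^-
    Qabs = np.abs(Q)             # |Q| = Q^+ + Q^-
    hp = np.maximum(h, 0.0)      # h^+
    hm = np.maximum(-h, 0.0)     # h^-
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x *= (2.0 * Qm @ x + hp + delta) / (Qabs @ x + hm + delta)
    return x
```

A typical call is `x = nnqp_multiplicative(Q, h, np.ones(len(h)))`; only the two matrix-vector products `Qm @ x` and `Qabs @ x` are needed per step, which is what makes the update attractive on parallel hardware.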
Since all the components of the multiplicative factor, i.e. the matrices $Q^+$, $Q^-$, $|Q|$ and the vectors $h^+$, $h^-$, are nonnegative,

$$\frac{2 (Q^- x)_i + h_i^+ + \delta}{(|Q| x)_i + h_i^- + \delta} \ge 0 \quad \forall x \ge 0,$$

so all the iterates $\{x^k\}$ generated by (4) are nonnegative. In generating the sequence $\{x^k\}$, the iteration computes the new update $x^{k+1}$ using only the previous vector $x^k$; it does not need to keep all the previous updates. The major computational tasks to be performed at each iteration are the matrix-vector products $Q^- x$ and $|Q| x$. These remarkable properties imply that the multiplicative iteration requires little storage and computation per iteration.

The iteration (4) is a gradient projection method, as can be seen from

$$\begin{aligned}
x_i^{k+1} - x_i^k &= \left[ \frac{2 (Q^- x^k)_i + h_i^+ + \delta}{(|Q| x^k)_i + h_i^- + \delta} \right] x_i^k - x_i^k \\
&= \left[ \frac{2 (Q^- x^k)_i + h_i^+ - (|Q| x^k)_i - h_i^-}{(|Q| x^k)_i + h_i^- + \delta} \right] x_i^k \\
&= - \left[ \frac{(Q x^k)_i - h_i}{(|Q| x^k)_i + h_i^- + \delta} \right] x_i^k \\
&= - \left[ \frac{x_i^k}{(|Q| x^k)_i + h_i^- + \delta} \right] \left( (Q x^k)_i - h_i \right) \\
&= - \gamma_i^k \, \nabla F(x^k)_i,
\end{aligned}$$

where the step size $\gamma_i^k = x_i^k / \left( (|Q| x^k)_i + h_i^- + \delta \right)$ and the gradient of the NNQP objective function (2) is $\nabla F(x) = Qx - h$.

The proposed iteration (4) is motivated by the Karush-Kuhn-Tucker (KKT) first-order optimality conditions [19]. Consider the Lagrangian function of the NNQP objective function (2),

$$L(x, \mu) = \frac{1}{2} x^T Q x - x^T h - \mu^T x, \tag{5}$$

with Lagrange multipliers $\mu_i \ge 0$. Assuming $x^*$ is the optimal solution of the Lagrangian function (5), the KKT conditions are

$$x^* \circ (Q x^* - h - \mu) = 0, \qquad \mu \circ x^* = 0,$$

with $\circ$ denoting the Hadamard product. The two equalities imply that either the $i$th constraint is active, $x_i^* = 0$, or $\mu_i = 0$ and $(Q x^*)_i - h_i = 0$ when the $i$th constraint is inactive ($x_i^* > 0$). Because $Q = |Q| - 2 Q^-$, the equality $(Q x^*)_i - h_i = 0$ implies $\left( (|Q| - 2 Q^-) x^* \right)_i - (h_i^+ - h_i^-) = 0$, and we obtain

$$(|Q| x^*)_i + h_i^- + \delta = 2 (Q^- x^*)_i + h_i^+ + \delta,$$

which is equivalent to

$$\frac{2 (Q^- x^*)_i + h_i^+ + \delta}{(|Q| x^*)_i + h_i^- + \delta} = 1,$$

i.e. the $i$th multiplicative factor is the constant 1. Therefore, any optimal solution $x^*$ satisfying the KKT conditions corresponds to a fixed point of the multiplicative iteration.
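The gradient projection identity above is easy to verify numerically. A short sketch (random data, illustrative names) checking that one multiplicative step of (4) coincides with $x^k - \gamma^k \circ \nabla F(x^k)$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 6))
b = rng.standard_normal(30)
Q, h = A.T @ A, A.T @ b
Qm, Qabs = np.maximum(-Q, 0.0), np.abs(Q)
hp, hm = np.maximum(h, 0.0), np.maximum(-h, 0.0)
delta = 1e-16

x = rng.random(6) + 0.1                            # a positive iterate x^k
denom = Qabs @ x + hm + delta
x_mult = x * (2.0 * Qm @ x + hp + delta) / denom   # multiplicative update (4)
gamma = x / denom                                  # per-coordinate step size gamma^k
x_grad = x - gamma * (Q @ x - h)                   # gradient projection form
assert np.allclose(x_mult, x_grad)
```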
In the remainder of this section, we prove that the proposed multiplicative iteration (4) monotonically decreases the value of the NNQP objective function (2) to its global minimum. The analysis is based on the construction of an auxiliary function for $F(x)$; similar techniques have been used in [16, 23, 24]. For the sake of completeness, we begin our discussion with a brief review of the definition of an auxiliary function.
Figure 1. Graph illustrating the search for the minimum of the objective function $F(x)$ by iteratively minimizing the auxiliary function $G(x, y)$. Each new update $x^{k+1}$ is computed by minimizing the auxiliary function $G(x, x^k)$ at every step.

Definition 2.1
Let $x$ and $y$ be two positive vectors. The function $G(x, y)$ is an auxiliary function of $F(x)$ if it satisfies the following two properties:
• $F(x) < G(x, y)$ if $x \ne y$;
• $F(x) = G(x, x)$.

Figure 1 illustrates the relationship between the auxiliary function $G(x, y)$ and the corresponding objective function $F(x)$. In each iteration, the update $x^{k+1}$ is computed by minimizing the auxiliary function; the iteration stops when it reaches a stationary point. The following lemma, also presented in [16, 23, 24], shows that minimizing the auxiliary function $G(x, y)$ in each step decreases the value of the objective function $F(x)$.

Lemma 2.2
Let $G(x, y)$ be an auxiliary function of $F(x)$. Then $F(x)$ is strictly decreasing under the update

$$x^{k+1} = \operatorname*{argmin}_x G(x, x^k)$$

if $x^{k+1} \ne x^k$.

Proof
By the definition of an auxiliary function, if $x^{k+1} \ne x^k$ we have

$$F(x^{k+1}) < G(x^{k+1}, x^k) \le G(x^k, x^k) = F(x^k).$$

The middle inequality follows from the assumption $x^{k+1} = \operatorname*{argmin}_x G(x, x^k)$.

Deriving a suitable auxiliary function for the NNQP objective function $F(x)$ in (2) is the key step in proving the convergence of our multiplicative iteration. In the following lemma, we establish the positive semi-definiteness of two matrices which are used later to build our auxiliary function.

Lemma 2.3
Let $P \in \mathbb{R}^{n \times n}$ be a nonnegative real symmetric matrix without all-zero rows, and let $x \in \mathbb{R}^{n \times 1}$ be a vector whose entries are positive. Define the diagonal matrix $D \in \mathbb{R}^{n \times n}$,

$$D_{ij} = \begin{cases} 0 & \text{if } i \ne j, \\ (Px)_i / x_i & \text{otherwise.} \end{cases}$$

Then the matrices $D \pm P$ are positive semi-definite.
Proof
Consider the matrices

$$M_1 = \operatorname{diag}(x) (D + P) \operatorname{diag}(x), \qquad M_2 = \operatorname{diag}(x) (D - P) \operatorname{diag}(x),$$

where $\operatorname{diag}(x)$ represents the diagonal matrix whose entries on the main diagonal are the entries of the vector $x$. Since $x$ is a positive vector, $\operatorname{diag}(x)$ is invertible; hence $D + P$ and $D - P$ are congruent to $M_1$ and $M_2$, respectively. The matrices $D \pm P$ are positive semi-definite if and only if $M_1$ and $M_2$ are positive semi-definite [13].

Given any nonzero vector $z \in \mathbb{R}^{n \times 1}$,

$$\begin{aligned}
z^T M_1 z &= \sum_{ij} (D_{ij} + P_{ij}) x_i x_j z_i z_j \\
&= \sum_i D_{ii} x_i^2 z_i^2 + \sum_{ij} P_{ij} x_i x_j z_i z_j \\
&= \sum_i (Px)_i x_i z_i^2 + \sum_{ij} P_{ij} x_i x_j z_i z_j \\
&= \sum_{ij} P_{ij} x_i x_j z_i^2 + \sum_{ij} P_{ij} x_i x_j z_i z_j \\
&= \frac{1}{2} \sum_{ij} P_{ij} x_i x_j (z_i + z_j)^2 \ \ge\ 0.
\end{aligned}$$

Hence $M_1$ is positive semi-definite. Similarly, $M_2$ is positive semi-definite, since

$$z^T M_2 z = \frac{1}{2} \sum_{ij} P_{ij} x_i x_j (z_i - z_j)^2 \ge 0.$$

Therefore $D \pm P$ are positive semi-definite.

Combining the two previous lemmas, we construct an auxiliary function for NNQP (2) as follows.

Lemma 2.4 (Auxiliary Function)
Let $x$ and $y$ be two positive vectors, and define the diagonal matrix $D(y)$ with diagonal entries

$$D_{ii} = \frac{(|Q| y)_i + h_i^- + \delta}{y_i}, \qquad i = 1, 2, \cdots, n, \quad \delta > 0.$$

Then the function

$$G(x, y) = F(y) + (x - y)^T \nabla F(y) + \frac{1}{2} (x - y)^T D(y) (x - y) \tag{6}$$

is an auxiliary function for the quadratic model $F(x) = \frac{1}{2} x^T Q x - x^T h$.

Proof
First of all, it is obvious that $G(x, x) = F(x)$, which is the second property in Definition 2.1. Next, we have to show the first property, $G(x, y) > F(x)$ for $x \ne y$.

Notice that $Q$ is the Hessian matrix of $F(x)$, so the Taylor expansion of $F(x)$ at $y$ is

$$F(x) = F(y) + (x - y)^T \nabla F(y) + \frac{1}{2} (x - y)^T Q (x - y).$$
The difference between $G(x, y)$ and $F(x)$ is therefore

$$G(x, y) - F(x) = \frac{1}{2} (x - y)^T \left( D(y) - Q \right) (x - y),$$

so $G(x, y) > F(x)$ for $x \ne y$ if and only if $D(y) - Q$ is positive definite. Recall that $|Q| = Q^+ + Q^-$ and $|Q| y = Q^+ y + Q^- y$, hence

$$\begin{aligned}
D(y) - Q &= \operatorname{diag}\left( \frac{(|Q| y)_i + h_i^- + \delta}{y_i} \right) - (Q^+ - Q^-) \\
&= \left[ \operatorname{diag}\left( \frac{(Q^+ y)_i}{y_i} \right) - Q^+ \right] + \left[ \operatorname{diag}\left( \frac{(Q^- y)_i}{y_i} \right) + Q^- \right] + \operatorname{diag}\left( \frac{h_i^- + \delta}{y_i} \right).
\end{aligned}$$

Because $\operatorname{diag}\left( (h_i^- + \delta)/y_i \right)$ is positive definite for $\delta > 0$, and, by Lemma 2.3, $\operatorname{diag}\left( (Q^+ y)_i / y_i \right) - Q^+$ and $\operatorname{diag}\left( (Q^- y)_i / y_i \right) + Q^-$ are positive semi-definite, $D(y) - Q$ is positive definite. Hence we obtain $G(x, y) > F(x)$ for any vectors $x \ne y$. Therefore $G(x, y)$ is an auxiliary function of $F(x)$.

In the previous proof, we used the fact $\delta > 0$ to show that the diagonal matrix $\operatorname{diag}\left( (h_i^- + \delta)/y_i \right)$ is positive definite. Because the matrix $\operatorname{diag}\left( h_i^- / y_i \right)$ is already positive semi-definite for a vector $y$ with all positive entries, $\delta$ can be chosen to be any positive number; in our experiments it is set to the machine epsilon, eps.

Armed with the previous lemmas, we are ready to prove the convergence theorem for our multiplicative iteration (4).

Theorem 2.5 (Monotone Convergence)
The value of the objective function $F(x)$ in (2) is monotonically decreasing under the multiplicative update

$$x_i^{k+1} = x_i^k \left[ \frac{2 (Q^- x^k)_i + h_i^+ + \delta}{(|Q| x^k)_i + h_i^- + \delta} \right].$$

It attains the global minimum of $F(x)$ at the stationary point of the iteration.

Proof
By Definition 2.1 and Lemma 2.4, $G(x, y)$ in (6) is an auxiliary function of $F(x)$. Lemma 2.2 shows that the objective function $F(x)$ is monotonically decreasing under the update $x^{k+1} = \operatorname*{argmin}_x G(x, x^k)$ if $x^{k+1} \ne x^k$. It remains to show that the proposed iteration (4) attains the minimum of $G(x, x^k)$.

By Fermat's theorem [22], taking the first partial derivative of $G(x, x^k)$ with respect to $x$ and setting it to zero, we obtain

$$\nabla_x G(x, x^k) = \nabla F(x^k) + D(x^k)(x - x^k) = 0. \tag{7}$$

Hence

$$\begin{aligned}
x &= x^k - \left( D(x^k) \right)^{-1} \nabla F(x^k) \\
&= x^k - \left( D(x^k) \right)^{-1} (Q x^k - h) \\
&= x^k - \left( D(x^k) \right)^{-1} \left( |Q| x^k + h^- + \delta - 2 Q^- x^k - h^+ - \delta \right) \\
&= x^k - \left( D(x^k) \right)^{-1} \left( |Q| x^k + h^- + \delta \right) + \left( D(x^k) \right)^{-1} \left( 2 Q^- x^k + h^+ + \delta \right) \\
&= \left( D(x^k) \right)^{-1} \left( 2 Q^- x^k + h^+ + \delta \right) \\
&= \operatorname{diag}\left( \frac{2 (Q^- x^k)_i + h_i^+ + \delta}{(|Q| x^k)_i + h_i^- + \delta} \right) x^k,
\end{aligned}$$
where $\delta$ is understood as the vector with all entries equal to $\delta$, and we used the identity $\left( D(x^k) \right)^{-1} \left( |Q| x^k + h^- + \delta \right) = x^k$, which follows directly from the definition of $D(x^k)$. The right-hand side is exactly the multiplicative update (4).

The decreasing sequence $\{F(x^k)\}$ is bounded below by $-\frac{1}{2} b^T b$. By the Monotone Convergence Theorem [22], the sequence converges to a limit $F^*$. Because $F(x)$ is continuous, on any compact domain there exists $x^*$ such that $F(x^*) = F^*$. Since $F(x^*)$ is the global minimum of $F(x)$, the gradient of $F(x)$ at $x^*$ is zero, i.e.

$$\nabla F(x^*) = Q x^* - h = 0,$$

which is equivalent to

$$\frac{2 (Q^- x^*)_i + h_i^+ + \delta}{(|Q| x^*)_i + h_i^- + \delta} = 1,$$

which means $x^*$ is a stationary point of the iteration. Thus the sequence $\{F(x^k)\}$ converges to the global minimum $F(x^*)$ as $\{x^k\}$ approaches the limit point $x^*$.
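As a quick numerical illustration of Theorem 2.5 (a sanity check, not a substitute for the proof), the following script tracks $F(x^k)$ on a random positive definite instance and checks that it never increases; all names and sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 8))
b = rng.standard_normal(40)
Q, h = A.T @ A, A.T @ b
Qm, Qabs = np.maximum(-Q, 0.0), np.abs(Q)
hp, hm = np.maximum(h, 0.0), np.maximum(-h, 0.0)
delta = 1e-16

F = lambda v: 0.5 * v @ Q @ v - v @ h
x = np.ones(8)                          # any positive initial guess
vals = [F(x)]
for _ in range(2000):
    x *= (2.0 * Qm @ x + hp + delta) / (Qabs @ x + hm + delta)
    vals.append(F(x))

assert all(f1 <= f0 + 1e-12 for f0, f1 in zip(vals, vals[1:]))  # monotone decrease
# the complementarity residual min(x_i, (Qx - h)_i) should vanish at the optimum
print("KKT residual:", np.linalg.norm(np.minimum(x, Q @ x - h)))
```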
3. NUMERICAL EXPERIMENTS

We now illustrate two applications of the proposed NNLS algorithm to image processing problems.

3.1. Image Super-Resolution

Image super-resolution (SR) refers to the process of combining a set of low-resolution images into a single high-resolution image [7, 8]. Each low-resolution image $y_k$ is assumed to be generated from an ideal high-resolution image $x$ via a displacement $S_k$, a decimation $D_k$, and a noise process $n_k$:

$$y_k = D_k S_k x + n_k, \qquad k = 1, 2, \cdots, K. \tag{8}$$

We use the bilinear interpolation proposed by Chung, Haber and Nagy [8]† to estimate the displacement matrices $S_k$. We then reconstruct the high-resolution image by iteratively solving the NNLS problem

$$\operatorname*{argmin}_x \sum_{k=1}^{K} \| D_k S_k x - y_k \|^2, \quad \text{s.t. } x \ge 0.$$

†Thanks to Julianne Chung for providing the Matlab code.

The low-resolution test data sets are taken from the Multi-Dimensional Signal Processing Research Group (MDSP) [11]. Figure 2a shows 4 of the uncompressed low-resolution text images, and Figure 2b shows the high-resolution image reconstructed by our algorithm. Figure 3a shows 4 of the low-resolution EIA images, and Figure 3b shows the reconstructed high-resolution image computed by the proposed multiplicative iteration. As the figures show, the high-resolution images are visually much better than the low-resolution inputs.
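A minimal synthetic sketch of the SR formulation (8) solved with iteration (4). Everything here is illustrative and much simpler than the paper's setup: a 1D signal stands in for the image, integer circular shifts stand in for the bilinear-interpolation motion model of [8], and the operators are built as small dense matrices so that $Q = \sum_k (D_k S_k)^T (D_k S_k)$ can be formed explicitly.

```python
import numpy as np

def shift_matrix(n, s):
    """Circular shift by s samples (toy stand-in for the displacement S_k)."""
    return np.roll(np.eye(n), s, axis=0)

def decimation_matrix(n, r):
    """Weighted r-sample average, then downsample (toy stand-in for D_k)."""
    w = np.arange(r, 0, -1.0)
    w /= w.sum()                              # e.g. [2/3, 1/3] for r = 2
    D = np.zeros((n // r, n))
    for i in range(n // r):
        D[i, i * r:(i + 1) * r] = w
    return D

n, r, K = 32, 2, 6
rng = np.random.default_rng(3)
x_true = rng.random(n)                        # "high-resolution" signal, nonnegative
ops = [decimation_matrix(n, r) @ shift_matrix(n, k) for k in range(K)]
ys = [M @ x_true + 0.01 * rng.standard_normal(n // r) for M in ops]

# Stack the K data-fit terms: minimize sum_k ||D_k S_k x - y_k||^2 s.t. x >= 0
Q = sum(M.T @ M for M in ops)
h = sum(M.T @ y for M, y in zip(ops, ys))
Qm, Qabs = np.maximum(-Q, 0.0), np.abs(Q)
hp, hm = np.maximum(h, 0.0), np.maximum(-h, 0.0)
x, delta = np.ones(n), 1e-16
for _ in range(5000):
    x *= (2.0 * Qm @ x + hp + delta) / (Qabs @ x + hm + delta)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

With six shifted frames and factor-2 decimation the stacked operator has full column rank, so $Q$ is positive definite and the recovered signal matches $x_{\text{true}}$ up to the noise level.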
3.2. Color Image Labelling

In Markov random field (MRF)-based interactive image segmentation techniques, the user labels a small subset of pixels, and the MRF propagates these labels across the image, typically finding high-gradient contours as segmentation boundaries where the labeling changes [21]. These techniques require users to impose hard constraints by indicating certain pixels (seeds) that absolutely have to be part of labeling $k$; intuitively, the hard constraints provide clues as to what the user intends to segment. Denote by $X$ the $m$-by-$n$ test RGB image, where $X_{ij}$ is a 3-by-1 vector at pixel $(i, j)$, and denote the class set by $\mathcal{C} = \{1, 2, \cdots, K\}$. Probabilistic labeling approaches compute a probability measure field for each pixel $(i, j)$,

$$X = \left\{ X_{ij}^k : k \in \mathcal{C},\ i = 1, 2, \cdots, m,\ j = 1, 2, \cdots, n \right\},$$
with the constraints

$$\sum_{k=1}^{K} X_{ij}^k = 1, \qquad X_{ij}^k \ge 0, \quad \forall k \in \mathcal{C}. \tag{9}$$

Denoting by $\mathcal{N}_{ij} = \{ (i', j') : \min\{ |i' - i|, |j' - j| \} = 1 \}$ the set of neighbors of pixel $(i, j)$, the cost function takes the quadratic form

$$\operatorname*{argmin}_X \sum_{k=1}^{K} \sum_{i=1}^{m} \sum_{j=1}^{n} \left[ \alpha \sum_{(i', j') \in \mathcal{N}_{ij}} \omega_{i'j'}^{ij} \left( X_{i'j'}^k - X_{ij}^k \right)^2 + D_{ij}^k X_{ij}^k \right], \tag{10}$$

subject to the constraints (9), where $D_{ij}^k$ is the cost of assigning label $k$ to pixel $(i, j)$. The first term, $\alpha \sum_{(i', j') \in \mathcal{N}_{ij}} \omega_{i'j'}^{ij} (X_{i'j'}^k - X_{ij}^k)^2$, promotes smooth regions and controls their granularity.

Figure 2. Super-resolution example 1. Left: 4 samples of the 30 input low-resolution images. Right: the restored high-resolution image computed by the proposed multiplicative iteration.

Figure 3. Super-resolution example 2. Left: 4 samples of the 16 input low-resolution images. Right: the restored high-resolution image computed by the proposed multiplicative iteration.
Figure 4. Sample image labeling results using the MRF model solved by the proposed NNLS algorithm: (a) flowers; (b) segmented image; (c) Manhattan skyline; (d) segmented image.
Algorithm 1 NNLS algorithm for MRF image segmentation

while $\left\| (X^k)^{\text{new}} - (X^k)^{\text{old}} \right\| / \left\| (X^k)^{\text{old}} \right\| > tol$ do
  Update the probability measure field:
  $$(X_{ij}^k)^{\text{new}} = (X_{ij}^k)^{\text{old}} \cdot \frac{2 \alpha \sum_{(i', j') \in \mathcal{N}_{ij}} \omega_{i'j'}^{ij} X_{i'j'}^k + (D_{ij}^k)^- + \lambda_{ij}}{\alpha X_{ij}^k \left( \sum_{(i', j') \in \mathcal{N}_{ij}} \omega_{i'j'}^{ij} \right) + \alpha \sum_{(i', j') \in \mathcal{N}_{ij}} \omega_{i'j'}^{ij} X_{i'j'}^k + (D_{ij}^k)^+}$$
  Update the Lagrangian parameter:
  $$\lambda_{ij}^{\text{new}} = \lambda_{ij}^{\text{old}} \cdot \sum_k X_{ij}^k$$
end while
return $(X^k)^{\text{new}}$

The spatial smoothness is controlled by the positive parameter $\alpha$ and the weights $\omega$, which are chosen such that $\omega_{i'j'}^{ij} \approx 1$ if the neighbouring pixels $(i, j)$ and $(i', j')$ are likely to belong to the same class and $\omega_{i'j'}^{ij} \approx 0$ otherwise. In these experiments, $\omega$ is defined to be the cosine of the angle between two neighbouring pixels,

$$\omega_{i'j'}^{ij} = \frac{X_{ij}^T X_{i'j'}}{|X_{ij}| \cdot |X_{i'j'}|}.$$

The cost of labeling $k$ at each pixel $(i, j)$, $D_{ij}^k$, is trained with a Gaussian mixture model [25] using seeds labeled by the user. Given the sample mean $\mu^k$ and covariance $\Sigma^k$ for the seeds with labeling $k$,
$D_{ij}^k$ is computed as the Mahalanobis distance [17] between each pixel of the image and the seeds,

$$D_{ij}^k = \frac{1}{2} (X_{ij} - \mu^k)^T (\Sigma^k)^{-1} (X_{ij} - \mu^k) + \frac{1}{2} \log \det(\Sigma^k).$$

The KKT optimality conditions,

$$\alpha \sum_{(i', j') \in \mathcal{N}_{ij}} \omega_{i'j'}^{ij} \left( X_{ij}^k - X_{i'j'}^k \right) + D_{ij}^k - \lambda_{ij} = 0,$$

i.e.

$$\alpha X_{ij}^k \sum_{(i', j') \in \mathcal{N}_{ij}} \omega_{i'j'}^{ij} - \alpha \sum_{(i', j') \in \mathcal{N}_{ij}} \omega_{i'j'}^{ij} X_{i'j'}^k + D_{ij}^k - \lambda_{ij} = 0,$$

yield the two-step Algorithm 1. In the experiments, we implement our NNLS algorithm without explicitly constructing the matrix $Q$. Figure 4 shows the results of the labeled images.
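A compact sketch of one matrix-free sweep in the spirit of Algorithm 1 is given below. The image, label costs, and parameter values are synthetic placeholders, and a 4-neighbourhood with wrap-around borders (via `np.roll`) is used as a simplification of the neighbourhood $\mathcal{N}_{ij}$ in the paper; this is an illustration of the update structure, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, K, alpha = 24, 24, 3, 1.0
img = rng.random((m, n, 3))                  # RGB image (placeholder data)
Dcost = rng.random((m, n, K))                # label costs, e.g. from a GMM (placeholder)
X = np.full((m, n, K), 1.0 / K)              # probability measure field
lam = np.ones((m, n))                        # Lagrange multipliers lambda_ij

shifts = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # 4-neighbourhood for simplicity

def weights(img, s):
    """Cosine similarity between each pixel and its s-shifted neighbour."""
    nb = np.roll(img, s, axis=(0, 1))
    num = (img * nb).sum(-1)
    den = np.linalg.norm(img, axis=-1) * np.linalg.norm(nb, axis=-1) + 1e-12
    return num / den

W = [weights(img, s) for s in shifts]
Dm, Dp = np.maximum(-Dcost, 0.0), np.maximum(Dcost, 0.0)   # (D)^-, (D)^+

for _ in range(200):
    wsum = sum(W)[..., None]                               # sum_j omega_ij
    wX = sum(w[..., None] * np.roll(X, s, axis=(0, 1))     # sum_j omega_ij X_j
             for w, s in zip(W, shifts))
    num = 2.0 * alpha * wX + Dm + lam[..., None]
    den = alpha * X * wsum + alpha * wX + Dp
    X = X * num / den                                      # multiplicative step
    lam = lam * X.sum(-1)                                  # push sum_k X_ij^k -> 1
labels = X.argmax(-1)                                      # hard segmentation
```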
4. SUMMARY

In this paper, we presented a new gradient projection NNLS algorithm together with its convergence analysis. By expressing the KKT first-order optimality conditions as a ratio, we obtained a multiplicative iteration that can quickly solve large quadratic programs on low-cost parallel compute devices. The iteration converges monotonically from any positive initial guess. We demonstrated applications to image super-resolution and color image labelling. Our algorithm is also applicable to other optimization problems involving nonnegativity constraints. Future research includes comparing the performance of the new NNLS algorithm with other existing NNLS algorithms.
REFERENCES

1. G.E.B. Archer and D.M. Titterington. The iterative image space reconstruction algorithm as an alternative to the EM algorithm for solving positive linear inverse problems. Statistica Sinica, 5:77–96, 1995.
2. S. Bellavia, M. Macconi, and B. Morini. An interior point Newton-like method for non-negative least-squares problems with degenerate solution. Numerical Linear Algebra with Applications, 13(10):825–846, 2006.
3. M. Van Benthem and M. Keenan. Fast algorithm for the solution of large-scale non-negativity-constrained least squares problems. Journal of Chemometrics, 18(10):441–450, 2004.
4. M. Brand and D. Chen. Parallel quadratic programming for image processing. In Proceedings of the IEEE International Conference on Image Processing (ICIP), pages 2261–2264, September 2011.
5. R. Bro and S. De Jong. A fast non-negativity-constrained least squares algorithm. Journal of Chemometrics, 11(5):393–401, 1997.
6. D. Calvetti, G. Landi, L. Reichel, and F. Sgallari. Non-negativity and iterative methods for ill-posed problems. Inverse Problems, 20(6):1747, 2004.
7. D. Chen. Comparisons of multiframe super-resolution algorithms for pure translation motion. Master's thesis, Wake Forest University, Winston-Salem, NC, August 2008.
8. J. Chung, E. Haber, and J. Nagy. Numerical methods for coupled super-resolution. Inverse Problems, 22:1261–1272, 2006.
9. M.E. Daube-Witherspoon and G. Muehllehner. An iterative image space reconstruction algorithm suitable for volume ECT. IEEE Transactions on Medical Imaging, 5(2):61–66, June 1986.
10. P.P.B. Eggermont. Multiplicative iterative algorithms for convex programming. Linear Algebra and its Applications, 130:25–42, 1990.
11. S. Farsiu, D. Robinson, M. Elad, and P. Milanfar. Fast and robust multi-frame super-resolution. IEEE Transactions on Image Processing, 13:1327–1344, 2003.
12. J. Han, L. Han, M. Neumann, and U. Prasad. On the rate of convergence of the image space reconstruction algorithm. Operators and Matrices, 3(1):41–58, 2009.
13. R.A. Horn and C.R. Johnson. Matrix Analysis. Cambridge University Press, 1990.
14. D. Kim, S. Sra, and I. Dhillon. A new projected quasi-Newton approach for the non-negative least squares problem. Technical report, The University of Texas at Austin, 2006.
15. C. Lawson and R. Hanson. Solving Least Squares Problems. SIAM, 3rd edition, 1995.
16. D. Lee and S. Seung. Algorithms for non-negative matrix factorization. In Advances in Neural Information Processing Systems 13, pages 556–562. MIT Press, April 2001.
17. P. Mahalanobis. On the generalised distance in statistics. In Proceedings of the National Institute of Sciences of India, volume 2, number 1, pages 49–55, April 1936.
18. J. Nagy and Z. Strakoš. Enforcing nonnegativity in image reconstruction algorithms. In SPIE Conference Series, volume 4121, pages 182–190, October 2000.
19. J. Nocedal and S. Wright. Numerical Optimization. Springer, New York, 2nd edition, 2006.
20. A. De Pierro. On the convergence of the iterative image space reconstruction algorithm for volume ECT. IEEE Transactions on Medical Imaging, 6(2):174–175, June 1987.
21. M. Rivera, O. Dalmau, and J. Tago. Image segmentation by convex quadratic programming. In Proceedings of the International Conference on Pattern Recognition (ICPR), pages 1–5, 2008.
22. W. Rudin. Principles of Mathematical Analysis. International Series in Pure and Applied Mathematics. McGraw-Hill Book Co., New York, 3rd edition, 1976.
23. F. Sha, Y. Lin, L. Saul, and D. Lee. Multiplicative updates for nonnegative quadratic programming. Neural Computation, 19(8):2004–2031, 2007.
24. F. Sha, L. Saul, and D. Lee. Multiplicative updates for nonnegative quadratic programming in support vector machines. In Advances in Neural Information Processing Systems 15, pages 1041–1048. MIT Press, 2002.
25. L. Xu and M. Jordan. On convergence properties of the EM algorithm for Gaussian mixtures. Neural Computation, 8:129–151, 1995.
26. R. Zdunek and A. Cichocki. Nonnegative matrix factorization with quadratic programming. Neurocomputing, 71:2309–2320, June 2008.