Jesse L. Barlow
Pennsylvania State University
Publications
Featured research published by Jesse L. Barlow.
SIAM Journal on Numerical Analysis | 1990
Jesse L. Barlow; James Demmel
When computing eigenvalues of symmetric matrices and singular values of general matrices in finite precision arithmetic, we generally expect to compute them only with an error bound proportional to the product of the machine precision and the norm of the matrix. In particular, we do not expect to compute tiny eigenvalues and singular values to high relative accuracy. There are some important classes of matrices where we can do much better, including bidiagonal matrices, scaled diagonally dominant matrices, and scaled diagonally dominant definite pencils. These classes include many graded matrices and all symmetric positive definite matrices that can be consistently ordered (and thus all symmetric positive definite tridiagonal matrices). In particular, the singular values and eigenvalues are determined to high relative precision, independent of their magnitudes, and there are algorithms to compute them this accurately. The eigenvectors are also determined more accurately than for general matrices, and may be computed more accurately as well. This work extends results of Kahan and Demmel for bidiagonal and tridiagonal matrices.
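The key property described above — small relative perturbations of a bidiagonal matrix's entries perturb every singular value, however tiny, only by a small relative amount — can be observed numerically. A minimal sketch (the graded matrix, perturbation size, and tolerance below are illustrative choices of mine, not taken from the paper):

```python
import numpy as np

n = 6
# A graded upper-bidiagonal matrix whose entries span five orders of magnitude.
d = 10.0 ** -np.arange(n)        # diagonal: 1, 1e-1, ..., 1e-5
e = 10.0 ** -np.arange(1, n)     # superdiagonal
B = np.diag(d) + np.diag(e, k=1)
s = np.linalg.svd(B, compute_uv=False)

# Perturb every nonzero entry by a relative amount of order eps.
eps = 1e-8
rng = np.random.default_rng(0)
Bp = B * (1.0 + eps * rng.uniform(-1.0, 1.0, size=B.shape))
sp = np.linalg.svd(Bp, compute_uv=False)

# Even the tiniest singular value moves only by O(eps) *relatively*,
# not by O(eps * ||B||) as absolute perturbation theory would allow.
rel = np.abs(sp - s) / s
print(rel.max())
```

For an n-by-n bidiagonal matrix, entrywise relative perturbations of size eps move each singular value relatively by at most about (2n-1)·eps, which is what the printed maximum reflects here.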
SIAM Journal on Scientific Computing | 2005
Haoying Fu; Michael K. Ng; Mila Nikolova; Jesse L. Barlow
Image restoration problems are often solved by finding the minimizer of a suitable objective function. Usually this function consists of a data-fitting term and a regularization term. For the least squares solution, both the data-fitting and the regularization terms are in the $\ell_2$ norm. In this paper, we consider the least absolute deviation (LAD) solution and the least mixed norm (LMN) solution. For the LAD solution, both the data-fitting and the regularization terms are in the $\ell_1$ norm. For the LMN solution, the regularization term is in the $\ell_1$ norm but the data-fitting term is in the $\ell_2$ norm. Since images often have nonnegative intensity values, the proposed algorithms provide the option of taking into account the nonnegativity constraint. The LMN and LAD solutions are formulated as the solution to a linear or quadratic programming problem which is solved by interior point methods. At each iteration of the interior point method, a structured linear system must be solved. The preconditioned conjugate gradient method with factorized sparse inverse preconditioners is employed to solve such structured inner systems. Experimental results are used to demonstrate the effectiveness of our approach. We also show the quality of the restored images, using the minimization of mixed
International Conference on Machine Learning | 2006
Xin Yang; Haoying Fu; Hongyuan Zha; Jesse L. Barlow
Computing | 1985
Jesse L. Barlow; Erwin H. Bareiss
SIAM Journal on Scientific and Statistical Computing | 1988
Jesse L. Barlow; Nancy Nichols; Robert J. Plemmons
SIAM Journal on Numerical Analysis | 1988
Jesse L. Barlow
SIAM Journal on Scientific and Statistical Computing | 1987
Jesse L. Barlow; Ilse C. F. Ipsen
Bit Numerical Mathematics | 1996
Jesse L. Barlow; Peter A. Yoon; Hongyuan Zha
Linear Algebra and its Applications | 2000
Jesse L. Barlow; Ivan Slapničar
SIAM Journal on Scientific and Statistical Computing | 1988
Jesse L. Barlow; Susan L. Handy
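The LAD formulation in the 2005 abstract above — data-fitting and regularization both in the $\ell_1$ norm — reduces to a linear program via the standard lift $|Ax-b| \le u$, $|x| \le v$. A minimal sketch using SciPy's `linprog`; the toy blur matrix, spike signal, and value of the regularization parameter are my own illustrative choices, and the paper's nonnegativity option and factorized-sparse-inverse PCG inner solver are omitted:

```python
import numpy as np
from scipy.optimize import linprog

# Toy 1-D deblurring problem: A is a tridiagonal blur, x_true a single spike.
n = 8
A = np.eye(n) + 0.5 * np.eye(n, k=1) + 0.5 * np.eye(n, k=-1)
x_true = np.zeros(n)
x_true[3] = 1.0
rng = np.random.default_rng(1)
b = A @ x_true + 0.001 * rng.standard_normal(n)

# LAD with l1 regularization: min_x ||A x - b||_1 + lam * ||x||_1.
# LP lift over z = [x, u, v]: minimize 1'u + lam * 1'v
# subject to |A x - b| <= u and |x| <= v.
lam = 0.1
I = np.eye(n)
Z = np.zeros((n, n))
c = np.concatenate([np.zeros(n), np.ones(n), lam * np.ones(n)])
A_ub = np.block([
    [ A, -I,  Z],   #   (A x - b) <= u
    [-A, -I,  Z],   #  -(A x - b) <= u
    [ I,  Z, -I],   #    x <= v
    [-I,  Z, -I],   #   -x <= v
])
b_ub = np.concatenate([b, -b, np.zeros(n), np.zeros(n)])
bounds = [(None, None)] * n + [(0, None)] * (2 * n)  # x free; u, v >= 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x_lad = res.x[:n]
print(np.round(x_lad, 3))
```

The recovered `x_lad` concentrates at the spike's position. The dense LP here stands in for the structured, preconditioned interior point machinery the paper actually develops for image-sized problems.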