Ariela Sofer
George Mason University
Publications
Featured research published by Ariela Sofer.
Archive | 2009
Igor Griva; Stephen G. Nash; Ariela Sofer
Preface
Part I. Basics: 1. Optimization models; 2. Fundamentals of optimization; 3. Representation of linear constraints
Part II. Linear Programming: 4. Geometry of linear programming; 5. The simplex method; 6. Duality and sensitivity; 7. Enhancements of the simplex method; 8. Network problems; 9. Computational complexity of linear programming; 10. Interior-point methods of linear programming
Part III. Unconstrained Optimization: 11. Basics of unconstrained optimization; 12. Methods for unconstrained optimization; 13. Low-storage methods for unconstrained problems
Part IV. Nonlinear Optimization: 14. Optimality conditions for constrained problems; 15. Feasible-point methods; 16. Penalty and barrier methods
Part V. Appendices: Appendix A. Topics from linear algebra; Appendix B. Other fundamentals; Appendix C. Software
Bibliography. Index.
IEEE Transactions on Medical Imaging | 2000
Calvin A. Johnson; Jurgen Seidel; Ariela Sofer
Interior-point methods have been successfully applied to a wide variety of linear and nonlinear programming applications. This paper presents a class of algorithms, based on path-following interior-point methodology, for performing regularized maximum-likelihood (ML) reconstructions on three-dimensional (3-D) emission tomography data. The algorithms solve a sequence of subproblems that converge to the regularized maximum-likelihood solution from the interior of the feasible region (the nonnegative orthant). The authors propose two methods: a primal method, which updates only the primal image variables, and a primal-dual method, which simultaneously updates the primal variables and the Lagrange multipliers. A parallel implementation permits the interior-point methods to scale to very large reconstruction problems. Termination is based on well-defined convergence measures, namely the Karush-Kuhn-Tucker first-order necessary conditions for optimality. The authors demonstrate the rapid convergence of the path-following interior-point methods using both data from a small-animal scanner and Monte Carlo simulated data. The proposed methods can readily be applied to solve the regularized, weighted least-squares reconstruction problem.
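The primal variant of this methodology is easy to sketch. Below is a minimal, hypothetical Python illustration of a path-following barrier iteration for a generic smooth objective under nonnegativity constraints: each subproblem adds the barrier term -mu * sum(log x_j), mu is reduced between subproblems, and termination tests an approximate Karush-Kuhn-Tucker condition using the dual estimate z = mu/x. The objective, tolerances, and damped-Newton inner loop are all illustrative assumptions, not the authors' parallel reconstruction code.

```python
import numpy as np

def path_following(grad, hess, x0, mu0=1.0, sigma=0.1, tol=1e-6, max_outer=30):
    """Primal path-following sketch for  min f(x)  subject to  x >= 0.

    Each outer iteration approximately minimizes the barrier subproblem
        phi_mu(x) = f(x) - mu * sum(log(x))
    by damped Newton steps, then reduces mu.  The dual estimate z = mu/x
    gives an approximate KKT test: stationarity grad f(x) - z ~ 0 and
    complementarity x_j * z_j = mu -> 0 along the central path.
    """
    x, mu = x0.astype(float), mu0
    for _ in range(max_outer):
        for _ in range(50):                      # damped Newton on phi_mu
            g = grad(x) - mu / x                 # gradient of the subproblem
            if np.linalg.norm(g) <= 0.1 * mu:    # loose inner tolerance
                break
            H = hess(x) + np.diag(mu / x**2)     # Hessian of the subproblem
            d = np.linalg.solve(H, -g)
            alpha, neg = 1.0, d < 0              # fraction-to-boundary damping
            if neg.any():
                alpha = min(1.0, 0.95 * np.min(-x[neg] / d[neg]))
            x = x + alpha * d
        z = mu / x                               # Lagrange multiplier estimate
        if np.linalg.norm(grad(x) - z) <= tol and mu <= tol:
            break                                # approximate KKT point reached
        mu *= sigma                              # continue along the central path
    return x, z

# Toy usage: a convex quadratic whose unconstrained minimizer has a negative
# first component, so the bound x_1 >= 0 is active at the solution.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
x, z = path_following(lambda x: Q @ x + b, lambda x: Q, x0=np.ones(2))
print(x, z)   # x -> [0, 2], with multiplier z -> [2, 0]
```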
INFORMS Journal on Computing | 1993
Stephen G. Nash; Ariela Sofer
A logarithmic barrier method is applied to the solution of a nonlinear programming problem with inequality constraints. An approximation to the Newton direction is derived that avoids the ill conditioning normally associated with barrier methods. This approximation can be used within a truncated-Newton method, and hence is suitable for large-scale problems; it can also be used in the context of a parallel algorithm. Enhancements to the basic barrier method are described that improve its efficiency and reliability. The resulting method can be shown to be a primal-dual method when the objective function is convex and all of the constraints are linear. Computational experiments are presented in which the method is applied to 1000-variable problems with bound constraints.
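The ill conditioning referred to here can be made precise by inspecting the barrier Hessian; the following standard computation (our summary, not an excerpt from the paper) shows where it comes from.

```latex
% Logarithmic barrier for  min f(x)  subject to  c_i(x) >= 0:
\[
  B_\mu(x) \;=\; f(x) \;-\; \mu \sum_i \log c_i(x),
\]
% with Hessian
\[
  \nabla^2 B_\mu(x) \;=\; \nabla^2 f(x)
    \;-\; \mu \sum_i \frac{\nabla^2 c_i(x)}{c_i(x)}
    \;+\; \mu \sum_i \frac{\nabla c_i(x)\,\nabla c_i(x)^{T}}{c_i(x)^{2}}.
\]
% Near the solution, an active constraint satisfies c_i(x) \approx \mu/\lambda_i^*,
% so the rank-one terms grow like (\lambda_i^*)^2/\mu and the condition number
% of \nabla^2 B_\mu is unbounded as \mu \to 0; a stable approximation to the
% Newton direction must avoid solving directly with this matrix.
```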
Archive | 1994
Stephen G. Nash; Roman A. Polyak; Ariela Sofer
When a classical barrier method is applied to the solution of a nonlinear programming problem with inequality constraints, the Hessian matrix of the barrier function becomes increasingly ill-conditioned as the solution is approached. As a result, it may be desirable to consider alternative numerical algorithms. We compare the performance of two methods motivated by barrier functions. The first is a stabilized form of the classical barrier method, where a numerically stable approximation to the Newton direction is used when the barrier parameter is small. The second is a modified barrier method where a barrier function is applied to a shifted form of the problem, and the resulting barrier terms are scaled by estimates of the optimal Lagrange multipliers. The condition number of the Hessian matrix of the resulting modified barrier function remains bounded as the solution to the constrained optimization problem is approached. Both of these techniques can be used in the context of a truncated-Newton method, and hence can be applied to large problems, as well as on parallel computers. In this paper, both techniques are applied to problems with bound constraints, and we compare their practical behavior.
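For concreteness, the two barrier functions being compared can be written down explicitly; the shifted, multiplier-scaled form below follows Polyak's modified barrier construction in our own notation, which may differ from the paper's.

```latex
% Classical barrier for  min f(x)  subject to  c_i(x) >= 0:
\[
  B_\mu(x) \;=\; f(x) \;-\; \mu \sum_i \log c_i(x).
\]
% Modified barrier: shift each constraint and scale each term by a
% Lagrange multiplier estimate \lambda_i
% (note that c_i(x) >= 0 is equivalent to 1 + c_i(x)/\mu >= 1):
\[
  M_\mu(x,\lambda) \;=\; f(x) \;-\; \mu \sum_i \lambda_i
      \log\!\Bigl(1 + \frac{c_i(x)}{\mu}\Bigr).
\]
% With \lambda near \lambda^*, the shifted arguments stay bounded away from
% zero at the solution, so the condition number of \nabla^2 M_\mu remains
% bounded as the constrained solution is approached.
```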
Operations Research Letters | 1990
Stephen G. Nash; Ariela Sofer
Truncated-Newton methods for nonlinear optimization compute a search direction by approximately solving the Newton equations, typically via the conjugate-gradient algorithm. The search direction is usually assessed using the norm of the residual. This note shows that the norm of the residual can be an arbitrarily poor predictor of a good search direction.
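The claim is easy to reproduce numerically. In the small numpy experiment below (our construction, not an example from the note), two approximate solutions of the Newton equations H d = -g are compared: the one with the larger residual norm is nearly parallel to the exact Newton direction, while the one with a residual six orders of magnitude smaller is orthogonal to it.

```python
import numpy as np

# Ill-conditioned Newton system  H d = -g.
H = np.diag([1.0, 1e-6])
g = np.array([1.0, 1e-6])
d_exact = np.linalg.solve(H, -g)            # exact Newton direction: [-1, -1]

def report(r, label):
    # Direction whose residual is r, i.e.  H d + g = r,  so  d = H^{-1}(r - g).
    d = np.linalg.solve(H, r - g)
    cos = d @ d_exact / (np.linalg.norm(d) * np.linalg.norm(d_exact))
    print(f"{label}: ||r|| = {np.linalg.norm(r):.1e}, cos(angle to Newton dir) = {cos:.3f}")

# Same system, two residuals aligned with different eigenvectors of H.
# The direction error is H^{-1} r, so a residual along the small-eigenvalue
# eigenvector is amplified by a factor of 1e6.
report(np.array([0.5, 0.0]),  "large residual")   # cos ~ 0.95: good direction
report(np.array([0.0, 2e-6]), "tiny residual")    # cos = 0.00: poor direction
```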
Mathematical Programming | 1989
Stephen G. Nash; Ariela Sofer
Truncated-Newton methods are a class of optimization methods suitable for large-scale problems. At each iteration, a search direction is obtained by approximately solving the Newton equations using an iterative method. In this way, matrix costs and second-derivative calculations are avoided, removing the major drawbacks of Newton's method. In this form, the algorithms are well suited to vectorization. Further improvements in performance are sought by using block iterative methods to compute the search direction; in particular, conjugate-gradient-type methods are considered. Computational experience on a hypercube computer is reported, indicating that on some problems the improvement in performance can exceed that attributable to parallelism alone.
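A minimal serial sketch of the underlying truncated-Newton step may help fix ideas. This is the generic Newton-CG scheme the abstract refers to, not the block or hypercube-parallel variants studied in the paper; the finite-difference Hessian-vector product, truncation rule, and test problem are illustrative assumptions.

```python
import numpy as np

def hessvec(grad, x, v, eps=1e-7):
    """Finite-difference Hessian-vector product: H v ~ (g(x + eps v) - g(x)) / eps,
    so neither second derivatives nor an explicit matrix are ever formed."""
    return (grad(x + eps * v) - grad(x)) / eps

def truncated_newton_step(grad, x, eta=0.1, max_cg=50):
    """Approximately solve  H d = -g  by conjugate gradients, truncating when
    the residual is reduced by the factor eta (or on negative curvature)."""
    g = grad(x)
    d = np.zeros_like(x)
    r = -g.copy()                         # residual of H d + g at d = 0
    p = r.copy()
    rr = r @ r
    for _ in range(max_cg):
        Hp = hessvec(grad, x, p)
        pHp = p @ Hp
        if pHp <= 0:                      # negative curvature: stop the inner solve
            return -g if d @ g >= 0 else d
        alpha = rr / pHp
        d += alpha * p
        r -= alpha * Hp
        rr_new = r @ r
        if np.sqrt(rr_new) <= eta * np.linalg.norm(g):   # truncation test
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return d

# Usage on a strictly convex quadratic f(x) = 0.5 x^T A x - b^T x:
A = np.diag(np.linspace(1.0, 100.0, 20))
b = np.ones(20)
grad = lambda x: A @ x - b
x = np.zeros(20)
for _ in range(10):
    x = x + truncated_newton_step(grad, x)
print(np.linalg.norm(grad(x)))            # near zero: x ~ A^{-1} b
```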
Symposium on Frontiers of Massively Parallel Computation | 1999
Calvin A. Johnson; Ariela Sofer
In the tomographic imaging problem, images are reconstructed from a set of measured projections. Iterative reconstruction methods are computationally intensive alternatives to the more traditional Fourier-based methods. Despite their high cost, the popularity of these methods is increasing because of the advantages they offer. Although numerous iterative methods have been proposed over the years, all of these methods can be shown to have a similar computational structure. This paper presents a parallel algorithm that we originally developed for performing the expectation-maximization (EM) algorithm in emission tomography. The algorithm is capable of exploiting the sparsity and symmetries of the model in a computationally efficient manner. Our parallelization scheme is based upon decomposition of the measurement-space vectors. We demonstrate that such a parallelization scheme is applicable to the vast majority of iterative reconstruction algorithms proposed to date.
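As a concrete instance of that shared computational structure, here is a minimal serial ML-EM iteration in numpy; the system matrix, sizes, and data below are placeholders. The paper's parallel scheme partitions the measurement-space vectors (the rows of A and the projection data) across processors, so each of the two matrix-vector products in the loop splits naturally; this sketch shows only the serial arithmetic.

```python
import numpy as np

def mlem(A, y, n_iter=200, eps=1e-12):
    """ML expectation-maximization for emission tomography: y ~ Poisson(A x),
    x >= 0.  Each iteration needs one forward projection (A x) and one
    backprojection (A^T ...), the structure shared by most iterative methods."""
    m, n = A.shape
    sens = A.T @ np.ones(m)                     # sensitivity image  A^T 1
    x = np.ones(n)                              # strictly positive start
    for _ in range(n_iter):
        proj = A @ x                            # forward projection
        ratio = y / np.maximum(proj, eps)       # measurement-space ratio
        x *= (A.T @ ratio) / np.maximum(sens, eps)   # multiplicative update
    return x

# Toy usage with a random nonnegative system matrix and noiseless data.
rng = np.random.default_rng(0)
A = rng.random((200, 50))
x_true = rng.random(50)
y = A @ x_true
x_rec = mlem(A, y)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))  # relative error
```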
IEEE Transactions on Reliability | 1991
Ariela Sofer; Douglas R. Miller
Annals of Operations Research | 2003
Ariela Sofer; Jianchao Zeng; Seong Ki Mun
SIAM Journal on Matrix Analysis and Applications | 1996
Stephen G. Nash; Ariela Sofer
We study preconditioning strategies for linear systems with positive-definite matrices of the form $Z^{T}GZ$, where …
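Systems of this reduced form arise, for example, when a null-space basis Z eliminates linear equality constraints. As a generic illustration (not the preconditioning strategies proposed in the paper), the sketch below runs preconditioned conjugate gradients on $Z^{T}GZ$ without ever forming the reduced matrix, using an assumed diagonal preconditioner.

```python
import numpy as np

def reduced_pcg(Z, G, b, precond, tol=1e-10, max_iter=200):
    """Preconditioned CG for  (Z^T G Z) u = b,  applying Z, G, and Z^T as
    operators so the reduced matrix is never formed explicitly."""
    apply_A = lambda v: Z.T @ (G @ (Z @ v))
    u = np.zeros_like(b)
    r = b - apply_A(u)
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        u += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return u

# Toy usage: G symmetric, shifted so the reduced matrix Z^T G Z is safely
# positive definite on the orthonormal basis Z.
rng = np.random.default_rng(1)
Z = np.linalg.qr(rng.standard_normal((30, 10)))[0]
G = rng.standard_normal((30, 30))
G = 0.5 * (G + G.T) + 10 * np.eye(30)
b = rng.standard_normal(10)
diag = np.array([Z[:, j] @ (G @ Z[:, j]) for j in range(10)])  # diag(Z^T G Z)
u = reduced_pcg(Z, G, b, precond=lambda r: r / diag)
print(np.linalg.norm(Z.T @ (G @ (Z @ u)) - b))                 # small residual
```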