Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Steven J. Benson is active.

Publication


Featured research published by Steven J. Benson.


SIAM Journal on Optimization | 1999

Solving Large-Scale Sparse Semidefinite Programs for Combinatorial Optimization

Steven J. Benson; Yinyu Ye; Xiong Zhang

We present a dual-scaling interior-point algorithm and show how it exploits the structure and sparsity of some large-scale problems. We solve the positive semidefinite relaxation of combinatorial and quadratic optimization problems subject to boolean constraints. We report the first computational results of interior-point algorithms for approximating maximum cut semidefinite programs with dimension up to 3,000.
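For context, the positive semidefinite relaxation in question is, in its standard form (the paper's notation may differ), the relaxation of the boolean quadratic program behind maximum cut:

```latex
\max_{x \in \{-1,+1\}^n} \tfrac{1}{4}\, x^{\mathsf T} L x
\quad \longrightarrow \quad
\max_{X \succeq 0} \; \tfrac{1}{4}\, \langle L, X \rangle
\quad \text{s.t.} \quad \operatorname{diag}(X) = e ,
```

where L is the graph Laplacian and e the all-ones vector. Each equality constraint involves a single diagonal entry of X, which is the kind of structure and sparsity the dual-scaling algorithm is designed to exploit.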


Parallel Computing | 2002

Parallel components for PDEs and optimization: some issues and experiences

Boyana Norris; Satish Balay; Steven J. Benson; Lori A. Freitag; Paul D. Hovland; Lois Curfman McInnes; Barry F. Smith

High-performance simulations in computational science often involve the combined software contributions of multidisciplinary teams of scientists, engineers, mathematicians, and computer scientists. One goal of component-based software engineering in large-scale scientific simulations is to help manage such complexity by enabling better interoperability among codes developed by different groups. This paper discusses recent work on building component interfaces and implementations in parallel numerical toolkits for mesh manipulations, discretization, linear algebra, and optimization. We consider several motivating applications involving partial differential equations and unconstrained minimization to demonstrate this approach and evaluate performance.
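A minimal sketch of the component idea in plain C, with hypothetical names (the paper uses Common Component Architecture interfaces, not this ad hoc one): the application is written against an abstract port, so a solver implementation developed by another group can be substituted without touching the calling code.

```c
/* Hypothetical illustration only; not the CCA interfaces used in the paper. */
#include <stdio.h>

typedef struct LinearSolverPort {
    void *impl;                                  /* opaque implementation state */
    int (*solve)(void *impl, int n,
                 const double *rhs, double *x);  /* solve A x = rhs             */
} LinearSolverPort;

/* One concrete provider: a toy solver for the diagonal system A = 2 I. */
static int diag_solve(void *impl, int n, const double *rhs, double *x) {
    (void)impl;
    for (int i = 0; i < n; ++i) x[i] = rhs[i] / 2.0;
    return 0;
}

/* Application code depends only on the abstract port, never on the provider. */
static int run_step(LinearSolverPort *p, int n, const double *rhs, double *x) {
    return p->solve(p->impl, n, rhs, x);
}

int main(void) {
    LinearSolverPort port = { NULL, diag_solve };
    const double rhs[3] = { 2.0, 4.0, 6.0 };
    double x[3];
    if (run_step(&port, 3, rhs, x) == 0)
        printf("x = [%g %g %g]\n", x[0], x[1], x[2]);
    return 0;
}
```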


Archive | 2006

Parallel PDE-Based Simulations Using the Common Component Architecture

Lois Curfman McInnes; Benjamin A. Allan; Robert C. Armstrong; Steven J. Benson; David E. Bernholdt; Tamara L. Dahlgren; Lori Freitag Diachin; Manojkumar Krishnan; James Arthur Kohl; J. Walter Larson; Sophia Lefantzi; Jarek Nieplocha; Boyana Norris; Steven G. Parker; Jaideep Ray; Shujia Zhou

The complexity of parallel PDE-based simulations continues to increase as multimodel, multiphysics, and multi-institutional projects become widespread. A goal of component-based software engineering in such large-scale simulations is to help manage this complexity by enabling better interoperability among various codes that have been independently developed by different groups. The Common Component Architecture (CCA) Forum is defining a component architecture specification to address the challenges of high-performance scientific computing. In addition, several execution frameworks, supporting infrastructure, and general-purpose components are being developed. Furthermore, this group is collaborating with others in the high-performance computing community to design suites of domain-specific component interface specifications and underlying implementations.


Journal of Computational Chemistry | 2004

Component-based integration of chemistry and optimization software

Joseph P. Kenny; Steven J. Benson; Yuri Alexeev; Jason Sarich; Curtis L. Janssen; Lois Curfman McInnes; Manojkumar Krishnan; Jarek Nieplocha; Elizabeth Jurrus; Carl Fahlstrom; Theresa L. Windus

Typical scientific software designs make rigid assumptions regarding programming language and data structures, frustrating software interoperability and scientific collaboration. Component‐based software engineering is an emerging approach to managing the increasing complexity of scientific software. Component technology facilitates code interoperability and reuse. Through the adoption of methodology and tools developed by the Common Component Architecture Forum, we have developed a component architecture for molecular structure optimization. Using the NWChem and Massively Parallel Quantum Chemistry packages, we have produced chemistry components that provide capacity for energy and energy derivative evaluation. We have constructed geometry optimization applications by integrating the Toolkit for Advanced Optimization, Portable Extensible Toolkit for Scientific Computation, and Global Arrays packages, which provide optimization and linear algebra capabilities. We present a brief overview of the component development process and a description of abstract interfaces for chemical optimizations. The components conforming to these abstract interfaces allow the construction of applications using different chemistry and mathematics packages interchangeably. Initial numerical results for the component software demonstrate good performance, and highlight potential research enabled by this platform.
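The paper's abstract interfaces are not reproduced here; the following hypothetical C sketch shows the same idea on a toy scale: an energy/gradient evaluator behind an abstract interface, so any optimizer can drive any chemistry backend interchangeably.

```c
/* Hypothetical illustration only; names are invented, not the paper's CCA
 * chemistry interfaces.  A "model evaluator" exposes energy and gradient. */
#include <stdio.h>

typedef struct ModelEvaluator {
    void *state;                                            /* backend data */
    double (*energy)(void *state, const double *x, int n);  /* E(x)         */
    void (*gradient)(void *state, const double *x,
                     double *g, int n);                     /* dE/dx        */
} ModelEvaluator;

/* Toy backend standing in for a chemistry package: E(x) = sum_i (x_i - 1)^2. */
static double toy_energy(void *s, const double *x, int n) {
    (void)s;
    double e = 0.0;
    for (int i = 0; i < n; ++i) e += (x[i] - 1.0) * (x[i] - 1.0);
    return e;
}
static void toy_gradient(void *s, const double *x, double *g, int n) {
    (void)s;
    for (int i = 0; i < n; ++i) g[i] = 2.0 * (x[i] - 1.0);
}

/* A generic optimizer that sees only the abstract interface. */
static void steepest_descent(ModelEvaluator *m, double *x, int n,
                             double step, int iters) {
    double g[16];                        /* assumes n <= 16 for this toy     */
    for (int k = 0; k < iters; ++k) {
        m->gradient(m->state, x, g, n);
        for (int i = 0; i < n; ++i) x[i] -= step * g[i];
    }
}

int main(void) {
    ModelEvaluator m = { NULL, toy_energy, toy_gradient };
    double x[2] = { 5.0, -3.0 };
    steepest_descent(&m, x, 2, 0.25, 50);
    printf("E = %g at x = [%g %g]\n", m.energy(m.state, x, 2), x[0], x[1]);
    return 0;
}
```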


ACM Transactions on Mathematical Software | 2001

A case study in the performance and scalability of optimization algorithms

Steven J. Benson; Lois Curfman McInnes; Jorge J. Moré

We analyze the performance and scalability of algorithms for the solution of large optimization problems on high-performance parallel architectures. Our case study uses the GPCG (gradient projection, conjugate gradient) algorithm for solving bound-constrained convex quadratic problems. Our implementation of the GPCG algorithm within the Toolkit for Advanced Optimization (TAO) is available for a wide range of high-performance architectures and has been tested on problems with over 2.5 million variables. We analyze the performance as a function of the number of variables, the number of free variables, and the preconditioner. In addition, we discuss how the software design facilitates algorithmic comparisons.
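For reference, GPCG targets bound-constrained convex quadratic programs, which in standard notation (possibly differing from the paper's) read

```latex
\min_{x \in \mathbb{R}^n} \; q(x) = \tfrac{1}{2}\, x^{\mathsf T} A x - b^{\mathsf T} x
\qquad \text{subject to} \qquad l \le x \le u ,
```

with A symmetric positive semidefinite. Gradient projection steps $x_{k+1} = P\bigl(x_k - \alpha_k \nabla q(x_k)\bigr)$, where $[P(x)]_i = \max\{l_i, \min\{x_i, u_i\}\}$, identify the set of free variables, and conjugate gradient iterations are then applied to the quadratic restricted to that face.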


ACM Transactions on Mathematical Software | 2008

Algorithm 875: DSDP5—software for semidefinite programming

Steven J. Benson; Yinyu Ye

DSDP implements the dual-scaling algorithm for semidefinite programming. The source code for this interior-point algorithm, written entirely in ANSI C, is freely available under an open source license. The solver can be used as a subroutine library, as a function within the Matlab environment, or as an executable that reads and writes to data files. Initiated in 1997, DSDP has developed into an efficient and robust general-purpose solver for semidefinite programming. Its features include a convergence proof with polynomially bounded worst-case complexity, primal and dual feasible solutions when they exist, certificates of infeasibility when solutions do not exist, initial points that can be feasible or infeasible, relatively low memory requirements for an interior-point method, sparse and low-rank data structures, extensibility that allows applications to customize the solver and improve its performance, a subroutine library that enables it to be linked to larger applications, scalable performance for large problems on parallel architectures, and a well-documented interface and examples of its use. The package has been used in many applications and tested for efficiency, robustness, and ease of use.
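For reference, DSDP addresses semidefinite programs in the standard primal-dual form (sign conventions may differ slightly from the DSDP documentation):

```latex
\begin{aligned}
\text{(P)}\quad & \min_{X \succeq 0} \; \langle C, X \rangle
  && \text{s.t. } \langle A_i, X \rangle = b_i, \quad i = 1, \dots, m, \\
\text{(D)}\quad & \max_{y \in \mathbb{R}^m} \; b^{\mathsf T} y
  && \text{s.t. } S = C - \sum_{i=1}^{m} y_i A_i \succeq 0 .
\end{aligned}
```

The dual-scaling method works with dual iterates $(y, S)$ and recovers an approximate primal $X$ from them, which is part of why its memory requirements are relatively low for an interior-point method.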


Archive | 2001

Approximating Maximum Stable Set and Minimum Graph Coloring Problems with the Positive Semidefinite Relaxation

Steven J. Benson; Yinyu Ye

We compute approximate solutions to the maximum stable set problem and the minimum graph coloring problem using a positive semidefinite relaxation. The positive semidefinite programs are solved using an implementation of the dual scaling algorithm that takes advantage of the sparsity inherent in most graphs and the structure inherent in the problem formulation. From the solution to the relaxation, we apply a randomized algorithm to find approximate maximum stable sets and a modification of a popular heuristic to find graph colorings. We obtained high quality answers for graphs with over 1000 vertices and over 6000 edges.
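A standard positive semidefinite relaxation of the maximum stable set problem is the Lovász theta function (the paper's formulation may differ in detail):

```latex
\vartheta(G) \;=\; \max_{X \in \mathbb{S}^n} \; \langle J, X \rangle
\quad \text{s.t.} \quad \operatorname{tr}(X) = 1, \;\;
X_{ij} = 0 \;\; \forall\, (i,j) \in E, \;\; X \succeq 0 ,
```

where J is the all-ones matrix. It satisfies $\alpha(G) \le \vartheta(G) \le \chi(\bar G)$, so the same relaxation yields bounds for both stable sets and colorings (of the complement graph), and the zero-pattern constraints keep the data matrices sparse for sparse graphs.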


Optimization Methods & Software | 2006

Flexible complementarity solvers for large-scale applications

Steven J. Benson; Todd S. Munson

Discretizations of infinite-dimensional variational inequalities lead to linear and nonlinear complementarity problems with many degrees of freedom. To solve these problems in a parallel computing environment, we propose two active-set methods that solve only one linear system of equations per iteration. The linear solver, preconditioner, and matrix structures can be chosen by the user for a particular application to achieve high parallel performance. The parallel scalability of these methods is demonstrated for some discretizations of infinite-dimensional variational inequalities.
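For reference, the complementarity problems produced by such discretizations have the standard form: find $x \in \mathbb{R}^n$ such that

```latex
x \ge 0, \qquad F(x) \ge 0, \qquad x^{\mathsf T} F(x) = 0 ,
```

that is, $x_i \ge 0$, $F_i(x) \ge 0$, and $x_i F_i(x) = 0$ for every component $i$, with $F(x) = Mx + q$ in the linear case. An active-set method fixes a guess of which components vanish at the solution and solves a single linear system in the remaining variables, which is the sense in which these methods solve only one linear system per iteration.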


ACM Transactions on Mathematical Software | 2007

Using the GA and TAO toolkits for solving large-scale optimization problems on parallel computers

Steven J. Benson; Manojkumar Krishnan; Lois Curfman McInnes; Jarek Nieplocha; Jason Sarich

Challenges in the scalable solution of large-scale optimization problems include the development of innovative algorithms and efficient tools for parallel data manipulation. This article discusses two complementary toolkits from the collection of Advanced CompuTational Software (ACTS), namely, Global Arrays (GA) for parallel data management and the Toolkit for Advanced Optimization (TAO), which have been integrated to support large-scale scientific applications of unconstrained and bound constrained minimization problems. Most likely to benefit are minimization problems arising in classical molecular dynamics, free energy simulations, and other applications where the coupling among variables requires dense data structures. TAO uses abstractions for vectors and matrices so that its optimization algorithms can easily interface to distributed data management and linear algebra capabilities implemented in the GA library. The GA/TAO interfaces are available both in the traditional library mode and as components compliant with the Common Component Architecture (CCA). We highlight the design of each toolkit, describe the interfaces between them, and demonstrate their use.
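The GA and TAO interfaces themselves are not reproduced here; the sketch below is a generic MPI illustration (not the GA or TAO API) of the kind of distributed vector operation, a global dot product assembled from per-process pieces, that such abstractions hide from the optimization algorithms.

```c
/* Generic MPI illustration; not the Global Arrays or TAO API. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each process owns a local slice of two toy global vectors. */
    const int nlocal = 4;
    double x[4], y[4], local_dot = 0.0, global_dot = 0.0;
    for (int i = 0; i < nlocal; ++i) {
        x[i] = 1.0;
        y[i] = (double)(rank * nlocal + i);
        local_dot += x[i] * y[i];
    }

    /* Combine the per-process partial results into the global dot product. */
    MPI_Allreduce(&local_dot, &global_dot, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0)
        printf("global dot product = %g\n", global_dot);

    MPI_Finalize();
    return 0;
}
```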


10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference | 2004

Scalable Algorithms in Optimization: Computational Experiments

Steven J. Benson; Lois Curfman McInnes; Jorge J. Moré; Jason Sarich

We survey techniques in the Toolkit for Advanced Optimization (TAO) for developing scalable algorithms for mesh-based optimization problems on distributed architectures. We discuss the distribution of the mesh, the computation of the gradient and the Hessian matrix, and the use of preconditioners. We show that these techniques, together with mesh sequencing, can produce results that scale with mesh size.
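Mesh sequencing, mentioned above, simply solves the problem on a coarse mesh first and interpolates that solution to seed the optimization on the next finer mesh. The toy C program below illustrates the pattern on a one-dimensional model problem; it is a self-contained sketch, not TAO code.

```c
/* Toy illustration of mesh sequencing (not TAO code): minimize the
 * discretized 1-D Poisson energy q(u) = 1/2 u^T A u - b^T u by steepest
 * descent on a coarse grid, then interpolate to seed the next finer grid. */
#include <stdio.h>
#include <stdlib.h>

/* Steepest descent on a grid with n interior points, h = 1/(n+1),
 * A = tridiag(-1, 2, -1)/h, b_i = h * f(x_i) with f == 1. */
static void solve_level(double *u, int n, int iters)
{
    double h = 1.0 / (n + 1);
    double *g = malloc(n * sizeof *g);
    for (int k = 0; k < iters; ++k) {
        for (int i = 0; i < n; ++i) {
            double left  = (i > 0)     ? u[i - 1] : 0.0;
            double right = (i < n - 1) ? u[i + 1] : 0.0;
            g[i] = (2.0 * u[i] - left - right) / h - h;
        }
        for (int i = 0; i < n; ++i)
            u[i] -= 0.4 * h * g[i];      /* stable: 0.4 h < 2 / lambda_max  */
    }
    free(g);
}

/* Linear interpolation from n coarse interior points to 2n+1 fine points. */
static void interpolate(const double *uc, int nc, double *uf)
{
    int nf = 2 * nc + 1;
    for (int i = 0; i < nf; ++i) {
        if (i % 2 == 1) {                        /* coincides with coarse node */
            uf[i] = uc[i / 2];
        } else {                                 /* midpoint between nodes     */
            double left  = (i / 2 - 1 >= 0) ? uc[i / 2 - 1] : 0.0;
            double right = (i / 2 < nc)     ? uc[i / 2]     : 0.0;
            uf[i] = 0.5 * (left + right);
        }
    }
}

int main(void)
{
    int n = 15;                                  /* coarsest grid              */
    double *u = calloc(n, sizeof *u);
    for (int level = 0; level < 4; ++level) {
        solve_level(u, n, 200);
        printf("level %d: n = %3d, u(midpoint) = %.4f\n", level, n, u[n / 2]);
        if (level < 3) {                         /* prolong to the finer grid  */
            int nf = 2 * n + 1;
            double *uf = calloc(nf, sizeof *uf);
            interpolate(u, n, uf);
            free(u);
            u = uf;
            n = nf;
        }
    }
    free(u);
    return 0;
}
```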

Collaboration


Dive into Steven J. Benson's collaborations.

Top Co-Authors

Jarek Nieplocha, Pacific Northwest National Laboratory
Jason Sarich, Argonne National Laboratory
Manojkumar Krishnan, Pacific Northwest National Laboratory
Jorge J. Moré, Argonne National Laboratory
Paul D. Hovland, Argonne National Laboratory
Barry F. Smith, Argonne National Laboratory
Benjamin A. Allan, Sandia National Laboratories