Matthew J. Saltzman
Clemson University
Publication
Featured research published by Matthew J. Saltzman.
Informs Journal on Computing | 2000
Marie Coffin; Matthew J. Saltzman
Statistical analysis is a powerful tool to apply when evaluating the performance of computer implementations of algorithms and heuristics. Yet many computational studies in the literature do not use this tool to maximum effectiveness. This paper examines the types of data that arise in computational comparisons and presents appropriate techniques for analyzing such data sets. Case studies of computational tests from the open literature are re-evaluated using the proposed methods in order to illustrate the value of statistical analysis for gaining insight into the behavior of the tested algorithms.
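As a minimal illustration of the kind of analysis the paper advocates (not a reproduction of its case studies), the sketch below compares two hypothetical solvers on a common instance set with a paired, nonparametric test. The timing values and the choice of the Wilcoxon signed-rank test are assumptions for illustration; timing data are typically skewed, so log-ratios and rank-based tests are safer than a naive t-test on raw means.

```python
# A minimal sketch of a paired comparison of two solvers on a common
# instance set.  All timing values below are invented for illustration.
import numpy as np
from scipy import stats

# Wall-clock times (seconds) for the same ten instances, hypothetical data.
solver_a = np.array([12.3, 45.1, 8.7, 102.4, 33.0, 5.9, 61.2, 19.8, 77.5, 24.6])
solver_b = np.array([10.1, 48.9, 7.2,  95.0, 30.2, 6.4, 55.7, 18.3, 70.1, 22.0])

# Timing data are usually skewed, so summarize with the geometric mean ratio
# and test with a nonparametric paired test rather than assuming normality.
log_ratio = np.log(solver_a / solver_b)
_, p_value = stats.wilcoxon(solver_a, solver_b)

print(f"geometric mean time ratio A/B: {np.exp(log_ratio.mean()):.3f}")
print(f"Wilcoxon signed-rank p-value : {p_value:.4f}")
```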
Mathematical Programming | 2003
Ted K. Ralphs; Laszlo Ladanyi; Matthew J. Saltzman
In discrete optimization, most exact solution approaches are based on branch and bound, which is conceptually easy to parallelize in its simplest forms. More sophisticated variants, such as the so-called branch, cut, and price algorithms, are more difficult to parallelize because of the need to share large amounts of knowledge discovered during the search process. In the first part of the paper, we survey the issues involved in parallelizing such algorithms. We then review the implementation of SYMPHONY and COIN/BCP, two existing frameworks for implementing parallel branch, cut, and price. These frameworks have limited scalability, but are effective on small numbers of processors. Finally, we briefly describe our next-generation framework, which improves scalability and further abstracts many of the notions inherent in parallel BCP, making it possible to implement and parallelize more general classes of algorithms.
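The survey itself is conceptual, but the central difficulty it describes, sharing knowledge such as the incumbent bound among concurrent search processes, can be pictured with a toy example. The sketch below is not SYMPHONY or COIN/BCP code: it is a hypothetical thread-based branch and bound for a small 0-1 knapsack (invented data) in which two workers search different subtrees and prune against a shared incumbent. Python threads do not give true parallelism here; only the sharing pattern is the point.

```python
# Toy thread-based branch and bound for a 0-1 knapsack, illustrating the need
# to share the incumbent among workers.  A pattern sketch only, unrelated to
# the SYMPHONY or COIN/BCP code bases; problem data are invented.
import threading
from concurrent.futures import ThreadPoolExecutor

values   = [60, 100, 120, 40, 75]
weights  = [10, 20, 30, 15, 25]
capacity = 50

# Sort items by value density so the fractional (Dantzig) bound below is valid.
order = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)
values  = [values[i] for i in order]
weights = [weights[i] for i in order]
n = len(values)

incumbent = 0               # best known objective value: the shared knowledge
lock = threading.Lock()     # protects updates to the incumbent

def bound(level, value, weight):
    """Fractional-knapsack upper bound for the subtree rooted at `level`."""
    b, cap = value, capacity - weight
    for i in range(level, n):
        if weights[i] <= cap:
            cap -= weights[i]
            b += values[i]
        else:
            b += values[i] * cap / weights[i]
            break
    return b

def search(level, value, weight):
    """Depth-first search of one subtree, pruning against the shared incumbent."""
    global incumbent
    if weight > capacity:
        return
    if level == n:
        with lock:
            incumbent = max(incumbent, value)
        return
    # A slightly stale incumbent only weakens pruning; it never cuts the optimum.
    if bound(level, value, weight) <= incumbent:
        return
    search(level + 1, value + values[level], weight + weights[level])
    search(level + 1, value, weight)

# Split the root into two subtrees (first item in / out) and farm them out.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(search, 1, values[0], weights[0]),
               pool.submit(search, 1, 0, 0)]
    for f in futures:
        f.result()          # propagate any worker exception

print("best value found:", incumbent)
```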
Clinical Biomechanics | 2001
Brian May; Subrata Saha; Matthew J. Saltzman
OBJECTIVE: A mathematical model of the temporomandibular joint was developed to study the magnitude and direction of the compressive loading experienced at the temporomandibular joint during clenching. DESIGN: The model was based on the principles of static equilibrium in three dimensions. BACKGROUND: Direct measurement of temporomandibular joint loading in humans is extremely difficult. Animal models have provided an alternative in the past; however, evidence suggests that primates are not the most accurate human analogues for temporomandibular joint studies. A mathematical model was used as an alternative to direct measurement. METHODS: The EMG activity of two masticatory muscles was combined with their cross-sectional areas to calculate the force exerted by each muscle. The experimentally determined forces were incorporated into a quadratic programming model to solve for the compressive forces on the joint. Two objective functions were chosen and their ability to predict muscle and joint forces was evaluated. RESULTS: The maximum bite forces for normal men, normal women, and women with temporomandibular joint disorders were 300 N (SD 102 N), 210 N (SD 57.7 N), and 120 N (SD 77.1 N), respectively. The calculated joint force for normal men was 260 N (SD 84.1 N). Normal women and women with temporomandibular joint disorders produced joint forces of 172 N (SD 37.5 N) and 152 N (SD 44.2 N), respectively.
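The quadratic programming step can be illustrated with a toy two-dimensional static-equilibrium model. Everything in the sketch below (the geometry, the muscle directions, the 100 N bite load, and the choice of minimizing the sum of squared muscle forces) is an invented assumption, not the paper's anatomical model or data.

```python
# Toy 2-D static-equilibrium model with a quadratic-programming objective.
# All geometry, muscle directions, and the 100 N bite load are invented;
# this is not the paper's anatomical model or its experimental data.
import numpy as np
from scipy.optimize import minimize

r_bite = np.array([0.08, 0.0])    # bite point relative to the joint (m)
f_bite = np.array([0.0, -100.0])  # bite load acting on the mandible (N)

# Muscle attachment points (m) and normalized pull directions, invented.
r_m = np.array([[0.03, 0.00], [0.04, 0.01]])
u_m = np.array([[0.20, 0.98], [-0.30, 0.95]])
u_m /= np.linalg.norm(u_m, axis=1, keepdims=True)

def moment(r, f):
    """z-component of r x f, i.e. the moment about the joint at the origin."""
    return r[0] * f[1] - r[1] * f[0]

def net_moment(m):
    """Moment-balance residual for muscle force magnitudes m (must be zero)."""
    return sum(m[i] * moment(r_m[i], u_m[i]) for i in range(2)) + moment(r_bite, f_bite)

# One common choice of objective: minimize the sum of squared muscle forces.
res = minimize(lambda m: float(np.sum(m ** 2)), x0=[50.0, 50.0], method="SLSQP",
               bounds=[(0.0, None)] * 2,
               constraints=[{"type": "eq", "fun": net_moment}])

muscle = res.x
joint_reaction = -(muscle[0] * u_m[0] + muscle[1] * u_m[1] + f_bite)
print("muscle forces (N) :", np.round(muscle, 1))
print("joint reaction (N):", np.round(joint_reaction, 1))
```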
Annals of Operations Research | 2006
Ted K. Ralphs; Matthew J. Saltzman; Margaret M. Wiecek
A parametric algorithm for identifying the Pareto set of a biobjective integer program is proposed. The algorithm is based on the weighted Chebyshev (Tchebycheff) scalarization, and its running time is asymptotically optimal. A number of extensions are described, including: a technique for handling weakly dominated outcomes, a Pareto set approximation scheme, and an interactive version that provides access to all Pareto outcomes. Extensive computational tests on instances of the biobjective knapsack problem and a capacitated network routing problem are presented.
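The weighted Chebyshev scalarization at the core of the algorithm is easy to state concretely. The sketch below brute-forces a tiny biobjective knapsack with invented data and sweeps the weights in the scalarized objective; it is a didactic illustration, not the paper's parametric algorithm. Note that weight vectors with a zero component (and ties) can select weakly dominated outcomes, which is exactly the complication one of the paper's extensions addresses.

```python
# Didactic sketch of weighted Chebyshev (Tchebycheff) scalarization on a tiny
# biobjective knapsack.  Data are invented and the brute-force enumeration is
# for illustration only; it is not the paper's parametric algorithm.
from itertools import product

profit1  = [6, 5, 8, 9, 3]
profit2  = [4, 9, 3, 2, 8]
weight   = [4, 5, 6, 7, 3]
capacity = 12

# Enumerate all feasible 0-1 solutions and record their objective vectors.
outcomes = []
for x in product((0, 1), repeat=len(weight)):
    if sum(w * xi for w, xi in zip(weight, x)) <= capacity:
        outcomes.append((sum(p * xi for p, xi in zip(profit1, x)),
                         sum(p * xi for p, xi in zip(profit2, x))))

ideal = (max(z[0] for z in outcomes), max(z[1] for z in outcomes))

def chebyshev(z, lam):
    """Weighted Chebyshev distance to the ideal point (both objectives maximized)."""
    return max(lam[0] * (ideal[0] - z[0]), lam[1] * (ideal[1] - z[1]))

# Sweep the weight vector; each strictly positive weight has at least one
# Pareto-optimal minimizer, so the sweep traces out Pareto outcomes.
selected = set()
for i in range(1, 10):
    lam = (i / 10.0, 1.0 - i / 10.0)
    selected.add(min(outcomes, key=lambda z: chebyshev(z, lam)))

print("outcomes selected by the Chebyshev sweep:", sorted(selected))
```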
Informs Journal on Computing | 1989
Roy E. Marsten; Matthew J. Saltzman; David F. Shanno; George S. Pierce; J. F. Ballintijn
The dual affine interior point method is extended to handle variables with simple upper bounds as well as free variables. During execution, variables which appear to be going to zero are fixed at zero, and rows with slack variables bounded away from zero are removed. A variant of the big-M artificial variable method to attain feasibility is derived. The simplex method is used to recover an optimal basis upon completion of the algorithm, and the effects of scaling are discussed. Computational experience on a variety of problems is presented.
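As a rough illustration of the underlying iteration, and nothing more, the sketch below runs a textbook dual affine-scaling method on a small random LP. None of the paper's refinements (upper bounds, variable fixing, row removal, the big-M start, or simplex basis recovery) are included, and the instance is invented.

```python
# Textbook dual affine-scaling iteration on  max b'y  s.t.  A'y + s = c, s > 0,
# the dual of  min c'x  s.t.  Ax = b, x >= 0.  Small dense sketch on an
# invented instance; none of the paper's refinements (bounds, variable fixing,
# row removal, the big-M start, basis recovery) appear here.
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 6
A = rng.standard_normal((m, n))
b = A @ rng.uniform(1.0, 2.0, n)          # primal feasible by construction
c = rng.uniform(1.0, 2.0, n)              # c > 0, so s = c - A'y > 0 at y = 0
y = np.zeros(m)

for _ in range(50):
    s = c - A.T @ y                       # dual slacks, kept strictly positive
    S2inv = np.diag(1.0 / s ** 2)
    dy = np.linalg.solve(A @ S2inv @ A.T, b)   # affine-scaling direction
    ds = -A.T @ dy
    x = S2inv @ A.T @ dy                  # primal estimate; satisfies Ax = b
    if b @ dy < 1e-9:                     # negligible ascent: (near) optimal
        break
    neg = ds < 0
    # Step length: stay strictly inside the dual feasible region.
    alpha = 0.95 * np.min(s[neg] / -ds[neg]) if neg.any() else 1.0
    y = y + alpha * dy

print("dual objective  b'y:", b @ y)
print("primal estimate c'x:", c @ x)
```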
Informs Journal on Computing | 2009
Y. Xu; Ted K. Ralphs; Laszlo Ladanyi; Matthew J. Saltzman
In this paper, we discuss the challenges that arise in parallelizing algorithms for solving generic mixed integer linear programs and introduce a software framework that aims to address these challenges. Although the framework makes few algorithmic assumptions, it was designed specifically with support for implementation of relaxation-based branch-and-bound algorithms in mind. Achieving efficiency for such algorithms is particularly challenging and involves a careful analysis of the trade-offs inherent in the mechanisms for sharing the large amounts of information that can be generated. We present computational results that illustrate the degree to which various sources of parallel overhead affect scalability and discuss why properties of the problem class itself can have a substantial effect on the efficiency of a particular methodology.
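A flavor of the scalability accounting such results involve can be given with simple arithmetic. The script below uses invented timings and overhead categories (ramp-up idle time, communication, redundant work) purely to show how speedup, efficiency, and the destination of the lost processor time are typically reported; it is not data from the paper.

```python
# Simple accounting of parallel overhead, in the spirit of the scalability
# analyses discussed here.  Every number below is invented for illustration.
t_serial   = 1000.0     # wall clock of the sequential run (seconds)
p          = 16         # number of processes
t_parallel = 90.0       # wall clock of the parallel run (seconds)

# Total processor time across all p processes, split into categories (seconds).
busy  = 1120.0          # useful node-processing work (note: exceeds t_serial,
                        # because parallel search performs some redundant work)
ramp  = 110.0           # idle time during ramp-up and ramp-down
comm  = 150.0           # packing, sending, and receiving shared knowledge
other = p * t_parallel - busy - ramp - comm

speedup    = t_serial / t_parallel
efficiency = speedup / p

print(f"speedup    : {speedup:.2f}x on {p} processes")
print(f"efficiency : {efficiency:.1%}")
for name, t in (("busy", busy), ("ramp idle", ramp),
                ("communication", comm), ("other", other)):
    print(f"  {name:14s}: {t / (p * t_parallel):.1%} of total processor time")
```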
Archive | 2005
Yan Xu; Ted K. Ralphs; Laszlo Ladanyi; Matthew J. Saltzman
ALPS is a framework for implementing and parallelizing tree search algorithms. It employs a number of features to improve scalability and is designed specifically to support the implementation of data intensive algorithms, in which large amounts of knowledge are generated and must be maintained and shared during the search. Implementing such algorithms in a scalable manner is challenging both because of storage requirements and because of communications overhead incurred in the sharing of data. In this abstract, we describe the design of ALPS and how the design addresses these challenges. We present two sample applications built with ALPS and preliminary computational results.
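The split between framework and application that ALPS embodies can be caricatured as a serial driver that knows nothing about the application beyond a node interface. The class and method names below are invented, not the ALPS API, and everything parallel is omitted; the point is only that the node pool and the search loop belong to the framework, while bounding and branching are all the application supplies.

```python
# Hypothetical, much-simplified serial analogue of a tree-search framework:
# the framework owns the node pool and the search loop, while the application
# only defines bounding and branching.  Names are invented, not the ALPS API.
import heapq
from abc import ABC, abstractmethod

class SearchNode(ABC):
    """Application-defined node type."""

    @abstractmethod
    def bound(self) -> float:
        """Ordering key; for a leaf, taken here to be its exact objective value."""

    @abstractmethod
    def branch(self) -> list["SearchNode"]:
        """Return child nodes, or an empty list if this node is a leaf."""

def best_first_search(root: SearchNode) -> SearchNode:
    """Framework-owned driver: expand nodes in best-first order, return the best leaf."""
    best_leaf = None
    pool = [(root.bound(), 0, root)]      # the (serial) node pool
    counter = 1                           # tie-breaker so the heap never compares nodes
    while pool:
        _, _, node = heapq.heappop(pool)
        children = node.branch()
        if not children:                  # a leaf: candidate for the best solution
            if best_leaf is None or node.bound() < best_leaf.bound():
                best_leaf = node
        for child in children:
            heapq.heappush(pool, (child.bound(), counter, child))
            counter += 1
    return best_leaf
```

In the parallel setting described in the abstract, the pool, the driver, and the distribution and sharing of nodes across processes all stay on the framework side; bound() and branch() remain everything the application has to provide.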
Informs Journal on Computing | 2012
Peter M. Hahn; Yi-Rong Zhu; Monique Guignard; William L. Hightower; Matthew J. Saltzman
We apply the level-3 reformulation-linearization technique (RLT3) to the quadratic assignment problem (QAP). We then present our experience in calculating lower bounds using an essentially new algorithm based on this RLT3 formulation. Our method is not guaranteed to calculate the RLT3 lower bound exactly, but it approximates this bound very closely and reaches it in some instances. For Nugent problem instances up to size 24, our RLT3-based lower-bound calculation either solves the instance exactly or verifies the optimal value. Calculating lower bounds for instances larger than size 27 remains a challenge because of the large amount of memory the RLT3 formulation requires. Our presentation emphasizes the steps taken to conserve memory substantially by exploiting the numerous problem symmetries in the RLT3 formulation of the QAP. We implemented this RLT3-based bound calculation in a branch-and-bound algorithm. Experimental results project significant runtime improvements over all other published QAP branch-and-bound solvers.
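The RLT3 machinery itself is far too heavy to sketch here, but the role the lower bound plays is simple to show. The snippet below evaluates a tiny Koopmans-Beckmann QAP instance (invented data) by brute force; a valid lower bound can never exceed the enumerated optimum, and a bound that matches it verifies optimality, which is the sense in which the paper's bound solves or verifies the smaller Nugent instances.

```python
# Brute-force evaluation of a tiny Koopmans-Beckmann QAP instance (invented
# data) to show what a lower bound is measured against.  The paper's RLT3
# bound targets far larger instances, where enumeration is hopeless.
from itertools import permutations

F = [[0, 3, 1, 2],      # flow between facilities (symmetric, invented)
     [3, 0, 4, 1],
     [1, 4, 0, 5],
     [2, 1, 5, 0]]
D = [[0, 2, 6, 3],      # distance between locations (symmetric, invented)
     [2, 0, 4, 7],
     [6, 4, 0, 1],
     [3, 7, 1, 0]]
n = len(F)

def cost(perm):
    """QAP objective: sum over i, k of F[i][k] * D[perm[i]][perm[k]]."""
    return sum(F[i][k] * D[perm[i]][perm[k]] for i in range(n) for k in range(n))

best = min(permutations(range(n)), key=cost)
print("optimal assignment:", best, "cost:", cost(best))

# Any valid lower bound L satisfies L <= cost(best); a bound that matches
# cost(best) certifies optimality without enumerating all n! assignments.
```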
The Journal of Supercomputing | 2004
Ted K. Ralphs; Laszlo Ladanyi; Matthew J. Saltzman
This paper describes the design of the Abstract Library for Parallel Search (ALPS), a framework for implementing scalable, parallel algorithms based on tree search. ALPS is specifically designed to support data-intensive algorithms, in which large amounts of data are required to describe each node in the search tree. Implementing such algorithms in a scalable manner is challenging both because of data storage requirements and communication overhead. ALPS incorporates a number of new ideas to address this challenge. The paper also describes the design of two other libraries forming a hierarchy built on top of ALPS. The first is the Branch, Constrain, and Price Software (BiCePS) library, a framework that supports the implementation of parallel branch and bound algorithms in which the bounds are obtained by solving some sort of relaxation, usually Lagrangian. In this layer, the notion of global data objects associated with the variables and constraints is introduced. These global objects provide a connection between the various subproblems in the search tree, but they pose further difficulties for designing scalable algorithms. The other library is the BiCePS linear integer solver (BLIS), a concretization of BiCePS, in which linear programming is used to obtain bounds in each search tree node.
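The layering described here, with generic tree search at the bottom, relaxation-based branch and bound in the middle, and an LP-based solver on top, can be caricatured as a small class hierarchy. The names and interfaces below are invented for illustration (the real libraries are C++ frameworks, and the parallel machinery is omitted); the sketch only shows how each layer narrows the assumptions of the one below it.

```python
# Caricature of a three-layer hierarchy in the spirit of ALPS / BiCePS / BLIS.
# Names and interfaces are invented; the real libraries are C++ frameworks,
# and all of the parallel search machinery is omitted from this sketch.
from abc import ABC, abstractmethod

class TreeNode(ABC):
    """Bottom layer: generic tree search assumes only that nodes can be expanded."""
    @abstractmethod
    def children(self) -> list["TreeNode"]: ...

class BoundedNode(TreeNode):
    """Middle layer: adds a relaxation bound, enabling pruning (minimization)."""
    @abstractmethod
    def bound(self) -> float: ...
    @abstractmethod
    def is_feasible_leaf(self) -> bool: ...
    @abstractmethod
    def objective(self) -> float: ...

def branch_and_bound(root: BoundedNode) -> float:
    """Middle-layer driver: prune any subtree whose bound cannot beat the incumbent."""
    best = float("inf")
    stack = [root]
    while stack:
        node = stack.pop()
        if node.bound() >= best:
            continue                      # relaxation proves no improvement possible
        if node.is_feasible_leaf():
            best = min(best, node.objective())
        else:
            stack.extend(node.children())
    return best

class LPNode(BoundedNode):
    """Top layer: would implement bound() by solving an LP relaxation and
    children() by branching on a fractional variable; details omitted here."""
```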
Operations Research Letters | 1994
Robert E. Bixby; Matthew J. Saltzman
An important issue in the implementation of interior point algorithms for linear programming is the recovery of an optimal basic solution from an optimal interior point solution. In this paper we describe a method for recovering such a solution. Our implementation links a high-performance interior point code (OB1) with a high-performance simplex code (CPLEX). Results of our computational tests indicate that basis recovery can be done quickly and efficiently.
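A much-simplified version of the task can be sketched directly: given a near-optimal interior point, pick m linearly independent columns of A, preferring the large components, and compute the corresponding basic solution. The snippet below uses invented data and a greedy rank test purely to convey the flavor; the crossover procedure in the paper, built on OB1 and CPLEX, must additionally guarantee feasibility and optimality of the recovered basis, which this sketch does not attempt.

```python
# Toy illustration of recovering a basic solution from an interior point:
# greedily pick m independent columns of A, preferring large components of x.
# Data are invented, and unlike the paper's procedure this sketch does not
# certify feasibility or optimality of the recovered basis.
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 7
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:m] = rng.uniform(1.0, 2.0, m)       # a sparse "optimal" solution
b = A @ x_true
x_ip = x_true + rng.uniform(1e-6, 1e-4, n)  # stand-in for the interior point

basis = []
for j in np.argsort(-x_ip):                 # largest components first
    if np.linalg.matrix_rank(A[:, basis + [j]]) == len(basis) + 1:
        basis.append(j)                     # keep only independent columns
    if len(basis) == m:
        break

x_basic = np.zeros(n)
x_basic[basis] = np.linalg.solve(A[:, basis], b)

print("basic columns :", sorted(int(j) for j in basis))
print("basic solution:", np.round(x_basic, 6))
```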