
Publication


Featured research published by Steve W. Otto.


Annals of Operations Research | 1993

Combining Simulated Annealing with Local Search Heuristics

Olivier C. Martin; Steve W. Otto

We introduce a meta-heuristic that combines simulated annealing with local search methods for combinatorial optimization (CO) problems. This new class of Markov chains leads to significantly more powerful optimization methods than either simulated annealing or local search alone. The main idea is to embed deterministic local search techniques into simulated annealing so that the chain explores only local optima. The chain makes large, global changes, even at low temperatures, and thus overcomes large barriers in configuration space. We have tested this meta-heuristic on the traveling salesman and graph partitioning problems. Tests on instances from public libraries and on random ensembles quantify the power of the method. Our algorithm is able to solve large instances to optimality, improving very significantly on local search methods. For the traveling salesman problem with cities distributed randomly in a square, the procedure improves on 3-opt by 1.6% and on Lin-Kernighan local search by 1.3%. For the partitioning of sparse random graphs of average degree 5, the improvement over Kernighan-Lin local search is 8.9%. For both CO problems, we obtain new best heuristics.
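The "kick, then re-optimize" loop the abstract describes can be sketched in a few lines. The sketch below is a minimal, hypothetical illustration, not the authors' code: it uses plain 2-opt in place of the paper's 3-opt and Lin-Kernighan engines, and an accept-if-no-worse rule in place of a full annealing temperature schedule; all function names and parameters are my own.

```python
# Sketch of a large-step Markov chain for the TSP: the chain visits only
# local optima, reached by running a deterministic local search after each
# large, global "kick" (here the classic double-bridge move).
import math
import random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Reverse segments until no improving 2-opt move remains."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                a, b = tour[i], tour[(i + 1) % n]
                c, d = tour[j], tour[(j + 1) % n]
                if (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                        < math.dist(pts[a], pts[b]) + math.dist(pts[c], pts[d]) - 1e-12):
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

def double_bridge(tour):
    """Large, global change: cut the tour into four pieces and reconnect."""
    n = len(tour)
    i, j, k = sorted(random.sample(range(1, n), 3))
    return tour[:i] + tour[j:k] + tour[i:j] + tour[k:]

def large_step_markov_chain(pts, steps=30, seed=0):
    random.seed(seed)
    cur = two_opt(list(range(len(pts))), pts)
    best = cur[:]
    for _ in range(steps):
        cand = two_opt(double_bridge(cur), pts)       # kick, then re-optimize
        if tour_length(cand, pts) <= tour_length(cur, pts):  # T -> 0 acceptance
            cur = cand
        if tour_length(cur, pts) < tour_length(best, pts):
            best = cur[:]
    return best
```

Because the kick is large and the local search is deterministic, the chain can cross barriers in configuration space that trap plain simulated annealing at low temperature.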


Operations Research Letters | 1992

Large-step Markov chains for the TSP incorporating local search heuristics

Olivier C. Martin; Steve W. Otto; Edward W. Felten

We consider a new class of optimization heuristics which combine local searches with stochastic sampling methods, allowing one to iterate local optimization heuristics. We have tested this on the Euclidean Traveling Salesman Problem, improving 3-opt by over 1.6% and Lin-Kernighan by 1.3%.


Parallel Computing | 1987

Matrix algorithms on a hypercube I: Matrix multiplication

Geoffrey C. Fox; Steve W. Otto; Anthony J. G. Hey

We discuss algorithms for matrix multiplication on a concurrent processor containing a two-dimensional mesh or richer topology. We present detailed performance measurements on hypercubes with 4, 16, and 64 nodes, and analyze them in terms of communication overhead and load balancing. We show that the decomposition into square subblocks is optimal. C code implementing the algorithms is available.
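The square-subblock decomposition the abstract argues for can be emulated serially. The sketch below is a hypothetical illustration of a broadcast-multiply-roll style block algorithm on a p x p grid of subblocks, with the "broadcast" and "roll" steps performed by array shuffling rather than hypercube communication; the names and the pure-Python representation are my assumptions, not the paper's C code.

```python
# Serial emulation of block matrix multiply on a p x p grid of square
# subblocks: at each of p stages, one A-subblock per grid row is
# "broadcast" along that row, every grid point accumulates a partial
# product, and the B-subblocks "roll" one grid row upward.

def mat_mul(X, Y):
    n, m, q = len(X), len(Y[0]), len(Y)
    return [[sum(X[i][k] * Y[k][j] for k in range(q)) for j in range(m)]
            for i in range(n)]

def mat_add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def block(M, i, j, b):
    """Extract the b x b subblock at grid position (i, j)."""
    return [row[j * b:(j + 1) * b] for row in M[i * b:(i + 1) * b]]

def fox_multiply(A, B, p):
    n = len(A)
    b = n // p                                  # assumes p divides n
    Ab = [[block(A, i, j, b) for j in range(p)] for i in range(p)]
    Bb = [[block(B, i, j, b) for j in range(p)] for i in range(p)]
    Cb = [[[[0] * b for _ in range(b)] for _ in range(p)] for _ in range(p)]
    for stage in range(p):
        for i in range(p):
            a_bcast = Ab[i][(i + stage) % p]    # broadcast along grid row i
            for j in range(p):
                Cb[i][j] = mat_add(Cb[i][j], mat_mul(a_bcast, Bb[i][j]))
        Bb = Bb[1:] + Bb[:1]                    # roll B-subblocks upward
    # reassemble C from its subblocks
    C = [[0] * n for _ in range(n)]
    for i in range(p):
        for j in range(p):
            for r in range(b):
                for c in range(b):
                    C[i * b + r][j * b + c] = Cb[i][j][r][c]
    return C
```

At stage s, grid point (i, j) holds B-subblock (i+s) mod p of column j and receives A-subblock (i+s) mod p of row i, so after p stages each C-subblock has accumulated the full sum over the inner block index.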


Communications of the ACM | 1996

A message passing standard for MPP and workstations

Jack J. Dongarra; Steve W. Otto; Marc Snir; David W. Walker

The Message Passing Interface (MPI) is a portable message-passing standard that facilitates development of parallel applications and libraries. MPI defines the syntax and semantics of a core of library routines useful to a wide range of users writing portable message-passing programs in Fortran 77 or C. The standard also forms a possible target for compilers of languages such as High Performance Fortran [7]. Commercial and free, public-domain implementations of MPI have been available since 1994 (see the sidebar, "MPI Implementations"), running both on tightly coupled, massively parallel processing (MPP) machines and on networks of workstations (NOWs).

The MPI standard was developed over a 12-month period of intensive meetings in 1993-1994, involving more than 80 people from approximately 40 organizations, mainly from the U.S. and Europe. The meetings were announced on various bulletin boards and mailing lists and were open to the technical community. The MPI meetings operated on a tight budget (actually no budget when the first meeting was announced). DARPA provided partial travel support for U.S. academic participants through the National Science Foundation, and support for several European participants was provided by the European Commission through its Esprit program.

Formal voting at the meetings was by a single vote per organization; in order to vote, an organization needed to have had at least one representative at two of the last three meetings. To provide guidance for preparing formal proposals, frequent informal votes including all those present were held. Many vendors of concurrent computers were involved, as were researchers from universities, government laboratories, and industry. This effort culminated in the 1994 publication of the MPI specification [8]. Other sources of information on MPI are available [10] or are under development (see the sidebar, "More MPI Assistance").


Physics Today | 1984

Algorithms for concurrent processors

Geoffrey C. Fox; Steve W. Otto

We are on the verge of a revolution in computing, spawned by advances in computer technology. Progress in very‐large‐scale integration is leading not so much to faster computers, but to much less expensive and much smaller computers—computers contained on a few chips. These machines, whose cost‐effectiveness is expected to be staggering, will make it practical to build very‐high‐performance computers, or “supercomputers,” consisting of very many small computers combined to form a single concurrent processor.


Conference on High Performance Computing (Supercomputing) | 1994

Adaptive load migration systems for PVM

Jeremy Casas; Ravi Konuru; Steve W. Otto; Robert Prouty; Jonathan Walpole

Adaptive load distribution is necessary for parallel applications to co-exist effectively with other jobs in a network of shared, heterogeneous workstations. We present three methods that provide such support for PVM applications. Two of these methods, MPVM (migratable PVM) and UPVM (user-level PVM), adapt to changes in the workstation environment by transparently migrating the virtual processors (VPs) of the parallel application. A VP in MPVM is a Unix process, while UPVM defines lightweight process-like VPs. The third method, ADM (adaptive data movement), is a programming methodology for writing programs that perform adaptive load distribution through data movement. These methods are discussed and compared in terms of effectiveness, usability, and performance.


Concurrency and Computation: Practice and Experience | 1996

Redistribution of block-cyclic data distributions using MPI

David W. Walker; Steve W. Otto

Arrays that are distributed in a block-cyclic fashion are important for many applications in the computational sciences since they often lead to parallel algorithms with good load balancing properties. We consider the problem of redistributing such an array to a new block size. This operation is directly expressible in High Performance Fortran (HPF) and will arise in applications written in this language. Efficient message passing algorithms are given for the redistribution operation, expressed in the standardized message passing interface, MPI. The algorithms are analyzed and performance results from the IBM SP-1 and Intel Paragon are given and discussed. The results show that redistribution can be done in time comparable to other collective communication operations, such as broadcast and MPI_ALLTOALL.
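At its core, the redistribution operation is index arithmetic: each process must work out which of its elements change owners when the block size changes. A minimal sketch for a 1-D array, assuming cyclic process numbering (the function names are mine; the paper's MPI algorithms then schedule such transfer sets as actual messages):

```python
# Owner computation and per-process transfer sets for changing a 1-D
# block-cyclic distribution from block size r to block size s over P
# processes.

def owner(g, blk, P):
    """Process that owns global index g under block-cyclic(blk) over P."""
    return (g // blk) % P

def transfer_sets(n, r, s, P):
    """sends[(p, q)] lists, in ascending order, the global indices that
    process p must ship to process q when the distribution changes from
    block-cyclic(r) to block-cyclic(s)."""
    sends = {}
    for g in range(n):
        p, q = owner(g, r, P), owner(g, s, P)
        if p != q:
            sends.setdefault((p, q), []).append(g)
    return sends
```

For example, with n = 8 elements over P = 2 processes and a change from block size r = 1 to s = 2, process 1 must ship global indices 1 and 5 to process 0, while process 0 ships 2 and 6 to process 1; all other elements stay in place.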


Nuclear Physics | 1981

Beyond leading order QCD perturbative corrections to the pion form factor

R. D. Field; Rajan Gupta; Steve W. Otto; Lee Chang

The order α_s^2(Q^2) corrections to the pion form factor, F_π(Q^2), are calculated using perturbative quantum chromodynamics and dimensional regularization. In the MS renormalization scheme these corrections are large. This means that reliable perturbative predictions cannot be made until momentum transfers Q of about 300-400 GeV are reached, or unless one can sum the large perturbative terms to all orders. Attempts to reorganize the perturbation series so that the first term gives a better approximation of the complete sum indicate that at Q = 10 GeV the pion form factor may be about a factor of two larger than the leading-order result.


Nuclear Physics | 1982

Monte Carlo estimates of the mass gap of the O(2) and O(3) spin models in 1+1 dimensions

Geoffrey C. Fox; Rajan Gupta; Olivier Martin; Steve W. Otto

We have developed a Monte Carlo method to estimate the mass gap for field theories by a Hamiltonian variational principle. We also show that using a zero-momentum operator in the 2-point correlation method leads to a dramatic improvement in the accuracy of the mass gap. Both methods give encouraging results for the O(2) and O(3) spin models in 1+1 dimensions. The mass gap for O(2) is compared with the predictions of Kosterlitz and Thouless. The connection between the O(3) spin model and the corresponding field theory leads to a prediction for the mass gap of the non-linear sigma model. Careful attention is given to estimates of the errors in each approach.


Nuclear Physics | 1982

String tensions for lattice gauge theories in 2 + 1 dimensions

Jan Ambjørn; Anthony J. G. Hey; Steve W. Otto

Compact U(1) and SU(2) lattice gauge theories in three Euclidean dimensions are studied by standard Monte Carlo techniques. The question of extracting reliable string tensions from these theories is examined in detail, including a comparison of the Monte Carlo Wilson loop data with weak-coupling predictions and a careful error analysis; our conclusions are rather different from those of previous investigations of these theories. In the case of the U(1) theory, we find that only a tiny range of β values can possibly be relevant for extracting a string tension, and we are unable to convincingly demonstrate the expected exponential dependence of the string tension on β. For the SU(2) theory we are able to determine a string tension, albeit with rather large errors, from a study of Wilson loops.

Collaboration

Top co-authors of Steve W. Otto:

Geoffrey C. Fox (Indiana University Bloomington)
Jon Flower (California Institute of Technology)
Steven Huss-Lederman (University of Wisconsin-Madison)