Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Paul F. Reynolds is active.

Publication


Featured research published by Paul F. Reynolds.


Winter Simulation Conference | 1988

A spectrum of options for parallel simulation

Paul F. Reynolds

Conventional wisdom has it that there are two basic approaches to parallel simulation: conservative (Chandy-Misra) and optimistic (Time Warp). All known protocols are thought to fall into one of these two classes. This dichotomy is false. There exists a spectrum of options that includes these approaches. We describe a design space that admits them as alternatives, we show how most of the well-known parallel simulation approaches can be derived using our design alternatives, and we explore the implications of the existence of the design space we describe. In particular, we note that there are many as-yet-unexplored approaches to parallel simulation.
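
A minimal Python sketch of the spectrum idea (illustrative only, not the paper's design space): a single hypothetical "optimism window" parameter W interpolates between the two classic endpoints, with W = 0 behaving conservatively and an unbounded W behaving like Time Warp. All names here are invented for the example.

```python
# Hypothetical sketch: one design axis between conservative and optimistic
# synchronization. W = 0 blocks until events are provably safe (Chandy-Misra
# style); unbounded W processes everything speculatively (Time Warp style);
# intermediate W values are points on the spectrum.
import heapq

class LogicalProcess:
    def __init__(self, window):
        self.window = window          # optimism window W (assumed parameter)
        self.pending = []             # event min-heap keyed by timestamp
        self.processed = []           # events executed (possibly speculatively)

    def enqueue(self, timestamp, event):
        heapq.heappush(self.pending, (timestamp, event))

    def step(self, safe_time):
        """Process events up to safe_time + window.

        safe_time is the time up to which no straggler can arrive (e.g.,
        derived from channel clocks or GVT). Events beyond it but inside
        the window are speculative and may later need rollback.
        """
        while self.pending and self.pending[0][0] <= safe_time + self.window:
            ts, ev = heapq.heappop(self.pending)
            speculative = ts > safe_time
            self.processed.append((ts, ev, speculative))

    def rollback(self, straggler_time):
        """Undo speculative events at or after a straggler's timestamp."""
        for ts, ev, _ in self.processed:
            if ts >= straggler_time:
                heapq.heappush(self.pending, (ts, ev))
        self.processed = [e for e in self.processed if e[0] < straggler_time]

lp = LogicalProcess(window=5.0)   # a mid-spectrum choice
lp.enqueue(3.0, "a"); lp.enqueue(7.0, "b")
lp.step(safe_time=4.0)            # "a" is safe, "b" is speculative
print(lp.processed)
```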


ACM Transactions on Modeling and Computer Simulation | 1997

Consistency maintenance in multiresolution simulation

Paul F. Reynolds; Anand Natrajan; Sudhir Srinivasan

Simulations that run at multiple levels of resolution often encounter consistency problems because of insufficient correlation between the attributes at multiple levels of the same entity. Inconsistency may occur despite the existence of valid models at each resolution level. Cross-Resolution Modeling (CRM) attempts to build effective multiresolution simulations. The traditional approach to CRM, aggregation-disaggregation, causes chain disaggregation and puts an unacceptable burden on resources. We present four fundamental observations to help guide future approaches to CRM. These observations form the basis of an approach we propose that involves the design of Multiple Resolution Entities (MREs). MREs are the foundation of a design that maintains internal consistency. We also propose maintenance of core attributes as an approach to maintaining internal consistency within an MRE.
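
A minimal sketch of the MRE idea, under the assumption that both resolution levels can be derived from one set of core attributes (here, hypothetical tank positions in a platoon). The class and attribute names are illustrative, not from the paper.

```python
# Hypothetical Multiple Resolution Entity (MRE): both resolution levels are
# views derived from a single set of core attributes, so an update at either
# level cannot leave the levels inconsistent. Choosing per-tank positions as
# the core attributes is an illustrative assumption.
class PlatoonMRE:
    def __init__(self, tank_positions):
        # Core attributes: one position per tank (high-resolution truth).
        self.tank_positions = list(tank_positions)

    # --- high-resolution interaction ---
    def move_tank(self, i, new_pos):
        self.tank_positions[i] = new_pos

    # --- low-resolution (aggregate) view: derived, never stored ---
    @property
    def centroid(self):
        return sum(self.tank_positions) / len(self.tank_positions)

    # --- low-resolution interaction, mapped down to core attributes ---
    def move_platoon(self, delta):
        self.tank_positions = [p + delta for p in self.tank_positions]

mre = PlatoonMRE([0.0, 2.0, 4.0])
mre.move_platoon(10.0)      # aggregate-level order
mre.move_tank(0, 11.5)      # entity-level update
print(mre.centroid)         # both views agree by construction: 12.5
```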


IEEE Transactions on Computers | 1990

Optimal dynamic remapping of data parallel computations

David M. Nicol; Paul F. Reynolds

A large class of data parallel computations is characterized by a sequence of phases, with phase changes occurring unpredictably. Dynamic remapping of the workload to processors may be required to maintain good performance. The problem considered, in which both the utility of remapping and the future behavior of the workload are uncertain, arises when execution requirements are stable within a given phase but change radically between phases. In these situations, a workload assignment generated for one phase may hinder performance during the next phase. The problem is treated formally for a probabilistic model of computation with at most two phases. The authors address the fundamental problem of balancing the expected remapping performance gain against the delay cost, and they derive the optimal remapping decision policy. The promise of the approach is shown by application to multiprocessor implementations of an adaptive gridding fluid dynamics program and to a battlefield simulation program.
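
The core tradeoff can be stated in a few lines. The sketch below is a deliberate simplification of the paper's probabilistic two-phase model; the fixed remapping delay and per-step costs are illustrative assumptions.

```python
# Hedged sketch of the remap-or-not tradeoff (a simplification of the
# two-phase model; all numbers and names are illustrative). After a phase
# change, a remap costs a one-time delay but reduces per-step execution
# time from t_bad to t_good for the steps remaining in the new phase.
def should_remap(expected_remaining_steps, t_bad, t_good, remap_delay):
    """Remap when the expected saving over the remaining steps of the
    phase outweighs the one-time remapping delay."""
    expected_gain = expected_remaining_steps * (t_bad - t_good)
    return expected_gain > remap_delay

# If the new phase is expected to last 500 more steps and remapping saves
# 0.4 ms/step at the cost of a 150 ms pause, remapping pays off:
print(should_remap(500, 1.0, 0.6, 150.0))   # True: 200 ms gain > 150 ms cost
```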


ACM Transactions on Modeling and Computer Simulation | 1998

Elastic time

Sudhir Srinivasan; Paul F. Reynolds

We introduce a new class of synchronization protocols for parallel discrete event simulation, those based on near-perfect state information (NPSI). NPSI protocols are adaptive, dynamically controlling the rate at which the processes constituting a parallel simulation proceed, with the goal of completing the simulation efficiently. We show by analysis that a class of adaptive protocols (one that includes NPSI and several others) can both arbitrarily outperform and be arbitrarily outperformed by the Time Warp synchronization protocol. This mixed result both substantiates the promising results that we and other adaptive-protocol designers have observed, and cautions those who might assume that any adaptive protocol will always be better than any nonadaptive one. We establish in an experimental study that a particular NPSI protocol, the Elastic Time Algorithm (ETA), outperforms Time Warp both temporally and spatially on every workload tested. Although significant options remain with respect to the design of ETA, the work presented here establishes the class of NPSI protocols as a very promising approach.
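
A hypothetical sketch of the throttling mechanism behind NPSI protocols: each process delays its optimistic execution in proportion to an "error potential" derived from near-perfect state information. The linear mapping from error potential to delay below is an assumption made for illustration, not ETA's actual translation function.

```python
# Illustrative NPSI-style throttle: a process far ahead of the slowest
# process in the simulation slows down, reducing the risk of rollback.
# The horizon and maximum delay are invented parameters.
import time

def error_potential(my_lvt, global_min_lvt, horizon):
    """0.0 when at the global minimum virtual time; 1.0 when a full
    horizon ahead of it."""
    return min(max((my_lvt - global_min_lvt) / horizon, 0.0), 1.0)

def throttle(my_lvt, global_min_lvt, horizon=100.0, max_delay_s=0.01):
    """Inject a delay before processing the next event, proportional to
    how far this process has raced ahead of the pack."""
    time.sleep(error_potential(my_lvt, global_min_lvt, horizon) * max_delay_s)

throttle(my_lvt=180.0, global_min_lvt=120.0)   # ~6 ms pause at 0.6 potential
```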


IEEE Symposium on Ultrasonics | 2003

Pre-compensated excitation waveform to suppress second harmonic generation in MEMS electrostatic transducers

Shiwei Zhou; Paul F. Reynolds; John A. Hossack

Microelectromechanical systems (MEMS) electrostatic transducers inherently produce harmonics because the electrostatic force generated in transmit mode is approximately proportional to the square of the applied voltage signal. This characteristic precludes them from being used effectively for harmonic imaging (either with or without the addition of microbubble-based contrast agents), since the harmonic signal generated nonlinearly by tissue (or contrast agent) cannot be distinguished from the inherently transmitted harmonic signal. We investigated two precompensation methods to cancel this inherent harmonic generation in electrostatic transducers; finite element analysis (FEA) and experimental results are presented for both. The first approach relies on a calculation, or measurement, of the transducer's linear transfer function, which is valid for small signal levels. Using this transfer function and a measurement of the undesired harmonic signal, a predistorted transmit signal was calculated to cancel the harmonic inherently generated by the transducer. Because the transducer is not perfectly linear, the approach does not succeed completely in a single iteration; with subsequent iterations, however, the problem becomes more linear and converges toward a very satisfactory result (an 18.6 dB harmonic reduction was achieved in FEA simulations and a 20.7 dB reduction was measured in a prototype experiment). The second approach involves defining a desired output function (including a direct current, or DC, offset) and then taking the square root of this function to determine the shape of the required input function. A 5.5 dB reduction of the transmitted harmonic was obtained in both FEA simulations and experimental prototype tests.
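
The second approach admits a short numeric illustration. Assuming an ideal square-law transducer (force proportional to the square of the drive voltage), driving with the square root of a DC-offset version of the desired waveform cancels the square-law harmonic exactly; the amplitudes and frequency below are arbitrary.

```python
# Hedged numeric sketch of the square-root precompensation idea: since the
# electrostatic force is approximately proportional to the square of the
# drive voltage, driving with the square root of the desired waveform
# (offset so it stays non-negative) removes the squared-term harmonic.
import numpy as np

fs, f0 = 100e6, 2e6                      # sample rate, fundamental (Hz)
t = np.arange(0, 4 / f0, 1 / fs)         # four cycles

desired = 1.0 + 0.8 * np.sin(2 * np.pi * f0 * t)   # DC offset keeps it >= 0
drive = np.sqrt(desired)                 # precompensated excitation
force = drive ** 2                       # ideal square-law transduction

# The recovered force matches the desired waveform to numerical precision,
# i.e. the square law itself generates no second harmonic.
print(np.max(np.abs(force - desired)))   # ~1e-16
```

A real transducer is only approximately square-law, which is why the paper measures a 5.5 dB rather than a total reduction.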


ACM Transactions on Programming Languages and Systems | 1987

The geometry of semaphore programs

Scott D. Carson; Paul F. Reynolds

Synchronization errors in concurrent programs are notoriously difficult to find and correct. Deadlock, partial deadlock, and unsafeness are conditions that constitute such errors. A model of concurrent semaphore programs based on multidimensional, solid geometry is presented. While previously reported geometric models are restricted to two-process mutual exclusion problems, the model described here applies to a broader class of synchronization problems. The model is shown to be exact for systems composed of an arbitrary, yet fixed number of concurrent processes, each consisting of a straight line sequence of arbitrarily ordered semaphore operations.
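
A two-dimensional sketch of the geometric view (the paper's model generalizes to an arbitrary, fixed number of processes): each axis tracks one process's progress through its semaphore operations, and each binary semaphore held by both processes induces a forbidden rectangle the joint execution point may not enter. The programs and helper function below are illustrative, not the paper's construction.

```python
# Illustrative 2-D progress graph: programs are straight-line sequences of
# P/V operations; for each shared binary semaphore, the region where both
# processes simultaneously hold it is a forbidden rectangle.
def forbidden_rectangles(prog_x, prog_y):
    """Programs are sequences like [('P','s'), ('V','s'), ...]. Progress i
    means the first i operations have completed; a semaphore is held from
    just after its P completes until its V completes."""
    rects = []
    for s in {op[1] for op in prog_x} & {op[1] for op in prog_y}:
        x0, x1 = prog_x.index(('P', s)), prog_x.index(('V', s))
        y0, y1 = prog_y.index(('P', s)), prog_y.index(('V', s))
        rects.append((s, (x0 + 1, x1), (y0 + 1, y1)))
    return rects

# Classic deadlock-prone example: opposite acquisition order.
px = [('P', 'a'), ('P', 'b'), ('V', 'b'), ('V', 'a')]
py = [('P', 'b'), ('P', 'a'), ('V', 'a'), ('V', 'b')]
for s, xr, yr in forbidden_rectangles(px, py):
    print(f"semaphore {s}: forbidden x-range {xr}, y-range {yr}")
```

In this example the two rectangles overlap so as to trap any execution path that enters the concave corner between them; that trapped region is the geometric picture of deadlock.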


Winter Simulation Conference | 1995

NPSI adaptive synchronization algorithms for PDES

Paul F. Reynolds

Adaptive approaches to synchronization in parallel discrete event simulations hold significant potential for performance improvement. We contend that an adaptive approach based on low-cost, near-perfect system state information is the most likely to yield a consistently efficient synchronization algorithm. We suggest a framework by which NPSI (near-perfect state information) adaptive protocols can be designed, and we describe the first such protocol, the elastic time algorithm. We present performance results showing that NPSI protocols are very promising; in particular, they have the capacity to outperform Time Warp consistently in both time and space.


Winter Simulation Conference | 2005

A case study of model context for simulation composability and reusability

Michael Spiegel; Paul F. Reynolds; David C. Brogan

How much effort will be required to compose or reuse simulations? What factors need to be considered? Composability and reusability are widely recognized as daunting challenges, for simulations and, more broadly, for software design as a whole. We conducted a small case study to clarify the role that model context plays in simulation composability and reusability. For a simple problem, computing the position and velocity of a falling body, we found that a reasonable formulation of a solution included a surprising number of implicit constraints. Equally surprising, in a challenge posed to a small group of capable individuals, none of them was able to identify more than three-quarters of the ultimate set of validation constraints. We document the challenge, interpret its results, and discuss the utility our study will have in future investigations into simulation composition and reuse.
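
The falling-body problem makes the point concrete. Below is a minimal sketch in which a few of the usually implicit assumptions are written out; the constraint list is illustrative, not the challenge's full set of validation constraints.

```python
# Even a "simple" falling-body model is saturated with implicit context.
G = 9.81            # assumes Earth, SI units, constant gravity

def falling_body(t, v0=0.0, y0=0.0):
    """Position and velocity after t seconds of free fall.

    Implicit context made explicit: no air resistance, non-relativistic
    speeds, the body treated as a point mass, downward measured as
    positive, and the body has not yet hit the ground.
    """
    assert t >= 0, "model is not valid backward in time"
    v = v0 + G * t
    y = y0 + v0 * t + 0.5 * G * t ** 2
    return y, v

print(falling_body(2.0))    # (19.62, 19.62)
```

Whether any one of these constraints matters depends entirely on the composition context, which is the study's point.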


IEEE Transactions on Parallel and Distributed Systems | 1997

Isotach networks

Paul F. Reynolds; Craig Williams; Raymond R. Wagner

We introduce a class of networks called Isotach networks designed to reduce the cost of synchronization in parallel computations. Isotach networks maintain an invariant that allows each process to control the logical times at which its messages are received and consequently executed. This control allows processes to pipeline operations without sacrificing sequential consistency and to send isochrons, groups of operations that appear to be received and executed as an indivisible unit. Isochrons allow processes to execute atomic actions without locks. Other uses of Isotach networks include ensuring causal message delivery and consistency among replicated data. Isotach networks are characterized by this invariant, not by their topology. They can be implemented in a wide variety of configurations, including NUMA (nonuniform memory access) multiprocessors. Empirical and analytic studies of Isotach synchronization techniques show that they outperform conventional techniques, in some cases by an order of magnitude or more. Results presented here assume fault-free systems; we are exploring extension to selected failure models.
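
A hypothetical sketch of the isochron idea: if every destination executes operations in logical-receive-time order, then a group of operations stamped with the same logical time behaves as an indivisible unit, with no locks involved. The centralized scheduler below stands in for the network invariant that real Isotach networks maintain in a distributed way; all names are invented.

```python
# Illustrative isochron execution: operations are ordered globally by
# (logical_time, sender_id), so all operations of one isochron are adjacent
# in the execution order and hence atomic in effect.
class IsotachScheduler:
    def __init__(self):
        self.ops = []   # (logical_time, sender_id, operation)

    def send_isochron(self, logical_time, sender_id, operations):
        for op in operations:
            self.ops.append((logical_time, sender_id, op))

    def execute_all(self):
        # Logical-time order with sender id as a deterministic tiebreak.
        for lt, sender, op in sorted(self.ops):
            print(f"t={lt} p{sender}: {op}")

sched = IsotachScheduler()
sched.send_isochron(1, sender_id=0, operations=["write x=1", "write y=1"])
sched.send_isochron(1, sender_id=1, operations=["read x", "read y"])
sched.execute_all()   # p1 sees both writes or neither, never just one
```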


Winter Simulation Conference | 2004

Approximating component selection

Michael Roy Fox; David C. Brogan; Paul F. Reynolds

Simulation composability is a difficult capability to achieve because of the challenges of creating components, selecting combinations of components, and integrating the selected components. We address the second of these challenges through analysis of Component Selection (CS), the NP-complete process of selecting a minimal set of components to satisfy a set of objectives. Because of the computational complexity of CS, we examine approximate solutions that make the CS process practicable. We define two variations of CS and prove that good approximations to optimal solutions result from applying a standard greedy selection algorithm to each. Despite our creation of approximable variations of CS, we conjecture that any proof of the inapproximability of general CS would reveal theoretical limits on its practicality. We conclude that reasonably constrained variations of CS can be solved satisfactorily and efficiently, but more general cases appear never to be solvable in a similar manner.
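
The greedy selection named in the abstract follows the pattern of the classic weighted set-cover heuristic: repeatedly choose the component that satisfies the most unmet objectives per unit cost. A minimal sketch with an invented component catalog (the paper's CS variants impose additional constraints not modeled here):

```python
# Illustrative greedy component selection in the weighted set-cover style.
def greedy_select(components, objectives):
    """components: {name: (cost, set_of_objectives_satisfied)}."""
    unmet, chosen, total = set(objectives), [], 0.0
    while unmet:
        # Best cost-effectiveness: new objectives covered per unit cost.
        name, (cost, covers) = max(
            components.items(),
            key=lambda kv: len(kv[1][1] & unmet) / kv[1][0])
        if not covers & unmet:
            raise ValueError("objectives cannot all be satisfied")
        chosen.append(name)
        total += cost
        unmet -= covers
    return chosen, total

catalog = {
    "terrain":  (3.0, {"elevation", "rivers"}),
    "weather":  (2.0, {"wind", "rain"}),
    "combined": (4.5, {"elevation", "rivers", "wind"}),
}
print(greedy_select(catalog, {"elevation", "rivers", "wind", "rain"}))
# (['weather', 'terrain'], 5.0)
```

For set cover, this greedy rule is known to achieve a logarithmic approximation factor, which is the flavor of guarantee the paper proves for its constrained CS variations.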

Collaboration


Dive into Paul F. Reynolds's collaborations.

Top Co-Authors

Ross Gore

Old Dominion University


Sudhir Srinivasan

Applied Science Private University
