Philip A. Wilsey
University of Cincinnati
Publications
Featured research published by Philip A. Wilsey.
workshop on parallel and distributed simulation | 1995
Josef Fleischmann; Philip A. Wilsey
Checkpointing in a Time Warp synchronized parallel simulator is a necessary and potentially expensive operation. In the simple case, a Time Warp simulator checkpoints every χ events, for some fixed value χ. For larger values of χ, the simulator requires less overhead for saving state, but incurs an increased latency during rollback. Thus, the problem is to balance the time to save states against the time to coast forward upon rollback. Unfortunately, a static determination of an optimal value for χ is very difficult, and the optimal value can vary widely, even between closely related instances of a Time Warp simulator. Furthermore, the optimal checkpoint interval may actually vary over the lifetime of the simulation. To address these problems, several investigators have proposed dynamically adjusting the checkpoint interval χ as the simulation progresses. This paper analyzes three previous techniques for dynamically sizing checkpoint intervals and presents a new, heuristic algorithm for this purpose. All four techniques are implemented in a common application domain (digital system simulation from VHDL descriptions) and a direct comparison between the algorithms is performed. The results show a significant difference in the performance of the implemented algorithms. However, in virtually all cases, the dynamic algorithms performed as well as or better than the best static value. Furthermore, the best algorithms performed as much as 12% better than the best static value.
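For intuition, the sketch below shows one way a dynamic checkpoint-interval heuristic could balance state-saving cost against coast-forward cost. It is a minimal illustration under assumed names and thresholds, not the heuristic proposed in the paper.

```cpp
// Hypothetical sketch of a dynamic checkpoint-interval heuristic for a
// Time Warp logical process: grow chi while state saving dominates the
// overhead, shrink it while coast-forward (re-execution after rollback)
// dominates.  Names and constants are illustrative only.
#include <algorithm>
#include <cstdint>

class CheckpointController {
public:
    explicit CheckpointController(std::uint32_t initialChi = 4) : chi_(initialChi) {}

    // Called by the LP after each batch of events with measured costs.
    void report(double stateSaveTime, double coastForwardTime) {
        saveCost_     = 0.9 * saveCost_     + 0.1 * stateSaveTime;      // smoothed
        rollbackCost_ = 0.9 * rollbackCost_ + 0.1 * coastForwardTime;   // smoothed
        if (saveCost_ > rollbackCost_)
            chi_ = std::min<std::uint32_t>(chi_ + 1, maxChi_);  // checkpoint less often
        else if (rollbackCost_ > saveCost_)
            chi_ = std::max<std::uint32_t>(chi_ - 1, 1);        // checkpoint more often
    }

    // The LP checkpoints whenever its event count is a multiple of chi.
    bool shouldCheckpoint(std::uint64_t eventCount) const {
        return eventCount % chi_ == 0;
    }

private:
    std::uint32_t chi_;
    std::uint32_t maxChi_ = 64;
    double saveCost_ = 0.0;
    double rollbackCost_ = 0.0;
};
```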
hawaii international conference on system sciences | 1996
Dale E. Martin; Timothy J. McBrayer; Philip A. Wilsey
WARPED is a publicly available Time Warp simulation kernel for experimentation and application development. The kernel defines a standard interface to the application developer and is designed to provide a highly configurable environment for the integration of Time Warp optimizations. It is written in C++, uses the MPI (Message Passing Interface) standard and shared memory for communication, and executes on a variety of platforms including a network of SUN workstations, a SUN SMP workstation, the IBM SP1/SP2 multiprocessors, the Intel Paragon, and IBM-compatible PCs running Linux. WARPED is distributed with several applications and includes a sequential kernel implementation for comparative analysis. The kernel supports LP (logical process) clustering, various Time Warp algorithms, and several optimizations that dynamically adjust simulation parameters.
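A minimal sketch of what an application-facing interface to such a kernel might look like follows; the class and method names are hypothetical and are not the actual WARPED API.

```cpp
// Hypothetical application-facing Time Warp kernel interface (illustrative
// names only, not the actual WARPED classes).
#include <string>

struct Event {
    double recvTime;      // virtual time at which the event is executed
    std::string payload;  // application-defined data
};

// Applications derive their logical processes from a kernel-provided base
// class; the kernel handles state saving, rollback, and (e.g. MPI-based)
// communication behind this interface.
class SimulationObject {
public:
    virtual ~SimulationObject() = default;
    virtual void initialize() = 0;                  // called before the simulation starts
    virtual void executeProcess(const Event&) = 0;  // process one event optimistically
    virtual void finalize() = 0;                    // called once GVT passes the end time
};
```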
workshop on parallel and distributed simulation | 1993
Avinash C. Palaniswamy; Philip A. Wilsey
The successful application of optimistic synchronization techniques in parallel simulation requires that rollback overheads be contained. The chief contributions to rollback overhead in a Time Warp simulation are the time required to save state information and the time required to restore a previous state. Two competing techniques for reducing rollback overhead are periodic checkpointing (Lin and Lazowska, 1989) and incremental state saving (Bauer et al., 1991). This paper analytically compares the relative performance of periodic checkpointing to incremental state saving. The analytical model derived for periodic checkpointing is based almost entirely on the previous model developed by Lin (Lin and Lazowska, 1989). The analytical model for incremental state saving has been developed for this study. The comparison assumes an optimal checkpoint interval and shows under what simulation parameters each technique performs best.
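For intuition only, a simplified per-event overhead comparison of the two techniques can be sketched as below; this is an illustrative model under assumed parameters, not the analytical models derived in the paper. Here δ_s is the cost of saving a full state, δ_i the cost of logging one incremental change, χ the checkpoint interval, e the cost of re-executing one event, p the per-event rollback probability, and E[r] the expected number of logged changes undone per rollback.

```latex
% Simplified per-event overhead sketch (illustrative assumptions, not the
% paper's analytical models).
% Periodic checkpointing: a full state save every \chi events, plus an
% expected coast-forward of roughly \chi/2 events per rollback.
\[
  C_{\mathrm{periodic}} \;\approx\; \frac{\delta_s}{\chi} \;+\; p\,\frac{\chi}{2}\,e
\]
% Incremental state saving: a logging cost on every event, plus the cost of
% undoing E[r] logged changes per rollback.
\[
  C_{\mathrm{incremental}} \;\approx\; \delta_i \;+\; p\,E[r]\,\delta_i
\]
```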
Lecture Notes in Computer Science | 1998
Radharamanan Radhakrishnan; Dale E. Martin; Malolan Chetlur; Dhananjai Madhava Rao; Philip A. Wilsey
The design of a Time Warp simulation kernel is made difficult by the inherent complexity of the paradigm. Hence it becomes critical that the design of such complex simulation kernels follow established design principles such as object-oriented design so that the implementation is simple to modify and extend. In this paper, we present a compendium of our efforts in the design and development of an object-oriented Time Warp simulation kernel, called warped. warped is a publicly available Time Warp simulation kernel for experimentation and application development. The kernel defines a standard interface to the application developer and is designed to provide a highly configurable environment for the integration of Time Warp optimizations. It is written in C++, uses the MPI message passing standard for communication, and executes on a variety of platforms including a network of SUN workstations, a SUN SMP workstation, the IBM SP1/SP2 multiprocessors, the Cray T3E, the Intel Paragon, and IBM-compatible PCs running Linux.
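The sketch below illustrates, in general terms, how object-oriented design can make Time Warp optimizations pluggable: the kernel is written against abstract strategy classes, and a concrete optimization is selected at configuration time by instantiating a particular subclass. The names are assumptions for this sketch and are not warped's actual classes.

```cpp
// Hypothetical illustration of configurability via abstract strategy classes.
#include <memory>

class GVTManager {            // abstract GVT estimation strategy
public:
    virtual ~GVTManager() = default;
    virtual double estimateGVT() = 0;
};

class StateManager {          // abstract state-saving strategy
public:
    virtual ~StateManager() = default;
    virtual void saveState(double virtualTime) = 0;
    virtual void restoreState(double rollbackTime) = 0;
};

class TimeWarpKernel {
public:
    TimeWarpKernel(std::unique_ptr<GVTManager> gvt,
                   std::unique_ptr<StateManager> state)
        : gvt_(std::move(gvt)), state_(std::move(state)) {}
    // ... the event loop calls gvt_->estimateGVT(), state_->saveState(), etc.
private:
    std::unique_ptr<GVTManager> gvt_;
    std::unique_ptr<StateManager> state_;
};
```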
winter simulation conference | 1998
Dhananjai Madhava Rao; Narayanan V. Thondugulam; Radharamanan Radhakrishnan; Philip A. Wilsey
Distributed synchronization for parallel simulation is generally classified as being either optimistic or conservative. While considerable investigation has been conducted to analyze and optimize each of these synchronization strategies, very little study of the definition and strictness of causality has been conducted. Do we really need to preserve causality in all types of simulations? The paper attempts to answer this question. We argue that significant performance gains can be made by reconsidering this definition to decide if the parallel simulation needs to preserve causality. We investigate the feasibility of unsynchronized parallel simulation through the use of several queuing model simulations and present a comparative analysis between unsynchronized and Time Warp simulation.
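As a rough illustration of the distinction (a hypothetical sketch, not the paper's implementation), an unsynchronized LP's main loop simply processes pending events in local timestamp order and never rolls back, accepting that a late-arriving message may violate causality.

```cpp
// Minimal sketch of an unsynchronized LP event loop.
#include <algorithm>
#include <queue>
#include <vector>

struct Event { double recvTime; int data; };
struct ByTime {
    bool operator()(const Event& a, const Event& b) const {
        return a.recvTime > b.recvTime;   // min-heap on timestamp
    }
};

void unsynchronizedLoop(std::priority_queue<Event, std::vector<Event>, ByTime>& pending) {
    double localClock = 0.0;
    while (!pending.empty()) {
        Event e = pending.top();
        pending.pop();
        // A Time Warp LP would roll back here if e.recvTime < localClock;
        // the unsynchronized LP just processes the event and moves on.
        localClock = std::max(localClock, e.recvTime);
        // ... execute event e ...
    }
}
```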
workshop on parallel and distributed simulation | 1994
Loy M. D'Souza; Xianzhi Fan; Philip A. Wilsey
The time warp mechanism uses memory space to save event and state information for rollback processing. As the simulation advances in time, old state and event information can be discarded and the memory space reclaimed. This reclamation process is called fossil collection and is guided by a global time value called Global Virtual Time (GVT). That is, GVT represents the greatest minimum time of the fully committed events (the time before which no rollback will occur). GVT is then used to establish a boundary for fossil collection. This paper presents a new algorithm for GVT estimation called pGVT. pGVT was designed to support accurate estimates of the actual GVT value and it operates in an environment where the communication subsystem does not support FIFO message delivery and where message delivery failure may occur. We show that pGVT correctly estimates GVT values and present some performance comparisons with other GVT algorithms.
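To make the role of GVT concrete, the following sketch shows a basic lower-bound computation over local virtual times and in-transit message timestamps. It illustrates the quantity being estimated, not the pGVT algorithm itself, which additionally tolerates non-FIFO delivery and message loss.

```cpp
// Simplified GVT illustration: GVT is a lower bound on the timestamp of any
// future rollback, so a basic estimate takes the minimum over every LP's
// local virtual time and the timestamps of all messages still in transit.
#include <algorithm>
#include <limits>
#include <vector>

double estimateGVT(const std::vector<double>& localVirtualTimes,
                   const std::vector<double>& inTransitMessageTimes) {
    double gvt = std::numeric_limits<double>::infinity();
    for (double lvt : localVirtualTimes)     gvt = std::min(gvt, lvt);
    for (double ts  : inTransitMessageTimes) gvt = std::min(gvt, ts);
    return gvt;   // events and states older than gvt can be fossil collected
}
```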
workshop on parallel and distributed simulation | 1998
Malolan Chetlur; Nael B. Abu-Ghazaleh; Radharamanan Radhakrishnan; Philip A. Wilsey
In message passing environments, the message send time is dominated by overheads that are relatively independent of the message size. Therefore, fine grained applications (such as Time Warp simulators) suffer high overheads because of frequent communication. We investigate the optimization of the communication subsystem of Time Warp simulators using dynamic message aggregation. Under this scheme, Time Warp messages with the same destination LP, occurring in close temporal proximity, are dynamically aggregated and sent as a single physical message. Several aggregation strategies that attempt to minimize the communication overhead without harming the progress of the simulation (because of messages being delayed) are developed. The performance of the strategies is evaluated for a network of workstations and an SMP, using a number of applications that have different communication behavior.
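A minimal sketch of such an aggregation buffer appears below: outgoing messages to one destination are held until the buffer is large enough or has aged past a deadline, then handed to the sender as a single batch. The names and flush policy are assumptions for illustration, not the strategies evaluated in the paper.

```cpp
// Hypothetical dynamic message aggregation buffer for one destination LP.
#include <chrono>
#include <cstddef>
#include <string>
#include <vector>

class AggregationBuffer {
    using Clock = std::chrono::steady_clock;
public:
    AggregationBuffer(std::size_t maxMessages, std::chrono::microseconds maxAge)
        : maxMessages_(maxMessages), maxAge_(maxAge) {}

    // Returns the aggregated batch when a flush is triggered, else empty.
    std::vector<std::string> enqueue(const std::string& msg) {
        if (buffer_.empty()) firstEnqueue_ = Clock::now();
        buffer_.push_back(msg);
        bool full = buffer_.size() >= maxMessages_;
        bool old  = (Clock::now() - firstEnqueue_) >= maxAge_;
        if (full || old) {
            std::vector<std::string> batch;
            batch.swap(buffer_);   // hand the whole batch to the sender as one message
            return batch;
        }
        return {};
    }

private:
    std::size_t maxMessages_;
    std::chrono::microseconds maxAge_;
    std::vector<std::string> buffer_;
    Clock::time_point firstEnqueue_;
};
```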
winter simulation conference | 1997
Vijay Balakrishnan; Peter Frey; Nael B. Abu-Ghazaleh; Philip A. Wilsey
A framework for performance analysis of parallel discrete event simulators is presented. The centerpiece of this framework is a platform-independent Workload Specification Language (WSL). WSL is a language that allows the characterization of simulation models using a set of fundamental performance-critical parameters. WSL also implements a facility for representing real models. For each simulator to be tested, a WSL translator is used to generate synthetic platform-specific simulation models that conform to the performance characteristics captured by the WSL description. Accordingly, sets of portable simulation models that explore the effects of the different parameters, individually or collectively, on performance can be constructed. The construction of the workload simulation models is assisted by a Synthetic Workload Generator (SWG). The utility of the system is demonstrated with the generation of a representative set of experiments. The described framework can be used to create a standard benchmark suite that consists of a mixture of real simulation models, selected from different application domains, and synthetic models generated by SWG.
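For a sense of what "performance-critical parameters" might mean in this setting, the sketch below lists a few plausible fields a synthetic workload description could carry. These fields are assumptions for the sketch and do not reflect WSL's actual syntax or parameter set.

```cpp
// Hypothetical set of performance-critical workload parameters.
#include <cstddef>

struct SyntheticWorkload {
    std::size_t numLPs;            // number of logical processes
    double      eventGranularity;  // average computation per event (microseconds)
    double      eventFanout;       // average number of new events generated per event
    double      remoteEventRatio;  // fraction of events sent to other processors
    double      lookahead;         // minimum timestamp increment on generated events
};
```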
great lakes symposium on vlsi | 1993
Avinash C. Palaniswamy; Philip A. Wilsey
The authors address one problem of speeding parallel digital system simulation using Time Warp, namely, that logical processes with errant behavior can incur considerable rollback activity (analogous to thrashing in paged virtual memory). Consequently, additional mechanisms must be added to an optimistically synchronized simulator to inhibit excessive rollback. They describe a method of adaptively sizing bounded time windows to balance lookahead processing.
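The sketch below illustrates the general idea of an adaptive bounded time window: events beyond the current window are held back, and the window shrinks when rollbacks are frequent and grows when they are rare. The adaptation rule and constants are assumptions for illustration, not the method described in the paper.

```cpp
// Hypothetical adaptive bounded time window for an optimistic simulator.
class TimeWindow {
public:
    // An event may be executed only if it falls inside the current window.
    bool mayExecute(double eventTime, double localTime) const {
        return eventTime <= localTime + window_;
    }

    // Called periodically with the rollback rate observed since the last call.
    void adapt(double rollbacksPerEvent) {
        if (rollbacksPerEvent > highWater_)      window_ *= 0.5;  // rein in optimism
        else if (rollbacksPerEvent < lowWater_)  window_ *= 2.0;  // allow more lookahead
        if (window_ < minWindow_) window_ = minWindow_;
        if (window_ > maxWindow_) window_ = maxWindow_;
    }

private:
    double window_    = 100.0;    // in units of virtual time (illustrative)
    double highWater_ = 0.10;
    double lowWater_  = 0.01;
    double minWindow_ = 1.0;
    double maxWindow_ = 10000.0;
};
```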
modeling analysis and simulation on computer and telecommunication systems | 1999
D. Madhava Rao; Philip A. Wilsey
The steady growth in size and complexity of communication networks has necessitated corresponding advances in the underlying networking technologies, including communication protocols. This multi-faceted growth has made the analysis of today's ultra-large networks a complex task. Simulations have been used to model and analyze communication networks. Complete models of ultra-large networks need to be simulated in order to study crucial scalability and performance issues. Discrete event simulation of such ultra-large networks with limited hardware resources is complex due to their sheer size. This paper presents the issues involved in the design of a framework to enable ultra-large simulations consisting of millions of nodes. The parallel simulation techniques used, the application program interface needed for model development, and results from experiments conducted using the framework are also presented.