
Publication


Featured research published by Bruno R. Preiss.


ACM Transactions on Modeling and Computer Simulation | 1994

Effects of the checkpoint interval on time and space in time warp

Bruno R. Preiss; Wayne M. Loucks; Ian D. Macintyre

Optimistically synchronized parallel discrete-event simulation is based on the use of communicating sequential processes. Optimistic synchronization means that the processes proceed under the assumption that a synchronized execution schedule is fortuitous. Periodic checkpointing of the state of a process allows the process to roll back to an earlier state when synchronization errors are detected. This article examines the effects of varying the checkpoint interval on the execution time and memory space needed to perform a parallel simulation.

The empirical results presented in this article were obtained from the simulation of closed stochastic queuing networks with several different topologies. Various intraprocessor process-scheduling algorithms and both lazy and aggressive cancellation strategies are considered. The empirical results are compared with analytical formulae predicting time-optimal checkpoint intervals. Two modes of operation, throttling and thrashing, have been noted and their effect examined. As the checkpoint interval is increased from one, there is a throttling effect among processes on the same processor, which improves performance. When the checkpoint interval is made too large, there is a thrashing effect caused by interaction between processes on different processors. It is shown that the time-optimal and space-optimal checkpoint intervals are not the same. Furthermore, a checkpoint interval that is too small affects space adversely more than time, whereas a checkpoint interval that is too large affects time adversely more than space.
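The checkpointing trade-off described above can be illustrated with a minimal Python sketch (not the authors' implementation; the class, state layout, and event model are invented for illustration): a logical process saves its state every `interval` events, and a rollback restores the latest surviving checkpoint and then re-executes ("coasts forward") the intervening events.

```python
import copy

class LogicalProcess:
    def __init__(self, interval):
        self.interval = interval      # checkpoint interval, in events
        self.state = {"count": 0}     # hypothetical simulation state
        self.checkpoints = []         # list of (event_index, saved state)
        self.processed = []           # event history, kept for re-execution

    def handle(self, event):
        # Save state every `interval` events; a larger interval saves
        # memory and copying time but lengthens re-execution on rollback.
        if self.state["count"] % self.interval == 0:
            self.checkpoints.append(
                (self.state["count"], copy.deepcopy(self.state)))
        self.processed.append(event)
        self.state["count"] += 1

    def rollback(self, to_index):
        # Restore the most recent checkpoint at or before `to_index` ...
        idx, saved = max((c for c in self.checkpoints if c[0] <= to_index),
                         key=lambda c: c[0])
        self.checkpoints = [c for c in self.checkpoints if c[0] < idx]
        replay = self.processed[idx:to_index]
        self.processed = self.processed[:idx]
        self.state = copy.deepcopy(saved)
        # ... then coast forward by re-executing the intervening events.
        for e in replay:
            self.handle(e)

lp = LogicalProcess(interval=4)
for i in range(10):
    lp.handle(i)
lp.rollback(6)   # roll back to just before event 6
```

A larger `interval` means fewer deep copies (less space and copying overhead) but a longer coast-forward on each rollback, which is exactly the time/space tension the article quantifies.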


ACM Transactions on Modeling and Computer Simulation | 1991

Optimal memory management for time warp parallel simulation

Yi-Bing Lin; Bruno R. Preiss

Recently there has been a great deal of interest in performance evaluation of parallel simulation. Most work is devoted to the time complexity and assumes that the amount of memory available for parallel simulation is unlimited. This paper studies the space complexity of parallel simulation. Our goal is to design an efficient memory management protocol which guarantees that the memory consumption of parallel simulation is of the same order as sequential simulation. (Such an algorithm is referred to as optimal.) First, we derive the relationships among the space complexities of sequential simulation, Chandy-Misra simulation [2], and Time Warp simulation [7]. We show that Chandy-Misra may consume more storage than sequential simulation, or vice versa. Then we show that Time Warp never consumes less memory than sequential simulation. Next, we describe cancelback, an optimal Time Warp memory management protocol proposed by Jefferson. Although cancelback is considered to be a complete solution for the storage management problem in Time Warp, some efficiency issues in implementing this algorithm must be considered. We propose an optimal algorithm called artificial rollback. We show that this algorithm is easy to implement and analyze. An implementation of artificial rollback is given, which is integrated with processor scheduling to adjust the memory consumption rate based on the amount of free storage available in the system.
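The artificial-rollback idea can be sketched in a few lines of Python (a toy model with invented names; the paper's actual protocol also reclaims event-queue storage and interacts with processor scheduling): when free storage runs out, the process farthest ahead in virtual time is deliberately rolled back, releasing the memory held by its saved states.

```python
class Process:
    def __init__(self, name):
        self.name = name
        self.lvt = 0              # local virtual time
        self.saved = []           # saved states, one memory unit per event

    def advance(self):
        self.saved.append(self.lvt)   # checkpoint before each event
        self.lvt += 1

    def roll_back(self, n):
        # Discard the n most recent saved states and rewind virtual time.
        del self.saved[-n:]
        self.lvt -= n

def artificial_rollback(procs, need):
    # Reclaim `need` memory units by rolling back the process that has
    # optimistically advanced the furthest.
    victim = max(procs, key=lambda p: p.lvt)
    victim.roll_back(min(need, len(victim.saved)))

procs = [Process("A"), Process("B")]
for _ in range(8):
    procs[0].advance()       # A races ahead optimistically
procs[1].advance()
artificial_rollback(procs, need=3)   # free 3 units: A is rolled back
```

The rolled-back work is not lost permanently; as in any Time Warp rollback, the victim simply re-executes later, so the protocol trades recomputation time for bounded memory.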


Workshop on Parallel and Distributed Simulation | 1995

Memory management techniques for Time Warp on a distributed memory machine

Bruno R. Preiss; Wayne M. Loucks

This paper examines memory management issues associated with Time Warp synchronized parallel simulation on distributed memory machines. The paper begins with a summary of the techniques which have been previously proposed for memory management on various parallel processor memory structures. It then concentrates the discussion on parallel simulation executing on a distributed memory computer—a system comprised of separate computers, interconnected by a communications network. An important characteristic of the software developed for such systems is the fact that the dynamic memory is allocated from a pool of memory that is shared by all of the processes at a given processor.

This paper presents a new memory management protocol, pruneback, which recovers space by discarding previous states. This is different from all previous schemes, such as artificial rollback and cancelback, which recover memory space by causing one or more logical processes to roll back to an earlier simulation time.

The paper includes an empirical study of a parallel simulation of a closed stochastic queueing network showing the relationship between simulation execution time and amount of memory available. The results indicate that using pruneback is significantly more effective than artificial rollback (adapted for a distributed memory computer) for this problem. In the study, varying the memory limits over a 2:1 range resulted in a 1:2 change in artificial rollback execution time and almost no change in pruneback execution time.
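The contrast can be sketched as follows (illustrative Python with invented names, not the authors' code): pruneback frees memory by discarding saved states directly, and a later rollback to a pruned point simply restores the nearest surviving earlier state and re-executes forward, rather than forcing any process to roll back at reclamation time.

```python
def pruneback(checkpoints, keep_every):
    # Discard saved states, keeping every `keep_every`-th one plus the
    # newest; no logical process is forced to roll back, unlike cancelback
    # or artificial rollback.
    return [c for i, c in enumerate(checkpoints)
            if i % keep_every == 0 or i == len(checkpoints) - 1]

def restore_point(survivors, t):
    # On a later rollback to virtual time t, coast forward from the
    # nearest surviving state at or before t.
    return max(s for s in survivors if s <= t)

states = list(range(10))                  # states saved at virtual times 0..9
kept = pruneback(states, keep_every=3)    # [0, 3, 6, 9]
```

Pruning makes future rollbacks more expensive (longer re-execution), but it decouples memory reclamation from the rollback mechanism, which is what makes it attractive when each processor's processes share one local memory pool.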


Journal of Parallel and Distributed Computing | 2002

From Design Patterns to Parallel Architectural Skeletons

Dhrubajyoti Goswami; Ajit Singh; Bruno R. Preiss

The concept of design patterns has been extensively studied and applied in the context of object-oriented software design. Similar ideas are being explored in other areas of computing as well. Over the past several years, researchers have been experimenting with the feasibility of employing design-patterns related concepts in the parallel computing domain. In the past, several pattern-based systems have been developed with the intention of facilitating faster parallel application development through the use of preimplemented and reusable components that are based on frequently used parallel computing design patterns. However, most of these systems face several serious limitations such as limited flexibility, a lack of extensibility, and the ad hoc nature of their components. Lack of flexibility in a parallel programming system limits a programmer to using only the high-level components provided by the system. Lack of extensibility here refers to the fact that most of the existing pattern-based parallel programming systems come with a set of prebuilt patterns integrated into the system. However, the system provides no obvious way of increasing the repertoire of patterns when the need arises. Also, most of these systems do not offer any generic view of a parallel computing pattern, a fact which may be at the root of several of their shortcomings. This research proposes a generic (i.e., pattern- and application-independent) model for realizing and using parallel design patterns. The term “parallel architectural skeleton” is used to represent the set of generic attributes associated with a pattern. The Parallel Architectural Skeleton Model (PASM) is based on the message-passing paradigm, which makes it suitable for a LAN of workstations and PCs. The model is flexible as it allows the intermixing of high-level patterns with low-level message-passing primitives.
An object-oriented and library-based implementation of the model has been completed using C++ and MPI, without necessitating any language extension. The generic model and the library-based implementation allow new patterns to be defined and included into the system. The skeleton library serves as a framework for the systematic, hierarchical development of network-oriented parallel applications.
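The skeleton idea can be illustrated with a small sketch (Python here for brevity, though the actual PASM implementation is a C++/MPI template library; all class names below are invented): a generic, application-independent base class from which concrete patterns such as a pipeline are derived, with application code plugged into the leaves of the hierarchy.

```python
class Skeleton:
    """Generic, application-independent attributes of a pattern."""
    def __init__(self, children=None):
        self.children = children or []   # inner skeletons (hierarchical)

    def run(self, msg):
        raise NotImplementedError

class Stage(Skeleton):
    """Leaf skeleton wrapping application code."""
    def __init__(self, fn):
        super().__init__()
        self.fn = fn

    def run(self, msg):
        return self.fn(msg)

class Pipeline(Skeleton):
    """Pattern: pass each message through the stages in order."""
    def run(self, msg):
        for stage in self.children:
            msg = stage.run(msg)     # stands in for a send/receive pair
        return msg

p = Pipeline([Stage(lambda x: x + 1), Stage(lambda x: x * 2)])
```

Because patterns are ordinary subclasses rather than fixed built-ins, adding a new pattern to the repertoire means deriving a new `Skeleton` subclass, which is the extensibility property the paper argues earlier systems lacked.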


Winter Simulation Conference | 1988

A unified modeling methodology for performance evaluation of distributed discrete event simulation mechanisms

Bruno R. Preiss; V.C. Hamacher; W.M. Loucks

The main problem associated with comparing distributed discrete event simulation mechanisms is the need to base the comparisons on some common problem specification. This paper presents a specification strategy and language which allows the same simulation problem specification to be used for both distributed discrete event simulation mechanisms as well as the traditional single event list mechanism. This paper includes: a description of the Yaddes specification language; a description of the four simulation mechanisms currently supported; the results for three simulation examples; and an estimate of the performance of a communication structure needed to support the various simulation mechanisms. Currently this work has only been done on a uniprocessor emulating a multiprocessor. This has limited some of our results but lays a significant basis for future simulation mechanism comparison.


Technical Symposium on Computer Science Education | 1999

Design patterns for the data structures and algorithms course

Bruno R. Preiss

Design patterns have recently emerged as a vehicle for describing and documenting recurring object-oriented designs. More significantly, they offer up a long-awaited framework for teaching good software design. This paper espouses the use of object-oriented design patterns in the teaching of the second course in computer science, viz., the data structures and algorithms course. To use design patterns effectively, it is necessary to present the various data structures and algorithms in a common programming framework. This paper also espouses the use of a single, unified class hierarchy and the commitment to a single design throughout the teaching of the second course.
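As one concrete example of pairing a pattern with a course data structure (an illustrative sketch, not code from the course materials): the iterator pattern lets every container in a unified hierarchy be traversed the same way without exposing its internal nodes.

```python
class LinkedList:
    """A minimal singly linked list supporting the iterator pattern."""
    class _Node:
        def __init__(self, datum, next=None):
            self.datum, self.next = datum, next

    def __init__(self):
        self._head = None

    def prepend(self, datum):
        self._head = self._Node(datum, self._head)

    def __iter__(self):
        # Iterator pattern: clients traverse the container uniformly,
        # with no knowledge of the node representation.
        node = self._head
        while node is not None:
            yield node.datum
            node = node.next

lst = LinkedList()
for x in (3, 2, 1):
    lst.prepend(x)
```

Once every container in the hierarchy implements the same traversal protocol, generic algorithms (searching, printing, folding) can be written once against the shared interface, which is the single-design commitment the paper advocates.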


Advances in Software Engineering | 2002

Building parallel applications using design patterns

Dhrubajyoti Goswami; Ajit Singh; Bruno R. Preiss

Parallel application design and development is a major area of interest in the domain of high-performance scientific and industrial computing. In fact, parallel computing is becoming an integral part of several major application domains such as space, medicine, cancer and genetic research, graphics and animation, and image processing, to name a few. With the advent of fast interconnecting networks of workstations and PCs, it is now becoming increasingly possible to develop high-performance parallel applications using the combined computing power of these networked resources, at no extra cost. Contrast this with the situation until the early 1990s, when parallel computing was mostly confined to special-purpose parallel computers that were unaffordable for small research institutions. Nowadays, high-speed networks and fast general-purpose computers are aiding the mainstream adoption of parallel computing at a much more affordable cost.


Lecture Notes in Computer Science | 1999

Using Object-Oriented Techniques for Realizing Parallel Architectural Skeletons

Dhrubajyoti Goswami; Ajit Singh; Bruno R. Preiss

The concept of design patterns has recently emerged as a new paradigm in the context of object-oriented design methodology. Similar ideas are being explored in other areas of computing. In the parallel computing domain, design patterns describe recurring parallel computing problems and their solution strategies. Starting in the late 1980s, several pattern-based systems have been built for facilitating parallel application development. However, most of these systems use patterns in an ad hoc manner, thus lacking a generic or standard model for using and intermixing different patterns. This substantially hampers the usability of such systems. Lack of flexibility and extensibility are some of the other major concerns associated with most of these systems. In this paper, we propose a generic (i.e., pattern- and application-independent) model for realizing and using parallel design patterns. The term architectural skeleton is used to represent the application-independent, reusable set of attributes associated with a pattern. The model can provide most of the functionalities of low-level message-passing libraries, such as PVM or MPI, plus the benefits of the patterns. This results in tremendous flexibility for the user. The model turns out to be an ideal candidate for an object-oriented style of design and implementation. It is currently implemented as a C++ template library without requiring any language extension. The generic model, together with the object-oriented and library-based approach, facilitates extensibility.


Canadian Conference on Electrical and Computer Engineering | 1993

On the performance of a multi-threaded RISC architecture

S.K. Lindsay; Bruno R. Preiss

Multi-threading is a form of parallel processing in which the processor contains several independent contexts which share a single execution pipeline. We propose a new multi-threaded architecture which differs from previous architectures in that context switches are performed only when the running program cannot execute an instruction in the next cycle. We argue that this strategy can improve pipeline utilization in environments which do not have a large enough number of processes to fully utilize earlier multi-threaded machines.
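The proposed switch-on-stall policy can be sketched with a toy simulator (an assumed model for illustration, not the paper's architecture): the pipeline keeps issuing from the current thread and switches context only when that thread stalls or finishes.

```python
def simulate(threads):
    """threads: list of instruction lists; 'stall' marks a blocked cycle.
    Returns the order in which thread ids issued instructions."""
    pcs = [0] * len(threads)       # per-thread program counters
    current, trace = 0, []
    while any(pc < len(t) for pc, t in zip(pcs, threads)):
        t, pc = threads[current], pcs[current]
        if pc < len(t) and t[pc] != "stall":
            trace.append(current)  # keep issuing from the running thread
            pcs[current] += 1
        else:
            if pc < len(t):
                pcs[current] += 1  # the stall cycle resolves
            # Switch context only on a stall or when the thread is done.
            current = (current + 1) % len(threads)
    return trace
```

Unlike cycle-by-cycle interleaving, a thread that never stalls keeps the pipeline to itself, so even a small number of runnable threads can keep utilization high, which is the argument the paper makes against earlier multi-threaded designs.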


Winter Simulation Conference | 1991

Parallel instance discrete-event simulation using a vector uniprocessor

James F. Ohi; Bruno R. Preiss

The authors examine the possibility of running simulations in parallel on a vector processor. In such a system each instance of execution runs identical code but with a different input data set. The main problem addressed is the choice of block selection policy, that is, the choice of which indivisible block of code to execute next. The authors investigate four block selection policies by simulating the execution of such a system. A stochastic flow-graph representation was chosen to model the execution of a simulation. A two-level block selection policy was found to have the best potential speedup of the four block selection policies. The speedup levels achieved were not large, and decreased when there was a large number of unique event types (and therefore handlers) in the simulated system.

Collaboration


Dive into Bruno R. Preiss's collaborations.

Top Co-Authors

Ajit Singh
University of Waterloo

Yi-Bing Lin
National Chiao Tung University

Alan Rooks
University of Waterloo