Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where David W. Bauer is active.

Publication


Featured research published by David W. Bauer.


Workshop on Parallel and Distributed Simulation | 2000

ROSS: a high-performance, low memory, modular time warp system

Christopher D. Carothers; David W. Bauer; Shawn Pearce

We introduce a new time warp system called ROSS: Rensselaer's Optimistic Simulation System. ROSS is an extremely modular kernel that is capable of achieving event rates as high as 1,250,000 events per second when simulating a wireless telephone network model (PCS) on a quad processor PC server. In a head-to-head comparison, we observe that ROSS outperforms the Georgia Tech Time Warp (GTW) system on the same computing platform by up to 180%. ROSS requires only a small, constant number of memory buffers beyond the amount needed by the sequential simulation for a constant number of processors. The driving force behind these high-performance and low-memory-utilization results is the coupling of an efficient pointer-based implementation framework, Fujimoto's (1989) fast GVT algorithm for shared memory multiprocessors, reverse computation, and the introduction of kernel processes (KPs). KPs lower fossil collection overheads by aggregating processed event lists. This aspect allows fossil collection to be done with greater frequency, thus lowering the overall memory necessary to sustain stable, efficient parallel execution.


Journal of Parallel and Distributed Computing | 2002

ROSS: A high-performance, low-memory, modular Time Warp system

Christopher D. Carothers; David W. Bauer; Shawn Pearce

In this paper, we introduce a new Time Warp system called ROSS: Rensselaer's Optimistic Simulation System. ROSS is an extremely modular kernel that is capable of achieving event rates as high as 1,250,000 events per second when simulating a wireless telephone network model (PCS) on a quad processor PC server. In a head-to-head comparison, we observe that ROSS outperforms the Georgia Tech Time Warp (GTW) system by up to 180% on a quad processor PC server and up to 200% on the SGI Origin 2000. ROSS requires only a small, constant number of memory buffers beyond the amount needed by the sequential simulation for a constant number of processors. ROSS demonstrates for the first time that stable, highly efficient execution using little memory above what the sequential model would require is possible for low-event-granularity simulation models. The driving force behind these high-performance and low-memory-utilization results is the coupling of an efficient pointer-based implementation framework, Fujimoto's fast GVT algorithm for shared memory multiprocessors, reverse computation, and the introduction of kernel processes (KPs). KPs lower fossil collection overheads by aggregating processed event lists. This aspect allows fossil collection to be done with greater frequency, thus lowering the overall memory necessary to sustain stable, efficient parallel execution. These characteristics make ROSS an ideal system for use in large-scale networking simulation models. The principal conclusion drawn from this study is that the performance of an optimistic simulator is largely determined by its memory usage.


ACM Special Interest Group on Data Communication | 2003

Large-scale network simulation techniques: examples of TCP and OSPF models

Garrett R. Yaun; David W. Bauer; Harshad L. Bhutada; Christopher D. Carothers; Murat Yuksel; Shivkumar Kalyanaraman

Simulation of large-scale networks remains a challenge, although various network simulators are in place. In this paper, we identify fundamental issues for large-scale network simulation and propose new techniques that address them. First, we exploit optimistic parallel simulation techniques to enable fast execution on inexpensive hyper-threaded, multiprocessor systems. Second, we provide a compact, light-weight implementation framework that greatly reduces the amount of state required to simulate large-scale network models. Based on the proposed techniques, we provide sample simulation models for two networking protocols: TCP and OSPF. We implement these models in a simulation environment, ROSSNet, which is an extension to the previously developed optimistic simulator ROSS. We perform validation experiments for TCP and OSPF and present performance results of our techniques by simulating OSPF and TCP on a large and realistic topology, such as AT&T's US network based on Rocketfuel data. The end result of these innovations is that we are able to simulate million-node network topologies using inexpensive commercial off-the-shelf hyper-threaded multiprocessor systems consuming less than 1.4 GB of RAM in total.


Workshop on Parallel and Distributed Simulation | 2006

A Case Study in Understanding OSPF and BGP Interactions Using Efficient Experiment Design

David W. Bauer; Murat Yuksel; Christopher D. Carothers; Shivkumar Kalyanaraman

In this paper, we analyze the two dominant inter- and intradomain routing protocols in the Internet: Open Shortest Path First (OSPFv2) and Border Gateway Protocol (BGP4). Specifically, we investigate interactions between these two routing protocols as well as overall (i.e., both OSPF and BGP) stability and dynamics. Our analysis is based on large-scale simulations of OSPF and BGP, and careful design of experiments (DoE) to perform an efficient search for the best parameter settings of these two routing protocols.


Winter Simulation Conference | 2007

Optimistic parallel discrete event simulation of the event-based transmission line matrix method

David W. Bauer; Ernest H. Page

In this paper, we describe a technique for efficient parallelization of digital waveguide network (DWN) models based on an interpretation of the finite difference time domain (FDTD) method for discrete event simulation. Modeling methodologies based on FDTD approaches are typically constrained in both the spatial and time domains. This interpretation for discrete event simulation allows us to investigate the performance of DWN models in the context of optimistic parallel discrete event simulation employing reverse computation for rollback support. We present parallel performance results for a large-scale simulation of a 3D battlefield scenario spanning 100 km at a height of 100 m, with a resolution of 100 m in the X- and Y-planes and 10 m in the Z-plane, for 754 simultaneous radio wave transmissions.


Workshop on Parallel and Distributed Simulation | 2007

An Approach for Incorporating Rollback through Perfectly Reversible Computation in a Stream Simulator

David W. Bauer; Ernest H. Page

The traditional rollback mechanism deployed in optimistic simulation is state-saving. More recently, the method of reverse computation has been proposed to reduce the amount of memory consumed by state-saving. This method computes the reverse code for the model during rollback execution, rather than recalling saved state memory. In practice, this method has been shown to offer memory efficiency without sacrificing computational efficiency. In order to support reverse codes in the model, events must continue to be preserved in the system until fossil collection can be performed. In this paper, we define a new algorithm to support perfectly reversible model computation that does not depend on storing the full processed event history. This approach improves memory consumption, further supporting large-scale simulation.


Winter Simulation Conference | 2008

An approach for the effective utilization of GP-GPUs in parallel combined simulation

David W. Bauer; Matthew T. McMahon; Ernest H. Page

A major challenge in the field of Modeling & Simulation is providing efficient parallel computation for a variety of algorithms. Algorithms that are described easily and computed efficiently for continuous simulation may be complex to describe and/or execute efficiently in a discrete event context, and vice versa. Real-world models often employ multiple algorithms that are optimally defined in one approach or the other. Parallel combined simulation addresses this problem by allowing models to define algorithmic components across multiple paradigms. In this paper, we illustrate the performance of parallel combined simulation, where the continuous component is executed across multiple graphics processing units (GPUs) and the discrete event component is executed across multiple central processing units (CPUs).


Winter Simulation Conference | 2004

A case study in meta-simulation design and performance analysis for large-scale networks

David W. Bauer; Garrett R. Yaun; Christopher D. Carothers; Murat Yuksel; Shivkumar Kalyanaraman

Simulation and emulation techniques are fundamental to aid the process of large-scale protocol design and network operations. However, the results from these techniques are often viewed with a great deal of skepticism by the networking community. Criticisms come in two flavors: (i) the study presents isolated and potentially random feature interactions, and (ii) the parameters used in the study may not be representative of real-world conditions. In this paper, we explore both issues by applying large-scale experiment design and black-box optimization techniques to analyze convergence of network routes in the Open Shortest Path First (OSPF) protocol over a realistic network topology. By using these techniques, we show that: (i) the number of simulation experiments needed can be reduced by an order of magnitude compared to the traditional full-factorial experiment design (FFED) approach, (ii) unnecessary parameters can easily be eliminated, and (iii) rapid understanding of key parameter interactions can be achieved.


Winter Simulation Conference | 2008

An application of parallel Monte Carlo modeling for real-time disease surveillance

David W. Bauer; Mojdeh Mohtashemi

Global health, threatened by emerging infectious diseases, pandemic influenza, and biological warfare, is becoming increasingly dependent on the rapid acquisition, processing, integration, and interpretation of massive amounts of data. In response to these pressing needs, new information infrastructures are needed to support active, real-time surveillance. Detection algorithms may have a high computational cost in both the time and space domains. High performance computing platforms may be the best approach for efficiently computing these algorithms. Unfortunately, these platforms are unavailable to many health care agencies. Our work focuses on efficient parallelization of outbreak detection algorithms within the context of cloud computing as a high throughput computing platform. Cloud computing is investigated as an approach to meet real-time constraints and reduce or eliminate costs associated with real-time disease surveillance systems.


Winter Simulation Conference | 2006

Eliminating remote message passing in optimistic simulation

David W. Bauer; Christopher D. Carothers

This paper introduces an algorithm for parallel simulation capable of executing the critical path without a priori knowledge of the model being executed. This algorithm is founded on the observation that each initial event in a model causes a stream of events to be generated for execution. By focusing on the parallelization of event streams, rather than logical processes, we have created a new simulation engine optimized for large-scale models (i.e., models with 1 million LPs or more).

Collaboration


Dive into David W. Bauer's collaborations.

Top Co-Authors

Christopher D. Carothers, Rensselaer Polytechnic Institute
Shivkumar Kalyanaraman, Rensselaer Polytechnic Institute
Garrett R. Yaun, Rensselaer Polytechnic Institute
Murat Yuksel, University of Central Florida
Mojdeh Mohtashemi, Massachusetts Institute of Technology
Shawn Pearce, Rensselaer Polytechnic Institute
Cheng Hsu, Rensselaer Polytechnic Institute