Alfred Park
IBM
Publications
Featured research published by Alfred Park.
Modeling, Analysis, and Simulation of Computer and Telecommunication Systems | 2003
Richard M. Fujimoto; Kalyan S. Perumalla; Alfred Park; Hao Wu; Mostafa H. Ammar; George F. Riley
Parallel and distributed simulation tools are emerging that offer the ability to perform detailed, packet-level simulations of large-scale computer networks on an unprecedented scale. The state-of-the-art in large-scale network simulation is characterized quantitatively. For this purpose, a metric based on the number of packet transmissions that can be processed by a simulator per second of wallclock time (PTS) is used as a means to quantitatively assess packet-level network simulator performance. An approach to realizing scalable network simulations that leverages existing sequential simulation models and software is described. Results from a recent performance study are presented concerning large-scale network simulation on a variety of platforms ranging from workstations to cluster computers to supercomputers. These experiments include runs utilizing as many as 1536 processors yielding performance as high as 106 million PTS. The performance of packet-level simulations of web and ftp traffic, and denial of service attacks on networks containing millions of network nodes are briefly described, including a run demonstrating the ability to simulate a million web traffic flows in near real-time. New opportunities and research challenges to fully exploit this capability are discussed.
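The PTS metric described above is simply the ratio of simulated packet transmissions to elapsed wallclock time. A minimal sketch (the function name is illustrative, not from the paper):

```python
def packet_transmissions_per_second(packets_transmitted, wallclock_seconds):
    """Packet Transmissions per Second (PTS): simulated packet
    transmissions processed per second of wallclock time."""
    if wallclock_seconds <= 0:
        raise ValueError("wallclock time must be positive")
    return packets_transmitted / wallclock_seconds

# e.g. a one-minute run processing 106e6 * 60 transmissions
# sustains 106 million PTS
print(packet_transmissions_per_second(106_000_000 * 60, 60.0))
```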
ACM Transactions on Modeling and Computer Simulation | 2004
George F. Riley; Mostafa H. Ammar; Richard M. Fujimoto; Alfred Park; Kalyan S. Perumalla; Donghua Xu
We describe an approach and our experiences in applying federated simulation techniques to create large-scale parallel simulations of computer networks. Using the federated approach, the topology and the protocol stack of the simulated network are partitioned into a number of submodels, and a simulation process is instantiated for each one. Runtime infrastructure software provides services for interprocess communication and synchronization (time management). We first describe issues that arise in homogeneous federations where a sequential simulator is federated with itself to realize a parallel implementation. We then describe additional issues that must be addressed in heterogeneous federations composed of different network simulation packages, and describe a dynamic simulation backplane mechanism that facilitates interoperability among different network simulators. Specifically, the dynamic simulation backplane provides a means of addressing key issues that arise in federating different network simulators: differing packet representations, incomplete implementations of network protocol models, and differing levels of detail among the simulation processes. We discuss two different methods for using the backplane for interactions between heterogeneous simulators: the cross-protocol stack method and the split-protocol stack method. Finally, results from an experimental study are presented for both the homogeneous and heterogeneous cases that provide evidence of the scalability of our federated approach on two moderately sized computing clusters. Two different homogeneous implementations are described: Parallel/Distributed ns (pdns) and the Georgia Tech Network Simulator (GTNetS). Results of a heterogeneous implementation federating ns with GloMoSim are described. This research demonstrates that federated simulations are a viable approach to realizing efficient parallel network simulation tools.
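One of the backplane's roles named above is reconciling differing packet representations between simulators. A purely illustrative sketch of that idea, projecting one simulator's packet fields onto another's and supplying defaults for unimplemented protocol fields (the field names and defaults here are assumptions, not the actual backplane API):

```python
def translate_packet(packet, target_fields, defaults):
    """Project a packet dict onto the target simulator's field set,
    filling in defaults for fields the source model does not carry."""
    return {f: packet.get(f, defaults.get(f)) for f in target_fields}

# hypothetical ns-style packet mapped onto a GloMoSim-style field set
ns_packet = {"src": "10.0.0.1", "dst": "10.0.0.2", "ttl": 64, "seq": 7}
target_fields = ["src", "dst", "ttl", "hop_count"]
out = translate_packet(ns_packet, target_fields, {"hop_count": 0})
print(out)
```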
Workshop on Parallel and Distributed Simulation | 2004
Alfred Park; Richard M. Fujimoto; Kalyan S. Perumalla
Parallel discrete event simulation techniques have enabled the realization of large-scale models of communication networks containing millions of end hosts and routers. However, the performance of these parallel simulators could be severely degraded if proper synchronization algorithms are not utilized. In this paper, we compare the performance and scalability of synchronous and asynchronous algorithms for conservative parallel network simulation. We develop an analytical model to evaluate the efficiency and scalability of certain variations of the well-known null message algorithm, and present experimental data to verify the accuracy of this model. This analysis and initial performance measurements on parallel machines containing hundreds of processors suggest that for scenarios simulating scaled network models with a constant number of input and output channels per logical process, an optimized null message algorithm offers better scalability than efficient global reduction based synchronous protocols.
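The core of the null message (Chandy-Misra-Bryant) idea analyzed above can be sketched in a few lines: a logical process may only consume events up to the minimum timestamp promised on its input channels, and it advances its neighbors by sending null messages carrying "local time plus lookahead". A minimal sketch, with illustrative names:

```python
def safe_time(input_channel_clocks):
    """Latest simulation time this LP can safely process up to:
    the minimum timestamp promised across all input channels."""
    return min(input_channel_clocks.values())

def null_message_timestamp(local_clock, lookahead):
    """Promise sent to each downstream neighbor: this LP will send
    no event with a timestamp earlier than local_clock + lookahead."""
    return local_clock + lookahead

clocks = {"lp1": 10.0, "lp2": 12.5}
print(safe_time(clocks))                   # 10.0
print(null_message_timestamp(10.0, 0.5))   # 10.5
```

The paper's analytical model concerns exactly how often these null messages must flow relative to real events; a small lookahead forces frequent null messages, which is what degrades scalability.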
International Conference on Cloud Computing | 2009
Asad Waqar Malik; Alfred Park; Richard M. Fujimoto
Cloud computing offers the potential to make parallel discrete event simulation capabilities more widely accessible to users who are not experts in this technology and do not have ready access to high performance computing equipment. Services hosted within the “cloud” can potentially incur processing delays due to load sharing among other active services, and can cause optimistic simulation protocols to perform poorly. This paper proposes a mechanism termed the Time Warp Straggler Message Identification Protocol (TW-SMIP) to address optimistic synchronization and performance issues associated with executing parallel discrete event simulation in cloud computing environments.
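The abstract names TW-SMIP but not its mechanics, so the following is illustrative only: it shows the underlying Time Warp notion of a "straggler", a message whose timestamp is earlier than the receiver's local virtual time (LVT), which forces an optimistic logical process to roll back. Class and method names are assumptions:

```python
class OptimisticLP:
    """Toy optimistic LP that counts rollbacks caused by stragglers."""

    def __init__(self):
        self.lvt = 0.0        # local virtual time
        self.rollbacks = 0

    def receive(self, timestamp):
        if timestamp < self.lvt:
            # straggler: roll local virtual time back to the message
            self.rollbacks += 1
        self.lvt = timestamp

lp = OptimisticLP()
for ts in [1.0, 3.0, 2.0, 4.0]:   # 2.0 arrives after LVT reached 3.0
    lp.receive(ts)
print(lp.rollbacks)  # 1
```

Processing delays from load sharing in the cloud make such out-of-order arrivals more frequent, which is the performance issue TW-SMIP targets.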
Workshop on Parallel and Distributed Simulation | 2008
Matthias Jeschke; Alfred Park; Roland Ewald; Richard M. Fujimoto; Adelinde M. Uhrmacher
The application of parallel and distributed simulation techniques is often limited by the amount of parallelism available in the model. This holds true for large-scale cell-biological simulations, a field that has emerged as data and knowledge concerning these systems increase and biologists call for tools to guide wet-lab experimentation. A promising approach to exploit parallelism in this domain is the integration of spatial aspects, which are often crucial to a model's validity. We describe an optimistic, parallel and distributed variant of the Next-Subvolume Method (NSM), a method that augments the well-known Gillespie Stochastic Simulation Algorithm (SSA) with spatial features. We discuss requirements imposed by this application on a parallel discrete event simulation engine to achieve efficient execution. First results of combining NSM and the grid-inspired simulation system AURORA are shown.
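The NSM's event-selection step can be sketched briefly: each subvolume draws an exponential event time from its total propensity (reaction plus diffusion rates), and the subvolume with the earliest time fires next. A toy sketch with illustrative rates (not from the paper):

```python
import math
import random

def next_subvolume(propensities, rng):
    """Return (index, event_time) of the earliest-firing subvolume.
    Each subvolume's waiting time is exponential in its propensity."""
    times = [rng.expovariate(a) if a > 0 else math.inf
             for a in propensities]
    i = min(range(len(times)), key=times.__getitem__)
    return i, times[i]

rng = random.Random(42)
idx, t = next_subvolume([0.5, 2.0, 1.0], rng)  # three toy subvolumes
print(idx, t)
```

Distributing subvolumes across processors is what exposes spatial parallelism; the optimistic variant described in the paper then has to handle diffusion events that cross partition boundaries out of order.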
Workshop on Parallel and Distributed Simulation | 2006
Alfred Park; Richard M. Fujimoto
A master/worker paradigm for executing large-scale parallel discrete event simulation programs over network-enabled computational resources is proposed and evaluated. In contrast to conventional approaches to parallel simulation, a client/server architecture is proposed where clients (workers) repeatedly download state vectors of logical processes and associated message data from a server (master), perform simulation computations locally at the client, and then return the results back to the server. This process offers several potential advantages over conventional parallel discrete event simulation systems, including support for execution over heterogeneous distributed computing platforms, load balancing, efficient execution on shared platforms, easy addition or removal of client machines during program execution, simpler fault tolerance, and improved portability. A prototype implementation called the Aurora Parallel and Distributed Simulation System (Aurora) is described, along with the structure and interaction of its components. Results of an experimental performance evaluation are presented detailing primitive timings and application performance on both dedicated and shared computing platforms.
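The download-compute-return cycle described above can be sketched as a simple work queue. This is a minimal single-machine sketch, not Aurora's implementation; the names (`worker_loop`, `advance_lp`) and the toy counter LP are assumptions for illustration:

```python
import queue

def worker_loop(master_queue, results, advance_lp):
    """Worker: repeatedly pull an LP state vector plus its pending
    messages from the master, advance the LP locally, return results."""
    while True:
        try:
            lp_state, messages = master_queue.get_nowait()
        except queue.Empty:
            return
        results.append(advance_lp(lp_state, messages))

# toy LP: state is a counter, each message adds its value
master = queue.Queue()
master.put(({"count": 0}, [1, 2, 3]))
results = []
worker_loop(master, results,
            lambda s, ms: {"count": s["count"] + sum(ms)})
print(results)  # [{'count': 6}]
```

Because workers hold no persistent state between iterations, any worker can pick up any logical process, which is what yields the load-balancing and fault-tolerance advantages listed in the abstract.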
Grid Computing | 2007
Alfred Park; Richard M. Fujimoto
Utilizing desktop grid infrastructures is challenging for parallel discrete event simulation (PDES) codes due to characteristics such as inter-process messaging, restricted execution, and overall lower concurrency than typical volunteer computing projects. The Aurora2 system uses an approach that simultaneously provides both replicated execution support and scalable performance of PDES applications through public resource computing. This is accomplished through a multithreaded distributed back-end system, low overhead communications middleware, and an efficient client implementation. This paper describes the Aurora2 architecture and issues pertinent to PDES executions in a desktop grid environment that must be addressed when distributing back-end services across multiple machines. We quantify improvement over the first generation Aurora system through a comparative performance study detailing PDES programs with various scalability characteristics for execution over desktop grids.
Workshop on Parallel and Distributed Simulation | 2003
Kalyan S. Perumalla; Alfred Park; Richard M. Fujimoto; George F. Riley
Federated simulation interfaces such as the high level architecture (HLA) were designed for interoperability, and as such are not traditionally associated with high-performance computing. We present results of a case study examining the use of federated simulations using runtime infrastructure (RTI) software to realize large-scale parallel network simulators. We examine the performance of two different federated network simulators, and describe RTI performance optimizations that were used to achieve efficient execution. We show that RTI-based parallel simulations can scale extremely well and achieve very high speedup. Our experiments yielded more than 80-fold scaled speedup in simulating large TCP/IP networks, demonstrating performance of up to 6 million simulated packet transmissions per second on a Linux cluster. Networks containing up to two million network nodes (routers and end systems) were simulated.
Distributed Simulation and Real-Time Applications | 2008
Alfred Park; Richard M. Fujimoto
Issues concerning optimistic time management on public-resource computing infrastructures and desktop grids are explored. The master/worker (MW) approach used for these platforms calls for a rethinking of optimistic synchronization and the development of new mechanisms and protocols specific to this paradigm. Approaches to rollback, message cancellation, state management and state saving are described. Challenges specific to adapting optimism to this computing paradigm are discussed as well as optimizations and overhead reduction techniques. The impact of various key parallel discrete event simulation (PDES) application properties on performance of an optimistic MW implementation is evaluated using Aurora, a framework supporting PDES over desktop grids.
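The state saving and rollback mechanics mentioned above can be illustrated with copy state saving: keep a timestamped snapshot before each optimistic advance, and on a straggler restore the latest snapshot strictly before the straggler's timestamp. This is a generic sketch of the technique, not Aurora's actual API:

```python
import copy

class StateManager:
    """Toy copy state saving for optimistic master/worker execution."""

    def __init__(self):
        self.snapshots = []          # list of (timestamp, state copy)

    def save(self, timestamp, state):
        self.snapshots.append((timestamp, copy.deepcopy(state)))

    def rollback(self, straggler_ts):
        """Discard snapshots at or after the straggler's timestamp;
        return the (timestamp, state) restore point, or None."""
        while self.snapshots and self.snapshots[-1][0] >= straggler_ts:
            self.snapshots.pop()
        return self.snapshots[-1] if self.snapshots else None

mgr = StateManager()
mgr.save(0.0, {"x": 0})
mgr.save(5.0, {"x": 5})
mgr.save(9.0, {"x": 9})
ts, state = mgr.rollback(6.0)   # straggler at t=6 -> restore t=5
print(ts, state)  # 5.0 {'x': 5}
```

In the master/worker setting the snapshots naturally live at the master, since workers already ship state vectors back after each work unit; the paper's overhead-reduction techniques address the cost of that traffic.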
Workshop on Parallel and Distributed Simulation | 2009
Alfred Park; Richard M. Fujimoto
The master/worker (MW) paradigm can be used to implement parallel discrete event simulations (PDES) on metacomputing systems. MW PDES applications incur overheads not found in conventional PDES executions running on tightly coupled machines. We introduce four techniques for reducing these overheads on public-resource and desktop grid infrastructures. Work unit caching, pipelined state updates, expedited message delivery, and adaptive work unit scheduling mechanisms are described that provide significant reduction in overall overhead when used in tandem. We present performance results showing that an optimized MW PDES system can exhibit performance comparable to a traditional PDES system for a queueing network and a particle physics simulation.
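Of the four techniques, work unit caching is the easiest to sketch: a worker retains the LP state it last advanced, so when the same LP is rescheduled to that worker the master only sends new messages rather than the full state vector. The structure below is an assumption based on the abstract, not the paper's implementation:

```python
class CachingWorker:
    """Toy worker that caches LP state between work units."""

    def __init__(self):
        self.cache = {}            # lp_id -> cached state
        self.full_transfers = 0    # full state vectors received

    def run(self, lp_id, messages, state_from_master=None):
        if lp_id not in self.cache:
            # cache miss: master must ship the full state vector
            self.cache[lp_id] = state_from_master
            self.full_transfers += 1
        state = self.cache[lp_id]
        state["count"] += sum(messages)   # toy LP advance
        return state

w = CachingWorker()
w.run("lp0", [1, 2], {"count": 0})
w.run("lp0", [3])                 # cache hit: messages only
print(w.full_transfers)  # 1
```

The adaptive scheduling mechanism mentioned in the abstract complements this: steering an LP back to the worker that already caches its state is what turns the cache hits into real bandwidth savings.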