Seng Chuan Tay
National University of Singapore
Publications
Featured research published by Seng Chuan Tay.
workshop on parallel and distributed simulation | 1997
Seng Chuan Tay; Yong Meng Teo; Siew Theng Kong
Excessive rollback recoveries due to overoptimistic event execution in Time Warp simulators often degrade their runtime performance. This paper presents a two-sided throttling scheme to dynamically adjust the event execution speed of Time Warp simulators. The proposed throttle is based on a new concept called the global progress window, which positions each simulation process on a global time scale so that its event execution can be accelerated or suspended. As each simulation process can be throttled to a steady state, excessive rollback recoveries due to causality errors can be avoided. To quantify the effect of rollbacks and to compare different Time Warp implementations, we propose two new measures: RPE (number of Rollback events Per committed Event) and E (relative Effectiveness in reducing rollback overhead). Our implementation results show that the proposed throttle effectively regulates the progress of each simulation process, resulting in a significant reduction in rollback thrashing and elapsed time.
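The sketch below is a minimal illustration of the two ideas in this abstract, not the paper's implementation: a logical process is allowed to execute only while its local virtual time stays inside an assumed global progress window [GVT, GVT + W], and RPE and E are computed from simple rollback counters. The window width, the counters, and the reading of E as a fractional RPE reduction are all assumptions.

```cpp
// Illustrative sketch only: the window width, the rollback counters and the reading
// of E as a fractional RPE reduction are assumptions, not the paper's formulation.
#include <iostream>

struct LogicalProcess {
    double lvt = 0.0;     // local virtual time
    long committed = 0;   // events committed so far
    long rolled_back = 0; // events undone by rollbacks
};

// Two-sided throttle: an LP may execute its next event only while its LVT lies
// inside the global progress window [gvt, gvt + window]; otherwise it is suspended
// until GVT advances far enough to readmit it.
bool may_execute(const LogicalProcess& lp, double gvt, double window) {
    return lp.lvt <= gvt + window;
}

// RPE: number of rolled-back events per committed event (lower is better).
double rpe(const LogicalProcess& lp) {
    return lp.committed ? static_cast<double>(lp.rolled_back) / lp.committed : 0.0;
}

// Relative effectiveness E of a throttled run against an unthrottled baseline,
// read here as the fractional reduction in RPE.
double effectiveness(double rpe_unthrottled, double rpe_throttled) {
    return rpe_unthrottled > 0.0 ? (rpe_unthrottled - rpe_throttled) / rpe_unthrottled
                                 : 0.0;
}

int main() {
    LogicalProcess lp{105.0, 1000, 250};
    std::cout << "may execute: " << may_execute(lp, 100.0, 10.0) << "\n"
              << "RPE:         " << rpe(lp) << "\n"
              << "E:           " << effectiveness(0.5, rpe(lp)) << "\n";
}
```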
workshop on parallel and distributed simulation | 1998
Seng Chuan Tay; Yong Meng Teo; Rassul Ayani
The paper presents an analytical model for evaluating the performance of Time Warp simulators. The proposed model is formalized in terms of two important time components in parallel and distributed processing: computation time and communication time. Communication time is modeled by buffer access time and message transmission time. The logical processes of the Time Warp simulation and the processors executing them are assumed to be homogeneous. Performance metrics such as rollback probability, rollback distance, elapsed time and Time Warp efficiency are derived. More importantly, we also analyze the impact of cascading rollback waves on overall Time Warp performance. By modeling the deviation in state numbers of sender-receiver pairs, we investigate the performance of the throttled Time Warp scheme. Our analytical model shows that the deviation in state numbers and the communication delay have a profound impact on Time Warp efficiency. The performance model has been validated against implementation results obtained on a Fujitsu AP3000 parallel computer. The analytical framework can be readily used to estimate performance before the Time Warp simulator is implemented.
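As a rough illustration of the kind of metrics such a model derives, the sketch below assumes that each committed event drags along p*d wasted event executions on average, where p is the rollback probability and d the rollback distance. These closed-form expressions are simplified stand-ins, not the paper's formulas.

```cpp
// Simplified stand-ins for the paper's model: each committed event is assumed to
// drag along p*d wasted (re-executed) events on average.
#include <iostream>

struct ModelInputs {
    double t_comp;   // mean event computation time (seconds)
    double t_comm;   // mean communication time: buffer access + transmission
    double p;        // rollback probability per committed event
    double d;        // mean rollback distance (events undone per rollback)
    long   n_events; // committed events per processor
};

// Time Warp efficiency: committed work divided by total (committed + wasted) work.
double tw_efficiency(const ModelInputs& m) {
    return 1.0 / (1.0 + m.p * m.d);
}

// Predicted elapsed time: every committed event pays its own cost plus the cost of
// the events that rollbacks force it to redo.
double elapsed_time(const ModelInputs& m) {
    return m.n_events * (m.t_comp + m.t_comm) * (1.0 + m.p * m.d);
}

int main() {
    ModelInputs m{10e-6, 40e-6, 0.2, 3.0, 100000};
    std::cout << "efficiency:       " << tw_efficiency(m) << "\n"
              << "elapsed time (s): " << elapsed_time(m) << "\n";
}
```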
annual simulation symposium | 1998
Yong Meng Teo; Seng Chuan Tay; Siew Theng Kong
Parallel discrete-event simulation research has focused mainly on designing efficient parallel simulation protocols. However, the exploitation of parallel simulation technology in real-life applications has been hindered mainly by the lack of simulation support tools. The paper describes the design of SPaDES (Structured Parallel Discrete-Event Simulation), a parallel simulation environment for developing portable simulation models, and a platform for design experimentation of parallel simulation synchronization protocols. An implementation of the environment, SPaDES/C++, cleanly separates simulation modeling and programming from the details of parallelization such as parallel simulation synchronization and parallel programming. For ease of portability and modular design, SPaDES/C++ is implemented as a parallel simulation library. A comparison of SPaDES/C++ with CSim and Simscript using two application examples is discussed.
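The hypothetical fragment below shows the separation of concerns described here: the modeller writes only process behaviour, while synchronization and message passing sit behind library calls. The class and method names (SimProcess, hold, send) are invented for illustration and are not the SPaDES/C++ API.

```cpp
// Hypothetical sketch only -- these names are NOT the SPaDES/C++ API; they merely
// illustrate the separation the paper describes: the modeller writes process
// behaviour, while the library hides synchronization and message passing.
#include <iostream>
#include <string>

class SimProcess {                        // role played by the simulation library
public:
    virtual ~SimProcess() = default;
    virtual void body() = 0;              // model logic written by the simulationist
protected:
    // In a real library these calls would drive the synchronization protocol;
    // here they are stubs so the sketch is self-contained.
    void hold(double delay) { std::cout << "advance time by " << delay << "\n"; }
    void send(const std::string& target) { std::cout << "event to " << target << "\n"; }
};

// The model never sees GVT computation, rollback, or the parallel runtime.
class Server : public SimProcess {
public:
    void body() override {
        hold(5.0);                        // service time
        send("sink");                     // pass the job downstream
    }
};

int main() {
    Server s;
    s.body();
}
```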
modeling analysis and simulation on computer and telecommunication systems | 2001
Yong Meng Teo; Bhakti S. S. Onggo; Seng Chuan Tay
A new formal approach based on partial order set (poset) theory is proposed to analyze the space requirement of discrete-event parallel simulation. We divide the memory required by a simulation problem into memory to model the states of the real-world system, memory to maintain the list of future event occurrences, and memory required to implement the event synchronization protocol. We establish the relationship between poset theory and event orderings in simulation. Based on our framework, we analyze the space requirement using an open and a closed system as examples. Our analysis shows that, apart from problem size and traffic intensity, which affect the memory requirement, event ordering is an important factor that can be analyzed before implementation. In an open system, a simulation with weaker event ordering requires more memory than one with strong ordering. However, the memory requirement is constant and independent of event ordering in closed systems.
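The following sketch only spells out the three-way memory split the analysis starts from; the byte counts are placeholder assumptions, and the poset-based reasoning about event orderings is not reproduced.

```cpp
// Back-of-the-envelope sketch, not the paper's poset derivation: it only shows the
// three-way split of memory that the analysis starts from. All sizes are assumptions.
#include <iostream>

struct MemoryModel {
    long n_lps;           // logical processes modelling the system state
    long state_bytes;     // bytes per LP state
    long pending_events;  // expected future-event-list population
    long event_bytes;     // bytes per scheduled event
    long protocol_bytes;  // per-LP overhead of the synchronization protocol
                          // (e.g. saved states or null-message buffers)
};

long total_memory(const MemoryModel& m) {
    long state    = m.n_lps * m.state_bytes;          // real-world system state
    long fel      = m.pending_events * m.event_bytes; // future event occurrences
    long protocol = m.n_lps * m.protocol_bytes;       // event-ordering machinery
    return state + fel + protocol;
}

int main() {
    MemoryModel open_system{64, 256, 5000, 48, 4096};
    std::cout << total_memory(open_system) << " bytes\n";
}
```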
annual simulation symposium | 1999
Yong Meng Teo; Seng Chuan Tay
Developing a parallel discrete event simulation from scratch requires an in-depth knowledge of the mapping process from the physical model to the simulation model, and a substantial effort in coping with numerous parallelism issues in the underlying synchronization protocols adopted. The lack of software tools and environments to reduce the development effort significantly is a major hindrance in adopting parallel simulation technology. The paper presents an overview of the SPaDES (Structured Parallel Discrete-Event Simulation) scalable parallel simulation framework. We focus on the performance analysis of SPaDES/C++, an implementation of SPaDES on a distributed memory Fujitsu AP3000 parallel computer. SPaDES/C++ hides the underlying complex parallel simulation synchronization and parallel programming details from the simulationist. We study various ways of improving SPaDES execution performance, including periodic checkpointing of simulation states, aggregation of messages for logical processes that reside on the same physical processor, and increasing the computational granularity of runtime processes to reduce the costs of synchronization and communication. Our empirical results show that the SPaDES framework can deliver good speedup for applications with large problem size and is scalable.
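One of the optimizations listed above, message aggregation, can be pictured with the sketch below: events bound for logical processes on the same destination processor are batched into a single physical message. The batch size and message layout are assumptions, not SPaDES/C++ internals.

```cpp
// Minimal sketch of message aggregation: events destined for LPs on the same
// processor are shipped in one physical message. Batch size is an assumption.
#include <cstddef>
#include <iostream>
#include <map>
#include <vector>

struct Event { int dest_lp; double timestamp; };

class MessageAggregator {
    std::map<int, std::vector<Event>> batches_;  // destination processor -> batch
    std::size_t batch_size_;
public:
    explicit MessageAggregator(std::size_t batch_size) : batch_size_(batch_size) {}

    void send(int dest_processor, const Event& e) {
        auto& batch = batches_[dest_processor];
        batch.push_back(e);
        if (batch.size() >= batch_size_) flush(dest_processor);
    }

    void flush(int dest_processor) {             // one physical message per batch
        auto& batch = batches_[dest_processor];
        std::cout << "ship " << batch.size() << " events to processor "
                  << dest_processor << "\n";
        batch.clear();
    }
};

int main() {
    MessageAggregator agg(3);
    for (int i = 0; i < 7; ++i) agg.send(1, Event{i, i * 2.0});
    agg.flush(1);                                // drain the remainder
}
```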
international symposium on parallel architectures algorithms and networks | 1994
Yong Meng Teo; Seng Chuan Tay
This paper addresses the use of parallel simulation techniques to speed up the simulation of multistage interconnection networks. The conventional null-message approach to resolving the deadlock problem in conservative simulation is based on a lookahead mechanism. For some application domains, unfortunately, the lookahead information is not available, and a simulation using null messages will be trapped in a livelock. We propose a deadlock- and livelock-free scheme that uses null messages, but without guaranteed lookahead, to coordinate the simulation, together with different partitioning techniques for mapping the simulation program onto multicomputers. A flushing mechanism to address the combinatorial explosion of null messages in conservative simulation is also discussed. Our analysis shows that the proposed flushing mechanism effectively reduces the number of null messages from exponential to linear.
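One possible reading of the flushing mechanism is sketched below: a newly arrived null message on a channel supersedes any older null message still buffered on that channel, so at most one null message per input channel is retained. This is an illustrative interpretation, not the paper's exact protocol.

```cpp
// Illustrative interpretation of flushing: a new null message on a channel
// supersedes any stale one, so at most one null message per input channel is stored.
#include <algorithm>
#include <iostream>
#include <limits>
#include <optional>
#include <vector>

struct NullMessage { double timestamp; };             // lower bound on future real messages

class InputChannels {
    std::vector<std::optional<NullMessage>> latest_;  // one slot per channel
public:
    explicit InputChannels(std::size_t n) : latest_(n) {}

    void receive_null(std::size_t channel, const NullMessage& m) {
        // Flush: overwrite the stale null message instead of queueing another one.
        if (!latest_[channel] || latest_[channel]->timestamp < m.timestamp)
            latest_[channel] = m;
    }

    // Safe time up to which the LP may simulate without violating causality.
    double safe_horizon() const {
        double horizon = std::numeric_limits<double>::infinity();
        for (const auto& m : latest_)
            horizon = std::min(horizon, m ? m->timestamp : 0.0);
        return horizon;
    }
};

int main() {
    InputChannels in(2);
    in.receive_null(0, {10.0});
    in.receive_null(0, {15.0});                       // supersedes the 10.0 null message
    in.receive_null(1, {12.0});
    std::cout << "safe horizon: " << in.safe_horizon() << "\n";   // prints 12
}
```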
annual simulation symposium | 1999
Yong Meng Teo; Hong Wang; Seng Chuan Tay
The paper provides a framework for studying the complex performance interactions in parallel simulation along three main components: the simulation model, the parallel simulation strategy/protocol, and the execution platform. We propose a methodology for characterizing the potential parallelism of a simulation model based on analytical modeling techniques. A clear understanding of the degree of event parallelism inherent in the simulation problem/model is essential for simulation practitioners to assess the performance benefits of exploiting parallelism before substantial programming effort is invested in an implementation. Establishing the baseline event parallelism available in a simulation model is also crucial for assessing the performance (parallelism) loss that may arise from the parallel synchronization protocol and the architecture of the parallel execution platform used. We analyze how causality dependencies among events affect the performance of a simulation model, and determine the potential event parallelism in simulation models.
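As a toy example of characterizing potential event parallelism, the sketch below takes an event causality DAG (in topological order) and divides the total event count by the length of the longest causal chain. The paper's analytical treatment is more general; this is only an illustration.

```cpp
// Sketch of one way to characterize potential event parallelism, not the paper's
// model: total events divided by the length of the longest causal chain in a DAG.
#include <algorithm>
#include <iostream>
#include <vector>

// Events are 0..n-1; deps[i] lists the events that must causally precede event i.
double potential_parallelism(const std::vector<std::vector<int>>& deps) {
    const std::size_t n = deps.size();
    std::vector<int> depth(n, 1);           // longest causal chain ending at event i
    for (std::size_t i = 0; i < n; ++i)     // assumes events are topologically ordered
        for (int p : deps[i])
            depth[i] = std::max(depth[i], depth[p] + 1);
    int critical = *std::max_element(depth.begin(), depth.end());
    return static_cast<double>(n) / critical;
}

int main() {
    // 6 events: 0 -> 1 -> 2 form one causal chain, 3 -> 4 -> 5 an independent chain.
    std::vector<std::vector<int>> deps{{}, {0}, {1}, {}, {3}, {4}};
    std::cout << "potential event parallelism: " << potential_parallelism(deps) << "\n";
}
```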
modeling analysis and simulation on computer and telecommunication systems | 2000
Seng Chuan Tay; Yong Meng Teo
In the Time Warp (TW) protocol, the system state must be checkpointed to facilitate the rollback operation. While increasing the checkpointing frequency increases the state saving cost, an infrequent scheme escalates the coast-forward effort because a large number of executed events must be redone. This paper proposes a probabilistic approach to checkpointing. We derive the rollback probability and compute the expected coast-forward effort if a state is not saved. To reduce implementation overheads, the rollback probability and coast-forward cost are precomputed and made available at runtime as a lookup table. Based on the derived expectation, a state vector is saved only if the expected coast-forward effort is larger than the state saving cost, and skipped otherwise. Our experiments show that the cost model reduces the simulation elapsed time by close to 30% compared with saving the system state after each event execution and with saving it at a predefined interval.
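The decision rule described above can be pictured with the sketch below: a checkpoint is taken only when the expected coast-forward effort, looked up from a precomputed rollback-probability table, exceeds the state saving cost. The table contents and cost parameters are invented for illustration.

```cpp
// Minimal sketch of the checkpointing decision rule; the lookup table and cost
// parameters are invented for illustration, not the paper's measured values.
#include <algorithm>
#include <iostream>
#include <vector>

struct CheckpointPolicy {
    double state_saving_cost;          // cost of saving one state vector
    double event_redo_cost;            // cost of re-executing one event on coast-forward
    std::vector<double> rollback_prob; // P[rollback], indexed by events since last save

    // Expected coast-forward effort if the checkpoint is skipped now.
    double expected_coast_forward(std::size_t since_last_save) const {
        std::size_t k = std::min(since_last_save, rollback_prob.size() - 1);
        return rollback_prob[k] * since_last_save * event_redo_cost;
    }

    // Save only when skipping is expected to cost more than saving.
    bool should_save(std::size_t since_last_save) const {
        return expected_coast_forward(since_last_save) > state_saving_cost;
    }
};

int main() {
    CheckpointPolicy policy{2.0, 1.0, {0.1, 0.2, 0.4, 0.6, 0.8}};
    for (std::size_t k = 1; k <= 6; ++k)
        std::cout << "events since last save = " << k
                  << "  save? " << policy.should_save(k) << "\n";
}
```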
annual simulation symposium | 2001
Seng Chuan Tay; Yong Meng Teo
While constraining the speculation in Time Warp (TW) tends to decrease the number of false event executions, it also introduces an opportunity cost when the processors are not fully utilized. To obtain good runtime performance, this trade-off must be optimized. This paper studies the implications of regulating speculation in TW and develops an analytic framework for optimizing the elapsed time of TW simulations. By aggregating the effect of three time components, namely computation time, communication time and processor idle time, our analytic framework optimizes the degree of speculation for performance on the AP3000. Our experiments on a Fujitsu AP3000 distributed-memory parallel computer simulating several applications show that the predicted performance metrics deviate from the measured values by less than 8%. Both the analytical and experimental results confirm that speculation without rollbacks may not produce the best elapsed time; instead, a controlled degree of causality error is preferred in most practical cases.
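The sketch below illustrates only the optimization step: elapsed time is treated as the sum of computation, communication and idle time as functions of the degree of speculation, and the minimizing degree is found by search. The three cost curves are invented placeholders (the paper derives them analytically); their only purpose is to show that the minimum can lie at an interior degree of speculation rather than at zero rollbacks.

```cpp
// Sketch of the optimization step only: the cost curves are invented placeholders,
// not the paper's analytic expressions.
#include <iostream>

// Degree of speculation s in [0, 1]: 0 = fully throttled, 1 = unconstrained Time Warp.
double computation(double s)   { return 100.0 * (1.0 + 2.0 * s * s); } // wasted re-execution grows with s
double communication(double s) { return 40.0 * (1.0 + s); }            // more rollbacks -> more anti-messages
double idle(double s)          { return 150.0 * (1.0 - s); }           // throttled processors sit idle

int main() {
    double best_s = 0.0, best_t = 1e300;
    for (double s = 0.0; s <= 1.0; s += 0.01) {
        double t = computation(s) + communication(s) + idle(s);
        if (t < best_t) { best_t = t; best_s = s; }
    }
    std::cout << "best degree of speculation: " << best_s
              << ", predicted elapsed time: " << best_t << "\n";
}
```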
annual simulation symposium | 2000
Hong Wang; Yong Meng Teo; Seng Chuan Tay
The ability to predict the performance of a simulation application before its implementation is an important factor for the adoption of parallel simulation technology in industry. Ideally, a simulationist estimates the inherent parallelism of a simulation problem to determine whether it is worthwhile to invest resources in a parallel simulation. We propose an analytic method for predicting the simulation parallelism of a simulation problem that is independent of implementation details. We assume that the system to be simulated is modelled as a network of logical processes, and that each logical process models a queueing service center. Unlike many analytic models reported in the literature, we consider the causal relations among events in a simulation; causality effects reduce event parallelism. Our proposed analytic method gives a tighter upper bound on performance speedup. Validation experiments show that our analytic prediction of simulation parallelism differs from that of critical path analysis by 2.9% and 18.8% in open and closed systems, respectively.
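A toy version of the causality argument: in a closed system a job can occupy only one service center at a time, so the number of events that can be processed concurrently is capped by min(servers, jobs), which is lower than an estimate that treats every center as always busy. This is only an illustration of why causal relations tighten the bound, not the paper's derivation.

```cpp
// Toy illustration, not the paper's analytic method: causality (a job can only be at
// one service center at a time) caps concurrent events in a closed queueing network.
#include <algorithm>
#include <iostream>

double causality_bound(int servers, int jobs) {
    return std::min(servers, jobs);          // at most this many centers busy at once
}

int main() {
    int servers = 16, jobs = 4;
    std::cout << "optimistic bound (all centers busy): " << servers << "\n"
              << "causality-aware bound:               "
              << causality_bound(servers, jobs) << "\n";
}
```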