Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Radharamanan Radhakrishnan is active.

Publication


Featured research published by Radharamanan Radhakrishnan.


Lecture Notes in Computer Science | 1998

An Object-Oriented Time Warp Simulation Kernel

Radharamanan Radhakrishnan; Dale E. Martin; Malolan Chetlur; Dhananjai Madhava Rao; Philip A. Wilsey

The design of a Time Warp simulation kernel is made difficult by the inherent complexity of the paradigm. Hence it becomes critical that the design of such complex simulation kernels follow established design principles such as object-oriented design so that the implementation is simple to modify and extend. In this paper, we present a compendium of our efforts in the design and development of an object-oriented Time Warp simulation kernel, called warped. warped is a publicly available Time Warp simulation kernel for experimentation and application development. The kernel defines a standard interface to the application developer and is designed to provide a highly configurable environment for the integration of Time Warp optimizations. It is written in C++, uses the MPI message passing standard for communication, and executes on a variety of platforms including a network of SUN workstations, a SUN SMP workstation, the IBM SP1/SP2 multiprocessors, the Cray T3E, the Intel Paragon, and IBM-compatible PCs running Linux.
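
As a rough illustration of the object-oriented design described above, the following C++ sketch shows how a Time Warp kernel might expose a simulation-object base class to the application developer. The names (Event, SimulationObject, executeProcess) are illustrative assumptions and are not taken from the warped distribution.

    // Minimal sketch of an object-oriented Time Warp application interface.
    // All names here are illustrative; this is not the warped API.
    #include <iostream>
    #include <string>

    struct Event {
        double recvTime;      // virtual time at which the event takes effect
        std::string payload;  // application-defined contents
    };

    // Applications derive from this base class; the kernel would own event
    // scheduling, state saving, and rollback behind this interface.
    class SimulationObject {
    public:
        virtual ~SimulationObject() = default;
        virtual void initialize() {}
        virtual void executeProcess(const Event& e) = 0;  // handle one event
        virtual void finalize() {}
    };

    class Counter : public SimulationObject {
        int count_ = 0;
    public:
        void executeProcess(const Event& e) override {
            ++count_;
            std::cout << "t=" << e.recvTime << " count=" << count_ << '\n';
        }
    };

    int main() {
        Counter c;
        c.initialize();
        c.executeProcess({1.0, "arrival"});
        c.finalize();
    }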


Winter Simulation Conference | 1998

Unsynchronized parallel discrete event simulation

Dhananjai Madhava Rao; Narayanan V. Thondugulam; Radharamanan Radhakrishnan; Philip A. Wilsey

Distributed synchronization for parallel simulation is generally classified as being either optimistic or conservative. While considerable investigations have been conducted to analyze and optimize each of these synchronization strategies, very little study on the definition and strictness of causality has been conducted. Do we really need to preserve causality in all types of simulations? The paper attempts to answer this question. We argue that significant performance gains can be made by reconsidering this definition to decide if the parallel simulation needs to preserve causality. We investigate the feasibility of unsynchronized parallel simulation through the use of several queuing model simulations and present a comparative analysis between unsynchronized and Time Warp simulation.
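
The notion of relaxing causality can be illustrated with a toy example: an unsynchronized LP simply consumes events in the order they arrive, with no rollback, and accepts that timestamps may be processed out of order as long as the aggregate statistics of interest remain acceptable. The sketch below only illustrates that idea; it is not the simulator used in the paper.

    // Toy illustration of relaxed causality: events are consumed in arrival
    // order, with no rollback, even if their timestamps are out of order.
    #include <iostream>
    #include <vector>

    struct Event { double timestamp; int jobs; };

    int main() {
        // Arrival order differs from timestamp order; an unsynchronized LP
        // processes events as they come and accepts the causal violation.
        std::vector<Event> inbox = {{2.0, 1}, {1.0, 3}, {3.0, 2}};
        int totalJobs = 0;
        double lastTs = 0.0;
        for (const Event& e : inbox) {
            if (e.timestamp < lastTs)
                std::cout << "causality violated at t=" << e.timestamp << '\n';
            lastTs = e.timestamp;
            totalJobs += e.jobs;      // aggregate statistic still accumulates
        }
        std::cout << "jobs processed: " << totalJobs << '\n';
    }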


Workshop on Parallel and Distributed Simulation | 1998

Optimizing communication in time-warp simulators

Malolan Chetlur; Nael B. Abu-Ghazaleh; Radharamanan Radhakrishnan; Philip A. Wilsey

In message passing environments, the message send time is dominated by overheads that are relatively independent of the message size. Therefore, fine grained applications (such as Time Warp simulators) suffer high overheads because of frequent communication. We investigate the optimization of the communication subsystem of Time Warp simulators using dynamic message aggregation. Under this scheme, Time Warp messages with the same destination LP that occur in close temporal proximity are dynamically aggregated and sent as a single physical message. Several aggregation strategies that attempt to minimize the communication overhead without harming the progress of the simulation (because of messages being delayed) are developed. The performance of the strategies is evaluated for a network of workstations and an SMP, using a number of applications that have different communication behavior.
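
A minimal sketch of the aggregation idea, buffering messages per destination LP and flushing a full buffer as one physical send, is given below. The batch threshold and class names are assumptions for illustration, not the strategies evaluated in the paper (which also bound how long a message may be delayed).

    // Sketch of dynamic message aggregation: messages to the same destination
    // LP are buffered and flushed as one physical send once the buffer fills.
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    struct Message { int destLP; std::string body; };

    class Aggregator {
        std::map<int, std::vector<Message>> buffers_;  // per-destination buffers
        std::size_t maxBatch_;
    public:
        explicit Aggregator(std::size_t maxBatch) : maxBatch_(maxBatch) {}

        void send(const Message& m) {
            auto& buf = buffers_[m.destLP];
            buf.push_back(m);
            if (buf.size() >= maxBatch_) flush(m.destLP);  // size-triggered flush
        }

        void flush(int dest) {               // stand-in for one physical send
            auto& buf = buffers_[dest];
            std::cout << "physical send to LP " << dest
                      << " carrying " << buf.size() << " messages\n";
            buf.clear();
        }

        void flushAll() {                    // e.g. at an age or idle deadline
            for (auto& [dest, buf] : buffers_)
                if (!buf.empty()) flush(dest);
        }
    };

    int main() {
        Aggregator agg(3);
        agg.send({1, "a"}); agg.send({1, "b"}); agg.send({2, "c"});
        agg.send({1, "d"});          // third message to LP 1 triggers a flush
        agg.flushAll();              // remaining buffers flushed explicitly
    }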


Journal of Parallel and Distributed Computing | 2002

Analysis and Simulation of Mixed-Technology VLSI Systems

Dale E. Martin; Radharamanan Radhakrishnan; Dhananjai Madhava Rao; Malolan Chetlur; Krishnan Subramani; Philip A. Wilsey

Circuit simulation has proven to be one of the most important computer aided design (CAD) methods for the verification and analysis of integrated circuit designs. A popular approach to modeling circuits for simulation purposes is to use a hardware description language such as VHDL. VHDL has had a tremendous impact in fostering and accelerating CAD systems development in the digital arena. Similar efforts have also been carried out in the analog domain, which have resulted in tools such as SPICE. However, with the growing trend of hardware designs that contain both analog and digital components, comprehensive design environments that seamlessly integrate analog and digital circuitry are needed. Simulation of digital or analog circuits is, however, hampered by high resource (CPU and memory) demands that increase when analog and digital models are integrated in a mixed-mode (analog and digital) simulation. A cost-effective solution to this problem is the application of parallel discrete-event simulation (PDES) algorithms on a distributed memory platform such as a cluster of workstations. In this paper, we detail our efforts in architecting an analysis and simulation environment for mixed-technology VLSI systems. In addition, we describe the design issues faced in the application of PDES algorithms to mixed-technology VLSI system simulation.


International Parallel Processing Symposium | 1997

External adjustment of runtime parameters in Time Warp synchronized parallel simulators

Radharamanan Radhakrishnan; Lantz Moore; Philip A. Wilsey

Several optimizations to the Time Warp synchronization protocol for parallel discrete event simulation have been proposed and studied. Many of these optimizations have included some form of dynamic adjustment (or control) of the operating parameters of the simulation (e.g. checkpoint interval, cancellation strategy). Traditionally, dynamic parameter adjustment has been performed at the simulation object level; each simulation object collects measures of its operating behavior (e.g. rollback frequency, rollback length, etc.) and uses them to adjust its operating parameters. The performance data collection functions and parameter adjustment are overhead costs that are incurred in the expectation of higher throughput. The paper presents a method of eliminating some of these overheads through the use of an external object to adjust the control parameters. That is, instead of inserting code for adjusting simulation parameters in the simulation object, an external control object is defined to periodically analyze each simulation object's performance data and revise that object's operating parameters. An implementation of an external control object in the WARPED Time Warp simulation kernel has been completed. The simulation parameters updated by the implemented control system are the checkpoint interval and the cancellation strategy (lazy or aggressive). A comparative analysis of three test cases shows that the external control mechanism provides speedups of 5-17% over the best performing embedded dynamic adjustment algorithms.
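
A sketch of the external-control idea follows: a separate control object periodically reads a simulation object's performance data and rewrites its checkpoint interval and cancellation strategy. The thresholds and adjustment rules below are placeholder assumptions, not the heuristics measured in the paper.

    // Sketch of an external control object that periodically inspects a
    // simulation object's rollback statistics and revises its parameters.
    #include <algorithm>
    #include <iostream>

    enum class Cancellation { Aggressive, Lazy };

    struct ObjectStats {              // collected by the kernel per object
        int rollbacks = 0;
        int eventsExecuted = 0;
    };

    struct ObjectParams {             // parameters revised by the controller
        int checkpointInterval = 10;
        Cancellation cancellation = Cancellation::Aggressive;
    };

    class ControlObject {
    public:
        void adjust(const ObjectStats& s, ObjectParams& p) const {
            double rollbackRate =
                s.eventsExecuted ? double(s.rollbacks) / s.eventsExecuted : 0.0;
            if (rollbackRate > 0.1) {
                // Frequent rollbacks: checkpoint more often, cancel lazily.
                p.checkpointInterval = std::max(1, p.checkpointInterval / 2);
                p.cancellation = Cancellation::Lazy;
            } else {
                // Stable object: checkpoint less often, cancel aggressively.
                p.checkpointInterval = std::min(64, p.checkpointInterval * 2);
                p.cancellation = Cancellation::Aggressive;
            }
        }
    };

    int main() {
        ControlObject controller;
        ObjectStats stats;
        stats.rollbacks = 25;
        stats.eventsExecuted = 100;   // 25% rollback rate
        ObjectParams params;
        controller.adjust(stats, params);
        std::cout << "checkpoint interval: " << params.checkpointInterval
                  << ", cancellation: "
                  << (params.cancellation == Cancellation::Lazy ? "lazy" : "aggressive")
                  << '\n';
    }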


Annual Simulation Symposium | 1996

A comparative analysis of various Time Warp algorithms implemented in the WARPED simulation kernel

Radharamanan Radhakrishnan; Timothy J. McBrayer; Krishnan Subramani; Malolan Chetlur; Vijay Balakrishnan; Philip A. Wilsey

The Time Warp mechanism conceptually has the potential to speed up discrete event simulations on parallel platforms. However, practical implementations of the optimistic mechanism have been hindered by several drawbacks, such as large memory usage, excessive rollbacks (instability), and wasted lookahead computation. Several optimizations and variations to the original Time Warp algorithm have been presented in the literature to optimistically synchronize parallel discrete event simulation. This paper uses a common simulation environment to present comparative performance results of several Time Warp optimizations in two different application domains, namely queuing model simulation and digital system simulation. The particular optimizations considered are: lowest-timestamp-first (LTSF) scheduling, periodic (fixed period) checkpointing, dynamic checkpointing, lazy cancellation, and dynamic cancellation.
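
Of the optimizations listed, lowest-timestamp-first scheduling is the simplest to sketch: among the LPs with pending events, the one whose next event has the smallest timestamp is executed next. The fragment below is a generic illustration under that definition, not the WARPED scheduler itself.

    // Sketch of lowest-timestamp-first (LTSF) scheduling: pick the LP whose
    // earliest pending event has the smallest timestamp. Illustrative only.
    #include <functional>
    #include <iostream>
    #include <queue>
    #include <vector>

    struct LP {
        int id;
        // min-heap of pending event timestamps
        std::priority_queue<double, std::vector<double>, std::greater<double>> pending;
    };

    int selectNextLP(const std::vector<LP>& lps) {
        int best = -1;
        double bestTs = 0.0;
        for (const LP& lp : lps) {
            if (lp.pending.empty()) continue;
            double ts = lp.pending.top();            // earliest pending event
            if (best == -1 || ts < bestTs) { best = lp.id; bestTs = ts; }
        }
        return best;                                 // -1 means nothing to run
    }

    int main() {
        std::vector<LP> lps(2);
        lps[0].id = 0; lps[0].pending.push(5.0); lps[0].pending.push(9.0);
        lps[1].id = 1; lps[1].pending.push(3.0);
        std::cout << "schedule LP " << selectNextLP(lps) << " next\n";  // LP 1
    }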


IEEE Transactions on Software Engineering | 2002

A formal specification and verification framework for Time Warp-based parallel simulation

Peter Frey; Radharamanan Radhakrishnan; Harold W. Carter; Philip A. Wilsey; Perry Alexander

The paper describes a formal framework developed using the Prototype Verification System (PVS) to model and verify distributed simulation kernels based on the Time Warp paradigm. The intent is to provide a common formal base from which domain-specific simulators can be modeled, verified, and developed. PVS constructs are developed to represent basic Time Warp constructs. Correctness conditions for Time Warp simulation are identified, describing causal ordering of event processing and correct rollback processing. The PVS theorem prover and type-correctness condition system are then used to verify all correctness conditions. In addition, the paper discusses the framework's reusability and extensibility properties in support of specification and verification of Time Warp extensions and optimizations.
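
The PVS formalization itself is not reproduced here. As a rough paraphrase, one correctness condition, causal ordering of event processing, says the committed events of each logical process must appear in nondecreasing timestamp order. The C++ check below is only an informal illustration of that condition under this assumption.

    // Informal restatement of one Time Warp correctness condition: committed
    // events of a logical process appear in nondecreasing timestamp order.
    // A runtime check for illustration only, not the PVS formalization.
    #include <iostream>
    #include <vector>

    bool causallyOrdered(const std::vector<double>& committedTimestamps) {
        for (std::size_t i = 1; i < committedTimestamps.size(); ++i)
            if (committedTimestamps[i] < committedTimestamps[i - 1])
                return false;        // a later commit ran backwards in time
        return true;
    }

    int main() {
        std::cout << causallyOrdered({1.0, 2.0, 2.0, 5.0}) << '\n';  // 1
        std::cout << causallyOrdered({1.0, 3.0, 2.0}) << '\n';       // 0
    }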


Workshop on Parallel and Distributed Simulation | 1999

Time Warp simulation on clumps

Girindra D. Sharma; Radharamanan Radhakrishnan; Umesh Kumar V. Rajasekaran; Nael B. Abu-Ghazaleh; Philip A. Wilsey

Traditionally, parallel discrete-event simulators based on the Time Warp synchronization protocol have been implemented using either the shared memory programming model or the distributed memory, message passing programming model. This was because the preferred hardware platform was either a shared memory multiprocessor workstation or a network of uniprocessor workstations. However, with the advent of clumps (clusters of shared memory multiprocessors), a change in this dichotomous view becomes necessary. We explore the design and implementation issues involved in exploiting this new platform for Time Warp simulations. Specifically, we present two generic strategies for implementing Time Warp simulators on clumps. In addition, we present our experiences in implementing these strategies on an extant distributed memory, message passing Time Warp simulator (WARPED). Preliminary performance results comparing the modified clump-specific simulation kernel to the unmodified distributed memory, message passing simulation kernel are also presented.
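
One of the design issues on clumps can be illustrated with a toy routing decision: a message between LPs on the same SMP node can travel through a shared-memory queue, while a message between nodes goes through the message-passing layer. The node map and function names below are assumptions for illustration; the paper's two strategies are not reproduced here.

    // Toy routing decision for a clump: same-node traffic can use shared
    // memory, cross-node traffic uses the message-passing layer.
    #include <iostream>
    #include <vector>

    struct Route { bool sharedMemory; int destNode; };

    Route route(int srcLP, int dstLP, const std::vector<int>& nodeOfLP) {
        bool sameNode = nodeOfLP[srcLP] == nodeOfLP[dstLP];
        return {sameNode, nodeOfLP[dstLP]};
    }

    int main() {
        std::vector<int> nodeOfLP = {0, 0, 1, 1};  // LPs 0,1 on node 0; 2,3 on node 1
        Route r1 = route(0, 1, nodeOfLP);
        Route r2 = route(0, 3, nodeOfLP);
        std::cout << "LP0->LP1 via " << (r1.sharedMemory ? "shared memory" : "message passing") << '\n';
        std::cout << "LP0->LP3 via " << (r2.sharedMemory ? "shared memory" : "message passing") << '\n';
    }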


ACM Transactions on Modeling and Computer Simulation | 2000

Web-based network analysis and design

Dhananjai Madhava Rao; Radharamanan Radhakrishnan; Philip A. Wilsey

The gradual acceptance of high-performance networks as a fundamental component of today's computing environment has allowed applications to evolve from static entities located on specific hosts to dynamic, distributed entities that are resident on one or more hosts. In addition, vital components of software and data used by an application may be distributed across the local/wide area network. Given such a fluid and dynamic environment, the design and analysis of high-performance communication networks (using off-the-shelf components offered by third-party manufacturers) has been further complicated by the diversity of the available components. To alleviate these problems and to address the verification and validation issues involved in engineering such complex networks, a web-based framework for the design and analysis of computer networks was developed. Using the framework, a designer can explore design alternatives by constructing and analyzing configurations of the design using components offered by different researchers and manufacturers. The framework provides a flexible and robust environment for selecting and verifying the optimal solution from a large and complex solution space. This paper presents the issues involved in the design and development of the framework.


VLSI Design | 1999

Dynamic Cancellation: Selecting Time Warp Cancellation Strategies at Runtime

Raghunandan Rajan; Radharamanan Radhakrishnan; Philip A. Wilsey

The performance of Time Warp parallel discrete event simulators can be affected by the cancellation strategy used to send anti-messages. Under aggressive cancellation, anti-message generation occurs immediately after a straggler message is detected. In contrast, lazy cancellation delays the sending of anti-messages until forward processing from a straggler message confirms that the premature computation did indeed generate an incorrect message. Previous studies have shown that neither approach is clearly superior to the other in all cases (even within the same application domain). Furthermore, no strategy exists to make an a priori determination of the more favorable cancellation strategy. Most existing Time Warp systems merely provide a switch for the user to select the cancellation strategy employed. This paper explores the use of simulation-time decision procedures to select cancellation strategies. The approach is termed Dynamic Cancellation, and it assigns the capability for selecting cancellation strategies to the Logical Processes (LPs) in a Time Warp simulation. Thus, within a single parallel simulation, both strategies may be employed by distinct LPs and even across the simulation lifetime of an LP. Empirical analysis using several control strategies shows that dynamic cancellation always performs on par with the best static strategy and, in some cases, provides a nominal (5–10%) performance gain over the best static strategy.
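
As a sketch of the per-LP decision, the fragment below switches an LP between lazy and aggressive cancellation based on how often re-execution after a rollback regenerated the same messages. This particular metric and threshold are hypothetical, introduced only to illustrate runtime selection; they are not the control strategies evaluated in the paper.

    // Sketch of dynamic cancellation: each LP chooses between lazy and
    // aggressive cancellation at runtime using a placeholder heuristic.
    #include <iostream>

    enum class Cancellation { Aggressive, Lazy };

    class CancellationSelector {
        int regenerated_ = 0;   // rolled-back messages reproduced identically
        int rolledBack_ = 0;    // all rolled-back messages
    public:
        void record(bool reproducedIdentically) {
            ++rolledBack_;
            if (reproducedIdentically) ++regenerated_;
        }
        Cancellation current() const {
            if (rolledBack_ == 0) return Cancellation::Aggressive;
            double hitRate = double(regenerated_) / rolledBack_;
            // If re-execution usually regenerates the same messages, the
            // anti-messages were unnecessary, so prefer lazy cancellation.
            return hitRate > 0.5 ? Cancellation::Lazy : Cancellation::Aggressive;
        }
    };

    int main() {
        CancellationSelector s;
        s.record(true); s.record(true); s.record(false);
        std::cout << (s.current() == Cancellation::Lazy ? "lazy" : "aggressive") << '\n';
    }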

Collaboration


Dive into Radharamanan Radhakrishnan's collaboration.

Top Co-Authors

Peter Frey

Cadence Design Systems

Dale E. Martin

University of Cincinnati
