Dhananjai Madhava Rao
Miami University
Publications
Featured research published by Dhananjai Madhava Rao.
Lecture Notes in Computer Science | 1998
Radharamanan Radhakrishnan; Dale E. Martin; Malolan Chetlur; Dhananjai Madhava Rao; Philip A. Wilsey
The design of a Time Warp simulation kernel is made difficult by the inherent complexity of the paradigm. Hence it becomes critical that the design of such complex simulation kernels follow established design principles such as object-oriented design so that the implementation is simple to modify and extend. In this paper, we present a compendium of our efforts in the design and development of an object-oriented Time Warp simulation kernel, called warped. warped is a publicly available Time Warp simulation kernel for experimentation and application development. The kernel defines a standard interface to the application developer and is designed to provide a highly configurable environment for the integration of Time Warp optimizations. It is written in C++, uses the MPI message passing standard for communication, and executes on a variety of platforms including a network of SUN workstations, a SUN SMP workstation, the IBM SP1/SP2 multiprocessors, the Cray T3E, the Intel Paragon, and IBM-compatible PCs running Linux.
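The abstract describes a standard application interface behind which the kernel hides Time Warp machinery such as state saving and rollback. As a rough, hypothetical sketch of what such an object-oriented interface can look like (class and method names here are invented for illustration, not warped's actual API):

```cpp
// Hypothetical sketch of an object-oriented Time Warp application
// interface; names are invented, NOT the real warped API.
#include <iostream>
#include <string>

// An event carries a timestamp and a destination object name.
struct Event {
    double recvTime;          // virtual time at which the event takes effect
    std::string destination;  // name of the receiving simulation object
};

// Applications specialize a simulation-object base class; the kernel
// owns scheduling, rollback, and state saving behind this interface.
class SimulationObject {
public:
    virtual ~SimulationObject() = default;
    virtual void initialize() = 0;                // called once before simulation
    virtual void executeEvent(const Event&) = 0;  // process one event
    virtual void finalize() = 0;                  // called when simulation ends
};

// A trivial application object: it just counts the events it handles.
class Counter : public SimulationObject {
    int handled = 0;
public:
    void initialize() override { handled = 0; }
    void executeEvent(const Event& e) override {
        ++handled;
        std::cout << "event at t=" << e.recvTime << "\n";
    }
    void finalize() override {
        std::cout << "handled " << handled << " events\n";
    }
};

int main() {
    // In a real kernel these calls happen inside the Time Warp scheduler,
    // interleaved with state saving and possible rollbacks.
    Counter c;
    c.initialize();
    c.executeEvent({1.0, "counter"});
    c.executeEvent({2.5, "counter"});
    c.finalize();
}
```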
Winter Simulation Conference | 1998
Dhananjai Madhava Rao; Narayanan V. Thondugulam; Radharamanan Radhakrishnan; Philip A. Wilsey
Distributed synchronization for parallel simulation is generally classified as being either optimistic or conservative. While considerable investigations have been conducted to analyze and optimize each of these synchronization strategies, very little study of the definition and strictness of causality has been conducted. Do we really need to preserve causality in all types of simulations? The paper attempts to answer this question. We argue that significant performance gains can be made by reconsidering this definition to decide if the parallel simulation needs to preserve causality. We investigate the feasibility of unsynchronized parallel simulation through the use of several queuing model simulations and present a comparative analysis between unsynchronized and Time Warp simulation.
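The contrast at the heart of the paper can be sketched at a single logical process: a synchronized simulator releases events in timestamp order, while an unsynchronized one simply processes them in arrival order, accepting possible causality violations. A minimal illustration, with invented data structures:

```cpp
// Hypothetical sketch contrasting causality-preserving and
// unsynchronized event handling at one logical process (LP).
#include <iostream>
#include <queue>
#include <vector>

struct Event { double timestamp; int payload; };

// Causality-preserving: events leave the queue in timestamp order.
struct TimestampOrder {
    bool operator()(const Event& a, const Event& b) const {
        return a.timestamp > b.timestamp;  // min-heap on timestamp
    }
};

int main() {
    std::vector<Event> arrivals = {{3.0, 1}, {1.0, 2}, {2.0, 3}};

    // Synchronized LP: order by virtual time before processing.
    std::priority_queue<Event, std::vector<Event>, TimestampOrder> pq;
    for (const Event& e : arrivals) pq.push(e);
    std::cout << "causal order: ";
    while (!pq.empty()) { std::cout << pq.top().timestamp << " "; pq.pop(); }
    std::cout << "\n";

    // Unsynchronized LP: process events as they arrive, accepting that
    // some may be handled "in the past" of others.
    std::cout << "arrival order: ";
    for (const Event& e : arrivals) std::cout << e.timestamp << " ";
    std::cout << "\n";
}
```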
Environmental Modelling and Software | 2009
Dhananjai Madhava Rao; Alexander Chernyakhovsky; Victoria Rao
The World Health Organization has activated a global preparedness plan to improve response to avian influenza outbreaks, control outbreaks, and avoid an H5N1 pandemic. The effectiveness of the plan will greatly benefit from identification of epicenters and temporal analysis of outbreaks. Accordingly, we have developed a simulation-based methodology to analyze the spread of H5N1 using stochastic interactions between waterfowl, poultry, and humans. We have incorporated our methodology into a user-friendly, extensible software environment called SEARUMS. SEARUMS is an acronym for Studying the Epidemiology of Avian Influenza Rapidly Using Modeling and Simulation. It enables rapid scenario analysis to identify epicenters and timelines of H5N1 outbreaks using existing statistical data. The case studies conducted using SEARUMS have yielded results that coincide with several past outbreaks and provide non-intuitive inferences about the global spread of H5N1. This article presents the methodology used for modeling the global epidemiology of avian influenza and discusses its impacts on human and poultry morbidity and mortality. The results obtained from the various case studies and scenario analyses conducted using SEARUMS along with verification experiments are also discussed. The experiments illustrate that SEARUMS has considerable potential to empower researchers, national organizations, and medical response teams with timely knowledge to combat the disease, mitigate its adverse effects, and avert a pandemic.
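As a loose illustration of the kind of stochastic inter-population interaction the methodology builds on, here is a minimal chain-binomial transmission step; the rates and structure are invented for illustration and are far simpler than SEARUMS's actual model:

```cpp
// Minimal sketch of a stochastic cross-species transmission step;
// all rates and population sizes are invented for illustration.
#include <cmath>
#include <iostream>
#include <random>

int main() {
    std::mt19937 rng(42);

    long waterfowlInfected = 100, poultrySusceptible = 10000, poultryInfected = 0;
    const double contactRate = 0.002;  // hypothetical per-day transmission rate

    for (int day = 0; day < 10; ++day) {
        // Each susceptible bird independently escapes infection with
        // probability exp(-rate * infected): a standard chain-binomial step.
        double pInfect = 1.0 - std::exp(-contactRate * waterfowlInfected);
        std::binomial_distribution<long> newCases(poultrySusceptible, pInfect);
        long cases = newCases(rng);
        poultrySusceptible -= cases;
        poultryInfected += cases;
        std::cout << "day " << day << ": " << poultryInfected
                  << " infected poultry\n";
    }
}
```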
Modeling, Analysis and Simulation of Computer and Telecommunication Systems | 2001
Dhananjai Madhava Rao; Philip A. Wilsey
Modeling and simulation of large, high-resolution network models is a time consuming task even when parallel simulation techniques are employed. Processing voluminous, detailed simulation data further increases the complexity of analysis. Consequently, the models (or parts of the models) are abstracted to improve performance of the simulations by trading off model details and fidelity. However, abstraction defeats the purpose of studying high-resolution network models and magnifies the problems of validation. An alternative approach is to dynamically (i.e., during the course of simulation) change the resolution of the model (or parts of the model). In our component-based network modeling and simulation framework (NMSF), we have enabled dynamic changes to the resolution of a model using a novel methodology called dynamic component substitution (DCS). Using DCS, a set of components can be substituted by a functionally equivalent component (or vice versa) to change the resolution (or the level of abstraction) of a network model. DCS improves the overall efficiency of simulations through dynamic tradeoffs between resolution of a model, simulation performance, and analysis overheads. This paper presents an overview of DCS and the issues involved in enabling DCS in NMSF, an optimistically synchronized parallel simulation framework. The experiments conducted to evaluate the effectiveness of DCS are also illustrated. Our studies indicate that DCS provides an effective technique to considerably improve the overall efficiency of network simulations.
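A minimal sketch of the DCS idea, with invented component names rather than NMSF's real interface: a chain of detailed components is replaced at run time by one functionally equivalent component, trading resolution for speed:

```cpp
// Hypothetical sketch of dynamic component substitution (DCS); names
// and the notion of equivalence are invented, not NMSF's actual API.
#include <iostream>
#include <memory>
#include <vector>

class Component {
public:
    virtual ~Component() = default;
    virtual double process(double packet) = 0;  // simulate handling one packet
};

// High-resolution model: each hop adds its own latency.
class Router : public Component {
    double latency;
public:
    explicit Router(double l) : latency(l) {}
    double process(double packet) override { return packet + latency; }
};

// Low-resolution substitute: one component summarizing the whole path.
class AggregatePath : public Component {
    double totalLatency;
public:
    explicit AggregatePath(double t) : totalLatency(t) {}
    double process(double packet) override { return packet + totalLatency; }
};

int main() {
    // Detailed model: three routers in series.
    std::vector<std::unique_ptr<Component>> path;
    path.push_back(std::make_unique<Router>(1.0));
    path.push_back(std::make_unique<Router>(2.0));
    path.push_back(std::make_unique<Router>(0.5));

    double t = 0.0;
    for (auto& c : path) t = c->process(t);
    std::cout << "detailed path latency: " << t << "\n";

    // DCS: substitute the routers with a single equivalent component.
    auto abstracted = std::make_unique<AggregatePath>(3.5);
    std::cout << "substituted latency: " << abstracted->process(0.0) << "\n";
}
```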
Journal of Parallel and Distributed Computing | 2002
Dale E. Martin; Radharamanan Radhakrishnan; Dhananjai Madhava Rao; Malolan Chetlur; Krishnan Subramani; Philip A. Wilsey
Circuit simulation has proven to be one of the most important computer aided design (CAD) methods for verification and analysis of integrated circuit designs. A popular approach to modeling circuits for simulation purposes is to use a hardware description language such as VHDL. VHDL has had a tremendous impact in fostering and accelerating CAD systems development in the digital arena. Similar efforts have also been carried out in the analog domain, resulting in tools such as SPICE. However, with the growing trend of hardware designs that contain both analog and digital components, comprehensive design environments that seamlessly integrate analog and digital circuitry are needed. Simulation of digital or analog circuits is, however, exacerbated by high-resource (CPU and memory) demands that increase when analog and digital models are integrated in a mixed-mode (analog and digital) simulation. A cost-effective solution to this problem is the application of parallel discrete-event simulation (PDES) algorithms on a distributed memory platform such as a cluster of workstations. In this paper, we detail our efforts in architecting an analysis and simulation environment for mixed-technology VLSI systems. In addition, we describe the design issues faced in the application of PDES algorithms to mixed-technology VLSI system simulation.
Journal of Parallel and Distributed Computing | 2002
Dhananjai Madhava Rao; Philip A. Wilsey
Many modern systems involve complex interactions between a large number of diverse entities that constitute these systems. Unfortunately, these large, complex systems frequently defy analysis by conventional analytical methods and their study is generally performed using simulation models. Further aggravating the situation, detailed simulations of large systems will frequently require days, weeks, or even months of computer time and lead to scaled-down studies. These scaled-down studies may be achieved by the creation of smaller, representative models and/or by analysis with short duration simulation exercises. Unfortunately, scaled-down simulation studies will frequently fail to exhibit behaviors of the full-scale system under study. Consequently, better simulation infrastructure is needed to support the analysis of ultra-large-scale models (models containing over one million components). Simulation support for ultra-large-scale simulation models must be achieved using low-cost commodity computer systems. The expense of custom or high-end parallel systems prevents their widespread use. Consequently, we have developed an Ultra-large-Scale Simulation Framework (USSF). This paper presents the issues involved in the design and development of USSF. Parallel simulation techniques are used to enable optimal time versus resource tradeoffs in USSF. The techniques employed in the framework to reduce and regulate the memory requirements of the simulations are described. The API needed for model development is illustrated. The results obtained from the experiments conducted using various system models with two parallel simulation kernels (comparing a conventional approach with USSF) are also presented.
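The paper itself illustrates the API needed for model development; as a hypothetical flavor of what a component-based modeling API can look like (all names here are invented, not USSF's actual interface):

```cpp
// Hypothetical component-based model-development API; names invented,
// not USSF's actual interface.
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

// A model is a set of named components plus directed connections.
class Model {
    std::map<std::string, std::function<void()>> components;
    std::vector<std::pair<std::string, std::string>> links;
public:
    void addComponent(const std::string& name, std::function<void()> behavior) {
        components[name] = std::move(behavior);
    }
    void connect(const std::string& from, const std::string& to) {
        links.emplace_back(from, to);
    }
    void run() {
        // A real kernel would schedule timestamped events; here each
        // component's behavior simply fires once, in name order.
        for (auto& [name, behavior] : components) {
            std::cout << name << " -> ";
            behavior();
        }
        std::cout << links.size() << " link(s) in the model\n";
    }
};

int main() {
    Model m;
    m.addComponent("source", [] { std::cout << "emits\n"; });
    m.addComponent("sink",   [] { std::cout << "consumes\n"; });
    m.connect("source", "sink");
    m.run();
}
```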
Winter Simulation Conference | 2000
Dhananjai Madhava Rao; Philip A. Wilsey
Recent breakthroughs in communication and software engineering have resulted in significant growth of Web-based computing. Web-based techniques have been employed for modeling, simulation and analysis of systems. The models for simulation are usually developed using component-based techniques. In a component-based model, a system is represented as a set of interconnected components. A component is a well-defined software module that is viewed as a black box, i.e., only its interface is of concern and not its implementation. However, the behavior of a component, which is necessary for simulation, could be implemented by different modelers including third party manufacturers. Web-based simulation environments enable effective sharing and reuse of components, thereby minimizing model development overheads. In component-based simulations, one or more components can be substituted during simulation with a functionally equivalent set of components. Such dynamic component substitutions (DCS) provide an effective technique for selectively changing the level of abstraction of a model during simulation. DCS provides a tradeoff between simulation overheads and model details. It can be used to effectively study large systems and accelerate rare event simulations to desired scenarios of interest. DCS may also be used to achieve fault-tolerance in Web-based simulations. This paper presents the ongoing research to design and implement support for DCS in a Web-based Environment for Systems Engineering (WESE).
ACM Transactions on Modeling and Computer Simulation | 2000
Dhananjai Madhava Rao; Radharamanan Radhakrishnan; Philip A. Wilsey
The gradual acceptance of high-performance networks as a fundamental component of today's computing environment has allowed applications to evolve from static entities located on specific hosts to dynamic, distributed entities that are resident on one or more hosts. In addition, vital components of software and data used by an application may be distributed across the local/wide area network. Given such a fluid and dynamic environment, the design and analysis of high-performance communication networks (using off-the-shelf components offered by third party manufacturers) has been further complicated by the diversity of the available components. To alleviate these problems and to address the verification and validation issues involved in engineering such complex networks, a web-based framework for the design and analysis of computer networks was developed. Using the framework, a designer can explore design alternatives by constructing and analyzing configurations of the design using components offered by different researchers and manufacturers. The framework provides a flexible and robust environment for selecting and verifying the optimal solution from a large and complex solution space. This paper presents issues involved in the design and development of the framework.
Winter Simulation Conference | 2008
Dhananjai Madhava Rao; Alexander Chernyakhovsky
SEARUMS is an eco-modeling, bio-simulation, and analysis environment to study the global epidemiology of Avian Influenza. Originally developed in Java, SEARUMS enables comprehensive epidemiological analysis and forecasts epicenters and timelines of epidemics for prophylaxis, thereby helping mitigate disease outbreaks. However, SEARUMS-based simulations were time-consuming due to the size and complexity of the models. In an endeavor to reduce time for simulation, we have redesigned the infrastructure of SEARUMS to operate as a Time Warp synchronized, parallel and distributed simulation. This paper presents our parallelization efforts along with empirical evaluation of various design alternatives that were explored to identify the ideal parallel simulation configuration. Our experiments indicate that the redesigned environment, called SEARUMS++, achieves good scalability and performance, thus meeting a mission-critical objective.
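One of the design alternatives in any such parallelization is how model entities are partitioned across simulation processes. A minimal, illustrative round-robin partitioning follows; this mapping is invented for illustration and is not necessarily the one SEARUMS++ uses:

```cpp
// Illustrative round-robin partitioning of model entities across
// parallel simulation processes (mapping invented for illustration).
#include <iostream>
#include <vector>

int main() {
    const int numProcesses = 4;
    const int numEntities  = 10;  // e.g., waterfowl flocks, poultry farms

    // partition[p] holds the entity ids owned by process p.
    std::vector<std::vector<int>> partition(numProcesses);
    for (int id = 0; id < numEntities; ++id)
        partition[id % numProcesses].push_back(id);

    for (int p = 0; p < numProcesses; ++p) {
        std::cout << "process " << p << ":";
        for (int id : partition[p]) std::cout << " " << id;
        std::cout << "\n";
    }
}
```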
International Parallel and Distributed Processing Symposium | 2001
Dhananjai Madhava Rao; Harold W. Carter; Philip A. Wilsey
Web-based simulations are performed by utilizing the resources of the World Wide Web (WWW) such as proprietary components/models developed by third party modelers/manufacturers and web-based computational infrastructures (or compute servers). Access to such web-based resources, third party resources in particular, is usually circumscribed by a variety of pricing schemes. Therefore, optimal use of resources plays a critical role in minimizing the overall costs of web-based modeling and simulation, which is directly dependent on the size of the model, i.e., the total number of components constituting the model. Consequently, component aggregation and de-aggregation techniques have been developed that can be used to statically (before simulation) as well as dynamically (during simulation) vary the number of components constituting a model. The techniques enable a range of tradeoffs between several modeling and simulation related parameters, thereby optimizing the resource consumption and overall costs. This paper presents a detailed discussion of the component aggregation and de-aggregation techniques along with the issues involved in implementing them in a Web-based Environment for Systems Engineering (WESE). Our studies indicate that these techniques provide an effective means to optimize the overall costs of web-based modeling and simulation.
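A minimal sketch of the aggregation/de-aggregation idea, with invented names and a deliberately simple notion of equivalence: several components are folded into one coarser component to shrink the model, and later expanded back:

```cpp
// Hypothetical sketch of component aggregation and de-aggregation;
// names, structure, and the equivalence criterion are invented.
#include <iostream>
#include <numeric>
#include <vector>

struct Component { double load; };  // some per-component quantity

// Aggregate: one component carrying the combined load of many.
Component aggregate(const std::vector<Component>& parts) {
    double total = std::accumulate(parts.begin(), parts.end(), 0.0,
        [](double s, const Component& c) { return s + c.load; });
    return Component{total};
}

// De-aggregate: split a coarse component back into n equal parts.
std::vector<Component> deaggregate(const Component& whole, int n) {
    return std::vector<Component>(n, Component{whole.load / n});
}

int main() {
    std::vector<Component> detailed = {{1.0}, {2.0}, {3.0}};
    Component coarse = aggregate(detailed);  // 3 components -> 1
    std::cout << "aggregated load: " << coarse.load << "\n";

    auto restored = deaggregate(coarse, 3);  // 1 -> 3 again
    std::cout << "restored components: " << restored.size() << "\n";
}
```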