Network


Latest external collaborations at the country level.

Hotspot


Research topics where Dinesh Rajan is active.

Publication


Featured research published by Dinesh Rajan.


IEEE International Conference on Cloud Computing Technology and Science | 2011

Converting a High Performance Application to an Elastic Cloud Application

Dinesh Rajan; Anthony Canino; Jesús A. Izaguirre; Douglas Thain

Over the past decade, high performance applications have embraced parallel programming and computing models. While parallel computing offers advantages such as good utilization of dedicated hardware resources, it also has several drawbacks, such as poor fault tolerance, limited scalability, and a limited ability to harness available resources at run-time. The advent of cloud computing presents a viable and promising alternative to parallel computing because of the advantages of its distributed computing model. In this work, we establish directives that serve as guidelines for the design, implementation, or selection of a suitable cloud computing framework for building or converting a high performance application to run in the cloud. We show that following these directives leads to an elastic implementation that has better scalability, run-time resource adaptability, fault tolerance, and portability across cloud computing platforms, while requiring minimal effort and intervention from the user. We illustrate this by converting an MPI implementation of replica exchange, a parallel tempering molecular dynamics application, to an elastic cloud application using the Work Queue framework, which adheres to these directives. We observe better scalability and resource adaptability of this elastic application on multiple platforms, including a homogeneous cluster environment (SGE) and heterogeneous cloud computing environments such as Microsoft Azure and Amazon EC2.
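The elastic master/worker pattern described above can be sketched in Python. This is an illustrative stand-in, not the actual Work Queue API; `replica_step` is a hypothetical placeholder for one short replica-exchange segment, and threads stand in for remote workers:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def replica_step(replica_id, temperature):
    # Hypothetical stand-in for one short replica-exchange segment;
    # a real application would invoke the MD code here.
    return replica_id, temperature * 0.99

def run_elastic(tasks, max_retries=3):
    # Master loop: submit independent tasks, harvest results as they
    # complete, and resubmit failures -- workers are treated as
    # disposable, which is what makes the application elastic.
    results = {}
    with ThreadPoolExecutor() as pool:
        pending = {pool.submit(replica_step, rid, temp): (rid, temp, 0)
                   for rid, temp in tasks}
        while pending:
            for fut in as_completed(list(pending)):
                rid, temp, tries = pending.pop(fut)
                try:
                    key, value = fut.result()
                    results[key] = value
                except Exception:
                    if tries + 1 < max_retries:
                        fut2 = pool.submit(replica_step, rid, temp)
                        pending[fut2] = (rid, temp, tries + 1)
    return results
```

Because every task is independent and retried on failure, workers can join or leave the pool at any time without coordination, which is the property the directives aim for.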


Real-Time Technology and Applications Symposium | 2007

Network-Aware Dynamic Voltage and Frequency Scaling

Bren Mochocki; Dinesh Rajan; Xiaobo Sharon Hu; Christian Poellabauer; Kathleen Otten; Thidapat Chantem

Reducing energy consumption is an important consideration in embedded real-time system development. This work examines systems that contain a DVFS-managed CPU executing packet-producing tasks and a DPM-controlled network interface. We introduce a novel approach to minimize energy consumed by the network resource on such a system, through careful selection of voltage and frequency levels on the CPU. Contrary to existing claims that DVFS should not be employed when the CPU is not a significant consumer of energy, we show that our DVFS technique can reduce system energy by as much as 35%, even when the CPU energy consumption is negligible. Furthermore, we motivate the need to balance the CPU and network energy and present two techniques to do so. One is based on off-line analysis and the other is a conservative on-line approach. We then validate the proposed methods using both simulation and an implementation in the Linux kernel.
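The trade-off the abstract describes can be illustrated with a toy energy model (the `cpu_power` function and all parameter values below are hypothetical, not the paper's measurements): the chosen CPU frequency sets the gap between packet releases, and that gap determines whether the DPM-controlled interface can sleep profitably between packets.

```python
def nic_energy(gaps, p_active, p_sleep, wake_cost, breakeven):
    # The NIC drops to sleep only when an idle gap exceeds the
    # break-even time; each subsequent wake-up costs extra energy.
    e = 0.0
    for g in gaps:
        if g > breakeven:
            e += p_sleep * g + wake_cost
        else:
            e += p_active * g
    return e

def best_frequency(freqs, cycles_per_packet, n_packets,
                   cpu_power, **nic_params):
    # Evaluate total CPU + NIC energy at each discrete frequency and
    # keep the minimum; the gap between packet releases depends on f.
    best = None
    for f in freqs:
        gap = cycles_per_packet / f          # seconds between packets
        cpu_e = cpu_power(f) * gap * n_packets
        net_e = nic_energy([gap] * (n_packets - 1), **nic_params)
        total = cpu_e + net_e
        if best is None or total < best[1]:
            best = (f, total)
    return best
```

Depending on the NIC's break-even time, either a slower CPU (long sleepable gaps) or a faster one (short active gaps) can win, which is why frequency selection must be network-aware.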


Embedded and Real-Time Computing Systems and Applications | 2006

Workload-Aware Dual-Speed Dynamic Voltage Scaling

Dinesh Rajan; Russell Zuck; Christian Poellabauer

Dynamic voltage scaling (DVS) is a frequently used technique in mobile and embedded systems, aimed at reducing the energy consumption of mobile processors. In systems with a discrete number of frequency levels, existing dual-speed DVS approaches compute an optimal theoretical CPU speed and approximate it by choosing the two neighboring discrete speed levels. By experimentally comparing the energy savings attained with different frequency combinations on a mobile platform, this work shows that choosing the two neighboring frequency levels does not necessarily yield the highest energy savings. As a result of this observation, this work introduces an online approach to dual-speed DVS that a) formulates a model for speed selection based on the workload characteristics of the current task set, and b) computes a frequency pair that yields the best possible energy savings for a given task set and workload.
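The pair-selection idea can be sketched as a small search over a measured power table (the table values below are hypothetical): with a non-convex measured power profile, a non-neighboring frequency pair can beat the two levels adjacent to the ideal speed, which is the paper's central observation.

```python
from itertools import combinations

def best_speed_pair(power, cycles, deadline):
    # power: measured power draw (W) at each discrete frequency (Hz).
    # Enumerate all frequency pairs (not only the two levels nearest
    # the ideal speed) and pick the time split that finishes the
    # workload exactly at the deadline with the least energy.
    best = None
    for f1, f2 in combinations(sorted(power), 2):
        # Solve t1 + t2 = deadline and f1*t1 + f2*t2 = cycles.
        t2 = (cycles - f1 * deadline) / (f2 - f1)
        t1 = deadline - t2
        if t1 < 0 or t2 < 0:
            continue  # this pair cannot meet the deadline exactly
        energy = power[f1] * t1 + power[f2] * t2
        if best is None or energy < best[0]:
            best = (energy, f1, f2, t1, t2)
    return best
```

For example, with hypothetical measured powers of 1.0 W at 100 MHz, 5.0 W at 200 MHz, and 6.0 W at 300 MHz, a 1.5e8-cycle workload and a 1 s deadline, the non-neighboring pair (100 MHz, 300 MHz) consumes 2.25 J versus 3.0 J for the neighboring pair (100 MHz, 200 MHz).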


Journal of Chemical Information and Modeling | 2014

AWE-WQ: fast-forwarding molecular dynamics using the accelerated weighted ensemble.

Badi’ Abdul-Wahid; Haoyun Feng; Dinesh Rajan; Ronan Costaouec; Eric Darve; Douglas Thain; Jesús A. Izaguirre

A limitation of traditional molecular dynamics (MD) is that reaction rates are difficult to compute. This is due to the rarity of observing transitions between metastable states, since high energy barriers trap the system in these states. Recently, the weighted ensemble (WE) family of methods has emerged; these methods can flexibly and efficiently sample conformational space without becoming trapped and allow calculation of unbiased rates. However, while WE can sample correctly and efficiently, a scalable implementation applicable to interesting biomolecular systems is not available. We provide here a GPLv2 implementation called AWE-WQ of a WE algorithm using the master/worker distributed computing Work Queue (WQ) framework. AWE-WQ is scalable to thousands of nodes and supports dynamic allocation of computer resources, heterogeneous resource usage (such as central processing units (CPUs) and graphics processing units (GPUs) concurrently), seamless heterogeneous cluster usage (i.e., campus grids and cloud providers), and support for arbitrary MD codes such as GROMACS, while ensuring that all statistics are unbiased. We applied AWE-WQ to a 34-residue protein, simulating 1.5 ms over 8 months with a peak aggregate performance of 1000 ns/h. Comparison was done with a 200 μs simulation collected on a GPU over a similar timespan. The folding and unfolding rates were of comparable accuracy.
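The core WE bookkeeping, keeping a fixed number of walkers per bin while conserving the bin's total probability weight, can be sketched as follows. This is a deliberately simplified resampling rule, not the exact AWE algorithm, which uses pairwise split/merge operations:

```python
import random

def resample_bin(walkers, target):
    # walkers: list of (state, weight) pairs in one bin. Resample so
    # the bin holds exactly `target` walkers while its total weight
    # is conserved -- the invariant that keeps rate estimates unbiased.
    total = sum(w for _, w in walkers)
    states = [s for s, _ in walkers]
    weights = [w for _, w in walkers]
    # Simplified scheme: draw `target` walkers proportionally to
    # weight, then give each an equal share of the bin's weight.
    chosen = random.choices(states, weights=weights, k=target)
    share = total / target
    return [(s, share) for s in chosen]
```

Heavily weighted walkers tend to be duplicated (split) and lightly weighted ones dropped (merged), so sampling effort concentrates where probability flows, without biasing the total weight in any bin.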


International Conference on Cluster Computing | 2013

Making Work Queue cluster-friendly for data-intensive scientific applications

Michael Albrecht; Dinesh Rajan; Douglas Thain

Researchers with large-scale data-intensive applications often wish to scale up applications to run on multiple clusters, employing a middleware layer for resource management across clusters. However, at the very largest scales, such middleware is often “unfriendly” to individual clusters, which are usually designed to support communication within the cluster, not outside of it. To address this problem we have modified the Work Queue master-worker application framework to support a hierarchical configuration that more closely matches the physical architecture of existing clusters. Using a synthetic application we explore the properties of the system and evaluate its performance under multiple configurations, with varying worker reliability, network capabilities, and data requirements. We show that by matching the software and hardware architectures more closely we can gain both a modest improvement in runtime and a dramatic reduction in network footprint at the master. We then run a scalable molecular dynamics application (AWE) to examine the impact of hierarchy on performance, cost and efficiency for real scientific applications and see a 96% reduction in network footprint, making it much more palatable to system operators and opening the possibility of increasing the application scale by another order of magnitude or more.


International Conference on e-Science | 2012

Folding proteins at 500 ns/hour with Work Queue

Badi’ Abdul-Wahid; Li Yu; Dinesh Rajan; Haoyun Feng; Eric Darve; Douglas Thain; Jesús A. Izaguirre

Molecular modeling is a field that traditionally has large computational costs. Until recently, most simulation techniques relied on long trajectories, which inherently have poor scalability. A new class of methods is proposed that requires only a large number of short calculations, and for which minimal communication between computer nodes is required. We considered one of the more accurate variants, called Accelerated Weighted Ensemble Dynamics (AWE), for which distributed computing can be made efficient. We implemented AWE using the Work Queue framework for task management and applied it to an all-atom protein model (Fip35 WW domain). We can run with excellent scalability by simultaneously utilizing heterogeneous resources from multiple computing platforms such as clouds (Amazon EC2, Microsoft Azure), dedicated clusters, and grids, on multiple architectures (CPU/GPU, 32/64-bit), and in a dynamic environment in which processes are regularly added or removed from the pool. This has allowed us to achieve an aggregate sampling rate of over 500 ns/hour. As a comparison, a single process typically achieves 0.1 ns/hour.


Wireless Algorithms, Systems, and Applications | 2007

Adaptive Fragmentation for Latency Control and Energy Management in Wireless Real-time Environments

Dinesh Rajan; Christian Poellabauer

Wireless environments are typically characterized by unpredictable and unreliable channel conditions. In such environments, fragmentation of network-bound data is a commonly adopted technique to improve the probability of successful data transmissions and reduce the energy overheads incurred due to re-transmissions. The overall latencies introduced by fragmentation and the subsequent reassembly of fragments are often neglected, even though they significantly affect the real-time guarantees of the participating applications. This work studies the latencies introduced as a result of the fragmentation performed at the link layer (MAC layer in IEEE 802.11) of the source device and their effects on end-to-end delay constraints of mobile applications (e.g., media streaming). Based on the observed effects, this work proposes a feedback-based adaptive approach that chooses an optimal fragment size to (a) satisfy end-to-end delay requirements of the distributed application and (b) minimize the energy consumption of the source device by increasing the probability of successful transmissions, thereby reducing re-transmissions and their associated costs.
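The fragment-size choice can be sketched with a simple channel model (the independent-bit-error model and all parameter values are illustrative assumptions, not the paper's): smaller fragments succeed more often but pay per-fragment header overhead, so an intermediate size minimizes expected airtime.

```python
import math

def expected_cost(total_bits, frag_size, header_bits, ber, rate):
    # Expected airtime (a proxy for transmit energy) to deliver all
    # fragments, assuming independent bit errors and retransmission
    # of any fragment that fails: E[transmissions] = 1 / p_ok.
    n = math.ceil(total_bits / frag_size)
    bits = frag_size + header_bits
    p_ok = (1 - ber) ** bits          # per-fragment success probability
    return n * (bits / rate) / p_ok

def pick_fragment_size(total_bits, sizes, header_bits, ber, rate, deadline):
    # Feedback step: given the currently estimated bit-error rate,
    # choose the fragment size with least expected airtime whose
    # expected delivery time still meets the end-to-end deadline.
    costs = [(expected_cost(total_bits, s, header_bits, ber, rate), s)
             for s in sizes]
    feasible = [(c, s) for c, s in costs if c <= deadline]
    return min(feasible)[1] if feasible else min(sizes)
```

Re-running this selection as the estimated error rate changes gives the feedback-based adaptation: a noisier channel pushes the choice toward smaller fragments, a cleaner one toward larger fragments.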


IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing | 2013

Case Studies in Designing Elastic Applications

Dinesh Rajan; Andrew Thrasher; Badi’ Abdul-Wahid; Jesús A. Izaguirre; Scott J. Emrich; Douglas Thain

Clusters, clouds, and grids offer access to large scale computational resources at low cost. This is especially appealing to scientific applications that require a very large scale to compete in the research space. However, the resources available across these platforms differ significantly in their availability, hardware, environment, performance, cost of use, and more. This requires the use of elastic applications that can adapt to the resources available at run-time, transparently handling heterogeneity and failures. In this paper, we present case studies of several elastic applications built using the Work Queue programming framework. From this experience, we offer six general guidelines for the design and implementation of elastic applications that run on thousands of processors.


Embedded Software | 2008

Wireless channel access reservation for embedded real-time systems

Dinesh Rajan; Christian Poellabauer; Xiaobo Sharon Hu; Liqiang Zhang; Kathleen Otten

Reservation-based channel access has been shown to be effective in providing Quality of Service (QoS) guarantees (e.g., timeliness) in wireless embedded real-time applications such as mobile media streaming and networked embedded control systems. While the QoS scheduling at the central authority (i.e., base station) has received extensive attention recently, the computation of resource requirements at each individual node has been widely ignored. An inappropriate resource requirement may lead to degraded support for real-time traffic and overprovisioning of scarce network resources. This work addresses this issue by presenting a strategy for nodes to determine minimal resource reservations that guarantee the real-time constraints of their network traffic. In addition, this paper examines the relationship between timeliness constraints of the traffic and resource requirements.


International Conference on Mobile and Ubiquitous Systems: Networking and Services | 2007

Cooperative Dynamic Voltage Scaling using Selective Slack Distribution in Distributed Real-Time Systems

Dinesh Rajan; Christian Poellabauer; Andrew Blanford; Bren Mochocki

This work is based on the observation that existing energy management techniques for mobile devices, such as dynamic voltage scaling (DVS), are non-cooperative in the sense that they reduce the energy consumption of a single device, disregarding potential consequences for other constraints (e.g., end-to-end deadlines) and/or other devices (e.g., energy consumption on neighboring devices). This paper argues that energy management in distributed real-time systems has to be end-to-end in nature, requiring a coordinated approach among communicating devices. A cooperative distributed energy management technique (Co-DVS) is proposed that: i) adapts and maintains end-to-end latencies within specified timeliness requirements (deadlines); and ii) enhances energy savings at the nodes with the highest pay-off factors that represent the relative benefits or significance of conserving energy at a node. The proposed technique employs a feedback-based approach to dynamically distribute end-to-end slack among the devices based on their pay-off factors.
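The slack-distribution step can be sketched as follows (the proportional rule and data layout are illustrative; the paper's feedback controller is more involved):

```python
def distribute_slack(deadline, measured_latencies, payoff):
    # Feedback step: remaining slack is the end-to-end deadline minus
    # the per-node latencies measured this round; it is then shared in
    # proportion to each node's pay-off factor, so nodes that benefit
    # most from slowing down (e.g. low battery) get the larger share.
    slack = deadline - sum(measured_latencies.values())
    if slack <= 0:
        return {n: 0.0 for n in payoff}   # no slack: run at full speed
    total = sum(payoff.values())
    return {n: slack * p / total for n, p in payoff.items()}
```

Each node then stretches its local execution by its allotted slack (via DVS), keeping the end-to-end latency within the deadline while directing the energy savings to the nodes where they matter most.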

Collaboration


Dive into Dinesh Rajan's collaborations.

Top Co-Authors

Douglas Thain, University of Notre Dame

Bren Mochocki, University of Notre Dame

Haoyun Feng, University of Notre Dame

Kathleen Otten, University of Notre Dame

Russell Zuck, University of Notre Dame