Janakiram D
Indian Institute of Technology Madras
Publication
Featured research published by Janakiram D.
workshop on middleware for pervasive and ad hoc computing | 2005
A.V.U. Phani Kumar; Adi Mallikarjuna V. Reddy; Janakiram D
With advances in micro-electronics and wireless communication, small miniature devices called sensor nodes can perform various tasks by organizing themselves into wireless sensor networks. In Wireless Sensor Networks (WSN), event detection is a core requirement of most applications. An event can be a simple event or a combination of two or more simple events (a composite event). Detecting and reporting an event desired by the application (user), in spite of the stringent constraints of sensor nodes such as low energy, low bandwidth, and frequent failures, is one of the main challenges in WSN. Collaboration among sensor nodes allows events to be detected with less uncertainty while masking failures. We propose a framework for distributed event detection using collaboration in WSN. The framework consists of two protocols that build a tree using a communication model similar to the publish-subscribe paradigm. This framework is part of the Component Oriented Middleware for Sensor networks (COMiS), in which components are loaded on demand based on application semantics. With collaboration, the goal of the application can be accomplished even when sensors fail or nodes run low on energy.
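The publish-subscribe tree described above can be illustrated with a toy sketch in which leaf nodes publish simple events to a parent that detects the composite event. All class and method names here are invented for illustration; this is not the COMiS protocol itself.

```python
# Hypothetical sketch of collaborative composite-event detection in the
# publish-subscribe style; names and semantics are assumptions, not COMiS code.

class SensorNode:
    """A leaf sensor that publishes simple events to its subscribers (parents)."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.subscribers = []  # parent nodes in the event tree

    def subscribe(self, parent):
        self.subscribers.append(parent)

    def publish(self, event):
        for parent in self.subscribers:
            parent.receive(self.node_id, event)

class CompositeDetector:
    """Fires once all required simple events have been reported by children."""
    def __init__(self, required_events):
        self.required = set(required_events)
        self.seen = set()

    def receive(self, node_id, event):
        if event in self.required:
            self.seen.add(event)

    def composite_detected(self):
        return self.seen == self.required

# A composite event "fire" = high_temp AND smoke, reported by two sensors.
detector = CompositeDetector({"high_temp", "smoke"})
n1, n2 = SensorNode("s1"), SensorNode("s2")
n1.subscribe(detector)
n2.subscribe(detector)
n1.publish("high_temp")
n2.publish("smoke")
```

Because each simple event can come from any child, a failed sensor is masked as long as some other node in the collaboration reports the same simple event.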
International Journal of Sensor Networks | 2009
Adi Mallikarjuna V. Reddy; A.V.U. Phani Kumar; Janakiram D; G. Ashok Kumar
The design of operating systems for Wireless Sensor Networks (WSN) deviates from traditional operating system design due to specific characteristics such as constrained resources, high dynamics, and inaccessible deployment environments. We provide a classification framework that surveys the state of the art in WSN Operating Systems (OS). The purpose of this survey is two-fold: to classify existing operating systems according to important OS features, and to suggest appropriate OSs for different categories of WSN applications by mapping application requirements to OS features. This classification helps in understanding the contrasting differences among existing operating systems and lays a foundation for designing an ideal WSN OS. We also classify existing WSN applications to help application developers choose the appropriate OS based on their requirements. A summary, analysis, and discussion of future research directions in this area are also presented.
international conference on parallel processing | 2006
M.V. Reddy; A.V. Srinivas; T. Gopinath; Janakiram D
The abundant computing resources available on the Internet have made grid computing over the Internet a viable solution to scientific problems. The dynamic nature of the Internet necessitates dynamic reconfigurability of applications to handle failures and varying loads. Most existing grid solutions handle reconfigurability only to a limited extent; in particular, they lack appropriate support for handling the failure of key components, such as coordinators, that are essential to the computational model. We propose a two-layered peer-to-peer middleware, Vishwa, to handle reconfiguration of the application in the face of failures and system load. The two layers, a task management layer and a reconfiguration layer, are used in conjunction by applications to adapt to and mask node failures. We show that our system is able to handle failures of the key components of a computational model. This is demonstrated through case studies of two computational models, namely bag of tasks and connected problems, with an appropriate example for each.
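The bag-of-tasks model mentioned above lends itself to a simple failure-masking sketch: if a worker fails while executing a task, the task is simply re-queued. This is an invented illustration of the general idea, not Vishwa's API or reconfiguration protocol.

```python
# Illustrative bag-of-tasks execution with naive failure masking by
# re-queueing; function and parameter names are assumptions for this sketch.
import random

def run_bag_of_tasks(tasks, workers, fail_prob=0.3, rng=None):
    """Dispatch independent tasks to workers; re-queue a task whose worker
    'fails' (simulated here by a coin flip), masking the failure."""
    rng = rng or random.Random(42)  # fixed seed for a reproducible run
    pending = list(tasks)
    results = {}
    while pending:
        task = pending.pop(0)
        worker = rng.choice(workers)
        if rng.random() < fail_prob:
            pending.append(task)   # worker failed: retry the task later
        else:
            results[task] = worker(task)
    return results

# One worker that squares its input; every task completes despite failures.
workers = [lambda x: x * x]
results = run_bag_of_tasks(range(5), workers)
```

Because bag-of-tasks workloads have no inter-task dependencies, re-queueing is sufficient; a connected problem would additionally need the coordinator's state to be reconstructed, which is the harder case the paper targets.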
systems man and cybernetics | 2009
Vijay Srinivas Agneeswaran; Janakiram D
Data objects have to be replicated in large-scale distributed systems for reasons of fault tolerance, availability, and performance. Furthermore, computations may have to be scheduled on these objects when they are part of a grid computation. Although replication mechanisms for unstructured peer-to-peer (P2P) systems can place replicas on capable nodes, they may not be able to provide deterministic guarantees on searching. Replication mechanisms in structured P2P systems provide deterministic guarantees on searching but do not address node capability in replica placement. We propose Virat, a node-capability-aware P2P middleware for managing replicas in large-scale distributed systems. Virat uses a unique two-layered architecture that builds a structured overlay over an unstructured P2P layer, combining the advantages of both structured and unstructured P2P systems. A detailed performance comparison is made with a replication mechanism realized over OpenDHT, a state-of-the-art structured P2P system. We show that the 99th-percentile response time for Virat does not exceed 600 ms, whereas for OpenDHT it goes beyond 2000 ms in our test bed, created specifically for this comparison.
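The core of capability-aware placement can be sketched as ranking candidate nodes by a capability score and placing replicas on the top-ranked ones. This is a deliberately simplified stand-in for Virat's mechanism; the scoring and selection here are invented.

```python
# Toy capability-aware replica placement: pick the k most capable nodes.
# The capability score (e.g., a combined CPU/bandwidth index) is an assumption.
def place_replicas(nodes, k):
    """nodes: dict mapping node_id -> capability score in [0, 1].
    Returns the ids of the k highest-capability nodes."""
    ranked = sorted(nodes, key=nodes.get, reverse=True)
    return ranked[:k]

nodes = {"n1": 0.9, "n2": 0.4, "n3": 0.7, "n4": 0.2}
replicas = place_replicas(nodes, 2)
```

In a real two-layer design, the unstructured layer would gossip these capability scores while the structured overlay keeps replica lookup deterministic, which is the combination the paper argues for.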
principles and practice of declarative programming | 2004
Rajesh J; Janakiram D
Refactoring in object-orientation has gained increased attention due to its ability to improve design quality. Refactoring using design patterns (DPs) leads to the production of high-quality software systems. Although numerous refactoring-related tools exist, only a few of them apply design patterns in refactoring, and even these do not clearly specify where refactoring can be applied or when to apply appropriate design patterns. In this paper, we propose a tool, JIAD (Java-based Intent-Aspects Detector), which addresses refactoring issues such as the scope for applying DPs in the code and the appropriate selection of DPs. The tool automates the identification of Intent-Aspects (IAs), which helps in applying suitable design patterns while refactoring Java code. By automating the identification of IAs, the whole process of refactoring using DPs can be automated, enabling rapid development of software systems. The tool also minimizes the number of possible errors while inferring the suitable DPs to apply during refactoring. Our approach primarily focuses on Java code refactoring using declarative programming and the AspectJ compiler. Finally, the tool is validated using two applications written in Java, namely JLex and Java2Prolog.
Operating Systems Review | 2005
A. Vijay Srinivas; Janakiram D
Scalability is an important issue in the construction of distributed systems. A number of theoretical and experimental studies have been made on scalability of distributed systems. However, they have been either studies on specific technologies or have studied scalability in isolation. The main conjecture of our work is that scalability must be perceived along with the related issues of availability, synchronization and consistency. In this context, we propose a scalability model which characterizes scalability as being dependent on these factors as well as the workload and faultload. The model is generic and can be used to compare scalability of similar systems. We illustrate this by a comparison between NFS and AFS, two well known distributed file systems. The model is also useful in identifying scalability bottlenecks in distributed systems. We have applied the model to optimize Virat, a wide-area shared object space that we have built.
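The abstract's conjecture, that scalability must be evaluated together with availability, synchronization, and consistency, can be caricatured as a composite score. The formula below is entirely invented for illustration; it is not the paper's model, only a hint at how such factors might be combined to compare similar systems.

```python
# An invented composite scalability score, illustrating the idea of weighing
# throughput against availability and consistency overhead. Not the paper's model.
def scalability_score(throughput_ratio, availability, consistency_cost):
    """Higher is better: throughput gained when scaling out (throughput_ratio),
    discounted by availability in (0, 1], per unit of consistency overhead."""
    if consistency_cost <= 0:
        raise ValueError("consistency_cost must be positive")
    return throughput_ratio * availability / consistency_cost

# Comparing two hypothetical systems under the same workload and faultload:
# system A scales 2x with 50% availability and low consistency cost.
score_a = scalability_score(2.0, 0.5, 0.25)
# system B scales 3x but pays a heavy synchronization/consistency penalty.
score_b = scalability_score(3.0, 0.5, 1.0)
```

Under this toy metric, a system that scales throughput less but synchronizes cheaply can outrank one that scales more, which is the kind of trade-off a model comparing NFS and AFS would need to expose.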
ACM Sigsoft Software Engineering Notes | 2005
Janakiram D; M. S. Rajasree
Estimating the quality of software systems has always been good practice in software engineering. Presently, quality evaluation techniques are applied only as an afterthought to the software design process. However, the quality of a software system should be stated based on the end-users' requirements for quality. Based on this observation, this paper proposes an estimation model called ReQuEst (Requirements-driven Quality Estimator). ReQuEst is an attempt to quantitatively estimate the quality of a system being designed from its analysis model. Quality is estimated in terms of adaptability and extendibility, which are also important parameters in system design. During requirements analysis, evolving requirements are also analyzed to capture a few quality indicators from them. These indicators are used to compute the requirements for the above parameters from the analysis model. Thus, the analyst can quantitatively specify the quality demands of the system to be designed along with the functional requirements. These quality specifications enable the system designer to precisely design systems that meet the specified values. Further, the model can be used to estimate the maintainability of the system in terms of the above parameters.
international conference on cloud computing | 2013
Prateek Dhawalia; Sriram Kailasam; Janakiram D
Skew mitigation has been a major concern in distributed programming frameworks like MapReduce, and it is becoming more prominent with the increasing complexity of user requirements and of the computation involved. We present Chisel, a self-regulating skew detection and mitigation policy for MapReduce applications. The novelty of the approach is that it involves no scanning or sampling of input data to detect skew; hence it incurs low overhead, provides better resource utilization, and maintains output order and file structure. It is also transparent to users and can be used as a plugin whenever required. We use Hadoop to implement our skew handling policies. Chisel implements two skew handling policies. For map operators, it performs late skew detection, i.e., at the last wave of map execution, where skewed maps are selected on the basis of the time remaining to complete; more maps are then created dynamically over the remaining data per block. For reduce operators, it performs early skew detection, i.e., before the shuffle phase starts; this prevents the expensive shuffle and sort phases from delaying skew detection and job completion. Multiple reducers are created per skewed partition, each shuffling data from a subset of the total maps and starting to process it once its portion of the maps has completed, without waiting for all the maps to finish. Therefore, the barrier between the map and reduce phases is no longer a constraint for effective resource utilization. Chisel additionally implements an online job profiler to determine the start point of reduce tasks and modifies the capacity scheduler to distribute reduce tasks evenly in the cluster. Chisel significantly decreases the overall execution time of jobs and increases resource utilization; the improvement depends directly on the availability of resources in the cluster and the skewness in the job.
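The late skew detection step, selecting skewed maps by their remaining time, can be sketched as flagging tasks whose estimated remaining time exceeds a multiple of the average. The threshold factor and all names below are illustrative assumptions, not Chisel's actual implementation.

```python
# Sketch of late skew detection for map tasks: flag tasks whose estimated
# remaining time is well above the average across the current wave.
# The factor-of-the-mean threshold is an assumption made for this example.
def detect_skewed_maps(remaining_times, factor=2.0):
    """remaining_times: dict mapping task_id -> estimated seconds remaining.
    Returns the ids of tasks considered skewed, sorted for determinism."""
    avg = sum(remaining_times.values()) / len(remaining_times)
    return sorted(t for t, r in remaining_times.items() if r > factor * avg)

# Last wave of map execution: m3 is a straggler processing a skewed block.
times = {"m1": 5.0, "m2": 4.0, "m3": 40.0, "m4": 6.0}
skewed = detect_skewed_maps(times)
```

A flagged task's remaining input could then be split across dynamically created maps, which is the mitigation step the abstract describes.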
Journal of Parallel and Distributed Computing | 2005
M. A. Maluk Mohamed; A. Vijay Srinivas; Janakiram D
Advances in cellular communications and the increasing computing power of mobile systems have made it convenient for people to use mobile systems rather than static ones. As a result, mobile devices are increasingly used in personal and distributed computing, making computing power ubiquitous. The combination of wireless communication and cluster computing in many applications has led to the integration of these two technologies into the Mobile Cluster Computing (MCC) paradigm. This has made parallel computing feasible on mobile clusters, by making use of the idle processing power of the static and mobile nodes that form the cluster. To realize such a system for parallel computing, various issues need to be handled, such as connectivity, architectural and operating system heterogeneities, timeliness, load fluctuations on machines, variations in machine availability, and failures of workstations and network connectivity. We propose Moset, an Anonymous Remote Mobile Cluster Computing (ARMCC) paradigm, to handle these issues. Moset provides transparency to the mobility of nodes, the distribution of computing resources, and the heterogeneity of wired and wireless networks. The model has been verified and validated by implementing a distributed image-rendering algorithm over a simulated mobile cluster model.
international conference on distributed computing and internet technology | 2007
Kovendhan Ponnavaikko; Janakiram D
In this paper, we address the problem of building and maintaining dynamic overlay networks on top of physical networks for the autonomous scheduling of divisible load Grid applications. While autonomous scheduling protocols exist to maximize steady-state throughputs for given overlay networks, not much work has been done on building the most efficient overlay. In our work, nodes use the bandwidth-centric principle to select other nodes with which overlay edges must be formed dynamically. The node which has the entire dataset initially (the scheduler) starts forming the overlay and the nodes which receive tasks at rates greater than their task execution rates further expand it. We use simulation studies to illustrate the functioning of our overlay forming mechanism, and its robustness to changes in the characteristics of the system resources.
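The bandwidth-centric principle mentioned above says a node dispatching divisible load should prefer children in order of link bandwidth, regardless of the children's compute speed. A minimal sketch of that selection rule, with all names and data invented for illustration:

```python
# Bandwidth-centric child selection for overlay formation: a node forms
# overlay edges to the candidates it can feed tasks to fastest, i.e., those
# with the highest link bandwidth. Details here are assumptions, not the
# paper's full protocol (which also expands the overlay from saturated nodes).
def select_children(candidates, max_children):
    """candidates: dict mapping node_id -> link bandwidth to this node (MB/s).
    Returns up to max_children ids, best bandwidth first."""
    by_bandwidth = sorted(candidates, key=candidates.get, reverse=True)
    return by_bandwidth[:max_children]

# The scheduler (holder of the dataset) picks its first two overlay children.
candidates = {"a": 12.0, "b": 3.5, "c": 8.0}
children = select_children(candidates, 2)
```

In the paper's scheme, a child that receives tasks faster than it can execute them would in turn run the same selection to expand the overlay further, so this rule applies recursively as load propagates.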