Arshad Jhumka
University of Warwick
Publications
Featured research published by Arshad Jhumka.
Dependable Systems and Networks | 2001
Martin Hiller; Arshad Jhumka; Neeraj Suri
We present a novel approach for analysing the propagation of data errors in software. The concept of error permeability is introduced as a basic measure upon which we define a set of related measures. These measures guide us in the process of analysing the vulnerability of software to find the modules that are most likely exposed to propagating errors. Based on the analysis performed with error permeability and its related measures, we describe how to select suitable locations for error detection mechanisms (EDMs) and error recovery mechanisms (ERMs). A method for experimental estimation of error permeability, based on fault injection, is described, and the software of a real embedded control system is analysed to show the type of results obtainable by the analysis framework. The results show that the developed framework is useful for analysing error propagation and software vulnerability, and for deciding where to place EDMs and ERMs.
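To make the idea concrete, here is a minimal, hypothetical sketch of estimating error permeability by fault injection: errors are injected on each input of a toy module and the fraction that changes the output is taken as that input's permeability. The module, names, and trial counts are invented for illustration and are not from the paper.

```python
# Hypothetical sketch: permeability of an input is estimated as the fraction
# of injected errors on that input that propagate to the module's output.
import random

def module_under_test(x, y):
    # Toy "software module": output depends strongly on x, weakly on y.
    return 3 * x + (y // 16)

def estimate_permeability(n_trials=10_000, seed=0):
    rng = random.Random(seed)
    propagated = {"x": 0, "y": 0}
    for _ in range(n_trials):
        x, y = rng.randrange(256), rng.randrange(256)
        golden = module_under_test(x, y)
        # Inject a single-bit flip on each input signal in turn.
        for name in ("x", "y"):
            bit = 1 << rng.randrange(8)
            fx, fy = (x ^ bit, y) if name == "x" else (x, y ^ bit)
            if module_under_test(fx, fy) != golden:
                propagated[name] += 1
    return {name: count / n_trials for name, count in propagated.items()}

print(estimate_permeability())   # roughly {'x': 1.0, 'y': 0.5} for this toy module
```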
International Symposium on Software Testing and Analysis | 2002
Martin Hiller; Arshad Jhumka; Neeraj Suri
In order to produce reliable software, it is important to have knowledge of how faults and errors may affect it. In particular, designing efficient error detection mechanisms requires knowledge not only of which types of errors to detect but also of the effect these errors may have on the software and how they propagate through it. This paper presents the Propagation Analysis Environment (PROPANE), a tool for profiling and conducting fault injection experiments on software running on desktop computers. PROPANE supports the injection of both software faults (by mutation of source code) and data errors (by manipulating variable and memory contents). PROPANE supports various error types out of the box and has support for user-defined error types. For logging, probes are provided for charting the values of variables and memory areas as well as for registering events during execution of the system under test. PROPANE has a flexible design, making it useful for the development of a wide range of software systems, e.g., embedded software, generic software components, or user-level desktop applications. We show examples of results obtained using PROPANE and how these can guide software developers to where software error detection and recovery could increase the reliability of the software system.
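As a rough illustration of the kind of experiment PROPANE automates, the sketch below injects a single data error (a bit flip) into a variable of a toy control loop and logs its values with a probe so propagation can be traced. The class and function names are invented for the sketch; they are not PROPANE's actual API.

```python
# Illustrative (not PROPANE's real interface): inject a data error at a
# chosen step and compare the probed trace against a golden run.
import random

class Probe:
    def __init__(self, name):
        self.name, self.trace = name, []
    def log(self, value):
        self.trace.append(value)

def bit_flip(value, rng, width=16):
    return value ^ (1 << rng.randrange(width))

def run_system(inject_at=None, rng=None, steps=20):
    probe = Probe("speed")
    speed = 0
    for t in range(steps):
        speed = (speed + 3) % 100          # toy control loop
        if inject_at is not None and t == inject_at:
            speed = bit_flip(speed, rng)   # injected data error
        probe.log(speed)
    return probe

golden = run_system()
faulty = run_system(inject_at=5, rng=random.Random(1))
diverged = [t for t, (g, f) in enumerate(zip(golden.trace, faulty.trace)) if g != f]
print("error visible at steps:", diverged)
```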
IEEE Transactions on Computers | 2004
Martin Hiller; Arshad Jhumka; Neeraj Suri
We present an approach for analyzing the propagation and effect of data errors in modular software, enabling the profiling of the vulnerabilities of software to find 1) the modules and signals most likely exposed to propagating errors and 2) the modules and signals which, when subjected to error, tend to cause more damage than others from a system operation point of view. We discuss how to use the obtained profiles to identify where dependability structures and mechanisms will likely be the most effective, i.e., how to perform a cost-benefit analysis for dependability. A fault-injection-based method for estimation of the various measures is described, and the software of a real embedded control system is profiled to show the type of results obtainable by the analysis framework.
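A hedged sketch of the cost-benefit idea, under the assumption that each signal has already been profiled for exposure (how often propagating errors reach it) and impact (how much damage an error on it causes): rank signals by the product of the two and spend the mechanism budget on the highest-ranked ones. Signal names and numbers are made up.

```python
# Illustrative ranking of candidate locations for detection mechanisms.
profiles = {
    # signal: (exposure, impact) -- invented values for illustration
    "throttle_cmd": (0.8, 0.9),
    "sensor_raw":   (0.6, 0.3),
    "log_counter":  (0.9, 0.05),
}

def rank_for_edm(profiles, budget=2):
    scored = sorted(profiles.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
    return [name for name, _ in scored[:budget]]

print(rank_for_edm(profiles))   # ['throttle_cmd', 'sensor_raw']
```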
Dependable Systems and Networks | 2002
Martin Hiller; Arshad Jhumka; Neeraj Suri
An important aspect in the development of dependable software is to decide where to locate mechanisms for efficient error detection and recovery. We present a comparison between two methods for selecting locations for error detection mechanisms, in this case executable assertions (EAs), in black-box, modular software. Our results show that by placing EAs based on error propagation analysis one may reduce the memory and execution time requirements as compared to experience- and heuristic-based placement while maintaining the obtained detection coverage. Further, we show the sensitivity of the EA-provided coverage estimation to the choice of the underlying error model. Subsequently, we extend the analysis framework such that error-model effects are also addressed and introduce measures for classifying signals according to their effect on system output when errors are present. The extended framework facilitates profiling of software systems from varied dependability perspectives and is also less susceptible to the effects of having different error models for estimating detection coverage.
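For readers unfamiliar with executable assertions, the following is an illustrative sketch (not the assertions used in the study): an EA at a module boundary that checks a signal's plausible value range and maximum rate of change, flagging a detected error without access to the module's internals. Thresholds and the signal are assumptions.

```python
# Illustrative executable assertion on a black-box module's output signal.
class ExecutableAssertion:
    def __init__(self, low, high, max_delta):
        self.low, self.high, self.max_delta = low, high, max_delta
        self.prev = None
    def check(self, value):
        ok = self.low <= value <= self.high
        if ok and self.prev is not None:
            ok = abs(value - self.prev) <= self.max_delta
        self.prev = value
        return ok

ea = ExecutableAssertion(low=0, high=120, max_delta=10)   # e.g. a speed signal
for v in (10, 18, 25, 250, 33):
    print(v, "ok" if ea.check(v) else "ERROR DETECTED")
```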
International Conference on Distributed Computing and Internet Technology | 2007
Arshad Jhumka; Sandeep S. Kulkarni
Several media access control (MAC) protocols proposed for wireless sensor networks assume nodes to be stationary. This can lead to poor network performance, as well as fast depletion of energy, in systems where nodes are mobile. This paper presents several results for TDMA-based MAC protocols in mobile sensor networks and introduces a novel mobility-aware TDMA-based MAC protocol for such networks. The protocol works by first splitting a given round into a control part and a data part. The control part is used to manage mobility, whereas nodes transmit messages in the data part. In the data part, some slots are reserved for mobile nodes. We show that the protocol ensures collision-freedom in the data part of a schedule.
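A simplified model of the round structure described above, with assumed slot counts and a trivial assignment policy (not the paper's exact protocol): the control part handles mobility, static nodes own dedicated data slots, and a few data slots are reserved for mobile nodes, so no data slot ever has two owners.

```python
# Illustrative TDMA round: control slots followed by collision-free data slots.
def build_round(static_nodes, mobile_nodes, control_slots=2, reserved_mobile=2):
    schedule = [("control", None)] * control_slots
    # Static nodes get dedicated data slots: at most one sender per slot,
    # so the data part is collision-free by construction.
    schedule += [("data", n) for n in static_nodes]
    # Reserved data slots that newly arrived mobile nodes can claim.
    mobile = list(mobile_nodes)[:reserved_mobile]
    schedule += [("data", mobile[i] if i < len(mobile) else None)
                 for i in range(reserved_mobile)]
    return schedule

for slot, (kind, owner) in enumerate(build_round(["s1", "s2", "s3"], ["m1"])):
    print(f"slot {slot}: {kind:7s} owner={owner}")
```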
Design, Automation, and Test in Europe | 2005
Arshad Jhumka; Stephan Klaus; Sorin A. Huss
The paper introduces dependability as an optimization criterion in the system-level design process of embedded systems. Given the pervasiveness of embedded systems, especially in the area of highly dependable and safety-critical systems, it is imperative to consider dependability directly in the system-level design process. This naturally leads to a multi-objective optimization problem, as cost and time have to be considered too. The paper proposes a genetic algorithm to solve this multi-objective optimization problem and to determine a set of Pareto optimal design alternatives in a single optimization run. Based on these alternatives, the designer can choose the solution that offers the desired trade-off between cost, schedulability, and dependability.
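The core multi-objective notion can be sketched as follows, with invented designs and objective values: each alternative is scored on cost, schedule length, and (1 - dependability), all to be minimised, and only non-dominated alternatives are kept. The genetic operators (crossover, mutation, selection) proposed in the paper are omitted from this sketch.

```python
# Illustrative Pareto-dominance filter over candidate design alternatives.
def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    return {name: obj for name, obj in designs.items()
            if not any(dominates(other, obj)
                       for o_name, other in designs.items() if o_name != name)}

designs = {                       # (cost, schedule length, 1 - dependability)
    "A": (10, 5.0, 0.02),
    "B": (14, 4.0, 0.02),
    "C": (12, 5.5, 0.03),         # dominated by A
}
print(pareto_front(designs))      # keeps A and B
```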
The Computer Journal | 2011
Arshad Jhumka; Matthew Leeke; Sambid Shrestha
Wireless sensor networks have enabled novel applications such as monitoring, where security is invariably a requirement. One aspect of security, namely source location privacy, is becoming an increasingly important property of some wireless sensor network applications. The fake source technique has been proposed as an efficient technique to handle the source location privacy problem. However, there are several factors that limit the usefulness of current results: (i) the selection of fake sources is dependent on sophisticated nodes, (ii) fake sources are known a priori and (iii) the selection of fake sources is based on a prohibitively expensive pre-configuration phase. In this paper, we investigate the privacy enhancement and energy efficiency of different implementations of the fake source technique that circumvent these limitations. Our results show that the fake source technique is indeed effective in enhancing privacy. Specifically, one implementation achieves near-perfect privacy when there is at least one fake source in the network, at the expense of increased energy consumption. In the presence of multiple attackers, the same implementation yields only a 30% decrease in capture ratio with respect to flooding. To address this problem, we propose a hybrid technique which achieves a corresponding 50% reduction in the capture ratio and near-perfect privacy whenever at least one fake source exists in the network.
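As a purely illustrative sketch of the fake source idea (not one of the implementations evaluated in the paper), nodes sufficiently far from the real source might probabilistically elect themselves as temporary fake sources that broadcast dummy messages, drawing a traffic-analysing attacker away from the real source. The election rule, parameters, and topology below are assumptions.

```python
# Illustrative fake source election based on hop distance from the real source.
import random

def elect_fake_sources(nodes, real_source, sink, min_hops_from_source=3,
                       probability=0.2, seed=42):
    rng = random.Random(seed)
    fakes = []
    for node, hops_to_source in nodes.items():
        if node in (real_source, sink):
            continue
        if hops_to_source >= min_hops_from_source and rng.random() < probability:
            fakes.append(node)      # this node will broadcast dummy messages
    return fakes

# node -> hop distance from the real source (toy topology)
nodes = {"n1": 1, "n2": 2, "n3": 3, "n4": 4, "n5": 5, "sink": 3}
print(elect_fake_sources(nodes, real_source="n0", sink="sink"))
```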
Symposium on Reliable Distributed Systems | 2001
Arshad Jhumka; Martin Hiller; Neeraj Suri
With the functionality of most embedded systems based on software (SW), interactions amongst SW modules arise, resulting in error propagation across them. During SW development, it would be helpful to have a framework that clearly demonstrates the error propagation and containment capabilities of the different SW components. In this paper, we assess the impact of inter-modular error propagation. Adopting a white-box SW approach, we make the following contributions: (a) we study and characterize the error propagation process and derive a set of metrics that quantitatively represents the inter-modular SW interactions, (b) we perform fault-injection experiments on a real embedded target system from an aircraft arrestment system to obtain experimental values for the proposed metrics, (c) we show how the set of metrics can be used to obtain the required analytical framework for error propagation analysis. We find that the derived analytical framework establishes a very close correlation between the analytical and experimental values obtained. The intent is to use this framework to systematically develop SW such that inter-modular error propagation is reduced by design.
Autonomous Agents and Multi-Agent Systems | 2013
Henry P. W. Franks; Nathan Griffiths; Arshad Jhumka
Coordination in open multi-agent systems (MAS) can reduce costs to agents associated with conflicting goals and actions, allowing artificial societies to attain higher levels of aggregate utility. Techniques for increasing coordination typically involve incorporating notions of conventions, namely socially adopted standards of behaviour, at either an agent or system level. As system designers cannot necessarily create high-quality conventions a priori, we require an understanding of how agents can dynamically generate, adopt and adapt conventions during their normal interaction processes. Many open MAS domains, such as peer-to-peer and mobile ad-hoc networks, exhibit properties that restrict the application of the mechanisms that are often used, especially those requiring the incorporation of additional components at an agent or society level. In this paper, we use Influencer Agents (IAs), which we define as agents with strategies and goals chosen to aid the emergence of high-quality conventions, to manipulate convention emergence in domains characterised by heterogeneous ownership and uniform levels of agent authority. Using the language coordination problem (Steels in Artif Life 2(3):319–392, 1995), we evaluate the effect of IAs on convention emergence in a population. We show that relatively low proportions of IAs can (i) effectively manipulate the emergence of high-quality conventions, and (ii) increase convention adoption and quality. We make no assumptions involving agent mechanism design or internal architecture beyond the usual assumption of rationality. Our results demonstrate the fragility of convention emergence in the presence of malicious or faulty agents that attempt to propagate low-quality conventions, and confirm the importance of social network structure in convention adoption.
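A toy simulation of the effect studied here, with assumptions throughout (population size, imitation rule, and proportion of influencers are invented): agents repeatedly play a pairwise coordination game and imitate their partner on miscoordination, while a small fraction of influencer agents never deviate from a designated target convention, pulling the population towards it.

```python
# Illustrative convention-emergence simulation with fixed influencer agents.
import random

def simulate(n_agents=100, n_influencers=5, conventions=("A", "B", "C"),
             target="A", rounds=20_000, seed=1):
    rng = random.Random(seed)
    strategies = [rng.choice(conventions) for _ in range(n_agents)]
    influencers = set(range(n_influencers))       # anchored on the target convention
    for i in influencers:
        strategies[i] = target
    for _ in range(rounds):
        i, j = rng.sample(range(n_agents), 2)
        if strategies[i] != strategies[j]:        # miscoordination: one agent adapts
            if j not in influencers:
                strategies[j] = strategies[i]
            elif i not in influencers:
                strategies[i] = strategies[j]
    return sum(s == target for s in strategies) / n_agents

print(f"adoption of target convention: {simulate():.2%}")
```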
Symposium on Reliable Distributed Systems | 2013
Edward Chuah; Arshad Jhumka; Sai Narasimhamurthy; John Hammond; James C. Browne; Bill Barth
Bursts of abnormally high use of resources are thought to be an indirect cause of failures in large cluster systems, but little work has systematically investigated the role of high resource usage in system failures, largely due to the lack of a comprehensive resource monitoring tool which resolves resource use by job and node. The recently developed TACC_Stats resource use monitor provides the required resource use data. This paper presents the ANCOR diagnostics system, which applies TACC_Stats data to identify resource use anomalies and applies log analysis to link resource use anomalies with system failures. Applying ANCOR to identify multiple sources of resource anomalies on the Ranger supercomputer, correlate them with failures recorded in the message logs, and diagnose the cause of those failures has identified four new causes of compute node soft lockups. ANCOR can be adapted to any system that uses a resource use monitor which resolves resource use by job.
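A rough sketch of the style of analysis ANCOR performs, with assumed thresholds and invented data (not the actual implementation): flag nodes whose resource use is anomalously high relative to the population in a sampling interval, then intersect them with the nodes that have failure entries in the message logs for the same interval.

```python
# Illustrative anomaly flagging and log correlation for one sampling interval.
from statistics import mean, stdev

def anomalous_nodes(usage, k=2.0):
    """usage: node -> resource-use metric for one sampling interval."""
    mu, sigma = mean(usage.values()), stdev(usage.values())
    return {n for n, u in usage.items() if sigma > 0 and u > mu + k * sigma}

def correlate(anomalies, failures):
    """failures: nodes with soft-lockup entries logged in the same interval."""
    return anomalies & failures

usage = {"c401-101": 3.1, "c401-102": 2.9, "c401-103": 3.3, "c401-104": 3.0,
         "c401-105": 2.8, "c401-106": 3.2, "c401-107": 3.0, "c401-108": 9.8}
failures = {"c401-108", "c401-110"}
print(correlate(anomalous_nodes(usage), failures))   # {'c401-108'}
```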