Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Dhiraj K. Pradhan is active.

Publication


Featured research published by Dhiraj K. Pradhan.


ACM Special Interest Group on Data Communication (SIGCOMM) | 1997

A cluster-based approach for routing in dynamic networks

P. Krishna; Nitin H. Vaidya; Mainak Chatterjee; Dhiraj K. Pradhan

The design and analysis of routing protocols is an important issue in dynamic networks such as packet radio and ad-hoc wireless networks. Most conventional protocols exhibit their least desirable behavior under highly dynamic interconnection topologies. We propose a new methodology for routing and topology information maintenance in dynamic networks. The basic idea behind the protocol is to divide the graph into a number of overlapping clusters. A change in the network topology corresponds to a change in cluster membership. We present algorithms for the creation of clusters, as well as algorithms to maintain them in the presence of various network events. Compared to existing conventional routing protocols, the proposed cluster-based approach incurs lower overhead during topology updates and also reconverges more quickly. The effectiveness of this approach also lies in the fact that existing routing protocols can be directly applied to the network by replacing the nodes with clusters.
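
The paper's own cluster-formation and maintenance algorithms are not reproduced here, but the underlying idea can be illustrated with a minimal sketch, assuming a very crude clustering rule (one overlapping cluster per node, made of the node and its current 1-hop neighbours); the function names and the topology are hypothetical.

```python
from collections import defaultdict

def one_hop_clusters(adjacency):
    """Illustrative only: one overlapping cluster per node, consisting of the
    node and its current 1-hop neighbours. A topology change then only
    touches the clusters of the affected endpoints, not the whole graph."""
    return {v: frozenset({v, *nbrs}) for v, nbrs in adjacency.items()}

def update_on_link_change(adjacency, u, v, up):
    """Apply a single link event and recompute only the two affected clusters."""
    if up:
        adjacency[u].add(v); adjacency[v].add(u)
    else:
        adjacency[u].discard(v); adjacency[v].discard(u)
    return {u: frozenset({u, *adjacency[u]}), v: frozenset({v, *adjacency[v]})}

# A small ad-hoc topology; link 2-4 comes up and only two clusters change.
adj = defaultdict(set, {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}})
clusters = one_hop_clusters(adj)
clusters.update(update_on_link_change(adj, 2, 4, up=True))
print(clusters)
```

A routing protocol can then treat each cluster as a single node, which is the substitution the abstract alludes to.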


International Conference on Distributed Computing Systems (ICDCS) | 1997

Improving performance of TCP over wireless networks

Bikram S. Bakshi; P. Krishna; Nitin H. Vaidya; Dhiraj K. Pradhan

Transmission Control Protocol (TCP) assumes a relatively reliable underlying network in which most packet losses are due to congestion. In a wireless network, however, packet losses occur more often because of unreliable wireless links than because of congestion. When TCP is used over wireless links, each packet loss on the wireless link causes congestion-control measures to be invoked at the source, which severely degrades performance. In this paper, we study the effect of burst errors on wireless links, packet size variation on the wired network, local error recovery by the base station, and explicit feedback by the base station on the performance of TCP over wireless networks. It is shown that the performance of TCP is sensitive to the packet size, and that significant performance improvements are obtained if a good packet size is used. While local recovery by the base station using link-level retransmissions is found to improve performance, timeouts can still occur at the source, causing redundant packet retransmissions. We propose an explicit feedback mechanism to prevent these timeouts during local recovery. Results indicate significant performance improvements when explicit feedback from the base station is used. A major advantage of our approaches over existing proposals is that no state maintenance is required at any intermediate host. Experiments are performed using the Network Simulator (NS) from Lawrence Berkeley Labs, extended to incorporate wireless link characteristics.
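
The observation that TCP throughput over a lossy link is sensitive to packet size can be illustrated with a rough back-of-envelope calculation, assuming independent bit errors and a fixed header overhead; this is only a sketch, not the NS-based simulation setup used in the paper.

```python
def packet_error_rate(ber, packet_bytes):
    """Probability that at least one bit of the packet is corrupted,
    assuming independent bit errors (a simplification; real wireless
    links often show burst errors, as the paper studies)."""
    return 1.0 - (1.0 - ber) ** (8 * packet_bytes)

def normalized_goodput(ber, payload_bytes, header_bytes=40):
    """Fraction of link capacity delivered as useful payload, ignoring
    TCP's congestion-control reaction to loss (illustration only)."""
    total = payload_bytes + header_bytes
    return (payload_bytes / total) * (1.0 - packet_error_rate(ber, total))

# Larger packets amortize header overhead but are hit by errors more often,
# so an intermediate packet size maximizes goodput for a given bit-error rate.
for payload in (128, 256, 512, 1024, 2048, 4096):
    print(payload, round(normalized_goodput(ber=1e-5, payload_bytes=payload), 3))
```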


IEEE Computer | 1995

Fault injection: a method for validating computer-system dependability

Jeffrey A. Clark; Dhiraj K. Pradhan

A fault-tolerant computer system's dependability must be validated to ensure that its redundancy has been correctly implemented and that the system will provide the desired level of reliable service. Fault injection, the deliberate insertion of faults into an operational system to determine its response, offers an effective solution to this problem. We survey several fault injection studies and discuss tools, such as React (Reliable Architecture Characterization Tool), that facilitate its application.
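
As a toy illustration of the technique surveyed here (not React's actual interface), the sketch below injects single-bit faults into one replica of a triple-modular-redundant computation and measures how often the voter masks them; the function names and the fault model are assumptions made for the example.

```python
import random

def majority_vote(outputs):
    """Vote across redundant module outputs (simple TMR-style masking)."""
    return max(set(outputs), key=outputs.count)

def run_module(x, flip_bit=None):
    """A stand-in computation; optionally corrupt one bit of its result."""
    y = x * x + 1
    return y ^ (1 << flip_bit) if flip_bit is not None else y

def fault_injection_campaign(trials=1000, x=7):
    """Deliberately insert faults and observe the system's response."""
    golden = run_module(x)
    masked = 0
    for _ in range(trials):
        victim = random.randrange(3)        # which of the three replicas is hit
        bit = random.randrange(16)          # which result bit is flipped
        outs = [run_module(x, bit if i == victim else None) for i in range(3)]
        masked += (majority_vote(outs) == golden)
    return masked / trials                  # fraction of injected faults masked

print(fault_injection_campaign())           # 1.0 expected: TMR masks single faults
```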


IEEE Transactions on Parallel and Distributed Systems | 1991

Consensus with dual failure modes

Fred J. Meyer; Dhiraj K. Pradhan

The problem of achieving consensus in a distributed system is discussed. Systems are treated in which either or both of two types of faults may occur: dormant (essentially omission and timing faults) and arbitrary (exhibiting arbitrary behavior, commonly referred to as Byzantine). Previous results showed that any number of dormant faults may be tolerated when there are no arbitrary faults, and that at most ⌊(n-1)/3⌋ arbitrary faults may be tolerated when there are no dormant faults (n is the number of processors). A continuum is established between these previous results: an algorithm exists if and only if n > f_max + 2m_max and c > f_max + m_max (where c is the system connectivity), when faults are constrained so that there are at most f_max faults in total, of which at most m_max are arbitrary. An algorithm is given and compared to known algorithms. A method is given to establish virtual links so that the communication graph appears completely connected.
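
The existence condition quoted above is easy to check mechanically; the sketch below simply encodes the two inequalities from the abstract (it is not the consensus algorithm itself), with parameter names chosen to match the abstract.

```python
def consensus_feasible(n, c, f_max, m_max):
    """n processors, connectivity c, at most f_max faults in total of which
    at most m_max may behave arbitrarily (Byzantine). Returns whether a
    consensus algorithm exists under the bounds stated in the abstract."""
    return n > f_max + 2 * m_max and c > f_max + m_max

# The pure Byzantine case (all faults arbitrary) recovers the classical n > 3m
# bound, while the purely dormant case (m_max = 0) only requires n > f_max.
print(consensus_feasible(n=7, c=7, f_max=2, m_max=2))   # True:  7 > 6 and 7 > 4
print(consensus_feasible(n=6, c=6, f_max=2, m_max=2))   # False: 6 > 6 fails
print(consensus_feasible(n=4, c=4, f_max=3, m_max=0))   # True:  dormant faults only
```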


International Test Conference (ITC) | 1992

Recursive Learning: An attractive alternative to the decision tree for test generation in digital circuits

Wolfgang Kunz; Dhiraj K. Pradhan

Most test generators for combinational and sequential circuits use a branch-and-bound technique to systematically explore the search space when trying to generate a test vector. This paper presents an alternative method. Instead of using a decision tree to implicitly try all combinations of signal values for a given set of signals, we use a learning routine that can be called recursively. Given enough recursions, it is guaranteed that we can identify all necessary assignments at a given stage of the algorithm. Our method is general in the sense that it can be combined with any logic alphabet and can be integrated into any FAN-based test generator for combinational circuits. Furthermore, recursive learning is equally applicable to test generation in sequential circuits and can even be used in hierarchical approaches. We show experimental results that demonstrate the attractiveness of our approach by comparing recursive learning with the conventional branch-and-bound technique for test generation in combinational circuits.
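
The core of recursive learning (used in both this paper and the journal version below) can be sketched for a toy netlist of 2-input AND/OR gates: when a required value leaves a gate unjustified, each possible justification is tried recursively, and only the assignments common to all consistent justifications are kept as necessary. This is a drastic simplification under assumed data structures, with no fault model and none of the FAN machinery, not the authors' implementation.

```python
def direct_implications(gates, assign):
    """Propagate simple forward/backward implications for 2-input AND/OR
    gates until a fixed point; return None on a conflict."""
    assign = dict(assign)
    changed = True
    while changed:
        changed = False
        for out, (op, a, b) in gates.items():
            ctrl = 0 if op == "AND" else 1               # controlling value
            va, vb, vo = assign.get(a), assign.get(b), assign.get(out)
            implied = {}
            if ctrl in (va, vb):
                implied[out] = ctrl                      # controlled output
            elif va == 1 - ctrl and vb == 1 - ctrl:
                implied[out] = 1 - ctrl
            if vo == 1 - ctrl:
                implied[a] = implied[b] = 1 - ctrl       # both inputs forced
            if vo == ctrl and va == 1 - ctrl:
                implied[b] = ctrl
            if vo == ctrl and vb == 1 - ctrl:
                implied[a] = ctrl
            for sig, val in implied.items():
                if assign.get(sig, val) != val:
                    return None                          # conflict
                if sig not in assign:
                    assign[sig] = val
                    changed = True
    return assign

def recursive_learning(gates, assign, depth=2):
    """For every unjustified gate, try each justification recursively and
    keep only the assignments common to all consistent justifications."""
    assign = direct_implications(gates, assign)
    if assign is None or depth == 0:
        return assign
    for out, (op, a, b) in gates.items():
        ctrl = 0 if op == "AND" else 1
        if assign.get(out) == ctrl and a not in assign and b not in assign:
            results = [r for just in ({a: ctrl}, {b: ctrl})
                       if (r := recursive_learning(gates, {**assign, **just},
                                                   depth - 1)) is not None]
            if not results:
                return None
            assign.update({s: v for s, v in results[0].items()
                           if all(r.get(s) == v for r in results)})
    return assign

# Reconvergence example: d = a AND b, e = a AND c, f = d OR e. Requiring f = 1
# implies nothing directly, but both justifications of f need a = 1, so it is learned.
gates = {"d": ("AND", "a", "b"), "e": ("AND", "a", "c"), "f": ("OR", "d", "e")}
print(recursive_learning(gates, {"f": 1}))   # result contains 'a': 1
```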


IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 1994

Recursive learning: a new implication technique for efficient solutions to CAD problems-test, verification, and optimization

Wolfgang Kunz; Dhiraj K. Pradhan

Motivated by the problem of test pattern generation in digital circuits, this paper presents a novel technique called recursive learning that is able to perform a logic analysis on digital circuits. By recursively calling certain learning functions, it is possible to extract all logic dependencies between signals in a circuit and to perform precise implications for a given set of value assignments. This is of fundamental importance because it represents a new solution to the Boolean satisfiability problem. Thus, what we present is a new and uniform conceptual framework for a wide range of CAD problems including, but not limited to, test pattern generation, design verification, and logic optimization problems. Previous test generators for combinational and sequential circuits use a decision tree to systematically explore the search space when trying to generate a test vector. Recursive learning represents an attractive alternative. Using recursive learning with sufficient depth of recursion during the test generation process guarantees that implications are performed precisely; i.e., all necessary assignments for fault detection are identified at every stage of the algorithm so that no backtracks can occur. Consequently, no decision tree is needed to guarantee the completeness of the test generation algorithm. Recursive learning is not restricted to a particular logic alphabet and can be combined with most test generators for combinational and sequential circuits. Experimental results that demonstrate the efficiency of recursive learning are compared with the conventional branch-and-bound technique for test generation in combinational circuits. In particular, redundancy identification by recursive learning is demonstrated to be much more efficient than by previously reported techniques. In an important recent development, recursive learning has been shown to provide significant progress in design verification problems. Also importantly, recursive learning-based techniques have already been shown to be useful for logic optimization. Specifically, techniques based on recursive learning have already yielded better-optimized circuits than the well-known MIS-II.


IEEE Transactions on Computers | 1991

A new framework for designing and analyzing BIST techniques and zero aliasing compression

Dhiraj K. Pradhan; Sandeep K. Gupta

A general framework for shift register-based signature analysis is presented, and a mathematical model for this framework, based on coding theory, is developed. This formulation has two key features. First, it allows for uniform treatment of LFSR-, MISR-, and multiple MISR-based signature analyzers. Second, using this formulation, a new compression scheme for multiple-output CUTs is proposed. This scheme, referred to as the multi-input LFSR, has the potential to achieve lower aliasing than other schemes of comparable hardware complexity, such as the multiple-MISR scheme. Several results on aliasing are presented, and certain known results are shown to be direct consequences of the formulation. Also developed are error models that take into account the circuit topology and the effect of faults at the outputs. Using these models, exact closed-form expressions for aliasing probability are developed. A closed-form aliasing expression for the MISR under an independent error model is provided.
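
A multiple-input signature register of the kind analyzed here can be sketched in a few lines; the register width, feedback taps, and response words below are arbitrary illustration choices, not a polynomial or circuit from the paper.

```python
def misr_compact(responses, width=8, taps=(0, 2, 3, 4)):
    """Compact a stream of `width`-bit test responses into one signature:
    each cycle the register shifts with LFSR feedback and XORs in the next
    response word (the 'multiple inputs')."""
    sig, mask = 0, (1 << width) - 1
    for word in responses:
        feedback = sig >> (width - 1)          # bit shifted out this cycle
        sig = (sig << 1) & mask
        if feedback:
            for t in taps:                     # feedback polynomial taps
                sig ^= 1 << t
        sig ^= word & mask                     # inject the response word
    return sig

good   = [0x3A, 0x7F, 0x01, 0xC4]
faulty = [0x3A, 0x7D, 0x01, 0xC4]              # one corrupted response word
print(hex(misr_compact(good)), hex(misr_compact(faulty)))
# Aliasing occurs exactly when a faulty response stream maps to the good signature.
```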


IEEE International Symposium on Fault-Tolerant Computing (FTCS) | 1996

Recoverable mobile environment: design and trade-off analysis

Dhiraj K. Pradhan; P. Krishna; Nitin H. Vaidya

The mobile wireless environment poses challenging problems for designing fault-tolerant systems because of the dynamics of mobility and the limited bandwidth available on wireless links. Traditional fault-tolerance schemes, therefore, cannot be directly applied to these systems. Mobile systems are often subject to environmental conditions that can cause loss of communications or data. Because of the consumer orientation of most mobile systems, run-time faults must be corrected with minimal (if any) intervention from the user. The fault-tolerance capability must, therefore, be transparent to the user. The paper presents recovery schemes for the failure of a mobile host. It describes the limitations of the mobile wireless environment and their impact on recovery protocols. Adaptations of well-known recovery schemes that suit the mobile environment are presented. The performance of these schemes has been analyzed to determine the environments in which each recovery scheme is best suited. The performance of the recovery schemes depends primarily on the wireless bandwidth, the communication-mobility ratio of the user, and the failure rate of the mobile host.


IEEE Transactions on Computers | 1994

Roll-forward checkpointing scheme: a novel fault-tolerant architecture

Dhiraj K. Pradhan; Nitin H. Vaidya

We propose a novel architecture for a fault-tolerant multiprocessor environment. It is assumed that the multiprocessor organization consists of a pool of active processing modules and either a small number of spare modules or active modules with some spare processing capacity. A fault-tolerance scheme is developed for duplex systems using checkpoints. Our scheme, unlike traditional checkpointing schemes, requires no rollbacks for recovering from single faults. The objective is to achieve the performance of a triple modular redundant system using duplex system redundancy.
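
A schematic sketch of the roll-forward idea, with a hypothetical deterministic task standing in for real computation: the two copies compare checkpoints after every interval, and on a mismatch the last interval is replayed from the previous agreed checkpoint on spare capacity to decide which copy is correct, so the fault-free copy is never rolled back. Details such as concurrent validation are omitted; this is not the paper's protocol.

```python
def run_interval(state, step, faulty=False):
    """Execute one checkpoint interval of a deterministic task; a 'fault'
    silently corrupts the result (illustration only)."""
    result = state + step * step
    return result + 1 if faulty else result

def roll_forward_duplex(steps, fault_at=None):
    a = b = agreed = 0                    # replica states and last agreed checkpoint
    for step in steps:
        ca = run_interval(a, step, faulty=(step == fault_at))
        cb = run_interval(b, step)
        if ca == cb:
            a = b = agreed = ca           # checkpoints match: commit and continue
        else:
            # Mismatch: replay the interval from the last agreed checkpoint on
            # spare capacity; the correct result is adopted without rollback.
            reference = run_interval(agreed, step)
            a = b = agreed = reference
    return agreed

assert roll_forward_duplex(range(5)) == roll_forward_duplex(range(5), fault_at=3)
print(roll_forward_duplex(range(5)))
```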


Theory and Applications of Satisfiability Testing (SAT) | 2004

NiVER: non-increasing variable elimination resolution for preprocessing SAT instances

Sathiamoorthy Subbarayan; Dhiraj K. Pradhan

The original algorithm for the SAT problem, Variable Elimination Resolution (VER/DP), has exponential space complexity. To tackle that, the backtracking-based DPLL procedure [2] is used in SAT solvers. We present a combination of two techniques: NiVER, a special case of VER, eliminates some variables in a preprocessing step, and the simplified problem is then solved with a DPLL SAT solver. NiVER is a resolution-based preprocessor that never increases the formula size. In our experiments, NiVER yielded up to a 74% decrease in N (number of variables), a 58% decrease in K (number of clauses), and a 46% decrease in L (literal count). In many real-life instances, we observed that most of the resolvents for several variables are tautologies; such variables are removed by NiVER. Hence, despite its simplicity, NiVER does result in easier instances. When no NiVER-removable variables are present, its very low overhead makes its cost insignificant. Empirical results using state-of-the-art SAT solvers show the usefulness of NiVER. Some instances cannot be solved without NiVER preprocessing. NiVER consistently performs well and hence can be incorporated into all general-purpose SAT solvers.
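
A minimal sketch of NiVER-style variable elimination, assuming total literal count as the "formula size" measure (consistent with the L statistic above); the clause representation and the tiny example CNF are made up for illustration, and this is not the authors' implementation.

```python
def resolve(c1, c2, var):
    """Resolvent of two clauses on var; None if it is a tautology."""
    res = (set(c1) - {var}) | (set(c2) - {-var})
    return None if any(-lit in res for lit in res) else frozenset(res)

def niver_eliminate(clauses, var):
    """Try to eliminate `var` by resolution; accept the elimination only if
    the replacement clauses are no larger (by total literal count) than the
    clauses they replace, so the formula never grows."""
    pos = [c for c in clauses if var in c]
    neg = [c for c in clauses if -var in c]
    resolvents = [r for p in pos for n in neg
                  if (r := resolve(p, n, var)) is not None]
    if sum(map(len, resolvents)) > sum(map(len, pos + neg)):
        return clauses, False                  # size would increase: keep var
    rest = [c for c in clauses if var not in c and -var not in c]
    return rest + resolvents, True

# All resolvents on variable 1 are tautologies here, so it is removed for free.
cnf = [frozenset(c) for c in ([1, 2], [-1, -2], [2, 3])]
cnf, eliminated = niver_eliminate(cnf, 1)
print(eliminated, cnf)                         # True [frozenset({2, 3})]
```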

Collaboration


Dive into Dhiraj K. Pradhan's collaborations.

Top Co-Authors

Jimson Mathew

Indian Institute of Technology Patna

Hafizur Rahaman

Indian Institute of Engineering Science and Technology

Wolfgang Kunz

Kaiserslautern University of Technology
