Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Pradip K. Srimani is active.

Publications


Featured research published by Pradip K. Srimani.


IEEE Transactions on Parallel and Distributed Systems | 2001

A strategy to manage cache consistency in a disconnected distributed environment

S.K.S. Gupta; Pradip K. Srimani

Modern distributed systems involving a large number of nonstationary clients (mobile hosts, MHs) connected via unreliable low-bandwidth communication channels are very prone to frequent disconnections. Disconnections may occur for different reasons: a client may voluntarily switch off (to save battery power), or it may be involuntarily disconnected due to its own movement in a mobile network (hand-off, wireless link failures, etc.). A mobile computing environment is characterized by slow wireless links and relatively underprivileged hosts with limited battery power. Still, when data at the server changes, the client hosts must be made aware of this fact so that they can invalidate their caches; otherwise, a host would continue to answer queries with cached values, returning incorrect data. The nature of the physical medium, coupled with the fact that disconnections from the network are very frequent in mobile computing environments, demands a cache invalidation strategy with the minimum possible overhead. In this paper, we present a new cache maintenance scheme, called AS. The objectives of the proposed scheme are to minimize the overhead for the MHs to validate their caches upon reconnection, to allow stateless servers, and to minimize the bandwidth requirement. The general approach is (1) to use asynchronous invalidation messages and (2) to buffer invalidation messages from servers at the MH's Home Location Cache (HLC) while the MH is disconnected from the network and redeliver these invalidation messages to the MH when it gets reconnected. The use of asynchronous invalidation messages minimizes access latency, buffering of invalidation messages minimizes the overhead of validating an MH's cache after each disconnection, and the use of the HLC off-loads the overhead of maintaining the state of each MH's cache from the servers. The MH can be disconnected from the server either voluntarily or involuntarily.
We capture the effects of both by using a single parameter: the percentage of time a mobile host is disconnected from the network. We demonstrate the efficacy of our scheme through simulation and performance modeling. In particular, we show that the average data access latency and the number of uplink requests by an MH decrease under the proposed strategy, at the cost of buffer space at the HLC. We provide an analytical comparison between our proposed scheme and the existing scheme for cache management in a mobile environment. Extensive experimental results compare the schemes in terms of performance metrics such as latency and number of uplink requests, under both high and low rates of change of data at the servers, for various parameter values. A mathematical model of the scheme is developed which matches closely with the simulation results.
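
As a rough illustration of the buffering idea, here is a toy Python sketch (class and method names are ours, not the paper's): a Home Location Cache queues invalidation messages while its mobile host is disconnected and redelivers them on reconnection, so the MH drops only the entries that actually went stale.

```python
# Toy sketch, not the AS protocol verbatim: an HLC buffers server
# invalidation messages while its mobile host (MH) is disconnected.

class HomeLocationCache:
    def __init__(self):
        self.buffer = []          # invalidations queued while MH is away
        self.connected = True

    def on_invalidation(self, key):
        # Asynchronous invalidation arriving from a server.
        if self.connected:
            return [key]          # forward immediately
        self.buffer.append(key)   # buffer for redelivery
        return []

    def reconnect(self):
        # Redeliver everything buffered during the disconnection.
        pending, self.buffer = self.buffer, []
        self.connected = True
        return pending

class MobileHost:
    def __init__(self, hlc):
        self.cache = {}
        self.hlc = hlc

    def invalidate(self, keys):
        for k in keys:
            self.cache.pop(k, None)

# Usage: MH caches x and y, disconnects, server invalidates x meanwhile.
hlc = HomeLocationCache()
mh = MobileHost(hlc)
mh.cache = {"x": 1, "y": 2}
hlc.connected = False
mh.invalidate(hlc.on_invalidation("x"))   # buffered, nothing dropped yet
mh.invalidate(hlc.reconnect())            # "x" dropped, "y" survives
print(sorted(mh.cache))                   # ['y']
```

Because the HLC, not the server, tracks what each MH missed, the servers can stay stateless, which is one of the stated objectives above.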


Workshop on Mobile Computing Systems and Applications | 1999

An adaptive protocol for reliable multicast in mobile multi-hop radio networks

Sandeep K. S. Gupta; Pradip K. Srimani

In this paper we propose a new protocol for reliable multicast in a multi-hop mobile radio network. The protocol is reliable, i.e., it guarantees message delivery to all multicast nodes even when the topology of the network changes during multicasting. The proposed protocol uses a core-based shared tree. The multicast tree may get fragmented due to node movements. A notion of forwarding region is introduced, which is used to glue together fragments of a multicast tree. The gluing process involves flooding the forwarding regions of the nodes that witness a topology change due to node movements. Delivery of multicast messages to mobile nodes is expedited through (i) pushing of the message by witness nodes in their forwarding regions and (ii) pulling of messages by a mobile node during the (re)joining process. Hence, the protocol conserves network bandwidth by combining the push-pull approach with limiting network flooding to only the minimal parts of the network that are affected by the topology change. The protocol adapts to both topology change and the distribution of the multicast group members to minimize the use of system resources.
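
The idea of limiting flooding to a region around a witness node can be sketched as a bounded breadth-first search; the `forwarding_region` function and the hop `radius` below are our illustration, not the paper's definition.

```python
# Hedged illustration: flood only the nodes within `radius` hops of a
# witness node instead of the entire network.

from collections import deque

def forwarding_region(adj, witness, radius):
    # BFS out to `radius` hops; only these nodes see the flood.
    seen = {witness: 0}
    q = deque([witness])
    while q:
        u = q.popleft()
        if seen[u] == radius:
            continue
        for v in adj[u]:
            if v not in seen:
                seen[v] = seen[u] + 1
                q.append(v)
    return set(seen)

# Path network 0-1-2-3-4-5; node 2 witnesses a topology change.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(sorted(forwarding_region(adj, 2, 1)))   # [1, 2, 3]
```

Only three of the six nodes are flooded here, which is the bandwidth-conservation point made in the abstract.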


Computers & Mathematics with Applications | 2003

Self-stabilizing Algorithms for Minimal Dominating Sets and Maximal Independent Sets

Sandra Mitchell Hedetniemi; Stephen T. Hedetniemi; David Pokrass Jacobs; Pradip K. Srimani

In the self-stabilizing algorithmic paradigm for distributed computation, each node has only a local view of the system; yet, in a finite amount of time, the system converges to a global state satisfying the desired property. In this paper we present polynomial-time self-stabilizing algorithms for finding a dominating bipartition, a maximal independent set, and a minimal dominating set in any graph.
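
As a concrete (if simplified) illustration, the following sketch runs the two standard self-stabilizing rules for a maximal independent set under a serialized daemon; this is the textbook scheme, not necessarily the paper's exact algorithm.

```python
# Simplified central-daemon simulation: one node moves at a time.
# state[v] is True if node v currently claims membership in the set.

def stabilize_mis(adj, state):
    moved = True
    while moved:
        moved = False
        for v in sorted(adj):
            in_nbr = any(state[u] for u in adj[v])
            if state[v] and in_nbr:            # rule 1: leave on conflict
                state[v] = False
                moved = True
            elif not state[v] and not in_nbr:  # rule 2: enter if free
                state[v] = True
                moved = True
    return {v for v in adj if state[v]}

# 5-cycle started from an illegitimate state where every node is "in".
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
mis = stabilize_mis(adj, {v: True for v in adj})
print(sorted(mis))   # [1, 4]
```

Note the self-stabilizing flavor: the initial state is arbitrary (here, globally illegitimate), yet local rules alone drive the system to a legitimate global state.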


IEEE Transactions on Parallel and Distributed Systems | 1996

A new family of Cayley graph interconnection networks of constant degree four

Premkumar Vadapalli; Pradip K. Srimani

We propose a new family of interconnection networks that are Cayley graphs with constant node degree 4. These graphs are regular, have logarithmic diameter, and are maximally fault tolerant. We investigate different algebraic properties of these networks (including fault tolerance) and propose optimal routing algorithms. As far as we know, this is the first family of Cayley graphs of constant degree 4.
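
For illustration only (the paper's specific generator set is not reproduced here), the sketch below builds a degree-4 Cayley graph of the cyclic group Z_16 with the symmetric generator set {±1, ±4} (a circulant graph) and checks 4-regularity and the diameter by BFS.

```python
# Illustrative Cayley graph on Z_16; generators chosen by us, not taken
# from the paper's construction.

from collections import deque

def cayley_graph(n, gens):
    # Vertices are elements of Z_n; edges follow the generator set.
    return {v: sorted({(v + g) % n for g in gens}) for v in range(n)}

def diameter(adj):
    best = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

g = cayley_graph(16, [1, -1, 4, -4])
assert all(len(nbrs) == 4 for nbrs in g.values())  # constant degree 4
print(diameter(g))   # 3 for this instance
```

Vertex-transitivity of Cayley graphs is what makes a single BFS sufficient for the diameter in principle; the code above checks all sources anyway for clarity.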


Information Processing Letters | 1992

Another distributed algorithm for multiple entries to a critical section

Pradip K. Srimani; Rachamallu L. N. Reddy

A new algorithm is proposed to allow K simultaneous entries into a critical section in a distributed system. The proposed algorithm needs, in the worst case, half as many message exchanges as a recently published algorithm (K. Raymond, Inform. Process. Lett. 30 (1989) 189–193). A simple average-case analysis of the proposed algorithm is presented, which shows that we also get almost 50% savings in the number of messages in the average case.


Journal of Parallel and Distributed Computing | 1996

Conditional Fault Diameter of Star Graph Networks

Yordan Rouskov; Shahram Latifi; Pradip K. Srimani

It is well known that star graphs are strongly resilient, like the n-cubes, in the sense that they are optimally fault tolerant and the fault diameter increases only by one in the presence of the maximum number of allowable faults. We investigate star graphs under the condition of forbidden faulty sets, where all the neighbors of a node cannot be faulty simultaneously; we show that under these conditions star graphs can tolerate up to (2n − 5) faulty nodes and the fault diameter increases only by 2 in the worst case in the presence of the maximum number of faults. Thus, star graphs enjoy a similar property of strong resilience under forbidden faulty sets as the n-cubes. We have developed algorithms to trace the vertex-disjoint paths under different conditions.
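
A quick empirical check of the underlying structure (ours, not the paper's analysis): build the star graph S_4, whose vertices are permutations and whose edges swap the first symbol with the i-th, and confirm it is (n−1)-regular with the known diameter ⌊3(n−1)/2⌋ = 4.

```python
# Build S_4 and verify regularity and diameter by exhaustive BFS.

from itertools import permutations
from collections import deque

def star_graph(n):
    adj = {}
    for p in permutations(range(n)):
        nbrs = []
        for i in range(1, n):
            q = list(p)
            q[0], q[i] = q[i], q[0]   # swap first symbol with the i-th
            nbrs.append(tuple(q))
        adj[p] = nbrs
    return adj

def diameter(adj):
    best = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        best = max(best, max(dist.values()))
    return best

s4 = star_graph(4)
print(len(s4), diameter(s4))   # 24 vertices, diameter 4
```

The (n−1)-regularity is what makes the forbidden-faulty-set condition natural: no node may lose all n−1 of its neighbors at once.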


IEEE Transactions on Software Engineering | 1993

An examination of fault exposure ratio

Yashwant K. Malaiya; A. von Mayrhauser; Pradip K. Srimani

The fault exposure ratio, K, is an important factor that controls the per-fault hazard rate and, hence, the effectiveness of the testing of software. The authors examine the variation of K with fault density, which declines with testing time. Because faults become harder to find, K should decline if testing is strictly random. However, it is shown that at lower fault densities K tends to increase. This is explained using the hypothesis that real testing is more efficient than strictly random testing, especially at the end of the test phase. Data sets from several different projects (in the USA and Japan) are analyzed. When the two factors, namely the shift in the detectability profile and the nonrandomness of testing, are combined, the analysis leads to the logarithmic model that is known to have superior predictive capability.
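
The logarithmic model referred to above is the Musa–Okumoto logarithmic Poisson model; the snippet below illustrates its shape numerically, with made-up parameter values (lambda0, theta are ours).

```python
# Musa-Okumoto logarithmic model, hypothetical parameters:
#   mu(t)     = (1/theta) * ln(1 + lambda0 * theta * t)   (expected faults found)
#   lambda(t) = lambda0 / (1 + lambda0 * theta * t)       (failure intensity)

import math

lambda0, theta = 10.0, 0.05   # made-up initial intensity and decay parameter

def mu(t):
    return math.log(1.0 + lambda0 * theta * t) / theta

def intensity(t):
    return lambda0 / (1.0 + lambda0 * theta * t)

# Intensity falls monotonically; cumulative faults grow ever more slowly.
for t in (0, 10, 100):
    print(round(mu(t), 1), round(intensity(t), 2))
```

The intensity is exactly the derivative of mu, so the model's "faults get harder to find" behavior is built in: each found fault lowers the rate at which the next one appears.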


Parallel Computing | 2002

Parallel data intensive computing in scientific and commercial applications

Mario Cannataro; Domenico Talia; Pradip K. Srimani

Applications that explore, query, analyze, visualize, and, in general, process very large scale data sets are known as Data Intensive Applications. Large scale data intensive computing plays an increasingly important role in many scientific activities and commercial applications, whether it involves data mining of commercial transactions, experimental data analysis and visualization, or intensive simulation such as climate modeling. By combining high performance computation, very large data storage, high bandwidth access, and high-speed local and wide area networking, data intensive computing enhances the technical capabilities and usefulness of most systems. The integration of parallel and distributed computational environments will produce major improvements in performance for both computing intensive and data intensive applications in the future. The purpose of this introductory article is to provide an overview of the main issues in parallel data intensive computing in scientific and commercial applications and to encourage the reader to go into the more in-depth articles later in this special issue.


IEEE Transactions on Parallel and Distributed Systems | 2003

Adaptive core selection and migration method for multicast routing in mobile ad hoc networks

Sandeep K. S. Gupta; Pradip K. Srimani

Several multicast protocols such as Protocol Independent Multicast (PIM) (Deering et al., 1996) and Core-Based Trees (CBT) (Ballardie et al., 1993) use the notion of group-shared trees. The reason is that construction of a minimal-cost tree spanning all members of the multicast group is expensive; hence these protocols use a core-based group-shared tree to distribute packets from all the sources. A core-based tree is a shortest-path tree rooted at some core node. The core node is also referred to as a center node or a rendezvous point. Core nodes may be chosen from some preselected set of nodes, or some heuristics may be employed to select core nodes. We present distributed core selection and migration protocols for mobile ad hoc networks with dynamically changing network topology. Most protocols for core selection in static networks are not suitable for ad hoc networks, since these algorithms depend on knowledge of the entire network topology, which is not available or is too expensive to maintain in an ad hoc network with dynamic topology. The proposed core location method is based on the notion of the median node of the current multicast tree instead of the median node of the entire network. The rationale is that mobile ad hoc network graphs are in general sparse and, hence, the multicast tree is a good approximation of the entire network for the current purpose. Our adaptive distributed core selection and migration method uses the fact that the median of a tree is equivalent to the centroid of that tree. The significance of this observation is that the computation of a tree's centroid does not require any distance information. Mobile ad hoc networks have limited bandwidth, which needs to be conserved. Hence, we use the cost of the multicast tree, defined as the sum of the weights of all the links in the tree, which signifies the total bandwidth consumed for multicasting a packet.
We compare the cost of the shortest-path tree rooted at the tree median, Cost_TM, with the cost of the shortest-path tree rooted at the median of the graph, Cost_GM, which requires complete topology information to compute. A network graph model for generating random mobile ad hoc networks is developed to perform this comparison. The simulation results show that for large networks, the ratio Cost_TM/Cost_GM lies between 0.8 and 1.2 for different multicast groups. Further, as the size of the multicast group increases, the ratio approaches 1.
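
The median-equals-centroid observation can be checked directly on a small example; the sketch below (ours, with a hypothetical 5-node tree) computes the median from distance sums and the centroid from component sizes alone, without any distance information.

```python
# Median of a tree: node minimizing the total distance to all nodes.
# Centroid: node minimizing the largest component left after its removal.
# In a tree these coincide, and the centroid needs only subtree sizes.

from collections import deque

def bfs_dist_sum(adj, s):
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return sum(dist.values())

def component_sizes(adj, removed):
    sizes, seen = [], {removed}
    for s in adj:
        if s in seen:
            continue
        size, q = 0, deque([s])
        seen.add(s)
        while q:
            u = q.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        sizes.append(size)
    return sizes

tree = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4], 4: [3]}
median = min(tree, key=lambda v: bfs_dist_sum(tree, v))
centroid = min(tree, key=lambda v: max(component_sizes(tree, v)))
print(median, centroid)   # both are node 1 for this tree
```

This is the practical payoff described in the abstract: a node can be elected core by comparing subtree sizes, a purely local aggregate, instead of maintaining global distance tables.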


International Parallel and Distributed Processing Symposium | 2003

Self-stabilizing protocols for maximal matching and maximal independent sets for ad hoc networks

Wayne Goddard; Stephen T. Hedetniemi; David Pokrass Jacobs; Pradip K. Srimani

We propose two distributed algorithms to maintain, respectively, a maximal matching and a maximal independent set in a given ad hoc network; our algorithms are fault tolerant (reliable) in the sense that the algorithms can detect occasional link failures and/or new link creations in the network (due to mobility of the hosts) and can readjust the global predicates. We provide time complexity analysis of the algorithms in terms of the number of rounds needed for the algorithm to stabilize after a topology change, where a round is defined as a period of time in which each node in the system receives beacon messages from all its neighbors. In any ad hoc network, the participating nodes periodically transmit beacon messages for message transmission as well as to maintain the knowledge of the local topology at the node; as a result, the nodes get the information about their neighbor nodes synchronously (at specific time intervals). Thus, the paradigm to analyze the complexity of the self-stabilizing algorithms in the context of ad hoc networks is very different from the traditional concept of an adversary daemon used in proving the convergence and correctness of self-stabilizing distributed algorithms in general.
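
For the maximal-matching half, here is a compact serialized-daemon simulation of Hsu–Huang-style rules (a standard self-stabilizing scheme; the paper's ad hoc variant works in beacon-defined rounds, which are not modeled here). `p[v]` is the neighbor node v points at, or None.

```python
# Three classic rules per node: accept a proposal (marriage), propose to
# a free neighbor, or withdraw when the partner points elsewhere.

def stabilize_matching(adj, p):
    moved = True
    while moved:
        moved = False
        for v in sorted(adj):
            if p[v] is None:
                suitors = [u for u in adj[v] if p[u] == v]
                if suitors:                    # marriage: accept a proposal
                    p[v] = min(suitors); moved = True
                else:
                    free = [u for u in adj[v] if p[u] is None]
                    if free:                   # proposal: point at a free neighbor
                        p[v] = min(free); moved = True
            elif p[p[v]] not in (None, v):     # withdraw: partner looked away
                p[v] = None; moved = True
    # Return the matched edges (mutual pointers), each listed once.
    return sorted((v, p[v]) for v in adj
                  if p[v] is not None and p[p[v]] == v and v < p[v])

# Path 0-1-2-3, starting from the all-null state.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(stabilize_matching(adj, {v: None for v in adj}))   # [(0, 1), (2, 3)]
```

In the ad hoc setting described above, the loop body would instead be triggered once per beacon round, with each node reading its neighbors' pointers from the received beacons.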

Collaboration


Dive into Pradip K. Srimani's collaborations.

Top Co-Authors

Bhabani P. Sinha

Indian Statistical Institute

Sumit Sur

Colorado State University
