
Publication


Featured research published by P. R. Kumar.


IEEE Transactions on Information Theory | 2000

The capacity of wireless networks

Piyush Gupta; P. R. Kumar

When n identical randomly located nodes, each capable of transmitting at W bits per second and using a fixed range, form a wireless network, the throughput λ(n) obtainable by each node for a randomly chosen destination is Θ(W/√(n log n)) bits per second under a noninterference protocol. If the nodes are optimally placed in a disk of unit area, traffic patterns are optimally assigned, and each transmission's range is optimally chosen, the bit-distance product that can be transported by the network per second is Θ(W√(An)) bit-meters per second. Thus even under optimal circumstances, the throughput is only Θ(W/√n) bits per second for each node for a destination nonvanishingly far away. Similar results also hold under an alternate physical model where a required signal-to-interference ratio is specified for successful receptions. Fundamentally, the constriction in capacity arises because every node, everywhere in the domain, must share whatever portion of the channel it is utilizing with nodes in its local neighborhood. Splitting the channel into several subchannels does not change any of the results. Some implications may be worth considering by designers. Since the throughput furnished to each user diminishes to zero as the number of users increases, networks connecting smaller numbers of users, or featuring connections mostly with nearby neighbors, may be more likely to find acceptance.
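The Θ(W/√(n log n)) per-node scaling above can be illustrated numerically. This is only a sketch: the constant hidden by the Θ-notation is unspecified in the result, so it is taken as 1 here purely for illustration.

```python
import math

# Per-node throughput scaling from the abstract: Theta(W / sqrt(n log n))
# bits/s under the protocol model. The Theta bound hides an unspecified
# constant, assumed to be 1 here for illustration only.
def per_node_throughput(n, w_bits_per_s):
    """Order-of-magnitude per-node throughput for n random nodes."""
    return w_bits_per_s / math.sqrt(n * math.log(n))

# As the network grows, per-node throughput shrinks toward zero.
for n in (100, 1_000, 10_000):
    print(n, round(per_node_throughput(n, 1e6)))
```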


Wireless Networks | 2004

The number of neighbors needed for connectivity of wireless networks

Feng Xue; P. R. Kumar

Unlike wired networks, wireless networks do not come with links. Rather, links have to be fashioned out of the ether by nodes choosing neighbors to connect to. Moreover, the locations of the nodes may be random. The question that we resolve is: how many neighbors should each node be connected to in order that the overall network is connected in a multi-hop fashion? We show that in a network with n randomly placed nodes, each node should be connected to Θ(log n) nearest neighbors. If each node is connected to fewer than 0.074 log n nearest neighbors, then the network is asymptotically disconnected with probability one as n increases, while if each node is connected to more than 5.1774 log n nearest neighbors, then the network is asymptotically connected with probability approaching one as n increases. It appears that the critical constant may be close to one, but that remains an open problem. These results should be contrasted with some works in the 1970s and 1980s which suggested that the “magic number” of nearest neighbors should be six or eight.
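The threshold behavior described above can be probed with a small simulation. This is a sketch, not the paper's construction: place n random nodes in the unit square, connect each to its k nearest neighbors (taking edges as undirected), and test connectivity of the resulting graph with union-find.

```python
import math
import random

def knn_connected(n, k, seed=0):
    """Place n random nodes in the unit square, link each to its k nearest
    neighbors (undirected), and report whether the graph is connected."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    parent = list(range(n))              # union-find forest over nodes
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i, (xi, yi) in enumerate(pts):
        order = sorted(range(n),
                       key=lambda j: (pts[j][0] - xi) ** 2 + (pts[j][1] - yi) ** 2)
        for j in order[1:k + 1]:         # order[0] is node i itself
            parent[find(i)] = find(j)
    return len({find(i) for i in range(n)}) == 1

n = 500
# Compare very few neighbors against on the order of log(n) neighbors.
print(knn_connected(n, k=1), knn_connected(n, k=math.ceil(2 * math.log(n))))
```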


IEEE Wireless Communications | 2005

A cautionary perspective on cross-layer design

Vikas Kawadia; P. R. Kumar

Recently, in an effort to improve the performance of wireless networks, there has been increased interest in protocols that rely on interactions between different layers. However, such cross-layer design can run at cross purposes with sound and longer-term architectural principles, and lead to various negative consequences. This motivates us to step back and reexamine holistically the issue of cross-layer design and its architectural ramifications. We contend that a good architectural design leads to proliferation and longevity, and illustrate this with some historical examples. Even though the wireless medium is fundamentally different from the wired one, and can offer undreamt-of modalities of cooperation, we show that the conventional layered architecture is a reasonable way to operate wireless networks, and is in fact optimal up to an order. However, the temptation and perhaps even the need to optimize by incorporating cross-layer adaptation cannot be ignored, so we examine the issues involved. We show that unintended cross-layer interactions can have undesirable consequences on overall system performance. We illustrate them with certain cross-layer schemes loosely based on recent proposals. We attempt to distill a few general principles for cross-layer design. Moreover, unbridled cross-layer design can lead to spaghetti design, which can stifle further innovation and be difficult to maintain. At a critical time when wireless networks may be on the cusp of massive proliferation, architectural considerations may be paramount. We argue that it behooves us to exercise caution while engaging in cross-layer design.


IEEE Transactions on Information Theory | 2004

A network information theory for wireless communication: scaling laws and optimal operation

Liang-Liang Xie; P. R. Kumar

How much information can be carried over a wireless network with a multiplicity of nodes, and how should the nodes cooperate to transfer information? To study these questions, we formulate a model of wireless networks that particularly takes into account the distances between nodes, and the resulting attenuation of radio signals, and study a performance measure that weights information by the distance over which it is transported. Consider a network with the following features. i) n nodes located on a plane, with minimum separation distance ρ_min > 0. ii) A simplistic model of signal attenuation e^(−γρ)/ρ^δ over a distance ρ, where γ ≥ 0 is the absorption constant (usually positive, unless over a vacuum), and δ > 0 is the path loss exponent. iii) All receptions subject to additive Gaussian noise of variance σ². The performance measure we mainly, but not exclusively, study is the transport capacity C_T := sup Σ_{ℓ=1}^{m} R_ℓ · ρ_ℓ, where the supremum is taken over m and over vectors (R_1, R_2, ..., R_m) of feasible rates for m source-destination pairs, and ρ_ℓ is the distance between the ℓth source and its destination. It is the supremum distance-weighted sum of rates that the wireless network can deliver. We show that there is a dichotomy between the cases of relatively high and relatively low attenuation. When γ > 0 or δ > 3, the relatively high attenuation case, the transport capacity is bounded by a constant multiple of the sum of the transmit powers of the nodes in the network. However, when γ = 0 and δ < 3/2, the low-attenuation case, we show that there exist networks that can provide unbounded transport capacity for fixed total power, yielding zero-energy-priced communication.
Examples show that nodes can profitably cooperate over large distances using coherence and multiuser estimation when the attenuation is low. These results are established by developing a coding scheme and an achievable rate for Gaussian multiple-relay channels, a result that may be of interest in its own right.
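The transport capacity is a distance-weighted sum of rates. A toy evaluation for one hypothetical feasible rate vector (the supremum over all feasible vectors is the quantity the paper bounds) looks like:

```python
import math

# Transport capacity sums rate * distance over source-destination pairs:
# C_T = sup over feasible (R_1..R_m) of sum of R_l * rho_l. Here we only
# evaluate the distance-weighted sum for one made-up rate vector.
def transport(rates_and_pairs):
    """rates_and_pairs: list of (rate_bits_per_s, (src_xy, dst_xy))."""
    total = 0.0
    for rate, (src, dst) in rates_and_pairs:
        rho = math.dist(src, dst)        # Euclidean source-destination distance
        total += rate * rho
    return total

flows = [
    (1e6, ((0.0, 0.0), (3.0, 4.0))),     # 1 Mbit/s over 5 m -> 5e6 bit-m/s
    (2e6, ((1.0, 1.0), (1.0, 2.0))),     # 2 Mbit/s over 1 m -> 2e6 bit-m/s
]
print(transport(flows))                  # 7e6 bit-meters per second
```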


IEEE Transactions on Automatic Control | 1986

Optimal control of production rate in a failure prone manufacturing system

Ram Akella; P. R. Kumar

Consider a manufacturing system producing a single commodity. The manufacturing system can be in one of two states: functional and failed. It moves back and forth between these two states as a continuous-time Markov chain, with mean time between failures 1/q1 and mean time to repair 1/q2. When functional, the manufacturing system can produce at up to a maximum rate d. When failed, it cannot produce the commodity at all.
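The two-state machine model can be simulated directly. The sketch below estimates the long-run fraction of time the machine is functional, which for this chain is q2/(q1 + q2); the parameter values are made up for illustration.

```python
import random

# The machine alternates between 'functional' and 'failed' as a two-state
# continuous-time Markov chain: exponential up-times with mean 1/q1 and
# exponential repair times with mean 1/q2.
def simulate_availability(q1, q2, horizon, seed=0):
    """Estimate the long-run fraction of time the machine is functional."""
    rng = random.Random(seed)
    t, up_time, functional = 0.0, 0.0, True
    while t < horizon:
        dwell = rng.expovariate(q1 if functional else q2)
        dwell = min(dwell, horizon - t)  # truncate the last sojourn
        if functional:
            up_time += dwell
        t += dwell
        functional = not functional
    return up_time / horizon

q1, q2 = 0.1, 1.0                        # MTBF = 10, MTTR = 1 (made-up values)
print(simulate_availability(q1, q2, horizon=1e5))   # close to q2/(q1+q2) = 10/11
```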


international conference on computer communications | 2003

Power control and clustering in ad hoc networks

Vikas Kawadia; P. R. Kumar

In this paper, we consider the problem of power control when nodes are nonhomogeneously dispersed in space. In such situations, one seeks to employ per packet power control depending on the source and destination of the packet. This gives rise to a joint problem which involves not only power control but also clustering. We provide three solutions for joint clustering and power control. The first protocol, CLUSTERPOW, aims to increase the network capacity by increasing spatial reuse. We provide a simple and modular architecture to implement CLUSTERPOW at the network layer. The second, Tunnelled CLUSTERPOW, allows a finer optimization by using encapsulation, but we do not know of an efficient way to implement it. The last, MINPOW, whose basic idea is not new, provides an optimal routing solution with respect to the total power consumed in communication. Our contribution includes a clean implementation of MINPOW at the network layer without any physical layer support. We establish that all three protocols ensure that packets ultimately reach their intended destinations. We provide a software architectural framework for our implementation as a network layer protocol. The architecture works with any routing protocol, and can also be used to implement other power control schemes. Details of the implementation in Linux are provided.
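CLUSTERPOW's underlying rule can be sketched as follows. This is an illustration of the idea only, not the Linux implementation described in the paper: each transmit power level induces a connectivity graph, and a packet is sent at the lowest power level whose graph still reaches the destination. The `reachable` helper and the example graphs are hypothetical.

```python
def reachable(graph, src, dst):
    """Breadth-first reachability in an adjacency-list graph."""
    seen, frontier = {src}, [src]
    while frontier:
        node = frontier.pop()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return dst in seen

def lowest_power_level(graphs_by_power, src, dst):
    """graphs_by_power: {power_mW: adjacency_list}, tried lowest first."""
    for power in sorted(graphs_by_power):
        if reachable(graphs_by_power[power], src, dst):
            return power
    return None                          # dst unreachable at every level

graphs = {
    1:   {"A": ["B"], "B": ["A"]},                     # 1 mW: only A-B in range
    100: {"A": ["B", "C"], "B": ["A"], "C": ["A"]},    # 100 mW: C reachable too
}
print(lowest_power_level(graphs, "A", "B"))   # 1
print(lowest_power_level(graphs, "A", "C"))   # 100
```

Sending to a nearby destination at low power increases spatial reuse, which is the capacity argument the abstract makes for CLUSTERPOW.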


international symposium on information theory | 2001

Towards an information theory of large networks: an achievable rate region

Piyush Gupta; P. R. Kumar

We study communication networks of arbitrary size and topology and communicating over a general vector discrete memoryless channel (DMC). We propose an information-theoretic constructive scheme for obtaining an achievable rate region in such networks. Many well-known capacity-defining achievable rate regions can be derived as special cases of the proposed scheme. A few such examples are the physically degraded and reversely degraded relay channels, the Gaussian multiple-access channel, and the Gaussian broadcast channel. The proposed scheme also leads to inner bounds for the multicast and allcast capacities. Applying the proposed scheme to a specific wireless network of n nodes located in a region of unit area, we show that a transport capacity of Θ(n) bit-meters per second (bit-meters/s) is feasible in a certain family of networks, as compared to the best possible transport capacity of Θ(√n) bit-meters/s in Gupta et al. (2000), where the receiver capabilities were limited. Even though the improvement is shown for a specific class of networks, a clear implication is that designing and employing more sophisticated multiuser coding schemes can provide sizable gains in at least some large wireless networks.


Operations Research | 1988

Optimality of zero-inventory policies for unreliable manufacturing systems

Tomasz R. Bielecki; P. R. Kumar

We show that there are ranges of parameter values describing an unreliable manufacturing system for which zero-inventory policies are exactly optimal even when there is uncertainty in manufacturing capacity. This result may be initially surprising since it runs counter to the argument that inventories are buffers against uncertainty and therefore one must strive to maintain a strictly positive inventory as long as there is any uncertainty. However, there is a deeper reason why this argument does not hold, and why a zero-inventory policy can be optimal even in the presence of uncertainty. This provable optimality reinforces the case for zero-inventory policies, which is currently made on the separate grounds that they enforce a healthy discipline on the entire manufacturing process.


IEEE Transactions on Automatic Control | 1991

Distributed scheduling based on due dates and buffer priorities

Steve C. H. Lu; P. R. Kumar

Several distributed scheduling policies are analyzed for a large semiconductor manufacturing facility, where jobs of wafers, each with a desired due date, follow essentially the same route through the manufacturing system, returning several times to many of the service centers for the processing of successive layers. It is shown that for a single nonacyclic flow line the first-buffer-first-serve policy, which assigns priorities to buffers in the order that they are visited, is stable whenever the arrival rate, allowing for some burstiness, is less than the system capacity. The last-buffer-first-serve policy (LBFS), where the priority ordering is reversed, is also stable. The earliest-due-date policy, where priority is based on the due date of a part, as well as another due-date-based policy of interest called the least slack policy (LS), where priority is based on the slack of a part, defined as the due date minus an estimate of the remaining delay, are also proved to be stable.
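The last-buffer-first-serve rule is simple to state in code: at a station serving several buffers of a reentrant line, serve the nonempty buffer with the highest visit index. The sketch below is an illustration only; the buffer names and contents are made up.

```python
# Last-buffer-first-serve (LBFS), per the abstract: in a reentrant line,
# buffers visited later along the route get higher priority, so a station
# picks the nonempty buffer with the highest visit index.
def lbfs_pick(buffers):
    """buffers: list of (visit_index, queue) pairs at one station."""
    nonempty = [(idx, q) for idx, q in buffers if q]
    if not nonempty:
        return None                      # station idles
    idx, q = max(nonempty, key=lambda pair: pair[0])
    return q[0]                          # serve the head-of-line job there

station = [
    (1, ["lot-A", "lot-B"]),             # first visit to this station
    (4, ["lot-C"]),                      # fourth visit (a later layer)
    (7, []),                             # seventh visit, currently empty
]
print(lbfs_pick(station))                # lot-C: deepest nonempty buffer wins
```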


IEEE Transactions on Semiconductor Manufacturing | 1994

Efficient scheduling policies to reduce mean and variance of cycle-time in semiconductor manufacturing plants

Steve C. H. Lu; Deepa Ramaswamy; P. R. Kumar

The problem of reducing the mean and variance of cycle time in semiconductor manufacturing plants is addressed. Such plants feature a characteristic reentrant process flow, where lots repeatedly return at different stages of their production to the same service stations for further processing, consequently creating much competition for machines. We introduce a new class of scheduling policies, called fluctuation smoothing policies. Our policies uniformly achieved the best mean cycle time and standard deviation of cycle time in all the configurations of plant models and release policies tested. As an example, under the recommended workload regulation release policy, for a heavily loaded research and development fabrication line model, our fluctuation smoothing policies achieved a reduction of 22.4% in mean queueing time and a reduction of 52.0% in the standard deviation of cycle time over the baseline FIFO policy. These conclusions are based on extensive simulations conducted on two models of semiconductor manufacturing plants. The first is a model of a research and development fabrication line. The second is an aggregate model intended to approximate a full-scale production line. Statistical tests are used to corroborate our conclusions.
