Abhay Parekh
University of California, Berkeley
Publication
Featured research published by Abhay Parekh.
IEEE Journal on Selected Areas in Communications | 2007
Raul Hernan Etkin; Abhay Parekh; David Tse
We study a spectrum sharing problem in an unlicensed band where multiple systems coexist and interfere with each other. Due to asymmetries and selfish system behavior, unfair and inefficient situations may arise. We investigate whether efficiency and fairness can be obtained with self-enforcing spectrum sharing rules. These rules have the advantage of not requiring a central authority that verifies compliance with the protocol. Any self-enforcing protocol must correspond to an equilibrium of a game. We first analyze the possible outcomes of a one-shot game, and observe that in many cases an inefficient solution results. However, systems often coexist for long periods, and a repeated game is more appropriate to model their interaction. In this repeated game the possibility of building reputations and applying punishments allows for a larger set of self-enforcing outcomes. When this set includes the optimal operating point, efficient, fair, and incentive compatible spectrum sharing becomes possible. We present examples that illustrate that in many cases the performance loss due to selfish behavior is small. We also prove that our results are tight and quantify the best achievable performance in a non-cooperative scenario.
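The repeated-game mechanism in this abstract can be sketched with a toy two-system model. The payoff numbers, strategy names, and function signatures below are our illustration, not values from the paper: "cooperate" means respecting a fair band split, "defect" means spreading power over the whole band, and a grim-trigger strategy sustains cooperation when systems are patient enough.

```python
# Hypothetical payoffs (rates) for a two-system spectrum sharing game.
# One-shot Nash: both defect (spread over the full band), which is
# inefficient -- exactly the observation motivating the repeated game.
COOPERATE, DEFECT = "C", "D"
PAYOFF = {  # (my move, their move) -> my achieved rate
    ("C", "C"): 3.0,  # fair split of the band
    ("C", "D"): 1.0,  # I restrict power while they spread
    ("D", "C"): 4.0,  # I spread into their restricted band
    ("D", "D"): 2.0,  # both spread: interference-limited
}

def grim_trigger(opponent_history):
    """Cooperate until the opponent ever defects, then punish forever."""
    return DEFECT if DEFECT in opponent_history else COOPERATE

def discounted_value(my_moves, their_moves, delta):
    """Discounted payoff of a play path with discount factor delta."""
    return sum((delta ** t) * PAYOFF[(m, o)]
               for t, (m, o) in enumerate(zip(my_moves, their_moves)))

# Deviating gains 4 - 3 once but costs 3 - 2 in every punished round,
# so cooperation is self-enforcing when delta >= (4 - 3) / (4 - 2) = 0.5.
```

With a discount factor of 0.9, the cooperative path strictly dominates a unilateral deviation followed by grim-trigger punishment, illustrating how a larger set of outcomes becomes self-enforcing in the repeated game.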
IEEE Transactions on Information Theory | 2010
Guy Bresler; Abhay Parekh; David Tse
Recently, Etkin, Tse, and Wang found the capacity region of the two-user Gaussian interference channel to within 1 bit/s/Hz. A natural goal is to apply this approach to the Gaussian interference channel with an arbitrary number of users. We make progress towards this goal by finding the capacity region of the many-to-one and one-to-many Gaussian interference channels to within a constant number of bits. The result makes use of a deterministic model to provide insight into the Gaussian channel. The deterministic model makes explicit the dimension of signal level. A central theme emerges: the use of lattice codes for alignment of interfering signals on the signal level.
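The deterministic model mentioned in the abstract can be sketched concretely. In such models a user's transmitted signal is a binary vector of bit levels, attenuation shifts the vector down, and signals that collide at the same level at the receiver add modulo 2 (no carries). The function below is our minimal illustration of this idea; the parameter names and vector convention (index 0 is the most significant level) are our own.

```python
import numpy as np

def deterministic_output(signals, gains, q):
    """Sketch of a deterministic many-to-one channel with q bit levels.
    User i's top gains[i] bits survive attenuation and are shifted down
    by q - gains[i] levels; overlapping levels add modulo 2."""
    y = np.zeros(q, dtype=int)
    for x, n in zip(signals, gains):
        shifted = np.concatenate([np.zeros(q - n, dtype=int),
                                  np.asarray(x[:n])])
        y = (y + shifted) % 2
    return y
```

Because interference is just modulo-2 addition on explicit signal levels, one can see directly how lattice (here: linear) codes let several interferers align onto the same levels and occupy fewer dimensions at the receiver.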
international conference on computer communications | 1992
Abhay Parekh; Robert G. Gallager
Worst-case bounds on delay and backlog are derived for leaky bucket constrained sessions in arbitrary topology networks of generalized processor sharing servers. When only a subset of the sessions are leaky bucket constrained, succinct per-session bounds that are independent of the behavior of the other sessions and also of the network topology are given. However, these bounds are only shown to hold for each session that is guaranteed a backlog clearing rate that exceeds the token arrival rate of its leaky bucket. When all of the sessions are leaky bucket constrained, a much larger class of networks, called consistent relative session treatment networks, is analyzed. The session i route is treated as a whole, yielding tighter bounds than those that result from adding the worst-case delays (backlogs) at each of the servers in the route. The bounds on delay and backlog for each session are computed and shown to be achieved by staggered regimes when an independent sessions relaxation holds. Propagation delay is also incorporated into the model.
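For intuition, the simplest per-session bounds of this kind (in the single-server case) have closed forms: a (sigma, rho) leaky-bucket session guaranteed a backlog-clearing rate g >= rho never sees more than sigma of backlog, and never waits longer than sigma / g. The helper below is our sketch of those closed forms, not code from the paper, and it deliberately refuses the g < rho case, where, as the abstract notes, the simple bounds are not shown to hold.

```python
def gps_session_bounds(sigma, rho, g):
    """Worst-case (delay, backlog) for a (sigma, rho) leaky-bucket
    constrained session guaranteed GPS service rate g at one server.
    sigma: token bucket depth (burst), rho: token arrival rate,
    g: guaranteed backlog-clearing rate. Requires g >= rho."""
    if g < rho:
        raise ValueError("closed-form bound requires g >= rho")
    # The worst case is a maximal burst of sigma arriving at once:
    # it is cleared at rate g, so the last bit waits sigma / g.
    return sigma / g, sigma
```

For example, a session with a 1000-bit burst allowance served at a guaranteed 100 bit/s sees at most 10 s of delay regardless of what other sessions do, which is the sense in which the per-session bounds are independent of other traffic.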
Information Processing Letters | 1991
Abhay Parekh
We analyze a simple greedy algorithm for finding small dominating sets in undirected graphs of N nodes and M edges. We show that dg ≤ N + 1 − √(2M + 1), where dg is the cardinality of the dominating set returned by the algorithm.
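The greedy algorithm analyzed here is the natural one: repeatedly add the node that dominates the largest number of still-uncovered nodes. A minimal sketch, with the paper's bound written as a helper for comparison (the adjacency-dict representation is our choice):

```python
import math

def greedy_dominating_set(adj):
    """Greedy dominating set. adj maps each node to its set of
    neighbours in an undirected graph. Repeatedly pick the node that
    dominates (covers) the most uncovered nodes."""
    uncovered = set(adj)
    dom = set()
    while uncovered:
        v = max(adj, key=lambda u: len((adj[u] | {u}) & uncovered))
        dom.add(v)
        uncovered -= adj[v] | {v}
    return dom

def size_bound(n, m):
    """The paper's bound on the greedy output: dg <= N + 1 - sqrt(2M + 1)."""
    return n + 1 - math.sqrt(2 * m + 1)
```

On a star with one center and four leaves (N = 5, M = 4) greedy returns just the center, well within the bound of 5 + 1 − 3 = 3.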
international conference on computer communications | 1998
Israel Cidon; Amit Gupta; Tony Hsiao; Asad Khamisy; Abhay Parekh; Raphael Rom; Moshe Sidi
ATM networks are moving to a state where large production networks are deployed and require a universal, open and efficient ATM network control platform (NCP). The emerging PNNI (Private Network to Network Interface) standard introduces an internetworking architecture which can also be used as an intranetwork interface. However, PNNI falls short in the latter role due to performance limitations, limited functionality and the lack of open interfaces for functional extensions. OPENET is an open high-performance NCP based on performance and functional enhancements to PNNI. It addresses the issues of scalability, high performance and functionality. OPENET focuses on intranetworking and is fully compatible with PNNI in the internetwork environment. The major novelty of the OPENET architecture compared to PNNI is its focus on network control performance. A particular emphasis is given to increasing the overall rate of connection handling, reducing call establishment latency and efficiently utilizing network resources. These performance enhancements are achieved by the use of a native ATM distribution tree for utilization updates, lightweight signalling and extensive use of caching and pre-calculation of routes. OPENET also extends PNNI functionality. It utilizes a new signalling paradigm that better supports fast reservation and multicast services, and a control communication infrastructure which enables the development of augmented services such as directory, hand-off, billing and security. OPENET was implemented by the High-Speed Networking group at Sun Labs and is undergoing operational testing.
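The route caching and pre-calculation mentioned above can be sketched generically: routes are computed once (here with Dijkstra over link costs), served from a cache on subsequent connection requests, and invalidated when utilization updates arrive. The class below is our illustration of that pattern, not OPENET code; all names and structure are ours.

```python
import heapq

class RouteCache:
    """Illustrative sketch of cached, pre-calculated routes to speed
    up connection handling. graph maps node -> {neighbour: link cost}."""

    def __init__(self, graph):
        self.graph = graph
        self.cache = {}

    def invalidate(self):
        """Called e.g. when a utilization update changes link costs."""
        self.cache.clear()

    def route(self, src, dst):
        key = (src, dst)
        if key not in self.cache:          # cache miss: compute once
            self.cache[key] = self._dijkstra(src, dst)
        return self.cache[key]

    def _dijkstra(self, src, dst):
        dist, prev = {src: 0}, {}
        pq = [(0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue                   # stale queue entry
            for v, w in self.graph[u].items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        path, node = [dst], dst
        while node != src:
            node = prev[node]
            path.append(node)
        return path[::-1]
```

Serving repeated connection requests from the cache removes the path computation from the call-setup critical path, which is one way a control platform can raise the connection-handling rate and cut establishment latency.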
allerton conference on communication, control, and computing | 2012
Kangwook Lee; Hao Zhang; Ziyu Shao; Minghua Chen; Abhay Parekh; Kannan Ramchandran
We present a general framework for distributed VoD content distribution, formulating an optimization problem whose solution yields a highly distributed implementation that is scalable and resilient to changes in demand. Our solution takes into account several individual node resource constraints including disk space, network link bandwidth, and node-I/O degree bound. First, we present a natural formulation that is NP-hard. Next, we design a simple fractional storage architecture based on codes to "fluidify" the content, thereby yielding a convex content placement problem. Third, we use a recently developed Markov approximation technique to solve the NP-hard problem of topology selection under node degree bound, and propose a simple distributed solution. We prove analytically that our algorithm achieves close-to-optimal performance. We establish via simulations that the system is robust to changes in user demand, to changes in network conditions, and to churn.
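The "fluidify" step can be illustrated in miniature. With coded (e.g. rateless or MDS-style) storage, any combination of distinct coded fractions summing to at least the demand reconstructs the stream, so placement variables become continuous and the feasibility constraint becomes a simple linear inequality. The two helpers below are our toy illustration of that consequence, not the paper's formulation:

```python
def feasible(fractions, demand=1.0):
    """With coded storage, a user can stream by collecting *any*
    distinct coded fractions x_v from helpers v with sum(x_v) >= demand.
    Without coding, specific (integral) chunks would be required."""
    return sum(fractions) >= demand

def server_load(fractions, demand=1.0):
    """The central server supplies whatever the helper caches cannot,
    so minimizing server load means maximizing the coded coverage."""
    return max(0.0, demand - sum(fractions))
```

The key design point is that coding removes the combinatorics of which chunk sits where: only the total stored fraction per node matters, which is exactly what makes the content placement problem convex.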
integrated network management | 2011
David Hausheer; Abhay Parekh; Jean Walrand; Galina Schwartz
Networking researchers complain that the current Internet is ossified, i.e. that it can hardly be changed. We believe that one of the fundamental reasons for this is the lack of appropriate incentives for providers to invest in new technology, especially in the absence of a compelling new architecture and a killer application that would benefit from an alternative architecture. There is a chicken-and-egg problem: in order to come up with exciting new applications, there needs to be an infrastructure supporting them. Researchers have proposed to build network testbeds (e.g. GENI/FIRE) to test new network architectures and protocols at larger scale. However, these testbeds appear to have little attraction for users, in particular for commercially oriented application developers. OpenFlow is an alternative approach enabling experimental protocols in production networks; however, one of its limitations is that it does not address provider incentives. In this position paper, we therefore sketch the characteristics that we think a new Internet platform should have in order to be compelling. We argue for a platform that offers rich programmability at low performance cost and that separates traffic to enhance security and limit interference among applications. Moreover, the platform should be open and accessible to a wide community of users, and highly usable in the sense of being easily programmable by application developers. Finally, we believe the new platform should provide support for running sophisticated applications across multiple provider domains.
modeling, analysis, and simulation on computer and telecommunication systems | 2013
Kangwook Lee; Lisa Yan; Abhay Parekh; Kannan Ramchandran
We propose, analyze and implement a general architecture for massively parallel VoD content distribution. We allow for devices that have a wide range of reliability, storage and bandwidth constraints. Each device can act as a cache for other devices and can also communicate with a central server. Some devices may be dedicated caches with no co-located users. Our goal is to allow each user device to be able to stream any movie from a large catalog, while minimizing the load of the central server. First, we architect and formulate a static optimization problem that accounts for various network bandwidth and storage capacity constraints, as well as the maximum number of network connections for each device. Not surprisingly, this formulation is NP-hard. We then use a Markov approximation technique in a primal-dual framework to devise a highly distributed algorithm which is provably close to optimal. Next we test the practical effectiveness of the distributed algorithm in several ways. We demonstrate remarkable robustness to system scale and to changes in demand, user churn, and network and node failures via a packet-level simulation of the system. Finally, we describe our results from numerous experiments on a full implementation of the system with 60 caches and 120 users on 20 Amazon EC2 instances. In addition to corroborating our analytical and simulation-based findings, the implementation allows us to examine various system-level tradeoffs. Examples of this include: (i) the split between server-to-cache and cache-to-device traffic, (ii) the tradeoff between cache update intervals and the time taken for the system to adjust to changes in demand, and (iii) the tradeoff between the rate of virtual topology updates and convergence. These insights give us the confidence to claim that a much larger system on the scale of hundreds of thousands of highly heterogeneous nodes would perform as well as our current implementation.
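The Markov approximation technique used for topology selection can be sketched in one step: from the current configuration, the chain jumps to a neighbouring configuration with probability proportional to the exponential of its utility, so that for a large temperature parameter the chain concentrates near the optimal configuration while remaining implementable with only local randomized decisions. The function below is our generic illustration of that sampling step, not the paper's algorithm; all names and parameters are ours.

```python
import math
import random

def markov_approx_step(state, neighbors, utility, beta, rng):
    """One step of a Markov-approximation chain (illustrative sketch):
    jump from `state` to a neighbouring configuration f with probability
    proportional to exp(beta * utility(f)). Larger beta concentrates the
    stationary distribution near high-utility configurations."""
    candidates = neighbors(state)
    weights = [math.exp(beta * utility(f)) for f in candidates]
    r = rng.random() * sum(weights)
    for f, w in zip(candidates, weights):
        r -= w
        if r <= 0:
            return f
    return candidates[-1]  # guard against floating-point leftovers
```

In the VoD setting the "configurations" would be degree-bounded virtual topologies and the utility would come from the primal-dual objective; here a toy utility suffices to show the chain spending almost all its time at the optimum.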
IEEE/ACM Transactions on Networking | 1993
Abhay Parekh; Robert G. Gallager
IEEE/ACM Transactions on Networking | 1992
Abhay Parekh