Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where T. V. Lakshman is active.

Publication


Featured research published by T. V. Lakshman.


ACM Special Interest Group on Data Communication (SIGCOMM) | 1998

High-speed policy-based packet forwarding using efficient multi-dimensional range matching

T. V. Lakshman; Dimitrios Stiliadis

The ability to provide differentiated services to users with widely varying requirements is becoming increasingly important, and Internet Service Providers would like to provide these differentiated services using the same shared network infrastructure. The key mechanism that enables differentiation in a connectionless network is the packet classification function, which parses the headers of the packets and, after determining their context, classifies them based on administrative policies or real-time reservation decisions. Packet classification, however, is a complex operation that can become the bottleneck in routers that try to support gigabit link capacities. Hence, many proposals for differentiated services only require classification at lower-speed edge routers and also avoid classification based on multiple fields in the packet header even if it might be advantageous to service providers. In this paper, we present new packet classification schemes that, with a worst-case and traffic-independent performance metric, can classify packets by checking amongst a few thousand filtering rules at rates of a million packets per second, using range matches on more than 4 packet header fields. For a special case of classification in two dimensions, we present an algorithm that can handle more than 128K rules at these speeds in a traffic-independent manner. We emphasize worst-case performance over average-case performance because providing differentiated services requires intelligent queueing and scheduling of packets that precludes any significant queueing before the differentiating step (i.e., before packet classification). The presented filtering or classification schemes can be used to classify packets for security policy enforcement, applying resource management decisions, flow identification for RSVP reservations, multicast look-ups, and source-destination and policy-based routing. The scalability and performance of the algorithms have been demonstrated by implementation and testing in a prototype system.
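To make the classification function concrete, here is a minimal linear-scan multi-field range matcher. This is an illustrative baseline only, not the paper's fast range-matching schemes (which avoid exactly this linear scan); the field names and rules are hypothetical.

```python
# Naive multi-field range classifier: check each rule's per-field ranges in
# priority order and return the first match. Illustrative baseline only.
def classify(packet, rules):
    """Return the action of the first rule whose ranges all contain the
    corresponding packet header fields, or 'default' if none match."""
    for ranges, action in rules:
        if all(lo <= packet[field] <= hi for field, (lo, hi) in ranges.items()):
            return action
    return "default"

# Two-dimensional example: filter on (source port, destination port).
rules = [
    ({"src_port": (0, 1023), "dst_port": (80, 80)}, "web-priority"),
    ({"src_port": (0, 65535), "dst_port": (0, 65535)}, "best-effort"),
]
print(classify({"src_port": 443, "dst_port": 80}, rules))    # web-priority
print(classify({"src_port": 5000, "dst_port": 8080}, rules))  # best-effort
```

With thousands of rules and many fields, this per-packet linear scan is the bottleneck the paper's schemes are designed to remove.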


IEEE Transactions on Circuits and Systems for Video Technology | 1992

Statistical analysis and simulation study of video teleconference traffic in ATM networks

Daniel P. Heyman; Ali Tabatabai; T. V. Lakshman

Source modeling and performance issues are studied using a long (30 min) sequence of real video teleconference data. It is found that traffic periodicity can cause different sources with identical statistical characteristics to experience differing cell-loss rates. For a single-stage multiplexer model, some of this source-periodicity effect can be mitigated by appropriate buffer scheduling, and one effective scheduling policy is presented. For the sequence analyzed, the number of cells per frame follows a gamma (or negative binomial) distribution, and the number of cells per frame is a stationary stochastic process. For traffic studies, neither an autoregressive model of order two nor a two-state Markov chain model is adequate, because neither correctly models the occurrence of frames with a large number of cells, which are a primary factor in determining cell-loss rates. The order-two autoregressive model, however, fits the data well in a statistical sense. A multistate Markov chain model that can be derived from three traffic parameters is sufficiently accurate for use in traffic studies.
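As a sketch of the gamma-fitting step, the two gamma parameters can be obtained from the sample mean and variance by the method of moments (shape = mean²/variance, scale = variance/mean). The cells-per-frame numbers below are made up for illustration.

```python
# Method-of-moments fit of a gamma distribution to cells-per-frame counts.
# Sketch only; the sample data are invented, not from the paper's trace.
from statistics import mean, pvariance

def fit_gamma(samples):
    """Return (shape, scale) so that shape*scale equals the sample mean and
    shape*scale**2 equals the sample variance."""
    m = mean(samples)
    v = pvariance(samples)
    return m * m / v, v / m

cells_per_frame = [130, 142, 128, 155, 137, 149, 133, 146]
shape, scale = fit_gamma(cells_per_frame)
print(shape * scale)  # reproduces the sample mean, 140
```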


ACM Special Interest Group on Data Communication (SIGCOMM) | 2013

Towards an elastic distributed SDN controller

Advait Dixit; Fang Hao; Sarit Mukherjee; T. V. Lakshman; Ramana Rao Kompella

Distributed controllers have been proposed for Software Defined Networking to address the scalability and reliability issues that a centralized controller suffers from. One key limitation of distributed controllers is that the mapping between a switch and a controller is statically configured, which may result in uneven load distribution among the controllers. To address this problem, we propose ElastiCon, an elastic distributed controller architecture in which the controller pool is dynamically grown or shrunk according to traffic conditions and the load is dynamically shifted across controllers. We propose a novel switch migration protocol for enabling such load shifting, which conforms to the OpenFlow standard. We also build a prototype to demonstrate the efficacy of our design.
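The load-shifting idea can be sketched with a simple greedy heuristic: repeatedly migrate a switch from the most loaded controller to the least loaded one while that narrows the load gap. This is an illustrative sketch under assumed inputs, not ElastiCon's actual migration protocol; the switch/controller names and loads are hypothetical.

```python
# Greedy switch-migration sketch: move the switch whose load is closest to
# half the hot/cold gap, until no single move can narrow the gap further.
def rebalance(assignment, loads, max_moves=100):
    """assignment: {switch: controller}; loads: {switch: request rate}."""
    def ctl_load(c):
        return sum(loads[s] for s, ctl in assignment.items() if ctl == c)

    controllers = sorted(set(assignment.values()))
    for _ in range(max_moves):
        hot = max(controllers, key=ctl_load)
        cold = min(controllers, key=ctl_load)
        gap = ctl_load(hot) - ctl_load(cold)
        # only a switch lighter than the gap can reduce the maximum load
        movable = [s for s, c in assignment.items()
                   if c == hot and loads[s] < gap]
        if not movable:
            break
        s = min(movable, key=lambda s: abs(loads[s] - gap / 2))
        assignment[s] = cold  # migrate switch s to the cold controller
    return assignment

assignment = {"sw1": "c1", "sw2": "c1", "sw3": "c1", "sw4": "c2"}
loads = {"sw1": 40, "sw2": 30, "sw3": 30, "sw4": 10}
rebalance(assignment, loads)
print(assignment)  # sw1 migrates to c2, balancing loads 60 vs 50
```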


IEEE Journal on Selected Areas in Communications | 1995

Fundamental bounds and approximations for ATM multiplexers with applications to video teleconferencing

Anwar Elwalid; Daniel P. Heyman; T. V. Lakshman; Debasis Mitra; Alan Weiss

The main contributions of this paper are two-fold. First, we prove fundamental, similarly behaving lower and upper bounds, and give an approximation based on the bounds, which is effective for analyzing ATM multiplexers, even when the traffic has many, possibly heterogeneous, sources and their models are of high dimension. Second, we apply our analytic approximation to statistical models of video teleconference traffic, obtain the multiplexing system's capacity as determined by the number of admissible sources for given cell-loss probability, buffer size, and trunk bandwidth, and, finally, compare with results from simulations, which are driven by actual data from coders. The results are surprisingly close. Our bounds are based on large deviations theory. The main assumption is that the sources are Markovian and time-reversible. Our approximation to the steady-state buffer distribution is called Chernoff-dominant eigenvalue, since one parameter is obtained from Chernoff's theorem and the other is the system's dominant eigenvalue. Fast, effective techniques are given for their computation. In our application we process the output of variable bit rate coders to obtain DAR(1) source models which, while of high dimension, require only knowledge of the mean, variance, and correlation. We require the cell-loss probability not to exceed 10^-6; trunk bandwidth ranges from 45 to 150 Mb/s, buffer sizes are such that maximum delays range from 1 to 60 ms, and the number of coder-sources ranges from 15 to 150. Even for the largest systems, the time for analysis is a fraction of a second, while each simulation takes many hours. Thus, the real-time administration of admission control based on our analytic techniques is feasible.
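A DAR(1) (discrete autoregressive, order 1) source is easy to simulate: the next value repeats the current one with probability rho (the lag-1 correlation) and is otherwise drawn afresh from the marginal distribution. The sketch below uses an invented discrete marginal; it illustrates the model class, not the paper's fitted coder traces.

```python
# DAR(1) sample-path sketch: X_{t+1} = X_t with probability rho, else a
# fresh draw from the marginal. Marginal values here are made up.
import random

def dar1_path(marginal, rho, n, seed=1):
    """marginal: list of (value, probability) pairs; rho: lag-1 correlation."""
    rng = random.Random(seed)
    values, probs = zip(*marginal)
    draw = lambda: rng.choices(values, probs)[0]
    x = draw()
    path = [x]
    for _ in range(n - 1):
        if rng.random() >= rho:   # with probability 1 - rho, resample
            x = draw()
        path.append(x)            # otherwise repeat the previous value
    return path

path = dar1_path([(100, 0.5), (140, 0.3), (200, 0.2)], rho=0.9, n=50)
print(len(path))  # 50
```

High rho produces long runs of the same rate, which is what makes the model capture bursty coder output with only mean, variance, and correlation as inputs.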


IEEE International Conference on Computer Communications (INFOCOM) | 2012

Network aware resource allocation in distributed clouds

Mansoor Alicherry; T. V. Lakshman

We consider resource allocation algorithms for distributed cloud systems, which deploy cloud-computing resources that are geographically distributed over a large number of locations in a wide-area network. This distribution of cloud-computing resources over many locations in the network may be done for several reasons, such as to locate resources closer to users, to reduce bandwidth costs, to increase availability, etc. To get the maximum benefit from a distributed cloud system, we need efficient algorithms for resource allocation which minimize communication costs and latency. In this paper, we develop efficient resource allocation algorithms for use in distributed clouds. Our contributions are as follows: Assuming that users specify their resource needs, such as the number of virtual machines needed for a large computational task, we develop an efficient 2-approximation algorithm for the optimal selection of data centers in the distributed cloud. Our objective is to minimize the maximum distance, or latency, between the selected data centers. Next, we consider use of a similar algorithm to select, within each data center, the racks and servers where the requested virtual machines for the task will be located. Since the network inside a data center is structured and typically a tree, we make use of this structure to develop an optimal algorithm for rack and server selection. Finally, we develop a heuristic for partitioning the requested resources for the task amongst the chosen data centers and racks. We use simulations to evaluate the performance of our algorithms over example distributed cloud systems and find that our algorithms provide significant gains over other simpler allocation algorithms.
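One standard 2-approximation of the flavor described is: for each candidate node v, take v's k nearest nodes and keep the candidate set with the smallest diameter; by the triangle inequality its diameter is at most twice optimal. This sketch illustrates that idea and may differ in details from the paper's exact algorithm; the latency matrix is invented.

```python
# 2-approximation sketch for choosing k data centers minimizing the
# maximum pairwise distance (diameter). Illustrative, made-up latencies.
def pick_datacenters(dist, k):
    """dist: symmetric matrix of inter-datacenter latencies."""
    n = len(dist)
    def diameter(nodes):
        return max(dist[a][b] for a in nodes for b in nodes)
    best = None
    for v in range(n):
        # candidate set: v plus its k-1 nearest neighbors
        nearest = sorted(range(n), key=lambda u: dist[v][u])[:k]
        if best is None or diameter(nearest) < diameter(best):
            best = nearest
    return sorted(best)

dist = [
    [0, 2, 9, 10],
    [2, 0, 8, 9],
    [9, 8, 0, 3],
    [10, 9, 3, 0],
]
print(pick_datacenters(dist, 2))  # [0, 1], the close pair
```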


IEEE International Conference on Computer Communications (INFOCOM) | 2013

Traffic engineering in software defined networks

Sugam Agarwal; Murali S. Kodialam; T. V. Lakshman

Software Defined Networking is a new networking paradigm that separates the network control plane from the packet forwarding plane and provides applications with an abstracted centralized view of the distributed network state. A logically centralized controller that has a global network view is responsible for all the control decisions, and it communicates with the network-wide distributed forwarding elements via standardized interfaces. Google recently announced [5] that it is using a Software Defined Network (SDN) to interconnect its data centers due to the ease, efficiency and flexibility of performing traffic engineering functions. It expects the SDN architecture to result in better network capacity utilization and improved delay and loss performance. The contribution of this paper is on the effective use of SDNs for traffic engineering, especially when SDNs are incrementally introduced into an existing network. In particular, we show how to leverage the centralized controller to get significant improvements in network utilization as well as to reduce packet losses and delays. We show that these improvements are possible even in cases where there is only a partial deployment of SDN capability in a network. We formulate the SDN controller's optimization problem for traffic engineering with partial deployment and develop fast Fully Polynomial Time Approximation Schemes (FPTAS) for solving these problems. We show, by both analysis and ns-2 simulations, the performance gains that are achievable using these algorithms even with an incrementally deployed SDN.
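A toy example of the gain from SDN-controlled traffic splitting: a demand that overloads its shortest path can be split across two paths, lowering the maximum link utilization. All numbers and link names are made up; the paper solves the general optimization with an FPTAS rather than by hand like this.

```python
# Compare max link utilization under shortest-path-only routing vs an
# SDN-controlled split of one demand across two paths. Toy topology.
def max_utilization(flows_on_paths, capacities):
    """flows_on_paths: {path: rate}; each path is a tuple of link names."""
    load = {link: 0.0 for link in capacities}
    for path, rate in flows_on_paths.items():
        for link in path:
            load[link] += rate
    return max(load[link] / capacities[link] for link in capacities)

capacities = {"A-B": 10, "A-C": 10, "C-B": 10}
demand = 12.0
# legacy shortest-path routing: everything on the direct link A-B
legacy = max_utilization({("A-B",): demand}, capacities)
# SDN split: half direct, half via node C
sdn = max_utilization({("A-B",): demand / 2,
                       ("A-C", "C-B"): demand / 2}, capacities)
print(legacy, sdn)  # 1.2 0.6 -- the split removes the overload
```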


IEEE/ACM Transactions on Networking | 1996

What are the implications of long-range dependence for VBR-video traffic engineering?

Daniel P. Heyman; T. V. Lakshman

The authors explore the influence of long-range dependence in broadband traffic engineering. The classification of stochastic processes {X_t} into those with short- or long-range dependence is based on the asymptotic properties of the variance of the sum S_m = X_1 + X_2 + ... + X_m. Suppose this process describes the number of packets (or ATM cells) that arrive at a buffer; X_t is the number that arrive in the t-th time slice (e.g., 10 ms). We use a generic buffer model to show how the distribution of S_m (for all values of m) determines the buffer occupancy. From this model we show that long-range dependence does not affect the buffer occupancy when the busy periods are not large. Numerical experiments show this property is present when data from four video conferences and two entertainment video sequences (which have long-range dependence) are used as the arrival process, even when the transmitting times are long enough to make the probability of buffer overflow 0.07. We generated sample paths from Markov chain models of the video traffic (these have short-range dependence). Various operating characteristics, computed over a wide range of loadings, closely agree when the data trace and the Markov chain paths are used to drive the model. From this, we conclude that long-range dependence is not a crucial property in determining the buffer behavior of variable bit rate (VBR)-video sources.
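The short- vs long-range dependence classification can be sketched with the classic variance-time test: for a short-range-dependent process, the variance of block means of size m decays like 1/m, whereas for a long-range-dependent one it decays more slowly. The sketch below checks the 1/m behavior on iid noise (a trivially short-range-dependent toy input, not the paper's video traces).

```python
# Variance-time sketch: variance of non-overlapping block means of size m.
# For short-range-dependent data, var(m=1) / var(m=100) should be near 100.
import random
from statistics import mean, pvariance

def block_mean_variance(xs, m):
    """Variance of the means of consecutive non-overlapping blocks of size m."""
    means = [mean(xs[i:i + m]) for i in range(0, len(xs) - m + 1, m)]
    return pvariance(means)

rng = random.Random(0)
xs = [rng.gauss(0, 1) for _ in range(20000)]  # iid, hence short-range dependent
ratio = block_mean_variance(xs, 1) / block_mean_variance(xs, 100)
print(ratio)  # close to 100; long-range-dependent data would give far less
```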


IEEE/ACM Transactions on Networking | 1996

Source models for VBR broadcast-video traffic

Daniel P. Heyman; T. V. Lakshman

Traffic from video services is expected to be a substantial portion of the traffic carried by emerging broadband integrated networks. For variable bit rate (VBR) coded video, statistical source models are needed to design networks that achieve acceptable picture quality at minimum cost and to design traffic shaping and control mechanisms. For video teleconference traffic, Heyman et al. (1992) showed that the traffic is characterized sufficiently accurately by a multistate Markov chain model that can be derived from three traffic parameters (mean, correlation, and variance). The present authors describe modeling results for sequences with frequent scene changes (the previously studied video teleconferences have very little scene variation), such as entertainment television, news, and sports broadcasts. The authors analyze 11 long sequences of broadcast video traffic data. Unlike the video teleconferences, the sequences studied differ in their distributions of cells per frame. The authors present source models applicable to the different sequences and evaluate their accuracy as predictors of cell losses in asynchronous transfer mode (ATM) networks. The modeling approach is the same for all of the sequences, but a single model based on a few physically meaningful parameters and applicable to all sequences does not seem to be possible.


IEEE International Conference on Computer Communications (INFOCOM) | 1997

Window-based error recovery and flow control with a slow acknowledgement channel: a study of TCP/IP performance

T. V. Lakshman; Upamanyu Madhow; Bernhard Suter

With the envisaged growth in Internet access services over networks with asymmetric links, such as asymmetric digital subscriber line (ADSL) and hybrid fiber coax (HFC), it becomes crucial to evaluate the performance of window-based protocols over systems in which the reverse link is considerably slower than the forward link. Even if the actual bandwidth asymmetry is moderate, high effective asymmetries can result because of bidirectional traffic. Our objective is to determine whether TCP/IP performs reasonably in a setting in which the reverse link is the primary bottleneck. Our main results are as follows. (1) For both the prevalent Tahoe version with Fast Retransmit and the Reno version of TCP, we determine the throughput as a function of buffering, round-trip times, and normalized asymmetry (taken to be the ratio of the transmission time of ACKs in the reverse path to that of data packets in the forward path). We identify three modes of operation, which are dependent on the forward buffer sizes and the normalized asymmetry. (2) Asymmetry increases TCP's already high sensitivity to random packet losses that might be caused by transient bursts in real-time traffic. Specifically, random loss leads to significant throughput deterioration when the product of the loss probability, the asymmetry, and the square of the bandwidth-delay product is large. (3) Congestion in the reverse path adds considerably to TCP's unfairness when multiple connections share the reverse link. Link bandwidth sharing is unfair even for connections with identical round-trip times, and hence the use of per-connection buffer allocation on the reverse path appears essential.
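The normalized asymmetry defined above is a simple ratio of transmission times, and a back-of-the-envelope computation shows when the reverse link becomes the bottleneck. The link speeds and packet sizes below are made-up examples, not figures from the paper.

```python
# Normalized asymmetry: ACK transmission time on the reverse link divided
# by data-packet transmission time on the forward link. When it exceeds
# the number of data packets acknowledged per ACK, ACKs cap throughput.
def normalized_asymmetry(fwd_bps, rev_bps, data_bytes, ack_bytes):
    data_time = data_bytes * 8 / fwd_bps  # forward transmission time (s)
    ack_time = ack_bytes * 8 / rev_bps    # reverse transmission time (s)
    return ack_time / data_time

# e.g. 10 Mb/s forward, 100 kb/s reverse, 1500 B data packets, 40 B ACKs
k = normalized_asymmetry(10e6, 100e3, 1500, 40)
print(k)  # ~2.67: with one ACK per two data packets, the reverse
          # link would be the bottleneck
```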


IEEE/ACM Transactions on Networking | 2003

Dynamic routing of restorable bandwidth-guaranteed tunnels using aggregated network resource usage information

Murali S. Kodialam; T. V. Lakshman

This paper presents new algorithms for dynamic routing of restorable bandwidth-guaranteed paths. We assume that connection requests arrive one-by-one and have to be routed with no a priori knowledge of future arrivals. In order to guarantee restorability, in addition to determining an active path to route each request, an alternate link- (node-) disjoint backup (restoration) path has to be determined for the request at the time of connection initiation. This joint on-line routing problem is becoming particularly important in optical networks and in multiprotocol label switching (MPLS)-based networks due to the trend in backbone networks toward dynamic provisioning of bandwidth-guaranteed or wavelength paths. A straightforward solution to the restoration problem is to find two disjoint paths. However, this results in excessive resource usage. Given a restoration objective, such as protection against single-link failures, backup path bandwidth usage can be reduced by judicious sharing of backup paths amongst certain active paths while still maintaining restorability. The best sharing performance is achieved if the routing of every path in progress in the network is known to the routing algorithm at the time of a new path setup. We give a new integer programming formulation for this problem. Complete path routing knowledge is a reasonable assumption for a centralized routing algorithm. However, it is often not desirable, particularly when distributed routing is preferred. We show that an aggregate information scenario, which uses only aggregated and not per-path information, provides sufficient information for a suitably developed algorithm to perform almost as well as in the complete information scenario. Disseminating this aggregate information is feasible using proposed traffic engineering extensions to routing protocols. We formulate the dynamic restorable bandwidth routing problem in this aggregate information scenario and develop efficient routing algorithms. We show that the performance of our aggregate information-based algorithm is close to the complete information bound.
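The saving from backup sharing under the single-link-failure model can be sketched directly: on each link, the backup reservation is the maximum, over possible failed links, of the backup load that failure would shift onto it, not the sum over all protected connections. The topology and demands below are invented for illustration.

```python
# Shared-backup reservation sketch (single-link-failure model): connections
# whose active paths are disjoint can never fail together, so their backup
# paths may share bandwidth. Toy link names and demands.
def backup_reservation(connections, link):
    """connections: list of (active_path, backup_path, bandwidth);
    paths are tuples of links. Return bandwidth to reserve on `link`."""
    need_per_failure = {}
    for active, backup, bw in connections:
        if link in backup:
            for failed in active:
                need_per_failure[failed] = need_per_failure.get(failed, 0) + bw
    return max(need_per_failure.values(), default=0)

# two connections with disjoint active paths share backup link b1
conns = [
    (("e1",), ("b1", "b2"), 5),  # active on e1, backup via b1-b2
    (("e2",), ("b1", "b3"), 5),  # active on e2, backup via b1-b3
]
print(backup_reservation(conns, "b1"))  # 5, not 10: e1 and e2 cannot
                                        # fail simultaneously in this model
```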

Collaboration


Dive into T. V. Lakshman's collaboration.

Top Co-Authors

James B. Orlin

Massachusetts Institute of Technology


Upamanyu Madhow

University of Illinois at Urbana–Champaign
