Venkatesh Tamarapalli
Indian Institute of Technology Guwahati
Publications
Featured research published by Venkatesh Tamarapalli.
international conference on communications | 2015
Parikshit Juluri; Venkatesh Tamarapalli; Deep Medhi
Dynamic Adaptive Streaming over HTTP (DASH) based streaming is steadily becoming the most popular online video streaming technique. DASH provides seamless playback by adapting the video quality to the network conditions during playback. A DASH server supports adaptive streaming by hosting multiple representations of the video, each divided into small segments of equal playback duration. At the client end, the video player uses an adaptive bitrate selection (ABR) algorithm to decide the bitrate for each segment depending on the current network conditions. Currently proposed ABR algorithms ignore the fact that segment sizes vary significantly for a given video bitrate. As a result, even though an ABR algorithm can measure the network bandwidth, it may fail to predict the time needed to download the next segment. In this paper, we propose a segment-aware rate adaptation (SARA) algorithm that considers the segment size variation, in addition to the estimated path bandwidth and the current buffer occupancy, to accurately predict the time required to download the next segment. We also developed an open-source Python-based emulated DASH video player, which we used to compare the performance of SARA against a basic ABR algorithm. Our results show that SARA delivers a significant gain in video quality over the basic algorithm, without noticeably increasing the bitrate switching rate.
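A minimal sketch of the segment-aware idea, assuming per-segment sizes are known in advance (the function names, safety factor, and buffer threshold are illustrative, not the paper's):

```python
def predict_download_time(segment_bytes, est_bandwidth_bps):
    """Predict download time from the actual size of the next segment,
    rather than assuming all segments at a bitrate are equally large."""
    return segment_bytes * 8 / est_bandwidth_bps

def select_bitrate(next_segment_sizes, est_bandwidth_bps, buffer_s, safety=0.9):
    """Pick the highest bitrate whose next segment is predicted to arrive
    before the buffer drains; next_segment_sizes maps bitrate -> bytes."""
    best = min(next_segment_sizes)  # lowest bitrate as a safe fallback
    for bitrate in sorted(next_segment_sizes):
        t = predict_download_time(next_segment_sizes[bitrate], est_bandwidth_bps)
        if t <= safety * buffer_s:
            best = bitrate
    return best
```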
IEEE Transactions on Intelligent Transportation Systems | 2015
Hari Prabhat Gupta; S. V. Rao; Venkatesh Tamarapalli
Coverage and connectivity are important metrics used to evaluate the quality of service of wireless sensor networks (WSNs) monitoring a field of interest (FoI). Most of the literature assumes that the sensors are deployed directly in the FoI. In this paper, we assume that the sensors are stochastically deployed outside the FoI. For such WSNs, we derive probabilistic expressions for k-coverage and connectivity using exact geometry. We validate our analysis and demonstrate its utility in estimating the minimum number of sensors required for a desired level of coverage and connectivity. We also demonstrate an on-campus traffic monitoring system that counts the number of vehicles, detects the direction of a vehicle, and identifies the vehicle type (two-wheeler or four-wheeler) using sensors deployed along both sides of the road.
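The paper's expressions come from exact geometry; purely as an illustration of the quantity being computed, a Monte Carlo estimate of k-coverage for sensors deployed uniformly in a band outside a rectangular FoI might look like this (all deployment parameters below are made up):

```python
import math
import random

def k_coverage_prob(n_sensors, k, r_sense, foi=(0.0, 0.0, 100.0, 100.0),
                    margin=20.0, trials=5000):
    """Estimate the probability that a random point of the FoI is covered
    by at least k sensors deployed uniformly outside (but near) the FoI."""
    x0, y0, x1, y1 = foi
    covered = 0
    for _ in range(trials):
        # stochastic deployment: uniform in the enclosing box, rejecting
        # positions that fall inside the FoI itself
        sensors = []
        while len(sensors) < n_sensors:
            sx = random.uniform(x0 - margin, x1 + margin)
            sy = random.uniform(y0 - margin, y1 + margin)
            if not (x0 <= sx <= x1 and y0 <= sy <= y1):
                sensors.append((sx, sy))
        px, py = random.uniform(x0, x1), random.uniform(y0, y1)
        hits = sum(math.hypot(px - sx, py - sy) <= r_sense
                   for sx, sy in sensors)
        covered += hits >= k
    return covered / trials
```

Increasing n_sensors until the estimate crosses a target probability gives a quick numerical check of the minimum sensor count that a closed-form analysis predicts.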
communication systems and networks | 2016
Hema Kumar Yarnagula; Shubham Luhadia; Soumak Datta; Venkatesh Tamarapalli
With the widespread use of dynamic adaptive streaming over HTTP (DASH) for online video streaming, ensuring the users' quality of experience (QoE) is important to both service and network providers to improve their revenue. DASH aims to adapt the bitrate to the available bandwidth while minimizing the number of playback interruptions. This is typically achieved with a rate adaptation algorithm that chooses an appropriate representation for the next video segment. Most algorithms use buffer occupancy, measured throughput, or a combination of the two to decide the best representation for the next segment. In this paper, we investigate the influence of rate adaptation algorithms on QoE metrics. We implement five different rate adaptation algorithms and experimentally evaluate them under varying bandwidth and network scenarios. We use objective metrics such as playback start time, average bitrate played, number of bitrate switching events, number of interruptions, and duration of interruptions to assess QoE. Our results demonstrate that algorithms that consider both throughput and buffer occupancy result in better QoE. Further, we observe that algorithms considering segment size eliminate interruptions while also improving the average bitrate played. Owing to the mutual dependency among QoE metrics, most algorithms do not necessarily improve overall QoE simply by selecting the best bitrate.
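The objective metrics above are straightforward to compute from a playback trace; a minimal sketch, assuming a hypothetical per-segment event record:

```python
from dataclasses import dataclass

@dataclass
class SegmentEvent:
    start: float    # wall-clock time (s) the segment started playing
    bitrate: int    # bitrate played, in bps
    stall: float    # seconds of interruption suffered before this segment

def qoe_metrics(events):
    """Summarize a playback trace into the objective QoE metrics above."""
    bitrates = [e.bitrate for e in events]
    switches = sum(1 for a, b in zip(bitrates, bitrates[1:]) if a != b)
    stalls = [e.stall for e in events if e.stall > 0]
    return {
        "start_time_s": events[0].start,
        "avg_bitrate_bps": sum(bitrates) / len(bitrates),
        "num_switches": switches,
        "num_interruptions": len(stalls),
        "interruption_s": sum(stalls),
    }
```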
international conference on communications | 2016
Rakesh Tripathi; S. Vignesh; Venkatesh Tamarapalli
Many popular e-commerce applications run on geo-distributed data centers that require high availability. Fault-tolerant distributed data centers are designed by provisioning spare compute capacity to absorb the load of a failed data center, apart from ensuring data durability. The main challenge during the planning phase is to provision spare capacity such that the total cost of ownership (TCO) is minimized. While the literature handles spare capacity provisioning by minimizing the number of servers, the variation in electricity cost and PUE across locations motivates minimizing the operating cost instead. We develop an MILP model for spare capacity provisioning in geo-distributed data centers with durability requirements, with the objective of minimizing the TCO. The model captures variation in demand, fluctuating electricity prices across locations, the cost of state replication, carbon tax in different countries, and delay constraints. Solving the model shows that the TCO is reduced by leveraging electricity price variation and demand multiplexing. The proposed model outperforms the CDN model by 50% and the minimum-server model by 34%. Results also demonstrate the effect of power usage effectiveness (PUE), latency, the number of data centers, and demand on the TCO.
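As a rough sketch of the optimization structure only (a toy instance with invented numbers, not the paper's formulation), a PuLP model that provisions servers so demand survives any single-site failure while minimizing server plus operating cost could look like:

```python
import pulp  # pip install pulp; ships with the CBC solver

sites = ["A", "B", "C"]
demand = 1000                             # total load, in server-equivalents
cap = {"A": 600, "B": 600, "C": 600}      # max servers per site
server_cost = 400                         # amortized capex per server
op_cost = {"A": 80, "B": 120, "C": 60}    # opex per server (price x PUE, tax)

prob = pulp.LpProblem("spare_capacity", pulp.LpMinimize)
n = {s: pulp.LpVariable(f"servers_{s}", 0, cap[s], cat="Integer")
     for s in sites}

# TCO: amortized server cost plus location-dependent operating cost
prob += pulp.lpSum((server_cost + op_cost[s]) * n[s] for s in sites)

# the surviving sites must carry the full demand after any one site fails
for failed in sites:
    prob += pulp.lpSum(n[s] for s in sites if s != failed) >= demand

prob.solve()
print({s: int(n[s].value()) for s in sites})
```

Cheap-electricity sites naturally absorb more of the spare capacity, which is the effect the abstract attributes to leveraging price variation.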
communication systems and networks | 2016
Rakesh Tripathi; S. Vignesh; Venkatesh Tamarapalli
Almost all modern online services run on geo-distributed data centers, and fault tolerance is one of the primary requirements that decides the revenue of the service provider. Recent experience has shown that the failure of a data center (at a site) is inevitable. To mask such a failure, spare compute capacity needs to be provisioned across the distributed data centers, which incurs additional cost. While the existing literature addresses the capacity provisioning problem only by minimizing the number of servers, we argue that the operating cost must be considered as well. Since the operating cost and client demand vary across both space and time, we consider cost-aware capacity provisioning to account for their impact on the operating cost of data centers. We propose an optimization framework that minimizes the total cost of ownership (TCO) of the cloud provider while designing fault-tolerant geo-distributed data centers. We model the variation in demand, the fluctuation of electricity prices and carbon tax across different countries, and delay constraints while computing the spare capacity. Solving the proposed optimization model with real-world data, we observe a saving in the TCO (which includes server and operating costs) of about 17% compared with a model that only minimizes the number of extra servers. Results also highlight the influence of power usage effectiveness (PUE), over-provisioning for fault tolerance, the choice of data center locations, and latency requirements on the TCO. In particular, minimizing the TCO is most beneficial when electricity prices vary significantly and the PUE is high, which appears to be the case for most cloud providers.
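A sketch of the kind of time- and location-varying operating-cost term such a model trades off (all parameter names and values are illustrative):

```python
def operating_cost(servers, hourly_price, pue, carbon_tax, kw_per_server=0.3):
    """Operating cost of a placement over a pricing horizon. servers, pue,
    and carbon_tax are dicts keyed by site; hourly_price maps each site to
    a list of hourly electricity prices ($/kWh)."""
    total = 0.0
    for site, n in servers.items():
        kwh_per_hour = n * kw_per_server * pue[site]  # IT load scaled by PUE
        for price in hourly_price[site]:
            total += kwh_per_hour * (price + carbon_tax[site])
    return total
```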
design of reliable communication networks | 2015
Parikshit Juluri; Venkatesh Tamarapalli; Deep Medhi
Dynamic Adaptive Streaming over HTTP (DASH) is slowly becoming the most popular online video streaming technology. DASH enables the video player to adapt the quality of the multimedia content being downloaded to match the varying network conditions. The key challenge in DASH is deciding the optimal video quality for the next segment under the current network conditions, so that the segment is downloaded before the player experiences buffer starvation. Several rate adaptation methodologies proposed so far rely on TCP throughput measurements and the current buffer occupancy. However, these techniques do not consider any information about the next segment to be downloaded; they assume that segment sizes are uniform and assign equal weight to all segments. In practice, due to the video encoding techniques employed, segments of equal playback duration can differ considerably in size. In this paper, we propose to list the individual segment characteristics in the Media Presentation Description (MPD) file during a preprocessing stage; this information is later used to estimate segment download times. We also propose a novel rate adaptation methodology that uses the individual segment sizes, in addition to the measured TCP throughput and the buffer occupancy estimate, to choose the best video rate for the next segments.
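The preprocessing step amounts to recording every segment's byte size alongside its representation; a hypothetical sketch (the real MPD is XML, a plain dict stands in for it here):

```python
import os

def annotate_mpd(mpd, segment_dir):
    """Attach the on-disk size of every segment file to its representation
    so the client can predict download times before fetching anything."""
    for rep in mpd["representations"]:
        rep["segment_sizes"] = [
            os.path.getsize(os.path.join(segment_dir, name))
            for name in rep["segments"]
        ]
    return mpd
```

The client-side estimate then becomes size * 8 / measured_throughput per candidate representation, instead of a single bitrate-based guess.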
IEEE Transactions on Network and Service Management | 2017
Rakesh Tripathi; S. Vignesh; Venkatesh Tamarapalli; Deep Medhi
Many critical e-commerce and financial services are deployed on geo-distributed data centers for scalability and availability. Recent market surveys show that the failure of a data center is inevitable, resulting in huge financial losses. Fault tolerance in distributed data centers is typically handled by provisioning spare capacity to mask the failure of a site. We argue that the operating cost and the data replication cost (for data availability) must be considered in spare capacity provisioning, along with minimizing the number of servers. Since the operating cost and client demand vary across space and time, we propose cost-aware capacity provisioning to minimize the total cost of ownership (TCO) of fault-tolerant data centers. We formulate the spare capacity provisioning problem in fault-tolerant distributed data centers as a mixed integer linear program (MILP), with the objective of minimizing the TCO. The model accounts for heterogeneous client demand, data replication strategies (single-site and multiple-site), variation in electricity price and carbon tax, and delay constraints while computing the spare capacity. Solving the MILP with real-world data, we observed a saving in the TCO of about 35% compared with a model that minimizes the total number of servers, and 43% compared with a model that minimizes the average response time. We demonstrate that our model is most beneficial when the cost of electricity, carbon tax, and bandwidth vary significantly across locations, which appears to be the case for most operators.
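The state-replication term can be pictured as a bandwidth cost over a replica placement; a minimal, made-up sketch:

```python
def replication_cost(state_gb, bw_price, placement):
    """Bandwidth cost of keeping each site's state durable at its replicas.
    placement maps site -> list of replica sites: one entry models
    single-site replication, several model multiple-site replication."""
    return sum(
        state_gb[src] * bw_price[src][dst]
        for src, replicas in placement.items()
        for dst in replicas
    )
```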
IEEE Communications Letters | 2017
Rakesh Tripathi; S. Vignesh; Venkatesh Tamarapalli
Integrating renewable energy and ensuring high availability are two major requirements for geo-distributed data centers. Availability is ensured by provisioning spare capacity across the data centers to mask data center failures (either partial or complete). We propose a mixed integer linear programming formulation for capacity planning that minimizes the total cost of ownership (TCO) of highly available, green, distributed data centers. We minimize the cost of power consumption and server deployment while targeting a minimum usage of green energy. Solving our model shows that capacity provisioning with green energy integration not only lowers the carbon footprint but also reduces the TCO. Results show that up to 40% green energy usage is feasible with only a marginal increase in the TCO compared with other cost-aware models.
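As an illustration of how a minimum green-energy target can enter such a formulation (a toy linear program with invented numbers, not the paper's model):

```python
import pulp

sites = ["A", "B"]
demand_kw = {"A": 500, "B": 500}
green_avail = {"A": 300, "B": 150}               # renewable supply (kW)
price = {"A": (0.05, 0.09), "B": (0.04, 0.12)}   # (green, brown) $/kWh
GREEN_TARGET = 0.4                               # >= 40% renewable energy

prob = pulp.LpProblem("green_dc", pulp.LpMinimize)
g = {s: pulp.LpVariable(f"green_{s}", 0, green_avail[s]) for s in sites}
b = {s: pulp.LpVariable(f"brown_{s}", 0) for s in sites}

prob += pulp.lpSum(price[s][0] * g[s] + price[s][1] * b[s] for s in sites)
for s in sites:
    prob += g[s] + b[s] >= demand_kw[s]          # meet each site's load
# at least GREEN_TARGET of all consumed energy must be renewable
prob += pulp.lpSum(g.values()) >= GREEN_TARGET * pulp.lpSum(
    g[s] + b[s] for s in sites)

prob.solve()
print({s: (g[s].value(), b[s].value()) for s in sites})
```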
Journal of Parallel and Distributed Computing | 2017
Rakesh Tripathi; S. Vignesh; Venkatesh Tamarapalli; Anthony T. Chronopoulos; Hajar Siar
In this paper, we propose an algorithm for load balancing in distributed data centers based on game theory. We model the load balancing problem as a non-cooperative game among the front-end proxy servers, and the operating cost of a data center as a weighted linear combination of the energy cost and the latency cost. We formulate a non-cooperative load balancing game with the objective of minimizing the operating cost and derive the structure of its Nash equilibrium. Based on this structure, we design a distributed load balancing algorithm. We compare the performance of the proposed algorithm with existing approaches. Numerical results demonstrate that the solution achieved by the proposed algorithm approximates the global optimum in terms of cost while also ensuring fairness among the users.
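The paper derives the equilibrium structure analytically; purely to illustrate the best-response dynamics such a game induces, here is a toy version where each proxy repeatedly re-picks the data center minimizing its own energy-plus-delay cost (the M/M/1 delay term and all weights are assumptions, not the paper's cost model):

```python
def best_response_lb(demands, energy, rtt, mu, w=1.0, iters=100):
    """Iterated best response: proxy i assigns its whole demand to the
    data center j minimizing its own cost, holding the others fixed.
    demands[i]: load of proxy i; energy[j]: energy price at DC j;
    rtt[i][j]: network latency; mu[j]: service capacity of DC j."""
    assign = [0] * len(demands)                  # start everyone on DC 0
    for _ in range(iters):
        changed = False
        for i, d in enumerate(demands):
            def cost(j):
                load = d + sum(demands[p] for p, a in enumerate(assign)
                               if a == j and p != i)
                if load >= mu[j]:
                    return float("inf")          # overloaded: reject
                delay = 1.0 / (mu[j] - load)     # M/M/1-style mean delay
                return d * (energy[j] + w * (rtt[i][j] + delay))
            best = min(range(len(mu)), key=cost)
            if best != assign[i]:
                assign[i], changed = best, True
        if not changed:
            break                                # a pure equilibrium point
    return assign
```

Whether such dynamics converge, and to what, is exactly the kind of question the Nash-equilibrium analysis in the paper settles.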
Twenty Second National Conference on Communication (NCC) | 2016
Shilpa Budhkar; Venkatesh Tamarapalli
The growing popularity of online video streaming services and the self-scalability of peer-to-peer (P2P) overlays have driven the development of P2P live streaming systems, in which peers connect with each other to retrieve video content. The store-and-forward strategy used to distribute the stream induces a forwarding delay at each hop from the source to a peer; designing a peer selection strategy that minimizes end-to-end delay is therefore an important research problem in P2P live streaming systems. In this paper, we develop a two-tier peer selection strategy that minimizes playback lag and startup delay. Peers are selected at both levels (tracker and peer) based on propagation delay, upload capacity, playback lag, and buffering level. The proposed strategy is compared with an existing system, Fast-Mesh, using simulations. The results show that playback lag is reduced by 20-25% and startup delay by 10-15%.
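A minimal sketch of a composite peer-scoring rule over the four criteria named above (weights and normalization constants are invented; the paper's actual ranking may differ):

```python
def score_peer(peer, w_delay=0.4, w_upload=0.3, w_lag=0.2, w_buf=0.1):
    """Composite score for a candidate parent peer; lower is better.
    peer is a dict with rtt_ms, upload_kbps, playback_lag_s, and
    buffer_level (fraction of the playout buffer filled, in [0, 1])."""
    return (w_delay * peer["rtt_ms"] / 500.0
            + w_lag * peer["playback_lag_s"] / 30.0
            - w_upload * min(peer["upload_kbps"] / 1000.0, 1.0)
            - w_buf * peer["buffer_level"])

def select_parents(candidates, k=4):
    """Tier 2: a joining peer keeps the k best-scoring candidates that
    the tracker (tier 1) returned after its own coarser filtering."""
    return sorted(candidates, key=score_peer)[:k]
```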