T. N. Vijaykumar
Purdue University
Publications
Featured research published by T. N. Vijaykumar.
Communication Systems and Networks | 2017
Yiyang Chang; Ashkan Rezaei; Balajee Vamanan; Jahangir Hasan; Sanjay G. Rao; T. N. Vijaykumar
The conventional approach to scaling Software-Defined Networking (SDN) controllers today is to partition switches based on network topology, with each partition being controlled by a single physical controller, running all SDN applications. However, topological partitioning is limited by the fact that (i) performance of latency-sensitive (e.g., monitoring) SDN applications associated with a given partition may be impacted by co-located compute-intensive (e.g., route computation) applications; (ii) simultaneously achieving low convergence time and response times might be challenging; and (iii) communication between instances of an application across partitions may increase latencies. To tackle these issues, in this paper, we explore functional slicing, a complementary approach to scaling, where multiple SDN applications belonging to the same topological partition may be placed in physically distinct servers. We present Hydra, a framework for distributed SDN controllers based on functional slicing. Hydra chooses partitions based on convergence time as the primary metric, but places application instances across partitions in a manner that keeps response times low while considering communication between applications of a partition, and instances of an application across partitions. Evaluations using the Floodlight controller show the importance and effectiveness of Hydra in simultaneously keeping convergence times on failures small, while sustaining higher throughput per partition and ensuring responsiveness to latency sensitive applications.
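The functional-slicing idea above — separating compute-intensive applications from latency-sensitive ones by spreading a partition's application instances across physically distinct servers — can be illustrated with a toy greedy placement. This is a sketch under simplified assumptions (compute cost alone drives placement); Hydra's actual placement also models communication between applications of a partition and between instances across partitions, and the names below are hypothetical.

```python
def place_apps(apps, num_servers):
    """Greedy illustration of functional slicing (not Hydra's algorithm):
    assign each SDN application of a partition to the currently
    least-loaded server, heaviest first, so a compute-intensive app
    (e.g. route computation) does not share a server with a
    latency-sensitive one (e.g. monitoring).

    apps: dict mapping app name -> relative compute cost.
    Returns: dict mapping server index -> list of app names.
    """
    load = [0.0] * num_servers
    placement = {i: [] for i in range(num_servers)}
    # place the heaviest apps first so they spread across servers
    for name, cost in sorted(apps.items(), key=lambda kv: -kv[1]):
        target = min(range(num_servers), key=lambda j: load[j])
        load[target] += cost
        placement[target].append(name)
    return placement
```

With two servers and costs {'route': 10, 'firewall': 4, 'monitor': 1}, route computation lands alone on one server while the lighter, latency-sensitive apps share the other.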
ACM Special Interest Group on Data Communication | 2018
Jaichen Xue; Muhammad Usama Chaudhry; Balajee Vamanan; T. N. Vijaykumar; Mithuna Thottethodi
Many modern, interactive datacenter applications have tight latency requirements due to stringent service-level agreements (e.g., under 200 ms for Web Search). TCP-based datacenter networks significantly lengthen the application latency. Remote Direct Memory Access (RDMA) substantially reduces latencies compared to TCP by bypassing the operating system via hardware support at the network interface (e.g., RDMA over InfiniBand and RDMA over Converged Ethernet (RoCE) can cut TCP’s latency by 10x [8]). As such, RDMA may soon replace TCP in datacenters. Employing RDMA in datacenters, however, poses a challenge. RDMA provides hop-by-hop flow control and rate-based end-to-end congestion control [4]. However, RDMA’s congestion control is suboptimal for the well-known datacenter congestion problem, called incast, where multiple flows collide at a switch causing queuing delays and long latency tails [1] despite good network design [7]. Though such congestion affects only a small fraction of the flows (e.g., 0.1%), datacenter applications’ unique characteristics imply that the average latency is worsened. For example, because Web Search aggregates replies from thousands of
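The incast tail effect described above follows from simple serialization arithmetic: when many responses converge on one switch port at once, the last response drains only after all the bytes ahead of it. The back-of-envelope model below is an illustration (not from the paper), with all parameter names and values hypothetical.

```python
def incast_queue_delay(num_flows, bytes_per_flow, link_bps, base_rtt_s):
    """Rough incast model: if num_flows responses arrive at a switch
    port simultaneously, the port must serialize all of them, so the
    tail response sees roughly total_bytes / link_bps of queuing delay
    on top of the base RTT. Returns the tail completion time in seconds.
    """
    total_bytes = num_flows * bytes_per_flow
    drain_s = total_bytes * 8 / link_bps   # serialization time on the link
    return base_rtt_s + drain_s
```

For example, 100 flows of 20 KB colliding on a 10 Gbps port add 1.6 ms of queuing to a 100 us base RTT — a 17x inflation for the tail flow, which is why even rare incast episodes dominate the latency distribution of aggregation-style applications like Web Search.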
ACM/IEEE International Conference on Mobile Computing and Networking | 2017
Ashiwan Sivakumar; Chuan Jiang; Yun Seong Nam; Shankaranarayanan Puzhavakath Narayanan; Vijay Gopalakrishnan; Sanjay G. Rao; Subhabrata Sen; Mithuna Thottethodi; T. N. Vijaykumar
Despite much recent progress, Web page latencies over cellular networks remain much higher than those over wired networks. Proxies that execute Web page JavaScript (JS) and push objects needed by the client can reduce latency. However, a key concern is the scalability of the proxy, which must execute JS for many concurrent users. In this paper, we propose to scale the proxies, focusing on a design where the proxy's execution serves solely to push the needed objects and the client completely executes the page as normal. Such redundant execution is a simple, yet effective approach to cutting network latencies, which dominate page load delays in cellular settings. We develop whittling, a technique to identify and execute in the proxy only the JS code necessary to identify and push the objects required for the client page load, while skipping other code. Whittling is closely related to program slicing, but with the important distinction that it is acceptable to approximate the program slice in the proxy given the client's complete execution. Experiments with top Alexa Web pages show that NutShell can sustain, on average, 27% more user requests per second than a proxy performing fully redundant execution, while preserving, and sometimes enhancing, the latency benefits.
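The program-slicing idea underlying whittling — keep only the statements that the object-fetching code transitively depends on, and skip the rest — can be sketched on a toy dependency graph. This is a minimal illustration, not the paper's system: whittling operates on real JavaScript and tolerates approximate slices, whereas the sketch below computes an exact backward slice over hypothetical statement IDs.

```python
from collections import deque

def backward_slice(deps, fetch_stmts):
    """Toy backward slice: retain only the statements that the given
    fetch statements transitively depend on.

    deps: dict mapping a statement id to the ids of statements it
          reads values from.
    fetch_stmts: ids of the statements that fetch/push objects.
    Returns: the set of statement ids to keep (execute in the proxy);
    everything else can be skipped.
    """
    keep = set()
    work = deque(fetch_stmts)
    while work:
        stmt = work.popleft()
        if stmt in keep:
            continue
        keep.add(stmt)
        work.extend(deps.get(stmt, ()))   # follow data dependences backward
    return keep
```

In a page where only one of five statements feeds the fetch, the proxy executes three statements instead of five — the source of the throughput gain, since skipped code is never run for any user.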
2017 International Conference on Cloud and Autonomic Computing (ICCAC) | 2017
Nitin; Mithuna Thottethodi; T. N. Vijaykumar; Milind Kulkarni
Recent proposals extend MapReduce, a widely-used Big Data processing framework, with sampling to improve performance by producing approximate results with statistical error bounds. However, because these systems perform global uniform sampling across the entire key space of input data, they may completely miss rare keys, which may be unacceptable in some applications. Well-known stratified sampling avoids missing rare keys by obtaining the same number of samples for each key, which also achieves good performance by sampling popular keys infrequently and rare keys more often. While online stratified sampling has been done in centralized settings, a straightforward extension to MapReduce's distributed setting cannot easily leverage the number of per-key samples seen globally by all the Mappers to reduce the sampling rate of each Mapper in the future. Because there are hundreds of Mappers in a typical MapReduce job, such feedback can drastically reduce oversampling and improve performance. We present MaDSOS (MapReduce with Distributed Stratified Online Sampling), which makes two contributions: (1) Instead of a fixed n samples per key and the resultant arbitrary sampling rates, we propose a telescoping algorithm that uses fixed sampling rates of the form 1/2^k and targets between n and 2n samples per key. (2) We propose a collaborative feedback scheme, enabled by the specific form of the sampling rates and the leniency in the sample counts, to efficiently cut the sampling rates, and thus oversampling, once the desired number of samples has been seen globally. For our MapReduce benchmarks, MaDSOS improves performance by 59% over Hadoop while guaranteeing never to miss rare keys, and achieves 2.5% per-key error compared to 100% worst-case error under global sampling at a fixed rate for all the keys.
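The telescoping scheme — sampling rates restricted to 1/2^k and a per-key target of between n and 2n samples — can be sketched for a single Mapper as follows. This is a hedged, single-node illustration (no distributed feedback, and all names are hypothetical): whenever a key's retained-sample count reaches 2n, the rate is halved and the retained set is thinned back to n.

```python
import random
from collections import defaultdict

def stratified_sample(records, n, seed=0):
    """Single-Mapper sketch of telescoping stratified sampling.

    Each key starts at rate 1/2^0 = 1, so a key's first record is
    always retained: rare keys are never missed. When a key's
    retained count hits 2n, its rate is halved (k += 1) and the
    retained set is thinned back to n by keeping every other sample,
    so each key ends with between n and 2n-1 samples.
    """
    rng = random.Random(seed)
    rate_exp = defaultdict(int)    # key -> k; current rate is 1/2^k
    samples = defaultdict(list)    # key -> retained samples
    for key, value in records:
        if rng.random() < 1.0 / (1 << rate_exp[key]):
            samples[key].append(value)
            if len(samples[key]) >= 2 * n:
                # telescope: halve the rate, thin retained set to n
                rate_exp[key] += 1
                samples[key] = samples[key][::2]
    return dict(samples)
```

A key with fewer than 2n records keeps every occurrence (rate stays 1), while a popular key settles at a rate just low enough to hold its count between n and 2n — the leniency that, in MaDSOS, lets globally-seen counts cut every Mapper's rate without tight coordination.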
Archive | 2012
Faraz Ahmad; Seyong Lee; Mithuna Thottethodi; T. N. Vijaykumar
Journal of Parallel and Distributed Computing | 2013
Faraz Ahmad; Seyong Lee; Mithuna Thottethodi; T. N. Vijaykumar
Microfluidics and Nanofluidics | 2013
Ahmed M. Amin; Raviraj Thakur; Seth M. Madren; Han Sheng Chuang; Mithuna Thottethodi; T. N. Vijaykumar; Steven T. Wereley; Stephen C. Jacobson
arXiv: Hardware Architecture | 2012
Hamza Bin Sohail; Balajee Vamanan; T. N. Vijaykumar
Archive | 2010
Ahmed M. Amin; Han Sheng Chuang; Steven T. Wereley; Mithuna Thottethodi; T. N. Vijaykumar; Stephen C. Jacobson
12th International Conference on Miniaturized Systems for Chemistry and Life Sciences, MicroTAS 2008 | 2008
Han Sheng Chuang; Ahmed M. Amin; Steve Wereley; Mithuna Thottethodi; T. N. Vijaykumar; Stephen C. Jacobson