Sivasankar Radhakrishnan
University of California, San Diego
Publication
Featured research published by Sivasankar Radhakrishnan.
international symposium on microarchitecture | 2010
Amin Vahdat; Mohammad Al-Fares; Nathan Farrington; Radhika Niranjan Mysore; George Porter; Sivasankar Radhakrishnan
Scale-out architectures supporting flexible, incremental scalability are common for computing and storage. However, the network remains the last bastion of the traditional scale-up approach, making it the data center's weak link. Through the UCSD Triton network architecture, the authors explore issues in managing the network as a single plug-and-play virtualizable fabric scalable to hundreds of thousands of ports and petabits per second of aggregate bandwidth.
conference on emerging network experiment and technology | 2011
Sivasankar Radhakrishnan; Yuchung Cheng; Jerry Chu; Arvind Jain; Barath Raghavan
Today's web services are dominated by TCP flows so short that they terminate a few round trips after handshaking; this handshake is a significant source of latency for such flows. In this paper we describe the design, implementation, and deployment of the TCP Fast Open protocol, a new mechanism that enables data exchange during TCP's initial handshake. In doing so, TCP Fast Open decreases application network latency by one full round-trip time, decreasing the delay experienced by such short TCP transfers. We address the security issues inherent in allowing data exchange during the three-way handshake, which we mitigate using a security token that verifies IP address ownership. We detail other fall-back defense mechanisms and address issues we faced with middleboxes, backwards compatibility for existing network stacks, and incremental deployment. Based on traffic analysis and network emulation, we show that TCP Fast Open would decrease HTTP transaction network latency by 15% and whole-page load time by over 10% on average, and in some cases by up to 40%.
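The security-token idea in the abstract can be sketched as follows. This is a minimal illustration under assumed names (`make_cookie`, `cookie_is_valid` are hypothetical helpers); the deployed protocol, standardized as TCP Fast Open in RFC 7413, derives its cookie with a server-keyed block cipher over the client's IP rather than a truncated HMAC, but the validation logic is analogous:

```python
import hmac
import hashlib

def make_cookie(secret: bytes, client_ip: str) -> bytes:
    """Derive a short cookie bound to the client's IP from a server-held secret.
    The server hands this to the client on a first, ordinary connection."""
    return hmac.new(secret, client_ip.encode(), hashlib.sha256).digest()[:8]

def cookie_is_valid(secret: bytes, client_ip: str, cookie: bytes) -> bool:
    """Accept data in a SYN only if the presented cookie matches the source IP,
    using a constant-time comparison. A matching cookie shows the client could
    receive packets at that IP earlier, defeating simple IP spoofing."""
    return hmac.compare_digest(make_cookie(secret, client_ip), cookie)
```

A spoofed SYN carrying data from a forged source address fails this check and falls back to a regular three-way handshake.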
acm special interest group on data communication | 2012
Sivasankar Radhakrishnan; Rong Pan; Amin Vahdat; George Varghese
Application performance in cloud data centers often depends crucially on network bandwidth, not just the aggregate data transmitted as in typical SLAs. We describe a mechanism for data center networks called NetShare that requires no hardware changes to routers but allows bandwidth to be allocated predictably across services based on weights. The weights are either specified by a manager, or automatically assigned at each switch port based on a virtual machine heuristic for isolation. Bandwidth unused by a service is shared proportionately by other services, providing weighted hierarchical max-min fair sharing. On a testbed of Fulcrum switches, we demonstrate that NetShare provides bandwidth isolation in various settings, including multipath networks.
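The weighted sharing described above can be sketched as a water-filling computation on a single link. This is a toy model under assumed names, not NetShare's switch mechanism (which enforces weights at each switch port without router hardware changes): each service gets bandwidth in proportion to its weight, and bandwidth a service cannot use is redistributed among the remaining services by weight.

```python
def weighted_share(capacity, weights, demands):
    """Weighted max-min fair allocation of one link's capacity.
    weights/demands: dicts keyed by service name."""
    alloc = {s: 0.0 for s in weights}
    active = set(weights)
    remaining = capacity
    while active and remaining > 1e-9:
        total_w = sum(weights[s] for s in active)
        share = {s: remaining * weights[s] / total_w for s in active}
        # Services whose remaining demand fits in their share are satisfied;
        # their unused bandwidth is redistributed in the next round.
        satisfied = {s for s in active if demands[s] - alloc[s] <= share[s] + 1e-12}
        if not satisfied:
            for s in active:           # everyone is bottlenecked: split by weight
                alloc[s] += share[s]
            remaining = 0.0
        else:
            for s in satisfied:
                remaining -= demands[s] - alloc[s]
                alloc[s] = demands[s]
            active -= satisfied
    return alloc
```

For example, on a 10 Gb/s link with weights 1:3 and both services backlogged, the split is 2.5 and 7.5; if the first service only wants 2, the other receives the unused capacity.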
architectures for networking and communications systems | 2013
Sivasankar Radhakrishnan; Malveeka Tewari; Rishi Kapoor; George Porter; Amin Vahdat
Solving “Big Data” problems requires bridging massive quantities of compute, memory, and storage, which requires a very high bandwidth network. Recently proposed direct connect networks like HyperX [1] and Flattened Butterfly [20] offer large capacity through paths of varying lengths between servers, and are highly cost effective for common data center workloads. However, data center deployments are constrained to multi-rooted tree topologies like Fat-tree [2] and VL2 [16] due to shortest path routing and the limitations of commodity data center switch silicon. In this work we present Dahu, simple enhancements to commodity Ethernet switches to support direct connect networks in data centers. Dahu avoids congestion hot-spots by dynamically spreading traffic uniformly across links, and forwarding traffic over non-minimal paths where possible. By performing load balancing primarily using local information, Dahu can act more quickly than centralized approaches, and responds to failure gracefully. Our evaluation shows that Dahu delivers up to 500% improvement in throughput over ECMP in large scale HyperX networks with over 130,000 servers, and up to 50% higher throughput in an 8,192 server Fat-tree network.
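The local load-balancing idea can be sketched as a simple port-selection rule. This is a toy model with hypothetical names (`pick_egress`, the `load` map, and the threshold are illustrative); the actual Dahu mechanism runs in switch hardware using local port counters. The rule prefers the least-loaded shortest-path port, spilling over to longer non-minimal paths only when every minimal port is congested:

```python
def pick_egress(minimal, non_minimal, load, threshold):
    """Choose an egress port for a flow using only local load information.
    minimal/non_minimal: lists of port ids on shortest vs. longer paths.
    load: dict mapping port id -> current utilization in [0, 1]."""
    best_min = min(minimal, key=lambda p: load[p])
    if load[best_min] <= threshold:
        return best_min                      # shortest path is healthy: take it
    # All minimal ports congested: also consider non-minimal (longer) paths,
    # trading extra hops for spare capacity to avoid the local hot-spot.
    return min(minimal + non_minimal, key=lambda p: load[p])
```

Because the decision uses only state already present at the switch, it can react within a round-trip rather than waiting on a central controller.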
networked systems design and implementation | 2010
Mohammad Al-Fares; Sivasankar Radhakrishnan; Barath Raghavan; Nelson Huang; Amin Vahdat
acm special interest group on data communication | 2009
Radhika Niranjan Mysore; Andreas Pamboris; Nathan Farrington; Nelson Huang; Pardis Miri; Sivasankar Radhakrishnan; Vikram Subramanya; Amin Vahdat
acm special interest group on data communication | 2010
Nathan Farrington; George Porter; Sivasankar Radhakrishnan; Hamid Hajabdolali Bazzaz; Vikram Subramanya; Yeshaiahu Fainman; George Papen; Amin Vahdat
networked systems design and implementation | 2014
Sivasankar Radhakrishnan; Yilong Geng; Vimalkumar Jeyakumar; Abdul Kabbani; George Porter; Amin Vahdat
usenix conference on hot topics in cloud computing | 2013
Sivasankar Radhakrishnan; Vimalkumar Jeyakumar; Abdul Kabbani; George Porter; Amin Vahdat
RFC | 2014
Yuchung Cheng; Jerry Chu; Sivasankar Radhakrishnan; Arvind Jain