Aniruddha Bohra
Princeton University
Publications
Featured research published by Aniruddha Bohra.
IEEE International Conference on Computer Communications | 2007
Vishnu Navda; Aniruddha Bohra; Samrat Ganguly; Dan Rubenstein
The 802.11a, b, and g standards were designed for deployment in cooperative environments and hence include no mechanisms to protect against jamming attacks. In this paper, we explore how to protect 802.11 networks from jamming attacks by having the legitimate transmission hop among channels to hide the transmission from the jammer. Using a combination of mathematical analysis and prototype experimentation in an 802.11a environment, we explore how much throughput can be maintained in comparison to the throughput achievable in a cooperative, jam-free environment. Our experimental and analytical results show that in today's conventional 802.11a networks, we can achieve up to 60% of the original throughput. Our mathematical analysis allows us to extrapolate the throughput that can be maintained when the constraint on the number of orthogonal channels used for both legitimate communication and jamming is relaxed.
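To make the channel-hopping tradeoff concrete, here is a toy Monte Carlo sketch (our own illustration, not the paper's model): a sender on one of `num_channels` orthogonal channels evades a memoryless scanning jammer, paying a re-synchronization cost on every hop. The parameters `hop_prob` and `switch_cost` are hypothetical, chosen only to show why retained throughput lands below the jam-free baseline.

```python
import random

def simulate(num_channels=12, slots=100_000, hop_prob=0.3, switch_cost=0.1):
    """Fraction of slot capacity retained by a channel-hopping sender
    facing a jammer that scans one channel per slot (1.0 = jam-free)."""
    sender = random.randrange(num_channels)
    jammer = random.randrange(num_channels)
    goodput = 0.0
    for _ in range(slots):
        if sender != jammer:
            goodput += 1.0                       # slot delivered
        # Hop away if jammed, or proactively with probability hop_prob;
        # each hop burns a fraction of a slot re-synchronizing.
        if sender == jammer or random.random() < hop_prob:
            sender = random.randrange(num_channels)
            goodput -= switch_cost
        jammer = random.randrange(num_channels)  # memoryless scanning jammer
    return max(goodput, 0.0) / slots

if __name__ == "__main__":
    print(f"retained throughput: {simulate():.0%}")
```

Shrinking `num_channels` or raising `switch_cost` in this toy model shows the same qualitative effect the paper analyzes: fewer orthogonal channels leave the hopper less room to hide, and the retained fraction falls.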
IEEE International Conference on Computer Communications | 2007
Ravi Kokku; Aniruddha Bohra; Samrat Ganguly; Arun Venkataramani
Background transfers, or transfers that humans do not actively wait on, dominate the Internet today. In today's best-effort Internet, background transfers can interfere with foreground transfers, causing long wait times and thereby hurting human productivity. In this paper, we present the design and implementation of a background network, Harp, that addresses this problem. Harp has three significant advantages over recent end-host-based background transport protocols: Harp (i) uses multiple paths to exploit path diversity and load imbalance in the Internet to tailor network resource allocation to human needs, (ii) provides better fairness and utilization compared to unipath end-host protocols, and (iii) can be deployed at either end-hosts or enterprise gateways, thereby aligning the incentive for deployment with the goals of network customers. Our evaluation using simulations and a prototype on PlanetLab suggests that Harp improves foreground TCP transfer time by a factor of five and background transfer time by a factor of two using just two extra paths per connection.
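As a rough illustration of the multipath idea (this is not Harp's actual allocation algorithm), the sketch below splits a background transfer across overlay paths in proportion to each path's estimated spare capacity, so background bytes soak up load imbalance instead of competing with foreground flows on any one path. All path names and numbers are hypothetical.

```python
def split_background_transfer(total_bytes, spare_capacity):
    """Divide a background transfer across overlay paths in proportion to
    each path's estimated spare capacity (bytes/sec). Hypothetical helper,
    not Harp's algorithm."""
    total_spare = sum(spare_capacity.values())
    if total_spare == 0:
        return {path: 0 for path in spare_capacity}  # no headroom: back off
    return {path: total_bytes * cap // total_spare
            for path, cap in spare_capacity.items()}

# e.g. a 100 MB transfer over the direct path plus two one-hop detours
print(split_background_transfer(
    100 * 2**20,
    {"direct": 2_000_000, "via-gatewayA": 5_000_000, "via-gatewayB": 1_000_000},
))
```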
IEEE International Conference on High Performance Computing, Data, and Analytics | 2006
Jian Liang; Aniruddha Bohra; Hui Zhang; Samrat Ganguly; Rauf Izmailov
Traditional network file systems, like NFS, do not extend to the wide area due to low bandwidth and high network latency. We present WireFS, a Wide Area File System, which enables delegation of metadata management to nodes at client sites (homes). The home of a file stores the most recent copy of the file, serializes all updates, and streams updates to the central file server. WireFS uses access history to migrate the home of a file to the client site that accesses the file most frequently.

We formulate the home migration problem as an integer programming problem and present two algorithms: a dynamic programming approach that finds the optimal solution, and a non-optimal but more efficient greedy algorithm. We show through extensive simulations that, even in the WAN setting, access latency over WireFS is comparable to NFS performance in the LAN setting; the migration overhead is also marginal.
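The greedy variant is easy to sketch. The following is a minimal illustration, assuming a simple majority-share rule rather than the paper's exact integer-programming formulation: each file's home moves to the client site generating most of its accesses, and otherwise stays at the central server.

```python
def greedy_home_assignment(access_counts, min_share=0.5):
    """Greedy sketch of WireFS-style home migration (assumed heuristic):
    move a file's home to the client site with the majority of accesses,
    otherwise leave it at the central server.
    access_counts: {file: {site: count}} -> {file: home}."""
    homes = {}
    for f, per_site in access_counts.items():
        site, hits = max(per_site.items(), key=lambda kv: kv[1])
        total = sum(per_site.values())
        homes[f] = site if total and hits / total >= min_share else "central-server"
    return homes

print(greedy_home_assignment({
    "/proj/report.doc": {"nyc": 40, "sfo": 3},   # clear winner: home -> nyc
    "/proj/shared.db":  {"nyc": 10, "sfo": 11},  # contested: stays central
}))
```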
Networking, Architecture, and Storage | 2011
Stephen Rago; Aniruddha Bohra; Cristian Ungureanu
Typical NFS clients write in a lazy fashion: they leave dirty pages in the page cache and defer writing to the server until later. This reduces network traffic when applications repeatedly modify the same set of pages. However, this approach can lead to memory pressure, when the number of available pages on the client system is so low that the system must work harder to reclaim dirty pages. System performance is poor under memory pressure. We show examples of this problem and present two mechanisms to solve it: eager writeback and eager page laundering. These mechanisms change the client's data management policy from lazy to eager, resulting in higher throughput for sequential writes. In addition, we show that NFS servers suffer from out-of-order file operations, which further reduce performance. We introduce request ordering, a server mechanism to process operations (as much as possible) in the order they were sent by the client, which improves read performance substantially. We have implemented these techniques in the Linux operating system. I/O performance is improved, with the most pronounced improvement visible for sequential access to large files. We see about 33% improvement in the performance of streaming write workloads and more than triple the performance of streaming read workloads. We evaluate several nonsequential workloads and show that these techniques do not degrade performance and can sometimes improve it. We also design and evaluate an adversarial workload to show that the eager policies can perform worse in some pathological cases.
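The request-ordering mechanism can be pictured as a per-client reorder buffer on the server. Below is a minimal sketch under assumed details (per-client sequence numbers, unbounded parking): out-of-order arrivals are held back until every earlier request has been released, so the server dispatches the client's stream in send order.

```python
import heapq

class RequestOrderer:
    """Per-client reorder buffer: release requests strictly in the order
    the client sent them (sequence numbers are an assumed detail)."""
    def __init__(self):
        self.next_seq = 0
        self.parked = []                      # min-heap of (seq, request)

    def submit(self, seq, request):
        """Park an arriving request; return any now ready to dispatch."""
        heapq.heappush(self.parked, (seq, request))
        ready = []
        while self.parked and self.parked[0][0] == self.next_seq:
            ready.append(heapq.heappop(self.parked)[1])
            self.next_seq += 1
        return ready

orderer = RequestOrderer()
print(orderer.submit(1, "READ block 1"))  # arrived early: parked, []
print(orderer.submit(0, "READ block 0"))  # releases both, in send order
```

The "as much as possible" qualifier in the abstract suggests the real mechanism bounds how long it waits for a missing request; a production version of this sketch would add a timeout before releasing parked requests out of order.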
International Journal of Parallel, Emergent and Distributed Systems | 2013
Stephen Rago; Aniruddha Bohra; Cristian Ungureanu
File and Storage Technologies | 2010
Cristian Ungureanu; Benjamin Atkin; Akshat Aranya; Salil Gokhale; Stephen Rago; Grzegorz Calkowski; Cezary Dubnicki; Aniruddha Bohra
Archive | 2006
Hui Zhang; Aniruddha Bohra; Samrat Ganguly; Rauf Izmailov; Jian Liang
Archive | 2006
Samrat Ganguly; Aniruddha Bohra; Rauf Izmailov; Yoshihide Kikuchi
Archive | 2007
Ravindranath Kokku; Aniruddha Bohra; Samrat Ganguly; Rauf Izmailov
Archive | 2007
Samrat Ganguly; Vishnu Navda; Aniruddha Bohra; Daniel S. Rubenstein