Publication


Featured research published by Vijay Mann.


International World Wide Web Conference | 2004

Decentralized orchestration of composite web services

Girish Chafle; Sunil Chandra; Vijay Mann; Mangala Gowri Nanda

Web services make information and software available programmatically via the Internet and may be used as building blocks for applications. A composite web service is one that is built using multiple component web services and is typically specified using a language such as BPEL4WS or WSIPL. Once its specification has been developed, the composite service may be orchestrated either in a centralized or in a decentralized fashion. Decentralized orchestration offers performance improvements in terms of increased throughput and scalability and lower response time. However, decentralized orchestration also brings additional complexity to the system in terms of error recovery and fault handling. Further, incorrect design of a decentralized system can lead to potential deadlock or non-optimal usage of system resources. This paper investigates build time and runtime issues related to decentralized orchestration of composite web services. We support our design decisions with performance results obtained on a decentralized setup using BPEL4WS to describe the composite web services and BPWS4J as the underlying runtime environment to orchestrate them.
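The hop-count advantage of decentralized orchestration can be illustrated with a toy model (our simplification, not the paper's BPWS4J implementation): in a linear composite service, a central coordinator relays every request and response through itself, while decentralized partitions forward data directly to the next component.

```python
# Toy hop-count model for a linear composite service S1 -> S2 -> ... -> Sn.
# All names and numbers here are illustrative assumptions.

def centralized_hops(n_services: int) -> int:
    # The coordinator sends a request to each component service and
    # receives its response back: 2 messages per service.
    return 2 * n_services

def decentralized_hops(n_services: int) -> int:
    # Each partition forwards its output directly to the next partition;
    # one final message returns the result to the client.
    return n_services + 1
```

For three component services the coordinator exchanges six messages while the decentralized layout needs only four, one source of the throughput and response-time gains reported above.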


NETWORKING'11 Proceedings of the 10th international IFIP TC 6 conference on Networking - Volume Part I | 2011

VMFlow: leveraging VM mobility to reduce network power costs in data centers

Vijay Mann; Avinash Kumar; Partha Dutta; Shivkumar Kalyanaraman

Networking costs play an important role in the overall costs of a modern data center. Network power, for example, has been estimated at 10-20% of the overall data center power consumption. Traditional power saving techniques in data centers focus on server power reduction through Virtual Machine (VM) migration and server consolidation, without taking into account the network topology and the current network traffic. On the other hand, recent techniques to save network power have not yet utilized the various degrees of freedom that current and future data centers provide. These include VM migration capabilities across the entire data center network, on-demand routing through programmable control planes, and high bisection bandwidth networks. This paper presents VMFlow: a framework for placement and migration of VMs that takes into account both the network topology as well as network traffic demands, to meet the objective of network power reduction while satisfying as many network demands as possible. We present network power aware VM placement and demand routing as an optimization problem. We show that the problem is NP-complete, and present a fast heuristic for the same. Next, we present the design of a simulator that implements this heuristic and simulates its execution over a data center network with a Clos topology. Our simulation results using real data center traces demonstrate that, by applying an intelligent VM placement heuristic, VMFlow can achieve 15-20% additional savings in network power while satisfying 5-6 times more network demands as compared to recently proposed techniques for saving network power.
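The abstract does not reproduce the heuristic itself; the following greedy sketch (our assumption, not VMFlow's actual algorithm) conveys the flavor of network-aware placement: handle the heaviest-communicating VM pairs first and co-locate them where capacity allows, so their traffic never touches the network and unused links can be powered down.

```python
# Hypothetical greedy placement sketch. "capacity" is VM slots per host;
# cross-host demands are the ones that still consume network links/power.

def pick_host(load, capacity, n_hosts, prefer=None):
    # Prefer a specific host if it still has slots, else first host with room.
    if prefer is not None and load[prefer] < capacity:
        return prefer
    return next(h for h in range(n_hosts) if load[h] < capacity)

def greedy_place(demands, capacity, n_hosts):
    """demands: (vm_a, vm_b, traffic) tuples; heaviest pairs placed first."""
    placement = {}
    load = [0] * n_hosts
    cross_host = 0                  # demands that still traverse the network
    for a, b, _ in sorted(demands, key=lambda d: -d[2]):
        if a not in placement:
            placement[a] = pick_host(load, capacity, n_hosts)
            load[placement[a]] += 1
        if b not in placement:
            # try to co-locate b with a to keep the demand off the network
            placement[b] = pick_host(load, capacity, n_hosts,
                                     prefer=placement[a])
            load[placement[b]] += 1
        if placement[a] != placement[b]:
            cross_host += 1
    return placement, cross_host
```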


Network Operations and Management Symposium | 2012

CrossRoads: Seamless VM mobility across data centers through software defined networking

Vijay Mann; Anilkumar Vishnoi; Kalapriya Kannan; Shivkumar Kalyanaraman

Most enterprises today run their applications on virtual machines (VMs). VM mobility, both live and offline, can provide enormous flexibility and also bring down OPEX (Operational Expenditure) costs. However, both live and offline migration of VMs is still limited to within a local network because of the complexities associated with cross-subnet live and offline migration. These complexities mainly arise from the hierarchical addressing used by various layer 3 routing protocols. For cross data center VM mobility, virtualization vendors require that the network configuration of the new data center where a VM migrates must be similar to that of the old data center. This severely restricts widespread use of VM migration across data center networks. For offline migration, the above limitations can be overcome by reconfiguring IP addresses for the migrated VMs. However, even this effort is non-trivial and time consuming, as these IP addresses are embedded in various configuration files inside these VMs. As enterprises grow and new data centers emerge in different geographic locations, there is a need to interconnect these data centers in a way that allows seamless VM mobility. In this context, we present CrossRoads - a network fabric that provides layer agnostic and seamless live and offline VM mobility across multiple data centers. We leverage software defined networking and implement an OpenFlow based prototype of CrossRoads. CrossRoads extends the idea of location independence based on pseudo addresses proposed in recent research to work with a control plane overlay of OpenFlow network controllers in various data centers. We evaluate CrossRoads on an innovative testbed that leverages nested virtualization to emulate two data centers. Our results confirm that CrossRoads has negligible performance overhead compared to a Default layer 2 network: its average performance was within 2.3% of the Default fabric across all experiments, and in some experiments it even outperformed the Default by up to 30%.
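The pseudo-address idea can be sketched in a few lines (a toy illustration under our own naming, not CrossRoads' actual data structures): each controller maintains a mapping from a VM's location-independent pseudo address to its current data center and host; migration updates only the mapping, so the address applications see never changes.

```python
# Hypothetical location map kept by a controller overlay. A migration,
# live or offline, is just a mapping update -- no VM reconfiguration.

class LocationMap:
    def __init__(self):
        self._where = {}                  # pseudo IP -> (data_center, host)

    def register(self, pseudo_ip, dc, host):
        self._where[pseudo_ip] = (dc, host)

    def migrate(self, pseudo_ip, new_dc, new_host):
        # Seamless move: only the location changes, not the pseudo address.
        self._where[pseudo_ip] = (new_dc, new_host)

    def resolve(self, pseudo_ip):
        return self._where[pseudo_ip]
```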


Communication Systems and Networks | 2014

Avalanche: Data center Multicast using software defined networking

Aakash Iyer; Praveen Kumar; Vijay Mann

Group communication is extensively used in modern data centers. Multicast lends itself naturally to these communication patterns. Traditionally, concerns around reliability, scalability and security have resulted in poor adoption of IP multicast in the Internet. However, data center networks with their structured topologies and tighter control present an opportunity to address these concerns. Software defined networking (SDN) architectures, such as OpenFlow, further provide the opportunity to not merely adopt but also innovate multicast in data centers. In this paper, we present Avalanche - an SDN based system that enables multicast in commodity switches used in data centers. As part of Avalanche, we develop a new multicast routing algorithm called Avalanche Routing Algorithm (AvRA). AvRA attempts to minimize the size of the routing tree it creates for any given multicast group. In typical data center topologies like Tree and FatTree, AvRA reduces to an optimal routing algorithm that becomes a solution to the Steiner Tree problem. Avalanche leverages SDN to take advantage of the rich path diversity commonly available in data center networks, and thereby achieves highly efficient bandwidth utilization. We implement Avalanche as an OpenFlow controller module. Our emulation of Avalanche with Mininet Hi-Fi shows that it improves application data rate by up to 12%, and lowers packet loss by 51%, on average, compared to IP Multicast. We also build a simulator to evaluate Avalanche at scale. For the PortLand FatTree topology, Avalanche results in at least a 35% reduction, compared to IP Multicast, in the number of links that are less than 5% utilized, once the number of multicast groups exceeds 1000. Lastly, our results confirm that AvRA results in smaller trees compared to traditional IP Multicast routing.
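The core idea of minimizing tree size can be sketched as follows (names and details are ours, not the paper's code): when a receiver joins a multicast group, graft it onto the nearest node already on the tree, found by breadth-first search, which keeps the tree small on structured topologies.

```python
from collections import deque

def attach_receiver(adj, tree_nodes, receiver):
    """adj: {node: [neighbors]}; tree_nodes: nodes already on the multicast tree.
    Returns the shortest graft path [tree node, ..., receiver], or None."""
    if receiver in tree_nodes:
        return [receiver]
    parent = {receiver: None}
    queue = deque([receiver])
    while queue:
        node = queue.popleft()
        for nxt in adj[node]:
            if nxt in parent:
                continue
            parent[nxt] = node
            if nxt in tree_nodes:
                # Walk back to the receiver to recover the graft path.
                path, cur = [], nxt
                while cur is not None:
                    path.append(cur)
                    cur = parent[cur]
                return path
            queue.append(nxt)
    return None                       # receiver cannot reach the tree
```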


IFIP'12 Proceedings of the 11th international IFIP TC 6 conference on Networking - Volume Part I | 2012

Remedy: network-aware steady state VM management for data centers

Vijay Mann; Akanksha Gupta; Partha Dutta; Anilkumar Vishnoi; Parantapa Bhattacharya; Rishabh Poddar; Aakash Iyer

Steady state VM management in data centers should be network-aware so that VM migrations do not degrade network performance of other flows in the network, and if required, a VM migration can be intelligently orchestrated to decongest a network hotspot. Recent research in network-aware management of VMs has focused mainly on an optimal network-aware initial placement of VMs and has largely ignored steady state management. In this context, we present the design and implementation of Remedy. Remedy ranks target hosts for a VM migration based on the associated cost of migration, available bandwidth for migration and the network bandwidth balance achieved by a migration. It models the cost of migration in terms of additional network traffic generated during migration. We have implemented Remedy as an OpenFlow controller application that detects the most congested links in the network and migrates a set of VMs in a network-aware manner to decongest these links. Our choice of target hosts ensures that neither the migration traffic nor the flows that get rerouted as a result of migration cause congestion in any part of the network. We validate our cost of migration model on a virtual software testbed using real VM migrations. Our simulation results using real data center traffic data demonstrate that selective network-aware VM migrations can help reduce unsatisfied bandwidth by up to 80-100%.
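The cost-of-migration model is only summarized qualitatively above. A standard pre-copy model of live migration (our stand-in, not necessarily Remedy's exact formula) captures why page dirty rate and migration bandwidth drive the extra network traffic, and how that cost can rank candidate hosts:

```python
# Pre-copy live migration resends pages dirtied during each copy round, so
# total traffic follows a geometric series in rho = dirty_rate / bandwidth.
# Constants and function names are illustrative assumptions.

def migration_traffic(mem_gb, dirty_rate, bandwidth, rounds):
    """Total data moved over `rounds` pre-copy rounds (rates in same units)."""
    rho = dirty_rate / bandwidth
    return mem_gb * sum(rho ** i for i in range(rounds + 1))

def rank_hosts(hosts, mem_gb, dirty_rate, rounds):
    """hosts: {name: available migration bandwidth}; cheapest migration first."""
    return sorted(hosts,
                  key=lambda h: migration_traffic(mem_gb, dirty_rate,
                                                  hosts[h], rounds))
```

A host offering more migration bandwidth shrinks rho, so fewer dirtied pages accumulate per round and the total migration traffic drops.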


Network Operations and Management Symposium | 2006

Problem Determination in Enterprise Middleware Systems using Change Point Correlation of Time Series Data

Manoj K. Agarwal; Manish Gupta; Vijay Mann; Narendran Sachindran; Nikos Anerousis; Lily B. Mummert

Clustered enterprise middleware systems employing dynamic workload scheduling are susceptible to a variety of application malfunctions that can manifest themselves in a counterintuitive fashion and cause debilitating damage. Until now, diagnosing problems in this domain has involved investigating log files and configuration settings and has required in-depth knowledge of the middleware architecture and application design. This paper presents a method for problem determination using change point detection techniques and problem signatures consisting of a combination of changes (or absence of changes) in different metrics. We implemented this approach on a clustered middleware system and applied it to the detection of the storm drain condition: a debilitating problem encountered in clustered systems with counterintuitive symptoms. Our experimental results show that the system detects 93% of storm drain faults with no false positives.
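Change point detection on a metric time series can be sketched with a CUSUM-style detector (a minimal illustration; the paper's actual detector and thresholds may differ): flag the point where a metric's cumulative deviation above its baseline mean exceeds a threshold.

```python
# Minimal one-sided CUSUM sketch. A problem signature would combine such
# detections (or their absence) across several metrics.

def detect_change(series, baseline_n, threshold):
    """Return the index of the first detected upward change point, or None."""
    mean = sum(series[:baseline_n]) / baseline_n
    cusum = 0.0
    for i, x in enumerate(series[baseline_n:], start=baseline_n):
        # Accumulate deviation above the baseline; clamp at zero so noise
        # below the mean does not mask a later shift.
        cusum = max(0.0, cusum + (x - mean))
        if cusum > threshold:
            return i
    return None
```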


International Conference on Cluster Computing | 2007

Identifying sources of Operating System Jitter through fine-grained kernel instrumentation

Ravi Kothari; Vijay Mann

Understanding the behavior and impact of various sources of operating system jitter (OS jitter) is important not only for tuning a system for HPC applications, but also for the ongoing efforts to create light-weight versions of commercial operating systems such as Linux that can be used on compute nodes of large scale HPC systems. In this paper, we present a tool that helps identify sources of OS jitter in a commodity operating system such as Linux and measures the impact of OS jitter through fine-grained kernel instrumentation. Our methodology comprises running a user-level micro-benchmark and measuring the latencies experienced by the benchmark. We then associate each latency with operating system daemons and interrupts using data obtained from kernel instrumentation. We present experimental results that help identify the biggest contributors to the total OS jitter perceived by an application on a commodity operating system such as Linux. Our results revealed that while 63% of the total jitter comes from timer interrupts, the rest comes from various system daemons and interrupts, most of which can be easily eliminated. The tool presented in this paper can also be used to tune “out of the box” commodity operating systems as well as to detect new sources of operating system jitter that get introduced as software gets installed and upgraded on a tuned system.
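The user-level side of such a micro-benchmark can be sketched as follows (an illustrative assumption; the paper's tool additionally correlates each gap with kernel instrumentation data, which this sketch omits): time a fixed busy-loop repeatedly and report the iterations whose latency far exceeds the best case, since those gaps are candidate OS-jitter events.

```python
import time

def measure_jitter(iterations=1000, work=10_000, factor=2.0):
    """Return latencies of iterations that took >factor x the best case."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        s = 0
        for i in range(work):         # fixed amount of user-level work
            s += i
        samples.append(time.perf_counter() - start)
    base = min(samples)
    # Iterations much slower than the best case were likely preempted by
    # daemons or interrupts -- the jitter this tool attributes to sources.
    return [t for t in samples if t > factor * base]
```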


International Conference on Web Services | 2005

Orchestrating composite Web services under data flow constraints

Girish Chafle; Sunil Chandra; Vijay Mann; Mangala Gowri Nanda

A composite service is typically specified using a language such as BPEL4WS and orchestrated by a single coordinator node in a centralized manner. The coordinator receives the client request, makes the required data transformations and invokes the component Web services as per the specification. However, in certain scenarios businesses might want to impose restrictions on access to the data they provide or the source from which they can accept data. Centralized orchestration can lead to violation of these data flow constraints, as the central coordinator has access to the input and output data of all the component Web services. In many cases existing methods of data encryption and authentication are not sufficient to handle such constraints. These data flow constraints thus present obstacles for composite Web service orchestration. In this paper we propose a solution for orchestrating composite Web services under data flow constraints. The solution is based on decentralized orchestration, in which a composite Web service is broken into a set of partitions, one partition per component Web service. To overcome data flow constraints, each partition is executed within the same domain as the corresponding component Web service and hence has the same access rights. However, there are, in general, many ways to decentralize a composite Web service. We apply a rule based filtering mechanism to choose a set of partitions that does not violate the specified data flow constraints.
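The rule-based filtering step can be sketched with a toy model (hypothetical names, not the paper's mechanism): keep only the candidate decentralizations in which no partition outside a provider's own domain ever observes that provider's restricted data item.

```python
# Each candidate topology records which data items each partition observes;
# a rule (item, allowed_partition) says only that partition may see the item.

def violates(topology, rules):
    """topology: {partition: set of data items it observes};
    rules: list of (data_item, allowed_partition) constraints."""
    return any(item in seen
               for item, allowed in rules
               for part, seen in topology.items()
               if part != allowed)

def filter_topologies(candidates, rules):
    # Keep only decentralizations that respect every data flow constraint.
    return [t for t in candidates if not violates(t, rules)]
```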


Distributed Event-Based Systems | 2014

Effective switch memory management in OpenFlow networks

Anilkumar Vishnoi; Rishabh Poddar; Vijay Mann; Suparna Bhattacharya

OpenFlow networks require installation of flow rules in a limited capacity switch memory (Ternary Content Addressable Memory or TCAM, in particular) from a logically centralized controller. A controller can manage the switch memory in an OpenFlow network through events that are generated by the switch at discrete time intervals. Recent studies have shown that data centers can have up to 10,000 network flows per second per server rack today. Increasing the TCAM size to accommodate this large number of flow rules is not a viable solution, since TCAM is costly and power hungry. Current OpenFlow controllers handle this issue by installing flow rules with a default idle timeout, after which the switch automatically evicts the rule from its TCAM. This results in inefficient usage of switch memory for short lived flows when the timeout is too high, and in increased controller workload for frequent flows when the timeout is too low. In this context, we present SmartTime - an OpenFlow controller system that combines an adaptive timeout heuristic to compute efficient idle timeouts with proactive eviction of flow rules, which results in effective utilization of TCAM space while ensuring that TCAM misses (and hence controller load) do not increase. To the best of our knowledge, SmartTime is the first real implementation of an intelligent flow management strategy in an OpenFlow controller that can be deployed in current OpenFlow networks. In our experiments using multiple real data center packet traces and cache sizes, SmartTime's adaptive policy consistently outperformed the best performing static idle timeout policy and the random eviction policy by up to 58% in terms of total cost.
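An adaptive idle-timeout policy in this spirit can be sketched as follows (the constants and update rule are our assumptions, not SmartTime's published heuristic): grow the timeout for flows that keep returning soon after eviction, and shrink it for one-off flows so short-lived rules free TCAM space quickly.

```python
# Multiplicative-increase / multiplicative-decrease timeout sketch.
# t_min and t_max (seconds) bound the timeout; both are illustrative.

def next_timeout(prev_timeout, returned_quickly, t_min=1, t_max=60):
    if returned_quickly:
        # Rule was evicted and then reinstalled soon after: the timeout
        # was too short, and each miss cost a round trip to the controller.
        return min(prev_timeout * 2, t_max)
    # Flow did not come back: reclaim the TCAM entry faster next time.
    return max(prev_timeout // 2, t_min)
```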


Communication Systems and Networks | 2013

Living on the edge: Monitoring network flows at the edge in cloud data centers

Vijay Mann; Anilkumar Vishnoi; Sarvesh Bidkar

Scalable network-wide flow monitoring has remained a distant dream because of the strain it puts on network router resources. Recent proposals have advocated the use of coordinated sampling or host based flow monitoring to enable a scalable network-wide monitoring service. As most hosts in data centers get virtualized with the emergence of the cloud, the hypervisor on a virtualized host adds another network layer in the form of a vSwitch (virtual switch). The vSwitch now forms the new edge of the network. In this paper, we explore the implications of enabling network-wide flow monitoring inside virtual switches in the hosts. Monitoring of network flows inside the server vSwitch can enable scalability due to its distributed nature. However, assumptions that held true for flow monitoring inside a physical switch need to be revisited, since vSwitches are usually not limited by the same level of resource constraints that exist for physical switches and routers. On the other hand, vSwitches do not implement flow monitoring in hardware, as is done in some physical switches. We present the design and implementation of EMC2 - a scalable network-wide monitoring service for cloud data centers. We also conduct an extensive evaluation of various switch based flow monitoring techniques and share our findings. Our results indicate that while layer-3 flow monitoring protocols such as NetFlow can give very good network coverage without using too many resources, protocols that sample packet headers (such as sFlow) need to be carefully configured. A badly configured sFlow vSwitch can degrade application network throughput by up to 17% and can also choke the management network by generating monitoring data at a very high rate.
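Why the sampling rate needs care can be seen from a back-of-the-envelope model (our illustration, with assumed numbers): with 1-in-N packet sampling, the monitoring traffic a vSwitch exports to the management network grows linearly as N shrinks.

```python
# Rough model of exported monitoring traffic for 1-in-N packet sampling.
# sample_bytes approximates the exported record per sampled packet and is
# an illustrative assumption, not an sFlow datagram size.

def monitoring_rate(pkts_per_sec, one_in_n, sample_bytes=128):
    """Bytes/sec of monitoring data exported at a given sampling rate."""
    return pkts_per_sec / one_in_n * sample_bytes
```

At 100,000 packets/sec per host, moving from 1-in-1000 to 1-in-10 sampling multiplies the export rate a hundredfold, which is how an aggressive configuration can choke the management network.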
