Publication


Featured research published by Vijay K. Naik.


Pacific Rim International Symposium on Dependable Computing | 2010

End-to-End Performability Analysis for Infrastructure-as-a-Service Cloud: An Interacting Stochastic Models Approach

Rahul Ghosh; Kishor S. Trivedi; Vijay K. Naik; Dong Seong Kim

Handling diverse client demands and managing unexpected failures without degrading performance are two key promises of a cloud-delivered service. However, evaluating cloud service quality becomes difficult as the scale and complexity of a cloud system increase. In a cloud environment, a service request from a user goes through a variety of provider-specific processing steps from the instant it is submitted until the service is fully delivered. Measurement-based evaluation of cloud service quality is expensive, especially if many configurations, workload scenarios, and management methods are to be analyzed. To overcome these difficulties, in this paper we propose a general analytic-model-based approach for end-to-end performability analysis of a cloud service. We illustrate our approach using an Infrastructure-as-a-Service (IaaS) cloud, where service availability and provisioning response delay are two key QoS metrics. A novelty of our approach is in reducing the complexity of analysis by dividing the overall model into sub-models and then obtaining the overall solution by iterating over the individual sub-model solutions. In contrast to a single one-level monolithic model, our approach yields a high-fidelity model that is tractable and scalable. Our approach and underlying models can be readily extended to other types of cloud services and are applicable to public, private, and hybrid clouds.
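
As a rough illustration of the decomposition idea described above, the sketch below iterates between two toy sub-models, an availability sub-model and a performance sub-model, until the values they exchange stop changing. The rates, the M/M/1-style performance sub-model, and the load-dependent failure rate are illustrative assumptions, not the paper's actual models.

    # Minimal sketch of iterating over interacting sub-models; all rates and
    # functional forms below are illustrative placeholders, not the paper's.
    def availability_submodel(load):
        """Two-state up/down model; failure rate grows mildly with offered load."""
        fail, repair = 0.001 * (1.0 + load), 0.1
        return repair / (fail + repair)            # steady-state P(up)

    def performance_submodel(availability, arrival=5.0, service=8.0):
        """M/M/1-style sub-model; effective capacity is scaled by availability."""
        return min(arrival / (service * availability), 0.999)  # offered load fed back

    load, tol = 0.5, 1e-9
    for _ in range(100):                           # fixed-point iteration
        avail = availability_submodel(load)
        new_load = performance_submodel(avail)
        if abs(new_load - load) < tol:
            break
        load = new_load
    print(f"availability={avail:.6f}, load={load:.6f}")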


Dependable Systems and Networks | 2011

A scalable availability model for Infrastructure-as-a-Service cloud

Francesco Longo; Rahul Ghosh; Vijay K. Naik; Kishor S. Trivedi

High availability is one of the key characteristics of an Infrastructure-as-a-Service (IaaS) cloud. In this paper, we present a scalable method for availability analysis of large-scale IaaS clouds using analytic models. To reduce the complexity of analysis and the solution time, we use an interacting Markov chain based approach. The construction and solution of the Markov chains are facilitated by the use of a high-level Petri net based paradigm known as a stochastic reward net (SRN). The overall solution is composed by iterating over the individual SRN sub-model solutions. Dependencies among the sub-models are resolved using fixed-point iteration, for which the existence of a solution is proved. We compare the solution obtained from the interacting sub-models with that of a monolithic model and show that the errors introduced by decomposition are insignificant. Additionally, we provide closed-form solutions of the sub-models and show that our approach can handle very large IaaS clouds.
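
The core computational step in such an approach is solving each sub-model for its steady-state probabilities. Below is a minimal sketch, with made-up states and rates (not the paper's SRN sub-models), of solving one small availability sub-model as a continuous-time Markov chain.

    # Solve pi Q = 0 with sum(pi) = 1 for a toy 3-state availability sub-model;
    # the paper composes many such sub-model solutions via fixed-point iteration.
    import numpy as np

    # States of one physical-machine sub-model: 0 = up, 1 = degraded, 2 = down.
    fail1, fail2, repair = 0.002, 0.01, 0.2      # illustrative rates
    Q = np.array([
        [-fail1,   fail1,    0.0   ],
        [ 0.0,    -fail2,    fail2 ],
        [ repair,  0.0,     -repair],
    ])

    # Replace one balance equation with the normalization constraint.
    A = np.vstack([Q.T[:-1], np.ones(3)])
    b = np.array([0.0, 0.0, 1.0])
    pi = np.linalg.solve(A, b)

    availability = pi[0] + pi[1]                 # up or degraded counts as available
    print("steady-state probabilities:", pi.round(6))
    print("sub-model availability:", round(availability, 6))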


Conference on High Performance Computing (Supercomputing) | 1993

Performance analysis of job scheduling policies in parallel supercomputing environments

Vijay K. Naik; Mark S. Squillante; Sanjeev Setia

The authors analyze three general classes of scheduling policies under a workload typical of large-scale scientific computing. These policies differ in the manner in which processors are partitioned among the jobs as well as in the way jobs are prioritized for execution on the partitions. The results indicate that existing static schemes do not perform well under varying workloads. Adaptive policies tend to make better scheduling decisions, but their ability to adjust to workload changes is limited. Dynamic partitioning policies, on the other hand, yield the best performance and can be tuned to provide the desired performance differences among jobs with varying resource demands.
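
To make the distinction concrete, the sketch below contrasts a static equal partitioning with a dynamic equipartitioning rule that re-divides processors whenever the set of running jobs changes. It is an illustrative simplification, not the simulation model used in the paper.

    # Static partitioning fixes sizes at configuration time; dynamic
    # equipartitioning re-divides all processors among the jobs in the system.
    def static_partition(num_procs, num_partitions):
        """Fixed, equal-size partitions chosen up front."""
        return [num_procs // num_partitions] * num_partitions

    def dynamic_equipartition(num_procs, running_jobs):
        """Re-divide all processors evenly among the currently running jobs."""
        if not running_jobs:
            return {}
        base, extra = divmod(num_procs, len(running_jobs))
        return {job: base + (1 if i < extra else 0)
                for i, job in enumerate(running_jobs)}

    print(static_partition(128, 4))                        # [32, 32, 32, 32]
    print(dynamic_equipartition(128, ["j1", "j2", "j3"]))  # {'j1': 43, 'j2': 43, 'j3': 42}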


Future Generation Computer Systems | 2013

Modeling and performance analysis of large scale IaaS Clouds

Rahul Ghosh; Francesco Longo; Vijay K. Naik; Kishor S. Trivedi

For Cloud based services to support enterprise-class production workloads, mainframe-like predictable performance is essential. However, the scale, complexity, and inherent resource sharing across workloads make managing a Cloud for predictable performance difficult. As a first step towards designing Cloud based systems that achieve such performance and realize the service level objectives, we develop a scalable stochastic analytic model for performance quantification of an Infrastructure-as-a-Service (IaaS) Cloud. Specifically, we model a class of IaaS Clouds that offer tiered services by configuring physical machines into three pools with different provisioning delay and power consumption characteristics. Performance behavior in such IaaS Clouds is affected by a large set of parameters, e.g., workload, system characteristics, and management policies, so traditional analytic models for such systems tend to be intractable. To overcome this difficulty, we propose an approach based on multi-level interacting stochastic sub-models, where the overall model solution is obtained iteratively over the individual sub-model solutions. By comparing with a single-level monolithic model, we show that our approach is scalable, tractable, and yet retains high fidelity. Since the dependencies among the sub-models are resolved via fixed-point iteration, we prove the existence of a solution. Results from our analysis show the impact of workload and system characteristics on two performance measures: mean response delay and job rejection probability.
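
The sketch below is a small Monte Carlo stand-in (not the authors' analytic sub-models) for a tiered IaaS cloud with three machine pools of increasing provisioning delay, here labeled hot, warm, and cold; pool sizes, rates, and delays are made-up parameters. It estimates the same two measures: job rejection probability and mean provisioning delay.

    # Toy discrete-event simulation of a three-pool IaaS cloud.
    import heapq, random

    random.seed(1)
    POOLS = [("hot", 20, 0.0), ("warm", 10, 5.0), ("cold", 10, 30.0)]  # (name, size, delay)
    ARRIVAL_RATE, MEAN_SERVICE, HORIZON = 0.5, 60.0, 100_000.0

    free = {name: size for name, size, _ in POOLS}
    releases, t = [], 0.0                            # heap of (finish_time, pool_name)
    accepted, rejected, total_delay = 0, 0, 0.0

    while t < HORIZON:
        t += random.expovariate(ARRIVAL_RATE)        # next job arrival
        while releases and releases[0][0] <= t:      # return finished machines to pools
            _, pool = heapq.heappop(releases)
            free[pool] += 1
        for name, _, delay in POOLS:                 # try hot first, then warm, then cold
            if free[name] > 0:
                free[name] -= 1
                total_delay += delay
                accepted += 1
                heapq.heappush(releases,
                               (t + delay + random.expovariate(1 / MEAN_SERVICE), name))
                break
        else:
            rejected += 1                            # no free machine in any pool

    print("job rejection probability:", rejected / (accepted + rejected))
    print("mean provisioning delay:", total_delay / accepted)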


Measurement and Modeling of Computer Systems | 1994

Analysis of the impact of memory in distributed parallel processing systems

Vinod G. J. Peris; Mark S. Squillante; Vijay K. Naik

We consider an important tradeoff between processor and memory allocation in distributed parallel processing systems. To study this tradeoff, we formulate stochastic models of parallel program behavior, distributed parallel processing environments and memory overheads incurred by parallel programs as a function of their processor allocation. A mathematical analysis of the models is developed, which includes the effects of contention for shared resources caused by paging activity. We conduct a detailed analysis of real large-scale scientific applications and use these results to parameterize our models. Our results show that memory overhead resulting from processor allocation decisions can have a significant effect on system performance in distributed parallel environments, strongly suggesting that memory considerations must be incorporated in the resource allocation policies for parallel systems. We also demonstrate the importance of the inter-locality miss ratio, which is introduced in this paper and analyzed for the first time.
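
A rough back-of-the-envelope illustration of this tradeoff, with assumed functional forms rather than the paper's stochastic models: compute time shrinks with the processor allocation, but once the per-node working set exceeds local memory, paging overhead dominates.

    # Toy model: execution time = compute time + paging penalty for the part of
    # the per-node working set that does not fit in node memory. All numbers
    # below are illustrative assumptions.
    WORK = 1e12              # total work, arbitrary units
    DATA_GB = 64.0           # total problem data
    REPLICATED_GB = 1.0      # per-node replicated structures (assumed)
    NODE_MEM_GB = 4.0
    PAGING_COST = 50.0       # time units per GB that spills to disk (assumed)

    def exec_time(p):
        compute = WORK / p / 1e9
        working_set = DATA_GB / p + REPLICATED_GB
        spill = max(0.0, working_set - NODE_MEM_GB)
        return compute + PAGING_COST * spill

    for p in (8, 16, 32, 64, 128):
        print(f"p={p:4d}  time={exec_time(p):8.1f}")

With these numbers, allocations below 32 processors leave too little aggregate memory and the paging penalty rivals or exceeds the compute time, which is the kind of effect that memory-oblivious allocation policies miss.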


Symposium on Reliable Distributed Systems | 2010

Quantifying Resiliency of IaaS Cloud

Rahul Ghosh; Francesco Longo; Vijay K. Naik; Kishor S. Trivedi

Cloud based services may experience changes – internal, external, large, small – at any time. Predicting and quantifying the effects on quality of service during and after a change are important in the resiliency assessment of a cloud based service. In this paper, we quantify the resiliency of an Infrastructure-as-a-Service (IaaS) cloud when subjected to changes in demand and available capacity. Using a stochastic reward net based model for provisioning and servicing requests in an IaaS cloud, we quantify the resiliency of the IaaS cloud with respect to two key performance measures – job rejection rate and provisioning response delay.
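
As a simplified illustration of quantifying resiliency to a demand change (assumed parameters, and a plain birth-death chain rather than the paper's SRN), the sketch below integrates the Kolmogorov forward equations with a forward-Euler step and tracks how the job rejection probability settles to a new level after the arrival rate doubles.

    # Transient analysis of an M/M/c/K-style chain under a sudden demand change.
    import numpy as np

    C, K, MU = 4, 8, 1.0                      # servers, capacity, service rate

    def generator(lam):
        Q = np.zeros((K + 1, K + 1))
        for n in range(K + 1):
            if n < K:
                Q[n, n + 1] = lam             # arrival
            if n > 0:
                Q[n, n - 1] = min(n, C) * MU  # departure
            Q[n, n] = -Q[n].sum()
        return Q

    Q_low, Q_high = generator(2.0), generator(4.0)
    p = np.zeros(K + 1); p[0] = 1.0           # start empty
    dt = 0.01
    for step in range(4000):
        Q = Q_low if step < 2000 else Q_high  # demand doubles at t = 20
        p = p + dt * (p @ Q)                  # Euler step of dp/dt = p Q
        if step % 500 == 0:
            print(f"t={step*dt:5.1f}  P(reject)={p[K]:.4f}")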


IBM Journal of Research and Development | 1997

Dynamic resource management on distributed systems using reconfigurable applications

José E. Moreira; Vijay K. Naik

Efficient management of distributed resources, under conditions of unpredictable and varying workload, requires enforcement of dynamic resource management policies. Execution of such policies requires a relatively fine-grain control over the resources allocated to jobs in the system. Although this is a difficult task using conventional job management and program execution models, reconfigurable applications can be used to make it viable. With reconfigurable applications, it is possible to dynamically change, during the course of program execution, the number of concurrently executing tasks of an application as well as the resources allocated. Thus, reconfigurable applications can adapt to internal changes in resource requirements and to external changes affecting available resources. In this paper, we discuss dynamic management of resources on distributed systems with the help of reconfigurable applications. We first characterize reconfigurable parallel applications. We then present a new programming model for reconfigurable applications and the Distributed Resource Management System (DRMS), an integrated environment for the design, development, execution, and resource scheduling of reconfigurable applications. Experiments were conducted to verify the functionality and performance of application reconfiguration under DRMS. A detailed breakdown of the costs in reconfiguration is presented with respect to several different applications. Our results indicate that application reconfiguration is effective under DRMS and can be beneficial in improving individual application performance as well as overall system performance. We observe a significant reduction in average job response time and an improvement in overall system utilization.
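
The sketch below illustrates the reconfiguration idea with a hypothetical API (it is not the DRMS interface): the application exposes a reconfigure() hook so that a resource manager can change its task count mid-run, and the application redistributes its data across the new set of tasks.

    # Hypothetical reconfigurable application: the resource manager calls
    # reconfigure() when the processor allocation grows or shrinks.
    class ReconfigurableApp:
        def __init__(self, data, num_tasks):
            self.data = list(data)
            self.partitions = self._split(num_tasks)

        def _split(self, n):
            chunk, rem = divmod(len(self.data), n)
            parts, start = [], 0
            for i in range(n):
                size = chunk + (1 if i < rem else 0)
                parts.append(self.data[start:start + size])
                start += size
            return parts

        def reconfigure(self, new_num_tasks):
            """Redistribute data when the allocation changes mid-run."""
            self.partitions = self._split(new_num_tasks)

    app = ReconfigurableApp(range(1000), num_tasks=8)
    print([len(p) for p in app.partitions])   # 8 partitions of 125 elements
    app.reconfigure(5)                        # allocation shrinks from 8 to 5 tasks
    print([len(p) for p in app.partitions])   # data redistributed over 5 partitions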


Journal of Parallel and Distributed Computing | 1997

Processor Allocation in Multiprogrammed Distributed-Memory Parallel Computer Systems

Vijay K. Naik; Sanjeev Setia; Mark S. Squillante

In this paper, we examine three general classes of space-sharing scheduling policies under a workload representative of large-scale scientific computing. These policies differ in the way processors are partitioned among the jobs as well as in the way jobs are prioritized for execution on the partitions. We consider new static, adaptive and dynamic policies that differ from previously proposed policies by exploiting user-supplied information about the resource requirements of submitted jobs. We examine the performance characteristics of these policies from both the system and user perspectives. Our results demonstrate that existing static schemes do not perform well under varying workloads, and that the system scheduling policy for such workloads must distinguish between jobs with large differences in execution times. We show that obtaining good performance under adaptive policies requires some a priori knowledge of the job mix in these systems. We further show that a judiciously parameterized dynamic space-sharing policy can outperform adaptive policies from both the system and user perspectives.
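
As an illustration of how user-supplied hints can enter an adaptive space-sharing decision, the sketch below grants a partition size that shrinks under queueing pressure and caps jobs whose estimated runtime marks them as long. The specific rule and thresholds are assumptions, not the policies evaluated in the paper.

    # Adaptive allocation driven by user-supplied hints (requested processors
    # and an estimated runtime); thresholds and caps are illustrative.
    def adaptive_allocation(requested, est_runtime, free_procs, queued_jobs,
                            long_threshold=3600, long_job_cap=0.5, total_procs=128):
        """Return the number of processors to grant, or 0 to keep the job queued."""
        # Shrink the grant when many jobs are waiting (adapt to load).
        target = max(1, min(requested, free_procs // max(1, queued_jobs)))
        # Cap long-running jobs so short jobs are not starved.
        if est_runtime > long_threshold:
            target = min(target, int(long_job_cap * total_procs))
        return target if target <= free_procs else 0

    print(adaptive_allocation(64, est_runtime=600,  free_procs=96, queued_jobs=1))  # 64
    print(adaptive_allocation(64, est_runtime=7200, free_procs=96, queued_jobs=2))  # 48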


Integrated Network Management | 2007

Efficient Resource Virtualization and Sharing Strategies for Heterogeneous Grid Environments

Pawel Garbacki; Vijay K. Naik

Resource virtualization has emerged as a powerful technique for customized resource provisioning in grid and data center environments. In this paper, we describe efficient strategies for policy-based control of the virtualization of physical resources. With these strategies, virtualization is controlled taking into account workload requirements, the available capacities of physical resources, and the governing policies. Realizing this control requires simultaneous handling of three problems: (i) determining the virtual resource configurations, (ii) mapping the resulting virtual resources to physical resources, and (iii) mapping workloads to the virtual resources. We pose this as an optimization problem and solve it using a linear programming (LP) based approach. We evaluate this approach by implementing it in the Harmony grid environment, which consists of heterogeneous resources and heterogeneous workloads. Experimental results indicate that our approach is efficient and effective. We extend this approach further by using a two-phase heuristic that allows the decision making component to scale up to handle large-scale grid systems.
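
A small LP sketch in the spirit of this formulation, using toy numbers rather than the Harmony model: split each workload's demand across physical machines subject to capacity limits while minimizing an assumed per-unit hosting cost.

    # Toy LP: x[i, j] = capacity units of machine j assigned to workload i.
    import numpy as np
    from scipy.optimize import linprog

    demands = [30.0, 50.0]                 # workload demands, in capacity units
    capacities = [40.0, 40.0, 40.0]        # physical machine capacities
    cost = [1.0, 1.5, 2.0]                 # per-unit hosting cost (assumed)

    W, M = len(demands), len(capacities)
    c = np.tile(cost, W)                   # x laid out as [w0 on m0..m2, w1 on m0..m2]

    A_ub, b_ub = [], []
    for i in range(W):                     # serve each workload: -sum_j x[i,j] <= -demand_i
        row = np.zeros(W * M); row[i * M:(i + 1) * M] = -1.0
        A_ub.append(row); b_ub.append(-demands[i])
    for j in range(M):                     # respect each machine: sum_i x[i,j] <= capacity_j
        row = np.zeros(W * M); row[j::M] = 1.0
        A_ub.append(row); b_ub.append(capacities[j])

    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, None))
    print(res.x.reshape(W, M))             # machine capacity assigned to each workload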


Measurement and Modeling of Computer Systems | 1999

The impact of job memory requirements on gang-scheduling performance

Sanjeev Setia; Mark S. Squillante; Vijay K. Naik

Almost all previous research on gang-scheduling has ignored the impact of real job memory requirements on the performance of the policy. This is despite the fact that on parallel supercomputers, because of the problems associated with demand paging, executing jobs are typically allocated enough memory so that their entire address space is memory-resident. In this paper, we examine the impact of job memory requirements on the performance of gang-scheduling policies. We first present an analysis of the memory-usage characteristics of jobs in the production workload on the Cray T3E at the San Diego Supercomputer Center. We also characterize the memory usage of some of the applications that form part of the workload on the LLNL ASCI supercomputer. Next, we examine the issue of long-term scheduling on MPPs, i.e., we study policies for deciding which jobs among a set of competing jobs should be allocated memory and thus should be allowed to execute on the processors of the system. Using trace-driven simulation, we evaluate the impact of using different long-term scheduling policies on the overall performance of Distributed Hierarchical Control (DHC), a gang-scheduling policy that has been studied extensively in the research literature.
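
The sketch below illustrates the long-term scheduling question in a deliberately simplified form (not the DHC policies studied in the paper): jobs are admitted from the queue only while their entire memory footprint fits in machine memory, and the admitted set is then gang-scheduled in round-robin time slices.

    # Memory-aware long-term admission followed by round-robin gang-scheduling.
    from collections import deque

    TOTAL_MEM_GB = 256

    def admit(queue, running, used_mem):
        """First-come-first-served admission constrained by total memory."""
        while queue and used_mem + queue[0]["mem"] <= TOTAL_MEM_GB:
            job = queue.popleft()
            running.append(job)
            used_mem += job["mem"]
        return used_mem

    queue = deque([{"name": "A", "mem": 100}, {"name": "B", "mem": 120},
                   {"name": "C", "mem": 80}, {"name": "D", "mem": 20}])
    running, used = [], 0
    used = admit(queue, running, used)

    print("admitted:", [j["name"] for j in running])   # A and B fit; C must wait
    print("memory used:", used, "GB")
    for slice_no in range(3):                          # gang-schedule admitted jobs
        active = running[slice_no % len(running)]
        print(f"time slice {slice_no}: all processors run job {active['name']}")

Note that strict first-come-first-served admission leaves job D waiting behind C even though D alone would fit; the long-term policies compared in the paper differ precisely in how they make such choices.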
