Mohsen Amini Salehi
University of Louisiana at Lafayette
Publications
Featured research published by Mohsen Amini Salehi.
International Conference on Cloud Computing | 2012
Mohsen Amini Salehi; P. Radha Krishna; Krishnamurty Sai Deepak; Rajkumar Buyya
Energy efficiency is one of the main challenges that data centers face nowadays. A considerable portion of the energy consumed in these environments is wasted by idling resources. To avoid this wastage, offering services with a variety of SLAs (with different prices and priorities) is a common practice. The question we investigate in this research is how the energy consumption of a data center that offers various SLAs can be reduced. To answer this question, we propose an adaptive energy management policy that employs virtual machine (VM) preemption to adjust energy consumption based on user performance requirements. We have implemented our proposed energy management policy in Haizea, a real scheduling platform for virtualized data centers. Experimental results reveal 18% energy conservation (up to 4000 kWh in 30 days) compared with baseline policies, without any major increase in SLA violations.
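The paper's Haizea-based policy is not reproduced here; the minimal sketch below only illustrates the general shape of such a policy: find under-utilized hosts, then preempt their lowest-priority (cheapest-SLA) VMs first so that stricter SLAs are violated less often. The `VM` and `Host` classes, the utilization threshold, and the priority ordering are all illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    vm_id: str
    sla_priority: int  # higher value = stricter SLA, preempted last (assumption)
    cores: int

@dataclass
class Host:
    host_id: str
    cores: int
    vms: list = field(default_factory=list)

    def used_cores(self) -> int:
        return sum(vm.cores for vm in self.vms)

def consolidation_candidates(hosts, utilization_threshold=0.25):
    """Hosts loaded lightly enough that preempting or migrating their
    VMs and powering them down may save energy (threshold is assumed)."""
    return [h for h in hosts
            if 0 < h.used_cores() <= utilization_threshold * h.cores]

def preemption_order(host):
    """Preempt low-priority (cheap-SLA) VMs first, so stricter SLAs
    see fewer violations while under-utilized hosts are vacated."""
    return sorted(host.vms, key=lambda vm: vm.sla_priority)

if __name__ == "__main__":
    host = Host("h1", cores=16,
                vms=[VM("vm-a", sla_priority=0, cores=2),
                     VM("vm-b", sla_priority=2, cores=2)])
    print([h.host_id for h in consolidation_candidates([host])])  # ['h1']
    print([vm.vm_id for vm in preemption_order(host)])            # vm-a first
```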
Journal of Parallel and Distributed Computing | 2012
Mohsen Amini Salehi; Bahman Javadi; Rajkumar Buyya
Resource provisioning is one of the challenges in federated Grid environments, where each Grid serves requests from external users along with local users. Recently, this resource provisioning has been performed in the form of Virtual Machines (VMs). The problem arises when there are insufficient resources to serve local users, and it is further complicated when external requests have different QoS requirements. Local users can be served by preempting VMs from external users, which imposes overheads on the system. Therefore, the question is how the number of VM preemptions in a Grid can be minimized. Additionally, how can we decrease the likelihood of preemption for requests with more QoS requirements? We propose a scheduling policy in InterGrid, a federated Grid, which reduces the number of VM preemptions and dispatches external requests in a way that fewer requests with QoS constraints are affected by preemption. Extensive simulation results indicate that the number of VM preemptions is decreased by at least 60%, particularly for requests with more QoS requirements.
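As a rough illustration of preemption-aware dispatching (not the paper's actual InterGrid policy), the sketch below routes QoS-constrained external requests to the grid where local load, and hence preemption risk, is lowest, while best-effort requests absorb the riskier grids. The `Grid` class and the load-ratio risk proxy are assumptions made for this example.

```python
from dataclasses import dataclass

@dataclass
class Grid:
    name: str
    capacity: int               # total cores
    expected_local_load: float  # cores typically claimed by local users

    def preemption_risk(self) -> float:
        # Crude proxy: the more capacity local users claim, the more
        # likely an external VM is to be preempted on this grid.
        return self.expected_local_load / self.capacity

def dispatch(request_has_qos: bool, grids):
    """Route QoS-constrained external requests to the lowest-risk grid;
    best-effort requests take the riskiest one, so fewer QoS-constrained
    requests are affected by preemption."""
    ranked = sorted(grids, key=lambda g: g.preemption_risk())
    return ranked[0] if request_has_qos else ranked[-1]

if __name__ == "__main__":
    grids = [Grid("A", 128, 96.0), Grid("B", 128, 32.0)]
    print(dispatch(True, grids).name)   # B: low risk for the QoS request
    print(dispatch(False, grids).name)  # A: best-effort tolerates risk
```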
Concurrency and Computation: Practice and Experience | 2014
Mohsen Amini Salehi; Bahman Javadi; Rajkumar Buyya
Resource provisioning is one of the main challenges in large‐scale distributed systems such as federated Grids. Recently, many resource management systems in these environments have started to use the lease abstraction and virtual machines (VMs) for resource provisioning. In large‐scale distributed systems, resource providers serve requests from external users along with their own local users. The problem arises when there are not sufficient resources for local users, who have higher priority than external ones and need resources urgently. This problem can be solved by preempting VM‐based leases from external users and allocating them to the local ones. However, preempting VM‐based leases entails side effects in terms of overhead time as well as an increased makespan of external requests. In this paper, we model the overhead of preempting VMs. Then, to reduce the impact of these side effects, we propose and compare several policies that determine the proper set of lease(s) for preemption. We evaluate the proposed policies through simulation as well as real experimentation in the context of InterGrid under different working conditions. Evaluation results demonstrate that the proposed preemption policies serve up to 72% more local requests without increasing the rejection ratio of external requests.
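One simple instance of a lease-selection policy in this problem setting is sketched below: greedily preempt the external leases with the lowest overhead per freed core until the local request fits. The memory-proportional suspend-overhead model and the greedy ranking are illustrative assumptions, not the specific policies evaluated in the paper.

```python
from dataclasses import dataclass

@dataclass
class Lease:
    lease_id: str
    cores: int
    memory_gb: float

def suspend_overhead(lease: Lease, disk_mb_per_sec: float = 100.0) -> float:
    # Assumed overhead model: suspending a lease means writing its VM
    # memory images to disk, so overhead grows with memory size.
    return lease.memory_gb * 1024 / disk_mb_per_sec

def select_for_preemption(external_leases, cores_needed: int):
    """Greedy policy: preempt the leases with the lowest overhead per
    freed core until the local request fits; None if it cannot fit."""
    ranked = sorted(external_leases,
                    key=lambda l: suspend_overhead(l) / l.cores)
    chosen, freed = [], 0
    for lease in ranked:
        if freed >= cores_needed:
            break
        chosen.append(lease)
        freed += lease.cores
    return chosen if freed >= cores_needed else None

if __name__ == "__main__":
    leases = [Lease("e1", cores=4, memory_gb=16.0),
              Lease("e2", cores=2, memory_gb=2.0)]
    picked = select_for_preemption(leases, cores_needed=2)
    print([l.lease_id for l in picked])  # ['e2']: cheapest per freed core
```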
Journal of Parallel and Distributed Computing | 2016
Mohsen Amini Salehi; Jay Smith; Anthony A. Maciejewski; Howard Jay Siegel; Edwin K. P. Chong; Jonathan Apodaca; Luis Diego Briceno; Timothy Renner; Vladimir Shestak; Joshua Ladd; Andrew M. Sutton; David L. Janovy; Sudha Govindasamy; Amin Alqudah; Rinku Dewri; Puneet Prakash
Heterogeneous parallel and distributed computing systems frequently must operate in environments where there is uncertainty in system parameters. Robustness can be defined as the degree to which a system can function correctly in the presence of parameter values different from those assumed. In such an environment, the execution time of any given task may fluctuate substantially due to factors such as the content of the data to be processed. Determining a resource allocation that is robust against this uncertainty is an important area of research. In this study, we define a stochastic robustness measure to facilitate resource allocation decisions in a dynamic environment where tasks are subject to individual hard deadlines and each task requires some input data to start execution. In this environment, tasks that cannot meet their deadlines are dropped (i.e., discarded). We define methods to determine the stochastic completion times of tasks in the presence of task dropping. The stochastic task completion time is used in the definition of the stochastic robustness measure. Based on this measure, we design novel resource allocation techniques that work in immediate and batch modes, with the goal of maximizing the number of tasks that meet their individual deadlines. We compare the performance of our technique against several well-known approaches taken from the literature and adapted to our environment. Simulation results demonstrate the suitability of our new technique in a dynamic heterogeneous computing system.

Highlights:
- Calculating stochastic task completion time in a heterogeneous system with task dropping.
- A model to quantify resource allocation robustness, with proposed mapping heuristics.
- Evaluating immediate and batch mappings and optimizing the queue-size limit of batch mode.
- Analyzing the impact of the over-subscription level on immediate and batch allocation modes.
- Providing a model in the batch mode to run mapping events before machines become idle.
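A hedged sketch of the core computation follows: estimating a task's probability of meeting its deadline on a candidate machine, and using that probability for immediate-mode mapping. Monte Carlo sampling stands in for the paper's analytic treatment of stochastic completion times, and queue-internal task dropping is deliberately not modeled here.

```python
import random

def completion_probability(queue_samplers, task_sampler,
                           now: float, deadline: float,
                           trials: int = 10_000) -> float:
    """Estimate P(task finishes by its deadline) on a machine whose
    queue holds tasks with the given execution-time samplers.
    Dropping of queued tasks is not modeled (simplification)."""
    hits = 0
    for _ in range(trials):
        t = now
        for sample in queue_samplers:   # drain the queue ahead of us
            t += sample()
        t += task_sampler()             # then run the new task
        hits += (t <= deadline)
    return hits / trials

def map_immediate(machines, task_sampler, now: float, deadline: float):
    """Immediate-mode mapping: send the task to the machine index that
    maximizes its probability of meeting the deadline."""
    return max(range(len(machines)),
               key=lambda i: completion_probability(
                   machines[i], task_sampler, now, deadline))

if __name__ == "__main__":
    gauss = lambda mu, sigma: (lambda: max(0.0, random.gauss(mu, sigma)))
    machines = [[gauss(5, 1), gauss(4, 1)],   # heavily queued machine
                [gauss(2, 0.5)]]              # lightly loaded machine
    print(map_immediate(machines, gauss(3, 1), now=0.0, deadline=9.0))  # 1
```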
Software - Practice and Experience | 2014
Mohsen Amini Salehi; Adel Nadjaran Toosi; Rajkumar Buyya
This paper describes the creation of a contention‐aware environment in a large‐scale distributed system, where contention occurs between external and local requests for access to resources. To resolve the contention, we propose and implement a preemption mechanism in InterGrid, a platform for large‐scale distributed systems that uses virtual machines for resource provisioning. The implemented mechanism enables resource providers to increase their resource utilization by contributing resources to the InterGrid platform without delaying their local users. The paper also evaluates the impact of applying various policies for preempting user requests. These policies affect resource contention, average waiting time, and the overhead imposed on the system. Experiments conducted in real settings demonstrate the efficacy of the preemption mechanism in resolving resource contention and the influence of preemption policies on the amount of imposed overhead and average waiting time.
Advanced Information Networking and Applications | 2012
Mohsen Amini Salehi; Bahman Javadi; Rajkumar Buyya
Many applications in federated Grids have quality-of-service (QoS) constraints such as deadlines. Admission control mechanisms assure the QoS constraints of the applications by limiting the number of user requests accepted by a resource provider. However, in order to maximize their profit, resource owners are interested in accepting as many requests as possible. In these circumstances, the question that arises is: what is the effective number of requests that can be accepted by a resource provider so that the number of accepted external requests is maximized while, at the same time, QoS violations are minimized? In this paper, we answer this question in the context of a virtualized federated Grid environment, where each Grid serves requests from external users along with its local users, and requests of local users have preemptive priority over external requests. We apply an analytical queueing model to address this question. Additionally, we derive a preemption-aware admission control policy based on the proposed model. Simulation results under realistic working conditions indicate that the proposed policy improves the number of completed external requests (by up to 25%). In terms of QoS violations, the 95% confidence interval of the average difference with other policies is between 14.79% and 18.56%.
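The sketch below substitutes a textbook Erlang-B blocking formula for the paper's analytical queueing model, only to illustrate how model-predicted violations can gate the admission of external requests behind preemptive-priority local users. The capacity split and the violation target are assumptions made for this example.

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability for an M/M/c/c queue, computed via the
    stable recurrence B(m) = a*B(m-1) / (m + a*B(m-1))."""
    b = 1.0
    for m in range(1, servers + 1):
        b = (offered_load * b) / (m + offered_load * b)
    return b

def admit_external(total_servers: int, local_busy: int,
                   external_busy: int, external_load: float,
                   violation_target: float = 0.05) -> bool:
    """Admit an external request iff the capacity left after
    (preemptive-priority) local users keeps predicted blocking --
    our stand-in for QoS violation -- under the target."""
    residual = total_servers - local_busy
    if external_busy + 1 > residual:
        return False  # the request would be preempted immediately
    return erlang_b(residual, external_load) <= violation_target

if __name__ == "__main__":
    print(admit_external(total_servers=32, local_busy=8,
                         external_busy=10, external_load=12.0))  # True
```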
Archive | 2016
Murali K. Pusala; Mohsen Amini Salehi; Jayasimha Katukuri; Ying Xie; Vijay V. Raghavan
In this study, we provide an overview of the state-of-the-art technologies in the programming, computing, and storage layers of the massive data analytics landscape. We shed light on the different types of analytics that can be performed on massive data, first providing a detailed taxonomy of analytic types along with examples of each. Next, we highlight the technology trends in massive data analytics that are available to corporations, government agencies, and researchers. In addition, we enumerate several opportunities for turning massive data into knowledge. We describe and position two distinct case studies of massive data analytics being investigated in our research group: recommendation systems in e-commerce applications, and link discovery to predict unknown associations between medical concepts. Finally, we discuss the lessons we have learned and the open challenges faced by researchers and businesses in the field of massive data analytics.
IEEE Transactions on Parallel and Distributed Systems | 2018
Xiangbo Li; Mohsen Amini Salehi; Magdy A. Bayoumi; Nian-Feng Tzeng; Rajkumar Buyya
Video streams, either in the form of Video On-Demand (VOD) or live streaming, usually have to be converted (i.e., transcoded) to match the characteristics of viewers’ devices (e.g., in terms of spatial resolution or supported formats). Transcoding is a computationally expensive and time-consuming operation, so streaming service providers have to store numerous transcoded versions of a given video to serve various display devices. With the sharp increase in video streaming, however, this approach is becoming cost-prohibitive. Given that viewers’ access patterns to video streams follow a long-tail distribution, we propose to transcode the video streams with low access rates in an on-demand (i.e., lazy) manner using cloud computing services. The challenge in utilizing cloud services for on-demand video transcoding, however, is to maintain a robust QoS for viewers and cost-efficiency for streaming service providers. To address this challenge, in this paper, we present the Cloud-based Video Streaming Services (CVS2) architecture. It includes a QoS-aware scheduling component that maps transcoding tasks to Virtual Machines (VMs) by considering the affinity of the transcoding tasks with the allocated heterogeneous VMs. To maintain robustness in the presence of varying streaming requests, the architecture includes a cost-efficient VM Provisioner component that provides a self-configurable cluster of heterogeneous VMs; the cluster is reconfigured dynamically to maintain maximum affinity with the arriving workload. Simulation results obtained under diverse workload conditions demonstrate that the CVS2 architecture can maintain a robust QoS for viewers while reducing the incurred cost of the streaming service provider by up to 85 percent.
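A minimal sketch of affinity-aware mapping in the spirit of the scheduler described above: pick the VM whose queue backlog plus affinity-weighted execution time finishes earliest, and signal the provisioner when no VM can meet the deadline. The affinity table and the dictionary-based VM model are illustrative assumptions, not the CVS2 implementation.

```python
# Expected execution-time multiplier per (task type, VM type); lower
# means higher affinity. Values are made up for illustration.
AFFINITY = {
    ("change_resolution", "cpu_optimized"): 1.0,
    ("change_resolution", "gpu"):           0.4,
    ("change_codec",      "cpu_optimized"): 0.7,
    ("change_codec",      "gpu"):           0.9,
}

def completion_time(vm: dict, task_type: str, task_size: float) -> float:
    exec_time = task_size * AFFINITY[(task_type, vm["type"])]
    return vm["busy_until"] + exec_time

def map_task(vms, task_type: str, task_size: float, deadline: float):
    """Assign the task to the VM offering the earliest completion time;
    None signals the provisioner that the cluster must grow to keep QoS."""
    best = min(vms, key=lambda v: completion_time(v, task_type, task_size))
    finish = completion_time(best, task_type, task_size)
    if finish > deadline:
        return None
    best["busy_until"] = finish  # task queued on the chosen VM
    return best

if __name__ == "__main__":
    vms = [{"type": "cpu_optimized", "busy_until": 3.0},
           {"type": "gpu",           "busy_until": 0.0}]
    print(map_task(vms, "change_resolution", 10.0, deadline=8.0))  # gpu VM
```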
Mobile Cloud Computing & Services | 2017
Mahmoud Darwich; Ege Beyazit; Mohsen Amini Salehi; Magdy A. Bayoumi
Video transcoding is the process of converting a video to the format supported by the viewer’s device. Video transcoding requires huge storage and computational resources; thus, many video stream providers choose to carry it out on the cloud. Video streaming providers generally need to prepare several formats of the same video (termed pre-transcoding) and stream the appropriate format to the viewer. However, pre-transcoding requires enormous storage space and imposes a significant cost on the stream provider. More importantly, pre-transcoding has proven to be inefficient due to the long-tail access pattern to video streams in a repository. To reduce the incurred cost, in this research, we propose a method to partially pre-transcode video streams and transcode the rest in an on-demand manner. We develop a method to strike a trade-off between pre-transcoding and on-demand transcoding of video streams to reduce the overall cost. Experimental results show the efficiency of our approach, particularly when a high percentage of videos are accessed frequently. In such repositories, the proposed approach reduces the incurred cost by up to 70%.
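The trade-off at the heart of this approach can be illustrated with simple per-version arithmetic: keep a pre-transcoded version only when storing it costs less than transcoding it on demand for every expected view. The prices and the per-view re-transcoding assumption below are illustrative, not the paper's cost model.

```python
def monthly_storage_cost(size_gb: float, price_per_gb: float = 0.023) -> float:
    # Assumed flat object-storage price per GB-month.
    return size_gb * price_per_gb

def monthly_transcoding_cost(views_per_month: float, transcode_hours: float,
                             price_per_hour: float = 0.05) -> float:
    # Assumed on-demand VM price; every view triggers a fresh transcode.
    return views_per_month * transcode_hours * price_per_hour

def should_pretranscode(size_gb: float, views_per_month: float,
                        transcode_hours: float) -> bool:
    """Keep a pre-transcoded version only if storing it for a month is
    cheaper than transcoding it on demand for every expected view."""
    return (monthly_storage_cost(size_gb)
            <= monthly_transcoding_cost(views_per_month, transcode_hours))

if __name__ == "__main__":
    # A frequently watched version: cheaper to keep it stored.
    print(should_pretranscode(size_gb=2.0, views_per_month=500,
                              transcode_hours=0.2))  # True
    # A long-tail version watched once a month: transcode on demand.
    print(should_pretranscode(size_gb=2.0, views_per_month=1,
                              transcode_hours=0.2))  # False
```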
Software Architecture for Big Data and the Cloud | 2017
Deepak Poola; Mohsen Amini Salehi; Kotagiri Ramamohanarao; Rajkumar Buyya
In recent years, workflows have emerged as an important abstraction for collaborative research and for managing complex, large-scale distributed data analytics. Workflows are increasingly prevalent in various distributed environments, such as clusters, grids, and clouds. These environments provide complex infrastructures that aid workflows in scaling and in the parallel execution of their components. However, they are prone to performance variations and different types of failures. Thus, workflow management systems need to be robust against performance variations and tolerant of failures. Numerous research studies have investigated the fault-tolerance aspects of workflow management systems in different distributed systems. In this study, we analyze these efforts and provide an in-depth taxonomy of them. We present an ontology of faults and fault-tolerance techniques, and then position the existing workflow management systems with respect to the taxonomies and the techniques. In addition, we classify various failure models, metrics, tools, and support systems. Finally, we identify and discuss the strengths and weaknesses of the current techniques and provide recommendations on future directions and open areas for the research community.