Publication


Featured research published by Salim Hariri.


IEEE Transactions on Parallel and Distributed Systems | 2002

Performance-effective and low-complexity task scheduling for heterogeneous computing

Haluk Rahmi Topcuoglu; Salim Hariri; Min-You Wu

Efficient application scheduling is critical for achieving high performance in heterogeneous computing environments. The application scheduling problem has been shown to be NP-complete in general cases as well as in several restricted cases. Because of its key importance, this problem has been extensively studied, and various algorithms have been proposed in the literature, mainly for systems with homogeneous processors. Although there are a few algorithms in the literature for heterogeneous processors, they usually incur significantly high scheduling costs and may not deliver good-quality schedules at lower costs. In this paper, we present two novel scheduling algorithms for a bounded number of heterogeneous processors with the objective of simultaneously achieving high performance and fast scheduling time: the Heterogeneous Earliest-Finish-Time (HEFT) algorithm and the Critical-Path-on-a-Processor (CPOP) algorithm. The HEFT algorithm selects the task with the highest upward rank value at each step and assigns the selected task to the processor that minimizes its earliest finish time, using an insertion-based approach. The CPOP algorithm, in contrast, uses the summation of upward and downward rank values for prioritizing tasks; it also differs in the processor selection phase, scheduling the critical tasks onto the processor that minimizes the total execution time of the critical tasks. In order to provide a robust and unbiased comparison with the related work, a parametric graph generator was designed to generate weighted directed acyclic graphs with various characteristics. The comparison study, based on both randomly generated graphs and the graphs of some real applications, shows that our scheduling algorithms significantly surpass previous approaches in terms of both quality and cost of schedules, as measured by the schedule length ratio, speedup, frequency of best results, and average scheduling time metrics.
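The abstract describes HEFT's two phases concretely enough to sketch. Below is a minimal Python sketch of that logic, not the authors' implementation: the data structures (`tasks`, `succ`, `comp_cost`, `comm_cost`, `procs`) are illustrative assumptions, computation costs are assumed positive, and the insertion-based slot search of the real algorithm is simplified to append-only scheduling.

```python
def heft(tasks, succ, comp_cost, comm_cost, procs):
    """Sketch of HEFT: prioritize tasks by upward rank, then assign each
    task to the processor that minimizes its earliest finish time (EFT).
    tasks: list of task ids; succ: task -> list of successor tasks;
    comp_cost[t][p]: cost of task t on processor p; comm_cost[(u, v)]:
    mean transfer cost on edge u -> v; procs: list of processor ids."""
    rank = {}

    def upward_rank(t):
        # rank_u(t) = mean comp cost of t
        #           + max over successors s of (comm cost t->s + rank_u(s))
        if t not in rank:
            mean_cost = sum(comp_cost[t][p] for p in procs) / len(procs)
            rank[t] = mean_cost + max(
                (comm_cost.get((t, s), 0) + upward_rank(s)
                 for s in succ.get(t, [])), default=0.0)
        return rank[t]

    for t in tasks:
        upward_rank(t)

    pred = {t: [u for u in tasks if t in succ.get(u, [])] for t in tasks}
    proc_free = {p: 0.0 for p in procs}   # time each processor becomes idle
    placed = {}                           # task -> (finish_time, processor)

    # Decreasing upward rank is a valid topological order for positive costs.
    for t in sorted(tasks, key=lambda t: -rank[t]):
        best = None
        for p in procs:
            # Data from a predecessor on another processor pays comm cost.
            data_ready = max(
                (placed[u][0] + (0 if placed[u][1] == p
                                 else comm_cost.get((u, t), 0))
                 for u in pred[t]), default=0.0)
            eft = max(data_ready, proc_free[p]) + comp_cost[t][p]
            if best is None or eft < best[0]:
                best = (eft, p)
        placed[t] = best
        proc_free[best[1]] = best[0]
    return placed
```

The makespan is the maximum finish time over all tasks; the published algorithm additionally searches idle slots between already-scheduled tasks when computing EFT, which the sketch omits.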


Lecture Notes in Computer Science | 2004

Autonomic computing: an overview

Manish Parashar; Salim Hariri

The increasing scale, complexity, heterogeneity and dynamism of networks, systems and applications have made our computational and information infrastructure brittle, unmanageable and insecure. This has necessitated the investigation of an alternate paradigm for system and application design, based on strategies used by biological systems to deal with similar challenges, a vision that has been referred to as autonomic computing. The overarching goal of autonomic computing is to realize computer and software systems and applications that can manage themselves in accordance with high-level guidance from humans. Meeting the grand challenges of autonomic computing requires scientific and technological advances in a wide variety of fields, as well as new software and system architectures that support the effective integration of the constituent technologies. This paper presents an introduction to autonomic computing, its challenges, and its opportunities.


Proceedings of the Eighth Heterogeneous Computing Workshop (HCW'99) | 1999

Task scheduling algorithms for heterogeneous processors

Haluk Rahmi Topcuoglu; Salim Hariri; Min-You Wu

Scheduling computation tasks on processors is a key issue for high-performance computing. Although a large number of scheduling heuristics have been presented in the literature, most of them target only homogeneous resources. The existing algorithms for heterogeneous domains are generally inefficient because of their high complexity and/or the poor quality of their results. We present two low-complexity, efficient heuristics, the Heterogeneous Earliest-Finish-Time (HEFT) algorithm and the Critical-Path-on-a-Processor (CPOP) algorithm, for scheduling directed acyclic weighted task graphs (DAGs) on a bounded number of heterogeneous processors. We compared the performance of these algorithms against three previously proposed heuristics. The comparison study showed that our algorithms outperform previous approaches in terms of performance (schedule length ratio and speedup) and cost (time complexity).
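As a companion to the HEFT sketch above, here is a hedged sketch of the CPOP prioritization this abstract mentions, where each task's priority is the sum of its upward and downward ranks. The inputs `avg_comp` and `avg_comm` are assumed precomputed mean computation and communication costs, not the paper's own notation.

```python
def cpop_priorities(tasks, succ, avg_comp, avg_comm):
    """Sketch of CPOP prioritization: priority(t) = rank_u(t) + rank_d(t).
    Tasks whose priority equals the entry task's priority lie on the
    critical path, which CPOP pins to a single fastest processor."""
    pred = {t: [] for t in tasks}
    for u, vs in succ.items():
        for v in vs:
            pred[v].append(u)

    up, down = {}, {}

    def rank_up(t):   # longest path (costs) from t down to an exit task
        if t not in up:
            up[t] = avg_comp[t] + max(
                (avg_comm.get((t, s), 0) + rank_up(s)
                 for s in succ.get(t, [])), default=0.0)
        return up[t]

    def rank_down(t):  # longest path (costs) from an entry task up to t
        if t not in down:
            down[t] = max(
                (rank_down(u) + avg_comp[u] + avg_comm.get((u, t), 0)
                 for u in pred[t]), default=0.0)
        return down[t]

    return {t: rank_up(t) + rank_down(t) for t in tasks}
```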


Autonomic Computing Workshop | 2003

AutoMate: enabling autonomic applications on the grid

Manish Agarwal; Viraj Bhat; Hua Liu; Vincent Matossian; V. Putty; Cristina Schmidt; Guangsen Zhang; L.-X. Zhen; Manish Parashar; Bithika Khargharia; Salim Hariri

The increasing complexity, heterogeneity and dynamism of networks, systems, services and applications have made our computational/information infrastructure brittle, unmanageable and insecure. This has necessitated the investigation of a new paradigm for design, development and deployment based on strategies used by biological systems to deal with complexity, heterogeneity, and uncertainty, i.e. autonomic computing. This paper introduces the AutoMate project and describes its key components. The overall objective of AutoMate is to investigate key technologies that enable the development of autonomic grid applications that are context aware and capable of self-configuring, self-composing, self-optimizing and self-adapting. Specifically, it will investigate the definition of autonomic components, the development of autonomic applications as the dynamic composition of autonomic components, and the design of key enhancements to existing grid middleware and runtime services to support these applications.


International Performance Computing and Communications Conference | 2003

Autonomia: an autonomic computing environment

Xiangdong Dong; Salim Hariri; Lizhi Xue; Huoping Chen; Ming Zhang; Sathija Pavuluri; Soujanya Rao

The proliferation of Internet technologies, services and devices has made current networked system designs and management tools incapable of delivering reliable, secure networked systems and services. In fact, we have reached a level of complexity, heterogeneity, and rate of change at which our information infrastructure is becoming unmanageable and insecure. This has led researchers to consider alternative designs and management techniques based on strategies used by biological systems to deal with complexity, heterogeneity and uncertainty. This approach is referred to as autonomic computing. An autonomic computing system is a system that has the capabilities of being self-defining, self-healing, self-configuring, self-optimizing, etc. We present our approach to implementing an autonomic computing infrastructure, Autonomia, which provides dynamically programmable control and management services to support the development and deployment of smart (intelligent) applications. The Autonomia environment provides application developers with the tools required to specify the appropriate control and management schemes to maintain any quality-of-service requirement or application attribute/functionality (e.g., performance, fault tolerance, security), as well as the core autonomic middleware services to maintain the autonomic requirements of a wide range of network applications and services. We have successfully implemented a proof-of-concept prototype system that can support the self-configuring, self-deploying and self-healing of networked applications.
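The abstract does not specify Autonomia's services at the code level, but the feedback pattern underneath self-configuring and self-healing behavior can be sketched. The loop below is an illustrative monitor/analyze/plan/execute skeleton, not Autonomia's actual API; all four callables are hypothetical placeholders.

```python
import time

def autonomic_loop(monitor, analyze, plan, execute, period=5.0):
    """Generic self-management loop: observe an application attribute
    (performance, fault, security), detect violated requirements, and
    apply a corrective action such as redeploying a failed component."""
    while True:
        state = monitor()             # e.g. poll service health and metrics
        symptoms = analyze(state)     # requirements violated by this state
        if symptoms:
            execute(plan(symptoms))   # e.g. restart, reconfigure, redeploy
        time.sleep(period)
```

For self-healing, for instance, `monitor` might check process liveness and `plan` return a restart action for each dead component; the same loop shape serves self-optimization with different callables.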


International Conference on Autonomic Computing | 2004

A component-based programming model for autonomic applications

Hua Liu; Manish Parashar; Salim Hariri

The emergence of pervasive wide-area distributed computing environments, such as pervasive information systems and computational grids, has enabled new generations of applications that are based on seamless access, aggregation and interaction. However, the inherent complexity, heterogeneity and dynamism of these systems require a change in how the applications are developed and managed. In this paper we present a component-based programming framework to support the development of autonomic self-managed applications. The framework enables the development of autonomic components and the formulation of autonomic applications as the dynamic composition and management of autonomic components. The operation of the proposed framework is illustrated using a forest fire application.
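The paper describes its framework at the level of autonomic components and their dynamic composition. The sketch below is a hypothetical rendering of that idea, separating a component's functional port from declaratively supplied self-management rules; the class and method names are assumptions, not the framework's actual interface.

```python
from abc import ABC, abstractmethod

class AutonomicComponent(ABC):
    """Illustrative autonomic component: functional behavior (compute) is
    separated from management behavior (manage), so an application can be
    composed, monitored, and re-composed from such components at run time."""

    @abstractmethod
    def compute(self, inputs):
        """Functional port: the component's actual work."""

    @abstractmethod
    def sense(self):
        """Expose internal state for the management logic to inspect."""

    @abstractmethod
    def rules(self, context):
        """Yield hypothetical (condition, action) self-management rules
        appropriate to the current execution context."""

    def manage(self, context):
        """Fire every rule whose condition holds in the sensed state."""
        state = self.sense()
        for condition, action in self.rules(context):
            if condition(state):
                action(self)
```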


IEEE Transactions on Software Engineering | 1986

Distributed program reliability analysis

V.K.P. Kumar; Salim Hariri; Cauligi S. Raghavendra

The reliability of distributed processing systems can be expressed in terms of the reliability of the processing elements that run the programs, the reliability of the processing elements holding the required files, and the reliability of the communication links used in file transfers. The authors introduce two reliability measures, namely distributed program reliability and distributed system reliability, to accurately model the reliability of distributed systems. The first measure describes the probability of successful execution of a distributed program which runs on some processing elements and needs to communicate with other processing elements for remote files, while the second measure describes the probability that all the programs of a given set can run successfully. The notion of minimal file spanning trees is introduced to efficiently evaluate these reliability measures. Graph theory techniques are used to systematically generate file spanning trees that provide all the required connections. The technique is general and can be used in a dynamic environment for efficient reliability evaluation.
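For intuition, distributed program reliability can be computed exactly on a small system by enumerating link states, as in the hedged sketch below. The parameter names are illustrative, and nodes are assumed perfectly reliable for brevity (the paper's model also covers processing element reliability); the minimal file spanning tree technique exists precisely to avoid this exponential enumeration.

```python
from itertools import product

def distributed_program_reliability(nodes, links, link_rel, run_node,
                                    needed_files, holds):
    """Probability that run_node can reach, over working links, at least
    one node holding each file the program needs. Brute force over the
    2^|links| link up/down states; usable only for small examples.
    holds: file -> set of nodes storing a copy of that file."""
    links = list(links)
    total = 0.0
    for state in product([True, False], repeat=len(links)):
        prob = 1.0
        adj = {n: set() for n in nodes}
        for (a, b), up in zip(links, state):
            prob *= link_rel[(a, b)] if up else 1.0 - link_rel[(a, b)]
            if up:
                adj[a].add(b)
                adj[b].add(a)
        # Nodes reachable from the node running the program.
        seen, stack = {run_node}, [run_node]
        while stack:
            for m in adj[stack.pop()]:
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        if all(seen & set(holds[f]) for f in needed_files):
            total += prob
    return total
```

For example, two nodes joined by a single link of reliability 0.9, with the program's only required file on the remote node, give a distributed program reliability of 0.9.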


Cluster Computing | 2006

The Autonomic Computing Paradigm

Salim Hariri; Bithika Khargharia; Houping Chen; Jingmei Yang; Yeliang Zhang; Manish Parashar; Hua Liu

The advances in computing and communication technologies and software tools have resulted in an explosive growth in networked applications and information services that cover all aspects of our life. These services and applications are inherently complex, dynamic and heterogeneous. In a similar way, the underlying information infrastructure, e.g. the Internet, is large, complex, heterogeneous and dynamic, globally aggregating large numbers of independent computing and communication resources, data stores and sensor networks. The combination of the two results in application development, configuration and management complexities that break current computing paradigms, which are based on static behaviors, interactions and compositions of components and/or services. As a result, applications, programming environments and information infrastructures are rapidly becoming brittle, unmanageable and insecure. This has led researchers to consider alternative programming paradigms and management techniques that are based on strategies used by biological systems to deal with complexity, dynamism, heterogeneity and uncertainty. Autonomic computing is inspired by the human autonomic nervous system that handles complexity and uncertainties, and aims at realizing computing systems and applications capable of managing themselves with minimum human intervention. In this paper we first give an overview of the architecture …


IEEE Transactions on Knowledge and Data Engineering | 2005

A new dependency and correlation analysis for features

Guangzhi Qu; Salim Hariri; Mazin S. Yousif

The quality of the data being analyzed is a critical factor that affects the accuracy of data mining algorithms. There are two important aspects of data quality: relevance and redundancy. The inclusion of irrelevant and redundant features in the data mining model results in poor predictions and high computational overhead. This paper presents an efficient method that considers both the relevance of the features and the pairwise correlation between features in order to improve the prediction accuracy of our data mining algorithm. We introduce a new feature correlation metric, Q_Y(X_i, X_j), and a feature subset merit measure, e(S), to quantify the relevance of and the correlation among features with respect to a desired data mining task (e.g., detection of abnormal behavior in a network service due to network attacks). Our approach takes into consideration not only the dependency among the features, but also their dependency with respect to a given data mining task. Our analysis shows that the correlation relationship among features depends on the decision task; thus, features display different behaviors as the decision task changes. We applied our data mining approach to network security and validated it using the DARPA KDD99 benchmark data set. Our results show that, using the new decision-dependent correlation metric, we can efficiently detect rare network attacks such as User to Root (U2R) and Remote to Local (R2L) attacks. The best previously reported detection rates for U2R and R2L on the KDD99 data sets were 13.2 percent and 8.4 percent, respectively, at a 0.5 percent false alarm rate. For U2R attacks, our approach achieves a 92.5 percent detection rate with a false alarm rate of 0.7587 percent. For R2L attacks, our approach achieves a 92.47 percent detection rate with a false alarm rate of 8.35 percent.
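The abstract names Q_Y(X_i, X_j) and e(S) but does not define them, so the sketch below substitutes a classic relevance-versus-redundancy merit (in the style of correlation-based feature selection) purely to illustrate the trade-off being quantified; it is not the paper's metric.

```python
import numpy as np

def subset_merit(X, y, subset):
    """Illustrative stand-in for a feature subset merit measure: reward
    features correlated with the decision y, penalize pairwise feature
    redundancy. X: (n_samples, n_features) array; y: numeric labels;
    subset: list of column indices."""
    k = len(subset)
    relevance = np.mean([abs(np.corrcoef(X[:, i], y)[0, 1]) for i in subset])
    if k == 1:
        return relevance
    redundancy = np.mean([abs(np.corrcoef(X[:, i], X[:, j])[0, 1])
                          for a, i in enumerate(subset)
                          for j in subset[a + 1:]])
    return k * relevance / np.sqrt(k + k * (k - 1) * redundancy)
```

Because relevance is measured against y, changing the decision task changes which subsets score well, which matches the abstract's observation that feature correlations are decision dependent.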


Cluster Computing | 2008

Autonomic power and performance management for computing systems

Bithika Khargharia; Salim Hariri; Mazin S. Yousif

With the increased complexity of platforms, the growing demands of applications and data centers' server sprawl, power consumption is reaching unsustainable limits. The need for improved power management is becoming essential for many reasons, including reduced power consumption and cooling, improved density, reliability, and compliance with environmental standards. This paper presents a theoretical framework and methodology for autonomic power and performance management in e-business data centers. We optimize for power and performance (performance-per-watt) at each level of the hierarchy while maintaining scalability. We adopt a mathematically rigorous optimization approach to minimize power while meeting performance constraints. Our experimental results show around 72% savings in power while maintaining performance as compared to static power management techniques, and 69.8% additional savings when both global and local optimizations are applied.
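The "minimize power while meeting performance constraints" objective can be illustrated with a toy search over per-server power states. This is a hedged sketch under assumed data shapes, not the paper's hierarchical optimization.

```python
from itertools import product

def min_power_configuration(server_states, perf_demand):
    """Pick one (power_watts, throughput) state per server so that total
    throughput meets perf_demand at minimum total power. Exhaustive for
    clarity; a real data center needs the kind of rigorous, hierarchical
    optimization the paper develops."""
    best = None
    for choice in product(*server_states):
        power = sum(p for p, _ in choice)
        perf = sum(t for _, t in choice)
        if perf >= perf_demand and (best is None or power < best[0]):
            best = (power, choice)
    return best  # None if the demand is infeasible

# Example: two servers, each with sleep / low / high states.
states = [[(5, 0), (90, 40), (140, 70)],
          [(5, 0), (80, 35), (130, 65)]]
print(min_power_configuration(states, 70))  # -> (145, ((140, 70), (5, 0)))
```

Note how the optimum parks one server in its sleep state rather than running both at partial load, the kind of consolidation decision static power management misses.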

Collaboration


Dive into Salim Hariri's collaborations.

Top Co-Authors

Yoonhee Kim

Sookmyung Women's University

Cauligi S. Raghavendra

University of Southern California

Geoffrey C. Fox

Indiana University Bloomington
