Osama Al-Haj Hassan
University of Georgia
Publication
Featured research published by Osama Al-Haj Hassan.
winter simulation conference | 2007
Gregory A. Silver; Osama Al-Haj Hassan; John A. Miller
Ontologies allow researchers, domain experts, and software agents to share a common understanding of the concepts and relationships of a domain. The past few years have seen the publication of ontologies for a large number of domains. The modeling and simulation community is beginning to see potential for using these ontologies in the modeling process. This paper presents a method for using the knowledge encoded in ontologies to facilitate the development of simulation models. It suggests a technique that establishes relationships between domain ontologies and a modeling ontology and then uses the relationships to instantiate a simulation model as ontology instances. Techniques for translating these instances into XML-based markup languages and then into executable models for various software packages are also presented.
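As a rough, hypothetical illustration of the general idea (not the authors' actual method or tooling), the sketch below maps instances of a made-up domain ontology onto modeling-ontology concepts and serializes the result into a simple XML-based markup; the concept names and the mapping are assumptions.

```python
import xml.etree.ElementTree as ET

# Hypothetical mapping from domain-ontology concepts to modeling-ontology
# concepts (e.g. a "Machine" in a manufacturing ontology becomes a "Server"
# node in a discrete-event simulation model). Purely illustrative.
CONCEPT_MAP = {
    "Machine": "Server",
    "ConveyorBelt": "Transport",
    "PartArrival": "Source",
}

def instantiate_model(domain_instances):
    """Turn domain-ontology instances into simulation-model instances."""
    model = []
    for inst in domain_instances:
        node_type = CONCEPT_MAP.get(inst["concept"])
        if node_type is not None:
            model.append({"type": node_type, "name": inst["name"],
                          "properties": inst.get("properties", {})})
    return model

def to_xml(model):
    """Serialize the model instances into a simple XML-based markup."""
    root = ET.Element("SimulationModel")
    for node in model:
        el = ET.SubElement(root, node["type"], name=node["name"])
        for key, value in node["properties"].items():
            ET.SubElement(el, "Property", name=key, value=str(value))
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    instances = [
        {"concept": "PartArrival", "name": "arrivals",
         "properties": {"rate": 4.0}},
        {"concept": "Machine", "name": "lathe",
         "properties": {"serviceTime": 1.5}},
    ]
    print(to_xml(instantiate_model(instances)))
```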
international conference on web services | 2009
Osama Al-Haj Hassan; Lakshmish Ramaswamy; John A. Miller
A recent surge in popularity has established mashups as an important category of Web 2.0 applications. Mashups are essentially Web services that are often created by end-users; they aggregate and manipulate data from sources around the World Wide Web. Surprisingly, there are very few studies on the scalability and performance of mashups. In this paper, we study caching as a vehicle for enhancing the scalability and efficiency of mashups. Although caching has long been used to improve the performance of Web services, mashups pose some unique challenges that necessitate a more dynamic approach to caching. Towards this end, we present MACE - a cache specifically designed for mashups. In designing the MACE framework, this paper makes three technical contributions. First, we present a model for representing mashups and analyzing their performance. Second, we propose an indexing scheme that enables efficient reuse of cached data for newly created mashups. Finally, this paper also describes a novel caching policy that analyzes the costs and benefits of caching data at various stages of different mashups and selectively stores the data that is most effective in improving system scalability. We report experiments studying the performance of the MACE system.
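To make the cost-benefit idea concrete, here is a minimal sketch of a cache that admits an item only when its estimated benefit (fetch cost weighted by observed request frequency) beats the least valuable item already cached. The scoring rule, class name, and threshold are assumptions for illustration, not MACE's actual policy.

```python
class CostBenefitCache:
    """Toy cache that weighs the benefit of keeping a result against the
    space it occupies. The scoring rule is an assumption, not MACE's policy."""

    def __init__(self, capacity, threshold=1.0):
        self.capacity = capacity
        self.threshold = threshold
        self.store = {}          # key -> (value, score at admission time)
        self.access_count = {}   # key -> number of requests seen so far

    def get(self, key, fetch, fetch_cost):
        """Return the value for key, fetching (and possibly caching) on a miss."""
        self.access_count[key] = self.access_count.get(key, 0) + 1
        if key in self.store:
            return self.store[key][0]
        value = fetch()
        # Benefit estimate: how expensive the fetch is times how often it is asked for.
        score = fetch_cost * self.access_count[key]
        if score >= self.threshold:
            self._admit(key, value, score)
        return value

    def _admit(self, key, value, score):
        if len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda k: self.store[k][1])
            if self.store[victim][1] >= score:
                return  # everything cached is more valuable; skip admission
            del self.store[victim]
        self.store[key] = (value, score)
```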
collaborative computing | 2008
Osama Al-Haj Hassan; Lakshmish Ramaswamy; John A. Miller; Khaled Rasheed; E. Rodney Canfield
Overlay network-based collaborative applications such as instant messaging, content sharing, and Internet telephony have recently become increasingly popular. Many of these applications rely upon data replication to achieve better performance, scalability, and reliability. However, replication entails various costs, such as storage for holding replicas and communication overheads for ensuring replica consistency. While simple rule-of-thumb strategies are popular for managing the cost-benefit tradeoffs of replication, they cannot ensure optimal resource utilization. This paper explores a multi-objective optimization approach for replica management, which is unique in the sense that we view the various factors influencing replication decisions, such as access latency, storage costs, and data availability, as objectives rather than constraints. This enables us to search for solutions that yield close-to-optimal values for these parameters. We propose two novel algorithms, namely the multi-objective Evolutionary (MOE) algorithm and the multi-objective Randomized Greedy (MORG) algorithm, for deciding the number of replicas as well as their placement within the overlay. While MOE yields higher-quality solutions, MORG is better in terms of computational efficiency. The paper reports a series of experiments that demonstrate the effectiveness of the proposed algorithms.
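A simplified sketch of the randomized-greedy flavor of the problem is shown below. It scalarizes the three objectives named in the abstract (access latency, storage cost, availability) into one weighted score, which is a simplification of genuine multi-objective search; the cost model, function names, and weights are assumptions, not the MOE/MORG algorithms themselves.

```python
import random

def placement_score(replica_nodes, access_freq, latency, storage_cost,
                    fail_prob, weights=(1.0, 1.0, 1.0)):
    """Score a candidate replica set on latency, storage, and availability.
    access_freq: client -> request rate; latency: client -> node -> delay;
    storage_cost / fail_prob: node -> cost / failure probability."""
    w_lat, w_sto, w_avl = weights
    # Expected latency: each client reads from its closest replica.
    lat = sum(freq * min(latency[c][r] for r in replica_nodes)
              for c, freq in access_freq.items())
    sto = sum(storage_cost[r] for r in replica_nodes)
    # Unavailability: probability that every replica is down at once.
    unavail = 1.0
    for r in replica_nodes:
        unavail *= fail_prob[r]
    return w_lat * lat + w_sto * sto + w_avl * unavail

def randomized_greedy(nodes, k, sample_size, **kw):
    """Pick k replica locations; at each step sample a few candidate nodes
    and keep the one that most improves the combined score."""
    chosen = []
    for _ in range(k):
        remaining = [n for n in nodes if n not in chosen]
        candidates = random.sample(remaining, min(sample_size, len(remaining)))
        best = min(candidates,
                   key=lambda n: placement_score(chosen + [n], **kw))
        chosen.append(best)
    return chosen
```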
international conference on web services | 2010
Osama Al-Haj Hassan; Lakshmish Ramaswamy; John A. Miller
Mashups have recently gained tremendous popularity as an important Web 2.0 application. Mashups provide end-users with an opportunity to create personalized Web services that aggregate and manipulate data from multiple diverse sources distributed across the Web. However, this increase in personalization also results in new scalability and performance challenges. Surprisingly, there are very few studies on the performance aspects of mashups. In this paper, we propose two novel techniques to enhance the scalability and performance of mashup platforms. The first is an efficient mashup merging scheme that avoids duplicate computations and unnecessary data retrievals by detecting common operator sequences in different mashups and executing them together. Second, we propose a canonical form-based mashup reordering scheme that not only transforms individual mashups into their most efficient forms but also increases the effectiveness of mashup merging. This paper also reports a number of experiments studying the benefits and costs of the proposed techniques.
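The sketch below illustrates the flavor of both ideas on a toy representation in which a mashup is a list of (operator, argument) steps: a shared leading operator sequence is detected so it can execute once, and a tiny "canonicalization" pushes filter steps ahead of later steps so more prefixes line up. The representation and the reordering rule are assumptions for illustration, not the paper's formal model.

```python
def common_prefix(a, b):
    """Length of the shared leading operator sequence of two mashups."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def merge_two(a, b):
    """Split two mashups into a shared prefix (executed once) and
    per-mashup suffixes (executed on the shared prefix's output)."""
    k = common_prefix(a, b)
    return a[:k], a[k:], b[k:]

def canonicalize(mashup):
    """Toy canonical form: fetches first, then filters in a fixed order
    (so equivalent mashups end up with identical step orderings), then the
    rest. Filtering early is also cheaper, since later steps see less data."""
    fetches = [s for s in mashup if s[0] == "fetch"]
    filters = sorted((s for s in mashup if s[0] == "filter"), key=str)
    rest = [s for s in mashup if s[0] not in ("fetch", "filter")]
    return fetches + filters + rest

if __name__ == "__main__":
    m1 = canonicalize([("fetch", "rss://news"), ("sort", "date"),
                       ("filter", "sports")])
    m2 = canonicalize([("fetch", "rss://news"), ("filter", "sports"),
                       ("truncate", 10)])
    print(merge_two(m1, m2))  # shared prefix: fetch + filter, executed once
```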
collaborative computing | 2007
Osama Al-Haj Hassan; Lakshmish Ramaswamy
Unstructured peer-to-peer (P2P) applications have recently become extremely popular, and searching in these networks has been a hot research topic. Flooding-based searching, which has been the basis of real-world P2P networks, is inherently inefficient and unscalable. Replication has proven to be an effective strategy for improving the efficiency and scalability of unstructured P2P networks. Previous research has largely focused on replicating resources or their references. This paper considers a replication solution from a different perspective; we investigate replicating messages and the effect of doing so on the overload problem. We propose two message replication strategies. The distance-based message replication technique replicates query messages at different topological regions of the network. The landmarks-based technique further optimizes the performance by considering both the topology and the physical proximities of the peers in the overlay. Our experiments show that the proposed techniques substantially reduce the message traffic in the overlay while maintaining query performance.
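A minimal sketch of the distance-based idea follows, under simplifying assumptions: the overlay is given as an adjacency dict, and replication targets are chosen greedily to be far (in hops) from the query origin and from each other, so copies of the query start out in distinct regions. The landmarks-based variant would additionally factor in physical proximity, which this sketch omits.

```python
from collections import deque

def hop_distance(graph, src):
    """BFS hop counts from src over an overlay given as an adjacency dict."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

def pick_replication_targets(graph, origin, k):
    """Greedily pick k peers that are far from the origin and from each
    other; copies of the query are then handed to these peers. Unreachable
    peers count as distance 0, so they are never picked. This greedy rule
    is an illustration, not the paper's exact strategy."""
    targets = []
    dist_maps = {origin: hop_distance(graph, origin)}
    for _ in range(k):
        best = max(
            (n for n in graph if n != origin and n not in targets),
            key=lambda n: min(dist_maps[t].get(n, 0)
                              for t in [origin] + targets),
        )
        targets.append(best)
        dist_maps[best] = hop_distance(graph, best)
    return targets
```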
International Journal of Web Services Research | 2010
John A. Miller; Osama Al-Haj Hassan; Lakshmish Ramaswamy
In recent years, Web 2.0 applications have experienced tremendous growth in popularity. Mashups are a key category of Web 2.0 applications, which empower end-users with a highly personalized mechanism to aggregate and manipulate data from multiple sources distributed across the Web. Surprisingly …
international conference on the move to meaningful internet systems | 2010
Osama Al-Haj Hassan; Lakshmish Ramaswamy; John A. Miller
Recently, mashups have emerged as an important class of Web 2.0 collaborative applications. Mashups can be conceived as personalized Web services that aggregate and manipulate data from multiple, geographically distributed Web sources. Mashups, while enhancing personalization, bring up new scalability and performance challenges. The fact that most existing mashup platforms are centralized further exacerbates the scalability challenges. Towards addressing these challenges, in this paper we present the design, implementation, and evaluation of CoMaP - a cooperative information system for mashup execution. The design of CoMaP is characterized by a scalable architecture with multiple cooperative nodes distributed across the Internet and possibly multiple controllers that plan and coordinate mashup execution. In our architecture, an individual mashup can be executed at several collaborative nodes, with each node executing part of the mashup. CoMaP includes a unique mashup deployment scheme that decides which nodes would be involved in executing an individual mashup and which operators they would host. Also, CoMaP continuously adapts to overlay dynamics and to user actions such as the creation of new mashups or the deletion of existing ones. Furthermore, CoMaP provides failure resiliency, which is necessary for cooperative information systems. Our experimental study indicates that the proposed techniques yield improved system performance.
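The sketch below gives a hypothetical flavor of the deployment decision: each operator of a mashup chain is assigned to the cooperative node minimizing a toy cost that combines the link delay from the previous operator's node with the candidate node's current load. The cost model, names, and the greedy strategy are assumptions, not CoMaP's actual planner.

```python
def place_operators(operators, nodes, link_delay, node_load, data_out):
    """Greedy placement of a mashup's operator chain onto cooperative nodes.
    operators: ordered list of operator ids; nodes: candidate node ids;
    link_delay: node -> node -> delay; node_load: node -> load (mutated);
    data_out: operator -> estimated output size. Illustrative only."""
    placement = {}
    prev_node = None
    for op in operators:
        def cost(node):
            # Delay of shipping the previous operator's output to this node,
            # weighted by how much data flows, plus the node's current load.
            delay = 0.0 if prev_node is None else link_delay[prev_node][node]
            return data_out.get(op, 1.0) * delay + node_load[node]
        best = min(nodes, key=cost)
        placement[op] = best
        node_load[best] += 1.0  # crude load update after each assignment
        prev_node = best
    return placement
```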
Journal of Network and Computer Applications | 2014
Osama Al-Haj Hassan; Thamer Al-Rousan; Anas Abu Taleb; Adi Maaita
Mashups are a key category of personalized Web 2.0 applications. Because of the personalization that Web 2.0 applications offer, the number of mashups hosted by a mashup platform keeps growing, and end-users are overwhelmed by it; they cannot easily find mashups of interest. In this paper, we propose a novel mashup ranking technique based on the popular Vector Space Model (VSM) for mashups that use RSS feeds as data sources. Mashups that are ranked higher should be more interesting to end-users. To evaluate our mashup ranking technique, we implement it in a prototype in which end-users select mashups that they consider interesting. We implicitly collect the end-users' mashup selections, record the outcome of our ranking technique, and then analyze them. The recorded R-Precision value of our technique is on average 30% higher than that of a binary ranking technique, which shows an improvement in capturing mashups that match end-user interests. In our design, we make sure our mashup ranking technique scales well to an increasing number of mashups.
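As a concrete (hypothetical) illustration of how VSM-based ranking of this kind typically works, the sketch below builds TF-IDF vectors from tokenized RSS feed content and ranks mashups by cosine similarity to a user-interest profile; the profile construction and weighting are assumptions and may differ from the paper's technique.

```python
import math
from collections import Counter

def tf_idf_vectors(documents):
    """Build TF-IDF vectors for a list of token lists (the classic VSM)."""
    df = Counter()
    for doc in documents:
        df.update(set(doc))
    n = len(documents)
    vectors = []
    for doc in documents:
        tf = Counter(doc)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def rank_mashups(mashup_feeds, user_profile_tokens):
    """Rank mashups (given as token lists from their RSS feed items) by
    similarity to an end-user interest profile; returns mashup indices,
    most relevant first. The profile is a hypothetical construct here."""
    docs = mashup_feeds + [user_profile_tokens]
    vectors = tf_idf_vectors(docs)
    profile_vec = vectors[-1]
    scored = [(cosine(vec, profile_vec), i)
              for i, vec in enumerate(vectors[:-1])]
    return [i for _, i in sorted(scored, reverse=True)]
```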
Enterprise Information Systems | 2014
Osama Al-Haj Hassan; Lakshmish Ramaswamy; Fadi Hamad; Anas Abu Taleb
Since the advent of Web 2.0, personalised applications such as mashups have become widely popular. Mashups enable end-users to fetch data from distributed data sources and refine it based on their personal needs. The high degree of personalisation that mashups offer comes at the expense of performance and scalability. These scalability challenges are exacerbated by the centralised architectures of current mashup platforms. In this paper, we address the performance and scalability issues by designing CoMaP – a distributed mashup platform. CoMaP's architecture comprises several cooperative mashup processing nodes distributed over the Internet, upon which mashups can be fully or partially executed. CoMaP incorporates a dynamic and efficient scheme for deploying mashups on the processing nodes. Our scheme considers a number of parameters, such as variations in link delays and bandwidths and loads on mashup processing nodes. CoMaP includes effective and low-cost mechanisms for balancing loads on the processing nodes as well as for handling node failures. Furthermore, we propose novel techniques that leverage keyword synonyms, ontologies, and caching to enhance the end-user experience. This paper reports several experiments to comprehensively study CoMaP's performance. The results demonstrate CoMaP's benefits as a scalable distributed mashup platform.
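One of the end-user-experience features mentioned, keyword-synonym matching, can be sketched as below with a toy in-memory synonym table; the table, names, and matching rule are assumptions, not the synonym or ontology sources CoMaP actually uses.

```python
# Toy synonym table; the paper leverages keyword synonyms and ontologies,
# but its actual sources are not reproduced here.
SYNONYMS = {
    "film": {"movie", "cinema"},
    "weather": {"forecast", "climate"},
}

def expand_query(keywords):
    """Expand a keyword query with known synonyms."""
    expanded = set(keywords)
    for kw in keywords:
        expanded |= SYNONYMS.get(kw, set())
    return expanded

def find_mashups(mashup_tags, keywords):
    """Return mashups whose tag sets overlap the expanded query."""
    query = expand_query(keywords)
    return [name for name, tags in mashup_tags.items() if tags & query]

if __name__ == "__main__":
    catalog = {"MovieNews": {"movie", "news"}, "RainAlert": {"forecast"}}
    print(find_mashups(catalog, ["film", "weather"]))  # both mashups match
```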
Archive | 2013
Anas Abu Taleb; Tareq Alhmiedat; Osama Al-Haj Hassan; Nidal M. Turab