Joe Tekli
Lebanese American University
Publications
Featured research published by Joe Tekli.
Computer Science Review | 2009
Joe Tekli; Richard Chbeir; Kokou Yetongnon
In recent years, XML has been established as a major means for information management, and has been broadly utilized for complex data representation (e.g., multimedia objects). Owing to the unparalleled growth in the use of the XML standard, developing efficient techniques for comparing XML-based documents has become essential in the database and information retrieval communities. In this paper, we provide an overview of XML similarity/comparison, reviewing existing research in the area. We also detail the possible applications of XML comparison processes in various fields, ranging over data warehousing, data integration, classification/clustering and XML querying, and discuss several pressing and emerging directions for future research.
IEEE Transactions on Services Computing | 2012
Joe Tekli; Ernesto Damiani; Richard Chbeir; Gabriele Gianini
The web services (WS) technology provides a comprehensive solution for representing, discovering, and invoking services in a wide variety of environments, including Service Oriented Architectures (SOA) and grid computing systems. At the core of WS technology lie a number of XML-based standards, such as the Simple Object Access Protocol (SOAP), that have successfully ensured WS extensibility, transparency, and interoperability. Nonetheless, there is an increasing demand to enhance WS performance, which is severely impaired by XML's verbosity. SOAP communications produce considerable network traffic, making them unfit for distributed, loosely coupled, and heterogeneous computing environments such as the open Internet. They also introduce higher latency and processing delays than other technologies, like Java RMI and CORBA. WS research has recently focused on SOAP performance enhancement. Many approaches build on the observation that SOAP message exchange usually involves highly similar messages (those created by the same implementation usually have the same structure, and those sent from a server to multiple clients tend to show similarities in structure and content). Similarity evaluation and differential encoding have thus emerged as SOAP performance enhancement techniques. The main idea is to identify the common parts of SOAP messages, to be processed only once, avoiding a large amount of overhead. Other approaches investigate nontraditional processor architectures, including micro- and macro-level parallel processing solutions, so as to further increase the processing rates of SOAP/XML software toolkits. This survey paper provides a concise yet comprehensive review of the research efforts aimed at SOAP performance enhancement. A unified view of the problem is provided, covering almost every phase of SOAP processing, ranging over message parsing, serialization, deserialization, compression, multicasting, security evaluation, and data/instruction-level processing.
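The differential-encoding idea at the heart of several of the surveyed approaches can be sketched in a few lines: encode a new message as edit operations against a cached template that sender and receiver both hold, so only the non-common parts travel over the wire. The sketch below is illustrative only, using Python's difflib in place of the specialized diff algorithms the surveyed systems employ; encode_diff and decode_diff are hypothetical names, not from any surveyed toolkit.

```python
# Minimal sketch of differential SOAP encoding, assuming sender and
# receiver share a cached "template" message (names are illustrative).
import difflib

def encode_diff(template: str, message: str) -> list:
    """Encode `message` as edit operations against `template`."""
    matcher = difflib.SequenceMatcher(None, template, message)
    ops = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))             # reuse template bytes
        else:
            ops.append(("insert", message[j1:j2]))   # ship only new content
    return ops

def decode_diff(template: str, ops: list) -> str:
    """Rebuild the original message from the template and the ops."""
    parts = []
    for op in ops:
        if op[0] == "copy":
            parts.append(template[op[1]:op[2]])
        else:
            parts.append(op[1])
    return "".join(parts)
```

Because same-implementation SOAP messages share most of their structure, the "copy" operations typically dominate and the transmitted payload shrinks accordingly.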
Journal of Web Semantics | 2012
Joe Tekli; Richard Chbeir
XML similarity evaluation has become a central issue in the database and information retrieval communities, its applications ranging over document clustering, version control, data integration and ranked retrieval. Various algorithms for comparing hierarchically structured data, XML documents in particular, have been proposed in the literature. Most of them make use of techniques for finding the edit distance between tree structures, XML documents being commonly modeled as Ordered Labeled Trees. Yet, a thorough investigation of current approaches led us to identify several similarity aspects, i.e., sub-tree related structural and semantic similarities, which are not sufficiently addressed when comparing XML documents. In this paper, we provide an integrated and fine-grained comparison framework to deal with both structural and semantic similarities in XML documents (detecting the occurrences and repetitions of structurally and semantically similar sub-trees), and to allow the end-user to adjust the comparison process according to her requirements. Our framework consists of four main modules for (i) discovering the structural commonalities between sub-trees, (ii) identifying sub-tree semantic resemblances, (iii) computing tree-based edit operation costs, and (iv) computing tree edit distance. Experimental results demonstrate higher comparison accuracy with respect to alternative methods, while timing experiments reflect the impact of semantic similarity on overall system performance.
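To make the tree-edit-distance setting concrete, here is a minimal sketch of the kind of dynamic-programming computation over Ordered Labeled Trees that such frameworks build on. It uses unit costs throughout; in the paper's framework, the structural and semantic modules would supply the operation costs instead. All names are illustrative.

```python
# Minimal sketch of ordered labeled tree edit distance (unit costs;
# a semantic cost model would replace `update_cost`).
class Node:
    def __init__(self, label, children=()):
        self.label, self.children = label, tuple(children)

def tree_size(t):
    return 1 + sum(tree_size(c) for c in t.children)

def update_cost(a, b):
    return 0 if a == b else 1   # plug semantic relatedness in here

def edit_distance(t1, t2):
    # Classic DP alignment of the two child sequences, where whole
    # sub-trees may be inserted, deleted, or recursively compared.
    c1, c2 = t1.children, t2.children
    m, n = len(c1), len(c2)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + tree_size(c1[i - 1])   # delete sub-tree
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + tree_size(c2[j - 1])   # insert sub-tree
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + tree_size(c1[i - 1]),
                d[i][j - 1] + tree_size(c2[j - 1]),
                d[i - 1][j - 1] + edit_distance(c1[i - 1], c2[j - 1]),
            )
    return update_cost(t1.label, t2.label) + d[m][n]
```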
International Conference on Web Engineering | 2009
Fekade Getahun; Joe Tekli; Richard Chbeir; Marco Viviani; Kokou Yetongnon
Merging related RSS news (coming from one or different sources) is beneficial for end-users with different backgrounds (journalists, economists, etc.), particularly those accessing similar information. In this paper, we provide a practical approach to both measuring the relatedness of, and identifying the relationships between, RSS elements. Our approach is based on the concepts of semantic neighborhood and the vector space model, and considers both the content and structure of RSS news items.
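A minimal sketch of the vector-space side of such a relatedness measure: each RSS item's text is expanded with its semantic neighborhood (here a toy synonym table standing in for the paper's neighborhood construction) and the expanded term vectors are compared by cosine similarity. The table and tokenization are illustrative assumptions.

```python
# Sketch of a vector-space relatedness measure between two RSS item
# texts, with a toy semantic-neighborhood expansion.
import math
from collections import Counter

NEIGHBORHOOD = {"market": {"economy"}, "economy": {"market"}}  # toy data

def expand(tokens):
    out = set(tokens)
    for t in tokens:
        out |= NEIGHBORHOOD.get(t, set())   # pull in related terms
    return out

def relatedness(text1: str, text2: str) -> float:
    v1 = Counter(expand(text1.lower().split()))
    v2 = Counter(expand(text2.lower().split()))
    dot = sum(v1[t] * v2[t] for t in v1)
    norm = (math.sqrt(sum(x * x for x in v1.values()))
            * math.sqrt(sum(x * x for x in v2.values())))
    return dot / norm if norm else 0.0
```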
International World Wide Web Conference | 2010
Fekade Getahun Taddesse; Joe Tekli; Richard Chbeir; Marco Viviani; Kokou Yetongnon
Merging XML documents can be of key importance in several applications. For instance, merging RSS news from the same or different sources and providers can be beneficial for end-users in various scenarios. In this paper, we address this issue and explore the relatedness measure between RSS elements. We show how to define and compute exclusive relations between any two elements, and provide several predefined merging operators that can be extended and adapted to human needs. We also report a set of experiments conducted to validate our approach.
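As a rough illustration of what one predefined merging operator might look like, the toy sketch below union-merges two related RSS items, keeping shared fields once and retaining both variants on conflict. The field handling and dict representation are illustrative stand-ins, not the paper's operators.

```python
# Toy union-merge of two related RSS items represented as dicts
# (field names and conflict policy are illustrative).
def union_merge(item1: dict, item2: dict) -> dict:
    merged = dict(item1)
    for key, value in item2.items():
        if key not in merged:
            merged[key] = value
        elif merged[key] != value:
            merged[key] = f"{merged[key]} | {value}"   # keep both variants
    return merged

a = {"title": "Markets rally", "link": "http://example.com/a"}
b = {"title": "Markets rally", "pubDate": "2010-04-26"}
print(union_merge(a, b))
```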
Web Information Systems Engineering | 2007
Joe Tekli; Richard Chbeir; Kokou Yetongnon
The automatic processing and management of XML-based data are ever more popular research issues due to the increasingly abundant use of XML, especially on the Web. Nonetheless, several operations based on the structure of XML data have not yet received strong attention. Among these is the process of matching XML documents with XML grammars, useful in various applications such as document classification, retrieval and selective dissemination of information. In this paper, we propose an algorithm for measuring the structural similarity between an XML document and a Document Type Definition (DTD), considered the simplest way of specifying structural constraints on XML documents. We consider the various DTD operators that designate constraints on the existence, repeatability and alternativeness of XML elements/attributes. Our approach is based on the concept of tree edit distance, an effective and efficient means for comparing tree structures, XML documents and DTDs being modeled as ordered labeled trees. It is of polynomial complexity, in comparison with existing exponential algorithms. Classification experiments, conducted on large sets of real and synthetic XML documents, underline the effectiveness of our approach, as well as its applicability to large XML repositories and databases.
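For intuition on how DTD operators constrain document structure, the following toy sketch interprets the cardinality operators ?, * and + as occurrence bounds and checks a node's children against a content model. It deliberately ignores ordering and alternation and is not the paper's edit-distance algorithm; all names are illustrative.

```python
# Sketch: DTD cardinality operators as occurrence bounds on children
# (ordering and alternation are ignored in this toy check).
from collections import Counter

OPERATORS = {"?": (0, 1), "*": (0, float("inf")),
             "+": (1, float("inf")), "":  (1, 1)}

def matches_cardinality(content_model, child_labels):
    """content_model: list of (element_name, operator) pairs."""
    counts = Counter(child_labels)
    for name, op in content_model:
        lo, hi = OPERATORS[op]
        if not (lo <= counts.get(name, 0) <= hi):
            return False
    return True   # children not in the model are ignored here

# e.g. <!ELEMENT book (title, author+, edition?)>
model = [("title", ""), ("author", "+"), ("edition", "?")]
print(matches_cardinality(model, ["title", "author", "author"]))  # True
print(matches_cardinality(model, ["author"]))                     # False
```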
Conference on Current Trends in Theory and Practice of Informatics | 2007
Joe Tekli; Richard Chbeir; Kokou Yetongnon
In the past few years, XML has been established as an effective means for information management, and has been widely exploited for complex data representation. Owing to the unparalleled growth in the use of the XML standard, developing efficient techniques for comparing XML-based documents has become essential in information retrieval (IR) research. Various algorithms for comparing hierarchically structured data, e.g., XML documents, have been proposed in the literature. However, to our knowledge, most of them focus exclusively on comparing documents based on structural features, overlooking the semantics involved. In this paper, we integrate IR semantic similarity assessment into an edit distance algorithm, seeking to amend similarity judgments when comparing XML-based documents. Our approach comprises an original edit distance operation cost model, introducing the semantic relatedness of XML element/attribute labels into traditional edit distance computations. A prototype has been developed to evaluate our model's performance. Experiments yielded notable results.
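A minimal sketch of what a semantically weighted update cost might look like, using distance to the closest common ancestor in a toy concept hierarchy as a stand-in for WordNet-style relatedness measures. The hierarchy, labels and scaling are all illustrative assumptions, not the paper's cost model.

```python
# Sketch: semantically weighted update cost for edit operations,
# based on a toy child -> parent concept hierarchy.
TOY_HIERARCHY = {
    "car": "vehicle", "truck": "vehicle", "vehicle": "entity",
    "author": "person", "person": "entity",
}

def path_to_root(label):
    path = [label]
    while path[-1] in TOY_HIERARCHY:
        path.append(TOY_HIERARCHY[path[-1]])
    return path

def semantic_update_cost(a: str, b: str) -> float:
    if a == b:
        return 0.0
    pa, pb = path_to_root(a), path_to_root(b)
    shared = set(pa) & set(pb)
    if not shared:
        return 1.0                    # unrelated labels: full cost
    # distance to the closest common ancestor, scaled into [0, 1]
    dist = min(pa.index(s) + pb.index(s) for s in shared)
    return min(1.0, dist / (len(pa) + len(pb)))

print(semantic_update_cost("car", "truck"))    # ~0.33: related, cheap
print(semantic_update_cost("car", "author"))   # ~0.67: distantly related
```

Plugging such a cost into a standard edit distance makes renaming "car" to "truck" cheaper than renaming it to an unrelated label, which is the amendment to similarity judgments the abstract describes.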
Advances in Databases and Information Systems | 2006
Samir Saad; Joe Tekli; Richard Chbeir; Kokou Yetongnon
Database fragmentation is a process for reducing irrelevant data accesses by grouping data frequently accessed together in dedicated segments. In this paper, we address multimedia database fragmentation by extending existing fragmentation algorithms to take into account key characteristics of multimedia objects. We particularly discuss multimedia primary horizontal fragmentation and provide a partitioning strategy based on low-level multimedia features. Our approach emphasizes the importance of multimedia predicate implications in optimizing multimedia fragments. To validate our approach, we have implemented a prototype computing multimedia predicate implications. Experimental results are satisfactory.
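The notion of predicate implication can be sketched concretely for one-dimensional range predicates over a low-level feature: p1 implies p2 whenever p1's interval is contained in p2's, so a fragment defined by p2 already covers p1's objects. The feature name and classes below are illustrative, not the paper's predicate model.

```python
# Sketch: implication between 1-D range predicates on a low-level
# feature (e.g., average brightness); names are illustrative.
from dataclasses import dataclass

@dataclass
class RangePredicate:
    feature: str
    low: float
    high: float

def implies(p1: RangePredicate, p2: RangePredicate) -> bool:
    # p1 => p2 iff p1's interval is contained in p2's interval
    return (p1.feature == p2.feature
            and p2.low <= p1.low and p1.high <= p2.high)

p1 = RangePredicate("brightness", 0.4, 0.6)
p2 = RangePredicate("brightness", 0.0, 1.0)
print(implies(p1, p2))   # True: a fragment for p2 already covers p1
```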
International ACM Workshop on Medical Multimedia Analysis and Retrieval | 2011
Alceu Ferraz Costa; Joe Tekli; Agma J. M. Traina
This paper proposes a new feature extraction method: the Fast Fractal Stack, or FFS. The extraction algorithm consists of decomposing the input grayscale image into a stack of binary images from which fractal dimension values are computed, resulting in a compact and highly descriptive set of features. We evaluated FFS for the task of classifying interstitial lung diseases in computed tomography (CT) scans, applied on a database of 248 CT images from 67 patients. The proposed approach performs well, improving classification accuracy compared to other feature extraction algorithms. Additionally, the FFS extraction algorithm is efficient, with a computational cost linear in the input image size.
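A rough sketch of the stack-of-binary-images idea: threshold the grayscale image at several levels and estimate each binary slice's fractal dimension by box counting. The thresholds, box sizes and guard against empty slices are illustrative choices, not the paper's exact FFS procedure.

```python
# Sketch: binary-stack decomposition plus box-counting fractal
# dimension (parameters are illustrative; `gray` is a 2-D uint8 array).
import numpy as np

def box_count(binary, size):
    h, w = binary.shape
    count = 0
    for i in range(0, h, size):
        for j in range(0, w, size):
            if binary[i:i + size, j:j + size].any():
                count += 1
    return count

def fractal_dimension(binary, sizes=(2, 4, 8, 16)):
    counts = [max(box_count(binary, s), 1) for s in sizes]  # avoid log(0)
    # slope of log(count) vs log(1/size) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

def ffs_features(gray, thresholds=(64, 128, 192)):
    # one fractal dimension per binary slice of the stack
    return [fractal_dimension(gray > t) for t in thresholds]
```

The resulting feature vector has one value per threshold level, which matches the abstract's description of a compact, highly descriptive set of features.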
Signal-Image Technology and Internet-Based Systems | 2012
Amine Awada; Youssef Bou Issa; Clara Ghannam; Joe Tekli; Richard Chbeir
The World Health Organization (WHO) estimates that 285 million people are affected by visual deficiencies, 39 million of whom are totally blind. In our modern society, saturated with visual media tools and applications (images, videos, web pages, etc.), accessing visual information has become a central need for all kinds of tasks and users, including the visually impaired. In this context, various assistive tools (screen readers, Braille terminals, screen magnification, etc.) have been increasingly helping persons with visual impairments to access and manipulate information. While effective with textual content, existing solutions remain very limited when it comes to accessing and understanding visual content. The goal of our work is to provide a computerized solution, investigating the use of vibrating touch screen technology to provide a contour-based presentation of simple images for visually impaired users. This could prove very useful in allowing blind people to access geographic maps, to navigate autonomously inside and outside buildings, and to access graphs and mathematical charts (for visually impaired students). To this end, we develop a detailed experimental protocol, EVIAC, testing a blind user's capacity to learn, understand, distinguish and identify basic geometric objects using a vibrating touch screen. Preliminary tests on blindfolded candidates show promising results with respect to traditional paper embossing.