Mohamed Quafafou
University of Nantes
Publications
Featured research published by Mohamed Quafafou.
Information Sciences | 2000
Mohamed Quafafou
The paper presents a transition from the crisp rough set theory to a fuzzy one, called Alpha Rough Set Theory or, in short, a-RST. All basic concepts of rough set theory are extended, i.e., information system, indiscernibility, dependency, reduction, core, definability, approximations and boundary. The resulting theory takes fuzzy data into account and allows the approximation of fuzzy concepts. Moreover, the control of knowledge granularity is natural in a-RST, which is based on a parameterized indiscernibility relation. a-RST is developed to recognize non-deterministic relationships using notions such as a-dependency, a-reduct and so forth. We also introduce a notion of relative dependency as an alternative to the absolute definability presented in rough set theory. The extension a-RST leads naturally to the new concept of alpha rough sets, which represent sets with fuzzy non-empty boundaries.
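To make the parameterized indiscernibility concrete, here is a minimal Python sketch of alpha-indiscernibility blocks and the approximations they induce. The attribute similarity measure and the crisp example concept are illustrative assumptions; a-RST's actual definitions over fuzzy data are richer.

```python
# A minimal sketch, assuming numeric attributes and a toy similarity
# threshold; a-RST's actual fuzzy definitions are richer than this.
import numpy as np

def alpha_indiscernible(x, y, alpha):
    """Treat two objects as alpha-indiscernible when every attribute
    pair differs by at most 1 - alpha (illustrative choice)."""
    return np.all(np.abs(x - y) <= 1.0 - alpha)

def alpha_blocks(objects, alpha):
    """One alpha-indiscernibility block (tolerance class) per object."""
    return [
        {j for j, y in enumerate(objects) if alpha_indiscernible(x, y, alpha)}
        for x in objects
    ]

def approximations(objects, concept, alpha):
    """Lower/upper approximations of a crisp concept (a set of indices)."""
    lower, upper = set(), set()
    for i, block in enumerate(alpha_blocks(objects, alpha)):
        if block <= concept:
            lower.add(i)
        if block & concept:
            upper.add(i)
    return lower, upper  # the boundary is upper - lower

objects = np.array([[0.1], [0.15], [0.8], [0.85]])
concept = {0, 1}  # indices of objects belonging to the concept
low, up = approximations(objects, concept, alpha=0.9)
print("lower:", low, "upper:", up, "boundary:", up - low)
```

Lowering alpha coarsens the blocks, which is the granularity control the abstract refers to.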
International Conference on Web Services | 2014
Mustapha Aznag; Mohamed Quafafou; Zahi Jarir
With a growing number of web services, discovering services that match a user's query becomes a challenging task, and it is very tedious for a service consumer to select the appropriate one according to her/his needs. In this paper, we propose a non-logic-based matchmaking approach that uses the Correlated Topic Model (CTM) to extract topics from semantic service descriptions and model the correlations between the extracted topics. Based on the topic correlations, service descriptions can be grouped into hierarchical clusters. In our approach, we use the Formal Concept Analysis (FCA) formalism to organize the constructed hierarchical clusters into concept lattices according to their topics, so that service discovery can be achieved more easily using the concept lattice. Topic models serve as efficient dimension reduction techniques that capture semantic relationships between words and topics and between topics and services, interpreted in terms of probability distributions. In our experiments, we compared the accuracy of our hierarchical clustering algorithm with that of classical hierarchical agglomerative clustering. Comparisons of Precision@n and Normalised Discounted Cumulative Gain (NDCGn) values for our approach, Apache Lucene and the SAWSDL-MX2 Matchmaker indicate that the CTM-based method presented in this paper outperforms all the other matchmakers in ranking the most relevant services.
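As an illustration of the clustering step, the sketch below assumes each service has already been mapped to a topic distribution (for instance by a CTM implementation, which is not shown) and groups services by agglomerative clustering on Jensen-Shannon distances; the FCA lattice construction is not reproduced. The topic vectors are hypothetical.

```python
# A rough sketch of the hierarchical clustering step only, assuming
# per-service topic distributions are already available; the CTM
# inference and the FCA concept lattice are out of scope here.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Hypothetical topic distributions for five services over four topics.
services = np.array([
    [0.70, 0.10, 0.10, 0.10],
    [0.65, 0.15, 0.10, 0.10],
    [0.10, 0.70, 0.10, 0.10],
    [0.10, 0.10, 0.40, 0.40],
    [0.05, 0.15, 0.40, 0.40],
])

# Agglomerative clustering on pairwise Jensen-Shannon distances,
# a natural metric between probability distributions.
distances = pdist(services, metric="jensenshannon")
tree = linkage(distances, method="average")
labels = fcluster(tree, t=3, criterion="maxclust")
print(labels)  # services with correlated topics share a cluster label
```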
Lecture Notes in Computer Science | 2002
Vincent Dubois; Mohamed Quafafou
The concept learning problem is a general framework for learning concepts consistent with available data. Version Space theory and methods are built within this framework. However, it is not designed to handle noisy (possibly inconsistent) data. In this paper, we use rough set theory to improve this framework. Firstly, we introduce a notion of rough consistency. Secondly, we define an approximative concept learning problem. Thirdly, we present a Rough Version Space theory and related methods to address the approximative concept learning problem. Using a didactic example, we put these methods into use. An overview of possible extensions of this work concludes the article.
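Purely as an illustration of relaxed consistency, the sketch below keeps a hypothesis if it disagrees with at most a bounded fraction of the (possibly noisy) examples, instead of requiring strict version-space consistency. This tolerance-based relaxation is an assumption made for illustration; the paper's rough-set-based definitions are more precise.

```python
# An illustrative relaxation only: tolerate a bounded error rate rather
# than strict consistency. The paper's rough consistency is defined via
# rough set approximations, not via this simple threshold.
def roughly_consistent(hypothesis, examples, tolerance=0.2):
    """Keep a hypothesis if it misclassifies at most `tolerance`
    of the labeled examples."""
    errors = sum(1 for x, label in examples if hypothesis(x) != label)
    return errors / len(examples) <= tolerance

# Hypothetical threshold hypotheses over one-dimensional inputs.
hypotheses = [lambda x, t=t: x >= t for t in (0.2, 0.5, 0.8)]
examples = [(0.1, False), (0.3, True), (0.6, True), (0.9, True), (0.4, False)]
version_space = [h for h in hypotheses if roughly_consistent(h, examples)]
print(len(version_space), "hypotheses survive")  # 2 with tolerance 0.2
```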
Lecture Notes in Computer Science | 2000
Moussa Boussouf; Mohamed Quafafou
In this paper, we address the problem of feature subset selection using rough set theory. We propose a scalable algorithm that finds a set of reducts based on the discernibility function, offering an alternative to the exhaustive approach. Our study shows that our algorithm improves on the classical one from three points of view: computation time, reduct size and the accuracy of the induced model.
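The following greedy heuristic over the discernibility matrix conveys the idea of a non-exhaustive reduct search. It is a standard textbook-style heuristic written for illustration, not necessarily the algorithm proposed in the paper.

```python
# A sketch of a greedy (non-exhaustive) reduct heuristic built on the
# discernibility matrix; the paper's scalable algorithm differs in detail.
from itertools import combinations

def discernibility_entries(table, decisions):
    """For each pair of objects with different decisions, record the
    set of attribute indices on which they differ."""
    entries = []
    for i, j in combinations(range(len(table)), 2):
        if decisions[i] != decisions[j]:
            diff = {a for a in range(len(table[i])) if table[i][a] != table[j][a]}
            if diff:
                entries.append(diff)
    return entries

def greedy_reduct(table, decisions):
    """Repeatedly pick the attribute that covers the most entries of the
    discernibility matrix not yet satisfied."""
    entries = discernibility_entries(table, decisions)
    reduct = set()
    while entries:
        counts = {}
        for entry in entries:
            for a in entry:
                counts[a] = counts.get(a, 0) + 1
        best = max(counts, key=counts.get)
        reduct.add(best)
        entries = [e for e in entries if best not in e]
    return reduct

table = [(1, 0, 0), (1, 1, 0), (0, 1, 1), (0, 0, 1)]
decisions = [0, 0, 1, 1]
print(greedy_reduct(table, decisions))  # a small attribute subset, e.g. {0}
```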
International Conference on Web Services | 2004
Benjamin Habegger; Mohamed Quafafou
Extracting information from the Web is a complex task with different components, either generic or task-specific: downloading a given page, following links, querying a Web-based application via an HTML form and the HTTP protocol, querying a Web service via the SOAP protocol, etc. Therefore, Web services that execute information extraction tasks cannot simply be hard-coded (i.e., written and compiled once and for all in a given programming language). In order to build flexible information extraction Web services, we need to be able to compose different subtasks together. We propose an XML-based language to describe information extraction Web services as compositions of existing Web services and specific functions. The usefulness of the proposed framework is demonstrated by three real-world applications. (1) Search engines: we show how to describe a task that queries Google's Web service, retrieves more information on the results by querying their respective HTTP servers, and filters them according to this information. (2) E-commerce sites: an information extraction Web service is built that gives access to an existing HTML-based e-commerce online application such as Amazon. (3) Patent extraction: a last example shows how to describe an information extraction Web service that queries a Web-based application, extracts the set of result links, follows them, and extracts the needed information from the result pages. In all three applications, the generated description can easily be modified and completed to further respond to the user's needs and create value-added Web services.
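The paper's actual language is not reproduced here; the toy sketch below only illustrates the general idea of describing an extraction task in XML and dispatching each step to an operator. Every element name, attribute and operator is hypothetical.

```python
# Not the paper's language: a toy XML task description interpreted by
# dispatching each <step> to a (stubbed) operator implementation.
import xml.etree.ElementTree as ET

PIPELINE = """
<task name="patent-extraction">
  <step op="query" url="https://example.org/search"/>
  <step op="links" pattern="result"/>
  <step op="extract" field="title"/>
</task>
"""

# Hypothetical operators keyed by the XML 'op' attribute; real ones
# would fetch pages, follow links and extract fields.
OPERATORS = {
    "query": lambda data, attrs: f"page({attrs['url']})",
    "links": lambda data, attrs: f"links({data}, {attrs['pattern']})",
    "extract": lambda data, attrs: f"values({data}, {attrs['field']})",
}

def run(pipeline_xml):
    """Thread the output of each step into the next, in document order."""
    data = None
    for step in ET.fromstring(pipeline_xml).findall("step"):
        data = OPERATORS[step.get("op")](data, step.attrib)
    return data

print(run(PIPELINE))
```

Because the pipeline is data rather than compiled code, it can be edited and recomposed without touching the interpreter, which is the flexibility the abstract argues for.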
International Journal of Advanced Computer Science and Applications | 2013
Mustapha Aznag; Mohamed Quafafou; Zahi Jarir
With the increasing number of published Web services providing similar functionalities, it is very tedious for a service consumer to select the appropriate one according to her/his needs. In this paper, we explore several probabilistic topic models: Probabilistic Latent Semantic Analysis (PLSA), Latent Dirichlet Allocation (LDA) and the Correlated Topic Model (CTM), to extract latent factors from web service descriptions. In our approach, topic models are used as efficient dimension reduction techniques that capture semantic relationships between words and topics and between topics and services, interpreted in terms of probability distributions. To address the limitations of keyword-based queries, we represent web service descriptions in a vector space and introduce a new approach for discovering and ranking web services using latent factors. In our experiments, we evaluated our service discovery and ranking approach by calculating the precision (P@n) and normalized discounted cumulative gain (NDCGn).
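The sketch below shows the discovery-and-ranking idea under the assumption that the query and the services have already been mapped into the latent topic space: services are ranked by cosine similarity, and the ranking is scored with NDCG as in the evaluation. All vectors and relevance grades are made up.

```python
# A minimal sketch, assuming topic distributions are already inferred;
# ranks services against a query and scores the ranking with NDCG.
import numpy as np

def rank_services(query_topics, service_topics):
    """Rank services by cosine similarity of their topic vectors."""
    q = query_topics / np.linalg.norm(query_topics)
    s = service_topics / np.linalg.norm(service_topics, axis=1, keepdims=True)
    return np.argsort(-(s @ q))

def ndcg(relevance_in_rank_order):
    """Normalized discounted cumulative gain of a ranked relevance list."""
    gains = np.asarray(relevance_in_rank_order, dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, len(gains) + 2))
    ideal = np.sum(np.sort(gains)[::-1] * discounts)
    return np.sum(gains * discounts) / ideal if ideal > 0 else 0.0

# Hypothetical topic vectors and graded relevance judgments.
services = np.array([[0.8, 0.1, 0.1], [0.2, 0.6, 0.2], [0.1, 0.1, 0.8]])
relevance = np.array([2, 1, 0])
order = rank_services(np.array([0.7, 0.2, 0.1]), services)
print(order, ndcg(relevance[order]))  # a perfect ranking scores 1.0
```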
arXiv: Information Retrieval | 2013
Mustapha Aznag; Mohamed Quafafou; Nicolas Durand; Zahi Jarir
This paper shows that the problem of web service representation is crucial and analyzes the various factors that influence it. It presents the traditional representation of web services, based on the textual descriptions contained in WSDL files. Unfortunately, textual web service descriptions are dirty and need significant cleaning to keep only useful information. To deal with this problem, we introduce a rule-based text tagging method that filters web service descriptions to keep only significant information; a new representation based on such filtered data is then introduced. Since many web services have empty descriptions, we also consider representations based on the WSDL file structure (types, attributes, etc.). Alternatively, we introduce a new representation called symbolic reputation, which is computed from the relationships between web services. The impact of using these representations on web service discovery and recommendation is studied and discussed in experiments using real-world web services.
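In the spirit of the tagging method, the sketch below applies a sequence of regular-expression rules, each stripping one kind of noise from a WSDL-derived description. The rules themselves are assumptions made for illustration; the paper's actual rule set is not reproduced.

```python
# An illustrative rule-based cleaner: each rule removes one kind of
# noise. The actual tagging rules from the paper are not shown here.
import re

RULES = [
    (r"<[^>]+>", " "),        # leftover markup
    (r"https?://\S+", " "),   # embedded URLs
    (r"[^A-Za-z ]+", " "),    # non-alphabetic noise
    (r"\s+", " "),            # collapse whitespace
]

def clean_description(text):
    """Apply every rule in order, then normalize case."""
    for pattern, replacement in RULES:
        text = re.sub(pattern, replacement, text)
    return text.strip().lower()

raw = "<doc>GetWeather operation, see http://ws.example.com</doc>"
print(clean_description(raw))  # -> "getweather operation see"
```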
Web Intelligence | 2005
Gilles Nachouki; Mohamed Quafafou; Marie-Pierre Chastang
In this paper, we present the design of MDSManager, a system based on a multidatasource approach to data integration. MDSManager uses a multidatasource language called EXQ (extended XQuery). EXQ is designed to access and interconnect multiple conflicting static data sources (including databases, XML and HTML) and/or active data sources (including distinct services such as Java classes, C programs, Web services, etc.).
Lecture Notes in Computer Science | 2004
Benjamin Habegger; Mohamed Quafafou
Many online information sources are available on the Web. Giving machines access to such sources enables many interesting applications, such as using web data in mediators or software agents. Up to now, most work in the field of information extraction from the web has concentrated on building wrappers, i.e., programs that reformat presentational HTML data into a more machine-comprehensible format. While wrappers are an important part of a web information extraction application, they are not sufficient to fully access a source. Indeed, it is necessary to set up an infrastructure for building queries, fetching pages, extracting specific links, etc. In this paper, we propose a language called WetDL for describing an information extraction task as a network of operators whose execution performs the desired extraction.
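The WetDL syntax itself is not given in the abstract, so the following is only a toy analogue: an operator network represented as a dictionary, where each node names an operator and its input nodes, executed in dependency order. All node names and operators are hypothetical.

```python
# A toy operator-network executor in the spirit of the description;
# the actual WetDL syntax and operators are not reproduced here.
NETWORK = {
    "query": (lambda: ["page1", "page2"], []),
    "links": (lambda pages: [p + "/result" for p in pages], ["query"]),
    "extract": (lambda urls: [u.upper() for u in urls], ["links"]),
}

def execute(network, target):
    """Recursively evaluate `target`, computing its input nodes first."""
    op, inputs = network[target]
    return op(*[execute(network, dep) for dep in inputs])

print(execute(NETWORK, "extract"))  # ['PAGE1/RESULT', 'PAGE2/RESULT']
```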
International Conference on Web Services | 2016
Hafida Naim; Mustapha Aznag; Mohamed Quafafou; Nicolas Durand
Due to the increasing number of available web services, discovering the best service that matches a user's requirements is still a challenge. In most cases the discovery system returns a set of very similar services, and sometimes it is unable to find results for complex queries. Therefore, integrating web service discovery and composition in a unified way, while taking into account the diversity of the discovered results, remains a significant issue for web services. In this paper, we propose a novel service ranking algorithm that diversifies web service discovery results in order to minimize redundancy in the search results. This algorithm selects a set of web services based on relevancy, service diversity and service density. We also propose a new method for generating a service dependency network using the Formal Concept Analysis (FCA) framework. The generated graph is used to compose the set of discovered web services. Experimental results show that our method performs better than other baseline approaches.
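A standard way to trade relevance against redundancy, in the spirit of the ranking algorithm described, is a greedy MMR-style selection; the sketch below uses that well-known scheme with made-up scores and simplifies away the paper's density component.

```python
# A greedy diversified-selection sketch (MMR-style); the paper's exact
# relevancy/diversity/density scoring is simplified here.
import numpy as np

def diversified_rank(relevance, similarity, k, lam=0.7):
    """Greedily pick k services, trading off relevance against
    similarity to the services already selected."""
    selected, candidates = [], list(range(len(relevance)))
    while candidates and len(selected) < k:
        def score(i):
            redundancy = max((similarity[i][j] for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

relevance = np.array([0.9, 0.85, 0.3, 0.8])
similarity = np.array([   # hypothetical pairwise service similarity
    [1.0, 0.95, 0.1, 0.2],
    [0.95, 1.0, 0.1, 0.2],
    [0.1, 0.1, 1.0, 0.3],
    [0.2, 0.2, 0.3, 1.0],
])
print(diversified_rank(relevance, similarity, k=3))  # [0, 3, 1]
```

Note how service 1, although nearly as relevant as service 0, is deferred because it is almost identical to it; that is the redundancy reduction the abstract targets.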