Fahima Nader
École Normale Supérieure
Publications
Featured research published by Fahima Nader.
2015 12th International Symposium on Programming and Systems (ISPS) | 2015
Abdelaziz Ouadah; Karim Benouaret; Allel Hadjali; Fahima Nader
This paper describes a new approach to ranking Web services based on combining three multi-criteria decision-making methods: Skyline, AHP, and Promethee. The Skyline is used to reduce the search space by focusing only on interesting Web services that are not dominated by any other service. AHP is used to weight QoS criteria in a simple and intuitive manner; it also allows checking whether the assigned weights are consistent. The Promethee method is leveraged to rank skyline services by exploiting the outranking relationships between skyline candidate services and generating positive, negative, and net flows. An algorithm is proposed to rank skyline services based on net flow. A case study based on an example QoS dataset is presented to illustrate the different steps of our approach. The experimental evaluation conducted on a real dataset demonstrates that our approach better captures user preferences and retrieves the best-ranked skyline services.
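The Skyline step described above keeps only the services not dominated by any other. A minimal sketch (not the authors' implementation) over a hypothetical QoS table where lower values are better, e.g. (response time, cost):

```python
def dominates(a, b):
    """True if service a dominates service b: a is at least as good
    on every criterion and strictly better on at least one (lower is better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(services):
    """Return the services not dominated by any other service."""
    return [s for s in services
            if not any(dominates(o, s) for o in services if o != s)]

# Hypothetical QoS tuples: (response time in ms, cost)
services = [(120, 0.9), (100, 1.2), (150, 0.8), (100, 0.8)]
print(skyline(services))  # → [(100, 0.8)]
```

Here (100, 0.8) dominates every other candidate, so it alone survives the filtering; the surviving set is what the AHP/Promethee steps then rank.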
service-oriented computing and applications | 2015
Abdelaziz Ouadah; Karim Benouaret; Allel Hadjali; Fahima Nader
This paper addresses the problem of Skyline service selection. It proposes a hybrid approach that combines three multi-criteria decision-making methods: Skyline, AHP, and Promethee, for ranking Web services. The Skyline is used to reduce the decision space by focusing only on interesting Web services that are not dominated by any other service. AHP is used to weight the QoS criteria of interest in a simple and intuitive manner; it also allows checking whether the assigned weights are consistent. The Promethee method is leveraged to rank skyline services by exploiting the outranking relationships between skyline candidate services and generating positive, negative, and net flows. An efficient algorithm to rank-order skyline services on the basis of net flow is developed. A case study is presented to illustrate the different steps of our approach. The experimental evaluation conducted on real-world datasets demonstrates that our approach better captures user preferences and retrieves the best-ranked skyline services.
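The Promethee ranking step can be illustrated with a small sketch, assuming the simple "usual" preference function (a candidate is preferred on a criterion if it is strictly better) and higher-is-better QoS values; the actual preference functions and weights used in the paper may differ:

```python
def net_flows(candidates, weights):
    """Promethee II-style net flows: positive outranking flow minus
    negative outranking flow, averaged over the other candidates."""
    n = len(candidates)

    def pref(a, b):
        # Weighted preference index of a over b ("usual" criterion:
        # full preference as soon as a is strictly better).
        return sum(w for w, x, y in zip(weights, a, b) if x > y)

    flows = []
    for i, a in enumerate(candidates):
        pos = sum(pref(a, b) for j, b in enumerate(candidates) if j != i) / (n - 1)
        neg = sum(pref(b, a) for j, b in enumerate(candidates) if j != i) / (n - 1)
        flows.append(pos - neg)
    return flows

# Hypothetical skyline candidates (availability, reliability), weights from AHP.
print(net_flows([(0.9, 0.8), (0.5, 0.6), (0.7, 0.9)], [0.6, 0.4]))
```

Candidates are then ranked by decreasing net flow; by construction the net flows sum to zero.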
advances in databases and information systems | 2015
Selma Khouri; Sabrina Abdellaoui; Fahima Nader
Extract-Transform-Load (\(\mathcal {ETL}\)) is a crucial phase in the Data Warehouse (\(\mathcal {DW}\)) design life-cycle that copes with many issues: data provenance, data heterogeneity, process automation, data refreshment, execution time, etc. Ontologies and Semantic Web technologies have been widely used in the \(\mathcal {ETL}\) phase. Ontologies are used by many research communities, such as databases, artificial intelligence (AI), and natural language processing (NLP), where each community has its own type of ontology: conceptual canonical ontologies (for databases), conceptual non-canonical ontologies (for AI), and linguistic ontologies (for NLP). All three types of ontologies are considered in \(\mathcal {ETL}\) approaches. However, these studies do not take into account the type of the ontology used, which usually affects the quality of the managed data. In this paper, we propose a semantic \(\mathcal {ETL}\) approach that considers both canonical and non-canonical layers. To evaluate the effectiveness of our approach, experiments are conducted using Oracle semantic databases referencing the LUBM benchmark ontology.
Journal of Ambient Intelligence and Humanized Computing | 2018
Abdelaziz Ouadah; Allel Hadjali; Fahima Nader; Karim Benouaret
With the increasing number of Web services published on the Web, many services provide the same functionality with different quality of service. Ranking similar Web services based on QoS is therefore an important issue. This paper proposes a hybrid approach to rank-order Skyline Web services, which mixes several methods borrowed from the multi-criteria decision-making field. The Skyline method is used to reduce the decision space by focusing only on interesting Web services that are not dominated by any other service. For weighting QoS criteria, we aggregate objective and subjective weights: the objective entropy weights are extracted directly from invocation history data, while the subjective weights are calculated from user opinions using fuzzy AHP. The Promethee method is leveraged to rank Skyline Web services by exploiting the outranking relationships between them and generating positive, negative, and net flows. An efficient algorithm to rank-order Skyline Web services on the basis of net flow is developed. A case study is presented to illustrate the different steps of our approach. The experimental evaluation conducted on real-world datasets demonstrates that our approach better captures user preferences and retrieves the best-ranked Skyline Web services.
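The objective entropy weighting mentioned above can be sketched as follows, under the standard Shannon-entropy scheme: criteria whose observed values vary more across services carry more information and receive a higher weight. This is an illustrative sketch over a hypothetical invocation-history matrix, not the paper's exact formulation:

```python
import math

def entropy_weights(matrix):
    """Objective weights from a history matrix (rows = services,
    columns = QoS criteria, positive values). Assumes at least one
    criterion varies across services, so the weights are well defined."""
    n, m = len(matrix), len(matrix[0])
    divergences = []
    for j in range(m):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        # Normalized Shannon entropy of the column (1 = perfectly uniform).
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        divergences.append(1 - e)  # degree of divergence
    s = sum(divergences)
    return [d / s for d in divergences]

history = [[1, 1], [1, 3]]  # two services, two criteria
print(entropy_weights(history))  # the varying criterion gets all the weight
```

A criterion that is identical for every service has zero divergence and thus zero weight, which matches the intuition that it cannot discriminate between candidates.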
Proceedings of the International Conference on Computing for Engineering and Sciences | 2017
Rokia Bouzidi; Fahima Nader; Rachid Chalal
Information technologies represent a heavyweight financial investment for many enterprises when implementing their information systems. That is why enterprises should meticulously choose the suitable technology to improve their business management processes. To do so, a clear categorization of technology according to the enterprise's objectives is necessary. The present work is an attempt to classify information technology according to its locus of impact among 12 major technology artifacts. For each IT group identified, we specify at which level the technology operates (operational level, decisional level). To build this classification, we first clarify the definition of IT through a literature review.
Journal of Data and Information Quality | 2017
Sabrina Abdellaoui; Fahima Nader; Rachid Chalal
In the big data era, data integration is becoming increasingly important. It is usually handled by data flows processes that extract, transform, and clean data from several sources, and populate the data integration system (DIS). Designing data flows is facing several challenges. In this article, we deal with data quality issues such as (1) specifying a set of quality rules, (2) enforcing them on the data flow pipeline to detect violations, and (3) producing accurate repairs for the detected violations. We propose QDflows, a system for designing quality-aware data flows that considers the following as input: (1) a high-quality knowledge base (KB) as the global schema of integration, (2) a set of data sources and a set of validated users’ requirements, (3) a set of defined mappings between data sources and the KB, and (4) a set of quality rules specified by users. QDflows uses an ontology to design the DIS schema. It offers the ability to define the DIS ontology as a module of the knowledge base, based on validated users’ requirements. The DIS ontology model is then extended with multiple types of quality rules specified by users. QDflows extracts and transforms data from sources to populate the DIS. It detects violations of quality rules enforced on the data flows, constructs repair patterns, searches for horizontal and vertical matches in the knowledge base, and performs an automatic repair when possible or generates possible repairs. It interactively involves users to validate the repair process before loading the clean data into the DIS. Using real-life and synthetic datasets, the DBpedia and Yago knowledge bases, we experimentally evaluate the generality, effectiveness, and efficiency of QDflows. We also showcase an interactive tool implementing our system.
cluster computing and the grid | 2016
Sabrina Abdellaoui; Ladjel Bellatreche; Fahima Nader
A Data Warehouse (DW) is a collection of data, consolidated from several heterogeneous sources, used to perform data analysis and support decision making in an organization. The Extract-Transform-Load (ETL) phase plays a crucial role in designing a DW. To overcome the complexity of the ETL phase, different studies have recently proposed the use of ontologies. Ontology-based ETL approaches have been used to reduce heterogeneity between data sources and to automate the ETL process. Existing studies in semantic ETL have largely focused on fulfilling functional requirements; however, the quality dimension of the ETL process has not been sufficiently considered. As the amount of data has exploded with the advent of the big data era, dealing with quality challenges in the early stages of process design becomes more important than ever. To address this issue, we propose to keep data quality requirements at the center of ETL phase design. We present in this paper an approach that defines the ETL process at the ontological level. We define a set of quality indicators and quantitative measures that can anticipate data quality problems and identify causes of deficiencies. Our approach checks the quality of data before loading it into the target data warehouse, to avoid the propagation of corrupted data. Finally, our proposal is validated through a case study using Oracle Semantic DataBase sources (SDBs), where each source references the Lehigh University Benchmark ontology (LUBM).
International Journal of Collaborative Intelligence | 2014
Bensattalah Aissa; Fahima Nader; Rachid Chalal
Many enterprises reflect on strategies and tools that facilitate knowledge sharing and exploit the collective intelligence of their actors. Economic intelligence actors collaborate to solve a decisional problem; they invest significant mental effort, and they share common knowledge that can indicate to other actors directions to follow or avoid. Indeed, whenever an actor explores knowledge or a relevant document, they enrich the collective knowledge of the memory via annotations. To support this collaboration among actors solving a decisional problem in an economic intelligence context, we propose in this article a conceptual model that uses ontologies to represent collaborative semantic annotations between economic intelligence actors, in order to capitalise on and reuse the knowledge shared in a collective memory.
international conference on information systems | 2015
Sabrina Abdellaoui; Fahima Nader
International Journal of Business Information Systems | 2018
Bensattalah Aissa; Faiçal Azouaou; Fahima Nader; Rachid Chalal