Publications


Featured research published by Rachid Chalal.


Computer Science and Its Applications | 2015

Requirement Analysis in Data Warehouses to Support External Information

Mohamed Lamine Chouder; Rachid Chalal; Waffa Setra

In strategic decision-making, the decision maker needs to exploit both the strategic information provided by decision support systems (DSS) and the strategic external information emanating from the enterprise's business environment. The data warehouse (DW) is the main component of a data-driven DSS. Many DW design approaches exist, but they focus only on internal information coming from the operational sources and provide no instrument for taking external information into account. In this paper, our objective is to introduce two models that will be employed in our approach: the requirement model and the environment model. These models are the basis of our DW design approach that supports external information. To evaluate the requirement model, we illustrate with an example how to obtain external information useful for decision-making.
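
The abstract does not detail the structure of the two models, but as a purely illustrative sketch (all class and field names below are hypothetical, not taken from the paper), the requirement model can be pictured as decision goals linked to the external information needed to satisfy them, while the environment model catalogues the sources available in the business environment:

    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical sketch of the two models described in the abstract.
    # Class and field names are illustrative, not taken from the paper.

    @dataclass
    class ExternalSource:
        """An element of the enterprise business environment (environment model)."""
        name: str            # e.g. "competitor price lists"
        category: str        # e.g. "market", "regulation", "technology"

    @dataclass
    class InformationNeed:
        """External information required to satisfy a decision goal."""
        description: str
        sources: List[ExternalSource] = field(default_factory=list)

    @dataclass
    class DecisionGoal:
        """A strategic goal of the decision maker (requirement model)."""
        statement: str
        needs: List[InformationNeed] = field(default_factory=list)

    # Example: a pricing decision that depends on external market information.
    goal = DecisionGoal(
        statement="Adjust product pricing for next quarter",
        needs=[InformationNeed(
            description="Competitor pricing trends",
            sources=[ExternalSource("competitor price lists", "market")])],
    )
    print(goal.statement, "->", goal.needs[0].sources[0].name)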


World Conference on Information Systems and Technologies | 2018

From KDD to KUBD: Big Data Characteristics Within the KDD Process Steps

Naima Lounes; Houria Oudghiri; Rachid Chalal; Walid-Khaled Hidouci

Big Data is the current challenge for the computing field, not only because of the volume of data involved but also because of its promise for analyzing and interpreting massive data to generate useful and strategic knowledge in fields such as security, sales, and education. However, the massive volume of data, together with other Big Data characteristics such as variety, velocity, and variability, requires a whole new set of techniques and technologies, not yet available, to effectively extract the desired knowledge. The KDD (Knowledge Discovery in Databases) process has achieved excellent results in the classical database context, which is why we examine the possibility of adapting it to the Big Data context to take advantage of its strong and effective data processing techniques. We therefore introduce a new process, KUBD (Knowledge Unveiling in Big Data), inspired by the KDD process and adapted to the Big Data context.
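
For readers unfamiliar with KDD, a minimal sketch of the classical pipeline is given below; it only mirrors the standard KDD steps (selection, preprocessing, transformation, mining, interpretation) on toy data and does not reproduce the KUBD steps, which the abstract does not enumerate.

    # Minimal sketch of a KDD-style pipeline; step names follow the classical
    # KDD process, not the KUBD adaptation proposed in the paper.

    def select(records):          # selection: keep the relevant subset
        return [r for r in records if r.get("relevant", True)]

    def preprocess(records):      # preprocessing: drop incomplete records
        return [r for r in records if r.get("value") is not None]

    def transform(records):       # transformation: project to features
        return [{"value": r["value"]} for r in records]

    def mine(features):           # data mining: a trivial "pattern" (the average)
        return sum(f["value"] for f in features) / len(features)

    def interpret(pattern):       # interpretation/evaluation
        return f"average value observed: {pattern:.2f}"

    raw = [{"value": 3, "relevant": True}, {"value": None}, {"value": 5}]
    print(interpret(mine(transform(preprocess(select(raw))))))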


Proceedings of the International Conference on Computing for Engineering and Sciences | 2017

Towards a classification of information technologies

Rokia Bouzidi; Fahima Nader; Rachid Chalal

Information technologies represent a heavy financial investment for many enterprises when implementing their information systems. That is why enterprises should meticulously choose the technology best suited to improving their business management processes. To do so, a clear categorization of the technology according to the enterprise's objectives is necessary. The present work is an attempt to classify information technology according to its locus of impact among 12 major technology artifacts. For each IT group identified, we specify the level at which the technology operates (operational or decisional). To build this classification, we first clarify the definition of IT through a literature review.


Journal of Data and Information Quality | 2017

QDflows: A System Driven by Knowledge Bases for Designing Quality-Aware Data Flows

Sabrina Abdellaoui; Fahima Nader; Rachid Chalal

In the big data era, data integration is becoming increasingly important. It is usually handled by data flow processes that extract, transform, and clean data from several sources and populate the data integration system (DIS). Designing data flows faces several challenges. In this article, we deal with data quality issues such as (1) specifying a set of quality rules, (2) enforcing them on the data flow pipeline to detect violations, and (3) producing accurate repairs for the detected violations. We propose QDflows, a system for designing quality-aware data flows that takes as input: (1) a high-quality knowledge base (KB) serving as the global schema of integration, (2) a set of data sources and a set of validated users' requirements, (3) a set of mappings defined between the data sources and the KB, and (4) a set of quality rules specified by users. QDflows uses an ontology to design the DIS schema. It offers the ability to define the DIS ontology as a module of the knowledge base, based on validated users' requirements. The DIS ontology model is then extended with multiple types of quality rules specified by users. QDflows extracts and transforms data from the sources to populate the DIS. It detects violations of the quality rules enforced on the data flows, constructs repair patterns, searches for horizontal and vertical matches in the knowledge base, and performs an automatic repair when possible or generates possible repairs otherwise. It interactively involves users to validate the repair process before loading the clean data into the DIS. Using real-life and synthetic datasets and the DBpedia and YAGO knowledge bases, we experimentally evaluate the generality, effectiveness, and efficiency of QDflows. We also showcase an interactive tool implementing our system.
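
As a rough illustration of the rule-enforcement and repair idea (not the QDflows API; the rule, knowledge base, and repair heuristic below are invented for the example), a quality rule can be a predicate checked on each row of the flow, with a repair suggested from a reference knowledge base:

    # Hypothetical sketch of enforcing a quality rule on a data flow and
    # proposing repairs from a reference knowledge base; not the QDflows API.

    # A tiny "knowledge base": canonical country names.
    KB_COUNTRIES = {"algeria": "Algeria", "france": "France", "italy": "Italy"}

    def rule_country_in_kb(row):
        """Quality rule: the country value must match a KB entry."""
        return row.get("country", "").lower() in KB_COUNTRIES

    def propose_repair(row):
        """Look for a close KB match (here: a simple prefix match) as a repair."""
        value = row.get("country", "").lower()
        candidates = [v for k, v in KB_COUNTRIES.items() if k.startswith(value[:3])]
        return candidates[0] if candidates else None

    rows = [{"id": 1, "country": "Algeria"}, {"id": 2, "country": "Fra"}]
    for row in rows:
        if not rule_country_in_kb(row):
            repair = propose_repair(row)
            print(f"row {row['id']}: violation on 'country', suggested repair: {repair}")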


Information Systems | 2017

EXODuS: Exploratory OLAP over Document Stores

Mohamed Lamine Chouder; Stefano Rizzi; Rachid Chalal

OLAP has been extensively used for a couple of decades as a data analysis approach to support decision making on enterprise structured data. Now, with the wide diffusion of NoSQL databases holding semi-structured data, there is a growing need for enabling OLAP on document stores as well, to allow non-expert users to get new insights and make better decisions. Unfortunately, due to their schemaless nature, document stores are hardly accessible via direct OLAP querying. In this paper we propose EXODuS, an interactive, schema-on-read approach to enable OLAP querying of document stores in the context of self-service BI and exploratory OLAP. To discover multidimensional hierarchies in document stores we adopt a data-driven approach based on the mining of approximate functional dependencies; to ensure good performance, we incrementally build local portions of hierarchies for the levels involved in the current user query. Users execute an analysis session by expressing well-formed multidimensional queries related by OLAP operations; these queries are then translated into the native query language of MongoDB, one of the most popular document-based DBMSs. An experimental evaluation on real-world datasets shows the efficiency of our approach and its compatibility with a real-time setting.
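
To give a concrete flavour of the last step, the sketch below shows a simple roll-up ("total amount by country and year") expressed as a MongoDB aggregation pipeline with pymongo; the collection and field names are invented, and the EXODuS translation is of course far more general than this hand-written example.

    # Hypothetical example: a roll-up "total amount by country and year"
    # written as a MongoDB aggregation pipeline (field names are invented).
    # Assumes a MongoDB instance running locally.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    orders = client["sales"]["orders"]

    pipeline = [
        # group by the dimension levels (country, year) and aggregate the measure
        {"$group": {
            "_id": {"country": "$customer.country", "year": {"$year": "$date"}},
            "total_amount": {"$sum": "$amount"},
        }},
        {"$sort": {"total_amount": -1}},
    ]

    for row in orders.aggregate(pipeline):
        print(row["_id"], row["total_amount"])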


Computer Science and Its Applications | 2015

Engineering the Requirements of Data Warehouses: A Comparative Study of Goal-Oriented Approaches

Waffa Setra; Rachid Chalal; Mohamed Lamine Chouder

There is a consensus that the requirements analysis phase in a data warehouse (DW) development project is of critical importance. It amounts to applying requirements engineering (RE) activities to identify the information useful for decision-making that the DW must provide. Many approaches have been proposed in this field. Our focus is on goal-oriented approaches, which are requirement-driven DW design approaches. We are interested in investigating to what extent these approaches conform to the RE process. Thus, theoretical foundations of RE are presented, including the classical RE process. After that, goal-oriented DW design approaches are described briefly, and evaluation criteria supporting a comparative study of these approaches are provided.


International Journal of Collaborative Intelligence | 2014

Collective memory based on semantic annotation among economic intelligence actors

Bensattalah Aissa; Fahima Nader; Rachid Chalal

Many enterprises are reflecting on strategies and tools that facilitate knowledge sharing and exploit the collective intelligence of their actors. Economic intelligence actors collaborate to solve a decisional problem; this requires significant mental effort, and the common knowledge they share can indicate to other actors directions to follow or to avoid. Indeed, whenever an actor explores a piece of knowledge or a relevant document, he or she enriches the collective knowledge of the memory via annotations. To support this collaboration in solving a decisional problem among actors in a context of economic intelligence, in this article we propose a conceptual model using ontologies to represent collaborative semantic annotations between economic intelligence actors, in order to capitalise on and reuse the knowledge shared in a collective memory.
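
As a small, self-contained illustration of recording such an annotation as ontology-backed triples (using rdflib; the vocabulary URIs below are invented and are not the model proposed in the article):

    # Illustrative only: a semantic annotation as RDF triples with rdflib.
    # The "ei" vocabulary below is invented, not the ontology from the article.
    from rdflib import Graph, Namespace, Literal
    from rdflib.namespace import RDF

    EI = Namespace("http://example.org/ei#")

    g = Graph()
    g.bind("ei", EI)

    annotation = EI["annotation1"]
    g.add((annotation, RDF.type, EI.Annotation))
    g.add((annotation, EI.annotatedBy, EI["actor_watcher1"]))     # which actor
    g.add((annotation, EI.annotates, EI["document42"]))           # which document
    g.add((annotation, EI.comment, Literal("Relevant to decisional problem P1")))

    print(g.serialize(format="turtle"))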


EDBT/ICDT Workshops | 2017

Enabling Self-Service BI on Document Stores

Mohamed Lamine Chouder; Stefano Rizzi; Rachid Chalal


4th International Symposium ISKO-Maghreb: Concepts and Tools for Knowledge Management (ISKO-Maghreb) | 2014

Models and tools support to the Competitive Intelligence process

Mohamed Lamine Chouder; Rachid Chalal


Archive | 2017

JSON Datasets for Exploratory OLAP

Mohamed Lamine Chouder; Stefano Rizzi; Rachid Chalal

Collaboration


Dive into Rachid Chalal's collaborations.

Top Co-Authors

Fahima Nader (École Normale Supérieure)
Waffa Setra (École Normale Supérieure)
Houria Oudghiri (École Normale Supérieure)
Naima Lounes (École Normale Supérieure)
Rokia Bouzidi (École Normale Supérieure)