Publication


Featured research published by Mamadou Samba Camara.


International Journal of Computer Integrated Manufacturing | 2014

A methodology for the evaluation of interoperability improvements in inter-enterprises collaboration based on causal performance measurement models

Mamadou Samba Camara; Yves Ducq; Rémy Dupas

This article proposes a framework and a methodology to evaluate and improve interoperability for each partner in an inter-enterprise collaboration. The framework is based on the decomposition of business processes into non-value-added (NVA) and business activities, and on process performance indicators (PIs) that measure interoperability. The methodology is founded on this framework and aims to provide support for validating interoperability solutions. A causal performance measurement model (CPMM) is used to define how interoperability improvement may influence the achievement of all of the partners' objectives. An application of the methodology to an illustrative example is presented.
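
The article itself presents no code; the following is a minimal Python sketch of the underlying idea, assuming hypothetical activity data, an NVA-based performance indicator and arbitrary linear causal weights, none of which come from the paper.

```python
# Illustrative sketch only: the paper defines a framework based on
# non-value-added (NVA) activities and a causal performance measurement
# model (CPMM); this toy example assumes a linear causal model and
# invented process data.

activities = [
    {"name": "send_order",       "type": "business", "duration_h": 2.0},
    {"name": "reformat_order",   "type": "NVA",      "duration_h": 1.5},  # interoperability problem
    {"name": "confirm_order",    "type": "business", "duration_h": 1.0},
    {"name": "re-enter_invoice", "type": "NVA",      "duration_h": 0.5},  # interoperability problem
]

def nva_time_ratio(acts):
    """Process PI: share of total duration spent in NVA activities."""
    total = sum(a["duration_h"] for a in acts)
    nva = sum(a["duration_h"] for a in acts if a["type"] == "NVA")
    return nva / total

# Hypothetical causal weights: effect of reducing the NVA ratio on
# each partner objective (e.g. lead time, delivery reliability).
causal_weights = {"lead_time": 0.6, "delivery_reliability": 0.3}

pi_before = nva_time_ratio(activities)
pi_after = nva_time_ratio([a for a in activities if a["type"] != "NVA"])

for objective, weight in causal_weights.items():
    improvement = weight * (pi_before - pi_after)
    print(f"{objective}: estimated improvement {improvement:.2f} (arbitrary units)")
```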


Federated Conference on Computer Science and Information Systems | 2016

Megamodel-based Management of Dynamic Tool Integration in Complex Software Systems

El Hadji Bassirou Toure; Ibrahima Fall; Alassane Bah; Mamadou Samba Camara

The development of complex software systems is increasingly based on the composition and integration of autonomous component systems. This can be done either statically (a proactive approach) at development time or dynamically (a reactive approach), in which a new composite system can be created on demand and/or at run time from existing systems. With the aim of constructing and managing such complex and reactive software systems, we propose a megamodel-based environment supporting dynamic tool integration. Such an environment must be consistent at any time (i.e., before, during and after an integration) and should also exhibit some self-* properties (such as self-management, self-healing and self-configuration). To meet these challenges, we propose the use of Hoare's axiomatic semantics and a set of inference rules to maintain the integrity of the megamodel and its components. To this end, we have defined a formally safe execution as well as an execution semantics for each operation likely to modify the megamodel's contents.
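
The paper's formal rules are not reproduced here; the sketch below only illustrates the general idea of guarding megamodel operations with Hoare-style pre- and postconditions, using a simplified megamodel and hypothetical conditions.

```python
# Illustrative sketch only: the paper formalizes megamodel operations
# with Hoare triples {P} op {Q}. This toy "megamodel" is a set of tool
# nodes and integration links; the pre/postconditions shown here are
# hypothetical, not the rules defined by the authors.

class Megamodel:
    def __init__(self):
        self.tools = set()
        self.links = set()  # pairs (tool_a, tool_b)

    def is_consistent(self):
        """Invariant: every link connects two registered tools."""
        return all(a in self.tools and b in self.tools for a, b in self.links)

def hoare_guarded(pre, post):
    """Run an operation only if its precondition holds, then verify its postcondition."""
    def decorator(op):
        def wrapper(mm, *args):
            assert pre(mm, *args), f"precondition of {op.__name__} violated"
            op(mm, *args)
            assert post(mm, *args), f"postcondition of {op.__name__} violated"
        return wrapper
    return decorator

@hoare_guarded(
    pre=lambda mm, a, b: a in mm.tools and b in mm.tools,             # {P}: both tools registered
    post=lambda mm, a, b: (a, b) in mm.links and mm.is_consistent(),  # {Q}: link added, megamodel consistent
)
def integrate(mm, tool_a, tool_b):
    mm.links.add((tool_a, tool_b))

mm = Megamodel()
mm.tools.update({"modeler", "simulator"})
integrate(mm, "modeler", "simulator")  # succeeds: pre- and postconditions hold
```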


Archive | 2015

Prior Data Quality Management in Data Mining Process

Mamadou Samba Camara; Djasrabe Naguingar; Alassane Bah

Data Mining (DM) projects are implemented by following the knowledge discovery process. Several techniques for detecting and handling data quality problems, such as missing data, outliers, inconsistent data or time-variant data, can be found in the DM and Data Warehousing (DW) literature. Tasks related to data quality mostly belong to the Data Understanding and Data Preparation phases of the DM process. The main limitation in the application of data quality management techniques is the complexity caused by a lack of anticipation in the detection and resolution of the problems. A DM process model designed for the prior management of data quality is proposed in this work. In this model, the DM process is defined in relation to the Software Engineering (SE) process; the two processes are combined in parallel. The main contribution of this DM process is the anticipation and automation of all activities necessary to remove data quality problems.
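
As a rough illustration of declaring quality handling up front rather than discovering problems ad hoc, the following Python sketch applies a hypothetical set of quality rules automatically before modeling; the column names, thresholds and rules are invented, not taken from the chapter.

```python
# Illustrative sketch only: hypothetical data quality rules declared
# before the Data Understanding / Data Preparation phases and applied
# automatically, in the spirit of the proposed prior management.
import pandas as pd

def drop_iqr_outliers(df, column, k=1.5):
    """Remove rows whose value lies outside the k*IQR fence."""
    q1, q3 = df[column].quantile([0.25, 0.75])
    iqr = q3 - q1
    return df[(df[column] >= q1 - k * iqr) & (df[column] <= q3 + k * iqr)]

# Quality rules specified in advance (invented for this example).
quality_rules = [
    ("fill_missing", lambda df: df.assign(amount=df["amount"].fillna(df["amount"].median()))),
    ("remove_outliers", lambda df: drop_iqr_outliers(df, "amount")),
    ("drop_inconsistent", lambda df: df[df["quantity"] > 0]),
]

raw = pd.DataFrame({"amount": [10.0, None, 12.0, 900.0], "quantity": [1, 2, -3, 4]})
clean = raw
for name, rule in quality_rules:
    clean = rule(clean)
    print(f"after {name}: {len(clean)} rows")
```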


Archive | 2014

Interoperability Improvement in Inter-Enterprises Collaboration: A Software Engineering Approach

Mamadou Samba Camara; Rémy Dupas; Yves Ducq; Bacary Mané

The present research work aims at developing an approach to achieve inter-enterprise interoperability and to test its achievement using practices from the software engineering process. Four fundamental activities are identified in the software process: software specification, software development, software validation and software evolution [1]. In this work, interoperability requirements are specified by representing interoperability problems directly on business process models. For the validation activity, an interoperability testing sub-process is defined based on this new form of interoperability requirements specification. It is also demonstrated that the improvement proposed in the software specification activity has a positive impact on the software development activity.
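
The chapter does not provide an implementation; the sketch below only suggests, under invented assumptions, what annotating a process model with interoperability problems and deriving test-case stubs from those annotations might look like.

```python
# Illustrative sketch only: a tiny, hypothetical process model in which
# interoperability problems are annotated directly on activities, and
# each annotation is turned into a test-case stub for the testing
# sub-process. Activities and problems are invented.

process_model = [
    {"activity": "send_order",    "partner": "buyer",    "problem": None},
    {"activity": "convert_order", "partner": "supplier", "problem": "manual file conversion"},
    {"activity": "confirm_order", "partner": "supplier", "problem": "missing status feedback"},
]

# Each annotated problem becomes a requirement and a test case.
test_cases = [
    {
        "requirement": f"Remove '{step['problem']}' at activity '{step['activity']}'",
        "check": f"Activity '{step['activity']}' runs without the annotated problem",
    }
    for step in process_model
    if step["problem"] is not None
]

for tc in test_cases:
    print(tc["requirement"], "->", tc["check"])
```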


Archive | 2019

Ontological Modeling Approach for Statistical Databases Publication in Linked Open Data

Bourama Mane; Ibrahima Fall; Mamadou Samba Camara; Alassane Bah

At the level of the National Statistical Institutes (NSI), there is a large volume of data, generally in a format that constrains how the information it contains can be published. Data collection projects for households or businesses increasingly integrate a results-dissemination platform into their implementation phase; otherwise, the NSI web portals are often used for dissemination. These previously used dissemination methods do not promote rapid access to information and, above all, do not offer the possibility of linking data for in-depth processing. In this paper, we present an approach for modeling these data in order to publish them in a format intended for the Semantic Web. Our objective is to publish all of this data on a single platform and offer the option of linking it with other external data sources. The approach is applied to data from major national surveys, such as those on employment, poverty and child labor, and to the general census of the population of Senegal.
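
The paper's actual ontological model is not reproduced here; as a minimal sketch of the general technique, the example below publishes one hypothetical survey observation with the W3C RDF Data Cube vocabulary using rdflib. The dataset URI, dimensions and measure are invented.

```python
# Illustrative sketch only: one hypothetical statistical observation
# expressed as Linked Open Data with the RDF Data Cube vocabulary.
from rdflib import Graph, Literal, Namespace, RDF

QB = Namespace("http://purl.org/linked-data/cube#")
EX = Namespace("http://example.org/stats/")  # hypothetical namespace

g = Graph()
g.bind("qb", QB)
g.bind("ex", EX)

dataset = EX["employment-survey-2015"]
g.add((dataset, RDF.type, QB.DataSet))

# One observation: an employment rate for an invented region and year.
obs = EX["obs/dakar-2015"]
g.add((obs, RDF.type, QB.Observation))
g.add((obs, QB.dataSet, dataset))
g.add((obs, EX.region, Literal("Dakar")))
g.add((obs, EX.year, Literal(2015)))
g.add((obs, EX.employmentRate, Literal(0.42)))

print(g.serialize(format="turtle"))
```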


Archive | 2017

Megamodel Consistency Management at Runtime

El Hadji Bassirou Toure; Ibrahima Fall; Alassane Bah; Mamadou Samba Camara

This paper addresses the problem of ensuring consistency, correctness and other properties in dynamically changing software systems. The approach uses a megamodel that represents the current state of the system at runtime, together with a set of rules. These rules are formulated as Hoare triples and make it possible to check whether modifications to the software system result in a consistent state and, if not, to correct changes that would violate the megamodel's integrity.
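
The following Python sketch is only a loose illustration of runtime consistency management, assuming a simplified megamodel and an invented invariant: a change is applied, the invariant is checked, and the change is rolled back if the check fails.

```python
# Illustrative sketch only: a runtime change is rejected (rolled back)
# when it would break a hypothetical megamodel invariant.
import copy

class Megamodel:
    def __init__(self):
        self.models = {"ui": "v1", "core": "v1"}
        self.depends_on = {("ui", "core")}  # ui requires core

    def invariant(self):
        """Every dependency must point to a model that is still registered."""
        return all(src in self.models and dst in self.models
                   for src, dst in self.depends_on)

def apply_change(mm, change):
    """Apply a runtime change; roll back if it breaks the invariant."""
    snapshot = copy.deepcopy(mm.__dict__)
    change(mm)
    if not mm.invariant():
        mm.__dict__.update(snapshot)  # restore the consistent state
        return False
    return True

mm = Megamodel()
ok = apply_change(mm, lambda m: m.models.pop("core"))  # would orphan the "ui" dependency
print("change accepted:", ok, "| models:", mm.models)  # rejected, megamodel unchanged
```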


2017 Intelligent Systems and Computer Vision (ISCV) | 2017

Definition of the database anonymization method for open data

Bourama Mane; Ibrahima Fall; Mamadou Samba Camara; Alassane Bah

In their mission of collecting, analyzing and disseminating statistical data, the National Institutes of Statistics are increasingly confronted with personal data management. With the advent of Open Data [1], disseminating data and linking the databases created during surveys and censuses have become necessary for planning, monitoring and evaluating activities, in a context marked by the will of official authorities to accelerate economic growth [2]. This article therefore describes a methodology for anonymizing databases collected from households or companies, limiting the risk of information disclosure while preserving as much as possible the quality of the data resulting from the anonymization process.
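
The paper's own method is not shown here; the sketch below only demonstrates two common anonymization building blocks (generalization of quasi-identifiers and suppression of small groups, in the spirit of k-anonymity) on invented data, with hypothetical columns, bins and k.

```python
# Illustrative sketch only: generalization plus suppression on a toy
# household-style table; columns, bins and k are invented.
import pandas as pd

k = 2  # minimum group size per combination of quasi-identifiers

df = pd.DataFrame({
    "age":    [23, 27, 34, 36, 52],
    "region": ["Dakar", "Dakar", "Thies", "Thies", "Dakar"],
    "income": [100, 120, 90, 95, 300],  # sensitive attribute
})

# Generalization: replace the exact age with a 10-year band.
df["age_band"] = (df["age"] // 10 * 10).astype(str) + "-" + (df["age"] // 10 * 10 + 9).astype(str)

# Suppression: drop records whose quasi-identifier combination is rarer than k.
group_sizes = df.groupby(["age_band", "region"])["income"].transform("size")
anonymized = df[group_sizes >= k].drop(columns=["age"])

print(anonymized)
```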


2017 Intelligent Systems and Computer Vision (ISCV) | 2017

A methodology for prior management of temporal data quality in a data mining process

Mouhamed Diop; Mamadou Samba Camara; Ibrahima Fall; Alassane Bah

In Data Mining (DM) projects, more specifically in the Data Understanding and Data Preparation phases, several techniques found in the literature are used to detect and handle data quality problems such as missing data, outliers, inconsistent data or time-variant data. However, the main limitation in the application of these techniques is the complexity caused by a lack of anticipation in the detection and resolution of data quality problems. A DM process model designed for the prior management of data quality was therefore recently proposed. Its distinctive feature is that it links the DM process and the Software Engineering (SE) process by combining them in parallel. However, the authors of that work [1] specified only what should be done, not how it should be done. The present research work improves that DM process model by adding a methodology that provides concrete guidance on how to combine the SE process and the DM process to anticipate and manage the data quality problems encountered during the mining process. This work specifically addresses the case of temporal data. The main contribution of the methodology is the concrete definition of how to anticipate and automate all activities necessary to remove temporal data quality problems in a mining process.
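
As a rough sketch of what automated temporal quality checks could look like, the Python example below flags duplicate timestamps, events out of chronological order and unexpectedly large gaps in a toy event table; the checks, columns and threshold are hypothetical, not the methodology's own rules.

```python
# Illustrative sketch only: hypothetical temporal data quality checks
# that could be declared and run automatically before the mining phase.
import pandas as pd

events = pd.DataFrame({
    "case_id":   [1, 1, 1, 2, 2],
    "timestamp": pd.to_datetime([
        "2017-01-01 08:00", "2017-01-01 08:00",  # duplicate
        "2017-01-01 07:30",                      # out of order
        "2017-01-02 09:00", "2017-01-05 09:00",  # large gap
    ]),
})

def temporal_quality_report(df, max_gap=pd.Timedelta(days=2)):
    report = {}
    report["duplicate_timestamps"] = int(df.duplicated(["case_id", "timestamp"]).sum())
    ordered = df.groupby("case_id")["timestamp"].apply(lambda s: s.is_monotonic_increasing)
    report["unordered_cases"] = int((~ordered).sum())
    gaps = df.sort_values(["case_id", "timestamp"]).groupby("case_id")["timestamp"].diff()
    report["large_gaps"] = int((gaps > max_gap).sum())
    return report

print(temporal_quality_report(events))
```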


International Conference on System Theory, Control and Computing | 2015

Activity failure prediction based on process mining

Mamadou Samba Camara; Ibrahima Fall; Gervais Mendy; Samba Diaw

Based on the state of the art of process mining, we can conclude that quality characteristics (such as failure rate metrics or loops) are poorly represented or absent in most predictive models found in the literature. The main goal of this research work is to analyze how to learn a prediction model that defines failure as the response variable. A model of this type can be used for active real-time control (e.g., through the reassignment of workflow activities based on prediction results) or for automated redesign support (i.e., prediction results are transformed into software requirements used to implement process improvements). The proposed methodology is based on the application of a data mining process, because the objective of this work can be considered a data mining goal.
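
The sketch below only illustrates the general setup of predicting activity failure from event-log-derived features with a standard classifier; the features, data and model choice are invented, not the paper's.

```python
# Illustrative sketch only: a classifier with activity failure as the
# response variable, trained on hypothetical event-log features.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Each row = one activity instance: [activity_code, queue_length, prior_loops]
X = [
    [0, 2, 0], [0, 9, 2], [1, 1, 0], [1, 8, 3],
    [2, 3, 1], [2, 7, 2], [0, 4, 0], [1, 6, 1],
]
y = [0, 1, 0, 1, 0, 1, 0, 1]  # 1 = activity instance failed

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# Predicted failure probabilities could drive real-time reassignment of
# workflow activities or feed redesign requirements.
print(model.predict_proba(X_test)[:, 1])
```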


6th International IFIP Working Conference on Enterprise Interoperability (IWEI) | 2015

Validation and Verification of Interoperability Requirements

Mamadou Samba Camara; Rémy Dupas; Yves Ducq

The research objective of this work is to develop an approach to achieve inter-enterprise interoperability and to test its achievement using practices from the software engineering process. Four fundamental activities are identified in the software process: software specification, software development, software validation and software evolution [1]. In this work, the interoperability requirements specification is based on measurable and non-measurable quality characteristics. It is also demonstrated that the improvement proposed in the software specification activity has a positive impact on the software development activity. For the validation activity, an interoperability testing sub-process is defined through a two-step decomposition: one step to verify measurable requirements and another to validate non-measurable ones.
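
The two-step decomposition can be pictured with the minimal sketch below, in which measurable requirements are verified automatically against thresholds while non-measurable ones are listed for stakeholder validation; the requirements, measured values and thresholds are hypothetical.

```python
# Illustrative sketch only: verification of measurable interoperability
# requirements vs. validation of non-measurable ones, on invented data.

measurable_requirements = {
    # requirement id: (measured value, maximum allowed threshold)
    "order_exchange_delay_h": (1.5, 2.0),
    "data_reentry_rate":      (0.05, 0.0),
}

non_measurable_requirements = [
    "Partners agree on the semantics of the exchanged order document",
]

# Step 1: verification of measurable requirements against thresholds.
for req_id, (measured, threshold) in measurable_requirements.items():
    status = "PASS" if measured <= threshold else "FAIL"
    print(f"[verify]   {req_id}: measured={measured}, threshold={threshold} -> {status}")

# Step 2: non-measurable requirements are validated by stakeholders
# (e.g. through reviews); here they are simply listed for sign-off.
for req in non_measurable_requirements:
    print(f"[validate] {req}: requires stakeholder review")
```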

Collaboration


Dive into Mamadou Samba Camara's collaboration.

Top Co-Authors

Alassane Bah (Cheikh Anta Diop University)
Ibrahima Fall (Cheikh Anta Diop University)
Rémy Dupas (University of Bordeaux)
Yves Ducq (University of Bordeaux)
Bourama Mane (Cheikh Anta Diop University)
Bacary Mané (Cheikh Anta Diop University)
Birahime Diouf (Cheikh Anta Diop University)
Djasrabe Naguingar (Cheikh Anta Diop University)