Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Fadel Toure is active.

Publication


Featured research published by Fadel Toure.


ACM SIGSOFT Software Engineering Notes | 2011

An empirical analysis of a testability model for object-oriented programs

Aymen Kout; Fadel Toure; Mourad Badri

We present, in this paper, a metric-based testability model for object-oriented programs. The model is, in fact, an adaptation of a model proposed in the literature for assessing the testability of object-oriented design. The study presented in this paper aims at exploring empirically the capability of the model to assess the testability of classes at the code level. We investigate testability from the perspective of unit testing and required testing effort. We designed an empirical study using data collected from two Java software systems for which JUnit test cases exist. To capture the testability of classes in terms of required testing effort, we used different metrics to quantify the corresponding JUnit test cases. In order to evaluate the capability of the model to predict the testability of classes (characteristics of corresponding test classes), we performed statistical tests based on correlation.
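
For illustration, a minimal sketch of the kind of correlation analysis this abstract describes, assuming a hypothetical CSV of per-class data; the file name, metric names, and column names are placeholders, not those of the paper:

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-class data: design metrics alongside metrics of the
# corresponding JUnit test class. All names are illustrative placeholders.
data = pd.read_csv("class_and_test_metrics.csv")

design_metrics = ["DIT", "NOC", "RFC", "WMC"]  # candidate predictors
test_metrics = ["TLOC", "TASSERT"]             # JUnit test class characteristics

# Spearman correlation between each design metric and each test metric
for dm in design_metrics:
    for tm in test_metrics:
        rho, p = spearmanr(data[dm], data[tm])
        print(f"{dm} vs {tm}: rho={rho:.2f}, p={p:.3f}")
```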


International Conference on Advanced Software Engineering and Its Applications | 2010

Exploring Empirically the Relationship between Lack of Cohesion and Testability in Object-Oriented Systems

Linda Badri; Mourad Badri; Fadel Toure

The study presented in this paper aims at exploring empirically the relationship between lack of cohesion and the testability of classes in object-oriented systems. We investigated testability from the perspective of unit testing. We designed and conducted an empirical study using two Java software systems for which JUnit test cases exist. To capture the testability of classes, we used different metrics to measure some characteristics of the corresponding JUnit test cases. We also used several lack of cohesion metrics. In order to evaluate the capability of lack of cohesion metrics to predict testability, we performed statistical tests based on correlation. The results provide evidence that lack of cohesion may be associated with low testability.
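
The abstract does not name the exact lack of cohesion variants used. As an illustration only, one classical measure, LCOM1 (Chidamber and Kemerer), counts method pairs that share no attribute minus pairs that share at least one, floored at zero:

```python
from itertools import combinations

def lcom1(methods_to_attrs):
    """methods_to_attrs: dict mapping method name -> set of attributes it uses."""
    p = q = 0
    for a1, a2 in combinations(methods_to_attrs.values(), 2):
        if a1 & a2:
            q += 1  # pair shares at least one attribute
        else:
            p += 1  # disjoint pair
    return max(p - q, 0)

# Example: two cohesive accessor methods and one unrelated method
print(lcom1({"getX": {"x"}, "setX": {"x"}, "log": {"logger"}}))  # -> 1
```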


The Journal of Object Technology | 2009

Empirical Analysis of Object-Oriented Design Metrics: Towards a New Metric Using Control Flow Paths and Probabilities

Mourad Badri; Linda Badri; Fadel Toure

A large number of object-oriented metrics have been proposed in the literature. They are used to assess different software attributes. However, it is not obvious for a developer or a project manager to select the most useful metrics. Furthermore, these metrics are not completely independent. Using several metrics at the same time is time-consuming and can generate a quite large data set, which may be difficult to analyze and interpret. We present, in this paper, a new metric capturing in a unified way several aspects of the quality of object-oriented systems. The metric uses control flow paths and probabilities, and captures the collaboration between classes. Our objective is not to evaluate a given design by giving absolute values, but rather relative values that may be used, for example, to identify, in a relative way, high-risk classes. We designed and conducted an empirical study using several large Java projects. We compared the new metric, using the Principal Components Analysis method, to several well-known object-oriented metrics. The selected metrics were grouped into five categories: coupling, cohesion, inheritance, complexity, and size. The results show that the proposed metric captures, to a large extent, the information provided by the other metrics.
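
For illustration, a minimal sketch of the Principal Components Analysis step, assuming a hypothetical table of per-class metric values; the metric and column names (including NewMetric as a stand-in for the proposed metric) are placeholders:

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

data = pd.read_csv("metrics.csv")  # hypothetical per-class metric values
cols = ["CBO", "LCOM", "DIT", "WMC", "LOC", "NewMetric"]  # illustrative names

# Standardize, then check which metrics load on the same components
X = StandardScaler().fit_transform(data[cols])
pca = PCA(n_components=2).fit(X)

loadings = pd.DataFrame(pca.components_.T, index=cols, columns=["PC1", "PC2"])
print(loadings.round(2))
print("explained variance ratio:", pca.explained_variance_ratio_.round(2))
```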


Advances in Software Engineering | 2012

Evaluating the effect of control flow on the unit testing effort of classes: an empirical analysis

Mourad Badri; Fadel Toure

The aim of this paper is to evaluate empirically the relationship between a new metric (Quality Assurance Indicator, Qi) and the testability of classes in object-oriented systems. The Qi metric captures the distribution of the control flow in a system. We addressed testability from the perspective of unit testing effort. We collected data from five open source Java software systems for which JUnit test cases exist. To capture the testing effort of classes, we used different metrics to quantify the corresponding JUnit test cases. Classes were classified, according to the required testing effort, into two categories: high and low. In order to evaluate the capability of the Qi metric to predict the testability of classes, we used the univariate logistic regression method. The performance of the prediction model was evaluated using Receiver Operating Characteristic (ROC) analysis. The results indicate that the univariate model based on the Qi metric is able to accurately predict the unit testing effort of classes.
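
For illustration, a minimal sketch of a univariate logistic regression with ROC evaluation of the kind described, assuming a hypothetical dataset with one Qi value and one binary effort category per class; all names are placeholders:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

data = pd.read_csv("qi_testing_effort.csv")  # hypothetical input
X = data[["qi"]]          # single predictor: the Qi value per class
y = data["high_effort"]   # 1 = high required testing effort, 0 = low

model = LogisticRegression().fit(X, y)
probs = model.predict_proba(X)[:, 1]  # predicted probability of high effort
print(f"in-sample AUC: {roc_auc_score(y, probs):.2f}")  # ROC analysis
```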


International Journal of Computer Theory and Engineering | 2013

Metrics and Software Quality Evolution: A Case Study on Open Source Software

Nicholas Drouin; Mourad Badri; Fadel Toure

This paper aims at analyzing empirically the quality evolution of an open source software system using metrics. We used a control flow based metric (Quality Assurance Indicator, Qi) which we proposed in a previous work. We wanted to investigate whether the Qi metric can be used to observe how quality evolves across the successive released versions of the subject software system. We addressed software quality from an internal perspective. We performed an empirical analysis using historical data on the subject system (Apache Tomcat). The collected data cover a period of more than seven years (thirty-one versions in total). Empirical results provide evidence that the Qi metric properly reflects the quality evolution of the subject system.
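
For illustration, a minimal sketch of observing a system-level metric across releases, in the spirit of this case study; the input file and its columns are hypothetical placeholders:

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical history: one row per released version, in chronological order,
# with a system-level Qi value per version.
history = pd.read_csv("qi_by_version.csv")  # columns: version, qi

# Rank correlation between release order and Qi shows the overall trend
rho, p = spearmanr(range(len(history)), history["qi"])
print(f"Qi trend across {len(history)} releases: rho={rho:.2f}, p={p:.3f}")
```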


Procedia Computer Science | 2015

Predicting Unit Testing Effort Levels of Classes: An Exploratory Study based on Multinomial Logistic Regression Modeling

Mourad Badri; Fadel Toure; Luc Lamontagne

The study aims at investigating empirically the ability of the Quality Assurance Indicator (Qi), a metric that we proposed in a previous work, to predict different levels of the unit testing effort of classes in object-oriented systems. To capture the unit testing effort of classes, we used four metrics to quantify various perspectives related to the code of the corresponding unit test cases. Classes were classified, according to the involved unit testing effort, into five categories (levels). We collected data from two open source Java software systems (ANT and JFREECHART) for which JUnit test cases exist. To explore the ability of the Qi metric to predict different levels of the unit testing effort of classes, we used the Multinomial Logistic Regression (MLR) method. The performance of the Qi metric was compared to the performance of three well-known source code metrics related respectively to size, complexity, and coupling. Results suggest that the MLR model based on the Qi metric is able to accurately predict the different levels of the unit testing effort of classes.
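
For illustration, a minimal sketch of a multinomial logistic regression over five effort levels, assuming a hypothetical dataset; column names are placeholders:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

data = pd.read_csv("qi_effort_levels.csv")  # hypothetical input
X = data[["qi"]]            # single predictor: the Qi value per class
y = data["effort_level"]    # five categories, coded 1..5

# Multinomial (softmax) logistic regression over the five levels
mlr = LogisticRegression(multi_class="multinomial", max_iter=1000).fit(X, y)
print(f"in-sample accuracy: {accuracy_score(y, mlr.predict(X)):.2f}")
```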


Journal of Software Engineering Research and Development | 2014

A metrics suite for JUnit test code: a multiple case study on open source software

Fadel Toure; Mourad Badri; Luc Lamontagne

Background: The code of JUnit test cases is commonly used to characterize software testing effort. Different metrics have been proposed in the literature to measure various perspectives of the size of JUnit test cases. Unfortunately, there is little understanding of the empirical application of these metrics, particularly of which metrics provide the most useful information.

Methods: This paper aims at proposing a unified metrics suite that can be used to quantify the unit testing effort. We addressed the unit testing effort from the perspective of unit test case construction, and particularly the effort involved in writing the code of JUnit test cases. We used in our study five unit test case metrics, two of which were introduced in a previous work. We conducted an empirical study in three main stages. We collected data from six open source Java software systems, of different sizes and from different domains, for which JUnit test cases exist. In the first stage, we performed a Principal Component Analysis to find whether the analyzed unit test case metrics are independent or measure similar structural aspects of the code of JUnit test cases. In the second stage, we used clustering techniques to determine the unit test case metrics that are the least volatile, i.e., the least affected by the style adopted by developers while writing the code of test cases. In the third stage, we used correlation and linear regression analysis to evaluate the relationships between the internal software class attributes and the test case metrics.

Results and Conclusions: The main goal of this study was to identify a subset of unit test case metrics: (1) providing useful information on the effort involved in writing the code of JUnit test cases, (2) that are independent from each other, and (3) that are the least volatile. Results confirm the conclusions of our previous work and show, in addition, that: (1) the set of analyzed unit test case metrics can be reduced to a subset of two independent metrics maximizing the information provided by the whole set, (2) these metrics are the least volatile, and (3) they are also the most correlated to the internal software class attributes.
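
For illustration, a minimal sketch of the third stage (linear regression between an internal class attribute and a test case metric), assuming hypothetical data; all names are placeholders:

```python
import pandas as pd
import statsmodels.api as sm

data = pd.read_csv("class_vs_test_metrics.csv")  # hypothetical input
X = sm.add_constant(data["class_loc"])           # internal attribute + intercept
model = sm.OLS(data["tloc"], X).fit()            # test lines of code as response

print(model.summary().tables[1])                 # coefficient and p-value
print(f"R^2 = {model.rsquared:.2f}")             # strength of the relationship
```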


International Conference on Computer Systems and Industrial Informatics | 2012

On understanding software quality evolution from a defect perspective: A case study on an open source software system

Mourad Badri; Nicholas Drouin; Fadel Toure

Software systems need to continually evolve during their life cycle. It is, therefore, important to monitor how their quality evolves so that quality assurance activities can be properly planned. In this paper, we analyze empirically the quality evolution of an open source software system (Apache Tomcat). We address software quality from an external perspective. We used the number of defects as a quality indicator. We wanted to investigate if the Qi (Quality Assurance Indicator) metric, which we proposed in a previous work, can be used to observe how quality, measured in terms of defects, evolves in the presence of changes. We performed an empirical analysis using historical data collected from the subject system covering a period of more than seven years (thirty-one versions). Results are reported and discussed in the paper.


Innovations in Systems and Software Engineering | 2018

Predicting different levels of the unit testing effort of classes using source code metrics: a multiple case study on open-source software

Fadel Toure; Mourad Badri; Luc Lamontagne

Nowadays, the growth in size and complexity of object-oriented software systems brings new software quality assurance challenges. Applying testing (quality assurance) effort equally to all classes of a large and complex object-oriented software system is cost-prohibitive and not realistic in practice. So, predicting early the different levels of the unit testing effort required for testing classes can help managers to: (1) better identify critical classes, which will involve a relatively high testing effort, on which developers and testers have to focus to ensure software quality, (2) plan testing activities, and (3) optimally allocate resources. In this paper, we investigate empirically the ability of the Quality Assurance Indicator (Qi), a synthetic metric that we proposed in a previous work, to predict different levels of the unit testing effort of classes in object-oriented software systems. The unit testing effort of classes is addressed from the perspective of unit test case construction. We focused particularly on the effort involved in writing the code of unit test cases. To capture the involved unit testing effort of classes, we used four metrics that quantify different characteristics related to the code of the corresponding unit test cases. We used Means and K-Means-based categorizations to group software classes into five categories according to the involved unit testing effort. We performed an empirical analysis using data collected from eight open-source Java software systems from different domains, for which JUnit test cases were available. To evaluate the ability of the Qi metric to predict different levels of the unit testing effort of classes, we used three modeling techniques: univariate logistic regression, univariate linear regression, and multinomial logistic regression. The performance of the models based on the Qi metric was compared to the performance of models based on various well-known object-oriented source code metrics. We used different evaluation criteria to compare the prediction models. Results indicate that the models based on the Qi metric have more promising prediction potential than those based on traditional object-oriented metrics.
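
For illustration, a minimal sketch of a K-Means-based categorization of classes into five unit testing effort levels from test case metrics; the input file and feature names are hypothetical placeholders:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

data = pd.read_csv("test_case_metrics.csv")           # hypothetical input
features = ["tloc", "tassert", "tmethods", "tinvoc"]  # illustrative test code metrics

X = StandardScaler().fit_transform(data[features])
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)

# Cluster ids are arbitrary; rank clusters by centroid magnitude so that
# level 1 = lowest involved effort and level 5 = highest.
order = km.cluster_centers_.sum(axis=1).argsort().argsort()
data["effort_level"] = order[km.labels_] + 1
print(data["effort_level"].value_counts().sort_index())
```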


International Conference on Machine Learning | 2017

Investigating the Accuracy of Test Code Size Prediction using Use Case Metrics and Machine Learning Algorithms: An Empirical Study

Mourad Badri; Linda Badri; William Flageol; Fadel Toure

Software testing plays a crucial role in software quality assurance. It is, however, a time- and resource-consuming process. It is, therefore, important to predict as soon as possible the effort required to test software, so that activities can be planned and resources can be optimally allocated. Test code size, in terms of Test Lines Of Code (TLOC), is an important testing effort indicator used in many empirical studies. In this paper, we investigate empirically the early prediction of TLOC for object-oriented software using use case metrics. We used different machine learning algorithms (linear regression, k-NN, Naïve Bayes, C4.5, Random Forest, and Multilayer Perceptron) to build the prediction models. We performed an empirical study using data collected from five Java projects. The use case metrics were compared to the well-known Use Case Points (UCP) method. Results show that the use case metrics-based approach gives a more accurate prediction of TLOC than the UCP method.
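
For illustration, a minimal sketch comparing some of the learners named above for TLOC prediction with cross-validation; scikit-learn has no C4.5, so a decision tree stands in, Naïve Bayes is omitted here, and all data names are placeholders:

```python
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

data = pd.read_csv("use_case_metrics.csv")              # hypothetical input
X = data[["actors", "transactions", "scenarios"]]       # illustrative use case metrics
y = data["tloc"]                                        # test lines of code

models = {
    "linear regression": LinearRegression(),
    "k-NN": KNeighborsRegressor(n_neighbors=5),
    "decision tree (C4.5 stand-in)": DecisionTreeRegressor(random_state=0),
    "random forest": RandomForestRegressor(random_state=0),
    "MLP": MLPRegressor(max_iter=2000, random_state=0),
}
for name, m in models.items():
    scores = cross_val_score(m, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.2f}")
```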

Collaboration


Dive into Fadel Toure's collaboration.

Top Co-Authors

Linda Badri

Université du Québec


Aymen Kout

Université du Québec
