
Publications


Featured research published by Hakim Lounis.


international conference on software engineering | 1999

Investigating quality factors in object-oriented designs: an industrial case study

Lionel C. Briand; Jürgen Wüst; Stefan V. Ikonomovski; Hakim Lounis

This paper aims at empirically exploring the relationships between most of the existing coupling and cohesion measures for object-oriented (OO) systems, and the fault-proneness of OO system classes. The underlying goal of such a study is to better understand the relationship between existing design measurement in OO systems and the quality of the software developed. The study described here is a replication of an analogous study conducted in a university environment with systems developed by students. In order to draw more general conclusions and to (dis)confirm the results obtained there, we have now replicated the study using data collected on an industrial system developed by professionals. Results show that many of our findings are consistent across systems, despite the very disparate nature of the systems under study. Some of the strong dimensions captured by the measures in each data set are visible in both the university and the industrial case study. For example, the frequency of method invocations appears to be the main driving factor of fault-proneness in all systems. However, there are also differences across studies, which illustrate the fact that quality does not follow universal laws and that quality models must be developed locally, wherever needed.
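As a rough illustration of the kind of analysis the abstract describes, the sketch below relates a few hypothetical design metrics (CBO, LCOM, RFC) to class fault-proneness with a logistic regression; the metric values and fault labels are invented, and scikit-learn's LogisticRegression merely stands in for the study's actual modeling.

```python
# Illustrative sketch only: relating OO design metrics to class fault-proneness.
# The metric names (CBO, LCOM, RFC) and the toy data are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rows = classes; columns = hypothetical design metrics [CBO, LCOM, RFC].
X = np.array([[ 4, 12, 20],
              [11, 30, 55],
              [ 2,  5, 10],
              [ 9, 25, 48],
              [ 6, 14, 31],
              [14, 40, 70]])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = at least one fault found in the class

model = LogisticRegression().fit(X, y)

# Predicted fault-proneness (probability) for a new class.
new_class = np.array([[8, 22, 40]])
print(model.predict_proba(new_class)[0, 1])
```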


international conference on software maintenance | 1999

Using coupling measurement for impact analysis in object-oriented systems

Lionel C. Briand; Jürgen Wüst; Hakim Lounis

Many coupling measures have been proposed in the context of object oriented (OO) systems. In addition, due to the numerous dependencies present in OO systems, several studies have highlighted the complexity of using dependency analysis to perform impact analysis. An alternative is to investigate the construction of probabilistic decision models based on coupling measurement to support impact analysis. In addition to providing an ordering of classes where ripple effects are more likely, such an approach is simple and can be automated. In our investigation, we perform a thorough analysis on a commercial C++ system where change data has been collected over several years. We identify the coupling dimensions that seem to be significantly related to ripple effects and use these dimensions to rank classes according to their probability of containing ripple effects. We then assess the expected effectiveness of such decision models.
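To make the ranking idea concrete, here is a minimal sketch, with invented class names and probabilities, of how classes could be ordered by an estimated likelihood of containing ripple effects; the dictionary stands in for whatever coupling-based probabilistic decision model produced the estimates.

```python
# Illustrative sketch (names and numbers are assumptions): ranking classes by
# an estimated probability of containing ripple effects after a change, so
# that impact analysis can focus on the most likely candidates first.
ripple_probability = {
    "OrderManager":  0.81,
    "Invoice":       0.64,
    "CustomerCache": 0.37,
    "Logger":        0.05,
}

# Order classes from most to least likely to be impacted.
ranked = sorted(ripple_probability.items(), key=lambda kv: kv[1], reverse=True)
for cls, p in ranked:
    print(f"{cls:15s} {p:.2f}")
```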


Empirical Software Engineering | 2001

Replicated Case Studies for Investigating Quality Factors in Object-Oriented Designs

Lionel C. Briand; Jürgen Wüst; Hakim Lounis

This paper aims at empirically exploring the relationships between most of the existing design coupling, cohesion, and inheritance measures for object-oriented (OO) systems, and the fault-proneness of OO system classes. The underlying goal of this study is to better understand the relationship between existing design measurement in OO systems and the quality of the software developed. In addition, we aim at assessing whether such relationships, once modeled, can be used to effectively drive and focus inspections or testing. The study described here is a replication of an analogous study conducted in a university environment with systems developed by students. In order to draw more general conclusions and to (dis)confirm the results obtained there, we now replicated the study using data collected on an industrial system developed by professionals. Results show that many of our findings are consistent across systems, despite the very disparate nature of the systems under study. Some of the strong dimensions captured by the measures in each data set are visible in both the university and industrial case study. For example, the frequency of method invocations appears to be the main driving factor of fault-proneness in all systems. However, there are also differences across studies, which illustrate the fact that, although many principles and techniques can be reused, quality does not follow universal laws and quality models must be developed locally, wherever needed.


Digital Investigation | 2009

Towards an integrated e-mail forensic analysis framework

Rachid Hadjidj; Mourad Debbabi; Hakim Lounis; Farkhund Iqbal; Adam Szporer; Djamel Benredjem

Due to its simple and inherently vulnerable nature, e-mail communication is abused for numerous illegitimate purposes. E-mail spamming, phishing, drug trafficking, cyber bullying, racial vilification, child pornography, and sexual harassment are some common e-mail mediated cyber crimes. Presently, there is no adequate proactive mechanism for securing e-mail systems. In this context, forensic analysis plays a major role by examining suspected e-mail accounts to gather evidence to prosecute criminals in a court of law. To accomplish this task, a forensic investigator needs efficient automated tools and techniques to perform a multi-staged analysis of e-mail ensembles with a high degree of accuracy, and in a timely fashion. In this article, we present our e-mail forensic analysis software tool, developed by integrating existing state-of-the-art statistical and machine-learning techniques complemented with social networking techniques. In this framework we incorporate our two proposed authorship attribution approaches; one is presented for the first time in this article.
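For flavor only, the sketch below shows one way an authorship-attribution component could be prototyped with character n-gram features and a linear classifier; the e-mail texts, author labels, and scikit-learn pipeline are assumptions, not the framework's actual implementation.

```python
# Illustrative sketch (toy data, not the framework's technique): a minimal
# authorship-attribution step using character n-gram features and a linear
# classifier, standing in for the statistical/ML components the framework
# integrates. All e-mail texts and author labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

train_emails = [
    "pls see attached, will follow up tmrw",
    "Kindly find the report attached for your review.",
    "sending u the files now, lmk if issues",
    "I would appreciate your feedback at your earliest convenience.",
]
train_authors = ["alice", "bob", "alice", "bob"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),  # stylometric n-grams
    LinearSVC(),
)
clf.fit(train_emails, train_authors)

# Attribute a disputed message to its most likely author.
print(clf.predict(["pls review attachment, thx"])[0])
```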


IEEE Transactions on Software Engineering | 2002

The optimal class size for object-oriented software

K. El Emam; S. Benlarbi; Nishith Goel; Walcélio L. Melo; Hakim Lounis; Shesh N. Rai

A growing body of literature suggests that there is an optimal size for software components. This means that components that are too small or too big will have a higher defect content (i.e., there is a U-shaped curve relating defect content to size). The U-shaped curve has become known as the “Goldilocks Conjecture”. Recently, a cognitive theory has been proposed to explain this phenomenon, and it has been expanded to characterize object-oriented software. This conjecture has wide implications for software engineering practice. It suggests (1) that designers should deliberately strive to design classes that are of the optimal size, (2) that program decomposition is harmful, and (3) that there exists a maximum (threshold) class size that should not be exceeded to ensure fewer faults in the software. The purpose of the current paper is to evaluate this conjecture for object-oriented systems. We first demonstrate that the claims of an optimal component/class size (1 above) and of smaller components/classes having a greater defect content (2 above) are due to a mathematical artifact in the analyses performed previously. We then empirically test the threshold effect claims of this conjecture (3 above). To our knowledge, an empirical test of size threshold effects for object-oriented systems has not been performed thus far. We performed an initial study with an industrial C++ system and replicated it twice, on another C++ system and on a commercial Java application. Our results provide unambiguous evidence that there is no threshold effect of class size. We obtained the same result for three systems using four different size measures. These findings suggest that there is a simple continuous relationship between class size and faults, and that the optimal class size, smaller-classes-are-better, and threshold-effect conjectures have neither a sound theoretical nor a sound empirical basis.
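The sketch below, on purely synthetic data, contrasts a simple continuous size-fault model with a single-threshold model of the kind the conjecture implies; the threshold value, size measure, and fault counts are all invented.

```python
# Illustrative sketch (synthetic data, not the paper's): contrasting a simple
# continuous size-fault model with a threshold model of the kind the
# "Goldilocks Conjecture" implies. The threshold value and data are assumptions.
import numpy as np

rng = np.random.default_rng(0)
size = rng.integers(20, 400, 100)                 # class size (e.g., LOC)
faults = 0.02 * size + rng.poisson(1.0, 100)      # faults rise smoothly with size

# Continuous model: faults ~ a * size + b (least squares).
a, b = np.polyfit(size, faults, 1)
resid_continuous = faults - (a * size + b)

# Threshold model: classes above a cutoff get one mean, classes below another.
threshold = 200
above = size > threshold
pred_threshold = np.where(above, faults[above].mean(), faults[~above].mean())
resid_threshold = faults - pred_threshold

print("continuous SSE:", np.sum(resid_continuous ** 2))
print("threshold  SSE:", np.sum(resid_threshold ** 2))
```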


automated software engineering | 1997

Applying concept formation methods to object identification in procedural code

Houari A. Sahraoui; Walcélio L. Melo; Hakim Lounis; François Dumont

Legacy software systems present a high level of entropy combined with imprecise documentation. This makes their maintenance more difficult, more time consuming, and costlier. In order to address these issues, many organizations have been migrating their legacy systems to new technologies. In this paper, we describe a computer-supported approach aimed at supporting the migration of procedural software systems to object-oriented (OO) technology, which supposedly fosters reusability, expandability, flexibility, encapsulation, information hiding, modularity, and maintainability. Our approach relies heavily on the automatic formation of concepts, based on information extracted directly from code, to identify objects. The approach thus tends to minimize the need for application domain experts. We also propose rules for the identification of OO methods from routines. A well-known and self-contained example is used to illustrate the approach. We have applied the approach to medium/large procedural software systems, and the results show that it is able to find objects and to identify their methods from procedures and functions.
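A minimal sketch of the underlying idea, assuming invented routine and variable names: grouping procedural routines by the global data they access yields candidate objects, loosely in the spirit of concept formation (this is not the paper's actual algorithm).

```python
# Illustrative sketch (toy example, not the paper's algorithm): a simplified
# concept-formation step that groups procedural routines by the global
# variables they access, suggesting candidate objects (data + methods).
from itertools import combinations

# Which global variables each routine reads or writes (all names invented).
uses = {
    "open_account":  {"account_table", "next_account_id"},
    "close_account": {"account_table"},
    "post_payment":  {"account_table", "ledger"},
    "print_ledger":  {"ledger"},
}

# Candidate concepts: shared variable sets and the routines that share them.
concepts = {}
for r in range(1, len(uses) + 1):
    for group in combinations(uses, r):
        shared = set.intersection(*(uses[g] for g in group))
        if shared:
            concepts.setdefault(frozenset(shared), set()).update(group)

for attrs, routines in concepts.items():
    print(sorted(attrs), "->", sorted(routines))
```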


automated software engineering | 1998

Reusability hypothesis verification using machine learning techniques: a case study

Yida Mao; Houari A. Sahraoui; Hakim Lounis

Since the emergence of object technology, organizations have accumulated a tremendous amount of object-oriented (OO) code. Instead of continuing to recreate components that are similar to existing artifacts, and considering the rising costs of development, many organizations would like to decrease software development costs and cycle time by reusing existing OO components. This paper proposes an experiment to verify three hypotheses about the impact of three internal characteristics (inheritance, coupling and complexity) of OO applications on reusability. This verification is done through a machine learning approach (the C4.5 algorithm and a windowing technique). Two kinds of results are produced: (1) for each hypothesis (characteristic), a predictive model is built using a set of metrics derived from this characteristic; and (2) for each predictive model, we measure its completeness, correctness and global accuracy.
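As an illustration of the kind of predictive model and evaluation described here, the sketch below trains a decision tree on hypothetical inheritance, coupling, and complexity metrics and reports correctness (precision), completeness (recall), and global accuracy; scikit-learn's DecisionTreeClassifier stands in for C4.5 and the data are invented.

```python
# Illustrative sketch (toy data; DecisionTreeClassifier stands in for C4.5):
# learning a reusability model from inheritance, coupling, and complexity
# metrics, then reporting completeness, correctness, and global accuracy.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_score, recall_score, accuracy_score

# Columns = hypothetical metrics [depth_of_inheritance, coupling, complexity].
X = np.array([[1,  3,  5], [4, 12, 30], [2,  4,  8], [5, 15, 40],
              [1,  2,  4], [3, 10, 22], [2,  5,  9], [4, 14, 35]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])   # 1 = reusable, 0 = not reusable

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
pred = model.predict(X)

print("correctness (precision):", precision_score(y, pred))
print("completeness (recall):  ", recall_score(y, pred))
print("global accuracy:        ", accuracy_score(y, pred))
```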


international conference on software engineering | 1998

An investigation on the use of machine learned models for estimating correction costs

M.A. de Almeida; Hakim Lounis; Walcélio L. Melo

We present the results of an empirical study in which we have investigated machine learning (ML) algorithms with regard to their capabilities to accurately assess the correctability of faulty software components. Three different families of algorithms have been analyzed. We have used (1) fault data collected on corrective maintenance activities for the Generalized Support Software reuse asset library located at the Flight Dynamics Division of NASA's GSFC and (2) product measures extracted directly from the faulty components of this library.


software engineering and advanced applications | 2006

Analyzing Change Impact in Object-Oriented Systems

M. K. Abdi; Hakim Lounis; Houari A. Sahraoui

The development of software products consumes a lot of time and resources. On the other hand, these development costs are lower than maintenance costs, which are a major concern, especially for systems designed with recent technologies. System modifications should therefore be undertaken rigorously, and change effects must be considered. In this paper, we propose an approach, both analytical and experimental, whose objective is to analyze and predict change impacts in object-oriented (OO) systems. The method we follow consists first of choosing an existing impact model, and then adapting it. An impact calculation technique based on a meta-model is developed. To evaluate our approach, an empirical study was conducted on a real system, in which a correlation hypothesis between coupling and change impact was advanced. A concrete change was made in the target system and coupling metrics were extracted from it. The hypothesis was verified with machine-learning (ML) techniques. The results obtained are interesting; they are presented and commented on.
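A minimal sketch of the kind of correlation check implied by the hypothesis, using invented per-class numbers and a Spearman rank correlation; this is not the study's actual data or procedure.

```python
# Illustrative sketch (invented numbers): testing whether classes with higher
# coupling tend to show a larger change impact, via a rank correlation.
from scipy.stats import spearmanr

coupling      = [2, 5, 7, 3, 11, 6, 9, 4]   # coupling metric per class
change_impact = [1, 3, 4, 1,  7, 3, 6, 2]   # impacted elements after the change

rho, p_value = spearmanr(coupling, change_impact)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```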


International Journal of Software Engineering and Knowledge Engineering | 1999

An Investigation on the Use of Machine Learned Models for Estimating Software Correctability

Mauricio Amaral de Almeida; Hakim Lounis; Walcélio L. Melo

In this paper we present the results of an empirical study in which we have investigated Machine Learning (ML) algorithms with regard to their capabilities to accurately assess the correctability of faulty software components. Three different families of algorithms have been analyzed: divide and conquer (top-down induction of decision trees), covering, and inductive logic programming (ILP). We have used (1) fault data collected on corrective maintenance activities for the Generalized Support Software reuse asset library located at the Flight Dynamics Division of NASA's GSFC and (2) product measures extracted directly from the faulty components of this library. In our data set, the software quality models generated by both C4.5-rules (a divide and conquer algorithm) and FOIL (an inductive logic programming one) presented the best results from the point of view of model accuracy.
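For illustration only, the sketch below evaluates a hand-written if-then rule of the sort C4.5-rules or FOIL might induce for correctability; the metrics, threshold, and component data are invented.

```python
# Illustrative sketch (toy data and rule): scoring a simple if-then rule of the
# kind a rule-learning algorithm might induce for predicting whether a faulty
# component is easy to correct. Metric names, thresholds, and data are invented.
components = [
    {"loc": 120, "fan_out": 3,  "easy_to_correct": True},
    {"loc": 840, "fan_out": 14, "easy_to_correct": False},
    {"loc": 200, "fan_out": 5,  "easy_to_correct": True},
    {"loc": 610, "fan_out": 11, "easy_to_correct": False},
    {"loc": 330, "fan_out": 9,  "easy_to_correct": True},
]

# Hypothetical induced rule: IF loc <= 400 AND fan_out <= 10 THEN easy_to_correct.
def rule(c):
    return c["loc"] <= 400 and c["fan_out"] <= 10

correct = sum(rule(c) == c["easy_to_correct"] for c in components)
print(f"rule accuracy: {correct}/{len(components)}")
```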

Collaboration


Dive into Hakim Lounis's collaborations.

Top Co-Authors

Tamer Fares Gayed
Université du Québec à Montréal

Hicham Assoudi
Université du Québec à Montréal

Moncef Bari
Université du Québec à Montréal

Mounir Boukadoum
Université du Québec à Montréal

Hafedh Mili
Université du Québec à Montréal