Baldoino Fonseca
Federal University of Alagoas
Publication
Featured research published by Baldoino Fonseca.
Information & Software Technology | 2016
Iran Rodrigues; Márcio Ribeiro; Flávio Medeiros; Paulo Borba; Baldoino Fonseca; Rohit Gheyi
Context: Maintaining software families is not a trivial task. Developers commonly introduce bugs when they do not consider existing dependencies among features. When feature implementations share program elements, such as variables and functions, inadvertently using these elements may result in bugs. In this context, previous work focuses only on the occurrence of intraprocedural dependencies, that is, when features share program elements within a single function. However, we still lack studies investigating dependencies that transcend the boundaries of a function, since these cases might cause bugs as well. Objective: This work assesses to what extent feature dependencies exist in actual software families, answering research questions regarding the occurrence of intraprocedural, global, and interprocedural dependencies and their characteristics. Method: We perform an empirical study covering 40 software families of different domains and sizes. We use a variability-aware parser to analyze the families' source code while retaining all variability information. Results: Intraprocedural and interprocedural feature dependencies are common in the families we analyze: more than half of the functions with preprocessor directives have intraprocedural dependencies, while over a quarter of all functions have interprocedural dependencies. The median depth of interprocedural dependencies is 9. Conclusion: Given that these dependencies are rather common, there is a need for tools and techniques to raise developers' awareness in order to minimize or avoid problems when maintaining code in the presence of such dependencies. Problems involving interprocedural dependencies with high depths might be harder to detect and fix.
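To make the notion of an interprocedural feature dependency concrete, the sketch below is a hypothetical, regex-based illustration in Python over toy C code, far simpler than the variability-aware parser used in the study: it collects identifiers assigned under an #ifdef in one function, which are candidates for dependencies when used in another function.

```python
# Hypothetical, simplified illustration of an interprocedural feature
# dependency: an identifier assigned under an #ifdef in one function and
# used in another function. The paper relies on a variability-aware parser;
# this regex-based sketch only conveys the idea.
import re

C_SOURCE = r"""
int buffer_size = 0;

void configure(void) {
#ifdef FEATURE_CACHE
    buffer_size = 4096;   /* assigned under FEATURE_CACHE */
#endif
}

void allocate(void) {
    use(buffer_size);     /* used in another function: interprocedural */
}
"""

def identifiers_under_ifdef(code):
    """Collect identifiers assigned inside #ifdef ... #endif blocks."""
    names = set()
    for block in re.findall(r"#ifdef\s+\w+(.*?)#endif", code, re.S):
        names.update(re.findall(r"(\w+)\s*=", block))
    return names

feature_vars = identifiers_under_ifdef(C_SOURCE)
print("candidates for interprocedural feature dependencies:", feature_vars)
```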
ACM Symposium on Applied Computing | 2010
Andrew Diniz da Costa; Camila Nunes; Viviane Torres da Silva; Baldoino Fonseca; Carlos José Pereira de Lucena
Appropriately implementing self-adaptive software systems that not only check the need for adaptations and perform them, but also ensure their compliance with new environment requirements, is still an open issue. Therefore, this paper proposes an extension to the Java self-Adaptive Agent Framework (JAAF) in order to apply the self-test concept. This framework allows for the creation of self-adaptive agents based on a process composed of four main activities (monitor, analyze, plan and execute). In this paper we extend the process and framework by including a test activity that checks the adapted behavior before its execution. The applicability of the proposed process is demonstrated by a case study in which a system responsible for generating susceptibility maps, i.e., maps that show locations with landslide risks in a given area, adapts its behavior and checks the adaptations before using them.
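As a rough illustration of the extended adaptation process described above, the following Python sketch shows a monitor-analyze-plan-execute cycle with an added test step that vets an adaptation before it runs. JAAF itself is a Java framework, and all names here are illustrative assumptions rather than its actual API.

```python
# Hypothetical sketch of the control loop described above: the classic
# monitor-analyze-plan-execute cycle extended with a test step that checks
# an adaptation before it is executed. Names are illustrative only.
def self_adaptive_cycle(sensors, analyzer, planner, tester, executor):
    state = sensors()               # monitor the environment
    problems = analyzer(state)      # analyze whether adaptation is needed
    if not problems:
        return
    plan = planner(problems)        # plan the adapted behavior
    if tester(plan, state):         # test the adaptation before running it
        executor(plan)              # execute only adaptations that pass
    # otherwise discard the plan and wait for the next cycle
```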
European Dependable Computing Conference | 2016
Henrique Alves; Baldoino Fonseca; Nuno Antunes
Code with certain characteristics is more prone to have security vulnerabilities. In fact, studies show that code not following best practices is harder to verify and maintain, and is consequently more likely to have vulnerabilities left unnoticed or inadvertently introduced. In this experience report, we study whether software metrics can reflect such characteristics and thus have some correlation with the existence of vulnerabilities. The analysis is based on 2875 security patches, used to build a dataset with metrics and vulnerabilities for all the functions, classes and files of 5750 versions of five widely used projects that are exposed to attacks: Linux Kernel, Mozilla, Xen Hypervisor, httpd and glibc. We calculated software metrics from their sources and used correlation algorithms and statistical tests on these metrics in order to identify relations between them and the existing vulnerabilities. Results show that software metrics are able to discriminate vulnerable and non-vulnerable functions, but it is not possible to find strong correlations between these metrics and the number of vulnerabilities in the analyzed functions. Finally, the results indicate that vulnerable functions are likely to have other vulnerabilities in the future.
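The following minimal sketch illustrates the kind of analysis described, assuming toy data and using Spearman correlation and a Mann-Whitney test as stand-ins for the correlation algorithms and statistical tests mentioned in the abstract.

```python
# Minimal sketch: compare a software metric between vulnerable and
# non-vulnerable functions and correlate it with vulnerability counts.
# The metric values are made up; the choice of Spearman and Mann-Whitney
# is an assumption, not necessarily the exact tests used in the paper.
from scipy.stats import spearmanr, mannwhitneyu

complexity          = [3, 5, 12, 7, 25, 4, 18, 9]   # e.g. cyclomatic complexity
vulnerability_count = [0, 0,  2, 0,  3, 0,  1, 0]   # vulnerabilities per function

rho, p_corr = spearmanr(complexity, vulnerability_count)

vulnerable     = [c for c, v in zip(complexity, vulnerability_count) if v > 0]
non_vulnerable = [c for c, v in zip(complexity, vulnerability_count) if v == 0]
stat, p_diff = mannwhitneyu(vulnerable, non_vulnerable, alternative="greater")

print(f"Spearman rho={rho:.2f} (p={p_corr:.3f}), Mann-Whitney p={p_diff:.3f}")
```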
International Symposium on Software Reliability Engineering | 2015
Lucas Amorim; Evandro Costa; Nuno Antunes; Baldoino Fonseca; Márcio Ribeiro
Developers continuously maintain software systems to adapt to new requirements and to fix bugs. Due to the complexity of maintenance tasks and time-to-market pressure, developers make poor implementation choices, also known as code smells. Studies indicate that code smells hinder comprehensibility and possibly increase change- and fault-proneness. Therefore, they must be identified to enable the application of corrections. The challenge is that the imprecise definitions of code smells make developers disagree on whether a piece of code is a smell or not, which makes it difficult to create a universal detection solution able to recognize smells in different software projects. Several approaches have been proposed to identify code smells, but they still report inaccurate results and use techniques that do not present to developers a comprehensive explanation of how these results were obtained. In this experimental report we study the effectiveness of the Decision Tree algorithm in recognizing code smells. For this, the algorithm was applied to a dataset containing 4 open source projects, and the results were compared with a manual oracle, with existing detection approaches and with other machine learning algorithms. The results showed that the approach was able to effectively learn rules for the detection of the code smells studied. The results were even better when genetic algorithms were used to pre-select the metrics to use.
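A hedged sketch of the technique studied: training a decision tree on code metrics labelled by a manual oracle and inspecting the learned, human-readable rules. The metric names, values and labels below are invented for illustration.

```python
# Train a decision tree on code metrics labelled by a manual oracle and
# print the learned rules. All data here is invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# each row: [lines_of_code, number_of_methods, coupling]
metrics = [[520, 45, 30], [80, 6, 4], [610, 50, 28],
           [95, 8, 5], [700, 60, 35], [60, 5, 3]]
is_god_class = [1, 0, 1, 0, 1, 0]   # oracle labels for a "God Class" smell

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(metrics, is_god_class)

# the learned rules are human-readable, which addresses the explanation
# concern raised in the abstract
print(export_text(tree, feature_names=["loc", "methods", "coupling"]))
print(tree.predict([[400, 40, 25]]))   # classify a new class
```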
Information & Software Technology | 2018
Mário Hozano; Alessandro Garcia; Baldoino Fonseca; Evandro Costa
Context: A code smell indicates a poor implementation choice that often worsens software quality. Thus, code smell detection is an elementary technique for identifying refactoring opportunities in software systems. Unfortunately, there is limited knowledge on how similarly two or more developers detect smells in code. In particular, few studies have investigated whether developers agree or disagree when recognizing a smell and which factors can influence such (dis)agreement. Objective: We perform a broad study to investigate how similarly developers detect code smells. We also analyze whether certain factors related to the developers' profiles, concerning background and experience, may influence such (dis)agreement. Moreover, we analyze whether the heuristics adopted by developers when detecting code smells may influence their (dis)agreement. Method: We conducted an empirical study with 75 developers who evaluated instances of 15 different code smell types. For each smell type, we analyzed the agreement among the developers and we assessed the influence of 6 different factors on the developers' evaluations. Altogether, more than 2700 evaluations were collected, resulting in substantial quantitative and qualitative analyses. Results: The results indicate that the developers presented low agreement on detecting all 15 smell types analyzed in our study. The results also suggest that factors related to background and experience did not have a consistent influence on the agreement among the developers. On the other hand, the results show that the agreement was consistently influenced by specific heuristics employed by developers. Conclusions: Our findings reveal that developers detect code smells in significantly different ways. As a consequence, these findings raise questions concerning the results of previous studies that did not consider the different perceptions of developers on detecting code smells. Moreover, our findings shed light on how to improve state-of-the-art techniques for accurate, customized detection of code smells.
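As an illustration of what measuring (dis)agreement can look like, the sketch below computes average pairwise observed agreement over a set of smell evaluations; the ratings are invented and the paper may rely on a different agreement measure.

```python
# Average pairwise observed agreement among developers over a set of
# smell evaluations. Ratings are invented for illustration.
from itertools import combinations

# rows: developers, columns: evaluated code elements (1 = smell, 0 = not a smell)
ratings = [
    [1, 0, 1, 1, 0],   # developer A
    [0, 0, 1, 1, 1],   # developer B
    [1, 1, 0, 1, 0],   # developer C
]

def pairwise_agreement(ratings):
    pairs = list(combinations(ratings, 2))
    per_pair = [
        sum(a == b for a, b in zip(r1, r2)) / len(r1)
        for r1, r2 in pairs
    ]
    return sum(per_pair) / len(per_pair)

print(f"average pairwise agreement: {pairwise_agreement(ratings):.2f}")
```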
International Conference on Software Engineering | 2018
Leonardo da Silva Sousa; Anderson Oliveira; Willian Nalepa Oizumi; Simone Diniz Junqueira Barbosa; Alessandro Garcia; Jaejoon Lee; Marcos Kalinowski; Rafael Maiani de Mello; Baldoino Fonseca; Roberto Felicio Oliveira; Carlos José Pereira de Lucena; Rodrigo B. de Paes
The prevalence of design problems may cause re-engineering or even discontinuation of a system. Due to missing, informal or outdated design documentation, developers often have to rely on the source code to identify design problems. Therefore, developers have to analyze different symptoms that manifest in several code elements, which may quickly turn into a complex task. Although researchers have been investigating techniques to help developers identify design problems, there is little knowledge on how developers actually proceed to identify them. In order to tackle this problem, we conducted a multi-trial industrial experiment with professionals from 5 software companies to build a grounded theory. The resulting theory offers explanations of how developers identify design problems in practice. For instance, it reveals the characteristics of symptoms that developers consider helpful. Moreover, developers often combine different types of symptoms to identify a single design problem. This knowledge serves as a basis to further understand the phenomenon and advance towards more effective identification techniques.
International Conference on Program Comprehension | 2017
Mário Hozano; Alessandro Garcia; Nuno Antunes; Baldoino Fonseca; Evandro Costa
Code smells indicate poor implementation choices that may hinder program comprehension and maintenance. Their informal definition allows developers to follow different heuristics to detect smells in their projects. Machine learning has been used to customize smell detection according to the developer's perception. However, such customization is not guided (i.e., constrained) to consider the alternative heuristics used by developers when detecting smells. As a result, the customization might not be efficient, requiring considerable effort to reach high effectiveness. In fact, there is no empirical knowledge yet about the efficiency of such unguided approaches for supporting developer-sensitive smell detection. This paper presents Histrategy, a guided customization technique to improve the efficiency of smell detection. Histrategy considers a limited set of detection strategies, produced from different detection heuristics, as input to a customization process. The output of the customization process is a detection strategy tailored to each developer. The technique was evaluated in an experimental study with 48 developers and four types of code smells. The results showed that Histrategy is able to outperform six widely adopted machine learning algorithms, used in unguided approaches, both in effectiveness and efficiency. It was also confirmed that most developers benefit from using alternative heuristics to: (i) build their tailored detection strategies, and (ii) achieve efficient smell detection.
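A hypothetical sketch of guided customization in the spirit described above: from a fixed set of candidate detection strategies, select the one that best matches a single developer's own evaluations. The strategy names and thresholds are illustrative assumptions, not Histrategy's actual strategies.

```python
# Guided customization sketch: pick, from a fixed set of candidate
# threshold-based strategies, the one that best matches a developer's
# own smell evaluations. All names and thresholds are invented.
candidate_strategies = {
    "long_method_loose":  lambda m: m["loc"] > 80,
    "long_method_strict": lambda m: m["loc"] > 40 and m["params"] > 3,
}

# methods evaluated by one developer: metrics plus the developer's verdict
evaluations = [
    ({"loc": 120, "params": 5}, True),
    ({"loc": 60,  "params": 4}, True),
    ({"loc": 30,  "params": 1}, False),
]

def best_strategy(strategies, evaluations):
    def score(strategy):
        return sum(strategy(m) == verdict for m, verdict in evaluations)
    return max(strategies, key=lambda name: score(strategies[name]))

print("tailored strategy:", best_strategy(candidate_strategies, evaluations))
```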
International Conference on Enterprise Information Systems | 2017
Mário Hozano; Nuno Antunes; Baldoino Fonseca; Evandro Costa
Code smells indicate poor implementation choices that may hinder system maintenance. Their detection is important for software quality improvement, but studies suggest that it should be tailored to the perception of each developer. Therefore, detection techniques must adapt their strategies to the developer's perception. Machine Learning (ML) algorithms are a promising way to customize smell detection, but there is a lack of studies on their accuracy in detecting smells for different developers. This paper evaluates the use of ML algorithms in detecting code smells for different developers, considering their individual perception of code smells. We experimentally compared the accuracy of 6 algorithms in detecting 4 code smell types for 40 different developers. For this, we used a detailed dataset containing instances of 4 code smell types manually validated by 40 developers. The results show that the ML algorithms achieved low accuracies for the developers that participated in our study, showing that they are very sensitive to the smell type and the developer. These algorithms are not able to learn from a limited training set, an important limitation when dealing with diverse perceptions of code smells.
Latin American Symposium on Dependable Computing | 2016
Henrique Alves; Baldoino Fonseca; Nuno Antunes
Software metrics can be used as an indicator of the presence of software vulnerabilities. These metrics have been used with machine learning to predict source code prone to contain vulnerabilities. Although it is not possible to find the exact location of the flaws, the models can show which components require more attention during inspections and testing. Each new technique uses its own evaluation dataset, which often has limited size and representativeness. In this experience report, we use a large and representative dataset to evaluate several state-of-the-art vulnerability prediction techniques. This dataset was built with information on 2186 vulnerabilities from five widely used open source projects. Results show that the dataset can be used to distinguish which are the best techniques. It is also shown that some of the techniques can predict nearly all of the vulnerabilities present in the dataset, although with very low precision. Finally, accuracy, precision and recall are not the most effective measures to characterize the effectiveness of these tools.
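The small sketch below illustrates why accuracy alone can be misleading for this task: with few vulnerable functions, a predictor may reach high accuracy and recall while precision stays low. The labels and predictions are invented.

```python
# Toy evaluation of a vulnerability predictor on an imbalanced dataset:
# high accuracy and recall can coexist with low precision.
from sklearn.metrics import accuracy_score, precision_score, recall_score

# 1 = vulnerable, 0 = not vulnerable (vulnerable functions are rare)
actual    = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
predicted = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0]

print("accuracy :", accuracy_score(actual, predicted))
print("precision:", precision_score(actual, predicted))
print("recall   :", recall_score(actual, predicted))
```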
Dependable Systems and Networks | 2016
Marcus Pianco; Baldoino Fonseca; Nuno Antunes
Usually, the most critical modules of a system receive extra attention. But even these modules might be too large to be thoroughly inspected, so it is useful to know where to apply the majority of the effort. Thus, knowing which code changes are more prone to contain vulnerabilities may allow security experts to concentrate on a smaller subset of submitted code changes. In this paper we discuss the change history of functions and its impact on the existence of vulnerabilities. For this, we analyzed the commit history of two software projects widely exposed to attacks (Mozilla and Linux Kernel). Starting from security bugs, we analyzed more than 95k functions (with and without vulnerabilities) and systematized the changes in each function according to a subset of the patterns described in the Orthogonal Defect Classification. The results show that the frequency of changes can help distinguish functions that are more prone to have vulnerabilities.
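As a toy illustration of the underlying idea, the sketch below counts how often each function was changed across a commit history and flags the most frequently changed ones for closer inspection; the commit data is invented.

```python
# Count how often each function was changed across the commit history and
# surface the most frequently changed ones. The commit data is invented.
from collections import Counter

# each entry: the set of functions touched by one commit
commits = [
    {"parse_header", "read_packet"},
    {"read_packet"},
    {"read_packet", "free_buffer"},
    {"parse_header", "read_packet"},
]

change_frequency = Counter(f for commit in commits for f in commit)

# per the study, functions changed most often are more prone to vulnerabilities
for function, changes in change_frequency.most_common(2):
    print(function, changes)
```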