Mariza Andrade da Silva Bigonha
Universidade Federal de Minas Gerais
Publications
Featured research published by Mariza Andrade da Silva Bigonha.
Journal of Systems and Software | 2012
Kecia Aline Marques Ferreira; Mariza Andrade da Silva Bigonha; Roberto da Silva Bigonha; Luiz F. O. Mendes; Heitor C. Almeida
Despite the importance of software metrics and the large number of proposed metrics, they have not yet been widely applied in industry. One reason might be that, for most metrics, the range of expected values, i.e., the reference values, is not known. This paper presents the results of a study on the structure of a large collection of open-source programs developed in Java, of varying sizes and from different application domains. The aim of this work is the definition of thresholds for a set of object-oriented software metrics, namely: LCOM, DIT, coupling factor, afferent couplings, number of public methods, and number of public fields. We carried out an experiment to evaluate the practical use of the proposed thresholds. The results of this evaluation indicate that the proposed thresholds can support the identification of classes which violate design principles, as well as the identification of well-designed classes. The method used in this study to derive software metric thresholds can be applied to other software metrics in order to find their reference values.
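The threshold-derivation idea can be sketched as follows: measure a metric across a large corpus of classes and pick reference values from the observed distribution. This is an illustrative sketch only; the quantile cut-offs, the band names, and the example values below are assumptions for illustration, not the paper's actual method or data.

```python
# Hypothetical sketch: deriving metric thresholds from a corpus of
# measured values. The quantile cut-offs (70% / 90%) are assumptions;
# the paper derives reference values from the observed distribution.

def derive_thresholds(values, good=0.70, regular=0.90):
    """Split observed metric values into good/regular/bad bands."""
    ordered = sorted(values)
    n = len(ordered)
    return {
        "good": ordered[int(good * (n - 1))],      # below this: common, likely fine
        "regular": ordered[int(regular * (n - 1))],  # above this: rare, worth review
    }

# e.g. afferent couplings measured across many open-source classes
ca_values = [0, 1, 1, 2, 2, 3, 3, 5, 8, 20, 45]
print(derive_thresholds(ca_values))
```

A class whose metric value falls past the "regular" cut-off would then be flagged for inspection, mirroring how the proposed thresholds identify classes that violate design principles.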
Working Conference on Reverse Engineering | 2013
Cristiano Amaral Maffort; Marco Tulio Valente; Mariza Andrade da Silva Bigonha; Nicolas Anquetil; André C. Hora
Software architecture conformance is a key software quality control activity that aims to reveal the progressive gap normally observed between concrete and planned software architectures. In this paper, we present ArchLint, a lightweight approach for architecture conformance based on a combination of static and historical source code analysis. For this purpose, ArchLint relies on four heuristics for detecting both absences and divergences in source-code-based architectures. We applied ArchLint to an industrial-strength system and, as a result, detected 119 architectural violations, with an overall precision of 46.7% and a recall of 96.2% for divergences. We also evaluated ArchLint with four open-source systems used in an independent study on reflexion models. In this second study, ArchLint achieved precision results ranging from 57.1% to 89.4%.
Empirical Software Engineering | 2016
Cristiano Amaral Maffort; Marco Tulio Valente; Ricardo Terra; Mariza Andrade da Silva Bigonha; Nicolas Anquetil; André C. Hora
Software architecture conformance is a key software quality control activity that aims to reveal the progressive gap normally observed between concrete and planned software architectures. However, formally specifying an architecture can be difficult, as it must be done by an expert with a high-level understanding of the system. In this paper, we present a lightweight approach for architecture conformance based on a combination of static and historical source code analysis. The proposed approach relies on four heuristics for detecting absences (something expected was not found) and divergences (something prohibited was found) in source-code-based architectures. We also present an architecture conformance process based on the proposed approach. We followed this process to evaluate the architecture of two industrial-strength information systems, achieving an overall precision of 62.7% and 53.8%. We also evaluated our approach in an open-source information retrieval library, achieving an overall precision of 59.2%. We envision that a heuristic-based approach for architecture conformance can be used to rapidly raise architectural warnings, without deeply involving experts in the process.
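One way to picture how a divergence heuristic might combine static and historical evidence: a dependency is suspicious when few classes in a module exhibit it (static scarcity) and past commits frequently removed similar dependencies (historical signal). This is an illustrative sketch only; the function name, the two ratios, and the thresholds are hypothetical, not the heuristics actually used by the approach.

```python
# Illustrative sketch (not the paper's implementation): flag a dependency
# from a module to a type as a divergence when it is statically rare and
# historically often removed. Thresholds below are assumptions.

def is_divergence(dep_ratio, removal_ratio,
                  scarcity_max=0.10, removal_min=0.50):
    """dep_ratio: fraction of the module's classes holding this dependency.
    removal_ratio: fraction of past occurrences later deleted in commits."""
    return dep_ratio <= scarcity_max and removal_ratio >= removal_min

print(is_divergence(0.05, 0.80))  # rare and often removed -> flagged
print(is_divergence(0.60, 0.80))  # common dependency -> not flagged
```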
Compiler Construction | 2011
Rodrigo Sol; Christophe Guillon; Fernando Magno Quintão Pereira; Mariza Andrade da Silva Bigonha
Trace compilation is a technique used by just-in-time (JIT) compilers such as TraceMonkey, the JavaScript engine in the Mozilla Firefox browser. Contrary to traditional JIT compilers, a trace compiler works on only part of the source program, normally a linear path inside a heavily executed loop. Because the trace is compiled during the interpretation of the source program, the JIT compiler has access to runtime values. This observation gives the compiler the possibility of producing binary code specialized to these values. In this paper we explore this opportunity to provide an analysis that removes unnecessary overflow tests from JavaScript programs. Our optimization uses range analysis to show that some operations cannot produce overflows. The analysis is linear in time and space on the number of instructions present in the input trace, and it is more effective than traditional range analyses because we have access to values known only at execution time. We have implemented our analysis on top of Firefox's TraceMonkey and have tested it on over 1000 scripts from several industrial-strength benchmarks, including the scripts present in the top 100 most visited webpages in the Alexa index. We generate binaries for either x86 or the embedded microprocessor ST40-300. On average, we eliminate 91.82% of the overflows in the programs present in the TraceMonkey test suite. This optimization provides an average code size reduction of 8.83% on ST40 and 6.63% on x86. Our optimization increases TraceMonkey's runtime by 2.53%.
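The core of the range-analysis argument can be sketched with interval arithmetic: if the ranges of two operands guarantee the result fits in a 32-bit integer, the overflow test after the operation can be dropped. This is a minimal sketch of the idea, not the paper's analysis; the function names are hypothetical, and only addition is modeled.

```python
# Minimal interval-arithmetic sketch: trace compilation makes this effective
# because operand ranges can be seeded with values observed at run time.

INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def add_range(a, b):
    """Interval addition: a and b are (lo, hi) tuples."""
    return (a[0] + b[0], a[1] + b[1])

def needs_overflow_test(r):
    """The overflow test stays only if the result range may leave int32."""
    lo, hi = r
    return lo < INT32_MIN or hi > INT32_MAX

# a loop counter known (from the recorded trace) to stay in [0, 1000]
i = (0, 1000)
print(needs_overflow_test(add_range(i, i)))               # test removed
print(needs_overflow_test(add_range((0, INT32_MAX), i)))  # test kept
```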
Software and Compilers for Embedded Systems | 2011
André Luiz Camargos Tavares; Quentin Colombet; Mariza Andrade da Silva Bigonha; Christophe Guillon; Fernando Magno Quintão Pereira; Fabrice Rastello
Recent results have shown how to do graph-coloring-based register allocation in a way that decouples spilling from register assignment. This decoupled approach has the main advantage of simplifying the implementation of register allocators. However, the decoupled model, as described in previous work, faces many problems when dealing with register aliasing, a phenomenon typical of architectures usually seen in embedded systems, such as ARM. In this paper we introduce the semi-elementary form, a program representation that brings decoupled register allocation to architectures with register aliasing. The semi-elementary form is much smaller than the program representations used by previous decoupled solutions, thus leading to register allocators that perform better in terms of time and space. Furthermore, this representation reduces the number of copies that traditional allocators insert into assembly programs. We have empirically validated our results by showing how our representation improves two well-known graph-coloring-based allocators, namely the Iterated Register Coalescer (IRC) and Bouchez et al.'s brute force (BF) method, both augmented with Smith et al.'s extensions to handle aliasing. Running our techniques on SPEC CPU 2000, we have reduced the number of nodes in the interference graphs by a factor of 4 to 5, hence speeding up allocation by a factor of 3 to 5. Additionally, the semi-elementary form reduces by 8% the number of copies that IRC leaves uncoalesced.
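To see why aliasing complicates interference checks: on ARM-like floating-point banks, assigning a paired register such as D0 clobbers both single-precision halves S0 and S1, so an allocator must treat aliased registers as conflicting. The sketch below only illustrates that conflict relation; the alias table follows an ARM VFP-like naming convention for illustration and is not the paper's representation.

```python
# Sketch of register-aliasing conflicts: D registers overlap pairs of
# S registers, so interference must consider alias sets, not just equality.

ALIASES = {
    "D0": {"S0", "S1"}, "D1": {"S2", "S3"},
    "S0": {"D0"}, "S1": {"D0"}, "S2": {"D1"}, "S3": {"D1"},
}

def conflicts(r1, r2):
    """Two physical registers conflict if equal or if either aliases the other."""
    return (r1 == r2
            or r2 in ALIASES.get(r1, set())
            or r1 in ALIASES.get(r2, set()))

print(conflicts("D0", "S1"))  # S1 is one half of D0
print(conflicts("D0", "S2"))  # disjoint register pairs
```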
Brazilian Symposium on Software Engineering | 2009
Kecia Aline Marques Ferreira; Mariza Andrade da Silva Bigonha; Roberto da Silva Bigonha; Heitor C. Almeida; Luiz F. O. Mendes
Although a large quantity of OO software has been produced, little is known about the actual structure of this type of software. There is a large number of proposed metrics for OO software, but they are still not employed effectively in industry. A reason for this is that there are few data published about this topic, and typical values of the metrics are not known. This paper presents the results of a study carried out on a large collection of open-source software developed in Java. The objective of this study was to identify characteristics of this type of software in terms of a set of metrics for OO software, such as connectivity, class cohesion, and depth of a class in its inheritance tree. The results of the study provide important insights on the structure of open-source OO software and exhibit values that can be taken as baselines for these metrics.
Revista de Informática Teórica e Aplicada | 2008
Kecia Aline Marques Ferreira; Mariza Andrade da Silva Bigonha; Roberto da Silva Bigonha
Most of the software cost is due to maintenance. In recent years, there has been a great deal of interest in developing cost estimation and effort prediction instruments for software maintenance. This work proposes that module connectivity is a key factor in predicting maintenance cost and uses this thesis as the basis to develop a Connectivity Evaluation Model for OO Systems (MACSOO), a refactoring model based on connectivity whose aim is to minimize maintenance cost. We describe experiments whose results illustrate the application of the model and expose the correlation between connectivity and maintainability.
European Conference on Parallel Processing | 2003
Marco Tulio Valente; Fernando Magno Quintão Pereira; Roberto da Silva Bigonha; Mariza Andrade da Silva Bigonha
The growing success of wireless ad hoc networks and portable hardware devices presents many interesting problems to system engineers. In particular, coordination is a challenging task, since ad hoc networks are characterized by very opportunistic connections and rapidly changing topologies. This paper presents a coordination model, called PeerSpaces, designed to overcome the shortcomings of traditional coordination models when used in ad hoc networks.
International Conference on Program Comprehension | 2017
Bruno L. Sousa; Priscila Souza; Eduardo Fernandes; Kecia Aline Marques Ferreira; Mariza Andrade da Silva Bigonha
Bad smells are symptoms of problems in the source code of software systems. They may harm the maintenance and evolution of systems on different levels. Thus, detecting smells is essential to support software quality improvement. Since even small systems may contain several bad smell instances, and considering that developers have to prioritize their elimination, automated detection is a necessary support for developers. In this regard, detection strategies have been proposed to formalize rules that detect specific bad smells, such as Large Class and Feature Envy. Several tools, like JDeodorant and JSpIRIT, implement these strategies, but, in general, they do not provide full customization of the formal rules that define a detection strategy. In this paper, we propose FindSmells, a tool for detecting bad smells in software systems through software metrics and their thresholds. With FindSmells, the user can compose and manage different strategies, which run without source code analysis. We also provide a running example of the tool. Video: https://youtu.be/LtomN93y6gg.
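A detection strategy of the kind described above is essentially a boolean rule over metric values and thresholds, evaluable from precomputed metrics alone. The sketch below illustrates that shape with a Large Class rule; the metric names, thresholds, and example data are hypothetical, not FindSmells' defaults or API.

```python
# Hypothetical detection strategy in the spirit of the tool: a smell is a
# rule over metrics and thresholds, evaluated without re-reading the code.
# Thresholds below are illustrative assumptions.

def large_class(metrics, loc_max=500, nom_max=40):
    """Flag Large Class when lines of code and number of methods both exceed
    their thresholds."""
    return metrics["LOC"] > loc_max and metrics["NOM"] > nom_max

classes = {
    "OrderService": {"LOC": 1200, "NOM": 75},
    "Money":        {"LOC": 90,   "NOM": 12},
}
print([name for name, m in classes.items() if large_class(m)])
```

Because the rule is just a predicate over a metrics table, a user could swap thresholds or combine predicates to compose different strategies, which matches the customization the abstract emphasizes.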
Proceedings of the 19th Brazilian Symposium on Programming Languages - Volume 9325 | 2015
Francisco Demontiê; Junio Cezar; Mariza Andrade da Silva Bigonha; Frederico F. Campos; Fernando Magno Quintão Pereira
Complexity analysis is an important activity for software engineers. Such an analysis can be especially useful in the identification of performance bugs. Although the research community has made significant progress in this field, existing techniques still show limitations. Purely static methods may be imprecise due to their inability to capture the dynamic behaviour of programs. On the other hand, dynamic approaches usually need user intervention and/or are not effective at relating complexity bounds to the symbols in the program code. In this paper, we present a hybrid technique that solves these shortcomings. Our technique uses a numeric method based on polynomial interpolation to precisely determine a complexity function for loops. Statically, we determine: (i) the inputs of a loop, i.e., the variables that control its iterations; and (ii) an algebraic equation relating the loops within a function. We then instrument the program to plot a curve relating inputs and the number of operations executed. By running the program over different inputs, we generate sufficient points for our interpolator. In the end, the complexity functions for the loops are combined using an algebra of our own craft. We have implemented our technique in the LLVM compiler, being able to analyse 99.7% of all loops available in the Polybench benchmark suite, and most of the loops in Rodinia. These results indicate that our technique is an effective and useful way to find the complexity of loops in high-performance applications.
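The interpolation step can be illustrated with a simplified stand-in: record (input size, operation count) pairs from an instrumented loop at evenly spaced inputs, then recover the polynomial degree via finite differences (the k-th differences of a degree-k polynomial are constant). This sketch is an assumption-laden simplification, not the paper's interpolator.

```python
# Simplified sketch of the interpolation idea: constant k-th finite
# differences over evenly spaced inputs imply a degree-k polynomial,
# i.e., the loop's operation count grows as O(n^k).

def poly_degree(counts):
    """counts: operation counts measured at evenly spaced input sizes."""
    degree = 0
    while len(set(counts)) > 1:
        counts = [b - a for a, b in zip(counts, counts[1:])]
        degree += 1
    return degree

# counts from a doubly nested loop executing n*n operations, n = 1..5
counts = [n * n for n in range(1, 6)]
print(poly_degree(counts))  # 2 -> the loop is quadratic in its input
```

A full interpolator would also recover the coefficients (e.g., by solving the Vandermonde system for the sampled points) so the bound can be expressed in terms of the loop's symbolic inputs.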