Tiago L. Alves
University of Minho
Publications
Featured research published by Tiago L. Alves.
International Conference on Software Maintenance | 2010
Tiago L. Alves; Christiaan Ypma; Joost Visser
A wide variety of software metrics have been proposed and a broad range of tools is available to measure them. However, the effective use of software metrics is hindered by the lack of meaningful thresholds. Thresholds have been proposed for a few metrics only, mostly based on expert opinion and a small number of observations.
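The benchmark-based idea can be sketched as follows. This is a simplified illustration, not the paper's exact algorithm: it assumes each measurement is weighted by code size and that thresholds are read off at fixed quantiles of the pooled benchmark distribution.

```python
# Simplified sketch of benchmark-based threshold derivation.
# Each entry is (metric_value, size_weight), e.g. the McCabe
# complexity of a method weighted by its lines of code.
def derive_thresholds(measurements, quantiles=(0.70, 0.80, 0.90)):
    """Return the metric value at each size-weighted quantile."""
    total = sum(w for _, w in measurements)
    cumulative = 0.0
    thresholds = []
    targets = list(quantiles)
    for value, weight in sorted(measurements):
        cumulative += weight
        # Emit a threshold each time the weighted cumulative
        # fraction crosses the next target quantile.
        while targets and cumulative / total >= targets[0]:
            thresholds.append(value)
            targets.pop(0)
    return thresholds

# Hypothetical benchmark data: most code is simple, a small tail is not.
benchmark = [(1, 40), (2, 30), (5, 15), (10, 10), (25, 5)]
print(derive_thresholds(benchmark))  # [2, 5, 10]
```

The three resulting values can then serve as the boundaries between low, moderate, high, and very-high risk code.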
Joint Conference of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement | 2011
Tiago L. Alves; Jose Pedro Correia; Joost Visser
Software metrics have been proposed as instruments, not only to guide individual developers in their coding tasks, but also to obtain high-level quality indicators for entire software systems. Such system-level indicators are intended to enable meaningful comparisons among systems or to serve as triggers for a deeper analysis. Common methods for aggregation range from simple mathematical operations (e.g. addition and central tendency) to more complex methodologies such as distribution fitting, wealth inequality metrics (e.g. the Gini coefficient and the Theil index) and custom formulae. However, these methodologies provide little guidance for interpreting the aggregated results or for tracing them back to individual measurements. To resolve such limitations, a two-stage rating approach has been proposed where (i) measurement values are compared to thresholds to summarize them into risk profiles, and (ii) risk profiles are mapped to ratings. In this paper, we extend our approach for deriving metric thresholds from benchmark data into a methodology for benchmark-based calibration of two-stage aggregation of metrics into ratings. We explain the core algorithm of the methodology and demonstrate its application to various metrics of the SIG quality model, using a benchmark of 100 software systems. We present an evaluation of the sensitivity of the algorithm to the underlying data.
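The two-stage aggregation described above can be sketched as follows. The thresholds and rating cut-offs are illustrative placeholders, not the calibrated SIG values.

```python
# Stage 1: summarize raw measurements into a risk profile.
# Each entry is (metric_value, size_weight); the thresholds split
# code into low / moderate / high / very-high risk categories.
def risk_profile(measurements, thresholds=(10, 20, 50)):
    """Fraction of code volume in each of the four risk categories."""
    low, mod, high = thresholds
    bins = [0.0, 0.0, 0.0, 0.0]
    total = sum(w for _, w in measurements)
    for value, weight in measurements:
        if value <= low:
            bins[0] += weight
        elif value <= mod:
            bins[1] += weight
        elif value <= high:
            bins[2] += weight
        else:
            bins[3] += weight
    return [b / total for b in bins]

# Stage 2: map a risk profile to a 1-5 star rating. Each row gives
# the maximum allowed fraction of (moderate, high, very-high) risk.
def rating(profile, cutoffs=(
        (0.25, 0.00, 0.00),    # 5 stars
        (0.30, 0.05, 0.00),    # 4 stars
        (0.40, 0.10, 0.00),    # 3 stars
        (0.50, 0.15, 0.05))):  # 2 stars; anything worse rates 1 star
    """Highest rating whose cut-offs the risk profile satisfies."""
    _, moderate, high, very_high = profile
    for stars, (m, h, v) in zip((5, 4, 3, 2), cutoffs):
        if moderate <= m and high <= h and very_high <= v:
            return stars
    return 1

profile = risk_profile([(5, 70), (15, 20), (30, 8), (60, 2)])
print(profile)          # [0.7, 0.2, 0.08, 0.02]
print(rating(profile))  # 2
```

Calibration, in this setting, means choosing the thresholds and cut-off tables from benchmark data rather than fixing them by hand.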
Fundamental Approaches to Software Engineering | 2011
Jácome Miguel Costa Cunha; Joost Visser; Tiago L. Alves; João Saraiva
Spreadsheets are notoriously error-prone. To help avoid the introduction of errors when changing spreadsheets, models that capture the structure and interdependencies of spreadsheets at a conceptual level have been proposed. Thus, spreadsheet evolution can be made safe within the confines of a model. As in any other model/instance setting, evolution may not only require changes at the instance level but also at the model level. When model changes are required, the safety of instance evolution cannot be guarded by the model alone. We have designed an appropriate representation of spreadsheet models, including the fundamental notions of formulae and references. For these models and their instances, we have designed coupled transformation rules that cover specific spreadsheet evolution steps, such as the insertion of columns in all occurrences of a repeated block of cells. Each model-level transformation rule is coupled with instance-level migration rules from the source to the target model and vice versa. These coupled rules can be composed to create compound transformations at the model level, inducing compound transformations at the instance level. This approach guarantees safe evolution of spreadsheets even when models change.
Software Language Engineering | 2009
Tiago L. Alves; Joost Visser
This paper describes a case study about how well-established software engineering techniques can be applied to the development of a grammar. The employed development methodology can be described as iterative grammar engineering and includes the application of techniques such as grammar metrics, unit testing, and test coverage analysis. The result is a grammar of industrial strength, in the sense that it is well-tested, it can be used for fast parsing of high volumes of code, and it allows automatic generation of support for syntax tree representation, traversal, and interchange.
Electronic Notes in Theoretical Computer Science | 2012
Tiago L. Alves; Paulo Silva; Joost Visser
Data schema transformations occur in the context of software evolution, refactoring, and cross-paradigm data mappings. When constraints exist on the initial schema, these need to be transformed into constraints on the target schema. Moreover, when high-level data types are refined to lower level structures, additional target schema constraints must be introduced to balance the loss of structure and preserve semantics. We introduce an algebraic approach to schema transformation that is constraint-aware in the sense that constraints are preserved from source to target schemas and that new constraints are introduced where needed. Our approach is based on refinement theory and point-free program transformation. Data refinements are modeled as rewrite rules on types that carry point-free predicates as constraints. At each rewrite step, the predicate on the reduct is computed from the predicate on the redex. An additional rewrite system on point-free functions is used to normalize the predicates that are built up along rewrite chains. We implemented our rewrite systems in a type-safe way in the functional programming language Haskell. We demonstrate their application to constraint-aware hierarchical-relational mappings.
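The paper's rewrite systems are implemented in Haskell with point-free predicates. As a rough illustration only, one classic refinement step, a finite map refined to a list of key/value pairs, can be sketched as follows, with the lost structure reintroduced as an explicit constraint on the target representation:

```python
# Sketch of one constraint-aware refinement step: a finite map
# (dict) is refined to a list of key/value pairs. The structure
# lost in the refinement (key uniqueness) must be reintroduced
# as an explicit constraint on the lower-level representation.
def to_pairs(m):
    """Representation function: map -> list of pairs."""
    return list(m.items())

def from_pairs(pairs):
    """Abstraction function: list of pairs -> map."""
    return dict(pairs)

def keys_unique(pairs):
    """Constraint introduced by the refinement step."""
    keys = [k for k, _ in pairs]
    return len(keys) == len(set(keys))

m = {"a": 1, "b": 2}
pairs = to_pairs(m)
# Round-tripping is only semantics-preserving while the
# introduced constraint holds on the refined data.
assert keys_unique(pairs) and from_pairs(pairs) == m
```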
Formal Methods | 2005
Tiago L. Alves; Paulo Silva; Joost Visser; José Nuno Fonseca Oliveira
We constructed a tool, called VooDooM, which converts datatypes in VDM-SL into SQL relational data models. The conversion involves transformation of algebraic types to maps and products, and pointer introduction. The conversion is specified as a theory of refinement by calculation. The implementation technology is strategic term rewriting in Haskell, as supported by the Strafunski bundle. Due to these choices of theory and technology, the road from theory to practice is straightforward.
Source Code Analysis and Manipulation | 2011
Tiago L. Alves; Jurriaan Hage; Peter Rademaker
When analyzing software systems we face the challenge of how to implement a particular analysis for different programming languages. A solution for this problem is to write a single analysis using a code query language, abstracting from the specificities of languages being analyzed. Over the past ten years many code query technologies have been developed, based on different formalisms. Each technology comes with its own query language and set of features. To determine the state of the art of code querying we compare the languages and tools for seven code query technologies: Grok, Rscript, JRelCal, Semmle Code, JGraLab, CrocoPat and JTransformer. The specification of a package stability metric is used as a running example to compare the languages. The comparison involves twelve criteria, some of which are concerned with properties of the query language (paradigm, types, parametrization, polymorphism, modularity, and libraries), and some of which are concerned with the tool itself (output formats, interactive interface, API support, interchange formats, extraction support, and licensing). We contextualize the criteria in two usage scenarios: interactive and tool integration. We conclude that there is no particularly weak or dominant tool. As important improvement points, we identify the lack of library mechanisms, interchange formats, and possibilities for integration with source code extractors.
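The running example, a package stability metric, can be sketched with Martin's instability formula I = Ce / (Ca + Ce), where Ce counts efferent (outgoing) and Ca afferent (incoming) dependencies. The dependency graph and package names below are made up for illustration:

```python
# Martin's package instability metric I = Ce / (Ca + Ce),
# computed from a dependency graph given as
# {package: set of packages it depends on}.
def instability(deps, pkg):
    ce = len(deps.get(pkg, set()))                      # efferent: outgoing
    ca = sum(1 for _, ds in deps.items() if pkg in ds)  # afferent: incoming
    return ce / (ca + ce) if ca + ce else 0.0

deps = {"ui": {"core", "util"}, "core": {"util"}, "util": set()}
print(instability(deps, "util"))  # 0.0: depended upon, maximally stable
print(instability(deps, "ui"))    # 1.0: only depends on others, unstable
```

A code query language would express the same computation declaratively over an extracted dependency relation instead of an in-memory dict.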
Source Code Analysis and Manipulation | 2009
Tiago L. Alves; Joost Visser
Test coverage is an important indicator for unit test quality. Tools such as Clover compute coverage by first instrumenting the code with logging functionality, and then logging which parts are executed during unit test runs. Since computation of test coverage is a dynamic analysis, it presupposes a working installation of the software. In the context of software quality assessment by an independent third party, a working installation is often not available. The evaluator may not have access to the required libraries or hardware platform, and the installation procedure may not be automated or documented. In this paper, we propose a technique for estimating test coverage at method level through static analysis only. The technique uses slicing of static call graphs to estimate the dynamic test coverage. We explain the technique and its implementation. We validate the results of the static estimation by statistical comparison to values obtained through dynamic analysis using Clover. We found a high correlation between static coverage estimation and real coverage at the system level, but closer analysis at the package and class levels reveals opportunities for further improvement.
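The estimation idea can be sketched as reachability over the static call graph starting from the test methods. This is a simplified illustration, not the paper's slicing algorithm, and the method names are invented:

```python
# Static coverage estimation sketch: a method is counted as
# covered if it is reachable from some test method in the
# static call graph {method: set of callees}.
def estimate_coverage(call_graph, test_methods, production_methods):
    reached, stack = set(), list(test_methods)
    while stack:  # depth-first reachability
        m = stack.pop()
        if m in reached:
            continue
        reached.add(m)
        stack.extend(call_graph.get(m, ()))
    covered = reached & set(production_methods)
    return len(covered) / len(production_methods)

graph = {
    "testAdd": {"add"},
    "testAll": {"add", "mul"},
    "mul": {"add"},
    "add": set(),
    "div": set(),  # never called from a test: estimated uncovered
}
print(estimate_coverage(graph, ["testAdd", "testAll"],
                        ["add", "mul", "div"]))  # 2/3
```

Because static call graphs over-approximate dynamic calls (and miss dynamic dispatch that cannot be resolved), the estimate can deviate from measured coverage in either direction.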
Advances in Intelligent Systems and Computing | 2017
Tiago L. Alves; Rúben Rodrigues; Hugo Costa; Miguel Rocha
Biomedical literature comprises an ever-increasing number of publications in natural language. Patents are a relevant fraction of those, and they are important sources of information due to all the curated data from the granting process. However, their unstructured data makes searching for information a challenging task. To overcome this, biomedical text mining (BioTM) provides methodologies to search and structure that data. Several BioTM techniques can be applied to patents. Among these, Information Retrieval is the process by which relevant data is obtained from collections of documents. In this work, a patent pipeline was developed and integrated into @Note2, an open-source computational framework for BioTM. This integration allows further BioTM tools to be run over the patent documents, including Information Extraction processes such as Named Entity Recognition or Relation Extraction.
International Conference on Software Maintenance | 2010
Tiago L. Alves
The software life-cycle of applications supporting space missions follows a rigorous process in order to ensure the application's compliance with all the specified requirements. Ensuring the correct behavior of the application is critical, since an error can ultimately lead to the loss of a complete space mission. However, it is not only important to ensure the correct behavior of the application, but also to achieve good product quality, since the applications need to be maintained for several years. The question then arises: is a rigorous process enough to guarantee good product maintainability? In this paper we assess the software product maintainability of two simulators used to support space missions. The assessment is done using both a standardized analysis, using the SIG quality model for maintainability, and a customized copyright-license analysis. The assessment results revealed several quality problems, leading to three lessons. First, rigorous process requirements by themselves do not ensure product quality. Second, quality models can be used not only to pinpoint code problems but also to reveal team issues. Finally, tailored analyses, complementing quality models, are necessary for in-depth investigation of quality.