
Publication


Featured research published by Aline Paes.


Machine Learning | 2009

Using the bottom clause and mode declarations in FOL theory revision from examples

Ana Luísa Duboc; Aline Paes; Gerson Zaverucha

Theory revision systems are designed to improve the accuracy of an initial theory, producing more accurate and comprehensible theories than purely inductive methods. Such systems search for points where examples are misclassified and modify them using revision operators. This includes trying to add antecedents to clauses, usually following a top-down approach that considers all the literals of the knowledge base. Such an approach leads to a huge search space, which dominates the cost of the revision process. ILP Mode Directed Inverse Entailment systems restrict the search for antecedents to the literals of the bottom clause. In this work, the bottom clause and mode declarations are introduced into a first-order logic theory revision system, aiming to improve the efficiency of the antecedent addition operation and, consequently, of the whole revision process. Experimental results compared to the FORTE revision system show that the revision process is on average 55 times faster and generates more comprehensible theories, without significantly decreasing the accuracy obtained by the original revision process. Moreover, the results show that when the initial theory is approximately correct, it is more efficient to revise it than to learn from scratch, obtaining significantly better accuracies. They also show that using the proposed theory revision system to induce theories from scratch is faster and generates more compact theories than inducing the theory with a traditional ILP system, while obtaining competitive accuracies.
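
The key efficiency idea above is to draw candidate antecedents only from the bottom clause built for a misclassified example, rather than from every literal in the knowledge base. Below is a minimal illustrative sketch of that restriction (hypothetical Python, not the paper's implementation; a real system would also check mode declarations and variable linkage):

# Illustrative sketch: restrict the antecedents considered when specializing
# a clause to the literals of the bottom clause, instead of the whole KB.
def candidate_antecedents(clause, bottom_clause, knowledge_base, use_bottom=True):
    """Return literals that may be added to `clause` as new antecedents."""
    pool = bottom_clause if use_bottom else knowledge_base
    return [lit for lit in pool if lit not in clause]

knowledge_base = {f"p{i}(X)" for i in range(10_000)}        # huge literal space
bottom_clause = {"father(X,Y)", "parent(X,Y)", "male(X)"}    # built from one example
clause = {"male(X)"}

print(len(candidate_antecedents(clause, bottom_clause, knowledge_base)))         # 2
print(len(candidate_antecedents(clause, bottom_clause, knowledge_base, False)))  # 10000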


Inductive Logic Programming | 2007

ILP Through Propositionalization and Stochastic k-Term DNF Learning

Aline Paes; Filip Železný; Gerson Zaverucha; David C. Page; Ashwin Srinivasan

One promising family of search strategies to alleviate runtime and storage requirements of ILP systems is that of stochastic local search methods, which have been successfully applied to hard propositional tasks such as satisfiability. Stochastic local search algorithms for propositional satisfiability benefit from the ability to quickly test whether a truth assignment satisfies a formula. Because of that many possible solutions can be tested and scored in a short time. In contrast, testing whether a clause covers an example in ILP takes much longer, so that far fewer possible solutions can be tested in the same time. Therefore in this paper we investigate stochastic local search in ILP using a relational propositionalized problem instead of directly use the first-order clauses space of solutions.
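
As an illustrative sketch (not the authors' system), the following toy Python shows the kind of stochastic local search the abstract refers to, run over a propositionalized boolean-vector dataset, where scoring a candidate k-term DNF is a fast propositional test:

import random

def covers(dnf, x):
    # dnf: list of terms; a term is a dict {feature_index: required_value}
    return any(all(x[i] == v for i, v in term.items()) for term in dnf)

def score(dnf, data):
    # fast propositional test: count correctly classified boolean vectors
    return sum(covers(dnf, x) == y for x, y in data)

def sls_k_term_dnf(data, k, n_features, steps=1000, seed=0):
    rng = random.Random(seed)
    dnf = [dict() for _ in range(k)]            # k empty (all-covering) terms
    best = score(dnf, data)
    for _ in range(steps):
        term = rng.choice(dnf)
        feat = rng.randrange(n_features)
        old = term.get(feat)
        term[feat] = rng.choice([True, False])  # local move: set/flip one literal
        new = score(dnf, data)
        if new >= best:
            best = new                          # keep the move
        elif old is None:
            del term[feat]                      # otherwise undo the move
        else:
            term[feat] = old
    return dnf, best

# toy usage: try to learn y = (x0 and x1) or (not x2)
data = [((a, b, c), (a and b) or (not c))
        for a in (0, 1) for b in (0, 1) for c in (0, 1)]
print(sls_k_term_dnf(data, k=2, n_features=3))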


Inductive Logic Programming | 2007

Revising first-order logic theories from examples through stochastic local search

Aline Paes; Gerson Zaverucha; Vítor Santos Costa

First-Order Theory Revision from Examples is the process of improving user-defined or automatically generated First-Order Logic (FOL) theories, given a set of examples. So far, the usefulness of theory revision systems has been limited by the cost of searching the huge search spaces they generate. This is a general difficulty when learning FOL theories, but recent work showed that Stochastic Local Search (SLS) techniques may be effective, at least when learning FOL theories from scratch. Motivated by these results, we propose novel SLS-based search strategies for First-Order Theory Revision from Examples. Experimental results show that introducing stochastic search significantly speeds up runtime performance and improves accuracy.


Inductive Logic Programming | 2008

Using the Bottom Clause and Mode Declarations on FOL Theory Revision from Examples

Ana Luísa Duboc; Aline Paes; Gerson Zaverucha

Theory revision systems are designed to improve the accuracy of an initial theory, producing more accurate and comprehensible theories than purely inductive methods. Such systems search for points where examples are misclassified and modify them using revision operators. This includes trying to add antecedents to clauses, usually generated in a top-down approach that considers all the literals of the knowledge base. This leads to a huge search space, which dominates the cost of the revision process. ILP Mode Directed Inverse Entailment systems restrict the search for antecedents to the literals of the bottom clause. In this work, the bottom clause and mode declarations are introduced to improve the efficiency of antecedent addition in theory revision. Experimental results compared to the FORTE revision system show that the revision process is on average three orders of magnitude faster and generates more comprehensible theories, without decreasing accuracy. Moreover, the proposed theory revision approach significantly improves predictive accuracy over theories generated by the Aleph system.


Inductive Logic Programming | 2009

Chess revision: acquiring the rules of chess variants through FOL theory revision from examples

Stephen Muggleton; Aline Paes; Vítor Santos Costa; Gerson Zaverucha

The game of chess has been a major testbed for research in artificial intelligence, since it requires focused, intelligent reasoning. In particular, several challenges arise for machine learning systems when inducing a model describing the legal moves of chess, including collecting the examples, learning a model that correctly represents the official rules of the game and covers all the branches and restrictions of the correct moves, and ensuring the comprehensibility of such a model. Besides, the game of chess has inspired the creation of numerous variants, ranging from faster to more challenging to regional versions of the game. The question arises whether it is possible to take advantage of an initial classifier of chess as a starting point to obtain classifiers for the different variants. We approach this problem as an instance of theory revision from examples. The initial classifier of chess is inspired by a FOL theory approved by a chess expert, and the examples are defined as sequences of moves within a game. Starting from a standard revision system, we argue that abduction and negation are also required to best address this problem. Experimental results show the effectiveness of our approach.


Brazilian Conference on Intelligent Systems | 2013

Terminology Learning through Taxonomy Discovery

Raphael Melo; Kate Revoredo; Aline Paes

Description Logics based languages have emerged as the standard knowledge representation scheme for ontologies. Typically, an ontology formalizes a number of dependent and related concepts in a domain, encompassed as a terminology. As defining such terminologies manually is a complex, time-consuming and error-prone task, there is great interest in, and even demand for, methods that learn terminologies automatically. Learning a terminology in Description Logics amounts to learning several related concepts. This process would greatly benefit from an ordering that determines which concept should be learned before another: arguably, such an order would yield rich and readable terminologies, as previously learned, interrelated concepts could be used to induce the descriptions of further concepts. In this work, we contribute a formal definition of the concept and terminology learning problems, and from these definitions we devise an algorithm that finds, through concept taxonomy discovery, an ordering to be followed when learning several related concepts. We show through an experiment that, by following the order detected by the algorithm, we obtain a more readable terminology than methods that do not follow such an order or do not learn concepts in a dependent way.
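
A minimal sketch of the ordering idea, under the assumption that taxonomy discovery yields, for each concept, the concepts its definition depends on (toy Python with hypothetical concept names, not the paper's algorithm):

# Illustrative sketch: once a taxonomy tells us which concepts a concept
# depends on, a topological sort gives a learning order in which each concept
# can reuse the definitions of concepts learned before it.
from graphlib import TopologicalSorter  # Python 3.9+

# concept -> concepts its definition is expected to refer to (assumed example)
taxonomy = {
    "Professor": {"Employee", "Person"},
    "Employee": {"Person"},
    "Student": {"Person"},
    "Person": set(),
}

learning_order = list(TopologicalSorter(taxonomy).static_order())
print(learning_order)   # e.g. ['Person', 'Employee', 'Student', 'Professor']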


International Conference on Software Engineering | 2017

Supporting defect causal analysis in practice with cross-company data on causes of requirements engineering problems

Marcos Kalinowski; Pablo Curty; Aline Paes; Alexandre Ferreira; Rodrigo O. Spínola; Daniel Méndez Fernández; Michael Felderer; Stefan Wagner

[Context] Defect Causal Analysis (DCA) represents an efficient practice to improve software processes. While knowledge on cause-effect relations is helpful to support DCA, collecting cause-effect data may require significant effort and time. [Goal] We propose and evaluate a new DCA approach that uses cross-company data to support the practical application of DCA. [Method] We collected cross-company data on causes of requirements engineering problems from 74 Brazilian organizations and built a Bayesian network. Our DCA approach uses the diagnostic inference of the Bayesian network to support DCA sessions. We evaluated our approach by applying a model for technology transfer to industry and conducted three consecutive evaluations: (i) in academia, (ii) with industry representatives of the Fraunhofer Project Center at UFBA, and (iii) in an industrial case study at the Brazilian National Development Bank (BNDES). [Results] We received positive feedback in all three evaluations, and the cross-company data was considered helpful for determining main causes. [Conclusions] Our results strengthen our confidence that supporting DCA with cross-company data is promising and should be further investigated.
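
As a toy illustration of what diagnostic inference provides in a DCA session (hand-rolled Bayes rule with made-up numbers, not the paper's cross-company Bayesian network), one can rank candidate causes by their posterior probability given an observed problem:

# Illustrative sketch: P(cause | observed problem) ranks candidate causes.
priors = {"unclear requirements": 0.30,      # P(cause) - assumed values
          "lack of domain knowledge": 0.20,
          "communication failure": 0.50}

likelihood = {"unclear requirements": 0.70,  # P(problem | cause) - assumed values
              "lack of domain knowledge": 0.40,
              "communication failure": 0.20}

evidence = sum(priors[c] * likelihood[c] for c in priors)         # P(problem)
posterior = {c: priors[c] * likelihood[c] / evidence for c in priors}

for cause, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"P({cause} | problem observed) = {p:.2f}")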


Machine Learning | 2017

On the use of stochastic local search techniques to revise first-order logic theories from examples

Aline Paes; Gerson Zaverucha; Vítor Santos Costa

Theory Revision from Examples is the process of repairing incorrect theories and/or improving incomplete theories from a set of examples. This process usually results in more accurate and comprehensible theories than purely inductive learning. However, so far, progress on the use of theory revision techniques has been limited by the large search space they yield. In this article, we argue that it is possible to reduce the search space of a theory revision system by introducing stochastic local search. More precisely, we introduce a number of stochastic local search components at the key steps of the revision process and implement them in a state-of-the-art revision system that uses the most specific clause to constrain the search space. We show that with these SLS techniques the revision system can be executed in feasible time while still improving the initial theory, in a number of cases even reaching better accuracies than the deterministic revision process. Moreover, in some cases the revision process is faster and still achieves better accuracies than an ILP system learning from an empty initial hypothesis or assuming the initial theory to be correct.
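
A minimal sketch (not the system's code) of the kind of stochastic component described above, inserted at one key step of the revision loop: with some walk probability take a random candidate revision, otherwise take the best-scoring one:

import random

def pick_revision(candidates, score, p_walk=0.3, rng=random):
    """candidates: list of candidate revisions; score: revision -> float."""
    if rng.random() < p_walk:
        return rng.choice(candidates)   # random-walk move: escape local optima
    return max(candidates, key=score)   # greedy move: best candidate

# hypothetical usage: candidate revisions and their evaluation scores
revisions = ["add_antecedent", "delete_antecedent", "add_clause", "delete_clause"]
scores = {"add_antecedent": 0.72, "delete_antecedent": 0.65,
          "add_clause": 0.70, "delete_clause": 0.61}
print(pick_revision(revisions, scores.get))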


Ibero-American Conference on AI | 2006

PFORTE: revising probabilistic FOL theories

Aline Paes; Kate Revoredo; Gerson Zaverucha; Vítor Santos Costa

There has been significant recent progress in the integration of probabilistic reasoning with first-order logic representations (SRL). So far, the learning algorithms developed for these models all learn from scratch, assuming invariant background knowledge. As an alternative, theory revision techniques have been shown to perform well on a variety of machine learning problems. These techniques start from an approximate initial theory and apply modifications at the places that performed badly in classification. In this work we describe the first revision system for SRL classification, PFORTE, which addresses two problems: all examples must be classified, and they must be classified well. PFORTE uses a two-step approach: the completeness component uses generalization operators to address failed proofs, and the classification component addresses classification problems using generalization and specialization operators. Experimental results show significant benefits from using theory revision techniques compared to learning from scratch.


Archive | 2018

Lightweight Neural Programming: The GRPU

Felipe Carregosa; Aline Paes; Gerson Zaverucha

Deep Learning techniques have achieved impressive results over the last few years. However, they still have difficulty producing understandable results that clearly show the logic embedded in the inductive process. One step in this direction is the recent development of neural differentiable programmers. In this paper, we design a neural programmer that can be easily integrated into existing deep learning architectures, with a number of parameters similar to a single commonly used Recurrent Neural Network (RNN). Tests conducted with the proposal suggest that it has the potential to induce algorithms even without any special optimization, achieving competitive results on problems handled by more complex RNN architectures.

Collaboration


Dive into Aline Paes's collaborations.

Top Co-Authors

Gerson Zaverucha (Federal University of Rio de Janeiro)
Daniel de Oliveira (Federal Fluminense University)
Kate Revoredo (Federal University of Rio de Janeiro)
Esteban Clua (Federal Fluminense University)
Ana Luísa Duboc (Federal University of Rio de Janeiro)
David B. Carvalho (Federal Fluminense University)
Rainier Sales (Federal Fluminense University)
Paulo Mann (Federal Fluminense University)
Raphael Melo (Universidade Federal do Estado do Rio de Janeiro)