A.S. d'Avila Garcez
City University London
Publications
Featured research published by A.S. d'Avila Garcez.
Artificial Intelligence | 2001
A.S. d'Avila Garcez; Krysia Broda; Dov M. Gabbay
Although neural networks have shown very good performance in many application domains, one of their main drawbacks is their inability to explain their underlying reasoning mechanisms. The "explanation capability" of neural networks can be achieved by the extraction of symbolic knowledge. In this paper, we present a new method of extraction that captures nonmonotonic rules encoded in the network, and prove that the method is sound. We start by discussing some of the main problems of knowledge extraction methods, and how these problems may be ameliorated. To this end, a partial ordering on the set of input vectors of a network is defined, as well as a number of pruning and simplification rules. The pruning rules are used to reduce the search space of the extraction algorithm during a pedagogical extraction, whereas the simplification rules are used to reduce the size of the extracted set of rules. We show that, in the case of regular networks, the extraction algorithm is sound and complete. We then extend the extraction algorithm to the class of non-regular networks, the general case. We show that non-regular networks always contain regularities in their subnetworks, so the extraction method for regular networks can still be applied, now in a decompositional fashion. To combine the sets of rules extracted from each subnetwork into the final set of rules, we use a method that preserves the soundness of the extraction algorithm. Finally, we present the results of an empirical analysis of the extraction system on traditional examples and real-world application problems. The results show that very high fidelity between the extracted rule set and the network can be achieved.
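The pedagogical extraction step described above can be illustrated with a minimal sketch (all names here are hypothetical, not the paper's): the trained network is treated as a black box, queried over Boolean input vectors, and each positive response yields a candidate rule. The paper's partial ordering and pruning rules exist precisely to avoid the exhaustive enumeration this toy performs.

```python
from itertools import product

def pedagogical_extract(net, n_inputs):
    """Query a trained network as a black box over all Boolean input
    vectors; each input the network accepts becomes a candidate rule.
    Exponential in n_inputs -- the paper's pruning rules cut this space."""
    rules = []
    for bits in product([0, 1], repeat=n_inputs):
        if net(bits):
            rules.append(bits)
    return rules

# Toy "network": outputs True exactly when x0 holds and x2 does not.
net = lambda x: x[0] == 1 and x[2] == 0
print(pedagogical_extract(net, 3))  # [(1, 0, 0), (1, 1, 0)]
```

The simplification rules would then merge these two vectors into the single rule "x0 and not x2", since x1 is irrelevant.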
IEE Proceedings - Software | 2003
A.S. d'Avila Garcez; Alessandra Russo; Bashar Nuseibeh; Jeff Kramer
The development of requirements specifications inevitably involves modification and evolution. To support modification while preserving particular requirements goals and properties, the use of a cycle composed of two phases, analysis and revision, is proposed. In the analysis phase, a desirable property of the system is checked against a partial specification. Should the property be violated, diagnostic information is provided. In the revision phase, the diagnostic information is used to help modify the specification so that the new specification no longer violates the original property. An instance of the above analysis-revision cycle, which combines new techniques of logical abduction and inductive learning to analyse and revise specifications respectively, is investigated. More specifically, given an (event-based) system description and a system property, abductive reasoning is applied in refutation mode to verify whether the description satisfies the property and, if it does not, to identify diagnostic information in the form of a set of examples of property violation. These counterexamples are then used to generate a corresponding set of examples of system behaviours that should be covered by the system description. Finally, such examples are used as training examples for inductive learning, changing the system description to resolve the property violation. This is accomplished with the connectionist inductive learning and logic programming system, a hybrid system based on neural networks and the backpropagation learning algorithm. A case study of an automobile cruise control system illustrates the approach and provides some early validation of its capabilities.
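The overall shape of the analysis-revision cycle can be sketched as a loop; the sketch below is purely illustrative (all names hypothetical), with trivial set-based stand-ins where the paper uses abductive verification and inductive learning.

```python
def analysis_revision_cycle(spec, verify, revise, max_iters=10):
    """Skeleton of an analysis-revision cycle (hypothetical names).
    `verify` stands in for the abductive analysis phase and returns a
    set of counterexamples, empty if the property holds; `revise`
    stands in for the inductive revision phase."""
    for _ in range(max_iters):
        counterexamples = verify(spec)
        if not counterexamples:
            return spec  # property holds: the cycle terminates
        spec = revise(spec, counterexamples)
    raise RuntimeError("no compliant specification found")

# Toy stand-ins: a "specification" is a set of allowed behaviours,
# and the property forbids one of them (cruise-control flavoured).
forbidden = {"accelerate_while_braking"}
verify = lambda spec: spec & forbidden
revise = lambda spec, cex: spec - cex
print(analysis_revision_cycle({"maintain_speed", "accelerate_while_braking"},
                              verify, revise))  # {'maintain_speed'}
```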
international conference on neural information processing | 2002
A.S. d'Avila Garcez; Luís C. Lamb; Dov M. Gabbay
Neural-symbolic integration has become a very active research area in the last decade. In this paper, we present a new massively parallel model for modal logic. We do so by extending the language of Modal Prolog to allow modal operators in the head of clauses. We then use an ensemble of C-IL²P neural networks to encode the extended modal theory (and its relations), and show that the ensemble computes a fixpoint semantics of the extended theory. An immediate result of our approach is the ability to perform learning from examples efficiently using each network of the ensemble. Therefore, one can adapt the extended C-IL²P system by training possible-world representations.
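The fixpoint computation mentioned above can be sketched symbolically: each network implements one application of a program's immediate-consequence operator, and recurrently feeding outputs back as inputs iterates it to a fixpoint. The toy below is propositional and ignores the modal extension entirely (all names hypothetical).

```python
def fixpoint(tp, interpretation=frozenset()):
    """Iterate an immediate-consequence operator (one recurrent pass
    through the network) until the interpretation stops changing."""
    while True:
        nxt = tp(interpretation)
        if nxt == interpretation:
            return interpretation
        interpretation = nxt

# Toy program:  a.   b <- a.   c <- b.
def tp(interp):
    out = {"a"}
    if "a" in interp:
        out.add("b")
    if "b" in interp:
        out.add("c")
    return frozenset(out)

print(sorted(fixpoint(tp)))  # ['a', 'b', 'c']
```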
automated software engineering | 2001
A.S. d'Avila Garcez; Alessandra Russo; Bashar Nuseibeh; J. Kramer
We argue that the evolution of requirements specifications can be supported by a cycle composed of two phases: analysis and revision. We investigate an instance of such a cycle, which combines the techniques of logical abduction and inductive learning to analyze and revise specifications, respectively.
international joint conference on artificial intelligence | 2011
H.L.H. de Penning; A.S. d'Avila Garcez; Luís C. Lamb; John-Jules Ch. Meyer
In real-world applications, the effective integration of learning and reasoning in a cognitive agent model is a difficult task. However, such integration may lead to a better understanding, use and construction of more realistic models. Unfortunately, existing models are either oversimplified or require much processing time, which is unsuitable for online learning and reasoning. Currently, controlled environments like training simulators do not effectively integrate learning and reasoning. In particular, higher-order concepts and cognitive abilities have many unknown temporal relations with the data, making it impossible to represent such relationships by hand. We introduce a novel cognitive agent model and architecture for online learning and reasoning that seeks to effectively represent, learn and reason in complex training environments. The agent architecture of the model combines neural learning with symbolic knowledge representation. It is capable of learning new hypotheses from observed data and of inferring new beliefs based on these hypotheses. Furthermore, it deals with uncertainty and errors in the data using a Bayesian inference model. The validation of the model on real-time simulations and the results presented here indicate the promise of the approach when performing online learning and reasoning in real-world scenarios, with possible applications in a range of areas.
international symposium on neural networks | 2012
A.S. d'Avila Garcez; Gerson Zaverucha
Multi-instance learning is an increasingly important area in machine learning. In multi-instance learning, the training set is structured into subsets (or bags) of instances. The bags are labelled, but the label of each instance is unknown or irrelevant. In this paper, we revisit the connectionist approach to multi-instance learning and propose a recurrent neural network model for it. We have applied the new model to a benchmark multi-instance dataset. The results provide evidence that connectionist multi-instance learning is more promising than previously anticipated. We argue that a principled connectionist approach should provide robust and efficient multi-instance learning, although comparative results should be taken with caution owing to varying methodologies.
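The bag/instance structure described above can be made concrete with a small sketch of the standard multi-instance assumption: a bag is positive iff at least one of its instances is. This illustrates only the problem setting, not the paper's recurrent model, and all names are hypothetical.

```python
def bag_label(instances, score):
    """Standard multi-instance assumption: a bag is positive iff at
    least one instance scores positive (max pooling over the bag).
    `score` is any instance-level classifier returning a value in [0, 1]."""
    return max(score(x) for x in instances) >= 0.5

# Toy instance scorer: "positive" when the first feature exceeds 0.5.
score = lambda x: x[0]
print(bag_label([(0.1,), (0.9,), (0.2,)], score))  # True: one positive instance
print(bag_label([(0.1,), (0.2,)], score))          # False: no positive instance
```

Note that only the bag-level label is supervised here; which instance triggered a positive bag remains unknown, which is exactly what makes the learning problem hard.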
international symposium on neural networks | 1997
A.S. d'Avila Garcez; Gerson Zaverucha; V.N.A.L. da Silva
The connectionist inductive learning and logic programming system, C-IL²P, integrates the symbolic and connectionist paradigms of artificial intelligence through neural networks that perform massively parallel logic programming and inductive learning from examples and background knowledge. This work presents an extension of C-IL²P that allows the implementation of extended logic programs in neural networks. This extension makes C-IL²P applicable to problems where the background knowledge is represented in a default logic. As a case example, we have applied the system to fault diagnosis of a simplified power system generation plant, obtaining good preliminary results.
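The clause-to-neuron translation at the heart of C-IL²P can be sketched as follows. This is a simplified, hypothetical rendering with ad-hoc weight and bias values rather than the paper's derived bounds: each program clause becomes a hidden neuron whose bias ensures it fires only when every body literal holds.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def clause_neuron(pos, neg, W=5.0):
    """Build a neuron for the clause  head <- pos, not neg.
    Inputs are bipolar (+1 = true, -1 = false). With k body literals,
    a satisfied body yields net input W*k; flipping any literal drops
    it to at most W*(k-2), so bias -W*(k-1) makes the neuron active
    (output > 0.5) exactly when the whole body holds."""
    k = len(pos) + len(neg)
    bias = -W * (k - 1)
    def neuron(valuation):
        net = (sum(W * valuation[a] for a in pos)
               - sum(W * valuation[a] for a in neg)
               + bias)
        return sigmoid(net)
    return neuron

# Clause:  a <- b, not c.
n = clause_neuron(pos=["b"], neg=["c"])
print(n({"b": 1, "c": -1}) > 0.5)  # True: body satisfied
print(n({"b": 1, "c": 1}) > 0.5)   # False: c being true blocks the clause
```

In the full system such neurons are wired into a recurrent network so that heads feed back into bodies, and backpropagation can then refine the weights from training examples.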
international symposium on neural networks | 2002
A.S. d'Avila Garcez
This paper shows that single hidden layer networks with semi-linear activation function compute the answer set semantics of extended logic programs. As a result, incomplete (nonmonotonic) theories, presented as extended logic programs, i.e., possibly containing both classical and default negations, may be refined through inductive learning in knowledge-based neural networks.
european conference on artificial intelligence | 2004
Dov M. Gabbay; A.S. d'Avila Garcez; L.C. Lamb
international symposium on neural networks | 2007
Rafael V. Borges; Luís C. Lamb; A.S. d'Avila Garcez