Gavin Rens
University of KwaZulu-Natal
Publication
Featured research published by Gavin Rens.
Journal of Applied Logic | 2014
Gavin Rens; Thomas Meyer; Gerhard Lakemeyer
A logic for specifying probabilistic transition systems is presented. Our perspective is that of agents performing actions. A procedure for deciding whether sentences in this logic are valid is provided. One of the main contributions of the paper is the formulation of the decision procedure: a tableau system which appeals to solving systems of linear equations. The tableau rules eliminate propositional connectives, then, for all open branches of the tableau tree, systems of linear equations are generated and checked for feasibility. Proofs of soundness, completeness and termination of the decision procedure are provided.
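The feasibility check at the leaves of the tableau can be pictured with an off-the-shelf LP solver. The sketch below is illustrative only; the constraints are hypothetical and not taken from the paper's rule set, the point being simply that an open branch survives iff its generated linear system has a solution.

```python
# A minimal sketch (not the paper's actual rule set) of the final step of the
# decision procedure: an open tableau branch yields a system of linear
# constraints over unknown transition probabilities, and the branch is closed
# iff that system is infeasible. Here feasibility is checked with an LP solver.
import numpy as np
from scipy.optimize import linprog

# Unknowns p1, p2, p3: probabilities of three outcomes of some action.
# Hypothetical branch constraints:
#   p1 + p2 + p3 = 1      (probabilities sum to one)
#   p1           = 0.4    (asserted by a formula on the branch)
#   p2 - p3      = 0      (two outcomes asserted equally likely)
A_eq = np.array([[1.0, 1.0, 1.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 1.0, -1.0]])
b_eq = np.array([1.0, 0.4, 0.0])

# Zero objective: we only care whether a feasible point exists.
res = linprog(c=np.zeros(3), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, 1.0)] * 3, method="highs")

print("branch open (system feasible):", res.success)  # True: p = (0.4, 0.3, 0.3)
```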
Australasian Joint Conference on Artificial Intelligence | 2010
Gavin Rens; Ivan José Varzinczak; Thomas Meyer; Alexander Ferrein
We propose a formalism for reasoning about actions based on multi-modal logic which allows for expressing observations as first-class objects. We introduce a new modal operator, namely [o |α], which allows us to capture the notion of perceiving an observation given that an action has taken place. Formulae of the type [o |α]ϕ mean ‘after perceiving observation o, given α was performed, necessarily ϕ’. In this paper, we focus on the challenges concerning sensing with explicit observations, and acting with nondeterministic effects. We present the syntax and semantics, and a correct and decidable tableau calculus for the logic.
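For orientation, one plausible Kripke-style reading of the new operator is sketched below; the semantics shown is an assumption for illustration, not quoted from the paper.

```latex
% One plausible Kripke-style reading (an assumption for illustration, not
% quoted from the paper): [o|alpha]phi holds at w iff phi holds at every world
% reachable from w by performing alpha at which o can be perceived.
\[
  \mathcal{M}, w \models [o \,|\, \alpha]\,\varphi
  \quad\text{iff}\quad
  \forall w' \big( (w, w') \in R_\alpha \wedge o \in \mathit{Obs}(w', \alpha)
  \;\Rightarrow\; \mathcal{M}, w' \models \varphi \big)
\]
```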
AFRICON | 2013
Gavin Rens; Alexander Ferrein
We consider online partially observable Markov decision process (POMDP) algorithms, which compute policies by local look-ahead from the current belief-state. One problem is that belief-nodes deeper in the decision tree contain increasingly many states with non-zero probability. The time to update a belief-state is exponential in the number of states the belief contains, and a belief update occurs at every node of the search tree. It would thus pay to reduce the size of the nodes while keeping the information they contain. In this paper, we compare four fast and frugal methods for reducing the size of belief-nodes in the search tree, thereby improving the running time of online POMDP algorithms.
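To make the cost concrete, here is a minimal sketch of the standard POMDP belief update together with one simple, illustrative reduction step (keep the k most probable states and renormalise); the reduction shown is not claimed to be one of the four methods compared in the paper.

```python
# A minimal sketch of the setting: the standard POMDP belief update, followed
# by one illustrative node-reduction method. It only shows where a reduction
# step would slot in, not the paper's actual methods.
def belief_update(b, a, o, T, O):
    """b: dict state -> prob; T[a][s][s2]: transition prob; O[a][s2][o]: observation prob."""
    new_b = {}
    for s, p in b.items():
        if p == 0.0:
            continue
        for s2, pt in T[a][s].items():
            new_b[s2] = new_b.get(s2, 0.0) + p * pt * O[a][s2].get(o, 0.0)
    z = sum(new_b.values())
    return {s: p / z for s, p in new_b.items()} if z > 0 else new_b

def reduce_belief(b, k):
    """Frugal reduction: keep the k most probable states and renormalise."""
    top = sorted(b.items(), key=lambda kv: kv[1], reverse=True)[:k]
    z = sum(p for _, p in top)
    return {s: p / z for s, p in top}
```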
Foundations of Information and Knowledge Systems | 2014
Gavin Rens; Thomas Meyer; Gerhard Lakemeyer
We present a logic inspired by partially observable Markov decision process (POMDP) theory for specifying agent domains in which the agent's actuators and sensors are noisy, causing uncertainty. The language features modalities for actions and predicates for observations. It includes a notion of probability to represent the uncertainties, and the expression of rewards and costs is also catered for. One of the main contributions of the paper is the formulation of a sound and complete decision procedure for checking validity of sentences: a tableau method which appeals to solving systems of equations. The tableau rules eliminate propositional connectives; then, for all open branches of the tableau tree, systems of equations are generated and checked for feasibility. This paper presents progress made on previously published work.
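A generic picture of the feasibility check, under the assumption that the unknowns are the probabilities attached to a branch's possible transitions (the exact rules are in the paper):

```latex
% Generic form (assumed, not the paper's exact rules) of the system generated
% for an open branch: unknowns x_1,...,x_n are the probabilities attached to
% the branch's possible transitions, and each probabilistic literal on the
% branch contributes one linear constraint.
\[
  \sum_{i=1}^{n} x_i = 1, \qquad x_i \ge 0 \;\; (1 \le i \le n), \qquad
  \sum_{i \in I_\ell} x_i = q_\ell \;\;\text{for each branch literal asserting probability } q_\ell,
\]
\[
  \text{the branch remains open} \iff \text{this system is feasible.}
\]
```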
International Conference on Agents and Artificial Intelligence | 2015
Gavin Rens; Thomas Meyer
We propose an agent architecture which combines partially observable Markov decision processes (POMDPs) and the belief-desire-intention (BDI) framework to capitalize on their complementary strengths. Our architecture introduces the notion of the intensity of the desire for a goal's achievement. We also define an update rule for goals' desire levels, and specify when a new goal should be selected for the agent to focus on. To verify that the proposed architecture works, experiments were run with an agent based on the architecture in a domain where multiple goals must continually be achieved. The results show that (i) while the agent is pursuing goals, it can concurrently perform rewarding actions not directly related to its goals, (ii) the trade-off between goals and preferences can be set effectively, and (iii) goals and preferences can be satisfied even while dealing with stochastic actions and perceptions. We believe that the proposed architecture furthers the theory of high-level autonomous agent reasoning.
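The goal-management loop can be pictured roughly as below; the class names, the additive update rule and the maximum-desire focus rule are hypothetical stand-ins, not the paper's definitions.

```python
# A hypothetical sketch (names and update rule are my own, not the paper's) of
# the two ingredients described above: a per-goal desire level that rises while
# a goal is neglected and resets when it is satisfied, and a focus rule that
# picks the goal with the highest current desire level.
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    growth: float          # how fast desire intensifies while the goal is unmet
    desire: float = 0.0    # current desire level

@dataclass
class GoalManager:
    goals: list = field(default_factory=list)

    def tick(self, satisfied: set):
        """Update desire levels: reset satisfied goals, intensify the rest."""
        for g in self.goals:
            g.desire = 0.0 if g.name in satisfied else g.desire + g.growth

    def focus(self) -> Goal:
        """Select the goal to pursue next: the one desired most intensely."""
        return max(self.goals, key=lambda g: g.desire)

# Usage: on each deliberation cycle the agent reports which goals its
# observations indicate are satisfied, then re-focuses.
mgr = GoalManager([Goal("maintain-battery", 0.3), Goal("deliver-parcel", 0.1)])
mgr.tick(satisfied={"deliver-parcel"})
print(mgr.focus().name)   # "maintain-battery"
```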
International Joint Conference on Artificial Intelligence | 2011
Gavin Rens
Broadly speaking, my research concerns combining logic of action and POMDP theory in a coherent, theoretically sound language for agent programming. We have already developed a logic for specifying partially observable stochastic domains. A logic for reasoning with the models specified must still be developed. An agent programming language will then be developed and used to design controllers for robots.
Joint German/Austrian Conference on Artificial Intelligence (Künstliche Intelligenz) | 2018
Gavin Rens; Thomas Andreas Meyer; Gabriele Kern-Isberner; Abhaya C. Nayak
Similarity among worlds plays a pivotal role in providing the semantics for different kinds of belief change. Although similarity is, intuitively, a context-sensitive concept, the accounts of similarity presently proposed are, by and large, context blind. We propose an account of similarity that is context sensitive, and when belief change is concerned, we take it that the epistemic input provides the required context. We accordingly develop and examine two accounts of probabilistic belief change that are based on such evidence-sensitive similarity. The first switches between two extreme behaviors depending on whether or not the evidence in question is consistent with the current knowledge. The second gracefully changes its behavior depending on the degree to which the evidence is consistent with current knowledge. Finally, we analyze these two belief change operators with respect to a select set of plausible postulates.
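For orientation, a standard pair of "extreme" probabilistic change operations is recalled below, Bayesian conditioning and Lewis-style imaging; the paper's evidence-sensitive operators refine such extremes, and the formulas here are not claimed to be its definitions.

```latex
% A standard pair of "extreme" probabilistic change operations (for orientation
% only; not claimed to be the paper's operators): Bayesian conditioning,
% defined only when the evidence e has positive probability, and Lewis-style
% imaging, where each world shifts its mass to the e-world most similar to it
% (sigma(w', e) denotes that world).
\[
  P(w \mid e) =
  \begin{cases}
    P(w)/P(e) & \text{if } w \models e\\
    0         & \text{otherwise}
  \end{cases}
  \qquad\qquad
  P^{e}(w) = \sum_{w' \,:\, \sigma(w', e) = w} P(w')
\]
```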
Joint German/Austrian Conference on Artificial Intelligence (Künstliche Intelligenz) | 2018
Gavin Rens; Abhaya C. Nayak; Thomas Meyer
Many multi-agent systems (MASs) are situated in stochastic environments. Some such systems that are based on the partially observable Markov decision process (POMDP) do not take the benevolence of other agents for granted. We propose a new POMDP-based framework which is general enough for the specification of a variety of stochastic MAS domains involving the impact of agents on each other's reputations. A unique feature of this framework is that actions are specified as either undirected (regular) or directed (towards a particular agent), and a new directed transition function is provided for modeling the effects of reputation in interactions. Assuming that an agent must maintain a good enough reputation to survive in the network, a planning algorithm is developed for an agent to select optimal actions in stochastic MASs. A preliminary evaluation is provided via an example specification and by determining the algorithm's complexity.
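The undirected/directed split can be pictured with the hypothetical signatures below; the names and the shape of the reputation argument are assumptions for illustration, not the framework's actual definitions.

```python
# A hypothetical sketch (signatures are my own, not the framework's) of the
# distinction drawn above: undirected actions use an ordinary transition
# function over states, while directed actions also name a target agent and
# are resolved by a separate, reputation-aware transition function.
from typing import Callable, Dict, Optional

State = str
Action = str
Agent = str

# Ordinary transition function: T(s, a)[s2] = Pr(s2 | s, a)
Transition = Callable[[State, Action], Dict[State, float]]

# Directed transition function: D(s, a, target, reputation)[s2] = Pr(s2 | ...),
# where `reputation` is how the target currently regards the acting agent.
DirectedTransition = Callable[[State, Action, Agent, float], Dict[State, float]]

def next_state_dist(s: State, a: Action, T: Transition, D: DirectedTransition,
                    target: Optional[Agent] = None,
                    reputation: float = 1.0) -> Dict[State, float]:
    """Dispatch on whether the action is directed at another agent."""
    return D(s, a, target, reputation) if target is not None else T(s, a)
```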
International Conference on Agents and Artificial Intelligence | 2015
Gavin Rens; Thomas Meyer; Gerhard Lakemeyer
We present a decidable logic in which queries can be posed about (i) the degree of belief in a propositional sentence after an arbitrary finite number of actions and observations and (ii) the utility of a finite sequence of actions after a number of actions and observations. Another contribution of this work is that a POMDP model specification is allowed to be partial or incomplete, with no restriction on which information may be missing from the model. The model may even contain information about non-initial beliefs. Essentially, entailment of arbitrary queries (expressible in the language) can be answered. A sound, complete and terminating decision procedure is provided.
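Informally, and in notation of my own rather than the paper's syntax, the two query types look roughly as follows:

```latex
% Informal renderings (my notation, not the paper's syntax) of the two query
% types: a threshold on the degree of belief in a propositional sentence after
% an action/observation sequence, and a threshold on the utility of a further
% action sequence from that point.
\[
  \mathit{KB} \models B_{\ge 0.8}(\mathit{holding}) \text{ after } \langle \mathit{grab}, \mathit{obsNil} \rangle,
  \qquad
  \mathit{KB} \models U_{\ge 5}(\langle \mathit{drop} \rangle) \text{ after } \langle \mathit{grab}, \mathit{obsNil} \rangle .
\]
```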
International Conference on Agents and Artificial Intelligence | 2015
Gavin Rens
A novel algorithm to speed up online planning in partially observable Markov decision processes (POMDPs) is introduced. I propose a method for compressing nodes in belief-decision-trees while planning occurs. Whereas belief-decision-trees branch on actions and observations, with my method, they branch only on actions. This is achieved by unifying the branches required due to the nondeterminism of observations. The method is based on the expected values of domain features. The new algorithm is experimentally compared to three other online POMDP algorithms, outperforming them on the given test domain.
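One natural way to unify observation branches, not necessarily the paper's exact construction, is to replace an action's observation-children by their expectation under the observation distribution, which collapses to the pure prediction step:

```latex
% One natural way (not necessarily the paper's exact construction) to unify the
% observation branches under an action a: replace the children b_{a,o} by their
% expectation under the observation distribution, which reduces to the pure
% prediction step and so removes the observation branching entirely.
\[
  \bar{b}_a(s') \;=\; \sum_{o} \Pr(o \mid b, a)\, b_{a,o}(s')
  \;=\; \sum_{s} T(s' \mid s, a)\, b(s)
\]
```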