Publications


Featured research published by Martijn van Otterlo.


Springer US | 2014

Reinforcement Learning: State-of-the-Art

Marco Wiering; Martijn van Otterlo

Reinforcement learning encompasses both a science of adaptive behavior of rational beings in uncertain environments and a computational methodology for finding optimal behaviors for challenging problems in control, optimization and adaptive behavior of intelligent agents. As a field, reinforcement learning has progressed tremendously in the past decade. The main goal of this book is to present an up-to-date series of survey articles on the main contemporary subfields of reinforcement learning. These include surveys on partially observable environments, hierarchical task decompositions, relational knowledge representation and predictive state representations. Furthermore, topics such as transfer, evolutionary methods and continuous spaces in reinforcement learning are surveyed. In addition, several chapters review reinforcement learning methods in robotics, in games, and in computational neuroscience. In total, seventeen different subfields are presented by mostly young experts in those areas, and together they truly represent the state of the art of current reinforcement learning research. Marco Wiering works at the artificial intelligence department of the University of Groningen in the Netherlands. He has published extensively on various reinforcement learning topics. Martijn van Otterlo works in the cognitive artificial intelligence group at the Radboud University Nijmegen in the Netherlands. He has mainly focused on expressive knowledge representation in reinforcement learning settings.


ACM Transactions on Speech and Language Processing | 2011

Spatial role labeling: Towards extraction of spatial relations from natural language

Parisa Kordjamshidi; Martijn van Otterlo; Marie-Francine Moens

This article reports on the novel task of spatial role labeling in natural language text. It proposes machine learning methods to extract spatial roles and their relations. This work experiments with both a step-wise approach, in which spatial prepositions are found first and the related trajectors and landmarks are then extracted, and a joint learning approach, in which a spatial relation and its composing indicator, trajector, and landmark are classified collectively. Context-dependent learning techniques, such as a skip-chain conditional random field, yield good results on the GUM-evaluation (Maptask) data and the CLEF-IAPR TC-12 Image Benchmark. An extensive error analysis, including feature assessment, and a cross-domain evaluation pinpoint the main bottlenecks and avenues for future research.
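
As a rough illustration of the step-wise pipeline described above, the sketch below first finds a spatial indicator (a preposition) and then attaches candidate trajector and landmark tokens around it. The word lists and heuristics here are placeholder assumptions, not the paper's trained models, which use classifiers such as a skip-chain conditional random field.

```python
# Toy sketch of step-wise spatial role labeling:
# (1) find spatial indicators (prepositions),
# (2) attach a candidate trajector before and landmark after each indicator.

SPATIAL_PREPOSITIONS = {"on", "in", "under", "above", "behind", "near"}
SKIP_WORDS = {"the", "a", "an", "is"}

def extract_spatial_relations(tokens):
    """Return (trajector, indicator, landmark) triples from a token list."""
    relations = []
    for i, tok in enumerate(tokens):
        if tok.lower() in SPATIAL_PREPOSITIONS:
            # Heuristic: nearest content token before = trajector,
            # nearest content token after = landmark.
            before = [t for t in tokens[:i] if t.lower() not in SKIP_WORDS]
            after = [t for t in tokens[i + 1:] if t.lower() not in SKIP_WORDS]
            if before and after:
                relations.append((before[-1], tok, after[0]))
    return relations

print(extract_spatial_relations("The book is on the table".split()))
# [('book', 'on', 'table')]
```

A real system replaces these heuristics with learned classifiers over rich lexical and syntactic features; the sketch only shows the shape of the step-wise decomposition.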


international conference on robotics and automation | 2012

Learning relational affordance models for robots in multi-object manipulation tasks

Bogdan Moldovan; Plinio Moreno; Martijn van Otterlo; José Santos-Victor; Luc De Raedt

Affordances define the action possibilities on an object in the environment, and in robotics they play a role in basic cognitive capabilities. Previous work has focused on affordance models for just one object, even though in many scenarios affordances are defined by configurations of multiple objects that interact with each other. We employ recent advances in statistical relational learning to learn affordance models in such cases. Our models generalize over objects and can deal effectively with uncertainty. Two-object interaction models are learned from robotic interaction with the objects in the world and employed in situations with arbitrary numbers of objects. We illustrate these ideas with experimental results of an action recognition task where a robot manipulates objects on a shelf.


Archive | 2012

Reinforcement Learning and Markov Decision Processes

Martijn van Otterlo; Marco Wiering

Situated in between supervised learning and unsupervised learning, the paradigm of reinforcement learning deals with learning in sequential decision making problems in which there is limited feedback. This text introduces the intuitions and concepts behind Markov decision processes and two classes of algorithms for computing optimal behaviors: reinforcement learning and dynamic programming. First the formal framework of Markov decision processes is defined, accompanied by the definition of value functions and policies. The main part of this text deals with introducing foundational classes of algorithms for learning optimal behaviors, based on various definitions of optimality with respect to the goal of learning sequential decisions. Additionally, it surveys efficient extensions of the foundational algorithms, differing mainly in the way feedback given by the environment is used to speed up learning, and in the way they concentrate on relevant parts of the problem. For both model-based and model-free settings these efficient extensions have proven useful in scaling up to larger problems.
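
The value-function machinery this chapter introduces can be illustrated with value iteration, one of the classic dynamic programming algorithms for MDPs. The two-state model below is purely illustrative and not taken from the text:

```python
# Value iteration on a tiny illustrative two-state MDP:
#   V(s) <- max_a sum_s' P(s'|s,a) * (R(s,a,s') + gamma * V(s'))

GAMMA = 0.9

# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go":   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 2.0)],
           "go":   [(1.0, "s0", 0.0)]},
}

def value_iteration(trans, gamma=GAMMA, tol=1e-8):
    """Iterate Bellman optimality backups until the largest update is < tol."""
    V = {s: 0.0 for s in trans}
    while True:
        delta = 0.0
        for s, actions in trans.items():
            v_new = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < tol:
            break
    return V

V = value_iteration(transitions)
```

Because the backup is a contraction in the discount factor, the loop converges to the unique optimal value function; a greedy policy with respect to the result is then optimal.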


inductive logic programming | 2011

Relational learning for spatial relation extraction from natural language

Parisa Kordjamshidi; Paolo Frasconi; Martijn van Otterlo; Marie-Francine Moens; Luc De Raedt

Automatically extracting spatial information is a challenging novel task with many applications. We formalize it as an information extraction step required for a mapping from natural language to a formal spatial representation. Sentences may give rise to multiple spatial relations between words representing landmarks, trajectors and spatial indicators. Our contribution is to formulate the extraction task as a relational learning problem, for which we employ the recently introduced kLog framework. We discuss representational and modeling aspects, kLog's flexibility in our task, and we present current experimental results.


international joint conference on artificial intelligence | 2013

Machine learning for interactive systems and robots: a brief introduction

Heriberto Cuayáhuitl; Martijn van Otterlo; Nina Dethlefs; Lutz Frommberger

Research on interactive systems and robots, i.e. interactive machines that perceive, act and communicate, has applied a multitude of different machine learning frameworks in recent years, many of which are based on a form of reinforcement learning (RL). In this paper, we will provide a brief introduction to the application of machine learning techniques in interactive learning systems. We identify several dimensions along which interactive learning systems can be analyzed. We argue that while many applications of interactive machines seem different at first sight, sufficient commonalities exist in terms of the challenges faced. By identifying these commonalities between (learning) approaches, and by taking interdisciplinary approaches towards the challenges, we anticipate more effective design and development of sophisticated machines that perceive, act and communicate in complex, dynamic and uncertain environments.


international conference on robotics and automation | 2013

On the use of probabilistic relational affordance models for sequential manipulation tasks in robotics

Bogdan Moldovan; Plinio Moreno; Martijn van Otterlo

In this paper we employ probabilistic relational affordance models in a robotic manipulation task. Such affordance models capture the interdependencies between properties of multiple objects, executed actions, and effects of those actions on objects. Recently it was shown how to learn such models from observed video demonstrations of actions manipulating several objects. This paper extends that work and employs those models for sequential tasks. Our approach consists of two parts. First, we employ affordance models sequentially in order to recognize the individual actions making up a demonstrated sequential skill or high level concept. Second, we utilize the models of concepts to plan a suitable course of action to replicate the observed consequences of a demonstration. For this we adopt the framework of relational Markov decision processes. Empirical results show the viability of the affordance models for sequential manipulation skills for object placement.


Neurocomputing | 2014

There are plenty of places like home: Using relational representations in hierarchies for distance-based image understanding

Laura Antanas; Martijn van Otterlo; José Antonio Oramas Mogrovejo; Tinne Tuytelaars; Luc De Raedt

Understanding images in terms of logical and hierarchical structures is crucial for many semantic tasks, including image retrieval, scene understanding and robotic vision. This paper combines robust feature extraction, qualitative spatial relations, relational instance-based learning and compositional hierarchies in one framework. For each layer in the hierarchy, qualitative spatial structures in images are detected, classified and then employed one layer up the hierarchy to obtain higher-level semantic structures. We apply a four-layer hierarchy to street view images and subsequently detect corners, windows, doors, and individual houses.
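
The qualitative spatial relations used in the lowest layer of such a hierarchy can be pictured as predicates computed over detected bounding boxes. The sketch below is a minimal assumed version; the relation names, thresholds, and box convention are illustrative and not the paper's actual spatial calculus:

```python
# Minimal sketch: derive qualitative spatial relations between two
# axis-aligned bounding boxes (x_min, y_min, x_max, y_max), with the
# y axis pointing down as in image coordinates.

def qualitative_relations(a, b):
    """Return the set of qualitative relations that hold from box a to box b."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    rels = set()
    if ax1 <= bx0:
        rels.add("left_of")
    if ax0 >= bx1:
        rels.add("right_of")
    if ay1 <= by0:
        rels.add("above")
    if ay0 >= by1:
        rels.add("below")
    if ax0 >= bx0 and ay0 >= by0 and ax1 <= bx1 and ay1 <= by1:
        rels.add("inside")
    return rels

# A window box contained in a house-facade box:
window = (40, 30, 60, 50)
facade = (0, 0, 200, 150)
print(qualitative_relations(window, facade))  # {'inside'}
```

Symbolic predicates like these are what a relational, hierarchical pipeline can reason over: a layer detects windows and doors from such relations between low-level parts, and the next layer composes them into houses.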


Archive | 2012

Solving Relational and First-Order Logical Markov Decision Processes: A Survey

Martijn van Otterlo

In this chapter we survey representations and techniques for Markov decision processes, reinforcement learning, and dynamic programming in worlds explicitly modeled in terms of objects and relations. Such relational worlds can be found everywhere: in planning domains, games, real-world indoor scenes and many more. Relational representations allow for expressive and natural data structures that capture the objects and relations in an explicit way, enabling generalization over objects and relations, but also over similar problems which differ in the number of objects. The field was recently surveyed comprehensively in van Otterlo (2009b), and here we describe a large portion of the main approaches. We discuss model-free (both value-based and policy-based) and model-based dynamic programming techniques. Several other aspects are covered, such as models and hierarchies, and we end with several recent efforts and future directions.
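
A core idea of this line of work, generalization over object identities, can be sketched by representing a state as a set of ground atoms and a rule precondition as atoms with variables. The toy matcher below is an illustration of that representational idea only, not any particular system from the survey:

```python
# Sketch of relational state representation: states are sets of ground atoms,
# abstract rules use variables (uppercase strings) that unify with any object,
# so one rule covers all object identities.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def match(pattern, state, binding=None):
    """Yield variable bindings under which every atom in `pattern` is in `state`."""
    binding = binding or {}
    if not pattern:
        yield dict(binding)
        return
    pred, *args = pattern[0]
    for atom in state:
        if atom[0] != pred or len(atom) != len(pattern[0]):
            continue
        new = dict(binding)
        ok = True
        for t, g in zip(args, atom[1:]):
            if is_var(t):
                if new.get(t, g) != g:  # variable already bound elsewhere?
                    ok = False
                    break
                new[t] = g
            elif t != g:                # constant must match exactly
                ok = False
                break
        if ok:
            yield from match(pattern[1:], state, new)

state = {("on", "a", "b"), ("on", "b", "table"), ("clear", "a")}
# Abstract precondition of a blocks-world "move X off Y" rule:
rule = [("clear", "X"), ("on", "X", "Y")]
print(list(match(rule, state)))  # [{'X': 'a', 'Y': 'b'}]
```

The same rule applies unchanged in a state with ten blocks or a thousand, which is exactly the kind of generalization that propositional state representations cannot express compactly.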


Reinforcement Learning: State-of-the-Art | 2012

Conclusions, Future Directions and Outlook

Marco Wiering; Martijn van Otterlo

This book has provided the reader with a thorough description of the field of reinforcement learning (RL). In this last chapter we first discuss what has been accomplished with this book, followed by a description of the topics that were left out, mainly because they lie outside the main field of RL or are small (possibly novel and emerging) subfields within it. After looking back at what has been done in RL and in this book, we take a step into the future development of the field, and we end with the opinions of some of the authors on what they think will become the most important areas of research in RL.

Collaboration


Explore Martijn van Otterlo's collaborations.

Top Co-Authors (all at Katholieke Universiteit Leuven):

Luc De Raedt
Marie-Francine Moens
Laura Antanas
Bogdan Moldovan
Ingo Thon
Tinne Tuytelaars