Publication

Featured research published by Karl Tuyls.


Archive | 2012

Multi-Agent Systems

Massimo Cossentino; Michael Kaisers; Karl Tuyls; Gerhard Weiss

Reinforcement learning agents can successfully learn in a variety of difficult tasks. A fundamental problem is that they may learn slowly in complex environments, inspiring the development of speedup methods such as transfer learning. Transfer improves learning by reusing learned behaviors in similar tasks, usually via an inter-task mapping, which defines how a pair of tasks are related. This paper proposes a novel transfer learning technique that autonomously constructs an inter-task mapping using a novel combination of sparse coding, sparse projection learning, and sparse pseudo-input Gaussian processes. Experiments show successful transfer of information between two very different domains: the mountain car and the pole swing-up task. The paper empirically shows that the learned inter-task mapping can be used to successfully (1) improve the performance of a learned policy on a fixed number of samples, (2) reduce the learning time needed by the algorithms to converge to a policy on a fixed number of samples, and (3) converge faster to a near-optimal policy given a large number of samples.
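
A minimal Python sketch of this pipeline's general shape is given below, assuming paired state samples from the two tasks for simplicity (the paper constructs the correspondence autonomously); the dimensions, names, and use of scikit-learn are illustrative assumptions, not the authors' implementation.

# Hedged sketch: relate two tasks' state spaces via sparse coding plus a
# learned projection between the sparse feature spaces.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import Ridge

# Hypothetical state samples (rows = samples, assumed paired for simplicity).
source_states = np.random.randn(200, 2)  # e.g. mountain car: (position, velocity)
target_states = np.random.randn(200, 4)  # e.g. pole swing-up (assumed 4-D here)

# Learn a sparse dictionary for each task independently.
src_dict = DictionaryLearning(n_components=8, alpha=1.0).fit(source_states)
tgt_dict = DictionaryLearning(n_components=8, alpha=1.0).fit(target_states)

# Encode both sample sets in their sparse feature spaces.
src_codes = src_dict.transform(source_states)
tgt_codes = tgt_dict.transform(target_states)

# Learn a projection from target codes to source codes; mapping a target
# state into the source task is then encode -> project -> decode.
projection = Ridge(alpha=1e-2).fit(tgt_codes, src_codes)

def map_target_to_source(state):
    code = tgt_dict.transform(state.reshape(1, -1))
    return projection.predict(code) @ src_dict.components_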


Autonomous Agents and Multi-Agent Systems | 2006

An Evolutionary Dynamical Analysis of Multi-Agent Learning in Iterated Games

Karl Tuyls; Pieter Jan't Hoen; Bram Vanschoenwinkel

In this paper, we investigate reinforcement learning (RL) in multi-agent systems (MAS) from an evolutionary dynamical perspective. Typical for a MAS is that the environment is not stationary and the Markov property does not hold, which requires agents to be adaptive. RL is a natural approach to model the learning of individual agents. These learning algorithms are, however, known to be sensitive to the correct choice of parameter settings even in single-agent systems. This issue is more prevalent in the MAS case due to the changing interactions among the agents. It is largely an open question how a developer of a MAS should design the individual agents such that, through learning, the agents as a collective arrive at good solutions. We show that modeling RL in MAS from an evolutionary game theoretic point of view is a new and potentially successful way to guide learning agents to the most suitable solution for their task at hand. We show how evolutionary dynamics (ED) from evolutionary game theory can help the developer of a MAS make good choices of parameter settings for the RL algorithms used. The ED essentially predict the equilibrium outcomes of the MAS where the agents use individual RL algorithms. More specifically, we show how the ED predict the learning trajectories of Q-learners in iterated games. Moreover, we apply our results to (an extension of) the COllective INtelligence framework (COIN), a proven engineering approach for learning cooperative tasks in MAS in which the utilities of the agents are re-engineered to contribute to the global utility. We show how the improved results for MAS RL in COIN, and in a developed extension, are predicted by the ED.
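
As a concrete illustration of the kind of prediction the ED provide, the Python sketch below integrates the two-population replicator dynamics for a Prisoner's Dilemma (illustrative payoffs, not the paper's experiments); trajectories of this sort are what the learning curves of Q-learners are compared against.

# Two-population replicator dynamics for a 2x2 game (Euler integration).
import numpy as np

A = np.array([[3.0, 0.0],  # row player's payoffs (Prisoner's Dilemma)
              [5.0, 1.0]])
B = A.T                    # symmetric game: column player's payoffs

def replicator_step(x, y, dt=0.01):
    fx = A @ y             # row player's expected payoff per action
    fy = B.T @ x           # column player's expected payoff per action
    x = x + dt * x * (fx - x @ fx)
    y = y + dt * y * (fy - y @ fy)
    return x, y

x = np.array([0.9, 0.1])   # initial probabilities of (cooperate, defect)
y = np.array([0.9, 0.1])
for _ in range(5000):
    x, y = replicator_step(x, y)
print(x, y)                # mass drifts to defection, the game's equilibrium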


Artificial Intelligence | 2007

What evolutionary game theory tells us about multiagent learning

Karl Tuyls; Simon Parsons

This paper discusses "If multi-agent learning is the answer, what is the question?" [Y. Shoham, R. Powers, T. Grenager, If multi-agent learning is the answer, what is the question? Artificial Intelligence 171 (7) (2007) 365-377, this issue] from the perspective of evolutionary game theory. We briefly discuss the concepts of evolutionary game theory, and examine the main conclusions from [Y. Shoham, R. Powers, T. Grenager, If multi-agent learning is the answer, what is the question? Artificial Intelligence 171 (7) (2007) 365-377, this issue] with respect to some of our previous work. Overall we find much to agree with, concluding, however, that the central concerns of multiagent learning are rather narrow compared with the broad variety of work identified in [Y. Shoham, R. Powers, T. Grenager, If multi-agent learning is the answer, what is the question? Artificial Intelligence 171 (7) (2007) 365-377, this issue].


Adaptive Agents and Multi-Agent Systems | 2003

A selection-mutation model for Q-learning in multi-agent systems

Karl Tuyls; Katja Verbeeck; Tom Lenaerts

Although well understood in the single-agent framework, the use of traditional reinforcement learning (RL) algorithms in multi-agent systems (MAS) is not always justified. The feedback an agent experiences in a MAS is usually influenced by the other agents present in the system. Multi-agent environments are therefore non-stationary, and the convergence and optimality guarantees of RL algorithms are lost. To better understand the dynamics of traditional RL algorithms, we analyze the learning process in terms of evolutionary dynamics. More specifically, we show how the replicator dynamics (RD) can be used as a model for Q-learning in games. The dynamical equations of Q-learning are derived and illustrated by some well-chosen experiments. Both reveal an interesting connection between the exploitation-exploration scheme from RL and the selection-mutation mechanisms from evolutionary game theory.
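
For reference, the resulting dynamics are commonly written in the selection-mutation form below, with x and y the players' mixed strategies, A the row player's payoff matrix, α the learning rate, and τ the Boltzmann temperature; the exact constants depend on the chosen parameterization.

\[
\frac{dx_i}{dt} \;=\; \underbrace{\frac{\alpha}{\tau}\, x_i\left[(Ay)_i - x^{\top} A y\right]}_{\text{selection (replicator)}} \;+\; \underbrace{\alpha\, x_i \sum_{j} x_j \ln\frac{x_j}{x_i}}_{\text{mutation (exploration)}}
\]

The first term pushes probability mass toward better replies; the second, an entropy-like term induced by Boltzmann exploration, plays the role of mutation.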


Knowledge Engineering Review | 2005

Evolutionary game theory and multi-agent reinforcement learning

Karl Tuyls; Ann Nowé

In this paper we survey the basics of reinforcement learning and (evolutionary) game theory, applied to the field of multi-agent systems. The paper consists of three parts. We start with an overview of the fundamentals of reinforcement learning. Next we summarize the most important aspects of evolutionary game theory. Finally, we discuss the state of the art of multi-agent reinforcement learning and its mathematical connection with evolutionary game theory.


AI Magazine | 2012

Multiagent Learning: Basics, Challenges, and Prospects

Karl Tuyls; Gerhard Weiss

Multiagent systems (MAS) are widely accepted as an important method for solving problems of a distributed nature. A key to the success of MAS is efficient and effective multiagent learning (MAL). The past twenty-five years have seen great interest and tremendous progress in the field of MAL. This article introduces and gives an overview of this field by presenting its fundamentals, sketching its historical development, and describing some key algorithms for MAL. Moreover, the main challenges the field faces today are identified.


Autonomous Agents and Multi-Agent Systems | 2007

Exploring selfish reinforcement learning in repeated games with stochastic rewards

Katja Verbeeck; Ann Nowé; Johan Parent; Karl Tuyls

In this paper we introduce a new multi-agent reinforcement learning algorithm, called exploring selfish reinforcement learning (ESRL). ESRL allows agents to reach optimal solutions in repeated non-zero-sum games with stochastic rewards by using coordinated exploration. First, two ESRL algorithms, for common interest and for conflicting interest games respectively, are presented. Both are based on the same idea: an agent explores by temporarily excluding some of the local actions from its private action space, giving the team of agents the opportunity to look for better solutions in a reduced joint action space. In a later stage these two algorithms are combined into one generic algorithm that does not assume the type of game is known in advance. ESRL is able to find the Pareto-optimal solution in common interest games without communication. In conflicting interest games ESRL needs only limited communication to learn a fair periodical policy, resulting in a good overall policy. Importantly, ESRL agents are independent in the sense that they base their decisions only on their own action choices and rewards; they are flexible in learning different solution concepts; and they can handle stochastic, possibly delayed rewards and asynchronous action selection. A real-life experiment, adaptive load-balancing of parallel applications, is included.
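
The exclusion idea at the heart of ESRL can be sketched compactly. The Python skeleton below is illustrative only: the names, the value-update rule, and the phase schedule are assumptions, not the authors' algorithm (which builds on learning automata).

import random

class ESRLAgentSketch:
    def __init__(self, n_actions, lr=0.1):
        self.n_actions = n_actions
        self.values = [0.0] * n_actions      # per-action reward estimates
        self.active = set(range(n_actions))  # private (non-excluded) actions
        self.lr = lr

    def act(self, epsilon=0.1):
        if random.random() < epsilon:
            return random.choice(sorted(self.active))
        return max(self.active, key=lambda a: self.values[a])

    def update(self, action, reward):
        self.values[action] += self.lr * (reward - self.values[action])

    def exclude_converged_action(self):
        # Coordinated exploration: temporarily drop the action this agent
        # converged to, so the team searches a reduced joint action space.
        if len(self.active) > 1:
            best = max(self.active, key=lambda a: self.values[a])
            self.active.discard(best)

    def restore_action_space(self):
        self.active = set(range(self.n_actions))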


Journal of Artificial Intelligence Research | 2015

Evolutionary dynamics of multi-agent learning: a survey

Daan Bloembergen; Karl Tuyls; Daniel Hennes; Michael Kaisers

The interaction of multiple autonomous agents gives rise to highly dynamic and nondeterministic environments, contributing to the complexity in applications such as automated financial markets, smart grids, or robotics. Due to the sheer number of situations that may arise, it is not possible to foresee and program the optimal behaviour for all agents beforehand. Consequently, it becomes essential for the success of the system that the agents can learn their optimal behaviour and adapt to new situations or circumstances. The past two decades have seen the emergence of reinforcement learning, both in single and multi-agent settings, as a strong, robust and adaptive learning paradigm. Progress has been substantial, and a wide range of algorithms are now available. An important challenge in the domain of multi-agent learning is to gain qualitative insights into the resulting system dynamics. In the past decade, tools and methods from evolutionary game theory have been successfully employed to study multi-agent learning dynamics formally in strategic interactions. This article surveys the dynamical models that have been derived for various multi-agent reinforcement learning algorithms, making it possible to study and compare them qualitatively. Furthermore, new learning algorithms that have been introduced using these evolutionary game theoretic tools are reviewed. The evolutionary models can be used to study complex strategic interactions. Examples of such analysis are given for the domains of automated trading in stock markets and collision avoidance in multi-robot systems. The paper provides a roadmap on the progress that has been achieved in analysing the evolutionary dynamics of multi-agent learning by highlighting the main results and accomplishments.


Adaptive Agents and Multi-Agent Systems | 2005

Bee Behaviour in Multi-agent Systems

Nyree Lemmens; Steven de Jong; Karl Tuyls; Ann Nowé

In this paper we present a new, non-pheromone-based algorithm inspired by the behaviour of bees. The algorithm combines both recruitment and navigation strategies. We investigate whether this new algorithm outperforms pheromone-based algorithms, inspired by the behaviour of ants, in the task of foraging. From our experiments, we conclude that (i) the bee-inspired algorithm is significantly more efficient when finding and collecting food, i.e., it uses fewer iterations to complete the task; (ii) the bee-inspired algorithm is more scalable, i.e., it requires less computation time to complete the task, even though in small worlds, the ant-inspired algorithm is faster on a time-per-iteration measure; and finally, (iii) our current bee-inspired algorithm is less adaptive than ant-inspired algorithms.
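
The two ingredients being combined, path-integration navigation and dance-like recruitment, can be sketched as follows; this Python skeleton uses assumed names and dynamics and is not the authors' implementation.

import numpy as np

class BeeSketch:
    def __init__(self):
        self.vector = np.zeros(2)        # integrated displacement from hive

    def fly(self, displacement):
        self.vector += displacement      # path integration while exploring

    def homing_direction(self):
        # Head straight back along the integrated vector (unit step).
        norm = np.linalg.norm(self.vector)
        return -self.vector / norm if norm > 0 else np.zeros(2)

def dance(forager, recruits):
    # Recruitment without pheromones: a successful forager shares the
    # hive-to-food vector; recruits fly it directly instead of searching.
    for bee in recruits:
        bee.food_vector = forager.vector.copy()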


Intelligent Robots and Systems | 2012

Collision avoidance under bounded localization uncertainty

Daniel Claes; Daniel Hennes; Karl Tuyls; Wim Meeussen

We present a multi-mobile robot collision avoidance system based on the velocity obstacle paradigm. Current positions and velocities of surrounding robots are translated to an efficient geometric representation to determine safe motions. Each robot uses on-board localization and local communication to build the velocity obstacle representation of its surroundings. Our close and error-bounded convex approximation of the localization density distribution results in collision-free paths under uncertainty. While in many algorithms the robots are approximated by circumscribed radii, we use the convex hull to minimize the overestimation in the footprint. Results show that our approach allows for safe navigation even in densely packed environments.
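
For readers unfamiliar with the paradigm, the textbook velocity-obstacle test for two disc-shaped robots is sketched below in Python; the paper goes further, replacing circumscribed discs with convex-hull footprints and adding an error-bounded model of localization uncertainty, which this sketch omits.

import numpy as np

def in_velocity_obstacle(p_a, p_b, v_a, v_b, r_a, r_b):
    """True if A's current velocity puts it on a collision course with B."""
    rel_p = p_b - p_a             # vector from A to B
    rel_v = v_a - v_b             # A's velocity relative to B
    dist = np.linalg.norm(rel_p)
    r = r_a + r_b                 # combined radius of the two discs
    if dist <= r:
        return True               # footprints already overlap
    # Collision course iff the relative velocity points into the cone
    # subtended by the disc of radius r around B.
    speed = np.linalg.norm(rel_v)
    if speed == 0.0:
        return False
    half_angle = np.arcsin(r / dist)
    cos_angle = np.clip(rel_p @ rel_v / (dist * speed), -1.0, 1.0)
    return np.arccos(cos_angle) <= half_angle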

Collaboration

Dive into Karl Tuyls's collaboration.

Top Co-Authors

Ann Nowé (Vrije Universiteit Brussel)
Haitham Bou Ammar (University of Pennsylvania)
Katja Verbeeck (Vrije Universiteit Brussel)