
Publications


Featured research published by Johan Parent.


Autonomous Agents and Multi-Agent Systems | 2007

Exploring selfish reinforcement learning in repeated games with stochastic rewards

Katja Verbeeck; Ann Nowé; Johan Parent; Karl Tuyls

In this paper we introduce a new multi-agent reinforcement learning algorithm called exploring selfish reinforcement learning (ESRL). ESRL allows agents to reach optimal solutions in repeated non-zero-sum games with stochastic rewards by using coordinated exploration. First, two ESRL algorithms are presented, for common interest and conflicting interest games respectively. Both are based on the same idea: an agent explores by temporarily excluding some of the local actions from its private action space, giving the team of agents the opportunity to look for better solutions in a reduced joint action space. In a later stage, the two algorithms are combined into one generic algorithm that does not assume the type of game is known in advance. ESRL is able to find the Pareto optimal solution in common interest games without communication. In conflicting interest games, ESRL needs only limited communication to learn a fair periodical policy, resulting in a good overall policy. Importantly, ESRL agents are independent in the sense that they base their decisions only on their own action choices and rewards; they are flexible in learning different solution concepts; and they can handle stochastic, possibly delayed rewards and asynchronous action selection. A real-life experiment, the adaptive load balancing of parallel applications, is also included.
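
The coordinated exploration idea translates naturally into code. Below is a highly simplified Python sketch of ESRL-style exploration in a two-agent common interest game: each agent learns action values from its own rewards only and, between phases, temporarily excludes its current best action so the team searches a reduced joint action space. The toy game, phase schedule, and all names are illustrative assumptions, not the paper's exact algorithm.

    import random

    # A highly simplified ESRL-style sketch (illustrative names throughout).
    N_ACTIONS = 3
    ROUNDS_PER_PHASE = 200

    def payoff(joint_action):
        # Hypothetical stochastic common interest game: both agents get the
        # same Bernoulli reward, with the highest mean at joint action (2, 2).
        mean = {(2, 2): 0.9, (0, 0): 0.6}.get(joint_action, 0.3)
        return 1.0 if random.random() < mean else 0.0

    class ESRLAgent:
        def __init__(self):
            self.available = set(range(N_ACTIONS))  # private action space
            self.values = [0.0] * N_ACTIONS         # running reward averages
            self.counts = [0] * N_ACTIONS

        def act(self):
            return random.choice(sorted(self.available))

        def update(self, action, reward):
            self.counts[action] += 1
            self.values[action] += (reward - self.values[action]) / self.counts[action]

        def exclude_best(self):
            # Temporarily drop the current best local action so the team can
            # search the reduced joint action space for something better.
            if len(self.available) > 1:
                best = max(self.available, key=lambda a: self.values[a])
                self.available.discard(best)

    agents = [ESRLAgent(), ESRLAgent()]
    for phase in range(N_ACTIONS - 1):
        for _ in range(ROUNDS_PER_PHASE):
            joint = tuple(agent.act() for agent in agents)
            r = payoff(joint)  # common interest: one shared reward
            for agent, action in zip(agents, joint):
                agent.update(action, r)
        for agent in agents:
            agent.exclude_best()

    # Each agent keeps its overall best action; in ESRL proper the phases are
    # synchronized so that these independent choices line up.
    print([max(range(N_ACTIONS), key=lambda a: ag.values[a]) for ag in agents])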


Australian Joint Conference on Artificial Intelligence | 2002

Learning to Reach the Pareto Optimal Nash Equilibrium as a Team

Katja Verbeeck; Ann Nowé; Tom Lenaerts; Johan Parent

Coordination is an important issue in multi-agent systems when agents want to maximize their revenue. Coordination is often achieved through communication; however, communication has its price. We are interested in an approach that keeps communication between the agents low while a globally optimal behavior can still be found. In this paper we report on an efficient approach that allows independent reinforcement learning agents to reach a Pareto optimal Nash equilibrium with limited communication. The communication happens at regular time steps and is basically a signal for the agents to start an exploration phase. During each exploration phase, some agents exclude their current best action so as to give the team the opportunity to look for a possibly better Nash equilibrium. This technique of reducing the action space by exclusion was only recently introduced for finding periodical policies in games of conflicting interest. Here, we explore the technique in repeated common interest games with deterministic or stochastic outcomes.
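
To make the target solution concept concrete, here is a minimal Python sketch that enumerates the pure Nash equilibria of a small two-player game and keeps only those that are not Pareto dominated. The payoff matrix is a made-up coordination game, not one from the paper.

    import itertools

    # payoffs[a1][a2] = (reward for agent 1, reward for agent 2)
    payoffs = [
        [(4, 4), (0, 3)],
        [(3, 0), (2, 2)],
    ]

    def is_nash(a1, a2):
        # Neither agent can gain by unilaterally deviating.
        r1, r2 = payoffs[a1][a2]
        return (all(payoffs[b][a2][0] <= r1 for b in range(2)) and
                all(payoffs[a1][b][1] <= r2 for b in range(2)))

    def dominates(p, q):
        # p Pareto dominates q: no one is worse off and someone is better off.
        return all(x >= y for x, y in zip(p, q)) and p != q

    nash = [a for a in itertools.product(range(2), repeat=2) if is_nash(*a)]
    pareto = [e for e in nash
              if not any(dominates(payoffs[f[0]][f[1]], payoffs[e[0]][e[1]])
                         for f in nash)]
    print(nash)    # [(0, 0), (1, 1)]: two pure Nash equilibria
    print(pareto)  # [(0, 0)]: the Pareto optimal Nash equilibrium

Both (0, 0) and (1, 1) are stable against unilateral deviation, but only (0, 0) gives both agents their best payoff; the exploration phases described above are what let learning agents escape the inferior equilibrium.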


Genetic and Evolutionary Computation Conference | 2005

Transition models as an incremental approach for problem solving in evolutionary algorithms

Anne Defaweux; Tom Lenaerts; Jano I. van Hemert; Johan Parent

This paper proposes an incremental approach to building solutions using evolutionary computation. It presents a simple evolutionary model, called a transition model, in which partial solutions are constructed that interact to form larger solutions. An evolutionary process is used to merge these partial solutions into a full solution for the problem at hand. The paper provides a preliminary study of the evolutionary dynamics of this model as well as an empirical comparison with other evolutionary techniques on binary constraint satisfaction problems.
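
As a rough illustration of the transition-model idea, the following Python sketch grows a population of partial assignments for a toy binary constraint satisfaction problem, repeatedly merging compatible partial solutions into larger ones until a full solution appears. The CSP instance, the merge rule, and the absence of fitness-based selection are simplifications for illustration only.

    import random

    N_VARS = 6
    # Toy binary CSP: adjacent variables on a ring must take different values.
    CONSTRAINTS = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]

    def consistent(partial):
        return all(partial[i] != partial[j]
                   for i, j in CONSTRAINTS if i in partial and j in partial)

    def merge(p, q):
        # Two partial solutions interact: merge if they agree on shared
        # variables and the union violates no checkable constraint.
        if any(k in q and q[k] != v for k, v in p.items()):
            return None
        union = {**p, **q}
        return union if consistent(union) else None

    # Initial population: every single-variable partial solution.
    population = [{i: v} for i in range(N_VARS) for v in (0, 1)]

    for _ in range(1000):
        p, q = random.sample(population, 2)
        child = merge(p, q)
        if child is not None:
            population.append(child)
            if len(child) == N_VARS:
                print("full solution:", child)
                break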


Workshop on Radical Agent Concepts | 2002

Homo Egualis Reinforcement Learning Agents for Load Balancing

Katja Verbeeck; Johan Parent; Ann Nowé

Periodical policies were recently introduced as a solution to the coordination problem in games that assume competition between the players and where the overall performance can only be as good as that of the poorest player. Instead of converging to just one Nash equilibrium, which may favor only one of the players, a periodical policy switches between periods in which all interesting Nash equilibria are played. As a result, the players are able to equalize their pay-offs and a fair solution is built. Moreover, players can learn this policy with minimal communication: now and then they send each other their performance. In this paper, periodical policies are investigated for use in real-life asynchronous games. More precisely, we look at the problem of load balancing in a simple job scheduling game. The asynchronism of the problem is reflected in delayed pay-offs or reinforcements, probabilistic job creation, and processor rates that follow an exponential distribution. We show that a group of homo egualis reinforcement learning agents can still find a periodical policy. When the jobs are small, homo egualis reinforcement learning agents find a good probability distribution over their action space to play the game without any communication.
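
The switching behaviour of a periodical policy fits in a few lines of Python. In the toy sketch below, two agents alternate between the two pure Nash equilibria of a conflicting interest game; the only communication is a periodic exchange of accumulated pay-offs, after which play moves to the equilibrium favouring the poorer agent. The game, period length, and switching rule are illustrative assumptions, not the paper's exact learning scheme.

    # Battle-of-the-sexes style pay-offs: equilibrium "A" favours agent 0,
    # equilibrium "B" favours agent 1 (made-up numbers).
    EQUILIBRIA = {"A": (3.0, 1.0), "B": (1.0, 3.0)}
    PERIOD = 100  # steps between the only communication: exchanged totals

    totals = [0.0, 0.0]
    current = "A"
    for t in range(1, 1001):
        r0, r1 = EQUILIBRIA[current]
        totals[0] += r0
        totals[1] += r1
        if t % PERIOD == 0:
            # Homo egualis flavour: the better-off agent yields, so play
            # switches to the equilibrium favouring whoever is behind.
            current = "A" if totals[0] < totals[1] else "B"

    print([x / 1000 for x in totals])  # both averages end up at a fair 2.0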


Scientific Programming | 2004

Adaptive load balancing of parallel applications with multi-agent reinforcement learning on heterogeneous systems

Johan Parent; Katja Verbeeck; Jan Lemeire; Ann Nowé; Kris Steenhaut; Erik F. Dirkx

We report on the improvements that can be achieved by applying machine learning techniques, in particular reinforcement learning, to the dynamic load balancing of parallel applications. The applications considered in this paper are coarse-grain, data-intensive applications, which put high pressure on the hardware interconnect. Synchronization and load balancing in complex, heterogeneous networks require fast, flexible, adaptive algorithms. By viewing a parallel application as a one-state coordination game in the framework of multi-agent reinforcement learning, and by using a recently introduced multi-agent exploration technique, we are able to improve upon the classic job farming approach. The improvements are achieved with limited computation and communication overhead.
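
As a hedged illustration of the one-state game view, the Python sketch below uses a learning automaton with a linear reward-inaction update to route jobs to heterogeneous processors, reinforcing choices that keep the chosen processor's normalized load lowest. The processor speeds, reward signal, and drain model are all assumptions made for the sketch, not the paper's actual setup.

    import random

    SPEEDS = [1.0, 2.0, 4.0]   # heterogeneous processor rates (assumed)
    ALPHA = 0.05               # learning rate
    probs = [1 / len(SPEEDS)] * len(SPEEDS)
    loads = [0.0] * len(SPEEDS)  # outstanding work per processor

    for job in range(2000):
        i = random.choices(range(len(SPEEDS)), weights=probs)[0]
        loads[i] += 1.0                # submit one unit of work
        finish = loads[i] / SPEEDS[i]  # normalized load after submission
        # Reward when the chosen processor has the smallest normalized load.
        reward = finish <= min(l / s for l, s in zip(loads, SPEEDS))
        if reward:
            # Linear reward-inaction: reinforce the chosen processor.
            probs = [p + ALPHA * ((1.0 if k == i else 0.0) - p)
                     for k, p in enumerate(probs)]
        # Work drains at each processor's own rate between submissions.
        loads = [max(0.0, l - s * 0.1) for l, s in zip(loads, SPEEDS)]

    print([round(p, 2) for p in probs])  # faster processors end up preferred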


Congress on Evolutionary Computation | 2005

Complexity transitions in evolutionary algorithms: evaluating the impact of the initial population

Anne Defaweux; Tom Lenaerts; Jano I. van Hemert; Johan Parent

This paper proposes an evolutionary approach for composing solutions in an incremental way. The approach is based on the metaphor of transitions in complexity discussed in the context of evolutionary biology. Partially defined solutions interact and evolve into aggregations until a full solution for the problem at hand is found. The impact of the initial population on the outcome and the dynamics of the process is evaluated in the domain of binary constraint satisfaction problems.


Congress on Evolutionary Computation | 2005

Linear genetic programming using a compressed genotype representation

Johan Parent; Ann Nowé; Kris Steenhaut; Anne Defaweux

This paper presents a modularization strategy for linear genetic programming (GP) based on a substring compression/substitution scheme. The purpose of this substitution scheme is to protect building blocks; it is, in other words, a form of linkage learning. The compression of the genotype provides both a protection mechanism and a form of genetic code reuse. The paper presents results for synthetic genetic algorithm (GA) reference problems such as SEQ and OneMax, as well as several standard GP problems, including a real-world application of GP to data compression. Results show that, although compressing substrings assumes tight linkage between alleles, the approach improves the search process.
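
The compression/substitution mechanism can be sketched as follows: a frequent substring in the population is replaced by a fresh symbol, so variation operators treat the protected building block as a single atomic gene, and the substitution is undone before evaluation. The Python sketch below uses made-up binary genomes and parameters; it shows the mechanism, not the paper's full linear GP system.

    import random
    from collections import Counter

    L = 3  # length of candidate building blocks (assumed)

    def most_frequent_block(population):
        counts = Counter(tuple(g[i:i + L])
                         for g in population for i in range(len(g) - L + 1))
        return list(counts.most_common(1)[0][0])

    def compress(genome, block, symbol):
        # Substitute every occurrence of `block` with one new symbol, so
        # crossover and mutation cannot cut inside the protected substring.
        out, i = [], 0
        while i < len(genome):
            if genome[i:i + L] == block:
                out.append(symbol)
                i += L
            else:
                out.append(genome[i])
                i += 1
        return out

    def expand(genome, block, symbol):
        # Undo the substitution before fitness evaluation.
        out = []
        for gene in genome:
            out.extend(block if gene == symbol else [gene])
        return out

    population = [[random.choice("01") for _ in range(12)] for _ in range(20)]
    block = most_frequent_block(population)
    compressed = [compress(g, block, "B0") for g in population]
    print(block, compressed[0], expand(compressed[0], block, "B0"))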


European Conference on Machine Learning | 2001

Social Agents Playing a Periodical Policy

Ann Nowé; Johan Parent; Katja Verbeeck


Genetic and Evolutionary Computation Conference | 2002

Evolving Compression Preprocessors With Genetic Programming

Johan Parent; Ann Nowé


Archive | 2002

Adaptive Load Balancing of Parallel Applications with Reinforcement Learning on Heterogeneous Networks

Johan Parent; Katja Verbeeck; Jan Lemeire

Collaboration


An overview of Johan Parent's collaborators and their affiliations.

Top Co-Authors

Ann Nowé, Vrije Universiteit Brussel
Katja Verbeeck, Vrije Universiteit Brussel
Anne Defaweux, Vrije Universiteit Brussel
Tom Lenaerts, Université libre de Bruxelles
Kris Steenhaut, Vrije Universiteit Brussel
Jan Lemeire, Vrije Universiteit Brussel
Andreas Birk, Vrije Universiteit Brussel
Erik F. Dirkx, Vrije Universiteit Brussel
Thomas Walle, Vrije Universiteit Brussel
Tom De Vlaminck, Vrije Universiteit Brussel