Publication


Featured research published by Daniel Kudenko.


Archive | 2003

Adaptive Agents and Multi-Agent Systems II

Daniel Kudenko; Dimitar Kazakov; Eduardo Alonso

Cooperation and learning are two ways in which an agent can improve its performance. Cooperative multiagent learning is a framework for analysing the trade-off between cooperation and learning in multiagent systems. We focus on multiagent systems in which individual agents are capable of solving problems and learning using case-based reasoning (CBR). We present several collaboration strategies for learning agents and report empirical results from several experiments. Finally, we analyse the collaboration strategies and their results along several dimensions, such as the number of agents, redundancy, the CBR technique used, and individual decision policies.
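As a rough illustration of one family of such collaboration strategies, the sketch below shows a committee of CBR agents, each solving a problem by nearest-neighbour retrieval from its own case base and then voting on the final answer. This is a hypothetical minimal example: the class names, distance metric, and toy case bases are assumptions for illustration, not the authors' implementation.

```python
import math
from collections import Counter

# Hypothetical sketch of a "committee" collaboration strategy:
# each agent retrieves the nearest case from its own case base
# and the agents decide the final solution by majority vote.

class CBRAgent:
    def __init__(self, case_base):
        # case_base: list of (problem_features, solution) pairs
        self.case_base = case_base

    def solve(self, problem):
        # 1-NN retrieval: reuse the solution of the closest stored case.
        nearest = min(self.case_base,
                      key=lambda case: math.dist(case[0], problem))
        return nearest[1]

def committee_solve(agents, problem):
    # Each agent proposes a solution; the majority vote decides.
    votes = Counter(agent.solve(problem) for agent in agents)
    return votes.most_common(1)[0][0]

# Example: three agents with partially redundant case bases.
agents = [
    CBRAgent([((0.0, 0.0), "A"), ((1.0, 1.0), "B")]),
    CBRAgent([((0.1, 0.0), "A"), ((0.9, 1.0), "B")]),
    CBRAgent([((0.0, 0.2), "A"), ((1.0, 0.8), "B")]),
]
print(committee_solve(agents, (0.2, 0.1)))  # expected: "A"
```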


Knowledge Engineering Review | 2001

Learning in multi-agent systems

Eduardo Alonso; Mark d'Inverno; Daniel Kudenko; Michael Luck; Jason Noble

In recent years, multi-agent systems (MASs) have received increasing attention in the artificial intelligence community. Research in multi-agent systems involves the investigation of autonomous, rational and flexible behaviour of entities such as software programs or robots, and their interaction and coordination in such diverse areas as robotics (Kitano et al., 1997), information retrieval and management (Klusch, 1999), and simulation (Gilbert & Conte, 1995). When designing agent systems, it is impossible to foresee all the potential situations an agent may encounter, so agent behaviour cannot be optimally specified in advance. Agents therefore have to learn from, and adapt to, their environment, especially in a multi-agent setting.


Springer US | 2003

Adaptive agents and multi-agent systems: adaptation and multi-agent learning

Eduardo Alonso; Daniel Kudenko; Dimitar Kazakov

Contents:
- To Adapt or Not to Adapt - Consequences of Adapting Driver and Traffic Light Agents
- Optimal Control in Large Stochastic Multi-agent Systems
- Continuous-State Reinforcement Learning with Fuzzy Approximation
- Using Evolutionary Game-Theory to Analyse the Performance of Trading Strategies in a Continuous Double Auction Market
- Parallel Reinforcement Learning with Linear Function Approximation
- Combining Reinforcement Learning with Symbolic Planning
- Agent Interactions and Implicit Trust in IPD Environments
- Collaborative Learning with Logic-Based Models
- Priority Awareness: Towards a Computational Model of Human Fairness for Multi-agent Systems
- Bifurcation Analysis of Reinforcement Learning Agents in the Selten's Horse Game
- Bee Behaviour in Multi-agent Systems
- Stable Cooperation in the N-Player Prisoner's Dilemma: The Importance of Community Structure
- Solving Multi-stage Games with Hierarchical Learning Automata That Bootstrap
- Auctions, Evolution, and Multi-agent Learning
- Multi-agent Reinforcement Learning for Intrusion Detection
- Networks of Learning Automata and Limiting Games
- Multi-agent Learning by Distributed Feature Extraction


Advances in Complex Systems | 2011

An Empirical Study of Potential-Based Reward Shaping and Advice in Complex, Multi-Agent Systems

Sam Devlin; Daniel Kudenko; Marek Grześ

This paper investigates the impact of reward shaping in multi-agent reinforcement learning as a way to incorporate domain knowledge about good strategies. In theory, potential-based reward shaping does not alter the Nash equilibria of a stochastic game, only the exploration of the shaped agent. We empirically demonstrate the performance of reward shaping in two problem domains within the context of RoboCup KeepAway, designing three reward-shaping schemes that encourage specific behaviour such as keeping a minimum distance from other players on the same team and taking on specific roles. The results illustrate that reward shaping with multiple, simultaneous learning agents can reduce the time needed to learn a suitable policy and can alter the final group performance.
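For context, potential-based reward shaping (Ng et al., 1999) adds F(s, s') = gamma * Phi(s') - Phi(s) to the environment reward. The sketch below illustrates the mechanism with an assumed toy potential function; it is not the paper's KeepAway shaping.

```python
# Minimal sketch of potential-based reward shaping: the shaped
# reward adds F(s, s') = gamma * phi(s') - phi(s) to the
# environment reward. The potential function phi here is a
# made-up example, not the paper's KeepAway schemes.

GAMMA = 0.99

def phi(state):
    # Hypothetical potential: reward progress along one state feature.
    return float(state[0])

def shaped_reward(reward, state, next_state):
    # Potential-based shaping term; by Ng et al. (1999) this leaves
    # the optimal policies (and, per the paper, the Nash equilibria
    # of a stochastic game) unchanged.
    return reward + GAMMA * phi(next_state) - phi(state)

# Example: an environment reward of 0.0 for a transition that
# increases the potential yields a positive shaped reward.
print(shaped_reward(0.0, (0.0,), (1.0,)))  # 0.99
```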


adaptive agents and multi-agent systems | 2004

Reinforcement Learning of Coordination in Heterogeneous Cooperative Multi-Agent Systems

Spiros Kapetanakis; Daniel Kudenko

In today's open networking environment, the assumption that the learning agents that join a system are homogeneous is becoming increasingly unrealistic. This makes effective coordination particularly difficult to learn, especially in the absence of learning agent standards. In this short paper we investigate the problem of learning to coordinate with heterogeneous agents. We show that an agent employing the FMQ algorithm, a recently developed multiagent learning method, has the ability to converge towards the optimal joint action when teamed up with one or more simple Q-learners. Specifically, we show such convergence in scenarios where simple Q-learners alone are unable to converge towards an optimum.
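As a rough sketch of the FMQ idea, the following biases Boltzmann action selection towards actions that have frequently yielded their maximum observed reward, via EV(a) = Q(a) + c * freq(maxR(a)) * maxR(a). The class structure, parameter names, and temperature handling are illustrative assumptions rather than a faithful reproduction of the published algorithm.

```python
import math
import random
from collections import defaultdict

# Hypothetical sketch of the FMQ heuristic for a single-state
# (repeated) coordination game, as studied in this literature.

class FMQLearner:
    def __init__(self, actions, c=10.0, lr=0.1, temp=1.0):
        self.actions = actions
        self.c = c                 # weight of the FMQ bias term
        self.lr = lr               # Q-learning rate
        self.temp = temp           # Boltzmann temperature
        self.q = defaultdict(float)
        self.max_r = defaultdict(lambda: float("-inf"))
        self.max_r_count = defaultdict(int)
        self.count = defaultdict(int)

    def ev(self, a):
        # EV(a) = Q(a) + c * freq(maxR(a)) * maxR(a)
        if self.count[a] == 0:
            return self.q[a]
        freq = self.max_r_count[a] / self.count[a]
        return self.q[a] + self.c * freq * self.max_r[a]

    def act(self):
        # Boltzmann selection over the FMQ-adjusted values.
        weights = [math.exp(self.ev(a) / self.temp) for a in self.actions]
        return random.choices(self.actions, weights=weights)[0]

    def update(self, a, reward):
        # Track how often each action achieved its maximum reward.
        self.count[a] += 1
        if reward > self.max_r[a]:
            self.max_r[a], self.max_r_count[a] = reward, 1
        elif reward == self.max_r[a]:
            self.max_r_count[a] += 1
        self.q[a] += self.lr * (reward - self.q[a])
```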


SLS'07 Proceedings of the 2007 international conference on Engineering stochastic local search algorithms: designing, implementing and analyzing effective heuristics | 2007

Tuning the performance of the MMAS heuristic

Enda Ridge; Daniel Kudenko

This paper presents an in-depth Design of Experiments (DOE) methodology for the performance analysis of a stochastic heuristic. The heuristic under investigation is Max-Min Ant System (MMAS) for the Travelling Salesperson Problem (TSP). Specifically, the Response Surface Methodology is used to model and tune MMAS performance with regard to 10 tuning parameters, 2 problem characteristics and 2 performance metrics: solution quality and solution time. The accuracy of these predictions is methodically verified in a separate series of confirmation experiments. The two conflicting responses are simultaneously optimised using desirability functions, recommendations on optimal parameter settings are made, and the optimal parameters are methodically verified. The large number of degrees of freedom in the MMAS design is overcome with a Minimum Run Resolution V design. Publicly available algorithm and problem generator implementations are used throughout. The paper should therefore serve as an illustrative case study of the principled engineering of a stochastic heuristic.
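To make the response-surface step concrete, here is a minimal hypothetical sketch for just two tuning parameters: evaluate the heuristic at a three-level factorial design, fit a second-order model by least squares, and locate the predicted optimum (to be confirmed with real runs, as in the paper). The evaluate() stub, parameter names, and design levels are assumptions for illustration.

```python
import itertools
import numpy as np

# Hypothetical response-surface tuning for two algorithm parameters
# (e.g. a pheromone evaporation rate and an ant count). The real
# study uses 10 parameters and a Minimum Run Resolution V design.

def evaluate(rho, ants):
    # Stand-in for running the heuristic and measuring a response
    # such as solution quality; here a made-up noisy surface.
    return -(rho - 0.3) ** 2 - 0.01 * (ants - 25) ** 2 + np.random.normal(0, 0.01)

# Three-level full factorial design over the two parameters.
levels_rho, levels_ants = [0.1, 0.5, 0.9], [5, 25, 50]
X, y = [], []
for rho, ants in itertools.product(levels_rho, levels_ants):
    # Full second-order model terms: 1, a, b, ab, a^2, b^2.
    X.append([1, rho, ants, rho * ants, rho ** 2, ants ** 2])
    y.append(evaluate(rho, ants))

# Least-squares fit of the quadratic response surface.
coef, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)

# Predict the optimum on a fine grid; confirmation runs would follow.
pred = lambda r, a: coef @ [1, r, a, r * a, r ** 2, a ** 2]
grid = [(r, a) for r in np.linspace(0.1, 0.9, 50)
               for a in np.linspace(5, 50, 50)]
best = max(grid, key=lambda p: pred(*p))
print("predicted best (rho, ants):", best)
```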


european agent systems summer school | 2001

Machine learning and inductive logic programming for multi-agent systems

Dimitar Kazakov; Daniel Kudenko

Learning is a crucial ability of intelligent agents. Rather than presenting a complete literature review, we focus in this paper on important issues surrounding the application of machine learning (ML) techniques to agents and multi-agent systems (MAS). In this discussion we move from disembodied ML through single-agent learning to full multi-agent learning. In the second part of the paper we focus on the application of Inductive Logic Programming, a knowledge-based ML technique, to MAS, and present an implemented framework in which multi-agent learning experiments can be carried out.


Archive | 2010

Tuning an Algorithm Using Design of Experiments

Enda Ridge; Daniel Kudenko

This chapter is a tutorial on using a design of experiments approach for tuning the parameters that affect algorithm performance. A case study illustrates the application of the method and interpretation of its results.


intelligent technologies for interactive entertainment | 2008

Generation of dilemma-based interactive narratives with a changeable story goal

Heather Barber; Daniel Kudenko

This paper describes the Generator of Adaptive Dilemma-based Interactive Narratives (GADIN) system, which automatically generates interactive narratives focused on dilemmas in order to create dramatic tension. The user interacts with the system by making decisions on relevant dilemmas and by freely choosing their own actions. In this paper we introduce the version of GADIN that is able to create a finite story. The narrative finishes, in a manner satisfying to the user, when a dynamically determined story goal is achieved. Satisfaction of this goal may involve the user acting in a way that changes the dispositions of other characters. If the user's actions cause the goal to become impossible or unlikely, the story goal is re-selected, meaning that the user can fundamentally change the overall narrative while still experiencing a coherent narrative and a clear ending. This method has been applied within the children's story domain of a dinosaur adventure but is applicable in any domain that makes use of clichéd storylines. The story designer is required only to provide genre-specific storyworld knowledge and dilemmas.
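A minimal sketch of the goal re-selection loop described above might look as follows; all function names and the story-state representation are hypothetical, not GADIN's actual architecture.

```python
import random

# Hypothetical skeleton of dilemma-driven story generation with a
# changeable goal: present dilemmas until the current story goal is
# achieved, re-selecting the goal if user actions make it unreachable.

def generate_story(story_state, goals, select_dilemma, goal_achieved,
                   goal_reachable, apply_user_choice):
    goal = random.choice(goals)          # dynamically chosen story goal
    while not goal_achieved(story_state, goal):
        if not goal_reachable(story_state, goal):
            # User actions made the goal impossible or unlikely:
            # re-select among still-reachable goals, so the overall
            # narrative can change while remaining coherent.
            goal = random.choice([g for g in goals
                                  if goal_reachable(story_state, g)])
        dilemma = select_dilemma(story_state, goal)
        story_state = apply_user_choice(story_state, dilemma)
    return story_state                   # story ends when goal achieved
```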


Information Systems | 2008

Plan-based reward shaping for reinforcement learning

Marek Grzes; Daniel Kudenko

Reinforcement learning, while a highly popular learning technique for agents and multi-agent systems, has so far encountered difficulties in scaling up to more complex domains. This paper focuses on the use of domain knowledge to improve the convergence speed and optimality of various RL techniques. Specifically, we propose the use of high-level STRIPS operator knowledge in reward shaping to focus the search for the optimal policy. Empirical results show that the plan-based reward shaping approach outperforms other RL techniques, including alternative manual and MDP-based reward shaping when used in its basic form. We show that MDP-based reward shaping may fail, and successful experiments with STRIPS-based shaping suggest modifications which can overcome the encountered problems. The STRIPS-based method we propose allows the same domain knowledge to be expressed in a different way, so the domain expert can choose whether to define an MDP or a STRIPS planning task. We also evaluate the robustness of the proposed STRIPS-based technique to errors in the plan knowledge.
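To illustrate the idea, here is a minimal hypothetical sketch in which an abstract plan supplies the shaping potential: the potential grows with progress through the plan's steps, so the shaped reward encourages plan-following behaviour. The toy plan, state representation, and progress test are assumptions, not the paper's experimental setup.

```python
# Hypothetical sketch of plan-based reward shaping: an abstract
# STRIPS-style plan defines the potential function, rewarding
# progress through the plan's steps.

GAMMA = 0.99

# Toy abstract plan: ordered high-level steps towards the goal.
PLAN = ["get_key", "open_door", "reach_goal"]

def plan_progress(state):
    # Index of the furthest plan step already achieved; state is
    # assumed (for illustration) to be a set of true facts.
    progress = 0
    for i, step in enumerate(PLAN, start=1):
        if step in state:
            progress = i
    return progress

def phi(state):
    # Potential grows with progress through the plan.
    return float(plan_progress(state))

def shaped_reward(reward, state, next_state):
    # Standard potential-based shaping term on top of the plan potential.
    return reward + GAMMA * phi(next_state) - phi(state)

print(shaped_reward(0.0, {"get_key"}, {"get_key", "open_door"}))  # 0.98
```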

Collaboration


Dive into Daniel Kudenko's collaborations.

Top Co-Authors

Feng Li

City University London
