Publication


Featured research published by Kathryn E. Merrick.


Advances in Computer Entertainment Technology | 2006

Motivated reinforcement learning for non-player characters in persistent computer game worlds

Kathryn E. Merrick; Mary Lou Maher

Massively multiplayer online computer games are played in complex, persistent virtual worlds. Over time, the landscape of these worlds evolves and changes as players create and personalise their own virtual property. In contrast, many non-player characters that populate virtual game worlds possess a fixed set of pre-programmed behaviours and lack the ability to adapt and evolve in time with their surroundings. This paper presents motivated reinforcement learning agents as a means of creating non-player characters that can both evolve and adapt. Motivated reinforcement learning agents explore their environment and learn new behaviours in response to interesting experiences, allowing them to display progressively evolving behavioural patterns. In dynamic worlds, environmental changes provide an additional source of interesting experiences triggering further learning and allowing the agents to adapt their existing behavioural patterns in time with their surroundings.
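
A minimal sketch of the general idea in Python, assuming a tabular Q-learner whose reward is intrinsic, derived from a simple count-based novelty signal rather than task success; all names and formulas here are illustrative, not the authors' implementation:

```python
import random
from collections import defaultdict

class MotivatedQLearner:
    """Q-learning driven by an intrinsic, novelty-based reward (illustrative)."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)     # (state, action) -> estimated value
        self.visits = defaultdict(int)  # state -> visit count
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def intrinsic_reward(self, state):
        # Novelty decays with familiarity: rarely visited states are "interesting".
        self.visits[state] += 1
        return 1.0 / self.visits[state]

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, s, a, s_next):
        r = self.intrinsic_reward(s_next)  # motivation supplies the reward
        best_next = max(self.q[(s_next, a2)] for a2 in self.actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])

agent = MotivatedQLearner(actions=["north", "south", "east", "west"])
a = agent.act("room_1")
agent.update("room_1", a, "room_2")
```

Because the reward source habituates, behaviours that once attracted the agent lose value as they become familiar, which is what produces the progressively evolving behavioural patterns the abstract describes.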


TAEBC-2009 | 2009

Motivated Reinforcement Learning

Kathryn E. Merrick; Mary Lou Maher

Motivated learning is an emerging research field in artificial intelligence and cognitive modelling. Computational models of motivation extend reinforcement learning to adaptive, multitask learning in complex, dynamic environments, the goal being to understand how machines can develop new skills and achieve goals that were not predefined by human engineers. In particular, this book describes how motivated reinforcement learning agents can be used in computer games for the design of non-player characters that can adapt their behaviour in response to unexpected changes in their environment. This book covers the design, application and evaluation of computational models of motivation in reinforcement learning. The authors start with overviews of motivation and reinforcement learning, then describe models for motivated reinforcement learning. The performance of these models is demonstrated by applications in simulated game scenarios and a live, open-ended virtual world. Researchers in artificial intelligence, machine learning and artificial life will benefit from this book, as will practitioners working on complex, dynamic systems, in particular multiuser online games.


Advances in Computer Entertainment Technology | 2007

Motivated reinforcement learning for adaptive characters in open-ended simulation games

Kathryn E. Merrick; Mary Lou Maher

Recently a new generation of virtual worlds has emerged in which users are provided with open-ended modelling tools with which they can create and modify world content. The result is evolving virtual spaces for commerce, education and social interaction. In general, these virtual worlds are not games and have no concept of winning; however, the open-ended modelling capacity is nonetheless compelling. The rising popularity of open-ended virtual worlds suggests that there may also be potential for a new generation of computer games situated in open-ended environments. A key issue with the development of such games, however, is the design of non-player characters which can respond autonomously to unpredictable, open-ended changes to their environment. This paper considers the impact of open-ended modelling on character development in simulation games. Motivated reinforcement learning using context-free grammars is proposed as a means of representing unpredictable, evolving worlds for character reasoning. This technique is used to design adaptive characters for the Second Life virtual world to create a new kind of open-ended simulation game.
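
The key technical device named here is a context-free grammar for representing worlds whose content cannot be enumerated at design time. A toy sketch of how such a representation can stay open-ended, with a grammar assumed purely for illustration (the paper's actual grammar is not reproduced here):

```python
# Assumed illustrative grammar for sensed state:
#   STATE  -> OBJECT STATE | OBJECT
#   OBJECT -> type "(" ATTRS ")"
#   ATTRS  -> attr "," ATTRS | attr
# New terminals (object types, attributes) can appear at run time, so
# player-built content needs no changes to the agent's sensing code.

def encode_object(obj):
    attrs = ",".join(f"{k}={v}" for k, v in sorted(obj["attrs"].items()))
    return f'{obj["type"]}({attrs})'

def encode_state(world_objects):
    return " ".join(encode_object(o)
                    for o in sorted(world_objects, key=lambda o: o["type"]))

# Content the designer never anticipated is still representable:
print(encode_state([
    {"type": "forge", "attrs": {"fuel": "low"}},
    {"type": "statue", "attrs": {"height": 3}},  # built by a player at run time
]))
```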


IEEE Transactions on Autonomous Mental Development | 2010

A Comparative Study of Value Systems for Self-Motivated Exploration and Learning by Robots

Kathryn E. Merrick

A range of different value systems have been proposed for self-motivated agents, including biologically and cognitively inspired approaches. Likewise, these value systems have been integrated with different behavioral systems including reflexive architectures, reward-based learning and supervised learning. However, there is little literature comparing the performance of different value systems for motivating exploration and learning by robots. This paper proposes a neural network architecture for integrating different value systems with reinforcement learning. It then presents an empirical evaluation and comparison of four value systems for motivating exploration by a Lego Mindstorms NXT robot. Results reveal the different exploratory properties of novelty-seeking motivation, interest and competence-seeking motivation.
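
In this context a value system is a function that assigns intrinsic value to stimuli or tasks. A hedged sketch of two value systems in the spirit of those compared, a novelty signal that habituates with familiarity and a competence signal based on reduction in prediction error; both formulations are illustrative assumptions, not the paper's exact models:

```python
class NoveltyValue:
    """Value falls as a stimulus becomes familiar (simple habituation)."""

    def __init__(self, tau=0.9):
        self.habituation = {}  # stimulus -> familiarity in [0, 1]
        self.tau = tau

    def value(self, stimulus):
        h = self.habituation.get(stimulus, 0.0)
        self.habituation[stimulus] = self.tau * h + (1.0 - self.tau)
        return 1.0 - h  # novel stimuli score high, familiar ones low

class CompetenceValue:
    """Value tracks learning progress: the drop in prediction error on a task."""

    def __init__(self):
        self.prev_error = {}  # task -> last observed prediction error

    def value(self, task, error):
        # Mastered or impossible tasks score ~0; improving tasks score high.
        progress = max(0.0, self.prev_error.get(task, error) - error)
        self.prev_error[task] = error
        return progress

nv = NoveltyValue()
print([round(nv.value("beep"), 2) for _ in range(3)])  # [1.0, 0.9, 0.81]
```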


Adaptive Behavior | 2011

Achievement, affiliation, and power: Motive profiles for artificial agents

Kathryn E. Merrick; Kamran Shafi

Computational models of motivation are tools that artificial agents can use to identify, prioritize, select and adapt the goals they will pursue autonomously. Previous research has focused on developing computational models of motivation that permit artificial agents to exhibit characteristics such as adaptive exploration, problem-finding behavior, competence-seeking behavior, and creativity. This permits self-motivated agents to identify novel or interesting goals not specifically programmed by system engineers, or adapt in complex or uncertain environments where it is difficult for system engineers to identify all possible goals in advance. However, existing computational models of motivation cover only a small subset of psychological motivation theories. There remains potential to draw on other psychological motivation theories to create artificial agents with new behavioral characteristics. This includes agents that can strive for standards of excellence, both internal and external; agents that can proactively socialize and build relationships with others; and agents that can exert their influence to gain control of resources. With these objectives in mind, this article expands our "motivation toolbox" with three new computational models of motivation for achievement, affiliation, and power motivation. The models are designed such that they can be used in isolation or together, embedded in an artificial "motive profile." To validate the new models of motivation, three experiments are presented that compare the goal-selecting behavior of artificial agents with different motive profiles with that of humans with corresponding motive profiles. Results show that agents with different motive profiles exhibit different goal-selection characteristics, and that these various characteristics are statistically similar to behavioral trends observed experimentally in humans. The article concludes by discussing areas for the future development of each motivation model and the future roles and applications of agents with different motive profiles.
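
One classical way to turn motive strengths into goal-selection tendencies is Atkinson's risk-taking formulation, in which achievement tendency peaks at a moderate probability of success. The sketch below combines that with simple linear terms for affiliation and power; this is an assumption for illustration, and all goal attributes and weights are hypothetical rather than taken from the article:

```python
def goal_tendencies(goals, profile):
    """Score goals under a motive profile (achievement, affiliation, power)."""
    scores = {}
    for g in goals:
        p = g["p_success"]
        achievement = profile["achievement"] * p * (1.0 - p)  # peaks at moderate risk
        affiliation = profile["affiliation"] * g["social_contact"]
        power = profile["power"] * g["control_over_resources"]
        scores[g["name"]] = achievement + affiliation + power
    return scores

goals = [
    {"name": "solo_hard_task", "p_success": 0.2,
     "social_contact": 0.0, "control_over_resources": 0.2},
    {"name": "team_task", "p_success": 0.5,
     "social_contact": 0.9, "control_over_resources": 0.1},
    {"name": "lead_project", "p_success": 0.8,
     "social_contact": 0.4, "control_over_resources": 0.9},
]

# With a pure achievement profile, the moderate-risk goal (p_success = 0.5)
# scores highest:
print(goal_tendencies(goals, {"achievement": 1.0, "affiliation": 0.0, "power": 0.0}))
```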


Adaptive Behavior | 2009

Motivated Learning from Interesting Events: Adaptive, Multitask Learning Agents for Complex Environments

Kathryn E. Merrick; Mary Lou Maher

This article presents a computational model of motivation for learning agents to achieve adaptive, multitask learning in complex, dynamic environments. Motivation is modeled as an attention focus mechanism to extend existing learning algorithms to environments in which tasks cannot be completely predicted prior to learning. Two agent models are presented for motivated reinforcement learning and motivated supervised learning, which incorporate this model of motivation. The formalisms used to define these agent models further allow the definition of consistent metrics for evaluating motivated learning agent models. The article concludes with a demonstration of the motivated reinforcement learning agent model that uses novelty and interest as the motivation function. The model is evaluated using the new metrics. Results show that motivated reinforcement learning agents using general, task-independent concepts such as novelty and interest can learn multiple, task-oriented behaviors by adapting their focus of attention in response to their changing experiences in their environment.
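
A hedged sketch of an interest signal of the kind named as the motivation function, peaking at moderate novelty (a Wundt-curve-like shape built from two sigmoids); the exact functions and parameters in the article may differ:

```python
import math

def sigmoid(x, slope=10.0, turning_point=0.5):
    return 1.0 / (1.0 + math.exp(-slope * (x - turning_point)))

def interest(novelty):
    reward = sigmoid(novelty, turning_point=0.3)      # some novelty is rewarding
    punishment = sigmoid(novelty, turning_point=0.7)  # too much is penalised
    return reward - punishment                        # peaks in between

for n in (0.1, 0.5, 0.9):
    print(f"novelty={n:.1f} -> interest={interest(n):+.2f}")  # low, high, low
```

An attention-focus mechanism of this shape steers learning toward experiences that are neither completely familiar nor incomprehensibly new.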


Conference on Computability in Europe | 2008

Modeling motivation for adaptive nonplayer characters in dynamic computer game worlds

Kathryn E. Merrick

Current computer games are being set in increasingly complex and dynamic virtual environments. Massively multiplayer online games, for example, are played in persistent virtual worlds, which evolve and change as players create and personalize their own virtual property. In contrast, technologies for controlling the behavior of nonplayer characters that populate virtual game worlds are frequently limited to preprogrammed rules. Characters using fixed rule-sets lack the ability to adapt in time with their environment. Motivated reinforcement learning offers an alternative approach to character design that can produce nonplayer characters that both evolve and adapt in dynamic environments. This article presents and evaluates two computational models of motivation for use in nonplayer characters in persistent computer game worlds. These models represent motivation as an ongoing search for novelty, interest, and competence. Two metrics are introduced to evaluate the adaptability of characters controlled by motivated reinforcement learning agents using different models of motivation. These metrics characterize the behavior of nonplayer characters in terms of the variety and complexity of learned behaviors. An empirical evaluation of characters in simulated game scenarios shows that characters motivated by the search for competence are more adaptable in dynamic environments than those motivated by interest and novelty alone.
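
A hedged sketch of the kind of metrics described, behavioural variety and behavioural complexity; these formulations are assumptions for illustration, not necessarily the article's definitions:

```python
def behavioural_variety(learned_behaviours):
    """Number of distinct behaviours (action sequences) a character has learned."""
    return len(set(learned_behaviours))

def behavioural_complexity(learned_behaviours):
    """Length of the longest learned behaviour."""
    return max((len(b) for b in learned_behaviours), default=0)

behaviours = [
    ("move", "pick_up"),
    ("move", "pick_up"),  # duplicates do not add variety
    ("move", "use_forge", "make_sword", "sell"),
]
print(behavioural_variety(behaviours))     # 2 distinct behaviours
print(behavioural_complexity(behaviours))  # longest behaviour has 4 steps
```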


Cognitive Computation | 2016

Trusted Autonomy and Cognitive Cyber Symbiosis: Open Challenges

Hussein A. Abbass; Eleni Petraki; Kathryn E. Merrick; John Harvey; Michael Barlow

This paper considers two emerging, related interdisciplinary topics that are likely to create tipping points in advancing engineering and science. Trusted Autonomy (TA) is a field of research that focuses on understanding and designing the interaction space between two entities, each of which exhibits a level of autonomy. These entities can be humans, machines, or a mix of the two. Cognitive Cyber Symbiosis (CoCyS) is a cloud that uses humans and machines for decision-making. In CoCyS, human–machine teams are viewed as a network with each node comprising humans (as computational machines) or computers. CoCyS focuses on the architecture and interface of a Trusted Autonomous System. This paper examines these two concepts and seeks to remove ambiguity by introducing formal definitions for them. It then discusses open challenges for TA and CoCyS, that is, whether a team made of humans and machines can work in fluid, seamless harmony.


IEEE Transactions on Autonomous Mental Development | 2012

Intrinsic Motivation and Introspection in Reinforcement Learning

Kathryn E. Merrick

Incorporating intrinsic motivation with reinforcement learning can permit agents to independently choose which skills they will develop, or to change their focus of attention to learn different skills at different times. This implies an autonomous developmental process for skills in which a skill-acquisition goal is first identified, then a skill is learned to solve the goal. The learned skill may then be stored, reused, temporarily ignored or even permanently erased. This paper formalizes the developmental process for skills by proposing a goal-lifecycle using the option framework for motivated reinforcement learning agents. The paper shows how the goal-lifecycle can be used as a basis for designing motivational state-spaces that permit agents to reason introspectively and autonomously about when to learn skills to solve goals, when to activate skills, when to suspend activation of skills or when to delete skills. An algorithm is presented that simultaneously learns: 1) an introspective policy mapping motivational states to decisions that change the agent's motivational state, and 2) multiple option policies mapping sensed states and actions to achieve various domain-specific goals. Two variations of agents using this model are compared to motivated reinforcement learning agents without introspection for controlling non-player characters in a computer game scenario. Results show that agents using introspection can focus their attention on learning more complex skills than agents without introspection. In addition, they can learn these skills more effectively.
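
The goal-lifecycle can be pictured as a small state machine over a skill's stages, with the introspective policy choosing which transition to trigger. A minimal sketch; the state names and transitions below are illustrative, not the paper's exact formulation:

```python
from enum import Enum, auto

class GoalState(Enum):
    IDENTIFIED = auto()  # a skill-acquisition goal has been noticed
    LEARNING = auto()    # an option policy is being learned for it
    LEARNED = auto()     # skill stored and available for reuse
    SUSPENDED = auto()   # activation temporarily suspended
    DELETED = auto()     # skill permanently erased

TRANSITIONS = {
    GoalState.IDENTIFIED: {"learn": GoalState.LEARNING},
    GoalState.LEARNING: {"store": GoalState.LEARNED,
                         "suspend": GoalState.SUSPENDED},
    GoalState.LEARNED: {"suspend": GoalState.SUSPENDED,
                        "delete": GoalState.DELETED},
    GoalState.SUSPENDED: {"activate": GoalState.LEARNED,
                          "delete": GoalState.DELETED},
}

def step(state, decision):
    """Apply one introspective decision; invalid decisions leave state unchanged."""
    return TRANSITIONS.get(state, {}).get(decision, state)

s = GoalState.IDENTIFIED
for decision in ("learn", "store", "suspend", "activate"):
    s = step(s, decision)
    print(decision, "->", s.name)
```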


Adaptive Behavior | 2010

Modeling Behavior Cycles as a Value System for Developmental Robots

Kathryn E. Merrick

The behavior of natural systems is governed by rhythmic behavior cycles at the biological, cognitive, and social levels. These cycles permit natural organisms to adapt their behavior to their environment for survival, behavioral efficiency, or evolutionary advantage. This article proposes a model of behavior cycles as the basis for motivated reinforcement learning in developmental robots. Motivated reinforcement learning is a machine learning technique that incorporates a value system with a trial-and-error learning component. Motivated reinforcement learning is a promising model for developmental robotics because it provides a way for artificial agents to build and adapt their skill-sets autonomously over time. However, new models and metrics are needed to scale existing motivated reinforcement learning algorithms to the complex, real-world environments inhabited by robots. This article presents two such models and an experimental evaluation on four Lego Mindstorms NXT robots. Results show that the robots can evolve measurable, structured behavior cycles adapted to their individual physical forms.
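
A hedged sketch of one way a behaviour cycle could be measured from a robot's action log, by finding the period at which the log best repeats itself; this metric is an assumption for illustration, not the article's model:

```python
def cycle_period(actions, max_period=None):
    """Return (period, self-match rate) maximising repetition in an action log."""
    n = len(actions)
    max_period = max_period or n // 2
    best_period, best_score = 0, 0.0
    for p in range(1, max_period + 1):
        matches = sum(actions[i] == actions[i - p] for i in range(p, n))
        score = matches / (n - p)
        if score > best_score:
            best_period, best_score = p, score
    return best_period, best_score

log = ["charge", "roam", "push", "charge", "roam", "push", "charge", "roam"]
print(cycle_period(log))  # (3, 1.0): a stable three-step behaviour cycle
```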

Collaboration


Kathryn E. Merrick's top co-authors:

Hussein A. Abbass (University of New South Wales)
Kamran Shafi (University of New South Wales)
Essam Soliman Debie (University of New South Wales)
Ning Gu (University of South Australia)
Chris Lokan (University of New South Wales)
Bing Wang (University of New South Wales)
Jiankun Hu (University of New South Wales)
John Harvey (University of New South Wales)