Publications


Featured research published by Jonathan Mugan.


IEEE Transactions on Autonomous Mental Development | 2012

Autonomous Learning of High-Level States and Actions in Continuous Environments

Jonathan Mugan; Benjamin Kuipers

How can an agent bootstrap up from a low-level representation to autonomously learn high-level states and actions using only domain-general knowledge? In this paper, we assume that the learning agent has a set of continuous variables describing the environment. There exist methods for learning models of the environment, and there also exist methods for planning. However, for autonomous learning, these methods have been used almost exclusively in discrete environments. We propose attacking the problem of learning high-level states and actions in continuous environments by using a qualitative representation to bridge the gap between continuous and discrete variable representations. In this approach, the agent begins with a broad discretization and initially can only tell if the value of each variable is increasing, decreasing, or remaining steady. The agent then simultaneously learns a qualitative representation (discretization) and a set of predictive models of the environment. These models are converted into plans to perform actions. The agent then uses those learned actions to explore the environment. The method is evaluated using a simulated robot with realistic physics. The robot is sitting at a table that contains a block and other distractor objects that are out of reach. The agent autonomously explores the environment without being given a task. After learning, the agent is given various tasks to determine if it learned the necessary states and actions to complete them. The results show that the agent was able to use this method to autonomously learn to perform the tasks.
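The qualitative bridge this abstract describes can be illustrated with a small sketch. The following is a hypothetical illustration, not code from the paper: a continuous variable is reduced to the sign of its change (increasing, decreasing, or steady), and its magnitude is discretized against a learned set of landmark values. The function names, the epsilon threshold, and the contact-distance landmark are all assumptions made for the example.

```python
# Hypothetical sketch of a qualitative representation over a continuous
# variable: direction of change plus landmark-based magnitude bins.
# Names and values are illustrative, not taken from the paper's code.

def direction_of_change(prev: float, curr: float, eps: float = 1e-3) -> str:
    """Qualitative direction: '+' increasing, '-' decreasing, '0' steady."""
    delta = curr - prev
    if delta > eps:
        return "+"
    if delta < -eps:
        return "-"
    return "0"

def qualitative_magnitude(value: float, landmarks: list[float]) -> int:
    """Index of the interval the value falls into, given sorted landmarks.

    With landmarks [a, b], the qualitative values are:
    0: v < a, 1: a <= v < b, 2: v >= b.
    """
    for i, lm in enumerate(sorted(landmarks)):
        if value < lm:
            return i
    return len(landmarks)

# Example: a hand-to-block distance with one learned landmark at an assumed
# contact distance of 0.05.
landmarks = [0.05]
print(direction_of_change(0.30, 0.22))          # '-' : hand approaching block
print(qualitative_magnitude(0.02, landmarks))   # 0   : within contact range
```

As the abstract notes, the agent starts with only the direction component (the broad discretization) and adds magnitude landmarks as it learns, so the discretization and the predictive models refine each other.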


International Conference on Development and Learning | 2007

Learning to predict the effects of actions: Synergy between rules and landmarks

Jonathan Mugan; Benjamin Kuipers

A developing agent must learn the structure of its world, beginning with its sensorimotor world. It learns rules to predict how its motor signals change the sensory input it receives. It learns the limits to its motion. It learns which effects of its actions are unconditional and which effects are conditional, including what they depend on. We present preliminary results evaluating an implemented computational model of this important kind of foundational developmental learning. Our model demonstrates synergy between the learning of landmarks representing important qualitative distinctions, and the learning of rules that exploit those distinctions to make reliable predictions. These qualitative distinctions make it possible to define discrete events, and then to identify predictive rules describing regularities among events and the values of context variables. The attention of the learning agent is focused by a stratified model that structures the set of variables, and the structure of the stratified model is simultaneously created by the learning process.
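The synergy between rules and landmarks can be sketched in a few lines. This is a minimal, hypothetical illustration rather than the paper's implementation: a predictive rule tracks the empirical reliability of an effect given a discrete context, and a candidate distinction (here, whether the agent is at a motion limit) earns its keep by splitting observations into a high-reliability context and a low-reliability one. All names, contexts, and data are invented for the example.

```python
# Hypothetical sketch: rule reliability conditioned on a learned distinction.
from collections import defaultdict

class PredictiveRule:
    """Tracks the empirical reliability P(effect | context) from observations."""

    def __init__(self):
        self.successes = defaultdict(int)
        self.trials = defaultdict(int)

    def observe(self, context, effect_happened: bool) -> None:
        self.trials[context] += 1
        if effect_happened:
            self.successes[context] += 1

    def reliability(self, context) -> float:
        t = self.trials[context]
        return self.successes[context] / t if t else 0.0

rule = PredictiveRule()
# Observations: did a forward motor command decrease the hand-to-target
# distance? The context is a candidate distinction: is the arm at its limit?
for at_limit, decreased in [(False, True), (False, True),
                            (True, False), (False, True)]:
    rule.observe(at_limit, decreased)

print(rule.reliability(False))  # 1.0: rule is reliable away from the limit
print(rule.reliability(True))   # 0.0: the limit is a useful distinction
```

This mirrors the paper's claim at a toy scale: a qualitative distinction is worth keeping when conditioning on it makes some predictive rule more reliable, and reliable rules in turn suggest which distinctions to look for.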


Computational and Robotic Models of the Hierarchical Organization of Behavior | 2013

Autonomous representation learning in a developing agent

Jonathan Mugan; Benjamin Kuipers

Our research goal is to design an agent that can begin with low-level sensors and effectors and autonomously learn high-level representations and actions through interaction with the environment. This chapter focuses on the problem of learning representations. We present four principles for autonomous learning of representations in a developing agent, and we demonstrate how these principles can be embodied in an algorithm. In a simulated environment with realistic physics, we show that an agent can use these principles to autonomously learn useful representations and effective hierarchical actions.
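As a rough illustration of what an "effective hierarchical action" might look like structurally, here is a hypothetical sketch, not the chapter's algorithm: a high-level action repeatedly invokes lower-level learned actions until a qualitative goal condition holds. The factory function, the step limit, and the toy usage are all assumptions.

```python
# Hypothetical sketch of a hierarchical action built from lower-level actions.
from typing import Callable

def make_hierarchical_action(
    subactions: list[Callable[[], None]],
    goal_reached: Callable[[], bool],
    max_steps: int = 50,
) -> Callable[[], bool]:
    """Returns an action that chains sub-actions toward a qualitative goal."""
    def run() -> bool:
        for _ in range(max_steps):
            if goal_reached():
                return True
            for act in subactions:  # naive fixed ordering for the sketch
                act()
        return goal_reached()
    return run

# Toy usage: "raise a value to at least 3" by repeating an increment action.
state = {"v": 0}
increment = lambda: state.update(v=state["v"] + 1)
reach_three = make_hierarchical_action([increment], lambda: state["v"] >= 3)
print(reach_three(), state["v"])  # True 3
```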


International Joint Conference on Artificial Intelligence | 2009

Autonomously learning an action hierarchy using a learned qualitative state representation

Jonathan Mugan; Benjamin Kuipers


Archive | 2010

Autonomous qualitative learning of distinctions and actions in a developing agent

Benjamin Kuipers; Jonathan Mugan


Archive | 2007

Towards the Application of Reinforcement Learning to Undirected Developmental Learning

Jonathan Mugan; Benjamin Kuipers


Archive | 2009

A Comparison of Strategies for Developmental Action Acquisition in QLAP

Jonathan Mugan; Benjamin Kuipers


Archive | 2008

Continuous-domain reinforcement learning using a learned qualitative state representation

Jonathan Mugan; Benjamin Kuipers


Archive | 2008

Discretization of Rational Data

Jonathan Mugan; Klaus Truemper


Archive | 2009

Skill Reuse in Lifelong Developmental Learning

Jonathan Mugan; Benjamin Kuipers

Collaboration


Dive into Jonathan Mugan's collaborations.

Top Co-Authors

Klaus Truemper

University of Texas at Dallas
