
Publication


Featured research published by Matt Knudson.


Robotics and Autonomous Systems | 2011

Adaptive navigation for autonomous robots

Matt Knudson; Kagan Tumer

In many robotic exploration missions, robots have to learn specific policies that allow them to: (i) select high-level goals (e.g., identify specific destinations), (ii) navigate (reach those destinations), and (iii) adapt to their environment (e.g., modify their behavior based on changing environmental conditions). Furthermore, those policies must be robust to signal noise or unexpected situations, scalable to more complex environments, and must account for the physical limitations of the robots (e.g., limited battery and computational power). In this paper we evaluate reactive and learning navigation algorithms for exploration robots that must avoid obstacles and reach specific destinations in limited time and with limited observations. Our results show that neuro-evolutionary algorithms with well-designed evaluation functions can produce up to 50% better performance than reactive algorithms in complex domains where the robots' goal is to select paths that reach specific destinations while avoiding obstacles, particularly when facing significant sensor and actuator noise.
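The paper's neuro-evolutionary navigation algorithms are only summarized above; the sketch below shows the general shape of such an approach, with a toy simulator, an illustrative evaluation function, and arbitrary network sizes and weights standing in for the ones actually used (all names and numbers here are assumptions, not taken from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

class Policy:
    """Small feedforward network mapping eight sensor readings to (turn, speed)."""
    def __init__(self, n_in=8, n_hidden=10, n_out=2):
        self.w1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.w2 = rng.normal(0.0, 0.5, (n_hidden, n_out))

    def act(self, sensors):
        hidden = np.tanh(sensors @ self.w1)
        return np.tanh(hidden @ self.w2)  # turn rate and speed, both in [-1, 1]

    def mutated(self, sigma=0.1):
        child = Policy()
        child.w1 = self.w1 + rng.normal(0.0, sigma, self.w1.shape)
        child.w2 = self.w2 + rng.normal(0.0, sigma, self.w2.shape)
        return child

def simulate_navigation(policy, steps=50):
    """Toy stand-in for the robot simulator: a point robot steers toward a
    fixed destination while a counter tracks entries into an obstacle disc."""
    pos, heading = np.zeros(2), 0.0
    target, obstacle = np.array([5.0, 5.0]), np.array([2.5, 2.5])
    collisions, start_dist = 0, float(np.linalg.norm(target))
    for _ in range(steps):
        to_target = target - pos
        sensors = np.concatenate([to_target, obstacle - pos,
                                  [np.cos(heading), np.sin(heading)],
                                  [np.linalg.norm(to_target), 1.0]])
        turn, speed = policy.act(sensors)
        heading += 0.3 * turn
        pos = pos + 0.2 * (speed + 1.0) * np.array([np.cos(heading), np.sin(heading)])
        if np.linalg.norm(pos - obstacle) < 0.5:
            collisions += 1
    progress = start_dist - float(np.linalg.norm(target - pos))
    return progress, collisions

def evaluate(policy):
    """Illustrative evaluation function: reward progress toward the
    destination, penalize collisions."""
    progress, collisions = simulate_navigation(policy)
    return progress - 10.0 * collisions

# Basic elitist neuro-evolution: keep the best policies, refill the
# population with mutated copies of them.
population = [Policy() for _ in range(20)]
for generation in range(100):
    ranked = sorted(population, key=evaluate, reverse=True)
    elites = ranked[:5]
    population = elites + [elites[rng.integers(len(elites))].mutated()
                           for _ in range(15)]
```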


Genetic and Evolutionary Computation Conference | 2010

Coevolution of heterogeneous multi-robot teams

Matt Knudson; Kagan Tumer

Evolving multiple robots so that each robot acting independently can contribute to the maximization of a system-level objective presents significant scientific challenges. For example, evolving multiple robots to maximize aggregate information in exploration domains (e.g., planetary exploration, search and rescue) requires coordination, which in turn requires the careful design of the evaluation functions. Additionally, where communication among robots is expensive (e.g., limited power or computation), the coordination must be achieved passively, without robots explicitly informing others of their states/intended actions. Coevolving robots in these situations is a potential solution for producing coordinated behavior, where the robots are coupled through their evaluation functions. In this work, we investigate coevolution in three types of domains: (i) where precisely n homogeneous robots need to perform a task; (ii) where n is the optimal number of homogeneous robots for the task; and (iii) where n is the optimal number of heterogeneous robots for the task. Our results show that coevolving robots with evaluation functions that are locally aligned with the system evaluation significantly improves performance over evolving robots directly with the system evaluation function, particularly in dynamic environments.
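One common way to obtain evaluation functions that are "locally aligned with the system evaluation" is a difference evaluation, which credits each robot with its own contribution to the system-level score. The sketch below assumes that construction and a toy information-collection objective; the function names and the coverage model are illustrative, not taken from the paper.

```python
from typing import Callable, Sequence

def difference_evaluation(G: Callable[[Sequence], float],
                          joint_state: Sequence,
                          i: int) -> float:
    """D_i = G(z) - G(z with agent i removed).

    Rewards agent i only for its own contribution to the system evaluation,
    keeping the local signal aligned with G while filtering out the noise
    introduced by the other agents' actions.
    """
    without_i = [z for j, z in enumerate(joint_state) if j != i]
    return G(joint_state) - G(without_i)

# Illustrative system evaluation: number of distinct points of interest
# observed by at least one robot (a stand-in for "aggregate information").
def G(robot_observations):
    return float(len(set().union(*robot_observations))) if robot_observations else 0.0

observations = [{"poi_1", "poi_2"}, {"poi_2"}, {"poi_3"}]
print([difference_evaluation(G, observations, i) for i in range(3)])
# -> [1.0, 0.0, 1.0]: the second robot adds nothing new, so its fitness is 0.
```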


Genetic and Evolutionary Computation Conference | 2012

Policy transfer in mobile robots using neuro-evolutionary navigation

Matt Knudson; Kagan Tumer

In this paper, we present a state/action representation that not only allows robots to learn good navigation policies, but also allows them to transfer those policies to new and more complex situations. In particular, we show how the evolved policies transfer to situations with: (i) new tasks (different obstacle and target configurations and densities); and (ii) new sets of sensors (different resolutions). Our results show that in all cases, policies evolved in simple environments and transferred to more complex situations outperform policies evolved directly in the complex situation, both in terms of overall performance (up to 30%) and convergence speed (up to 90%).
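A minimal sketch of this kind of transfer setup, reusing the Policy class, rng, and mutation operator from the navigation sketch above; `evaluate_simple_env` and `evaluate_complex_env` are placeholders for evaluation under the source and target task configurations (both names are assumptions, not from the paper).

```python
def evolve(population, evaluate_fn, generations=100, elites=5):
    """Elitist neuro-evolution loop, same shape as the navigation sketch above."""
    for _ in range(generations):
        ranked = sorted(population, key=evaluate_fn, reverse=True)
        keep = ranked[:elites]
        population = keep + [keep[rng.integers(len(keep))].mutated()
                             for _ in range(len(population) - elites)]
    return population

# Stage 1: evolve navigation policies in a simple, sparse environment.
simple_population = evolve([Policy() for _ in range(20)], evaluate_simple_env)

# Stage 2: policy transfer -- instead of restarting from random weights in the
# denser environment (or with a different sensor resolution), seed evolution
# with the population already evolved in the simple environment.
transferred_population = evolve(simple_population, evaluate_complex_env)
```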


Advances in Complex Systems | 2013

Dynamic Partnership Formation for Multi-Rover Coordination

Matt Knudson; Kagan Tumer

Coordinating multiagent systems to maximize global information collection is a key challenge in many real-world applications such as planetary exploration and search and rescue. In particular, in many domains where communication is expensive (e.g., in terms of energy), the coordination must be achieved in a passive manner, without agents explicitly informing other agents of their states and/or intended actions. In this work, we extend results on such multiagent coordination algorithms to domains where the agents cannot achieve the required tasks without forming teams. We investigate team formation in three types of domains: (i) where exactly n agents must perform a task for the team to receive credit; (ii) where n is the optimal number of agents for the task, but teams whose membership differs from n receive a decaying reward; and (iii) where teams are heterogeneous and individuals vary in construction. Our results show that encouraging agents to coordinate is more successful than strictly requiring coordination. We also show that devising agent objective functions that are aligned with the global objective and locally computable significantly outperforms having agents directly use the global objective, and that the improvement increases with the complexity of the task.
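A small sketch of the kind of decaying team reward described above; the exponential decay form and the parameter names are assumptions for illustration, not the exact function used in the paper.

```python
import math

def team_reward(team_size: int, n_optimal: int,
                base_value: float = 1.0, decay: float = 0.5) -> float:
    """Credit a team for completing a task.

    A team of exactly n_optimal agents receives the full value; teams that
    are too small or too large receive a value that decays with the distance
    from n_optimal ("encouraging" rather than strictly requiring
    coordination). A very large decay recovers the strict
    "exactly n agents or no credit" variant.
    """
    if team_size <= 0:
        return 0.0
    return base_value * math.exp(-decay * abs(team_size - n_optimal))

print([round(team_reward(k, n_optimal=3), 3) for k in range(1, 6)])
# -> [0.368, 0.607, 1.0, 0.607, 0.368]
```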


Genetic and Evolutionary Computation Conference | 2011

Agent fitness functions for evolving coordinated sensor networks

Christian Roth; Matt Knudson; Kagan Tumer

Distributed sensor networks are an attractive area for research in agent systems, primarily because of the level of information now available in applications where sensing technology has improved dramatically. These include energy systems and area coverage, where it is desirable for sensor networks to self-organize and to be robust to changes in network structure. The challenges in applying distributed sensor networks to such applications include the need for small sensor packages that are still capable of making good decisions about covering areas where multiple types of information may be present. For example, in energy systems, a single area of a power plant may produce several types of valuable information, such as temperature, pressure, or chemical indicators. The work presented in this paper provides agent fitness functions for use with a neuro-evolutionary algorithm to address some of these challenges. In particular, we show that for self-organization and robustness to network changes, it is more advantageous to evolve individual policies rather than a shared policy that all sensor units utilize. Further, we show that using a difference objective approach to the decomposition of system-level fitness functions provides a better target for evolving these individual policies. This is because the difference evaluation provides a cleaner fitness signal, while maintaining vital information from the system level that implicitly promotes coordination among individual sensor units in the network.
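A minimal sketch of the fitness decomposition described above, assuming a toy coverage model in which each sensor unit chooses which information type to monitor at its location and the system-level fitness counts the distinct (location, type) pairs covered; the names and the coverage model are illustrative, not from the paper.

```python
def system_fitness(assignments):
    """G: number of distinct (location, information type) pairs covered."""
    return float(len(set(assignments)))

def difference_fitness(assignments, i):
    """D_i = G(with sensor i) - G(without sensor i): sensor i's own
    contribution, a cleaner per-unit target than handing every unit G."""
    return system_fitness(assignments) - system_fitness(
        assignments[:i] + assignments[i + 1:])

# Three sensor units at two plant locations; units 0 and 1 redundantly
# monitor temperature at location A, so neither earns difference fitness
# for it, while unit 2 is credited for the pressure reading it adds.
assignments = [("A", "temperature"), ("A", "temperature"), ("B", "pressure")]
print(system_fitness(assignments))                              # 2.0
print([difference_fitness(assignments, i) for i in range(3)])   # [0.0, 0.0, 1.0]
```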


Adaptive Agents and Multi-Agent Systems | 2010

Robot coordination with ad-hoc team formation

Matt Knudson; Kagan Tumer


Archive | 2010

Robot Coordination with Ad-hoc Team Formation (Extended Abstract)

Matt Knudson; Kagan Tumer


Archive | 2008

Neuro-Evolutionary Navigation for Resource-Limited Mobile Robots

Matt Knudson; Kagan Tumer


Knowledge Engineering Review | 2016

Preface to the special issue: adaptive learning agents

Peter Vrancx; Enda Howley; Matt Knudson


Archive | 2014

UAS Conflict-Avoidance Using Multiagent RL with Abstract Strategy Type Communication

Carrie Rebhuhn; Matt Knudson; Kagan Tumer

Collaboration


Dive into Matt Knudson's collaborations.

Top Co-Authors

Kagan Tumer
Oregon State University

Jaime Junell
Oregon State University

Peter Vrancx
Vrije Universiteit Brussel

Enda Howley
National University of Ireland