Publications


Featured research published by Chris HolmesParker.


Genetic and Evolutionary Computation Conference | 2012

Evolving large scale UAV communication system

Adrian K. Agogino; Chris HolmesParker; Kagan Tumer

Unmanned Aerial Vehicles (UAVs) have traditionally been used for short-duration missions involving surveillance or military operations. Advances in batteries, photovoltaics, and electric motors, though, will soon allow large numbers of small, cheap, solar-powered UAVs to fly long-term missions at high altitudes. This will revolutionize the way UAVs are used, allowing them to form vast communication networks. However, to make effective use of thousands (and perhaps millions) of UAVs owned by numerous disparate institutions, intelligent and robust coordination algorithms are needed, as this domain introduces unique congestion and signal-to-noise issues. In this paper, we present a solution based on evolutionary algorithms to a specific ad-hoc communication problem, where UAVs communicate with ground-based customers over a single wide-spectrum communication channel. To maximize their bandwidth, UAVs need to optimally control their output power levels and orientation. Experimental results show that UAVs using evolutionary algorithms in combination with appropriately shaped evaluation functions can form a robust communication network, performing 180% better than a fixed baseline algorithm and 90% better than a basic evolutionary algorithm.
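To make the idea concrete, here is a minimal, runnable sketch of the kind of evolutionary loop the abstract describes. Everything in it is assumed for illustration: the toy path-loss and interference model, the UAV and customer positions, and all constants are hypothetical stand-ins, and orientation control is omitted so each UAV evolves only its transmit power. The shaped evaluation follows the general difference-style idea of scoring each UAV by its marginal contribution to global bandwidth; it is a sketch of the technique, not the authors' implementation.

```python
import math
import random

random.seed(0)
NUM_UAVS, NUM_CUSTOMERS, POP_SIZE = 8, 20, 10

# hypothetical fixed positions for UAVs and ground-based customers
uav_pos = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(NUM_UAVS)]
cust_pos = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(NUM_CUSTOMERS)]

def received(power, uav, cust):
    # toy inverse-square path loss, with +1 to avoid division by zero
    dx, dy = uav[0] - cust[0], uav[1] - cust[1]
    return power / (1.0 + dx * dx + dy * dy)

def global_bandwidth(powers):
    # toy global objective: each customer attaches to its strongest UAV;
    # every other signal on the shared wide-spectrum channel is interference
    total = 0.0
    for c in cust_pos:
        signals = [received(p, u, c) for p, u in zip(powers, uav_pos)]
        best = max(signals)
        interference = sum(signals) - best
        total += math.log2(1.0 + best / (0.1 + interference))
    return total

def shaped_fitness(powers, i):
    # difference-style shaped evaluation: global bandwidth minus the global
    # bandwidth with UAV i silenced, isolating UAV i's own contribution
    silenced = list(powers)
    silenced[i] = 0.0
    return global_bandwidth(powers) - global_bandwidth(silenced)

# each UAV evolves its own small population of transmit-power settings
pops = [[random.random() for _ in range(POP_SIZE)] for _ in range(NUM_UAVS)]
for gen in range(200):
    current = [pop[0] for pop in pops]  # currently deployed settings
    for i, pop in enumerate(pops):
        # rank candidates by shaped fitness, keep the best half, mutate to refill
        pop.sort(key=lambda p: shaped_fitness(current[:i] + [p] + current[i + 1:], i),
                 reverse=True)
        parents = pop[:POP_SIZE // 2]
        pops[i] = parents + [max(0.0, random.choice(parents) + random.gauss(0, 0.05))
                             for _ in range(POP_SIZE - len(parents))]

print("evolved global bandwidth:", global_bandwidth([pop[0] for pop in pops]))
```

The silencing counterfactual is what makes the shaped evaluation useful amid congestion: a UAV that merely adds interference scores near zero even when total bandwidth is high.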


Knowledge Engineering Review | 2016

Combining reward shaping and hierarchies for scaling to large multiagent systems

Chris HolmesParker; Adrian K. Agogino; Kagan Tumer

Coordinating the actions of agents in multiagent systems presents a challenging problem, especially as the size of the system increases and predicting the agent interactions becomes difficult. Many approaches to improving coordination within multiagent systems have been developed, including organizational structures, shaped rewards, coordination graphs, heuristic methods, and learning automata. However, each of these approaches still has inherent limitations with respect to coordination and scalability. We explore the potential of synergistically combining existing coordination mechanisms such that they offset each other's limitations. More specifically, we are interested in combining existing coordination mechanisms in order to achieve improved performance, increased scalability, and reduced coordination complexity in large multiagent systems. In this work, we discuss and demonstrate the individual limitations of two well-known coordination mechanisms. We then provide a methodology for combining the two coordination mechanisms to offset their limitations and improve performance over either method individually. In particular, we combine shaped difference rewards and hierarchical organization in the Defect Combination Problem with up to 10,000 sensing agents. We show that combining hierarchical organization with difference rewards can improve both coordination and scalability by decreasing information overhead, structuring agent-to-agent connectivity and control flow, and improving the individual decision-making capabilities of agents. We show that by combining hierarchies and difference rewards, the information overheads and computational requirements of individual agents can be reduced by as much as 99% while simultaneously increasing overall system performance. Additionally, we demonstrate that this approach remains robust to up to 25% agent failures under various conditions.
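As a rough illustration of the combination, the sketch below pairs difference rewards with a static hierarchy: agents are partitioned into small groups, and each agent's difference reward is computed against its own group's utility rather than the full system's, so the counterfactual needs no system-wide information. The group utility, the learner, and all constants are hypothetical toy stand-ins, not the Defect Combination Problem from the paper.

```python
import random

random.seed(1)
NUM_AGENTS, GROUP_SIZE, NUM_ACTIONS = 100, 10, 4

def group_utility(actions):
    # toy team objective: reward diversity of sensing actions within a group
    # (a hypothetical stand-in for combining defect readings)
    return len(set(a for a in actions if a is not None))

def difference_reward(actions, slot):
    # D_i = G(z) - G(z with agent i's action replaced by a null action);
    # under the hierarchy, G is only the agent's own group utility
    counterfactual = list(actions)
    counterfactual[slot] = None
    return group_utility(actions) - group_utility(counterfactual)

# simple epsilon-greedy action-value learners, organized into fixed groups
q = [[0.0] * NUM_ACTIONS for _ in range(NUM_AGENTS)]
groups = [list(range(g, g + GROUP_SIZE)) for g in range(0, NUM_AGENTS, GROUP_SIZE)]

for episode in range(500):
    acts = [random.randrange(NUM_ACTIONS) if random.random() < 0.1
            else max(range(NUM_ACTIONS), key=lambda a: q[i][a])
            for i in range(NUM_AGENTS)]
    for group in groups:
        group_acts = [acts[i] for i in group]
        for slot, i in enumerate(group):
            d = difference_reward(group_acts, slot)
            q[i][acts[i]] += 0.1 * (d - q[i][acts[i]])  # running-average update

print("final score:", sum(group_utility([acts[i] for i in g]) for g in groups))
```

Because each counterfactual touches only GROUP_SIZE actions instead of all NUM_AGENTS, per-agent information and computation shrink with the group size, which is the scalability effect the abstract describes.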


Proceedings of the 2014 IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT) | 2014

CLEAN Rewards to Improve Coordination by Removing Exploratory Action Noise

Chris HolmesParker; Matthew E. Taylor; Adrian K. Agogino; Kagan Tumer

Coordinating the joint actions of agents in cooperative multiagent systems is a difficult problem in many real-world domains. Learning in such multiagent systems can be slow because an agent may not only need to learn how to behave in a complex environment, but also to account for the actions of other learning agents. The inability of an agent to distinguish between the true environmental dynamics and those caused by the stochastic exploratory actions of other agents creates noise in each agent's reward signal. This learning noise can have unforeseen and often undesirable effects on the resultant system performance. We define such noise as exploratory action noise, demonstrate the critical impact it can have on the learning process in multiagent settings, and introduce a reward structure to effectively remove such noise from each agent's reward signal. In particular, we introduce two types of Coordinated Learning without Exploratory Action Noise (CLEAN) rewards that allow an agent to estimate the counterfactual reward it would have received had it taken an alternative action. We empirically show that agents using CLEAN rewards outperform agents using both traditional global rewards and shaped difference rewards in two domains.
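Here is a minimal sketch of the CLEAN mechanic under assumed toy conditions: a congestion-style global objective G that agents are assumed able to evaluate offline, and simple action-value learners; all names and constants are hypothetical, and the paper's two CLEAN variants and its domains are not reproduced. The key point shown is that every agent publicly executes its greedy action, so other learners see no exploratory noise, while each agent privately scores a counterfactual exploratory action offline.

```python
import math
import random

random.seed(2)
NUM_AGENTS, NUM_ACTIONS, CAPACITY = 30, 5, 6

def G(actions):
    # toy congestion objective: each resource is most valuable at moderate
    # attendance (a hypothetical stand-in for the global reward function)
    counts = [actions.count(a) for a in range(NUM_ACTIONS)]
    return sum(c * math.exp(-c / CAPACITY) for c in counts)

q = [[0.0] * NUM_ACTIONS for _ in range(NUM_AGENTS)]

for episode in range(1000):
    # every agent executes its greedy action in the real system, so the
    # joint action other agents experience contains no exploratory noise
    greedy = [max(range(NUM_ACTIONS), key=lambda a: q[i][a])
              for i in range(NUM_AGENTS)]
    g_actual = G(greedy)
    for i in range(NUM_AGENTS):
        # private, offline exploration: counterfactually swap in a random
        # action and estimate the reward it would have received
        explore = random.randrange(NUM_ACTIONS)
        counterfactual = list(greedy)
        counterfactual[i] = explore
        clean = G(counterfactual) - g_actual
        q[i][explore] += 0.1 * (clean - q[i][explore])

final = [max(range(NUM_ACTIONS), key=lambda a: q[i][a]) for i in range(NUM_AGENTS)]
print("final global utility:", G(final))
```

Each agent's counterfactual holds everyone else's action fixed, so the learning signal is free of the stochastic exploration of other agents, which is exactly the noise the abstract targets.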


Genetic and Evolutionary Computation Conference | 2012

Evolving distributed resource sharing for CubeSat constellations

Adrian K. Agogino; Chris HolmesParker; Kagan Tumer


Adaptive Agents and Multi-Agent Systems | 2011

Agent-based resource allocation in dynamically formed CubeSat constellations

Chris HolmesParker; Adrian K. Agogino


Adaptive Agents and Multi-Agent Systems | 2015

Counterfactual Exploration for Improving Multiagent Learning

Mitchell K. Colby; Sepideh Kharaghani; Chris HolmesParker; Kagan Tumer


Adaptive Agents and Multi-Agent Systems | 2013

Exploiting structure and utilizing agent-centric rewards to promote coordination in large multiagent systems

Chris HolmesParker; Adrian K. Agogino; Kagan Tumer


Adaptive Agents and Multi-Agent Systems | 2013

CLEAN rewards for improving multiagent coordination in the presence of exploration

Chris HolmesParker; Adrian K. Agogino; Kagan Tumer


Archive | 2014

CLEANing the Reward: Counterfactual Actions to Remove Exploratory Action Noise in Multiagent Learning

Chris HolmesParker; Matthew E. Taylor; Kagan Tumer; Adrian K. Agogino


Archive | 2013

Exploiting Structure and Utilizing Agent-Centric Rewards to Promote Coordination in Large Multiagent Systems (Extended Abstract)

Chris HolmesParker; Adrian K. Agogino; Kagan Tumer

Collaboration


Dive into Chris HolmesParker's collaborations.

Top Co-Authors

Kagan Tumer

Oregon State University

Matthew E. Taylor

Washington State University
