Gregory J. Barlow
Carnegie Mellon University
Publications
Featured research published by Gregory J. Barlow.
Robotics and Autonomous Systems | 2009
Andrew L. Nelson; Gregory J. Barlow; Lefteris Doitsidis
This paper surveys fitness functions used in the field of evolutionary robotics (ER). Evolutionary robotics is a field of research that applies artificial evolution to generate control systems for autonomous robots. During evolution, robots attempt to perform a given task in a given environment. The controllers in the better performing robots are selected, altered and propagated to perform the task again in an iterative process that mimics some aspects of natural evolution. A key component of this process (one might argue, the key component) is the measurement of fitness in the evolving controllers. ER is one of a host of machine learning methods that rely on interaction with, and feedback from, a complex dynamic environment to drive synthesis of controllers for autonomous agents. These methods have the potential to lead to the development of robots that can adapt to uncharacterized environments and which may be able to perform tasks that human designers do not completely understand. In order to achieve this, issues regarding fitness evaluation must be addressed. In this paper we survey current ER research and focus on work that involved real robots. The surveyed research is organized according to the degree of a priori knowledge used to formulate the various fitness functions employed during evolution. The underlying motivation for this is to identify methods that allow the development of the greatest degree of novel control, while requiring the minimum amount of a priori task knowledge from the designer.
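The evaluate-select-alter-propagate cycle the abstract describes can be sketched in a few lines. This is a generic illustration, not any specific system from the survey; the toy task (driving a scalar "controller parameter" toward a target) and all parameter values are assumptions for demonstration.

```python
import random

random.seed(0)  # reproducible toy run

def evolve(population, evaluate_fitness, mutate, generations=100, elite_frac=0.5):
    """Generic ER-style loop: measure fitness, keep the better performers,
    then alter (mutate) and propagate them to try the task again."""
    for _ in range(generations):
        scored = sorted(population, key=evaluate_fitness, reverse=True)
        elites = scored[:max(1, int(len(scored) * elite_frac))]
        population = elites + [mutate(random.choice(elites))
                               for _ in range(len(population) - len(elites))]
    return max(population, key=evaluate_fitness)

# Toy "task": evolve a parameter toward 5.0; the fitness function is the
# a priori knowledge the survey's taxonomy is concerned with.
target = 5.0
fitness = lambda x: -abs(x - target)            # higher is better
mutate = lambda x: x + random.gauss(0, 0.5)
best = evolve([random.uniform(0, 1) for _ in range(20)], fitness, mutate)
```

The survey's organizing question is how much task knowledge is baked into `fitness`: here it encodes the goal exactly, which sits at the most knowledge-heavy end of the spectrum the paper describes.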
IEEE Conference on Cybernetics and Intelligent Systems | 2004
Gregory J. Barlow; Choong K. Oh; Edward Grant
Autonomous navigation controllers were developed for fixed wing unmanned aerial vehicle (UAV) applications using incremental evolution with multi-objective genetic programming (GP). We designed four fitness functions derived from flight simulations and used multi-objective GP to evolve controllers able to locate a radar source, navigate the UAV to the source efficiently using on-board sensor measurements, and circle closely around the emitter. We selected realistic flight parameters and sensor inputs to aid in the transference of evolved controllers to physical UAVs. We used both direct and environmental incremental evolution to evolve controllers for four types of radars: 1) continuously emitting, stationary radars, 2) continuously emitting, mobile radars, 3) intermittently emitting, stationary radars, and 4) intermittently emitting, mobile radars. The use of incremental evolution drastically increased evolution's chances of producing a successful controller compared to direct evolution. This technique can also be used to develop a single controller capable of handling all four radar types. In the next stage of research, the best evolved controllers will be tested by using them to fly real UAVs.
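Multi-objective GP with four fitness functions implies selection by Pareto dominance rather than a single score. The following is a minimal dominance filter, with hypothetical two-objective score tuples standing in for the paper's four flight-derived objectives (all maximized here by assumption).

```python
def dominates(a, b):
    """a Pareto-dominates b if it is no worse on every objective and
    strictly better on at least one (all objectives maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(scored):
    """Keep the non-dominated controllers; multi-objective GP draws parents
    from this set instead of collapsing objectives into one number."""
    return [a for a in scored
            if not any(dominates(b, a) for b in scored if b is not a)]

# Hypothetical (homing-efficiency, circling-tightness) scores for 4 controllers:
scores = [(0.9, 0.2), (0.5, 0.5), (0.4, 0.4), (0.1, 0.9)]
front = pareto_front(scores)   # (0.4, 0.4) is dominated by (0.5, 0.5)
```

Keeping the whole front preserves controllers that trade off objectives differently, which matters when a later, harder environment in the incremental sequence rewards a different balance.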
Congress on Evolutionary Computation | 2004
Choong K. Oh; Gregory J. Barlow
Autonomous navigation controllers were developed for fixed wing unmanned aerial vehicle (UAV) applications using multiobjective genetic programming (GP). We designed four fitness functions derived from flight simulations and used multiobjective GP to evolve controllers able to locate a radar source, navigate the UAV to the source efficiently using on-board sensor measurements, and circle closely around the emitter. Controllers were evolved for three different kinds of radars: 1) stationary, continuously emitting radars; 2) stationary, intermittently emitting radars; and 3) mobile, continuously emitting radars. We selected realistic flight parameters and sensor inputs to aid in the transference of evolved controllers to physical UAVs.
Evo'08: Proceedings of the 2008 Conference on Applications of Evolutionary Computing | 2008
Gregory J. Barlow; Stephen F. Smith
This paper describes a memory enhanced evolutionary algorithm (EA) approach to the dynamic job shop scheduling problem. Memory enhanced EAs have been widely investigated for other dynamic optimization problems with changing fitness landscapes, but only when associated with a fixed search space. In dynamic scheduling, the search space shifts as jobs are completed and new jobs arrive, so memory entries that describe specific points in the search space will become infeasible over time. The relative importance of jobs in the schedule also changes over time, so previously good points become increasingly irrelevant. We describe a classifier-based memory for abstracting and storing information about schedules that can be used to build similar schedules at future times. We compared the memory enhanced EA with a standard EA and several common EA diversity techniques both with and without memory. The memory enhanced EA improved performance over the standard EA, while diversity techniques decreased performance.
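The key idea, storing abstractions rather than concrete schedules, can be sketched as follows. This is an illustrative stand-in, not the paper's classifier: the environment signature features, the priority-rule lambdas, and the distance metric are all assumptions for demonstration.

```python
class ClassifierMemory:
    """Abstract-memory sketch: concrete schedules become infeasible as jobs
    complete, so store a reusable rule (job features -> priority) keyed by a
    signature of the environment it worked well in."""
    def __init__(self):
        self.entries = []   # list of (environment_signature, priority_rule)

    def store(self, signature, rule):
        self.entries.append((signature, rule))

    def retrieve(self, signature):
        # Return the rule stored under the most similar past environment.
        best = min(self.entries,
                   key=lambda e: sum(abs(a - b) for a, b in zip(e[0], signature)))
        return best[1]

def build_schedule(jobs, rule):
    """Rebuild a similar schedule for whatever jobs exist now."""
    return sorted(jobs, key=rule)

mem = ClassifierMemory()
# Hypothetical signature: (mean job length, due-date tightness).
mem.store((5.0, 0.2), lambda job: job["due"])      # tight due dates: earliest-due-date
mem.store((2.0, 0.9), lambda job: job["length"])   # short jobs dominate: shortest-first
rule = mem.retrieve((4.5, 0.3))                    # closest to the first entry
jobs = [{"due": 8, "length": 1}, {"due": 3, "length": 4}]
schedule = build_schedule(jobs, rule)
```

Because the memory holds rules rather than schedules, an entry stays useful even after every job it was learned from has left the shop, which is exactly the fixed-search-space limitation the abstract identifies.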
International Conference on Integration of Knowledge Intensive Multi-Agent Systems | 2003
Andrew L. Nelson; Edward Grant; Gregory J. Barlow; M. White
Evolutionary robotics (ER) employs population-based artificial evolution to develop behavioral robotics controllers. We focus on the formulation and application of a fitness selection function for ER that makes use of intra-population competitive selection. In the case of behavioral tasks, such as game playing, intra-population competition can lead to the evolution of complex behaviors. In order for this competition to be realized, the fitness of competing controllers must be based mainly on the aggregate success or failure to complete an overall task. However, because initial controller populations are often subminimally competent, and individuals are unable to complete the overall competitive task at all, no selective pressure can be generated at the onset of evolution (the bootstrap problem). In order to accommodate these conflicting elements in selection, we formulate a bimodal fitness selection function. This function accommodates subminimally competent initial populations in early evolution, but allows for binary success/failure competitive selection of controllers that have evolved to perform at a basic level. Large arbitrarily connected neural network-based robot controllers were evolved to play the competitive team game Capture the Flag. Results show that neural controllers evolved under a variety of conditions were competitive with a hand-coded knowledge-based controller and could win a modest majority of games in a large tournament.
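The bimodal selection idea can be expressed compactly. The form below is an assumed illustration of the described behavior, not the paper's exact function: shaped progress scoring gives sub-minimally competent controllers a gradient (avoiding the bootstrap problem), and once basic competence is reached, fitness collapses to binary success/failure in the competition.

```python
def bimodal_fitness(won, progress, competent):
    """Bimodal fitness sketch (assumed form).

    won:       did the controller win the competitive game?
    progress:  shaped task-progress score in [0, 1), used only pre-competence
    competent: has the controller reached basic task competence?
    """
    if not competent:
        return progress            # early evolution: graded selective pressure
    return 2.0 if won else 1.0     # any competent controller outranks any incompetent one
```

The two modes never overlap in value, so selection always prefers competence over progress, and among competent controllers only the game outcome matters.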
International Conference on Intelligent Transportation Systems | 2011
Xiao-Feng Xie; Gregory J. Barlow; Stephen F. Smith; Zachary B. Rubinstein
In this paper, we take a self-scheduling approach to solving the traffic signal control problem, where each intersection is controlled by a self-interested agent operating with a limited (fixed horizon) view of incoming traffic. Central to the approach is a representation that aggregates incoming vehicles into critical clusters, based on the non-uniformly distributed nature of road traffic flows. Starting from a recently developed signal timing strategy based on clearing anticipated queues, we propose extended real-time decision policies that also incorporate look-ahead of approaching vehicle platoons, and thus focus attention more on keeping vehicles moving than on simply clearing queues. We present simulation results that demonstrate the benefit of our approach over simple queue clearing, both in promoting the establishment of “green waves” where vehicles move through the road network without stopping and in improving overall traffic flows.
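The aggregation step, grouping non-uniformly arriving vehicles into clusters, can be sketched with a simple gap threshold. The threshold value and one-second arrival times are illustrative assumptions, not parameters from the paper.

```python
def cluster_arrivals(arrival_times, gap_threshold=3.0):
    """Aggregate predicted vehicle arrivals at an intersection into clusters:
    consecutive vehicles closer than gap_threshold seconds join one cluster,
    mirroring the aggregation of flows into critical clusters."""
    clusters = []
    for t in sorted(arrival_times):
        if clusters and t - clusters[-1][-1] <= gap_threshold:
            clusters[-1].append(t)     # extend the current platoon
        else:
            clusters.append([t])       # start a new cluster
    return clusters

platoons = cluster_arrivals([0.0, 1.2, 2.0, 9.5, 10.1, 20.0])
# a platoon of three, a pair, and a lone vehicle
```

Scheduling over a handful of clusters instead of individual vehicles is what makes a look-ahead, fixed-horizon decision tractable at each self-interested intersection, and keeping a whole platoon moving through on green is the mechanism behind the "green waves" the abstract reports.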
Intelligent Robots and Systems | 2003
Andrew L. Nelson; Edward Grant; Gregory J. Barlow; Thomas C. Henderson
This paper describes the development and testing of a new evolutionary robotics research test bed. The test bed consists of a colony of small computationally powerful mobile robots that use evolved neural network controllers and vision based sensors to generate team game-playing behaviors. The vision based sensors function by converting video images into range and object color data. Large evolvable neural network controllers use these sensor data to control mobile robots. The networks require 150 individual input connections to accommodate the processed video sensor data. Using evolutionary computing methods, the neural network based controllers were evolved to play the competitive team game Capture the Flag with teams of mobile robots. Neural controllers were evolved in simulation and transferred to real robots for physical verification. Sensor signals in the simulated environment are formatted to duplicate the processed real video sensor values rather than the raw video images. Robot controllers receive sensor signals and send actuator commands of the same format, whether they are driving physical robots in a real environment or simulated robot agents in an artificial environment. Evolved neural controllers can be transferred directly to the real mobile robots for testing and evaluation. Experimental results generated with this new evolutionary robotics research test bed are presented.
International Conference on Robotics and Automation | 2004
Gregory J. Barlow; Thomas C. Henderson; Andrew L. Nelson; Edward Grant
Smart Sensor Networks (S-nets) are groups of stationary agents (S-elements) which provide distributed sensing, computation, and communication in an environment. In order to integrate information from individual agents and to efficiently transmit this information to other agents, these devices must be able to create local groups (S-clusters). A leadership protocol that creates static clusters has been previously proposed. Here, we further develop this protocol to allow for dynamic cluster updating. This accommodates on-the-fly network re-organization in response to environmental disturbances or the gain or loss of S-elements. We outline an informal argument for the correctness of this revised protocol. We describe our embedded system implementation of the leadership protocol in simulation and using a colony of robots. Finally, we present results demonstrating both implementations.
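A toy version of one static clustering pass conveys the flavor of S-cluster formation. This is an illustration only, not the paper's protocol: 1-D positions, a highest-id-in-range leader rule, and the specific element layout are all assumptions; dynamic updating would amount to rerunning such a pass when S-elements appear or vanish.

```python
def elect_leaders(elements, comm_range):
    """Toy clustering pass: each S-element joins the cluster of the
    highest-id element within communication range, which acts as the
    S-cluster leader (1-D positions for simplicity)."""
    leaders = {}
    for eid, pos in elements.items():
        in_range = [oid for oid, opos in elements.items()
                    if abs(opos - pos) <= comm_range]
        leaders[eid] = max(in_range)
    return leaders

# Elements 1 and 2 are mutually in range; element 3 is isolated and
# therefore leads its own singleton cluster.
leaders = elect_leaders({1: 0.0, 2: 1.0, 3: 10.0}, comm_range=2.0)
```

A real protocol must also handle message loss and asymmetric connectivity, which is why the paper argues correctness informally rather than assuming a single global pass like this one.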
Genetic and Evolutionary Computation Conference | 2008
Gregory J. Barlow; Choong K. Oh; Stephen F. Smith
For some tasks, the use of more than one robot may improve the speed, reliability, or flexibility of completion, but many other tasks can be completed only by multiple robots. This paper investigates controller design using multi-objective genetic programming for a multi-robot system to solve a highly constrained problem, where multiple unmanned aerial vehicles (UAVs) must monitor targets spread sparsely throughout a large area. UAVs have a small communication range, sensor information is limited and noisy, monitoring a target takes an indefinite amount of time, and evolved controllers must continue to perform well even as the number of UAVs and targets changes. An evolved task selection controller dynamically chooses a target for the UAV based on sensor information and communication. Controllers evolved using several communication schemes were compared in simulation on problem scenarios of varying size, and the results suggest that this approach can evolve effective controllers if communication is limited to the nearest other UAV.
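As a rough intuition for the task-selection problem, here is a hand-written stand-in for the controller the paper evolves with GP. Everything here is hypothetical: the paper's controller is evolved, not hand-coded, and this sketch only captures the nearest-neighbor communication constraint (choose a target not claimed by the one UAV you can hear).

```python
import math

def choose_target(uav_pos, targets, claimed):
    """Hypothetical task-selection rule: pick the nearest target not already
    claimed by the single nearest UAV within communication range; fall back
    to the nearest target if everything is claimed."""
    free = [t for t in targets if t not in claimed]
    if not free:
        free = targets
    return min(free, key=lambda t: math.dist(uav_pos, t))

# The nearer target (1, 1) is claimed by the neighboring UAV, so this
# UAV heads for (5, 5) instead.
target = choose_target((0, 0), [(1, 1), (5, 5)], claimed={(1, 1)})
```

Even this crude rule shows why limited communication can suffice: hearing only the nearest UAV's claim already prevents the most likely duplication of effort, which matches the abstract's finding that nearest-neighbor communication was effective.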
Scandinavian Conference on Information Systems | 2009
Gregory J. Barlow; Stephen F. Smith
Many real-world problems involve the coordination of multiple agents in dynamic environments, where characteristics of the problem being solved change over time. In such problems, adaptive, self-organizing agent approaches have been shown to provide very robust solutions. However, these approaches often require non-trivial amounts of time to respond to large environmental shifts. Considering this limitation, we observe that environmental changes in a given dynamic problem are generally not completely random; similar states in the environment tend to reappear over time. Memory is one way to leverage this past information and improve the adaptive efficiency of the agent system. In this paper, we explore the use of memory as a means of boosting the performance of self-organizing agents in solving dynamic coordination problems. We consider the specific problem of coordinating product flows in a factory that is subject to changing job mixes over time, which has been previously solved using a computational model of the task allocation behavior of wasps. We augment this base procedure with a number of memory systems, the most sophisticated of which exploit memory models inspired by estimation of distribution algorithms (EDAs) to manage computational cost. An experimental analysis is presented which demonstrates the advantage of using memory. Configurations using the EDA-inspired memory models are shown to substantially outperform configurations with more standard and infinite-sized memory models, and all are shown to improve the performance of the baseline task allocation procedure.
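The wasp-inspired base procedure the paper augments is built on response thresholds. The standard response-threshold rule from such models is shown below; whether the paper uses exactly this functional form is an assumption, but the quadratic form is the classic one in wasp task-allocation models.

```python
def response_probability(stimulus, threshold):
    """Classic response-threshold rule: an agent takes on a task with
    probability s^2 / (s^2 + theta^2), so agents with a low threshold
    for a task type respond to its stimulus far more readily."""
    return stimulus ** 2 / (stimulus ** 2 + threshold ** 2)
```

When the job mix shifts, agents adapt as stimuli change, but only gradually; the memory systems the paper adds aim to shortcut that lag by recalling threshold configurations that worked when a similar job mix last appeared.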