
Publications


Featured research published by Andrew L. Nelson.


Robotics and Autonomous Systems | 2009

Fitness functions in evolutionary robotics: A survey and analysis

Andrew L. Nelson; Gregory J. Barlow; Lefteris Doitsidis

This paper surveys fitness functions used in the field of evolutionary robotics (ER). Evolutionary robotics is a field of research that applies artificial evolution to generate control systems for autonomous robots. During evolution, robots attempt to perform a given task in a given environment. The controllers in the better performing robots are selected, altered and propagated to perform the task again in an iterative process that mimics some aspects of natural evolution. A key component of this process (one might argue, the key component) is the measurement of fitness in the evolving controllers. ER is one of a host of machine learning methods that rely on interaction with, and feedback from, a complex dynamic environment to drive synthesis of controllers for autonomous agents. These methods have the potential to lead to the development of robots that can adapt to uncharacterized environments and which may be able to perform tasks that human designers do not completely understand. In order to achieve this, issues regarding fitness evaluation must be addressed. In this paper we survey current ER research and focus on work that involved real robots. The surveyed research is organized according to the degree of a priori knowledge used to formulate the various fitness functions employed during evolution. The underlying motivation for this is to identify methods that allow the development of the greatest degree of novel control, while requiring the minimum amount of a priori task knowledge from the designer.
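
To make the "degree of a priori knowledge" axis concrete, here is a minimal sketch, not taken from the paper, contrasting the two ends of the spectrum for a hypothetical obstacle-avoidance task; the Trace fields, weights, and function names are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Trace:
    # Summary statistics from one evaluation run; all fields are hypothetical.
    mean_speed: float
    mean_turn_rate: float
    min_obstacle_distance: float
    task_completed: bool

def behavioral_fitness(t: Trace, w=(1.0, 0.5, 0.5)) -> float:
    # High a priori knowledge: designer-chosen, hand-weighted behavior terms,
    # e.g. rewarding speed, straight motion, and obstacle clearance.
    return (w[0] * t.mean_speed
            + w[1] * (1.0 - t.mean_turn_rate)
            + w[2] * t.min_obstacle_distance)

def aggregate_fitness(t: Trace) -> float:
    # Minimal a priori knowledge: all-or-nothing task success.
    return 1.0 if t.task_completed else 0.0
```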


Robotics and Autonomous Systems | 2004

Evolution of neural controllers for competitive game playing with teams of mobile robots

Andrew L. Nelson; Edward Grant; Thomas C. Henderson

In this work, we describe the evolutionary training of artificial neural network controllers for competitive team game playing behaviors by teams of real mobile robots. This research emphasized the development of methods to automate the production of behavioral robot controllers. We seek methods that do not require a human designer to define specific intermediate behaviors for a complex robot task. The work made use of a real mobile robot colony (EVolutionary roBOTs) and a closely coupled computer-based simulated training environment. The acquisition of behavior in an evolutionary robotics system was demonstrated using a robotic version of the game Capture the Flag. In this game, played by two teams of competing robots, each team tries to defend its own goal while trying to ‘attack’ the goal defended by the other team. Robot neural controllers relied entirely on processed video data for sensing of their environment. Robot controllers were evolved in a simulated environment using evolutionary training algorithms. In the evolutionary process, each generation consisted of a competitive tournament of games played between the controllers in an evolving population. Robot controllers were selected based on whether they won or lost games in the course of a tournament. Following a tournament, the neural controllers were ranked competitively according to how many games they won and the population was propagated using a mutation and replacement strategy. After several hundred generations, the best performing controllers were transferred to teams of real mobile robots, where they exhibited behaviors similar to those seen in simulation, including basic navigation, the ability to distinguish between different types of objects, and goal tending behaviors.
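
The tournament-style mutation-and-replacement loop the abstract describes might look like the following sketch; play_game stands in for a full simulated match, genomes are assumed to be flat weight lists, and every parameter value is hypothetical.

```python
import random

def mutate(genome, sigma=0.1):
    # Gaussian perturbation of a flat list of network weights (hypothetical).
    return [w + random.gauss(0.0, sigma) for w in genome]

def evolve(population, play_game, generations=300, replace_frac=0.5):
    # One round-robin tournament per generation; rank controllers by games
    # won; replace the losing fraction with mutated copies of the winners.
    for _ in range(generations):
        wins = [0] * len(population)
        for i in range(len(population)):
            for j in range(i + 1, len(population)):
                if play_game(population[i], population[j]):  # True: i won
                    wins[i] += 1
                else:
                    wins[j] += 1
        ranked = sorted(range(len(population)), key=lambda k: wins[k],
                        reverse=True)
        n_keep = max(1, int(len(population) * (1.0 - replace_frac)))
        survivors = [population[k] for k in ranked[:n_keep]]
        offspring = [mutate(random.choice(survivors))
                     for _ in range(len(population) - n_keep)]
        population = survivors + offspring
    return population
```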


Robotics and Autonomous Systems | 2004

Maze exploration behaviors using an integrated evolutionary robotics environment

Andrew L. Nelson; Edward Grant; John M. Galeotti; Stacey Rhody

This paper presents results generated with a new evolutionary robotics (ER) simulation environment and its complementary real mobile robot colony research test-bed. Neural controllers producing mobile robot maze searching and exploration behaviors using binary tactile sensors as inputs were evolved in a simulated environment and subsequently transferred to and tested on real robots in a physical environment. There has been a considerable amount of proof-of-concept and demonstration research done in the field of ER control in recent years, most of which has focused on elementary behaviors such as object avoidance and homing. Artificial neural networks (ANN) are the most commonly used evolvable controller paradigm found in current ER literature. Much of the research reported to date has been restricted to the implementation of very simple behaviors using small ANN controllers. In order to move beyond the proof-of-concept stage, our ER research was designed to train larger, more complicated ANN controllers, and to implement those controllers on real robots quickly and efficiently. To achieve this, a physical robot test-bed that includes a colony of eight real robots with advanced computing and communication abilities was designed and built. The real robot platform has been coupled to a simulation environment that facilitates the direct wireless transfer of evolved neural controllers from simulation to real robots (and vice versa). We believe that it is the simultaneous development of ER computing systems in both the simulated and the physical worlds that will produce advances in mobile robot colony research. Our simulation and training environment development focuses on the definition and training of our new class of ANNs: networks that include multiple hidden layers and time-delayed and recurrent connections. Our physical mobile robot design focuses on maximizing computing and communications power while minimizing robot size, weight, and energy usage. The simulation and ANN-evolution environment was developed using MATLAB. To allow for efficient control software portability, our physical evolutionary robots (EvBots) are equipped with a PC-104-based computer running a custom distribution of Linux and connected to the Internet via a wireless network connection. In addition to other high-level computing applications, the mobile robots run a condensed version of MATLAB, enabling ANN controllers evolved in simulation to be transferred directly onto physical robots without any alteration to the code. This is the first paper in a series to be published cataloging our results in this field.
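
As a rough illustration of a controller with time-delayed and recurrent connections driven by binary tactile sensors, consider this sketch; the layer sizes, tanh nonlinearity, and single-step feedback are my assumptions rather than the paper's actual network class.

```python
import numpy as np

class RecurrentController:
    # Sketch of a controller with a one-step time delay: the previous hidden
    # activations feed back as recurrent inputs. Sizes are illustrative only.
    def __init__(self, n_in=8, n_hidden=6, n_out=2, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.W_rec = rng.normal(0.0, 0.5, (n_hidden, n_hidden))  # delayed feedback
        self.W_out = rng.normal(0.0, 0.5, (n_out, n_hidden))
        self.h = np.zeros(n_hidden)  # internal state carried between steps

    def step(self, tactile_bits):
        x = np.asarray(tactile_bits, dtype=float)       # binary tactile inputs
        self.h = np.tanh(self.W_in @ x + self.W_rec @ self.h)
        return np.tanh(self.W_out @ self.h)             # e.g. wheel speed commands
```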


IEEE Transactions on Energy Conversion | 2002

Characterization of coil faults in an axial flux variable reluctance PM motor

Andrew L. Nelson; Mo-Yuen Chow

In recent years, variable reluctance (VR) and switched reluctance (SR) motors have been proposed for use in applications requiring a degree of fault tolerance. A range of brushless SR and VR permanent magnet (PM) motor topologies are not susceptible to some types of faults, such as phase-to-phase shorts, and can often continue to function in the presence of other faults. In particular, coil winding faults in a single stator coil may have relatively little effect on motor performance, but may affect overall motor reliability, availability, and longevity. It is important to distinguish between, and characterize, various winding faults for maintenance and diagnostic purposes. These fault characterization and analysis results are a necessary first step in the process of motor fault detection and diagnosis for this motor topology. This paper examines rotor velocity damping due to stator winding turn-to-turn short faults in a fault-tolerant axial flux variable reluctance PM motor. In this type of motor, turn-to-turn shorts due to insulation failures have I-V characteristics similar to those of coil faults resulting from other problems, such as faulty maintenance or damage due to impact. In order to investigate the effects of such coil faults, a prototype axial flux variable reluctance PM motor was constructed. The motor was equipped with experimental fault simulation stator windings capable of simulating these and other types of stator winding faults. This paper focuses on two common types of winding faults and their effects on rotor velocity in this type of motor.
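
One first-order way to see why a turn-to-turn short damps rotor velocity: if the shorted turn is modeled as a closed resistive loop coupled to the rotor field, its induced circulating current behaves like added viscous friction. The toy simulation below illustrates this; both the lumped model and all parameter values are my assumptions, not the paper's analysis.

```python
# Toy model: a shorted turn forms a closed resistive loop; its motional EMF
# k*w drives a circulating current, adding roughly k^2/R of viscous damping.
J = 0.01        # rotor inertia [kg m^2] (hypothetical)
b = 0.001       # healthy viscous friction [N m s/rad]
k = 0.05        # fault-loop coupling constant [V s/rad]
R_short = 0.2   # resistance of the shorted loop [ohm]
T_drive = 0.5   # constant drive torque [N m]

def final_speed(fault, dt=1e-3, t_end=10.0):
    b_eff = b + (k * k / R_short if fault else 0.0)  # extra damping when faulted
    w = 0.0
    for _ in range(int(t_end / dt)):                 # forward-Euler integration
        w += dt * (T_drive - b_eff * w) / J
    return w

print(final_speed(False), final_speed(True))  # the faulted rotor runs slower
```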


Robotics and Autonomous Systems | 2006

Using direct competition to select for competent controllers in evolutionary robotics

Andrew L. Nelson; Edward Grant

Evolutionary robotics (ER) is a field of research that applies artificial evolution toward the automatic design and synthesis of intelligent robot controllers. The preceding decade saw numerous advances in evolutionary robotics hardware and software systems. However, the sophistication of the resulting robot controllers has remained nearly static over this period of time. Here, we make the case that current methods of controller fitness evaluation are primary factors limiting the further development of ER. To address this, we define a form of fitness evaluation that relies on intra-population competition. In this research, complex neural networks were trained to control robots playing a competitive team game. To limit the amount of human bias or know-how injected into the evolving controllers, selection was based on whether controllers won or lost games. The robots relied on video sensing of their environment, and the neural networks required on the order of 150 inputs. This represents an order of magnitude increase in sensor complexity compared to other research in this field. Evolved controllers were tested extensively in real fully autonomous robots and in simulation. Results and experiments are presented to characterize the training process and the acquisition of controller competency under different evolutionary conditions.


International Conference on Integration of Knowledge Intensive Multi-Agent Systems | 2003

Evolution of complex autonomous robot behaviors using competitive fitness

Andrew L. Nelson; Edward Grant; Gregory J. Barlow; M. White

Evolutionary robotics (ER) employs population-based artificial evolution to develop behavioral robotics controllers. We focus on the formulation and application of a fitness selection function for ER that makes use of intra-population competitive selection. In the case of behavioral tasks, such as game playing, intra-population competition can lead to the evolution of complex behaviors. In order for this competition to be realized, the fitness of competing controllers must be based mainly on the aggregate success or failure to complete an overall task. However, because initial controller populations are often subminimally competent, and individuals are unable to complete the overall competitive task at all, no selective pressure can be generated at the onset of evolution (the bootstrap problem). In order to accommodate these conflicting elements in selection, we formulate a bimodal fitness selection function. This function accommodates subminimally competent initial populations in early evolution, but allows for binary success/failure competitive selection of controllers that have evolved to perform at a basic level. Large arbitrarily connected neural network-based robot controllers were evolved to play the competitive team game Capture the Flag. Results show that neural controllers evolved under a variety of conditions were competitive with a hand-coded knowledge-based controller and could win a modest majority of games in a large tournament.
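
The bimodal selection idea might be sketched as follows; the GameResult fields and the graded fallback measure are hypothetical stand-ins, not the paper's exact formulation.

```python
from dataclasses import dataclass

@dataclass
class GameResult:
    # Outcome of one game between controllers A and B; fields hypothetical.
    decided: bool            # did either team actually capture a flag?
    a_won: bool = False
    a_progress: float = 0.0  # graded sub-task measure, e.g. approach to goal
    b_progress: float = 0.0

def bimodal_fitness(r: GameResult):
    if r.decided:
        # Competitive mode: aggregate binary success/failure selection.
        return (1.0, 0.0) if r.a_won else (0.0, 1.0)
    # Bootstrap mode: neither controller is yet competent enough to decide
    # a game, so score graded progress to create early selection pressure.
    return (r.a_progress, r.b_progress)
```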


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2003

A colony of robots using vision sensing and evolved neural controllers

Andrew L. Nelson; Edward Grant; Gregory J. Barlow; Thomas C. Henderson

This paper describes the development and testing of a new evolutionary robotics research test bed. The test bed consists of a colony of small computationally powerful mobile robots that use evolved neural network controllers and vision-based sensors to generate team game-playing behaviors. The vision-based sensors function by converting video images into range and object color data. Large evolvable neural network controllers use these sensor data to control the mobile robots. The networks require 150 individual input connections to accommodate the processed video sensor data. Using evolutionary computing methods, the neural network based controllers were evolved to play the competitive team game Capture the Flag with teams of mobile robots. Neural controllers were evolved in simulation and transferred to real robots for physical verification. Sensor signals in the simulated environment are formatted to duplicate the processed real video sensor values rather than the raw video images. Robot controllers receive sensor signals and send actuator commands of the same format, whether they are driving physical robots in a real environment or simulated robot agents in an artificial environment. Evolved neural controllers can be transferred directly to the real mobile robots for testing and evaluation. Experimental results generated with this new evolutionary robotics research test bed are presented.
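
A sketch of how a video frame could be collapsed into per-bearing range and object-color inputs of roughly the stated size; the column-band heuristic, color thresholds, and monocular range cue are my assumptions, not the actual sensing pipeline.

```python
import numpy as np

def frame_to_sensors(frame, n_bearings=50):
    # Hypothetical reconstruction: reduce an (h, w, 3) RGB frame to a range
    # estimate plus red/green object flags per column band, giving about
    # 3 x 50 = 150 network inputs, matching the abstract's input count.
    h, w, _ = frame.shape
    sensors = np.zeros((n_bearings, 3))          # [range, saw_red, saw_green]
    sensors[:, 0] = 1.0                          # 1.0 = nothing seen (max range)
    for b, cols in enumerate(np.array_split(np.arange(w), n_bearings)):
        band = frame[:, cols, :].astype(float)
        red = (band[..., 0] > 150) & (band[..., 1] < 80) & (band[..., 2] < 80)
        grn = (band[..., 1] > 150) & (band[..., 0] < 80) & (band[..., 2] < 80)
        rows = np.nonzero(red | grn)[0]
        if rows.size:
            # Nearer objects extend lower in the image, so the lowest object
            # pixel row serves as a crude monocular range cue.
            sensors[b] = [1.0 - rows.max() / h, red.any(), grn.any()]
    return sensors.ravel()                        # flat input vector for the ANN
```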


IEEE International Conference on Robotics and Automation (ICRA) | 2004

Dynamic leadership protocol for S-nets

Gregory J. Barlow; Thomas C. Henderson; Andrew L. Nelson; Edward Grant

Smart Sensor Networks (S-nets) are groups of stationary agents (S-elements) which provide distributed sensing, computation, and communication in an environment. In order to integrate information from individual agents and to efficiently transmit this information to other agents, these devices must be able to create local groups (S-clusters). A leadership protocol that creates static clusters has been previously proposed. Here, we further develop this protocol to allow for dynamic cluster updating. This accommodates on-the-fly network re-organization in response to environmental disturbances or the gain or loss of S-elements. We outline an informal argument for the correctness of this revised protocol. We describe our embedded system implementation of the leadership protocol in simulation and using a colony of robots. Finally, we present results demonstrating both implementations.
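
As a simplified illustration of dynamic leader selection (my own reduction, not the published protocol), re-running a highest-ID election over each element's current neighborhood whenever membership changes produces the kind of on-the-fly re-organization described above.

```python
def elect_leaders(elements, in_range):
    # Each S-element adopts the highest-ID element it can hear (including
    # itself) as its cluster leader; repeating this pass after elements are
    # gained or lost yields dynamic cluster updating.
    leaders = {}
    for e in elements:
        neighborhood = [n for n in elements if n == e or in_range(e, n)]
        leaders[e] = max(neighborhood)   # deterministic, tie-free choice by ID
    return leaders

# Example with hypothetical element IDs and a toy radio-range predicate.
print(elect_leaders([1, 2, 3, 4, 5], lambda a, b: abs(a - b) <= 2))
```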


Mobile Robots | 2007

Aggregate Selection in Evolutionary Robotics

Andrew L. Nelson; Edward Grant

Can the processes of natural evolution be mimicked to create robots or autonomous agents? This question embodies the most fundamental goals of evolutionary robotics (ER). ER is a field of research that explores the use of artificial evolution and evolutionary computing for learning control in autonomous robots, and in autonomous agents in general. In a typical ER experiment, robots, or more precisely their control systems, are evolved to perform a given task in which they must interact dynamically with their environment. Controllers compete in the environment and are selected and propagated based on their ability (or fitness) to perform the desired task. A key component of this process is the manner in which the fitness of the evolving controllers is measured. In ER, fitness is measured by a fitness function or objective function. This function applies some given criteria to determine which robots or agents are better at performing the task for which they are being evolved. Fitness functions can introduce varying levels of a priori knowledge into evolving populations. Some types of fitness functions encode the important features of a known solution to a given task. Populations of controllers evolved using such functions then reproduce these features and essentially evolve control systems that duplicate an a priori known algorithm. In contrast to this, evolution can also be performed using a fitness function that incorporates no knowledge of how the particular task at hand is to be achieved. In these cases all selection is based only on whether robots/agents succeed or fail to complete the task. Such fitness functions are referred to as aggregate because they combine the benefit or deficit of all actions a given agent performs into a single success/failure term. Fitness functions that select for specific solutions do not allow for fundamentally novel control learning. At best, these fitness functions perform some degree of optimization, and provide a method for transferring known control heuristics to robots. At some level, selection must be based on a degree of […]
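
The distinction the chapter draws can be stated compactly; the notation below is mine, not the chapter's.

```latex
% Requires amsmath. Behavioral fitness: a designer-weighted sum of
% hand-chosen behavior measurements b_i (high a priori knowledge);
% aggregate fitness: a single success/failure term (minimal knowledge).
\[
f_{\mathrm{behavioral}} = \sum_i w_i \, b_i,
\qquad
f_{\mathrm{aggregate}} =
  \begin{cases}
    1 & \text{task completed} \\
    0 & \text{otherwise.}
  \end{cases}
\]
```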


Workshop on Mobile Computing Systems and Applications | 2003

Developing evolutionary neural controllers for teams of mobile robots playing a complex game

Andrew L. Nelson; Edward Grant; Gordon K. Lee

This research develops methods of automating the production of behavioral robotics controllers. Population-based artificial evolution was employed to train neural network-based controllers to play a robotic version of the team game Capture the Flag. The robot agents used processed video data for sensing their environment. To accommodate the 35 to 150 sensor inputs required, large neural networks of arbitrary connectivity and structure were evolved. An intra-population competitive genetic algorithm was used and selection at each generation was based on whether the different controllers won or lost games over the course of a tournament. This paper focuses on the evolutionary neural controller architecture. Evolved controllers were tested in a series of competitive games and transferred to real robots for physical verification.
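
One plausible encoding for networks of arbitrary connectivity and structure is a flat list of connection genes, letting mutation either perturb a weight or splice in a new (possibly recurrent) connection; this representation is my illustration, not the paper's.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Genome:
    # Flat connection-gene encoding: any node may connect to any node, so
    # recurrent and arbitrary topologies arise from structural mutation.
    n_nodes: int                                 # inputs + hidden + outputs
    conns: list = field(default_factory=list)    # (src, dst, weight) genes

def mutate_genome(g: Genome, p_structural=0.1) -> Genome:
    child = Genome(g.n_nodes, list(g.conns))
    if child.conns and random.random() > p_structural:
        i = random.randrange(len(child.conns))   # perturb an existing weight
        s, d, w = child.conns[i]
        child.conns[i] = (s, d, w + random.gauss(0.0, 0.2))
    else:
        child.conns.append((random.randrange(child.n_nodes),  # new connection,
                            random.randrange(child.n_nodes),  # possibly recurrent
                            random.gauss(0.0, 0.5)))
    return child
```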

Collaboration


Top co-authors of Andrew L. Nelson:

Edward Grant, North Carolina State University
Gregory J. Barlow, Carnegie Mellon University
Gordon K. Lee, San Diego State University
John M. Galeotti, Carnegie Mellon University
Stacey Rhody, North Carolina State University
M. White, North Carolina State University
Mo-Yuen Chow, North Carolina State University
Lefteris Doitsidis, Technological Educational Institute of Crete