Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Bobby D. Bryant is active.

Publication


Featured research published by Bobby D. Bryant.


Congress on Evolutionary Computation | 2003

Evolving adaptive neural networks with and without adaptive synapses

Kenneth O. Stanley; Bobby D. Bryant; Risto Miikkulainen

A potentially powerful application of evolutionary computation (EC) is to evolve neural networks for automated control tasks. However, in such tasks environments can be unpredictable and fixed control policies may fail when conditions suddenly change. Thus, there is a need to evolve neural networks that can adapt, i.e. change their control policy dynamically as conditions change. In this paper, we examine two methods for evolving neural networks with dynamic policies. The first method evolves recurrent neural networks with fixed connection weights, relying on internal state changes to lead to changes in behavior. The second method evolves local rules that govern connection weight changes. The surprising experimental result is that the former method can be more effective than evolving networks with dynamic weights, calling into question the intuitive notion that networks with dynamic synapses are necessary for evolving solutions to adaptive tasks.
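The two adaptation mechanisms contrasted in this abstract can be sketched in a few lines of Python. The class names, the tanh activation, and the Hebbian-style update below are illustrative assumptions, not the paper's exact formulation: the point is only the structural difference between fixed weights with recurrent state and an evolved local weight-change rule.

```python
import math

def act(x):
    return math.tanh(x)

class RecurrentUnit:
    """Fixed-weight recurrent unit: adaptation comes only from internal state."""
    def __init__(self, w_in, w_rec):
        self.w_in, self.w_rec = w_in, w_rec  # weights are fixed after evolution
        self.state = 0.0

    def step(self, x):
        self.state = act(self.w_in * x + self.w_rec * self.state)
        return self.state

class PlasticUnit:
    """Unit with an evolved local rule that changes its weight online."""
    def __init__(self, w, eta):
        self.w, self.eta = w, eta  # eta is an evolved plasticity parameter

    def step(self, x):
        y = act(self.w * x)
        self.w += self.eta * x * y  # Hebbian-style local update (illustrative)
        return y
```

In the first mechanism evolution tunes `w_in` and `w_rec` and behavior changes because `state` drifts; in the second, evolution tunes `eta` and the weight itself moves during the agent's lifetime.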


Congress on Evolutionary Computation | 2003

Neuroevolution for adaptive teams

Bobby D. Bryant; Risto Miikkulainen

We introduce the adaptive team of agents (ATA), a system of homogeneous agents with identical control policies which nevertheless adopt heterogeneous roles appropriate to their environment. ATAs have applications in domains such as games, and can be evolved through neuroevolution. In this paper we show how ATAs can be evolved to solve the problem posed by a simple strategy game and discuss their application to richer environments.


BMC Neuroscience | 2012

Goal-related navigation of a neuromorphic virtual robot

Laurence C. Jayet Bray; Emily R. Barker; Gareth B. Ferneyhough; Roger V. Hoang; Bobby D. Bryant; Sergiu M. Dascalu; Frederick C. Harris

The field of biologically inspired technology has evolved to the emergence of robots that operate autonomously. Some studies have focused on developing social robots that interact with humans by following social behaviors, while other research has centered its efforts on mobile robots with the ability to navigate in their well-known environment. These general-purpose autonomous robots can perform a variety of functions independently, from recognizing people or objects to navigating in a familiar room. To date, no humanoid robot has been capable of traveling through a new suburban environment to reproduce goal-related learning and navigational activities. Based on experimental findings, we propose a computational model that is composed of critical interacting brain regions and utilizes fundamental learning mechanisms. It is incorporated in a sophisticated robotic system where a virtual robot navigates through a new environment, learns and recognizes visual landmarks, and consequently makes correct turning decisions to reach a reward. The detailed brain architecture included visual, entorhinal, prefrontal and premotor cortices, as well as the hippocampus. Our microcircuitry replicated some fundamental mammalian dynamics, which were integrated in a robotic loop. This virtual robotic system was designed around a number of components unique to our NeoCortical simulator (NCS) and our Virtual NeuroRobotic (VNR) paradigm. The neural simulation was executed on a remote computing cluster and was networked to the other system components (NCSTools, Webots, Gabor filter) using our Brain Communication Server (BCS), a server developed specifically for integration with NCS. The virtual humanoid was able to navigate through a new virtual environment and reach a reward after a sequence of turning actions. Along the way, it encountered familiar and non-familiar external cues to provide guidance and follow the correct direction.
This is the first bio-inspired robot that showed high functionality during navigation while utilizing spiking cortical neurons in a real-time simulation. More importantly, it could take us a step closer to understanding memory impairments in Alzheimer’s patients.


Computational Intelligence and Games | 2008

Visual control in Quake II with a cyclic controller

Matt Parker; Bobby D. Bryant

A cyclic controller is evolved in the first-person shooter Quake II to learn to attack a randomly moving enemy in a simple room using only visual inputs. The chromosome of a genetic algorithm represents a cyclical controller that reads grayscale information from the gameplay screen to determine how far to jump forward in the program and what actions to perform. The cyclic controller learns to effectively find and shoot the enemy, and outperforms our previously published neural network solution for the same problem.
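The core control loop described here, a circular program whose instructions read a screen pixel and decide both an action and how far to advance the program counter, can be sketched as an interpreter. The gene layout (pixel index, threshold, two jump distances, action) is a hypothetical encoding for illustration; the paper's actual chromosome format may differ.

```python
def run_cyclic_controller(chromosome, screen, steps):
    """Interpret a circular program: each gene reads one grayscale pixel
    and decides which action to emit and how far to jump forward.

    chromosome: list of (pixel_index, threshold, jump_if_dark,
                         jump_if_light, action) tuples (hypothetical layout).
    screen: flat list of grayscale values in 0..255.
    """
    pc = 0
    actions = []
    n = len(chromosome)
    for _ in range(steps):
        pixel, threshold, jump_dark, jump_light, action = chromosome[pc % n]
        value = screen[pixel % len(screen)]
        # The read pixel selects the jump distance, so control flow
        # depends on what the agent currently sees.
        jump = jump_dark if value < threshold else jump_light
        actions.append(action)
        pc += jump  # program is cyclic: pc wraps via the modulo above
    return actions
```

Because the jump distance depends on the pixel value, the same chromosome traces different paths through its own program as the visual scene changes.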


International Symposium on Neural Networks | 2008

Neuro-visual control in the Quake II game engine

Matt Parker; Bobby D. Bryant

The first-person shooter Quake II is used as a platform to test neuro-visual control and retina input layouts. Agents are trained to shoot a moving enemy as quickly as possible in a visually simple environment, using a neural network controller with evolved weights. Two retina layouts are tested, each with the same number of inputs: first, a graduated density retina which focuses near the center of the screen and blurs outward; second, a uniform retina which focuses evenly across the screen. Results show that the graduated density retina learns more successfully than the uniform retina.
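A graduated density retina of the kind compared here can be sketched by warping uniformly spaced sample positions so they crowd toward the screen center. The quadratic warp below is one simple choice for illustration, not necessarily the density profile the paper used.

```python
import math

def graduated_retina(width, n_cells):
    """Place n_cells sample columns across a screen of the given width,
    denser near the center and sparser toward the edges."""
    centers = []
    for i in range(n_cells):
        t = (i + 0.5) / n_cells * 2.0 - 1.0  # uniform positions in [-1, 1]
        warped = math.copysign(t * t, t)     # quadratic warp crowds samples at 0
        centers.append(int((warped + 1.0) / 2.0 * (width - 1)))
    return centers
```

A uniform retina would simply skip the warp; both layouts yield the same number of inputs to the network, so any performance difference comes from where acuity is concentrated.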


International Conference on Signal Acquisition and Processing | 2010

Watermark Embedder Optimization for 3D Mesh Objects Using Classification Based Approach

Rakhi C. Motwani; Mukesh C. Motwani; Bobby D. Bryant; Frederick C. Harris; Akshata S. Agarwal

This paper presents a novel 3D mesh watermarking scheme that utilizes a support vector machine (SVM) based classifier for watermark insertion. Artificial intelligence (AI) based approaches have been employed by watermarking algorithms for various host media such as images, audio, and video. However, AI-based techniques are yet to be explored by researchers in the 3D domain for watermark insertion and extraction processes. Contributing towards this end, the proposed approach employs a binary SVM to classify vertices as appropriate or inappropriate candidates for watermark insertion. The SVM is trained with feature vectors derived from the curvature estimates of a 1-ring neighborhood of vertices taken from normalized 3D meshes. A geometry-based non-blind approach is used by the watermarking algorithm. The robustness of the proposed technique is evaluated experimentally by simulating attacks such as mesh smoothing, cropping and noise addition.
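The pipeline described, curvature-like features from a vertex's 1-ring neighborhood fed to a binary classifier that labels insertion sites, can be sketched without any dependencies. Both the toy curvature proxy and the perceptron (used here as a dependency-free linear stand-in for the paper's SVM) are illustrative assumptions.

```python
def ring_curvature_features(vertex, neighbors):
    """Crude curvature proxy for a 1-ring: mean and max height deviation
    of neighbor vertices from the center vertex (toy stand-in for the
    paper's curvature estimates)."""
    devs = [abs(n[2] - vertex[2]) for n in neighbors]
    return [sum(devs) / len(devs), max(devs)]

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """Linear stand-in for the paper's SVM.
    Label +1 = appropriate insertion site, -1 = inappropriate."""
    w = [0.0] * (len(samples[0]) + 1)        # last entry is the bias
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            score = sum(wi * xi for wi, xi in zip(w, x + [1.0]))
            if y * score <= 0:               # misclassified: nudge the boundary
                w = [wi + lr * y * xi for wi, xi in zip(w, x + [1.0])]
    return w

def classify(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x + [1.0])) > 0 else -1
```

In the actual scheme an SVM (e.g. with a kernel) replaces the perceptron, but the interface is the same: features in, a binary insert/skip decision per vertex out.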


Congress on Evolutionary Computation | 2003

Lamarckian neuroevolution for visual control in the Quake II environment

Matt Parker; Bobby D. Bryant

A combination of backpropagation and neuroevolution is used to train a neural network visual controller for agents in the Quake II environment. The agents must learn to shoot an enemy opponent in a semi-visually complex environment using only raw visual inputs. A comparison is made between using normal neuroevolution and using neuroevolution combined with backpropagation for Lamarckian adaptation. The supervised backpropagation imitates a hand-coded controller that uses non-visual inputs. Results show that using backpropagation in combination with neuroevolution trains the visual neural network controller much faster and more successfully.
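The Lamarckian step described, a lifetime of supervised imitation whose result is written back into the genome, can be sketched for a single-layer linear controller. The delta-rule update and the genome-as-weight-vector encoding are simplifying assumptions for illustration; the paper trains full neural network controllers with backpropagation.

```python
def lamarckian_step(genome, inputs, teacher_outputs, lr=0.05):
    """One Lamarckian iteration on a single-layer linear controller:
    the lifetime (backprop-style) improvement is written back into the
    genome, so evolution selects among already-trained weights."""
    w = list(genome)                     # decode genome -> controller weights
    for x, target in zip(inputs, teacher_outputs):
        y = sum(wi * xi for wi, xi in zip(w, x))
        err = target - y                 # imitate the supervising controller
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w                             # learned weights become the new genome
```

In plain (Darwinian) neuroevolution the learned `w` would be discarded and only fitness would survive; writing it back is what makes the scheme Lamarckian and is why learning accelerates so sharply.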


Computational Intelligence and Games | 2009

Backpropagation without human supervision for visual control in Quake II

Matt Parker; Bobby D. Bryant

Backpropagation and neuroevolution are used in a Lamarckian evolution process to train a neural network visual controller for agents in the Quake II environment. In previous work, we hand-coded a non-visual controller for supervising in backpropagation, but hand-coding can only be done for problems with known solutions. In this research the problem for the agent is to attack a moving enemy in a visually complex room with a large central pillar. Because we did not know a solution to the problem, we could not hand-code a supervising controller; instead, we evolve a non-visual neural network as supervisor to the visual controller. This setup creates controllers that learn much faster and have a greater fitness than those learning by neuroevolution-only on the same problem in the same amount of time.


IEEE Transactions on Computational Intelligence and AI in Games | 2012

Neurovisual Control in the Quake II Environment

Matt Parker; Bobby D. Bryant

A wide variety of tasks may be performed by humans using only visual data as input. Creating artificial intelligence that adequately uses visual data allows controllers to use single cameras for input and to interact with computer games by merely reading the screen render. In this research, we use the Quake II game environment to compare various techniques that train neural network (NN) controllers to perform a variety of behaviors using only raw visual input. First, it is found that a humanlike retina, which has greater acuity in the center and less in the periphery, is more useful than a uniform acuity retina, both having the same number of inputs and interfaced to the same NN structure, when learning to attack a moving opponent in a visually simple room. Next, we use the same humanlike retina and NN in a more visually complex room, but, finding it is unable to learn successfully, we use a Lamarckian learning algorithm with a nonvisual hand-coded controller as a supervisor to help train the visual controller via backpropagation. Last, we replace the hand-coded supervising nonvisual controller with an evolved nonvisual NN controller, eliminating the human aspect from the supervision, and it solves a problem for which a solution was not previously known.


Archive | 2018

A Neuroevolutionary Approach to Adaptive Multi-agent Teams

Bobby D. Bryant; Risto Miikkulainen

A multi-agent architecture called the Adaptive Team of Agents (ATA) is introduced, wherein homogeneous agents adopt specific roles in a team dynamically in order to address all the sub-tasks necessary to meet the team’s goals. Artificial neural networks are then trained by neuroevolution to produce an example of such a team, trained to solve the problem posed by a simple strategy game. The evolutionary algorithm is found to induce the necessary in situ adaptivity of behavior into the agents, even when controlled by stateless feed-forward networks.
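The defining property of an ATA, that heterogeneous roles emerge from a single shared policy because each agent sees a different local situation, can be shown in a minimal sketch. The observation format and role names below are invented for illustration; the actual controllers are evolved neural networks, not hand-written rules.

```python
def shared_policy(observation):
    """One policy shared by every agent on the team; roles differ only
    because each agent receives a different local observation."""
    distance_to_food, distance_to_threat = observation
    if distance_to_threat < distance_to_food:
        return "guard"    # agents near a threat adopt a defensive role
    return "forage"       # the rest gather resources

def team_roles(observations):
    """Apply the identical policy to every agent's local observation."""
    return [shared_policy(obs) for obs in observations]
```

Even with identical (and, as the chapter notes, potentially stateless feed-forward) controllers, the team divides labor in situ: change the environment and the role assignment changes with it, with no per-agent specialization encoded in the genome.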

Collaboration


Dive into Bobby D. Bryant's collaborations.

Top Co-Authors

Risto Miikkulainen
University of Texas at Austin

Kenneth O. Stanley
University of Central Florida

Matt Parker
Indiana University Bloomington

Igor V. Karpov
University of Texas at Austin