Publications


Featured research published by Alan C. Schultz.


Machine Learning | 1990

Learning Sequential Decision Rules Using Simulation Models and Competition

John J. Grefenstette; Connie Loggia Ramsey; Alan C. Schultz

The problem of learning decision rules for sequential tasks is addressed, focusing on the problem of learning tactical decision rules from a simple flight simulator. The learning method relies on the notion of competition and employs genetic algorithms to search the space of decision policies. Several experiments are presented that address issues arising from differences between the simulation model on which learning occurs and the target environment on which the decision rules are ultimately tested.
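
The competitive, simulation-driven learning loop lends itself to a compact sketch. The following is a minimal illustration, assuming a bit-string policy encoding and a toy stand-in for the simulation model; it is not the paper's actual system or flight simulator.

```python
import random

# A rough sketch of competition-based policy learning with a genetic
# algorithm. The bit-string encoding and the toy simulate() fitness
# function are illustrative assumptions, not the paper's system.

POLICY_LEN = 16     # each candidate policy is a fixed-length bit string
POP_SIZE = 30
GENERATIONS = 40

def simulate(policy):
    """Stand-in for the simulation model: score one episode.

    Fitness is how well the bits match a hidden target pattern, plus
    noise to mimic a stochastic simulation."""
    target = [i % 2 for i in range(POLICY_LEN)]
    matches = sum(p == t for p, t in zip(policy, target))
    return matches + random.gauss(0, 0.5)

def crossover(a, b):
    cut = random.randrange(1, POLICY_LEN)
    return a[:cut] + b[cut:]

def mutate(policy, rate=0.05):
    return [1 - bit if random.random() < rate else bit for bit in policy]

population = [[random.randint(0, 1) for _ in range(POLICY_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Competition: rank policies by their simulated performance.
    ranked = sorted(population, key=simulate, reverse=True)
    parents = ranked[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best policy:", max(population, key=simulate))
```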


Journal of Artificial Intelligence Research | 1999

Evolutionary algorithms for reinforcement learning

David E. Moriarty; Alan C. Schultz; John J. Grefenstette

There are two distinct approaches to solving reinforcement learning problems, namely, searching in value function space and searching in policy space. Temporal difference methods and evolutionary algorithms are well-known examples of these approaches. Kaelbling, Littman and Moore recently provided an informative survey of temporal difference methods. This article focuses on the application of evolutionary algorithms to the reinforcement learning problem, emphasizing alternative policy representations, credit assignment methods, and problem-specific genetic operators. Strengths and weaknesses of the evolutionary approach to reinforcement learning are presented, along with a survey of representative applications.
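
To make the policy-space approach concrete, here is a minimal sketch, assuming a toy corridor task and a simple mutation-only evolutionary scheme: fitness is the return of an entire episode, so credit is assigned to complete policies rather than to individual state-action values. The task, encoding, and parameters are illustrative, not from the article.

```python
import random

# A minimal sketch of evolutionary reinforcement learning: search policy
# space directly, using whole-episode return as fitness. The corridor
# task and all parameters are illustrative assumptions.

N_STATES = 10       # corridor cells 0..9; reward accrues at cell 9
EPISODE_LEN = 25

def episode_return(policy):
    """Run one episode; policy[s] > 0 means 'move right' in state s."""
    state, total = 0, 0.0
    for _ in range(EPISODE_LEN):
        state = max(0, min(N_STATES - 1,
                           state + (1 if policy[state] > 0 else -1)))
        if state == N_STATES - 1:
            total += 1.0
    return total

def mutate(policy, sigma=0.3):
    return [w + random.gauss(0, sigma) for w in policy]

# Credit assignment happens at the level of complete policies: selection
# keeps the policies whose whole episodes scored best.
population = [[random.uniform(-1, 1) for _ in range(N_STATES)]
              for _ in range(20)]
for _ in range(50):
    population.sort(key=episode_return, reverse=True)
    elite = population[:5]
    population = elite + [mutate(random.choice(elite)) for _ in range(15)]

print("best return:", episode_return(population[0]))
```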


IEEE Transactions on Systems, Man, and Cybernetics | 2005

Enabling effective human-robot interaction using perspective-taking in robots

J. G. Trafton; Nicholas L. Cassimatis; Magdalena D. Bugajska; Derek Brock; Farilee E. Mintz; Alan C. Schultz

We propose that an important aspect of human-robot interaction is perspective-taking. We show how perspective-taking occurs in a naturalistic environment (astronauts working on a collaborative project) and present a cognitive architecture for performing perspective-taking called Polyscheme. Finally, we show a fully integrated system that instantiates our theoretical framework within a working robot system. Our system successfully solves a series of perspective-taking problems and uses the same frames of reference that astronauts do to facilitate collaborative problem solving with a person.


IEEE International Conference on Robotics and Automation | 1998

Mobile robot exploration and map-building with continuous localization

Brian Yamauchi; Alan C. Schultz; William Adams

Our research addresses how to integrate exploration and localization for mobile robots. A robot exploring and mapping an unknown environment needs to know its own location, but it may need a map in order to determine that location. In order to solve this problem, we have developed ARIEL, a mobile robot system that combines frontier-based exploration with continuous localization. ARIEL explores by navigating to frontiers, regions on the boundary between unexplored space and space that is known to be open. ARIEL finds these regions in the occupancy grid map that it builds as it explores the world. ARIEL localizes by matching its recent perceptions with the information stored in the occupancy grid. We have implemented ARIEL on a real mobile robot and tested ARIEL in a real-world office environment. We present quantitative results that demonstrate that ARIEL can localize accurately while exploring, and thereby build accurate maps of its environment.
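
The core of frontier-based exploration is easy to state in code: a frontier cell is an open cell adjacent to unknown space. Below is a minimal sketch over a toy occupancy grid; the cell encoding (-1 unknown, 0 open, 1 occupied) is an assumption, and ARIEL's continuous localization by grid matching is not reproduced here.

```python
import numpy as np

# A minimal sketch of frontier detection in an occupancy grid.
# Encoding (an assumption): -1 = unknown, 0 = open, 1 = occupied.

def find_frontiers(grid):
    """Return (row, col) cells that are open and border unknown space."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != 0:          # only open cells can be frontiers
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers

grid = np.array([
    [ 0,  0, -1, -1],
    [ 0,  1, -1, -1],
    [ 0,  0,  0, -1],
])
print(find_frontiers(grid))   # open cells adjacent to unknown: [(0, 1), (2, 2)]
```

An exploring robot would navigate toward one of these cells, sense, update the grid, and repeat until no frontiers remain.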


IEEE Intelligent Systems | 2001

Building a multimodal human-robot interface

Dennis Perzanowski; Alan C. Schultz; William Adams; Elaine Marsh; Magdalena D. Bugajska

When we begin to build and interact with machines or robots that either look like humans or have human functionalities and capabilities, people may well interact with these human-like machines in ways that mimic human-human communication. For example, if a robot has a face, a human might interact with it similarly to how humans interact with other creatures with faces. Specifically, a human might talk to it, gesture to it, smile at it, and so on. If a human interacts with a computer or a machine that understands spoken commands, the human might converse with the machine, expecting it to have competence in spoken language. In our research on a multimodal interface to mobile robots, we have assumed a model of communication and interaction that, in a sense, mimics how people communicate. Our interface therefore incorporates both natural language understanding and gesture recognition as communication modes. We limited the interface to these two modes to simplify integrating them and to make our research more tractable. We believe that with an integrated system, the user is less concerned with how to communicate (which interactive mode to employ for a task) and is therefore free to concentrate on the tasks and goals at hand. Because we integrate all of our system's components, users can choose any combination of the interface's modalities. The onus is on the interface to integrate the input, process it, and produce the desired results.
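
As a toy illustration of combining the two modes, the sketch below fuses a parsed utterance with a pointing gesture to resolve a deictic command such as "go over there". The command grammar, the Gesture type, and the resolution rule are hypothetical, not the interface's actual design.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# A minimal sketch of speech + gesture fusion. All types and rules here
# are illustrative assumptions.

@dataclass
class Gesture:
    kind: str                      # e.g. "point"
    location: Tuple[float, float]  # where the user pointed, in map frame

def resolve_command(utterance: str,
                    gesture: Optional[Gesture]) -> Optional[dict]:
    """Turn 'go over there' plus a pointing gesture into a goto command."""
    words = utterance.lower().split()
    if "go" in words and ("there" in words or "here" in words):
        # Deictic reference: the target comes from the gesture channel.
        if gesture is not None and gesture.kind == "point":
            return {"action": "goto", "target": gesture.location}
        return None                # 'there' without a gesture is ambiguous
    if "stop" in words:
        return {"action": "stop"}  # speech alone suffices for some commands
    return None

print(resolve_command("Go over there", Gesture("point", (3.5, 1.2))))
print(resolve_command("Stop", None))
```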


AI Magazine | 2003

GRACE: an autonomous robot for the AAAI Robot challenge

Reid G. Simmons; Dani Goldberg; Adam Goode; Michael Montemerlo; Nicholas Roy; Brennan Sellner; Chris Urmson; Alan C. Schultz; Myriam Abramson; William Adams; Amin Atrash; Magdalena D. Bugajska; Michael J. Coblenz; Matt MacMahon; Dennis Perzanowski; Ian Horswill; Robert Zubek; David Kortenkamp; Bryn Wolfe; Tod Milam; Bruce Allen Maxwell

In an attempt to solve as much of the AAAI Robot Challenge as possible, five research institutions representing academia, industry, and government integrated their research into a single robot named GRACE. This article describes this first-year effort by the GRACE team, including not only the various techniques each participant brought to GRACE but also the difficult integration effort itself.


Human-Robot Interaction | 2013

ACT-R/E: an embodied cognitive architecture for human-robot interaction

J. Gregory Trafton; Laura M. Hiatt; Anthony M. Harrison; Franklin P. Tamborello; Sangeet Khemlani; Alan C. Schultz

We present ACT-R/E (Adaptive Character of Thought-Rational / Embodied), a cognitive architecture for human-robot interaction. Our reason for using ACT-R/E is twofold. First, ACT-R/E enables researchers to build good embodied models of people to understand how and why people think the way they do. Second, we leverage that knowledge of people by using it to predict what a person will do in different situations; e.g., that a person may forget something and may need to be reminded, or that a person cannot see everything the robot sees. We also discuss methods of evaluating a cognitive architecture and show numerous empirically validated examples of ACT-R/E models.
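
One concrete way an architecture in the ACT-R family predicts forgetting is the standard base-level activation equation, B = ln(sum over j of t_j^(-d)), where each t_j is the time since a past use of a memory and d is a decay parameter (0.5 by default in ACT-R). The sketch below applies it; the fixed "forgotten" threshold is an illustrative assumption rather than an ACT-R/E model.

```python
import math

# A minimal sketch of forgetting prediction via ACT-R's base-level
# activation equation. The threshold rule below is an assumption for
# illustration, not ACT-R/E's actual models or parameters.

def base_level_activation(use_times, now, decay=0.5):
    """Activation of a memory chunk given the times it was used."""
    return math.log(sum((now - t) ** -decay for t in use_times if t < now))

def likely_forgotten(use_times, now, threshold=-1.0):
    return base_level_activation(use_times, now) < threshold

# A fact rehearsed at t=0 and t=10 seconds, queried at two later times:
print(base_level_activation([0, 10], now=20))   # recent use: high activation
print(likely_forgotten([0, 10], now=600))       # long delay: may be forgotten
```

A robot running such a model could decide when a reminder is warranted by checking whether the person's predicted activation for a fact has decayed below threshold.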


Robotics and Autonomous Systems | 2004

Integrating cognition, perception and action through mental simulation in robots

Nicholas L. Cassimatis; J. Gregory Trafton; Magdalena D. Bugajska; Alan C. Schultz

We argue that many problems in robotics arise from the difficulty of integrating multiple knowledge representation and inference techniques. We describe an architecture that integrates disparate reasoning, planning, sensation and mobility algorithms by composing them from strategies for managing mental simulations. Since simulations are conducted by modules that include high-level knowledge representation and inference techniques in addition to algorithms for sensation and reactive mobility, cognition, perception and action are continually integrated. An implemented robot using this framework in object-tracking and human-robot interaction tasks demonstrates that knowledge representation and inference techniques enable more complex and flexible robot behavior.


IEEE International Symposium on Computational Intelligence in Robotics and Automation | 1999

Evolving control for distributed micro air vehicles

Annie S. Wu; Alan C. Schultz; Arvin Agah

We focus on the task of large area surveillance. Given an area to be surveilled and a team of micro air vehicles (MAVs) with appropriate sensors, the task is to dynamically distribute the MAVs appropriately in the surveillance area for maximum coverage based on features present on the ground, and to adjust this distribution over time as changes in the team or on the ground occur. We have developed a system that learns rule sets for controlling the individual MAVs in a distributed surveillance team. Since each rule set governs an individual MAV, control of the overall behavior of the entire team is distributed; there is no single entity controlling the actions of the entire team. Currently, all members of the MAV team utilize the same rule set; specialization of individual MAVs through the evolution of unique rule sets is a logical extension to this work. A genetic algorithm is used to learn the MAV rule sets.
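
The key property, each vehicle running the same rule set with only local sensing, can be shown in a few lines. The sketch below hand-codes two rules for a one-dimensional strip; the paper instead evolves its rule sets with a genetic algorithm, and the world, rules, and parameters here are illustrative assumptions.

```python
import random

# A minimal sketch of distributed control via a shared rule set: every
# MAV runs the same condition-action rules, so no single entity controls
# the team. The 1-D world and hand-coded rules are assumptions; the
# paper learns its rule sets with a genetic algorithm.

WORLD = 100.0    # length of the 1-D surveillance strip
SPACING = 20.0   # desired separation between neighboring MAVs

def step(me, others):
    """One control step for a single MAV, using only local sensing."""
    nearest = min(others, key=lambda x: abs(x - me))
    if abs(nearest - me) < SPACING:        # rule 1: too close -> disperse
        return me - 1.0 if nearest > me else me + 1.0
    return me + random.uniform(-0.5, 0.5)  # rule 2: otherwise, patrol

mavs = [random.uniform(0, WORLD) for _ in range(5)]
for _ in range(300):
    mavs = [max(0.0, min(WORLD, step(m, mavs[:i] + mavs[i + 1:])))
            for i, m in enumerate(mavs)]
print(sorted(round(m, 1) for m in mavs))   # positions spread across the strip
```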


Space | 2005

The Peer-to-Peer Human-Robot Interaction Project

Terrence Fong; Illah R. Nourbakhsh; Clayton Kunz; John Schreiner; Robert Ambrose; Robert R. Burridge; Reid G. Simmons; Laura M. Hiatt; Alan C. Schultz; J. Gregory Trafton; Magda Bugajska; Jean Scholtz

The Peer-to-Peer Human-Robot Interaction (P2P-HRI) project is developing techniques to improve task coordination and collaboration between human and robot partners. Our hypothesis is that peer-to-peer interaction can enable robots to collaborate in a competent, non-disruptive (i.e., natural) manner with users who have limited training, experience, or knowledge of robotics. Specifically, we believe that failures and limitations of autonomy (in planning, in execution, etc.) can be compensated for using human-robot interaction. In this paper, we present an overview of P2P-HRI, describe our development approach and discuss our evaluation methodology.

Collaboration


Dive into Alan C. Schultz's collaborations.

Top Co-Authors

William Adams (United States Naval Research Laboratory)
Dennis Perzanowski (United States Naval Research Laboratory)
Magdalena D. Bugajska (United States Naval Research Laboratory)
J. Gregory Trafton (United States Naval Research Laboratory)
Derek Brock (United States Naval Research Laboratory)
Nicholas L. Cassimatis (United States Naval Research Laboratory)
Donald A. Sofge (United States Naval Research Laboratory)