Karen Zita Haigh
BBN Technologies
Publications
Featured research published by Karen Zita Haigh.
systems man and cybernetics | 1991
Stan Matwin; Tomasz Szapiro; Karen Zita Haigh
It is argued that negotiation rules can be learned and invented by means of genetic algorithms. The work presented introduces a method, a system design, and a prototype implementation that uses genetic-based machine learning to acquire negotiation rules. The learned rules support a party involved in a two-party bargaining problem with multiple issues. It is assumed that both parties work towards a compromise deal. The method provides a framework in which genetic-based learning is applied repetitively to a changing problem representation. The system design proposes a problem representation that is adequate to express bargaining processes and at the same time conducive to genetic-based learning. The authors report results of experiments with the prototype implementation. These results indicate that genetically learned rules, when used in real negotiations, yield results better than those obtained by humans in the same negotiation. The experiments indicate considerable robustness of genetically learned rules with respect to varying parameters of the genetic operations on which the system relies in modeling negotiations. In terms of user support, the experimental results show that in the bargaining process, a good rule is one that advises conceding in small steps and bringing new issues into the negotiation process.
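The abstract does not give the rule representation, so as a rough, hypothetical sketch of genetic-based learning applied to bargaining (the bargaining model, operators, and all parameters below are invented for illustration), consider a toy genetic algorithm that evolves a single concession-step parameter against a fixed opponent:

```python
import random

random.seed(0)

def negotiate(step):
    """Toy two-party, single-issue bargaining: our agent opens at 1.0 and
    concedes by `step` per round; a fixed opponent opens at 0.0 and
    concedes by 0.05. When the offers cross, the deal is the midpoint,
    which is also our utility (higher is better for us)."""
    ours, theirs = 1.0, 0.0
    for _ in range(100):
        if ours <= theirs:                 # offers crossed: deal struck
            return (ours + theirs) / 2
        ours -= step
        theirs += 0.05
    return 0.0                             # no agreement reached

def evolve(pop_size=20, generations=30):
    """Genetic search over the concession step: elitist selection,
    averaging crossover, and Gaussian mutation."""
    pop = [random.uniform(0.01, 0.5) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=negotiate, reverse=True)
        parents = pop[: pop_size // 2]     # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2 + random.gauss(0, 0.01)
            children.append(min(0.5, max(0.005, child)))
        pop = parents + children
    return max(pop, key=negotiate)

best = evolve()
```

In this toy setting the evolved step ends up small, which echoes the paper's observation that conceding in small steps is a good rule.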
Intelligence/SIGART Bulletin | 1997
Reid G. Simmons; Richard Goodwin; Karen Zita Haigh; Sven Koenig; Joseph O'Sullivan; Manuela M. Veloso
Office delivery robots have to perform many tasks such as picking up and delivering mail or faxes, returning library books, and getting coffee. They have to determine the order in which to visit locations, plan paths to those locations, follow paths reliably, and avoid static and dynamic obstacles in the process. Reliability and efficiency are key issues in the design of such autonomous robot systems. They must deal reliably with noisy sensors and actuators and with incomplete knowledge of the environment. They must also act efficiently, in real time, to deal with dynamic situations. To achieve these objectives, we have developed a robot architecture that is composed of four layers: obstacle avoidance, navigation, path planning, and task planning. The layers are independent, communicating processes that are always active, processing sensory data and status information to update their decisions and actions. A version of our robot architecture has been in nearly daily use in our building since December 1995. As of January 1997, the robot has traveled more than 110 kilometers (65 miles) in service of over 2500 navigation requests that were specified using our World Wide Web interface.
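The four-layer decomposition described above can be pictured with a minimal skeleton. This is not the Xavier code: the class names and the toy map are invented for illustration, and the real layers run as always-active communicating processes rather than the synchronous calls shown here.

```python
from dataclasses import dataclass

@dataclass
class ObstacleAvoidance:
    """Lowest layer: turns a desired heading into a safe motor command."""
    def command(self, heading, sensed_obstacle=False):
        return "swerve" if sensed_obstacle else f"drive {heading}"

@dataclass
class Navigation:
    """Follows a path reliably despite noisy sensing."""
    avoid: ObstacleAvoidance
    def follow(self, path, sensed_obstacle=False):
        return [self.avoid.command(wp, sensed_obstacle) for wp in path]

@dataclass
class PathPlanner:
    """Plans a path between named locations over a topological map."""
    graph: dict
    def plan(self, start, goal):
        # Tiny breadth-first search (stand-in for the real planner).
        frontier, seen = [[start]], {start}
        while frontier:
            path = frontier.pop(0)
            if path[-1] == goal:
                return path
            for nxt in self.graph.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None

@dataclass
class TaskPlanner:
    """Orders the locations to visit for outstanding delivery requests."""
    planner: PathPlanner
    nav: Navigation
    def serve(self, start, requests):
        # Naive FIFO task order; assumes every goal is reachable.
        route, here = [], start
        for goal in requests:
            leg = self.planner.plan(here, goal)
            route += self.nav.follow(leg[1:])   # skip current location
            here = goal
        return route

graph = {"office": ["hall"], "hall": ["office", "lab", "mailroom"],
         "lab": ["hall"], "mailroom": ["hall"]}
robot = TaskPlanner(PathPlanner(graph), Navigation(ObstacleAvoidance()))
commands = robot.serve("office", ["lab"])
```

The point of the layering is that each layer only refines the decision of the one above it, so any layer can be improved or replaced independently.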
intelligent robots and systems | 2000
Reid G. Simmons; David Apfelbaum; Dieter Fox; Robert P. Goldman; Karen Zita Haigh; David J. Musliner; Michael J. S. Pelican; Sebastian Thrun
To be truly useful, mobile robots need to be fairly autonomous and easy to control. This is especially true in situations where multiple robots are used, due to the increase in sensory information and the fact that the robots can interfere with one another. The paper describes a system that integrates autonomous navigation, a task executive, task planning, and an intuitive graphical user interface to control multiple, heterogeneous robots. We have demonstrated a prototype system that plans and coordinates the deployment of teams of robots. Testing has shown the effectiveness and robustness of the system, and of the coordination strategies in particular.
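One small, hypothetical illustration of the coordination problem the paper addresses (robot names, goals, and distances are invented): assigning heterogeneous robots to deployment points so that no two robots claim the same point and total travel cost is minimized.

```python
from itertools import permutations

def assign(robots, goals, dist):
    """Exhaustive search over one-to-one assignments (fine for small
    teams); `dist[r][g]` is robot r's travel cost to goal g."""
    best, best_cost = None, float("inf")
    for perm in permutations(goals, len(robots)):
        cost = sum(dist[r][g] for r, g in zip(robots, perm))
        if cost < best_cost:
            best, best_cost = dict(zip(robots, perm)), cost
    return best, best_cost

dist = {
    "r1": {"door": 2, "window": 9},
    "r2": {"door": 8, "window": 3},
}
plan, cost = assign(["r1", "r2"], ["door", "window"], dist)
```

Without this kind of coordination, both robots might head for the nearest point and interfere with one another, which is exactly the failure mode the paper's multi-robot executive guards against.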
international conference on case based reasoning | 1995
Karen Zita Haigh; Manuela M. Veloso
There have been several efforts to create and use real maps in computer applications that automatically find good map routes. In general, online map representations do not include information that may be relevant for the purpose of generating good realistic routes, including for example traffic patterns, construction, or number of lanes. Furthermore, the notion of a good route is dependent on a variety of factors, such as the time of the day, and may also be user dependent. This motivation leads to our work on the accumulation and reuse of previously traversed routes as cases. In this paper, we demonstrate our route planning method which retrieves and reuses multiple past routing cases that collectively form a good basis for generating a new routing plan. We briefly present our similarity metric for retrieving a set of similar routes. The metric effectively takes into account the geometric and continuous-valued characteristics of a city map. We then present the replay mechanism and how the planner produces the route plan by analogizing from the retrieved similar past routes. We discuss in particular the strategy used to merge a set of cases and generate the new route. We use illustrative examples and show some empirical results from a detailed online map of the city of Pittsburgh containing over 18,000 intersections and 25,000 street segments.
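The paper's similarity metric handles the geometric, continuous-valued character of a city map; as a much-simplified, hypothetical stand-in (names and coordinates invented), one can rank past routing cases by how close their endpoints lie to the new query's endpoints:

```python
import math

def dist(p, q):
    """Euclidean distance between two map points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def similarity(case, query):
    """Smaller is more similar: summed endpoint displacement."""
    return (dist(case["start"], query["start"])
            + dist(case["goal"], query["goal"]))

def retrieve(cases, query, k=2):
    """Return the k most similar past routes, best first; the replay
    mechanism would then merge these into a new route plan."""
    return sorted(cases, key=lambda c: similarity(c, query))[:k]

cases = [
    {"name": "home->office",  "start": (0, 0), "goal": (10, 10)},
    {"name": "home->gym",     "start": (0, 1), "goal": (2, 8)},
    {"name": "store->office", "start": (9, 0), "goal": (10, 9)},
]
query = {"start": (0, 0), "goal": (10, 9)}
best = retrieve(cases, query)
```

Retrieving several cases rather than one matters here: the merged route can reuse the start of one past route and the end of another.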
adaptive agents and multi-agents systems | 2002
Karen Zita Haigh; John Phelps; Christopher W. Geib
We are building an agent-oriented system to aid elderly people to live longer in their homes, increasing the duration of their independence from round-the-clock care while maintaining important social connectedness and reducing caregiver burden. The Independent LifeStyle Assistant™ (I.L.S.A.) is a multiagent system that incorporates a unified sensing model, probabilistically derived situation awareness, hierarchical task network response planning, real-time action selection control, complex coordination, and machine learning. This paper describes the problem, our reasoning for selecting an agent-based approach, and the architecture of the system.
adaptive agents and multi-agents systems | 1997
Karen Zita Haigh; Manuela M. Veloso
We have been developing Rogue, an architecture that integrates high-level planning with a low-level executing robotic agent. Rogue is designed as the office gofer task planner for Xavier the robot. User requests are interpreted as high-level planning goals, such as getting coffee and picking up and delivering mail or faxes. Users post tasks asynchronously, and Rogue controls the corresponding continuous planning and execution process. This paper presents the extensions to a nonlinear state-space planning algorithm that allow for interaction with the robot executor. We focus on presenting how executable steps are identified based on the planning model and the predicted execution performance; how interrupts from user requests are handled and incorporated into the system; how executable plans are merged according to their priorities; and how monitoring execution can add perception knowledge to the planning and any needed re-planning processes. The complete Rogue system will learn from its planning and execution experiences to improve its own behaviour over time. We finish the paper by briefly discussing Rogue's learning opportunities.
Journal of Experimental and Theoretical Artificial Intelligence | 1997
Karen Zita Haigh; Jonathan Richard Shewchuk; Manuela M. Veloso
Automated route planning consists of using real maps to automatically find good map routes. Two shortcomings of standard methods are (1) that domain information may be lacking, and (2) that a 'good' route can be hard to define. Most on-line map representations do not include information that may be relevant for the purpose of generating good realistic routes, such as traffic patterns, construction, and one-way streets. The notion of a good route depends not only on geometry (shortest path), but also on a variety of other factors, such as the day and time, weather conditions, and, perhaps most importantly, user-dependent preferences. These features can be learned by evaluating real-world execution experience. These difficulties motivate our work on applying analogical reasoning to route planning. Analogical reasoning is a method of using past experience to improve problem-solving performance in similar new situations. Our approach consists of the accumulation and reuse of previously traversed routes…
adaptive agents and multi-agents systems | 1998
Karen Zita Haigh; Manuela M. Veloso
2006 1st IEEE Workshop on Networking Technologies for Software Defined Radio Networks | 2006
Gregory Donald Troxel; Eric Blossom; Steve Boswell; Armando Caro; Isidro Marcos Castineyra; Alex Colvin; Tad Dreier; Joseph B. Evans; Nick Goffee; Karen Zita Haigh; Talib S. Hussain; Vikas Kawadia; David Lapsley; Carl Livadas; Alberto Medina; Joanne Mikkelson; Gary J. Minden; Robert Tappan Morris; Craig Partridge; Vivek Raghunathan; Ram Ramanathan; Cesar A. Santivanez; Thomas Schmid; Dan Sumorok; Mani B. Srivastava; Robert S. Vincent; David Wiggins; Alexander M. Wyglinski; Sadaf Zahedi
Argument & Computation | 2014
Simon Parsons; Katie Atkinson; Zimi Li; Peter McBurney; Elizabeth Sklar; Munindar P. Singh; Karen Zita Haigh; Karl N. Levitt; Jeff Rowe
Trust is a natural mechanism by which an autonomous party, an agent, can deal with the inherent uncertainty regarding the behaviours of other parties and the uncertainty in the information it shares with those parties. Trust is thus crucial in any decentralised system. This paper builds on recent efforts to use argumentation to reason about trust. Specifically, it provides a set of schemes: abstract patterns of reasoning that apply in multiple situations geared towards trust. Schemes are described in which one agent, A, can establish arguments for trusting another agent, B, directly, as well as schemes that A can use to construct arguments for trusting C, where C is trusted by B. For both sets of schemes, a set of critical questions is offered that identifies the situations in which these schemes can fail.
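The two families of trust schemes might be pictured as defeasible rules whose critical questions act as defeaters. This is a hypothetical illustration, not the paper's formalism; the critical-question wording below is invented.

```python
def direct_trust(a, b, beliefs):
    """A argues for trusting B directly, from B's track record; a 'yes'
    answer (True in `beliefs`) to any critical question defeats it."""
    critical = [
        f"{b} has a conflict of interest",
        f"{b} has recently behaved unreliably",
    ]
    if any(beliefs.get(q, False) for q in critical):
        return None
    return (a, "trusts", b)

def indirect_trust(a, b, c, beliefs):
    """A argues for trusting C because B, whom A trusts, trusts C."""
    if direct_trust(a, b, beliefs) is None:
        return None                    # the chain needs A to trust B
    critical = [
        f"{b} is a poor judge of trustworthiness",
        f"{b} does not actually trust {c}",
    ]
    if any(beliefs.get(q, False) for q in critical):
        return None
    return (a, "trusts", c)

chain = indirect_trust("A", "B", "C", {})
defeated = indirect_trust("A", "B", "C",
                          {"B has recently behaved unreliably": True})
```

Note how a single affirmative critical question anywhere along the chain defeats the indirect argument, which is exactly the role the paper assigns to critical questions.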