Publication


Featured research published by Dani Goldberg.


AI Magazine | 2003

GRACE: an autonomous robot for the AAAI Robot Challenge

Reid G. Simmons; Dani Goldberg; Adam Goode; Michael Montemerlo; Nicholas Roy; Brennan Sellner; Chris Urmson; Alan C. Schultz; Myriam Abramson; William Adams; Amin Atrash; Magdalena D. Bugajska; Michael J. Coblenz; Matt MacMahon; Dennis Perzanowski; Ian Horswill; Robert Zubek; David Kortenkamp; Bryn Wolfe; Tod Milam; Bruce Allen Maxwell

In an attempt to solve as much of the AAAI Robot Challenge as possible, five research institutions representing academia, industry, and government integrated their research into a single robot named GRACE. This article describes this first-year effort by the GRACE team, including not only the various techniques each participant brought to GRACE but also the difficult integration effort itself.


Archive | 2002

A Layered Architecture for Coordination of Mobile Robots

Reid G. Simmons; Trey Smith; M. Bernardine Dias; Dani Goldberg; David Hershberger; Anthony Stentz; Robert Zlot

This paper presents an architecture that enables multiple robots to explicitly coordinate actions at multiple levels of abstraction. In particular, we are developing an extension to the traditional three-layered robot architecture that enables robots to interact directly at each layer — at the behavioral level, the robots create distributed control loops; at the executive level, they synchronize task execution; at the planning level, they use market-based techniques to assign tasks, form teams, and allocate resources. We illustrate these ideas through applications in multi-robot assembly, multi-robot deployment, and multi-robot mapping.
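
As a rough sketch of the layered scheme this abstract describes, the Python below models each robot as a behavioral, an executive, and a planning layer, with peer coordination hooks at each layer and a simple lowest-bid auction at the planning level. All class and method names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a three-layered robot with coordination hooks
# at each layer, loosely following the abstract above. Names are assumed.
from dataclasses import dataclass, field


@dataclass
class BehavioralLayer:
    """Runs control loops; peers can exchange low-level signals directly."""
    def control_step(self, peer_signals: dict) -> dict:
        # Placeholder control loop: return the signals we would broadcast.
        return {"heading": 0.0, **peer_signals}


@dataclass
class ExecutiveLayer:
    """Sequences tasks; peers synchronize task execution with each other."""
    ready: set = field(default_factory=set)

    def signal_ready(self, robot_id: str) -> None:
        self.ready.add(robot_id)

    def all_ready(self, team: list) -> bool:
        return self.ready.issuperset(team)


@dataclass
class PlanningLayer:
    """Allocates tasks among robots with a simple market-style auction."""
    def bid(self, task: str, my_cost: float) -> float:
        return my_cost                  # a real bid would come from a schedule

    def award(self, bids: dict) -> str:
        return min(bids, key=bids.get)  # lowest-cost bidder wins


@dataclass
class Robot:
    robot_id: str
    behavioral: BehavioralLayer = field(default_factory=BehavioralLayer)
    executive: ExecutiveLayer = field(default_factory=ExecutiveLayer)
    planning: PlanningLayer = field(default_factory=PlanningLayer)


# Example: three robots bid on a task at the planning level; the cheapest wins.
robots = [Robot("r1"), Robot("r2"), Robot("r3")]
bids = {r.robot_id: r.planning.bid("assemble", my_cost=c)
        for r, c in zip(robots, [4.0, 2.5, 6.0])}
print(robots[0].planning.award(bids))   # "r2"
```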


Adaptive Agents and Multi-Agent Systems | 1999

Coordinating mobile robot group behavior using a model of interaction dynamics

Dani Goldberg; Maja J. Matarić

In this paper we show how various levels of coordinated behavior may be achieved in a group of mobile robots by using a model of the interaction dynamics between a robot and its environment. We present augmented Markov models (AMMs) as a tool for capturing such interaction dynamics on-line and in real-time, with little computational and storage overhead. We begin by describing the structure of AMMs and the algorithm for generating them, then verify the approach using data from physical mobile robots performing elements of a foraging task. Finally, we demonstrate the application of the model for resolving group coordination issues arising from three sources: individual performance, group affiliation, and group performance. Corresponding respectively to these are the three experimental examples we present: fault detection, group membership based on ability and experience, and dynamic leader selection.
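
As a sketch of what an AMM-style model might look like in code (an assumption-laden illustration, not the authors' implementation), the snippet below keeps a Markov chain over behavior states plus per-state visit counts and time statistics, and updates them on-line from a stream of executed behaviors.

```python
# Minimal sketch of an augmented Markov model (AMM): a Markov chain over
# behavior states that also accumulates usage statistics on-line.
from collections import defaultdict


class AugmentedMarkovModel:
    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))  # counts
        self.state_time = defaultdict(float)   # total time spent per state
        self.state_visits = defaultdict(int)   # number of entries per state
        self.current = None

    def observe(self, behavior: str, duration: float) -> None:
        """Update the model from one executed behavior and its duration."""
        if self.current is not None:
            self.transitions[self.current][behavior] += 1
        self.state_visits[behavior] += 1
        self.state_time[behavior] += duration
        self.current = behavior

    def transition_prob(self, src: str, dst: str) -> float:
        total = sum(self.transitions[src].values())
        return self.transitions[src][dst] / total if total else 0.0


# Example: feed a stream of (behavior, duration) pairs from a foraging task.
amm = AugmentedMarkovModel()
for behavior, duration in [("search", 4.0), ("approach", 1.5),
                           ("grasp", 0.5), ("search", 3.0)]:
    amm.observe(behavior, duration)
print(amm.transition_prob("search", "approach"))
```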


Autonomous Agents and Multi-Agent Systems | 2003

Maximizing Reward in a Non-Stationary Mobile Robot Environment

Dani Goldberg; Maja J. Matarić

The ability of a robot to improve its performance on a task can be critical, especially in poorly known and non-stationary environments where the best action or strategy is dependent upon the current state of the environment. In such systems, a good estimate of the current state of the environment is key to establishing high performance, however quantified. In this paper, we present an approach to state estimation in poorly known and non-stationary mobile robot environments, focusing on its application to a mine collection scenario, where performance is quantified using reward maximization. The approach is based on the use of augmented Markov models (AMMs), a sub-class of semi-Markov processes. We have developed an algorithm for incrementally constructing arbitrary-order AMMs on-line. It is used to capture the interaction dynamics between a robot and its environment in terms of behavior sequences executed during the performance of a task. For the purposes of reward maximization in a non-stationary environment, multiple AMMs monitor events at different timescales and provide statistics used to select the AMM likely to have a good estimate of the environmental state. AMMs with redundant or outdated information are discarded, while attempting to maintain sufficient data to reduce conformation to noise. This approach has been successfully implemented on a mobile robot performing a mine collection task. In the context of this task, we first present experimental results validating our reward maximization performance criterion. We then incorporate our algorithm for state estimation using multiple AMMs, allowing the robot to select appropriate actions based on the estimated state of the environment. The approach is tested first with a physical robot, in a non-stationary environment with an abrupt change, then with a simulation, in a gradually shifting environment.
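
The multiple-timescale idea can be illustrated with a deliberately simplified sketch: keep reward statistics over windows of different lengths and prefer the long, low-variance window unless a short window has diverged from it, which suggests the environment has changed. The window sizes, the divergence test, and all names below are assumptions for illustration, not the paper's algorithm.

```python
# Illustrative sketch of keeping estimators at several timescales and
# selecting the one most likely to reflect the current environment.
from collections import deque
from statistics import mean


class MultiTimescaleEstimator:
    def __init__(self, window_sizes=(10, 50, 250)):
        self.windows = {w: deque(maxlen=w) for w in window_sizes}

    def observe(self, reward: float) -> None:
        for window in self.windows.values():
            window.append(reward)

    def estimate(self, divergence_threshold: float = 0.5) -> float:
        """Use the longest (lowest-variance) window unless a shorter one
        has diverged from it, which suggests the environment changed."""
        sizes = sorted(self.windows)
        longest = self.windows[sizes[-1]]
        if not longest:
            return 0.0
        baseline = mean(longest)
        for size in sizes[:-1]:            # check the shortest windows first
            window = self.windows[size]
            if len(window) == size and abs(mean(window) - baseline) > divergence_threshold:
                return mean(window)        # trust the recent, short window
        return baseline
```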


Adaptive Agents and Multi-Agent Systems | 2003

Task allocation using a distributed market-based planning mechanism

Dani Goldberg; Vincent A. Cicirello; M. Bernardine Dias; Reid G. Simmons; Stephen F. Smith; Anthony Stentz

This paper describes a market-based planning mechanism used for task and resource allocation within a larger distributed, multi-robot control and coordination architecture. We are developing an extension to the traditional three-layered robot architecture that enables robots to interact directly at each layer: at the behavioral level, the robots create distributed control loops; at the executive level, they synchronize task execution; at the planning level, they use market-based techniques to allocate tasks and resources. This paper focuses on the market-based planning layer, which comprises two main components: a trader that participates in the market, auctioning and bidding on tasks; and a scheduler that determines task feasibility and cost for the trader, and interacts with the executive layer for task execution.
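
A minimal sketch of the trader/scheduler split, under assumed names and a toy additive cost model (not the authors' implementation): each trader asks its scheduler whether a task is feasible and at what cost, and an auctioning trader awards the task to the cheapest feasible bid.

```python
# Rough sketch of a market-based planning layer: a trader auctions a task,
# peers' schedulers report feasibility and cost, lowest bid wins.
from typing import Optional


class Scheduler:
    """Checks whether a task fits the robot's schedule and at what cost."""
    def __init__(self, load: float, capacity: float = 10.0):
        self.load = load
        self.capacity = capacity

    def evaluate(self, task_cost: float) -> Optional[float]:
        if self.load + task_cost > self.capacity:
            return None                    # infeasible: decline to bid
        return self.load + task_cost       # simple marginal-cost bid


class Trader:
    """Participates in the market: auctions tasks and collects bids."""
    def __init__(self, robot_id: str, scheduler: Scheduler):
        self.robot_id = robot_id
        self.scheduler = scheduler

    def bid(self, task_cost: float) -> Optional[float]:
        return self.scheduler.evaluate(task_cost)

    def auction(self, task_cost: float, peers: list) -> Optional[str]:
        bids = {t.robot_id: t.bid(task_cost) for t in peers + [self]}
        valid = {rid: b for rid, b in bids.items() if b is not None}
        return min(valid, key=valid.get) if valid else None


# Example: robot "r1" auctions a task among three robots.
r1 = Trader("r1", Scheduler(load=6.0))
r2 = Trader("r2", Scheduler(load=2.0))
r3 = Trader("r3", Scheduler(load=9.5))
print(r1.auction(task_cost=3.0, peers=[r2, r3]))   # "r2"
```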


Adaptive Agents and Multi-Agent Systems | 2000

Reward maximization in a non-stationary mobile robot environment

Dani Goldberg; Maja J. Matarić

In this paper, we present an approach to reward maximization in a non-stationary mobile robot environment. The approach works within the constraints of limited local sensing and limited a priori knowledge of the environment. It is based on the use of augmented Markov models (AMMs), which are essentially Markov chains having additional statistics associated with states and state transitions. We have developed an algorithm that constructs AMMs on-line and in real-time with little computational and space overhead, making it practical to maintain multiple models of the interaction dynamics between a robot and its environment during the execution of a task. For the purposes of reward maximization in a non-stationary environment, these models monitor events at increasing intervals of time and provide statistics used to discard redundant or outdated information while reducing the probability of conforming to noise. This approach has been successfully implemented with a real mobile robot performing a mine collection task. In the context of this task, we first present experimental results validating our reward maximization criterion in a stationary environment. We then incorporate our algorithm for redundant/outdated information reduction using multiple models and apply the approach to a non-stationary environment with an abrupt change.
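
To make the discard step concrete, here is a toy sketch, with invented thresholds and a single reward-rate statistic, of dropping models whose information is redundant (they agree with a newer model) or outdated (they disagree sharply with the newest one). It illustrates the idea only and is not the paper's algorithm.

```python
# Illustrative sketch of pruning a set of models that track statistics over
# different, increasing spans of time.
from dataclasses import dataclass


@dataclass
class ModelStats:
    start_step: int        # when this model began accumulating statistics
    reward_rate: float     # e.g., reward per unit time estimated by the model


def prune(models: list, agree_eps: float = 0.05, outdated_eps: float = 0.5) -> list:
    """Keep the newest model plus older models that add distinct,
    still-valid information about the environment."""
    models = sorted(models, key=lambda m: m.start_step)   # oldest first
    newest = models[-1]
    kept = [newest]
    for m in models[:-1]:
        redundant = abs(m.reward_rate - newest.reward_rate) < agree_eps
        outdated = abs(m.reward_rate - newest.reward_rate) > outdated_eps
        if not redundant and not outdated:
            kept.append(m)
    return kept


# Example: of three models started at different times, the oldest is dropped
# as outdated and the middle one is kept as distinct information.
print(prune([ModelStats(0, 0.9), ModelStats(100, 0.55), ModelStats(200, 0.32)]))
```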


Intelligent Robots and Systems | 2001

Detecting regime changes with a mobile robot using multiple models

Dani Goldberg; Maja J. Matarić

We present an approach to the detection of global environmental regime changes by a mobile robot performing a task. The approach is based on the use of augmented Markov models (AMMs), a variation of semi-Markov processes. We have developed an algorithm that constructs AMMs online and in real-time with little overhead. AMMs are a general tool for capturing the interaction dynamics between a robot and its environment using the history of behavior executed by the robot. We extend AMMs to regime detection, using multiple models to monitor events at different time scales and provide statistics to detect regime changes at those time scales. This approach has been successfully implemented using a physical mobile robot performing a land mine collection task. In the context of this task, we present experimental results, first validating our approach, then demonstrating a more complex proportion-maintaining scenario of the land mine collection task. Finally, we present results using an alternative reward maximization decision criterion in the same task.
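
One way to picture timescale-based regime detection (a simplified stand-in, not the paper's method) is to compare the distribution of recently executed behaviors against a longer reference window and flag a change when the two diverge, as in the sketch below; the window lengths and threshold are arbitrary assumptions.

```python
# Minimal sketch: flag a regime change when the behavior distribution in a
# short recent window diverges from a longer reference window.
from collections import Counter, deque


class RegimeChangeDetector:
    def __init__(self, short: int = 20, long: int = 200, threshold: float = 0.3):
        self.short = deque(maxlen=short)
        self.long = deque(maxlen=long)
        self.threshold = threshold

    def observe(self, behavior: str) -> bool:
        """Record one executed behavior; return True if a change is flagged."""
        self.short.append(behavior)
        self.long.append(behavior)
        if len(self.short) < self.short.maxlen:
            return False
        return self._distance() > self.threshold

    def _distance(self) -> float:
        """Total-variation distance between behavior frequencies in the
        short and long windows."""
        p = Counter(self.short)
        q = Counter(self.long)
        keys = set(p) | set(q)
        return 0.5 * sum(abs(p[k] / len(self.short) - q[k] / len(self.long))
                         for k in keys)
```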


Proceedings of SPIE | 1999

Mobile robot group coordination using a model of interaction dynamics

Dani Goldberg; Maja J. Matarić

We show how various levels of coordinated behavior may be achieved in a group of mobile robots by using a model of the interaction dynamics between a robot and the environment. We present augmented Markov models (AMMs) as a tool for capturing such interaction dynamics on-line and in real-time, with little computational and storage overhead. We briefly describe the structure of AMMs, then demonstrate the application of the model for resolving group coordination issues arising from three sources: individual performance, group affiliation, and group performance. Corresponding respectively to these are the three experimental examples we present: fault detection, group membership based on ability and experience, and dynamic leader selection.
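
The three applications named above can be caricatured with per-robot performance statistics of the kind an AMM supplies; the thresholds, function names, and decision rules below are illustrative assumptions rather than the authors' criteria.

```python
# Toy sketch of fault detection, group membership, and leader selection from
# per-robot performance statistics.

def detect_fault(rate: float, expected: float, tolerance: float = 0.5) -> bool:
    """Flag a robot whose task-completion rate has fallen far below expectation."""
    return rate < expected * (1.0 - tolerance)


def assign_group(rate: float, experience: int, cutoff_rate: float = 0.2,
                 cutoff_exp: int = 10) -> str:
    """Place a robot in a group according to its ability and experience."""
    return "expert" if rate >= cutoff_rate and experience >= cutoff_exp else "novice"


def select_leader(rates: dict) -> str:
    """Dynamically pick the currently best-performing robot as leader."""
    return max(rates, key=rates.get)


# Example with three robots and their recent task-completion rates.
rates = {"r1": 0.12, "r2": 0.31, "r3": 0.05}
print(detect_fault(rates["r3"], expected=0.25))   # True: likely faulty
print(assign_group(rates["r2"], experience=15))   # "expert"
print(select_leader(rates))                       # "r2"
```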


National Conference on Artificial Intelligence | 1997

Interference as a tool for designing and evaluating multi-robot controllers

Dani Goldberg; Maja J. Matarić


Multi-Robot Systems: From Swarms to Intelligent Automata: Proceedings of the 2003 International Workshop on Multi-Robot Systems | 2003

Market-Based Multi-Robot Planning in a Distributed Layered Architecture

Dani Goldberg; Vincent A. Cicirello; M. Bernardine Dias; Reid G. Simmons; Stephen F. Smith; Anthony Stentz

Collaboration


Dive into Dani Goldberg's collaborations.

Top Co-Authors

Maja J. Matarić
University of Southern California

Reid G. Simmons
Carnegie Mellon University

Anthony Stentz
Carnegie Mellon University

Adam Goode
Carnegie Mellon University

Alan C. Schultz
United States Naval Research Laboratory

Ian Horswill
Northwestern University

Stephen F. Smith
Carnegie Mellon University