Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Magdalena D. Bugajska is active.

Publication


Featured research published by Magdalena D. Bugajska.


Systems, Man, and Cybernetics | 2005

Enabling effective human-robot interaction using perspective-taking in robots

J. G. Trafton; Nicholas L. Cassimatis; Magdalena D. Bugajska; Derek Brock; Farilee E. Mintz; Alan C. Schultz

We propose that an important aspect of human-robot interaction is perspective-taking. We show how perspective-taking occurs in a naturalistic environment (astronauts working on a collaborative project) and present a cognitive architecture for performing perspective-taking called Polyscheme. Finally, we show a fully integrated system that instantiates our theoretical framework within a working robot system. Our system successfully solves a series of perspective-taking problems and uses the same frames of reference that astronauts do to facilitate collaborative problem solving with a person.


IEEE Intelligent Systems | 2001

Building a multimodal human-robot interface

Dennis Perzanowski; Alan C. Schultz; William Adams; Elaine Marsh; Magdalena D. Bugajska

As we begin to build and interact with machines or robots that either look like humans or have human functionalities and capabilities, people may well interact with these human-like machines in ways that mimic human-human communication. For example, if a robot has a face, a human might interact with it similarly to how humans interact with other creatures with faces. Specifically, a human might talk to it, gesture to it, smile at it, and so on. If a human interacts with a computer or a machine that understands spoken commands, the human might converse with the machine, expecting it to have competence in spoken language. In our research on a multimodal interface to mobile robots, we have assumed a model of communication and interaction that, in a sense, mimics how people communicate. Our interface therefore incorporates both natural language understanding and gesture recognition as communication modes. We limited the interface to these two modes to simplify integrating them and to make our research more tractable. We believe that with an integrated system, the user is less concerned with how to communicate (which interactive mode to employ for a task) and is therefore free to concentrate on the tasks and goals at hand. Because we integrate all of our system's components, users can choose any combination of the interface's modalities. The onus is on our interface to integrate the input, process it, and produce the desired results.


AI Magazine | 2003

GRACE: an autonomous robot for the AAAI Robot challenge

Reid G. Simmons; Dani Goldberg; Adam Goode; Michael Montemerlo; Nicholas Roy; Brennan Sellner; Chris Urmson; Alan C. Schultz; Myriam Abramson; William Adams; Amin Atrash; Magdalena D. Bugajska; Michael J. Coblenz; Matt MacMahon; Dennis Perzanowski; Ian Horswill; Robert Zubek; David Kortenkamp; Bryn Wolfe; Tod Milam; Bruce Allen Maxwell

In an attempt to solve as much of the AAAI Robot Challenge as possible, five research institutions representing academia, industry, and government integrated their research into a single robot named GRACE. This article describes this first-year effort by the GRACE team, including not only the various techniques each participant brought to GRACE but also the difficult integration effort itself.


Robotics and Autonomous Systems | 2004

Integrating cognition, perception and action through mental simulation in robots

Nicholas L. Cassimatis; J. Gregory Trafton; Magdalena D. Bugajska; Alan C. Schultz

We argue that many problems in robotics arise from the difficulty of integrating multiple knowledge representation and inference techniques. We describe an architecture that integrates disparate reasoning, planning, sensation and mobility algorithms by composing them from strategies for managing mental simulations. Since simulations are conducted by modules that include high-level knowledge representation and inference techniques in addition to algorithms for sensation and reactive mobility, cognition, perception and action are continually integrated. An implemented robot using this framework in object-tracking and human-robot interaction tasks demonstrates that knowledge representation and inference techniques enable more complex and flexible robot behavior.
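
As a rough illustration of the integration-through-simulation idea in this abstract, the following Python sketch shows disparate modules ("specialists") jointly scoring imagined world states. It is not the authors' Polyscheme architecture; every name in it (WorldState, Specialist, choose_action) is a hypothetical illustration.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class WorldState:
    """A snapshot of the world, real or hypothetical, under simulation."""
    facts: dict = field(default_factory=dict)

class Specialist:
    """A module (perception, planning, reactive mobility, ...) that can
    score how plausible or desirable a simulated state is."""
    def __init__(self, name: str, score: Callable[[WorldState], float]):
        self.name = name
        self.score = score

def choose_action(current: WorldState,
                  actions: Dict[str, Callable[[WorldState], WorldState]],
                  specialists: List[Specialist]) -> str:
    """Mentally simulate each candidate action and pick the one that all
    specialists jointly rate highest, so cognition, perception and action
    are evaluated in one common loop."""
    best_name, best_total = None, float("-inf")
    for name, simulate in actions.items():
        imagined = simulate(current)  # run the mental simulation
        total = sum(s.score(imagined) for s in specialists)
        if total > best_total:
            best_name, best_total = name, total
    return best_name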


Human-Robot Interaction | 2008

Integrating vision and audition within a cognitive architecture to track conversations

J. Gregory Trafton; Magdalena D. Bugajska; Benjamin R. Fransen; Raj M. Ratwani

We describe a computational cognitive architecture for robots, which we call ACT-R/E (ACT-R/Embodied). ACT-R/E is based on ACT-R [1, 2] but uses different visual, auditory, and movement modules. We describe a model that uses ACT-R/E to integrate visual and auditory information to perform conversation tracking in a dynamic environment. We also performed an empirical evaluation study which shows that people see our conversation-tracking system as extremely natural.


Archive | 2002

Communicating with Teams of Cooperative Robots

Dennis Perzanowski; Alan C. Schultz; William Adams; Magdalena D. Bugajska; Elaine Marsh; G. Trafton; Derek Brock; Marjorie Skubic; M. Abramson

We are designing and implementing a multi-modal interface to a team of dynamically autonomous robots. For this interface, we have elected to use natural language and gesture. Gestures can be either natural gestures perceived by a vision system installed on the robot, or they can be made by using a stylus on a Personal Digital Assistant. In this paper we describe the integrated modes of input and one of the theoretical constructs that we use to facilitate cooperation and collaboration among members of a team of robots. An integrated context and dialog processing component that incorporates knowledge of spatial relations enables cooperative activity between the multiple agents, both human and robotic.


International Journal of Social Robotics | 2009

“Like-Me” Simulation as an Effective and Cognitively Plausible Basis for Social Robotics

William G. Kennedy; Magdalena D. Bugajska; Anthony M. Harrison; J. Gregory Trafton

We present a successful design approach for social robotics based on a computational cognitive architecture and mental simulation. We discuss an approach to a Theory of Mind known as a “like-me” simulation, in which the agent uses its own knowledge and capabilities as a model of another agent in order to predict that agent’s actions. We present three examples of “like-me” mental simulation in a social context, implemented in the embodied version of the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture, ACT-R/E (for ACT-R Embodied). Our examples show the efficacy of a simulation approach in modeling perspective taking (identifying another’s left or right hand), teamwork (simulating a teammate for better team performance), and dominant-submissive social behavior (primate social experiments). We conclude with a discussion of the cognitive plausibility of this approach.
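
A hedged sketch of the “like-me” simulation described above: the robot reuses its own decision procedure, applied to the other agent's estimated situation, to predict that agent's next action. The function names and the toy policy below are illustrative assumptions, not the paper's ACT-R/E model.

def my_policy(percepts: dict) -> str:
    """The robot's own decision procedure (a stand-in for a full
    cognitive model)."""
    if percepts.get("object_visible"):
        return "reach_for_object"
    return "search"

def predict_other_agent(their_estimated_percepts: dict) -> str:
    """Theory of Mind via self-simulation: 'if I were in their
    situation, what would I do?'"""
    return my_policy(their_estimated_percepts)

# The teammate cannot see the object from where they stand, so the
# robot predicts they will keep searching.
assert predict_other_agent({"object_visible": False}) == "search"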


Systems, Man, and Cybernetics | 2006

A Preliminary Study of Peer-to-Peer Human-Robot Interaction

Terry Fong; Jean Scholtz; Julie A. Shah; L. Fluckiger; Clayton Kunz; David Lees; J. Schreiner; M. Siegel; Laura M. Hiatt; Illah R. Nourbakhsh; Reid G. Simmons; R. Ambrose; Robert R. Burridge; Brian Antonishek; Magdalena D. Bugajska; Alan C. Schultz; J. G. Trafton

The Peer-To-Peer Human-Robot Interaction (P2P-HRI) project is developing techniques to improve task coordination and collaboration between human and robot partners. Our work is motivated by the need to develop effective human-robot teams for space mission operations. A central element of our approach is creating dialogue and interaction tools that enable humans and robots to flexibly support one another. In order to understand how this approach can influence task performance, we recently conducted a series of tests simulating a lunar construction task with a human-robot team. In this paper, we describe the tests performed, discuss our initial results, and analyze the effect of intervention on task performance.


NASA/DoD Conference on Evolvable Hardware | 2002

Coevolution of form and function in the design of micro air vehicles

Magdalena D. Bugajska; Alan C. Schultz

This paper discusses approaches to cooperative coevolution of form and function for autonomous vehicles, specifically evolving morphology and control for an autonomous micro air vehicle (MAV). The evolution of a sensor suite with minimal size, weight, and power requirements, together with reactive strategies for collision-free navigation of the simulated MAV, is described. Results are presented for several different coevolutionary approaches to the evolution of form and function (single- and multiple-species models) and for two different control architectures (a rule-based controller built on the SAMUEL learning system and a neural network controller implemented and evolved using ECkit).
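
To make the multiple-species idea concrete, here is a minimal Python sketch of cooperative coevolution in which a population of sensor-suite morphologies and a population of controllers are each evaluated jointly with a representative of the other species. The representations and the fitness function are invented for illustration; the paper's actual SAMUEL and ECkit systems are not reproduced here.

import random

POP_SIZE = 20

def random_morphology():
    # which of 5 candidate sensors are mounted on the MAV
    return [random.random() < 0.5 for _ in range(5)]

def random_controller():
    # weights mapping the 5 sensor readings to a steering command
    return [random.uniform(-1.0, 1.0) for _ in range(5)]

def joint_fitness(morph, ctrl):
    # placeholder for simulated flight: reward usable sensor-to-control
    # couplings, penalize sensor count (size, weight, power)
    usable = sum(w for mounted, w in zip(morph, ctrl) if mounted)
    return usable - 0.2 * sum(morph)

def coevolve(generations=50):
    morphs = [random_morphology() for _ in range(POP_SIZE)]
    ctrls = [random_controller() for _ in range(POP_SIZE)]
    for _ in range(generations):
        # evaluate each species against the other's current best partner
        best_c = max(ctrls, key=lambda c: joint_fitness(morphs[0], c))
        best_m = max(morphs, key=lambda m: joint_fitness(m, best_c))
        morphs.sort(key=lambda m: joint_fitness(m, best_c), reverse=True)
        ctrls.sort(key=lambda c: joint_fitness(best_m, c), reverse=True)
        # keep the fitter half, refill with fresh random individuals
        half = POP_SIZE // 2
        morphs = morphs[:half] + [random_morphology() for _ in range(POP_SIZE - half)]
        ctrls = ctrls[:half] + [random_controller() for _ in range(POP_SIZE - half)]
    best_c = max(ctrls, key=lambda c: joint_fitness(morphs[0], c))
    best_m = max(morphs, key=lambda m: joint_fitness(m, best_c))
    return best_m, best_c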


Robot and Human Interactive Communication | 2005

Perspective-taking with robots: experiments and models

J. G. Trafton; Alan C. Schultz; Magdalena D. Bugajska; Farilee Mintz

We suggest that to enable effective human-robot interaction, robots should be able to interact in a way that is natural to and preferred by humans. Using human-compatible representations and reasoning mechanisms should help in developing skills which support effective human-robot interaction. In this paper, we present two studies that examine a critical human-robot interaction component: perspective-taking. We find that when a person asks a robot to perform a task that is ambiguous to the robot, the person prefers the robot to either ask for clarification or take the person's perspective and act appropriately.

Collaboration


Dive into Magdalena D. Bugajska's collaborations.

Top Co-Authors

Alan C. Schultz, United States Naval Research Laboratory
William Adams, United States Naval Research Laboratory
Dennis Perzanowski, United States Naval Research Laboratory
J. Gregory Trafton, United States Naval Research Laboratory
Derek Brock, United States Naval Research Laboratory
J. G. Trafton, United States Naval Research Laboratory
Nicholas L. Cassimatis, United States Naval Research Laboratory
Donald A. Sofge, United States Naval Research Laboratory
Benjamin R. Fransen, United States Naval Research Laboratory