
Publication

Featured research published by Nathan Brooks.


Human-Robot Interaction | 2011

Scalable target detection for large robot teams

Huadong Wang; Andreas Kolling; Nathan Brooks; Sean Owens; Shafiq Abedin; Paul Scerri; Pei-Ju Lee; Shih-Yi Chien; Michael Lewis; Katia P. Sycara

In this paper, we present an asynchronous display method, coined image queue, which allows operators to search through a large amount of data gathered by autonomous robot teams. We discuss and investigate the advantages of an asynchronous display for foraging tasks with emphasis on Urban Search and Rescue. The image queue approach mines video data to present the operator with a relevant and comprehensive view of the environment in order to identify targets of interest such as injured victims. It fills the gap for comprehensive and scalable displays to obtain a network-centric perspective for UGVs. We compared the image queue to a traditional synchronous display with live video feeds and found that the image queue reduces errors and operator workload. Furthermore, it disentangles target detection from concurrent system operations and enables a call-center approach to target detection. With such an approach we can scale up to very large multi-robot systems gathering huge amounts of data that are then distributed to multiple operators.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2011

Asynchronous Control with ATR for Large Robot Teams

Nathan Brooks; Paul Scerri; Katia P. Sycara; Huadong Wang; Shih-Yi Chien; Michael Lewis

In this paper, we discuss and investigate the advantages of an asynchronous display, called “image queue”, tested for an urban search and rescue foraging task. The image queue approach mines video data to present the operator with a relevant and comprehensive view of the environment by selecting a small number of images that together cover large portions of the area searched. This asynchronous approach allows operators to search through a large amount of data gathered by autonomous robot teams, and allows comprehensive and scalable displays to obtain a network-centric perspective for unmanned ground vehicles (UGVs). In the reported experiment, automatic target recognition (ATR) was used to augment utilities based on visual coverage in selecting imagery for presentation to the operator. In the cued condition, a box was drawn in the region in which a possible target was detected. In the no-cue condition, no box was drawn, although the target detection probability continued to play a role in the selection of imagery. We found that operators using the image queue displays missed fewer victims and relied on teleoperation less often than those using streaming video. Image queue users in the no-cue condition did better in avoiding false alarms and reported lower workload than those in the cued condition.
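The coverage-based selection idea behind the image queue — choosing a small set of frames that together cover a large portion of the searched area — can be sketched as a greedy set cover. This is a purely illustrative sketch, not the authors' implementation; frame footprints are simplified to sets of grid-cell ids:

```python
def select_frames(footprints, k):
    """Greedy set cover: pick up to k frames, each adding the most new coverage.

    footprints: dict mapping frame_id -> set of grid-cell ids the frame covers
    Returns the chosen frame ids in selection order.
    """
    covered = set()
    chosen = []
    candidates = dict(footprints)
    for _ in range(k):
        # Pick the frame contributing the most cells not yet covered.
        best = max(candidates, key=lambda f: len(candidates[f] - covered),
                   default=None)
        if best is None or not (candidates[best] - covered):
            break  # no remaining frame adds new coverage
        covered |= candidates[best]
        chosen.append(best)
        del candidates[best]
    return chosen
```

A utility term for target-detection probability (as in the cued condition) could be folded into the `key` function alongside raw coverage.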


International Conference on Human-Computer Interaction | 2011

Synchronous vs. asynchronous control for large robot teams

Huadong Wang; Andreas Kolling; Nathan Brooks; Michael Lewis; Katia P. Sycara

In this paper, we discuss and investigate the advantages of an asynchronous display, called image queue, for foraging tasks with emphasis on Urban Search and Rescue. The image queue approach mines video data to present the operator with a relevant and comprehensive view of the environment, which helps the user to identify targets of interest such as injured victims. This approach allows operators to search through a large amount of data gathered by autonomous robot teams, and fills the gap for comprehensive and scalable displays to obtain a network-centric perspective for UGVs. We found that the image queue reduces errors and operator workload compared with the traditional synchronous display. Furthermore, it disentangles target detection from concurrent system operations and enables a call-center approach to target detection. With such an approach, the system could scale up to larger multi-robot systems gathering huge amounts of data with multiple operators.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2013

Best of Both Worlds: Design and Evaluation of an Adaptive Delegation Interface

Ewart de Visser; Brian Kidwell; John Payne; Li Lu; James Parker; Nathan Brooks; Timur Chabuk; Sarah Spriggs; Amos Freedy; Paul Scerri; Raja Parasuraman

The proliferation of unmanned aerial vehicles (UAVs) in civil and military domains has spurred increasingly complex automation design for augmenting operator abilities, reducing workload, and increasing mission effectiveness. We describe the Adaptive Interface Management System (AIMS), an intelligent adaptive delegation interface for controlling and monitoring multiple unmanned vehicles, with a mixed-initiative team model language. A study was conducted to assess understanding of this model language and whether participants exhibited calibrated trust in the intelligent automation. Results showed that operators had accurate memory for role responsibility and were well calibrated to the automation. Adaptive automation design approaches like the one described in this paper can be useful to create mixed-initiative human-robot teams.


Systems, Man and Cybernetics | 2011

SUAVE: Integrating UAV video using a 3D model

Shafiq Abedin; Huadong Wang; Michael Lewis; Nathan Brooks; Sean Owens; Paul Scerri; Katia P. Sycara

Controlling a team of Unmanned Aerial Vehicles (UAVs) requires the operator to perform continuous surveillance and path planning. The operator's situation awareness is likely to degrade as an increasing number of surveillance videos must be viewed and integrated. The Picture-in-Picture (PiP) display provides one solution for integrating multiple UAV camera videos by allowing the operator to view the video feed in the context of the surrounding terrain. The experimental SUAVE (Simple Unmanned Aerial Vehicle Environment) display extends PiP methods by sampling imagery from the video stream to texture a 3D map of the terrain. The operator can then inspect this imagery using world-in-miniature (WIM) or fly-through methods. We investigate the properties and advantages of SUAVE in the context of a search mission with 11 UAVs, finding a strong advantage for finding targets. While performance is expected to improve with increasing numbers of UAVs, we did not find differences in performance between models generated by 11 UAVs and those employing 22 UAVs.


Pacific Rim International Conference on Multi-Agents | 2017

A Balking Queue Approach for Modeling Human-Multi-Robot Interaction for Water Monitoring

Masoume M. Raeissi; Nathan Brooks; Alessandro Farinelli

We consider multi-robot scenarios where robots ask for operator interventions when facing difficulties. As the number of robots increases, the operator quickly becomes a bottleneck for the system. Queueing theory can be effectively used to optimize the scheduling of the robots' requests. Here we focus on a specific queueing model in which the robots decide whether to join the queue or balk based on a threshold value. Those thresholds are a trade-off between the reward earned by joining the queue and the cost of waiting in the queue. Although such queueing models reduce the system's waiting time, the cost of balking is usually not considered. Our aim is thus to find appropriate balking strategies for a robotic application that reduce the waiting time while accounting for the expected balking costs. We propose a Q-learning approach to compute balking thresholds and experimentally demonstrate the improvement in team performance compared to previous queueing models.
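The threshold-based balking decision described above can be sketched as follows. This is an illustrative toy, not the paper's model: the reward and cost constants are invented, and a one-step (bandit-style) Q-update stands in for the full learning setup. A robot observing queue length n joins only while the learned value of joining exceeds that of balking:

```python
import random

R_SERVICE = 5.0   # assumed reward for eventually being served
C_WAIT = 1.5      # assumed cost per request already waiting ahead
C_BALK = 2.0      # assumed cost of balking (robot copes autonomously)

def train_q(max_len=8, episodes=20000, alpha=0.1, seed=0):
    """Estimate Q(queue_length, action) for action in {join, balk}."""
    rng = random.Random(seed)
    q = {(n, a): 0.0 for n in range(max_len + 1) for a in ("join", "balk")}
    for _ in range(episodes):
        n = rng.randint(0, max_len)             # observed queue length
        a = rng.choice(("join", "balk"))        # uniform exploration
        r = R_SERVICE - C_WAIT * n if a == "join" else -C_BALK
        q[(n, a)] += alpha * (r - q[(n, a)])    # one-step Q-update
    return q

def threshold(q, max_len=8):
    """Smallest queue length at which balking beats joining."""
    for n in range(max_len + 1):
        if q[(n, "balk")] > q[(n, "join")]:
            return n
    return max_len + 1
```

With these constants, joining is worth roughly 5 - 1.5n against a flat balking cost of 2, so the learned threshold settles at the queue length where the two cross.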


Systems and Information Engineering Design Symposium | 2012

Simple UAV Environment (SUAVE): A viewpoint motion control evaluation

Shafiq Abedin; Michael Lewis; Nathan Brooks; Paul Scerri; Katia P. Sycara

Controlling a team of Unmanned Aerial Vehicles (UAVs) requires the operator to perform continuous surveillance and path planning. The operator's situation awareness is likely to degrade as an increasing number of surveillance videos must be viewed and integrated. The Picture-in-Picture (PiP) display provides one solution for integrating multiple UAV camera videos by allowing the operator to view the video feed in the context of the surrounding terrain. The experimental SUAVE (Simple Unmanned Aerial Vehicle Environment) display extends PiP methods by sampling imagery from the video stream to texture a 3D map of the terrain. The operator can then inspect this imagery using world-in-miniature (WIM) or fly-through methods. Our previous investigation of the properties and advantages of SUAVE in the context of a search mission with 11 UAVs showed a strong advantage for finding targets. Here we investigated constrained versus unconstrained fly-through techniques to evaluate performance in such spatially immersive displays. Results indicated no difference between the two motion control techniques.


Intelligent Robots and Systems | 2012

Scheduling operator attention for Multi-Robot Control

Shih-Yi Chien; Michael Lewis; Siddharth Mehrotra; Nathan Brooks; Katia P. Sycara


Adaptive Agents and Multi-Agent Systems | 2011

Allocating spatially distributed tasks in large, dynamic robot teams

Steven Okamoto; Nathan Brooks; Sean Owens; Katia P. Sycara; Paul Scerri


Autonomous Agents and Multi-Agent Systems | 2017

Interacting with team oriented plans in multi-robot systems

Alessandro Farinelli; Masoume M. Raeissi; Nicoló Marchi; Nathan Brooks; Paul Scerri

Collaboration

Dive into Nathan Brooks's collaborations.

Top Co-Authors

Paul Scerri | Carnegie Mellon University
Katia P. Sycara | Carnegie Mellon University
Michael Lewis | University of Pittsburgh
Huadong Wang | University of Pittsburgh
Sean Owens | Carnegie Mellon University
Shafiq Abedin | University of Pittsburgh