
Publication


Featured research published by Jennifer L. Burke.


Systems, Man and Cybernetics | 2004

Final report for the DARPA/NSF interdisciplinary study on human-robot interaction

Jennifer L. Burke; Robin R. Murphy; Erika Rogers; Vladimir J. Lumelsky; Jean Scholtz

As part of a Defense Advanced Research Projects Agency/National Science Foundation study on human-robot interaction (HRI), over sixty representatives from academia, government, and industry participated in an interdisciplinary workshop. The workshop allowed roboticists to interact with psychologists, sociologists, cognitive scientists, communication experts, and human-computer interaction specialists to discuss common interests in the field of HRI and to establish a dialogue across the disciplines for future collaborations. We include initial work done in preparation for the workshop, links to keynote and other presentations, and a summary of the findings, outcomes, and recommendations generated by the participants. Findings of the study include the need for more extensive interdisciplinary interaction, identification of basic taxonomies and research issues, social informatics, establishment of a small number of common application domains, and field experience for members of the HRI community. An overall conclusion of the workshop was that HRI is a cross-disciplinary area, which poses barriers to meaningful research, synthesis, and technology transfer. The vocabularies, experiences, methodologies, and metrics of the communities are sufficiently different that cross-disciplinary research is unlikely to happen without sustained funding and an infrastructure to establish a new HRI community.


International Conference on Multimodal Interfaces | 2006

Comparing the effects of visual-auditory and visual-tactile feedback on user performance: a meta-analysis

Jennifer L. Burke; Matthew S. Prewett; Ashley A. Gray; Liuquin Yang; Frederick R. B. Stilson; Michael D. Coovert; Linda R. Elliot; Elizabeth S. Redden

In a meta-analysis of 43 studies, we examined the effects of multimodal feedback on user performance, comparing visual-auditory and visual-tactile feedback to visual feedback alone. Results indicate that adding an additional modality to visual feedback improves performance overall. Both visual-auditory feedback and visual-tactile feedback provided advantages in reducing reaction times and improving performance scores, but were not effective in reducing error rates. Effects are moderated by task type, workload, and number of tasks. Visual-auditory feedback is most effective when a single task is being performed (g = .87) and under normal workload conditions (g = .71). Visual-tactile feedback is more effective when multiple tasks are being performed (g = .77) and workload conditions are high (g = .84). Both types of multimodal feedback are effective for target acquisition tasks, but vary in effectiveness for other task types. Implications for practice and research are discussed.
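
For readers tracking the g statistics in this abstract, the following is a minimal sketch of how Hedges' g is computed from two group summaries. The group statistics in the example are hypothetical and are not drawn from the meta-analysis.

    import math

    def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
        """Hedges' g: standardized mean difference with small-sample correction."""
        # Pooled standard deviation across the two groups
        sd_pooled = math.sqrt(
            ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
        )
        d = (mean_t - mean_c) / sd_pooled            # Cohen's d
        correction = 1 - 3 / (4 * (n_t + n_c) - 9)   # small-sample bias correction
        return d * correction

    # Hypothetical example: performance scores with vs. without the added modality
    print(round(hedges_g(82.0, 74.0, 10.0, 11.0, 20, 20), 2))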


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2005

Up from the Rubble: Lessons Learned about HRI from Search and Rescue

Robin R. Murphy; Jennifer L. Burke

The Center for Robot-Assisted Search and Rescue has collected data at three responses (World Trade Center, Hurricane Charley, and the La Conchita mudslide) and nine high-fidelity field exercises. Our results can be distilled into four lessons. First, building situation awareness, not autonomous navigation, is the major bottleneck in robot autonomy. Most of the robotics literature assumes a single operator, single robot (SOSR), while our work shows that two operators working together are nine times more likely to find a victim. Second, human-robot interaction should be thought of not as how to control the robot but rather as how a team of experts can exploit the robot as an active information source. The third lesson is that team members use shared visual information to build shared mental models and facilitate team coordination. This suggests that high-bandwidth, reliable communications will be necessary for effective teamwork. Fourth, victims and rescuers in close proximity to the robots respond to the robot socially. We conclude with observations about the general challenges in human-robot interaction.


Human-Robot Interaction | 2008

Crew roles and operational protocols for rotary-wing micro-UAVs in close urban environments

Robin R. Murphy; Kevin S. Pratt; Jennifer L. Burke

A crew organization and a four-step operational protocol are recommended based on a cumulative descriptive field study of teleoperated rotary-wing micro air vehicles (MAVs) used for structural inspection during the response and recovery phases of Hurricanes Katrina and Wilma. The use of MAVs for real civilian missions in real operating environments provides a unique opportunity to study human-robot interaction. The analysis of the human-robot interaction over 8 days, 14 missions, and 38 flights finds that a three-person crew is currently needed to perform distinct roles: Pilot, Mission Specialist, and Flight Director. The general operations procedure is driven by the need for safety of bystanders, other aircraft, the tactical team, and the MAV itself, which leads to missions being executed as a series of short, line-of-sight flights rather than a single flight. Safety concerns may limit the utility of autonomy in reducing the crew size or enabling beyond line-of-sight operations, but autonomy could lead to an increase in flights per mission and reduced Pilot training demands. This paper is expected to help set a foundation for future research in HRI and MAV autonomy and to help establish regulations and acquisition guidelines for civilian operations. Additional research in autonomy, interfaces, attention, and out-of-the-loop (OOTL) control is warranted.
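
To make the recommended crew organization concrete, here is a minimal sketch that encodes the three roles and a four-step, line-of-sight flight protocol as plain data structures. The step names and role assignments are illustrative assumptions; the paper's actual protocol is not reproduced in this abstract.

    from dataclasses import dataclass
    from enum import Enum

    class Role(Enum):
        PILOT = "Pilot"                              # flies the MAV
        MISSION_SPECIALIST = "Mission Specialist"    # directs the camera/inspection
        FLIGHT_DIRECTOR = "Flight Director"          # owns safety and coordination

    @dataclass
    class FlightStep:
        name: str
        responsible: Role

    # Hypothetical four-step protocol for one short line-of-sight flight
    PROTOCOL = [
        FlightStep("site survey and bystander check", Role.FLIGHT_DIRECTOR),
        FlightStep("mission briefing and flight plan", Role.MISSION_SPECIALIST),
        FlightStep("launch and inspection flight", Role.PILOT),
        FlightStep("recovery and data review", Role.FLIGHT_DIRECTOR),
    ]

    for step in PROTOCOL:
        print(f"{step.responsible.value}: {step.name}")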


International Symposium on Safety, Security, and Rescue Robotics | 2008

Use of Tethered Small Unmanned Aerial System at Berkman Plaza II Collapse

Kevin S. Pratt; Robin R. Murphy; Jennifer L. Burke; Jeff Craighead; Chandler Griffin; Sam Stover

A tethered Small Unmanned Aerial System (sUAS) provided structural forensic inspection of the collapsed Berkman Plaza II six-story parking garage. The sUAS, an iSENSYS IP3 miniature helicopter, was tethered to meet US Federal Aviation Administration (FAA) requirements for unregulated flight below 45 m (150 ft). This created new platform control, human-robot interaction, and safety issues in addition to the challenges posed by the active city environment. A new technique, viewpoint-oriented Cognitive Work Analysis (CWA), was used to generate the 4:1 human-robot crew organization and operational protocol. Over three flights, the sUAS was able to provide useful imagery to structural engineers that had been difficult to obtain from manned helicopters due to dust obscurants. Based on these flights, this work shows that tethered operation decreases team effectiveness, increases overall safety liability, and is in general not a recommended approach for sUAS flight.


International Conference on Multimodal Interfaces | 2006

The benefits of multimodal information: a meta-analysis comparing visual and visual-tactile feedback

Matthew S. Prewett; Liuquin Yang; Frederick R. B. Stilson; Ashley A. Gray; Michael D. Coovert; Jennifer L. Burke; Elizabeth S. Redden; Linda R. Elliot

Information display systems have become increasingly complex and more difficult for human cognition to process effectively. Based upon Wickens' Multiple Resource Theory (MRT), information delivered using multiple modalities (i.e., visual and tactile) could be more effective than communicating the same information through a single modality. The purpose of this meta-analysis is to compare user effectiveness when using visual-tactile task feedback (multimodal) to using only visual task feedback (a single modality). Results indicate that visual-tactile feedback enhances task effectiveness more than visual feedback alone (g = .38). When assessing different criteria, visual-tactile feedback is particularly effective at reducing reaction time (g = .631) and increasing performance (g = .618). Follow-up moderator analyses indicate that visual-tactile feedback is more effective when workload is high (g = .844) and multiple tasks are being performed (g = .767). Implications of the results are discussed in the paper.
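
As an illustration of how pooled g values like those reported above are typically derived, the sketch below implements a basic fixed-effect meta-analytic average using inverse-variance weights. The per-study effect sizes and variances are hypothetical.

    # Fixed-effect meta-analysis: inverse-variance weighted mean of effect sizes.
    # Each study contributes (g, variance); smaller variance -> larger weight.
    def pooled_effect(studies):
        weights = [1.0 / var for _, var in studies]
        g_bar = sum(w * g for (g, _), w in zip(studies, weights)) / sum(weights)
        return g_bar

    # Hypothetical per-study Hedges' g values and variances
    studies = [(0.45, 0.04), (0.30, 0.09), (0.42, 0.02)]
    print(f"pooled g = {pooled_effect(studies):.2f}")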


Collaboration Technologies and Systems | 2007

Psychophysiological experimental design for use in human-robot interaction studies

Cindy L. Bethel; Jennifer L. Burke; Robin R. Murphy; Kristen Salomon

This paper outlines key experimental design issues associated with the use of psychophysiological measures in human-robot interaction (HRI) studies and summarizes related studies. Psychophysiological measurements are one tool for evaluating participants' reactions to a robot with which they are interacting. A brief review of psychophysiology is provided, covering physiological activities and response tendencies, common psychophysiological measures, and the advantages and issues related to psychophysiological measures. Experimental design recommendations are given for the information required from participants before psychophysiological measures are performed, a method to reduce habituation, the post-testing assessment process, determining adequate sample sizes, and testing methods commonly used in HRI studies with recommended electrode placements. Psychophysiological measures should be utilized as part of a multi-faceted approach to experimental design including self-assessments, participant interviews, and/or video-recorded data collection methods over the course of a study. Two or more methods of measurement should be used for convergent validity. Although psychophysiological measures may not be appropriate for all HRI studies, they can provide a valuable tool for evaluating participants' responses when properly incorporated into a multi-faceted experimental design.
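
One recommendation above, determining adequate sample sizes, can be illustrated with a standard a priori power calculation. The sketch below uses the common normal-approximation formula for a two-sample, two-sided comparison; the assumed effect size and the conventional alpha and power values are illustrative, not values from the paper.

    import math
    from statistics import NormalDist

    def n_per_group(effect_size, alpha=0.05, power=0.80):
        """Approximate per-group n for a two-sample, two-sided comparison."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
        z_beta = NormalDist().inv_cdf(power)           # value for desired power
        return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

    # Assumed medium effect (d = 0.5) at alpha = .05 and power = .80
    # gives about 63 participants per group under this approximation.
    print(n_per_group(0.5))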


Robot and Human Interactive Communication | 2005

Robot-assisted medical reachback: using shared visual information

Dawn R. Riddle; Robin R. Murphy; Jennifer L. Burke

Robot-assisted medical reachback (RAMR) involves remote medical personnel conducting operator- and robot-mediated victim assessment in an urban search and rescue environment. A simulated medical reachback exercise was developed to examine RAMR. Key findings suggest it is critical for providers and operators to maintain a shared visual space for developing mental models and facilitating team coordination. Communication analysis across the RAMR task revealed that shared visual information was used approximately half the time to facilitate the development of shared mental models and approximately half the time to facilitate team coordination activities. Future efforts in this research domain must further investigate the use of shared visual information to facilitate shared mental models and team coordination.


Intelligent Robots and Systems | 2008

Validating the Search and Rescue Game Environment as a robot simulator by performing a simulated anomaly detection task

Jeff Craighead; Rodrigo Gutierrez; Jennifer L. Burke; Robin R. Murphy

This paper presents the results of experiments validating the physics and environmental accuracy of a new robot simulation environment, the Search and Rescue Game Environment (SARGE), which is the foundation for a series of robot-operator training games. An ATRV-Jr. outfitted with a SICK laser, GPS, and compass was used, both in the real world and in a simulated environment modeled after the real-world testing location, to perform a simulated anomaly detection task. The ATRV-Jr., controlled by the Distributed Field Robotics Architecture, navigated through a series of waypoints in the environment. The simulated ATRV-Jr. matched the actions of the real ATRV-Jr. in velocity and path similarity to within 0.08 m/s and 0.7 m, respectively.
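
The real-versus-simulated comparison reported here can be sketched as below. The specific error metrics (mean absolute velocity difference and mean point-wise path deviation at matched timestamps) are assumptions, since the abstract does not state how velocity and path similarity were computed.

    import math

    def trajectory_errors(real, sim):
        """Compare time-aligned (x, y, v) samples from real and simulated runs.

        Returns (mean absolute velocity error, mean point-wise path deviation).
        Assumes both logs were resampled to the same timestamps beforehand.
        """
        vel_err = sum(abs(rv - sv)
                      for (_, _, rv), (_, _, sv) in zip(real, sim)) / len(real)
        path_err = sum(math.hypot(rx - sx, ry - sy)
                       for (rx, ry, _), (sx, sy, _) in zip(real, sim)) / len(real)
        return vel_err, path_err

    # Hypothetical three-sample logs: (x [m], y [m], v [m/s])
    real = [(0.0, 0.0, 0.5), (1.0, 0.1, 0.6), (2.0, 0.1, 0.55)]
    sim = [(0.1, 0.0, 0.55), (1.1, 0.2, 0.57), (2.0, 0.3, 0.5)]
    print(trajectory_errors(real, sim))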


International Symposium on Safety, Security, and Rescue Robotics | 2008

A Depth Sensing Display for Bomb Disposal Robots

Brian Day; Cindy L. Bethel; Robin R. Murphy; Jennifer L. Burke

This paper describes a visual display that provides the depth of objects to be grasped; it was developed at the request of a local bomb squad for use with a bomb disposal robot. The display provides four key functions: (1) it allows the operator to extract the distance between the object and the robot's grasper that each pixel represents, (2) it cues the operator when the object is within a predefined distance of the robot's grasper, (3) it can track the object in the video display, and (4) it can continuously display the distance from the robot's grasper to the selected object. The display was designed specifically for the Canesta EP200 mounted on a Remotec mini-max robot, but the display functionality is expected to be useful for any robot grasper used in conjunction with a 3D sensor. While the usability of the visual display and its impact on grasper-related performance have not been formally evaluated, the informal feedback from the subject matter experts is that the display meets their requirements.
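
Below is a minimal sketch of functions (1) and (2), assuming the 3D sensor delivers a per-pixel range image in meters and that the operator has selected the object as a pixel region. The region handling and threshold value are illustrative assumptions, not the display's actual implementation.

    import numpy as np

    GRASP_CUE_THRESHOLD_M = 0.30  # hypothetical "close enough to grasp" distance

    def object_distance(depth_m, region):
        """Distance from the grasper-mounted sensor to the selected object.

        depth_m: HxW range image in meters; region: (row_slice, col_slice)
        covering the tracked object. Uses the median for robustness to noise.
        """
        patch = depth_m[region]
        return float(np.median(patch[patch > 0]))  # ignore invalid zero returns

    def should_cue(depth_m, region):
        """Function (2): cue the operator once the object is within threshold."""
        return object_distance(depth_m, region) <= GRASP_CUE_THRESHOLD_M

    # Hypothetical 4x4 range image with the object roughly 0.28 m away
    depth = np.full((4, 4), 1.5)
    depth[1:3, 1:3] = 0.28
    print(should_cue(depth, (slice(1, 3), slice(1, 3))))  # True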

Collaboration


Dive into Jennifer L. Burke's collaborations.

Top Co-Authors

Michael D. Coovert, University of South Florida

Cindy L. Bethel, University of South Florida

Kensuke Kato, Kyushu University of Health and Welfare

Jeonghye Han, Cheongju National University of Education

Ashley A. Gray, University of South Florida