Publications


Featured research published by Ann M. Virts.


Performance Metrics for Intelligent Systems | 2008

Stepfield pallets: repeatable terrain for evaluating robot mobility

Adam Jacoff; Anthony J. Downs; Ann M. Virts; Elena R. Messina

Stepfield pallets are a fabricated and repeatable terrain for evaluating robot mobility. They were developed to provide emergency responders and robot developers a common mobility challenge that could be easily replicated to capture statistically significant robot performance data. Stepfield pallets have provided robot mobility challenges for the international RoboCupRescue Robot League competitions since 2005 and have proliferated widely for qualification and practice. They are currently being proposed as a standard test apparatus to evaluate robot mobility. This paper describes the origin and design of stepfield pallets, and discusses their use in several proposed standard test methods for response robots.
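
As a rough illustration of how a repeatable apparatus enables statistically significant performance data, the sketch below summarizes repeated traversals of a single stepfield configuration. It is a minimal Python sketch: the (completed, time) trial format, the 95% Wilson interval, and the 30-repetition example are assumptions made here for illustration, not part of the stepfield test method.

```python
# Illustrative only: the paper does not prescribe a specific statistic.
# Trials are assumed to be recorded as (completed, traversal_time_s) pairs.
import math

def summarize_trials(trials: list[tuple[bool, float]], z: float = 1.96):
    """Summarize repeated stepfield traversals: completion rate with a
    Wilson score interval, and mean time over completed runs."""
    n = len(trials)
    successes = sum(1 for ok, _ in trials if ok)
    p = successes / n
    # Wilson score interval for the completion rate.
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    times = [t for ok, t in trials if ok]
    mean_time = sum(times) / len(times) if times else float("nan")
    return {"completion_rate": p,
            "rate_95ci": (max(0.0, center - half), min(1.0, center + half)),
            "mean_time_s": mean_time}

# Example: 30 repetitions on the same pallet configuration.
trials = [(True, 42.0)] * 27 + [(False, 0.0)] * 3
print(summarize_trials(trials))
```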


Performance Metrics for Intelligent Systems | 2009

Evaluating speech translation systems: applying SCORE to TRANSTAC technologies

Craig I. Schlenoff; Gregory A. Sanders; Brian A. Weiss; Frederick M. Proctor; Michelle Potts Steves; Ann M. Virts

The Spoken Language Communication and Translation System for Tactical Use (TRANSTAC) program is a Defense Advanced Research Projects Agency (DARPA) advanced technology research and development program. The goal of the TRANSTAC program is to demonstrate capabilities to rapidly develop and field free-form, two-way translation systems that enable speakers of different languages to communicate with one another in real-world tactical situations without an interpreter. The National Institute of Standards and Technology (NIST), with support from MITRE and Appen Pty Ltd., has been funded to serve as the Independent Evaluation Team (IET) for the TRANSTAC program. The IET is responsible for analyzing the performance of the TRANSTAC systems by designing and executing multiple TRANSTAC evaluations and analyzing their results. To accomplish this, NIST has applied the SCORE (System, Component, and Operationally Relevant Evaluations) framework. SCORE is a unified set of criteria and software tools for defining a performance evaluation approach for complex intelligent systems. It provides a comprehensive evaluation blueprint that assesses the technical performance of a system and its components by isolating variables, as well as capturing the end-user utility of the system in realistic use-case environments. This document describes the TRANSTAC program and explains how the SCORE framework was applied to assess the technical and utility performance of the TRANSTAC systems.
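
To make the component-level side of SCORE concrete, here is a hedged sketch of word error rate (WER), a standard metric for the speech-recognition stage of a translation system. The paper does not state the IET's exact metrics; this only illustrates the kind of isolated-variable measurement that SCORE's component level calls for.

```python
# Hedged illustration: WER via word-level Levenshtein distance.
# Not a reproduction of the TRANSTAC evaluation's actual scoring.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Edit distance over words (substitutions, insertions, deletions).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(1, len(ref))

print(word_error_rate("move the convoy to the checkpoint",
                      "move convoy to the check point"))  # 3 edits / 6 words
```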


Field and Service Robotics | 2014

Advancing the State of Urban Search and Rescue Robotics Through the RoboCupRescue Robot League Competition

Raymond Sheh; Adam Jacoff; Ann M. Virts; Tetsuya Kimura; Johannes Pellenz; Sören Schwertfeger; Jackrit Suthakorn

The RoboCupRescue Robot League is an international competition that has grown to be an effective driver for the dissemination of solutions to the challenges posed by Urban Search and Rescue Robotics, and has accelerated the development of the performance standards that are crucial to the widespread, effective deployment of robotic systems for these applications. In this paper, we discuss how this competition has come to be more than simply a venue where teams compete to find a champion and is now “A League of Teams with one goal: to Develop and Demonstrate Advanced Robotic Capabilities for Emergency Responders.”


Journal of Field Robotics | 2007

Applying SCORE to field-based performance evaluations of soldier worn sensor technologies

Craig I. Schlenoff; Michelle Potts Steves; Brian A. Weiss; Michael O. Shneier; Ann M. Virts

Soldiers are often asked to perform missions that last many hours and are extremely stressful. After a mission is complete, the soldiers are typically asked to provide a report describing the most important things that happened during the mission. Due to the various stresses associated with military missions, there are undoubtedly many instances in which important information is missed or not reported and, therefore, not available for use when planning future missions. The ASSIST (Advanced Soldier Sensor Information System and Sensors Technology) program is addressing this challenge by instrumenting soldiers with sensors that they can wear directly on their uniforms. During the mission, the sensors continuously record what is going on around the soldier. With this information, soldiers are able to give more accurate reports without relying solely on their memory. In order for systems like this (often termed autonomous or intelligent systems) to be successful, they must be comprehensively and quantitatively evaluated to ensure that they will function appropriately and as expected in a wartime environment. The primary contribution of this paper is to introduce and define a framework and approach to performance evaluation called SCORE (System, Component, and Operationally Relevant Evaluation) and describe the results of applying it to evaluate the ASSIST technology. As the name implies, SCORE is built around the premise that, in order to get a true picture of how a system performs in the field, it must be evaluated at the component level, the system level, and in operationally relevant environments. The SCORE framework provides proven techniques to aid in the performance evaluation of many types of intelligent systems. To date, SCORE has only been applied to technologies under development (formative evaluation), but the authors believe that this approach would lend itself equally well to the evaluation of technologies ready to be fielded (summative evaluation).
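
The premise above, that a system must be evaluated at the component level, the system level, and in operationally relevant environments, can be pictured as a simple data structure. This is a minimal sketch assuming only the three levels named in the paper; the field names and the naive averaging are invented for the example and are not part of the published framework.

```python
# Minimal sketch of SCORE's three evaluation levels; illustrative only.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class ScoreEvaluation:
    component: dict[str, float] = field(default_factory=dict)    # isolated metrics
    system: dict[str, float] = field(default_factory=dict)       # end-to-end metrics
    operational: dict[str, float] = field(default_factory=dict)  # field-exercise utility

    def report(self) -> dict[str, float]:
        """Roll each level up to a single summary number (illustrative)."""
        return {level: mean(values.values()) if values else float("nan")
                for level, values in (("component", self.component),
                                      ("system", self.system),
                                      ("operational", self.operational))}

# Hypothetical metric names, for illustration only.
ev = ScoreEvaluation(
    component={"video_capture_uptime": 0.98, "gps_fix_rate": 0.91},
    system={"report_completeness": 0.74},
    operational={"user_utility_rating": 0.80},
)
print(ev.report())
```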


Performance Metrics for Intelligent Systems | 2010

Comprehensive standard test suites for the performance evaluation of mobile robots

Adam Jacoff; Hui-Min Huang; Elena R. Messina; Ann M. Virts; Anthony J. Downs

Robots must possess certain sets of capabilities to suit critical operations such as emergency response. For the mobility function, ground robots must be able to handle many types of obstacles and terrain complexities, including traversing and negotiating positive and negative obstacles, various types of floor surfaces or terrains, and confined passageways. Additional mobility requirements include the ability to sustain specified speeds and to tow payloads of different weights. Standard test methods are required to evaluate how well candidate robots meet these requirements. A set of test methods focused on evaluating the mobility function has been collected into a test suite. Likewise, corresponding test suites are required for other functions such as sensing, communication, manipulation, energy/power, Human-System Interaction (HSI), logistics, and safety. Also needed are test suites for aerial and aquatic robots. Under the sponsorship of the U.S. Department of Homeland Security (DHS), NIST researchers are collaborating with others to establish such a collection of test suites under the standards development organization ASTM International. Apparatuses must be set up to challenge specific robot capabilities in repeatable ways to facilitate direct comparison of different robot models, as well as of particular configurations of similar robot models.
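
One way to picture such a collection is as a configuration that groups test methods by function, as in the sketch below. The apparatus names and repetition counts are placeholders chosen for illustration, not ASTM specifications.

```python
# Hypothetical test-suite layout, grouped by the functions named above.
TEST_SUITES = {
    "mobility": [
        {"apparatus": "continuous pitch/roll ramps", "repetitions": 10},
        {"apparatus": "confined passageway",         "repetitions": 10},
        {"apparatus": "towing: graduated payloads",  "repetitions": 5},
    ],
    "sensing":      [{"apparatus": "visual acuity charts", "repetitions": 5}],
    "manipulation": [{"apparatus": "shelf reach and grasp", "repetitions": 10}],
}

def total_trials(suites: dict) -> int:
    """Count the trials one robot configuration must run across all suites."""
    return sum(t["repetitions"] for tests in suites.values() for t in tests)

print(total_trials(TEST_SUITES))  # 40
```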


Performance Metrics for Intelligent Systems | 2012

Using competitions to advance the development of standard test methods for response robots

Adam Jacoff; Raymond Sheh; Ann M. Virts; Tetsuya Kimura; Johannes Pellenz; Sören Schwertfeger; Jackrit Suthakorn

Competitions are an effective aid to the development and dissemination of standard test methods, especially in rapidly developing fields with a wide variety of requirements and capabilities, such as Urban Search and Rescue robotics. By exposing the development process to highly developmental systems that push the boundaries of current capabilities, it is possible to gain an insight into how the test methods will respond to the robots of the future. The competition setting also allows for the rapid iterative refinement of the test methods and apparatuses in response to new developments. For the research community, introducing the concepts behind the test methods at the research and development stage can also help to guide their work towards the operationally relevant requirements embodied by the test methods and apparatuses. This also aids in the dissemination of the test methods themselves as teams fabricate them in their own laboratories and re-use them in work outside the competition. In this paper, we discuss how international competitions, and in particular the RoboCupRescue Robot League competition, have played a crucial role in the development of standard test methods for response robots as part of the ASTM International Committee on Homeland Security Applications; Operational Equipment; Robots (E54.08.01). We also discuss how the competition has helped to drive a vibrant robot developer community towards solutions that are relevant to first responders.


Performance Metrics for Intelligent Systems | 2009

Performance measurements for evaluating static and dynamic multiple human detection and tracking systems in unstructured environments

Barry A. Bodt; Richard Camden; Harry A. Scott; Adam Jacoff; Tsai Hong; Tommy Chang; Rick Norcross; Tony Downs; Ann M. Virts

The Army Research Laboratory (ARL) Robotics Collaborative Technology Alliance (CTA) conducted an assessment and evaluation of multiple algorithms for real-time detection of pedestrians in Laser Detection and Ranging (LADAR) and video sensor data taken from a moving platform. The algorithms were developed by Robotics CTA members and then assessed in field experiments jointly conducted by the National Institute of Standards and Technology (NIST) and ARL. A robust, accurate and independent pedestrian tracking system was developed to provide ground truth. The ground truth was used to evaluate the CTA member algorithms for uncertainty and error in their results. A real-time display system was used to provide early detection of errors in data collection.
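
A common way to score detections against an independent ground-truth tracker is nearest-neighbor matching within a distance gate, followed by precision and recall. The sketch below assumes per-frame (x, y) positions in meters and a 1.0 m gate; it is an illustrative convention and does not reproduce the ARL/NIST analysis.

```python
# Sketch: greedy nearest-neighbor matching of detections to ground truth.
import math

def match_frame(detections, ground_truth, gate_m=1.0):
    """Return (true_positives, false_positives, misses) for one frame.
    Both inputs are lists of (x, y) positions in meters."""
    unmatched_gt = list(ground_truth)
    tp = 0
    for dx, dy in detections:
        best = min(unmatched_gt,
                   key=lambda g: math.hypot(g[0] - dx, g[1] - dy),
                   default=None)
        if best and math.hypot(best[0] - dx, best[1] - dy) <= gate_m:
            unmatched_gt.remove(best)  # each ground-truth target matches once
            tp += 1
    fp = len(detections) - tp
    return tp, fp, len(unmatched_gt)

tp, fp, miss = match_frame([(1.0, 2.1), (5.0, 5.0)], [(1.0, 2.0), (8.0, 8.0)])
print(f"precision={tp/(tp+fp):.2f} recall={tp/(tp+miss):.2f}")
```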


Performance Metrics for Intelligent Systems | 2012

Emergency response robot evaluation exercise

Adam Jacoff; Hui-Min Huang; Ann M. Virts; Anthony J. Downs; Raymond Sheh

More than 60 robot test methods are being developed by a team led by the National Institute of Standards and Technology (NIST) under the sponsorship of the U.S. Department of Homeland Security (DHS). These test methods are being specified and standardized under the standards development organization ASTM International. The standards are developed to identify the capabilities of mobile robots so that emergency response organizations can assess the robots' applicability. The test methods are developed using an iterative process during which they are prototyped and validated by the participating researchers, developers, emergency response users, and robot manufacturers. We have conducted a series of evaluation exercises based on the test method implementations, with participants representing all the different segments of this community. As such, these events present a unique opportunity for advancing the test methods, collecting capability data, and identifying issues on which robotic technology development should focus. This paper describes a recent exercise event conducted as part of this effort.


Test Methods and Knowledge Representation for Urban Search and Rescue Robots | 2007

Test Methods and Knowledge Representation for Urban Search and Rescue Robots

Craig I. Schlenoff; Elena R. Messina; Alan M. Lytle; Brian A. Weiss; Ann M. Virts

Urban Search and Rescue (USAR) is defined as “the strategy, tactics, and operations for locating, providing medical treatment, and extrication of entrapped victims” (Federal Emergency Management Agency 2000). USAR teams exist at the national, state, and local levels. At the national level, the Federal Emergency Management Agency (FEMA), part of the Department of Homeland Security, has Task Forces that respond to major disasters. Applying robots to USAR entails many challenges across diverse disciplines. Examples include range and penetration limitations for the wireless radio signals that send commands to the robots from the operator control station, the ability of the platforms to withstand moisture, dust, and other contaminants, and the resolution of onboard navigation cameras.


International Symposium on Safety, Security, and Rescue Robotics | 2017

Events for the application of measurement science to evaluate ground, aerial, and aquatic robots

Adam Jacoff; Richard Candell; Anthony J. Downs; Hui-Min Huang; Kenneth Kimble; Kamel S. Saidi; Raymond Sheh; Ann M. Virts

This paper reports on three measurement science field exercises for evaluating ground, aerial, and aquatic robots. These events, held from February to June 2017, were conducted in close coordination with the responder community, standards organizations, manufacturers, and academia. Test data from a wide variety of robot platforms were gathered across standard and prototypical test methods ranging from mobility and manipulation to sensors and endurance.

Collaboration


Dive into Ann M. Virts's collaborations.

Top Co-Authors

Adam Jacoff, National Institute of Standards and Technology
Brian A. Weiss, National Institute of Standards and Technology
Craig I. Schlenoff, National Institute of Standards and Technology
Anthony J. Downs, National Institute of Standards and Technology
Hui-Min Huang, National Institute of Standards and Technology
Michelle Potts Steves, National Institute of Standards and Technology
Elena R. Messina, National Institute of Standards and Technology
Michael O. Shneier, National Institute of Standards and Technology
Raymond Sheh, National Institute of Standards and Technology
Kamel S. Saidi, National Institute of Standards and Technology