
Publications


Featured research published by Adam Norton.


Journal of Field Robotics | 2015

Analysis of Human-Robot Interaction at the DARPA Robotics Challenge Trials

Holly A. Yanco; Adam Norton; Willard Ober; David Shane; Anna Skinner; Jack Maxwell Vice

In December 2013, the Defense Advanced Research Projects Agency (DARPA) Robotics Challenge (DRC) Trials were held in Homestead, Florida. The DRC Trials were designed to test the capabilities of humanoid robots in disaster response scenarios with degraded communications. Each team created its own interaction method to control its robot, either the Boston Dynamics Atlas robot or a robot built by the team itself. Of the 15 competing teams, eight participated in our study of human-robot interaction. We observed the participating teams both in the field with the robot and in the control room with the operators, noting many performance metrics, such as critical incidents and utterances, and categorizing their interaction methods according to the number of operators, control methods, and amount of interaction. We decomposed each task into a series of subtasks, different from the DRC Trials' official subtasks for points, to gain a better understanding of each team's performance across varying complexities of mobility and manipulation. Each team's interaction methods have been compared to their performance, and correlations have been analyzed to understand why some teams ranked higher than others. We discuss lessons learned from this study; in general, we have found that the guidelines for human-robot interaction with unmanned ground vehicles still hold true: more sensor fusion, fewer operators, and more automation lead to better performance.
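The kind of correlation analysis the abstract describes, relating a team's interaction characteristics to its performance, can be sketched minimally as below. The team counts and scores are invented for illustration, not data from the study.

```python
# Toy sketch: correlate number of operators per team with task score.
# All numbers here are hypothetical, not the DRC study's data.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

operators = [1, 2, 3, 4, 5]      # operators per team (hypothetical)
scores    = [20, 18, 14, 10, 6]  # points earned (hypothetical)

r = pearson(operators, scores)
print(round(r, 3))  # strongly negative: fewer operators, better score
```

A strongly negative coefficient on such data would reflect the paper's finding that fewer operators tends to accompany better performance.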


2011 IEEE Conference on Technologies for Practical Robot Applications | 2011

Hand and finger registration for multi-touch joysticks on software-based operator control units

Mark Micire; Eric McCann; Munjal Desai; Katherine M. Tsui; Adam Norton; Holly A. Yanco

Robot control typically requires many physical joysticks, buttons, and switches. Taking inspiration from video game controllers, we have created a Dynamically Resizing, Ergonomic, and Multi-touch (DREAM) controller to allow for the development of a software-based operator control unit (SoftOCU). The DREAM Controller is created wherever a person places his or her hand; thus we needed to develop an algorithm for accurate hand and finger registration. Tested with a set of 405 hands from 62 users, our algorithm correctly identified 97% of the hands.
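The registration problem above can be illustrated with a toy heuristic: among five touch points from one hand, guess which is the thumb. The paper's actual algorithm is more sophisticated; the farthest-from-centroid rule and the coordinates below are invented for illustration only.

```python
from math import hypot

# Toy sketch of hand/finger registration: pick the thumb as the touch
# point farthest from the centroid of all five contacts. This is an
# illustrative heuristic, not the DREAM Controller's algorithm.

def find_thumb(points):
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return max(points, key=lambda p: hypot(p[0] - cx, p[1] - cy))

# Four fingertips in a loose arc plus a thumb set well apart:
touches = [(2, 10), (4, 11), (6, 11), (8, 10), (9, 2)]
print(find_thumb(touches))  # (9, 2)
```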


The International Journal of Robotics Research | 2017

Analysis of human–robot interaction at the DARPA Robotics Challenge Finals

Adam Norton; Willard Ober; Lisa Baraniecki; Eric McCann; Jean Scholtz; David Shane; Anna Skinner; Robert Watson; Holly A. Yanco

In June 2015, the Defense Advanced Research Projects Agency (DARPA) Robotics Challenge (DRC) Finals were held in Pomona, California. The DRC Finals served as the third phase of the program designed to test the capabilities of semi-autonomous, remote humanoid robots to perform disaster response tasks with degraded communications. All competition teams were responsible for developing their own interaction method to control their robot. Of the 23 teams in the competition, 20 consented to participate in this study of human–robot interaction (HRI). The evaluation team observed the consenting teams during task execution in their control rooms (with the operators), and all 23 teams were observed on the field during the public event (with the robot). A variety of data were collected both before the competition and on-site. Each participating team’s interaction methods were distilled into a set of characteristics pertaining to the robot, operator strategies, control methods, and sensor fusion. Each task was decomposed into subtasks that were classified according to the complexity of the mobility and/or manipulation actions being performed. Performance metrics were calculated regarding the number of task attempts, performance time, and critical incidents, which were then correlated to each team’s interaction methods. The results of this analysis suggest that a combination of HRI characteristics, including balancing the capabilities of the operator with those of the robot and multiple sensor fusion instances with variable reference frames, positively impacted task performance. A set of guidelines for designing HRI with remote, semi-autonomous humanoid robots is proposed based on these results.


Archive | 2016

Preliminary Development of Test Methods to Evaluate Lower Body Wearable Robots for Human Performance Augmentation

B. Carlson; Adam Norton; Holly A. Yanco

Wearable robots are prevalent in the medical domain for prosthetic and rehabilitation uses, and systems for performance augmentation of able-bodied people in the industrial and military domains are also on the rise. Some common metrics exist for evaluating these systems, such as metabolic cost, but they are incomplete with regard to the many other characteristics to be compared between systems. To this end, we are developing holistic test methods, specifically for lower body wearable robots focused on performance augmentation. We discuss the test methods' structure, considerations, and development. Prototypes of the test methods have been exercised with a user wearing a B-Temia Dermoskeleton system. Our initial development has led to a baseline set of basic and applied tasks that can be evaluated comparatively between performing the task without and with the system, measuring simple task-based metrics based on time, repetitions, loading capacity, and range of motion. Future work includes exercising the test methods with more wearable robotic systems and forming a working task group to help drive development.
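The without/with comparison described above can be sketched as a simple percent-change computation per task metric. The metric names and numbers below are hypothetical, not measurements from the Dermoskeleton trials.

```python
# Illustrative sketch of a comparative without/with-system evaluation.
# All metric names and values are invented for illustration.

def percent_change(baseline, with_system):
    """Percent change of a metric when performed with the system."""
    return 100.0 * (with_system - baseline) / baseline

trials = {
    # metric: (without system, with system)
    "carry_time_s":      (95.0, 80.0),
    "squat_repetitions": (20.0, 28.0),
    "load_capacity_kg":  (25.0, 40.0),
}

for metric, (without, with_sys) in trials.items():
    print(f"{metric}: {percent_change(without, with_sys):+.1f}%")
```

Signed percent change keeps the comparison direction-agnostic: a negative value on a time metric and a positive value on a capacity metric can both indicate benefit.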


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2018

The Effect of a Powered Lower-Body Exoskeleton on Physical and Cognitive Warfighter Performance

Blake Bequette; Adam Norton; Eric Jones; Leia Stirling

This study analyzed the performance of twelve military members in a simulated, fatigue-inducing patrol task under three conditions: wearing a powered exoskeleton, wearing an unpowered exoskeleton, and wearing no exoskeleton. While walking with weight at a prescribed pace over obstacles and following a confederate, participants performed a dual-task cognitive test in which they answered radio calls and visually scanned for lighted targets. Cognitive load was varied through the secondary radio task and measured with a visual reaction time test; both physical and cognitive load were varied throughout the test. For this paper, the dependent measures of interest were reaction time for the visual task and lag time behind the confederate. Significant differences and interactions were found in the visual reaction time among the exoskeleton conditions, physical loads, and cognitive loads. Significant differences and interactions were also found for the lag time of the subject behind their prescribed pace, and for the variability of this lag time. Both measures showed significant interactions with subject. Future work should examine which design features of the exoskeleton and capabilities of the human are related to these variabilities. An understanding of subject variability can lead to improvements in integrated exoskeleton design.
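The dependent measures described above reduce to per-condition summary statistics: mean reaction time and the variability of lag. A minimal sketch, with all values invented for illustration:

```python
from statistics import mean, stdev

# Toy sketch of the per-condition analysis: mean visual reaction time
# and its spread for each exoskeleton condition. Values are invented.

reaction_ms = {
    "powered":   [410, 450, 430, 470],
    "unpowered": [480, 510, 500, 530],
    "none":      [400, 420, 410, 430],
}

for condition, samples in reaction_ms.items():
    print(condition, mean(samples), round(stdev(samples), 1))
```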


Archive | 2018

Perspectives on Human-Robot Team Performance from an Evaluation of the DARPA Robotics Challenge

Adam Norton; Willard Ober; Lisa Baraniecki; David Shane; Anna Skinner; Holly A. Yanco

The DARPA Robotics Challenge (DRC) was a competition designed to advance the capabilities of remotely teleoperated semi-autonomous humanoid robots performing in a disaster response scenario with degraded communications. Throughout the DRC, our evaluation team conducted two studies of human-robot interaction (HRI) for the Trials and Finals competitions. From these studies, we have generated recommendations and design guidelines for HRI with remote, semi-autonomous humanoids, but our findings also have implications outside of the competition’s domain. In this article, we discuss our perspectives on effective and ineffective human-robot teams based upon our experiences at the DRC. We consider the impact of various interfacing and control techniques, the effect of versatile robot design on task performance, and the operational context under which these factors work together to function in a human-centric environment. We use these underlying components of HRI to review how the advancements made at the DRC can be applied to present day robot applications and key capabilities for effective human-robot teams in the future.


Archive | 2016

Preliminary Development of a Test Method for Obstacle Detection and Avoidance in Industrial Environments

Adam Norton; Holly A. Yanco

There is currently no standard method for comparing autonomous capabilities between systems. We propose a test method for evaluating an automated mobile system’s ability to detect and avoid obstacles, specifically those in an industrial environment. To this end, a taxonomy is being generated to determine the relevant physical characteristics of obstacles so that they can be accurately represented in the test method. Our preliminary development includes the design of an apparatus, props, procedures, and metrics. We have fabricated a series of obstacle test props that reflect a variety of physical characteristics and performed a series of tests with a small mobile robot towards validation of the test method. Future work includes expanding the taxonomy, designing more obstacle test props, collecting test data with more AGVs and robots, and formalizing our work as a potential standard test method through the ASTM F45 Committee on Driverless Automatic Guided Industrial Vehicles, specifically F45.03 Object Detection and Protection.
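One simple way such a test method could score trials is a pass rate over repeated runs, where a trial passes only if the vehicle detected the obstacle and kept a minimum standoff distance. The trial records and threshold below are hypothetical, not the proposed standard's metrics.

```python
# Hypothetical pass/fail scoring for an obstacle detection-and-avoidance
# test. The records and the 0.5 m threshold are invented for illustration.

def score_trials(trials, min_stop_distance_m=0.5):
    """Fraction of trials where the obstacle was detected and the
    vehicle stopped (or diverted) no closer than the minimum distance."""
    passed = sum(
        1 for t in trials
        if t["detected"] and t["stop_distance_m"] >= min_stop_distance_m
    )
    return passed / len(trials)

trials = [
    {"detected": True,  "stop_distance_m": 0.9},
    {"detected": True,  "stop_distance_m": 0.4},   # too close
    {"detected": False, "stop_distance_m": 0.0},   # missed obstacle
    {"detected": True,  "stop_distance_m": 0.7},
]
print(score_trials(trials))  # 0.5
```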


Technical Symposium on Computer Science Education | 2014

Artbotics with Lego Mindstorms (abstract only)

Adam Norton; Holly A. Yanco

This workshop introduces participants to the Artbotics program, which combines art and robotics to teach students about computer science while creating kinetic, interactive sculptures. The material covered will be provided in introductory fashion, requiring no prior experience with computer science, art, or robotics. The Lego Mindstorms NXT platform will be used to create two projects during the workshop: a spirograph-like drawing produced by programming a car holding a marker to drive using a sequence of motor movements (teaching the need for looping in programming) and an interactive, kinetic sculpture that reacts to sensor input (teaching the need for decisions in programming and building simple mechanisms). Examples of both projects can be seen at youtube.com/artbotics. The workshop will end with a short discussion of lessons learned and best practices, using examples from previous Artbotics programs for a variety of ages. Topics will include appropriate time frames, how to best use limited resources, and appropriate levels of depth for each age group. The workshop administrators will be providing laptops with the proper Lego Mindstorms NXT software, Lego Mindstorms NXT kits, and all needed building materials.
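The looping lesson behind the spirograph project can be sketched in plain Python (not the NXT environment); the drive/turn commands below are stand-ins for NXT motor blocks, invented for illustration.

```python
# Illustrative sketch of the spirograph project's looping idea: repeat
# a short drive sequence, turning each pass, so the marker traces a
# pattern. "drive"/"turn" are stand-ins for NXT motor blocks.

def spirograph(sides=12, turn_deg=150):
    path = []
    for _ in range(sides):            # looping replaces copy-pasting
        path.append(("drive", 20))    # drive forward 20 cm
        path.append(("turn", turn_deg))
    return path

plan = spirograph()
print(len(plan))  # 24 commands: 12 drive + 12 turn
```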


Human-Robot Interaction | 2012

Situation understanding bot through language and environment

Daniel J. Brooks; Cameron Finucane; Adam Norton; Constantine Lignos; Vasumathi Raman; Hadas Kress-Gazit; Mikhail S. Medvedev; Ian Perera; Abraham Shultz; Sean McSheehy; Mitch Marcus; Holly A. Yanco

This video shows a demonstration of a fully autonomous robot, an iRobot ATRV-JR, which can be given commands using natural language. Users type commands to the robot on a tablet computer, which are then parsed and processed using semantic analysis. This information is used to build a plan representing the high level autonomous behaviors the robot should perform [2] [1]. The robot can be given commands to be executed immediately (e.g., “Search the floor for hostages.”) as well as standing orders for use over the entire run (e.g., “Let me know if you see any bombs.”). In the scenario shown in the video, the robot is asked to identify and defuse bombs, as well as to report if it finds any hostages or bad guys. Users can also query the robot through this interface. The robot conveys information to the user through text and a graphical interface on a tablet computer. The system can add icons to the map displayed and highlight areas of the map to convey concepts such as “I am here.” The video contains segments taken from a continuous 20 minute long run, shown at 4× speed. This work is a demonstration of a larger project called Situation Understanding Bot Through Language and Environment (SUBTLE). For more information, see www.subtlebot.org.
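The split the demo describes between immediate commands and standing orders can be illustrated with a toy dispatcher. The real SUBTLE system performs full semantic parsing; this keyword matcher and the behavior names are invented for illustration.

```python
# Toy sketch: route typed commands to either an immediate behavior or a
# standing order. Keyword matching and behavior names are hypothetical;
# the actual system uses semantic parsing.

IMMEDIATE = {"search": "explore_floor", "defuse": "defuse_bomb"}
STANDING  = {"let me know if": "report_on_sight"}

def parse(command):
    text = command.lower()
    for phrase, behavior in STANDING.items():
        if phrase in text:
            return ("standing_order", behavior)
    for verb, behavior in IMMEDIATE.items():
        if text.startswith(verb):
            return ("immediate", behavior)
    return ("unknown", None)

print(parse("Search the floor for hostages."))
print(parse("Let me know if you see any bombs."))
```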


Archive | 2011

Hand and finger registration for control applications

Eric McCann; Mark Micire; Holly A. Yanco; Adam Norton

Collaboration


Dive into Adam Norton's collaborations.

Top Co-Authors

Holly A. Yanco, University of Massachusetts Lowell
Eric McCann, University of Massachusetts Lowell
Katherine M. Tsui, University of Massachusetts Lowell
Daniel J. Brooks, University of Massachusetts Lowell
Mark Micire, University of Massachusetts Lowell
Mikhail S. Medvedev, University of Massachusetts Lowell
Munjal Desai, University of Massachusetts Lowell
Abraham Shultz, University of Massachusetts Lowell
Alberto Rodriguez, Massachusetts Institute of Technology