Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Christopher M. Reardon is active.

Publication


Featured research published by Christopher M. Reardon.


IEEE Transactions on Systems, Man, and Cybernetics | 2013

Real-Time Multiple Human Perception With Color-Depth Cameras on a Mobile Robot

Hao Zhang; Christopher M. Reardon; Lynne E. Parker

The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive humans in 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce a novel information concept, depth of interest, which we use to identify candidates for detection while avoiding the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to reduce computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, nonupright humans, humans leaving and reentering the field of view (i.e., the reidentification challenge), and human-object and human-human interaction.
We conclude that by incorporating depth information and applying modern techniques in new ways, we are able to create an accurate system for real-time 3-D perception of humans by a mobile robot.
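The pipeline's first stage, removing the ground and ceiling planes so that candidate clusters separate, can be illustrated with a minimal sketch. This is not the paper's implementation: the height thresholds, the simple z-cut in place of true plane fitting, and the function names are our own assumptions.

```python
import numpy as np

def remove_planes(cloud, ground_z=0.1, ceiling_z=2.4):
    """Drop points near the floor and ceiling (a simple height cut;
    the paper removes fitted planes, which also handles tilt)."""
    z = cloud[:, 2]
    return cloud[(z > ground_z) & (z < ceiling_z)]

def euclidean_cluster(points, eps=0.5):
    """Label points by single-linkage clustering: points within eps
    of each other (transitively) share one cluster id."""
    labels = -np.ones(len(points), dtype=int)
    next_id = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = next_id
        frontier = [seed]
        while frontier:
            j = frontier.pop()
            dists = np.linalg.norm(points - points[j], axis=1)
            nearby = np.where((dists < eps) & (labels == -1))[0]
            labels[nearby] = next_id
            frontier.extend(nearby.tolist())
        next_id += 1
    return labels
```

In practice a RANSAC plane fit (e.g., as provided by PCL) would replace the fixed z-cut, since the floor is rarely axis-aligned while the robot moves.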


international conference on robotics and automation | 2015

Adaptive human-centered representation for activity recognition of multiple individuals from 3D point cloud sequences

Hao Zhang; Christopher M. Reardon; Chi Zhang; Lynne E. Parker

Activity recognition of multiple individuals (ARMI) within a group, which is essential to practical human-centered robotics applications such as childhood education, is a particularly challenging and previously understudied problem. We present a novel adaptive human-centered (AdHuC) representation based on local spatio-temporal (LST) features to address ARMI in a sequence of 3D point clouds. Our human-centered detector constructs affiliation regions to associate LST features with humans by mining depth data and using a cascade of rejectors to localize humans in 3D space. Then, features are detected within each affiliation region, which avoids extracting irrelevant features from dynamic background clutter and addresses moving cameras on mobile robots. Our feature descriptor is able to adapt its support region to linear perspective view variations and encode multi-channel information (i.e., color and depth) to construct the final representation. Empirical studies validate that the AdHuC representation obtains promising performance on ARMI using a Meka humanoid robot to play multi-person Simon Says games. Experiments on benchmark datasets further demonstrate that our adaptive human-centered representation outperforms previous approaches for activity recognition from color-depth data.


international conference on robotics and automation | 2009

Using critical junctures and environmentally-dependent information for management of tightly-coupled cooperation in heterogeneous robot teams

Lynne E. Parker; Christopher M. Reardon; Heeten Choxi; Cortney L. Bolden

This paper addresses the challenge of forming appropriate heterogeneous robot teams to solve tightly-coupled, potentially multi-robot tasks, in which the robot capabilities may vary over the environment in which the task is being performed. Rather than making use of a permanent tightly-coupled robot team for performing the task, our approach aims to recognize when tight coupling is needed, and then only form tight cooperative teams at those times. This results in important cost savings, since coordination is only used when the independent operation of the team members would put mission success at risk. Our approach is to define a new semantic information type, called environmentally-dependent information, which allows us to capture certain environmentally-dependent perceptual constraints on vehicle capabilities. We define locations at which the robot team must transition between tight and weak cooperation as critical junctures. Note that these critical juncture points are a function of the robot team capabilities and the environmental characteristics, and are not due to a change in the task itself. We calculate critical juncture points by making use of our prior ASyMTRe approach, which can automatically configure heterogeneous robot team solutions to enable sharing of sensory capabilities across robots. We demonstrate these concepts in experiments involving a human-controlled blimp and an autonomous ground robot in a target localization task.


international symposium on safety, security, and rescue robotics | 2017

Optimizing autonomous surveillance route solutions from minimal human-robot interaction

Christopher M. Reardon; Fei Han; Hao Zhang; Jonathan Fink

Resource-constrained surveillance tasks represent a promising domain for autonomous robotic systems in a variety of real-world applications. In particular, we consider tasks where the system must maximize the probability of detecting a target while traversing an environment subject to resource constraints that make full coverage infeasible. Accurate knowledge of the underlying distribution of the surveillance targets is essential for good performance, but this is typically not available to robots. To successfully address surveillance route planning in human-robot teams, the design and optimization of human-robot interaction is critical. Further, in human-robot teaming, the human often possesses essential knowledge of the mission, environment, or other agents. In this paper, we introduce a new approach named Human-robot Autonomous Route Planning (HARP) that explores the space of surveillance solutions to maximize task performance using information provided through interactions with humans. Experimental results have shown that with minimal interaction, we can successfully leverage human knowledge to create more successful surveillance routes under resource constraints.
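The abstract does not give HARP's optimization; as a rough illustration of budget-constrained route selection, a greedy heuristic that repeatedly visits the unvisited site with the best detection-probability-per-distance ratio can be sketched. The site layout, names, and the ratio rule below are our assumptions, not HARP itself; human input could then enter by rescaling the per-site probabilities before planning.

```python
import math

def greedy_route(sites, probs, budget, start=(0.0, 0.0)):
    """Greedily build a surveillance route under a travel budget: at each
    step visit the still-affordable site with the highest
    detection-probability-per-distance ratio."""
    route, pos, spent = [], start, 0.0
    remaining = set(range(len(sites)))
    while remaining:
        feasible = [i for i in remaining
                    if spent + math.dist(pos, sites[i]) <= budget]
        if not feasible:
            break  # no remaining site fits in the budget
        best = max(feasible,
                   key=lambda i: probs[i] / (math.dist(pos, sites[i]) + 1e-9))
        spent += math.dist(pos, sites[best])
        pos = sites[best]
        route.append(best)
        remaining.remove(best)
    return route, spent
```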


international conference on robotics and automation | 2017

Simultaneous Feature and Body-Part Learning for real-time robot awareness of human behaviors

Fei Han; Xue Yang; Christopher M. Reardon; Yu Zhang; Hao Zhang

Robot awareness of human actions is an essential research problem in robotics with many important real-world applications, including human-robot collaboration and teaming. Over the past few years, depth sensors have become a standard device widely used by intelligent robots for 3D perception, which can also offer human skeletal data in 3D space. Several methods based on skeletal data were designed to enable robot awareness of human actions with satisfactory accuracy. However, previous methods treated all body parts and features as equally important, without the capability to identify discriminative body parts and features. In this paper, we propose a novel simultaneous Feature And Body-part Learning (FABL) approach that simultaneously identifies discriminative body parts and features, and efficiently integrates all available information together to enable real-time robot awareness of human behaviors. We formulate FABL as a regression-like optimization problem with structured sparsity-inducing norms to model interrelationships of body parts and features. We also develop an optimization algorithm to solve the formulated problem, which possesses a theoretical guarantee to find the optimal solution. To evaluate FABL, three experiments were performed using public benchmark datasets, including the MSR Action3D and CAD-60 datasets, as well as a Baxter robot in practical assistive living applications. Experimental results show that our FABL approach obtains a high recognition accuracy with a processing speed on the order of 10^1 Hz, which makes FABL a promising method to enable real-time robot awareness of human behaviors in practical robotics applications.
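The abstract describes FABL as a regression-like problem with structured sparsity-inducing norms over body parts and features. A generic proximal-gradient sketch of group-sparse regression (group lasso) conveys the flavor; the squared loss, the groups, and all parameters here are illustrative assumptions, not the paper's formulation or its optimization algorithm.

```python
import numpy as np

def group_soft_threshold(w, groups, lam):
    """Proximal operator of the group (L2,1) norm: shrink each group's
    weight vector toward zero, zeroing weakly supported groups."""
    w = w.copy()
    for g in groups:
        norm = np.linalg.norm(w[g])
        w[g] *= max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
    return w

def group_sparse_fit(X, Y, groups, lam=0.2, lr=0.1, iters=2000):
    """Least squares plus a group-lasso penalty, solved by proximal
    gradient descent: discriminative groups keep weight, irrelevant
    groups (e.g., uninformative body parts) are driven to zero."""
    W = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ W - Y) / len(Y)
        W = group_soft_threshold(W - lr * grad, groups, lr * lam)
    return W
```

With features grouped by body part, a zeroed group corresponds to a body part the model deems non-discriminative for the activity at hand.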


Frontiers in Neurorobotics | 2018

Shaping of Shared Autonomous Solutions With Minimal Interaction

Christopher M. Reardon; Hao Zhang; Jonathan Fink

A fundamental problem in creating successful shared autonomy systems is enabling efficient specification of the problem for which an autonomous system can generate a solution. We present a general paradigm, Interactive Shared Solution Shaping (IS3), broadly applied to shared autonomous systems where a human can use their domain knowledge to interactively provide feedback during the autonomous planning process. We hypothesize that this interaction process can be optimized so that with minimal interaction, near-optimal solutions can be achieved. We examine this hypothesis in the space of resource-constrained mobile search and surveillance and show that without directly instructing a robot or completely communicating a believed target distribution, the human teammate is able to successfully shape the generation of an autonomous search route. This ability is demonstrated in three experiments that show (1) the IS3 approach can improve performance in that routes generated from interactions generally reduce the variance of the target detection performance and increase overall target detection; (2) the entire IS3 autonomous route generation system's performance, including the cost of interaction along with movement cost, exhibits a tradeoff between performance and number of interactions that can be optimized; (3) the IS3 autonomous route generation system is able to perform within constraints by generating tours that stay under budget when executed by a real robot in a realistic field environment.


international conference on robotics and automation | 2016

SRAC: Self-Reflective Risk-Aware Artificial Cognitive models for robot response to human activities

Hao Zhang; Christopher M. Reardon; Fei Han; Lynne E. Parker

In human-robot teaming, interpretation of human actions, recognition of new situations, and appropriate decision making are crucial abilities for cooperative robots ("co-robots") to interact intelligently with humans. Given an observation, it is important that human activities are interpreted the same way by co-robots as by human peers so that robot actions can be appropriate to the activity at hand. A novel interpretability indicator is introduced to address this issue. When a robot encounters a new scenario, the pretrained activity recognition model, no matter how accurate in a known situation, may not produce the correct information necessary to act appropriately and safely in new situations. To effectively and safely interact with people, we introduce a new generalizability indicator that allows a co-robot to self-reflect and reason about when an observation falls outside the co-robot's learned model. Based on topic modeling and the two novel indicators, we propose a new Self-reflective Risk-aware Artificial Cognitive (SRAC) model, which allows a robot to make better decisions by incorporating robot action risks and identifying new situations. Experiments using both real-world datasets and physical robots suggest that our SRAC model significantly outperforms the traditional methodology and enables better decision making in response to human behaviors.
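As a toy illustration of the generalizability idea (flagging observations that fall outside a learned model), one can threshold a density score learned on training data. The Gaussian model, Mahalanobis score, and quantile threshold below are our simplifications; SRAC's actual indicators are built on topic models.

```python
import numpy as np

class NoveltyIndicator:
    """Flag an observation as a 'new situation' when its Mahalanobis
    distance to the training data exceeds a high quantile of the
    training distances (a stand-in for SRAC's topic-model indicator)."""

    def fit(self, X, quantile=0.99):
        self.mu = X.mean(axis=0)
        cov = np.cov(X.T) + 1e-6 * np.eye(X.shape[1])  # regularized
        self.inv_cov = np.linalg.inv(cov)
        self.thresh = np.quantile(self._maha(X), quantile)
        return self

    def _maha(self, X):
        """Squared Mahalanobis distance of each row to the training mean."""
        diff = X - self.mu
        return np.einsum('ij,jk,ik->i', diff, self.inv_cov, diff)

    def is_novel(self, x):
        return bool(self._maha(x[None])[0] > self.thresh)
```

An observation flagged as novel would then trigger the risk-aware branch of decision making rather than trusting the pretrained recognizer.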


robot and human interactive communication | 2015

Response prompting for intelligent robot instruction of students with intellectual disabilities

Christopher M. Reardon; Hao Zhang; Rachel Wright; Lynne E. Parker

Instruction of students with intellectual disability (ID) presents both unique challenges and a compelling opportunity for socially embedded robots to empower an important group in our population. We propose the creation of an autonomous, intelligent robot instructor (IRI) to teach socially valid life skills to students with ID. We present the construction of a complete IRI system for this purpose. Experimental results show the IRI is capable of teaching a non-trivial life skill to students with ID, and participants feel interaction with the IRI is beneficial.


intelligent robots and systems | 2003

Indoor target intercept using an acoustic sensor network and dual wavefront path planning

Lynne E. Parker; Ben Birch; Christopher M. Reardon


computer vision and pattern recognition | 2014

Simplex-Based 3D Spatio-temporal Feature Description for Action Recognition

Hao Zhang; Wenjun Zhou; Christopher M. Reardon; Lynne E. Parker

Collaboration


Dive into Christopher M. Reardon's collaborations.

Top Co-Authors

Hao Zhang (Colorado School of Mines)
Fei Han (Colorado School of Mines)
Kevin Lee (Oak Ridge Associated Universities)
Ben Birch (University of Tennessee)
Chi Zhang (University of Tennessee)
Cortney L. Bolden (Lockheed Martin Advanced Technology Laboratories)
Heeten Choxi (Lockheed Martin Advanced Technology Laboratories)
Wenjun Zhou (University of Tennessee)