
Publications


Featured research published by J. Willard Curtis.


AIAA Guidance, Navigation, and Control Conference | 2012

An Adaptive Backstepping Controller for a Hypersonic Air-Breathing Missile

B. J. Bialy; Justin R. Klotz; J. Willard Curtis; Warren E. Dixon

This paper presents the development of an adaptive controller for a hypersonic air-breathing missile with terminal constraints. The controller is designed to regulate the longitudinal dynamics of a hypersonic vehicle model via a backstepping approach. The backstepping approach is used to compensate for uncertainties in the dynamics that do not satisfy the matching condition while ensuring asymptotic tracking of a desired velocity profile and asymptotic regulation of the vehicle position, angle of attack, body angle, and angular rates. A Lyapunov-based stability analysis is used to prove the asymptotic regulation of the controlled states. Simulation results are presented to verify the performance of the controller.
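
As a minimal illustration of the backstepping idea, consider a generic second-order strict-feedback system (not the paper's hypersonic vehicle model):

    \dot{x}_1 = f_1(x_1) + x_2, \qquad \dot{x}_2 = f_2(x_1, x_2) + u.

Treating $x_2$ as a virtual input with desired value $\alpha(x_1) = -f_1(x_1) - k_1 x_1$, defining the errors $e_1 = x_1$ and $e_2 = x_2 - \alpha(x_1)$, and taking $V = \tfrac{1}{2}e_1^2 + \tfrac{1}{2}e_2^2$ gives

    \dot{V} = -k_1 e_1^2 + e_2\left(e_1 + f_2 + u - \dot{\alpha}\right),

so the choice $u = -f_2 + \dot{\alpha} - e_1 - k_2 e_2$ yields $\dot{V} = -k_1 e_1^2 - k_2 e_2^2 \le 0$ and hence asymptotic regulation. In the adaptive case, only loosely sketched here, the unknown terms in $f_1$ and $f_2$ are replaced by parameter estimates with Lyapunov-based update laws, which is how the design can handle uncertainties that do not satisfy the matching condition.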


Systems, Man, and Cybernetics | 2014

Information fusion in human-robot collaboration using neural network representation

Ashwin P. Dani; Michael J. McCourt; J. Willard Curtis; Siddhartha S. Mehta

In this paper, an algorithm for hard and soft data fusion is developed for tracking moving objects using hard data from sensors on autonomous agents and soft data from human observations. Two main challenges are identified and addressed: (1) how to model the human observation, and (2) how to estimate the state using soft data and fuse it with the state estimates from the sensors on autonomous agents (e.g., a camera sensor). A novel approach is developed to relate perceived human observations to the real physical states using artificial neural networks (ANN). A particle filter (PF) is used to estimate a moving target's state based on range and bearing observations from a human observer, and an EKF is used to estimate the target state using an on-board camera sensor. The range measurement is represented using Kumaraswamy's double-bounded distribution. The state estimates computed from the ANN-learned model of the human observation are fused with the state estimates from the on-board sensors using a fast covariance intersection (CI) algorithm. The CI algorithm yields consistent fused estimates even when the correlations between the human-based and robot-sensor-based state estimates are unknown. The performance of the developed algorithms is validated on a target tracking simulation platform.
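
The covariance intersection step is easy to sketch. Below is a generic CI fusion of two Gaussian estimates with a coarse grid search over the weight; the paper's fast CI variant replaces this search with a closed-form weight, and the numbers in the usage example are placeholders (NumPy assumed).

    import numpy as np

    def covariance_intersection(x_a, P_a, x_b, P_b, n_grid=51):
        """Fuse two estimates (mean, covariance) whose cross-correlation is unknown.

        Standard CI: P_f^{-1} = w P_a^{-1} + (1 - w) P_b^{-1},
                     x_f = P_f (w P_a^{-1} x_a + (1 - w) P_b^{-1} x_b),
        with the weight w chosen here by grid search to minimize trace(P_f).
        """
        Pa_inv, Pb_inv = np.linalg.inv(P_a), np.linalg.inv(P_b)
        best = None
        for w in np.linspace(0.0, 1.0, n_grid):
            P_f = np.linalg.inv(w * Pa_inv + (1.0 - w) * Pb_inv)
            if best is None or np.trace(P_f) < best[0]:
                x_f = P_f @ (w * Pa_inv @ x_a + (1.0 - w) * Pb_inv @ x_b)
                best = (np.trace(P_f), x_f, P_f)
        return best[1], best[2]

    # Placeholder example: fuse a human-derived estimate with a camera/EKF estimate.
    x_h, P_h = np.array([2.0, 1.0]), np.diag([4.0, 4.0])   # human observation via ANN model + PF
    x_c, P_c = np.array([1.5, 1.2]), np.diag([0.5, 2.0])   # on-board camera via EKF
    x_fused, P_fused = covariance_intersection(x_h, P_h, x_c, P_c)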


Advances in Computing and Communications | 2014

Moving target acquisition through state uncertainty minimization

Juan Pablo Ramirez; Emily A. Doucette; J. Willard Curtis; Nicholas R. Gans

This work addresses the task of a mobile sensor platform searching for a moving target. We show that minimizing the entropy of the probability distribution of the target state estimate can result in a control input for the mobile sensor that acquires the target in fewer iterations than an exhaustive search. We also show that this approach can be used to track the target after it is acquired. We apply a particle filter framework to estimate the state of the target and propose an information-based cost function to optimize as part of a control law for the mobile sensor. We include simulation results to illustrate the performance of our approach.
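
A greedy sketch of this idea for a range-only sensor is shown below; the sensor model, the sampling-based approximation of the expected entropy, and the candidate-move search are illustrative simplifications, not the paper's exact cost function (NumPy assumed).

    import numpy as np

    def weight_entropy(w):
        """Shannon entropy of normalized particle weights (a proxy for state uncertainty)."""
        w = w / np.sum(w)
        return -np.sum(w * np.log(w + 1e-12))

    def expected_entropy(particles, weights, sensor_pos, noise_std=1.0, n_samples=20):
        """Expected posterior entropy if the sensor moves to sensor_pos and takes one
        range measurement; measurements are sampled from the current particle belief."""
        rng = np.random.default_rng(0)          # fixed seed so candidate moves are compared fairly
        total = 0.0
        for _ in range(n_samples):
            idx = rng.choice(len(weights), p=weights / weights.sum())
            z = np.linalg.norm(particles[idx] - sensor_pos) + rng.normal(0.0, noise_std)
            predicted = np.linalg.norm(particles - sensor_pos, axis=1)
            likelihood = np.exp(-0.5 * ((z - predicted) / noise_std) ** 2)
            total += weight_entropy(weights * likelihood + 1e-12)
        return total / n_samples

    def choose_move(particles, weights, candidate_positions):
        """Information-based control: pick the candidate move with the lowest expected entropy."""
        costs = [expected_entropy(particles, weights, c) for c in candidate_positions]
        return candidate_positions[int(np.argmin(costs))]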


IEEE/ION Position, Location and Navigation Symposium | 2016

Map merging of rotated, corrupted, and different scale maps using rectangular features

Jinyoung Park; Andrew J. Sinclair; Ryan E. Sherrill; Emily A. Doucette; J. Willard Curtis

Integrating data from multiple cooperative robots can be important for expanding their individual capabilities. In an environmental mapping scenario, multiple ground robots map different local areas. The complexity of merging these maps into a global map depends on three factors: the orientation, accuracy, and scale of the maps. When all three factors are unknown, map merging becomes a challenging problem. In this paper, a new approach is presented for merging two maps when all three factors are unknown. The idea is to estimate the best shared areas by means of rectangular features. The dimensions and connections of maximal empty rectangles allow the algorithm to match orientations and scales and to find overlapping points. The advantage of this approach is that map merging is accomplished without any relative location estimation between the robots. This paper explains the map-merging process with an example of a simple environment and presents a result for a practical environment.
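
The geometric core of this idea can be sketched as follows, assuming the rectangle correspondences between the two maps have already been found (the correspondence search via rectangle dimensions and connectivity is the harder part and is not shown). The helper names are hypothetical, not the paper's algorithm.

    import numpy as np

    def estimate_scale_and_rotation(rects_a, rects_b):
        """Estimate the relative scale and rotation between two maps from matched
        rectangular features; each rectangle is (width, height, orientation_rad)
        and rects_a[i] is assumed to correspond to rects_b[i]."""
        scales, rotations = [], []
        for (wa, ha, ta), (wb, hb, tb) in zip(rects_a, rects_b):
            scales.append(np.sqrt((wb * hb) / (wa * ha)))                   # area ratio -> linear scale
            rotations.append(np.arctan2(np.sin(tb - ta), np.cos(tb - ta)))  # wrapped angle difference
        scale = float(np.median(scales))                                    # median is robust to bad matches
        rotation = float(np.arctan2(np.mean(np.sin(rotations)), np.mean(np.cos(rotations))))
        return scale, rotation

    def estimate_translation(center_a, center_b, scale, rotation):
        """Recover the translation mapping map A into map B from one matched
        rectangle center in each map: c_b = scale * R(rotation) @ c_a + t."""
        R = np.array([[np.cos(rotation), -np.sin(rotation)],
                      [np.sin(rotation),  np.cos(rotation)]])
        return np.asarray(center_b) - scale * R @ np.asarray(center_a)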


2016 Resilience Week (RWS) | 2016

The human should be part of the control loop

William D. Nothwang; Michael J. McCourt; Ryan M. Robinson; Samuel A. Burden; J. Willard Curtis

The capabilities of autonomy have grown to encompass new application spaces that until recently were considered exclusive to humans. In the past, automation has focused on applications where it was preferable to completely replace the human. Today, though, we have the opportunity to leverage the complementary strengths of both human and autonomy technologies to maximize performance and limit risk, and the human should therefore remain “in” or “on” the loop. Adequately assessing when and how to accomplish this requires us to evaluate not only the capabilities but also the risks and ethical questions; coupled to this are issues of performance degradation in specific instances (for instance, recovery from failure) that may require a human to remain the sole control authority. This paper investigates the contributors to success and failure in current human-autonomy integration frameworks, and proposes guidelines for safe and resilient use of humans and autonomy with regard to performance, consequence, and the stability of human-machine switching. Key to our proposed approach are (i) the relative error rate between the human and autonomy and (ii) the consequence of possible events.
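
To make criteria (i) and (ii) concrete, a toy switching rule might look like the sketch below; the consequence threshold, hysteresis margin, and the choice to default to the human at high consequence are illustrative assumptions, not the paper's guidelines.

    def select_control_authority(p_err_human, p_err_auto, consequence,
                                 current="human", consequence_limit=0.9, margin=0.1):
        """Illustrative authority-selection rule:
        - above a consequence threshold, the human remains the sole control authority;
        - otherwise, authority goes to the agent with the lower error rate, with a
          hysteresis margin to keep human/machine switching stable."""
        if consequence >= consequence_limit:
            return "human"
        if current == "human" and p_err_human > p_err_auto + margin:
            return "autonomy"
        if current == "autonomy" and p_err_auto > p_err_human + margin:
            return "human"
        return current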


Systems, Man, and Cybernetics | 2016

Passive switched system analysis of semi-autonomous systems

Michael J. McCourt; Ryan M. Robinson; William D. Nothwang; Emily A. Doucette; J. Willard Curtis

While autonomous capabilities have proliferated across a wide range of commercial and domestic applications, some tasks require intermittent aid from a human operator. Guaranteeing the safety of these intermittently-teleoperated systems requires stability guarantees that hold in the presence of switching. In this paper, we consider the problem of controlling a robotic vehicle using both a human controller and an autonomous controller. The strategy is to allow the human operator to switch between manual control and autonomous control as needed. The feedback loop is analyzed and shown to be stable using a notion of passivity from nonlinear system analysis. Finally, an example is provided to demonstrate the approach.
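
The standard passivity argument behind this kind of guarantee can be sketched in generic notation (the paper's analysis may use more general conditions, e.g. passivity indices). A system with input $u$ and output $y$ is passive if there is a storage function $V(x) \ge 0$ with $\dot{V} \le u^{\top} y$. For the negative feedback interconnection $u_1 = r_1 - y_2$, $u_2 = y_1$ of a passive plant (storage $V_1$) with a switched controller whose human and autonomous modes are each passive with a common storage function $V_2$, the total storage $V = V_1 + V_2$ satisfies

    \dot{V} \le u_1^{\top} y_1 + u_2^{\top} y_2 = (r_1 - y_2)^{\top} y_1 + y_1^{\top} y_2 = r_1^{\top} y_1,

so with no external input ($r_1 = 0$) we get $\dot{V} \le 0$ in every mode, and stability is preserved under arbitrary human/machine switching because $V$ does not depend on the switching signal.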


Systems, Man, and Cybernetics | 2016

Degree of automation in command and control decision support systems

Ryan M. Robinson; Michael J. McCourt; Amar R. Marathe; William D. Nothwang; Emily A. Doucette; J. Willard Curtis

This paper investigates the effects of integrating automation into the various stages of information processing in a military command and control scenario. Command and control (C2) is an extreme decision-making paradigm characterized by high uncertainty, high risk, and severe time pressure. We introduce a principled approach to decision support system (DSS) design that specifically addresses these issues. Our approach establishes the principles of communicating confidence in sensor estimates and consequence of actions in an intuitive, timely manner. We hypothesize that automation designed to communicate confidence and/or consequence will improve task performance over systems that neglect these concepts. Toward this end, human-subjects experiments were conducted to compare the effects of displaying confidence/consequence information in a C2 target-tracking and interdiction scenario. Four variations of a decision support interface were designed, each with a distinct “degree of automation”: (i) an instantaneous sensor measurement visualization (baseline), (ii) a confidence-based visualization, (iii) a confidence- and consequence-based visualization, and (iv) a confidence- and consequence-based visualization with explicit decision recommendations. While increasing automation generally improved results, the inclusion of consequence information did not have a major effect, perhaps because the scenario was overly simplified.


Advances in Computing and Communications | 2017

Ground target tracking and trajectory prediction by UAV using a single camera and 3D road geometry recovery

Yingmao Li; Emily A. Doucette; J. Willard Curtis; Nicholas R. Gans

In this paper, we propose a new method to address the dual problem of ground target tracking in the image plane and 3D road geometry recovery using a single vision sensor on board an unmanned aerial vehicle. We recover the road geometry from a single aerial image with a novel structure-from-motion algorithm under the simple assumption that road or lane boundaries are parallel curves without significant twist. The coordinates of the ground target in the camera frame are then estimated using the recovered road geometry. An extended Kalman filter is finally applied to track and predict the trajectory of the target using recent target motion and the road geometry. Experimental results on simulated data and real-world images show the feasibility of our approach.
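
The tracking and prediction stage reduces to a standard EKF recursion; a generic single predict/update step is sketched below (NumPy assumed). One natural instantiation, though not necessarily the paper's exact formulation, is a constant-velocity process model along the recovered road's arc length with a measurement model that maps arc length back to a camera-frame position.

    import numpy as np

    def ekf_step(x, P, z, f, F, h, H, Q, R):
        """One generic EKF predict/update step.
        x, P : prior state estimate and covariance
        z    : new measurement
        f, F : process model and its Jacobian;  h, H : measurement model and its Jacobian
        Q, R : process and measurement noise covariances
        """
        # Predict through the (possibly nonlinear) process model.
        x_pred = f(x)
        P_pred = F(x) @ P @ F(x).T + Q
        # Update with the measurement.
        innovation = z - h(x_pred)
        S = H(x_pred) @ P_pred @ H(x_pred).T + R
        K = P_pred @ H(x_pred).T @ np.linalg.inv(S)
        x_new = x_pred + K @ innovation
        P_new = (np.eye(len(x)) - K @ H(x_pred)) @ P_pred
        return x_new, P_new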


Proceedings of SPIE | 2017

Distributed subterranean exploration and mapping with teams of UAVs

John G. Rogers; Ryan E. Sherrill; Arthur Schang; Shava L. Meadows; Eric P. Cox; Brendan Byrne; David Baran; J. Willard Curtis; Kevin M. Brink

Teams of small autonomous UAVs can be used to map and explore unknown environments which are inaccessible to teams of human operators in humanitarian assistance and disaster relief efforts (HA/DR). In addition to HA/DR applications, teams of small autonomous UAVs can enhance Warfighter capabilities and provide operational stand-off for military operations such as cordon and search, counter-WMD, and other intelligence, surveillance, and reconnaissance (ISR) operations. This paper will present a hardware platform and software architecture to enable distributed teams of heterogeneous UAVs to navigate, explore, and coordinate their activities to accomplish a search task in a previously unknown environment.


Proceedings of SPIE | 2016

Human-Machine Teaming for Effective Estimation and Path Planning

Michael J. McCourt; Siddhartha S. Mehta; Emily A. Doucette; J. Willard Curtis

While traditional sensors provide accurate measurements of quantifiable information, humans provide better qualitative information and holistic assessments. Sensor fusion approaches that team humans and machines can take advantage of the benefits provided by each while mitigating the shortcomings. These two sensor sources can be fused together using Bayesian fusion, which assumes that there is a method of generating a probabilistic representation of each sensor measurement. This general framework of fusing estimates can also be applied to joint human-machine decision making. In the simple case, binary decisions can be fused by using a probability of taking an action versus inaction from each decision-making source. These are fused together to arrive at a final probability of taking the action, which is taken if this probability exceeds a specified threshold. In the case of path planning, rather than fusing binary decisions, complex decisions can be fused by allowing the human and machine to interact with each other. For example, the human can draw a suggested path while the machine planning algorithm refines it to avoid obstacles and remain dynamically feasible. Similarly, the human can revise a suggested path to achieve secondary goals not encoded in the algorithm, such as avoiding dangerous areas in the environment.
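
As a toy version of the binary-decision case, the sketch below fuses the two action probabilities with a simple linear opinion pool and thresholds the result; the weights and threshold are placeholders, and the pool is a stand-in for the Bayesian fusion described above.

    def fuse_action_probabilities(p_human, p_machine, w_human=0.5, threshold=0.7):
        """Fuse human and machine probabilities of taking an action with a convex
        combination (linear opinion pool); act only if the fused probability clears
        the threshold. Weights and threshold are illustrative placeholders."""
        p_fused = w_human * p_human + (1.0 - w_human) * p_machine
        return p_fused, p_fused >= threshold

    # Example: human at 0.8, machine at 0.55 -> fused 0.675, below 0.7, so hold off.
    p_act, take_action = fuse_action_probabilities(0.8, 0.55)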

Collaboration


Dive into J. Willard Curtis's collaborations.

Top Co-Authors

Emily A. Doucette
Air Force Research Laboratory

Nicholas R. Gans
University of Texas at Dallas

Kaveh Fathian
University of Texas at Dallas

Zhen Kan
University of Florida

Ryan E. Sherrill
Air Force Research Laboratory