
Publications


Featured research published by Chau Ton.


Intelligent Robots and Systems | 2015

Human-autonomy sensor fusion for rapid object detection

Ryan M. Robinson; Hyungtae Lee; Michael J. McCourt; Amar R. Marathe; Heesung Kwon; Chau Ton; William D. Nothwang

Human-autonomy sensor fusion is an emerging technology with a wide range of applications, including object detection/recognition, surveillance, collaborative control, and prosthetics. For object detection, humans and computer-vision-based systems employ different strategies to locate targets, likely providing complementary information. However, little effort has been made in combining the outputs of multiple autonomous detectors and multiple human-generated responses. This paper presents a method for integrating several sources of human- and autonomy-generated information for rapid object detection tasks. Human electroencephalography (EEG) and button-press responses from rapid serial visual presentation (RSVP) experiments are fused with outputs from trained object detection algorithms. Three fusion methods (Bayesian, Dempster-Shafer, and Dynamic Dempster-Shafer) are implemented for comparison. Results demonstrate that fusion of these human classifiers with computer-vision-based detectors improves object detection accuracy over purely computer-vision-based detection (5% relative increase in mean average precision) and the best individual computer vision algorithm (28% relative increase in mean average precision). Computer vision fused with button press response and/or the XDAWN + Bayesian Linear Discriminant Analysis neural classifier provides considerable improvement, while computer vision fused with other neural classifiers provides little or no improvement. Of the three fusion methods, Dynamic Dempster-Shafer Theory (DDST) Fusion exhibits the greatest performance in this application.
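
As a rough illustration of the Dempster-Shafer combination named in the abstract, the sketch below fuses a hypothetical computer-vision detector score with a hypothetical human/EEG classifier score over a binary frame; the reliabilities, scores, and mass assignments are illustrative assumptions, not values or code from the paper.

```python
# Minimal sketch of Dempster's rule of combination over the frame {target, clutter}.
# The detector scores and reliability discounts below are hypothetical.
from itertools import product

FRAME = frozenset({"target", "clutter"})

def ds_combine(m1, m2):
    """Dempster's rule: combine two mass functions keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb                      # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

def score_to_mass(score, reliability):
    """Map a confidence in [0, 1] to a mass function, discounted by source reliability."""
    return {
        frozenset({"target"}): reliability * score,
        frozenset({"clutter"}): reliability * (1.0 - score),
        FRAME: 1.0 - reliability,                    # remaining mass = ignorance
    }

if __name__ == "__main__":
    vision = score_to_mass(0.70, reliability=0.9)    # hypothetical CV detector output
    eeg = score_to_mass(0.55, reliability=0.6)       # hypothetical EEG classifier output
    fused = ds_combine(vision, eeg)
    print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})
```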


Computers and Electronics in Agriculture | 2017

Multiple camera fruit localization using a particle filter

Siddhartha S. Mehta; Chau Ton; S. Asundi; Thomas F. Burks

Apart from socioeconomic factors, success of robotics in agriculture lies in developing economically attractive solutions with efficiency comparable to that of humans. Fruit localization is one of the building blocks in many robotic agricultural operations (e.g., yield mapping and robotic harvesting) that determines 3D Euclidean positions of the fruits using one or several sensors. It is crucial to guarantee the performance of the localization methods in the presence of fruit detection errors and unknown fruit motion (e.g., due to wind gust), so that the desired efficiency of the subsequent systems can be achieved. For instance, inaccurate localization may severely affect fruit picking efficiency in robotic harvesting. The presented estimation-based localization approach provides estimates of the fruit positions in the presence of fruit detection errors and unknown fruit motion, and it is based on a new sensing procedure that uses multiple (⩾2) inexpensive monocular cameras. A nonlinear estimator, the particle filter, is developed to estimate the unknown position of the fruits using image measurements obtained from multiple cameras. The particle filter is partitioned into clusters to independently localize individual fruits, while the behavior of the clusters is manipulated at the global level to maintain a single filter structure. Since the accuracy of localization is affected by errors in fruit detection, the presented sensor model includes non-Gaussian fruit detection errors along with image noise. Fruit motion can significantly reduce harvesting efficiency due to errors in locating moving fruits. In contrast to existing methods, the dynamics of fruit motion are derived and included in the localization framework to obtain time-varying position estimates of the moving fruits. A detailed theoretical foundation is provided for the new estimation-based fruit localization approach, and it is validated through extensive Monte Carlo simulations. The performance of the estimator is evaluated by varying the design parameters, measurement noise, number of fruits, amount of overlap in clustered fruit scenarios, and fruit velocity. Correlation of these parameters with the performance of the estimator is derived, and guidelines are presented for selecting the design parameters and predicting performance bounds under given operating conditions.
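
For readers unfamiliar with particle filtering, the following is a minimal bootstrap particle filter in the spirit of the estimation-based localization described above; it stands in two noisy position sensors for the multiple monocular cameras and uses a random-drift motion model. All models, noise levels, and parameters are illustrative assumptions, not the paper's.

```python
# Minimal bootstrap particle filter: particles represent a fruit's planar position.
import numpy as np

rng = np.random.default_rng(0)
N = 500                                    # number of particles
true_pos = np.array([1.0, 2.0])            # unknown fruit position (simulation only)
particles = rng.uniform(0, 4, size=(N, 2))
weights = np.full(N, 1.0 / N)

def measure(pos, noise=0.05):
    """Two independent noisy observations of the position (simplified camera model)."""
    return [pos + rng.normal(0, noise, 2) for _ in range(2)]

for step in range(30):
    # Predict: small random drift models unknown fruit motion (e.g., wind gusts)
    particles += rng.normal(0, 0.02, particles.shape)

    # Update: weight particles by the likelihood of each sensor's measurement
    for z in measure(true_pos):
        d2 = np.sum((particles - z) ** 2, axis=1)
        weights *= np.exp(-d2 / (2 * 0.05 ** 2))
    weights /= weights.sum()

    # Resample when the effective sample size drops below half the particle count
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)

print("estimated fruit position:", particles.T @ weights)   # weighted mean
```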


Systems, Man and Cybernetics | 2015

Human-Assisted RRT for Path Planning in Urban Environments

Siddhartha S. Mehta; Chau Ton; Michael J. McCourt; Zhen Kan; Emily A. Doucette; Wess W. Curtis

A human-RRT (Rapidly-exploring Random Tree) collaborative algorithm is presented for path planning in urban environments. The well-known RRT algorithm is modified for efficient planning in cluttered, yet structured urban environments. To engage expert human knowledge in dynamic replanning of autonomous vehicles, a graphical user interface is developed that enables interaction with the automated RRT planner in real time. The interface can be used to invoke standard planning attributes such as way areas, space constraints, and waypoints. In addition, the human can draw desired trajectories using the touch interface for the RRT planner to follow. Based on new information and evidence collected by the human, a state-dependent risk or penalty for growing paths according to an objective function can also be specified using the interface.
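
The core RRT "extend" step that the human interface builds on can be sketched as follows; the workspace, obstacle model, and step size are hypothetical, and the human-in-the-loop layer (drawn trajectories, waypoints, risk) described in the abstract is not modeled here.

```python
# Minimal RRT sketch: sample, find nearest tree node, steer, and add if collision-free.
import math
import random

random.seed(1)
STEP = 0.5
OBSTACLES = [((2.0, 2.0), 0.8)]            # hypothetical circular obstacles (center, radius)

def collision_free(p):
    return all(math.dist(p, c) > r for c, r in OBSTACLES)

def steer(p, q, step=STEP):
    """Move from p toward q by at most `step`."""
    d = math.dist(p, q)
    if d <= step:
        return q
    t = step / d
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

def rrt(start, goal, iters=2000):
    tree = {start: None}                   # node -> parent
    for _ in range(iters):
        sample = goal if random.random() < 0.1 else (random.uniform(0, 5), random.uniform(0, 5))
        nearest = min(tree, key=lambda n: math.dist(n, sample))
        new = steer(nearest, sample)
        if collision_free(new):
            tree[new] = nearest
            if math.dist(new, goal) < STEP:
                # Reconstruct the path by walking parent pointers back to the start
                path, node = [goal], new
                while node is not None:
                    path.append(node)
                    node = tree[node]
                return path[::-1]
    return None

print(rrt((0.0, 0.0), (4.5, 4.5)))
```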


Journal of the Franklin Institute - Engineering and Applied Mathematics | 2015

Vision-based navigation and guidance of a sensorless missile

Siddhartha S. Mehta; Chau Ton; Zhen Kan; J. W. Curtis

The objective of this paper is to develop a vision-based terminal guidance system for sensorless missiles. Specifically, monocular vision-based relative navigation and robust control methods are developed for a sensorless missile to intercept a ground target maneuvering with unknown time-varying velocity. A mobile wireless sensor and actor network is considered wherein a moving airborne monocular camera (e.g., attached to an aircraft) provides image measurements of the missile (actor) while another moving monocular camera (e.g., attached to a small UAV) tracks a ground target. The challenge is to express the unknown time-varying target position in the time-varying missile frame using image feedback from cameras moving with unknown trajectories. In a novel relative navigation approach, assuming the knowledge of a single geometric length on the missile, the time-varying target position is obtained by fusing the daisy-chained image measurements of the missile and the target into a homography-based Euclidean reconstruction method. The three-dimensional interception problem is posed in pursuit guidance, proportional navigation, and the proposed hybrid guidance framework. Interestingly, it will be shown that by appropriately defining the error system a single control structure can be maintained across all the above guidance methods. The control problem is formulated in terms of target dynamics in a ‘virtual’ camera mounted on the missile, which enables design of an adaptive nonlinear visual servo controller that compensates for the unknown time-varying missile–target relative velocity. Stability and zero-miss distance analysis of the proposed controller is presented, and a high-fidelity numerical simulation verifies the performance of the guidance laws.
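
To make the proportional-navigation (PN) idea mentioned above concrete, here is a minimal planar PN sketch; it does not reproduce the paper's vision-based relative navigation, hybrid guidance, or adaptive controller, and the gains, speeds, and geometry are illustrative assumptions.

```python
# Minimal planar proportional navigation: turn rate proportional to line-of-sight rate.
import numpy as np

dt, N_gain = 0.01, 4.0                              # time step and navigation gain
missile = np.array([0.0, 0.0]); speed = 300.0; heading = np.deg2rad(30.0)
target = np.array([3000.0, 1000.0]); v_t = np.array([-20.0, 10.0])

rel = target - missile
prev_los = np.arctan2(rel[1], rel[0])
min_range = np.linalg.norm(rel)

for _ in range(2000):                               # 20 s of simulated flight
    missile = missile + speed * dt * np.array([np.cos(heading), np.sin(heading)])
    target = target + v_t * dt
    rel = target - missile
    min_range = min(min_range, np.linalg.norm(rel))
    los = np.arctan2(rel[1], rel[0])
    los_rate = (los - prev_los) / dt                # finite-difference LOS rate
    prev_los = los
    heading += N_gain * los_rate * dt               # PN guidance command

print(f"closest approach: {min_range:.1f} m")
```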


Advances in Computing and Communications | 2017

Nonsingular terminal sliding mode control with unknown control direction

Chau Ton; Siddhartha S. Mehta; Zhen Kan

This paper considers a nonsingular sliding mode controller for a second order system without a priori knowledge of the control direction. A novel nonsingular terminal sliding hypersurface is presented to adapt to the challenges of unknown control direction and avoid singularity issues at the origin. In contrast to the Nussbaum gain, where the equilibrium point is reached asymptotically, or the classical linear hypersurface, where the states are reached exponentially, the control structure in this paper guarantees that the states are reached in finite time. Additionally, the control algorithm is bounded and globally finite time stable in the presence of input matrix uncertainty and exogenous disturbance. Simulation results are provided to demonstrate the robustness of the developed control algorithm.
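
As background, a standard nonsingular terminal sliding mode (NTSM) controller for a double integrator with a known control direction can be sketched as below; the paper's contribution, handling an unknown control direction, is not reproduced here, and the surface parameters, gain, and disturbance are illustrative assumptions.

```python
# Minimal NTSM sketch: surface s = x1 + (1/beta) * x2^(p/q), with p/q in (1, 2),
# avoids the singularity at the origin that affects conventional terminal sliding modes.
import numpy as np

dt = 1e-3
beta, p, q = 1.0, 5, 3           # NTSM surface parameters
K = 8.0                          # switching gain, larger than the disturbance bound

x1, x2 = 1.0, -0.5               # initial state of the double integrator
for k in range(20000):
    t = k * dt
    d = 0.5 * np.sin(2 * t)      # bounded exogenous disturbance
    s = x1 + (1.0 / beta) * np.sign(x2) * abs(x2) ** (p / q)
    # Equivalent control cancels the x2 term of s-dot; switching term drives s to zero
    u = -beta * (q / p) * np.sign(x2) * abs(x2) ** (2 - p / q) - K * np.sign(s)
    # Double-integrator dynamics: x1' = x2, x2' = u + d
    x1 += x2 * dt
    x2 += (u + d) * dt

print(f"final state: x1={x1:.4f}, x2={x2:.4f}")
```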


Advances in Computing and Communications | 2016

Leader-follower consensus with unknown control direction

Chau Ton; Zhen Kan; Emily A. Doucette; J. W. Curtis; Siddhartha S. Mehta

The paper considers leader-follower consensus of multi-agent networks with unknown control direction. Sliding mode control is used to achieve consensus tracking under fixed topology with the assumption that the position of the leader is known to a subset of the followers. The proposed consensus law assumes an unknown sign in the control input matrix of the followers and does not require knowledge of the leader's velocity. Lyapunov-based analysis is presented to show that if the directed graph of the network has a directed spanning tree, then the sliding mode control law can guarantee consensus tracking. Simulation results are provided to verify the feasibility of the proposed controller.
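
A simplified version of sliding-mode leader-follower consensus, with a known control direction and single-integrator followers, can be sketched as follows; the unknown-direction case that the paper addresses is not reproduced, and the graph, gains, and leader trajectory are illustrative assumptions.

```python
# Minimal sliding-mode consensus sketch: only follower 0 measures the leader; the chain
# 0 -> 1 -> 2 -> 3 gives the network a directed spanning tree rooted at the leader.
import numpy as np

dt, steps, k_gain = 1e-3, 20000, 3.0
A = np.array([[0, 0, 0, 0],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]], dtype=float)   # A[i, j] = 1 if follower i hears follower j
b = np.array([1.0, 0.0, 0.0, 0.0])          # pinning gains: follower 0 sees the leader

x = np.array([2.0, -1.0, 0.5, 3.0])         # follower states
for step in range(steps):
    t = step * dt
    x0 = np.sin(t)                          # leader with velocity unknown to followers
    # Local error: disagreement with neighbors plus leader term where pinned
    e = A.sum(axis=1) * x - A @ x + b * (x - x0)
    u = -k_gain * np.sign(e)                # sliding-mode consensus law (gain > |leader velocity|)
    x += u * dt

print("tracking errors:", np.round(x - x0, 3))
```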


IEEE Control Systems Letters | 2017

Super-Twisting Control of Double Integrator Systems With Unknown Constant Control Direction

Chau Ton; Siddhartha S. Mehta; Zhen Kan

A continuous sliding mode controller is developed for double integrator systems with a constant unknown control direction. Additionally, the system is assumed to be subjected to unknown non-vanishing disturbances. The controller yields finite-time convergence to the sliding hypersurface and guarantees that the origin of the system is exponentially stable. Simulation and experimental results are provided to validate the proposed control algorithm.
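
For reference, the classic super-twisting algorithm on a double integrator with a known control direction looks like the sketch below; the paper's unknown-direction extension is not shown, and the gains, surface slope, and disturbance are illustrative assumptions.

```python
# Minimal super-twisting sketch: a continuous control with an integral "second twist"
# term rejects a matched, non-vanishing disturbance without high-frequency switching.
import numpy as np

dt = 1e-4
k1, k2, c = 2.0, 1.1, 1.0         # super-twisting gains and sliding-surface slope
x1, x2, v = 1.0, 0.0, 0.0         # plant states and the controller's integral term

for k in range(100000):           # 10 s of simulated time
    t = k * dt
    d = 0.4 * np.sin(t) + 0.3     # bounded, non-vanishing matched disturbance
    s = c * x1 + x2               # linear surface: on s = 0, x1 decays exponentially
    u = -c * x2 - k1 * np.sqrt(abs(s)) * np.sign(s) + v
    v += -k2 * np.sign(s) * dt    # continuous integral term of the super-twisting law
    # Double-integrator plant: x1' = x2, x2' = u + d
    x1 += x2 * dt
    x2 += (u + d) * dt

print(f"x1={x1:.4f}, x2={x2:.4f}, s={c * x1 + x2:.6f}")
```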


Systems, Man and Cybernetics | 2016

Curious partner: An approach to realize common ground in human-autonomy collaboration

Siddhartha S. Mehta; Chau Ton; Emily A. Doucette; J. W. Curtis

A dialog-based human-autonomy interaction approach, called curious partner, is presented for a class of systems where the role of autonomy is to assist humans in decision-making tasks. Even if the human and the autonomy share the environment and receive identical information, they may have inconsistencies in their representation of the environment due to differences in their perception and expert knowledge. The curious partner interaction framework is presented to resolve model-level differences between the human and the autonomy to establish common ground. The knowledge base of the autonomy is modeled using a Bayesian engine. The autonomy's dialog with the human acts as a feedback mechanism to resolve any differences, either by suggesting maximally probable actions to the human based on the state of its Bayesian model or by updating its model to achieve an analogous world representation.


Systems, Man and Cybernetics | 2015

Mutual Information Based Risk-Aware Active Sensing in an Urban Environment

Zhen Kan; Chau Ton; Michael J. McCourt; J. Willard Curtis; Emily A. Doucette; Siddhartha S. Mehta

The risk-aware path planning problem is considered, which aims to locate a target in a congested urban environment and facilitate aid in the decision making on target interdiction. The target is modeled as a ground vehicle moving randomly within a road network and following traffic rules. To locate the target, a heterogeneous sensor network composed of passive sensors (e.g., static traffic cameras and mobile human observers) and active sensors (e.g., a UAV) is tasked to cooperatively search for the target. A sample-based Bayesian filter is developed to fuse various sensor measurements to estimate the target state. To facilitate the decision making on target interdiction, a notion of risk is considered, which evaluates the incurred loss of target interdiction at certain locations based on incomplete information of the target state and urban factors (e.g., the proximity to critical areas such as populated shopping malls, schools, military, or government buildings). As opposed to the static traffic cameras and the randomly walking human observers that passively provide target measurements, the UAV actively plans its path, based on mutual information, to maximize the informativeness of future measurements. In contrast to classical target tracking that only focuses on reducing the uncertainty of the target state, the risk is encoded in the particle weights to guide the motion of the UAV to improve target state estimation and, ultimately, reduce the risk of the decision on target interdiction. Simulation results are provided to demonstrate the integrated sensing framework and the risk-aware path planning algorithm.
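
As an illustration of choosing a sensing location by mutual information over a particle (weighted-sample) belief, consider the sketch below; it does not reproduce the paper's road network, sensor models, or risk weighting, and the detection model, candidate locations, and numbers are illustrative assumptions.

```python
# Minimal sketch: pick the candidate sensing location whose binary measurement carries
# the most mutual information I(X; Z) = H(Z) - H(Z | X) about the target position X.
import numpy as np

rng = np.random.default_rng(2)

# Particle belief over the target's 2D position (weights sum to 1)
particles = rng.uniform(0, 10, size=(300, 2))
weights = np.full(300, 1.0 / 300)

def entropy_bernoulli(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def detect_prob(sensor_pos, target_pos, radius=2.0):
    """Hypothetical binary detector: detection probability decays with distance."""
    d = np.linalg.norm(target_pos - sensor_pos, axis=-1)
    return 0.9 * np.exp(-(d / radius) ** 2)

def mutual_information(sensor_pos):
    p_det = detect_prob(sensor_pos, particles)        # P(Z=1 | x_i) for each particle
    p_z1 = np.sum(weights * p_det)                    # marginal P(Z=1)
    h_z = entropy_bernoulli(p_z1)
    h_z_given_x = np.sum(weights * entropy_bernoulli(p_det))
    return h_z - h_z_given_x

candidates = [np.array([x, y]) for x in (2.0, 5.0, 8.0) for y in (2.0, 5.0, 8.0)]
best = max(candidates, key=mutual_information)
print("most informative sensing location:", best)
```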


IEEE Transactions on Systems, Man, and Cybernetics | 2018

Controllability Ensured Leader Group Selection on Signed Multiagent Networks

Baike She; Siddhartha Mehta; Chau Ton; Zhen Kan

Collaboration


Dive into Chau Ton's collaboration.

Top Co-Authors

Zhen Kan
University of Florida

Emily A. Doucette
Air Force Research Laboratory

J. W. Curtis
Air Force Research Laboratory

J. Willard Curtis
Air Force Research Laboratory

Wess W. Curtis
Air Force Research Laboratory