Hongwei Mao
Arizona State University
Publications
Featured research published by Hongwei Mao.
Proceedings of SPIE | 2010
Hongwei Mao; Chenhui Yang; Glen P. Abousleman; Jennie Si
In this paper, a novel system is presented to detect and track multiple targets in Unmanned Air Vehicles (UAV) video sequences. Since the output of the system is based on target motion, we first segment foreground moving areas from the background in each video frame using background subtraction. To stabilize the video, a multi-point-descriptor-based image registration method is performed where a projective model is employed to describe the global transformation between frames. For each detected foreground blob, an object model is used to describe its appearance and motion information. Rather than immediately classifying the detected objects as targets, we track them for a certain period of time and only those with qualified motion patterns are labeled as targets. In the subsequent tracking process, a Kalman filter is assigned to each tracked target to dynamically estimate its position in each frame. Blobs detected at a later time are used as observations to update the state of the tracked targets to which they are associated. The proposed overlap-rate-based data association method considers the splitting and merging of the observations, and therefore is able to maintain tracks more consistently. Experimental results demonstrate that the system performs well on real-world UAV video sequences. Moreover, careful consideration given to each component in the system has made the proposed system feasible for real-time applications.
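The per-target Kalman filtering described above can be sketched as follows, under a constant-velocity motion model. The matrices and noise values here are illustrative assumptions, not the paper's actual tuning.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for one tracked target.
# State x = [px, py, vx, vy]; the measurement z is a detected blob centroid.

dt = 1.0  # one frame
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # observe position only
Q = np.eye(4) * 0.01                        # process noise (assumed)
R = np.eye(2) * 1.0                         # measurement noise (assumed)

def predict(x, P):
    """Estimate the target's state in the next frame."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with an associated blob centroid z."""
    y = z - H @ x                            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(4) - K @ H) @ P
    return x_new, P_new

# One predict/update cycle for a target moving right at ~1 px/frame.
x = np.array([10.0, 20.0, 1.0, 0.0])
P = np.eye(4)
x, P = predict(x, P)
x, P = update(x, P, np.array([11.2, 20.1]))
```

Each tracked target carries its own (x, P) pair; blobs detected in later frames serve as the observations z.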
Journal of Neurophysiology | 2015
Yuan Yuan; Hongwei Mao; Jennie Si
The outcomes that result from previous behavior affect future choices in several ways, but the neural mechanisms underlying these effects remain to be determined. Previous studies have shown that the lateral (AGl) and medial (AGm) agranular areas of the rat frontal cortex are involved in the learning and selection of action. Here we describe the activity of single neurons in AGl and AGm as rats learned to perform a directional choice task. Our analysis shows that single-cell activity in AGl and AGm was modulated by the outcome of the previous trial. A larger proportion of neurons encoded the previous trial's outcome shortly after cue onset than during other time periods of a trial. Most of these neurons had greater activity after correct trials than after error trials, a difference that increased as behavioral performance improved. The number of neurons encoding the previous trial's outcome correlated positively with performance accuracy. In summary, we found that neurons in both AGl and AGm encode the outcome of the immediately preceding trial, information that might play a role in the successful selection of action based on past experience.
Optical Engineering | 2014
Hongwei Mao; Chenhui Yang; Glen P. Abousleman; Jennie Si
Abstract. In real-world scenarios, a target tracking system could be severely compromised by interactions, i.e., influences from the proximity and/or behavior of other targets or background objects. Closely spaced targets are difficult to distinguish, and targets may be partially or totally invisible for uncontrolled durations when occluded by other objects. These situations are very likely to degrade the performance or cause the tracker to fail because the system may use invalid target observations to update the tracks. To address these issues, we propose an integrated multitarget tracking system. A background-subtraction–based method is used to automatically detect moving objects in video frames captured by a moving camera. The data association method evaluates the overlap rates between newly detected objects (observations) and already-tracked targets and makes decisions pertaining to whether a target is interacting with other targets and whether it has a valid observation. According to the association results, distinct strategies are employed to update and manage the tracks of interacting versus well-isolated targets. This system has been tested with real-world airborne videos from the DARPA Video Verification of Identity program database and demonstrated excellent track continuity in the presence of occlusions and multiple target interactions, very low false alarm rate, and real-time operation on an ordinary general-purpose computer.
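The overlap-rate test at the heart of the data association step might look like the following sketch, assuming axis-aligned bounding boxes (x, y, w, h). The threshold value and the choice of normalizing by the smaller box's area are assumptions for illustration, not the paper's exact formulation.

```python
def overlap_rate(a, b):
    """Intersection area divided by the smaller box's area."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    return (ix * iy) / min(aw * ah, bw * bh)

def associate(tracks, observations, thresh=0.5):
    """For each track, list the observations whose overlap rate exceeds
    thresh. A track matching several observations (a split), or one
    observation matching several tracks (a merge / interaction), can then
    be flagged instead of being blindly used to update the track."""
    return {t: [o for o in range(len(observations))
                if overlap_rate(tracks[t], observations[o]) > thresh]
            for t in range(len(tracks))}

tracks = [(0, 0, 10, 10), (50, 50, 10, 10)]
obs = [(2, 2, 10, 10), (100, 100, 5, 5)]
print(associate(tracks, obs))  # track 0 → obs 0; track 1 → no valid observation
```

A track left with no valid observation (like track 1 here) would be updated by prediction only, rather than corrupted by a spurious detection.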
Proceedings of SPIE | 2013
Xiang Gao; Hongwei Mao; Eric Munson; Glen P. Abousleman; Jennie Si
In this paper, we propose a real-time embedded video target tracking algorithm for use with real-world airborne video. The proposed system is designed to detect and track multiple targets from a moving camera in complicated motion scenarios such as occlusion, closely spaced targets passing in opposite directions, move-stop-move, etc. In our previous work, we developed a robust motion-based detection and tracking system, which achieved real-time performance on a desktop computer. In this paper, we extend our work to real-time implementation on a Texas Instruments OMAP 3730 ARM + DSP embedded processor by replacing the previous sequential motion estimation and tracking processes with a parallel implementation. To achieve real-time performance on the heterogeneous-core ARM + DSP OMAP platform, the C64x+ DSP core is utilized as a motion estimation preprocessing unit for target detection. Following the DSP-based motion estimation step, the descriptors of potential targets are passed to the general-purpose ARM Cortex A8 for further processing. Simultaneously, the DSP begins preprocessing the next frame. By maximizing the parallel computational capability of the DSP, and operating the DSP and ARM asynchronously, we reduce the average processing time for each video frame by up to 60% as compared to an ARM-only approach.
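The asynchronous DSP/ARM pipelining described above can be emulated on a general-purpose machine as two threaded stages joined by a queue: while the "ARM" stage tracks frame n, the "DSP" stage is already preprocessing frame n+1. The stage functions below are placeholders for illustration, not the actual OMAP code.

```python
import threading
import queue

def motion_estimation(frame):
    """Placeholder for the DSP-side preprocessing of one frame."""
    return {"frame": frame, "blobs": [f"blob-{frame}"]}

def track_update(descriptors):
    """Placeholder for the ARM-side tracking on detected descriptors."""
    return f"tracks updated from {descriptors['frame']}"

def dsp_stage(frames, out_q):
    for f in frames:
        out_q.put(motion_estimation(f))  # start on the next frame at once
    out_q.put(None)                      # end-of-stream marker

def arm_stage(in_q, results):
    while (item := in_q.get()) is not None:
        results.append(track_update(item))

q = queue.Queue(maxsize=2)   # small buffer keeps the two stages in lockstep
results = []
dsp = threading.Thread(target=dsp_stage, args=(range(5), q))
arm = threading.Thread(target=arm_stage, args=(q, results))
dsp.start(); arm.start(); dsp.join(); arm.join()
```

With the two stages overlapped this way, per-frame latency approaches the slower stage alone rather than the sum of both, which is the source of the reported speedup over a sequential implementation.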
Proceedings of SPIE | 2013
Hongwei Mao; Glen P. Abousleman; Jennie Si
In real-world target tracking scenarios, interactions among multiple moving targets can severely compromise the performance of the tracking system. Targets involved in interactions are typically closely-spaced and are often partially or entirely occluded by other objects. In these cases, valid target observations are unlikely to be available. To address this issue, we present an integrated multi-target tracking system. The data association method evaluates the overlap rates between newly detected objects (target observations) and already-tracked targets, and makes decisions pertaining to whether a target is interacting with other targets and whether it has a valid observation. Thus, the system is capable of recognizing target interactions and will reject invalid target observations. According to the association results, distinct strategies are adopted to update and manage the tracks of interacting versus well-isolated targets. Testing results on real-world airborne video sequences demonstrate the excellent performance of the proposed system for tracking targets with multiple target interactions. Moreover, the system operates in real time on an ordinary desktop computer.
Proceedings of SPIE | 2011
Hongwei Mao; Chenhui Yang; Glen P. Abousleman; Jennie Si
In real-world outdoor video, moving targets such as vehicles and people may be partially or fully occluded by background objects such as buildings and trees, which makes tracking them continuously a very challenging task. In the present work, we present a system to address the problem of tracking targets through occlusions in a motion-based target detection and tracking framework. For an existing track that is fully occluded, a Kalman filter is applied to predict the target's current position based upon its previous locations. However, the prediction may drift from the target's true trajectory due to accumulated prediction errors, especially when the occlusion is of long duration. To address this problem, tracks that have disappeared are checked with an extra data association procedure that evaluates the potential association between the track and the new detections, which could be a previously tracked target that is just coming out of occlusion. Another issue that arises with motion-based tracking is that the algorithm may consider the visible part of a partially occluded target as the entire target region. This is problematic because an inaccurate target motion trajectory model will be built, causing the Kalman filter to generate inaccurate target position predictions, which can yield a divergence between the track and the true target trajectory. Accordingly, we present a method that provides reasonable estimates of the partially-occluded target centers. Experimental results conducted on real-world unmanned air vehicle (UAV) video sequences demonstrate that the proposed system significantly improves the track continuity in various occlusion scenarios.
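The prediction-through-occlusion and re-association steps can be sketched as follows. The constant-velocity extrapolation and the gate radius are illustrative assumptions; the paper's actual predictor is a Kalman filter.

```python
import math

def predict_position(last_pos, velocity, frames_occluded):
    """Extrapolate a fully occluded target's position over several frames.
    Prediction error accumulates with occlusion length, which is why the
    extra association check against new detections is needed."""
    return (last_pos[0] + velocity[0] * frames_occluded,
            last_pos[1] + velocity[1] * frames_occluded)

def reassociate(predicted, detections, gate=15.0):
    """Return the index of the nearest new detection within the gate,
    or None if no detection plausibly continues the disappeared track."""
    best, best_d = None, gate
    for i, d in enumerate(detections):
        dist = math.hypot(d[0] - predicted[0], d[1] - predicted[1])
        if dist < best_d:
            best, best_d = i, dist
    return best

# A target last seen at (100, 40) moving 2 px/frame, occluded for 10 frames.
pred = predict_position((100.0, 40.0), (2.0, 0.0), frames_occluded=10)
print(pred)                                                # (120.0, 40.0)
print(reassociate(pred, [(123.0, 41.0), (300.0, 10.0)]))   # 0
```

A detection matched this way is treated as the occluded target re-emerging, and its track is resumed instead of a new track being spawned.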
world congress on computational intelligence | 2012
Chenhui Yang; Hongwei Mao; Yuan Yuan; Bing Cheng; Jennie Si
How neuronal firing activities encode meaningful behavior is an ultimate challenge to neuroscientists. To make the problem tractable, we use a rat model to elucidate how an ensemble of single neuron firing events leads to conscious, goal-directed movement and control. This study discusses findings based on single unit, multi-channel simultaneous recordings from rats' frontal areas while they learned to perform a decision and control task. To study neural firing activities, first and foremost we needed to identify single unit firing action potentials, or perform spike sorting prior to any analysis on the ensemble of neural activities. After that, we studied cortical neural firing rates to characterize their changes as rats learned a directional paddle control task. Single units from the rats' frontal areas were inspected for their possible encoding mechanism of directional and sequential movement parameters. Our results include both high-level statistical snapshots of the neural data and more detailed neuronal roles in relation to the rats' learning and control behavior.
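A minimal sketch of the firing-rate analysis step: after spike sorting, each unit's spike timestamps are binned to obtain a rate over the trial. The bin width and the synthetic spike train below are assumptions for illustration.

```python
import numpy as np

def firing_rate(spike_times, t_start, t_end, bin_width=0.1):
    """Spike counts per bin divided by bin width -> rate in spikes/s."""
    n_bins = int(round((t_end - t_start) / bin_width))
    counts, _ = np.histogram(spike_times, bins=n_bins,
                             range=(t_start, t_end))
    return counts / bin_width

# A synthetic unit firing ~20 spikes/s over a 1 s trial.
spikes = np.linspace(0.0, 0.95, 20)
rates = firing_rate(spikes, 0.0, 1.0)
print(rates.mean())  # about 20 spikes/s
```

Rate vectors like this, computed per unit and per trial epoch, are what change-over-learning analyses typically operate on.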
Proceedings of SPIE | 2011
Chenhui Yang; Hongwei Mao; Glen P. Abousleman; Jennie Si
In video tracking systems using image subtraction for motion detection, the global motion is usually estimated to compensate for the camera motion. The accuracy and robustness of the global motion compensation critically affects the performance of the target tracking process. The global motion between video frames can be estimated by matching features from the image background. However, features from moving targets contain both camera and target motion and should not be used to calculate the global motion. Sparse optical flow is a classical image matching method, but the features it selects may come from moving targets, and some of the resulting matches may be inaccurate, which leads to poor video tracking performance. Least Median of Squares (LMedS) is a popular robust linear regression model and has been applied to real-time video tracking systems implemented in hardware to process up to 7.5 frames/second. In this paper, we use a robust regression method to select features only from the image background for robust global motion estimation, and we develop a real-time (10 frames/second), software-based video tracking system that runs on an ordinary Windows-based general-purpose computer. The software optimization and parameter tuning for real-time execution are discussed in detail. The tracking performance is evaluated with real-world Unmanned Air Vehicle (UAV) video, and we demonstrate the improved global motion estimation in terms of accuracy and robustness.
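The LMedS idea can be sketched as follows, simplified here to estimating a pure 2-D translation between matched features. Feature matches on moving targets act as outliers; because LMedS minimizes the median squared residual rather than the sum, up to roughly half the matches can be outliers without corrupting the estimate. The data and trial count below are synthetic assumptions.

```python
import random
import numpy as np

def lmeds_translation(src, dst, n_trials=100, seed=0):
    """Least-Median-of-Squares fit of a 2-D translation dst ~ src + t."""
    rng = random.Random(seed)
    best_t, best_med = None, float("inf")
    for _ in range(n_trials):
        i = rng.randrange(len(src))          # minimal sample: one match
        t = dst[i] - src[i]                  # candidate translation
        residuals = np.sum((dst - (src + t)) ** 2, axis=1)
        med = np.median(residuals)           # median, not sum, of residuals
        if med < best_med:
            best_t, best_med = t, med
    return best_t

rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(20, 2))
dst = src + np.array([3.0, -1.0])            # true camera motion
dst[:6] += np.array([15.0, 12.0])            # 6 matches on a moving target
t = lmeds_translation(src, dst)
print(t)  # close to [3, -1] despite 30% outliers
```

The real system estimates a fuller motion model from multi-point samples, but the selection principle is the same: candidates fit from background features win, because they leave a small median residual over all matches.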
Proceedings of SPIE | 2010
Chenhui Yang; Hongwei Mao; Glen P. Abousleman; Jennie Si
Image mosaicking is the process of piecing together multiple video frames or still images from a moving camera to form a wide-area or panoramic view of the scene being imaged. Mosaics have widespread applications in many areas such as security surveillance, remote sensing, geographical exploration, agricultural field surveillance, virtual reality, digital video, and medical image analysis, among others. When mosaicking a large number of still images or video frames, the quality of the resulting mosaic is compromised by projective distortion. That is, during the mosaicking process, the image frames that are transformed and pasted to the mosaic become significantly scaled down and appear out of proportion with respect to the mosaic. As more frames continue to be transformed, important target information in the frames can be lost since the transformed frames become too small, which eventually leads to the inability to continue further. Some projective distortion correction techniques make use of prior information such as GPS information embedded within the image, or camera internal and external parameters. Alternatively, this paper proposes a new algorithm to reduce the projective distortion without using any prior information whatsoever. Based on the analysis of the projective distortion, we approximate the projective matrix that describes the transformation between image frames using an affine model. Using singular value decomposition, we can deduce the affine model scaling factor, which is usually very close to 1. By resetting the image scale of the affine model to 1, the transformed image size remains unchanged. Even though the proposed correction introduces some error in the image matching, this error is typically acceptable and, more importantly, the final mosaic preserves the original image size after transformation. We demonstrate the effectiveness of this new correction algorithm on two real-world unmanned air vehicle (UAV) sequences. The proposed method is shown to be effective and suitable for real-time implementation.
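A sketch of the scale-reset idea: approximate the frame-to-mosaic transform by its affine part, read the overall scaling factor off the singular values of the 2x2 linear block, and divide it out so the pasted frame keeps its original size. The paper's exact derivation of the factor is paraphrased here; the geometric mean of the two singular values is one natural choice.

```python
import numpy as np

def reset_affine_scale(A):
    """A is a 2x3 affine matrix [linear block | translation].
    Returns the matrix with its overall scale divided out, and the scale."""
    L = A[:, :2]
    sv = np.linalg.svd(L, compute_uv=False)
    s = np.sqrt(sv[0] * sv[1])      # geometric mean of singular values
    A_fixed = A.copy()
    A_fixed[:, :2] = L / s          # unit overall scale: frame size preserved
    return A_fixed, s

# A 10-degree rotation combined with a 0.9x shrink, the kind of creeping
# scale-down that accumulates as frames are chained into a mosaic.
theta = np.deg2rad(10)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A = np.hstack([0.9 * R, [[5.0], [2.0]]])
A_fixed, s = reset_affine_scale(A)
print(round(s, 3))  # 0.9
```

After the reset, the linear block of A_fixed is a pure rotation, so the pasted frame is reoriented but no longer shrunk.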
Archive | 2015
Nathalie Picard; Peter L. Strick; Scott T. Grafton; Konrad P. Körding; Daniel E. Acuna; Nicholas F. Wymbs; Chelsea A. Reynolds; Se-Woong Park; Dagmar Sternad; Yuan Yuan; Hongwei Mao; Jennie Si