
Publication


Featured research published by Huaizu Jiang.


IEEE Transactions on Image Processing | 2015

Salient Object Detection: A Benchmark

Ali Borji; Ming-Ming Cheng; Huaizu Jiang; Jia Li

We extensively compare, qualitatively and quantitatively, 41 state-of-the-art models (29 salient object detection, 10 fixation prediction, 1 objectness, and 1 baseline) over seven challenging data sets for the purpose of benchmarking salient object detection and segmentation methods. From the results obtained so far, our evaluation shows consistent, rapid progress over the last few years in terms of both accuracy and running time. The top contenders in this benchmark significantly outperform the models identified as the best in the previous benchmark conducted three years ago. We find that the models designed specifically for salient object detection generally work better than models in closely related areas, which in turn provides a precise definition and suggests an appropriate treatment of this problem that distinguishes it from other problems. In particular, we analyze the influences of center bias and scene complexity on model performance, which, along with the hard cases for the state-of-the-art models, provide useful hints toward constructing more challenging large-scale data sets and better saliency models. Finally, we propose probable solutions for tackling several open problems, such as evaluation scores and data set bias, which also suggest future research directions in the rapidly growing field of salient object detection.
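Benchmarks of this kind typically score a binarized saliency map against a ground-truth mask with precision, recall, and a weighted F-measure. A minimal sketch (the function name is illustrative; beta^2 = 0.3 is the weighting conventionally used in this literature):

```python
def f_measure(pred_mask, gt_mask, beta2=0.3):
    """Precision, recall, and weighted F-measure of a binarized saliency map.

    pred_mask, gt_mask: flat lists of 0/1 labels, one entry per pixel.
    beta2: the beta^2 weighting; 0.3 is conventional in salient object
    detection benchmarks and emphasizes precision over recall.
    """
    tp = sum(p and g for p, g in zip(pred_mask, gt_mask))  # true positives
    pred_pos = sum(pred_mask)
    gt_pos = sum(gt_mask)
    precision = tp / pred_pos if pred_pos else 0.0
    recall = tp / gt_pos if gt_pos else 0.0
    if precision + recall == 0:
        return 0.0, precision, recall
    f = (1 + beta2) * precision * recall / (beta2 * precision + recall)
    return f, precision, recall
```

In practice the saliency map is thresholded at many levels and the best (or adaptive-threshold) F-measure is reported per data set.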


British Machine Vision Conference | 2011

Automatic Salient Object Segmentation Based on Context and Shape Prior

Huaizu Jiang; Jingdong Wang; Zejian Yuan; Tie Liu; Nanning Zheng

We propose a novel automatic salient object segmentation algorithm which integrates both bottom-up salient stimuli and an object-level shape prior, i.e., that a salient object has a well-defined closed boundary. Our approach is formalized as an iterative energy minimization framework, leading to binary segmentation of the salient object. The energy minimization is initialized with a saliency map computed through context analysis based on multi-scale superpixels. The object-level shape prior is then extracted by combining saliency with object boundary information. Both the saliency map and the shape prior are updated after each iteration. Experimental results on two public benchmark datasets show that our proposed approach outperforms state-of-the-art methods.


IEEE International Conference on Automatic Face & Gesture Recognition | 2017

Face Detection with the Faster R-CNN

Huaizu Jiang; Erik G. Learned-Miller

While deep learning based methods for generic object detection have improved rapidly in the last two years, most approaches to face detection are still based on the R-CNN framework [11], leading to limited accuracy and processing speed. In this paper, we investigate applying the Faster R-CNN [26], which has recently demonstrated impressive results on various object detection benchmarks, to face detection. By training a Faster R-CNN model on the large-scale WIDER face dataset [34], we report state-of-the-art results on the WIDER test set as well as on two other widely used face detection benchmarks, FDDB and the recently released IJB-A.


International Journal of Computer Vision | 2017

Salient Object Detection: A Discriminative Regional Feature Integration Approach

Jingdong Wang; Huaizu Jiang; Zejian Yuan; Ming-Ming Cheng; Xiaowei Hu; Nanning Zheng

Feature integration provides a computational framework for saliency detection, and many hand-crafted integration rules have been developed. In this paper, we present a principled extension, supervised feature integration, which learns a random forest regressor to discriminatively integrate the saliency features for saliency computation. In addition to contrast features, we introduce regional object-sensitive descriptors: the objectness descriptor characterizing the common spatial and appearance properties of the salient object, and the image-specific backgroundness descriptor characterizing the appearance of the background of a specific image, which are shown to be more important for estimating saliency. To the best of our knowledge, our supervised feature integration framework is the first successful approach to perform integration over the saliency features for salient object detection, and it outperforms integration over the saliency maps. Together with fusing the multi-level regional saliency maps to impose spatial saliency consistency, our approach significantly outperforms state-of-the-art methods on seven benchmark datasets. We also discuss several follow-up works which jointly learn the representation and the saliency map using deep learning.
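The idea of supervised feature integration, learning how to combine per-region feature scores into a single saliency value rather than hand-crafting the rule, can be sketched with a two-feature linear least-squares fit. This is a simplified stand-in for the paper's random forest regressor; all names are illustrative:

```python
def fit_linear_integration(features, targets):
    """Learn weights (w0, w1) for saliency = w0*f0 + w1*f1 by least squares.

    A simplified linear stand-in for a learned regressor over regional
    saliency features: `features` is a list of (f0, f1) per region,
    `targets` the ground-truth saliency of each region. Solves the 2x2
    normal equations directly.
    """
    s00 = sum(f0 * f0 for f0, _ in features)
    s01 = sum(f0 * f1 for f0, f1 in features)
    s11 = sum(f1 * f1 for _, f1 in features)
    t0 = sum(f0 * y for (f0, _), y in zip(features, targets))
    t1 = sum(f1 * y for (_, f1), y in zip(features, targets))
    det = s00 * s11 - s01 * s01
    w0 = (s11 * t0 - s01 * t1) / det
    w1 = (s00 * t1 - s01 * t0) / det
    return w0, w1

def integrate(features, w0, w1):
    """Apply the learned weights to produce a per-region saliency score."""
    return [w0 * f0 + w1 * f1 for f0, f1 in features]
```

A random forest replaces the linear map with a non-linear, discriminatively trained one, but the training signal (region features in, ground-truth region saliency out) is the same.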


IEEE Transactions on Image Processing | 2014

Regularized Tree Partitioning and Its Application to Unsupervised Image Segmentation

Jingdong Wang; Huaizu Jiang; Yangqing Jia; Xian-Sheng Hua; Changshui Zhang; Long Quan

In this paper, we propose regularized tree partitioning approaches. We study normalized cut (NCut) and average cut (ACut) criteria over a tree, forming two approaches: 1) normalized tree partitioning (NTP) and 2) average tree partitioning (ATP). We give the properties that result in an efficient algorithm for NTP and ATP. In addition, we present the relations between the solutions of NTP and ATP over the maximum weight spanning tree of a graph and NCut and ACut over this graph. To demonstrate the effectiveness of the proposed approaches, we show its application to image segmentation over the Berkeley image segmentation data set and present qualitative and quantitative comparisons with state-of-the-art methods.
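The NTP idea can be illustrated on a toy graph: build the maximum-weight spanning tree, then note that removing any single tree edge yields a two-way split whose cut value is exactly that edge's weight, so a normalized-cut-style score can be evaluated per edge. A sketch under those assumptions (brute force over edges, not the paper's efficient algorithm; names are illustrative):

```python
def kruskal_mst(n, edges):
    """Maximum-weight spanning tree via Kruskal with union-find.

    edges: list of (weight, u, v) tuples over nodes 0..n-1.
    """
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    mst = []
    for w, u, v in sorted(edges, reverse=True):  # heaviest edges first
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

def best_ncut_split(n, mst):
    """Remove each tree edge in turn; return the split minimizing
    cut/vol(A) + cut/vol(B), where over a tree the cut between the two
    components is just the removed edge's weight."""
    adj = {i: [] for i in range(n)}
    for w, u, v in mst:
        adj[u].append((v, w))
        adj[v].append((u, w))
    vol = {i: sum(w for _, w in adj[i]) for i in range(n)}  # weighted degree
    best = None
    for w, u, v in mst:
        seen = {u}          # DFS for the component containing u
        stack = [u]
        while stack:
            x = stack.pop()
            for y, _ in adj[x]:
                if (x, y) in ((u, v), (v, u)):
                    continue  # the removed edge
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        vol_a = sum(vol[x] for x in seen)
        vol_b = sum(vol.values()) - vol_a
        ncut = w / vol_a + w / vol_b
        if best is None or ncut < best[0]:
            best = (ncut, seen, set(range(n)) - seen)
    return best
```

On a graph with two tightly connected pairs joined by one weak edge, the best split removes the weak edge, which is the behavior NTP formalizes and solves efficiently.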


International Conference on Image Processing | 2013

Probabilistic salient object contour detection based on superpixels

Huaizu Jiang; Yang Wu; Zejian Yuan

In this paper, we propose a data-driven approach to detect the probabilistic salient object contour, which is formulated as predicting the probability of superpixel boundaries being on the object contour based on the learned regressor. Each superpixel boundary is jointly described by the superpixel saliency, superpixel contrast, and boundary geometry features. Experimental results on the benchmark data set validate the effectiveness of our approach. Furthermore, we demonstrate that the predicted probabilistic salient object contour is useful for improving the multiple segmentations for salient object detection.


Frontiers of Computer Science in China | 2018

Joint salient object detection and existence prediction

Huaizu Jiang; Ming-Ming Cheng; Shi-Jie Li; Ali Borji; Jingdong Wang

Recent advances in supervised salient object detection modeling have resulted in significant performance improvements on benchmark datasets. However, most existing salient object detection models assume that at least one salient object exists in the input image. Such an assumption often leads to less appealing saliency maps on background images with no salient object at all. Handling those cases can therefore reduce the false positive rate of a model. In this paper, we propose a supervised learning approach for jointly addressing the salient object detection and existence prediction problems. Given a set of background-only images and images with salient objects, as well as their salient object annotations, we adopt the structural SVM framework and formulate the two problems jointly in a single integrated objective function: saliency labels of superpixels are involved in a classification term conditioned on the salient object existence variable, which in turn depends on both global image and regional saliency features and on the saliency label assignments. The loss function also considers both image-level and region-level mis-classifications. Extensive evaluation on benchmark datasets validates the effectiveness of our proposed joint approach compared to the baseline and state-of-the-art models.


IEEE Transactions on Image Processing | 2015

Online Multi-Target Tracking With Unified Handling of Complex Scenarios

Huaizu Jiang; Jinjun Wang; Yihong Gong; Na Rong; Zhenhua Chai; Nanning Zheng

Complex scenarios, including missed detections, occlusions, false detections, and trajectory terminations, make data association challenging. In this paper, we propose an online tracking-by-detection method to track multiple targets with unified handling of the aforementioned complex scenarios, where current detection responses are linked to the previous trajectories. We introduce a dummy node for each trajectory to allow it to temporarily disappear. If a trajectory fails to find its matching detection, it is linked to its corresponding dummy node until its matching detection re-emerges. Source nodes are also incorporated to account for the entrance of new targets. The standard Hungarian algorithm, extended by the dummy nodes, can be exploited to solve the online data association implicitly in a global manner, although it is formulated between two consecutive frames. Moreover, as dummy nodes tend to accumulate in a false or disappeared trajectory while they only occasionally appear in a real trajectory, we can deal with false detections and trajectory terminations by simply checking the number of consecutive dummy nodes. Our approach works on a single, uncalibrated camera, requires neither scene prior knowledge nor explicit occlusion reasoning, and runs at 132 frames/s on the PETS09-S2L1 benchmark sequence. The experimental results validate the effectiveness of the dummy nodes in complex scenarios and show that our proposed approach is robust against false detections and missed detections. Quantitative comparisons with other methods on five benchmark sequences demonstrate that we achieve results comparable to most existing offline methods and better results than other online algorithms.
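The dummy-node construction can be sketched as augmenting the frame-to-frame cost matrix with one dummy column per trajectory; a trajectory assigned to its dummy column temporarily disappears. The toy solver below uses brute-force enumeration in place of the extended Hungarian algorithm (all names and costs are illustrative):

```python
from itertools import permutations

def associate(cost, dummy_cost):
    """Match trajectories to current detections, letting any trajectory
    take its dummy node (temporary disappearance) at fixed `dummy_cost`.

    cost[i][j] is the linking cost from trajectory i to detection j.
    The matrix is augmented with one dummy column per trajectory; the
    assignment is solved by brute force, a stand-in for the extended
    Hungarian algorithm, so it is only suitable for small toy problems.
    Returns one entry per trajectory: a detection index, or None if the
    trajectory was assigned to its dummy node.
    """
    n_traj = len(cost)
    n_det = len(cost[0]) if cost else 0
    n_cols = n_det + n_traj  # real detections + one dummy per trajectory

    def col_cost(i, j):
        if j < n_det:
            return cost[i][j]
        # dummy column j belongs only to trajectory j - n_det
        return dummy_cost if j == n_det + i else float("inf")

    best, best_assign = float("inf"), None
    for perm in permutations(range(n_cols), n_traj):
        total = sum(col_cost(i, j) for i, j in enumerate(perm))
        if total < best:
            best, best_assign = total, perm
    return [j if j < n_det else None for j in best_assign]
```

Unmatched detections would then be handled by source nodes that spawn new trajectories, and a run of consecutive dummy assignments signals a terminated or false trajectory, mirroring the paper's bookkeeping.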


European Conference on Computer Vision | 2018

Unsupervised Hard Example Mining from Videos for Improved Object Detection

SouYoung Jin; Aruni RoyChowdhury; Huaizu Jiang; Ashish Singh; Aditya Prasad; Deep Chakraborty; Erik G. Learned-Miller

Important gains have recently been obtained in object detection by using training objectives that focus on hard negative examples, i.e., negative examples that are currently rated as positive or ambiguous by the detector. These examples can strongly influence parameters when the network is trained to correct them. Unfortunately, they are often sparse in the training data, and are expensive to obtain. In this work, we show how large numbers of hard negatives can be obtained automatically by analyzing the output of a trained detector on video sequences. In particular, detections that are isolated in time, i.e., that have no associated preceding or following detections, are likely to be hard negatives. We describe simple procedures for mining large numbers of such hard negatives (and also hard positives) from unlabeled video data. Our experiments show that retraining detectors on these automatically obtained examples often significantly improves performance. We present experiments on multiple architectures and multiple data sets, including face detection, pedestrian detection and other object categories.
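The core mining signal, a detection isolated in time, can be sketched as flagging any detection with no sufficiently overlapping detection in the adjacent frames. The IoU threshold and the simple previous/next-frame matching rule here are illustrative simplifications of the paper's procedure:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def mine_hard_negatives(detections, iou_thresh=0.3):
    """Flag detections isolated in time as likely hard negatives.

    detections: list over frames; each frame is a list of boxes.
    A detection with no overlapping detection (IoU >= iou_thresh) in the
    previous or next frame is flagged. Returns (frame_index, box) pairs.
    """
    hard = []
    for t, frame in enumerate(detections):
        prev_boxes = detections[t - 1] if t > 0 else []
        next_boxes = detections[t + 1] if t + 1 < len(detections) else []
        for box in frame:
            neighbors = prev_boxes + next_boxes
            if not any(iou(box, nb) >= iou_thresh for nb in neighbors):
                hard.append((t, box))
    return hard
```

A real object persists across consecutive frames, so a single-frame flicker is very likely a false positive; the flagged regions can then be added to the training set as negatives and the detector retrained.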


arXiv: Computer Vision and Pattern Recognition | 2014

Salient Object Detection: A Survey

Ali Borji; Ming-Ming Cheng; Huaizu Jiang; Jia Li

Collaboration


Top Co-Authors

Erik G. Learned-Miller (University of Massachusetts Amherst)
Nanning Zheng (Xi'an Jiaotong University)
Zejian Yuan (Xi'an Jiaotong University)
Ali Borji (University of Central Florida)
Aditya Prasad (University of Massachusetts Amherst)