Xiang Ruan
Omron
Publications
Featured research published by Xiang Ruan.
Computer Vision and Pattern Recognition | 2013
Chuan Yang; Huchuan Lu; Xiang Ruan; Ming-Hsuan Yang
Most existing bottom-up methods measure the foreground saliency of a pixel or region based on its contrast within a local context or the entire image, whereas a few methods focus on segmenting out background regions and thereby salient objects. Instead of considering the contrast between the salient objects and their surrounding regions, we consider both foreground and background cues in a different way. We rank the similarity of the image elements (pixels or regions) with foreground cues or background cues via graph-based manifold ranking. The saliency of the image elements is defined based on their relevance to the given seeds or queries. We represent the image as a closed-loop graph with superpixels as nodes. These nodes are ranked based on their similarity to background and foreground queries using affinity matrices. Saliency detection is carried out in a two-stage scheme to extract background regions and foreground salient objects efficiently. Experimental results on two large benchmark databases demonstrate that the proposed method performs well against state-of-the-art methods in terms of accuracy and speed. We also create a more difficult benchmark database containing 5,172 images to test the proposed saliency model and make this database publicly available with this paper for further studies in the saliency field.
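A minimal sketch of the core ranking step, assuming a precomputed superpixel affinity matrix W and a binary query indicator y (the toy values below are hypothetical); the closed-form ranking solution is f* = (D - alpha*W)^{-1} y:

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.99):
    """Graph-based manifold ranking in closed form.

    W : (n, n) symmetric affinity matrix over superpixels.
    y : (n,) indicator vector marking the query nodes
        (e.g. boundary superpixels used as background seeds).
    Returns a relevance score for every node w.r.t. the queries.
    """
    D = np.diag(W.sum(axis=1))               # degree matrix
    return np.linalg.solve(D - alpha * W, y)

# Toy usage: 4 superpixels, nodes 0 and 1 serve as background queries.
W = np.array([[0.0, 1.0, 0.2, 0.0],
              [1.0, 0.0, 0.3, 0.1],
              [0.2, 0.3, 0.0, 1.0],
              [0.0, 0.1, 1.0, 0.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])
relevance = manifold_ranking(W, y)
saliency = 1.0 - relevance / relevance.max()  # low background relevance -> salient
```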
International Conference on Computer Vision | 2013
Xiaohui Li; Huchuan Lu; Xiang Ruan; Ming-Hsuan Yang
In this paper, we propose a visual saliency detection algorithm from the perspective of reconstruction errors. The image boundaries are first extracted via superpixels as likely cues for background templates, from which dense and sparse appearance models are constructed. For each image region, we first compute dense and sparse reconstruction errors. Second, the reconstruction errors are propagated based on the contexts obtained from K-means clustering. Third, pixel-level saliency is computed by an integration of multi-scale reconstruction errors and refined by an object-biased Gaussian model. We apply the Bayes formula to integrate saliency measures based on dense and sparse reconstruction errors. Experimental results show that the proposed algorithm performs favorably against seventeen state-of-the-art methods in terms of precision and recall. In addition, the proposed algorithm is demonstrated to be more effective in highlighting salient objects uniformly and robust to background noise.
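A rough sketch of the two error measures, assuming each region is described by a feature vector x and the boundary superpixels form a background template matrix B (both hypothetical inputs); the dense error uses a PCA subspace, and scikit-learn's Lasso stands in for the paper's sparse coding solver:

```python
import numpy as np
from sklearn.linear_model import Lasso

def dense_error(B, x, n_components=8):
    """Dense reconstruction error of feature x (d,) against the background
    template matrix B (d, n): project onto the templates' PCA subspace."""
    mean = B.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(B - mean, full_matrices=False)
    P = U[:, :min(n_components, U.shape[1])]
    x_hat = P @ (P.T @ (x - mean.ravel())) + mean.ravel()
    return np.sum((x - x_hat) ** 2)

def sparse_error(B, x, lam=0.01):
    """Sparse reconstruction error: encode x with few background templates."""
    coder = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
    coder.fit(B, x)                     # columns of B act as dictionary atoms
    return np.sum((x - B @ coder.coef_) ** 2)
```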
Computer Vision and Pattern Recognition | 2015
Lijun Wang; Huchuan Lu; Xiang Ruan; Ming-Hsuan Yang
This paper presents a saliency detection algorithm that integrates both local estimation and global search. In the local estimation stage, we detect local saliency using a deep neural network (DNN-L) that learns local patch features to determine the saliency value of each pixel. The estimated local saliency maps are further refined by exploring high-level object concepts. In the global search stage, the local saliency map together with global contrast and geometric information is used as global features to describe a set of object candidate regions. Another deep neural network (DNN-G) is trained to predict the saliency score of each object region based on the global features. The final saliency map is generated by a weighted sum of salient object regions. Our method presents two interesting insights. First, local features learned by a supervised scheme can effectively capture local contrast, texture, and shape information for saliency detection. Second, the complex relationship between different global saliency cues can be captured by deep networks and exploited in a principled manner rather than heuristically. Quantitative and qualitative experiments on several benchmark datasets demonstrate that our algorithm performs favorably against the state-of-the-art methods.
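The final fusion step (a weighted sum of candidate regions scored by the global network) might look roughly like this sketch, where candidate_masks and scores are hypothetical stand-ins for the object proposals and the DNN-G outputs:

```python
import numpy as np

def fuse_global(candidate_masks, scores, top_k=10):
    """Final map as a weighted sum of the top-K object candidate regions.

    candidate_masks : list of (H, W) binary masks of object proposals.
    scores          : (n,) predicted saliency score per candidate.
    """
    order = np.argsort(scores)[::-1][:top_k]          # best-scored candidates
    sal = np.zeros_like(candidate_masks[0], dtype=float)
    for i in order:
        sal += scores[i] * candidate_masks[i]
    return sal / sal.max() if sal.max() > 0 else sal
```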
Computer Vision and Pattern Recognition | 2015
Na Tong; Huchuan Lu; Xiang Ruan; Ming-Hsuan Yang
We propose a bootstrap learning algorithm for salient object detection in which both weak and strong models are exploited. First, a weak saliency map is constructed based on image priors to generate training samples for a strong model. Second, a strong classifier based on samples directly from an input image is learned to detect salient pixels. Results from multiscale saliency maps are integrated to further improve the detection performance. Extensive experiments on six benchmark datasets demonstrate that the proposed bootstrap learning algorithm performs favorably against the state-of-the-art saliency detection methods. Furthermore, we show that the proposed bootstrap learning approach can be easily applied to other bottom-up saliency models for significant improvement.
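A rough sketch of the bootstrapping idea, with scikit-learn's AdaBoost standing in for the paper's boosted classifier and hypothetical per-superpixel features: the confident pixels of the weak map supply training labels for the strong model:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def bootstrap_strong_map(features, weak_map, lo=0.2, hi=0.8):
    """Train a strong classifier from samples the weak map is confident
    about, then rescore every superpixel of the same image.

    features : (n_superpixels, d) descriptors of one input image.
    weak_map : (n_superpixels,) weak saliency values in [0, 1].
    """
    pos, neg = weak_map >= hi, weak_map <= lo      # confident fg/bg samples
    X = np.vstack([features[pos], features[neg]])
    y = np.hstack([np.ones(pos.sum()), np.zeros(neg.sum())])
    clf = AdaBoostClassifier(n_estimators=100).fit(X, y)
    return clf.predict_proba(features)[:, 1]       # strong saliency per superpixel
```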
European Conference on Computer Vision | 2016
Linzhao Wang; Lijun Wang; Huchuan Lu; Pingping Zhang; Xiang Ruan
Deep networks have been shown to encode high-level semantic features and have delivered superior performance in saliency detection. In this paper, we go one step further by developing a new saliency model using recurrent fully convolutional networks (RFCNs). Compared with existing deep-network-based methods, the proposed network is able to incorporate saliency prior knowledge for more accurate inference. In addition, the recurrent architecture enables our method to automatically learn to refine the saliency map by correcting its previous errors. To train such a network with numerous parameters, we propose a pre-training strategy using semantic segmentation data, which simultaneously leverages the strong supervision of segmentation tasks for better training and enables the network to capture generic representations of objects for saliency detection. Through extensive experimental evaluations, we demonstrate that the proposed method compares favorably against state-of-the-art approaches, and that the proposed recurrent deep model as well as the pre-training method can significantly improve performance.
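A toy PyTorch sketch of the recurrent idea (not the paper's architecture): the network repeatedly consumes the image together with its own previous saliency prediction as an extra channel, refining the map over a few unrolled steps:

```python
import torch
import torch.nn as nn

class TinyRecurrentFCN(nn.Module):
    """Toy recurrent FCN: the input is the RGB image plus the previous
    saliency prediction as a fourth channel, unrolled for a few steps."""
    def __init__(self):
        super().__init__()
        self.fcn = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1))

    def forward(self, img, steps=3):
        sal = torch.zeros_like(img[:, :1])        # saliency prior (zeros here)
        for _ in range(steps):                    # recurrent refinement loop
            sal = torch.sigmoid(self.fcn(torch.cat([img, sal], dim=1)))
        return sal

sal = TinyRecurrentFCN()(torch.rand(1, 3, 64, 64))   # -> (1, 1, 64, 64)
```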
IEEE Signal Processing Letters | 2014
Na Tong; Huchuan Lu; Xiang Ruan
We propose a salient object detection algorithm via multi-scale analysis on superpixels. First, multi-scale segmentations of an input image are computed and represented by superpixels. In contrast to prior work, we utilize various Gaussian smoothing parameters to generate coarse or fine results, thereby facilitating the analysis of salient regions. At each scale, three essential cues from local contrast, integrity, and center bias are considered within the Bayesian framework. Next, we compute saliency maps by weighted summation and normalization. The final saliency map is optimized by a guided filter, which further improves the detection results. Extensive experiments on two large benchmark datasets demonstrate that the proposed algorithm performs favorably against state-of-the-art methods. The proposed method achieves the highest precision value of 97.39% when evaluated on one of the most popular datasets, the ASD dataset.
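The weighted summation and normalization step might be sketched as follows, with hypothetical per-scale maps as input; the subsequent guided-filter refinement could use, e.g., cv2.ximgproc.guidedFilter from OpenCV-contrib:

```python
import numpy as np

def fuse_scales(maps, weights=None):
    """Weighted summation and min-max normalization of per-scale saliency
    maps (a list of (H, W) arrays, one per segmentation scale)."""
    if weights is None:
        weights = [1.0 / len(maps)] * len(maps)   # uniform weights by default
    fused = sum(w * m for w, m in zip(weights, maps))
    lo, hi = fused.min(), fused.max()
    # The fused map would then be refined with a guided filter.
    return (fused - lo) / (hi - lo + 1e-12)
```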
Pattern Recognition | 2015
Na Tong; Huchuan Lu; Ying Zhang; Xiang Ruan
Previous saliency detection algorithms either focus directly on low-level features or utilize a large set of sample images with manually labeled ground truth to train a high-level learning model. In this paper, we propose a novel coding-based saliency measure that explores both global and local cues for saliency computation. First, we construct a bottom-up saliency map by considering global contrast information via low-level features. Second, using a locality-constrained linear coding algorithm, a top-down saliency map is formulated based on the reconstruction error. To better exploit the local and global information, we integrate the bottom-up and top-down maps as the final saliency map. Extensive experimental results on three large benchmark datasets demonstrate that the proposed approach outperforms 22 state-of-the-art methods in terms of three popular evaluation measures, i.e., the precision-recall curve, area under the ROC curve, and F-measure value. Furthermore, the proposed coding-based method can be easily applied to other methods for significant improvement.
Highlights: We present a coding-based algorithm for salient object detection. Integration of local and global cues makes the saliency maps more accurate and intact. Bottom-up maps provide foreground and background codebooks for the following steps. Fusion of FC-based and BC-based results makes the saliency results more uniform and robust. Our coding-based method can be easily applied to other methods for improvement.
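A minimal sketch of the locality-constrained linear coding (LLC) reconstruction error, using the well-known analytic k-nearest-atom approximation (Wang et al., 2010); x and codebook are hypothetical inputs:

```python
import numpy as np

def llc_error(x, codebook, k=5, eps=1e-4):
    """LLC reconstruction error of feature x (d,) against a codebook (d, m),
    coding x with its k nearest atoms only (analytic approximated LLC)."""
    d2 = ((codebook - x[:, None]) ** 2).sum(axis=0)
    idx = np.argsort(d2)[:k]
    Bk = codebook[:, idx]                 # the k nearest atoms
    Z = Bk - x[:, None]                   # data-shifted basis
    C = Z.T @ Z + eps * np.eye(k)         # regularized local covariance
    c = np.linalg.solve(C, np.ones(k))
    c /= c.sum()                          # enforce the sum-to-one constraint
    return np.sum((x - Bk @ c) ** 2)
```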
Computer Vision and Pattern Recognition | 2016
Ying Zhang; Baohua Li; Huchuan Lu; Atsushi Irie; Xiang Ruan
Person re-identification addresses the problem of matching people across disjoint camera views, and extensive efforts have been made to seek either robust feature representations or discriminative matching metrics. However, most existing approaches focus on learning a fixed distance metric for all instance pairs, while ignoring the individuality of each person. In this paper, we formulate the person re-identification problem as an imbalanced classification problem and learn a classifier specifically for each pedestrian such that the matching model is highly tuned to the individual's appearance. To establish correspondence between the feature space and the classifier space, we propose a Least Square Semi-Coupled Dictionary Learning (LSSCDL) algorithm to learn a pair of dictionaries and a mapping function efficiently. Extensive experiments on a series of challenging databases demonstrate that the proposed algorithm performs favorably against the state-of-the-art approaches, especially on the rank-1 recognition rate.
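The inference step might look roughly like the following sketch, with hypothetical learned quantities Dx (feature dictionary), Dw (classifier dictionary), and M (code-space mapping), and ridge regression standing in for the paper's least-squares coding:

```python
import numpy as np

def predict_classifier(x, Dx, Dw, M, lam=0.1):
    """Encode gallery feature x over the feature dictionary Dx, map the
    code into classifier space with M, and reconstruct a person-specific
    weight vector from the classifier dictionary Dw.

    Dx : (d, k) feature dictionary      (hypothetical, learned offline)
    Dw : (d, k) classifier dictionary   (hypothetical)
    M  : (k, k) mapping between the two code spaces (hypothetical)
    """
    A = Dx.T @ Dx + lam * np.eye(Dx.shape[1])
    a = np.linalg.solve(A, Dx.T @ x)      # ridge code of x over Dx
    return Dw @ (M @ a)                   # individual classifier weights

# Matching: score a probe feature z with the predicted classifier, w @ z.
```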
IEEE Transactions on Image Processing | 2016
Huchuan Lu; Xiaohui Li; Xiang Ruan; Ming-Hsuan Yang
In this paper, we propose a visual saliency detection algorithm from the perspective of reconstruction error. The image boundaries are first extracted via superpixels as likely cues for background templates, from which dense and sparse appearance models are constructed. First, we compute dense and sparse reconstruction errors on the background templates for each image region. Second, the reconstruction errors are propagated based on the contexts obtained from K-means clustering. Third, the pixel-level reconstruction error is computed by the integration of multi-scale reconstruction errors. Both the pixel-level dense and sparse reconstruction errors are then weighted by image compactness, which leads to more accurate saliency detection. In addition, we introduce a novel Bayesian integration method to combine saliency maps, which is applied to integrate the two saliency measures based on dense and sparse reconstruction errors. Experimental results show that the proposed algorithm performs favorably against 24 state-of-the-art methods in terms of precision, recall, and F-measure on three public standard salient object detection databases.
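A Bayesian integration of two saliency maps can be sketched pixel-wise as below; this simplified version treats each map directly as a likelihood, whereas the paper estimates the likelihoods from image statistics:

```python
import numpy as np

def bayes_integrate(s1, s2):
    """Symmetric Bayesian integration of two saliency maps in [0, 1]:
    use each map as the prior and the other as a pixel-wise likelihood,
    then average the two posteriors."""
    def posterior(prior, like):
        fg = prior * like                  # evidence for foreground
        bg = (1.0 - prior) * (1.0 - like)  # evidence for background
        return fg / (fg + bg + 1e-12)
    return 0.5 * (posterior(s1, s2) + posterior(s2, s1))
```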
Pattern Recognition | 2016
Ying Zhang; Huchuan Lu; Xiang Ruan; Shun Sakai
In this paper, we propose a novel anomaly detection approach based on Locality Sensitive Hashing Filters (LSHF), which hashes normal activities into multiple feature buckets with Locality Sensitive Hashing (LSH) functions to filter out abnormal activities. An online updating procedure is also introduced into the LSHF framework to adapt to changes in the video scenes. Furthermore, we develop a new evaluation function to evaluate the hash map and employ the Particle Swarm Optimization (PSO) method to search for the optimal hash functions, which improves the efficiency and accuracy of the proposed anomaly detection method. Experimental results on multiple datasets demonstrate that the proposed algorithm is capable of localizing various abnormal activities in real-world surveillance videos and outperforms state-of-the-art anomaly detection methods.
Highlights: We present a locality sensitive hashing filter based method for anomaly detection. Normal activities are hashed by hash functions into buckets to build filters. The abnormality of a test sample is estimated by the filter response of its nearest bucket. The online updating mechanism increases adaptability to scene changes. Searching for optimal hash functions improves detection accuracy. Our method performs favorably against previous anomaly detection algorithms.
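A minimal sketch of an LSH filter using random-hyperplane hashing (the paper additionally optimizes the hash functions with PSO and updates the filters online): normal samples populate buckets, and a test sample is scored by how well-populated its bucket is:

```python
import numpy as np
from collections import Counter

class LSHFilter:
    """Normal training activities are hashed into buckets by random
    hyperplanes; a test sample landing in a sparsely populated bucket
    receives a high abnormality score."""
    def __init__(self, dim, n_bits=12, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))
        self.buckets = Counter()

    def _key(self, x):
        return tuple((self.planes @ x > 0).astype(int))

    def fit(self, X):                     # X: (n_samples, dim) normal data
        for x in X:
            self.buckets[self._key(x)] += 1
        return self

    def abnormality(self, x):
        # Few (or no) normal neighbors in the bucket -> high abnormality.
        return 1.0 / (1.0 + self.buckets[self._key(x)])
```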