Publication


Featured research published by Qiling Tang.


Pattern Recognition | 2007

Extraction of salient contours from cluttered scenes

Qiling Tang; Nong Sang; Tianxu Zhang

The responses of neurons in the primary visual cortex (V1) to a stimulus inside the receptive field (RF) can be markedly modulated by stimuli outside the classical receptive field. This modulation, which depends on the contextual configuration, yields excitatory and inhibitory activities. V1 neurons form a functional network through lateral interactions and accomplish specific visual tasks in a dynamic and flexible fashion. Well-organized structures and conspicuous image locations are more salient and thus can pop out perceptually from the background. The excitatory and inhibitory activities provide different physiological interpretations for these two kinds of saliency. A model of contour extraction, inspired by visual cortical mechanisms of perceptual grouping, is presented. We unify the dual processes of spatial facilitation and surround inhibition to extract salient contours from complex scenes, so that coherent spatial configurations and region boundaries can stand out from their surround. The proposed method selectively retains object contours while dramatically reducing non-meaningful elements arising from a textured background. This work gives a clear understanding of the roles of inhibition and facilitation in grouping, and provides a biologically motivated computational strategy for contour extraction in computer vision.
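The dual facilitation-inhibition mechanism described above can be sketched in code. The following is a minimal illustration, not the authors' exact model: oriented edge energies (e.g. Gabor responses) are boosted by an elongated, co-oriented surround (collinear facilitation) and suppressed by an isotropic surround average (texture inhibition); the kernel shapes and parameters are illustrative assumptions.

```python
# Minimal sketch of surround modulation for contour saliency (illustrative,
# not the authors' exact model): co-oriented facilitation plus isotropic
# surround inhibition applied to oriented edge energies.
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def oriented_kernel(theta, sigma_long=6.0, sigma_short=1.5, size=21):
    """Elongated Gaussian aligned with orientation theta (facilitatory surround)."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    yr = -xs * np.sin(theta) + ys * np.cos(theta)
    k = np.exp(-(xr**2 / (2 * sigma_long**2) + yr**2 / (2 * sigma_short**2)))
    return k / k.sum()

def contour_saliency(edge_energy, thetas, alpha=1.0, beta=1.5):
    """edge_energy: (n_orient, H, W) oriented edge responses (e.g. Gabor energies);
    thetas: the corresponding orientations in radians."""
    pooled = edge_energy.max(axis=0)               # orientation-pooled local response
    surround = gaussian_filter(pooled, sigma=8.0)  # isotropic inhibitory surround
    saliency = np.zeros_like(pooled)
    for e, theta in zip(edge_energy, thetas):
        facilitation = convolve(e, oriented_kernel(theta))   # collinear support
        modulated = e + alpha * facilitation - beta * surround
        saliency = np.maximum(saliency, modulated)
    return np.clip(saliency, 0, None)              # keep only excitatory responses
```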


Pattern Recognition Letters | 2011

Image segmentation via coherent clustering in L*a*b* color space

Rui Huang; Nong Sang; Dapeng Luo; Qiling Tang

Automatic image segmentation is a fundamental but challenging problem in computer vision. The simplest approach may be to first cluster the feature vectors of pixels and then label each pixel with its corresponding cluster. This requires the clustering in feature space to be robust. However, most popular clustering algorithms cannot produce robust results when the clusters in feature space have a complex distribution. Consequently, most clustering-based segmentation methods also exploit constraints on the positional relations between pixels in the image lattice during clustering. Our work in this paper addresses image segmentation under the pure clustering-then-labeling paradigm. We propose a robust clustering algorithm that maintains good coherence of the data in feature space and apply it to the L*a*b* color features of pixels. Image segmentation is then obtained directly by labeling each pixel with its corresponding cluster. Further, based on the Minimum Description Length principle, we propose an effective approach to automatic parameter selection for our segmentation method. We test the method on the Berkeley segmentation database, and the experimental results show that it compares favorably against several state-of-the-art segmentation methods.
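As a concrete illustration of the clustering-then-labeling pipeline (the paper's own coherence-preserving clustering algorithm is not reproduced here; k-means is used only as a stand-in), a minimal sketch:

```python
# Clustering-then-labeling segmentation in L*a*b* space. K-means stands in for
# the paper's coherence-preserving clustering algorithm purely for illustration.
import numpy as np
from skimage import color
from sklearn.cluster import KMeans

def segment_lab(image_rgb, n_clusters=5, random_state=0):
    lab = color.rgb2lab(image_rgb)                 # pixels as L*a*b* feature vectors
    features = lab.reshape(-1, 3)
    labels = KMeans(n_clusters=n_clusters,
                    random_state=random_state).fit_predict(features)
    return labels.reshape(image_rgb.shape[:2])     # label image = segmentation

# Example (hypothetical input file):
# from skimage import io
# segmentation = segment_lab(io.imread("example.jpg"))
```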


Pattern Recognition Letters | 2010

On selection and combination of weak learners in AdaBoost

Changxin Gao; Nong Sang; Qiling Tang

Despite its great success, two key problems remain unresolved for AdaBoost algorithms: how to select the most discriminative weak learners and how to combine them optimally. In this paper, a new AdaBoost algorithm is proposed to improve both aspects. First, we select the most discriminative weak learners by minimizing a novel distance-related criterion, the error-degree-weighted training error metric (ETEM) together with the generalization capability metric (GCM), rather than the training error rate alone. Second, starting from empirically set coefficients, we combine the weak learners optimally by tuning these coefficients with a kernel-based perceptron. Experiments on synthetic and real scene data sets show that our algorithm outperforms conventional AdaBoost.
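The following skeleton shows where the two proposed improvements would plug into standard AdaBoost; it is a hedged sketch, not the paper's algorithm. The stump selection below uses plain weighted error (the ETEM/GCM criterion would replace it), and the coefficients are the usual empirical ones (the paper further tunes them with a kernel-based perceptron).

```python
# AdaBoost skeleton with a pluggable weak-learner selection criterion.
import numpy as np

def select_stump(X, y, w):
    """Pick the best threshold stump under sample weights w (labels y in {-1, +1}).
    The weighted error below is where the ETEM/GCM criterion would slot in."""
    best_err, best_stump = np.inf, None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] > thr, 1, -1)
                err = np.sum(w * (pred != y))
                if err < best_err:
                    best_err, best_stump = err, (j, thr, sign)
    return best_err, best_stump

def adaboost(X, y, n_rounds=20):
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(n_rounds):
        err, (j, thr, sign) = select_stump(X, y, w)
        err = np.clip(err, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)          # empirical coefficient; the paper
        pred = sign * np.where(X[:, j] > thr, 1, -1)   # tunes these further with a
        w *= np.exp(-alpha * y * pred)                 # kernel-based perceptron
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in ensemble)
    return np.sign(score)
```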


International Conference on Pattern Recognition | 2010

Saliency Based on Multi-scale Ratio of Dissimilarity

Rui Huang; Nong Sang; Leyuan Liu; Qiling Tang

Many vision applications use saliency maps derived from input images to focus processing on salient regions. In this paper, we propose a simple and effective method to quantify the saliency of each pixel in an image. Specifically, we define the saliency of a pixel as a ratio, where the numerator is the number of dissimilar pixels in its center-surround neighborhood and the denominator is the total number of pixels in that neighborhood. The final saliency is obtained by combining these ratios of dissimilarity over multiple scales. The saliency map generated by our method not only has high resolution but also looks more reasonable. Finally, we apply our saliency map to extract salient regions in images, and compare the performance with several state-of-the-art methods on an established ground truth containing 1000 images.
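A minimal sketch of this ratio-of-dissimilarity saliency on a grayscale image follows; the window radii and the dissimilarity threshold are illustrative assumptions, not the paper's settings.

```python
# Per-pixel saliency as a multi-scale ratio of dissimilarity: at each scale,
# the fraction of center-surround pixels whose intensity differs from the
# center by more than a threshold, averaged over scales.
import numpy as np

def ratio_of_dissimilarity(gray, radius, tau=0.1):
    h, w = gray.shape
    padded = np.pad(gray, radius, mode="reflect")
    dissimilar = np.zeros_like(gray, dtype=float)
    total = (2 * radius + 1) ** 2                  # pixels in the center-surround window
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = padded[radius + dy:radius + dy + h,
                             radius + dx:radius + dx + w]
            dissimilar += np.abs(shifted - gray) > tau
    return dissimilar / total

def saliency_map(gray, radii=(4, 8, 16)):
    # combine the dissimilarity ratios over multiple scales (simple average)
    return np.mean([ratio_of_dissimilarity(gray, r) for r in radii], axis=0)
```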


International Conference on Image Processing | 2013

Learning to detect contours in natural images via biologically motivated schemes

Qiling Tang; Nong Sang; Haihua Liu

A model for detecting contours in natural images is presented by combining visual perceptual mechanisms with machine learning. Surround stimuli enhance the response to the central stimulus if they form a precise spatial configuration; on the other hand, surround inhibition reduces the responses to homogeneous elements. Facilitation and inhibition activities in the primary visual cortex (V1) are used, respectively, to enhance well-organized structures and to reduce the non-meaningful distractors arising from texture fields. We approach the integration of facilitatory and inhibitory cues as a supervised learning problem using a logistic regression model. Our experiments demonstrate that the model can dramatically reduce texture edges and spurious contours, and can, to some extent, recover ground-truth contours that would otherwise be missed by the detector.
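The supervised cue-integration step can be sketched as follows, assuming the facilitatory and inhibitory cue maps have already been computed per pixel (their computation is the paper's; only the logistic-regression integration is shown, using scikit-learn).

```python
# Supervised integration of edge, facilitation, and inhibition cues with
# logistic regression; the cue maps themselves are assumed to be given.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_contour_classifier(edge_maps, facilitation_maps, inhibition_maps, gt_contours):
    """Each argument is a list of HxW arrays over the training images;
    gt_contours holds binary ground-truth contour masks."""
    X = np.column_stack([
        np.concatenate([m.ravel() for m in edge_maps]),
        np.concatenate([m.ravel() for m in facilitation_maps]),
        np.concatenate([m.ravel() for m in inhibition_maps]),
    ])
    y = np.concatenate([g.ravel() for g in gt_contours]).astype(int)
    return LogisticRegression(max_iter=1000).fit(X, y)

def contour_probability(model, edge, facilitation, inhibition):
    X = np.column_stack([edge.ravel(), facilitation.ravel(), inhibition.ravel()])
    return model.predict_proba(X)[:, 1].reshape(edge.shape)
```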


International Symposium on Neural Networks | 2005

A neural network model for extraction of salient contours

Qiling Tang; Nong Sang; Tianxu Zhang

In this paper, we construct a neural network to model the mechanisms of visual perception for salient contours: under some stimulus conditions the response to a central visual stimulus is suppressed, while under other conditions it is enhanced. The proposed method, which distinguishes between contours and texture edges, can effectively eliminate surrounding texture while preserving smooth contours. In particular, when contours embedded in a cluttered background are degraded by surrounding disturbance, our approach can restore them better.


Optical Engineering | 2010

Cascade of hierarchical context and appearance for object detection

Changxin Gao; Nong Sang; Jun Gao; Qiling Tang

Conventional object detection and localization approaches spend extensive time processing sliding background windows that do not resemble the object at all. The global context of a subwindow helps alleviate this problem. In addition, many patch-based approaches fail to find patches at the correct locations, and local context can help resolve that. We propose an object-detection framework that is top-down and simple to implement. It combines global contextual features, local contextual features, and local appearance features in a coarse-to-fine cascade, which enables fast detection. These three features play different roles in the detection process, and the resulting rich representation makes detection robust and effective. The proposed approach shows satisfactory performance in both speed and accuracy.
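The coarse-to-fine cascade can be summarized in a few lines; the three scoring functions and thresholds below are placeholders for the paper's global-context, local-context, and appearance classifiers.

```python
# Coarse-to-fine cascade: cheap global-context scoring rejects most sliding
# windows before the local-context and appearance classifiers are evaluated.
from typing import Callable, Iterable, List, Tuple

Window = Tuple[int, int, int, int]  # (x, y, width, height)

def cascade_detect(windows: Iterable[Window],
                   global_score: Callable[[Window], float],
                   local_score: Callable[[Window], float],
                   appearance_score: Callable[[Window], float],
                   t_global: float, t_local: float, t_appearance: float) -> List[Window]:
    detections = []
    for win in windows:
        if global_score(win) < t_global:           # stage 1: global context (cheapest)
            continue
        if local_score(win) < t_local:             # stage 2: local context
            continue
        if appearance_score(win) >= t_appearance:  # stage 3: local appearance (costliest)
            detections.append(win)
    return detections
```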


Asian Conference on Computer Vision | 2006

Extraction of salient contours via excitatory-inhibitory interactions in the visual cortex

Qiling Tang; Nong Sang; Tianxu Zhang

In this paper we mimic a biological visual strategy to extract salient contours from complex scenes. Psychophysical and physiological studies show that the response to a stimulus within the receptive field is affected by surrounding stimuli: the response is suppressed significantly by similarly oriented stimuli in the surround, while this suppression is converted to strong facilitation when collinear stimuli are added to the surround. Exploiting this property of visual perception, we enhance salient contours and at the same time reduce the interference of extraneous elements. Our results show the feasibility of the proposed method.


Optical Engineering | 2009

Visual codebook construction for class-specific recognition

Jun Gao; Nong Sang; Changxin Gao; Qiling Tang; Jun Sang

Creating a visual codebook is an important problem in object recognition. Using a compact visual codebook can boost computational efficiency and reduce memory cost. A simple and effective method is proposed for visual feature codebook construction. On the basis of a feedforward hierarchical model, a robust local descriptor is proposed and an a priori statistical scheme is applied in the class-specific feature-learning stage. Experiments show that the proposed approach achieves reliable performance with a shorter codebook, and that incremental learning can be easily enabled.
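For orientation, a generic bag-of-visual-words codebook construction is sketched below; it is a common baseline, not the paper's feedforward hierarchical model or its a priori statistical scheme.

```python
# Baseline visual codebook: cluster local descriptors from one class's training
# images and use the cluster centers as codewords; encode an image as a
# hard-assignment histogram over the codebook.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(descriptors_per_image, codebook_size=200, random_state=0):
    """descriptors_per_image: list of (n_i, d) arrays of local descriptors."""
    stacked = np.vstack(descriptors_per_image)
    km = KMeans(n_clusters=codebook_size, random_state=random_state).fit(stacked)
    return km.cluster_centers_                     # the visual codewords

def encode(descriptors, codebook):
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    hist = np.bincount(d2.argmin(axis=1), minlength=len(codebook))
    return hist / max(hist.sum(), 1)
```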


International Workshop on Education Technology and Computer Science | 2009

Biologically Inspired Class-Specific Codebook Construction

Jun Gao; Changxin Gao; Nong Sang; Qiling Tang

Aiming at class-specific recognition tasks, a novel method is presented to improve the object recognition performance of a biologically inspired model by learning a class-specific feature codebook. In the original model, the feature codebook is shared across classes and the content proportion for each codeword type is set uniformly. We modify the codebook content proportions for different codeword types (feature vector sizes and filter scales) according to their discriminability. The test results demonstrate that codebooks built with the proposed modification achieve higher total-length efficiency.

Collaboration


Dive into Qiling Tang's collaborations.

Top Co-Authors

Nong Sang, Huazhong University of Science and Technology
Changxin Gao, Huazhong University of Science and Technology
Rui Huang, Huazhong University of Science and Technology
Tianxu Zhang, Huazhong University of Science and Technology
Dapeng Luo, China University of Geosciences
Jun Gao, Huazhong University of Science and Technology
Leyuan Liu, Central China Normal University
Jun Sang, Chongqing University
Lamei Zou, Huazhong University of Science and Technology
Zhiguo Cao, Huazhong University of Science and Technology