
Publication


Featured research published by Songsong Wu.


Applied Soft Computing | 2015

Bayesian sample steered discriminative regression for biometric image classification

Guangwei Gao; Jian Yang; Songsong Wu; Xiao-Yuan Jing; Dong Yue

Figure: flow chart of the proposed BSDR algorithm. Using the method described in Section 3.2, the target response vector for each class k is computed, and the linear mapping function w*_k for class k is then obtained. With all the w*_k computed, the linear mapping matrix is formed as W = [w*_1, ..., w*_K]. For a test sample y, its labeling feature is extracted as r_y = W^T y, and classification is done by a nearest neighbor classifier (NNC) with cosine distance.

Highlights: A novel image feature extraction method, Bayesian sample steered discriminative regression (BSDR), is proposed. BSDR uses both the sample class label and the sample appearance for mapping function learning. Experimental results demonstrate the effectiveness of the proposed method.

Regression techniques, such as ridge regression (RR) and logistic regression (LR), have been widely used in supervised learning for pattern classification. However, these methods mainly exploit the class label information for linear mapping function learning, and they become less effective when the number of training samples per class is small. In visual classification tasks such as face recognition, the appearance of the training sample images also conveys important discriminative information. This paper proposes a novel regression based classification model, namely Bayesian sample steered discriminative regression (BSDR), which simultaneously exploits the sample class label and the sample appearance for linear mapping function learning by virtue of the Bayesian formula. BSDR learns a linear mapping for each class to extract the image class label features, and classification can be done simply by a nearest neighbor classifier. The proposed BSDR method has advantages such as a small number of mappings, insensitivity to the input feature dimensionality and robustness to small sample sizes. Extensive experiments on several biometric databases also demonstrate the promising classification performance of our method.
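The classification step described above (extract r_y = W^T y, then a nearest neighbor classifier with cosine distance) can be sketched in a few lines of NumPy. The mapping matrix W and the gallery samples below are random stand-ins; the Bayesian learning of W itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, n = 20, 3, 12

# Random stand-ins: in BSDR, W would be learned from the training data.
W = rng.standard_normal((d, K))
X_train = rng.standard_normal((d, n))      # training samples as columns
labels = np.repeat(np.arange(K), n // K)   # 4 samples per class
R_train = W.T @ X_train                    # labeling features of the gallery

def classify(y):
    """Extract the labeling feature r_y = W^T y, then assign the label of
    the nearest gallery feature under cosine distance."""
    r = W.T @ y
    sims = (R_train.T @ r) / (
        np.linalg.norm(R_train, axis=0) * np.linalg.norm(r) + 1e-12)
    return int(labels[np.argmax(sims)])    # max similarity = min cosine distance

pred = classify(X_train[:, 0])             # a gallery sample matches itself
```

Note the low cost of this stage: with K mappings, each test sample needs only one K-dimensional feature extraction plus a nearest neighbor search, regardless of the input dimensionality d.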


IEEE Access | 2016

Locality-Constrained Double Low-Rank Representation for Effective Face Hallucination

Guangwei Gao; Xiao-Yuan Jing; Pu Huang; Quan Zhou; Songsong Wu; Dong Yue

Recently, position-patch-based face hallucination methods have received much attention and made promising progress due to their effectiveness and efficiency. A locality-constrained double low-rank representation (LCDLRR) method is proposed for effective face hallucination in this paper. LCDLRR attempts to directly use the image-matrix based regression model to compute the representation coefficients, so as to maintain the essential structural information. On the other hand, LCDLRR imposes a low-rank constraint on the representation coefficients to adaptively select the training samples that belong to the same subspace as the inputs. Moreover, a locality constraint is also enforced to preserve locality and sparsity simultaneously. Compared with previous methods, our proposed LCDLRR considers the locality manifold structure, cluster constraints, and structure error simultaneously. Extensive experimental results on standard face hallucination databases indicate that our proposed method outperforms some state-of-the-art algorithms in terms of both visual quality and objective metrics.
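Low-rank constraints of the kind LCDLRR places on its representation coefficients are commonly optimized with singular value thresholding (SVT), the proximal operator of the nuclear norm. The sketch below shows that standard building block in isolation; it is not LCDLRR's full solver.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm.
    Each singular value is shrunk by tau; values below tau are zeroed."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 8))  # rank-2 matrix
A_noisy = A + 0.01 * rng.standard_normal((8, 8))               # full rank after noise
A_lowrank = svt(A_noisy, tau=0.5)   # small noise singular values are zeroed
rank = np.linalg.matrix_rank(A_lowrank, tol=1e-8)
```

Applied inside an iterative solver, this step is what lets the coefficients adaptively concentrate on training samples from the same subspace as the input.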


CCF Chinese Conference on Computer Vision | 2017

Deep Context Convolutional Neural Networks for Semantic Segmentation

Wenbin Yang; Quan Zhou; Yawen Fan; Guangwei Gao; Songsong Wu; Weihua Ou; Huimin Lu; Jie Cheng; Longin Jan Latecki

Recent years have witnessed great progress in semantic segmentation using deep convolutional neural networks (DCNNs). This paper presents a novel fully convolutional network for semantic segmentation using multi-scale contextual convolutional features. Since objects in natural images tend to appear at various scales and aspect ratios, capturing rich contextual information is critical for dense pixel prediction. On the other hand, as the convolutional layers go deeper, the feature maps of traditional DCNNs gradually become coarser, which may be harmful for semantic segmentation. Based on these observations, we design a deep context convolutional network (DCCNet), which combines the feature maps from different levels of the network in a holistic manner for semantic segmentation. The proposed network allows us to fully exploit local and global contextual information, ranging from an entire scene, through sub-regions, to every single pixel, to perform pixel label estimation. The experimental results demonstrate that our DCCNet (without any post-processing) outperforms state-of-the-art methods on PASCAL VOC 2012, the most popular and challenging semantic segmentation dataset.
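The core idea of combining levels, aligning coarse deep feature maps with fine shallow ones before fusing, can be sketched with plain arrays. The shapes, the nearest-neighbour upsampling, and the summation below are illustrative assumptions; DCCNet itself combines levels with learned convolutional layers.

```python
import numpy as np

def upsample_nn(fmap, factor):
    """Nearest-neighbour upsampling of an (H, W, C) feature map."""
    return np.kron(fmap, np.ones((factor, factor, 1)))

rng = np.random.default_rng(0)
fine = rng.standard_normal((16, 16, 8))    # shallow layer: high resolution
coarse = rng.standard_normal((4, 4, 8))    # deep layer: coarse but semantic

# Bring the coarse map to the fine resolution, then fuse the two levels.
fused = fine + upsample_nn(coarse, 4)
```

The fused map keeps the fine map's spatial resolution while every position also carries the deeper layer's context, which is what makes dense pixel prediction benefit from both.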


Pattern Recognition | 2016

Uncorrelated multi-set feature learning for color face recognition

Fei Wu; Xiao-Yuan Jing; Xiwei Dong; Qi Ge; Songsong Wu; Qian Liu; Dong Yue; Jingyu Yang

Most existing color face feature extraction methods need to perform a color space transformation, and they reduce the correlation of color components at the data level, which has no direct connection with classification. Some methods extract features from the R, G and B components serially with orthogonal constraints at the feature level, yet the serial extraction manner can make the discriminabilities of features derived from the three components distinctly different. Multi-set feature learning can jointly and effectively learn features from multiple sets of data. In this paper, we propose two novel color face recognition approaches, namely multi-set statistical uncorrelated projection analysis (MSUPA) and multi-set discriminating uncorrelated projection analysis (MDUPA), which extract discriminant features from the three color components together while simultaneously reducing, in a multi-set manner, the global statistical and the global discriminating feature-level correlation between color components, respectively. Experiments on multiple public color face databases demonstrate that the proposed approaches outperform several related state-of-the-art methods.

Highlights: We propose a multi-set statistical uncorrelated projection analysis approach. We define a supervised correlation, the discriminating correlation. We propose a multi-set discriminating uncorrelated projection analysis approach. The performance of our approaches is demonstrated on multiple public color face databases.
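The quantity these approaches constrain, the feature-level correlation between color components, can be made concrete. The sketch below computes the cross-correlation matrix between two feature sets; the inputs are random stand-ins for projected R- and G-component features, and MSUPA/MDUPA would learn projections that drive these coefficients toward zero.

```python
import numpy as np

def cross_correlation(X, Y):
    """Feature-level cross-correlation matrix between two feature sets
    (rows are samples, columns are feature dimensions)."""
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    C = Xc.T @ Yc / (len(X) - 1)                     # cross-covariance
    return C / np.outer(Xc.std(axis=0, ddof=1),      # normalise to
                        Yc.std(axis=0, ddof=1))      # correlation coefficients

rng = np.random.default_rng(0)
R_feat = rng.standard_normal((100, 5))   # stand-in: projected R-component features
G_feat = rng.standard_normal((100, 5))   # stand-in: projected G-component features
corr = cross_correlation(R_feat, G_feat)
```

Reducing this matrix at the feature level, rather than decorrelating raw color data, is what ties the decorrelation directly to classification.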


International Conference on Multimedia and Expo | 2016

Unsupervised visual domain adaptation via dictionary evolution

Songsong Wu; Xiao-Yuan Jing; Dong Yue; Jian Zhang; Jian Yang; Jingyu Yang

In real-world visual applications, distribution mismatch between samples from different domains may significantly degrade classification performance. To improve the generalization capability of classifiers across domains, domain adaptation has attracted a lot of interest in computer vision. This work focuses on unsupervised domain adaptation, which remains challenging because no labels are available in the target domain. Most attention has been dedicated to seeking domain-invariant features by exploring the shared structure between domains, ignoring the valuable discriminative information contained in the labeled source data. In this paper, we propose a Dictionary Evolution (DE) approach to construct discriminative features robust to domain shift. Specifically, DE aims to adapt a discriminative dictionary learnt from labeled source samples to unlabeled target samples through a gradual transition process. We show that the learnt dictionary is endowed with cross-domain data representation ability and powerful discriminant capability. Empirical results on real-world data sets demonstrate the advantages of the proposed approach over competing methods.
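The gradual source-to-target transition at the heart of DE can be illustrated with a toy loop. Everything below is an assumption for illustration only: ridge coding stands in for sparse coding, the transition is a simple interpolated sample weighting, and the discriminative (label-driven) term of the actual method is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_atoms = 15, 8
X_src = rng.standard_normal((d, 40))   # labeled source samples (labels unused here)
X_tgt = rng.standard_normal((d, 40))   # unlabeled target samples

def normalize_cols(D):
    return D / (np.linalg.norm(D, axis=0, keepdims=True) + 1e-12)

D = normalize_cols(rng.standard_normal((d, n_atoms)))
lam = 0.1

for t in np.linspace(0.0, 1.0, 6):
    # Gradual transition: data weight shifts from source (t=0) to target (t=1).
    X = np.hstack([np.sqrt(1 - t) * X_src, np.sqrt(t) * X_tgt])
    # Ridge coding as a simple stand-in for sparse coding.
    A = np.linalg.solve(D.T @ D + lam * np.eye(n_atoms), D.T @ X)
    # Least-squares dictionary update, followed by atom renormalisation.
    D = normalize_cols(X @ A.T @ np.linalg.pinv(A @ A.T))
```

The point of the schedule is that the dictionary never has to fit the unlabeled target data cold: each step adapts a dictionary that was already good for a nearby mixture.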


Knowledge-Based Systems | 2018

“Like charges repulsion and opposite charges attraction” law based multilinear subspace analysis for face recognition

Fei Wu; Xiao-Yuan Jing; Songsong Wu; Guangwei Gao; Qi Ge; Ruchuan Wang

Multiple image variations occur in natural face images, such as changes of pose, illumination, occlusion and expression. For face recognition under non-specific variations, learning effective features is an important research topic. Subspace learning is a widely used face recognition technique; however, numerous subspace analysis methods do not fully utilize the prior information of facial variations. Tensor-based multilinear subspace analysis methods can take advantage of this prior information, but they need further improvement. With respect to a single facial variation, we observe that image samples belonging to the same variation-state but different classes tend to cluster together, whereas those belonging to different variation-states but the same class tend to remain separate. This is adverse to classification. In this paper, motivated by the law of charges, "like charges repulsion and opposite charges attraction", in which like and opposite charges are regarded as same and different variation-states, respectively, we propose a non-specific variations based discriminant analysis (NVDA) criterion. It searches for an optimal discriminant subspace in which samples belonging to the same variation-state but different classes are separable, whereas those belonging to different variation-states but the same class cluster together. We then propose a novel face recognition approach called non-specific variations based multi-subspace analysis (NVMSA), which serially applies the NVDA criterion to learn multiple discriminant subspaces corresponding to different variations. In the proposed approach, we design a strategy to select the serial calculation order of variations and provide a rule to choose projection vectors with favorable discriminant capabilities. Furthermore, we formulate locally statistical orthogonal constraints for the multiple-subspace learning to remove the local correlation of discriminant features obtained from multiple variations. Experiments on the AR, Weizmann, PIE and LFW databases demonstrate the effectiveness and efficiency of the proposed approach.
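The NVDA criterion admits a Fisher-style reading: accumulate one scatter matrix over pairs that share a variation-state but differ in class (to be pushed apart) and one over pairs that share a class but differ in state (to be pulled together), then seek directions maximising the former relative to the latter. The sketch below is that reading on random data; the paper's exact formulation, ordering strategy, and orthogonality constraints are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 60, 10
X = rng.standard_normal((n, d))
cls = rng.integers(0, 3, n)      # class labels
state = rng.integers(0, 2, n)    # variation-state labels (e.g. lit / shadowed)

S_sep = np.zeros((d, d))   # same state, different class: push apart
S_clu = np.zeros((d, d))   # different state, same class: pull together
for i in range(n):
    for j in range(i + 1, n):
        outer = np.outer(X[i] - X[j], X[i] - X[j])
        if state[i] == state[j] and cls[i] != cls[j]:
            S_sep += outer
        elif state[i] != state[j] and cls[i] == cls[j]:
            S_clu += outer

# Directions maximising separation scatter relative to clustering scatter.
evals, evecs = np.linalg.eig(np.linalg.solve(S_clu + 1e-6 * np.eye(d), S_sep))
order = np.argsort(-evals.real)
V = evecs[:, order[:3]].real     # top three discriminant directions
```

Projecting onto V reverses the adverse pattern observed above: in the learned subspace, samples of the same variation-state but different classes separate, and samples of the same class cluster across states.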


Neural Computing and Applications | 2017

Large-scale image recognition based on parallel kernel supervised and semi-supervised subspace learning

Fei Wu; Xiao-Yuan Jing; Qian Liu; Songsong Wu; Guo-Liang He

Kernel discriminant subspace learning is effective at exploiting the structure of an image dataset in a high-dimensional nonlinear space. However, for large-scale image recognition applications, this technique usually suffers from a large computational burden. Although some kernel accelerating methods have been presented, greatly reducing computing time while keeping favorable recognition accuracy remains challenging. In this paper, we introduce the idea of parallel computing into kernel subspace learning and build a parallel kernel discriminant subspace learning framework. In this framework, we first design a random non-overlapping equal data division strategy to divide the whole training set into several subsets and assign each computational node a subset. Then, we separately learn kernel discriminant subspaces from these subsets without mutual communication and finally select the most appropriate subspace to classify test samples. Under this framework, we propose two novel kernel subspace learning approaches, i.e., parallel kernel discriminant analysis (PKDA) and parallel kernel semi-supervised discriminant analysis (PKSDA). We show the superiority of the proposed approaches in terms of time complexity compared with related methods, and provide fundamental theoretical support for our framework. For the experiments, we establish a parallel computing environment and employ three public large-scale image databases as experimental data. Experimental results demonstrate the efficiency and effectiveness of the proposed approaches.
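The random non-overlapping equal data division strategy is straightforward to sketch. How remainder samples are handled when the set does not divide evenly is not specified above, so dropping them here is an illustrative assumption.

```python
import numpy as np

def divide(n_samples, n_nodes, seed=0):
    """Randomly split sample indices into n_nodes equal, non-overlapping
    subsets, one per computational node. Remainder samples are dropped
    (an illustrative convention)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    per_node = n_samples // n_nodes
    return [idx[k * per_node:(k + 1) * per_node] for k in range(n_nodes)]

subsets = divide(1000, 4)   # four nodes, 250 samples each, no overlap
```

Because the subsets are disjoint and each node works independently, the n^3-ish cost of a kernel eigen-decomposition drops to (n/m)^3 per node for m nodes, with no inter-node communication during training.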


International Conference on Image Processing | 2016

Locality-constrained matrix regression for position-patch based face hallucination

Guangwei Gao; Xiao-Yuan Jing; Quan Zhou; Songsong Wu; Dong Yue

Position-patch based face hallucination approaches have recently been proposed to replace probabilistic graph-based or manifold learning-based models. In this paper, we propose a novel position-patch based face hallucination method based on locality-constrained matrix regression (LcMR). LcMR uses the nuclear norm to characterize the reconstruction error directly, thus preserving the essential structural information of the input. On the other hand, LcMR imposes a locality constraint on the combination coefficients to achieve sparsity and locality simultaneously, and the locality constraint admits an analytical solution to its optimization problem. Moreover, LcMR can be solved using the alternating direction method of multipliers. Experimental results demonstrate the superiority of the proposed method over some state-of-the-art approaches.
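LcMR's nuclear-norm matrix regression and its ADMM solver are not reproduced here, but the kind of analytical solution a locality-constrained least-squares term admits can be sketched. The exponential locality adaptor below is an illustrative choice, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n = 20, 15
X = rng.standard_normal((dim, n))   # training position-patches as columns
y = rng.standard_normal(dim)        # input patch to reconstruct

lam, sigma = 0.1, 1.0
# Locality adaptor: far-away training patches are penalised more heavily.
dist = np.linalg.norm(X - y[:, None], axis=0)
d_loc = np.exp(dist / sigma)

# Closed-form minimiser of ||y - X c||^2 + lam * sum_i d_i * c_i^2.
c = np.linalg.solve(X.T @ X + lam * np.diag(d_loc), X.T @ y)
obj = np.linalg.norm(y - X @ c) ** 2 + lam * np.sum(d_loc * c ** 2)
```

Because the penalty is a weighted ridge term, the solution is one linear solve per patch; the weights push coefficients of distant patches toward zero, giving the locality and sparsity the abstract describes.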


International Conference on Image Processing | 2014

Learning image manifold using neighboring similarity integration

Songsong Wu; Xiao-Yuan Jing; Jian Yang; Jingyu Yang

The image manifold perspective and its associated manifold learning methods have demonstrated promising results in finding the underlying structure of images in a high-dimensional space. Conventional manifold learning methods construct the similarity relationship of an image set based only on the pairwise Euclidean distance between images, so they may obtain deceptive similarities and suffer performance degradation. In this paper, we present a Neighboring Similarity Integration (NSI) algorithm to explore the image manifold under a probability preserving principle. NSI is based on the neighboring similarity of image samples and the local structures of the image manifold, and it can increase the estimation accuracy of similarity and enhance the learning ability for the image manifold. Experimental results on the image visualization problem on the Yale and MNIST databases demonstrate the effectiveness of the proposed method.
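The conventional construction criticised above, similarity from pairwise Euclidean distance alone, and one illustrative way to integrate neighboring similarities are sketched below. The refinement rule shown is an assumption for illustration, not NSI's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))     # 50 images in a 10-D feature space

# Conventional construction: pairwise Euclidean distance -> Gaussian similarity.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
sigma = np.median(D)
S = np.exp(-(D ** 2) / (2 * sigma ** 2))

# Illustrative neighboring integration: two samples count as similar when
# they induce similar transition probabilities over the whole set, so a
# deceptive direct distance is smoothed by shared neighborhood structure.
P = S / S.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
S_refined = P @ P.T
```

The refined matrix stays symmetric and non-negative, so it can feed any downstream manifold learning step in place of the raw pairwise similarities.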


Computers & Electrical Engineering | 2018

Robust low-resolution face recognition via low-rank representation and locality-constrained regression

Guangwei Gao; Zangyi Hu; Pu Huang; Meng Yang; Quan Zhou; Songsong Wu; Dong Yue

Recognizing face images in low-resolution (LR) scenarios is more challenging than in high-resolution (HR) scenarios because LR images usually lack discriminative details. Previous methods ignore the possibility of occlusions in the LR probe images. To alleviate this problem, we propose a low-rank representation and locality-constrained regression (LLRLCR) based method in this paper to learn occlusion-robust representation features for the final face recognition task. For the HR gallery set, LLRLCR uses double low-rank representation to reveal the underlying holistic data structures; for the LR probe, LLRLCR uses locality-constrained matrix regression to keep the structural information of the regression errors and to learn robust and discriminative representation features. The proposed method allows us to fully exploit the structure information in the gallery and probe data simultaneously. Finally, after obtaining the occlusion-robust features, the face labels can be predicted via a simple yet powerful sparse representation based classifier engine. Experiments on standard face databases indicate that the proposed method obtains more promising recognition performance than some state-of-the-art LR face recognition approaches.
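The final stage, a representation-based classifier that predicts the class whose gallery reconstructs the probe with the smallest residual, can be sketched as follows. Ridge-regularised coding stands in for the sparse coding engine, and the features are random stand-ins rather than actual occlusion-robust LLRLCR features.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, K, per = 30, 4, 6
# Gallery features grouped by class (random stand-ins).
G = [rng.standard_normal((dim, per)) for _ in range(K)]

def classify(y, lam=0.01):
    """Code y over each class's gallery (ridge least squares as a simple
    stand-in for sparse coding) and predict the class whose gallery
    reconstructs y with the smallest residual."""
    residuals = []
    for Gk in G:
        c = np.linalg.solve(Gk.T @ Gk + lam * np.eye(per), Gk.T @ y)
        residuals.append(np.linalg.norm(y - Gk @ c))
    return int(np.argmin(residuals))

# A probe lying in class 2's subspace is reconstructed best by class 2.
y = G[2] @ rng.standard_normal(per)
pred = classify(y)
```

The per-class residual rule is what makes the engine robust to weak individual features: a probe only needs its own class's gallery to span it better than the others do.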

Collaboration


Top co-authors of Songsong Wu:

Guangwei Gao (Nanjing University of Posts and Telecommunications)
Dong Yue (Nanjing University of Posts and Telecommunications)
Jingyu Yang (Nanjing University of Science and Technology)
Fei Wu (Nanjing University of Posts and Telecommunications)
Quan Zhou (Nanjing University of Posts and Telecommunications)
Pu Huang (Nanjing University of Posts and Telecommunications)
Qi Ge (Nanjing University of Posts and Telecommunications)
Qian Liu (Nanjing University of Information Science and Technology)
Jian Yang (Nanjing University of Science and Technology)