Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Qiyue Yin is active.

Publication


Featured research published by Qiyue Yin.


Neurocomputing | 2015

Multi-view clustering via pairwise sparse subspace representation

Qiyue Yin; Shu Wu; Ran He; Liang Wang

Multi-view clustering, which aims to cluster datasets with multiple sources of information, has a wide range of applications in the communities of data mining and pattern recognition. Generally, it makes use of the complementary information embedded in multiple views to improve clustering performance. Recent methods usually find a low-dimensional embedding of multi-view data, but often ignore some useful prior information that can be utilized to better discover the latent group structure of multi-view data. To alleviate this problem, a novel pairwise sparse subspace representation model for multi-view clustering is proposed in this paper. The objective function of our model mainly includes two parts. The first part aims to harness prior information to achieve a sparse representation of each high-dimensional data point with respect to other data points in the same view. The second part aims to maximize the correlation between the representations of different views. An alternating minimization method is provided as an efficient solution for the proposed multi-view clustering algorithm. A detailed theoretical analysis is also conducted to guarantee the convergence of the proposed method. Moreover, we show that the must-link and cannot-link constraints can be naturally integrated into the proposed model to obtain a link constrained multi-view clustering model. Extensive experiments on five real-world datasets demonstrate that the proposed model performs better than several state-of-the-art multi-view clustering methods.
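
The two-part objective and the alternating minimization lend themselves to a compact sketch. Below is a minimal NumPy illustration assuming two views, with a simple agreement penalty ||Z1 - Z2||_F^2 standing in for the paper's correlation term; all names (X1, X2, lam, beta) are illustrative, not from the paper.

```python
import numpy as np

def soft_threshold(A, t):
    """Entrywise soft-thresholding, the proximal operator of t * ||.||_1."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def pairwise_sparse_representation(X1, X2, lam=0.1, beta=1.0, n_iter=200):
    """Alternately update the per-view sparse codes Z1, Z2 with proximal
    gradient (ISTA) steps on the coupled objective."""
    n = X1.shape[1]
    Z1, Z2 = np.zeros((n, n)), np.zeros((n, n))
    # crude Lipschitz-based step size for the smooth part
    L = max(np.linalg.norm(X1, 2)**2, np.linalg.norm(X2, 2)**2) + beta
    step = 1.0 / (2 * L)
    for _ in range(n_iter):
        g1 = X1.T @ (X1 @ Z1 - X1) + beta * (Z1 - Z2)
        Z1 = soft_threshold(Z1 - step * g1, step * lam)
        np.fill_diagonal(Z1, 0.0)   # forbid trivial self-representation
        g2 = X2.T @ (X2 @ Z2 - X2) + beta * (Z2 - Z1)
        Z2 = soft_threshold(Z2 - step * g2, step * lam)
        np.fill_diagonal(Z2, 0.0)
    # symmetric affinity for downstream spectral clustering
    W = (np.abs(Z1) + np.abs(Z1.T) + np.abs(Z2) + np.abs(Z2.T)) / 2
    return Z1, Z2, W
```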


IEEE Transactions on Image Processing | 2015

Cross-Modal Subspace Learning via Pairwise Constraints

Ran He; Man Zhang; Liang Wang; Ye Ji; Qiyue Yin

In multimedia applications, the text and image components in a web document form a pairwise constraint that potentially indicates the same semantic concept. This paper studies cross-modal learning via the pairwise constraint and aims to find the common structure hidden in different modalities. We first propose a compound regularization framework to address the pairwise constraint, which can be used as a general platform for developing cross-modal algorithms. For unsupervised learning, we propose a multi-modal subspace clustering method to learn a common structure for different modalities. For supervised learning, to reduce the semantic gap and the outliers in pairwise constraints, we propose a cross-modal matching method based on compound ℓ21 regularization. Extensive experiments demonstrate the benefits of joint text and image modeling with semantically induced pairwise constraints, and they show that the proposed cross-modal methods can further reduce the semantic gap between different modalities and improve the clustering/matching accuracy.
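
The compound ℓ21 regularizer mentioned above penalizes the sum of row-wise ℓ2 norms, so entire rows of a projection matrix can be switched off jointly. The sketch below shows the norm and the diagonal reweighting matrix commonly used when minimizing it by iteratively reweighted least squares; this is the generic construction, not the paper's full matching algorithm.

```python
import numpy as np

def l21_norm(W):
    """Sum of the l2 norms of the rows of W: promotes row-sparsity, so
    whole (e.g. feature) rows are zeroed out across all columns at once."""
    return np.sum(np.linalg.norm(W, axis=1))

def l21_reweight(W, eps=1e-8):
    """Diagonal reweighting matrix for IRLS-style minimization of the
    l21 term: D_ii = 1 / (2 * ||w_i||_2), guarded against zero rows."""
    row_norms = np.linalg.norm(W, axis=1)
    return np.diag(1.0 / (2.0 * np.maximum(row_norms, eps)))
```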


IEEE Transactions on Image Processing | 2015

Robust Subspace Clustering With Complex Noise

Ran He; Yingya Zhang; Zhenan Sun; Qiyue Yin

Subspace clustering has important and wide applications in computer vision and pattern recognition. It is a challenging task to learn low-dimensional subspace structures due to the complex noise existing in high-dimensional data. Complex noise has a much more intricate statistical structure and is neither Gaussian nor Laplacian. Recent subspace clustering methods usually assume a sparse representation of the errors incurred by noise and correct these errors iteratively. However, large corruptions incurred by complex noise cannot be well addressed by these methods. A novel optimization model for robust subspace clustering is proposed in this paper. Its objective function mainly includes two parts. The first part aims to achieve a sparse representation of each high-dimensional data point with respect to other data points. The second part aims to maximize the correntropy between a given data point and its low-dimensional representation with other points. Correntropy is a robust measure, so the influence of large corruptions on subspace clustering can be greatly suppressed. An extension of pairwise link constraints is also proposed as prior information to deal with complex noise. Half-quadratic minimization is provided as an efficient solution to the proposed robust subspace clustering formulations. Experimental results on three commonly used datasets show that our method outperforms state-of-the-art subspace clustering methods.
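
Correntropy is simple to state: it is the mean of a Gaussian kernel evaluated on the residuals, so large errors contribute almost nothing. The sketch below shows the measure and the auxiliary weights that half-quadratic minimization derives from it, turning the robust problem into a sequence of weighted least-squares problems; sigma and the function names are illustrative.

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    """Sample correntropy between two vectors: the mean of a Gaussian
    kernel applied to their elementwise differences. Large residuals are
    smoothly ignored, which suppresses gross corruptions."""
    e = x - y
    return np.mean(np.exp(-e**2 / (2.0 * sigma**2)))

def half_quadratic_weights(e, sigma=1.0):
    """Auxiliary weights from the half-quadratic bound: entries with
    large residuals e receive weights near 0 in the next weighted
    least-squares subproblem."""
    return np.exp(-e**2 / (2.0 * sigma**2))
```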


Neurocomputing | 2018

Cross-modal subspace learning for fine-grained sketch-based image retrieval

Peng Xu; Qiyue Yin; Yongye Huang; Yi-Zhe Song; Zhanyu Ma; Liang Wang; Tao Xiang; W. Bastiaan Kleijn; Jun Guo

Sketch-based image retrieval (SBIR) is challenging due to the inherent domain gap between sketch and photo. Compared with pixel-perfect depictions in photos, sketches are highly abstract, iconic renderings of the real world. Therefore, matching sketch and photo directly using low-level visual cues is insufficient, since a common low-level subspace that traverses semantically across the two modalities is non-trivial to establish. Most existing SBIR studies do not directly tackle this cross-modal problem. This naturally motivates us to explore the effectiveness of cross-modal retrieval methods in SBIR, which have been applied successfully to image-text matching. In this paper, we introduce and compare a series of state-of-the-art cross-modal subspace learning methods and benchmark them on two recently released fine-grained SBIR datasets. Through thorough examination of the experimental results, we demonstrate that subspace learning can effectively model the sketch-photo domain gap. In addition, we draw a few key insights to drive future research.
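
As a concrete example of the family of methods benchmarked here, the snippet below runs CCA (via scikit-learn) on paired sketch and photo features and ranks the gallery by cosine similarity in the shared space. The feature matrices are random stand-ins, not the paper's experimental setup.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
sketch_feats = rng.standard_normal((500, 128))   # one row per sketch
photo_feats = rng.standard_normal((500, 256))    # row i depicts sketch i

# learn projections into a shared space where paired rows correlate
cca = CCA(n_components=32)
cca.fit(sketch_feats, photo_feats)
s_emb, p_emb = cca.transform(sketch_feats, photo_feats)

# retrieval: rank photos by cosine similarity to a query sketch embedding
q = s_emb[0] / np.linalg.norm(s_emb[0])
P = p_emb / np.linalg.norm(p_emb, axis=1, keepdims=True)
ranking = np.argsort(-P @ q)   # photo indices, best match first
```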


Conference on Information and Knowledge Management | 2015

Incomplete Multi-view Clustering via Subspace Learning

Qiyue Yin; Shu Wu; Liang Wang

Multi-view clustering, which explores complementary information between multiple distinct feature sets for better clustering, has a wide range of applications, e.g., knowledge management and information retrieval. Traditional multi-view clustering methods usually assume that all examples have complete feature sets. However, in real applications it is often the case that some examples lose some feature sets, which results in incomplete multi-view data and notable performance degeneration. In this paper, a novel incomplete multi-view clustering method is therefore developed, which learns unified latent representations and projection matrices for the incomplete multi-view data. To approximate the high-level scaled indicator matrix defined to represent the class label matrix, the latent representations are expected to be non-negative and column-orthogonal. Besides, since data often come with high-dimensional and noisy features, the projection matrices are enforced to be sparse so as to select relevant features when learning the latent space. Furthermore, the inter-view and intra-view data structure is preserved to further enhance the clustering performance. To these ends, an objective function is developed along with an efficient optimization strategy and a convergence analysis. Extensive experiments demonstrate that our model performs better than state-of-the-art multi-view clustering methods in various settings.
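
The key mechanical point, handling samples that are missing a view, can be sketched with per-view presence masks: only observed samples contribute to a view's reconstruction term, and an ℓ1 penalty on the projections mirrors the feature-selection term. This minimal NumPy sketch omits the paper's non-negativity and column-orthogonality constraints on the latent representation; all names are illustrative.

```python
import numpy as np

def masked_view_loss(X, U, P, mask):
    """Reconstruction term for one incomplete view. X: d x n raw features
    (missing columns may hold zeros), U: k x n unified latent
    representation, P: d x k projection, mask: length-n 0/1 vector
    marking which samples actually have this view."""
    R = X - P @ U                      # residual for every sample
    return np.sum((R * mask[None, :])**2)

def total_objective(views, U, projections, masks, gamma=0.01):
    """Sum of per-view masked losses plus a sparsity penalty on each
    projection matrix (the feature-selection term)."""
    loss = sum(masked_view_loss(X, U, P, m)
               for X, P, m in zip(views, projections, masks))
    return loss + gamma * sum(np.abs(P).sum() for P in projections)
```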


Pattern Recognition | 2017

Unified subspace learning for incomplete and unlabeled multi-view data

Qiyue Yin; Shu Wu; Liang Wang

Highlights:
- A class indicator matrix is learned for incomplete and unlabeled multi-view data.
- Preserving the inter-view and intra-view data similarity can improve performance.
- Running time is of the same order of magnitude as that of mainstream methods.
- Best results are obtained for incomplete multi-view clustering and cross-modal retrieval.

Multi-view data, with each view corresponding to a type of feature set, are common in the real world. Previous multi-view learning methods usually assume complete views. However, multi-view data are often incomplete, namely some samples have incomplete feature sets. Besides, most data are unlabeled due to the large cost of manual annotation, which makes learning from such data a challenging problem. In this paper, we propose a novel subspace learning framework for incomplete and unlabeled multi-view data. The model directly optimizes the class indicator matrix, which establishes a bridge between incomplete feature sets. Besides, feature selection is considered to deal with high-dimensional and noisy features. Furthermore, the inter-view and intra-view data similarities are preserved to enhance the model. To these ends, an objective is developed along with an efficient optimization strategy. Finally, extensive experiments are conducted for multi-view clustering and cross-modal retrieval, achieving state-of-the-art performance under various settings.
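
Similarity preservation of this kind is usually imposed through a graph-Laplacian term; the sketch below shows the standard form tr(F^T L F), which is small when samples deemed similar by a weight matrix W receive nearby rows of the indicator matrix F. This is the generic construction, not necessarily the paper's exact term.

```python
import numpy as np

def laplacian_reg(F, W):
    """Graph-regularization term tr(F^T L F) for an n x k indicator
    matrix F and an n x n symmetric similarity matrix W."""
    L = np.diag(W.sum(axis=1)) - W    # unnormalized graph Laplacian
    return np.trace(F.T @ L @ F)
```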


European Conference on Computer Vision | 2016

Instance-Level Coupled Subspace Learning for Fine-Grained Sketch-Based Image Retrieval

Peng Xu; Qiyue Yin; Yonggang Qi; Yi-Zhe Song; Zhanyu Ma; Liang Wang; Jun Guo

Fine-grained sketch-based image retrieval (FG-SBIR) is a newly emerged topic in computer vision. The problem is challenging because, in addition to bridging the sketch-photo domain gap, it also asks for instance-level discrimination within object categories. Most prior approaches focused on feature engineering and fine-grained ranking, yet neglected an important and central problem: how to establish a fine-grained cross-domain feature space in which to conduct retrieval. In this paper, for the first time we formulate a cross-domain framework specifically designed for the task of FG-SBIR that simultaneously conducts instance-level retrieval and attribute prediction. Different from conventional photo-text cross-domain frameworks that perform transfer on category-level data, our joint multi-view space uniquely learns from the instance-level pairwise annotations of sketch and photo. More specifically, we propose a joint view selection and attribute subspace learning algorithm to learn domain projection matrices for photo and sketch, respectively. It follows that visual attributes can be extracted from such matrices through projection to build a coupled semantic space in which to conduct retrieval. Experimental results on two recently released fine-grained photo-sketch datasets show that the proposed method is able to perform at a level close to that of deep models, while removing the need for extensive manual annotations.
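
Once the two domain projection matrices are learned, instance-level retrieval reduces to nearest-neighbour search in the coupled space; FG-SBIR results are typically reported as acc@K, the fraction of queries whose true photo appears in the top K returns. A minimal sketch assuming projections W_s and W_p are given; array names are illustrative.

```python
import numpy as np

def acc_at_k(sketches, photos, W_s, W_p, k=10):
    """sketches: n x ds, photos: n x dp; row i of each is a matched pair.
    Returns the fraction of queries whose paired photo ranks in the
    top k by cosine similarity in the coupled space."""
    S = sketches @ W_s                 # project into the coupled space
    P = photos @ W_p
    S /= np.linalg.norm(S, axis=1, keepdims=True)
    P /= np.linalg.norm(P, axis=1, keepdims=True)
    sims = S @ P.T                     # queries x gallery
    ranks = np.argsort(-sims, axis=1)  # best match first per query
    hits = [i in ranks[i, :k] for i in range(len(S))]
    return float(np.mean(hits))
```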


Conference on Information and Knowledge Management | 2015

Multi-view Clustering via Structured Low-rank Representation

Dong Wang; Qiyue Yin; Ran He; Liang Wang; Tieniu Tan

In this paper, we present a novel solution to multi-view clustering through a structured low-rank representation. When assuming similar samples can be linearly reconstructed by each other, the resulting representational matrix reflects the cluster structure and should ideally be block diagonal. We first impose a low-rank constraint on the representational matrix to encourage a better grouping effect. Then representational matrices under different views are allowed to communicate with each other and share their mutual cluster structure information. We develop an effective algorithm inspired by iteratively reweighted least squares for solving our formulation. During the optimization process, the intermediate representational matrix from one view serves as a cluster structure constraint for that from another view. Such mutual structural constraint fine-tunes the cluster structures from both views and brings them into increasing agreement. An extensive empirical study demonstrates the superiority and efficacy of the proposed method.
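
The low-rank constraint at the heart of this formulation is classically enforced with singular value thresholding, the proximal operator of the nuclear norm. The paper itself optimizes via iteratively reweighted least squares; the operator below is shown only as the canonical building block behind low-rank penalties.

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding: the proximal operator of
    tau * ||.||_* (nuclear norm). Shrinks singular values toward zero,
    yielding a low-rank approximation of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt
```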


International Conference on Image Processing | 2014

Semi-supervised subspace segmentation

Dong Wang; Qiyue Yin; Ran He; Liang Wang; Tieniu Tan

Subspace segmentation methods usually rely on the raw explicit feature vectors in an unsupervised manner. In many applications, it is cheap to obtain some pairwise link information that tells whether two data points are in the same subspace or not. Though only partially available, such link information serves as a kind of high-level semantics, which can be further used as a constraint to improve segmentation accuracy. By constructing a link matrix and using it as a regularizer, we propose a semi-supervised subspace segmentation model in which the partially observed subspace membership prior can be encoded. Specifically, under the common linear representation assumption, we enforce the representational coefficients to be consistent with the link matrix. Thus the low-level and high-level information about the data can be integrated to produce more precise segmentation results. We then develop an effective algorithm to optimize our model in an alternating minimization way. Experimental results for both motion segmentation and face clustering validate that incorporating such link information helps guide and bias the unsupervised subspace segmentation methods.
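
The pairwise link information can be encoded as a sparse, symmetric matrix with +1 for must-link pairs, -1 for cannot-link pairs, and 0 where nothing is known; this is the object the regularizer couples to the representation coefficients. A minimal construction sketch (the function name is illustrative):

```python
import numpy as np

def build_link_matrix(n, must_link, cannot_link):
    """Encode partial pairwise supervision: +1 for pairs known to lie in
    the same subspace, -1 for pairs known not to, 0 elsewhere.
    Symmetric by construction."""
    M = np.zeros((n, n))
    for i, j in must_link:
        M[i, j] = M[j, i] = 1.0
    for i, j in cannot_link:
        M[i, j] = M[j, i] = -1.0
    return M
```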


CCF Chinese Conference on Computer Vision | 2017

Learning Shared and Specific Factors for Multi-modal Data

Qiyue Yin; Yan Huang; Shu Wu; Liang Wang

In the real world, it is common for an entity to be represented by multiple modalities, which motivates multi-modal learning, e.g., multi-modal clustering and cross-modal retrieval. Traditional methods based on deep neural networks usually learn either a single joint factor or multiple similar factors. However, different modalities representing the same content share both common and modality-specific characteristics, and few approaches can fully discover both kinds of features, i.e., consistency and complementarity. In this paper, we propose to learn shared and specific factors for each modality. The consistency can then be explored through the shared factors, and by combining the shared and specific factors, the complementarity can be exploited. Finally, a triadic autoencoder with a deep architecture is developed for learning the shared and specific factors. Extensive experiments are conducted for cross-modal retrieval and multi-modal clustering, which clearly demonstrate the effectiveness of our model.
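
A minimal PyTorch sketch of the shared/specific split: each modality gets one encoder for the shared factor and one for its modality-specific factor, and the decoder reconstructs from their concatenation. Layer sizes, loss weights, and the wiring below are illustrative assumptions; the paper's exact triadic architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class SharedSpecificAE(nn.Module):
    """One autoencoder per modality: two encoders split the code into a
    shared factor and a modality-specific factor."""
    def __init__(self, dim_x, dim_shared=64, dim_specific=64):
        super().__init__()
        self.enc_shared = nn.Sequential(nn.Linear(dim_x, 256), nn.ReLU(),
                                        nn.Linear(256, dim_shared))
        self.enc_specific = nn.Sequential(nn.Linear(dim_x, 256), nn.ReLU(),
                                          nn.Linear(256, dim_specific))
        self.dec = nn.Sequential(nn.Linear(dim_shared + dim_specific, 256),
                                 nn.ReLU(), nn.Linear(256, dim_x))

    def forward(self, x):
        s, p = self.enc_shared(x), self.enc_specific(x)
        return self.dec(torch.cat([s, p], dim=1)), s, p

# Consistency is encouraged by pulling the shared codes of paired samples
# from two modalities together, e.g.:
#   loss = recon_a + recon_b + lam * mse(shared_a, shared_b)
```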

Collaboration


Dive into Qiyue Yin's collaborations.

Top Co-Authors

Liang Wang (Chinese Academy of Sciences)
Shu Wu (Chinese Academy of Sciences)
Ran He (Chinese Academy of Sciences)
Dong Wang (Chinese Academy of Sciences)
Jun Guo (Beijing University of Posts and Telecommunications)
Man Zhang (Chinese Academy of Sciences)
Peng Xu (Beijing University of Posts and Telecommunications)
Tieniu Tan (Chinese Academy of Sciences)
Ye Ji (Shandong University)
Zhanyu Ma (Beijing University of Posts and Telecommunications)