Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Quanming Yao is active.

Publication


Featured research published by Quanming Yao.


International Conference on Data Mining | 2015

Fast Low-Rank Matrix Learning with Nonconvex Regularization

Quanming Yao; James Tin-Yau Kwok; Wenliang Zhong

Low-rank modeling has many important applications in machine learning, computer vision, and social network analysis. While the matrix rank is often approximated by the convex nuclear norm, nonconvex low-rank regularizers have demonstrated better recovery performance. However, the resulting optimization problem is much more challenging. A very recent state-of-the-art method is based on the proximal gradient algorithm, but it requires an expensive full SVD in each proximal step. In this paper, we show that for many commonly used nonconvex low-rank regularizers, a cutoff can be derived to automatically threshold the singular values obtained from the proximal operator. This allows the power method to be used to approximate the SVD efficiently. In addition, the proximal operator can be reduced to that of a much smaller matrix projected onto this leading subspace. Convergence, with a rate of O(1/T) where T is the number of iterations, can be guaranteed. Extensive experiments are performed on matrix completion and robust principal component analysis. The proposed method achieves significant speedups over the state-of-the-art. Moreover, the matrix solution obtained is more accurate and has a lower rank than that produced with the traditional nuclear norm regularizer.
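The cutoff idea can be illustrated on the convex special case, the nuclear norm, whose proximal operator soft-thresholds singular values: anything at or below the threshold maps to zero, so only the leading singular triplets are needed, and these can be obtained with a block power method instead of a full SVD. A minimal sketch, not the paper's algorithm; `approx_top_svd`, `prox_soft_threshold`, and the rank parameter `k` are illustrative assumptions:

```python
import numpy as np

def approx_top_svd(A, k, n_iter=50, seed=0):
    """Approximate the top-k singular triplets of A via block power iteration."""
    rng = np.random.default_rng(seed)
    Q = rng.standard_normal((A.shape[1], k))
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(A.T @ (A @ Q))   # iterate toward the leading row subspace
    B = A @ Q                                # small m x k matrix
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U, s, Vt @ Q.T

def prox_soft_threshold(A, lam, k):
    """Nuclear-norm proximal step: soft-threshold the singular values.
    Values <= lam are zeroed anyway, so only the top-k (with k at least the
    number of singular values above lam) need to be computed."""
    U, s, Vt = approx_top_svd(A, k)
    s = np.maximum(s - lam, 0.0)
    return (U * s) @ Vt
```

For a nonconvex regularizer, only the thresholding rule on `s` changes; the point of the paper is that a similar cutoff still exists, so the same truncated-SVD trick applies.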


Visual Informatics | 2017

VISTopic: A visual analytics system for making sense of large document collections using hierarchical topic modeling

Yi Yang; Quanming Yao; Huamin Qu

Effective analysis of large text collections remains a challenging problem given the growing volume of available text data. Recently, text mining techniques have been rapidly developed for automatically extracting key information from massive text data. Topic modeling, as one of the novel techniques that extracts a thematic structure from documents, is widely used to generate text summarization and foster an overall understanding of the corpus content. Although powerful, this technique may not be directly applicable to general analytics scenarios, since the topics and topic–document relationships are often represented probabilistically in models. Moreover, information that plays an important role in knowledge discovery, for example, times and authors, is hardly reflected in topic modeling for comprehensive analysis. In this paper, we address this issue by presenting a visual analytics system, VISTopic, to help users make sense of large document collections based on topic modeling. VISTopic first extracts a set of hierarchical topics using a novel hierarchical latent tree model (HLTM) (Liu et al., 2014). Specifically, a topic view accounting for the model features is designed for overall understanding and interactive exploration of the topic organization. To leverage multi-perspective information for visual analytics, VISTopic further provides an evolution view to reveal the trend of topics and a document view to show details of topical documents. Three case studies based on the dataset of the IEEE VIS conference demonstrate the effectiveness of our system in gaining insights from large document collections.


International Symposium on Neural Networks | 2017

Zero-shot learning with a partial set of observed attributes

Yaqing Wang; James Tin-Yau Kwok; Quanming Yao; Lionel M. Ni

Attributes are human-annotated semantic descriptions of label classes. In zero-shot learning (ZSL), they are often used to construct a semantic embedding for knowledge transfer from known classes to new classes. While collecting all attributes for the new classes can be expensive, a subset of these attributes is often easy to acquire. In this paper, we extend ZSL methods to handle this partial set of observed attributes. We first recover the missing attributes through structured matrix completion. We use the low-rank assumption, and leverage properties of the attributes by extracting their rich semantic information from external sources. The resulting optimization problem can be efficiently solved with alternating minimization, in which each subproblem has a simple closed-form solution. The predicted attributes can then be used as semantic embeddings in ZSL. Experimental results show that the proposed method outperforms existing methods in recovering the structured missing matrix. Moreover, methods using our predicted attributes in ZSL outperform methods using either the partial set of observed attributes or other semantic embeddings.
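The alternating-minimization scheme described above, where each subproblem has a closed-form solution, can be sketched for plain low-rank matrix completion. This omits the paper's structured constraints and external semantic information; `complete_matrix` and its parameters are illustrative assumptions:

```python
import numpy as np

def complete_matrix(M, mask, rank, n_iter=100, reg=1e-3):
    """Recover a low-rank matrix from observed entries (mask == True) by
    alternating least squares: with V fixed, each row of U has a closed-form
    ridge-regression solution, and vice versa."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    I = reg * np.eye(rank)
    for _ in range(n_iter):
        for i in range(m):                  # update each row of U in closed form
            cols = mask[i]
            U[i] = np.linalg.solve(V[cols].T @ V[cols] + I, V[cols].T @ M[i, cols])
        for j in range(n):                  # update each row of V in closed form
            rows = mask[:, j]
            V[j] = np.linalg.solve(U[rows].T @ U[rows] + I, U[rows].T @ M[rows, j])
    return U @ V.T
```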


International Joint Conference on Artificial Intelligence | 2017

Efficient Inexact Proximal Gradient Algorithm for Nonconvex Problems

Quanming Yao; James Tin-Yau Kwok; Fei Gao; Wei Chen; Tie-Yan Liu

The proximal gradient algorithm has been widely used for convex optimization. Recently, it has also been extended to nonconvex problems, and the current state-of-the-art is the nonmonotone accelerated proximal gradient algorithm. However, it typically requires two exact proximal steps in each iteration, and can be inefficient when the proximal step is expensive. In this paper, we propose an efficient proximal gradient algorithm that requires only one inexact (and thus less expensive) proximal step in each iteration. Convergence to a critical point is still guaranteed, with an O(1/k) convergence rate, which is the best rate for nonconvex problems with first-order methods. Experiments on a number of problems demonstrate that the proposed algorithm has performance comparable to the state-of-the-art, but is much faster.
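The underlying template is the proximal gradient iteration x ← prox(x − η∇f(x)); the paper's contribution is to allow that prox step to be computed only approximately. A minimal sketch of the basic (exact-prox) template on an l1-regularized least-squares toy problem; the problem instance and all names are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def proximal_gradient(grad_f, prox, x0, step, n_iter=500):
    """Basic proximal gradient loop: x <- prox(x - step * grad_f(x)).
    The paper replaces `prox` with a cheaper, inexact computation."""
    x = x0
    for _ in range(n_iter):
        x = prox(x - step * grad_f(x), step)
    return x

# Toy problem: f(x) = 0.5 * ||Ax - b||^2 plus lam * ||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 3.0]
b = A @ x_true
lam = 0.1
grad = lambda x: A.T @ (A @ x - b)
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - lam * t, 0.0)  # l1 prox
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of grad
x_hat = proximal_gradient(grad, soft, np.zeros(20), step)
```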


Chinese Conference on Pattern Recognition | 2012

Efficient Group Learning with Hypergraph Partition in Multi-task Learning

Quanming Yao; Xiubao Jiang; Mingming Gong; Xinge You; Yu Liu; Duanquan Xu

Recently, multi-task learning (MTL) has attracted wide attention; it assumes that related tasks should share similar parameter representations, so that joint learning is both appropriate and mutually beneficial. Researchers have also found that imposing a similar-parameter constraint on dissimilar tasks may be harmful to MTL. However, it is difficult to determine which tasks are similar. Kang et al. [1] proposed to simultaneously learn the groups and parameters to address this problem, but the method is inefficient and cannot scale to large data. In this paper, using the properties of the parameter matrix, we describe the group learning process as permuting the parameter matrix into a block diagonal matrix, which can be modeled as a hypergraph partition problem. The optimization algorithm scales well to large data. Extensive experiments demonstrate that our method is advantageous over existing MTL methods in terms of accuracy and efficiency.
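As a toy illustration of the block-diagonal idea (this is not the paper's hypergraph-partition algorithm): if each task's parameter vector is sparse, tasks that share no active features are independent, so grouping tasks by shared active features reduces to finding connected components; permuting rows and columns by group then exposes a block-diagonal parameter matrix. `group_tasks` and its arguments are illustrative assumptions:

```python
import numpy as np

def group_tasks(W, tol=1e-8):
    """Group tasks (columns of parameter matrix W, shape d x T) that share
    active features, using union-find: tasks i and j are linked whenever some
    feature has nonzero weight for both."""
    d, T = W.shape
    parent = list(range(T))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    active = np.abs(W) > tol
    for f in range(d):
        tasks = np.flatnonzero(active[f])   # tasks using feature f
        for t in tasks[1:]:
            union(tasks[0], t)
    groups = {}
    for t in range(T):
        groups.setdefault(find(t), []).append(t)
    return list(groups.values())
```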


International Conference on Artificial Intelligence | 2015

Accelerated inexact soft-impute for fast large-scale matrix completion

Quanming Yao; James Tin-Yau Kwok


Knowledge Discovery and Data Mining | 2017

Meta-Graph Based Recommendation Fusion over Heterogeneous Information Networks

Huan Zhao; Quanming Yao; Jianda Li; Yangqiu Song; Dik Lun Lee


National Conference on Artificial Intelligence | 2015

Colorization by patch-based local low-rank matrix completion

Quanming Yao; James Tin-Yau Kwok


Journal of Machine Learning Research | 2018

Efficient Learning with a Family of Nonconvex Regularizers by Redistributing Nonconvexity

Quanming Yao; James Tin-Yau Kwok


International Conference on Learning Representations | 2017

Loss-aware Binarization of Deep Networks

Lu Hou; Quanming Yao; James Tin-Yau Kwok

Collaboration


Dive into Quanming Yao's collaborations.

Top Co-Authors

James Tin-Yau Kwok (Hong Kong University of Science and Technology)

Yaqing Wang (Hong Kong University of Science and Technology)

Dik Lun Lee (Hong Kong University of Science and Technology)

Huan Zhao (Hong Kong University of Science and Technology)

Huamin Qu (Hong Kong University of Science and Technology)

Jianda Li (Hong Kong University of Science and Technology)

Lu Hou (Hong Kong University of Science and Technology)

Wenliang Zhong (Hong Kong University of Science and Technology)