Network

Latest external collaborations at the country level.

Hotspot

The research topics where Xiangfeng Wang is active.

Publication


Featured research published by Xiangfeng Wang.


International Conference on Multimedia Retrieval | 2018

Deep Extreme Multi-label Learning

Wenjie Zhang; Junchi Yan; Xiangfeng Wang; Hongyuan Zha

Extreme multi-label learning (XML), or extreme classification, has become a practical and important problem with the rise of big data. The main challenge lies in the exponential label space, which involves 2^L possible label sets when the label dimension L is huge, e.g., in the millions for Wikipedia labels. This paper is motivated to better explore the label space by explicitly establishing a label graph. Meanwhile, deep learning has been widely studied and used in various classification problems, including multi-label classification; however, it has not been properly introduced to XML, where the label space can be as large as millions. In this paper, we propose a practical deep embedding method for extreme multi-label classification that simultaneously harvests the ideas of non-linear embedding and graph-prior-based label space modeling. Extensive experiments on public XML datasets show that our method performs competitively against state-of-the-art results.
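A minimal sketch of the general idea (not the authors' released code): a non-linear encoder maps instances into a low-dimensional space shared with a learned label-embedding matrix, and a graph Laplacian penalty built from label co-occurrence keeps embeddings of related labels close. All sizes, the architecture, and hyperparameters below are illustrative assumptions.

```python
# Illustrative sketch: deep label embedding for extreme multi-label learning
# with a label-graph (Laplacian) prior. Shapes and hyperparameters are assumed.
import torch
import torch.nn as nn

n_samples, n_features, n_labels, emb_dim = 256, 100, 1000, 32

X = torch.randn(n_samples, n_features)                 # feature matrix
Y = (torch.rand(n_samples, n_labels) < 0.01).float()   # sparse multi-label targets

# Label graph from co-occurrence: A[i, j] counts how often labels i and j co-occur.
A = Y.t() @ Y
A.fill_diagonal_(0)
L = torch.diag(A.sum(1)) - A                           # graph Laplacian

encoder = nn.Sequential(                               # non-linear instance encoder
    nn.Linear(n_features, 64), nn.ReLU(),
    nn.Linear(64, emb_dim),
)
label_emb = nn.Parameter(torch.randn(n_labels, emb_dim) * 0.01)

opt = torch.optim.Adam(list(encoder.parameters()) + [label_emb], lr=1e-3)
lam = 1e-4                                             # weight of the graph prior

for step in range(200):
    scores = encoder(X) @ label_emb.t()                # instance-label affinities
    bce = nn.functional.binary_cross_entropy_with_logits(scores, Y)
    graph_prior = torch.trace(label_emb.t() @ L @ label_emb) / n_labels
    loss = bce + lam * graph_prior                     # fit data + respect label graph
    opt.zero_grad(); loss.backward(); opt.step()
```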


Neurocomputing | 2018

On the flexibility of block coordinate descent for large-scale optimization

Xiangfeng Wang; Wenjie Zhang; Junchi Yan; Xiaoming Yuan; Hongyuan Zha

We consider a large-scale minimization problem (not necessarily convex) with a non-smooth separable convex penalty. Problems of this form arise widely in modern large-scale machine learning and signal processing applications. In this paper, we present a new perspective on parallel Block Coordinate Descent (BCD) methods. Specifically, we introduce the concept of a two-layered block variable updating loop for parallel BCD methods in modern computing environments comprising multiple distributed computing nodes. The outer loop refers to the block variable updates assigned to distributed nodes, and the inner loop covers the updating steps inside each node. Each loop can adopt either a Jacobi or a Gauss–Seidel update rule. In particular, we give a detailed theoretical convergence analysis for two practical schemes, Jacobi/Gauss–Seidel and Gauss–Seidel/Jacobi, each of which embodies an algorithm. Our new perspective and the theoretical results behind it help devise parallel BCD algorithms in a principled fashion, which in turn lends them a flexible implementation suited to parallel computing environments. The effectiveness of the algorithmic framework is verified on the benchmark tasks of large-scale l1-regularized sparse logistic regression and non-negative matrix factorization.
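As a rough illustration of the two-layered loop (not the paper's code), the toy sketch below runs a Jacobi outer loop over simulated nodes and a Gauss–Seidel inner loop over the blocks inside each node, applied to plain least squares; the block partition, step sizes, and problem instance are assumptions made for illustration only.

```python
# Toy sketch: two-layered block coordinate descent with a Jacobi outer loop
# (across simulated nodes) and a Gauss-Seidel inner loop (across blocks inside
# each node), applied to least squares 0.5*||Ax - b||^2.
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 40
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
x = np.zeros(n)

n_nodes, blocks_per_node = 4, 2
block_size = n // (n_nodes * blocks_per_node)
blocks = [np.arange(i * block_size, (i + 1) * block_size)
          for i in range(n_nodes * blocks_per_node)]
node_blocks = [blocks[k * blocks_per_node:(k + 1) * blocks_per_node]
               for k in range(n_nodes)]

for it in range(100):
    x_snapshot = x.copy()                 # Jacobi: every node reads the same iterate
    proposals = []
    for node in node_blocks:              # (would run in parallel on real nodes)
        x_local = x_snapshot.copy()
        for idx in node:                  # Gauss-Seidel inside the node:
            g = A[:, idx].T @ (A @ x_local - b)           # block gradient
            step = 1.0 / np.linalg.norm(A[:, idx], 2) ** 2  # 1 / block Lipschitz const.
            x_local[idx] -= step * g      # freshest local values reused by next block
        proposals.append((node, x_local))
    for node, x_local in proposals:       # merge each node's block updates
        for idx in node:
            x[idx] = x_local[idx]

print("residual:", np.linalg.norm(A @ x - b))
```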


IEEE International Conference on Multimedia Big Data | 2017

A Divide-and-Conquer Approach for Large-Scale Multi-label Learning

Wenjie Zhang; Xiangfeng Wang; Junchi Yan; Hongyuan Zha

Recently, multi-label learning has drawn considerable attention, as it has many applications in text classification, image annotation, and query/keyword suggestion. A number of remedies have been proposed to address this challenging task. However, they are either tree-based methods with expensive training costs, or embedding-based methods with relatively lower accuracy due to their simple reduction techniques. This paper addresses the issue by developing an efficient divide-and-conquer approach. Specifically, it involves: a) using the feature vectors to cluster the training data into several clusters; b) reformulating the multi-label problem as a recommendation problem by treating each label as an item to be recommended; and c) learning an advanced factorization model to recommend a subset of labels to each point within its local cluster. Extensive experiments on several real-world multi-label datasets demonstrate the efficiency of our proposed algorithm.
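A rough sketch of the three-step pipeline (clustering, recommendation reformulation, local factorization) might look as follows; k-means and a truncated SVD stand in for the paper's clustering step and "advanced factorization model", and all sizes and ranks are illustrative assumptions.

```python
# Rough sketch of a divide-and-conquer multi-label pipeline (not the paper's
# implementation): cluster points by their features, then, per cluster,
# factorize the local label matrix as in a recommender system and read off
# the top-scoring labels for each point.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_samples, n_features, n_labels = 300, 20, 50
X = rng.standard_normal((n_samples, n_features))              # features
Y = (rng.random((n_samples, n_labels)) < 0.05).astype(float)  # label matrix

n_clusters, rank, top_k = 5, 8, 3

# a) cluster the training data by feature vectors
assign = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)

predictions = np.zeros_like(Y)
for c in range(n_clusters):
    idx = np.where(assign == c)[0]
    # b) treat each label as an "item": a low-rank factorization of the local
    #    point-label matrix plays the role of the recommendation model
    U, s, Vt = np.linalg.svd(Y[idx], full_matrices=False)
    scores = U[:, :rank] * s[:rank] @ Vt[:rank]               # reconstructed scores
    # c) recommend the top-k labels to each point in the cluster
    for row, i in enumerate(idx):
        top = np.argsort(scores[row])[::-1][:top_k]
        predictions[i, top] = 1.0
```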


Chinese Conference on Pattern Recognition | 2016

Parallel Randomized Block Coordinate Descent for Neural Probabilistic Language Model with High-Dimensional Output Targets

Xin Liu; Junchi Yan; Xiangfeng Wang; Hongyuan Zha

Training a large probabilistic neural network language model with a typically high-dimensional output is excessively time-consuming, which is one of the main reasons that simpler models such as n-grams are often more popular despite their inferior performance. In this paper, a Chinese neural probabilistic language model is trained on the Fudan Chinese Language Corpus. As hundreds of thousands of distinct words are tokenized from the raw corpus, the model contains tens of millions of parameters. To address this challenge, the popular cluster-based parallel computing platform MPI (Message Passing Interface) is employed to implement the parallel neural network language model. Specifically, we propose a new method, termed Parallel Randomized Block Coordinate Descent (PRBCD), to train this model cost-effectively. Different from the traditional coordinate descent method, our method can be applied to networks with multiple layers, scaling up the gradients with respect to hidden units proportionally based on the sampled parameters. We empirically show that PRBCD is stable and well suited to language models, which contain only a few layers but often have a large number of parameters and extremely high-dimensional output targets.
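A toy sketch of the randomized-block idea, with the hidden-layer gradient rescaled by the inverse sampling fraction, might look like this; it is not the paper's exact PRBCD algorithm (for instance, a squared-error output replaces the language-model softmax, and there is no MPI parallelism here), and all sizes and rates are assumptions.

```python
# Toy illustration: at each step only a random block of output-layer columns is
# updated, and the gradient passed back to the hidden layer is rescaled by the
# inverse sampling fraction so it remains an unbiased estimate of the full gradient.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 50, 30, 10_000        # high-dimensional output layer
W1 = rng.standard_normal((n_in, n_hidden)) * 0.1
W2 = rng.standard_normal((n_hidden, n_out)) * 0.1

x = rng.standard_normal(n_in)
y = rng.standard_normal(n_out)                # dense targets, squared-error loss
lr, block_size = 1e-3, 500

for step in range(100):
    h = np.tanh(x @ W1)                                         # hidden activations
    block = rng.choice(n_out, size=block_size, replace=False)   # sampled output block
    err = h @ W2[:, block] - y[block]                           # residual on the block only
    scale = n_out / block_size                                  # inverse sampling fraction
    grad_h = scale * (W2[:, block] @ err)                       # rescaled hidden gradient
    W2[:, block] -= lr * np.outer(h, err)                       # block update of output layer
    W1 -= lr * np.outer(x, grad_h * (1 - h ** 2))               # update the hidden layer
```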


International Joint Conference on Artificial Intelligence | 2016

On Modeling and Predicting Individual Paper Citation Count over Time

Shuai Xiao; Junchi Yan; Changsheng Li; Bo Jin; Xiangfeng Wang; Xiaokang Yang; Stephen M. Chu; Hongyuan Zha


National Conference on Artificial Intelligence | 2017

On Predictive Patent Valuation: Forecasting Patent Citations and Their Types

Xin Liu; Junchi Yan; Shuai Xiao; Xiangfeng Wang; Hongyuan Zha; Stephen M. Chu


arXiv: Learning | 2015

Active Sample Learning and Feature Selection: A Unified Approach

Changsheng Li; Xiangfeng Wang; Weishan Dong; Junchi Yan; Qingshan Liu; Hongyuan Zha


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2018

Dynamic Structure Embedded Online Multiple-Output Regression for Streaming Data

Changsheng Li; Fan Wei; Weishan Dong; Xiangfeng Wang; Qingshan Liu; Xin Zhang


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2018

Joint Active Learning with Feature Selection via CUR Matrix Decomposition

Changsheng Li; Xiangfeng Wang; Weishan Dong; Junchi Yan; Qingshan Liu; Hongyuan Zha


National Conference on Artificial Intelligence | 2016

Spatially Regularized Streaming Sensor Selection

Changsheng Li; Fan Wei; Weishan Dong; Xiangfeng Wang; Junchi Yan; Xiaobin Zhu; Qingshan Liu; Xin Zhang

Collaboration


Dive into Xiangfeng Wang's collaborations.

Top Co-Authors

Junchi Yan, Shanghai Jiao Tong University
Hongyuan Zha, Georgia Institute of Technology
Qingshan Liu, Nanjing University of Information Science and Technology
Shuai Xiao, Shanghai Jiao Tong University
Wenjie Zhang, University of New South Wales
Bo Jin, East China Normal University
Xiaokang Yang, Shanghai Jiao Tong University
Xin Liu, East China Normal University