Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jinfeng Yi is active.

Publication


Featured research published by Jinfeng Yi.


International Conference on Data Mining | 2012

Robust Ensemble Clustering by Matrix Completion

Jinfeng Yi; Tianbao Yang; Rong Jin; Anil K. Jain; Mehrdad Mahdavi

Data clustering is an important task and has found applications in numerous real-world problems. Since no single clustering algorithm can identify all the different types of cluster shapes and structures, ensemble clustering was proposed to combine different partitions of the same data generated by multiple clustering algorithms. The key idea of most ensemble clustering algorithms is to find a partition that is consistent with most of the available partitions of the input data. One problem with these algorithms is their inability to handle uncertain data pairs, i.e., data pairs that about half of the partitions place in the same cluster while the other half place in different clusters. When the number of uncertain data pairs is large, they can mislead the ensemble clustering algorithm in generating the final partition. To overcome this limitation, we propose an ensemble clustering approach based on the technique of matrix completion. The proposed algorithm constructs a partially observed similarity matrix from the data pairs whose cluster memberships are agreed upon by most of the clustering algorithms in the ensemble. It then applies a matrix completion algorithm to complete the similarity matrix. The final data partition is computed by applying an efficient spectral clustering algorithm to the completed matrix. Our empirical studies with multiple real-world datasets show that the proposed algorithm performs significantly better than state-of-the-art ensemble clustering algorithms.
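
The pipeline described in the abstract (keep only pairs the ensemble agrees on, complete the partial similarity matrix, then run spectral clustering) can be illustrated with a short sketch. This is not the authors' implementation: it assumes numpy and scikit-learn, substitutes a simple SVD-thresholding loop for the paper's matrix completion solver, and all names (`ensemble_cluster`, `agree`, `rank`) are illustrative.

```python
# Sketch of the ensemble-clustering-by-matrix-completion pipeline (illustrative,
# not the authors' code):
# 1) keep only pairs whose co-cluster membership most base partitions agree on,
# 2) fill in the missing entries with a simple low-rank (soft-impute style) loop,
# 3) run spectral clustering on the completed similarity matrix.
import numpy as np
from sklearn.cluster import SpectralClustering

def ensemble_cluster(partitions, n_clusters, agree=0.8, rank=None, n_iters=100):
    """partitions: list of 1-D label arrays, one per base clustering."""
    P = np.asarray(partitions)                  # shape (m, n)
    m, n = P.shape
    # fraction of base partitions that put each pair in the same cluster
    co = np.mean([np.equal.outer(p, p) for p in P], axis=0)
    S = np.full((n, n), np.nan)                 # partially observed similarity
    S[co >= agree] = 1.0                        # confidently "same cluster"
    S[co <= 1 - agree] = 0.0                    # confidently "different clusters"
    observed = ~np.isnan(S)
    # simple completion: iteratively replace missing entries with a low-rank
    # reconstruction of the current estimate (stand-in for the paper's solver)
    X = np.where(observed, S, co)               # warm start with raw co-association
    k = rank or n_clusters
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = (U[:, :k] * s[:k]) @ Vt[:k]
        X = np.where(observed, S, low_rank)
    X = np.clip((X + X.T) / 2, 0.0, 1.0)        # symmetrize, keep in [0, 1]
    sc = SpectralClustering(n_clusters=n_clusters, affinity='precomputed')
    return sc.fit_predict(X)
```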


Machine Learning | 2015

Efficient distance metric learning by adaptive sampling and mini-batch stochastic gradient descent (SGD)

Qi Qian; Rong Jin; Jinfeng Yi; Lijun Zhang; Shenghuo Zhu

Distance metric learning (DML) is an important task that has found applications in many domains. The high computational cost of DML arises from the large number of variables to be determined and the constraint that a distance metric has to be a positive semi-definite (PSD) matrix. Although stochastic gradient descent (SGD) has been successfully applied to improve the efficiency of DML, it can still be computationally expensive to ensure that the solution is a PSD matrix: at every iteration, the updated distance metric must be projected onto the PSD cone, an expensive operation. We address this challenge by developing two strategies within SGD, i.e., mini-batch and adaptive sampling, to effectively reduce the number of updates (i.e., projections onto the PSD cone) in SGD. We also develop hybrid approaches that combine the strength of adaptive sampling with that of mini-batch online learning techniques to further improve the computational efficiency of SGD for DML. We prove theoretical guarantees for both the adaptive sampling and mini-batch based approaches, and conduct an extensive empirical study to verify the effectiveness of the proposed algorithms for DML.
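
A minimal sketch of the mini-batch idea described above: accumulate gradients over a batch of pairwise constraints and project onto the PSD cone only occasionally rather than after every step. It assumes numpy, uses a simple pairwise hinge loss, and omits the paper's adaptive sampling and hybrid schemes; function names and the exact loss form are illustrative, not the authors' formulation.

```python
# Illustrative mini-batch SGD for distance metric learning. The costly PSD
# projection is performed only every `proj_every` updates and once at the end,
# instead of after every gradient step (a simplified stand-in for the paper's
# mini-batch / adaptive-sampling strategies).
import numpy as np

def project_psd(M):
    """Project a symmetric matrix onto the PSD cone by clipping negative eigenvalues."""
    w, V = np.linalg.eigh((M + M.T) / 2)
    return (V * np.maximum(w, 0.0)) @ V.T

def dml_minibatch_sgd(X, pairs, labels, lr=0.01, batch=32, epochs=10,
                      margin=1.0, proj_every=50, seed=0):
    """X: (n, d) data; pairs: (c, 2) index pairs; labels: +1 similar, -1 dissimilar."""
    X, pairs, labels = np.asarray(X), np.asarray(pairs), np.asarray(labels)
    rng = np.random.default_rng(seed)
    M = np.eye(X.shape[1])                            # start from the Euclidean metric
    step = 0
    n_batches = max(len(pairs) // batch, 1)
    for _ in range(epochs):
        for _ in range(n_batches):
            idx = rng.choice(len(pairs), size=min(batch, len(pairs)), replace=False)
            grad = np.zeros_like(M)
            for (i, j), y in zip(pairs[idx], labels[idx]):
                diff = X[i] - X[j]
                dist = diff @ M @ diff                # squared Mahalanobis distance
                if y * (margin - dist) < 1.0:         # hinge loss is active
                    grad += y * np.outer(diff, diff)
            M -= lr * grad / len(idx)
            step += 1
            if step % proj_every == 0:                # infrequent PSD projection
                M = project_psd(M)
    return project_psd(M)                             # one final projection
```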


Knowledge Discovery and Data Mining | 2015

An Efficient Semi-Supervised Clustering Algorithm with Sequential Constraints

Jinfeng Yi; Lijun Zhang; Tianbao Yang; Wei Liu; Jun Wang

Semi-supervised clustering leverages side information, such as pairwise constraints, to guide the clustering procedure. Despite promising progress, existing semi-supervised clustering approaches overlook the setting in which side information is generated sequentially, a natural situation in numerous real-world applications such as social network and e-commerce system analysis. When new constraints arrive, classical semi-supervised clustering algorithms need to re-optimize their objectives over all available data samples and constraints, which prevents them from efficiently updating the data partitions they have already obtained. To address this challenge, we propose an efficient dynamic semi-supervised clustering framework that casts the clustering problem as a search problem over a feasible convex set, i.e., a convex hull whose extreme points are an ensemble of m data partitions. According to the principle of ensemble clustering, the optimal partition lies in this convex hull and can thus be uniquely represented by an m-dimensional probability simplex vector. As such, the dynamic semi-supervised clustering problem reduces to updating a probability simplex vector subject to the newly received pairwise constraints. We then develop a computationally efficient procedure that updates the probability simplex vector in O(m²) time, irrespective of the data size n. Our empirical studies on several real-world benchmark datasets show that the proposed algorithm outperforms state-of-the-art semi-supervised clustering algorithms, with a clear performance gain and significantly reduced running time.
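
The core representation above (the consensus partition as a point on the m-dimensional probability simplex over an ensemble of base partitions) can be sketched as follows. This is a simplified stand-in for the paper's O(m²) updating procedure, assuming numpy; the class name, learning rate, and the squared-error update per constraint are illustrative choices, not the authors' exact algorithm.

```python
# Illustrative sketch: maintain a probability-simplex weight vector gamma over m
# base partitions; the consensus similarity of points (i, j) is the weighted vote
# sum_k gamma[k] * [partition_k(i) == partition_k(j)]. Each new must-link /
# cannot-link constraint triggers a small projected-gradient step on gamma,
# independent of the data size n.
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

class DynamicSemiSupervisedClustering:
    def __init__(self, base_partitions, lr=0.1):
        self.P = np.asarray(base_partitions)          # (m, n) base labelings
        self.m = self.P.shape[0]
        self.gamma = np.full(self.m, 1.0 / self.m)    # uniform simplex start
        self.lr = lr

    def similarity(self, i, j):
        """Consensus probability that points i and j belong to the same cluster."""
        agree = (self.P[:, i] == self.P[:, j]).astype(float)   # (m,)
        return float(self.gamma @ agree)

    def add_constraint(self, i, j, must_link):
        """Online update: push the consensus similarity of (i, j) toward 1 for a
        must-link constraint and toward 0 for a cannot-link constraint."""
        agree = (self.P[:, i] == self.P[:, j]).astype(float)
        target = 1.0 if must_link else 0.0
        residual = self.gamma @ agree - target
        grad = residual * agree                        # gradient of 0.5 * residual**2
        self.gamma = project_simplex(self.gamma - self.lr * grad)
```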


National Conference on Artificial Intelligence | 2013

Inferring Users’ Preferences from Crowdsourced Pairwise Comparisons: A Matrix Completion Approach

Jinfeng Yi; Rong Jin; Shaili Jain; Anil K. Jain


Neural Information Processing Systems | 2012

Semi-Crowdsourced Clustering: Generalizing Crowd Labeling by Robust Distance Metric Learning

Jinfeng Yi; Rong Jin; Shaili Jain; Tianbao Yang; Anil K. Jain


National Conference on Artificial Intelligence | 2012

Crowdclustering with sparse pairwise labels: A matrix completion approach

Jinfeng Yi; Rong Jin; Anil K. Jain; Shaili Jain


International Conference on Machine Learning | 2014

Efficient Algorithms for Robust One-bit Compressive Sensing

Lijun Zhang; Jinfeng Yi; Rong Jin


Neural Information Processing Systems | 2012

Stochastic Gradient Descent with Only One Projection

Mehrdad Mahdavi; Tianbao Yang; Rong Jin; Shenghuo Zhu; Jinfeng Yi


International Conference on Machine Learning | 2013

Semi-supervised Clustering by Input Pattern Assisted Pairwise Similarity Matrix Completion

Jinfeng Yi; Lijun Zhang; Rong Jin; Qi Qian; Anil K. Jain


International Conference on Machine Learning | 2013

Online Kernel Learning with a Near Optimal Sparsity Bound

Lijun Zhang; Jinfeng Yi; Rong Jin; Ming Lin; Xiaofei He

Collaboration


Dive into Jinfeng Yi's collaborations.

Top Co-Authors

Anil K. Jain

Michigan State University

Mehrdad Mahdavi

Michigan State University

Cho-Jui Hsieh

University of California
