Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ian En-Hsu Yen is active.

Publication


Featured research published by Ian En-Hsu Yen.


Knowledge Discovery and Data Mining | 2017

PPDsparse: A Parallel Primal-Dual Sparse Method for Extreme Classification

Ian En-Hsu Yen; Xiangru Huang; Wei Dai; Pradeep Ravikumar; Inderjit S. Dhillon; Eric P. Xing

Extreme classification comprises multi-class or multi-label prediction with a very large number of classes, and is increasingly relevant to many real-world applications such as text and image tagging. In this setting, standard classification methods, with complexity linear in the number of classes, become intractable, while enforcing structural constraints among classes (such as low rank or a tree structure) to reduce complexity often sacrifices accuracy for efficiency. The recent PD-Sparse method addresses this via an algorithm that is sub-linear in the number of variables, by exploiting the primal-dual sparsity inherent in a specific loss function, namely the max-margin loss. In this work, we extend PD-Sparse to be efficiently parallelized in large-scale distributed settings. By introducing separable loss functions, we can scale out the training, with network communication and space efficiency comparable to those of one-versus-all approaches, while maintaining an overall complexity sub-linear in the number of classes. On several large-scale benchmarks, our proposed method achieves accuracy competitive with the state of the art while reducing the training time from days to tens of minutes compared with existing parallel or sparse methods on a cluster of 100 cores.
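
The key property the abstract relies on is worth seeing concretely: under the max-margin loss, an example's dual variables are nonzero only for classes that violate the margin, so an update touches a handful of weight rows rather than all K of them. Below is a minimal numpy sketch of that active-set effect; the greedy single-violator update, the problem sizes, and the learning rate are illustrative assumptions, not the PD-Sparse or PPDsparse algorithm itself.

    import numpy as np

    # Toy illustration of the primal-dual sparsity that PD-Sparse exploits:
    # under the max-margin (multiclass hinge) loss, dual variables are
    # nonzero only for margin-violating classes, so a step updates only a
    # couple of the K weight rows.  This greedy "most-violating class"
    # update is a simplification for illustration, not the PD-Sparse /
    # PPDsparse algorithm or its distributed variant.

    rng = np.random.default_rng(0)
    n, d, K = 200, 50, 1000            # examples, features, classes
    X = rng.standard_normal((n, d))
    y = rng.integers(0, K, size=n)     # labels
    W = np.zeros((K, d))               # one weight vector per class
    lr = 0.1

    for epoch in range(5):
        violations = 0
        for i in range(n):
            scores = W @ X[i]          # O(Kd) here; PPDsparse prunes and
            margin = scores + 1.0      # parallelizes this search over classes
            margin[y[i]] = scores[y[i]]
            k = int(np.argmax(margin))
            if k == y[i]:              # margin satisfied: duals stay at zero
                continue
            violations += 1
            W[y[i]] += lr * X[i]       # sparse update: only 2 of K rows change
            W[k] -= lr * X[i]
        print(f"epoch {epoch}: {violations} margin violations")

    print("train accuracy:", np.mean((W @ X.T).argmax(axis=0) == y))

As training proceeds, the number of margin violations per epoch shrinks, which is exactly what makes the per-step cost sub-linear in practice.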


Knowledge Discovery and Data Mining | 2018

Scalable Spectral Clustering Using Random Binning Features

Lingfei Wu; Pin-Yu Chen; Ian En-Hsu Yen; Fangli Xu; Yinglong Xia; Charu C. Aggarwal

Spectral clustering is one of the most effective approaches for capturing hidden cluster structure in data. However, it does not scale well to large problems because of its quadratic complexity in constructing the similarity graph and computing the subsequent eigendecomposition. Although a number of methods have been proposed to accelerate spectral clustering, most of them trade considerable information loss in the original data for reduced computational cost. In this paper, we present a novel scalable spectral clustering method that uses Random Binning features (RB) to simultaneously accelerate both similarity graph construction and the eigendecomposition. Specifically, we implicitly approximate the graph similarity (kernel) matrix by the inner product of a large sparse feature matrix generated by RB. We then apply a state-of-the-art SVD solver to efficiently compute the eigenvectors of this large matrix for spectral clustering. Using these two building blocks, we reduce the computational cost from quadratic to linear in the number of data points while achieving similar accuracy. Our theoretical analysis shows that spectral clustering via RB converges faster to the exact spectral clustering than the standard Random Feature approximation. Extensive experiments on 8 benchmarks show that the proposed method either outperforms or matches the state-of-the-art methods in both accuracy and runtime. Moreover, our method exhibits linear scalability in both the number of data samples and the number of RB features.
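
A minimal sketch of the pipeline described above, assuming Laplacian-kernel random binning features in the style of Rahimi and Recht, scipy's svds as a stand-in SVD solver, and k-means on the resulting embedding. It skips the degree normalization a full spectral clustering would apply, and none of the parameter choices come from the paper.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.linalg import svds
    from sklearn.cluster import KMeans

    def random_binning_features(X, R=32, gamma=1.0, seed=0):
        """Random binning features for the Laplacian kernel exp(-gamma*|x-y|_1).
        Each of the R repetitions draws a random grid; points in the same
        cell share a feature, so Z @ Z.T approximates the kernel matrix."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        rows, cols, col_offset = [], [], 0
        for _ in range(R):
            # Grid pitch delta ~ Gamma(2, 1/gamma) per dimension, random shift u.
            delta = rng.gamma(shape=2.0, scale=1.0 / gamma, size=d)
            u = rng.uniform(0.0, delta)
            bins = np.floor((X - u) / delta).astype(np.int64)
            _, cell_ids = np.unique(bins, axis=0, return_inverse=True)
            cell_ids = cell_ids.ravel()        # guard against numpy inverse-shape changes
            rows.append(np.arange(n))
            cols.append(col_offset + cell_ids)
            col_offset += int(cell_ids.max()) + 1
        data = np.full(n * R, 1.0 / np.sqrt(R))  # averages over the R grids
        return csr_matrix((data, (np.concatenate(rows), np.concatenate(cols))),
                          shape=(n, col_offset))

    # Toy data: two well-separated Gaussian blobs.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(2, 0.3, (100, 2))])

    Z = random_binning_features(X, R=32, gamma=2.0)
    # Top singular vectors of Z give the spectral embedding without ever
    # forming the n-by-n similarity matrix Z @ Z.T.
    U, s, _ = svds(Z, k=2)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(U)
    print("cluster sizes:", np.bincount(labels))

Because Z is tall and sparse, the SVD step costs time roughly linear in the number of nonzeros of Z, which is where the quadratic-to-linear reduction comes from.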


Neural Information Processing Systems | 2015

Sparse Linear Programming via Primal and Dual Augmented Coordinate Descent

Ian En-Hsu Yen; Kai Zhong; Cho-Jui Hsieh; Pradeep Ravikumar; Inderjit S. Dhillon


Neural Information Processing Systems | 2015

A Dual-Augmented Block Minimization Framework for Learning with Limited Memory

Ian En-Hsu Yen; Shan-Wei Lin; Shou-De Lin


arXiv: Machine Learning | 2018

D2KE: From Distance to Kernel and Embedding

Lingfei Wu; Ian En-Hsu Yen; Fangli Xu; Pradeep Ravikumar; Michael J. Witbrock


International Conference on Machine Learning | 2017

Doubly Greedy Primal-Dual Coordinate Descent for Sparse Empirical Risk Minimization

Qi Lei; Ian En-Hsu Yen; Chao-Yuan Wu; Inderjit S. Dhillon; Pradeep Ravikumar


Empirical Methods in Natural Language Processing | 2017

Word Mover’s Embedding: From Word2Vec to Document Embedding

Lingfei Wu; Ian En-Hsu Yen; Kun Xu; Fangli Xu; Avinash Balakrishnan; Pin-Yu Chen; Pradeep Ravikumar; Michael J. Witbrock


Neural Information Processing Systems | 2016

Dual Decomposed Learning with Factorwise Oracle for Structural SVM of Large Output Domain

Ian En-Hsu Yen; Xiangru Huang; Kai Zhong; Ruohan Zhang; Pradeep Ravikumar; Inderjit S. Dhillon


International Conference on Machine Learning | 2016

A Convex Atomic-Norm Approach to Multiple Sequence Alignment and Motif Discovery

Ian En-Hsu Yen; Xin Lin; Jiong Zhang; Pradeep Ravikumar; Inderjit S. Dhillon


Neural Information Processing Systems | 2018

Representer Point Selection for Explaining Deep Neural Networks

Chih-Kuan Yeh; Joon Kim; Ian En-Hsu Yen; Pradeep Ravikumar

Collaboration


Dive into Ian En-Hsu Yen's collaborations.

Top Co-Authors

Pradeep Ravikumar | Carnegie Mellon University
Inderjit S. Dhillon | University of Texas at Austin
Xiangru Huang | University of Texas at Austin
Jiong Zhang | University of Texas at Austin
Kai Zhong | University of Texas at Austin
Qi Lei | University of Texas at Austin