Publication


Featured research published by Kuan Liu.


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2016

A comparison between deep neural nets and kernel acoustic models for speech recognition

Zhiyun Lu; Dong Guo; Alireza Bagheri Garakani; Kuan Liu; Avner May; Aurélien Bellet; Linxi Fan; Michael Collins; Brian Kingsbury; Michael Picheny; Fei Sha

We study large-scale kernel methods for acoustic modeling and compare them to DNNs on performance metrics related to both acoustic modeling and recognition. Measured by perplexity and frame-level classification accuracy, kernel-based acoustic models are as effective as their DNN counterparts. However, on token error rates DNN models can be significantly better. We have discovered that this might be attributed to the DNNs' unique strength in reducing both the perplexity and the entropy of the predicted posterior probabilities. Motivated by our findings, we propose a new technique, entropy regularized perplexity, for model selection. This technique can noticeably improve the recognition performance of both types of models and reduces the gap between them. While demonstrated on Broadcast News, this technique could also be applicable to other tasks.
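
The abstract does not spell out the exact form of the entropy regularized perplexity criterion, so the following is a minimal Python sketch of one plausible instantiation: frame-level log perplexity (cross-entropy) plus the mean entropy of the predicted posteriors, combined through a hypothetical trade-off weight lam. Treat it as an illustration under those assumptions, not the paper's definition.

    import numpy as np

    def entropy_regularized_perplexity(posteriors, labels, lam=1.0):
        """Model-selection score mixing frame-level log perplexity with the
        average entropy of the predicted posteriors (hypothetical form).

        posteriors : (n_frames, n_states) predicted probabilities per frame
        labels     : (n_frames,) true state indices
        lam        : assumed trade-off weight; not given in the abstract
        """
        eps = 1e-12
        # Log perplexity = mean negative log-probability of the true labels.
        log_ppl = -np.mean(np.log(posteriors[np.arange(len(labels)), labels] + eps))
        # Mean entropy of the predicted posterior distributions.
        entropy = -np.mean(np.sum(posteriors * np.log(posteriors + eps), axis=1))
        # Both terms are "lower is better": pick the model minimizing the score.
        return log_ppl + lam * entropy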


arXiv: Machine Learning | 2018

Kernel Approximation Methods for Speech Recognition

Avner May; Alireza Bagheri Garakani; Zhiyun Lu; Dong Guo; Kuan Liu; Aurélien Bellet; Linxi Fan; Michael Collins; Daniel J. Hsu; Brian Kingsbury; Michael Picheny; Fei Sha

We study large-scale kernel methods for acoustic modeling in speech recognition and compare their performance to deep neural networks (DNNs). We perform experiments on four speech recognition datasets, including the TIMIT and Broadcast News benchmark tasks, and compare these two types of models on frame-level performance metrics (accuracy, cross-entropy), as well as on recognition metrics (word/character error rate). In order to scale kernel methods to these large datasets, we use the random Fourier feature method of Rahimi and Recht [2007]. We propose two novel techniques for improving the performance of kernel acoustic models. First, in order to reduce the number of random features required by kernel models, we propose a simple but effective method for feature selection. The method is able to explore a large number of non-linear features while maintaining a compact model more efficiently than existing approaches. Second, we present a number of frame-level metrics which correlate very strongly with recognition performance when computed on the held-out set; we take advantage of these correlations by monitoring these metrics during training in order to decide when to stop learning. This technique can noticeably improve the recognition performance of both DNN and kernel models, while narrowing the gap between them. Additionally, we show that the linear bottleneck method of Sainath et al. [2013a] improves the performance of our kernel models significantly, in addition to speeding up training and making the models more compact. Together, these three methods dramatically improve the performance of kernel acoustic models, making their performance comparable to DNNs on the tasks we explored.
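
The random Fourier feature method of Rahimi and Recht [2007] cited above has a compact standard form; the NumPy sketch below approximates a Gaussian (RBF) kernel with explicit features, so a linear model trained on Z behaves like a kernel model. The feature-selection and linear-bottleneck refinements described in the abstract are not shown, and gamma and n_features here are illustrative defaults.

    import numpy as np

    def random_fourier_features(X, n_features=2000, gamma=1.0, seed=0):
        """Explicit feature map Z such that Z @ Z.T approximates the RBF
        kernel matrix K with K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        # Frequencies sampled from the kernel's Fourier transform (a Gaussian).
        W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
        # Random phases let a single cosine per feature suffice.
        b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
        return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)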


Conference on Recommender Systems (RecSys) | 2016

Temporal learning and sequence modeling for a job recommender system

Kuan Liu; Xing Shi; Anoop Kumar; Linhong Zhu; Prem Natarajan

We present our solution to the job recommendation task of the RecSys Challenge 2016. The main contribution of our work is to combine temporal learning with sequence modeling to capture complex user-item activity patterns and thereby improve job recommendations. First, we propose a time-based ranking model applied to historical observations, together with a hybrid matrix factorization over time-re-weighted interactions. Second, we exploit sequence properties of user-item activities and develop an RNN-based recommendation model. Our solution achieved 5th place in the challenge among more than 100 participants. Notably, the strong performance of our RNN approach points to a promising new direction for sequence modeling in recommender systems.
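
The abstract does not specify how interactions are re-weighted over time, so the sketch below illustrates one common choice, exponential decay with a half-life: recent interactions get weights near 1 and older ones decay toward 0 before being fed to a (hybrid) matrix factorization. The half_life_days parameter and the decay form are assumptions for illustration.

    import numpy as np

    def time_decay_weights(timestamps, half_life_days=30.0):
        """Per-interaction confidence weights that halve every half-life.

        timestamps : (n_interactions,) interaction times in days
        """
        age = timestamps.max() - timestamps      # days since each interaction
        return 0.5 ** (age / half_life_days)     # weight in (0, 1], newest = 1

    # Example: interactions 0, 15, and 60 days old get weights 1.0, ~0.71, 0.25.
    w = time_decay_weights(np.array([100.0, 85.0, 40.0]))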


arXiv: Learning | 2014

How to Scale Up Kernel Methods to Be As Good As Deep Neural Nets

Zhiyun Lu; Avner May; Kuan Liu; Alireza Bagheri Garakani; Dong Guo; Aurélien Bellet; Linxi Fan; Michael Collins; Brian Kingsbury; Michael Picheny; Fei Sha


Neural Information Processing Systems (NIPS) | 2013

Similarity Component Analysis

Soravit Changpinyo; Kuan Liu; Fei Sha


International Conference on Artificial Intelligence and Statistics (AISTATS) | 2015

Similarity Learning for High-Dimensional Sparse Data

Kuan Liu; Aurélien Bellet; Fei Sha


National Conference on Artificial Intelligence (AAAI) | 2018

A Batch Learning Framework for Scalable Personalized Ranking

Kuan Liu; Prem Natarajan


Conference on Recommender Systems (RecSys) | 2017

WMRB: Learning to Rank in a Scalable Batch Training Approach

Kuan Liu; Prem Natarajan


arXiv: Machine Learning | 2018

Learn to Combine Modalities in Multimodal Deep Learning

Kuan Liu; Yanen Li; Ning Xu; Prem Natarajan


arXiv: Information Retrieval | 2018

A Sequential Embedding Approach for Item Recommendation with Heterogeneous Attributes

Kuan Liu; Xing Shi; Prem Natarajan

Collaboration


Dive into Kuan Liu's collaborations.

Top Co-Authors

Prem Natarajan
University of Southern California

Fei Sha
University of Southern California

Alireza Bagheri Garakani
University of Southern California

Linxi Fan
University of Southern California

Xing Shi
University of Southern California