Tzu-Kuo Huang
Carnegie Mellon University
Publications
Featured research published by Tzu-Kuo Huang.
SIAM International Conference on Data Mining | 2010
Liang Xiong; Xi Chen; Tzu-Kuo Huang; Jeff G. Schneider; Jaime G. Carbonell
Real-world relational data are seldom stationary, yet traditional collaborative filtering algorithms generally rely on this assumption. Motivated by our sales prediction problem, we propose a factor-based algorithm that is able to take time into account. By introducing additional factors for time, we formalize this problem as a tensor factorization with a special constraint on the time dimension. Further, we provide a fully Bayesian treatment to avoid tuning parameters and achieve automatic model complexity control. To learn the model we develop an efficient sampling procedure that is capable of analyzing large-scale data sets. This new algorithm, called Bayesian Probabilistic Tensor Factorization (BPTF), is evaluated on several real-world problems including sales prediction and movie recommendation. Empirical results demonstrate the superiority of our temporal model.
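At its core, the model described here is a three-way (user, item, time) factorization with an extra set of time factors. As a rough illustration only, the sketch below fits such factors to observed (user, item, time, rating) tuples by stochastic gradient descent on a squared-error loss; the function name, toy data, and point-estimate objective are assumptions made for illustration, standing in for the paper's fully Bayesian BPTF, which instead samples the factors and places a special constraint on the time dimension.

```python
import numpy as np

def fit_temporal_factors(obs, shape, rank=3, lr=0.05, reg=0.02, epochs=500):
    """Fit user/item/time factor matrices to observed (user, item, time, rating)
    tuples by SGD on squared error -- a point-estimate stand-in, not Bayesian."""
    n_users, n_items, n_times = shape
    rng = np.random.default_rng(0)
    U = 0.5 * rng.standard_normal((n_users, rank))
    V = 0.5 * rng.standard_normal((n_items, rank))
    T = 0.5 * rng.standard_normal((n_times, rank))
    for _ in range(epochs):
        for i, j, t, r in obs:
            err = np.sum(U[i] * V[j] * T[t]) - r   # CP reconstruction error
            gU = err * V[j] * T[t] + reg * U[i]    # gradients of the loss
            gV = err * U[i] * T[t] + reg * V[j]
            gT = err * U[i] * V[j] + reg * T[t]
            U[i] -= lr * gU
            V[j] -= lr * gV
            T[t] -= lr * gT
    return U, V, T

# Toy usage: 3 users, 4 items, 2 time slices.
obs = [(0, 1, 0, 4.0), (1, 2, 0, 3.0), (2, 3, 1, 5.0), (0, 2, 1, 2.0)]
U, V, T = fit_temporal_factors(obs, shape=(3, 4, 2))
print(np.sum(U[0] * V[1] * T[0]))  # reconstructed rating for the first tuple
```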
International Conference on Machine Learning | 2009
Tzu-Kuo Huang; Jeff G. Schneider
Virtually all methods of learning dynamic systems from data start from the same basic assumption: that the learning algorithm will be provided with a sequence, or trajectory, of data generated from the dynamic system. In this paper we consider the case where the data are not sequenced. The learning algorithm is presented with a set of data points from the system's operation, but with no temporal ordering. The data are simply drawn as individual, disconnected points. While this assumption may seem absurd at first glance, we observe that many scientific modeling tasks have exactly this property. In this paper we restrict our attention to learning linear, discrete-time models. We propose several algorithms for learning these models based on optimizing approximate likelihood functions and test the methods on several synthetic data sets.
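To make the unsequenced setting concrete, the sketch below simulates a slow, stable linear system, discards the ordering, and then estimates the transition matrix with a crude heuristic: pair each point with its nearest neighbor as a guessed successor and solve a least-squares problem. The heuristic, the toy system, and all variable names are assumptions for illustration only; it ignores the direction-of-time ambiguity and is not one of the approximate-likelihood algorithms proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a slow, stable 2-D linear system x_{t+1} = A x_t + noise.
A_true = np.array([[0.99, -0.05],
                   [0.05,  0.99]])
X = np.zeros((500, 2))
X[0] = rng.standard_normal(2)
for t in range(1, len(X)):
    X[t] = A_true @ X[t - 1] + 0.01 * rng.standard_normal(2)

# Destroy the temporal ordering: the learner only sees a bag of points.
points = rng.permutation(X)

# Crude heuristic (not the paper's estimator): treat each point's nearest
# neighbor as its successor and fit the transition by least squares.
dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
np.fill_diagonal(dist, np.inf)
nn = dist.argmin(axis=1)
W, *_ = np.linalg.lstsq(points, points[nn], rcond=None)  # rows: x_next ~ x W
print(W.T)  # rough estimate of A_true (up to the time-direction ambiguity)
```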
European Conference on Machine Learning | 2012
Tzu-Kuo Huang; Jeff G. Schneider
Vector auto-regressive (VAR) models are useful for analyzing temporal dependencies among multivariate time series, known as Granger causality. There exist methods for learning sparse VAR models, leading directly to causal networks among the variables of interest. Another useful type of analysis comes from clustering methods, which summarize multiple time series by putting them into groups. We develop a methodology that integrates both types of analyses, motivated by the intuition that Granger-causal relations in real-world time series may exhibit clustering structure, in which case both should be estimated jointly. Our methodology combines sparse learning with a nonparametric bi-clustered prior over the VAR model, conducting full Bayesian inference via blocked Gibbs sampling. Experiments on simulated and real data demonstrate improvements in both model estimation and clustering quality over standard alternatives, and in particular yield biologically more meaningful clusters in a T-cell activation gene-expression time series dataset than those found by other methods.
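The sparse-VAR building block referred to above can be illustrated by regressing each series on the one-step-lagged values of all series with an L1 penalty and reading nonzero coefficients as candidate Granger-causal edges. The sketch below does this with scikit-learn's Lasso on made-up data; the function name, toy series, and penalty weight are assumptions, and it omits the paper's bi-clustered prior and blocked Gibbs sampler entirely.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_var_edges(X, alpha=0.02):
    """Lag-1 sparse VAR: regress each series on the previous time step of all
    series with an L1 penalty; coefs[i, j] != 0 suggests series j -> series i."""
    past, future = X[:-1], X[1:]
    coefs = np.zeros((X.shape[1], X.shape[1]))
    for i in range(X.shape[1]):
        model = Lasso(alpha=alpha, fit_intercept=False).fit(past, future[:, i])
        coefs[i] = model.coef_
    return coefs

# Toy data: series 1 is driven by the lagged value of series 0.
rng = np.random.default_rng(0)
X = np.zeros((300, 3))
X[0] = rng.standard_normal(3)
for t in range(1, 300):
    X[t, 0] = 0.8 * X[t - 1, 0] + 0.3 * rng.standard_normal()
    X[t, 1] = 0.7 * X[t - 1, 1] + 0.5 * X[t - 1, 0] + 0.3 * rng.standard_normal()
    X[t, 2] = 0.6 * X[t - 1, 2] + 0.3 * rng.standard_normal()
print(np.round(sparse_var_edges(X), 2))  # expect a nonzero entry at (1, 0)
```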
International Conference on Machine Learning | 2014
Xuezhi Wang; Tzu-Kuo Huang; Jeff G. Schneider
Neural Information Processing Systems | 2011
Tzu-Kuo Huang; Jeff G. Schneider
Uncertainty in Artificial Intelligence | 2015
Yifei Ma; Tzu-Kuo Huang; Jeff G. Schneider
International Conference on Machine Learning | 2013
Tzu-Kuo Huang; Jeff G. Schneider
Neural Information Processing Systems | 2013
Tzu-Kuo Huang; Jeff G. Schneider
arXiv: Robotics | 2018
Henggang Cui; Vladan Radosavljevic; Fang-Chieh Chou; Tsung-Han Lin; Thi Nguyen; Tzu-Kuo Huang; Jeff G. Schneider; Nemanja Djuric
International Conference on Artificial Intelligence and Statistics | 2010
Tzu-Kuo Huang; Le Song; Jeff G. Schneider