Zebang Shen
Zhejiang University
Publications
Featured research published by Zebang Shen.
international joint conference on artificial intelligence | 2017
Tengfei Zhou; Hui Qian; Zebang Shen; Chao Zhang; Congfu Xu
By restricting the iterates to a nonlinear manifold, recently proposed Riemannian optimization methods prove to be both efficient and effective in low-rank tensor completion problems. However, existing methods fail to exploit easily accessible side information, due to a format mismatch. Consequently, there is still room for improvement. To fill the gap, in this paper a novel Riemannian model is proposed that tightly integrates the original model and the side information by overcoming their inconsistency. For this model, an efficient Riemannian conjugate gradient descent solver is devised based on a new metric that captures the curvature of the objective. Numerical experiments suggest that our method is more accurate than the state-of-the-art without compromising efficiency.
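To make the manifold machinery concrete, here is a minimal sketch of Riemannian conjugate gradient descent on a toy manifold, the unit sphere, where it computes a smallest eigenvector. This illustrates the general technique only: the paper's fixed-rank tensor manifold and its side-information-aware metric are far richer, and every name below is assumed for the example rather than taken from the paper.

```python
import numpy as np

def riemannian_cg_sphere(A, x0, iters=500, tol=1e-8):
    """Riemannian CG for min_{||x||=1} x^T A x (smallest eigenvector).

    The unit sphere stands in for a fixed-rank tensor manifold;
    projection, retraction, and vector transport play the same roles
    there, just with more elaborate formulas.
    """
    proj = lambda x, v: v - (x @ v) * x                  # tangent projection
    retract = lambda x, v: (x + v) / np.linalg.norm(x + v)
    grad = lambda x: proj(x, 2.0 * (A @ x))              # Riemannian gradient
    x = x0 / np.linalg.norm(x0)
    g, d = grad(x), -grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                                   # safeguard: restart CG
            d = -g
        t, f = 1.0, x @ A @ x                            # Armijo backtracking
        while t > 1e-12 and retract(x, t * d) @ A @ retract(x, t * d) \
                > f + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = retract(x, t * d)
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)                 # Fletcher-Reeves
        d = -g_new + beta * proj(x_new, d)               # transported direction
        x, g = x_new, g_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M + M.T                                              # symmetric test matrix
x = riemannian_cg_sphere(A, rng.standard_normal(50))
print(x @ A @ x, np.linalg.eigvalsh(A)[0])               # values should agree
```

The three manifold primitives (tangent projection, retraction, vector transport) are exactly the slots a richer manifold would fill with its own formulas; the CG loop itself is unchanged.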
international joint conference on artificial intelligence | 2018
Zhou Tengfei; Hui Qian; Zebang Shen; Chao Zhang; Chengwei Wang; Shichen Liu; Wenwu Ou
With the recent proliferation of recommendation systems, there has been considerable interest in session-based prediction methods, particularly those based on Recurrent Neural Networks (RNNs) and their variants. However, existing methods either ignore dwell-time prediction, which plays an important role in measuring a user's engagement with the content, or fail to process very short or noisy sessions. In this paper, we propose a joint predictor, JUMP, for both user click and dwell time in session-based settings. To map its input into a feature vector, JUMP adopts a novel three-layered RNN structure that includes a fast-slow layer for very short sessions and an attention layer for noisy sessions. Experiments demonstrate that JUMP outperforms state-of-the-art methods in both user click and dwell time prediction.
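As a hedged sketch of joint click/dwell prediction, the PyTorch snippet below trains a single GRU with two output heads on a shared loss. It deliberately does not reproduce the paper's three-layered fast-slow/attention structure; the class name, the 0.1 dwell-loss weight, and the synthetic data are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class JointClickDwellRNN(nn.Module):
    """Minimal joint click / dwell-time predictor (hypothetical sketch).

    A single GRU replaces the paper's three-layered fast-slow /
    attention structure; the two heads share the session encoding,
    which is the core of joint prediction.
    """
    def __init__(self, n_items, emb_dim=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(n_items, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.click_head = nn.Linear(hidden, n_items)   # next-click scores
        self.dwell_head = nn.Linear(hidden, 1)         # dwell-time regression

    def forward(self, item_ids):
        h, _ = self.rnn(self.emb(item_ids))            # (B, T, hidden)
        last = h[:, -1]                                # session summary
        return self.click_head(last), self.dwell_head(last).squeeze(-1)

# Joint loss: cross-entropy on the next click plus a weighted MSE dwell term.
model = JointClickDwellRNN(n_items=1000)
items = torch.randint(0, 1000, (32, 10))        # toy batch of sessions
next_item = torch.randint(0, 1000, (32,))
dwell = torch.rand(32) * 60                     # synthetic dwell times (s)
click_logits, dwell_pred = model(items)
loss = nn.functional.cross_entropy(click_logits, next_item) \
     + 0.1 * nn.functional.mse_loss(dwell_pred, dwell)
loss.backward()
```

Sharing the recurrent session encoding between the two heads is what makes the prediction joint: gradients from both losses shape the same representation.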
international joint conference on artificial intelligence | 2017
Zebang Shen; Hui Qian; Tongzhou Mu; Chao Zhang
Nowadays, algorithms with fast convergence, small memory footprints, and low per-iteration complexity are particularly favorable for artificial intelligence applications. In this paper, we propose a doubly stochastic algorithm with a novel accelerating multi-momentum technique to solve the large-scale empirical risk minimization problem that arises in learning tasks. While enjoying a provably superior convergence rate, in each iteration the algorithm only accesses a mini-batch of samples and meanwhile updates a small block of variable coordinates, which substantially reduces the amount of memory reference when both massive sample size and ultra-high dimensionality are involved. Specifically, to obtain an ε-accurate solution, our algorithm requires only O(log(1/ε)/√ε) overall computation for the general convex case and O((n + √(nκ)) log(1/ε)) for the strongly convex case. Empirical studies on huge-scale datasets are conducted to illustrate the efficiency of our method in practice.
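The numpy sketch below illustrates the doubly stochastic access pattern on plain ridge regression: each iteration samples a mini-batch of rows and a random block of coordinates, and only that block of the iterate is updated. The paper's multi-momentum acceleration is omitted, and all names and hyperparameters here are assumptions for the example.

```python
import numpy as np

def doubly_stochastic_sgd(X, y, lam=1e-2, batch=32, block=16,
                          lr=0.1, epochs=20, seed=0):
    """Doubly stochastic descent for ridge regression (plain sketch).

    Each step touches only a mini-batch of samples AND a random block
    of coordinates, mirroring the memory-access pattern described in
    the abstract; the acceleration technique itself is not included.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs * (n // batch)):
        rows = rng.choice(n, batch, replace=False)   # mini-batch of samples
        cols = rng.choice(d, block, replace=False)   # block of coordinates
        Xb = X[rows][:, cols]
        # The residual uses the current full prediction on the mini-batch...
        r = X[rows] @ w - y[rows]
        # ...but the gradient, and hence the update, only touches `cols`.
        g = Xb.T @ r / batch + lam * w[cols]
        w[cols] -= lr * g
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 100))
w_true = rng.standard_normal(100)
y = X @ w_true + 0.01 * rng.standard_normal(2000)
w = doubly_stochastic_sgd(X, y)
print(np.linalg.norm(w - w_true))                # recovery error; should be small
```

Because only `block` of the `d` coordinates are read and written per step, the per-iteration memory traffic scales with `batch * block` rather than `n * d`, which is the point of the doubly stochastic design.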
arXiv: Numerical Analysis | 2016
Chao Zhang; Zebang Shen; Hui Qian; Tengfei Zhou
international conference on artificial intelligence | 2015
Zebang Shen; Hui Qian; Tengfei Zhou; Song Wang
international joint conference on artificial intelligence | 2016
Zebang Shen; Hui Qian; Tengfei Zhou; Tongzhou Mu
international conference on machine learning | 2018
Zebang Shen; Aryan Mokhtari; Hui Qian; Peilin Zhao; Tengfei Zhou
international conference on artificial intelligence and statistics | 2018
Jiahao Xie; Hui Qian; Zebang Shen; Chao Zhang
national conference on artificial intelligence | 2016
Tengfei Zhou; Hui Qian; Zebang Shen; Congfu Xu
arXiv: Machine Learning | 2016
Zebang Shen; Hui Qian; Chao Zhang; Tengfei Zhou