Publication


Featured research published by Renbo Zhao.


International Conference on Acoustics, Speech, and Signal Processing | 2016

Online nonnegative matrix factorization with outliers

Renbo Zhao; Vincent Y. F. Tan

We propose a unified and systematic framework for performing online nonnegative matrix factorization in the presence of outliers. Our framework is particularly suited to large-scale data. We propose two solvers based on projected gradient descent and the alternating direction method of multipliers. We prove that the sequence of objective values converges almost surely by appealing to the quasi-martingale convergence theorem. We also show the sequence of learned dictionaries converges to the set of stationary points of the expected loss function almost surely. In addition, we extend our basic problem formulation to various settings with different constraints and regularizers. We also adapt the solvers and analyses to each setting. We perform extensive experiments on both synthetic and real datasets. These experiments demonstrate the computational efficiency and efficacy of our algorithms on tasks such as (parts-based) basis learning, image denoising, shadow removal, and foreground-background separation.
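To make the outlier handling concrete, here is a minimal sketch (Python/NumPy, with hypothetical function names, not the authors' reference code) of fitting a single incoming sample against a fixed dictionary: a projected gradient step on the nonnegative coefficients alternates with soft-thresholding of an outlier vector, assuming a squared-error data term plus an l1 penalty on the outliers. The paper's exact constraints, penalties, and solver details may differ.

import numpy as np

# Illustrative sketch only: fit one sample v against a fixed dictionary W by
# minimizing 0.5*||v - W h - r||^2 + lam*||r||_1 over h >= 0 and an outlier
# vector r, alternating a projected gradient step on h with soft-thresholding
# (the proximal operator of the l1 penalty) on r.

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fit_sample(W, v, lam=0.1, n_iter=100):
    h = np.zeros(W.shape[1])
    r = np.zeros_like(v)
    step = 1.0 / (np.linalg.norm(W, 2) ** 2 + 1e-12)   # 1 / Lipschitz constant of the h-block
    for _ in range(n_iter):
        h = np.maximum(h - step * (W.T @ (W @ h + r - v)), 0.0)  # projected gradient step
        r = soft_threshold(v - W @ h, lam)                        # exact minimizer over r
    return h, r

rng = np.random.default_rng(0)
W = np.abs(rng.standard_normal((50, 5)))
v = W @ np.abs(rng.standard_normal(5))
v[3] += 10.0                          # inject a gross outlier into one entry
h, r = fit_sample(W, v)
print(np.abs(r).argmax())             # the injected spike is absorbed mainly by r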


IEEE Transactions on Signal Processing | 2017

Online Nonnegative Matrix Factorization With Outliers

Renbo Zhao; Vincent Y. F. Tan

We propose an optimization framework for performing online nonnegative matrix factorization (NMF) in the presence of outliers, based on ℓ1 regularization and stochastic approximation. Due to the online nature of the algorithm, the proposed method has extremely low computational and storage complexity and is thus particularly applicable in this age of Big Data. Furthermore, our algorithm shows promising performance in dealing with outliers, which previous online NMF algorithms fail to cope with. Convergence analysis shows that the dictionary learned by our algorithm converges almost surely to that learned by its batch counterpart as the data size tends to infinity. We show numerically on a range of face datasets that our algorithm is superior to state-of-the-art NMF algorithms in terms of running time, basis representations, and reconstruction of original images. We also observe that our algorithm performs well even when the density of outliers reaches 40%. We provide explanations for this seemingly surprising result.
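For the online aspect, the sketch below shows one standard way a dictionary can be updated with constant memory in the spirit of stochastic approximation: sufficient statistics are accumulated over the samples seen so far, and the dictionary columns are refreshed by projected block-coordinate steps. This is an assumed illustration in the style of online matrix factorization, not the paper's algorithm; the names update_dictionary, A, and B are hypothetical.

import numpy as np

# Hedged sketch of an online dictionary update: A accumulates the sum of h h^T
# and B accumulates the sum of (v - r) h^T over past samples, so the aggregate
# surrogate loss 0.5*tr(W^T W A) - tr(W^T B) can be reduced column by column
# with a nonnegativity projection, using only O(size of W) memory.

def update_dictionary(W, A, B, eps=1e-12):
    for j in range(W.shape[1]):
        grad_j = W @ A[:, j] - B[:, j]                  # gradient w.r.t. column j
        W[:, j] = np.maximum(W[:, j] - grad_j / (A[j, j] + eps), 0.0)
    return W

rng = np.random.default_rng(0)
W = np.abs(rng.standard_normal((20, 4)))
A, B = np.zeros((4, 4)), np.zeros((20, 4))
for _ in range(100):                                    # stream of synthetic, already-cleaned samples
    h = np.abs(rng.standard_normal(4))
    v = W @ h
    A += np.outer(h, h)
    B += np.outer(v, h)
W = update_dictionary(W, A, B)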


IEEE Transactions on Signal Processing | 2018

A Unified Convergence Analysis of the Multiplicative Update Algorithm for Regularized Nonnegative Matrix Factorization

Renbo Zhao; Vincent Y. F. Tan

The multiplicative update (MU) algorithm has been extensively used to estimate the basis and coefficient matrices in nonnegative matrix factorization (NMF) problems under a wide range of divergences and regularizers. However, theoretical convergence guarantees have only been derived for a few special divergences without regularization. In this work, we provide a conceptually simple, self-contained, and unified proof of convergence for the MU algorithm applied to NMF with a wide range of divergences and regularizers. Our main result shows that the sequence of iterates (i.e., pairs of basis and coefficient matrices) produced by the MU algorithm converges to the set of stationary points of the nonconvex NMF optimization problem. Our proof strategy has the potential to open up new avenues for analyzing similar problems in machine learning and signal processing.
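As one concrete member of the regularized MU family covered by this analysis, the sketch below runs the standard multiplicative updates for Frobenius-norm NMF with an l1 penalty on the coefficient matrix, a common sparse-NMF instance. It only illustrates the multiplicative form of the updates; the paper's results apply to a much broader class of divergences and regularizers, and the function and parameter names here are illustrative.

import numpy as np

# Multiplicative updates for min_{W,H >= 0} 0.5*||V - W H||_F^2 + lam*sum(H):
# each factor is rescaled elementwise by a ratio of nonnegative terms, so
# nonnegativity is preserved automatically (no projection step is needed).

def regularized_mu(V, k, lam=0.1, n_iter=200, eps=1e-12, seed=0):
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = np.abs(rng.standard_normal((m, k)))
    H = np.abs(rng.standard_normal((k, n)))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)      # l1-regularized H update
        W *= (V @ H.T) / (W @ H @ H.T + eps)            # plain Frobenius W update
    return W, H

V = np.abs(np.random.default_rng(1).standard_normal((40, 60)))
W, H = regularized_mu(V, k=5)
print(round(0.5 * np.linalg.norm(V - W @ H, "fro") ** 2 + 0.1 * H.sum(), 2))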


International Conference on Acoustics, Speech, and Signal Processing | 2017

A unified convergence analysis of the multiplicative update algorithm for nonnegative matrix factorization

Renbo Zhao; Vincent Y. F. Tan

The multiplicative update (MU) algorithm has been used extensively to estimate the basis and coefficient matrices in nonnegative matrix factorization (NMF) problems under a wide range of divergences and regularizations. However, theoretical convergence guarantees have only been derived for a few special divergences. In this work, we provide a conceptually simple, self-contained, and unified proof of convergence for the MU algorithm applied to NMF with a wide range of divergences and regularizations. Our result shows that the sequence of iterates (i.e., pairs of basis and coefficient matrices) produced by the MU algorithm converges to the set of stationary points of the NMF (optimization) problem. Our proof strategy has the potential to open up new avenues for analyzing similar problems.
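For another member of the same family, here are the classical Lee-Seung multiplicative updates under the generalized Kullback-Leibler divergence D(V || WH), again as a plain, unregularized illustrative sketch rather than the paper's general setting; the function name kl_mu is hypothetical.

import numpy as np

# Lee-Seung multiplicative updates for NMF under the generalized KL divergence
# sum_ij ( V_ij * log(V_ij / (WH)_ij) - V_ij + (WH)_ij ), with W, H >= 0.

def kl_mu(V, k, n_iter=200, eps=1e-12, seed=0):
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = np.abs(rng.standard_normal((m, k))) + eps
    H = np.abs(rng.standard_normal((k, n))) + eps
    ones = np.ones_like(V)
    for _ in range(n_iter):
        H *= (W.T @ (V / (W @ H + eps))) / (W.T @ ones + eps)
        W *= ((V / (W @ H + eps)) @ H.T) / (ones @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(2).standard_normal((30, 40)))
W, H = kl_mu(V, k=4)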


International Conference on Artificial Intelligence and Statistics | 2017

Online Nonnegative Matrix Factorization with General Divergences

Renbo Zhao; Vincent Y. F. Tan; Huan Xu


IEEE Transactions on Information Theory | 2017

Adversarial Top-K Ranking

Changho Suh; Vincent Y. F. Tan; Renbo Zhao


Uncertainty in Artificial Intelligence | 2017


Renbo Zhao; William B. Haskell; Vincent Y. F. Tan


arXiv: Optimization and Control | 2018


Le Thi Khanh Hien; Renbo Zhao; William B. Haskell


IEEE Transactions on Signal Processing | 2018

Stochastic L-BFGS Revisited: Improved Convergence Rates and Practical Acceleration Strategies

Renbo Zhao; William B. Haskell; Vincent Y. F. Tan


Archive | 2017

An Inexact Primal-Dual Smoothing Framework for Large-Scale Non-Bilinear Saddle Point Problems

Le Thi Khanh Hien; Renbo Zhao; William B. Haskell

Collaboration


Dive into Renbo Zhao's collaborations.

Top Co-Authors

Vincent Y. F. Tan
National University of Singapore

William B. Haskell
National University of Singapore

Le Thi Khanh Hien
Nanyang Technological University