Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yu-Xiang Wang is active.

Publication


Featured research published by Yu-Xiang Wang.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2014

Block-Sparse RPCA for Salient Motion Detection

Zhi Gao; Loong Fah Cheong; Yu-Xiang Wang

Recent evaluations [2], [13] of representative background subtraction techniques demonstrated that these methods still face considerable challenges. Challenges in realistic environments include illumination changes causing complex intensity variations, background motions (trees, waves, etc.) whose magnitude can exceed that of the foreground, poor image quality under low light, camouflage, and more. Existing methods often handle only some of these challenges; we address all of them in a unified framework that makes few specific assumptions about the background. We regard the observed image sequence as the sum of a low-rank background matrix and a sparse outlier matrix, and solve the decomposition using the Robust Principal Component Analysis (RPCA) method. Our contribution lies in dynamically estimating the support of the foreground regions via a motion saliency estimation step, so as to impose spatial coherence on these regions. Unlike smoothness constraints such as MRFs, our method obtains crisply defined foreground regions and, in general, handles large dynamic background motion much better. Furthermore, we introduce an image alignment step to handle camera jitter. Extensive experiments on benchmark and additional challenging data sets demonstrate that our method works effectively in a wide range of complex scenarios, significantly outperforming many state-of-the-art approaches.
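
For intuition, the low-rank-plus-sparse decomposition at the core of this approach solves min ||L||* + λ||S||1 subject to L + S = M. The NumPy sketch below implements plain principal component pursuit with ADMM on a toy matrix; it is a generic illustration only, not the paper's block-sparse variant, and the motion saliency and image alignment steps are omitted.

```python
import numpy as np

def svd_shrink(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(X, tau):
    # Entrywise soft thresholding: proximal operator of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_pcp(M, max_iter=200, tol=1e-7):
    # Split M into low-rank L (background) + sparse S (moving foreground)
    # by ADMM on  min ||L||_* + lam * ||S||_1  s.t.  L + S = M.
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))                  # standard PCP weight
    mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)   # common penalty heuristic
    L, S, Y = (np.zeros_like(M) for _ in range(3))
    for _ in range(max_iter):
        L = svd_shrink(M - S + Y / mu, 1.0 / mu)
        S = soft_threshold(M - L + Y / mu, lam / mu)
        R = M - L - S                               # constraint residual
        Y += mu * R
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S

# Toy "video": each column is a vectorized frame; S captures salient motion.
rng = np.random.default_rng(0)
M = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 200))  # low-rank background
M[::17, ::9] += 10.0                                       # sparse foreground
L, S = rpca_pcp(M)
```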


Web Search and Data Mining | 2016

DiFacto: Distributed Factorization Machines

Mu Li; Ziqi Liu; Alexander J. Smola; Yu-Xiang Wang

Factorization Machines offer good predictive performance and useful embeddings of data. However, they are costly to scale to large amounts of data and large numbers of features. In this paper we describe DiFacto, which uses a refined Factorization Machine model with sparse memory-adaptive constraints and frequency-adaptive regularization. We show how to distribute DiFacto over multiple machines using the Parameter Server framework by computing distributed subgradients on minibatches asynchronously. We analyze its convergence and demonstrate its efficiency on computational advertising datasets with billions of examples and features.
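
For context, the second-order factorization machine score that DiFacto builds on is y(x) = w0 + Σᵢ wᵢxᵢ + Σ_{i<j} ⟨vᵢ, vⱼ⟩ xᵢxⱼ, and the pairwise term folds down to O(dk) time via a standard identity. Below is a minimal dense NumPy sketch of that prediction rule only; DiFacto's sparse memory-adaptive constraints, frequency-adaptive regularization, and Parameter Server distribution are not shown.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    # Second-order factorization machine score for one dense feature vector.
    # w0: bias, w: (d,) linear weights, V: (d, k) factor matrix.
    # The pairwise term 0.5 * sum_f [ (sum_i V_if x_i)^2 - sum_i V_if^2 x_i^2 ]
    # equals sum_{i<j} <V_i, V_j> x_i x_j but costs only O(d k).
    linear = w0 + w @ x
    s = V.T @ x                                     # (k,) per-factor sums
    pairwise = 0.5 * (s @ s - ((V ** 2).T @ (x ** 2)).sum())
    return linear + pairwise

rng = np.random.default_rng(0)
d, k = 8, 4
x = rng.random(d)
print(fm_predict(x, 0.1, rng.normal(size=d), 0.01 * rng.normal(size=(d, k))))
```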


Conference on Recommender Systems | 2015

Fast Differentially Private Matrix Factorization

Ziqi Liu; Yu-Xiang Wang; Alexander J. Smola

Differentially private collaborative filtering is a challenging task, both in terms of accuracy and speed. We present a simple algorithm that is provably differentially private while offering good performance, using a novel connection between differential privacy and Bayesian posterior sampling via Stochastic Gradient Langevin Dynamics. Due to its simplicity, the algorithm lends itself to efficient implementation. Through careful systems design, and by exploiting the power-law behavior of the data to maximize CPU cache bandwidth, we are able to generate 1024-dimensional models at a rate of 8.5 million recommendations per second on a single PC.
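
The sampler behind this connection is Stochastic Gradient Langevin Dynamics: a stochastic gradient step on the log posterior plus Gaussian noise whose variance matches the step size. A generic single-update sketch follows; the paper's specific step-size and noise calibration for differential privacy, and its cache-aware implementation, are not reproduced, and all names here are illustrative.

```python
import numpy as np

def sgld_step(theta, grad_log_prior, grad_log_lik_batch, N, n, eps, rng):
    # One SGLD update: theta <- theta + (eps/2) * (grad log p(theta)
    #   + (N/n) * minibatch sum of grad log p(x_i | theta)) + Normal(0, eps*I).
    # The minibatch gradient is rescaled by N/n to estimate the full-data sum.
    drift = grad_log_prior + (N / n) * grad_log_lik_batch
    noise = rng.normal(scale=np.sqrt(eps), size=theta.shape)
    return theta + 0.5 * eps * drift + noise

# Toy usage: posterior of a Gaussian mean under a standard normal prior.
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, size=1000)
theta = np.zeros(1)
for _ in range(500):
    batch = rng.choice(data, size=50)
    theta = sgld_step(theta, -theta, np.sum(batch - theta, keepdims=True),
                      N=1000, n=50, eps=1e-3, rng=rng)
```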


International Journal of Computer Vision | 2015

Practical Matrix Completion and Corruption Recovery Using Proximal Alternating Robust Subspace Minimization

Yu-Xiang Wang; Choon Meng Lee; Loong Fah Cheong; Kim-Chuan Toh

Low-rank matrix completion is a problem of immense practical importance. Recent works on the subject often use the nuclear norm as a convex surrogate of the rank function. Despite its solid theoretical foundation, the convex version of the problem often fails to work satisfactorily in real-life applications. Real data often suffer from very few observations, observation supports that do not meet the randomness requirements, ubiquitous noise, and potentially gross corruptions, sometimes occurring simultaneously. This paper proposes a Proximal Alternating Robust Subspace Minimization method to tackle these three problems. The proximal alternating scheme explicitly exploits the rank constraint on the completed matrix and uses the ℓ0 pseudo-norm directly in the corruption recovery step. We show that the proposed method for the non-convex and non-smooth model converges to a stationary point. Although it is not guaranteed to find the globally optimal solution, in practice we find that our algorithm can typically arrive at a good local minimizer when it is supplied with a reasonably good starting point based on convex optimization. Extensive experiments with challenging synthetic and real data demonstrate that our algorithm succeeds in a much larger range of practical problems where convex optimization fails, and it also outperforms various state-of-the-art algorithms.
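
To make the explicit rank constraint concrete, the sketch below runs generic alternating least squares over a factorization X = U Vᵀ restricted to the observed entries. This is a baseline illustration under simplifying assumptions, not the paper's proximal alternating method, and the ℓ0 corruption-recovery step is omitted entirely.

```python
import numpy as np

def altmin_complete(M, mask, r, n_iter=50, ridge=1e-6):
    # Complete M (entries observed where mask is True) with a rank-r model
    # X = U @ V.T by alternating ridge-regularized least squares, each step
    # fitting only the observed entries of the corresponding row or column.
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.normal(size=(m, r))
    V = rng.normal(size=(n, r))
    I = ridge * np.eye(r)
    for _ in range(n_iter):
        for i in range(m):                      # row i of U, fixing V
            idx = mask[i]
            U[i] = np.linalg.solve(V[idx].T @ V[idx] + I, V[idx].T @ M[i, idx])
        for j in range(n):                      # row j of V, fixing U
            idx = mask[:, j]
            V[j] = np.linalg.solve(U[idx].T @ U[idx] + I, U[idx].T @ M[idx, j])
    return U @ V.T

# Toy problem: recover a rank-3 matrix from 40% of its entries.
rng = np.random.default_rng(1)
truth = rng.normal(size=(60, 3)) @ rng.normal(size=(3, 80))
mask = rng.random(truth.shape) < 0.4
X = altmin_complete(np.where(mask, truth, 0.0), mask, r=3)
```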


Privacy in Statistical Databases | 2016

On-Average KL-Privacy and Its Equivalence to Generalization for Max-Entropy Mechanisms

Yu-Xiang Wang; Jing Lei; Stephen E. Fienberg

We define On-Average KL-Privacy and present its properties and connections to differential privacy, generalization, and information-theoretic quantities including max-information and mutual information. The new definition significantly weakens differential privacy while preserving its minimal design features, such as composition over small groups and multiple queries as well as closure under post-processing. Moreover, we show that On-Average KL-Privacy is equivalent to generalization for a large class of commonly used tools in statistics and machine learning that sample from Gibbs distributions, a class of distributions that arises naturally from the maximum entropy principle. In addition, a byproduct of our analysis yields a lower bound for generalization error in terms of mutual information, which reveals an interesting interplay with known upper bounds that use the same quantity.

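As a toy numerical illustration (an assumption-laden sketch, not the paper's formal treatment): for a Gaussian "noisy mean" mechanism, the KL divergence between output distributions on two neighboring datasets has the closed form (μ₁ − μ₂)²/(2σ²), so an on-average KL quantity can be estimated by averaging this over records resampled from the data distribution.

```python
import numpy as np

# Mechanism A(Z) = Normal(mean(Z), sigma^2). We average, over random datasets
# and one resampled record, KL(A(Z) || A(Z')) where Z' replaces record 0.
# For equal-variance Gaussians, KL = (mu1 - mu2)^2 / (2 sigma^2).
rng = np.random.default_rng(0)
n, sigma, trials = 100, 0.5, 10_000

kls = []
for _ in range(trials):
    Z = rng.normal(size=n)
    Z_prime = Z.copy()
    Z_prime[0] = rng.normal()                # resample one record
    kls.append((Z.mean() - Z_prime.mean()) ** 2 / (2 * sigma ** 2))

print("estimated average KL:", np.mean(kls))  # decays on the order of 1/n^2
```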

Social Science Research Network | 2017

Non-Stationary Stochastic Optimization with Local Spatial and Temporal Changes

Xi Chen; Yining Wang; Yu-Xiang Wang

We consider a non-stationary sequential stochastic optimization problem, in which the underlying cost functions change over time under a variation budget constraint. We propose an …
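
Since the abstract is cut off before the proposed method, the sketch below shows only a generic baseline for this problem class: restarted online gradient descent, which periodically discards stale information as the cost functions drift. The block length would ordinarily be tuned to the variation budget; every name and constant here is an illustrative assumption, not the paper's algorithm.

```python
import numpy as np

def restarted_ogd(grad_oracle, T, block_len, dim, eta0):
    # Run projected online gradient descent, restarting from scratch every
    # block_len rounds so the iterate can track a drifting sequence of minima.
    x = np.zeros(dim)
    path = []
    for t in range(T):
        if t % block_len == 0:               # restart: forget old information
            x = np.zeros(dim)
        eta = eta0 / np.sqrt(t % block_len + 1)
        x = x - eta * grad_oracle(x, t)      # noisy gradient of the time-t cost
        x = np.clip(x, -1.0, 1.0)            # projection onto a box domain
        path.append(x.copy())
    return path

# Toy drifting quadratic: the minimizer moves slowly over the horizon.
rng = np.random.default_rng(0)
g = lambda x, t: 2.0 * (x - 0.5 * np.sin(2 * np.pi * t / 500)) \
    + 0.1 * rng.normal(size=x.shape)
path = restarted_ogd(g, T=1000, block_len=100, dim=1, eta0=0.1)
```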


Neural Information Processing Systems | 2013

Provable Subspace Clustering: When LRR meets SSC

Yu-Xiang Wang; Huan Xu; Chenlei Leng


International Conference on Machine Learning | 2013

Noisy Sparse Subspace Clustering

Yu-Xiang Wang; Huan Xu



International Conference on Machine Learning | 2015

Privacy for Free: Posterior Sampling and Stochastic Gradient Monte Carlo

Yu-Xiang Wang; Stephen E. Fienberg; Alexander J. Smola



Journal of Machine Learning Research | 2016

Trend Filtering on Graphs

Yu-Xiang Wang; James Sharpnack; Alexander J. Smola; Ryan J. Tibshirani


Collaboration


Dive into Yu-Xiang Wang's collaborations.

Top Co-Authors

Zachary C. Lipton (Carnegie Mellon University)
Yining Wang (Carnegie Mellon University)
Aarti Singh (Carnegie Mellon University)
Veeranjaneyulu Sadhanala (Indian Institute of Technology Bombay)
Ziqi Liu (Xi'an Jiaotong University)
Huan Xu (National University of Singapore)