
Publication


Featured research published by Ganzhao Yuan.


Very Large Data Bases | 2012

Low-rank mechanism: optimizing batch queries under differential privacy

Ganzhao Yuan; Zhenjie Zhang; Marianne Winslett; Xiaokui Xiao; Yin Yang; Zhifeng Hao

Differential privacy is a promising privacy-preserving paradigm for statistical query processing over sensitive data. It works by injecting random noise into each query result, such that it is provably hard for the adversary to infer the presence or absence of any individual record from the published noisy results. The main objective in differentially private query processing is to maximize the accuracy of the query results, while satisfying the privacy guarantees. Previous work, notably the matrix mechanism [16], has suggested that processing a batch of correlated queries as a whole can potentially achieve considerable accuracy gains, compared to answering them individually. However, as we point out in this paper, the matrix mechanism is mainly of theoretical interest; in particular, several inherent problems in its design limit its accuracy in practice, which almost never exceeds that of naive methods. In fact, we are not aware of any existing solution that can effectively optimize a query batch under differential privacy. Motivated by this, we propose the Low-Rank Mechanism (LRM), the first practical differentially private technique for answering batch queries with high accuracy, based on a low rank approximation of the workload matrix. We prove that the accuracy provided by LRM is close to the theoretical lower bound for any mechanism to answer a batch of queries under differential privacy. Extensive experiments using real data demonstrate that LRM consistently outperforms state-of-the-art query processing solutions under differential privacy, by large margins.
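The noise-injection step described above can be illustrated with the basic Laplace mechanism that batch-query methods build on (a generic sketch, not the Low-Rank Mechanism itself; the query, sensitivity, and ε values are illustrative):

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
    """Return a differentially private answer: the true value plus
    Laplace noise with scale sensitivity / epsilon."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_answer + rng.laplace(loc=0.0, scale=scale)

# A counting query has sensitivity 1: adding or removing one record
# changes the count by at most 1.
rng = np.random.default_rng(0)
noisy = laplace_mechanism(true_answer=100, sensitivity=1.0, epsilon=0.5, rng=rng)
```

Smaller ε means stronger privacy but larger noise; answering a correlated batch through one well-chosen strategy, as LRM does, reduces the total noise compared to perturbing each query independently.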


Computer Vision and Pattern Recognition | 2015

ℓ0TV: A new method for image restoration in the presence of impulse noise

Ganzhao Yuan; Bernard Ghanem

Total Variation (TV) is an effective and popular prior model in the field of regularization-based image processing. This paper focuses on TV for image restoration in the presence of impulse noise. This type of noise frequently arises in data acquisition and transmission due to many reasons, e.g. a faulty sensor or analog-to-digital converter errors. Removing this noise is an important task in image restoration. State-of-the-art methods such as Adaptive Outlier Pursuit (AOP) [42], which is based on TV with ℓ02-norm data fidelity, only give sub-optimal performance. In this paper, we propose a new method, called ℓ0TV-PADMM, which solves the TV-based restoration problem with ℓ0-norm data fidelity. To effectively deal with the resulting non-convex non-smooth optimization problem, we first reformulate it as an equivalent MPEC (Mathematical Program with Equilibrium Constraints), and then solve it using a proximal Alternating Direction Method of Multipliers (PADMM). Our ℓ0TV-PADMM method finds a desirable solution to the original ℓ0-norm optimization problem and is proven to be convergent under mild conditions. We apply ℓ0TV-PADMM to the problems of image denoising and deblurring in the presence of impulse noise. Our extensive experiments demonstrate that ℓ0TV-PADMM outperforms state-of-the-art image restoration methods.
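The impulse noise this method targets is commonly modeled as salt-and-pepper corruption; a minimal sketch of that noise model (not the ℓ0TV-PADMM solver, and with an illustrative corruption level):

```python
import numpy as np

def add_salt_and_pepper(image, corruption_level, rng=None):
    """Corrupt a random fraction of pixels with impulse noise:
    affected pixels jump to the minimum (pepper) or maximum (salt)
    intensity, unlike Gaussian noise which perturbs every pixel."""
    rng = rng or np.random.default_rng()
    noisy = image.copy()
    mask = rng.random(image.shape) < corruption_level  # which pixels are hit
    salt = rng.random(image.shape) < 0.5               # salt vs. pepper
    noisy[mask & salt] = 1.0    # salt: max intensity
    noisy[mask & ~salt] = 0.0   # pepper: min intensity
    return noisy

clean = np.full((64, 64), 0.5)
noisy = add_salt_and_pepper(clean, corruption_level=0.3,
                            rng=np.random.default_rng(1))
```

Because only a sparse subset of pixels is corrupted, an ℓ0-norm data-fidelity term, which counts the corrupted pixels rather than penalizing their magnitude, matches this noise model more closely than an ℓ1 or ℓ2 term.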


ACM Transactions on Database Systems | 2015

Optimizing Batch Linear Queries under Exact and Approximate Differential Privacy

Ganzhao Yuan; Zhenjie Zhang; Marianne Winslett; Xiaokui Xiao; Yin Yang; Zhifeng Hao

Differential privacy is a promising privacy-preserving paradigm for statistical query processing over sensitive data. It works by injecting random noise into each query result such that it is provably hard for the adversary to infer the presence or absence of any individual record from the published noisy results. The main objective in differentially private query processing is to maximize the accuracy of the query results while satisfying the privacy guarantees. Previous work, notably Li et al. [2010], has suggested that, with an appropriate strategy, processing a batch of correlated queries as a whole achieves considerably higher accuracy than answering them individually. However, to our knowledge there is currently no practical solution to find such a strategy for an arbitrary query batch; existing methods either return strategies of poor quality (often worse than naive methods) or require prohibitively expensive computations for even moderately large domains. Motivated by this, we propose a low-rank mechanism (LRM), the first practical differentially private technique for answering batch linear queries with high accuracy. LRM works for both exact (i.e., ε-) and approximate (i.e., (ε, δ)-) differential privacy definitions. We derive the utility guarantees of LRM and provide guidance on how to set the privacy parameters, given the user's utility expectation. Extensive experiments using real data demonstrate that our proposed method consistently outperforms state-of-the-art query processing solutions under differential privacy, by large margins.


Neurocomputing | 2013

Low-rank quadratic semidefinite programming

Ganzhao Yuan; Zhenjie Zhang; Bernard Ghanem; Zhifeng Hao

Low rank matrix approximation is an attractive model in large scale machine learning problems, because it can not only reduce the memory and runtime complexity, but also provide a natural way to regularize parameters while preserving learning accuracy. In this paper, we address a special class of nonconvex quadratic matrix optimization problems, which require a low rank positive semidefinite solution. Despite their non-convexity, we exploit the structure of these problems to derive an efficient solver that converges to their local optima. Furthermore, we show that the proposed solution is capable of dramatically enhancing the efficiency and scalability of a variety of concrete problems, which are of significant interest to the machine learning community. These problems include the Top-k Eigenvalue problem, Distance learning and Kernel learning. Extensive experiments on UCI benchmarks have shown the effectiveness and efficiency of our proposed method.
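The low-rank positive semidefinite structure these problems require can be made explicit through the standard factorization X = LLᵀ, which guarantees X ⪰ 0 and rank(X) ≤ k by construction (a generic sketch of the parameterization, not the paper's solver; the dimensions are illustrative):

```python
import numpy as np

# Parameterize a rank-k PSD matrix as X = L @ L.T. Optimizing over the
# n-by-k factor L enforces both positive semidefiniteness and the rank
# bound for free, and cuts storage from O(n^2) to O(nk).
n, k = 5, 2
rng = np.random.default_rng(0)
L = rng.standard_normal((n, k))
X = L @ L.T

eigvals = np.linalg.eigvalsh(X)  # all eigenvalues are >= 0
```

The price of this parameterization is non-convexity in L, which is why such solvers can only guarantee convergence to local optima, as the abstract notes.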


Computer Vision and Pattern Recognition | 2017

A Matrix Splitting Method for Composite Function Minimization

Ganzhao Yuan; Wei-Shi Zheng; Bernard Ghanem

Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.
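MSM is presented as a generalization of the classical Gauss-Seidel iteration; for reference, plain Gauss-Seidel for a linear system Ax = b looks like this (a textbook sketch, not the MSM algorithm, with an illustrative system):

```python
import numpy as np

def gauss_seidel(A, b, x0=None, iters=100):
    """Classical Gauss-Seidel iteration for Ax = b: sweep through the
    coordinates, updating each x[i] immediately using the freshest
    values of the other coordinates."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(iters):
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
    return x

# Diagonally dominant system, so Gauss-Seidel converges.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([5.0, 6.0, 5.0])
x = gauss_seidel(A, b)  # converges to [1, 1, 1]
```

The splitting view is that A = M + N with M lower-triangular: each sweep solves Mx⁺ = b − Nx exactly, which is the structure the paper extends from linear systems to composite objectives.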


Neurocomputing | 2014

BILGO: Bilateral greedy optimization for large scale semidefinite programming

Zhifeng Hao; Ganzhao Yuan; Bernard Ghanem

Many machine learning tasks (e.g. metric and manifold learning problems) can be formulated as convex semidefinite programs. To enable the application of these tasks on a large scale, scalability and computational efficiency are considered as desirable properties for a practical semidefinite programming algorithm. In this paper, we theoretically analyze a new bilateral greedy optimization (denoted BILGO) strategy in solving general semidefinite programs on large-scale datasets. As compared to existing methods, BILGO employs a bilateral search strategy during each optimization iteration. In such an iteration, the current semidefinite matrix solution is updated as a bilateral linear combination of the previous solution and a suitable rank-1 matrix, which can be efficiently computed from the leading eigenvector of the descent direction at this iteration. By optimizing for the coefficients of the bilateral combination, BILGO reduces the cost function in every iteration until the KKT conditions are fully satisfied; thus it tends to converge to a global optimum. In fact, we prove that BILGO converges to the global optimal solution at a rate of O(1/k), where k is the iteration counter. The algorithm thus successfully combines the efficiency of conventional rank-1 update algorithms and the effectiveness of gradient descent. Moreover, BILGO can be easily extended to handle low rank constraints. To validate the effectiveness and efficiency of BILGO, we apply it to two important machine learning tasks, namely Mahalanobis metric learning and maximum variance unfolding. Extensive experimental results clearly demonstrate that BILGO can solve large-scale semidefinite programs efficiently.
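The rank-1 step described above hinges on the leading eigenvector of the descent direction, which is typically approximated by power iteration (a generic sketch with an illustrative 2×2 matrix, not BILGO itself):

```python
import numpy as np

def leading_eigenvector(M, iters=500):
    """Power iteration: repeatedly apply M and renormalize; the iterate
    converges to the eigenvector of the dominant eigenvalue."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(M.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    return v

# In rank-1-update SDP solvers, the new iterate mixes the previous
# solution with the rank-1 matrix v v^T built from this eigenvector.
M = np.array([[3.0, 1.0],
              [1.0, 2.0]])
v = leading_eigenvector(M)
rank1 = np.outer(v, v)
```

Each step costs only a matrix-vector product, which is what makes a single eigenvector computation per iteration scale to large problems where a full eigendecomposition would not.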


Knowledge Discovery and Data Mining | 2016

Convex Optimization for Linear Query Processing under Approximate Differential Privacy

Ganzhao Yuan; Yin Yang; Zhenjie Zhang; Zhifeng Hao


arXiv: Optimization and Control | 2016

Sparsity Constrained Minimization via Mathematical Programming with Equilibrium Constraints

Ganzhao Yuan; Bernard Ghanem


National Conference on Artificial Intelligence | 2017

An Exact Penalty Method for Binary Optimization Based on MPEC Formulation

Ganzhao Yuan; Bernard Ghanem


National Conference on Artificial Intelligence | 2016

A proximal Alternating Direction Method for semi-definite rank minimization

Ganzhao Yuan; Bernard Ghanem

Collaboration


Dive into Ganzhao Yuan's collaborations.

Top Co-Authors

Bernard Ghanem
King Abdullah University of Science and Technology

Zhifeng Hao
Guangdong University of Technology

Xiaokui Xiao
Nanyang Technological University