

Publication


Featured research published by Zhixun Su.


Computer Vision and Pattern Recognition (CVPR) | 2012

Fixed-rank representation for unsupervised visual learning

Risheng Liu; Zhouchen Lin; Fernando De la Torre; Zhixun Su

Subspace clustering and feature extraction are two of the most commonly used unsupervised learning techniques in computer vision and pattern recognition. State-of-the-art techniques for subspace clustering make use of recent advances in sparsity and rank minimization. However, existing techniques are computationally expensive and may result in degenerate solutions that degrade clustering performance in the case of insufficient data sampling. To partially solve these problems, and inspired by existing work on matrix factorization, this paper proposes fixed-rank representation (FRR) as a unified framework for unsupervised visual learning. FRR is able to reveal the structure of multiple subspaces in closed form when the data is noiseless. Furthermore, we prove that under some suitable conditions, even with insufficient observations, FRR can still reveal the true subspace memberships. To achieve robustness to outliers and noise, a sparse regularizer is introduced into the FRR framework. Beyond subspace clustering, FRR can be used for unsupervised feature extraction. As a non-trivial byproduct, a fast numerical solver is developed for FRR. Experimental results on both synthetic data and real applications validate our theoretical analysis and demonstrate the benefits of FRR for unsupervised visual learning.
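The noiseless closed-form case can be sketched in a few lines: with the skinny SVD X = U S Vᵀ, a rank-r representation satisfying X = XZ is Z = V_r V_rᵀ, whose block structure exposes the subspace memberships. The toy data and rank parameter below are our own illustration, not the authors' code:

```python
import numpy as np

def frr_noiseless(X, r):
    """Closed-form rank-r representation for noiseless data: with the
    skinny SVD X = U S V^T, take Z = V_r V_r^T so that X = X Z and
    the block pattern of Z reveals subspace memberships."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Vr = Vt[:r].T                       # top-r right singular vectors
    return Vr @ Vr.T

# toy data: two independent 1-D subspaces in R^3, 5 samples (columns) each
rng = np.random.default_rng(0)
A = np.outer([1.0, 0.0, 0.0], rng.standard_normal(5))
B = np.outer([0.0, 1.0, 0.0], rng.standard_normal(5))
X = np.hstack([A, B])

Z = frr_noiseless(X, r=2)
W = np.abs(Z) + np.abs(Z.T)            # affinity matrix for spectral clustering
```

Because the two subspaces are independent, the off-diagonal blocks of Z vanish, so a spectral clustering of W recovers the two groups.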


Computer Vision and Pattern Recognition (CVPR) | 2014

Deblurring Text Images via L0-Regularized Intensity and Gradient Prior

Jinshan Pan; Zhe Hu; Zhixun Su; Ming-Hsuan Yang

We propose a simple yet effective L0-regularized prior based on intensity and gradient for text image deblurring. The proposed image prior is motivated by observing distinct properties of text images. Based on this prior, we develop an efficient optimization method to generate reliable intermediate results for kernel estimation. The proposed method does not require any complex filtering strategies to select salient edges which are critical to the state-of-the-art deblurring algorithms. We discuss the relationship with other deblurring algorithms based on edge selection and provide insight on how to select salient edges in a more principled way. In the final latent image restoration step, we develop a simple method to remove artifacts and render better deblurred images. Experimental results demonstrate that the proposed algorithm performs favorably against the state-of-the-art text image deblurring methods. In addition, we show that the proposed method can be effectively applied to deblur low-illumination images.
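L0-regularized priors of this kind are typically handled with half-quadratic splitting, where the auxiliary-variable update reduces to a hard threshold on the gradients. A minimal sketch of that single step (variable names are ours, not the paper's implementation):

```python
import numpy as np

def l0_grad_threshold(g, lam, beta):
    """Auxiliary update in half-quadratic splitting of an L0 gradient
    prior: the minimizer of beta*(h - g)^2 + lam*|h|_0 keeps g where
    beta*g^2 > lam and zeroes it elsewhere (hard threshold, not the
    soft shrinkage of an L1 prior)."""
    return np.where(g**2 > lam / beta, g, 0.0)

# small gradients (texture/noise) are removed; salient ones survive
g = np.array([0.01, -0.02, 0.5, -0.7, 0.03])
h = l0_grad_threshold(g, lam=0.01, beta=1.0)
```

This hard-thresholding behavior is what lets the method retain only strong, kernel-relevant structure without an explicit edge-selection heuristic.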


IEEE Transactions on Neural Networks | 2014

Structure-Constrained Low-Rank Representation

Kewei Tang; Risheng Liu; Zhixun Su; Jie Zhang

Benefiting from its effectiveness in subspace segmentation, low-rank representation (LRR) and its variations have many applications in computer vision and pattern recognition, such as motion segmentation, image segmentation, saliency detection, and semisupervised learning. It is known that the standard LRR can only work well under the assumption that all the subspaces are independent. However, this assumption cannot be guaranteed in real-world problems. This paper addresses this problem and provides an extension of LRR, named structure-constrained LRR (SC-LRR), to analyze the structure of multiple disjoint subspaces, which is more general for real vision data. We prove that the relationship of multiple linear disjoint subspaces can be exactly revealed by SC-LRR, with a predefined weight matrix. As a nontrivial byproduct, we also illustrate that SC-LRR can be applied for semisupervised learning. The experimental results on different types of vision problems demonstrate the effectiveness of our proposed method.


European Conference on Computer Vision (ECCV) | 2014

Deblurring Face Images with Exemplars

Jinshan Pan; Zhe Hu; Zhixun Su; Ming-Hsuan Yang

The human face is one of the most interesting subjects in numerous applications. Significant progress has been made on the image deblurring problem; however, existing generic deblurring methods do not achieve satisfactory results on blurry face images. The success of state-of-the-art image deblurring methods stems mainly from implicit or explicit restoration of salient edges for kernel estimation. When there is little texture in the blurry image (e.g., face images), existing methods are less effective, as only a few edges can be used for kernel estimation. Moreover, recent methods are often jeopardized by selecting ambiguous edges, which are imaged from the same edge of the object after blur, owing to local edge-selection strategies. In this paper, we address these problems of deblurring face images by exploiting facial structures. We propose a maximum a posteriori (MAP) deblurring algorithm based on an exemplar dataset, without using the coarse-to-fine strategy or ad hoc edge selection. Extensive evaluations against state-of-the-art methods demonstrate the effectiveness of the proposed algorithm for deblurring face images. We also show the extensibility of our method to other specific deblurring tasks.


Signal Processing: Image Communication | 2013

Kernel estimation from salient structure for robust motion deblurring

Jinshan Pan; Risheng Liu; Zhixun Su; Xianfeng Gu

Blind image deblurring algorithms have improved steadily in the past years. Most state-of-the-art algorithms, however, still cannot perform perfectly in challenging cases, especially with large blur. In this paper, we focus on how to estimate a good blur kernel from a single blurred image based on the image structure. We find that image details caused by blur can adversely affect kernel estimation, especially when the blur kernel is large. One effective way to remove these details is to apply an image denoising model based on total variation (TV). First, we develop a novel method for computing image structures based on the TV model, so that structures undermining kernel estimation are removed. Second, we apply a gradient selection method to mitigate the possible adverse effects of salient edges and improve the robustness of kernel estimation. Third, we propose a novel kernel estimation method that is capable of removing noise and preserving continuity in the kernel. Finally, we develop an adaptive weighted spatial prior to preserve sharp edges in latent image restoration. Extensive experiments testify to the effectiveness of our method on various kinds of challenging examples.
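The structure-extraction step can be sketched with the classical ROF total-variation model, minimized here by plain explicit gradient descent on a smoothed TV term (a simplification; the parameter values are our own illustrative choices):

```python
import numpy as np

def tv_structure(img, lam=0.5, tau=0.05, eps=0.1, iters=200):
    """Structure extraction via the ROF model,
    min_u 0.5*||u - img||^2 + lam*TV(u),
    by explicit gradient descent on a smoothed TV term. Fine,
    blur-induced details are flattened while strong structural
    edges survive for kernel estimation."""
    u = img.astype(float).copy()
    for _ in range(iters):
        ux = np.gradient(u, axis=1)
        uy = np.gradient(u, axis=0)
        mag = np.sqrt(ux**2 + uy**2 + eps)
        div = np.gradient(ux / mag, axis=1) + np.gradient(uy / mag, axis=0)
        u -= tau * ((u - img) - lam * div)
    return u

# noisy step edge: fine detail is smoothed away, the edge is retained
rng = np.random.default_rng(1)
f = np.zeros((32, 32)); f[:, 16:] = 1.0
noisy = f + 0.2 * rng.standard_normal(f.shape)
u = tv_structure(noisy)
```

The small `eps` keeps the TV gradient well defined in flat regions; a proper implementation would use a primal-dual or split-Bregman solver instead of explicit descent.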


European Conference on Computer Vision (ECCV) | 2010

Learning PDEs for image restoration via optimal control

Risheng Liu; Zhouchen Lin; Wei Zhang; Zhixun Su

Partial differential equations (PDEs) have been successfully applied to many computer vision and image processing problems. However, designing PDEs requires high mathematical skills and good insight into the problems. In this paper, we show that the design of PDEs could be made easier by borrowing the learning strategy from machine learning. In our learning-based PDE (L-PDE) framework for image restoration, there are two terms in our PDE model: (i) a regularizer which encodes the prior knowledge of the image model and (ii) a linear combination of differential invariants, which is data-driven and can effectively adapt to different problems and complex conditions. The L-PDE is learnt from some input/output pairs of training samples via an optimal control technique. The effectiveness of our L-PDE framework for image restoration is demonstrated with two exemplary applications: image denoising and inpainting, where the PDEs are obtained easily and the produced results are comparable to or better than those of traditional PDEs, which were elaborately designed.
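The data-driven term can be illustrated by fitting the coefficients of a few differential invariants by least squares over a single explicit Euler step. This is a simplification of the paper's optimal-control training over a full evolution, and the invariant set below is a hypothetical subset:

```python
import numpy as np

def invariants(u):
    """Three low-order differential invariants (a hypothetical subset):
    u itself, |grad u|^2, and the Laplacian of u."""
    ux = np.gradient(u, axis=1)
    uy = np.gradient(u, axis=0)
    lap = np.gradient(ux, axis=1) + np.gradient(uy, axis=0)
    return np.stack([u, ux**2 + uy**2, lap])

def learn_coefficients(u_in, u_out, dt=1.0):
    """Fit a_i in du/dt = sum_i a_i * inv_i(u) from one training pair,
    using least squares over one explicit Euler step."""
    F = invariants(u_in).reshape(3, -1).T        # one row per pixel
    target = ((u_out - u_in) / dt).ravel()
    a, *_ = np.linalg.lstsq(F, target, rcond=None)
    return a

# synthetic pair generated by pure diffusion: u_out = u_in + 0.5*Laplacian(u_in)
rng = np.random.default_rng(2)
u_in = rng.standard_normal((16, 16))
u_out = u_in + 0.5 * invariants(u_in)[2]
a = learn_coefficients(u_in, u_out)
```

On this synthetic pair the fit recovers the diffusion coefficient exactly, which is the sense in which the learned combination of invariants adapts to the underlying restoration task.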


Pattern Recognition | 2010

Feature extraction by learning Lorentzian metric tensor and its extensions

Risheng Liu; Zhouchen Lin; Zhixun Su; Kewei Tang

We develop a supervised dimensionality reduction method, called Lorentzian discriminant projection (LDP), for feature extraction and classification. Our method represents the structures of sample data by a manifold, which is furnished with a Lorentzian metric tensor. Different from classic discriminant analysis techniques, LDP uses distances from points to their within-class neighbors and global geometric centroid to model a new manifold to detect the intrinsic local and global geometric structures of data set. In this way, both the geometry of a group of classes and global data structures can be learnt from the Lorentzian metric tensor. Thus discriminant analysis in the original sample space reduces to metric learning on a Lorentzian manifold. We also establish the kernel, tensor and regularization extensions of LDP in this paper. The experimental results on benchmark databases demonstrate the effectiveness of our proposed method and the corresponding extensions.


Graphical Models | 2012

Empirical mode decomposition on surfaces

Hui Wang; Zhixun Su; Junjie Cao; Ye Wang; Hao Zhang

Empirical Mode Decomposition (EMD) is a powerful tool for analysing non-linear and non-stationary signals, and has drawn a great deal of attention in various areas. In this paper, we generalize the classical EMD from Euclidean space to the setting of surfaces represented as triangular meshes. Inspired by the EMD, we also propose a feature-preserving smoothing method based on extremal envelopes. The core of our generalized EMD on surfaces is an envelope computation method that solves a bi-harmonic field with Dirichlet boundary conditions. Experimental results show that the proposed generalization of EMD on surfaces works well. We also demonstrate that the generalized EMD can be effectively utilized in filtering scalar functions defined over surfaces and surfaces themselves.
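One sifting step of the classical 1-D EMD that the paper generalizes can be sketched as follows (linear envelopes for brevity; classical EMD uses cubic splines, and the surface version replaces them with bi-harmonic envelopes):

```python
import numpy as np

def sift_once(s):
    """One EMD sifting step: interpolate upper and lower extremal
    envelopes and subtract their mean from the signal."""
    x = np.arange(len(s))
    d = np.diff(s)
    maxima = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
    minima = np.where((d[:-1] < 0) & (d[1:] >= 0))[0] + 1
    if len(maxima) < 2 or len(minima) < 2:
        return s, np.zeros_like(s)            # monotone residue: stop
    upper = np.interp(x, maxima, s[maxima])   # linear envelopes for brevity
    lower = np.interp(x, minima, s[minima])
    mean = 0.5 * (upper + lower)
    return s - mean, mean

# fast oscillation on a slow trend; sifting separates them
t = np.linspace(0.0, 1.0, 400)
signal = np.sin(2 * np.pi * 40 * t) + 2.0 * t
detail, local_mean = sift_once(signal)
```

Iterating this step until the local mean is negligible yields the first intrinsic mode function; the residue is then sifted again for the next mode.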


International Conference on Image Processing (ICIP) | 2011

Robust head pose estimation via Convex Regularized Sparse Regression

Hao Ji; Risheng Liu; Fei Su; Zhixun Su; Yan Tian

This paper studies the problem of learning robust regression for real-world head pose estimation. The performance and applicability of traditional regression methods in real-world head pose estimation are limited by a lack of robustness to outlying or corrupted observations. By introducing low-rank and sparse regularizations, we propose a novel regression method, named Convex Regularized Sparse Regression (CRSR), for simultaneously removing the noise and outliers from the training data and learning the regression between image features and pose angles. We verify the efficiency of the proposed robust regression method with extensive experiments on real data, demonstrating lower error rates and higher efficiency than existing methods.


Neurocomputing | 2008

A new fuzzy approach for handling class labels in canonical correlation analysis

Yanyan Liu; Xiuping Liu; Zhixun Su

Canonical correlation analysis (CCA) can extract more discriminative features by utilizing class labels, especially the ones that can reflect the sample distribution appropriately. In this paper, a new fuzzy approach for handling class labels in the form of fuzzy membership degrees is proposed. We elaborately design a novel fuzzy membership function to represent the distribution of image samples. These fuzzy class labels promote the classification performances of CCA and kernel CCA (KCCA) through incorporating distribution information into the process of feature extraction. Comprehensive experimental results on face recognition demonstrate the effectiveness and feasibility of the proposed method.
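The idea can be sketched with plain linear CCA against a soft label matrix. The Gaussian membership below is a hypothetical stand-in for the paper's elaborately designed membership function:

```python
import numpy as np

def cca(X, Y, reg=1e-6):
    """Standard linear CCA: whiten both views, then take the SVD of the
    whitened cross-covariance; the singular values are the canonical
    correlations."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    Wx = np.linalg.cholesky(np.linalg.inv(Cxx))   # Wx^T Cxx Wx = I
    Wy = np.linalg.cholesky(np.linalg.inv(Cyy))
    U, s, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
    return Wx @ U, Wy @ Vt.T, s

# two well-separated classes; fuzzy memberships from distance to class means
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
means = np.array([[0.0, 0.0], [5.0, 5.0]])
d2 = ((X[:, None, :] - means[None]) ** 2).sum(-1)
M = np.exp(-0.5 * d2)
M /= M.sum(axis=1, keepdims=True)      # rows: fuzzy class memberships
_, _, corr = cca(X, M)
```

Replacing hard 0/1 label vectors with the rows of M is what injects sample-distribution information into the feature extraction; the kernel variant (KCCA) applies the same construction in a feature space.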

Collaboration


Dive into Zhixun Su's collaborations.

Top Co-Authors

Risheng Liu (Dalian University of Technology)
Jinshan Pan (Dalian University of Technology)
Junjie Cao (Dalian University of Technology)
Xiuping Liu (Dalian University of Technology)
Yiyang Wang (Dalian University of Technology)
Jiangxin Dong (Dalian University of Technology)
Kewei Tang (Liaoning Normal University)
Hui Wang (Dalian University of Technology)