
Publication


Featured research published by Deming Zhai.


IEEE Transactions on Image Processing | 2014

Progressive Image Denoising Through Hybrid Graph Laplacian Regularization: A Unified Framework

Xianming Liu; Deming Zhai; Debin Zhao; Guangtao Zhai; Wen Gao

Recovering images from corrupted observations is necessary for many real-world applications. In this paper, we propose a unified framework to perform progressive image recovery based on hybrid graph Laplacian regularized regression. We first construct a multiscale representation of the target image by Laplacian pyramid, then progressively recover the degraded image in the scale space from coarse to fine so that the sharp edges and texture can be eventually recovered. On one hand, within each scale, a graph Laplacian regularization model represented by an implicit kernel is learned, which simultaneously minimizes the least square error on the measured samples and preserves the geometrical structure of the image data space. In this procedure, the intrinsic manifold structure is explicitly considered using both measured and unmeasured samples, and the nonlocal self-similarity property is utilized as a fruitful resource for abstracting a priori knowledge of the images. On the other hand, between two successive scales, the proposed model is extended to a projected high-dimensional feature space through explicit kernel mapping to describe the interscale correlation, in which the local structure regularity is learned and propagated from coarser to finer scales. In this way, the proposed algorithm gradually recovers more and more image details and edges, which could not be recovered at previous scales. We test our algorithm on one typical image recovery task: impulse noise removal. Experimental results on benchmark test images demonstrate that the proposed method achieves better performance than state-of-the-art algorithms.
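
Within a single scale, the regularizer described above reduces to a sparse linear system with a closed-form solution. The following is a minimal sketch (not the authors' implementation), assuming the nonlocal similarity graph W has already been built from patch comparisons; a simple chain graph stands in for it in the toy usage.

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import spsolve

def laplacian_regularized_recovery(y, mask, W, lam=0.5):
    """Solve  min_f ||S(f - y)||^2 + lam * f^T L f  in closed form,
    where S selects the trusted (measured) samples and L = D - W is
    the graph Laplacian of the similarity graph W.  The optimum
    satisfies the sparse linear system (S + lam * L) f = S y."""
    S = diags(mask.astype(float))                      # 1 on measured pixels
    L = diags(np.asarray(W.sum(axis=1)).ravel()) - W   # combinatorial Laplacian
    return spsolve((S + lam * L).tocsc(), S @ y)

# Toy usage: recover a 1-D signal whose impulse-corrupted entries are
# treated as unmeasured; a chain graph stands in for the nonlocal
# patch-similarity graph used in the paper.
n = 200
clean = np.sin(np.linspace(0, 4 * np.pi, n))
mask = np.random.rand(n) > 0.3                         # 70% trusted samples
y = np.where(mask, clean, 5.0)                         # corrupted observations
rows = np.arange(n - 1)
W = csr_matrix((np.ones(n - 1), (rows, rows + 1)), shape=(n, n))
W = W + W.T
recovered = laplacian_regularized_recovery(y, mask, W, lam=2.0)
```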


ACM Transactions on Intelligent Systems and Technology | 2012

Multiview Metric Learning with Global Consistency and Local Smoothness

Deming Zhai; Hong Chang; Shiguang Shan; Xilin Chen; Wen Gao

In many real-world applications, the same object may have different observations (or descriptions) from multiview observation spaces, which are highly related but sometimes look different from each other. Conventional metric-learning methods achieve satisfactory performance on distance metric computation of data in a single-view observation space, but fail to handle data sampled from multiview observation spaces, especially those with highly nonlinear structure. To tackle this problem, we propose a new method called Multiview Metric Learning with Global consistency and Local smoothness (MVML-GL) under a semisupervised learning setting, which jointly considers global consistency and local smoothness. The basic idea is to reveal the shared latent feature space of the multiview observations by embodying global consistency constraints and preserving local geometric structures. Specifically, this framework is composed of two main steps. In the first step, we seek a globally consistent shared latent feature space, which not only preserves the local geometric structure in each space but also places labeled corresponding instances as close as possible. In the second step, the explicit mapping functions between the input spaces and the shared latent space are learned via regularized locally linear regression. Furthermore, both of these steps can be solved by convex optimization in closed form. Experimental results with application to manifold alignment on real-world datasets of pose and facial expression demonstrate the effectiveness of the proposed method.
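
The second step admits a particularly compact illustration. Below is a minimal sketch, not the authors' code: the paper learns locally linear regressions, whereas this stand-in fits a single global ridge map from one observation space to an already-computed shared latent space Z.

```python
import numpy as np

def learn_explicit_mapping(X, Z, lam=1e-2):
    """Fit a linear map W so that X @ W approximates the shared latent
    coordinates Z (ridge regression, closed form).  X is (n, d) data
    from one view, Z is (n, k) latent coordinates from step one."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Z)

# New samples from this view embed directly: z_new = x_new @ W,
# which is what makes out-of-sample extension cheap.
```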


British Machine Vision Conference | 2010

Manifold Alignment via Corresponding Projections

Deming Zhai; Bo Li; Hong Chang; Shiguang Shan; Xilin Chen; Wen Gao

In this paper, we propose a novel manifold alignment method that learns the underlying common manifold with supervision from corresponding data pairs in different observation sets. Unlike previous semi-supervised manifold alignment algorithms, our method learns explicit corresponding projections, defined everywhere, from each original observation space to the common embedding space. Benefiting from this property, our method can process new test data directly rather than requiring re-alignment. Furthermore, our approach makes no assumptions about the data structure, so it can handle more complex cases and obtain better results than previous work. In the proposed algorithm, manifold alignment is formulated as a minimization problem with proper constraints, which can be solved analytically with a closed-form solution. Experimental results on pose manifold alignment of different objects and faces demonstrate the effectiveness of our proposed method.
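
For intuition, learning corresponding projections in closed form resembles the CCA generalized eigenproblem. The sketch below is an illustrative stand-in, not the paper's exact formulation, which imposes additional constraints.

```python
import numpy as np
from scipy.linalg import eigh

def corresponding_projections(X1, X2, k=2, reg=1e-3):
    """Align two observation sets with known pairings by learning one
    projection per space so that paired samples land close together in
    a common embedding.  X1: (n, d1), X2: (n, d2); rows correspond."""
    X1 = X1 - X1.mean(0)
    X2 = X2 - X2.mean(0)
    C11 = X1.T @ X1 + reg * np.eye(X1.shape[1])
    C22 = X2.T @ X2 + reg * np.eye(X2.shape[1])
    C12 = X1.T @ X2
    d1, d2 = X1.shape[1], X2.shape[1]
    # Block generalized eigenproblem: [0 C12; C21 0] v = lam [C11 0; 0 C22] v
    A = np.zeros((d1 + d2, d1 + d2))
    B = np.zeros_like(A)
    A[:d1, d1:] = C12
    A[d1:, :d1] = C12.T
    B[:d1, :d1] = C11
    B[d1:, d1:] = C22
    vals, vecs = eigh(A, B)
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # leading directions
    return top[:d1], top[d1:]                  # P1 (d1, k), P2 (d2, k)
```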


Data Compression Conference | 2013

Image Super-Resolution via Hierarchical and Collaborative Sparse Representation

Xianming Liu; Deming Zhai; Debin Zhao; Wen Gao

In this paper, we propose an efficient image super-resolution algorithm based on hierarchical and collaborative sparse representation (HCSR). Motivated by the observation that natural images typically exhibit multi-modal statistics, we propose a hierarchical sparse coding model with two layers: the first layer encodes individual patches, and the second layer jointly encodes the set of patches that belong to the same homogeneous subset of image space. We further present a simple alternative that achieves this target by identifying an optimal sparse representation adapted to the specific statistics of images. Specifically, we cluster images from the offline training set into regions of similar geometric structure, and model each region (cluster) by learning adaptive bases describing the patches within that cluster using principal component analysis (PCA). This cluster-specific dictionary is then exploited to optimally estimate the underlying HR pixel values using the idea of collaborative sparse coding, in which the similarity between patches in the same cluster is further considered. This conceptually and computationally remedies the limitation of many existing algorithms based on standard sparse coding, in which patches are encoded independently. Experimental results demonstrate that the proposed method is competitive with state-of-the-art algorithms.
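
The offline clustering and per-cluster PCA dictionaries can be sketched in a few lines. A minimal illustration, assuming flattened training patches and enough patches per cluster (scikit-learn used for brevity; not the authors' code):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def learn_cluster_dictionaries(patches, n_clusters=8, n_atoms=16):
    """Cluster training patches into regions of similar geometric
    structure, then learn an adaptive PCA basis per cluster.
    `patches` is (n, p) with flattened patches as rows; each cluster
    is assumed to contain at least n_atoms patches."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(patches)
    dicts = [PCA(n_components=n_atoms).fit(patches[km.labels_ == c])
             for c in range(n_clusters)]
    return km, dicts

def encode_patch(patch, km, dicts):
    """Route a patch to its cluster-specific dictionary and project it."""
    c = int(km.predict(patch.reshape(1, -1))[0])
    return c, dicts[c].transform(patch.reshape(1, -1))
```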


IEEE Transactions on Image Processing | 2016

Compressive Sampling-Based Image Coding for Resource-Deficient Visual Communication

Xianming Liu; Deming Zhai; Jiantao Zhou; Xinfeng Zhang; Debin Zhao; Wen Gao

In this paper, a new compressive sampling-based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; however, the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements, placed in the original spatial configuration. The advantages of the local random measurements are twofold: 1) they preserve high-frequency image features that would otherwise be discarded by low-pass filtering, and 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, so the proposed scheme has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive with existing methods, with a unique strength in recovering fine details and sharp edges at low bit-rates.
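
The encoder side is simple enough to sketch directly: a local random binary kernel replaces the usual anti-aliasing low-pass filter before polyphase down-sampling. A minimal sketch under those assumptions (not the authors' implementation):

```python
import numpy as np
from scipy.signal import convolve2d

def encode_measurements(img, k=3, factor=2, seed=0):
    """Pre-filter with a local random binary convolution kernel, then
    polyphase down-sample.  The output is itself an ordinary image of
    local random measurements, so a standard codec can compress it."""
    rng = np.random.default_rng(seed)
    kernel = rng.integers(0, 2, (k, k)).astype(float)
    if kernel.sum() == 0:                      # avoid a degenerate kernel
        kernel[k // 2, k // 2] = 1.0
    kernel /= kernel.sum()                     # keep the dynamic range
    filtered = convolve2d(img, kernel, mode='same', boundary='symm')
    return filtered[::factor, ::factor]        # polyphase down-sampling
```

Different seeds yield different kernels, which is what makes the resulting measurements behave like independent descriptions in the multiple-description view.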


IEEE Transactions on Image Processing | 2017

Sparsity-Based Image Error Concealment via Adaptive Dual Dictionary Learning and Regularization

Xianming Liu; Deming Zhai; Jiantao Zhou; Shiqi Wang; Debin Zhao; Huijun Gao

In this paper, we propose a novel sparsity-based image error concealment (EC) algorithm through adaptive dual dictionary learning and regularization. We define two feature spaces: the observed space and the latent space, corresponding to the available regions and the missing regions of the image under test, respectively. We learn adaptive and complete dictionaries individually for each space, where the training data are collected via an adaptive template matching mechanism. Based on the piecewise stationarity of natural images, a local correlation model is learned to bridge the sparse representations of the aforementioned dual spaces, allowing us to transfer knowledge from the available regions to the missing regions for EC purposes. Eventually, the EC task is formulated as a unified optimization problem, incorporating the sparsity of both spaces and the learned correlation model. Experimental results show that the proposed method outperforms state-of-the-art techniques in terms of both objective and perceptual metrics.
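
The correlation model that bridges the two sparse representations can be illustrated as a closed-form ridge regression between code matrices. A minimal sketch with hypothetical variable names (A_obs and A_lat stand for sparse codes of co-located training patches; not the paper's exact formulation):

```python
import numpy as np

def learn_code_correlation(A_obs, A_lat, lam=1e-2):
    """Fit a linear map M so that A_obs @ M approximates A_lat.
    A_obs (n, k1): sparse codes in the observed space;
    A_lat (n, k2): codes of the co-located latent-space patches."""
    k1 = A_obs.shape[1]
    return np.linalg.solve(A_obs.T @ A_obs + lam * np.eye(k1),
                           A_obs.T @ A_lat)

# At test time: code a patch's available surroundings, map the code
# through M, and reconstruct the missing region with the latent-space
# dictionary.
```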


Pattern Recognition | 2018

Parametric Local Multiview Hamming Distance Metric Learning

Deming Zhai; Xianming Liu; Hong Chang; Yi Zhen; Xilin Chen; Maozu Guo; Wen Gao

Learning an appropriate distance metric is a crucial problem in pattern recognition. To confront the scalability issue of massive data, Hamming distance on binary codes is advocated, since it permits exact sub-linear kNN search while also offering efficient storage. In this paper, we study Hamming metric learning in the context of multimodal data for cross-view similarity search. We present a new method called Parametric Local Multiview Hamming metric (PLMH), which learns a multiview metric based on a set of local hash functions that locally adapt to the data structure of each modality. To balance locality and computational efficiency, the hash projection matrix of each instance is parameterized, with a guaranteed approximation error bound, as a linear combination of basis hash projections associated with a small set of anchor points. The weak supervisory information (side information) provided by pairwise and triplet constraints is incorporated in a coherent way to achieve semantically effective hash codes. A locally optimal conjugate gradient algorithm with orthogonal rotations is designed to learn the hash functions for each bit, and the overall hash codes are learned in a sequential manner to progressively minimize the bias. Experimental evaluations on cross-media retrieval tasks demonstrate that PLMH performs competitively against state-of-the-art methods.
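
The anchor-based parameterization is the heart of PLMH and is easy to sketch: each sample's hash projection is a weighted combination of a few basis projections attached to anchor points. A minimal sketch with Gaussian affinities (illustrative only; the paper learns the bases with a constrained conjugate gradient method):

```python
import numpy as np

def local_hash_codes(X, anchors, P_anchor, sigma=1.0):
    """X: (n, d) data; anchors: (m, d) anchor points;
    P_anchor: (m, d, b) basis projections for b bits.
    Returns (n, b) binary codes from per-sample local projections."""
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)   # (n, m)
    w = np.exp(-d2 / (2 * sigma ** 2))
    w /= w.sum(1, keepdims=True)               # affinity weights per sample
    # Per-sample projection: P_i = sum_j w_ij * P_anchor[j]
    proj = np.einsum('nm,mdb->ndb', w, P_anchor)                # (n, d, b)
    codes = np.einsum('nd,ndb->nb', X, proj)
    return (codes > 0).astype(np.uint8)
```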


Data Compression Conference | 2013

Progressive Image Restoration through Hybrid Graph Laplacian Regularization

Deming Zhai; Xianming Liu; Debin Zhao; Hong Chang; Wen Gao

In this paper, we propose a unified framework to perform progressive image restoration based on hybrid graph Laplacian regularized regression. We first construct a multi-scale representation of the target image by Laplacian pyramid, then progressively recover the degraded image in the scale space from coarse to fine so that the sharp edges and texture can be eventually recovered. On one hand, within each scale, a graph Laplacian regularization model represented by an implicit kernel is learned, which simultaneously minimizes the least square error on the measured samples and preserves the geometrical structure of the image data space by exploiting non-local self-similarity. In this procedure, the intrinsic manifold structure is considered by using both measured and unmeasured samples. On the other hand, between two scales, the proposed model is extended in a parametric manner through explicit kernel mapping to model the inter-scale correlation, in which the local structure regularity is learned and propagated from coarser to finer scales. Experimental results on benchmark test images demonstrate that the proposed method achieves better performance than state-of-the-art image restoration algorithms.
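
The between-scale step maps coarse-scale structure to the finer scale through an explicit kernel mapping. A minimal sketch, using random Fourier features as a stand-in for the explicit map (the paper's exact feature space differs):

```python
import numpy as np

def interscale_kernel_regression(X_coarse, y_fine, gamma=0.5, lam=1e-2):
    """Learn a parametric regression from coarse-scale neighbourhoods
    X_coarse (n, d) to corresponding finer-scale values y_fine (n,).
    Random Fourier features approximate an RBF kernel explicitly."""
    rng = np.random.default_rng(0)
    D = 128                                     # explicit feature dimension
    Wf = rng.normal(scale=np.sqrt(2 * gamma), size=(X_coarse.shape[1], D))
    bf = rng.uniform(0, 2 * np.pi, D)
    phi = lambda X: np.sqrt(2.0 / D) * np.cos(X @ Wf + bf)
    Phi = phi(X_coarse)
    w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(D), Phi.T @ y_fine)
    return lambda X_new: phi(X_new) @ w         # predicts finer-scale values
```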


British Machine Vision Conference | 2009

Semi-Supervised Discriminant Analysis via Spectral Transduction

Deming Zhai; Hong Chang; Bo Li; Shiguang Shan; Xilin Chen; Wen Gao

Linear Discriminant Analysis (LDA) is a popular method for dimensionality reduction and classification. In real-world applications without sufficient labeled data, LDA suffers a serious performance drop or even fails to work. In this paper, we propose a novel method called Spectral Transduction Semi-Supervised Discriminant Analysis (STSDA), which alleviates this problem by utilizing both labeled and unlabeled data. Our method takes into consideration both label augmenting and local structure preserving. First, we formulate label transduction with labeled and unlabeled data as a constrained convex optimization problem and solve it efficiently in closed form using orthogonal projector matrices. Then, unlabeled data with reliable class estimations are selected with a balanced strategy to augment the original labeled data set. Finally, LDA with manifold regularization is performed. Experimental results on face recognition demonstrate the effectiveness of our proposed method.
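
The transduction step can be illustrated with the classic graph propagation iteration; note the paper solves a constrained convex problem in closed form rather than iterating, so this is only a stand-in to convey the idea:

```python
import numpy as np

def propagate_labels(W, Y, alpha=0.9, iters=50):
    """Transduce labels over a similarity graph.  W: (n, n) dense
    affinity matrix with positive node degrees; Y: (n, c) one-hot
    labels, zero rows for unlabeled samples.  Iterates the standard
    update F <- alpha * S F + (1 - alpha) * Y."""
    d = W.sum(1)
    S = W / np.sqrt(np.outer(d, d))            # symmetric normalization
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y
    conf = F.max(1) / (F.sum(1) + 1e-12)       # per-sample reliability
    return F.argmax(1), conf
```

Estimations above a confidence threshold would then augment the labeled set before running LDA with manifold regularization, mirroring the pipeline in the abstract.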


IEEE Transactions on Multimedia | 2018

Supervised Distributed Hashing for Large-Scale Multimedia Retrieval

Deming Zhai; Xianming Liu; Xiangyang Ji; Debin Zhao; Shin'ichi Satoh; Wen Gao

Recent years have witnessed the growing popularity of hashing for large-scale multimedia retrieval. Numerous hashing methods have been designed for data stored on a single machine, that is, centralized hashing. In many real-world applications, however, large-scale data are often distributed across different locations, servers, or sites. Although hashing for distributed data can in theory be implemented by assembling all the distributed data into a single dataset, this usually leads to prohibitive computation, communication, and storage costs in practice. Up to now, only a few methods have been tailored for distributed hashing, all of them unsupervised. In this paper, we propose an efficient and effective method called supervised distributed hashing (SupDisH), which learns discriminative hash functions by leveraging semantic label information in a distributed manner. Specifically, we cast the distributed hashing problem into the framework of classification, where the learned binary codes are expected to be distinct enough for semantic retrieval. By introducing auxiliary variables, the distributed model is then separated into a set of decentralized subproblems with consistency constraints, which can be solved in parallel on each vertex of the distributed network. As such, we obtain high-quality, distinctive, unbiased binary codes and consistent hash functions with low computational complexity, which facilitates large-scale multimedia retrieval tasks involving distributed datasets. Experimental evaluations on three large-scale datasets show that SupDisH is competitive with centralized hashing methods and significantly outperforms the state-of-the-art unsupervised distributed method.
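
The decentralized splitting can be conveyed with an ADMM-style consensus sketch: each site fits a local copy of the hash projection to its own data, and a consistency constraint ties all copies together. This is an illustrative relaxation (least-squares per-bit targets instead of the paper's classification formulation):

```python
import numpy as np

def consensus_hash_learning(sites, d, b, rho=1.0, rounds=20, lr=0.1):
    """ADMM-style consensus over a list of sites.  Each site holds
    (X, Y): X (n, d) local data and Y (n, b) per-bit targets in
    {-1, +1}.  Sites update their own projection W_v in parallel;
    the consensus variable Z enforces consistency across sites."""
    m = len(sites)
    W = [np.zeros((d, b)) for _ in range(m)]
    U = [np.zeros((d, b)) for _ in range(m)]       # scaled dual variables
    Z = np.zeros((d, b))
    for _ in range(rounds):
        for v, (X, Y) in enumerate(sites):         # parallelizable local step
            for _ in range(10):                    # a few gradient steps on
                G = X.T @ (X @ W[v] - Y) / len(X)  # the relaxed LS loss
                G += rho * (W[v] - Z + U[v])       # plus the ADMM penalty
                W[v] -= lr * G
        Z = np.mean([W[v] + U[v] for v in range(m)], axis=0)  # consensus
        for v in range(m):
            U[v] += W[v] - Z                       # dual update
    return Z                                       # binarize: sign(X @ Z)
```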

Collaboration


Dive into Deming Zhai's collaboration.

Top Co-Authors

Xianming Liu (Harbin Institute of Technology)
Debin Zhao (Harbin Institute of Technology)
Hong Chang (Chinese Academy of Sciences)
Xilin Chen (Chinese Academy of Sciences)
Guangtao Zhai (Shanghai Jiao Tong University)
Rong Chen (Harbin Institute of Technology)
Maozu Guo (Beijing University of Civil Engineering and Architecture)
Shiguang Shan (Chinese Academy of Sciences)