Network


Latest external collaboration at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Jiayi Ma is active.

Publication


Featured research published by Jiayi Ma.


IEEE Transactions on Image Processing | 2014

Robust Point Matching via Vector Field Consensus

Jiayi Ma; Ji Zhao; Jinwen Tian; Alan L. Yuille; Zhuowen Tu

In this paper, we propose an efficient algorithm, called vector field consensus, for establishing robust point correspondences between two sets of points. Our algorithm starts by creating a set of putative correspondences which can contain a very large number of false correspondences, or outliers, in addition to a limited number of true correspondences (inliers). Next, we solve for correspondence by interpolating a vector field between the two point sets, which involves estimating a consensus of inlier points whose matching follows a nonparametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, using Tikhonov regularizers in a reproducing kernel Hilbert space. MAP estimation is performed by the EM algorithm, which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We illustrate this method on data sets in 2D and 3D and demonstrate that it is robust to a very large number of outliers (even up to 90%). We also show that, in the special case where there is an underlying parametric geometrical model (e.g., the epipolar line constraint), we obtain better results than standard alternatives such as RANSAC when a large number of outliers are present. This suggests a two-stage strategy, where we use our nonparametric model to reduce the size of the putative set and then apply a parametric variant of our approach to estimate the geometric parameters. Our algorithm is computationally efficient, and we provide code for others to use it. In addition, our approach is general and can be applied to other problems, such as learning with a badly corrupted training data set.
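
To make the latent-variable formulation concrete, here is a minimal sketch of a VFC-style E-step: the posterior probability that each putative match is an inlier under a Gaussian inlier model and a uniform outlier model. Names and parameterization are illustrative and are not taken from the authors' released code.

```python
import numpy as np

def inlier_posterior(Y, fX, sigma2, gamma, a):
    """E-step of a VFC-style EM iteration (illustrative sketch).

    Y      : (N, D) target points of the putative matches
    fX     : (N, D) current vector-field prediction f(x_n)
    sigma2 : current variance of the Gaussian inlier model
    gamma  : current inlier mixing weight
    a      : volume of the domain of the uniform outlier model
    Returns p_n = P(match n is an inlier | current model).
    """
    D = Y.shape[1]
    sq_err = np.sum((Y - fX) ** 2, axis=1)
    gauss = np.exp(-sq_err / (2.0 * sigma2)) / (2.0 * np.pi * sigma2) ** (D / 2.0)
    return gamma * gauss / (gamma * gauss + (1.0 - gamma) / a)
```

The M-step would then re-fit the vector field to the matches weighted by these posteriors and update the variance and mixing weight.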


IEEE Transactions on Geoscience and Remote Sensing | 2015

Robust Feature Matching for Remote Sensing Image Registration via Locally Linear Transforming

Jiayi Ma; Huabing Zhou; Ji Zhao; Yuan Gao; Junjun Jiang; Jinwen Tian

Feature matching, which refers to establishing reliable correspondences between two sets of features (particularly point features), is a critical prerequisite in feature-based registration. In this paper, we propose a flexible and general algorithm, called locally linear transforming (LLT), for both rigid and nonrigid feature matching of remote sensing images. We start by creating a set of putative correspondences based on feature similarity and then focus on removing outliers from the putative set while estimating the transformation. We formulate this as a maximum-likelihood estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. To ensure the well-posedness of the problem, we develop a local geometrical constraint that preserves local structures among neighboring feature points and is also robust to a large number of outliers. The problem is solved by the expectation-maximization (EM) algorithm, and closed-form solutions for both rigid and nonrigid transformations are derived in the maximization step. In the nonrigid case, we model the transformation between images in a reproducing kernel Hilbert space (RKHS), and a sparse approximation is applied to the transformation, reducing the method's computational complexity to linearithmic. Extensive experiments on real remote sensing images demonstrate the accuracy of LLT, which outperforms current state-of-the-art methods, particularly in the case of severe outliers (even up to 80%).
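
The local geometrical constraint builds on locally linear reconstruction weights of the kind used in LLE; the sketch below computes such weights from the k nearest neighbours of each point. It illustrates the idea of preserving local structure, not the exact LLT formulation.

```python
import numpy as np

def local_linear_weights(X, k=5, reg=1e-3):
    """LLE-style reconstruction weights of each point from its k nearest
    neighbours; rows of W sum to 1 and are nonzero only on the neighbours.
    Illustrative sketch of a local-structure constraint, not LLT itself."""
    N = X.shape[0]
    W = np.zeros((N, N))
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)    # pairwise distances
    np.fill_diagonal(d2, np.inf)
    for i in range(N):
        idx = np.argsort(d2[i])[:k]                   # k nearest neighbours
        Z = X[idx] - X[i]                             # centred neighbours
        C = Z @ Z.T
        C += reg * (np.trace(C) + 1e-12) * np.eye(k)  # regularise for stability
        w = np.linalg.solve(C, np.ones(k))
        W[i, idx] = w / w.sum()                       # sum-to-one weights
    return W
```

A matching that keeps such weights consistent before and after the transformation preserves the neighbourhood structure even when many putative matches are wrong.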


IEEE Transactions on Signal Processing | 2015

Robust L2E Estimation of Transformation for Non-Rigid Registration

Jiayi Ma; Weichao Qiu; Ji Zhao; Yong Ma; Alan L. Yuille; Zhuowen Tu

We introduce a new transformation estimation algorithm using the L2E estimator and apply it to non-rigid registration for building robust sparse and dense correspondences. In the sparse point case, our method iteratively recovers the point correspondence and estimates the transformation between two point sets. Feature descriptors such as shape context are used to establish rough correspondence. We then estimate the transformation using our robust algorithm. This enables us to deal with the noise and outliers which arise in the correspondence step. The transformation is specified in a functional space, more specifically a reproducing kernel Hilbert space. In the dense point case for nonrigid image registration, our approach consists of matching both sparsely and densely sampled SIFT features, and it has particular advantages in handling significant scale changes and rotations. The experimental results show that our approach greatly outperforms state-of-the-art methods, particularly when the data contains severe outliers.
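
The L2E criterion itself is easy to state. The toy sketch below writes the L2E cost for Gaussian residuals and uses it to recover a 2D translation from matches containing 30% gross outliers; the translation-only model, the synthetic data, and the median initialization are illustrative simplifications of the paper's RKHS formulation.

```python
import numpy as np
from scipy.optimize import minimize

def l2e_cost(residuals, sigma2):
    """L2E criterion for residuals modelled as N(0, sigma2 * I_D):
    (4*pi*sigma2)^(-D/2) - (2/n) * sum_i N(r_i; 0, sigma2 * I_D).
    It rewards explaining a dense cluster of residuals and largely
    ignores gross outliers."""
    n, D = residuals.shape
    dens = (2.0 * np.pi * sigma2) ** (-D / 2.0) * \
        np.exp(-np.sum(residuals ** 2, axis=1) / (2.0 * sigma2))
    return (4.0 * np.pi * sigma2) ** (-D / 2.0) - 2.0 / n * np.sum(dens)

# Toy usage: estimate a 2D translation between matched points, 30% of
# which are corrupted (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 2))
t_true = np.array([3.0, -2.0])
Y = X + t_true + 0.05 * rng.standard_normal((100, 2))
Y[:30] = rng.uniform(0, 10, size=(30, 2))          # gross outliers

t0 = np.median(Y - X, axis=0)                      # rough initialisation
res = minimize(lambda t: l2e_cost(Y - (X + t), sigma2=0.1), x0=t0)
print(res.x)                                       # close to t_true despite the outliers
```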


Information Fusion | 2016

Infrared and visible image fusion via gradient transfer and total variation minimization

Jiayi Ma; Chen Chen; Chang Li; Jun Huang

We propose a new IR/visible fusion method based on gradient transfer and TV minimization. It can keep both the thermal radiation and the appearance information in the source images. We generalize the proposed method to fuse image pairs without pre-registration. Our fusion results look like sharpened IR images with highlighted targets and abundant textures. To the best of our knowledge, the proposed fusion strategy has not yet been studied. In image fusion, the most desirable information is obtained from multiple images of the same scene and merged to generate a composite image. This resulting new image is more appropriate for human visual perception and further image-processing tasks. Existing methods typically use the same representations and extract similar characteristics for different source images during the fusion process. However, this may not be appropriate for infrared and visible images, as the thermal radiation in infrared images and the appearance in visible images are manifestations of two different phenomena. To keep the thermal radiation and appearance information simultaneously, in this paper we propose a novel fusion algorithm, named Gradient Transfer Fusion (GTF), based on gradient transfer and total variation (TV) minimization. We formulate the fusion problem as an ℓ1-TV minimization problem, where the data fidelity term keeps the main intensity distribution in the infrared image, and the regularization term preserves the gradient variation in the visible image. We also generalize the formulation to fuse image pairs without pre-registration, which greatly enhances its applicability, as high-precision registration is very challenging for multi-sensor data. The qualitative and quantitative comparisons with eight state-of-the-art methods on publicly available databases demonstrate the advantages of GTF, where our results look like sharpened infrared images with more appearance details.
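
As a rough illustration of the formulation described here, the sketch below writes a gradient-transfer-style objective: a data term pulling the fused image toward the infrared intensities plus a TV-like penalty pulling its gradients toward those of the visible image. The exact norms, weighting, and solver used in GTF may differ; this only mirrors the structure summarized in the abstract.

```python
import numpy as np

def _grad(img):
    """Forward differences with edge replication."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return gx, gy

def fusion_objective(x, ir, vis, lam=4.0, eps=1e-3):
    """Gradient-transfer-style fusion objective (illustrative sketch).

    Data term: keep the intensity distribution of the infrared image ir.
    Regulariser: keep the gradient variation of the visible image vis
    (an l1/TV-like penalty, smoothed by eps so it is differentiable)."""
    gx, gy = _grad(x)
    vx, vy = _grad(vis)
    data = np.sum(np.sqrt((x - ir) ** 2 + eps))
    reg = np.sum(np.sqrt((gx - vx) ** 2 + (gy - vy) ** 2 + eps))
    return data + lam * reg
```

In practice an objective of this shape would be minimized with a dedicated TV solver (e.g., ADMM or a primal-dual scheme) rather than a generic optimizer.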


Pattern Recognition | 2015

Non-rigid visible and infrared face registration via regularized Gaussian fields criterion

Jiayi Ma; Ji Zhao; Yong Ma; Jinwen Tian

Registration of multi-sensor data (particularly from visible color sensors and infrared sensors) is a prerequisite for multimodal image analysis such as image fusion. Typically, the relationships between image pairs are modeled by rigid or affine transformations. However, this cannot produce accurate alignments when the scenes are not planar, for example, for face images. In this paper, we propose a regularized Gaussian fields criterion for non-rigid registration of visible and infrared face images. The key idea is to represent an image by its edge map and align the edge maps by a robust criterion with a non-rigid model. We model the transformation between images in a reproducing kernel Hilbert space, and a sparse approximation is applied to the transformation to avoid high computational complexity. Moreover, a coarse-to-fine strategy applying deterministic annealing is used to overcome local convergence problems. The qualitative and quantitative comparisons on two publicly available databases demonstrate that our method significantly outperforms the state-of-the-art method with an affine model. As a result, our method will be beneficial for fusion-based face recognition. Highlights: We analyze the robustness of the Gaussian fields criterion both in theory and experiment. The Gaussian fields criterion is generalized from the rigid to the non-rigid case. We propose a new non-rigid registration method to deal with more real-world problems. A sparse approximation on the transformation is applied to speed up the method. We customize and apply the proposed method to visible/thermal IR face registration.
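
For reference, the plain Gaussian fields criterion between two edge-point sets looks like the sketch below; the paper's contribution is the regularized non-rigid model and annealing schedule around it, which are not shown.

```python
import numpy as np

def gaussian_fields_criterion(edges_a, edges_b, sigma):
    """Sum of Gaussian affinities over all pairs of edge points.

    The value grows as the two edge maps come into alignment; sigma sets
    how forgiving the criterion is.  Deterministic annealing corresponds
    to maximising this while gradually shrinking sigma (coarse to fine)."""
    d2 = np.sum((edges_a[:, None, :] - edges_b[None, :, :]) ** 2, axis=2)
    return np.sum(np.exp(-d2 / sigma ** 2))
```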


Computer Vision and Pattern Recognition | 2013

Robust Estimation of Nonrigid Transformation for Point Set Registration

Jiayi Ma; Ji Zhao; Jinwen Tian; Zhuowen Tu; Alan L. Yuille

We present a new point matching algorithm for robust nonrigid registration. The method iteratively recovers the point correspondence and estimates the transformation between two point sets. In the first step of the iteration, feature descriptors such as shape context are used to establish rough correspondence. In the second step, we estimate the transformation using a robust estimator called L_2E. This is the main novelty of our approach and it enables us to deal with the noise and outliers which arise in the correspondence step. The transformation is specified in a functional space, more specifically a reproducing kernel Hilbert space. We apply our method to nonrigid sparse image feature correspondence on 2D images and 3D surfaces. Our results quantitatively show that our approach outperforms state-of-the-art methods, particularly when there are a large number of outliers. Moreover, our method of robustly estimating transformations from correspondences is general and has many other applications.
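
The first step of the iteration, rough correspondence from descriptor similarity, can be sketched as below. Nearest-neighbour search with a ratio test is a generic pruning heuristic used here for illustration; it is not necessarily the matching rule used in the paper, and the second, robust-estimation step is what actually handles the remaining outliers.

```python
import numpy as np
from scipy.spatial import cKDTree

def rough_correspondences(desc_a, desc_b, ratio=0.8):
    """Putative matches from descriptor similarity (illustrative sketch).

    desc_a : (Na, K) descriptors (e.g. shape context) of the first set
    desc_b : (Nb, K) descriptors of the second set
    Returns index pairs (i, j): point i is tentatively matched to point j.
    The result still contains outliers by design."""
    dist, idx = cKDTree(desc_b).query(desc_a, k=2)   # two nearest descriptors
    keep = dist[:, 0] < ratio * dist[:, 1]           # drop ambiguous matches
    return np.column_stack([np.nonzero(keep)[0], idx[keep, 0]])
```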


IEEE Transactions on Image Processing | 2016

Non-Rigid Point Set Registration by Preserving Global and Local Structures

Jiayi Ma; Ji Zhao; Alan L. Yuille

In previous work on point registration, the input point sets are often represented using Gaussian mixture models and the registration is then addressed through a probabilistic approach, which aims to exploit global relationships on the point sets. For non-rigid shapes, however, the local structures among neighboring points are also strong and stable and thus helpful in recovering the point correspondence. In this paper, we formulate point registration as the estimation of a mixture of densities, where local features, such as shape context, are used to assign the membership probabilities of the mixture model. This enables us to preserve both global and local structures during matching. The transformation between the two point sets is specified in a reproducing kernel Hilbert space and a sparse approximation is adopted to achieve a fast implementation. Extensive experiments on both synthesized and real data show the robustness of our approach under various types of distortions, such as deformation, noise, outliers, rotation, and occlusion. It greatly outperforms the state-of-the-art methods, especially when the data is badly degraded.
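
A minimal sketch of the key idea, assigning the mixture membership probabilities from local feature similarity rather than uniformly, might look as follows; the chi-square distance and the temperature tau are illustrative choices, assuming nonnegative histogram descriptors such as shape context.

```python
import numpy as np

def membership_from_features(desc_model, desc_scene, tau=0.5):
    """Feature-driven membership probabilities for GMM-based registration.

    Each scene point gets a distribution over model components that favours
    components with similar local descriptors, instead of the uniform 1/M
    weights of a plain GMM.  Assumes nonnegative histogram descriptors.
    Returns P of shape (M, N) with columns summing to 1."""
    a = desc_model[:, None, :]
    b = desc_scene[None, :, :]
    d = 0.5 * np.sum((a - b) ** 2 / (a + b + 1e-12), axis=2)   # chi-square distance
    P = np.exp(-d / tau)
    return P / P.sum(axis=0, keepdims=True)
```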


Pattern Recognition | 2013

Regularized vector field learning with sparse approximation for mismatch removal

Jiayi Ma; Ji Zhao; Jinwen Tian; Xiang Bai; Zhuowen Tu

In vector field learning, regularized kernel methods such as regularized least-squares require the number of basis functions to be equal to the training sample size, N. The learning process thus has O(N^3) time and O(N^2) space complexity, which poses a significant burden on vector field learning for large datasets. In this paper, we propose a sparse approximation to a robust vector field learning method, sparse vector field consensus (SparseVFC), and derive a statistical learning bound on the speed of convergence. We apply SparseVFC to the mismatch removal problem. The quantitative results on benchmark datasets demonstrate the significant speed advantage of SparseVFC over the original VFC algorithm (two orders of magnitude faster) without much performance degradation; we also demonstrate the large improvement of SparseVFC over traditional methods such as RANSAC. Moreover, the proposed method is general and can be applied to other applications in vector field learning.
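
The flavour of the sparse approximation can be sketched with a subset-of-basis kernel fit: the field is represented with far fewer basis functions than samples, so the linear solve shrinks from N x N to M x M. This is an illustration of the idea, not the released SparseVFC code, and it omits the inlier weighting of the full method.

```python
import numpy as np

def sparse_field_fit(X, Y, n_basis=15, lam=1e-2, beta=0.5, seed=0):
    """Fit f(x) = sum_m k(x, c_m) a_m with M = n_basis << N Gaussian kernels.

    X, Y : (N, D) sample points and (N, D) target vectors.
    Returns (centres, coeffs) defining the sparse vector field."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    centres = X[rng.choice(N, size=min(n_basis, N), replace=False)]

    def kernel(A, B):
        d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=2)
        return np.exp(-beta * d2)

    K_nm = kernel(X, centres)                        # (N, M)
    K_mm = kernel(centres, centres)                  # (M, M)
    # ridge-style normal equations in the sparse representation
    coeffs = np.linalg.solve(K_nm.T @ K_nm + lam * K_mm, K_nm.T @ Y)
    return centres, coeffs
```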


Computer Vision and Pattern Recognition | 2011

A robust method for vector field learning with application to mismatch removing

Ji Zhao; Jiayi Ma; Jinwen Tian; Jie Ma; Dazhi Zhang

We propose a method for vector field learning with outliers, called vector field consensus (VFC). It distinguishes inliers from outliers and simultaneously learns a vector field that fits the inliers. A prior based on Tikhonov regularization in a vector-valued reproducing kernel Hilbert space is used to enforce the smoothness of the field. Under a Bayesian framework, we associate each sample with a latent variable indicating whether it is an inlier, formulate the problem as maximum a posteriori estimation, and use the expectation-maximization algorithm to solve it. The proposed method has two characteristics: 1) it is robust to outliers and can tolerate 90% outliers or even more, and 2) it is computationally efficient. As an application, we apply VFC to the problem of mismatch removal. The results demonstrate that our method outperforms many state-of-the-art methods and is very robust.
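
Given the inlier probabilities from the E-step, the field update amounts to a weighted, Tikhonov-regularized kernel solve; a sketch of such a solve is below. The parameterization is illustrative and may differ from the paper's exact update.

```python
import numpy as np

def weighted_field_update(X, Y, p, lam=1.0, sigma2=0.1, beta=0.5):
    """Fit a smooth vector field to samples weighted by inlier probability.

    X, Y : (N, D) sample points and observed vectors
    p    : (N,) current inlier probabilities
    Returns C such that f(x) = sum_n k(x, x_n) C_n."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    K = np.exp(-beta * d2)                           # Gaussian Gram matrix
    P = np.diag(p)
    C = np.linalg.solve(P @ K + lam * sigma2 * np.eye(X.shape[0]), P @ Y)
    return C
```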


IEEE Transactions on Circuits and Systems for Video Technology | 2016

Facial Image Hallucination Through Coupled-Layer Neighbor Embedding

Junjun Jiang; Ruimin Hu; Zhongyuan Wang; Zhen Han; Jiayi Ma

As the facial image captured by a low-cost camera is typically of very low resolution (LR), blurred, and noisy, traditional neighbor-embedding-based facial image hallucination methods that use one single manifold (i.e., the LR image manifold) fail to reliably estimate the intrinsic geometrical structure, consequently leading to a bias in the image reconstruction result. In this paper, we introduce the notion of neighbor embedding (NE) from the LR and the high-resolution (HR) image manifolds simultaneously and propose a novel NE model, termed the coupled-layer NE (CLNE), for facial image hallucination. CLNE differs substantially from other NE models in that it has two layers: the LR and the HR layers. The LR layer in this model is the local geometrical structure of the LR patch manifold, which is characterized by the reconstruction weights of the LR patches; the HR layer is the intrinsic geometry that can geometrically constrain the reconstruction weights. With this coupled-constraint paradigm between the adaptation of the LR layer and the HR one, CLNE can achieve a more robust NE through iteratively updating the LR patch reconstruction weights and the estimated HR patch. The experimental results under simulated and real conditions confirm that the proposed method outperforms the related state-of-the-art methods in both quantitative and visual comparisons.
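
For context, classic single-layer neighbour embedding, the baseline that CLNE extends with its coupled HR-layer constraint, reconstructs an HR patch by reusing the LR reconstruction weights; a sketch is below, with illustrative parameter choices.

```python
import numpy as np

def ne_hallucinate_patch(lr_patch, lr_dict, hr_dict, k=5, reg=1e-3):
    """Single-layer neighbour-embedding hallucination of one patch (sketch).

    lr_patch : (d,) vectorised low-resolution input patch
    lr_dict  : (N, d) training LR patches
    hr_dict  : (N, D) corresponding training HR patches
    The LR reconstruction weights are transferred to the HR patches."""
    d2 = np.sum((lr_dict - lr_patch) ** 2, axis=1)
    idx = np.argsort(d2)[:k]                         # k nearest LR patches
    Z = lr_dict[idx] - lr_patch                      # centred neighbours
    C = Z @ Z.T
    C += reg * (np.trace(C) + 1e-12) * np.eye(k)     # regularise
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                                     # sum-to-one weights
    return w @ hr_dict[idx]                          # combine HR patches
```

CLNE additionally constrains these weights with the HR-layer geometry and iterates between updating the weights and the estimated HR patch.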

Collaboration


Dive into Jiayi Ma's collaboration.

Top Co-Authors

Junjun Jiang
China University of Geosciences

Ji Zhao
Huazhong University of Science and Technology

Chang Li
Huazhong University of Science and Technology

Jinwen Tian
Huazhong University of Science and Technology

Huabing Zhou
Wuhan Institute of Technology

Chengyin Liu
Huazhong University of Science and Technology

Jun Chen
China University of Geosciences