Publication


Featured research published by Xinghao Ding.


IEEE Transactions on Image Processing | 2011

Bayesian Robust Principal Component Analysis

Xinghao Ding; Lihan He; Lawrence Carin

A hierarchical Bayesian model is considered for decomposing a matrix into low-rank and sparse components, assuming the observed matrix is a superposition of the two. The matrix is assumed noisy, with unknown and possibly non-stationary noise statistics. The Bayesian framework infers an approximate representation for the noise statistics while simultaneously inferring the low-rank and sparse-outlier contributions; the model is robust to a broad range of noise levels, without having to change model hyperparameter settings. In addition, the Bayesian framework allows exploitation of additional structure in the matrix. For example, in video applications each row (or column) corresponds to a video frame, and we introduce a Markov dependency between consecutive rows in the matrix (corresponding to consecutive frames in the video). The properties of this Markov process are also inferred based on the observed matrix, while simultaneously denoising and recovering the low-rank and sparse components. We compare the Bayesian model to a state-of-the-art optimization-based implementation of robust PCA; considering several examples, we demonstrate competitive performance of the proposed model.
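
The decomposition this model targets (observed matrix = low-rank part + sparse outliers + noise) can be illustrated with the optimization-based baseline the abstract compares against. The sketch below is a minimal inexact-ALM principal component pursuit in NumPy, not the Bayesian inference procedure of the paper; the function name and parameter defaults are illustrative assumptions.

```python
import numpy as np

def robust_pca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Minimal inexact-ALM principal component pursuit: M ~ L (low-rank) + S (sparse).
    Optimization-based stand-in for illustration, not the Bayesian model in the paper."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    norm_M = np.linalg.norm(M, 'fro')
    for _ in range(max_iter):
        # Low-rank update: singular-value thresholding of (M - S + Y/mu).
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: elementwise soft thresholding of (M - L + Y/mu).
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        # Dual update on the constraint M = L + S.
        R = M - L - S
        Y += mu * R
        if np.linalg.norm(R, 'fro') / norm_M < tol:
            break
    return L, S
```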


IEEE Transactions on Image Processing | 2014

Bayesian Nonparametric Dictionary Learning for Compressed Sensing MRI

Yue Huang; John Paisley; Qin Lin; Xinghao Ding; Xueyang Fu; Xiao-Ping Zhang

We develop a Bayesian nonparametric model for reconstructing magnetic resonance images (MRIs) from highly undersampled k-space data. We perform dictionary learning as part of the image reconstruction process. To this end, we use the beta process as a nonparametric dictionary learning prior for representing an image patch as a sparse combination of dictionary elements. The size of the dictionary and the patch-specific sparsity pattern are inferred from the data, in addition to other dictionary learning variables. Dictionary learning is performed directly on the compressed image, and so is tailored to the MRI being considered. In addition, we investigate a total variation penalty term in combination with the dictionary learning model, and show how the denoising property of dictionary learning removes dependence on regularization parameters in the noisy setting. We derive a stochastic optimization algorithm based on Markov chain Monte Carlo for the Bayesian model, and use the alternating direction method of multipliers for efficiently performing total variation minimization. We present empirical results on several MR images, which show that the proposed regularization framework can improve reconstruction accuracy over other methods.
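
The patch-level sparse coding at the heart of this model can be sketched with a fixed-size dictionary learner. The code below uses scikit-learn's MiniBatchDictionaryLearning on image patches as a stand-in for the beta-process prior (which additionally infers the dictionary size and sparsity pattern), so the atom count, patch size, and sparsity penalty shown here are assumptions, not values from the paper.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def learn_patch_dictionary(image, patch_size=(6, 6), n_atoms=128, alpha=1.0):
    """Learn a patch dictionary and sparse codes from one grayscale image.
    Fixed-size stand-in for the beta-process dictionary learning in the paper."""
    patches = extract_patches_2d(image, patch_size, max_patches=5000, random_state=0)
    X = patches.reshape(patches.shape[0], -1).astype(np.float64)
    X -= X.mean(axis=1, keepdims=True)            # remove the DC component of each patch
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha,
                                       batch_size=256, random_state=0)
    codes = dico.fit_transform(X)                 # sparse coefficients per patch
    return dico.components_, codes                # dictionary atoms, sparse codes
```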


IEEE Transactions on Image Processing | 2017

Clearing the Skies: A Deep Network Architecture for Single-Image Rain Removal

Xueyang Fu; Jiabin Huang; Xinghao Ding; Yinghao Liao; John Paisley

We introduce a deep network architecture called DerainNet for removing rain streaks from an image. Based on the deep convolutional neural network (CNN), we directly learn the mapping relationship between rainy and clean image detail layers from data. Because we do not possess the ground truth corresponding to real-world rainy images, we synthesize images with rain for training. In contrast to other common strategies that increase the depth or breadth of the network, we use image processing domain knowledge to modify the objective function and improve deraining with a modestly sized CNN. Specifically, we train our DerainNet on the detail (high-pass) layer rather than in the image domain. Although DerainNet is trained on synthetic data, we find that the learned network translates very effectively to real-world images for testing. Moreover, we augment the CNN framework with image enhancement to improve the visual results. Compared with state-of-the-art single-image de-raining methods, our method achieves improved rain removal and a much faster computation time after network training.
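
The key design choice is training on the detail (high-pass) layer rather than the full image. The sketch below shows one way to form that split; a Gaussian blur stands in for the paper's low-pass filtering of the base layer, and the kernel size and sigma are simplifying assumptions.

```python
import cv2
import numpy as np

def split_detail_layer(rainy_bgr, ksize=15, sigma=5.0):
    """Split an image into a base (low-pass) layer and a detail (high-pass) layer.
    A network like DerainNet is then trained to map rainy detail layers to clean
    detail layers; the base layer is added back to reconstruct the output."""
    img = rainy_bgr.astype(np.float32) / 255.0
    base = cv2.GaussianBlur(img, (ksize, ksize), sigma)   # low-pass base layer
    detail = img - base                                   # high-pass detail layer
    return base, detail

# After the network predicts a de-rained detail layer:
#   derained = base + predicted_detail
```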


IEEE Transactions on Intelligent Transportation Systems | 2015

Vehicle Logo Recognition System Based on Convolutional Neural Networks With a Pretraining Strategy

Yue Huang; Ruiwen Wu; Ye Sun; Wei Wang; Xinghao Ding

Since a vehicle logo is the clearest indicator of a vehicle manufacturer, most vehicle manufacturer recognition (VMR) methods are based on vehicle logo recognition. Logo recognition remains a challenge because of the difficulty of precisely segmenting the vehicle logo in an image and the simultaneous requirement for robustness against various imaging conditions. In this paper, a convolutional neural network (CNN) system is proposed for VMR that removes the requirement for precise logo detection and segmentation. In addition, an efficient pretraining strategy is introduced to reduce the high computational cost of kernel training in CNN-based systems, making the approach more practical for real-world applications. A data set containing 11,500 logo images belonging to 10 manufacturers, with 10,000 for training and 1,500 for testing, is generated and employed to assess the suitability of the proposed system. An average accuracy of 99.07% is obtained, demonstrating high classification potential and robustness against various poor imaging conditions.
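
As a rough illustration of a CNN classifier for 10 manufacturer classes, the PyTorch module below uses a generic small architecture; the layer sizes, input resolution, and training procedure (including the pretraining strategy) are assumptions, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn

class LogoCNN(nn.Module):
    """Generic small CNN for 10-way vehicle-logo classification
    (illustrative architecture only, not the paper's network)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):            # x: (N, 1, 64, 64) grayscale logo patches
        return self.classifier(self.features(x))

# Example: logits = LogoCNN()(torch.randn(8, 1, 64, 64))
```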


IEEE Transactions on Image Processing | 2015

A Probabilistic Method for Image Enhancement With Simultaneous Illumination and Reflectance Estimation

Xueyang Fu; Yinghao Liao; Delu Zeng; Yue Huang; Xiao-Ping Zhang; Xinghao Ding

In this paper, a new probabilistic method for image enhancement is presented based on a simultaneous estimation of illumination and reflectance in the linear domain. We show that the linear-domain model can represent prior information for estimating reflectance and illumination better than the logarithmic domain. A maximum a posteriori (MAP) formulation is employed with priors on both illumination and reflectance. To estimate illumination and reflectance effectively, an alternating direction method of multipliers is adopted to solve the MAP problem. Experimental results show that the proposed method obtains reflectance and illumination estimates that produce visually pleasing enhanced results, with a promising convergence rate. Compared with other testing methods, the proposed method yields comparable or better results on both subjective and objective assessments.
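
A crude way to see the linear-domain decomposition I = R · L is an alternating scheme that smooths the illumination estimate and divides it out. The sketch below is only a heuristic stand-in for the MAP/ADMM solver in the paper: the Gaussian smoothing plays the role of the illumination prior, and the iteration count and sigma are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_illumination_reflectance(I, sigma=10.0, iters=5, eps=1e-3):
    """Alternately estimate illumination L and reflectance R with I = R * L
    in the linear domain. I: grayscale image scaled to [0, 1].
    Heuristic stand-in for the MAP formulation solved with ADMM in the paper."""
    I = I.astype(np.float64)
    L = gaussian_filter(I, sigma)                     # initial smooth illumination
    for _ in range(iters):
        R = np.clip(I / np.maximum(L, eps), 0.0, 1.0)             # reflectance given illumination
        L = gaussian_filter(I / np.maximum(R, eps), sigma)        # smooth illumination given reflectance
        L = np.maximum(L, I)                          # illumination should not fall below the image
    return R, L
```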


Signal Processing | 2016

A fusion-based enhancing method for weakly illuminated images

Xueyang Fu; Delu Zeng; Yue Huang; Yinghao Liao; Xinghao Ding; John Paisley

We propose a straightforward and efficient fusion-based method for enhancing weakly illuminated images that uses several mature image processing techniques. First, we employ an illumination-estimating algorithm based on morphological closing to decompose an observed image into a reflectance image and an illumination image. We then derive two inputs that represent luminance-improved and contrast-enhanced versions of the decomposed illumination using the sigmoid function and adaptive histogram equalization. Designing two weights based on these inputs, we produce an adjusted illumination by fusing the derived inputs with the corresponding weights in a multi-scale fashion. Through a proper weighting and fusion strategy, we blend the advantages of different techniques to produce the adjusted illumination. The final enhanced image is obtained by compensating the adjusted illumination back to the reflectance. Through this synthesis, the enhanced image represents a trade-off among detail enhancement, local contrast improvement, and preserving the natural feel of the image. In the proposed fusion-based framework, images under different weak illumination conditions, such as backlighting, non-uniform illumination, and nighttime, can be enhanced. Highlights: A fusion-based method for enhancing various weakly illuminated images is proposed. The proposed method requires only one input to obtain the enhanced image. Different mature image processing techniques can be blended in our framework. Our method has an efficient computation time for practical applications.
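
A simplified, single-scale version of this pipeline can be written with OpenCV: morphological closing estimates the illumination, a sigmoid-mapped input and a CLAHE input are fused with exposure-style weights, and the adjusted illumination is recombined with the reflectance. The kernel size, sigmoid steepness, CLAHE settings, and weight design below are assumptions; the paper fuses in a multi-scale fashion with its own weights.

```python
import cv2
import numpy as np

def fusion_enhance(img_bgr, sigma=0.5):
    """Simplified single-scale sketch of the fusion-based enhancement idea."""
    img = img_bgr.astype(np.float32) / 255.0
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    v = hsv[..., 2].copy()
    # Illumination estimate via morphological closing on the value channel.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    illum = np.clip(cv2.morphologyEx(v, cv2.MORPH_CLOSE, kernel), 1e-3, 1.0)
    reflectance = np.clip(v / illum, 0.0, 1.0)

    # Input 1: sigmoid-mapped illumination (brightness/contrast adjustment).
    lum = 1.0 / (1.0 + np.exp(-8.0 * (illum - 0.5)))
    # Input 2: contrast-enhanced illumination via CLAHE.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    con = clahe.apply((illum * 255).astype(np.uint8)).astype(np.float32) / 255.0

    # Simple exposure-style weights; the paper designs weights and fuses multi-scale.
    w1 = np.exp(-((lum - 0.5) ** 2) / (2 * sigma ** 2))
    w2 = np.exp(-((con - 0.5) ** 2) / (2 * sigma ** 2))
    adjusted = (w1 * lum + w2 * con) / (w1 + w2 + 1e-6)

    # Compensate the adjusted illumination back to the reflectance.
    hsv[..., 2] = np.clip(reflectance * adjusted, 0.0, 1.0)
    out = np.clip(cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR), 0.0, 1.0)
    return (out * 255).astype(np.uint8)
```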


international conference on image processing | 2014

A retinex-based enhancing approach for single underwater image

Xueyang Fu; Peixian Zhuang; Yue Huang; Yinghao Liao; Xiao-Ping Zhang; Xinghao Ding

Since light is absorbed and scattered while traveling in water, color distortion, under-exposure, and fuzziness are three major problems of underwater imaging. In this paper, a novel retinex-based enhancing approach is proposed to enhance a single underwater image. The proposed approach consists of three main steps addressing the problems mentioned above. First, a simple but effective color correction strategy is adopted to address the color distortion. Second, a variational framework for retinex is proposed to decompose the reflectance and the illumination, which represent the detail and brightness respectively, from a single underwater image; an effective alternating direction optimization strategy is adopted to solve the proposed model. Third, the reflectance and the illumination are enhanced by different strategies to address the under-exposure and fuzziness problems. The final enhanced image is obtained by combining the enhanced reflectance and illumination. The enhanced result benefits from color correction, brightened dark regions, naturalness preservation, and well-enhanced edges and details. Moreover, the proposed approach is a general method that can enhance other kinds of degraded images, such as sandstorm images.
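
The first step, color correction, can be approximated with a simple per-channel correction; the gray-world-style stretch below is an assumption standing in for the paper's specific strategy, and the percentile limits are illustrative.

```python
import numpy as np

def simple_color_correction(img_bgr):
    """Per-channel stretch toward a common mean to reduce the color cast of
    underwater images (gray-world-style stand-in, not the paper's exact step)."""
    img = img_bgr.astype(np.float64)
    out = np.empty_like(img)
    target = img.mean()                               # common target mean over all channels
    for c in range(3):
        ch = img[..., c] * (target / max(img[..., c].mean(), 1e-6))  # equalize channel means
        lo, hi = np.percentile(ch, (1, 99))           # clip-and-stretch to the full range
        out[..., c] = np.clip((ch - lo) / max(hi - lo, 1e-6), 0.0, 1.0) * 255
    return out.astype(np.uint8)
```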


computer vision and pattern recognition | 2017

Removing Rain from Single Images via a Deep Detail Network

Xueyang Fu; Jiabin Huang; Delu Zeng; Yue Huang; Xinghao Ding; John Paisley

We propose a new deep network architecture for removing rain streaks from individual images based on the deep convolutional neural network (CNN). Inspired by the deep residual network (ResNet) that simplifies the learning process by changing the mapping form, we propose a deep detail network to directly reduce the mapping range from input to output, which makes the learning process easier. To further improve the de-rained result, we use a priori image domain knowledge by focusing on high frequency detail during training, which removes background interference and focuses the model on the structure of rain in images. This demonstrates that a deep architecture not only has benefits for high-level vision tasks but also can be used to solve low-level imaging problems. Though we train the network on synthetic data, we find that the learned network generalizes well to real-world test images. Experiments show that the proposed method significantly outperforms state-of-the-art methods on both synthetic and real-world images in terms of both qualitative and quantitative measures. We discuss applications of this structure to denoising and JPEG artifact reduction at the end of the paper.
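
A minimal PyTorch sketch of the idea (learn a mapping on the high-frequency detail layer with residual blocks) is given below; the depth, channel width, and kernel sizes are illustrative assumptions and not the paper's configuration.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))   # identity shortcut eases learning

class DetailNet(nn.Module):
    """Maps a rainy detail (high-pass) layer to a predicted rain residual;
    an illustrative stand-in for the deep detail network in the paper."""
    def __init__(self, ch=16, n_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        self.blocks = nn.Sequential(*[ResBlock(ch) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(ch, 3, 3, padding=1)
    def forward(self, detail):
        return self.tail(self.blocks(self.head(detail)))

# Conceptually, the de-rained image is the rainy image minus the predicted rain residual.
```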


IEEE Geoscience and Remote Sensing Letters | 2015

Remote Sensing Image Enhancement Using Regularized-Histogram Equalization and DCT

Xueyang Fu; Jiye Wang; Delu Zeng; Yue Huang; Xinghao Ding

In this letter, an effective enhancement method for remote sensing images is introduced to improve the global contrast and the local details. The proposed method is an empirical approach that uses regularized histogram equalization (HE) and the discrete cosine transform (DCT) to improve image quality. First, a new global contrast enhancement method based on regularizing the input histogram is introduced. More specifically, this technique uses the sigmoid function and the histogram to generate a distribution function for the input image. The distribution function is then used to produce a new image with improved global contrast by adopting the standard lookup-table-based HE technique. Second, the DCT coefficients of the contrast-improved image are automatically adjusted to further enhance the local details of the image. Compared with conventional methods, the proposed method can generate enhanced remote sensing images with higher contrast and richer details without introducing saturation artifacts.
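
A rough sketch of the two stages is given below: histogram-based global contrast enhancement via a lookup table, followed by a boost of non-DC DCT coefficients for local detail. The uniform-blend regularization and the fixed gain are stand-ins; the paper's sigmoid-based histogram regularization and its automatic DCT adjustment are not reproduced here.

```python
import numpy as np
from scipy.fft import dctn, idctn

def enhance_remote_sensing(gray, alpha=0.8, gain=1.2):
    """gray: uint8 grayscale image. Sketch of regularized-HE + DCT enhancement."""
    # Regularized histogram: blend the image histogram with a uniform distribution
    # so extreme bins do not cause over-enhancement (simple stand-in for the paper's
    # sigmoid-based regularization).
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    hist = alpha * hist / hist.sum() + (1.0 - alpha) / 256.0
    cdf = np.cumsum(hist)
    lut = np.clip(255 * cdf, 0, 255).astype(np.uint8)
    eq = lut[gray]                                    # lookup-table-based HE

    # Local detail: scale all but the DC coefficient in the 2-D DCT domain.
    coef = dctn(eq.astype(np.float32), norm='ortho')
    dc = coef[0, 0]
    coef *= gain
    coef[0, 0] = dc
    out = idctn(coef, norm='ortho')
    return np.clip(out, 0, 255).astype(np.uint8)
```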


computer vision and pattern recognition | 2016

A Weighted Variational Model for Simultaneous Reflectance and Illumination Estimation

Xueyang Fu; Delu Zeng; Yue Huang; Xiao-Ping Zhang; Xinghao Ding

We propose a weighted variational model to estimate both the reflectance and the illumination from an observed image. We show that, though it is widely adopted for ease of modeling, the log-transformed image is not ideal for this task. Based on this investigation of the logarithmic transformation, a new weighted variational model is proposed that provides better prior representation, imposed in the regularization terms. Unlike conventional variational models, the proposed model preserves more details in the estimated reflectance. Moreover, the proposed model can suppress noise to some extent. An alternating minimization scheme is adopted to solve the proposed model. Experimental results demonstrate the effectiveness of the proposed model and its algorithm. Compared with other variational methods, the proposed method yields comparable or better results on both subjective and objective assessments.
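
For context, a generic (unweighted) variational Retinex energy for this decomposition has the form below, where I is the observed image, R the reflectance, L the illumination, and α, β are trade-off weights. This is a textbook-style sketch of the problem class, not the paper's formulation; the paper's contribution is to introduce weights inside the regularization terms based on its analysis of the logarithmic transformation, which are not reproduced here.

\[
\min_{R,\,L}\;\; \lVert R \circ L - I \rVert_2^2 \;+\; \alpha\,\lVert \nabla L \rVert_2^2 \;+\; \beta\,\lVert \nabla R \rVert_1
\quad \text{subject to}\quad 0 \le R \le 1,\;\; L \ge I,
\]

with \(\circ\) denoting elementwise multiplication; an alternating minimization scheme updates R with L fixed and then L with R fixed until convergence.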
