Publication


Featured research published by Xueyang Fu.


IEEE Transactions on Image Processing | 2014

Bayesian Nonparametric Dictionary Learning for Compressed Sensing MRI

Yue Huang; John Paisley; Qin Lin; Xinghao Ding; Xueyang Fu; Xiao-Ping Zhang

We develop a Bayesian nonparametric model for reconstructing magnetic resonance images (MRIs) from highly undersampled k-space data. We perform dictionary learning as part of the image reconstruction process. To this end, we use the beta process as a nonparametric dictionary learning prior for representing an image patch as a sparse combination of dictionary elements. The size of the dictionary and the patch-specific sparsity pattern are inferred from the data, in addition to other dictionary learning variables. Dictionary learning is performed directly on the compressed image, and so is tailored to the MRI being considered. In addition, we investigate a total variation penalty term in combination with the dictionary learning model, and show how the denoising property of dictionary learning removes dependence on regularization parameters in the noisy setting. We derive a stochastic optimization algorithm based on Markov chain Monte Carlo for the Bayesian model, and use the alternating direction method of multipliers for efficiently performing total variation minimization. We present empirical results on several MRIs, which show that the proposed regularization framework can improve reconstruction accuracy over other methods.
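To make the total-variation component above concrete, here is a minimal sketch (not the paper's Bayesian dictionary-learning model) of TV-regularized reconstruction from undersampled k-space, using plain gradient descent on a smoothed TV penalty; the sampling mask, step size, and weight lam are illustrative assumptions.

```python
# Minimal sketch: TV-regularized CS-MRI reconstruction by gradient descent on a
# smoothed TV penalty. This illustrates only the data-fidelity + total-variation
# component; the paper's beta-process dictionary learning and MCMC are not reproduced.
import numpy as np

def reconstruct_tv(y, mask, lam=0.05, eps=1e-3, steps=200, lr=0.5):
    """y: undersampled k-space (zeros where not sampled), mask: 0/1 sampling mask."""
    x = np.abs(np.fft.ifft2(y))                      # zero-filled initial estimate
    for _ in range(steps):
        # data-fidelity gradient: F^H (M * (F x - y))
        r = mask * (np.fft.fft2(x) - y)
        g_data = np.real(np.fft.ifft2(r))
        # smoothed total-variation gradient via forward differences
        dx = np.diff(x, axis=1, append=x[:, -1:])
        dy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
        # divergence of the normalized gradient field (backward differences)
        div = (np.diff(dx / mag, axis=1, prepend=(dx / mag)[:, :1])
               + np.diff(dy / mag, axis=0, prepend=(dy / mag)[:, :1]))
        x = x - lr * (g_data - lam * div)
    return x

# hypothetical usage (image scaled to [0, 1]):
# mask = np.random.rand(256, 256) < 0.3
# y = mask * np.fft.fft2(image)
# recon = reconstruct_tv(y, mask)
```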


IEEE Transactions on Image Processing | 2017

Clearing the Skies: A Deep Network Architecture for Single-Image Rain Removal

Xueyang Fu; Jiabin Huang; Xinghao Ding; Yinghao Liao; John Paisley

We introduce a deep network architecture called DerainNet for removing rain streaks from an image. Based on the deep convolutional neural network (CNN), we directly learn the mapping relationship between rainy and clean image detail layers from data. Because we do not possess the ground truth corresponding to real-world rainy images, we synthesize images with rain for training. In contrast to other common strategies that increase depth or breadth of the network, we use image processing domain knowledge to modify the objective function and improve deraining with a modestly sized CNN. Specifically, we train our DerainNet on the detail (high-pass) layer rather than in the image domain. Though DerainNet is trained on synthetic data, we find that the learned network translates very effectively to real-world images for testing. Moreover, we augment the CNN framework with image enhancement to improve the visual results. Compared with state-of-the-art single-image de-raining methods, our method achieves improved rain removal and a much faster computation time after network training.
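As a rough illustration of the detail-layer training idea, the sketch below splits each image into base and detail with a simple Gaussian low-pass (the paper's exact decomposition and network configuration may differ) and trains a small three-layer CNN to map rainy detail to clean detail; all layer sizes and hyperparameters are illustrative.

```python
# Sketch of detail-layer learning: train a small CNN on high-pass layers only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def detail_layer(img, ksize=15, sigma=5.0):
    """img: (N, 3, H, W) float in [0, 1]. Detail = image minus a Gaussian-blurred base."""
    coords = torch.arange(ksize, dtype=torch.float32) - ksize // 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    kernel = (g[:, None] * g[None, :]).expand(img.shape[1], 1, ksize, ksize).contiguous()
    base = F.conv2d(img, kernel.to(img.device), padding=ksize // 2, groups=img.shape[1])
    return img - base, base

class DetailCNN(nn.Module):
    """Small CNN that maps a rainy detail layer to a clean detail layer."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 1),           nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 5, padding=2),
        )

    def forward(self, detail):
        return self.net(detail)

# one hypothetical training step on synthesized (rainy, clean) pairs
model = DetailCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
rainy, clean = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)
rain_detail, rain_base = detail_layer(rainy)
clean_detail, _ = detail_layer(clean)
loss = F.mse_loss(model(rain_detail), clean_detail)
opt.zero_grad(); loss.backward(); opt.step()
derained = rain_base + model(rain_detail)   # de-rained estimate at test time
```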


IEEE Transactions on Image Processing | 2015

A Probabilistic Method for Image Enhancement With Simultaneous Illumination and Reflectance Estimation

Xueyang Fu; Yinghao Liao; Delu Zeng; Yue Huang; Xiao-Ping Zhang; Xinghao Ding

In this paper, a new probabilistic method for image enhancement is presented based on a simultaneous estimation of illumination and reflectance in the linear domain. We show that the linear domain model can better represent prior information for better estimation of reflectance and illumination than the logarithmic domain. A maximum a posteriori (MAP) formulation is employed with priors of both illumination and reflectance. To estimate illumination and reflectance effectively, an alternating direction method of multipliers is adopted to solve the MAP problem. Experimental results show that the proposed method estimates reflectance and illumination effectively, producing visually pleasing enhanced results with a promising convergence rate. Compared with the other methods tested, the proposed method yields comparable or better results on both subjective and objective assessments.
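A much-simplified alternating sketch of the linear-domain decomposition I ≈ R · L is shown below; it stands in for, and does not reproduce, the paper's MAP formulation and ADMM solver. The Gaussian smoothing, iteration count, and clipping are assumptions made for illustration.

```python
# Simplified alternating illumination/reflectance split in the linear domain.
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(I, iters=10, sigma=15):
    """I: grayscale image scaled to (0, 1]. Returns (reflectance R, illumination L) with I ≈ R * L."""
    L = gaussian_filter(I, sigma)                            # smooth initial illumination
    for _ in range(iters):
        R = np.clip(I / np.maximum(L, 1e-4), 0, 1)           # reflectance update
        L = gaussian_filter(I / np.maximum(R, 1e-4), sigma)  # re-smooth the implied illumination
        L = np.maximum(L, I)                                 # keep L >= I so that R <= 1
    return R, L

# hypothetical usage: R, L = decompose(image / 255.0)
# an enhanced image can then be formed by adjusting L (e.g., with a gamma curve) and recombining with R
```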


Signal Processing | 2016

A fusion-based enhancing method for weakly illuminated images

Xueyang Fu; Delu Zeng; Yue Huang; Yinghao Liao; Xinghao Ding; John Paisley

We propose a straightforward and efficient fusion-based method for enhancing weakly illuminated images that uses several mature image processing techniques. First, we employ an illumination estimating algorithm based on morphological closing to decompose an observed image into a reflectance image and an illumination image. We then derive two inputs that represent luminance-improved and contrast-enhanced versions of the decomposed illumination using the sigmoid function and adaptive histogram equalization. Designing two weights based on these inputs, we produce an adjusted illumination by fusing the derived inputs with the corresponding weights in a multi-scale fashion. Through a proper weighting and fusion strategy, we blend the advantages of different techniques to produce the adjusted illumination. The final enhanced image is obtained by compensating the adjusted illumination back to the reflectance. Through this synthesis, the enhanced image represents a trade-off among detail enhancement, local contrast improvement and preservation of the natural feel of the image. In the proposed fusion-based framework, images under different weak illumination conditions, such as backlighting, non-uniform illumination and nighttime, can be enhanced. Highlights: A fusion-based method for enhancing various weakly illuminated images is proposed. The proposed method requires only one input to obtain the enhanced image. Different mature image processing techniques can be blended in our framework. Our method has an efficient computation time for practical applications.
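The sketch below walks through a single-scale version of this pipeline on the luminance channel: morphological closing for the illumination, a sigmoid-brightened input and a CLAHE-equalized input, and a brightness-weighted blend. The paper fuses in a multi-scale fashion and with its own weight design; kernel sizes, sigmoid slope, and weight centers here are guesses.

```python
# Single-scale sketch of the fusion-based enhancement of weakly illuminated images.
import cv2
import numpy as np

def enhance_weak_illumination(bgr):
    img = bgr.astype(np.float32) / 255.0
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2]

    # illumination via morphological closing, reflectance via division
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    illum = cv2.morphologyEx(v, cv2.MORPH_CLOSE, kernel)
    illum = cv2.GaussianBlur(illum, (0, 0), 5)
    refl = v / np.maximum(illum, 1e-3)

    # input 1: sigmoid-brightened illumination; input 2: CLAHE-equalized illumination
    in1 = 1.0 / (1.0 + np.exp(-8.0 * (illum - 0.5)))
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    in2 = clahe.apply((illum * 255).astype(np.uint8)).astype(np.float32) / 255.0

    # brightness-based weights favouring in1 in dark regions, in2 elsewhere
    w1 = np.exp(-((illum - 0.2) ** 2) / 0.08)
    w2 = np.exp(-((illum - 0.6) ** 2) / 0.08)
    fused = (w1 * in1 + w2 * in2) / np.maximum(w1 + w2, 1e-6)

    # compensate the adjusted illumination back to the reflectance
    hsv[:, :, 2] = np.clip(refl * fused, 0, 1)
    out = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
    return (out * 255).astype(np.uint8)
```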


International Conference on Image Processing | 2014

A retinex-based enhancing approach for single underwater image

Xueyang Fu; Peixian Zhuang; Yue Huang; Yinghao Liao; Xiao-Ping Zhang; Xinghao Ding

Since light is absorbed and scattered while traveling in water, color distortion, under-exposure and fuzziness are three major problems of underwater imaging. In this paper, a novel retinex-based enhancing approach is proposed to enhance a single underwater image. The proposed approach has three main steps to solve the problems mentioned above. First, a simple but effective color correction strategy is adopted to address the color distortion. Second, a variational retinex framework is proposed to decompose the single underwater image into reflectance and illumination, which represent the detail and brightness, respectively. An effective alternating direction optimization strategy is adopted to solve the proposed model. Third, the reflectance and the illumination are enhanced by different strategies to address the under-exposure and fuzziness problems. The final enhanced image is obtained by combining the enhanced reflectance and illumination. The enhanced result benefits from corrected color, lightened dark regions, preserved naturalness, and well-enhanced edges and details. Moreover, the proposed approach is a general method that can enhance other kinds of degraded images, such as sandstorm images.
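A simplified stand-in for this three-step pipeline is sketched below: percentile-based color correction per channel, a crude Gaussian-blur decomposition in place of the paper's variational retinex model, and separate gamma/sharpening enhancement of illumination and reflectance. All constants are illustrative.

```python
# Simplified stand-in for the underwater enhancement pipeline.
import cv2
import numpy as np

def enhance_underwater(bgr, gamma=0.7):
    img = bgr.astype(np.float32) / 255.0

    # 1) color correction: stretch each channel between its 1st/99th percentiles
    for c in range(3):
        lo, hi = np.percentile(img[:, :, c], (1, 99))
        img[:, :, c] = np.clip((img[:, :, c] - lo) / max(hi - lo, 1e-3), 0, 1)

    # 2) decompose the luminance into illumination and reflectance (crude Gaussian estimate)
    v = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    illum = cv2.GaussianBlur(v, (0, 0), 20) + 1e-3
    refl = v / illum

    # 3) enhance separately: brighten illumination, unsharp-mask the reflectance
    illum_enh = np.power(illum, gamma)
    refl_enh = refl + 0.5 * (refl - cv2.GaussianBlur(refl, (0, 0), 3))
    v_enh = np.clip(illum_enh * refl_enh, 1e-3, 1)

    # apply the luminance gain back to the color-corrected image
    gain = (v_enh / np.maximum(v, 1e-3))[:, :, None]
    return (np.clip(img * gain, 0, 1) * 255).astype(np.uint8)
```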


Computer Vision and Pattern Recognition | 2017

Removing Rain from Single Images via a Deep Detail Network

Xueyang Fu; Jiabin Huang; Delu Zeng; Yue Huang; Xinghao Ding; John Paisley

We propose a new deep network architecture for removing rain streaks from individual images based on the deep convolutional neural network (CNN). Inspired by the deep residual network (ResNet) that simplifies the learning process by changing the mapping form, we propose a deep detail network to directly reduce the mapping range from input to output, which makes the learning process easier. To further improve the de-rained result, we use a priori image domain knowledge by focusing on high frequency detail during training, which removes background interference and focuses the model on the structure of rain in images. This demonstrates that a deep architecture not only has benefits for high-level vision tasks but also can be used to solve low-level imaging problems. Though we train the network on synthetic data, we find that the learned network generalizes well to real-world test images. Experiments show that the proposed method significantly outperforms state-of-the-art methods on both synthetic and real-world images in terms of both qualitative and quantitative measures. We discuss applications of this structure to denoising and JPEG artifact reduction at the end of the paper.
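The residual idea can be illustrated with a small ResNet-style network that takes the high-pass detail layer and predicts a residual added back to the rainy input; the depth, channel widths, and the average-pooling high-pass split below are illustrative choices, not the paper's exact settings.

```python
# Sketch of residual learning on the detail layer for de-raining.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.c1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.c2 = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        return x + self.c2(F.relu(self.c1(x)))   # identity shortcut

class DeepDetailNet(nn.Module):
    def __init__(self, blocks=6, ch=32):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        self.body = nn.Sequential(*[ResBlock(ch) for _ in range(blocks)])
        self.tail = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, rainy):
        # detail (high-pass) layer: image minus a blurred base
        base = F.avg_pool2d(rainy, 9, stride=1, padding=4)
        detail = rainy - base
        residual = self.tail(self.body(F.relu(self.head(detail))))
        return rainy + residual                  # residual maps rainy -> clean

# training would minimize F.mse_loss(model(rainy_batch), clean_batch)
```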


IEEE Geoscience and Remote Sensing Letters | 2015

Remote Sensing Image Enhancement Using Regularized-Histogram Equalization and DCT

Xueyang Fu; Jiye Wang; Delu Zeng; Yue Huang; Xinghao Ding

In this letter, an effective enhancement method for remote sensing images is introduced to improve the global contrast and the local details. The proposed method is an empirical approach that uses regularized histogram equalization (HE) and the discrete cosine transform (DCT) to improve image quality. First, a new global contrast enhancement method that regularizes the input histogram is introduced. More specifically, this technique uses the sigmoid function and the histogram to generate a distribution function for the input image. The distribution function is then used to produce a new image with improved global contrast by adopting the standard lookup table-based HE technique. Second, the DCT coefficients of the previously contrast-improved image are automatically adjusted to further enhance the local details of the image. Compared with conventional methods, the proposed method can generate enhanced remote sensing images with higher contrast and richer details without introducing saturation artifacts.
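A hedged sketch of the two stages follows: a lookup-table HE step driven by a regularized histogram (here simply blended with a uniform histogram as a stand-in for the sigmoid-based regularization described above) and a mild amplification of high-frequency DCT coefficients. The blending weight and boost factor are assumptions.

```python
# Sketch: regularized-histogram equalization followed by DCT detail boosting.
import numpy as np
from scipy.fft import dctn, idctn

def enhance_remote_sensing(gray, alpha=0.5, boost=1.3):
    """gray: uint8 image. alpha blends image/uniform histograms; boost scales high-freq DCT."""
    # 1) regularized-histogram equalization via a lookup table
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    hist = alpha * hist / hist.sum() + (1 - alpha) / 256.0   # regularized histogram
    cdf = np.cumsum(hist)
    lut = np.round(255 * cdf).astype(np.uint8)
    eq = lut[gray]

    # 2) amplify high-frequency DCT coefficients to enhance local details
    coef = dctn(eq.astype(np.float64), norm="ortho")
    h, w = coef.shape
    yy, xx = np.mgrid[0:h, 0:w]
    weight = 1.0 + (boost - 1.0) * np.clip((xx + yy) / (h + w), 0, 1)
    out = idctn(coef * weight, norm="ortho")
    return np.clip(out, 0, 255).astype(np.uint8)
```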


Computer Vision and Pattern Recognition | 2016

A Weighted Variational Model for Simultaneous Reflectance and Illumination Estimation

Xueyang Fu; Delu Zeng; Yue Huang; Xiao-Ping Zhang; Xinghao Ding

We propose a weighted variational model to estimate both the reflectance and the illumination from an observed image. We show that, though it is widely adopted for ease of modeling, the log-transformed image is not ideal for this task. Based on this investigation of the logarithmic transformation, a new weighted variational model is proposed for better prior representation, which is imposed in the regularization terms. Different from conventional variational models, the proposed model can preserve the estimated reflectance with more details. Moreover, the proposed model can suppress noise to some extent. An alternating minimization scheme is adopted to solve the proposed model. Experimental results demonstrate the effectiveness of the proposed model and its algorithm. Compared with other variational methods, the proposed method yields comparable or better results on both subjective and objective assessments.
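One way to see the alternating structure is the illumination subproblem below: with the reflectance fixed, a quadratic smoothness term admits a closed-form Fourier-domain solution under periodic boundary assumptions. This is only a stand-in; the paper's weighted regularizers and actual solver are not reproduced, and lam and the iteration count are arbitrary.

```python
# Sketch of one alternating step: with reflectance fixed, solve
#   min_L ||L - G||^2 + lam * ||grad L||^2,  G = I / R,
# in closed form in the Fourier domain (periodic boundary conditions).
import numpy as np

def solve_illumination(G, lam=2.0):
    h, w = G.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # eigenvalues of the discrete Laplacian for forward-difference gradients
    lap = (2 - 2 * np.cos(2 * np.pi * fy)) + (2 - 2 * np.cos(2 * np.pi * fx))
    L_hat = np.fft.fft2(G) / (1.0 + lam * lap)
    return np.real(np.fft.ifft2(L_hat))

def alternate(I, iters=5, lam=2.0):
    """I: grayscale image in (0, 1]. Alternate reflectance / illumination updates."""
    L = I.copy()
    for _ in range(iters):
        R = np.clip(I / np.maximum(L, 1e-3), 0, 1)       # reflectance update
        L = np.maximum(solve_illumination(I / np.maximum(R, 1e-3), lam), I)
    return R, L
```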


International Conference on Acoustics, Speech, and Signal Processing | 2014

A novel retinex based approach for image enhancement with illumination adjustment

Xueyang Fu; Ye Sun; Minghui Liwang; Yue Huang; Xiao-Ping Zhang; Xinghao Ding

Retinex-based algorithms have been widely used in image enhancement. Since many retinex-based algorithms remove the illumination and regard the reflectance as the enhanced result, over-enhancement and unnaturalness are inevitable. In this paper, a novel retinex-based image enhancement method using illumination adjustment is proposed. Different from existing variational retinex models, a new model without the logarithmic transformation is established that preserves edges well. A fast alternating direction optimization method is used to solve this problem. After the decomposition into illumination and reflectance, a simple and effective post-processing method for illumination adjustment is adopted to make the enhanced result more natural. The proposed method can deal with many kinds of images, such as high dynamic range (HDR) images and non-uniformly illuminated images. Experimental results illustrate that naturalness is preserved while details are enhanced by the presented approach.
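The post-processing idea can be sketched as follows, with a plain Gaussian illumination estimate standing in for the paper's variational decomposition: adjust only the illumination with a gamma curve and recombine with the reflectance, rather than discarding the illumination. The gamma value and blur scale are illustrative.

```python
# Sketch: adjust (rather than discard) the illumination, then recombine.
import cv2
import numpy as np

def enhance_with_illumination_adjustment(gray, gamma=0.6, sigma=25):
    """gray: uint8 image. gamma < 1 brightens dark regions; sigma sets the illumination scale."""
    v = gray.astype(np.float32) / 255.0
    illum = cv2.GaussianBlur(v, (0, 0), sigma) + 1e-3   # crude illumination estimate
    refl = v / illum                                     # reflectance (detail) layer
    illum_adj = np.power(illum, gamma)                   # adjusted illumination
    out = np.clip(refl * illum_adj, 0, 1)
    return (out * 255).astype(np.uint8)
```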


International Conference on Neural Information Processing | 2013

Single-Image-Based Rain and Snow Removal Using Multi-guided Filter

Xianhui Zheng; Yinghao Liao; Wei Guo; Xueyang Fu; Xinghao Ding

In this paper, we propose a new rain and snow removal method that uses the low-frequency part of a single image. It is based on a key difference between clear background edges and rain streaks or snowflakes: the low-frequency part, which is largely free of rain and snow, clearly distinguishes their different properties. We use the low-frequency part as the guidance image and the high-frequency part as the input image of a guided filter, obtaining a non-rain (or non-snow) component of the high-frequency part; adding back the low-frequency part yields the restored image. We further refine the result based on the properties of clear background edges. Our results show good performance in both rain removal and snow removal.
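A sketch of the guided-filtering step is given below, using cv2.ximgproc.guidedFilter (which requires the opencv-contrib-python package): the low-frequency layer serves as the guidance image when filtering the high-frequency layer, and the two are then recombined. The frequency split and all radius/eps values are assumptions.

```python
# Sketch: rain/snow removal by guided filtering of the high-frequency layer,
# guided by the (largely rain/snow-free) low-frequency layer.
import cv2
import numpy as np

def remove_rain_snow(gray):
    """gray: uint8 single-channel image."""
    img = gray.astype(np.float32) / 255.0

    # frequency split: low-pass base + high-pass detail
    low = cv2.ximgproc.guidedFilter(img, img, 15, 1e-2)   # guide=img, src=img
    high = img - low

    # filter the high-frequency layer, guided by the low-frequency layer
    high_clean = cv2.ximgproc.guidedFilter(low, high, 8, 1e-3)

    restored = np.clip(low + high_clean, 0, 1)
    return (restored * 255).astype(np.uint8)
```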
