Yudong Liang
Xi'an Jiaotong University
Publications
Featured research published by Yudong Liang.
Neurocomputing | 2016
Sanping Zhou; Jinjun Wang; Shun Zhang; Yudong Liang; Yihong Gong
This paper proposes a novel region-based active contour model in the level set formulation for medical image segmentation. We define a unified fitting energy framework based on Gaussian probability distributions to obtain the maximum a posteriori probability (MAP) estimation. The energy consists of a global term that characterizes the fitting of a global Gaussian distribution according to the intensities inside and outside the evolving curve, and a local term that characterizes the fitting of local Gaussian distributions based on local intensity information. In the resulting contour evolution that minimizes the associated energy, the global energy term accelerates the evolution of the curve when it is far from the objects, while the local energy term guides the curve near the objects to stop on the boundaries. In addition, a weighting function between the local and global energy terms is constructed from local and global variance information, which enables the model to select the weights adaptively when segmenting images with intensity inhomogeneity. Extensive experiments on both synthetic and real medical images show significant improvements in both efficiency and accuracy compared with popular methods. Highlights: Both global and local intensity information are incorporated to segment images with intensity inhomogeneity. Size information of the local neighborhood partition is used to build the a priori probability model. A weighting function between the local and global energy terms is proposed.
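For illustration only, the weighted combination of global and local fitting terms described above can be sketched in the following assumed form (not the paper's exact formulation); phi denotes the level set function, I the image, Omega_1/Omega_2 the regions inside/outside the zero level set, p_i and p_{i,x} the global and local Gaussian densities, K_sigma a localizing kernel, and omega(x) the adaptive weight built from local and global variances:

```latex
% Illustrative sketch only: an assumed form of the weighted global/local fitting
% energy, not the paper's exact formulation.
\begin{aligned}
E(\phi) &= \bigl(1-\omega(x)\bigr)\,E_{\text{global}}(\phi)
          + \omega(x)\,E_{\text{local}}(\phi)
          + \nu\,\mathrm{Length}(\phi),\\[4pt]
E_{\text{global}}(\phi) &= \sum_{i=1}^{2}\int_{\Omega_i} -\log p_i\bigl(I(x)\bigr)\,dx,\\[4pt]
E_{\text{local}}(\phi)  &= \sum_{i=1}^{2}\int_{\Omega}\!\int_{\Omega_i}
   K_{\sigma}(x-y)\,\bigl(-\log p_{i,x}\bigl(I(y)\bigr)\bigr)\,dy\,dx .
\end{aligned}
```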
Neurocomputing | 2016
Yudong Liang; Jinjun Wang; Sanping Zhou; Yihong Gong; Nanning Zheng
Deep convolutional neural networks have been applied to the single-image super-resolution problem and have demonstrated state-of-the-art quality. This paper presents several types of prior information that can be utilized during training of the deep convolutional neural network. The first type of prior focuses on restoring edges and texture in the output, and the second type utilizes multiple upscaling factors to exploit structure recurrence across different scales. As demonstrated by our experimental results, the proposed framework accelerates training by more than ten times while also leading to better image quality. The generated super-resolution images are visually sharper and more pleasant, and achieve superior objective evaluation results compared to state-of-the-art methods.
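As a hedged sketch of how an edge/texture prior of this kind can enter training (not the authors' released code), the following PyTorch-style loss adds a gradient-consistency term on top of the usual pixel loss; the Sobel-based gradient operator and the weight `edge_weight` are assumptions of this illustration.

```python
import torch
import torch.nn.functional as F

def sobel_gradients(img):
    """Approximate image gradients with fixed Sobel kernels (illustrative edge prior)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    channels = img.shape[1]
    kx = kx.to(img.device).repeat(channels, 1, 1, 1)
    ky = ky.to(img.device).repeat(channels, 1, 1, 1)
    gx = F.conv2d(img, kx, padding=1, groups=channels)
    gy = F.conv2d(img, ky, padding=1, groups=channels)
    return gx, gy

def sr_loss_with_edge_prior(sr, hr, edge_weight=0.1):
    """Pixel reconstruction loss plus an edge-consistency term (assumed weighting)."""
    pixel_loss = F.mse_loss(sr, hr)
    gx_sr, gy_sr = sobel_gradients(sr)
    gx_hr, gy_hr = sobel_gradients(hr)
    edge_loss = F.mse_loss(gx_sr, gx_hr) + F.mse_loss(gy_sr, gy_hr)
    return pixel_loss + edge_weight * edge_loss
```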
European Conference on Computer Vision | 2016
Yudong Liang; Jinjun Wang; Xingyu Wan; Yihong Gong; Nanning Zheng
Most image quality assessment (IQA) methods require the reference image to be pixel-wise aligned with the distorted image, which limits the applicability of reference-based IQA methods. In this paper, we show that a non-aligned image of a similar scene can serve well as the reference, using a proposed dual-path deep convolutional neural network (DCNN). Analysis indicates that the model captures both the scene structural information and the non-structural "naturalness" between the pair for quality assessment. As shown in the experiments, the proposed DCNN model handles the IQA problem well. With an aligned reference image, our predictions outperform many state-of-the-art methods. In the more general case where the reference image contains a similar scene but is not aligned with the distorted one, DCNN still achieves better consistency with subjective evaluation than many existing methods that use aligned reference images.
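A minimal sketch of a dual-path design in the spirit described above, assuming PyTorch; the class name `DualPathIQA`, the layer sizes, and the concatenation-based fusion are illustrative assumptions rather than the published architecture.

```python
import torch
import torch.nn as nn

class DualPathIQA(nn.Module):
    """Two-branch sketch: one path for the distorted image, one for the (possibly
    non-aligned) reference; fused features regress a quality score."""
    def __init__(self):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.dist_path = branch()
        self.ref_path = branch()
        self.regressor = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(inplace=True), nn.Linear(64, 1),
        )

    def forward(self, distorted, reference):
        # Concatenate features from both paths, then regress a scalar quality score.
        feat = torch.cat([self.dist_path(distorted), self.ref_path(reference)], dim=1)
        return self.regressor(feat)
```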
Conference on Multimedia Modeling | 2017
Ze Yang; Kai Zhang; Yudong Liang; Jinjun Wang
Recent years have witnessed great success of convolutional neural networks (CNNs) on various problems in both low-level and high-level vision. Especially noteworthy is the residual network, which was originally proposed for high-level vision problems and enjoys several merits. This paper aims to extend the merits of the residual network, such as the fast training induced by skip connections, to a typical low-level vision problem, single image super-resolution. In general, the two main challenges for existing deep CNNs for super-resolution are the gradient exploding/vanishing problem and the large number of parameters and high computational cost as the CNN goes deeper. Correspondingly, skip connections (identity mapping shortcuts) are utilized to avoid the gradient exploding/vanishing problem. To tackle the second problem, a parameter-economic CNN architecture with carefully designed width, depth and skip connections is proposed. Experimental results demonstrate that the proposed CNN model can not only achieve state-of-the-art PSNR and SSIM results for single image super-resolution but also produce visually pleasant results.
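A minimal sketch of residual blocks with identity shortcuts applied to super-resolution, assuming PyTorch; channel counts, block depth and the global skip connection are illustrative choices, not the paper's exact configuration.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Identity-shortcut block of the kind the abstract refers to; sizes are illustrative."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # skip connection eases gradient flow in deep stacks

class ResidualSRNet(nn.Module):
    """Assumed parameter-economic stack: shallow feature extraction, a few residual
    blocks, and a reconstruction layer with a global skip connection."""
    def __init__(self, channels=64, num_blocks=8):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(num_blocks)])
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, lr_upscaled):
        # Predict only the residual on top of the (bicubic-)upscaled low-resolution input.
        feat = self.head(lr_upscaled)
        return lr_upscaled + self.tail(self.blocks(feat))
```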
International Conference on Image Processing | 2015
Yudong Liang; Jinjun Wang; Shizhou Zhang; Yihong Gong
Learning the non-linear image upscaling process has previously been treated as a simple regression problem, where various models have been utilized to describe the correlations between high-resolution (HR) and low-resolution (LR) images/patches. In this paper, we present a multitask learning framework based on a deep neural network for image super-resolution, in which we jointly consider the image super-resolution process and the image degeneration process. By sharing parameters between the two highly related tasks, the proposed framework effectively improves the learned neural-network-based mapping between HR and LR image patches. Experimental results demonstrate clear visual improvement and high computational efficiency, especially with large magnification factors.
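A hedged sketch of the multitask idea, assuming PyTorch: a shared trunk feeds one head that predicts the HR patch and another that models the degeneration back toward the LR patch, so both losses update the shared parameters. Class names and layer sizes are assumptions of this illustration.

```python
import torch.nn as nn

class MultiTaskSR(nn.Module):
    """Shared trunk with a super-resolution head and a degeneration head (illustrative)."""
    def __init__(self, channels=64):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.sr_head = nn.Conv2d(channels, 1, 3, padding=1)       # predicts the HR patch
        self.degrade_head = nn.Conv2d(channels, 1, 3, padding=1)  # models HR-to-LR degeneration

    def forward(self, lr_patch):
        feat = self.shared(lr_patch)
        return self.sr_head(feat), self.degrade_head(feat)
```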
Asia Pacific Signal and Information Processing Association Annual Summit and Conference | 2014
Yudong Liang; Jinjun Wang; Shizhou Zhang; Yihong Gong
This paper proposes a novel neural network that learns the essential mapping function between low-resolution and high-resolution images for the image super-resolution problem. In our approach, the patch-recurrence property of small patches in natural images is utilized as a prior to train the network. An autoencoder neural network is designed to reconstruct the high-resolution patches. The constraint that the output of the encoding part should be similar to the corresponding high-resolution patches is imposed to ameliorate the ill-posed nature of the super-resolution problem. In addition, the degeneration mapping from the high-resolution image to the low-resolution image is also integrated into the network. Both visual improvements and objective assessments are demonstrated on real images.
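A hedged sketch of such an autoencoder, assuming PyTorch and illustrative 9x9 LR / 18x18 HR patch sizes: the encoder output is constrained to resemble the HR patch, and the decoder models the HR-to-LR degeneration mapping. All dimensions and the loss weighting are assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F

class SRAutoencoder(nn.Module):
    """Encoder output is pushed toward the HR patch; decoder maps it back toward LR."""
    def __init__(self, lr_dim=81, hr_dim=324, hidden=512):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(lr_dim, hidden), nn.ReLU(), nn.Linear(hidden, hr_dim))
        self.decoder = nn.Sequential(nn.Linear(hr_dim, hidden), nn.ReLU(), nn.Linear(hidden, lr_dim))

    def forward(self, lr_patch):
        hr_pred = self.encoder(lr_patch)   # constrained to resemble the HR patch
        lr_rec = self.decoder(hr_pred)     # degeneration back to the LR patch
        return hr_pred, lr_rec

def training_loss(hr_pred, lr_rec, hr_patch, lr_patch, alpha=1.0):
    """HR-similarity constraint on the code plus an LR reconstruction (degeneration) term."""
    return F.mse_loss(hr_pred, hr_patch) + alpha * F.mse_loss(lr_rec, lr_patch)
```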
Pacific Rim Conference on Multimedia | 2016
Yudong Liang; Jinjun Wang; Ze Yang; Yihong Gong; Nanning Zheng
Quantitative human evaluations give a much finer description, while qualitative human evaluations are more stable and consistent and are much easier to obtain. Quantitative assessment has been widely explored, while the interaction between qualitative and quantitative evaluations has barely been exploited. A deep convolutional neural network with a multi-task learning framework is utilized to perform quantitative and qualitative evaluations at the same time. The supervision of qualitative evaluations helps the model overcome the inconsistency present in quantitative evaluations. Further, multi-task learning provides more information to facilitate the learning of discriminative features that describe image quality. As shown in the experiments, referring to qualitative evaluations boosts the performance of quantitative assessment, and state-of-the-art performance is achieved.
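A minimal sketch of such a joint quantitative/qualitative model, assuming PyTorch: a shared feature extractor with a regression head for the quality score and a classification head for a discrete qualitative rating. The number of rating levels and the layer sizes are assumptions of this illustration.

```python
import torch.nn as nn

class MultiTaskIQA(nn.Module):
    """Shared CNN features with a score-regression head and a rating-classification head."""
    def __init__(self, num_ratings=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.score_head = nn.Linear(64, 1)             # quantitative quality score
        self.rating_head = nn.Linear(64, num_ratings)  # qualitative rating logits

    def forward(self, image):
        feat = self.features(image)
        return self.score_head(feat), self.rating_head(feat)
```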
International Conference on Multimedia and Expo | 2015
Shizhou Zhang; Jinjun Wang; Yudong Liang; Yihong Gong; Nanning Zheng
Recently, sparse coding based image representation has achieved state-of-the-art recognition results on many benchmarks. In this paper, we propose the Multi-cue Normalized Non-Negative Sparse Encoder (MN3SE), which enforces both a non-negative constraint and a shift-invariant constraint on top of the traditional sparse coding criteria, and takes multiple cues into account to further boost performance. The former constraint reduces the information loss caused by negative coefficients and improves coding stability, and the latter allows the sparseness to adapt to the local feature. The proposed coding scheme is then approximated by a neural network based encoder for speed-up. More importantly, the multi-layer neural network architecture allows us to apply a multi-task learning strategy to fuse information from multiple cues. Specifically, we take one type of descriptor, such as SIFT, as the input, and force the learned encoder to produce sparse codes that can reconstruct not only SIFT but also other types of descriptors such as color moments. In this way, we not only achieve a 10 to 33 times speed-up for sparse coding, but the multi-cue learning strategy also gives image features extracted by MN3SE superior image classification accuracy.
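A hedged sketch of the neural encoder approximation, assuming PyTorch: a small MLP produces a non-negative code from a SIFT descriptor, and two decoders reconstruct SIFT and color-moment descriptors from the same code, mirroring the multi-cue reconstruction constraint. All dimensions, layer counts and names are assumptions of this illustration.

```python
import torch.nn as nn

class MultiCueSparseEncoder(nn.Module):
    """Feed-forward stand-in for the described encoder with two reconstruction heads."""
    def __init__(self, sift_dim=128, color_dim=9, code_dim=1024):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(sift_dim, code_dim),
            nn.ReLU(inplace=True),  # ReLU keeps the code non-negative
        )
        self.sift_decoder = nn.Linear(code_dim, sift_dim)    # reconstructs the input cue
        self.color_decoder = nn.Linear(code_dim, color_dim)  # reconstructs the extra cue

    def forward(self, sift):
        code = self.encoder(sift)
        return code, self.sift_decoder(code), self.color_decoder(code)
```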
arXiv: Computer Vision and Pattern Recognition | 2017
Yudong Liang; Radu Timofte; Jinjun Wang; Yihong Gong; Nanning Zheng
arXiv: Computer Vision and Pattern Recognition | 2017
Yudong Liang; Ze Yang; Kai Zhang; Yihui He; Jinjun Wang; Nanning Zheng