Publication


Featured research published by Shuying Huang.


IEEE Sensors Journal | 2016

Multimodal Sensor Medical Image Fusion Based on Type-2 Fuzzy Logic in NSCT Domain

Yong Yang; Yue Que; Shuying Huang; Pan Lin

Multimodal medical image fusion plays a vital role in clinical applications of different imaging sensors. This paper presents a novel multimodal medical image fusion method that combines multiscale geometric analysis of the nonsubsampled contourlet transform (NSCT) with type-2 fuzzy logic techniques. First, the NSCT is performed on preregistered source images to obtain their high- and low-frequency subbands. Next, an effective type-2 fuzzy logic-based fusion rule is proposed for the high-frequency subbands, in which local type-2 fuzzy entropy is introduced to automatically select high-frequency coefficients. The low-frequency subbands, in contrast, are fused by a local energy algorithm based on the local features of the corresponding images. Finally, the fused image is constructed by the inverse NSCT with all composite subbands. Both subjective and objective evaluations show better contrast, accuracy, and versatility for the proposed approach compared with state-of-the-art methods. In addition, an effective color medical image fusion scheme is also given that inhibits color distortion to a large extent and produces an improved visual effect.
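The subband fusion described above can be sketched in NumPy. The arrays below stand in for NSCT subbands, the window size is an assumption, and the max-magnitude rule for the high-frequency coefficients is a crude simplification of the paper's type-2 fuzzy entropy selection, which is not reproduced here:

```python
import numpy as np

def local_energy(band, k=3):
    """Sum of squared coefficients over a k x k neighborhood."""
    pad = k // 2
    p = np.pad(band.astype(float) ** 2, pad, mode="reflect")
    out = np.zeros(band.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + band.shape[0], dx:dx + band.shape[1]]
    return out

def fuse_subbands(lowA, lowB, highA, highB):
    """Low-frequency: keep the coefficient from the source with larger
    local energy. High-frequency: keep the larger-magnitude coefficient
    (a stand-in for the fuzzy-entropy-based selection)."""
    low = np.where(local_energy(lowA) >= local_energy(lowB), lowA, lowB)
    high = np.where(np.abs(highA) >= np.abs(highB), highA, highB)
    return low, high
```

In the actual method these rules are applied per subband and the result is passed through the inverse NSCT.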


IEEE Access | 2016

Remote Sensing Image Fusion Based on Adaptive IHS and Multiscale Guided Filter

Yong Yang; Weiguo Wan; Shuying Huang; Feiniu Yuan; Shouyuan Yang; Yue Que

The purpose of remote sensing image fusion is to sharpen a low-spatial-resolution multispectral (MS) image by injecting the detail map extracted from a panchromatic (PAN) image. In this paper, a novel remote sensing image fusion method based on adaptive intensity-hue-saturation (IHS) and a multiscale guided filter is presented. In the proposed method, the intensity component is first obtained adaptively from the upsampled MS image. Unlike traditional IHS-based methods, we then propose a multiscale guided filter strategy to filter the PAN image and extract more detailed information. Finally, the total detail map is injected into each band of the upsampled MS image to obtain the fused image by a model-based algorithm, in which an improved injection-gain approach is proposed to control the quantity of injected detail. Experimental results demonstrate that the proposed method provides more spatial information and preserves more spectral information than several state-of-the-art fusion methods in both subjective and objective evaluations.
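The detail-injection step can be sketched as follows. A plain mean filter stands in for the paper's multiscale guided filter, and the per-band gains are placeholders for its improved injection-gain model:

```python
import numpy as np

def box_filter(img, r=2):
    """Mean filter over a (2r+1)^2 window; a simple stand-in for the
    multiscale guided filter used in the paper."""
    k = 2 * r + 1
    p = np.pad(img.astype(float), r, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def inject_details(ms_up, pan, gains):
    """Add the PAN detail map (PAN minus its smoothed version) to each
    upsampled MS band, scaled by a per-band injection gain."""
    detail = pan - box_filter(pan)
    return np.stack([band + g * detail for band, g in zip(ms_up, gains)])
```

`ms_up` is the upsampled MS image as a (bands, height, width) array; the gains control how much PAN detail each band receives.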


Signal, Image and Video Processing | 2017

Technique for multi-focus image fusion based on fuzzy-adaptive pulse-coupled neural network

Yong Yang; Yue Que; Shuying Huang; Pan Lin

Multi-focus image fusion addresses the problem that, when imaging a scene, not all targets in a single image are in focus. In this paper, a novel multi-focus image fusion technique is presented that combines the nonsubsampled contourlet transform (NSCT) with a proposed fuzzy-logic-based adaptive pulse-coupled neural network (PCNN) model. In our method, the sum-modified Laplacian (SML) is calculated as the motivation for PCNN neurons in the NSCT domain. Since the linking strength plays an important role in the PCNN, we propose an adaptive fuzzy method to determine it by computing each coefficient's importance relative to the surrounding coefficients. Combined with human visual perception characteristics, the fuzzy membership value is employed to automatically obtain the degree of importance of each coefficient, which is used as the linking strength in the PCNN model. Experimental results on simulated and real multi-focus images show that the proposed technique outperforms a series of existing fusion methods.
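The SML used to motivate the PCNN neurons is a standard focus measure and can be sketched directly; the step size and 3x3 summation window here are common defaults, not values taken from the paper:

```python
import numpy as np

def sml(img, step=1):
    """Sum-modified-Laplacian: the modified Laplacian
    |2I(x,y) - I(x-s,y) - I(x+s,y)| + |2I(x,y) - I(x,y-s) - I(x,y+s)|
    summed over a 3x3 window around each pixel."""
    h, w = img.shape
    p = np.pad(img.astype(float), step, mode="reflect")
    c = p[step:step + h, step:step + w]
    ml = (np.abs(2 * c - p[:h, step:step + w] - p[2 * step:2 * step + h, step:step + w])
          + np.abs(2 * c - p[step:step + h, :w] - p[step:step + h, 2 * step:2 * step + w]))
    # accumulate the modified Laplacian over a 3x3 neighborhood
    q = np.pad(ml, 1, mode="reflect")
    out = np.zeros_like(ml)
    for dy in range(3):
        for dx in range(3):
            out += q[dy:dy + h, dx:dx + w]
    return out
```

Larger SML values indicate sharper (in-focus) regions, which is why it serves as the external stimulus for the PCNN.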


Remote Sensing | 2017

A Novel Pan-Sharpening Framework Based on Matting Model and Multiscale Transform

Yong Yang; Weiguo Wan; Shuying Huang; Pan Lin; Yue Que

Pan-sharpening aims to sharpen a low-spatial-resolution multispectral (MS) image by combining it with the spatial detail information extracted from a panchromatic (PAN) image. An effective pan-sharpening method should produce a high-spatial-resolution MS image while preserving as much spectral information as possible. Unlike traditional intensity-hue-saturation (IHS)- and principal component analysis (PCA)-based multiscale transform methods, a novel pan-sharpening framework based on the matting model (MM) and a multiscale transform is presented in this paper. First, we use the intensity component (I) of the MS image as the alpha channel to generate the spectral foreground and background. Then, an appropriate multiscale transform is utilized to fuse the PAN image and the upsampled I component into a fused high-resolution gray image; in this fusion, two fusion rules are proposed for the low- and high-frequency coefficients in the transform domain. Finally, the high-resolution sharpened MS image is obtained by linearly compositing the fused gray image with the upsampled foreground and background images. To our knowledge, the proposed framework is the first in the pan-sharpening field to be built on the matting model. A large number of experiments were conducted on various satellite datasets; the subjective visual and objective evaluation results indicate that the proposed method performs better than the IHS- and PCA-based frameworks, as well as other state-of-the-art pan-sharpening methods, in terms of both spatial quality and spectral maintenance.
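The final compositing step follows the standard matting equation. A minimal sketch, assuming the fused high-resolution gray image plays the role of the new alpha channel and the foreground/background layers are already upsampled:

```python
import numpy as np

def composite(alpha, foreground, background):
    """Matting-model compositing: each MS band is rebuilt as
    alpha * F_b + (1 - alpha) * B_b, where alpha is the fused
    high-resolution gray image and F, B are the upsampled spectral
    foreground and background, shaped (bands, height, width)."""
    a = alpha[None, ...]  # broadcast alpha over the band axis
    return a * foreground + (1.0 - a) * background
```

Because the spectral content lives entirely in F and B, sharpening only the alpha channel injects spatial detail without directly perturbing the spectra.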


IEEE Transactions on Instrumentation and Measurement | 2017

Multiple Visual Features Measurement With Gradient Domain Guided Filtering for Multisensor Image Fusion

Yong Yang; Yue Que; Shuying Huang; Pan Lin

Multisensor image fusion technologies, which convey image information from different sensor modalities into a single image, have attracted growing interest in recent research. In this paper, we propose a novel multisensor image fusion method based on multiple visual feature measurements with gradient domain guided filtering. First, a Gaussian smoothing filter is employed to decompose each source image into two components: an approximate component formed by homogeneous regions and a detail component with sharp edges. Second, an effective decision-map construction model is presented by measuring three key visual features of the input sensor image: contrast saliency, sharpness, and structure saliency. Third, a gradient domain guided filtering-based decision-map optimization technique is proposed to make full use of spatial consistency and generate weight maps. Finally, the resultant image is fused with the weight maps and then experimentally verified through multifocus, multimodal medical, and infrared-visible image fusion. The experimental results demonstrate that the proposed method achieves better performance than state-of-the-art methods in terms of both subjective visual effect and objective evaluation.
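The decision-map pipeline can be sketched as below. Gradient energy stands in for the paper's three features (contrast saliency, sharpness, structure saliency), and a mean filter stands in for gradient domain guided filtering; both substitutions are simplifications:

```python
import numpy as np

def sharpness(img):
    """Gradient-magnitude energy; a single stand-in for the paper's
    three visual-feature measurements."""
    gy, gx = np.gradient(img.astype(float))
    return gx ** 2 + gy ** 2

def fuse(imgA, imgB, r=1):
    """Build a binary decision map from the sharper source, smooth it
    with a mean filter (standing in for gradient domain guided
    filtering), and use the result as a per-pixel weight map."""
    w = (sharpness(imgA) >= sharpness(imgB)).astype(float)
    k = 2 * r + 1
    p = np.pad(w, r, mode="reflect")
    smooth = sum(p[dy:dy + w.shape[0], dx:dx + w.shape[1]]
                 for dy in range(k) for dx in range(k)) / (k * k)
    return smooth * imgA + (1.0 - smooth) * imgB
```

The smoothing step is what enforces the spatial consistency mentioned above: isolated misclassified pixels in the binary map are averaged away before blending.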


IEEE Access | 2017

Multifocus Image Fusion Based on Extreme Learning Machine and Human Visual System

Yong Yang; Mei Yang; Shuying Huang; Yue Que; Min Ding; Jun Sun

Multifocus image fusion generates a single image by combining the redundant and complementary information of multiple images of the same scene; the combination includes more information about the scene than any individual source image. In this paper, a novel multifocus image fusion method based on an extreme learning machine (ELM) and the human visual system is proposed. First, three visual features that reflect the clarity of a pixel are extracted and used to train the ELM to judge which pixel is clearer; the clearer pixels are then used to construct an initial fused image. Second, we measure the similarity between each source image and the initial fused image and perform morphological opening and closing operations to obtain the focused regions. Lastly, the final fused image is obtained by applying a fusion rule to the focused regions and the initial fused image. Experimental results indicate that the proposed method is more effective than a series of existing popular fusion methods in terms of both subjective and objective evaluations.
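The first stage can be sketched with hand-crafted clarity cues. Local variance and energy of gradient are typical clarity features (the paper's exact three features are not specified here), and a simple score comparison stands in for the trained ELM classifier:

```python
import numpy as np

def clarity_features(img, r=1):
    """Per-pixel clarity cues: local variance and energy of gradient,
    examples of the visual features an ELM could be trained on."""
    k = 2 * r + 1
    p = np.pad(img.astype(float), r, mode="reflect")
    win = np.stack([p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                    for dy in range(k) for dx in range(k)])
    var = win.var(axis=0)
    gy, gx = np.gradient(img.astype(float))
    eog = gx ** 2 + gy ** 2
    return var, eog

def initial_fusion(imgA, imgB):
    """Pixel-wise selection by summed clarity score; a hand-set rule
    standing in for the trained ELM's clearer-pixel decision."""
    scoreA = sum(clarity_features(imgA))
    scoreB = sum(clarity_features(imgB))
    return np.where(scoreA >= scoreB, imgA, imgB)
```

In the full method this initial fused image is then refined by the similarity measurement and morphological post-processing described above.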


IEEE Access | 2017

A Hybrid Method for Multi-Focus Image Fusion Based on Fast Discrete Curvelet Transform

Yong Yang; Song Tong; Shuying Huang; Pan Lin; Yuming Fang

This paper presents a fast discrete curvelet transform (FDCT)-based technique for multi-focus image fusion that addresses two problems: texture selection in the FDCT domain and block effects in spatial-domain fusion. First, we present a frequency-based model by performing the FDCT on the input images. Considering the characteristics of the human visual system, a combination of pulse-coupled neural network and sum-modified-Laplacian algorithms is proposed to extract the detailed frequency information. Then, we construct a hybrid spatial-domain model: unlike other spatial-domain methods, we combine the image difference and the detailed information extracted from the input images to detect the focused region. Finally, to evaluate the robustness of the proposed method, we design a complete evaluation process covering misregistration, noise error, and conditional focus situations. Experimental results indicate that the proposed method improves fusion performance and has lower computational complexity than various existing frequency- and spatial-domain fusion methods.


IEEE Access | 2017

Multi-Focus Image Fusion via Clustering PCA Based Joint Dictionary Learning

Yong Yang; Min Ding; Shuying Huang; Yue Que; Weiguo Wan; Mei Yang; Jun Sun

This paper presents a novel framework based on the nonsubsampled contourlet transform (NSCT) and sparse representation (SR) for fusing multi-focus images. In the proposed method, each source image is first decomposed with the NSCT to obtain one low-pass sub-image and a number of high-pass sub-images. Second, an SR-based scheme is put forward to fuse the low-pass sub-images of the source images; in this scheme, a joint dictionary is constructed by integrating many informative and compact sub-dictionaries, each learned by extracting a few principal component analysis bases from the jointly clustered patches of the low-pass sub-images. Third, we design a multi-scale morphology focus measure (MSMF) to synthesize the high-pass sub-images. The MSMF is built from multi-scale morphological structuring elements and morphological gradient operators, so it can effectively extract comprehensive gradient features from the sub-images; the "Max-MSMF" rule is then defined to fuse the high-pass sub-images. Finally, the fused image is reconstructed by performing the inverse NSCT on the merged low-pass and high-pass sub-images. The proposed method is tested on a series of multi-focus images and compared with several well-known fusion methods. Experimental results and analyses indicate that the proposed method is effective and outperforms some existing state-of-the-art methods.
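The MSMF and the "Max-MSMF" rule can be sketched with flat square structuring elements; the element shapes and scale set below are assumptions, not the paper's exact choices:

```python
import numpy as np

def dilate(img, r):
    """Grayscale dilation (local max) with a (2r+1)^2 flat structuring element."""
    p = np.pad(img, r, mode="reflect")
    k = 2 * r + 1
    return np.max(np.stack([p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                            for dy in range(k) for dx in range(k)]), axis=0)

def erode(img, r):
    """Grayscale erosion is dilation of the negated image."""
    return -dilate(-img, r)

def msmf(img, scales=(1, 2, 3)):
    """Multi-scale morphology focus measure: sum of morphological
    gradients (dilation minus erosion) over several structuring-element
    sizes."""
    img = img.astype(float)
    return sum(dilate(img, r) - erode(img, r) for r in scales)

def fuse_high(highA, highB):
    """'Max-MSMF' rule: take each coefficient from the source whose
    focus measure is larger at that position."""
    return np.where(msmf(highA) >= msmf(highB), highA, highB)
```

Summing gradients across scales is what lets the measure respond to both fine texture and broader structural edges.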


IEEE Access | 2017

Multi-Frame Super-Resolution Reconstruction Based on Gradient Vector Flow Hybrid Field

Shuying Huang; Jun Sun; Yong Yang; Yuming Fang; Pan Lin

In this paper, we propose a novel multi-frame super-resolution (SR) method that incorporates image enhancement and denoising into the SR process. For image enhancement, a gradient vector flow hybrid field (GVFHF) algorithm, which is robust to noise, is first designed to capture image edges more accurately. Then, by replacing the gradient of the anisotropic diffusion shock filter (ADSF) with the GVFHF, a GVFHF-based ADSF (GVFHF-ADSF) model is proposed that effectively achieves image denoising and enhancement. In addition, a difference-curvature-based spatial weight factor is defined in the GVFHF-ADSF model to adaptively balance denoising and enhancement in flat and edge regions. Finally, a GVFHF-ADSF-based multi-frame SR method is presented that employs the GVFHF-ADSF model as a regularization term, and the steepest descent algorithm is adopted to solve the inverse SR problem. Experimental results and comparisons with existing methods demonstrate that the proposed GVFHF-ADSF-based SR algorithm effectively suppresses both Gaussian and salt-and-pepper noise while enhancing the edges of the reconstructed image.
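The regularized-inverse-problem structure can be sketched as below. A plain Laplacian smoothness term stands in for the GVFHF-ADSF regularizer, decimation stands in for the full degradation model, and no inter-frame motion is modeled; all of these are simplifying assumptions:

```python
import numpy as np

def downsample(x, s=2):
    return x[::s, ::s]

def upsample(y, shape, s=2):
    """Adjoint of decimation: place samples on a zero grid."""
    x = np.zeros(shape)
    x[::s, ::s] = y
    return x

def sr_steepest_descent(frames, shape, lam=0.05, step=0.2, iters=200, s=2):
    """Steepest descent on sum_k ||D x - y_k||^2 + lam * ||grad x||^2,
    where D is s-fold decimation. The smoothness term stands in for the
    paper's GVFHF-ADSF regularization."""
    x = upsample(np.mean(frames, axis=0), shape, s)
    for _ in range(iters):
        grad = np.zeros(shape)
        for y in frames:
            grad += 2 * upsample(downsample(x, s) - y, shape, s)  # data term
        # Laplacian of x: (negative) gradient of the smoothness term
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0)
               + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        grad -= 2 * lam * lap
        x -= step * grad
    return x
```

The paper's method keeps this descent loop but swaps in the GVFHF-ADSF model as the regularization term, which is what provides the joint denoising and edge enhancement.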


IEEE Transactions on Image Processing | 2018

Robust Single-Image Super-Resolution Based on Adaptive Edge-Preserving Smoothing Regularization

Shuying Huang; Jun Sun; Yong Yang; Yuming Fang; Pan Lin; Yue Que

Collaboration


Dive into Shuying Huang's collaborations.

Top Co-Authors

Yong Yang | Jiangxi University of Finance and Economics
Yue Que | Jiangxi University of Finance and Economics
Pan Lin | Xi'an Jiaotong University
Jun Sun | Jiangxi University of Finance and Economics
Yuming Fang | Jiangxi University of Finance and Economics
Weiguo Wan | Chonbuk National University
Lei Wu | Jiangxi University of Finance and Economics
Mei Yang | Jiangxi University of Finance and Economics
Min Ding | Jiangxi University of Finance and Economics
Jiahua Wu | Jiangxi University of Finance and Economics