Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Yong Gan is active.

Publication


Featured research published by Yong Gan.


Neurocomputing | 2015

Perceptual image quality assessment by independent feature detector

Huawen Chang; Qiuwen Zhang; Qinggang Wu; Yong Gan

The development of image processing technology has triggered an increasing demand for accurate methods of image quality assessment (IQA). Thus, creating reliable and accurate image quality metrics (IQMs) that are consistent with subjective human evaluation is an intense focus of research. Because the human visual system (HVS) is the ultimate receiver of images, modeling the HVS has been regarded as the most suitable way to achieve perceptual quality predictions. In fact, independent component analysis (ICA) provides a very good description of the receptive fields of neurons in the primary visual cortex, the most important part of the HVS. Inspired by this fact, a novel independent feature similarity (IFS) index is proposed for full-reference IQA. Moreover, ICA can simulate the color-opponent mechanism of the HVS, so IFS can effectively predict the quality of an image with color distortion. Because IFS uses only a part of the reference image information, it can also be considered a reduced-reference IQM. The proposed method is based on independent features acquired from a feature detector trained on samples of natural images by ICA. The computation of IFS consists of two components: a feature component and a luminance component. The feature component measures the structure and texture differences between two images, while the luminance component evaluates brightness distortions. Experimental results show that IFS has relatively low computational complexity and high correlation with subjective quality evaluation.
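The two-component structure described in the abstract (an ICA feature term plus a luminance term) can be sketched roughly as below. The pooling formula, the random stand-in for a trained ICA basis `W`, and the function names are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def ifs_index(ref, dist, W, eps=1e-6):
    """Simplified sketch of an independent-feature-similarity score.

    W stands in for a learned ICA feature detector (rows = independent
    filters) trained on natural-image patches; the similarity pooling
    below is a common (2ab)/(a^2+b^2) form, chosen for illustration.
    """
    # Feature component: compare ICA responses of reference and distorted patches
    fr, fd = W @ ref, W @ dist
    feat = (2 * fr * fd + eps) / (fr**2 + fd**2 + eps)
    # Luminance component: compare mean brightness of corresponding patches
    mr, md = ref.mean(axis=0), dist.mean(axis=0)
    lum = (2 * mr * md + eps) / (mr**2 + md**2 + eps)
    return float(np.mean(feat) * np.mean(lum))

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))   # stand-in for a trained ICA basis
patches = rng.random((16, 32))     # 16-dim patches, 32 of them
print(round(ifs_index(patches, patches, W), 6))  # → 1.0 (identical images)
```

An undistorted image scores exactly 1; any distortion pulls the score below 1, which is the behaviour a full-reference IQM needs.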


Neurocomputing | 2015

An active contour model based on fused texture features for image segmentation

Qinggang Wu; Yong Gan; Bin Lin; Qiuwen Zhang; Huawen Chang

Texture image segmentation plays an important role in various computer vision tasks. In this paper, a convex texture image segmentation model is proposed. First, Gabor and GLCM (gray-level co-occurrence matrix) texture features are extracted from the original image. Then, the two kinds of texture features are fused by concatenation to construct an effective, discriminative feature space. In the image segmentation step, a convex energy function is defined by taking the non-convex vector-valued model of Active Contours without Edges (ACWE) into a global minimization framework (GMAC). The proposed global minimization energy function with fused textures (GMFT) avoids the local minima that arise in minimizing the vector-valued ACWE model. In addition, a fast dual formulation is adopted to achieve efficient contour evolution. Experimental results on synthetic and natural animal images demonstrate that the proposed GMFT model obtains more satisfactory segmentation results than two state-of-the-art methods in terms of segmentation accuracy and efficiency.
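The feature-fusion step (concatenating per-pixel Gabor and GLCM feature maps into one feature space) can be sketched as below. The per-channel normalisation is our own assumption, added only to keep the two feature families commensurable:

```python
import numpy as np

def fuse_features(gabor_feats, glcm_feats):
    """Concatenate per-pixel Gabor and GLCM feature maps into a single
    discriminative feature space, as the GMFT model does before contour
    evolution. Min-max normalising each channel is an illustrative choice."""
    def norm(f):
        f = np.asarray(f, dtype=float)
        lo = f.min(axis=(0, 1))
        span = f.max(axis=(0, 1)) - lo
        return (f - lo) / np.where(span == 0, 1, span)
    return np.concatenate([norm(gabor_feats), norm(glcm_feats)], axis=-1)

gabor = np.random.rand(4, 4, 6)   # e.g. 6 Gabor responses per pixel
glcm = np.random.rand(4, 4, 4)    # e.g. 4 GLCM statistics per pixel
fused = fuse_features(gabor, glcm)
print(fused.shape)                # → (4, 4, 10)
```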


Multidimensional Systems and Signal Processing | 2017

Fast intra mode decision for depth coding in 3D-HEVC

Qiuwen Zhang; Yongshuang Yang; Huawen Chang; Weiwei Zhang; Yong Gan

The emerging 3D High Efficiency Video Coding (3D-HEVC) standard is an extension of the High Efficiency Video Coding (HEVC) standard for the compression of the multi-view texture video plus depth map format. Since depth maps have different statistical properties from texture video, various new intra tools have been added to 3D-HEVC depth coding. In current 3D-HEVC, the new intra tools are used together with the conventional HEVC intra prediction modes for depth coding. This technique achieves the highest possible coding efficiency, but leads to an extremely high computational complexity which limits the practical application of 3D-HEVC. In this paper, we propose a fast intra mode decision algorithm for depth coding in 3D-HEVC. The basic idea is to use depth map characteristics to predict the current depth prediction mode and skip specific depth intra modes rarely used in 3D-HEVC depth coding. Based on this analysis, two fast intra mode decision strategies are proposed: reducing the number of conventional intra prediction modes, and simplifying the depth modeling modes (DMMs). Experimental results demonstrate that the proposed algorithm saves 30% coding runtime on average while maintaining almost the same rate-distortion (RD) performance as the original 3D-HEVC encoder.
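The two strategies (shrinking the conventional mode list and skipping DMMs) can be sketched as a candidate-mode filter. The variance test and its threshold are illustrative stand-ins for the paper's depth-map-characteristic analysis:

```python
import numpy as np

# HEVC defines 35 conventional intra modes; 3D-HEVC adds depth-modelling
# modes. The labels below are simplified placeholders.
CONVENTIONAL = list(range(35))
DMMS = ["DMM1", "DMM4"]

def candidate_modes(depth_block, smooth_thresh=4.0):
    """Sketch: for a smooth depth block, test only a few conventional
    modes and skip the DMMs; for an edge block, keep the full set.
    The variance heuristic and threshold are assumptions."""
    if np.var(depth_block) < smooth_thresh:
        # Smooth region: Planar (0), DC (1), and two angular modes only
        return [0, 1, 10, 26]
    return CONVENTIONAL + DMMS   # edge region: full mode search

flat = np.full((8, 8), 128)
print(candidate_modes(flat))     # → [0, 1, 10, 26]
```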


Journal of Electronic Imaging | 2014

Early SKIP mode decision for three-dimensional high efficiency video coding using spatial and inter-view correlations

Qiuwen Zhang; Qinggang Wu; Xiaobing Wang; Yong Gan

In the test model of the high efficiency video coding (HEVC) standard-based three-dimensional (3D) video coding (3D-HEVC), variable-size motion estimation (ME) and disparity estimation (DE) are employed to select the best coding mode for each treeblock in the encoding process. This technique achieves the highest possible coding efficiency, but it brings extremely high computational complexity that limits the practical application of 3D-HEVC. An early SKIP mode decision algorithm based on spatial and inter-view correlations is proposed to reduce the computational complexity of the ME/DE procedures. The basic idea of the method is to use the spatial and inter-view properties of coding information in previously coded frames to predict the prediction mode of the current treeblock and skip unnecessary variable-size ME and DE early. Experimental results show that the proposed algorithm can significantly reduce the computational complexity of 3D-HEVC while maintaining nearly the same rate-distortion performance as the original encoder.
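The early-SKIP test can be sketched as a vote over previously coded neighbours; the unanimous-vote rule below is our simplification of the paper's spatial/inter-view correlation analysis:

```python
def early_skip(spatial_modes, interview_mode):
    """Sketch: if the spatially neighbouring treeblocks and the
    co-located treeblock in the previously coded view were all coded
    as SKIP, bypass variable-size ME/DE for the current treeblock.
    The unanimity rule is an illustrative assumption."""
    return all(m == "SKIP" for m in spatial_modes) and interview_mode == "SKIP"

print(early_skip(["SKIP", "SKIP", "SKIP"], "SKIP"))  # → True
print(early_skip(["SKIP", "INTER"], "SKIP"))         # → False
```

When the test fires, the encoder codes the treeblock as SKIP directly and saves the entire variable-size ME/DE search, which is where the complexity reduction comes from.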


Multidimensional Systems and Signal Processing | 2016

Fast motion and disparity estimation for HEVC based 3D video coding

Qiuwen Zhang; Huawen Chang; Qinggang Wu; Yong Gan

The emerging international standard for high efficiency video coding (HEVC)-based 3D video coding (3D-HEVC) is an extension of HEVC. In the test model of 3D-HEVC, variable-size motion estimation (ME) and disparity estimation (DE) are both employed to select the best coding mode for each treeblock in the encoding process. This technique achieves the highest possible coding efficiency, but it brings extremely high computational complexity which limits the practical application of 3D-HEVC. In this paper, a fast ME/DE algorithm based on inter-view and spatial correlations is proposed to reduce 3D-HEVC computational complexity. Since the multi-view videos represent the same scene and share similar characteristics, there is a high correlation among the coding information from inter-view prediction. In addition, homogeneous regions in texture video have a strong spatial correlation, so spatially neighboring treeblocks have similar coding information. Therefore, we can determine the ME search range and skip specific ME and DE steps rarely used in the previously coded view frames and spatially neighboring coding units. Experimental results demonstrate that the proposed algorithm can significantly reduce the computational complexity of 3D-HEVC encoding while maintaining almost the same rate-distortion performance.
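The search-range adaptation can be sketched as below: when the inter-view and spatial neighbours all carry small motion vectors, the search window shrinks. The window sizes and threshold are illustrative assumptions:

```python
def search_range(neighbor_mvs, base=64, small=8):
    """Sketch: derive the ME search range from the motion vectors of
    inter-view and spatially neighbouring treeblocks. Homogeneous,
    low-motion neighbourhoods allow a much smaller search window.
    The 8/64 window sizes are illustrative, not the paper's values."""
    if not neighbor_mvs:
        return base  # no coded neighbours yet: keep the full range
    max_mag = max(abs(x) + abs(y) for x, y in neighbor_mvs)
    return small if max_mag <= small else base

print(search_range([(1, 0), (2, 1)]))   # → 8  (calm neighbourhood)
print(search_range([(40, 12)]))         # → 64 (large motion nearby)
```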


Signal, Image and Video Processing | 2017

Fast inter-prediction mode decision algorithm for HEVC

Xinpeng Huang; Qiuwen Zhang; Xiaoxin Zhao; Weiwei Zhang; Yan Zhang; Yong Gan

High efficiency video coding (HEVC) is superior to previous video coding standards in compression performance, but introduces additional computational complexity. The complexity increases mainly due to the novel flexible partitioning scheme that splits inter-prediction mode partitions via exhaustive rate-distortion optimization (RDO). In this paper, a fast inter-prediction mode decision algorithm is proposed that combines adaptive threshold determination based on the quantization parameter with a fast inter-prediction mode partition decision. The proposed algorithm uses the edge information of the partition to simplify the RDO and thus accelerates the inter-prediction mode decision of the original HEVC encoder. Experimental results show that, compared to the original HEVC encoder, the proposed algorithm achieves a 39.5% coding time reduction with just a 1.97% bitrate increase on average under the random-access condition, and a 35.2% coding time reduction with just a 1.89% bitrate increase on average under the low-delay B condition.


Journal of Visual Communication and Image Representation | 2017

Fast depth map mode decision based on depth-texture correlation and edge classification for 3D-HEVC

Qiuwen Zhang; Na Zhang; Tao Wei; Kunqiang Huang; Xiaoliang Qian; Yong Gan

Highlights: A fast depth map mode decision algorithm for 3D-HEVC is proposed. The depth map-texture video correlation is exploited throughout fast mode detection. Edge classification is employed in the intra/inter prediction procedure. Experimental results demonstrate the effectiveness of the proposed algorithm.

The 3D extension of High Efficiency Video Coding (3D-HEVC) has been adopted as the emerging 3D video coding standard to support multi-view video plus depth map (MVD) compression. In the joint model of the 3D-HEVC design, the exhaustive mode decision must check all possible prediction modes and coding levels to find the one with the least rate-distortion cost in depth map coding. Furthermore, new coding tools (such as the depth-modeling mode (DMM) and segment-wise depth coding (SDC)) exploit the characteristics of the depth map to improve coding efficiency. These tools achieve the highest possible coding efficiency for depth maps, but also bring a significant computational complexity which limits 3D-HEVC in real-time applications. In this paper, we propose a fast depth map mode decision algorithm for 3D-HEVC that jointly uses the correlation between the depth map and texture video and the edge information of the depth map. Since the depth map and texture video represent the same scene at the same time instant (they have the same motion characteristics), it is not efficient to test all prediction modes and coding levels in depth map coding. Therefore, we can skip specific prediction modes and depth coding levels rarely used in the corresponding texture video. Meanwhile, the depth map is mainly characterized by sharp object edges and large areas of nearly constant regions. By fully exploiting these characteristics, we can skip prediction modes that are rarely used in homogeneous regions based on the edge classification. Experimental results show that the proposed algorithm achieves considerable encoding-time savings while maintaining almost the same rate-distortion (RD) performance as the original 3D-HEVC encoder.
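The edge classification and the two skipping rules can be sketched as below. The range-based edge test, its threshold, and the mode labels are illustrative assumptions standing in for the paper's analysis:

```python
import numpy as np

def classify_depth_block(block, edge_thresh=10):
    """Sketch of the edge classification: depth maps consist of sharp
    object edges plus near-constant regions, so a simple intensity-range
    test separates the two. The threshold is an illustrative assumption."""
    return "edge" if block.max() - block.min() > edge_thresh else "homogeneous"

def candidate_modes(block, texture_skip):
    """texture_skip carries modes already ruled out by the co-located
    texture treeblock (the depth-texture correlation); homogeneous blocks
    additionally drop modes rarely useful in constant regions."""
    modes = {"Planar", "DC", "Angular", "DMM"} - set(texture_skip)
    if classify_depth_block(block) == "homogeneous":
        modes -= {"DMM", "Angular"}
    return sorted(modes)

flat = np.full((8, 8), 90)
print(candidate_modes(flat, []))   # → ['DC', 'Planar']
```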


Neurocomputing | 2016

An efficient depth map filtering based on spatial and texture features for 3D video coding

Qiuwen Zhang; Ming Chen; Haodong Zhu; Xiaobing Wang; Yong Gan

A depth map is used to synthesize virtual views in the 3D video depth-enhanced format. In contrast to texture video, a depth map is characterized by piecewise-smooth regions bounded by sharp object boundaries. Compressing depth maps with conventional video coding standards often introduces coding artifacts along depth boundaries, which severely affect the rendered view quality. To address this problem, we propose an efficient depth map filtering for depth map compression in the 3D high efficiency video coding (HEVC) process. The proposed depth map filtering considers spatial resolution, texture boundary similarity, and coding-artifact features. It consists of a newly designed nonlinear down/up-sampling filter and a depth reconstruction multilateral filter. First, the depth map is down-sampled and coded with 3D video HEVC. Then, on the decoding side, the depth map is up-sampled. Finally, the depth reconstruction multilateral filter aligns the object boundaries to correct coding artifacts at the edges of the decoded depth image. Experimental results demonstrate that the proposed depth map filtering can significantly reduce the bit rate while achieving better rendered-view quality compared with the 3D video HEVC test model (3DV-HTM).
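The nonlinear down-sampling step can be sketched as below. Taking the median of each block is our illustrative stand-in for the paper's nonlinear filter; the point it demonstrates is that a rank-order filter preserves sharp depth edges where plain averaging would blur them:

```python
import numpy as np

def nonlinear_downsample(depth, factor=2):
    """Sketch of nonlinear down-sampling before coding: each output
    sample is the median of its factor x factor block, so a sharp
    object boundary stays a sharp boundary (no in-between depths,
    which would ruin view synthesis)."""
    h, w = depth.shape
    blocks = depth.reshape(h // factor, factor, w // factor, factor)
    return np.median(blocks, axis=(1, 3))

depth = np.zeros((4, 4))
depth[:, 2:] = 100          # sharp vertical depth edge
print(nonlinear_downsample(depth))
```

The down-sampled map keeps the 0/100 edge exactly; mean-based down-sampling would have produced intermediate depth values along the boundary.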


Journal of Real-time Image Processing | 2017

Efficient multiview video plus depth coding for 3D-HEVC based on complexity classification of the treeblock

Qiuwen Zhang; Kunqiang Huang; Xiao Wang; Bin Jiang; Yong Gan

High efficiency video coding standard-based 3D video coding (3D-HEVC) has been extended from HEVC to improve the coding efficiency of multiview video plus depth (MVD). Similar to the joint model of HEVC, a computationally expensive exhaustive mode decision is performed to find the least rate-distortion cost for each treeblock in 3D-HEVC. Furthermore, additional coding tools have been added to 3D-HEVC to improve the coding efficiency of the dependent texture video and the depth map. Those tools achieve the highest possible coding efficiency, but also bring a significant computational complexity which limits 3D-HEVC in real-time applications. To reduce computational complexity, we propose an efficient multiview video plus depth coding algorithm for 3D-HEVC that adaptively uses a complexity classification of the treeblock. The coding-complexity model of a treeblock is first analyzed according to the prediction mode and coding mode of the corresponding treeblocks in the reference views. Based on this complexity classification model, we propose two efficient low-complexity approaches: fast mode-size decision and adaptive motion search range selection. Extensive experimental results demonstrate that the proposed MVD coding algorithm achieves an average computational saving of about 60.1% with negligible rate-distortion performance loss compared with the original 3D-HEVC encoder.
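The complexity classification can be sketched as a rule over the modes and coding-unit depths of the corresponding treeblocks in the already-coded reference views. The class names and voting rules below are illustrative assumptions:

```python
def classify_treeblock(ref_modes, ref_depths):
    """Sketch of the complexity-classification model: prediction modes
    and CU depths of the corresponding treeblocks in the reference views
    vote the current treeblock into a complexity class, which then gates
    the mode-size decision and the motion search range."""
    if all(m == "SKIP" for m in ref_modes) and max(ref_depths) == 0:
        return "simple"    # fast mode-size decision: test 2Nx2N only
    if max(ref_depths) >= 2:
        return "complex"   # full partition search, wide motion range
    return "medium"        # reduced partition set, moderate range

print(classify_treeblock(["SKIP", "SKIP"], [0, 0]))  # → simple
```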


Computational Intelligence and Neuroscience | 2017

MultiP-Apo: A Multilabel Predictor for Identifying Subcellular Locations of Apoptosis Proteins

Xiao Wang; Hui Li; Rong Wang; Qiuwen Zhang; Weiwei Zhang; Yong Gan

Apoptosis proteins play an important role in the mechanism of programmed cell death. Predicting the subcellular localization of apoptosis proteins is an essential step toward understanding their functions and identifying drug targets. Many computational prediction methods have been developed for apoptosis protein subcellular localization. However, existing works only consider proteins that have a single location; proteins with multiple locations are either ignored or assumed not to exist when constructing prediction models, so these methods cannot predict all the locations of multi-location apoptosis proteins. To address this problem, this paper proposes a novel multilabel predictor named MultiP-Apo, which can predict not only apoptosis proteins with a single subcellular location but also those with multiple subcellular locations. Specifically, given a query protein, a GO-based feature extraction method is used to extract its feature vector. Subsequently, the GO feature vector is classified by a new multilabel classifier based on label-specific features. It is the first multilabel predictor established for identifying the subcellular locations of multi-location apoptosis proteins. As an initial study, MultiP-Apo achieves an overall accuracy of 58.49% by the jackknife test, which indicates that the proposed predictor may become a very useful high-throughput tool in this area.
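The multilabel idea (one score per location, so a protein can receive several locations at once) can be sketched as below. The per-label weight vectors stand in for MultiP-Apo's label-specific features; the sigmoid scoring, threshold, and toy data are illustrative assumptions:

```python
import numpy as np

def predict_locations(go_vector, label_weights, labels, thresh=0.5):
    """Sketch of a multilabel subcellular-location predictor: a GO-term
    feature vector is scored against one weight vector per label, and
    every label whose score clears the threshold is emitted, so a
    protein can receive multiple locations (or none)."""
    scores = 1 / (1 + np.exp(-(label_weights @ go_vector)))  # per-label sigmoid
    return [lab for lab, s in zip(labels, scores) if s >= thresh]

labels = ["cytoplasm", "membrane", "nucleus"]
W = np.array([[2.0, 0.0],    # toy label-specific weights
              [1.5, 1.5],
              [-2.0, 0.0]])
x = np.array([1.0, 0.5])     # toy GO feature vector
print(predict_locations(x, W, labels))   # → ['cytoplasm', 'membrane']
```

Contrast with a single-label classifier, which would be forced to pick exactly one of the three locations and so could never be fully correct on a two-location protein.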

Collaboration


Dive into Yong Gan's collaborations.

Top Co-Authors

Qiuwen Zhang, Zhengzhou University of Light Industry
Huawen Chang, Zhengzhou University of Light Industry
Qinggang Wu, Zhengzhou University of Light Industry
Kunqiang Huang, Zhengzhou University of Light Industry
Xiao Wang, Zhengzhou University of Light Industry
Xinpeng Huang, Zhengzhou University of Light Industry
Ming Chen, Zhengzhou University of Light Industry
Weiwei Zhang, Zhengzhou University of Light Industry
Yongshuang Yang, Zhengzhou University of Light Industry
Xiaoxin Zhao, Zhengzhou University of Light Industry