Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Zhenkun Wen is active.

Publication


Featured research published by Zhenkun Wen.


Physics in Medicine and Biology | 2014

Midsagittal plane extraction from brain images based on 3D SIFT.

Huisi Wu; Defeng Wang; Lin Shi; Zhenkun Wen; Zhong Ming

Midsagittal plane (MSP) extraction from 3D brain images is considered a promising technique for human brain symmetry analysis. In this paper, we present a fast and robust MSP extraction method based on the 3D scale-invariant feature transform (SIFT). Unlike existing brain MSP extraction methods, which mainly rely on gray-level similarity, 3D edge registration, or parameterized surface matching to determine the fissure plane, our proposed method is based on distinctive 3D SIFT features, in which the fissure plane is determined by parallel 3D SIFT matching and iterative least-median-of-squares plane regression. By considering the relative scales, orientations, and flipped descriptors between two 3D SIFT features, we propose a novel metric to measure the symmetry magnitude of 3D SIFT feature pairs. By clustering and indexing the extracted SIFT features using a k-dimensional tree (KD-tree) implemented on graphics processing units, we can match multiple pairs of 3D SIFT features in parallel and solve for the optimal MSP on the fly. The proposed method is evaluated on synthetic and in vivo datasets, covering both normal and pathological cases, and validated by comparisons with state-of-the-art methods. Experimental results demonstrate that our method achieves real-time performance with better accuracy, yielding an average yaw-angle error below 0.91° and an average roll-angle error of no more than 0.89°.
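The least-median-of-squares plane regression used to determine the fissure plane can be illustrated with a minimal pure-Python sketch. This is a simplified, hypothetical stand-in for the paper's iterative GPU pipeline: it fits a plane to the midpoints of matched symmetric feature pairs while tolerating the outliers produced by false matches.

```python
import random
import statistics

def fit_plane(p, q, r):
    """Plane through three points: returns a unit normal (a, b, c) and offset d
    with a*x + b*y + c*z + d = 0, or None for a degenerate (collinear) triple."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    norm = sum(c * c for c in n) ** 0.5
    if norm < 1e-12:
        return None
    n = tuple(c / norm for c in n)
    d = -sum(n[i] * p[i] for i in range(3))
    return n, d

def lmeds_plane(points, trials=200, seed=0):
    """Least-median-of-squares regression: repeatedly fit a plane to a random
    3-point sample and keep the fit with the smallest median squared residual,
    which tolerates up to roughly half the points being outliers."""
    rng = random.Random(seed)
    best, best_med = None, float("inf")
    for _ in range(trials):
        fit = fit_plane(*rng.sample(points, 3))
        if fit is None:
            continue
        n, d = fit
        resid = [(sum(n[i] * pt[i] for i in range(3)) + d) ** 2 for pt in points]
        med = statistics.median(resid)
        if med < best_med:
            best, best_med = fit, med
    return best

# Midpoints of matched symmetric feature pairs: mostly on the plane x = 1,
# plus two gross outliers from false matches.
pts = [(1.0, float(y), float(z)) for y in range(5) for z in range(5)]
pts += [(4.0, 2.0, 7.0), (-3.0, 1.0, 0.5)]
n, d = lmeds_plane(pts)
```

Despite the outliers, the recovered plane is x = 1 (normal along the x axis), which a plain least-squares fit over all points would miss.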


International Journal of Intelligent Systems | 2015

Automatic Leaf Recognition from a Big Hierarchical Image Database

Huisi Wu; Lei Wang; Feng Zhang; Zhenkun Wen

Automatic plant recognition has become a research focus and has received increasing attention in recent years. However, existing methods usually focus on leaf recognition from small databases containing no more than a few hundred species, and none has reported stable recognition accuracy or speed on a big image database. In this paper, we present a novel method for leaf recognition from a big hierarchical image database. Unlike existing approaches, our method combines the textural gradient histogram with the shape context to form a more distinctive feature for leaf recognition. To achieve efficient leaf image retrieval, we divide the big database into a set of subsets based on mean-shift clustering of the extracted features and build hierarchical k-dimensional trees (KD-trees) to index each cluster in parallel. Finally, the proposed parallel indexing and searching schemes are implemented with MapReduce architectures. Our method is evaluated with extensive experiments on databases of different sizes. Comparisons with state-of-the-art techniques were also conducted to validate the proposed method. Both visual and statistical results demonstrate its effectiveness.
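The KD-tree indexing behind the retrieval step can be sketched in a few lines. The following is a minimal 2-D illustration, not the paper's implementation (which indexes high-dimensional descriptors with one tree per cluster, in parallel): it builds a tree by median splits and answers a nearest-neighbor query with branch-and-bound search.

```python
import math

def build_kdtree(points, depth=0):
    """Recursively build a KD-tree over 2-D feature vectors by cycling
    the split axis and splitting at the median point."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, query, best=None):
    """Branch-and-bound nearest-neighbor search: descend the near side first,
    then visit the far side only if it could still hold a closer point."""
    if node is None:
        return best
    d = math.dist(node["point"], query)
    if best is None or d < best[1]:
        best = (node["point"], d)
    diff = query[node["axis"]] - node["point"][node["axis"]]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    if abs(diff) < best[1]:  # splitting plane closer than current best: check far side
        best = nearest(far, query, best)
    return best

features = [(2.0, 3.0), (5.0, 4.0), (9.0, 6.0), (4.0, 7.0), (8.0, 1.0), (7.0, 2.0)]
tree = build_kdtree(features)
point, dist = nearest(tree, (9.0, 2.0))
```

A query descends only a logarithmic number of nodes on average, which is what makes per-cluster indexing of a large database practical.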


Archive | 2014

Image Retrieval Based on Saliency Attention

Zhenkun Wen; Jinhua Gao; Ruijie Luo; Huisi Wu

Feature extraction is the most critical step in image retrieval. Among various local feature extraction methods, the scale-invariant feature transform (SIFT) has proven to be the most robust local invariant feature descriptor and is widely used in image matching and retrieval. However, the SIFT algorithm produces a large number of feature points, which makes it ill-suited for wide use in image retrieval. In this paper, we first propose a novel saliency measure algorithm to obtain the regions of interest in an image. SIFT features are then extracted only from the salient regions, reducing the number of SIFT features. Our algorithm also extracts color features from the salient regions, overcoming the SIFT algorithm's inability to capture color information. Experiments demonstrate that the integrated visual-saliency-based feature selection algorithm provides significant benefits in both retrieval accuracy and speed.


IEEE International Conference on Signal and Image Processing | 2016

Cartoon image segmentation based on improved SLIC superpixels and adaptive region propagation merging

Huisi Wu; Yilin Wu; Shenglong Zhang; Ping Li; Zhenkun Wen

This paper presents a novel algorithm for cartoon image segmentation based on simple linear iterative clustering (SLIC) superpixels and adaptive region propagation merging. To overcome the limitation of the original SLIC algorithm in conforming to image boundaries, we improve the quality of superpixel generation with a connectivity constraint. To achieve efficient segmentation from the superpixels, we employ an adaptive region propagation merging algorithm to obtain independently segmented objects. Compared with pixel-based segmentation algorithms and other superpixel-based segmentation methods, the proposed method is more effective and more efficient because it determines the propagation center adaptively. Experiments on a broad set of cartoon images show that our algorithm outperforms classical segmentation algorithms on both boundary-based and region-based criteria. Furthermore, the final segmentation results are consistent with human visual perception.
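The region merging stage can be illustrated with a toy sketch. The code below is a simplified stand-in, assuming fixed-threshold merging over a superpixel adjacency graph rather than the paper's adaptive propagation-center scheme: adjacent superpixels with similar mean intensity are fused using a union-find structure.

```python
class UnionFind:
    """Disjoint-set structure with path halving, used to track merged regions."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def merge_regions(means, edges, threshold):
    """Greedy region merging: fuse adjacent superpixels whose mean
    intensities differ by less than `threshold`; return a region label
    for each superpixel."""
    uf = UnionFind(len(means))
    for a, b in edges:
        if abs(means[a] - means[b]) < threshold:
            uf.union(a, b)
    return [uf.find(i) for i in range(len(means))]

# Five superpixels: 0-1-2 share similar intensity, 3-4 form a darker region.
means = [200, 198, 205, 40, 38]
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
labels = merge_regions(means, edges, threshold=15)
```

The bright and dark superpixels collapse into two labels; the paper's version additionally chooses which region to propagate from adaptively instead of scanning edges with one global threshold.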


Archive | 2014

Texture Smoothing Based on Adaptive Total Variation

Huisi Wu; Yilin Wu; Zhenkun Wen

Textures are ubiquitous and usually exhibit fine details as well as meaningful structures on surfaces. Many algorithms have been proposed for texture smoothing and structure extraction, but none achieves a fully satisfactory effect because the optimization procedure is very challenging. In this paper, we present a texture smoothing method based on a novel adaptive total variation framework. We propose using absolute variation to separate the important structures from the fine details of a texture. A sharp total variation (STV), based on absolute variation and inherent variation, is then used to reinforce structure edges during the smoothing process. Finally, by integrating the proposed STV with the existing relative total variation (RTV), we can not only smooth the fine details of textures but also preserve the salient structures. Experiments show that our method outperforms existing methods in both detail smoothing and salient-structure preservation.
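The relative total variation idea this framework builds on can be illustrated on a 1-D toy signal. The sketch below is not the paper's STV; it only shows how the ratio of total variation to inherent variation separates a structural edge (gradients agree in sign, ratio near 1) from oscillating texture (gradients cancel, ratio large).

```python
def gradients(signal):
    """Forward differences of a 1-D signal."""
    return [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]

def rtv_score(signal, eps=1e-3):
    """Relative total variation of a 1-D window: total variation D = sum|g|
    over inherent variation L = |sum g|. Close to 1 for a clean monotone
    edge, large for fine oscillating detail (eps avoids division by zero)."""
    g = gradients(signal)
    D = sum(abs(x) for x in g)
    L = abs(sum(g))
    return D / (L + eps)

edge = [0, 0, 1, 2, 8, 9, 9]          # a monotone step: structure
texture = [0, 3, 0, 3, 0, 3, 0]       # oscillation: fine detail
```

A smoothing energy that penalizes regions with a large D/L ratio flattens the oscillation while leaving the step intact, which is the mechanism RTV-style methods exploit.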


International Journal of Imaging Systems and Technology | 2013

Fast and Robust Symmetry Detection for Brain Images Based on Parallel Scale-Invariant Feature Transform Matching and Voting

Huisi Wu; Defeng Wang; Lin Shi; Zhenkun Wen; Zhong Ming

Symmetry analysis of brain images has been considered a promising technique for automatically extracting pathological brain slices in conventional scanning. In this article, we present a fast and robust symmetry detection method for automatically extracting the symmetry axis (fissure line) from a brain image. Unlike existing brain symmetry detection methods, which mainly rely on intensity or edges to determine the symmetry axis, our proposed method is based on a set of scale-invariant feature transform (SIFT) features, where the symmetry axis is determined by parallel matching and voting of distinctive features within the brain image. By clustering and indexing the extracted SIFT features using a GPU KD-tree, we can match multiple pairs of features in parallel based on a novel symmetric similarity metric, which combines the relative scales, orientations, and flipped descriptors to measure the magnitude of symmetry between each pair of features. Finally, the dominant symmetry axis in the brain image is determined by a parallel voting algorithm that accumulates the pair-wise symmetry scores in a Hough space. Our method was evaluated on both synthetic and in vivo datasets, including both normal and pathological cases. Comparisons with state-of-the-art methods were also conducted to validate the proposed method. Experimental results demonstrate that our method achieves real-time performance with higher accuracy than previous methods, yielding an average polar-angle error within 0.69° and an average radius error within 0.71 mm.
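The Hough-space voting step can be illustrated with a small sequential sketch (the paper's version runs in parallel and weights votes by a symmetry score; here each pair casts one vote). Each matched pair votes for the perpendicular bisector of the segment joining it, written in polar form x·cos(theta) + y·sin(theta) = r, and the most-voted bin gives the dominant symmetry axis.

```python
import math
from collections import Counter

def vote_symmetry_axis(pairs, r_step=1.0):
    """Accumulate one (theta, r) vote per matched feature pair: the axis
    candidate is the perpendicular bisector of the segment joining the pair.
    Returns the (theta_degrees, r_bin) cell with the most votes."""
    acc = Counter()
    for (x1, y1), (x2, y2) in pairs:
        mx, my = (x1 + x2) / 2, (y1 + y2) / 2
        theta = math.atan2(y2 - y1, x2 - x1)   # pair direction = bisector normal
        r = mx * math.cos(theta) + my * math.sin(theta)
        t_bin = round(math.degrees(theta)) % 180
        acc[(t_bin, round(r / r_step))] += 1
    return acc.most_common(1)[0][0]

# Feature pairs mirrored about the vertical line x = 50, plus one false match.
pairs = [((50 - d, y), (50 + d, y)) for d, y in [(10, 5), (20, 12), (5, 30), (15, 44)]]
pairs.append(((0.0, 0.0), (10.0, 33.0)))
t_bin, r_bin = vote_symmetry_axis(pairs)
```

The four true pairs all vote for theta = 0°, r = 50 (the line x = 50), so the single false match cannot displace the dominant axis; this outlier tolerance is the point of accumulator-based voting.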


Pacific Rim Conference on Multimedia | 2018

Learning Affective Features Based on VIP for Video Affective Content Analysis

Yingying Zhu; Min Tong; Tinglin Huang; Zhenkun Wen; Qi Tian

Video affective computing aims to recognize, interpret, process, and simulate human affect in videos from visual, textual, and auditory sources. An intrinsic challenge is how to extract effective representations for analyzing affect. In view of this problem, we propose a new video affective content analysis framework. We observe that only a few actors play an important role in a video, driving the trend of its emotional development. We provide a novel solution to distinguish the important one, which we call the very important person (VIP). Meanwhile, we design a novel keyframe selection strategy to select keyframes that include the VIPs. Furthermore, scale-invariant feature transform (SIFT) features corresponding to a set of patches are first extracted from each VIP keyframe, forming a SIFT feature matrix. Next, the feature matrix is fed to a convolutional neural network (CNN) to learn discriminative representations, making the CNN and SIFT features complement each other. Experimental results on two public audio-visual emotional datasets, the classical LIRIS-ACCEDE dataset and the PMSZU dataset we built, demonstrate the promising performance of the proposed method, which outperforms the other compared methods.


Computational Visual Media | 2018

Automatic texture exemplar extraction based on global and local textureness measures

Huisi Wu; Xiaomeng Lyu; Zhenkun Wen

Texture synthesis is widely used for modeling the appearance of virtual objects. However, traditional texture synthesis techniques emphasize creation of optimal target textures and pay insufficient attention to the choice of suitable input texture exemplars. Currently, obtaining texture exemplars from natural images is a labor-intensive task for artists, requiring careful photography and significant postprocessing. In this paper, we present an automatic texture exemplar extraction method based on global and local textureness measures. To improve the efficiency of dominant texture identification, we first perform Poisson disk sampling to randomly and uniformly crop patches from a natural image. For global textureness assessment, we use a GIST descriptor to distinguish textured patches from non-textured patches, in conjunction with SVM prediction. To identify real texture exemplars consisting solely of the dominant texture, we further measure the local textureness of a patch by extracting and matching the local structure (using the binary Gabor pattern (BGP)) and dominant color features (using color histograms) between a patch and its sub-regions. Finally, we obtain optimal texture exemplars by scoring and ranking the extracted patches using these global and local textureness measures. We evaluate our method on a variety of images with different kinds of textures. A convincing visual comparison with textures manually selected by an artist and a statistical study demonstrate its effectiveness.
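The Poisson disk sampling step can be sketched with Bridson's algorithm, which draws random points no closer together than a minimum radius; patch centers would then be cropped at these sample positions. This is a generic pure-Python implementation, not the paper's code.

```python
import math
import random

def poisson_disk(width, height, r, k=30, seed=0):
    """Bridson's Poisson disk sampling: random points at least r apart that
    cover the domain roughly uniformly. A background grid with cell size
    r/sqrt(2) makes the minimum-distance check constant time."""
    rng = random.Random(seed)
    cell = r / math.sqrt(2)
    cols, rows = int(width / cell) + 1, int(height / cell) + 1
    grid = [[None] * cols for _ in range(rows)]

    def grid_pos(p):
        return int(p[1] / cell), int(p[0] / cell)

    def fits(p):
        gy, gx = grid_pos(p)
        for y in range(max(gy - 2, 0), min(gy + 3, rows)):
            for x in range(max(gx - 2, 0), min(gx + 3, cols)):
                q = grid[y][x]
                if q is not None and math.dist(p, q) < r:
                    return False
        return True

    first = (rng.uniform(0, width), rng.uniform(0, height))
    samples, active = [first], [first]
    gy, gx = grid_pos(first)
    grid[gy][gx] = first
    while active:
        base = active[rng.randrange(len(active))]
        for _ in range(k):  # try k candidates in the annulus [r, 2r] around base
            ang = rng.uniform(0, 2 * math.pi)
            rad = rng.uniform(r, 2 * r)
            p = (base[0] + rad * math.cos(ang), base[1] + rad * math.sin(ang))
            if 0 <= p[0] < width and 0 <= p[1] < height and fits(p):
                samples.append(p)
                active.append(p)
                gy, gx = grid_pos(p)
                grid[gy][gx] = p
                break
        else:  # no candidate fit: base is saturated
            active.remove(base)
    return samples

pts = poisson_disk(100, 100, r=10)
```

Unlike a regular grid, the samples have no alignment artifacts, yet unlike uniform random sampling they never cluster, so cropped patches cover the image without heavy overlap.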


Pacific Rim Conference on Multimedia | 2017

Automatic Texture Exemplar Extraction Based on a Novel Textureness Metric.

Huisi Wu; Junrong Jiang; Ping Li; Zhenkun Wen

Traditional texture synthesis methods usually emphasize the final effect of the target textures; none of them focuses on automatic extraction of the source texture exemplar. In this paper, we present a novel textureness metric based on the GIST descriptor to accurately extract a texture exemplar from an arbitrary image containing texture regions. Our method emphasizes the importance of the exemplar for example-based texture synthesis and focuses on automatically extracting an ideal texture exemplar. To improve the efficiency of texture patch searching, we perform Poisson disk sampling to crop candidate exemplars randomly and uniformly from images. To improve the accuracy of texture recognition, we also train an SVM on the UIUC database to distinguish texture regions from non-texture regions. The proposed method is evaluated on a variety of images with different kinds of textures. Convincing visual and statistical results demonstrate its effectiveness.


Pacific Rim Conference on Multimedia | 2017

Repetitiveness Metric of Exemplar for Texture Synthesis

Lulu Yin; Hui Lai; Huisi Wu; Zhenkun Wen

Texture synthesis has become a well-established area; however, researchers have mostly been concerned with synthesis algorithms that achieve higher quality and better efficiency. We propose a repetitiveness metric for picking out an optimal texture exemplar with which to synthesize texture. Unlike conventional texture analysis methods, which emphasize feature analysis of the target textures, our method focuses on measuring the repetitiveness of the texture exemplar. For efficiency, we first perform Poisson disk sampling to extract unordered texture exemplars from the input image. Computing the normalized cross-correlation (NCC) via the fast Fourier transform (FFT) for each exemplar yields a set of correlation matrices, from which the repetitiveness metric assigns each exemplar a score. Thanks to the FFT, our method satisfies visual requirements and accomplishes high-quality work in a shorter time. Compelling visual results and computational complexity analyses validate our work.
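The NCC-based repetitiveness scoring can be illustrated in 1-D. The sketch below computes NCC directly rather than via the FFT (the FFT formulation only accelerates the same correlation) and scores a signal by its best self-correlation under circular shifts: a repetitive exemplar correlates near 1 at its period, an unstructured one does not.

```python
import math
import random

def ncc(a, b):
    """Normalized cross-correlation of two equal-length signals, in [-1, 1]."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

def repetitiveness(signal, shifts):
    """Score self-similarity: the best NCC between the signal and a
    circularly shifted copy of itself over the candidate shifts."""
    return max(ncc(signal, signal[s:] + signal[:s]) for s in shifts)

periodic = [0.0, 1.0, 2.0, 1.0] * 8                  # strongly repetitive
rng = random.Random(7)
aperiodic = [rng.uniform(-1, 1) for _ in range(32)]  # no repeating structure
```

The direct form shown here costs O(n) per shift; evaluating all shifts via the FFT brings the whole scan to O(n log n), which is the speedup the paper relies on.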

Collaboration


Dive into Zhenkun Wen's collaborations.

Top Co-Authors

Ping Li
University of Hong Kong

Jinhua Gao
Shenzhen Institute of Information Technology

Defeng Wang
The Chinese University of Hong Kong