Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Xiaonan Luo is active.

Publication


Featured research published by Xiaonan Luo.


IEEE Transactions on Multimedia | 2013

Edge-Preserving Texture Suppression Filter Based on Joint Filtering Schemes

Zhuo Su; Xiaonan Luo; Zhengjie Deng; Yun Liang; Zhen Ji

Obtaining a texture-smoothing and edge-preserving filtered output is important for image decomposition. Although edges and textures look clearly different to human vision, distinguishing them automatically is a difficult task, because they produce similar intensity differences and gradient responses, and state-of-the-art edge-preserving smoothing (EPS) based decomposition approaches struggle to obtain satisfactory results. We propose a novel edge-preserving texture suppression filter that exploits the joint bilateral filter as a bridge to achieve both texture smoothing and edge preservation. We develop iterative asymmetric sampling and a local linear model to produce a degenerative image that suppresses texture, and apply an edge correction operator to preserve edges. An efficient accelerated implementation is introduced to improve the filtering performance. Experiments demonstrate that our filter produces satisfactory outputs with both texture-smoothing and edge-preserving properties when compared with other popular EPS approaches in signal, visual, and timing analysis. Finally, we extend our filter to a variety of image processing applications.
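For readers unfamiliar with the building block this filter rests on, here is a minimal joint (cross) bilateral filter in NumPy: spatial weights come from pixel distance, range weights from a separate guidance image, so edges present in the guide survive while other variation is smoothed. This is only a sketch of that building block; the paper's iterative asymmetric sampling, local linear model, edge correction, and acceleration are not reproduced, and the window size and sigmas are arbitrary choices.

```python
import numpy as np

def joint_bilateral_filter(image, guide, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Toy joint (cross) bilateral filter: spatial weights come from pixel
    distance, range weights from the *guidance* image, so edges present in
    the guide are preserved while other variation is smoothed."""
    h, w = image.shape
    img_p = np.pad(image, radius, mode="reflect")
    gui_p = np.pad(guide, radius, mode="reflect")

    # Precompute the spatial Gaussian over the (2r+1) x (2r+1) window.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))

    out = np.zeros_like(image, dtype=np.float64)
    for i in range(h):
        for j in range(w):
            win_img = img_p[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            win_gui = gui_p[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weights measured on the guidance image, not the input.
            rng = np.exp(-(win_gui - guide[i, j])**2 / (2.0 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = np.sum(wgt * win_img) / np.sum(wgt)
    return out

# Example: smooth a noisy image while keeping a step edge defined by the guide.
guide = np.repeat(np.linspace(0.0, 1.0, 64)[None, :] > 0.5, 64, axis=0).astype(float)
noisy = guide + 0.1 * np.random.randn(64, 64)
smoothed = joint_bilateral_filter(noisy, guide)
```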


International Conference on Computer Graphics and Interactive Techniques | 2008

Deducing interpolating subdivision schemes from approximating subdivision schemes

Shujin Lin; Fang You; Xiaonan Luo; Zheng Li

In this paper we describe a method for directly deducing new interpolating subdivision masks for meshes from corresponding approximating subdivision masks. The purpose is to avoid complex computation when producing interpolating subdivision masks at extraordinary vertices. The method can be applied to produce new interpolating subdivision schemes, overcome some limitations of existing interpolating subdivision schemes, and satisfy some application needs. As a case study, a new interpolating subdivision scheme for polygonal meshes is produced by deduction from the Catmull-Clark subdivision scheme. It operates directly on polygonal meshes, which resolves a limitation of Kobbelt's interpolating subdivision scheme. A new √3 interpolating subdivision scheme for triangle meshes and a new √2 interpolating subdivision scheme for quadrilateral meshes are also presented, deduced from the √3 and 4-8 subdivision schemes respectively. Both produce C1-continuous limit surfaces and avoid the blemish in existing interpolating √3 and √2 subdivision masks, where the weight coefficients at extraordinary vertices cannot be described explicitly by formulas. In addition, by adding a parameter that controls the transition from approximation to interpolation, they can produce surfaces intervening between approximating and interpolating ones, which can be used to solve the popping effect problem when switching between meshes at different levels of resolution. They can also force surfaces to interpolate chosen vertices.
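A hedged curve-case illustration of the deduction idea: in one dimension, Catmull-Clark reduces to cubic B-spline subdivision, and "pushing back" the displacement of the old vertices (and spreading it to the new edge points) turns that approximating step into the classical four-point interpolating step. The sketch below shows only this well-known curve analog; the paper's mesh masks, extraordinary-vertex rules, and transition parameter are not modeled.

```python
import numpy as np

def bspline_step(P):
    """One step of (approximating) cubic B-spline subdivision on a closed
    polygon. P has shape (n, d)."""
    Pm, Pp = np.roll(P, 1, axis=0), np.roll(P, -1, axis=0)
    even = (Pm + 6.0 * P + Pp) / 8.0          # repositioned old vertices
    odd = (P + Pp) / 2.0                      # new edge midpoints
    return even, odd

def deduced_interpolating_step(P):
    """Deduce an interpolating step from the approximating one by pushing back
    the displacement of the old vertices and distributing it to the edge points.
    In the curve case this reproduces the classical four-point scheme."""
    even, odd = bspline_step(P)
    d = P - even                              # how far each old vertex moved
    even_i = even + d                         # old vertices return to P (interpolation)
    odd_i = odd + (d + np.roll(d, -1, axis=0)) / 2.0
    # Interleave old-vertex and edge points into the refined polygon.
    out = np.empty((2 * len(P), P.shape[1]))
    out[0::2], out[1::2] = even_i, odd_i
    return out

# Example: refine a closed square a few times; the original corners stay fixed.
poly = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
for _ in range(4):
    poly = deduced_interpolating_step(poly)
```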


Computer Graphics Forum | 2008

Progressive Interpolation based on Catmull-Clark Subdivision Surfaces

Zhongxian Chen; Xiaonan Luo; Le Tan; Binghong Ye; Jiapeng Chen

We introduce a scheme for constructing a Catmull‐Clark subdivision surface that interpolates the vertices of a quadrilateral mesh with arbitrary topology. The basic idea here is to progressively modify the vertices of an original mesh to generate a new control mesh whose limit surface interpolates all vertices in the original mesh. The scheme is applicable to meshes with any size and any topology, and it has the advantages of both a local scheme and a global scheme.
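As a sketch of the progressive idea in its simplest setting, the snippet below runs the analogous iteration for closed cubic B-spline curves (the curve-case counterpart of a Catmull-Clark limit surface): each pass adds the residual between the data points and the current limit positions back onto the control points, so the limit curve converges to an interpolant. It is an illustrative analog, not the paper's surface implementation.

```python
import numpy as np

def limit_positions(P):
    """Limit positions of closed cubic B-spline control points
    (the curve analog of evaluating a Catmull-Clark limit surface)."""
    return (np.roll(P, 1, axis=0) + 4.0 * P + np.roll(P, -1, axis=0)) / 6.0

def progressive_interpolation(V, iterations=50):
    """Progressively move the control points so the limit curve interpolates
    the data points V: each pass adds the residual V - limit(P) back to P."""
    P = V.copy()
    for _ in range(iterations):
        P += V - limit_positions(P)
    return P

V = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 2.0], [0.0, 2.0]])
P = progressive_interpolation(V)
print(np.max(np.abs(limit_positions(P) - V)))   # residual shrinks toward 0
```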


IEEE Transactions on Multimedia | 2014

Corruptive Artifacts Suppression for Example-Based Color Transfer

Zhuo Su; Kun Zeng; Li Liu; Bo Li; Xiaonan Luo

Example-based color transfer is a critical operation in image editing, but it easily suffers from corruptive artifacts in the mapping process. In this paper, we propose a novel unified color transfer framework with corruptive artifact suppression, which performs iterative probabilistic color mapping together with a self-learning filtering scheme and a multiscale detail manipulation scheme while minimizing the normalized Kullback-Leibler distance. First, iterative probabilistic color mapping is applied to construct the mapping relationship between the reference and target images. Then, a self-learning filtering scheme is applied in the transfer process to prevent artifacts and extract details. The transferred output and the extracted multi-level details are integrated through the measurement minimization to yield the final result. Our framework achieves grain suppression, color fidelity, and detail preservation seamlessly. For demonstration, a series of objective and subjective measurements are used to evaluate the color transfer quality. Finally, a few extended applications are implemented to show the applicability of this framework.
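For context, the sketch below shows the classical per-channel statistics matching (in the spirit of Reinhard et al.) that example-based color transfer generalizes. It is a deliberately simple baseline, not the paper's iterative probabilistic mapping or self-learning filtering, and it makes the artifact problem easy to provoke because colors are remapped with no regard for grain or detail.

```python
import numpy as np

def stats_transfer(target, reference, eps=1e-6):
    """Baseline example-based color transfer: match per-channel mean and
    standard deviation of the target image to those of the reference.
    Both images are float arrays of shape (H, W, 3); ideally in a
    decorrelated color space (e.g. Lab) rather than raw RGB."""
    t_mu, t_sigma = target.mean(axis=(0, 1)), target.std(axis=(0, 1))
    r_mu, r_sigma = reference.mean(axis=(0, 1)), reference.std(axis=(0, 1))
    out = (target - t_mu) / (t_sigma + eps) * r_sigma + r_mu
    return np.clip(out, 0.0, 1.0)

# Example with random stand-in images; real use would load and convert photos.
rng = np.random.default_rng(0)
target = rng.random((120, 160, 3))
reference = rng.random((90, 140, 3))
result = stats_transfer(target, reference)
```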


Computer Graphics Forum | 2008

Interpolatory and Mixed Loop Schemes

Zhuo Shi; Shujin Lin; Xiaonan Luo; Renhong Wang

This paper presents a new interpolatory Loop scheme and a unified, mixed interpolatory and approximating subdivision scheme for triangular meshes. The former is C1 continuous, like the modified Butterfly scheme, and performs better on some complex models. The latter can be used to solve the “popping effect” problem when switching between meshes at different levels of resolution. The scheme generates surfaces coincident with the Loop subdivision scheme in the limit when the coefficient k equals 0; when k equals 1, it becomes a new interpolatory subdivision scheme. Eigen-structure analysis demonstrates that subdivision surfaces generated using the new scheme are C1 continuous. All of this is achieved only by changing the value of the parameter k. The method is simple and requires no construction or solution of equation systems. It can achieve local interpolation and solve the “popping effect” problem, which are its advantages over the modified Butterfly scheme.
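To make the role of the parameter k concrete, here is a curve-case stand-in for the mixed scheme: blending the masks of an approximating scheme (cubic B-spline) with those of an interpolating one (the four-point scheme) by k gives a refinement rule that morphs continuously from approximation (k = 0) to interpolation (k = 1). The actual mixed Loop rules for triangle meshes are not reproduced here.

```python
import numpy as np

def mixed_step(P, k):
    """One refinement step of a mixed scheme on a closed polygon:
    k = 0 gives the approximating cubic B-spline rules, k = 1 the
    interpolating four-point rules, and values in between blend the
    two masks (a curve-case stand-in for the mixed Loop scheme)."""
    Pm, Pp, Pp2 = np.roll(P, 1, axis=0), np.roll(P, -1, axis=0), np.roll(P, -2, axis=0)
    even_approx = (Pm + 6.0 * P + Pp) / 8.0
    even_interp = P
    odd_approx = (P + Pp) / 2.0
    odd_interp = (-Pm + 9.0 * P + 9.0 * Pp - Pp2) / 16.0
    even = (1.0 - k) * even_approx + k * even_interp
    odd = (1.0 - k) * odd_approx + k * odd_interp
    out = np.empty((2 * len(P), P.shape[1]))
    out[0::2], out[1::2] = even, odd
    return out

# Sweeping k from 0 to 1 morphs the refined polygon from approximating to
# interpolating, which is how popping between resolution levels can be eased.
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
approx_like = mixed_step(square, 0.0)
interp_like = mixed_step(square, 1.0)
```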


Multimedia Tools and Applications | 2014

Mesh-based anisotropic cloth deformation for virtual fitting

Li Liu; Ruomei Wang; Zhuo Su; Xiaonan Luo; Chengying Gao

Motivated by the anisotropic behavior of most real-world cloth, this paper proposes a novel dynamic cloth simulation method for virtual fitting based on a geometric deformation energy model that preserves geometric features well and reproduces cloth behaviors with various material effects. We first construct an objective deformation energy with terms for vertex position, edge length, dihedral angle, and gravity, and then solve it numerically in the least-squares sense. To establish the dynamic cloth deformation solution, we further analyze the relationship between the weights of the geometric energy terms and material properties by comparison with photographs of typical real fabrics. A dynamic weight-regulation measure then models similar anisotropic cloth behaviors for virtual fitting applications in the digital home. The experiments show that our approach effectively provides richer cloth deformation results with distinctive material effects.
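A minimal sketch of the geometric-energy viewpoint, under heavy simplification: only an edge-length term and a gravity term are kept, minimized by plain gradient descent rather than the paper's least-squares solver, and the dihedral-angle and vertex-position terms are omitted. Varying the weights stands in for the material-dependent weight regulation described above; all parameter values are arbitrary.

```python
import numpy as np

def simulate_cloth(rest, edges, pinned, w_edge=1.0, w_gravity=0.05,
                   steps=400, lr=0.05):
    """Minimize a weighted sum of an edge-length preservation energy and a
    gravity energy by gradient descent. Varying w_edge vs. w_gravity mimics
    stiffer or more drapey 'materials'.
    rest: (n, 3) rest positions; edges: list of (i, j); pinned: fixed indices."""
    x = rest.copy()
    rest_len = {e: np.linalg.norm(rest[e[0]] - rest[e[1]]) for e in edges}
    g = np.array([0.0, 0.0, -1.0])            # gravity direction
    for _ in range(steps):
        grad = np.zeros_like(x)
        for (i, j) in edges:
            d = x[i] - x[j]
            ln = np.linalg.norm(d) + 1e-12
            # Gradient of w_edge * (|xi - xj| - rest)^2 with respect to xi.
            gvec = 2.0 * w_edge * (ln - rest_len[(i, j)]) * d / ln
            grad[i] += gvec
            grad[j] -= gvec
        grad += -w_gravity * g                # gradient of the gravity potential
        grad[pinned] = 0.0                    # pinned vertices do not move
        x -= lr * grad
    return x

# Example: a 4x4 patch of cloth pinned at two corners.
n = 4
grid = np.array([[i, j, 0.0] for i in range(n) for j in range(n)], dtype=float)
edges = [(i * n + j, i * n + j + 1) for i in range(n) for j in range(n - 1)]
edges += [(i * n + j, (i + 1) * n + j) for i in range(n - 1) for j in range(n)]
deformed = simulate_cloth(grid, edges, pinned=[n - 1, n * n - 1])
```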


IET Image Processing | 2013

Optimised image retargeting using aesthetic-based cropping and scaling

Yun Liang; Zhuo Su; Chuntao Wang; Dong Wang; Xiaonan Luo

Image retargeting is a critical technique for displaying images on devices with different resolutions. This study presents a new image retargeting algorithm based on aesthetic-based cropping and scaling. A composite measurement is first constructed under the guidelines of compositional aesthetics in photography. An aesthetic-based cropping is proposed to yield an optimal candidate retargeted image with the maximum aesthetic value computed via the constructed composite measurement. The optimal candidate is then uniformly scaled to obtain the retargeted image of the target size. Subjective and objective assessments demonstrate that the proposed scheme significantly improves the aesthetics of retargeted images while preserving the important objects. It also achieves better performance in terms of aesthetics than a number of conventional image retargeting approaches.
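A rough sketch of the crop-then-scale pipeline, with a generic saliency sum standing in for the paper's composite aesthetic measurement: slide the largest window of the target aspect ratio over the image, keep the most salient position, then scale uniformly. The saliency map, search stride, and nearest-neighbour scaling are all assumptions made for brevity.

```python
import numpy as np

def retarget(image, saliency, target_w, target_h):
    """Simplified crop-then-scale retargeting: slide the largest window with
    the target aspect ratio over the image, keep the position retaining the
    most saliency, then uniformly scale that crop to the target size."""
    h, w = saliency.shape
    # Largest crop with the target aspect ratio that fits inside the image.
    if w / h > target_w / target_h:
        ch, cw = h, int(h * target_w / target_h)
    else:
        cw, ch = w, int(w * target_h / target_w)
    best, best_score = (0, 0), -1.0
    for y in range(0, h - ch + 1, 4):
        for x in range(0, w - cw + 1, 4):
            score = saliency[y:y + ch, x:x + cw].sum()
            if score > best_score:
                best, best_score = (y, x), score
    y, x = best
    crop = image[y:y + ch, x:x + cw]
    # Uniform scaling via nearest-neighbour index mapping (no external deps).
    ys = np.arange(target_h) * ch // target_h
    xs = np.arange(target_w) * cw // target_w
    return crop[ys][:, xs]

# Hypothetical usage: the saliency map could come from any saliency detector.
img = np.random.rand(240, 320, 3)
sal = np.random.rand(240, 320)
small = retarget(img, sal, target_w=120, target_h=160)
```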


The Visual Computer | 2016

A 3D model perceptual feature metric based on global height field

Yihui Guo; Shujin Lin; Zhuo Su; Xiaonan Luo; Ruomei Wang; Yang Kang

The human visual attention system tends to be attracted to perceptual feature points on 3D model surfaces. However, purely geometry-based feature metrics may be insufficient to extract perceptual features, because they tend to detect local structural details. Intuitively, the perceptual importance of a vertex is associated with the height of its position above a datum plane derived from the original model. We therefore propose a novel and straightforward method to extract perceptually important points based on a global height field. First, we construct a spectral domain using the Laplace–Beltrami operator and perform spectral synthesis to reconstruct a rough approximation of the original model from its low-frequency coefficients, using it as the 3D datum plane. Then, to build the global height field, we calculate the Euclidean distance between each vertex position on the original surface and its counterpart on the 3D datum plane. Finally, we set a threshold to extract perceptual feature vertices. We apply our technique to several 3D mesh models and compare our algorithm with six state-of-the-art interest-point detection approaches. Experimental results demonstrate that our algorithm can accurately capture perceptually important points on 3D models of arbitrary topology.
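A compact sketch of the global-height-field idea, with a uniform graph Laplacian standing in for the Laplace–Beltrami operator and an arbitrary quantile threshold: reconstruct a smooth datum shape from the lowest-frequency eigenvectors, measure each vertex's distance to it, and keep the farthest vertices. It is an illustration of the pipeline, not the paper's exact operator or threshold.

```python
import numpy as np

def perceptual_feature_points(verts, adjacency, k_low=8, keep_ratio=0.1):
    """Flag perceptually important vertices: build a combinatorial graph
    Laplacian (stand-in for Laplace-Beltrami), reconstruct a smooth 'datum'
    shape from its k_low lowest-frequency eigenvectors, and keep the vertices
    farthest from that datum."""
    n = len(verts)
    A = np.zeros((n, n))
    for i, j in adjacency:                    # adjacency: undirected edges (i, j)
        A[i, j] = A[j, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A            # combinatorial Laplacian
    # Lowest-frequency eigenvectors span the smooth part of the geometry.
    _, U = np.linalg.eigh(L)
    B = U[:, :k_low]
    datum = B @ (B.T @ verts)                 # spectral low-pass reconstruction
    height = np.linalg.norm(verts - datum, axis=1)   # global height field
    threshold = np.quantile(height, 1.0 - keep_ratio)
    return np.nonzero(height >= threshold)[0]

# Toy usage on a tetrahedron (every vertex is equally far from the smooth datum).
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(perceptual_feature_points(verts, edges, k_low=2, keep_ratio=0.5))
```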


Journal of the American Medical Informatics Association | 2016

A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos.

Baoquan Zhao; Songhua Xu; Shujin Lin; Xiaonan Luo; Lian Duan

OBJECTIVE: Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today's keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users' information seeking from large quantities of biomedical OER videos by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly.

MATERIALS AND METHODS: The authors collected and processed around 25 000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for the experiment. For each video, semantic clues are first extracted automatically by computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively.

RESULTS: The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of a modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from, and content exploration among, massive collections of biomedical OER videos.

CONCLUSION: Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable information, and intuitively and conveniently preview the essential content of a single video or a collection of videos.
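As a toy illustration of the retrieval side of such a pipeline (not the authors' actual stack, which uses a metadata database and a high-performance text search engine), the snippet below builds a tiny inverted index over hypothetical per-segment text clues so that a query lands on specific video segments rather than whole videos.

```python
from collections import defaultdict

# Hypothetical miniature of the indexing/retrieval idea: each video segment's
# extracted text clues (speech transcript, slide text, ...) are indexed so a
# query can jump straight to the relevant segments.
segments = [
    {"video": "v001", "start": 0,   "clues": "introduction to gene expression"},
    {"video": "v001", "start": 310, "clues": "rna polymerase transcription steps"},
    {"video": "v002", "start": 95,  "clues": "protein folding energy landscape"},
]

index = defaultdict(set)
for sid, seg in enumerate(segments):
    for term in seg["clues"].split():
        index[term].add(sid)

def search(query):
    """Return segments whose clues contain every query term."""
    hits = None
    for term in query.lower().split():
        hits = index[term] if hits is None else hits & index[term]
    return [segments[sid] for sid in sorted(hits or [])]

print(search("transcription"))   # -> the segment of v001 starting at 310 s
```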


Journal of Computational and Applied Mathematics | 2018

Relative reductive structure-aware regression filter

Zhuo Su; Biyi Zeng; Jiaxin Miao; Xiaonan Luo; Baocai Yin; Qiang Chen

Structure-aware image smoothing is a challenging and significant technique for remedying the limitation of current edge-preserving smoothing filters in extracting prominent structures. To improve on this, we propose a novel structure-aware filter based on bilateral kernel regression with a variational structure-kernel descriptor. First, the relative reductive texture decomposition is applied to construct the structure-kernel descriptor. Then, the descriptor is incorporated into the bilateral kernel regression to achieve the expected structure-preserving output. Algorithmically, a closed-form numerically iterative solver is exploited for an efficient and effective implementation. Finally, experimental self-evaluations and visual applications are presented to demonstrate that our method performs better than state-of-the-art solutions.
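A one-dimensional sketch of the bilateral kernel regression this filter builds on: at every sample a local linear model is fitted with weights combining spatial closeness and signal similarity, and the fitted value replaces the sample. The paper's relative reductive decomposition and structure-kernel descriptor are not modeled; window size and sigmas are arbitrary assumptions.

```python
import numpy as np

def bilateral_kernel_regression(y, radius=5, sigma_s=2.0, sigma_r=0.2):
    """1-D bilateral kernel regression: at each sample, fit a weighted local
    linear model y ~ a + b * offset, where the weights combine spatial
    closeness and signal similarity, and return the fitted value a."""
    n = len(y)
    out = np.empty(n)
    offsets = np.arange(-radius, radius + 1)
    spatial = np.exp(-offsets**2 / (2.0 * sigma_s**2))
    X = np.stack([np.ones_like(offsets, dtype=float),
                  offsets.astype(float)], axis=1)
    for i in range(n):
        idx = np.clip(i + offsets, 0, n - 1)          # reflect-free boundary clamp
        w = spatial * np.exp(-(y[idx] - y[i])**2 / (2.0 * sigma_r**2))
        W = np.diag(w)
        # Weighted least squares; the estimate at the window centre is a.
        a, b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y[idx])
        out[i] = a
    return out

# Example: a noisy, finely textured step signal.
t = np.linspace(0, 1, 200)
signal = (t > 0.5).astype(float) + 0.05 * np.sin(80 * t) + 0.02 * np.random.randn(200)
smooth = bilateral_kernel_regression(signal)
```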

Collaboration


Dive into Xiaonan Luo's collaboration.

Top Co-Authors

Shujin Lin, Sun Yat-sen University
Zhuo Su, Sun Yat-sen University
Ruomei Wang, Sun Yat-sen University
Li Liu, Sun Yat-sen University
Fan Zhou, Sun Yat-sen University
Hefeng Wu, Guangdong University of Foreign Studies
Yuhui Hu, Sun Yat-sen University
Yun Liang, South China Agricultural University
Qiang Chen, University of Education