Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Xuemiao Xu is active.

Publication


Featured research published by Xuemiao Xu.


International Conference on Computer Graphics and Interactive Techniques | 2010

Structure-based ASCII art

Xuemiao Xu; Linling Zhang; Tien-Tsin Wong

The wide availability and popularity of text-based communication channels encourage the use of ASCII art to represent images. Existing tone-based ASCII art generation methods lead to halftone-like results and require high text resolution for display, as higher text resolution offers more tone variety. This paper presents a novel method to generate structure-based ASCII art, which is currently mostly created by hand. It approximates the major line structure of the reference image content with the shapes of characters. Representing unlimited image content with the extremely limited shapes and restrictive placement of characters makes this problem challenging. Most existing shape similarity metrics either fail to address the misalignment that arises in real-world scenarios, or are unable to account for differences in position, orientation, and scaling. Our key contribution is a novel alignment-insensitive shape similarity (AISS) metric that tolerates misalignment of shapes while accounting for differences in position, orientation, and scaling. Together with a constrained deformation approach, we formulate ASCII art generation as an optimization that minimizes shape dissimilarity and deformation. Convincing results and a user study demonstrate its effectiveness.
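The core idea of tolerating small misalignments while still penalizing genuine shape differences can be illustrated with a toy metric: score two binary shapes by their best normalized overlap over a small window of translations. This is only an illustrative sketch, not the paper's AISS metric; the function name and parameters are hypothetical.

```python
import numpy as np

def shift_tolerant_similarity(a, b, max_shift=2):
    """Toy shape similarity: best intersection-over-union of binary
    masks a and b over all translations of up to max_shift pixels.
    (Illustrative only -- not the paper's AISS metric.)"""
    best = 0.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(b, dy, axis=0), dx, axis=1)
            inter = np.logical_and(a, shifted).sum()
            union = np.logical_or(a, shifted).sum()
            best = max(best, inter / union if union else 1.0)
    return best

# A diagonal stroke and the same stroke shifted by one pixel:
a = np.eye(8, dtype=bool)
b = np.roll(a, 1, axis=1)
```

A plain pixelwise overlap of the two strokes above is zero, while the shift-tolerant score is 1.0, which is the kind of behavior a misalignment-tolerant metric needs.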


Neurocomputing | 2017

δ-agree AdaBoost stacked autoencoder for short-term traffic flow forecasting

Teng Zhou; Guoqiang Han; Xuemiao Xu; Zhizhe Lin; Chu Han; Yuchang Huang; Jing Qin

Accurate and timely traffic flow forecasting is critical for the successful deployment of intelligent transportation systems. However, it is quite challenging to develop an efficient and robust forecasting model due to the inherent randomness and large variations of traffic flow. Recently, the stacked autoencoder has proven promising for traffic flow forecasting, but it still has drawbacks under certain conditions. In this paper, a training-sample replication strategy is introduced to train a series of stacked autoencoders, and an adaptive boosting scheme is proposed to ensemble the trained stacked autoencoders and improve forecasting accuracy. Extensive experiments demonstrate the superior performance of the proposed method.
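The combination of sample replication and adaptive boosting can be sketched in a few lines: each round draws a bootstrap sample in proportion to the current boosting weights, so hard samples are replicated more often, and the weight update follows an AdaBoost.R2-style linear loss. A linear least-squares fit stands in for a stacked autoencoder here; everything below is an illustrative sketch, not the paper's exact scheme.

```python
import numpy as np

def boosted_ensemble(X, y, rounds=5, seed=0):
    """AdaBoost-style ensemble with training-sample replication.
    A np.polyfit line is the stand-in weak learner (the paper uses
    stacked autoencoders); weight update is AdaBoost.R2-flavored."""
    rng = np.random.default_rng(seed)
    weights = np.ones(len(X))
    models, alphas = [], []
    for _ in range(rounds):
        # replicate samples proportionally to the current weights
        idx = rng.choice(len(X), size=len(X), p=weights / weights.sum())
        coef = np.polyfit(X[idx], y[idx], deg=1)
        err = np.abs(np.polyval(coef, X) - y)
        loss = err / (err.max() + 1e-12)           # per-sample loss in [0, 1)
        ebar = (weights / weights.sum() * loss).sum()
        beta = ebar / (1.0 - ebar + 1e-12)
        weights = weights * beta ** (1.0 - loss)   # easy samples are downweighted
        models.append(coef)
        alphas.append(np.log(1.0 / (beta + 1e-12)))
    def predict(Xq):
        preds = np.array([np.polyval(c, Xq) for c in models])
        a = np.array(alphas)
        return (a[:, None] * preds).sum(axis=0) / a.sum()
    return predict

# Noisy linear "traffic flow": the ensemble recovers the trend
X = np.linspace(0.0, 1.0, 50)
y = 2.0 * X + 1.0 + 0.01 * np.random.default_rng(1).normal(size=50)
predict = boosted_ensemble(X, y)
```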


The Visual Computer | 2016

Text-aware balloon extraction from manga

Xueting Liu; Chengze Li; Haichao Zhu; Tien-Tsin Wong; Xuemiao Xu

Manga, the Japanese word for comics, is a worldwide popular form of visual entertainment. Nowadays, electronic devices are driving the fast development of motion manga for the purpose of visual richness and manga promotion. To convert static manga images into motion manga, text balloons are usually animated individually for better storytelling. This requires artists to cut out each text balloon meticulously, which is labor-intensive and time-consuming. In this paper, we propose an automatic approach that extracts text balloons from manga images both accurately and effectively. Our approach starts by extracting white areas that contain text as text blobs. Unlike existing text blob extraction methods that rely only on shape properties, we incorporate text properties in order to differentiate text blobs from texture blobs. Instead of heuristic parameter thresholding, we achieve text blob classification via learning-based classifiers. Along with the novel text blob classification method, we also make the first attempt to tackle the boundary issue in balloon extraction. We apply our method to various styles of manga and comics with text in different languages, and convincing results are obtained in all cases.


Computational Visual Media | 2015

Region-based structure line detection for cartoons

Xiangyu Mao; Xueting Liu; Tien-Tsin Wong; Xuemiao Xu

Cartoons are a worldwide popular visual entertainment medium with a long history. Nowadays, with the boom of electronic devices, there is an increasing need to digitize old classic cartoons as a basis for further editing, including deformation, colorization, etc. To perform such editing, it is essential to extract the structure lines within cartoon images. Traditional edge detection methods are mainly based on gradients. These methods perform poorly in the face of compression artifacts and spatially varying line colors, which make gradient values unreliable. This paper presents the first approach to extracting structure lines in cartoons based on regions. Our method starts by segmenting an image into regions and then classifies them as edge regions or non-edge regions. Our second main contribution comprises three measures, based on darkness, local contrast, and shape, that estimate the likelihood of a region being a non-edge region. Since these likelihoods become unreliable as regions become smaller, we further classify regions using both the likelihoods and the relationships to neighboring regions via a graph-cut formulation. Our method has been evaluated on a wide variety of cartoon images, and convincing results are obtained in all cases.
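The darkness and local-contrast cues can be sketched for a toy segmentation: darkness as one minus the region's mean intensity, and contrast as the difference between the region mean and the mean of its bordering pixels. These formulas are illustrative stand-ins, not the paper's measures, and the shape measure and graph-cut step are omitted.

```python
import numpy as np

def region_scores(img, labels):
    """Per-region (darkness, contrast) cues for a labeled image.
    Illustrative sketch in the spirit of the paper's measures."""
    scores = {}
    for r in np.unique(labels):
        mask = labels == r
        # 4-neighbour dilation to find the region's border pixels
        grown = mask.copy()
        grown[1:, :] |= mask[:-1, :]; grown[:-1, :] |= mask[1:, :]
        grown[:, 1:] |= mask[:, :-1]; grown[:, :-1] |= mask[:, 1:]
        border = grown & ~mask
        darkness = 1.0 - img[mask].mean()
        contrast = abs(img[mask].mean() - img[border].mean()) if border.any() else 0.0
        scores[r] = (darkness, contrast)
    return scores

# Dark stroke (label 1) on a bright background (label 0):
img = np.ones((8, 8)); img[:, 4] = 0.1
labels = np.zeros((8, 8), dtype=int); labels[:, 4] = 1
scores = region_scores(img, labels)
```

On this toy image the stroke region scores high on both cues, which is the signal the classifier would combine with shape and neighborhood relations.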


IEEE Transactions on Visualization and Computer Graphics | 2017

A Unified Detail-Preserving Liquid Simulation by Two-Phase Lattice Boltzmann Modeling

Yulong Guo; Xiaopei Liu; Xuemiao Xu

Traditional graphics methods for simulating liquid-air dynamics under different scenarios usually employ separate approaches with sophisticated interface tracking/reconstruction techniques. In this paper, we propose a novel unified approach that easily and effectively produces a variety of liquid-air interface phenomena. These phenomena, such as complex surface splashes, bubble interactions, and surface tension effects, can coexist in a single simulation and are created within the same computational framework. Such a framework is unique in that it is free from any complicated interface tracking/reconstruction procedures. Our approach is developed from the two-phase lattice Boltzmann method with the mean-field model, which provides a unified framework for interface dynamics but is numerically unstable under turbulent conditions. Considering the drawbacks of existing approaches, we propose techniques to suppress oscillations for significant stability enhancement, and derive a new subgrid-scale model to further improve stability, faithfully preserving liquid-air interface details without excessive diffusion by taking the density variation into account. The whole framework is highly parallel, enabling very efficient implementation. Comparisons with related approaches show the superiority of our method in producing stable simulations with detail preservation and simultaneous multiphase phenomena. A set of animation results demonstrates the effectiveness of our method.
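For readers unfamiliar with the underlying framework, a minimal single-phase D2Q9 BGK step shows the collide-and-stream structure that lattice Boltzmann methods build on; the paper's two-phase mean-field coupling, stabilization techniques, and subgrid-scale model are well beyond this sketch.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and their weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """Second-order Maxwellian equilibrium distributions."""
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau=0.8):
    """One BGK collide-and-stream step on a periodic grid
    (single-phase sketch; the paper couples two phases)."""
    rho = f.sum(0)
    ux = (f * c[:, 0, None, None]).sum(0) / rho
    uy = (f * c[:, 1, None, None]).sum(0) / rho
    f = f - (f - equilibrium(rho, ux, uy)) / tau       # collision
    for i in range(9):                                 # streaming
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=1), c[i, 1], axis=0)
    return f

# A uniform fluid at rest is a fixed point of the update:
f0 = equilibrium(np.ones((16, 16)), np.zeros((16, 16)), np.zeros((16, 16)))
f1 = lbm_step(f0)
```

The per-cell locality of collision and the regularity of streaming are what make the framework, as the abstract notes, highly parallel.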


Sensors | 2015

Illumination-invariant and deformation-tolerant inner knuckle print recognition using portable devices

Xuemiao Xu; Qiang Jin; Le Zhou; Jing Qin; Tien-Tsin Wong; Guoqiang Han

We propose a novel biometric recognition method that identifies the inner knuckle print (IKP). It is robust enough to handle uncontrolled lighting conditions, pose variations, and low imaging quality. Such robustness is crucial for applications on portable devices equipped with consumer-level cameras. We achieve this robustness in two ways. First, we propose a novel feature extraction scheme that highlights the salient structure and suppresses incorrect and/or unwanted features. The extracted IKP features retain simple geometry and morphology and reduce the interference of illumination. Second, to counteract the deformation induced by different hand orientations, we propose a novel structure-context descriptor based on local statistics. To the best of our knowledge, we are the first to simultaneously consider illumination invariance and deformation tolerance for appearance-based low-resolution hand biometrics. Previous works used more restrictive settings, making strong assumptions about either the illumination conditions or the hand orientation. Extensive experiments demonstrate that our method outperforms state-of-the-art methods in terms of recognition accuracy, especially under uncontrolled lighting conditions and flexible hand orientations.


IEEE Transactions on Visualization and Computer Graphics | 2018

Towards High-Quality Visualization of Superfluid Vortices

Yulong Guo; Xiaopei Liu; Chi Xiong; Xuemiao Xu; Chi-Wing Fu

Superfluidity is a special state of matter that exhibits macroscopic quantum phenomena and acts like a fluid with zero viscosity. In such a state, superfluid vortices exist as phase singularities of the model equation with unique distributions. This paper presents novel techniques to aid the visual understanding of superfluid vortices based on the state-of-the-art nonlinear Klein-Gordon equation, which evolves a complex scalar field, giving rise to special vortex lattice/ring structures with dynamic vortex formation, reconnection, and Kelvin waves. By formulating a numerical model with theoretical physicists in superfluid research, we obtain high-quality superfluid flow data sets without noise-like waves, suitable for vortex visualization. By further exploring superfluid vortex properties, we develop a new vortex identification and visualization method: a novel mechanism based on velocity circulation to overcome phase singularity, and an orthogonal-plane strategy to avoid ambiguity. Hence, our visualizations can help reveal various superfluid vortex structures and enable domain experts to perform related visual analysis, such as of the steady vortex lattice/ring structures and the dynamic vortex string interactions with reconnections and energy radiation, in which the famous Kelvin waves and decaying vortex tangles were clearly observed. These visualizations have assisted physicists in verifying the superfluid model and exploring its dynamic behavior more intuitively.
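The circulation idea can be illustrated in 2D: a phase singularity sits where the wrapped phase difference accumulated around a grid plaquette winds by ±2π. This is a toy of the detection principle only; the paper operates on 3D complex scalar fields and adds an orthogonal-plane strategy to resolve ambiguity.

```python
import numpy as np

def vortex_plaquettes(phase):
    """Integer winding number of each grid plaquette of a 2D phase
    field; a winding of +-1 marks a vortex core. (2D sketch of the
    circulation-based identification idea.)"""
    def wrap(d):
        return (d + np.pi) % (2 * np.pi) - np.pi
    d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])   # bottom edge
    d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])     # right edge
    d3 = wrap(phase[1:, :-1] - phase[1:, 1:])     # top edge
    d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])   # left edge
    return np.rint((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)

# A single vortex: phase winds once around the grid center
n = 16
yy, xx = np.mgrid[0:n, 0:n]
phase = np.arctan2(yy - (n - 1) / 2, xx - (n - 1) / 2)
winding = vortex_plaquettes(phase)
```

Exactly one plaquette, the one containing the center, carries winding +1; every other plaquette sums to zero even where the raw phase jumps across the branch cut, which is why circulation is robust where the phase itself is singular.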


IEEE Transactions on Visualization and Computer Graphics | 2017

ASCII Art Synthesis from Natural Photographs

Xuemiao Xu; Linyuan Zhong; Minshan Xie; Xueting Liu; Jing Qin; Tien-Tsin Wong

While ASCII art is a worldwide popular art form, automatically generating structure-based ASCII art from natural photographs remains challenging. The major challenge lies in extracting the perception-sensitive structure from natural photographs so that a more concise ASCII art reproduction can be produced based on that structure. However, due to the excessive amount of texture in natural photos, extracting perception-sensitive structure is not easy, especially when the structure is weak and lies within a texture region. In addition, to fit different target text resolutions, the amount of extracted structure should be controllable. To tackle these challenges, we introduce a visual perception mechanism from physiological findings, non-classical receptive field modulation (non-CRF modulation), to this ASCII art application, and propose a new model of non-CRF modulation that can better separate weak structure from crowded texture and better control the scale of texture suppression. Thanks to our non-CRF model, a more sensible ASCII art reproduction can be obtained. In addition, to produce more visually appealing ASCII art, we propose a novel optimization scheme to obtain the optimal placement of proportional-font characters. We apply our method to a rich variety of images, and visually appealing ASCII art is obtained in all cases.
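Non-CRF (surround) modulation can be caricatured as subtracting the average response in a cell's surround from its center response: dense texture largely cancels itself, while an isolated contour survives. The box-shaped surround and the `radius`/`alpha` parameters below are illustrative assumptions, not the paper's model, though `radius` plays the role of the texture-suppression scale.

```python
import numpy as np

def surround_suppress(response, radius=2, alpha=1.0):
    """Inhibit each response by the mean response in its surround
    (toy surround modulation; dense texture self-cancels)."""
    h, w = response.shape
    pad = np.pad(response, radius)
    surround = np.zeros_like(response)
    count = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            surround += pad[radius+dy:radius+dy+h, radius+dx:radius+dx+w]
            count += 1
    surround /= count
    return np.clip(response - alpha * surround, 0.0, None)

# An isolated line keeps its response; a dense texture is suppressed:
line = np.zeros((9, 9)); line[4, :] = 1.0
texture = np.ones((9, 9))
sl = surround_suppress(line)
st = surround_suppress(texture)
```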


International Journal of Distributed Sensor Networks | 2015

A novel IKP-based biometric recognition using mobile phone camera

Xuemiao Xu; Xiao-Zheng Lai; Qiang Jin; Xue-Han Yuan; Sheng-Li Lai; Yan-Wen Lin; Jian-Wen Huang

This paper explores inner-knuckle-print (IKP) biometric recognition based on mobile phones. Since IKP characteristics are captured with a mobile phone camera, the greatest challenge is that IKP images of the same hand may differ in illumination, posture, and background. To achieve autonomous and robust recognition, we present the following techniques. First, the hand region is preprocessed using mean shift (MS) and K-means clustering. Second, the region of interest (ROI) of the IKP is segmented and normalized. Third, IKP features are extracted using a 2D Gabor filter with proper orientation and frequency. Finally, a histogram of oriented gradients (HOG) algorithm is applied for matching. According to the experimental results, the proposed method achieves considerable recognition accuracy.
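The Gabor and HOG steps of the pipeline can be sketched with a hand-rolled Gabor kernel and a stripped-down, global HOG descriptor matched by histogram intersection; the kernel size, frequency, bin count, and matching score below are illustrative choices, not the paper's settings.

```python
import numpy as np

def gabor_kernel(size=9, theta=0.0, freq=0.25, sigma=2.5):
    """2D Gabor kernel at orientation theta (sketch of the feature
    extraction step; the paper tunes orientation and frequency)."""
    half = size // 2
    y, x = np.mgrid[-half:half+1, -half:half+1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def hog_descriptor(img, bins=8):
    """Global histogram of gradient orientations, weighted by
    gradient magnitude (a stripped-down HOG for matching)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-12)

def match_score(a, b):
    """Histogram intersection in [0, 1]; higher means more similar."""
    return np.minimum(hog_descriptor(a), hog_descriptor(b)).sum()

# A horizontal-line pattern matches itself better than a vertical one:
horiz = np.zeros((16, 16)); horiz[8, :] = 1.0
vert = horiz.T
```

In the full pipeline the Gabor kernel would first be convolved with the normalized ROI to enhance the knuckle lines before the descriptor is computed.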


Medical Image Computing and Computer Assisted Intervention | 2018

Deep Attentional Features for Prostate Segmentation in Ultrasound

Yi Wang; Zijun Deng; Xiaowei Hu; Lei Zhu; Xin Yang; Xuemiao Xu; Pheng-Ann Heng; Dong Ni

Automatic prostate segmentation in transrectal ultrasound (TRUS) is of essential importance for image-guided prostate biopsy and treatment planning. However, developing such automatic solutions remains very challenging due to the ambiguous boundary and inhomogeneous intensity distribution of the prostate in TRUS. This paper develops a novel deep neural network equipped with deep attentional feature (DAF) modules for better prostate segmentation in TRUS by fully exploiting the complementary information encoded in different layers of the convolutional neural network (CNN). Our DAF utilizes the attention mechanism to selectively leverage the multi-level features integrated from different layers to refine the features at each individual layer, suppressing non-prostate noise at shallow layers of the CNN and incorporating more prostate detail into the features at deep layers. We evaluate the efficacy of the proposed network on challenging prostate TRUS images, and the experimental results demonstrate that our network outperforms state-of-the-art methods by a large margin.
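The flavor of attention-based multi-level fusion can be conveyed with a small numpy sketch: each layer's feature map is refined by a softmax-weighted sum of all layers' maps, with weights from each layer's similarity to the layer being refined. This is a caricature of the idea, not the paper's DAF module, which learns its attention weights inside a CNN.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attentional_refine(layer_feats):
    """Refine each layer's (H, W) feature map by a softmax-weighted
    sum over all layers; weights come from flattened dot-product
    similarity to the layer being refined. (Illustrative sketch.)"""
    F = np.stack(layer_feats)                      # (L, H, W)
    refined = []
    for i in range(len(layer_feats)):
        sim = np.array([(F[i] * F[j]).sum() for j in range(len(F))])
        wts = softmax(sim)                         # attention over layers
        refined.append((wts[:, None, None] * F).sum(axis=0))
    return refined

# Three toy "layers": two agreeing maps and one empty one
layers = [np.ones((4, 4)), np.zeros((4, 4)), np.ones((4, 4))]
refined = attentional_refine(layers)
```

Note how the first layer, which agrees strongly with the third, is reinforced almost unchanged, while the empty layer is pulled toward the consensus of the others.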

Collaboration


Dive into Xuemiao Xu's collaborations.

Top Co-Authors

Tien-Tsin Wong (The Chinese University of Hong Kong)
Jing Qin (Hong Kong Polytechnic University)
Guoqiang Han (South China University of Technology)
Pheng-Ann Heng (The Chinese University of Hong Kong)
Xiaowei Hu (The Chinese University of Hong Kong)
Qiang Jin (South China University of Technology)
Zijun Deng (South China University of Technology)
Xiaopei Liu (Nanyang Technological University)
Lei Zhu (The Chinese University of Hong Kong)
Xueting Liu (The Chinese University of Hong Kong)