Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Xiaoyan Zhang is active.

Publication


Featured research published by Xiaoyan Zhang.


International Journal of Oral and Maxillofacial Surgery | 2015

Algorithm for planning a double-jaw orthognathic surgery using a computer-aided surgical simulation (CASS) protocol. Part 1: planning sequence

James J. Xia; Jaime Gateno; John F. Teichgraeber; Peng Yuan; Ken Chung Chen; Jianfu Li; Xiaoyan Zhang; Zhen Tang; D.M. Alfi

The success of craniomaxillofacial (CMF) surgery depends not only on the surgical techniques, but also on an accurate surgical plan. The adoption of computer-aided surgical simulation (CASS) has created a paradigm shift in surgical planning. However, planning an orthognathic operation using CASS differs fundamentally from planning using traditional methods. With this in mind, the Surgical Planning Laboratory of Houston Methodist Research Institute has developed a CASS protocol designed specifically for orthognathic surgery. The purpose of this article is to present an algorithm using virtual tools for planning a double-jaw orthognathic operation. This paper will serve as an operation manual for surgeons wanting to incorporate CASS into their clinical practice.


Annals of Biomedical Engineering | 2016

An eFace-Template Method for Efficiently Generating Patient-Specific Anatomically-Detailed Facial Soft Tissue FE Models for Craniomaxillofacial Surgery Simulation.

Xiaoyan Zhang; Zhen Tang; Michael A. K. Liebschner; Daeseung Kim; Shunyao Shen; Chien-Ming Chang; Peng Yuan; Guangming Zhang; Jaime Gateno; Xiaobo Zhou; Shao-Xiang Zhang; James J. Xia

Accurate surgical planning and prediction of craniomaxillofacial surgery outcomes requires simulation of soft-tissue changes following osteotomy. This can only be accomplished on an anatomically detailed facial soft tissue model. However, current anatomically detailed facial soft tissue model generation is not appropriate for clinical applications due to the time-intensive nature of manual segmentation and volumetric mesh generation. This paper presents a novel semi-automatic approach, named the eFace-template method, for efficiently and accurately generating a patient-specific facial soft tissue model. Our approach is based on the volumetric deformation of an anatomically detailed template fitted to the shape of each individual patient. The adaptation of the template is achieved using a hybrid landmark-based morphing and dense surface fitting approach, followed by thin-plate spline interpolation. The methodology was validated using 4 visible human datasets (regarded as gold standards) and 30 patient models. The results indicated that our approach can accurately preserve internal anatomical correspondence (i.e., muscles) for finite element modeling. Additionally, our hybrid approach achieved an optimal balance among patient shape fitting accuracy, anatomical correspondence, and mesh quality. Furthermore, statistical analysis showed that our hybrid approach was superior to two previously published methods: mesh-matching and landmark-based transformation. Ultimately, the eFace-template method can be directly and effectively used to simulate facial soft tissue changes in clinical applications.


IEEE Transactions on Multimedia | 2014

Atmospheric Perspective Effect Enhancement of Landscape Photographs Through Depth-Aware Contrast Manipulation

Xiaoyan Zhang; Kap Luk Chan; Martin Constable

The atmospheric perspective effect is a physical phenomenon relating to the effect that atmosphere has on distant objects, causing them to be lighter and less distinct. The exaggeration of this effect by artists in 2D images increases the illusion of depth, thereby making the image more interesting. This paper addresses the enhancement of the atmospheric perspective effect in landscape photographs by the manipulation of depth-aware lightness and saturation contrast values. The form of this manipulation follows the organization of such contrasts in landscape paintings. The objective of this manipulation is based on a statistical study which has clearly shown that the saturation and lightness contrasts inter- and intra-depth planes in paintings are more purposefully organized than those in photographs. This contrast organization in paintings respects the existing contrast relationships within a natural scene guided by the atmospheric perspective effect and also exaggerates them sufficiently with a view to improving the visual appeal of the painting and the illusion of depth within it. The depth-aware lightness and saturation contrasts revealed in landscape paintings guide the mapping of such contrasts in photographs. This contrast mapping is formulated as an optimization problem that simultaneously considers the desired inter-contrast, intra-contrast, and specified gradient constraints. Experimental results demonstrate that by using this proposed approach, both the visual appeal and the illusion of depth in the photographs are effectively improved.
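The paper's optimization jointly handles inter-contrast, intra-contrast, and gradient constraints; the inter-layer part of the idea can be sketched as a small least-squares problem. Everything below (function name, weights, the reduction to per-layer means) is an illustrative simplification, not the authors' formulation:

```python
import numpy as np

def adjust_layer_means(orig_means, target_diffs, w_fidelity=0.1):
    """Toy least-squares version of depth-aware contrast mapping:
    find new per-depth-layer lightness means whose successive differences
    match painting-derived targets while staying near the originals.

    orig_means   : lightness mean of each depth layer, near to far
    target_diffs : desired inter-layer contrasts m[i+1] - m[i]
    """
    n = len(orig_means)
    rows, rhs = [], []
    for i, d in enumerate(target_diffs):      # inter-contrast equations
        r = np.zeros(n)
        r[i + 1], r[i] = 1.0, -1.0
        rows.append(r)
        rhs.append(d)
    for i, m in enumerate(orig_means):        # weak fidelity to the input
        r = np.zeros(n)
        r[i] = w_fidelity
        rows.append(r)
        rhs.append(w_fidelity * m)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol
```

With a small fidelity weight, the solved layer means nearly reach the target inter-layer contrasts while keeping the overall brightness of the input.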


International Conference on Image Processing | 2011

Aesthetic enhancement of landscape photographs as informed by paintings across depth layers

Xiaoyan Zhang; Martin Constable; Kap Luk Chan

This paper addresses the aesthetic enhancement of saturation and luminance contrasts in a landscape photograph to follow the organization of such contrasts in landscape paintings. Unlike much existing work on similar topics, which mainly emulates the surface characteristics of a painting, this paper presents a technique that transfers the contrast organization of landscape paintings, both within and between four depth layers, onto landscape photographs. The contrasts in saturation and luminance revealed in landscape paintings along the scene depth provide references that guide the mapping of such contrasts in photographs. A novel inter- and intra-depth-layer luminance and saturation contrast mapping algorithm using gradient histogram matching is developed. Experimental results demonstrate the effectiveness of the proposed method.


Computational Intelligence and Security | 2008

Automatic Detection and Tracking of Maneuverable Birds in Videos

Xiaoyan Zhang; Xiaojuan Wu; Xin Zhou; Xiao-gang Wang; Yuan-yuan Zhang

In this paper, we detect and track maneuverable birds in captured videos for further automated study. To avoid abrupt scene changes, videos are captured by pointing the camera upwards so that the birds are imaged against the sky. A two-level (pixel-level and frame-level) background update algorithm is used to extract the foreground in real time. At the frame level, three thresholds are used to update the background, enabling fast background updates under abrupt scene changes. Targets are detected from the binary foreground after contour retrieval, size filtering, and margin measurement. The tracking of maneuverable birds is achieved using a Markov chain Monte Carlo (MCMC) filter with no move types. Experimental results show that multiple maneuverable birds are detected and tracked accurately in real time, and that the tracking box adjusts quickly to cover the true extent of the birds.
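The two-level background update can be illustrated with a minimal sketch, assuming a simple running-average pixel model and a single frame-level reset threshold (the paper uses three frame-level thresholds and different update rules, which are not reproduced here; all names and constants below are illustrative):

```python
import numpy as np

class TwoLevelBackground:
    """Sketch of a two-level (pixel/frame) background update."""

    def __init__(self, first_frame, alpha=0.05, fg_thresh=30,
                 scene_change_ratio=0.7):
        self.bg = first_frame.astype(np.float64)
        self.alpha = alpha                            # pixel-level learning rate
        self.fg_thresh = fg_thresh                    # |frame - bg| foreground cut
        self.scene_change_ratio = scene_change_ratio  # frame-level reset trigger

    def apply(self, frame):
        frame = frame.astype(np.float64)
        fg = np.abs(frame - self.bg) > self.fg_thresh
        if fg.mean() > self.scene_change_ratio:
            # Frame level: too much "foreground" means the scene changed
            # abruptly, so re-seed the background from the current frame.
            self.bg = frame.copy()
            fg = np.zeros_like(fg)
        else:
            # Pixel level: blend the model toward the frame only where the
            # pixel currently looks like background.
            self.bg = np.where(
                fg, self.bg,
                (1 - self.alpha) * self.bg + self.alpha * frame)
        return fg
```

A small blob against a stable sky is flagged as foreground by the pixel level, while a sudden whole-frame change triggers the frame-level reset instead of flooding the detector with false targets.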


Neurocomputing | 2018

Haze removal method for natural restoration of images with sky

Yingying Zhu; Gaoyang Tang; Xiaoyan Zhang; Jianmin Jiang; Qi Tian

Most haze removal methods fail to restore long-shot images naturally, especially in the sky region. To solve this problem, we propose a Fusion of Luminance and Dark Channel Prior (F-LDCP) method to effectively restore long-shot images containing sky. Transmission values estimated from a luminance model and from the dark channel prior are fused based on a soft segmentation: the transmission estimated from the luminance model contributes mainly to the sky region, while that from the dark channel prior contributes mainly to the foreground. The airlight is also adjusted to match the real illumination via sky region detection. A user study and an objective comparison with a variety of methods on long-shot hazy images demonstrate that our method preserves visual realism and removes haze effectively.


Pacific-Rim Symposium on Image and Video Technology | 2010

On the Transfer of Painting Style to Photographic Images through Attention to Colour Contrast

Xiaoyan Zhang; Martin Constable; Ying He

This paper proposes a way to transfer the visual style of a painting as characterised by colour contrast to a photographic image by manipulating the visual attributes in terms of hue, saturation and lightness. We first extract the visual characteristics in hue, saturation and lightness from a painting. Then, these characteristics are transferred to a photographic image by histogram matching in saturation and lightness and dominant hue spread and relative position mapping along the RYB colour wheel. We evaluated the proposed transfer method on a number of paintings and photographs. The results are encouraging.
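The saturation and lightness transfer step reduces to 1-D histogram matching, which can be sketched generically (the function below is an illustration, not the authors' implementation, and the RYB hue mapping is not covered):

```python
import numpy as np

def match_histogram(source, reference):
    """Remap `source` values so their empirical distribution follows
    `reference`: each source value is replaced by the reference quantile
    at that value's rank within the source."""
    ranks = np.searchsorted(np.sort(source.ravel()),
                            source.ravel(), side='right') / source.size
    matched = np.quantile(reference.ravel(), np.clip(ranks, 0.0, 1.0))
    return matched.reshape(source.shape)
```

Applied per channel (saturation, then lightness), this transfers the painting's global tonal distribution while preserving the ordering of values in the photograph.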


World Congress on Intelligent Control and Automation | 2016

Simulink comparison of varying-parameter convergent-differential neural-network and gradient neural network for solving online linear time-varying equations

Zhijun Zhang; Siwei Li; Xiaoyan Zhang

A novel kind of recurrent neural network, called the varying-parameter convergent-differential neural network (VP-CDNN), is proposed in this paper for online solution of linear time-varying equations. Different from the traditional gradient-based neural network (GNN) with scalar-valued error functions, the VP-CDNN is designed on matrix-valued or vector-valued error functions, and its convergence-related coefficients are time-varying, i.e., functions of time t. In addition, the VP-CDNN is described by implicit rather than explicit dynamics. To illustrate the effectiveness of the new network, comparative MATLAB Simulink simulations of the proposed VP-CDNN and the GNN for solving linear time-varying equations online are implemented and presented. Simulation results verify fast convergence and good robustness.
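The benefit of a time-varying gain over a constant one can be illustrated with a small Euler simulation of the standard GNN dynamics x' = -gamma * A^T (A x - b(t)). The quadratically growing gain below is a simplified stand-in for the paper's varying-parameter design; the exact VP-CDNN coefficient functions and implicit dynamics are not reproduced here:

```python
import numpy as np

def simulate(gain_fn, T=5.0, dt=1e-3):
    """Euler-integrate x' = -gain(t) * A^T (A x - b(t)) for A x = b(t).

    gain_fn: scalar gain as a function of time. A constant gain gives the
    classical gradient neural network (GNN); a growing gain mimics the
    varying-parameter idea. Returns the tracking residual ||A x - b|| at
    the final time.
    """
    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = lambda t: np.array([np.sin(t), np.cos(t)])   # time-varying target
    x = np.zeros(2)
    t = 0.0
    while t < T:
        x += -dt * gain_fn(t) * A.T @ (A @ x - b(t))
        t += dt
    return np.linalg.norm(A @ x - b(T))

gnn_res = simulate(lambda t: 10.0)           # constant-gain GNN
vp_res = simulate(lambda t: 10.0 + t ** 2)   # growing, time-varying gain
```

Because the target b(t) keeps moving, the constant-gain GNN settles into a steady lag proportional to 1/gamma, while the growing gain drives that lag down over time, which is the qualitative behavior the comparison in the paper is about.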


Pacific-Rim Symposium on Image and Video Technology | 2010

Depth-based Analyses of Landscape Paintings and Photographs According to Itten's Contrasts

Martin Constable; Xiaoyan Zhang

Using Itten's Color Contrasts as a starting point, we performed a depth-based analysis of a set of paintings by the Hudson River school of landscape painters. This was compared to a similar analysis of a collection of contemporary 'snap-shot' landscape photographs. Differences between the two groups were observed, with the paintings being clearly more organized. This organization of contrasts can be considered a style representative of a school of painters. Photographs or other optically acquired imagery can be rendered according to this organization to acquire an aesthetic informed by this style.


IEEE 5th International Conference on Cybernetics and Intelligent Systems (CIS) | 2011

Depth-based reference portrait painting selection for example-based rendering

Xiaoyan Zhang; Kap Luk Chan; Martin Constable

The objective of this paper is to preserve the natural attributes of a photograph while enhancing only its aesthetic appeal. The paper proposes selecting reference portrait paintings based on depth for example-based photograph rendering, so that the rendered photograph acquires the aesthetic style of the selected reference paintings. The depth attributes are based on the notions of foreground/background, or figure/non-figure, relationships. Analysis of portrait paintings suggests that the natural attributes can be measured by the lightness, saturation, and hue distributions or contrasts within and between foreground and background. By segmenting the photograph and paintings based on depth information, and computing the intra-layer distributions and inter-layer contrasts as features for similarity measurement, references are selected from the top-ranked paintings. Rendering examples are presented to evaluate the performance of the selection method.

Collaboration


Dive into Xiaoyan Zhang's collaborations.

Top Co-Authors

Kap Luk Chan (Nanyang Technological University)
Wang Junyan (University of Southern California)
James J. Xia (Houston Methodist Hospital)
Xiaobo Zhou (Wake Forest University)
Jaime Gateno (Houston Methodist Hospital)
Peng Yuan (Houston Methodist Hospital)
Shunyao Shen (Shanghai Jiao Tong University)
Daeseung Kim (Houston Methodist Hospital)