
Publication


Featured research published by Ying-Qing Xu.


ACM Transactions on Graphics | 2001

Real-time texture synthesis by patch-based sampling

Lin Liang; Ce Liu; Ying-Qing Xu; Baining Guo; Heung-Yeung Shum

We present an algorithm for synthesizing textures from an input sample. This patch-based sampling algorithm is fast and makes high-quality texture synthesis a real-time process. For generating textures of the same size and comparable quality, patch-based sampling is orders of magnitude faster than existing algorithms. The patch-based sampling algorithm works well for a wide variety of textures, ranging from regular to stochastic. By sampling patches according to a nonparametric estimation of the local conditional MRF density function, we avoid mismatching features across patch boundaries. We also experimented with documented cases in which pixel-based nonparametric sampling algorithms cease to be effective, and found that our algorithm continues to work well.
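The core loop of patch-based sampling can be sketched in a toy 1-D form. Everything here is a simplification: the paper samples 2-D patches from an estimated conditional MRF density, while this sketch just picks uniformly among the patches whose overlap region best matches the already-synthesized boundary (function and parameter names are hypothetical):

```python
import numpy as np

def synthesize_row(sample, patch_w, overlap, out_w, tol=0.05, seed=0):
    """Toy 1-D patch-based sampling: grow an output signal by pasting
    patches from `sample` whose leading `overlap` values match the
    already-synthesized boundary, choosing uniformly among all
    near-best candidates (a crude stand-in for sampling from the
    local conditional MRF density)."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, dtype=float)
    out = list(sample[:patch_w])                      # seed patch
    starts = np.arange(len(sample) - patch_w + 1)
    while len(out) < out_w:
        boundary = np.asarray(out[-overlap:])
        # boundary-mismatch distance of every candidate patch
        d = np.array([np.abs(sample[s:s + overlap] - boundary).mean()
                      for s in starts])
        candidates = starts[d <= d.min() + tol]       # the near-best set
        s = int(rng.choice(candidates))
        out.extend(sample[s + overlap:s + patch_w])   # paste non-overlap part
    return np.asarray(out[:out_w])
```

Because whole patches are copied and only their boundaries need matching, the per-output-pixel work is far smaller than in pixel-based nonparametric sampling, which is the source of the speedup the abstract claims.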


International Conference on Computer Graphics and Interactive Techniques | 2004

Video tooning

Jue Wang; Ying-Qing Xu; Heung-Yeung Shum; Michael F. Cohen

We describe a system for transforming an input video into a highly abstracted, spatio-temporally coherent cartoon animation with a range of styles. To achieve this, we treat video as a space-time volume of image data. We have developed an anisotropic kernel mean shift technique to segment the video data into contiguous volumes. These provide a simple cartoon style in themselves, but more importantly provide the capability to semi-automatically rotoscope semantically meaningful regions. In our system, the user simply outlines objects on keyframes. A mean shift guided interpolation algorithm is then employed to create three-dimensional semantic regions by interpolating between the keyframes, while maintaining smooth trajectories along the time dimension. These regions provide the basis for creating smooth two-dimensional edge sheets and stroke sheets embedded within the spatio-temporal video volume. The regions, edge sheets, and stroke sheets are rendered by slicing them at particular times. A variety of rendering styles are shown. The temporal coherence provided by the smoothed semantic regions and sheets results in a temporally consistent non-photorealistic appearance.


European Conference on Computer Vision | 2004

Image and Video Segmentation by Anisotropic Kernel Mean Shift

Jue Wang; Bo Thiesson; Ying-Qing Xu; Michael F. Cohen

Mean shift is a nonparametric density estimator which has been applied to image and video segmentation. Traditional mean shift based segmentation uses a radially symmetric kernel to estimate local density, which is not optimal given the often structured nature of image data, and more particularly of video data. In this paper we present an anisotropic kernel mean shift in which the shape, scale, and orientation of the kernels adapt to the local structure of the image or video. We decompose the anisotropic kernel to provide handles for modifying the segmentation based on simple heuristics. Experimental results show that the anisotropic kernel mean shift outperforms the original mean shift on image and video segmentation in the following respects: 1) it produces smoother results on general images and video; 2) the segmented results are more consistent with human visual saliency; 3) the algorithm is robust to its initial parameters.
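For contrast, the radially symmetric baseline that the paper improves on fits in a few lines. This is a minimal sketch, not the paper's method: the anisotropic version replaces the single scalar `bandwidth` below with a per-kernel covariance whose shape, scale, and orientation adapt to local structure:

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=30):
    """Baseline mean shift with a radially symmetric Gaussian kernel:
    every point repeatedly moves to the kernel-weighted mean of all
    points, converging toward a mode of the estimated density."""
    pts = np.asarray(points, dtype=float)
    modes = pts.copy()
    for _ in range(iters):
        for i in range(len(modes)):
            sq = np.sum((pts - modes[i]) ** 2, axis=1)
            w = np.exp(-sq / (2.0 * bandwidth ** 2))
            modes[i] = (w[:, None] * pts).sum(axis=0) / w.sum()
    return modes  # points sharing a mode belong to one segment
```

For segmentation, each pixel would be a feature vector (e.g. position plus color, and time for video), and pixels whose trajectories converge to the same mode form one region.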


International Conference on Computer Graphics and Interactive Techniques | 2008

Sketch-based tree modeling using Markov random field

Xuejin Chen; Boris Neubert; Ying-Qing Xu; Oliver Deussen; Sing Bing Kang

In this paper, we describe a new system for converting a user's freehand sketch of a tree into a full 3D model that is both complex and realistic-looking. Our system does this through probabilistic optimization based on parameters obtained from a database of tree models. The best-matching model is selected by comparing its 2D projections with the sketch. Branch interaction is modeled by a Markov random field, subject to the constraint that the 3D branches project onto the sketch. Our system then uses the notion of self-similarity to add new branches before finally populating all branches with leaves of the user's choice. We show a variety of natural-looking tree models generated from freehand sketches with only a few strokes.


Pacific Rim Conference on Multimedia | 2001

Emotion Detection from Speech to Enrich Multimedia Content

Feng Yu; Eric Chang; Ying-Qing Xu; Heung-Yeung Shum

This paper describes an experimental study on the detection of emotion from speech. As computer-based characters such as avatars and virtual chat faces become more common, using emotion to drive the expression of virtual characters becomes more important. This study utilizes a corpus of emotional speech containing 721 short utterances expressing four emotions: anger, happiness, sadness, and the neutral (unemotional) state, captured manually from movies and teleplays. We introduce a new concept for evaluating emotions in speech: emotions are so complex that most speech utterances cannot be precisely assigned to a particular emotion category; however, most emotional states can be described as a mixture of multiple emotions. Based on this concept, we have trained SVMs (support vector machines) to recognize utterances within these four categories and developed an agent that can recognize and express emotions.
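The mixture idea can be sketched as follows, under heavy assumptions: the toy 2-D "acoustic features" are placeholders (real systems extract many pitch and energy statistics), and a hand-rolled linear SVM stands in for the SVM library the authors would have used. One SVM is trained per emotion (one-vs-rest), and a softmax over the four decision values plays the role of the paper's emotion mixture:

```python
import numpy as np

EMOTIONS = ["anger", "happiness", "sadness", "neutral"]

def train_linear_svm(X, y, lam=0.01, lr=0.05, epochs=100):
    """Minimal linear SVM via stochastic sub-gradient descent on the
    hinge loss; a stand-in for a real SVM solver."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:          # inside the margin: push out
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                              # correct side: only shrink w
                w -= lr * lam * w
    return w, b

def emotion_mixture(models, x):
    """Softmax over one-vs-rest decision values, yielding a blend over
    the four emotions rather than one hard label."""
    scores = np.array([x @ w + b for w, b in models])
    e = np.exp(scores - scores.max())
    return dict(zip(EMOTIONS, e / e.sum()))
```

An utterance near an "anger" cluster would then come back as, say, mostly anger with small weights on the other three states, which is the mixture-style description the abstract argues for.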


Non-Photorealistic Animation and Rendering | 2004

Example-based composite sketching of human portraits

Hong Chen; Ziqiang Liu; Chuck Rose; Ying-Qing Xu; Heung-Yeung Shum; David Salesin

Creating a portrait in the style of a particular artistic tradition or a particular artist is a difficult problem. Elusive to codify algorithmically, the nebulous qualities which combine to form artwork are often well captured using example-based approaches. These methods place the artist in the process, often during system training, in the hope that their talents may be tapped. Example-based methods do not make this problem easy, however. Examples are precious, so training sets are small, reducing the number of techniques which may be employed. We propose a system which combines two separate but similar subsystems, one for the face and another for the hair, each of which employs a global and a local model. Facial exaggeration to achieve the desired stylistic look is handled during the global face phase. Each subsystem uses a divide-and-conquer approach, but while the face subsystem decomposes into separable subproblems for the eyes, mouth, nose, etc., the hair must be subdivided in a relatively arbitrary way, making the hair subproblem decomposition an important step which must be handled carefully with a structured model and a detailed model.


Eurographics Symposium on Rendering Techniques | 2007

Natural image colorization

Qing Luan; Fang Wen; Daniel Cohen-Or; Lin Liang; Ying-Qing Xu; Heung-Yeung Shum

In this paper, we present an interactive system that lets users easily colorize natural images of complex scenes. In our system, the colorization procedure is explicitly separated into two stages: color labeling and color mapping. Pixels that should roughly share similar colors are grouped into coherent regions in the color labeling stage, and the color mapping stage then fine-tunes the colors within each coherent region. To handle the textures commonly seen in natural images, we propose a new color labeling scheme that groups not only neighboring pixels with similar intensity but also remote pixels with similar texture. Motivated by the complementary nature of highly contrastive regions and smooth regions, we employ a smoothness map to guide the incorporation of intensity-continuity and texture-similarity constraints into our labeling algorithm. Within each coherent region obtained from the color labeling stage, color mapping is applied to generate vivid colorization effects by assigning colors to just a few pixels in the region. A set of intuitive interface tools is provided for labeling, coloring, and modifying the result. We demonstrate compelling results of colorizing natural images with our system using only a modest amount of user input.
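The two-stage split can be illustrated with a deliberately crude sketch: a greedy intensity-based flood fill stands in for the labeling stage (the paper additionally groups remote pixels by texture similarity under a smoothness map), and propagating one seed color per region stands in for the mapping stage (the paper fine-tunes colors within a region instead). All names are hypothetical:

```python
import numpy as np

def label_regions(gray, thresh=0.1):
    """Toy color-labeling stage: flood fill grouping 4-connected pixels
    whose intensities differ by less than `thresh`."""
    h, w = gray.shape
    labels = -np.ones((h, w), dtype=int)
    next_label = 0
    for i in range(h):
        for j in range(w):
            if labels[i, j] >= 0:
                continue
            labels[i, j] = next_label
            stack = [(i, j)]
            while stack:
                y, x = stack.pop()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w and labels[ny, nx] < 0
                            and abs(gray[ny, nx] - gray[y, x]) < thresh):
                        labels[ny, nx] = next_label
                        stack.append((ny, nx))
            next_label += 1
    return labels

def map_colors(labels, seeds):
    """Toy color-mapping stage: copy each seed pixel's color to every
    pixel in that seed's region."""
    out = np.zeros(labels.shape + (3,))
    for (y, x), rgb in seeds.items():
        out[labels == labels[y, x]] = rgb
    return out
```

The point of the split is visible even in this toy: the user only touches a few pixels (the seeds), and the labeling stage decides how far each touch spreads.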


International Conference on Computer Graphics and Interactive Techniques | 2010

Data-driven image color theme enhancement

Baoyuan Wang; Yizhou Yu; Tien-Tsin Wong; Chun Chen; Ying-Qing Xu

It is often important for designers and photographers to convey or enhance desired color themes in their work. A color theme is typically defined as a template of colors and an associated verbal description. This paper presents a data-driven method for enhancing a desired color theme in an image. We formulate our goal as a unified optimization that simultaneously considers a desired color theme, texture-color relationships as well as automatic or user-specified color constraints. Quantifying the difference between an image and a color theme is made possible by color mood spaces and a generalization of an additivity relationship for two-color combinations. We incorporate prior knowledge, such as texture-color relationships, extracted from a database of photographs to maintain a natural look of the edited images. Experiments and a user study have confirmed the effectiveness of our method.
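A bare-bones stand-in for the optimization's central trade-off might look like the following (names hypothetical): pull each pixel toward its nearest theme color, with a weight balancing theme fidelity against staying close to the original image. The paper's actual formulation additionally incorporates color mood spaces, texture-color priors from a photo database, and user constraints:

```python
import numpy as np

def enhance_theme(pixels, theme, w=0.5):
    """Toy color-theme enhancement: move each RGB pixel toward its
    nearest color in `theme`, where w in [0, 1] trades theme fidelity
    (w=1) against preserving the original colors (w=0)."""
    pixels = np.asarray(pixels, dtype=float)
    theme = np.asarray(theme, dtype=float)
    # squared distance from every pixel to every theme color
    d = ((pixels[:, None, :] - theme[None, :, :]) ** 2).sum(axis=-1)
    nearest = theme[d.argmin(axis=1)]
    return (1.0 - w) * pixels + w * nearest
```

This naive per-pixel blend is exactly what the paper's texture-color priors are there to prevent from looking unnatural, e.g. grass being pulled toward a theme color that grass textures never take in real photographs.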


Pacific Conference on Computer Graphics and Applications | 2002

Example-based caricature generation with exaggeration

Lin Liang; Hong Chen; Ying-Qing Xu; Heung-Yeung Shum

In this paper, we present a system that automatically generates caricatures from input face images. From example caricatures drawn by an artist, our system learns how that artist draws caricatures. In our approach, we decouple the process of caricature generation into two parts: shape exaggeration and texture style transfer. The exaggeration of a caricature is accomplished by a prototype-based method that captures the artist's understanding of which features of a face are distinctive and how to exaggerate them. Such prototypes are learnt by analyzing the correlation between image-caricature pairs using partial least-squares (PLS). Experimental results demonstrate the effectiveness of our system.
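The simplest possible form of shape exaggeration, amplifying a face's deviation from the mean face, can be written in one line; this is a generic sketch, not the paper's method, which learns from image-caricature pairs via PLS which deviations to exaggerate and by how much:

```python
import numpy as np

def exaggerate(shape, mean_shape, k=1.5):
    """Naive prototype-style exaggeration: scale a face shape's
    deviation from the mean face by a factor k > 1, so distinctive
    features (large deviations) move furthest.  `shape` and
    `mean_shape` are (n_points, 2) arrays of facial landmarks."""
    shape = np.asarray(shape, dtype=float)
    mean_shape = np.asarray(mean_shape, dtype=float)
    return mean_shape + k * (shape - mean_shape)
```

Uniform scaling like this exaggerates every deviation equally; the learned PLS mapping is what lets the system exaggerate in the particular artist's style rather than mechanically.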


ACM Transactions on Graphics | 2008

Sketching reality: Realistic interpretation of architectural designs

Xuejin Chen; Sing Bing Kang; Ying-Qing Xu; Julie Dorsey; Heung-Yeung Shum

In this article, we introduce sketching reality, the process of converting a freehand sketch into a realistic-looking model. We apply this concept to architectural designs. As the sketch is being drawn, our system periodically interprets its 2.5D geometry by identifying new junctions, edges, and faces, and then analyzing the extracted topology. The user can also add detailed geometry and textures through sketches; this is made possible by databases that match partial sketches to models of detailed geometry and textures. The final product is a realistic texture-mapped 2.5D model of the building. We show a variety of buildings that have been created using this system.
