Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Xingyu Gao is active.

Publication


Featured research published by Xingyu Gao.


Ubiquitous Computing | 2013

Inferring social contextual behavior from Bluetooth traces

Zhenyu Chen; Yiqiang Chen; Shuangquan Wang; Junfa Liu; Xingyu Gao; Andrew T. Campbell

Context-aware computing is receiving increasing attention, and a user's social contextual behavior is crucial for user-centric, dynamic behavior inference. To date, extensive work has focused on detecting specific places from static radio signals such as GPS, GSM, and WiFi, and on recognizing mobility modes from embedded sensors such as the accelerometer. This paper proposes a distinct-feature-based classification approach with a context-restraint majority-vote rule to infer social contextual behavior in dynamic surroundings. Experimental results on real-life Bluetooth traces indicate that the proposed method achieves high accuracy in inferring social contextual behavior.
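
The majority-vote idea can be illustrated with a short sketch. The snippet below is a minimal, generic sliding-window majority vote over per-window classifier outputs; the function name, window span, and labels are illustrative, not the paper's implementation.

```python
from collections import Counter

def majority_vote_smooth(window_labels, vote_span=5):
    """Smooth per-window classifier outputs with a sliding majority vote.

    Each Bluetooth scan window gets a preliminary label from the
    classifier; the final label for window i is the most common label
    in a small neighborhood around i.
    """
    smoothed = []
    half = vote_span // 2
    for i in range(len(window_labels)):
        lo, hi = max(0, i - half), min(len(window_labels), i + half + 1)
        smoothed.append(Counter(window_labels[lo:hi]).most_common(1)[0][0])
    return smoothed

# Example: an isolated misclassification is voted away.
labels = ["meeting", "meeting", "corridor", "meeting", "meeting", "lunch", "lunch"]
print(majority_vote_smooth(labels))
```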


Conference on Multimedia Modeling | 2009

Semi-supervised Learning of Caricature Pattern from Manifold Regularization

Junfa Liu; Yiqiang Chen; Jinjing Xie; Xingyu Gao; Wen Gao

Automatic caricature synthesis transforms an input face into an exaggerated one. It is becoming an interesting research topic, but specifying the caricature pattern for an input face remains an open issue. This paper proposes a novel pattern prediction method based on manifold regularization (MR), which comprises three steps. First, we learn the caricature pattern by manifold dimension reduction and select some low-dimensional caricature patterns as labels for the corresponding true faces. Second, manifold regularization is performed to build a semi-supervised regression between true faces and the pattern labels. Third, the input face is mapped to a pattern label by the learned regression model, and the pattern label is further transformed to caricature parameters by a locally linear reconstruction algorithm. This approach takes advantage of the manifold structure underlying both true faces and caricatures. Experiments show that the low-dimensional manifold represents the caricature pattern well and that the semi-supervised regression model obtained from manifold regularization successfully predicts the target caricature pattern.
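
To make the manifold-regularization step concrete, here is a minimal sketch of linear Laplacian-regularized least squares, the standard semi-supervised regression behind MR. The kNN graph construction, regularization weights, and parameter names are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def laprls_linear(X_lab, y_lab, X_all, lam_a=1e-2, lam_i=1e-1, k=5):
    """Linear Laplacian-regularized least squares (illustrative sketch).

    Ridge term (lam_a) plus a manifold-smoothness term (lam_i) defined
    over a kNN graph built on both labeled and unlabeled samples.
    """
    n = X_all.shape[0]
    # Symmetric kNN adjacency over all samples.
    d2 = ((X_all[:, None, :] - X_all[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d2[i])[1:k + 1]:
            W[i, j] = W[j, i] = 1.0
    L = np.diag(W.sum(1)) - W                       # graph Laplacian
    A = (X_lab.T @ X_lab + lam_a * np.eye(X_lab.shape[1])
         + lam_i * X_all.T @ L @ X_all)
    w = np.linalg.solve(A, X_lab.T @ y_lab)
    return w                                        # predict with X_new @ w
```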


Computer Graphics Forum | 2009

Semi-Supervised Learning in Reconstructed Manifold Space for 3D Caricature Generation

Junfa Liu; Yiqiang Chen; Chunyan Miao; Jinjing Xie; Charles X. Ling; Xingyu Gao; Wen Gao

Recently, automatic 3D caricature generation has attracted much attention from both the research community and the game industry. Machine learning has proven effective for the automatic generation of caricatures. However, the lack of 3D caricature samples makes it challenging to train a good model. This paper addresses the problem in two steps. First, the training set is enlarged by reconstructing 3D caricatures from 2D caricature samples with a Principal Component Analysis (PCA)-based method. Second, a regression model between the 2D real faces and the enlarged set of 3D caricatures is learned by the semi-supervised manifold regularization (MR) method. We then predict 3D caricatures for 2D real faces with the learned model. The experiments show that our approach synthesizes 3D caricatures more effectively than traditional methods. Moreover, our system has been applied successfully in a massive multi-user educational game to provide human-like avatars.
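
The reconstruction step rests on a standard PCA shape model. The sketch below, assuming each 3D caricature is a fixed-topology mesh flattened into one row vector, shows how a mean shape and basis are fitted and how a sample is projected and reconstructed; the function names are illustrative, not the paper's code.

```python
import numpy as np

def fit_pca(shapes, n_components=10):
    """Fit a PCA shape model on flattened 3D meshes (one row per sample)."""
    mean = shapes.mean(axis=0)
    U, S, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, Vt[:n_components]              # mean shape + basis rows

def reconstruct(shape, mean, basis):
    """Project a shape into the PCA subspace and reconstruct it, one way
    to synthesize plausible new 3D caricature samples from a small set."""
    coeff = basis @ (shape - mean)               # subspace coordinates
    return mean + basis.T @ coeff
```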


ACM Multimedia | 2009

Interactive 3D caricature generation based on double sampling

Jinjing Xie; Yiqiang Chen; Junfa Liu; Chunyan Miao; Xingyu Gao

Recently, 3D caricature generation and its applications have attracted wide attention from both the research community and the entertainment industry. This paper proposes a novel interactive approach for generating varied and interesting 3D caricatures based on double sampling. First, according to the user's operations, we obtain a coarse 3D caricature with local feature transformations by sampling in a well-built principal component analysis (PCA) subspace. Second, to exploit the information in the 2D caricature dataset, we sample in the locally linear embedding (LLE) manifold subspace. Finally, we use the learned 2D caricature information to further refine the coarse caricature by applying Kriging interpolation. The experiments show that 3D caricatures generated by our method preserve highly artistic styles while reflecting the user's intention.
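
The LLE-style step rests on locally linear reconstruction. As a rough, self-contained illustration (the regularizer and neighbor handling are assumptions, not the paper's settings), the following solves for the weights that best reconstruct a sample from its nearest neighbors:

```python
import numpy as np

def llr_weights(x, neighbors, reg=1e-3):
    """Locally linear reconstruction: find weights w, sum(w) = 1, that
    best reconstruct x from its k nearest neighbors (rows of `neighbors`).
    `reg` conditions the local Gram matrix, which can be singular.
    """
    Z = neighbors - x                     # shift neighbors to the origin
    G = Z @ Z.T                           # local Gram matrix (k x k)
    G += reg * np.trace(G) * np.eye(len(G))
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                    # enforce sum-to-one constraint
```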


International Conference on Data Mining | 2015

Unobtrusive Sensing Incremental Social Contexts Using Class Incremental Learning

Zhenyu Chen; Yiqiang Chen; Xingyu Gao; Shuangquan Wang; Lisha Hu; Chenggang Clarence Yan; Nicholas D. Lane; Chunyan Miao

By capturing the characteristics of surrounding contexts through the widely used Bluetooth sensor, user-centric social contexts can be effectively sensed and discovered from dynamic Bluetooth information. At present, state-of-the-art classifiers can recognize only the limited set of classes trained in the learning phase; however, owing to the complex diversity of social contextual behavior, such classifiers seldom deal with newly appearing contexts, which greatly degrades recognition performance. To address this problem, we propose an OSELM (online sequential extreme learning machine) based class incremental learning method for continuously and unobtrusively sensing new classes of social contexts from dynamic Bluetooth data alone. We integrate a fuzzy clustering technique with OSELM to discover and recognize social contextual behaviors from real-world Bluetooth sensor data. Experimental results show that our method can automatically cope with incremental classes of social contexts that appear unpredictably in the real world, and that it recognizes both the original known classes and newly appearing unknown classes effectively.
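
An OSELM keeps a random hidden layer fixed and updates only its output weights with recursive least squares as data arrives. The class below is a minimal, generic OSELM sketch; the paper's fuzzy-clustering detection of new classes and the widening of the output layer are omitted, and all hyperparameters are illustrative.

```python
import numpy as np

class OSELM:
    """Online sequential extreme learning machine (minimal sketch)."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.Wi = rng.standard_normal((n_in, n_hidden))  # fixed random weights
        self.b = rng.standard_normal(n_hidden)
        self.P = None                                    # inverse covariance
        self.beta = np.zeros((n_hidden, n_out))          # output weights

    def _h(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.Wi + self.b)))  # sigmoid layer

    def partial_fit(self, X, Y):
        H = self._h(X)
        if self.P is None:                               # initial batch
            self.P = np.linalg.inv(H.T @ H + 1e-3 * np.eye(H.shape[1]))
            self.beta = self.P @ H.T @ Y
        else:                                            # recursive RLS update
            K = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
            self.P -= self.P @ H.T @ K @ H @ self.P
            self.beta += self.P @ H.T @ (Y - H @ self.beta)

    def predict(self, X):
        return self._h(X) @ self.beta
```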


Pacific Rim Conference on Multimedia | 2008

Personalized 3D Caricature Generation Based on PCA Subspace

Xingyu Gao; Yiqiang Chen; Junfa Liu; Jingye Zhou

3D caricature generation is becoming an increasingly interesting and challenging research topic. This paper presents personalized 3D caricature generation using PCA (Principal Component Analysis) for each 3D caricature component. First, we construct a 3D caricature dataset manually, and then build component datasets according to the attributes defined for each component. Next, PCA is employed to create a subspace for each 3D caricature component, and diverse, vivid components are generated interactively. Finally, the generated components are combined into a 3D caricature based on the 3D true face. Our approach makes it convenient and efficient to generate many kinds of personalized 3D caricatures: a user only needs to drag the sliders for the principal components of each caricature component.
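
The slider interaction maps directly onto PCA coefficients. A minimal sketch, assuming per-component subspaces (the "nose" subspace below is hypothetical) with precomputed means, bases, and per-mode standard deviations:

```python
import numpy as np

def component_from_sliders(mean, basis, stddevs, slider_values):
    """Turn slider positions into a 3D caricature component.

    Each slider scales one principal component, clipped here to an
    illustrative +/-3 standard deviations; the component shape is
    mean + sum_i s_i * sigma_i * v_i.
    """
    s = np.clip(np.asarray(slider_values, dtype=float), -3.0, 3.0)
    return mean + (s * stddevs) @ basis

# e.g. exaggerate the first mode of a hypothetical "nose" subspace:
# nose = component_from_sliders(nose_mean, nose_basis, nose_sigma, [2.5, 0, 0])
```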


Pacific Rim Conference on Multimedia | 2008

A Novel Method for Pencil Drawing Generation in Non-Photo-Realistic Rendering

Zhenyu Chen; Jingye Zhou; Xingyu Gao; Longsheng Li; Junfa Liu

This paper puts forward a novel method for automatically generating a pencil drawing from a real 2D color image in non-photo-realistic rendering. First, the edges of the color image are detected by the Sobel operator. Next, the color image is sharpened by an Unsharp Mask (USM), and color scaling is then used to obtain an image with radial and edge details. Furthermore, the original image is divided into meaningful regions using an efficient image segmentation method, and the texture direction is determined by the Fourier transform and shape features. To better render the illumination and local texture of a pencil drawing, the line integral convolution (LIC) algorithm is applied in combination with color scaling and a white-noise image. Finally, the pencil drawing is created by superposing the edges, the USM image, and the texture. Experimental results show that our method greatly improves generation efficiency, producing more distinct edges, more natural tones, and more realistic textures than existing methods.
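
The first two stages of this pipeline are standard image operations. Here is a minimal OpenCV sketch of Sobel edge extraction and unsharp masking; segmentation, Fourier-based texture direction, and LIC rendering are omitted, and the blur sigma and blend weights are illustrative.

```python
import cv2

def pencil_layers(path):
    """Compute the Sobel edge layer and USM-sharpened image for a photo."""
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Sobel edge magnitude.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))

    # Unsharp mask: original + amount * (original - blurred).
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=3)
    usm = cv2.addWeighted(img, 1.5, blurred, -0.5, 0)
    return edges, usm
```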


International Conference on Information and Communication Security | 2009

Genetic sampling in eigenspace for 3D caricature synthesis

Xingyu Gao; Yiqiang Chen; Jingye Zhou; Junfa Liu; Chunyan Miao; Zhenyu Chen

Caricature synthesis is becoming one of the most challenging research problems today, especially for 3D caricatures. In this paper, we present a novel approach to 3D caricature synthesis that adopts genetic sampling in eigenspace. Our approach aims to generate 3D caricatures that show not only an exaggerated effect but also the facial features of the real face. First, we build eigenspaces of 3D caricature components by Principal Component Analysis (PCA). Then, a genetic algorithm (GA) is used to sample optimal projections in the eigenspaces for the components of the 3D real face, and we reconstruct the 3D caricature components from these projections. Finally, we refine the 3D caricature using a PCA-based sampling approach and Kriging interpolation. Experimental results show that our approach can effectively synthesize interesting and harmonious 3D caricatures.
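
A genetic search over PCA coefficients can be sketched generically. The snippet below uses truncation selection, blend crossover, and Gaussian mutation over a coefficient vector; the fitness function (which would trade exaggeration against likeness) and all GA parameters are placeholders, not the paper's choices.

```python
import numpy as np

def ga_sample_eigenspace(fitness, dim, pop=30, gens=50, sigma=0.3, seed=0):
    """Search a PCA-coefficient space with a simple genetic algorithm."""
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((pop, dim))              # initial population
    for _ in range(gens):
        scores = np.array([fitness(p) for p in P])
        elite = P[np.argsort(scores)[-pop // 2:]]    # keep the best half
        parents = elite[rng.integers(0, len(elite), (pop, 2))]
        alpha = rng.random((pop, 1))                 # blend crossover
        P = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]
        P += sigma * rng.standard_normal(P.shape)    # Gaussian mutation
    return max(P, key=fitness)                       # best coefficient vector
```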


Pacific Rim Conference on Multimedia | 2008

AAML Based Avatar Animation with Personalized Expression for Online Chatting System

Junfa Liu; Yiqiang Chen; Xingyu Gao; Jinjing Xie; Wen Gao

This paper presents an online chatting system with automatic expression animation and an animated avatar. The system is built on AAML (Affective Animation Markup Language), an XML-based language that hierarchically describes affective content for online communication. When the user types a chat sentence, the AAML produces a piece of personalized facial animation, which is sent immediately to the remote client to enrich the chat. A distinctive feature of our system is that its structure is hierarchical and open, so users can extend AAML by defining their own tags to realize personal affective expressions. In addition, caricature generation from an input facial photograph is embedded in the framework, so the animation is synthesized on a caricature for an entertaining effect. Successful application of the online chatting system shows that it can enhance affective interaction through customized facial animation and an animated avatar that resembles the user. The same function can also be applied widely in online or mobile environments, such as multimedia message synthesis.
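
The abstract does not reproduce AAML's tag set, so the fragment below is a hypothetical AAML-like document, invented only to show how such an XML-based affect markup could be parsed; the tags and attributes are not the paper's actual schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical AAML-like fragment; the real, user-extensible tag set is
# defined by the paper. This schema is illustrative only.
aaml = """
<aaml>
  <sentence text="Great to see you!">
    <expression type="smile" intensity="0.8" duration="1.2"/>
  </sentence>
</aaml>
"""

for sent in ET.fromstring(aaml):
    for expr in sent.findall("expression"):
        print(sent.get("text"), "->", expr.get("type"),
              "intensity", expr.get("intensity"))
```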


International Symposium on Wearable Computers | 2015

Recognizing extended surrounding contexts via class incremental learning

Zhenyu Chen; Yiqiang Chen; Shuangquan Wang; Lisha Hu; Xingyu Gao; Xinlong Jiang

Benefiting from the widely used Bluetooth sensor, a user's surrounding contexts can be effectively recognized from Bluetooth data. Most existing studies seldom deal with newly extended surrounding contexts, which degrades recognition performance, because the built classifier recognizes only the limited classes learned in the training phase. This paper proposes a fuzzy class incremental learning method based on OSELM, named FCI-ELM, for continuously recognizing extended classes of contexts. Encouraging experimental results show that FCI-ELM can automatically and continuously recognize newly discovered classes of contexts in the real world while preserving its recognition ability on known classes through class incremental learning.
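
The new-class trigger in such a system can be illustrated with a soft-membership test. A minimal sketch, assuming known context classes are summarized by cluster centers and using a simple inverse-distance membership; the threshold and membership formula are illustrative, not FCI-ELM's.

```python
import numpy as np

def is_new_class(x, centers, threshold=0.6):
    """Flag a sample as a candidate new context class when its strongest
    (distance-based) soft membership to any known cluster center is weak."""
    d = np.linalg.norm(centers - x, axis=1) + 1e-12
    u = (1.0 / d) / (1.0 / d).sum()      # soft memberships over centers
    return u.max() < threshold
```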

Collaboration


Dive into Xingyu Gao's collaborations.

Top Co-Authors

Yiqiang Chen (Chinese Academy of Sciences)
Junfa Liu (Chinese Academy of Sciences)
Zhenyu Chen (Chinese Academy of Sciences)
Jinjing Xie (Chinese Academy of Sciences)
Chunyan Miao (Nanyang Technological University)
Shuangquan Wang (Chinese Academy of Sciences)
Lisha Hu (Chinese Academy of Sciences)