
Publication


Featured research published by Xinchen Yan.


European Conference on Computer Vision | 2016

Attribute2Image: Conditional Image Generation from Visual Attributes

Xinchen Yan; Jimei Yang; Kihyuk Sohn; Honglak Lee

This paper investigates a novel problem of generating images from visual attributes. We model the image as a composite of foreground and background and develop a layered generative model with disentangled latent variables that can be learned end-to-end using a variational auto-encoder. We experiment with natural images of faces and birds and demonstrate that the proposed models are capable of generating realistic and diverse samples with disentangled latent representations. We further use a general energy minimization algorithm for posterior inference of latent variables given novel images. As a result, the learned generative models show excellent quantitative and visual results in the tasks of attribute-conditioned image reconstruction and completion.
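The disentangled latent variables above are trained with standard variational auto-encoder machinery: a reparameterized sampling step and a KL regularizer toward a standard normal prior. A minimal sketch of those two ingredients (function names and the pure-Python lists are illustrative assumptions, not the paper's implementation):

```python
import math
import random

def reparameterize(mu, log_var, eps=None):
    # z = mu + sigma * eps: sampling stays differentiable w.r.t. mu and log_var
    if eps is None:
        eps = [random.gauss(0.0, 1.0) for _ in mu]
    return [m + math.exp(0.5 * lv) * e for m, lv, e in zip(mu, log_var, eps)]

def kl_to_standard_normal(mu, log_var):
    # KL(N(mu, sigma^2) || N(0, 1)), the regularizer in the variational bound
    return -0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mu, log_var))
```

In a layered model like the one described, separate latent vectors for foreground and background would each pass through this same sampling and regularization step.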


International Conference of the IEEE Engineering in Medicine and Biology Society | 2015

A unified framework for automatic wound segmentation and analysis with deep convolutional neural networks.

Changhan Wang; Xinchen Yan; Max Smith; Kanika Kochhar; Marcie S. Rubin; Stephen M. Warren; James S. Wrobel; Honglak Lee

Wound surface area changes over multiple weeks are highly predictive of the wound healing process. Furthermore, the quality and quantity of the tissue in the wound bed also offer important prognostic information. Unfortunately, accurate measurements of wound surface area changes are out of reach in the busy wound practice setting. Currently, clinicians estimate wound size by measuring wound width and length with a scalpel after wound treatment, which is highly inaccurate. To address this problem, we propose an integrated system to automatically segment wound regions and analyze wound conditions in wound images. Unlike previous segmentation techniques, which rely on handcrafted features or unsupervised approaches, our proposed deep learning method jointly learns task-relevant visual features and performs wound segmentation. Moreover, the learned features are applied to further analysis of wounds in two ways: infection detection and healing progress prediction. To the best of our knowledge, this is the first attempt to automate long-term predictions of general wound healing progress. Our method is computationally efficient and takes less than 5 seconds per wound image (480 by 640 pixels) on a typical laptop computer. Our evaluations on a large-scale wound database demonstrate the effectiveness and reliability of the proposed system.
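Once a per-pixel wound mask has been produced by the segmentation network, the surface-area measurement that the abstract motivates reduces to counting labeled pixels under a camera calibration. A hypothetical sketch (the mask format and the calibration constant are assumptions, not the paper's interface):

```python
def wound_area_cm2(mask, pixel_area_cm2):
    # mask: 2D list of 0/1 per-pixel labels (1 = wound tissue);
    # pixel_area_cm2: area covered by one pixel, from camera calibration.
    # Estimated area = (number of wound pixels) * (area per pixel).
    wound_pixels = sum(sum(row) for row in mask)
    return wound_pixels * pixel_area_cm2
```

Tracking this quantity across weekly images would give the surface-area change curve that the abstract identifies as predictive of healing.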


European Conference on Computer Vision | 2014

Efficient Online Spatio-Temporal Filtering for Video Event Detection

Xinchen Yan; Junsong Yuan; Hui Liang

We propose a novel spatio-temporal filtering technique to improve the per-pixel prediction map by leveraging the spatio-temporal smoothness of the video signal. Different from previous techniques that perform spatio-temporal filtering in an offline/batch mode, e.g., through graphical models, our filtering can be implemented online and in real time, with provably minimal computational complexity. Moreover, it is compatible with any image analysis module that can produce a per-pixel map of detection scores or multi-class prediction distributions. For each pixel, our filtering finds the optimal spatio-temporal trajectory in the past frames that has the maximum accumulated detection score. Pixels with a small accumulated detection score are treated as false alarms and thus suppressed. To demonstrate the effectiveness of our online spatio-temporal filtering, we perform three video event tasks: salient action discovery, walking pedestrian detection, and sports event detection, all in an online/causal way. The experimental results on the three datasets demonstrate the excellent performance of our filtering scheme when compared with state-of-the-art methods.
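The per-pixel trajectory search described above admits a simple per-frame dynamic-programming update: each pixel's accumulated score extends the best trajectory ending at a nearby pixel in the previous frame. A sketch under the assumptions that the score map is a 2D grid and that trajectories move at most `radius` pixels between frames (names and interface are illustrative, not the paper's code):

```python
def online_filter_step(prev_acc, scores, radius=1):
    # One online update of the accumulated detection-score map:
    #   acc[y][x] = scores[y][x] + max over spatial neighbors of prev_acc
    # On the first frame (prev_acc is None) the accumulator is just the scores.
    h, w = len(scores), len(scores[0])
    if prev_acc is None:
        return [row[:] for row in scores]
    acc = []
    for y in range(h):
        row = []
        for x in range(w):
            best = max(
                prev_acc[ny][nx]
                for ny in range(max(0, y - radius), min(h, y + radius + 1))
                for nx in range(max(0, x - radius), min(w, x + radius + 1))
            )
            row.append(scores[y][x] + best)
        acc.append(row)
    return acc
```

Because each frame only needs the previous accumulator, the update is causal and constant-memory per pixel, matching the online/real-time claim; thresholding the accumulator then suppresses low-scoring trajectories as false alarms.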


European Conference on Computer Vision | 2018

MT-VAE: Learning Motion Transformations to Generate Multimodal Human Dynamics

Xinchen Yan; Akash Rastogi; Ruben Villegas; Kalyan Sunkavalli; Eli Shechtman; Sunil Hadap; Ersin Yumer; Honglak Lee

Long-term human motion can be represented as a series of motion modes (motion sequences that capture short-term temporal dynamics) with transitions between them. We leverage this structure and present a novel Motion Transformation Variational Auto-Encoder (MT-VAE) for learning motion sequence generation. Our model jointly learns a feature embedding for motion modes (from which the motion sequence can be reconstructed) and a feature transformation that represents the transition from one motion mode to the next. Our model is able to generate multiple diverse and plausible future motion sequences from the same input. We apply our approach to both facial and full-body motion, and demonstrate applications such as analogy-based motion transfer and video synthesis.
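The analogy-based motion transfer mentioned above can be pictured as vector arithmetic in the learned mode-embedding space: infer the transformation between a motion mode and its successor, then apply that same transformation to a different subject's embedding. A deliberately simplified sketch assuming an additive latent transformation (the actual MT-VAE learns this transformation with a network; the additive form here is an illustrative assumption):

```python
def analogy_motion_transfer(z_a, z_a_next, z_b):
    # Infer the transformation that maps mode A's embedding to its successor,
    # then apply the same transformation to mode B's embedding:
    #   z_b_next = z_b + (z_a_next - z_a)
    transform = [n - c for c, n in zip(z_a, z_a_next)]
    return [z + t for z, t in zip(z_b, transform)]
```

Decoding the resulting embedding back into a motion sequence would then transfer A's transition (e.g., a facial expression change) onto subject B.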


International Conference on Machine Learning | 2016

Generative adversarial text to image synthesis

Scott E. Reed; Zeynep Akata; Xinchen Yan; Lajanugen Logeswaran; Bernt Schiele; Honglak Lee


Neural Information Processing Systems | 2015

Learning structured output representation using deep conditional generative models

Kihyuk Sohn; Xinchen Yan; Honglak Lee


Neural Information Processing Systems | 2016

Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction without 3D Supervision

Xinchen Yan; Jimei Yang; Ersin Yumer; Yijie Guo; Honglak Lee


arXiv: Learning | 2017

Deep Variational Canonical Correlation Analysis

Weiran Wang; Xinchen Yan; Honglak Lee; Karen Livescu


International Conference on Robotics and Automation | 2018

Learning 6-DOF Grasping Interaction via Deep Geometry-Aware 3D Representations

Xinchen Yan; Jasmine Hsu; Mohammad Khansari; Yunfei Bai; Arkanath Pathak; Abhinav Gupta; James Davidson; Honglak Lee


Neural Information Processing Systems | 2018

Learning Hierarchical Semantic Image Manipulation through Structured Representations

Seunghoon Hong; Xinchen Yan; Honglak Lee; Thomas Huang

Collaboration


Dive into Xinchen Yan's collaborations.

Top Co-Authors

Honglak Lee

University of Michigan


Kihyuk Sohn

University of Michigan
