Xinchen Yan
University of Michigan
Publications
Featured research published by Xinchen Yan.
european conference on computer vision | 2016
Xinchen Yan; Jimei Yang; Kihyuk Sohn; Honglak Lee
This paper investigates a novel problem of generating images from visual attributes. We model the image as a composite of foreground and background and develop a layered generative model with disentangled latent variables that can be learned end-to-end using a variational auto-encoder. We experiment with natural images of faces and birds and demonstrate that the proposed models are capable of generating realistic and diverse samples with disentangled latent representations. We use a general energy minimization algorithm for posterior inference of latent variables given novel images. With this inference procedure, the learned generative models show excellent quantitative and visual results in the tasks of attribute-conditioned image reconstruction and completion.
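The layered model described above forms the image as a gated composition of a foreground layer and a background layer. Below is a minimal NumPy sketch of that composition step; the array names, shapes, and random inputs are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def compose_layers(foreground, background, mask):
    """Gated composition in a layered generative model: each pixel is a
    convex combination of foreground and background, weighted by a
    per-pixel soft mask in [0, 1]."""
    return mask * foreground + (1.0 - mask) * background

# Toy example with random "decoded" layers (shapes: H x W x 3).
rng = np.random.default_rng(0)
fg = rng.random((64, 64, 3))   # foreground appearance
bg = rng.random((64, 64, 3))   # background appearance
m = rng.random((64, 64, 1))    # soft mask, broadcast over channels

image = compose_layers(fg, bg, m)
print(image.shape)  # (64, 64, 3)
```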
international conference of the ieee engineering in medicine and biology society | 2015
Changhan Wang; Xinchen Yan; Max Smith; Kanika Kochhar; Marcie S. Rubin; Stephen M. Warren; James S. Wrobel; Honglak Lee
Wound surface area changes over multiple weeks are highly predictive of the wound healing process. Furthermore, the quality and quantity of the tissue in the wound bed also offer important prognostic information. Unfortunately, accurate measurements of wound surface area changes are out of reach in the busy wound practice setting. Currently, clinicians estimate wound size by gauging wound width and length using a scalpel after wound treatment, which is highly inaccurate. To address this problem, we propose an integrated system to automatically segment wound regions and analyze wound conditions in wound images. Unlike previous segmentation techniques, which rely on handcrafted features or unsupervised approaches, our proposed deep learning method jointly learns task-relevant visual features and performs wound segmentation. Moreover, the learned features are applied to further analysis of wounds in two ways: infection detection and healing progress prediction. To the best of our knowledge, this is the first attempt to automate long-term predictions of general wound healing progress. Our method is computationally efficient and takes less than 5 seconds per wound image (480 by 640 pixels) on a typical laptop computer. Our evaluations on a large-scale wound database demonstrate the effectiveness and reliability of the proposed system.
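The segmentation component described above is a convolutional network whose learned features directly produce a per-pixel wound/background prediction. The following PyTorch sketch shows that joint feature-learning-and-labeling structure in miniature; the layer sizes and two-class head are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Minimal fully convolutional network: the same learned features feed
    a per-pixel wound/background prediction. Layer widths are illustrative."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(32, 2, kernel_size=1)  # wound vs. background

    def forward(self, x):
        return self.classifier(self.features(x))

# One forward pass on a dummy 480 x 640 RGB image.
net = TinySegNet()
logits = net(torch.randn(1, 3, 480, 640))
print(logits.shape)  # torch.Size([1, 2, 480, 640])
```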
european conference on computer vision | 2014
Xinchen Yan; Junsong Yuan; Hui Liang
We propose a novel spatio-temporal filtering technique to improve the per-pixel prediction map, by leveraging the spatio-temporal smoothness of the video signal. Unlike previous techniques that perform spatio-temporal filtering in an offline/batch mode, e.g., through a graphical model, our filtering can be implemented online and in real time, with provably minimal computational complexity. Moreover, it is compatible with any image analysis module that can produce a per-pixel map of detection scores or multi-class prediction distributions. For each pixel, our filtering finds the optimal spatio-temporal trajectory in the past frames that has the maximum accumulated detection score. Pixels with a small accumulated detection score are treated as false alarms and are thus suppressed. To demonstrate the effectiveness of our online spatio-temporal filtering, we perform three video event tasks: salient action discovery, walking pedestrian detection, and sports event detection, all in an online/causal way. The experimental results on the three datasets demonstrate the excellent performance of our filtering scheme when compared with state-of-the-art methods.
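The causal accumulation can be stated compactly: each pixel's accumulated score at frame t is its current detection score plus the best accumulated score among its spatial neighbours at frame t-1, so only the previous accumulation map needs to be kept in memory. A rough NumPy sketch of this recursion, with an assumed neighbourhood radius and decay factor (not values from the paper), is:

```python
import numpy as np

def accumulate_scores(score_maps, radius=1, decay=0.95):
    """Online spatio-temporal max filtering (sketch): each pixel adds its
    detection score to the best (decayed) accumulated score among spatial
    neighbours from the previous frame. Only the previous map is stored."""
    acc = np.zeros_like(score_maps[0])
    for scores in score_maps:                 # causal: frames arrive one by one
        h, w = acc.shape
        best_prev = np.zeros_like(acc)
        for i in range(h):
            for j in range(w):
                i0, i1 = max(0, i - radius), min(h, i + radius + 1)
                j0, j1 = max(0, j - radius), min(w, j + radius + 1)
                best_prev[i, j] = acc[i0:i1, j0:j1].max()
        acc = scores + decay * best_prev      # extend the best past trajectory
    return acc

# Toy run: 5 frames of random per-pixel detection scores.
frames = [np.random.rand(8, 8) for _ in range(5)]
print(accumulate_scores(frames).shape)  # (8, 8)
```

Pixels whose final accumulated score falls below a threshold would then be suppressed as false alarms.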
european conference on computer vision | 2018
Xinchen Yan; Akash Rastogi; Ruben Villegas; Kalyan Sunkavalli; Eli Shechtman; Sunil Hadap; Ersin Yumer; Honglak Lee
Long-term human motion can be represented as a series of motion modes—motion sequences that capture short-term temporal dynamics—with transitions between them. We leverage this structure and present a novel Motion Transformation Variational Auto-Encoder (MT-VAE) for learning motion sequence generation. Our model jointly learns a feature embedding for motion modes (from which the motion sequence can be reconstructed) and a feature transformation that represents the transition from one motion mode to the next. Our model can generate multiple diverse and plausible future motion sequences from the same input. We apply our approach to both facial and full-body motion, and demonstrate applications such as analogy-based motion transfer and video synthesis.
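In outline, the model embeds a motion mode as a feature vector, infers a latent code from a pair of consecutive mode embeddings, and applies a learned transformation conditioned on that code to predict the next mode's embedding, which is then decoded back into motion. A toy PyTorch sketch of this structure, with dimensions and layers chosen purely for illustration rather than taken from the published architecture, might look like:

```python
import torch
import torch.nn as nn

class TinyMTVAE(nn.Module):
    """Sketch of the MT-VAE idea: a latent code z parameterizes the
    transformation from the current motion-mode embedding to the next.
    All sizes and layer choices here are illustrative assumptions."""
    def __init__(self, motion_dim=30, feat_dim=64, z_dim=16):
        super().__init__()
        self.embed = nn.Linear(motion_dim, feat_dim)      # motion mode -> feature
        self.to_z = nn.Linear(2 * feat_dim, 2 * z_dim)    # infer latent transformation
        self.transform = nn.Linear(feat_dim + z_dim, feat_dim)
        self.decode = nn.Linear(feat_dim, motion_dim)     # feature -> future motion

    def forward(self, past, future):
        f_past, f_future = self.embed(past), self.embed(future)
        mu, logvar = self.to_z(torch.cat([f_past, f_future], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        f_next = self.transform(torch.cat([f_past, z], dim=-1))  # apply transformation
        return self.decode(f_next), mu, logvar

model = TinyMTVAE()
pred, mu, logvar = model(torch.randn(4, 30), torch.randn(4, 30))
print(pred.shape)  # torch.Size([4, 30])
```

At test time, sampling different z values for the same past motion would yield the multiple diverse futures described in the abstract.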
international conference on machine learning | 2016
Scott E. Reed; Zeynep Akata; Xinchen Yan; Lajanugen Logeswaran; Bernt Schiele; Honglak Lee
neural information processing systems | 2015
Kihyuk Sohn; Xinchen Yan; Honglak Lee
neural information processing systems | 2016
Xinchen Yan; Jimei Yang; Ersin Yumer; Yijie Guo; Honglak Lee
arXiv: Learning | 2017
Weiran Wang; Xinchen Yan; Honglak Lee; Karen Livescu
international conference on robotics and automation | 2018
Xinchen Yan; Jasmine Hsu; Mohammad Khansari; Yunfei Bai; Arkanath Pathak; Abhinav Gupta; James Davidson; Honglak Lee
neural information processing systems | 2018
Seunghoon Hong; Xinchen Yan; Honglak Lee; Thomas Huang