Sergi Pujades
Max Planck Society
Publications
Featured research published by Sergi Pujades.
ACM Transactions on Graphics | 2017
Gerard Pons-Moll; Sergi Pujades; Sonny Hu; Michael J. Black
Designing and simulating realistic clothing is challenging. Previous methods addressing the capture of clothing from 3D scans have been limited to single garments and simple motions, lack detail, or require specialized texture patterns. Here we address the problem of capturing regular clothing on fully dressed people in motion. People typically wear multiple pieces of clothing at a time. To estimate the shape of such clothing, track it over time, and render it believably, each garment must be segmented from the others and the body. Our ClothCap approach uses a new multi-part 3D model of clothed bodies, automatically segments each piece of clothing, estimates the minimally clothed body shape and pose under the clothing, and tracks the 3D deformations of the clothing over time. We estimate the garments and their motion from 4D scans; that is, high-resolution 3D scans of the subject in motion at 60 fps. ClothCap is able to capture a clothed person in motion, extract their clothing, and retarget the clothing to new body shapes; this provides a step towards virtual try-on.
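One step the abstract describes, retargeting captured clothing to a new body shape, can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes the segmentation stage has already associated each garment vertex with a nearby body vertex, and it models the garment simply as per-vertex offsets from the minimally clothed body.

```python
# A minimal garment-retargeting sketch in the spirit of ClothCap.
import numpy as np

def retarget_garment(garment_verts, src_body_verts, dst_body_verts, assoc):
    """Transfer a captured garment to a new body shape.

    garment_verts : (G, 3) garment vertex positions on the source body
    src_body_verts: (B, 3) source (minimally clothed) body vertices
    dst_body_verts: (B, 3) target body vertices, same topology
    assoc         : (G,)  index of the body vertex associated with each
                          garment vertex (assumed given by segmentation)
    """
    offsets = garment_verts - src_body_verts[assoc]  # clothing layer as offsets
    return dst_body_verts[assoc] + offsets           # re-dress the new body
```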
Computer Vision and Pattern Recognition | 2017
Chao Zhang; Sergi Pujades; Michael J. Black; Gerard Pons-Moll
We address the problem of estimating human pose and body shape from 3D scans over time. Reliable estimation of 3D body shape is necessary for many applications including virtual try-on, health monitoring, and avatar creation for virtual reality. Scanning bodies in minimal clothing, however, presents a practical barrier to these applications. We address this problem by estimating body shape under clothing from a sequence of 3D scans. Previous methods that have exploited body models produce smooth shapes lacking personalized details. We contribute a new approach to recover a personalized shape of the person. The estimated shape deviates from a parametric model to fit the 3D scans. We demonstrate the method using high quality 4D data as well as sequences of visual hulls extracted from multi-view images. We also make available BUFF, a new 4D dataset that enables quantitative evaluation (http://buff.is.tue.mpg.de). Our method outperforms the state of the art in both pose estimation and shape estimation, qualitatively and quantitatively.
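A toy illustration of the core constraint behind shape-under-clothing estimation: clothing only adds volume, so the true body surface should lie on or inside every scan, and evidence can be fused across the whole sequence. The helpers `body_model` and `signed_dist_to_scan` are hypothetical stand-ins for a statistical body model and a scan distance field, not the paper's API.

```python
# Toy objective: penalize body vertices that protrude outside any scan,
# sharing one shape across all frames of the sequence.
import numpy as np

def shape_under_clothing_loss(shape, poses, scans, body_model, signed_dist_to_scan):
    loss = 0.0
    for pose, scan in zip(poses, scans):       # fuse evidence over the sequence
        verts = body_model(shape, pose)        # (V, 3) posed body surface
        d = signed_dist_to_scan(verts, scan)   # > 0 means outside the scan
        loss += np.sum(np.maximum(d, 0.0)**2)  # clothing adds volume, body must fit inside
    return loss + 1e-3 * np.sum(shape**2)      # regularize toward the mean shape
```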
ACM Transactions on Graphics | 2017
Meekyoung Kim; Gerard Pons-Moll; Sergi Pujades; Seungbae Bang; Jinwook Kim; Michael J. Black; Sung-Hee Lee
Data-driven models of human poses and soft-tissue deformations can produce very realistic results, but they only model the visible surface of the human body and cannot create skin deformation due to interactions with the environment. Physical simulations can generalize to external forces, but their parameters are difficult to control. In this paper, we present a layered volumetric human body model learned from data. Our model is composed of a data-driven inner layer and a physics-based external layer. The inner layer is driven with a volumetric statistical body model (VSMPL). The soft-tissue layer consists of a tetrahedral mesh that is driven using the finite element method (FEM). Model parameters, namely the segmentation of the body into layers and the soft-tissue elasticity, are learned directly from 4D registrations of humans exhibiting soft-tissue deformations. The learned two-layer model is a realistic full-body avatar that generalizes to novel motions and external forces. Experiments show that the resulting avatars produce realistic results on held-out sequences and react to external forces. Moreover, the model supports the retargeting of physical properties from one avatar to another when they share the same topology.
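The two-layer idea can be sketched in a much-simplified form: the inner layer is kinematic (driven by the statistical body model), while the outer soft-tissue layer is simulated with forces pulling it toward the inner layer. The paper uses a tetrahedral FEM; this sketch substitutes damped springs with per-vertex stiffness standing in for the learned elasticity, so it is an illustration of the layering, not the actual simulation.

```python
# Simplified two-layer dynamics: explicit-Euler step of an outer
# soft-tissue layer attached by damped springs to a kinematic inner layer.
import numpy as np

def step_soft_tissue(outer, vel, inner, stiffness, dt=1e-3, damping=2.0):
    """outer, inner : (V, 3) current outer and target inner positions
    vel          : (V, 3) outer-layer velocities
    stiffness    : (V,)  per-vertex elasticity (learned from 4D data)
    """
    force = stiffness[:, None] * (inner - outer) - damping * vel
    vel = vel + dt * force    # unit mass per vertex
    outer = outer + dt * vel
    return outer, vel
```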
Medical Image Computing and Computer Assisted Intervention | 2018
Nikolas Hesse; Sergi Pujades; Javier Romero; Michael J. Black; Christoph Bodensteiner; Michael Arens; Ulrich G. Hofmann; Uta Tacke; Mijna Hadders-Algra; Raphael Weinberger; Wolfgang Müller-Felber; A. Sebastian Schroeder
Infant motion analysis enables early detection of neurodevelopmental disorders like cerebral palsy (CP). Diagnosis, however, is challenging, requiring expert human judgement. An automated solution would be beneficial but requires the accurate capture of 3D full-body movements. To that end, we develop a non-intrusive, low-cost, lightweight acquisition system that captures the shape and motion of infants. Going beyond work on modeling adult body shape, we learn a 3D Skinned Multi-Infant Linear body model (SMIL) from noisy, low-quality, and incomplete RGB-D data. SMIL is publicly available for research purposes at http://s.fhg.de/smil. We demonstrate the capture of shape and motion with 37 infants in a clinical environment. Quantitative experiments show that SMIL faithfully represents the data and properly factorizes the shape and pose of the infants. With a case study based on general movement assessment (GMA), we demonstrate that SMIL captures enough information to allow medical assessment. SMIL provides a new tool and a step towards a fully automatic system for GMA.
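SMIL follows the SMPL family of linear body models: a template mesh is deformed by a low-dimensional shape space and posed with linear blend skinning. The schematic below shows that structure only; array shapes are illustrative, the per-joint transforms are assumed to already be expressed relative to the rest pose, and the released SMIL model defines the actual template, blend shapes, and joint regressor.

```python
# Schematic of a SMPL/SMIL-style linear body model with linear blend skinning.
import numpy as np

def linear_body_model(betas, rel_transforms, template, shape_dirs, weights):
    """betas          : (S,)      shape coefficients
    rel_transforms : (J, 4, 4) per-joint transforms, relative to the rest pose
    template       : (V, 3)    mean (infant) template mesh
    shape_dirs     : (V, 3, S) learned shape blend shapes
    weights        : (V, J)    skinning weights
    """
    v_shaped = template + shape_dirs @ betas  # identity-specific rest mesh
    v_h = np.concatenate([v_shaped, np.ones((len(v_shaped), 1))], axis=1)
    # Linear blend skinning: each vertex gets a weighted mix of joint transforms.
    T = np.einsum('vj,jab->vab', weights, rel_transforms)  # (V, 4, 4)
    return np.einsum('vab,vb->va', T, v_h)[:, :3]          # posed vertices
```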
Frontiers in ICT | 2018
Anne Thaler; Ivelina V. Piryankova; Jeanine K. Stefanucci; Sergi Pujades; Stephan de la Rosa; Stephan Streuber; Javier Romero; Michael J. Black; Betty J. Mohler
The creation or streaming of photo-realistic self-avatars is important for virtual reality applications that aim for perception and action to replicate real-world experience. The appearance and recognition of a digital self-avatar may be especially important for applications related to telepresence, embodied virtual reality, or immersive games. We investigated gender differences in the use of visual cues (shape, texture) of a self-avatar for estimating body weight and evaluating avatar appearance. A full-body scanner was used to capture each participant's body geometry and color information, and a set of 3D virtual avatars with realistic weight variations was created based on a statistical body model. Additionally, a second set of avatars was created with an average underlying body shape matched to each participant's height and weight. In four sets of psychophysical experiments, the influence of visual cues on the accuracy of body weight estimation and the sensitivity to weight changes was assessed by manipulating body shape (own, average) and texture (own photo-realistic, checkerboard). The avatars were presented on a large-screen display, and participants responded to whether the avatar's weight corresponded to their own weight. Participants also adjusted the avatar's weight to their desired weight and evaluated the avatar's appearance with regard to similarity to their own body, uncanniness, and their willingness to accept it as a digital representation of the self. The results of the psychophysical experiments revealed no gender difference in the accuracy of estimating body weight in avatars. However, males accepted a larger weight range of the avatars as corresponding to their own. In terms of the ideal body weight, females but not males desired a thinner body. With regard to the evaluation of avatar appearance, the questionnaire responses suggest that own photo-realistic texture was more important to males for higher similarity ratings, while own body shape seemed to be more important to females. These results argue for gender-specific considerations when creating self-avatars.
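One quantity the abstract reports, the range of avatar weights a participant accepts as their own, could be read off yes/no responses as sketched below. This is a generic analysis sketch under assumed data layout, not the study's actual procedure.

```python
# Estimate the accepted weight range from binary acceptance responses:
# the span of weight offsets where the "yes" rate exceeds a threshold.
import numpy as np

def accepted_range(weight_offsets, responses, threshold=0.5):
    """weight_offsets: (N,) % deviation from true weight shown per trial
    responses     : (N,) 1 if the participant accepted the avatar's weight
    """
    levels = np.unique(weight_offsets)
    p_yes = np.array([responses[weight_offsets == w].mean() for w in levels])
    accepted = levels[p_yes >= threshold]
    return (accepted.min(), accepted.max()) if accepted.size else (np.nan, np.nan)
```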
Reconnaissance de Formes et Intelligence Artificielle (RFIA) 2014 | 2014
Sergi Pujades; Frédéric Devernay
arXiv: Computer Vision and Pattern Recognition | 2018
Nikolas Hesse; Sergi Pujades; Michael J. Black; Michael Arens; Ulrich G. Hofmann; A. Sebastian Schroeder
Journal of Vision | 2018
Anne Thaler; I Bülthoff; Sergi Pujades; Michael J. Black; Betty J. Mohler
Computer Vision and Pattern Recognition | 2017
Sergi Pujades; Frédéric Devernay; Laurent Boiron; Rémi Ronfard
Orasis, Congrès des jeunes chercheurs en vision par ordinateur | 2013
Sergi Pujades; Frédéric Devernay
Collaboration
An overview of Sergi Pujades's collaborations.
French Institute for Research in Computer Science and Automation