Publications

Featured research published by Edilson de Aguiar.


International Conference on Computer Graphics and Interactive Techniques | 2008

Performance capture from sparse multi-view video

Edilson de Aguiar; Carsten Stoll; Christian Theobalt; Naveed Ahmed; Hans-Peter Seidel; Sebastian Thrun

This paper proposes a new marker-less approach to capturing human performances from multi-view video. Our algorithm can jointly reconstruct spatio-temporally coherent geometry, motion and textural surface appearance of actors that perform complex and rapid moves. Furthermore, since our algorithm is purely mesh-based and makes as few prior assumptions as possible about the type of subject being tracked, it can even capture performances of people wearing wide apparel, such as a dancer wearing a skirt. To serve this purpose our method efficiently and effectively combines the power of surface- and volume-based shape deformation techniques with a new mesh-based analysis-through-synthesis framework. This framework extracts motion constraints from video and makes the laser scan of the tracked subject mimic the recorded performance. In addition, small-scale time-varying shape detail is recovered by applying model-guided multi-view stereo to refine the model surface. Our method delivers captured performance data at a high level of detail, is highly versatile, and is applicable to many complex types of scenes that could not be handled by alternative marker-based or marker-free recording techniques.


Computer Vision and Pattern Recognition | 2009

Motion capture using joint skeleton tracking and surface estimation

Juergen Gall; Carsten Stoll; Edilson de Aguiar; Christian Theobalt; Bodo Rosenhahn; Hans-Peter Seidel

This paper proposes a method for capturing the performance of a human or an animal from a multi-view video sequence. Given an articulated template model and silhouettes from a multi-view image sequence, our approach recovers not only the movement of the skeleton, but also the possibly non-rigid temporal deformation of the 3D surface. While large-scale deformations or fast movements are captured by the skeleton pose and approximate surface skinning, small-scale deformations or non-rigid garment motion are captured by fitting the surface to the silhouette. We further propose a novel optimization scheme for skeleton-based pose estimation that exploits the skeleton's tree structure to split the optimization problem into a local one and a lower-dimensional global one. We show on various sequences that our approach can capture the 3D motion of animals and humans accurately even in the case of rapid movements and wide apparel like skirts.


Computer Graphics Forum | 2008

Automatic Conversion of Mesh Animations into Skeleton-based Animations

Edilson de Aguiar; Christian Theobalt; Sebastian Thrun; Hans-Peter Seidel

Recently, it has become increasingly popular to represent animations not by means of a classical skeleton-based model, but in the form of deforming mesh sequences. The reason for this new trend is that novel mesh deformation methods as well as new surface-based scene capture techniques offer a great level of flexibility during animation creation. Unfortunately, the resulting scene representation is less compact than skeletal ones and there is not yet a rich toolbox available which enables easy post-processing and modification of mesh animations. To bridge this gap between the mesh-based and the skeletal paradigm, we propose a new method that automatically extracts a plausible kinematic skeleton, skeletal motion parameters, as well as surface skinning weights from arbitrary mesh animations. By this means, deforming mesh sequences can be fully automatically transformed into fully-rigged virtual subjects. The original input can then be quickly rendered based on the new compact bone and skin representation, and it can be easily modified using the full repertoire of already existing animation tools.
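The compact bone-and-skin representation this conversion targets is typically evaluated with linear blend skinning, where each deformed vertex is a weight-blended combination of per-bone rigid transforms. The following is a minimal sketch of that evaluation step, not the paper's implementation; the function name and array layout are illustrative:

```python
import numpy as np

def linear_blend_skinning(rest_verts, bone_transforms, weights):
    """Deform rest-pose vertices with linear blend skinning.

    rest_verts:      (V, 3) rest-pose vertex positions
    bone_transforms: (B, 4, 4) per-bone rigid transforms
    weights:         (V, B) skinning weights; each row sums to 1
    """
    V = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((V, 1))])           # (V, 4) homogeneous
    # Position of every vertex under every bone transform: (B, V, 4)
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, homo)
    # Blend per-bone positions by the skinning weights: v' = sum_b w_vb * T_b v
    blended = np.einsum('vb,bvi->vi', weights, per_bone)
    return blended[:, :3]
```

A vertex fully bound to an identity bone stays at its rest position, while a vertex bound to a translated bone follows that bone's transform.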


International Conference on Computer Graphics and Interactive Techniques | 2010

Video-based reconstruction of animatable human characters

Carsten Stoll; Juergen Gall; Edilson de Aguiar; Sebastian Thrun; Christian Theobalt

We present a new performance capture approach that incorporates a physically-based cloth model to reconstruct a rigged fully-animatable virtual double of a real person in loose apparel from multi-view video recordings. Our algorithm only requires a minimum of manual interaction. Without the use of optical markers in the scene, our algorithm first reconstructs skeleton motion and detailed time-varying surface geometry of a real person from a reference video sequence. These captured reference performance data are then analyzed to automatically identify non-rigidly deforming pieces of apparel on the animated geometry. For each piece of apparel, parameters of a physically-based real-time cloth simulation model are estimated, and surface geometry of occluded body regions is approximated. The reconstructed character model comprises a skeleton-based representation for the actual body parts and a physically-based simulation model for the apparel. In contrast to previous performance capture methods, we can now also create new real-time animations of actors captured in general apparel.


Pattern Recognition | 2017

Facial expression recognition with Convolutional Neural Networks: Coping with few data and the training sample order

Andre Teixeira Lopes; Edilson de Aguiar; Alberto F. De Souza; Thiago Oliveira-Santos

Facial expression recognition has been an active research area in the past 10 years, with growing application areas including avatar animation, neuromarketing and sociable robots. The recognition of facial expressions is not an easy problem for machine learning methods, since people can vary significantly in the way they show their expressions. Even images of the same person in the same facial expression can vary in brightness, background and pose, and these variations are emphasized if considering different subjects (because of variations in shape, ethnicity, among others). Although facial expression recognition is widely studied in the literature, few works perform a fair evaluation that avoids mixing subjects between training and testing of the proposed algorithms. Hence, facial expression recognition is still a challenging problem in computer vision. In this work, we propose a simple solution for facial expression recognition that uses a combination of a Convolutional Neural Network and specific image pre-processing steps. Convolutional Neural Networks achieve better accuracy with large amounts of data. However, there are no publicly available datasets with sufficient data for facial expression recognition with deep architectures. Therefore, to tackle the problem, we apply some pre-processing techniques to extract only expression-specific features from a face image and explore the presentation order of the samples during training. The experiments employed to evaluate our technique were carried out using three widely used public databases (CK+, JAFFE and BU-3DFE). A study of the impact of each image pre-processing operation on the accuracy rate is presented. The proposed method achieves competitive results when compared with other facial expression recognition methods (96.76% accuracy on the CK+ database), is fast to train, and allows for real-time facial expression recognition with standard computers.
Highlights:
- A CNN-based approach for facial expression recognition.
- A set of pre-processing steps allowing for a simpler CNN architecture.
- A study of the impact of each pre-processing step on accuracy.
- A study on lowering the impact of the sample presentation order during training.
- High facial expression recognition accuracy (96.76%) with real-time evaluation.
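The building blocks such a network stacks (convolution, non-linearity, pooling) can be sketched in a few lines. This toy forward pass is illustrative only; it does not reproduce the paper's architecture, weights, or pre-processing pipeline:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation of a single-channel image with one kernel."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear unit, the usual non-linearity between layers."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Downsample a feature map by taking the max over size x size blocks."""
    H2, W2 = x.shape[0] // size, x.shape[1] // size
    return x[:H2 * size, :W2 * size].reshape(H2, size, W2, size).max(axis=(1, 3))
```

Running a vertical-edge kernel over a toy image, then ReLU and pooling, yields a small feature map that responds only where the edge is, which is the kind of local, expression-specific feature a trained CNN learns automatically.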


Virtual Reality Software and Technology | 2005

Automatic generation of personalized human avatars from multi-view video

Naveed Ahmed; Edilson de Aguiar; Christian Theobalt; Marcus A. Magnor; Hans-Peter Seidel

In multi-user virtual environments real-world people interact via digital avatars. In order to make the step from the real world onto the virtual stage convincing, the digital equivalent of the user has to be personalized. It should reflect the shape and proportions, the kinematic properties, as well as the textural appearance of its real-world equivalent. In this paper, we present a novel spatio-temporal approach to create a personalized avatar from multi-view video data of a moving person. The avatar's geometry is generated by shape-adapting a template human body model. A consistent surface texture for the model is assembled from multi-view video frames showing different camera views and arbitrary body poses. With our proposed method, photo-realistic human avatars can be robustly generated.


Workshop on Human Motion | 2007

Marker-less 3D feature tracking for mesh-based human motion capture

Edilson de Aguiar; Christian Theobalt; Carsten Stoll; Hans-Peter Seidel

We present a novel algorithm that robustly tracks 3D trajectories of features on a moving human who has been recorded with multiple video cameras. Our method does so without special markers in the scene and can be used to track subjects wearing everyday apparel. By using the paths of the 3D points as constraints in a fast mesh deformation approach, we can directly animate a static human body scan such that it performs the same motion as the captured subject. Our method can therefore be used to directly animate high quality geometry models from unaltered video data which opens the door to new applications in motion capture, 3D Video and computer animation. Since our method does not require a kinematic skeleton and only employs a handful of feature trajectories to generate lifelike animations with realistic surface deformations, it can also be used to track subjects wearing wide apparel, and even animals. We demonstrate the performance of our approach using several captured real-world sequences, and also validate its accuracy.
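Using a handful of 3D point trajectories as constraints on a mesh deformation can be sketched with a uniform-Laplacian least-squares formulation: the solver keeps each vertex's differential coordinates close to the rest pose while softly pinning constrained vertices to their tracked targets. This is a simplified stand-in for the paper's fast deformation approach; the function and its parameters are illustrative:

```python
import numpy as np

def deform(verts, edges, handles, w=100.0):
    """Deform a mesh so handle vertices reach targets while the uniform
    Laplacian (local shape) is preserved in a least-squares sense.

    verts:   (V, 3) rest-pose vertex positions
    edges:   list of (i, j) undirected mesh edges
    handles: dict {vertex index: (3,) target position}
    w:       soft-constraint weight for the handles
    """
    V = len(verts)
    L = np.zeros((V, V))                    # uniform graph Laplacian
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    delta = L @ verts                       # rest-pose differential coordinates
    A, b = [L], [delta]
    for idx, target in handles.items():     # soft positional constraints
        row = np.zeros((1, V)); row[0, idx] = w
        A.append(row); b.append(w * np.asarray(target)[None, :])
    x, *_ = np.linalg.lstsq(np.vstack(A), np.vstack(b), rcond=None)
    return x
```

Translating all handles by the same offset reproduces a rigid translation of the whole mesh exactly, since the Laplacian term is invariant to translation.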


Virtual Reality Software and Technology | 2004

Marker-free kinematic skeleton estimation from sequences of volume data

Christian Theobalt; Edilson de Aguiar; Marcus A. Magnor; Holger Theisel; Hans-Peter Seidel

For realistic animation of an artificial character, a body model that represents the character's kinematic structure is required. Hierarchical skeleton models are widely used, which represent bodies as chains of bones with interconnecting joints. In video motion capture, animation parameters are derived from the performance of a subject in the real world. For this acquisition procedure, too, a kinematic body model is required. Typically, the generation of such a model for tracking and animation is, at best, a semi-automatic process. We present a novel approach that estimates a hierarchical skeleton model of an arbitrary moving subject from sequences of voxel data that were reconstructed from multi-view video footage. Our method does not require a priori information about the body structure. We demonstrate its performance using synthetic and real data.


Expert Systems with Applications | 2016

Large-scale mapping in complex field scenarios using an autonomous car

Filipe Wall Mutz; Lucas de Paula Veronese; Thiago Oliveira-Santos; Edilson de Aguiar; Fernando Auat Cheein; Alberto F. De Souza

Highlights:
- A mapping system for large-scale environments with changing features.
- A detailed description of a mapping algorithm for 3D LiDAR.
- GICP used for loop-closure displacement calculation in GraphSLAM.
- Experiments with an autonomous vehicle in three real-world environments.

In this paper, we present an end-to-end framework for precise large-scale mapping with applications in autonomous driving. In particular, the problem of mapping complex environments, with features changing from tree-lined streets to urban areas with dense traffic, is studied. The robotic car is equipped with an odometry sensor, a Velodyne HDL-32E 3D LiDAR, an IMU, and a low-cost GPS, and the data generated by these sensors are integrated in a pose-based GraphSLAM estimator. A new strategy for identification and correction of odometry data using evolutionary algorithms is presented; it makes the odometry data significantly more consistent with GPS. Loop closures are detected using GPS data, and GICP, a 3D point-cloud registration algorithm, is used to estimate the displacement between the different travels over the same region. After path estimation, 3D LiDAR data are used to build an occupancy grid map of the environment. A detailed mathematical description of how occupancy evidence can be calculated from the point clouds is given, and a submapping strategy to handle memory limitations is presented as well. The proposed framework is tested in three real-world environments of different sizes and features: a parking lot, a university beltway, and a city neighborhood. In all cases, satisfactory maps were built, with precise loop closures even when the vehicle traveled long distances between them.
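The occupancy-evidence step the abstract mentions is commonly implemented as a log-odds update along each LiDAR ray: cells the ray passes through accumulate "free" evidence, and the cell containing the return accumulates "occupied" evidence. The following is a minimal 2D sketch under that standard formulation; the constants and helper names are illustrative, and the paper works with 3D point clouds and submaps:

```python
import math

L_OCC, L_FREE, L0 = 0.85, -0.4, 0.0   # log-odds increments and unknown prior

def bresenham(x0, y0, x1, y1):
    """Grid cells on the ray from (x0, y0) to (x1, y1), endpoint excluded."""
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy
    x, y = x0, y0
    while (x, y) != (x1, y1):
        cells.append((x, y))
        e2 = 2 * err
        if e2 > -dy: err -= dy; x += sx
        if e2 < dx:  err += dx; y += sy
    return cells

def update_grid(grid, sensor_cell, hit_cell):
    """Accumulate log-odds evidence: traversed cells become more likely
    free, the cell with the LiDAR return more likely occupied."""
    for c in bresenham(*sensor_cell, *hit_cell):
        grid[c] = grid.get(c, L0) + L_FREE
    grid[hit_cell] = grid.get(hit_cell, L0) + L_OCC

def occupancy_prob(grid, cell):
    """Convert accumulated log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(grid.get(cell, L0)))
```

Storing log-odds rather than probabilities makes each update a single addition, which matters when integrating millions of LiDAR returns.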


International Symposium on Visual Computing | 2006

Automatic learning of articulated skeletons from 3d marker trajectories

Edilson de Aguiar; Christian Theobalt; Hans-Peter Seidel

We present a novel fully-automatic approach for estimating an articulated skeleton of a moving subject and its motion from body marker trajectories that have been measured with an optical motion capture system. Our method does not require a priori information about the shape and proportions of the tracked subject, can be applied to arbitrary motion sequences, and renders dedicated initialization poses unnecessary. To serve this purpose, our algorithm first identifies individual rigid bodies by means of a variant of spectral clustering. Thereafter, it determines joint positions at each time step of motion through numerical optimization, reconstructs the skeleton topology, and finally enforces fixed bone length constraints. Through experiments, we demonstrate the robustness and efficiency of our algorithm and show that it outperforms related methods from the literature in terms of accuracy and speed.
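The first step, identifying rigid bodies from marker trajectories, rests on the observation that distances between markers on the same bone stay nearly constant over time. The sketch below uses that cue with connected components on a low-variance graph, a simplification of the spectral-clustering variant the paper describes; the function and its tolerance are illustrative:

```python
import numpy as np

def rigid_body_clusters(trajs, tol=1e-3):
    """Group markers whose pairwise distances stay (nearly) constant.

    trajs: (T, M, 3) positions of M markers over T frames.
    Returns a list of marker-index sets, one per rigid body.
    """
    T, M, _ = trajs.shape
    # Pairwise marker distances per frame, then their std. dev. over time
    d = np.linalg.norm(trajs[:, :, None, :] - trajs[:, None, :, :], axis=-1)
    rigid = d.std(axis=0) < tol             # (M, M) low-variance adjacency
    # Union-find to merge markers connected by near-constant distances
    parent = list(range(M))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]; a = parent[a]
        return a
    for i in range(M):
        for j in range(i + 1, M):
            if rigid[i, j]:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(M):
        groups.setdefault(find(i), set()).add(i)
    return list(groups.values())
```

Two markers translating together are grouped, while a marker moving independently forms its own cluster.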

Collaboration

Top co-authors of Edilson de Aguiar:

- Thiago Oliveira-Santos (Universidade Federal do Espírito Santo)
- Alberto F. De Souza (Universidade Federal do Espírito Santo)
- Lucas de Paula Veronese (Universidade Federal do Espírito Santo)
- Holger Theisel (Otto-von-Guericke University Magdeburg)