Mario Castelán
Instituto Politécnico Nacional
Publications
Featured research published by Mario Castelán.
IEEE Transactions on Image Processing | 2007
Mario Castelán; William A. P. Smith; Edwin R. Hancock
We focus on the problem of developing a coupled statistical model that can be used to recover facial shape from brightness images of faces. We study three alternative representations for facial shape. These are the surface height function, the surface gradient, and a Fourier basis representation. We jointly capture variations in intensity and the surface shape representations using a coupled statistical model. The model is constructed by performing principal components analysis on sets of parameters describing the contents of the intensity images and the facial shape representations. By fitting the coupled model to intensity data, facial shape is implicitly recovered from the shape parameters. Experiments show that the coupled model is able to generate accurate shape from out-of-training-sample intensity images.
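A minimal numpy sketch of the coupled-model idea is given below. It is a simplification that couples raw intensity and shape vectors directly, rather than the per-representation model parameters described above, and the array sizes, the number of retained modes, and the least-squares fitting step are illustrative assumptions.

```python
# Simplified coupled intensity-shape PCA model (illustrative only; sizes and
# the fitting step are assumptions, not the authors' exact formulation).
import numpy as np

rng = np.random.default_rng(0)
n_train, d_int, d_shape = 100, 1024, 1024      # hypothetical sizes

I_train = rng.normal(size=(n_train, d_int))    # vectorised intensity images
S_train = rng.normal(size=(n_train, d_shape))  # vectorised shape representation

# Build the coupled model: PCA on concatenated (intensity, shape) vectors.
X = np.hstack([I_train, S_train])
mu = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 20                                         # retained coupled modes
V = Vt[:k].T                                   # (d_int + d_shape, k)
V_int, V_shape = V[:d_int], V[d_int:]
mu_int, mu_shape = mu[:d_int], mu[d_int:]

def recover_shape(I_new):
    """Fit coupled parameters to the intensity part only, then read off shape."""
    b, *_ = np.linalg.lstsq(V_int, I_new - mu_int, rcond=None)
    return mu_shape + V_shape @ b

S_est = recover_shape(rng.normal(size=d_int))  # out-of-training example
print(S_est.shape)
```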
Computer Vision and Pattern Recognition | 2008
Mario Castelán; J. Van Horebeek
In this paper, we apply partial least squares (PLS) regression to predict 3D face shape from a single image. PLS describes the relationship between independent (intensity images) and dependent (3D shape) variables by seeking directions in the space of the independent variables that are associated with high variations in the dependent variables. We exploit this idea to construct statistical models of intensity and 3D shape that express strongly linked variations in both spaces. The outcome of this decomposition is the construction of two different models which express coupled variations in 3D shape and intensity. Using the intensity model, a set of parameters is obtained from out-of-training intensity examples. These intensity parameters can then be used directly in the 3D shape model to approximate facial shape. Experiments show that prediction is achieved with reasonable accuracy.
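A hedged sketch of the prediction pipeline using scikit-learn's PLSRegression is shown below; the synthetic data, dimensions, and number of latent components are placeholders rather than the paper's actual settings.

```python
# PLS regression from intensity vectors to 3D shape vectors (placeholder data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_train, d_int, d_shape = 120, 2048, 3000      # hypothetical sizes

I_train = rng.normal(size=(n_train, d_int))    # vectorised intensity images
S_train = rng.normal(size=(n_train, d_shape))  # corresponding 3D shape vectors

pls = PLSRegression(n_components=15)           # latent directions linking both spaces
pls.fit(I_train, S_train)

I_new = rng.normal(size=(1, d_int))            # out-of-training intensity image
S_pred = pls.predict(I_new)                    # approximated 3D face shape
print(S_pred.shape)                            # (1, d_shape)
```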
Computer Vision and Image Understanding | 2006
Mario Castelán; Edwin R. Hancock
This paper describes work aimed at developing a practical scheme for face analysis using shape-from-shading. Existing methods have a tendency to recover surfaces in which convex features such as the nose are imploded. This is a result of the fact that subtle changes in the elements of the field of surface normals can cause significant changes in the corresponding integrated surface. To overcome this problem, in this paper, we describe a local shape based method for imposing convexity constraints. We show how to modify the orientations in the surface gradient field using critical points on the surface and local shape indicators. The method is applied to both surface height recovery and face re-illumination. Experiments show that altering the field of surface normals so as to impose convexity results in greatly improved height reconstructions and more realistic re-illuminations.
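The sketch below is only a loose illustration of the convexity idea, not the paper's algorithm: it uses the mean curvature of an initial height estimate as a stand-in local shape indicator and reflects the gradient directions wherever a concavity is flagged. The curvature choice, sign convention, and thresholding are all assumptions.

```python
# Illustrative convexity adjustment of a surface gradient field (not the
# paper's exact method; the shape indicator and sign convention are assumed).
import numpy as np

def impose_convexity(p, q, height):
    """p, q: surface gradients dZ/dx, dZ/dy; height: initial integrated surface."""
    # second derivatives of the current height estimate
    h_xx = np.gradient(np.gradient(height, axis=1), axis=1)
    h_yy = np.gradient(np.gradient(height, axis=0), axis=0)
    mean_curv = 0.5 * (h_xx + h_yy)      # crude local shape indicator
    concave = mean_curv > 0              # assumed sign convention: positive = concave
    p_new, q_new = p.copy(), q.copy()
    p_new[concave] *= -1.0               # reflect gradient directions at flagged pixels
    q_new[concave] *= -1.0
    return p_new, q_new
```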
International Conference on Pattern Recognition | 2006
Mario Castelán; Edwin R. Hancock
We focus on the problem of developing coupled statistical models that can be used to recover surface height from brightness images of faces. Our approach consists of using a simple model that assumes that the height eigenmodes are identical to the intensity eigenmodes. We recover the height function directly from the best-fit intensity parameters. As a result, the computations involve only a straightforward matrix-vector multiplication. Experiments show that this method generates accurate height surfaces from out-of-training intensity images.
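Because the height eigenmodes are assumed identical to the intensity eigenmodes, recovery reduces to a single matrix-vector product. The numpy sketch below illustrates this under placeholder dimensions and synthetic data.

```python
# Shared-eigenmode height recovery sketch (placeholder data and sizes).
import numpy as np

rng = np.random.default_rng(2)
n_train, d = 80, 4096                         # images and height maps share resolution

I_train = rng.normal(size=(n_train, d))       # training intensity images
H_train = rng.normal(size=(n_train, d))       # registered training height maps

mu_I, mu_H = I_train.mean(0), H_train.mean(0)
_, _, Vt = np.linalg.svd(I_train - mu_I, full_matrices=False)
U = Vt[:15].T                                 # intensity eigenmodes, reused for height

def recover_height(I_new):
    b = U.T @ (I_new - mu_I)                  # best-fit intensity parameters
    return mu_H + U @ b                       # same modes applied to height

print(recover_height(rng.normal(size=d)).shape)
```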
Computer Vision and Image Understanding | 2016
Dalila Sánchez-Escobedo; Mario Castelán; William A. P. Smith
Highlights: A novel 3D face estimation method based on a regression matrix and occluding contours. 3D vertices around occluding boundaries and their corresponding 2D pixel projections are highly correlated. The 3D face estimation method resembles dense surface shape recovery from missing data.
This paper addresses the problem of 3D face shape approximation from occluding contours, i.e., the boundaries between the facial region and the background. To this end, a linear regression process that models the relationship between a set of 2D occluding contours and a set of 3D vertices is applied onto the corresponding training sets using Partial Least Squares. The result of this step is a regression matrix which is capable of estimating new 3D face point clouds from the out-of-training 2D Cartesian pixel positions of the selected contours. Our approach benefits from the highly correlated spaces spanned by the 3D vertices around the occluding boundaries of a face and their corresponding 2D pixel projections. As a result, the proposed method resembles dense surface shape recovery from missing data. Our technique is evaluated over four scenarios designed to investigate both the influence of the contours included in the training set and the considered number of contours. Qualitative and quantitative experiments demonstrate that using contours outperforms the state of the art on the database used in this article. Even using a limited number of contours provides a useful approximation to the 3D face surface.
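A brief sketch of the regression-matrix idea, again with scikit-learn's PLSRegression standing in for the training procedure: the fitted coefficient matrix plays the role of the regression matrix, and new 3D point clouds are estimated from out-of-training contour coordinates. All sizes are illustrative assumptions.

```python
# Contours-to-vertices regression sketch (placeholder sizes and synthetic data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
n_train, n_contour_pts, n_vertices = 90, 150, 2500

C_train = rng.normal(size=(n_train, 2 * n_contour_pts))  # flattened (x, y) contour pixels
V_train = rng.normal(size=(n_train, 3 * n_vertices))     # flattened (x, y, z) face vertices

pls = PLSRegression(n_components=12).fit(C_train, V_train)
B = pls.coef_                                    # plays the role of the regression matrix

C_new = rng.normal(size=(1, 2 * n_contour_pts))  # out-of-training contour
V_est = pls.predict(C_new)                       # estimated 3D point cloud
print(B.shape, V_est.shape)
```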
International Symposium on 3D Data Processing, Visualization and Transmission | 2004
Mario Castelán; Edwin R. Hancock
We explore how to improve the quality of the height map recovered from faces using shape-from-shading. One of the problems with reliable face surface reconstruction using shape-from-shading is that local errors in the needle map can cause implosion of facial features, and in particular the nose. To overcome this problem, in this paper we develop a method for ensuring surface convexity. This is done by modifying the gradient orientations in accordance with critical points on the surface. We utilize a local shape indicator as a criterion to decide which surface normals are to be modified. Experiments show that altering the directions of the surface normal field of a face leads to a considerable improvement in its integrated height map.
International Symposium on Visual Computing | 2009
Mario Castelán; Gustavo A. Puerto-Souza; Johan Van Horebeek
In this paper, we compare four different Subspace Multiple Linear Regression methods for 3D face shape prediction from a single 2D intensity image. This problem is situated within the low observation-to-variable ratio context, where the sample covariance matrix is likely to be singular. Lately, efforts have been directed towards latent-variable based methods to estimate a regression operator while maximizing specific criteria between 2D and 3D face subspaces. Regularization methods, on the other hand, impose a regularizing term on the covariance matrix in order to ensure numerical stability and to improve the out-of-training error. We compare the performance of three latent-variable based and one regularization approach, namely, Principal Component Regression, Partial Least Squares, Canonical Correlation Analysis and Ridge Regression. We analyze the influence of the different latent variables as well as the regularizing parameters in the regression process. Similarly, we identify the strengths and weaknesses of both regularization and latent-variable approaches for the task of 3D face prediction.
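The toy comparison below contrasts one regularization approach (Ridge Regression) with one latent-variable approach (PLS) on synthetic data with a low observation-to-variable ratio; hyperparameters and data are arbitrary placeholders, and PCR/CCA are omitted for brevity.

```python
# Toy Ridge vs. PLS comparison in a low observation-to-variable regime.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(4)
n, d_x, d_y = 60, 500, 800                     # fewer samples than variables
W = rng.normal(size=(d_x, d_y))                # synthetic linear relationship
X = rng.normal(size=(n, d_x))
Y = X @ W + 0.1 * rng.normal(size=(n, d_y))
X_test = rng.normal(size=(20, d_x))
Y_test = X_test @ W

for name, model in [("PLS", PLSRegression(n_components=10)),
                    ("Ridge", Ridge(alpha=1.0))]:
    model.fit(X, Y)
    err = mean_squared_error(Y_test, model.predict(X_test))
    print(f"{name}: test MSE = {err:.3f}")
```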
Pattern Recognition Letters | 2013
Dalila Sánchez-Escobedo; Mario Castelán
This paper addresses the problem of linearly approximating 3D shape from intensities in the context of facial analysis. In other words, given a frontal-pose grayscale input face, the direct estimation of its 3D structure is sought through a regression matrix. Approaches falling into this category generally assume that both 2D and 3D features are defined under Cartesian schemes, which is not optimal for the task of novel view synthesis. The current article aims to overcome this issue by exploiting the 3D structure of faces through cylindrical coordinates, aided by partial least squares regression. In the context of facial shape analysis, partial least squares builds a set of basis faces, for both the grayscale and 3D shape spaces, seeking to maximize the shared covariance between projections of the data along the basis faces. Experimental tests show that the cylindrical representations are suitable for the purposes of linear regression, resulting in a benefit for the generation of novel facial views and showing a potential use in model-based face identification.
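A small sketch of the cylindrical re-parameterization that precedes the regression step: Cartesian face vertices are expressed as (radius, azimuth, height) about a vertical axis. The choice of y as the vertical face axis is an assumption for illustration.

```python
# Cartesian <-> cylindrical conversion for face vertices (y assumed vertical).
import numpy as np

def cartesian_to_cylindrical(vertices):
    """vertices: (N, 3) array of (x, y, z) face points."""
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    r = np.hypot(x, z)                 # radial distance from the vertical axis
    theta = np.arctan2(x, z)           # azimuth around the face
    return np.column_stack([r, theta, y])

def cylindrical_to_cartesian(cyl):
    r, theta, y = cyl[:, 0], cyl[:, 1], cyl[:, 2]
    return np.column_stack([r * np.sin(theta), y, r * np.cos(theta)])
```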
Journal of Applied Research and Technology | 2013
Ismael Lopez-Juarez; Mario Castelán; F.J. Castro-Martínez; Mario Peña-Cabrera; R. Osorio-Comparan
Robot vision systems can differentiate parts by pattern matching irrespective of part orientation and location. Some manufacturers offer 3D guidance systems using robust vision and laser systems so that a 3D programmed point can be repeated even if the part is moved varying its location, rotation and orientation within the working space. Despite these developments, current industrial robots are still unable to recognize objects in a robust manner; that is, to distinguish an object among equally shaped objects taking into account not only the object's contour but also its form and depth information, which is precisely the major contribution of this research. Our hypothesis establishes that it is possible to integrate a robust invariant object recognition capability into industrial robots by using image features from the object's contour (boundary object information), its form (i.e., type of curvature or topographical surface information) and depth information (from stereo disparity maps). These features can be concatenated in order to form an invariant vector descriptor which is the input to an artificial neural network (ANN) for learning and recognition purposes. In this paper we present the recognition results under different working conditions using a KUKA KR16 industrial robot, which validated our approach.
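A hedged sketch of the descriptor-plus-ANN pipeline: contour, form and depth features are concatenated into one vector and fed to a small neural-network classifier. Feature extraction itself is omitted, and the array sizes, class count and network settings are placeholders.

```python
# Concatenated invariant descriptor fed to a small ANN (placeholder features).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
n_objects, n_samples = 5, 200
contour_f = rng.normal(size=(n_samples, 16))   # boundary (contour) descriptor
form_f    = rng.normal(size=(n_samples, 8))    # curvature / topographic descriptor
depth_f   = rng.normal(size=(n_samples, 8))    # stereo-disparity descriptor

X = np.hstack([contour_f, form_f, depth_f])    # concatenated invariant vector
y = rng.integers(0, n_objects, size=n_samples) # object labels

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
print(clf.predict(X[:3]))
```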
IEEE-RAS International Conference on Humanoid Robots | 2009
Mario Castelán; Gustavo Arechavaleta
In this work, we aim to exhibit the geometrical shape primitives of human walking trajectories using a statistical model constructed through Principal Component Analysis. This analysis provides sufficient information to derive a linear human-like path generator based on examples. The examples are provided by a motion capture database of human walking trajectories. The proposed model captures the shape of trajectories in terms of path length and deformation. We have successfully applied our model to compute a good approximation of the reachable space of human walking. This can be done with a negligible computational cost since it is based on a linear combination of basis human paths.
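A minimal sketch of the example-based path generator: PCA over flattened walking trajectories, with new paths produced as the mean path plus a linear combination of the leading eigen-paths. The trajectory data below are synthetic placeholders.

```python
# PCA-based linear path generator over walking trajectories (synthetic data).
import numpy as np

rng = np.random.default_rng(6)
n_paths, n_samples_per_path = 50, 100
paths = rng.normal(size=(n_paths, 2 * n_samples_per_path))  # flattened (x, y) paths

mu = paths.mean(axis=0)
_, s, Vt = np.linalg.svd(paths - mu, full_matrices=False)
k = 5
basis = Vt[:k]                                  # leading eigen-paths
scales = s[:k] / np.sqrt(n_paths - 1)           # per-mode standard deviations

def generate_path(coeffs):
    """coeffs: k weights in units of standard deviation along each eigen-path."""
    flat = mu + (np.asarray(coeffs) * scales) @ basis
    return flat.reshape(-1, 2)                  # back to (x, y) samples

new_path = generate_path([1.0, -0.5, 0.0, 0.0, 0.0])
print(new_path.shape)
```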