Athinodoros S. Georghiades
Yale University
Publications
Featured research published by Athinodoros S. Georghiades.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2001
Athinodoros S. Georghiades; Peter N. Belhumeur; David J. Kriegman
We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone. Test results show that the method performs almost without error, except on the most extreme lighting directions.
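To make the final recognition step concrete, here is a minimal sketch (not the authors' code) of assigning a test image the identity of the closest approximated illumination cone. The cone_bases structure and the use of non-negative least squares to stay within the cone are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import nnls  # non-negative least squares

def recognize(test_image, cone_bases):
    """Assign the identity whose approximated illumination cone is
    closest to the test image. Hedged sketch, not the authors' code.

    cone_bases: dict mapping identity -> (d, k) array whose columns
    are basis images of the low-dimensional subspace approximating
    that identity's illumination cone for one sampled pose.
    """
    x = test_image.ravel().astype(float)
    best_id, best_err = None, np.inf
    for identity, B in cone_bases.items():
        # Non-negative combinations keep us inside the cone rather
        # than the full linear subspace spanned by B.
        coeffs, _ = nnls(B, x)
        err = np.linalg.norm(B @ coeffs - x)
        if err < best_err:
            best_id, best_err = identity, err
    return best_id
```

In practice one would loop this over all sampled poses as well and take the overall minimum residual.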
IEEE International Conference on Automatic Face and Gesture Recognition | 2000
Athinodoros S. Georghiades; Peter N. Belhumeur; David J. Kriegman
Image variability due to changes in pose and illumination can seriously impair object recognition. This paper presents appearance-based methods which, unlike previous appearance-based approaches, require only a small set of training images to generate a rich representation that models this variability. Specifically, from as few as three images of an object in fixed pose seen under slightly varying but unknown lighting, a surface and an albedo map are reconstructed. These are then used to generate synthetic images with large variations in pose and illumination and thus build a representation useful for object recognition. Our methods have been tested within the domain of face recognition on a subset of the Yale Face Database B containing 4050 images of 10 faces seen under variable pose and illumination. This database was specifically gathered for testing these generative methods. Their performance is shown to exceed that of popular existing methods.
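The reconstruction step can be illustrated with a rank-3 factorization of the image stack, a standard device for fixed-pose images under unknown distant lighting. This is a minimal sketch up to a linear ambiguity; the published method additionally handles shadows and resolves the ambiguity with integrability constraints:

```python
import numpy as np

def factor_images(images):
    """Recover albedo-scaled surface normals and light directions from
    images of a fixed-pose Lambertian object under unknown lighting.
    Minimal SVD sketch: images ~= L @ B, valid only up to a 3x3
    linear ambiguity and ignoring shadowed pixels.

    images: (n_images, n_pixels) array, one flattened image per row.
    """
    U, S, Vt = np.linalg.svd(images, full_matrices=False)
    L = U[:, :3] * np.sqrt(S[:3])           # (n_images, 3) light directions
    B = np.sqrt(S[:3])[:, None] * Vt[:3]    # (3, n_pixels) scaled normals
    return L, B.T                           # rows of B.T: per-pixel normals
```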
Computer Vision and Pattern Recognition | 1998
Athinodoros S. Georghiades; David J. Kriegman; Peter N. Belhumeur
Due to illumination variability, the same object can appear dramatically different even when viewed in fixed pose. To handle this variability, an object recognition system must employ a representation that is either invariant to, or models, this variability. This paper presents an appearance-based method for modeling the variability due to illumination in the images of objects. The method differs from past appearance-based methods, however, in that a small set of training images is used to generate a representation, the illumination cone, which models the complete set of images of an object with Lambertian reflectance map under an arbitrary combination of point light sources at infinity. This method is both an implementation and an extension (an extension in that it models cast shadows) of the illumination cone representation proposed in Belhumeur and Kriegman (1996). The method is tested on a database of 660 images of 10 faces, and the results exceed those of popular existing methods.
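The cone itself is easy to sample once the albedo-scaled normals are known: every single-source image is a clamped linear function of the light direction, and non-negative combinations of such images stay in the cone. A minimal sketch, ignoring the cast shadows that the paper handles by ray tracing the recovered surface:

```python
import numpy as np

def cone_samples(B, light_dirs):
    """Sample the illumination cone of a convex Lambertian object.

    B: (n_pixels, 3) albedo-scaled surface normals.
    light_dirs: (m, 3) unit light directions at infinity.
    Returns an (n_pixels, m) array of images; the max models
    attached shadows only (cast shadows are ignored here).
    """
    return np.maximum(0.0, B @ light_dirs.T)
```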
Eurographics | 2003
Athinodoros S. Georghiades
There are computer graphics applications for which the shape and reflectance of complex objects, such as faces, cannot be obtained using specialized equipment due to cost and practical considerations. We present an image-based technique that uses only a small number of example images, and assumes a parametric model of reflectance, to simultaneously and reliably recover the Bidirectional Reflectance Distribution Function (BRDF) and the 3-D shape of non-Lambertian objects. No information about the position and intensity of the light sources or the position of the camera is required. We successfully apply this approach to human faces, accurately recovering their 3-D shape and BRDF. We use the recovered information to efficiently and accurately render photorealistic images of the faces under novel illumination conditions, in which the rendered image intensity closely matches the intensity in real images. The accuracy of our technique is further demonstrated by the close resemblance of the skin BRDF recovered using our method to one measured in the literature with a method that used a 3-D scanner.
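The joint fit can be sketched as per-pixel nonlinear least squares over shape and reflectance parameters. The sketch below uses a Lambertian-plus-Phong term as a stand-in for the paper's parametric BRDF, and it assumes known lights and view, whereas the method estimates those too; it is illustrative, not the published algorithm:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_pixel(intensities, lights, view, n0):
    """Fit a per-pixel normal, diffuse albedo, and specular parameters
    by minimizing rendered-vs-observed intensity error.

    intensities: (m,) observed values; lights: (m, 3) unit directions;
    view: (3,) unit direction; n0: (3,) initial normal guess.
    """
    def render(p):
        n = p[:3] / np.linalg.norm(p[:3])
        rho_d, rho_s, alpha = p[3], p[4], p[5]
        diff = np.maximum(0.0, lights @ n)          # Lambertian term
        h = lights + view                           # halfway vectors
        h /= np.linalg.norm(h, axis=1, keepdims=True)
        spec = np.maximum(0.0, h @ n) ** alpha      # Phong-like lobe
        return rho_d * diff + rho_s * spec

    p0 = np.concatenate([n0, [0.5, 0.1, 20.0]])     # assumed initial guess
    res = least_squares(lambda p: render(p) - intensities, p0)
    return res.x
```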
ACM Transactions on Graphics | 2007
Jianye Lu; Athinodoros S. Georghiades; Andreas Glaser; Hongzhi Wu; Li-Yi Wei; Baining Guo; Julie Dorsey; Holly E. Rushmeier
Interesting textures form on the surfaces of objects as the result of external chemical, mechanical, and biological agents. Simulating these textures is necessary to generate models for realistic image synthesis. The textures formed are progressively variant, with the variations depending on the global and local geometric context. We present a method for capturing progressively varying textures and the relevant context parameters that control them. By relating textures and context parameters, we are able to transfer the textures to novel synthetic objects. We present examples of capturing chemical effects, such as rusting; mechanical effects, such as paint cracking; and biological effects, such as the growth of mold on a surface. We demonstrate a user interface that provides a method for specifying where an object is exposed to external agents. We show the results of complex, geometry-dependent textures evolving on synthetic objects.
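The transfer idea, stripped to its essence, is a lookup from context parameters into captured texture samples. A minimal sketch, assuming each surface point is described by a context vector (e.g., exposure to an agent plus local geometry measures); the paper's pipeline performs proper texture synthesis rather than this nearest-sample lookup:

```python
import numpy as np

def transfer_texture(target_contexts, sample_contexts, sample_patches):
    """Assign each point on a synthetic object the captured texture
    sample whose context parameters are closest to the point's own.

    target_contexts: (n, c) context vectors on the synthetic object.
    sample_contexts: (m, c) context vectors of captured samples.
    sample_patches:  (m, ...) the corresponding texture samples.
    """
    # Distance in context space between every target point and sample.
    d = np.linalg.norm(target_contexts[:, None] - sample_contexts[None],
                       axis=-1)
    return sample_patches[np.argmin(d, axis=1)]
```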
Proceedings IEEE Workshop on Multi-View Modeling and Analysis of Visual Scenes (MVIEW'99) | 1999
Athinodoros S. Georghiades; Peter N. Belhumeur; David J. Kriegman
We present an illumination-based method for synthesizing images of an object under novel viewing conditions. Our method requires as few as three images of the object taken under variable illumination, but from a fixed viewpoint. Unlike multi-view based image synthesis, our method does not require the determination of point or line correspondences. Furthermore, our method is able to synthesize not simply novel viewpoints, but novel illumination conditions as well. We demonstrate the effectiveness of our approach by generating synthetic images of human faces.
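Once shape and albedo are recovered, rendering a novel view reduces to re-posing and re-shading the reconstruction. A minimal sketch with orthographic projection and Lambertian shading; it omits the visibility testing and cast shadows a full implementation needs, and all parameter names are illustrative:

```python
import numpy as np

def render_novel_view(points, albedo, normals, R, t, light, size):
    """Render a reconstructed surface (3-D points, per-point albedo
    and normals) from a novel viewpoint under a novel distant light.

    R: (3, 3) rotation; t: (3,) translation; light: (3,) unit
    direction; size: (H, W) output image size.
    """
    p = points @ R.T + t                              # rigid pose change
    shade = albedo * np.maximum(0.0, (normals @ R.T) @ light)
    img = np.zeros(size)
    u = np.clip(p[:, 0].astype(int), 0, size[1] - 1)
    v = np.clip(p[:, 1].astype(int), 0, size[0] - 1)
    img[v, u] = shade                                 # splat points, no z-buffer
    return img
```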
International Symposium on 3D Data Processing, Visualization and Transmission | 2006
Songhua Xu; Athinodoros S. Georghiades; Holly E. Rushmeier; Julie Dorsey; Leonard McMillan
We introduce a new method for filling holes in geometry obtained from 3D range scanners. Our method makes use of 2D images of the areas where geometric data is missing. The 2D images guide the filling using the relationship between the images and geometry learned from the existing 3D scanned data. Our method builds on existing techniques for using scanned geometry and for estimating shape from shaded images. Rather than creating plausibly filled holes, we attempt to approximate the missing geometry. We present results for scanned data from both triangulation and time-of-flight scanners for various types of materials. To quantitatively validate our proposed method, we also compare the filled areas with ground-truth data.
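The learned image-to-geometry relationship can be illustrated with a simple regression: fit a mapping from image patches to depth where both are available, then predict depth inside the holes. This is a hedged linear-regression sketch of the idea only; the paper couples the images with shape-from-shading style constraints rather than direct regression:

```python
import numpy as np

def fill_holes(depth, image, hole_mask, patch=5):
    """Fill missing depth values using a co-registered 2-D image.

    depth, image, hole_mask: (H, W) arrays; hole_mask is True where
    depth is missing. Learns a linear map from image patches to depth
    on the valid region and applies it inside the holes.
    """
    r = patch // 2
    feats, targets, hole_feats, hole_idx = [], [], [], []
    H, W = depth.shape
    for y in range(r, H - r):
        for x in range(r, W - r):
            f = image[y - r:y + r + 1, x - r:x + r + 1].ravel()
            if hole_mask[y, x]:
                hole_feats.append(f); hole_idx.append((y, x))
            else:
                feats.append(f); targets.append(depth[y, x])
    A = np.column_stack([np.array(feats), np.ones(len(feats))])
    w, *_ = np.linalg.lstsq(A, np.array(targets), rcond=None)
    filled = depth.copy()
    for (y, x), f in zip(hole_idx, hole_feats):
        filled[y, x] = np.append(f, 1.0) @ w
    return filled
```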
International Symposium on 3D Data Processing, Visualization and Transmission | 2006
Chen Xu; Athinodoros S. Georghiades; Holly E. Rushmeier; Julie Dorsey
We consider the problem of creating integrated texture maps of large structures scanned with a time-of-flight laser scanner and imaged with a digital camera. The key issue in creating integrated textures is correcting for the spatially varying illumination across the structure. In most cases, the illumination cannot be controlled, and dense spatial estimates of illumination are not possible. We present a system that processes multiple color images into an integrated texture: it uses the laser scanner return intensity and the captured geometry, applies color balancing, and maps illumination-corrected images onto the target geometry after filtering them into two spatial frequency bands.
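The two-band correction can be sketched as follows: keep the photograph's high-frequency detail but replace its low-frequency (illumination) band with one derived from the scanner return intensity, which is insensitive to ambient lighting. A minimal grayscale sketch under those assumptions, not the system's actual pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_illumination(gray, return_intensity, sigma=15.0):
    """Illumination correction via two spatial frequency bands.

    gray: (H, W) grayscale photograph; return_intensity: (H, W)
    co-registered laser return intensity; sigma: band split scale.
    """
    low = gaussian_filter(gray, sigma) + 1e-6       # slowly varying band
    high = gray / low                               # detail band
    ref_low = gaussian_filter(return_intensity, sigma)
    return high * ref_low                           # recombine bands
```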
Archive | 2000
David J. Kriegman; Peter N. Belhumeur; Athinodoros S. Georghiades
Due to illumination variability, the same object can appear dramatically different even when viewed in fixed pose, and this variability can confound recognition systems. This paper summarizes recent work on developing appearance-based methods for modeling the variability due to illumination in the images of objects. They differ from past appearance-based methods, however, in that a small set of training images is used to generate a representation, the illumination cone, which models the complete set of images of an object with Lambertian reflectance map under an arbitrary combination of point light sources at infinity. From a few images of an object in fixed pose but varying and unknown lighting, a surface and albedo map are reconstructed up to a family of affine (generalized bas-relief or GBR) deformations, and the cone representation is derived from this GBR surface. The methods have been tested within the domain of face recognition on two databases, one with 660 images of 10 faces in fixed pose but variable lighting, and one with 1350 images of 10 faces with variable pose and lighting; the results exceed those of popular existing methods.
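For reference, the GBR ambiguity mentioned above can be written down directly; this is the standard statement from the bas-relief ambiguity literature. With albedo-scaled normals b and light source s,

\[
G = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ \mu & \nu & \lambda \end{pmatrix},
\qquad
b^{*} = G^{-\mathsf{T}} b, \quad s^{*} = G s
\quad\Longrightarrow\quad
b^{*} \cdot s^{*} = (G^{-\mathsf{T}} b)^{\mathsf{T}} G s = b \cdot s ,
\]

so the transformed surface, paired with correspondingly transformed light sources, produces exactly the same images. This is why the reconstruction is determined only up to the three GBR parameters (with \(\lambda \neq 0\)), and why the cone can be built from any member of the family.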
International Conference on Computer Graphics and Interactive Techniques | 2006
Jianye Lu; Athinodoros S. Georghiades; Holly E. Rushmeier; Julie Dorsey; Chen Xu
We consider the problem of spatio-temporal variations in material appearance due to the wetting and drying of materials. We conducted a series of experiments that capture the appearance history of drying surfaces. We reduce this history to two parameters that control the shape of a drying curve. We relate these drying parameters to the shape of the original wetted area and the surface geometry. Using these relationships, we generate new time-varying spatial patterns of drying on synthetic shapes.
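A two-parameter drying curve can be sketched as an onset time and a rate, with the onset varied spatially from geometry. The logistic form and the distance-to-boundary parameterization below are stand-in assumptions, not the curve the paper fits to its measured histories:

```python
import numpy as np

def drying_curve(t, t0, k, wet=0.3, dry=0.8):
    """Reflectance over time: stays near its wet value, then rises
    to the dry value around onset time t0 at rate k. Hedged logistic
    stand-in for the paper's fitted curve shape."""
    return wet + (dry - wet) / (1.0 + np.exp(-k * (t - t0)))

def drying_pattern(t, d, base_t0=10.0, spread=5.0, k=1.5):
    """Spatial drying pattern: per-point onset derived from d, the
    distance to the wetted-region boundary, so points nearer the
    edge dry first (illustrative parameterization)."""
    return drying_curve(t, base_t0 + spread * d, k)
```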