Publication


Featured research published by Yvan G. Leclerc.


International Journal of Computer Vision | 1995

Object-centered surface reconstruction: combining multi-image stereo and shading

Pascal Fua; Yvan G. Leclerc

Our goal is to reconstruct both the shape and reflectance properties of surfaces from multiple images. We argue that an object-centered representation is most appropriate for this purpose because it naturally accommodates multiple sources of data, multiple images (including motion sequences of a rigid object), and self-occlusions. We then present a specific object-centered reconstruction method and its implementation. The method begins with an initial estimate of surface shape provided, for example, by triangulating the result of conventional stereo. The surface shape and reflectance properties are then iteratively adjusted to minimize an objective function that combines information from multiple input images. The objective function is a weighted sum of stereo, shading, and smoothness components, where the weight varies over the surface. For example, the stereo component is weighted more strongly where the surface projects onto highly textured areas in the images, and less strongly otherwise. Thus, each component has its greatest influence where its accuracy is likely to be greatest. Experimental results on both synthetic and real images are presented.
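The spatially weighted objective described above can be sketched as follows; the function name and the texture-driven weighting scheme are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def weighted_objective(stereo_err, shading_err, smooth_err, texture):
    """Combine per-vertex error terms with spatially varying weights.

    The stereo term dominates where the surface projects onto highly
    textured image regions; the shading term dominates elsewhere.
    """
    w_stereo = texture / (texture.max() + 1e-9)   # high texture -> trust stereo
    w_shading = 1.0 - w_stereo                    # low texture -> trust shading
    w_smooth = 0.1                                # small fixed regularizer
    return float(np.sum(w_stereo * stereo_err
                        + w_shading * shading_err
                        + w_smooth * smooth_err))
```

Each component thus has its greatest influence exactly where the weighting says its accuracy should be greatest.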


Computer Vision and Pattern Recognition | 1991

The direct computation of height from shading

Yvan G. Leclerc; Aaron F. Bobick

A method for recovering shape from shading that solves directly for the surface height is presented. By using a discrete formulation of the problem, it is possible to achieve good convergence behavior by employing numerical solution techniques more powerful than gradient descent methods derived from variational calculus. Because this method solves directly for height, it avoids the problem of finding an integrable surface maximally consistent with surface orientation. Furthermore, since additional constraints are not needed to make the problem well posed, a smoothness constraint is used only to drive the system towards a good solution; the weight of the smoothness term is eventually reduced to near zero. By solving directly for height, stereo processing may be used to provide initial and boundary conditions. The shape from shading technique, as well as its relation to stereo, is demonstrated on both synthetic and real imagery.
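The annealing idea, a smoothness term that steers early iterations and is then driven toward zero, can be illustrated on a toy 1-D energy. This sketch uses plain gradient descent purely to show the schedule; the paper advocates stronger numerical methods, and the quadratic energy here is an assumption for illustration.

```python
import numpy as np

def annealed_solve(target, n_outer=6, n_inner=200, step=0.05):
    """Minimize  sum (z - target)^2 + lam * sum (z[i+1] - z[i])^2,
    shrinking the smoothness weight lam toward zero on each outer
    pass so it only guides the solution early on."""
    z = np.zeros_like(target)
    lam = 1.0
    for _ in range(n_outer):
        for _ in range(n_inner):
            g_data = 2.0 * (z - target)       # gradient of the data term
            d = np.diff(z)
            g_smooth = np.zeros_like(z)       # gradient of the smoothness term
            g_smooth[:-1] -= 2.0 * d
            g_smooth[1:] += 2.0 * d
            z -= step * (g_data + lam * g_smooth)
        lam *= 0.1                            # anneal the smoothness weight
    return z
```

With the weight near zero in the final passes, the solution is no longer biased toward over-smooth surfaces.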


Computer Vision and Pattern Recognition | 2000

Variable albedo surface reconstruction from stereo and shape from shading

Dimitris Samaras; Dimitris N. Metaxas; Pascal Fua; Yvan G. Leclerc

We present a multiview method for the computation of object shape and reflectance characteristics based on the integration of shape from shading (SFS) and stereo, for nonconstant albedo and non-uniformly Lambertian surfaces. First we perform stereo fitting on the input stereo pairs or image sequences. When the images are uncalibrated, we recover the camera parameters using bundle adjustment. Based on the stereo result, we can automatically segment the albedo map (which is taken to be piecewise constant) using a minimum description length (MDL) based metric, to identify areas suitable for SFS (typically smooth, textureless areas) and to derive illumination information. The shape and illumination parameter estimates are refined using a deformable model SFS algorithm, which iterates between computing shape and illumination parameters. Our method takes into account viewing-angle-dependent foreshortening and specularity effects, and compensates for them as much as possible by utilizing information from more than one image. We demonstrate that we can extend the applicability of SFS algorithms to real-world situations in which some of their traditional assumptions are violated. We demonstrate our method by applying it to face shape reconstruction. Experimental results indicate a significant improvement over SFS-only or stereo-only reconstruction. Model accuracy and detail are improved, especially in areas of low texture detail. Albedo information is retrieved and can be used to accurately re-render the model under different illumination conditions.
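MDL selection between piecewise-constant albedo models can be pictured as a score trading residual fit against a per-segment description cost; the score below is a schematic stand-in for the paper's actual metric, and the fixed per-segment cost is an assumption.

```python
import numpy as np

def mdl_score(albedo, breaks, cost_per_segment=8.0):
    """Schematic MDL score for a piecewise-constant albedo model:
    squared residual within each segment plus a fixed description
    cost per segment. Lower is better."""
    segments = np.split(albedo, breaks)
    residual = sum(float(np.sum((s - s.mean()) ** 2)) for s in segments)
    return residual + cost_per_segment * len(segments)
```

A candidate segmentation wins only when the improved fit pays for the extra segments it introduces.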


IEEE Computer Graphics and Applications | 1999

TerraVision II: visualizing massive terrain databases in VRML

Martin Reddy; Yvan G. Leclerc; Lee Iverson; Nat Bletter

To disseminate 3D maps and spatial data over the Web, we made massive terrain data sets accessible through either a standard VRML browser or the customized TerraVision II browser. Although not required to view the content, TerraVision II lets the user perform specialized browser-level optimizations that offer increased efficiency and seamless interaction with the terrain data. We designed our framework to simplify terrain data maintenance and to let users dynamically select particular sets of georeferenced data. Our implementation uses Java scripting to extend VRML's base functionality and the External Authoring Interface to offer application-specific management of the virtual geographic environment.


European Conference on Computer Vision | 1994

Using 3-Dimensional Meshes To Combine Image-Based and Geometry-Based Constraints

Pascal Fua; Yvan G. Leclerc

To recover complicated surfaces, single information sources often prove insufficient. In this paper, we present a unified framework for 3-D shape reconstruction that allows us to combine image-based constraints, such as those deriving from stereo and shape-from-shading, with geometry-based ones, provided here in the form of 3-D points, 3-D features or 2-D silhouettes.


Virtual Reality Modeling Language Symposium | 2000

Under the hood of GeoVRML 1.0

Martin Reddy; Lee Iverson; Yvan G. Leclerc

GeoVRML 1.0 provides geoscientists with a rich suite of enabling capabilities that cannot be found elsewhere: the ability to model dynamic 3-D geographic data that can be distributed over the Web and interactively visualized using a standard browser configuration. GeoVRML includes nodes for VRML97 that perform this task, addressing issues such as coordinate systems, scalability, animation, accuracy, and preservation of the original geographic data. The implementation is released as open source and includes various tools for generating GeoVRML data. All of these facilities give geoscientists an excellent medium for presenting complex 3-D geographic data in a dynamic, interactive, and web-accessible format. We illustrate these capabilities using real-world examples drawn from diverse application areas.


Computer Vision and Image Understanding | 1996

Taking Advantage of Image-Based and Geometry-Based Constraints to Recover 3-D Surfaces

Pascal Fua; Yvan G. Leclerc

A unified framework for 3-D shape reconstruction allows us to combine image-based and geometry-based information sources. The image information is akin to stereo and shape-from-shading, while the geometric information may be provided in the form of 3-D points, 3-D features, or 2-D silhouettes. A formal integration framework is critical in recovering complicated surfaces because the information from a single source is often insufficient to provide a unique answer. Our approach to shape recovery is to deform a generic object-centered 3-D representation of the surface so as to minimize an objective function. This objective function is a weighted sum of the contributions of the various information sources. We describe these various terms individually, our weighting scheme, and our optimization method. Finally, we present results on a number of difficult images of real scenes for which a single source of information would have proved insufficient.
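One way to picture the weighted sum above is as an image-based term plus a geometric term attracting observed 3-D points to the deforming surface; the specific form below (nearest-vertex squared distance, precomputed image score) is an illustrative assumption, not the paper's formulation.

```python
import numpy as np

def combined_energy(vertices, image_term, points_3d, w_img=1.0, w_geo=1.0):
    """Weighted sum of an image-based error (e.g. a stereo/shading
    score, passed in precomputed) and a geometry-based term: squared
    distance from each observed 3-D point to its nearest mesh vertex."""
    d = np.linalg.norm(vertices[:, None, :] - points_3d[None, :, :], axis=2)
    geo = float(np.sum(d.min(axis=0) ** 2))
    return w_img * image_term + w_geo * geo
```

Minimizing such an energy over vertex positions lets whichever information source is locally reliable dominate the reconstruction.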


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2002

The radiometry of multiple images

Quang-Tuan Luong; Pascal Fua; Yvan G. Leclerc

We introduce a methodology for radiometric reconstruction (i.e., the simultaneous recovery of multiple illuminants and surface albedos from multiple views), assuming that the geometry of the scene and of the cameras is known. We formulate a linear theory of multiple illuminants and show its similarity to the theory of geometric recovery from multiple views. Linear and nonlinear implementations are proposed, simulation results are discussed, and, finally, results on real images are presented.
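For a Lambertian surface under a distant light, image intensity is linear in the illuminant vector, which is the kind of linearity the abstract exploits. As a minimal sketch (assuming known normals and albedo, a single light, and no shadowing), the illuminant follows from linear least squares:

```python
import numpy as np

def recover_light(normals, albedo, intensities):
    """The Lambertian model I_i = albedo_i * (n_i . l) is linear in the
    illuminant l, so with known geometry and albedo a single distant
    light is recovered by linear least squares (shadowing ignored)."""
    A = albedo[:, None] * normals              # (N, 3) design matrix
    l, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return l
```

The full method handles multiple lights and unknown albedos jointly; this toy shows only the basic linear structure.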


Applied Imagery Pattern Recognition Workshop | 2000

Modeling the digital Earth in VRML

Martin Reddy; Yvan G. Leclerc; Lee Iverson; Nat Bletter; Kiril Vidimce

This paper describes the representation and navigation of large, multi-resolution, georeferenced datasets in VRML97. This requires resolving nontrivial issues such as how to represent deep level-of-detail hierarchies efficiently in VRML; how to model terrain using geographic coordinate systems instead of only VRML's Cartesian representation; how to model georeferenced coordinates to sub-meter accuracy with only single-precision floating-point support; how to enable the integration of multiple terrain datasets for a region, as well as cultural features such as buildings and roads; how to navigate efficiently around a large, global terrain dataset; and, finally, how to encode metadata describing the terrain. We present solutions to all of these problems. Consequently, we are able to visualize geographic data on the order of terabytes or more, from the globe down to millimeter resolution, in real time, using standard VRML97.
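The sub-meter-accuracy problem is easy to demonstrate: a geocentric coordinate near the Earth's surface (millions of meters) exceeds the ~7 significant digits of single precision. The standard remedy, which GeoVRML adopts in spirit, is to keep a high-precision local origin and store only small offsets from it; the specific numbers below are illustrative.

```python
import numpy as np

# A geocentric coordinate (~6.4e6 m) stored directly in float32 loses
# sub-meter detail: the float32 spacing at that magnitude is 0.5 m.
point = 6378137.123                 # meters, double precision
lost = float(np.float32(point))     # rounds away the 0.123 m of detail

# Fix: keep a double-precision local origin and store only the small
# float32 offset from it; the offset retains micrometer-level precision.
origin = 6378000.0                  # double-precision local origin
offset = np.float32(point - origin) # 137.123 fits easily in float32
recovered = origin + float(offset)
```

Rendering then happens in the offset frame, so single-precision graphics hardware never sees the large absolute coordinates.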


Sensors, Systems, and Next-Generation Satellites V | 2001

Framework for robust 3D change detection

Aaron Heller; Yvan G. Leclerc; Quang-Tuan Luong

We present an application of our framework for 3-D object-centered change detection to combined satellite and aerial imagery. In this framework, geometry is compared to geometry, allowing us to compare image sets with different acquisition conditions and even different sensors. By working in this framework, we do not encounter the restrictions and shortcomings of conventional image-based change detection, which requires that the images being compared have similar acquisition geometry, photometry, scene illumination, and so forth. The contributions of our framework are: (1) using a geometric basis for change detection, allowing image sets acquired under different conditions to be compared; and (2) explicit modeling of image geometry so that significant and insignificant change can be characterized numerically. The contributions of this paper are: (1) the algorithms are embedded in an integrated cartographic modeling and image processing system, which can ingest and make use of a variety of government and commercial imagery and geospatial data products; and (2) experimentation with a variety of imagery and scene content. Modifications to the algorithms specific to their use with satellite imagery are discussed, and the results from several experiments with both aerial and satellite images of urban domains are described and analyzed.
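In spirit, geometry-to-geometry comparison reduces to thresholding differences between reconstructed surfaces against their expected uncertainty, so that differences within reconstruction noise count as insignificant. The grid-based test below is a simplified illustration under that assumption, not the paper's algorithm.

```python
import numpy as np

def detect_change(z_before, z_after, sigma, k=3.0):
    """Flag cells whose height difference exceeds k times the expected
    reconstruction uncertainty sigma: differences within noise are
    treated as insignificant, larger ones as significant change."""
    return np.abs(z_after - z_before) > k * sigma
```

Because the comparison is between reconstructed geometries rather than pixel values, the two epochs may come from different sensors and viewing conditions.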

Collaboration


Dive into Yvan G. Leclerc's collaborations.

Top Co-Authors

Pascal Fua

École Polytechnique Fédérale de Lausanne

Lee Iverson

University of British Columbia

Aaron F. Bobick

Georgia Institute of Technology