Publications


Featured research published by Ioannis Stamos.


computer vision and pattern recognition | 2000

3-D model construction using range and image data

Ioannis Stamos; Peter K. Allen

This paper deals with the automated creation of geometrically and photometrically correct 3-D models of the world. These models can be used for virtual reality, tele-presence, digital cinematography, and urban planning applications. The combination of range sensing (dense depth estimates) and image sensing (color information) provides data sets that allow us to create geometrically correct, photorealistic models of high quality. The 3-D models are first built from range data using a volumetric set-intersection method previously developed by us. Photometry can then be mapped onto these models by registering features from both the 3-D and 2-D data sets. Range data segmentation algorithms have been developed to identify planar regions, determine linear features from planar intersections that can serve as features for registration with lines in the 2-D imagery, and reduce the overall complexity of the models. Results are shown for models of large buildings on our campus built from real data acquired by multiple sensors.
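The planar-intersection step described above derives 3-D line features where two fitted planes meet. A minimal sketch of that geometric computation in NumPy (a generic plane-plane intersection, not the authors' full segmentation pipeline):

```python
import numpy as np

def plane_intersection_line(n1, d1, n2, d2):
    """Line of intersection of the planes n1.x = d1 and n2.x = d2.

    Returns (point_on_line, unit_direction). Raises if the planes
    are (nearly) parallel.
    """
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)
    norm = np.linalg.norm(direction)
    if norm < 1e-12:
        raise ValueError("planes are parallel")
    direction /= norm
    # Pin down one point: it must satisfy both plane equations, and a
    # third constraint (direction . x = 0) makes the 3x3 system unique.
    A = np.vstack([n1, n2, direction])
    b = np.array([d1, d2, 0.0])
    point = np.linalg.solve(A, b)
    return point, direction
```

Segments along such lines (clipped to the extent of the supporting planar regions) are what get matched against 2-D image lines during registration.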


Computer Vision and Image Understanding | 2002

Geometry and texture recovery of scenes of large scale

Ioannis Stamos; Peter K. Allen

This paper presents a systematic approach to the problem of photorealistic 3-D model acquisition from the combination of range and image sensing. The input is a sequence of unregistered range scans of the scene and a sequence of unregistered 2-D photographs of the same scene. The output is a true texture-mapped geometric model of the scene. We believe that the developed modules are of vital importance for a flexible photorealistic 3-D model acquisition system. Segmentation algorithms simplify the dense data sets and provide stable features of interest that can be used for registration purposes. Solid modeling provides geometrically correct 3-D models. Finally, the automated range-to-image registration algorithm increases the flexibility of the system by decoupling the slow geometry-recovery process from the image-acquisition process; the camera does not have to be precalibrated and rigidly attached to the range sensor. The system is comprehensive in that it addresses all phases of the modeling problem, with a particular emphasis on automating the entire process.


International Journal of Computer Vision | 2008

Integrating Automated Range Registration with Multiview Geometry for the Photorealistic Modeling of Large-Scale Scenes

Ioannis Stamos; Lingyun Liu; Chao Chen; George Wolberg; Gene Yu; Siavash Zokai

The photorealistic modeling of large-scale scenes, such as urban structures, requires a fusion of range sensing technology and traditional digital photography. This paper presents a system that integrates automated 3D-to-3D and 2D-to-3D registration techniques with multiview geometry for the photorealistic modeling of urban scenes. The 3D range scans are registered using our automated 3D-to-3D registration method that matches 3D features (linear or circular) in the range images. A subset of the 2D photographs is then aligned with the 3D model using our automated 2D-to-3D registration algorithm that matches linear features between the range scans and the photographs. Finally, the 2D photographs are used to generate a second 3D model of the scene that consists of a sparse 3D point cloud, produced by applying a multiview geometry (structure-from-motion) algorithm directly on a sequence of 2D photographs. The last part of this paper introduces a novel algorithm for automatically recovering the rotation, scale, and translation that best aligns the dense and sparse models. This alignment is necessary to enable the photographs to be optimally texture mapped onto the dense model. The contribution of this work is that it merges the benefits of multiview geometry with automated registration of 3D range scans to produce photorealistic models with minimal human interaction. We present results from experiments in large-scale urban scenes.
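Recovering the rotation, scale, and translation that align the dense and sparse models is a similarity-transform estimation problem between two 3-D point sets. A minimal closed-form sketch using Umeyama's least-squares method, which solves that subproblem once point correspondences are known (the paper's algorithm establishes correspondences automatically; this sketch assumes they are given):

```python
import numpy as np

def similarity_align(src, dst):
    """Least-squares s, R, t with dst_i ~= s * R @ src_i + t
    for corresponding Nx3 point sets (Umeyama's closed form)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / n                      # cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                       # avoid a reflection
    R = U @ S @ Vt
    var_s = (xs ** 2).sum() / n              # source variance
    s = np.trace(np.diag(D) @ S) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t
```

In the noiseless case this recovers the transform exactly; with noisy correspondences it minimizes the sum of squared residuals.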


3-D digital imaging and modeling | 2001

AVENUE: Automated site modeling in urban environments

Peter K. Allen; Ioannis Stamos; Atanas Gueorguiev; Ethan Gold; Paul S. Blaer

This paper is an overview of the AVENUE project at Columbia University. AVENUE's main goal is to automate the site modeling process in urban environments. The first component of AVENUE is a 3-D modeling system which constructs complete 3-D geometric models with photometric texture mapping acquired from different viewpoints. The second component is a planning system that plans the Next-Best-View for acquiring a model of the site. The third component is a mobile robot we have built that contains an integrated sensor suite for automatically performing the site modeling task. We present results for modeling buildings in New York City.


computer vision and pattern recognition | 2005

Automatic 3D to 2D registration for the photorealistic rendering of urban scenes

Lingyun Liu; Ioannis Stamos

This paper presents a novel and efficient algorithm for the 3D range to 2D image registration problem in urban scene settings. Our input is a set of unregistered 3D range scans and a set of unregistered and uncalibrated 2D images of the scene. The 3D range scans and 2D images capture real scenes in extremely high detail. A new automated algorithm calibrates each 2D image and computes an optimized transformation between the 2D images and 3D range scans. This transformation is based on a match of 3D with 2D features that maximizes an overlap criterion. Our algorithm attacks the hard 3D range to 2D image registration problem in a systematic, efficient, and automatic way. Images captured by a high-resolution 2D camera that moves and adjusts freely are mapped onto a centimeter-accurate 3D model of the scene, providing photorealistic renderings of high quality. We present results from experiments in three different urban settings.
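The computed transformation ultimately maps 3-D structure into each calibrated 2-D image through a pinhole camera model. A minimal sketch of that projection step in NumPy (the standard pinhole model with hypothetical intrinsics, not the paper's feature-matching algorithm):

```python
import numpy as np

def project(K, R, t, X):
    """Project Nx3 world points into the image plane of a pinhole
    camera with intrinsics K, world-to-camera rotation R, translation t.

    Returns Nx2 pixel coordinates.
    """
    Xc = X @ R.T + t            # world -> camera coordinates
    uv = Xc @ K.T               # apply intrinsics (homogeneous)
    return uv[:, :2] / uv[:, 2:3]   # perspective divide
```

Once K, R, and t are estimated, projecting the 3-D features this way and measuring how well they overlap the 2-D image features gives an alignment score of the kind the overlap criterion above is built on.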


computer vision and pattern recognition | 2006

Multiview Geometry for Texture Mapping 2D Images Onto 3D Range Data

Lingyun Liu; Ioannis Stamos; Gene Yu; George Wolberg; Siavash Zokai

The photorealistic modeling of large-scale scenes, such as urban structures, requires a fusion of range sensing technology and traditional digital photography. This paper presents a system that integrates multiview geometry and automated 3D registration techniques for texture mapping 2D images onto 3D range data. The 3D range scans and the 2D photographs are respectively used to generate a pair of 3D models of the scene. The first model consists of a dense 3D point cloud, produced by using a 3D-to-3D registration method that matches 3D lines in the range images. The second model consists of a sparse 3D point cloud, produced by applying a multiview geometry (structure-from-motion) algorithm directly on a sequence of 2D photographs. This paper introduces a novel algorithm for automatically recovering the rotation, scale, and translation that best aligns the dense and sparse models. This alignment is necessary to enable the photographs to be optimally texture mapped onto the dense model. The contribution of this work is that it merges the benefits of multiview geometry with automated registration of 3D range scans to produce photorealistic models with minimal human interaction. We present results from experiments in large-scale urban scenes.


international conference on robotics and automation | 2000

Integration of range and image sensing for photo-realistic 3D modeling

Ioannis Stamos; Peter K. Allen

The automated extraction of photo-realistic 3D models of the world that can be used in applications such as virtual reality, tele-presence, digital cinematography and urban planning, is the focus of this paper. The combination of range (dense depth estimates) and image sensing (color information) provides data-sets which allow us to create photo-realistic models of high quality. The challenges are the simplification of the 3D data set, the extraction of meaningful features in both the range and 2D images and the fusion of those data-sets using the extracted features. We address all these challenges and provide results on data we gathered in outdoor scenes by a range and image sensor based on a mobile robot. Our ultimate goal is an autonomous 3D model creation system which minimizes the amount of human interaction.


international conference on computer vision | 2001

Automatic registration of 2-D with 3-D imagery in urban environments

Ioannis Stamos; Peter K. Allen

We are building a system that can automatically acquire 3D range scans and 2D images to build geometrically correct, texture mapped 3D models of urban environments. This paper deals with the problem of automatically registering the 3D range scans with images acquired at other times and with unknown camera calibration and location. The method involves the utilization of parallelism and orthogonality constraints that naturally exist in urban environments. We present results for building a texture mapped 3-D model of an urban building.
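Parallelism constraints of the kind described above are typically exploited through vanishing points: all 3-D lines that share a direction project to image lines meeting at one point. A minimal sketch, assuming a pinhole camera with known intrinsics K and rotation R (quantities the registration itself must estimate; here they are simply given):

```python
import numpy as np

def vanishing_point(K, R, direction):
    """Image vanishing point of all 3-D lines parallel to `direction`,
    for a pinhole camera with intrinsics K and world-to-camera
    rotation R. Translation does not matter: points at infinity
    depend only on direction."""
    v = K @ R @ np.asarray(direction, float)
    return v[:2] / v[2]
```

Matching vanishing points of the scene's dominant (parallel and mutually orthogonal) directions against clusters of 2-D image lines is one standard way such constraints reduce the unknown camera rotation.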


IEEE Computer Graphics and Applications | 2003

New methods for digital modeling of historic sites

Peter K. Allen; Alejandro J. Troccoli; Benjamin M. Smith; Stephen Murray; Ioannis Stamos; Marius Leordeanu

We discuss new methods for building 3D models of historic sites. Our algorithm automatically computes pairwise registrations between individual scans, builds a topological graph, and places the scans in the same frame of reference.
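Placing pairwise-registered scans in a common frame of reference amounts to chaining rigid transforms along the registration graph. A minimal sketch, assuming the pairwise 4x4 transforms are supplied as a dict keyed by scan-index pairs (a hypothetical data layout, not the authors' implementation, and without the global refinement a real system would add):

```python
from collections import deque

import numpy as np

def global_poses(pairwise, root=0):
    """Chain pairwise rigid transforms into global poses.

    `pairwise` maps (i, j) -> 4x4 matrix T_ij taking scan j's
    coordinates into scan i's frame. Returns a dict of 4x4 poses
    mapping each reachable scan's coordinates into the root frame.
    """
    graph = {}
    for (i, j), T in pairwise.items():
        graph.setdefault(i, []).append((j, T))
        graph.setdefault(j, []).append((i, np.linalg.inv(T)))
    poses = {root: np.eye(4)}
    queue = deque([root])
    while queue:                      # breadth-first traversal
        i = queue.popleft()
        for j, T in graph.get(i, []):
            if j not in poses:
                poses[j] = poses[i] @ T
                queue.append(j)
    return poses
```

Chaining accumulates drift along long paths, which is why practical systems follow this step with a global (graph-wide) optimization of the poses.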


computer vision and pattern recognition | 1998

Interactive sensor planning

Ioannis Stamos; Peter K. Allen

This paper describes an interactive sensor planning system that can be used to select viewpoints subject to camera visibility, field-of-view, and task constraints. Application areas for this method include surveillance planning, safety monitoring, architectural site design planning, and automated site modeling. Given a description of the sensor's characteristics, the objects in the 3-D scene, and the targets to be viewed, our algorithms compute the set of admissible viewpoints that satisfy the constraints. The system first builds topologically correct solid models of the scene from a variety of data sources. Viewing targets are then selected, and visibility volumes and field-of-view cones are computed and intersected to create viewing volumes where cameras can be placed. The user can interactively manipulate the scene and select multiple target features to be viewed by a camera. The user can also select candidate viewpoints within this volume to synthesize views and verify the correctness of the planning system. We present experimental results for the planning system on an actual complex city model.
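The field-of-view constraint above reduces, for a single target point, to a cone-membership test. A minimal sketch (one point against one camera, not the full visibility-volume intersection the system performs):

```python
import numpy as np

def in_fov(cam_pos, view_dir, half_angle_deg, target):
    """True if `target` lies inside the camera's field-of-view cone:
    the angle between the viewing direction and the ray to the target
    is at most `half_angle_deg`. Points behind the camera fail the
    test automatically (negative dot product)."""
    to_target = np.asarray(target, float) - np.asarray(cam_pos, float)
    to_target /= np.linalg.norm(to_target)
    d = np.asarray(view_dir, float)
    d /= np.linalg.norm(d)
    return np.dot(to_target, d) >= np.cos(np.radians(half_angle_deg))
```

Sampling candidate viewpoints and keeping those for which every selected target passes this test (and an occlusion test against the solid model) yields the admissible viewing volumes described above.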

Collaboration


Dive into Ioannis Stamos's collaborations.

Top Co-Authors

Lingyun Liu, City University of New York
Gene Yu, City University of New York
George Wolberg, City College of New York
Siavash Zokai, City University of New York
Cecilia Chao Chen, City University of New York