
Publication


Featured research published by Daniel G. Aliaga.


Eurographics | 2013

A Survey of Urban Reconstruction

Przemyslaw Musialski; Peter Wonka; Daniel G. Aliaga; Michael Wimmer; Luc Van Gool; Werner Purgathofer

This paper provides a comprehensive overview of urban reconstruction. While there exists a considerable body of literature, this topic is still under active research. The work reviewed in this survey stems from three research communities: computer graphics, computer vision, and photogrammetry/remote sensing. Our goal is to provide a survey that will help researchers better position their own work in the context of existing solutions, and to help newcomers and practitioners in computer graphics quickly gain an overview of this vast field. Further, we hope to encourage even more interdisciplinary work among these communities, since the reconstruction problem itself is far from solved.


Interactive 3D Graphics and Games | 1999

MMR: an interactive massive model rendering system using geometric and image-based acceleration

Daniel G. Aliaga; Jon Cohen; Andy Wilson; Eric Baker; Hansong Zhang; Carl Erikson; Kenny Hoff; Thomas C. Hudson; Wolfgang Stuerzlinger; Rui Bastos; Frederick P. Brooks; Dinesh Manocha

We present a system for rendering very complex 3D models at interactive rates. We select a subset of the model as preferred viewpoints and partition the space into virtual cells. Each cell contains near geometry, rendered using levels of detail and visibility culling, and far geometry, rendered as a textured depth mesh. Our system automatically balances the screen-space errors resulting from geometric simplification with those from textured depth-mesh distortion. We describe our prefetching and data management schemes, both crucial for models significantly larger than available system memory. We have successfully used our system to accelerate walkthroughs of a 13 million triangle model of a large coal-fired power plant and of a 1.7 million triangle architectural model. We demonstrate the walkthrough of a 1.3 GB power plant model with a 140 MB cache footprint.
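The per-cell near/far split can be sketched as follows; this toy version uses a single distance threshold and invented names, whereas the actual system precomputes cells and balances screen-space error:

```python
# Hypothetical sketch of a virtual cell's near/far partition (names and the
# single-radius criterion are illustrative, not MMR's actual data structure):
# objects near the cell are rendered as simplified polygons, everything
# beyond the radius is rendered as a textured depth mesh.

def partition_cell(objects, cell_center, cull_radius):
    """Split objects into near and far sets by distance to the cell center.

    objects: list of (object_id, (x, y, z)) centroids.
    Returns (near_ids, far_ids).
    """
    near, far = [], []
    cx, cy, cz = cell_center
    for obj_id, (x, y, z) in objects:
        dist2 = (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
        (near if dist2 <= cull_radius ** 2 else far).append(obj_id)
    return near, far

# Example: two objects near the cell, one distant.
near, far = partition_cell(
    [("desk", (1.0, 0.0, 0.0)), ("pipe", (2.0, 2.0, 0.0)), ("boiler", (90.0, 0.0, 0.0))],
    cell_center=(0.0, 0.0, 0.0),
    cull_radius=10.0,
)
```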


International Conference on Computer Graphics and Interactive Techniques | 1991

An object-oriented framework for the integration of interactive animation techniques

Robert C. Zeleznik; D. Brookshire Conner; Matthias M. Wloka; Daniel G. Aliaga; Nathan T. Huang; Philip M. Hubbard; Brian Knep; Henry Kaufman; John F. Hughes; Andries van Dam

We present an interactive modeling and animation system that facilitates the integration of a variety of simulation and animation paradigms. This system permits the modeling of diverse objects that change in shape, appearance, and behavior over time. Our system thus extends modeling tools to include animation controls. Changes can be effected by various methods of control, including scripted, gestural, and behavioral specification. The system is an extensible testbed that supports research in the interaction of disparate control methods embodied in controller objects. This paper discusses some of the issues involved in modeling such interactions and the mechanisms implemented to provide solutions to some of these issues. The system's object-oriented architecture uses delegation hierarchies to let objects change all of their attributes dynamically. Objects include displayable objects, controllers, cameras, lights, renderers, and user interfaces. Techniques used to obtain interactive performance include the use of data-dependency networks, lazy evaluation, and extensive caching to exploit inter- and intra-frame coherency.
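Delegation, the mechanism that lets objects change their attributes dynamically, can be sketched in a few lines; the class and controller names below are invented for illustration and are not the paper's API:

```python
class Delegating:
    """Minimal delegation sketch (illustrative only): attribute lookups that
    fail on the object fall through to a swappable delegate, so an object's
    behavior can change at run time without changing its identity."""

    def __init__(self, delegate=None):
        self.delegate = delegate

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails; forward it.
        if self.delegate is not None:
            return getattr(self.delegate, name)
        raise AttributeError(name)

class SpinController:
    def step(self):
        return "rotate"

class BounceController:
    def step(self):
        return "translate"

obj = Delegating(SpinController())
first = obj.step()                  # delegated to SpinController
obj.delegate = BounceController()   # behavior changes dynamically
second = obj.step()                 # now delegated to BounceController
```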


International Conference on Computer Graphics and Interactive Techniques | 2001

Plenoptic stitching: a scalable method for reconstructing 3D interactive walkthroughs

Daniel G. Aliaga; Ingrid Carlbom

Interactive walkthrough applications require detailed 3D models to give users a sense of immersion in an environment. Traditionally these models are built using computer-aided design tools to define geometry and material properties. But creating detailed models is time-consuming, and it is difficult to reproduce all geometric and photometric subtleties of real-world scenes. Computer vision attempts to alleviate this problem by extracting geometry and photometric properties from images of real-world scenes. However, these models are still limited in the amount of detail they recover. Image-based rendering generates novel views by resampling a set of images of the environment without relying upon an explicit geometric model. Current techniques of this kind limit the size and shape of the environment, and they do not lend themselves to walkthrough applications. In this paper, we define a parameterization of the 4D plenoptic function that is particularly suitable for interactive walkthroughs and define a method for its sampling and reconstruction. Our main contributions are: 1) a parameterization of the 4D plenoptic function that supports walkthrough applications in large, arbitrarily shaped environments; 2) a simple and fast capture process for complex environments; and 3) an automatic algorithm for reconstruction of the plenoptic function.
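As a toy illustration of evaluating a 4D plenoptic function (the sampling and lookup here are invented; the paper defines its own parameterization tailored to walkthrough capture), radiance can be indexed by camera position on the walkthrough plane plus ray direction, with nearest-neighbor reconstruction over captured samples:

```python
import math

# Illustrative 4D plenoptic lookup: radiance indexed by position (x, y) on
# the walkthrough plane and ray direction (theta, phi). The nearest-neighbor
# reconstruction and the sample set are invented for this sketch.

def nearest_sample(samples, query):
    """samples: list of ((x, y, theta, phi), radiance).
    Returns the radiance of the sample closest to the 4D query point."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(samples, key=lambda s: d2(s[0], query))[1]

samples = [
    ((0.0, 0.0, 0.0, 0.0), "red"),
    ((1.0, 0.0, math.pi / 2, 0.0), "green"),
]
val = nearest_sample(samples, (0.9, 0.1, 1.5, 0.0))
```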


Computer Vision and Pattern Recognition | 2010

Building reconstruction using Manhattan-world grammars

Carlos A. Vanegas; Daniel G. Aliaga; Bedřich Beneš

We present a passive computer vision method that exploits existing mapping and navigation databases in order to automatically create 3D building models. Our method defines a grammar for representing changes in building geometry that approximately follow the Manhattan-world assumption, which states that there is a predominance of three mutually orthogonal directions in the scene. By using multiple calibrated aerial images, we extend previous Manhattan-world methods to robustly produce a single, coherent, complete geometric model of a building with partial textures. Our method uses an optimization to discover a 3D building geometry that produces the same set of façade orientation changes observed in the captured images. We have applied our method to several real-world buildings and have analyzed our approach using synthetic buildings.
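The Manhattan-world assumption itself is easy to state in code. A minimal illustrative check (not the paper's grammar), testing whether a building footprint in plan view uses only axis-aligned walls:

```python
# Illustrative Manhattan-world test: with a vertical "up" axis, every wall
# direction in plan view must be parallel to +x, -x, +y, or -y, i.e. every
# footprint edge is axis-aligned. This is a necessary condition only; the
# paper's grammar additionally models how facade orientations change.

def is_manhattan(footprint):
    """footprint: list of 2D (x, y) vertices of a closed polygon in plan
    view. Returns True if every edge is axis-aligned."""
    n = len(footprint)
    for i in range(n):
        (x0, y0), (x1, y1) = footprint[i], footprint[(i + 1) % n]
        dx, dy = x1 - x0, y1 - y0
        if dx != 0 and dy != 0:  # a diagonal edge violates the assumption
            return False
    return True

ok = is_manhattan([(0, 0), (4, 0), (4, 3), (2, 3), (2, 5), (0, 5)])  # L-shaped block
bad = is_manhattan([(0, 0), (4, 0), (2, 3)])                         # triangular block
```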


International Conference on Computer Vision | 2001

Accurate catadioptric calibration for real-time pose estimation in room-size environments

Daniel G. Aliaga

Omnidirectional video cameras are becoming increasingly popular in computer vision. One family of these cameras uses a catadioptric system with a paraboloidal mirror and an orthographic lens to produce an omnidirectional image with a single center-of-projection. In this paper, we develop a novel calibration model that we combine with a beacon-based pose estimation algorithm. Our approach relaxes the assumption of an ideal paraboloidal catadioptric system and achieves an order of magnitude improvement in pose estimation accuracy compared to calibration with an ideal camera model. Our complete standalone system, placed on a radio-controlled motorized cart, moves in a room-size environment, capturing high-resolution frames to disk and recovering camera pose with an average error of 0.56% in a region 15 feet in diameter.
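For context, the ideal single-viewpoint model that this calibration relaxes can be written down compactly. A hedged sketch, assuming a paraboloidal mirror z = (x^2 + y^2)/(4f) - f with its focus at the origin and an orthographic camera along the z axis (a standard idealization, not the paper's calibrated model):

```python
import math

# Ideal parabolic catadioptric projection (sketch). A ray aimed at the
# focus reflects off the paraboloid into a vertical ray; eliminating the
# mirror intersection gives the closed form u = 2f*X/(rho - Z).
# The paper's point is that real systems deviate from this ideal model.

def ideal_parabolic_project(X, Y, Z, f):
    """Project world direction (X, Y, Z) to orthographic image coords (u, v)
    for a paraboloid z = (x^2 + y^2)/(4f) - f with focus at the origin."""
    rho = math.sqrt(X * X + Y * Y + Z * Z)
    # Singular when Z == rho (the ray straight up through the opening).
    return (2.0 * f * X / (rho - Z), 2.0 * f * Y / (rho - Z))

# A horizontal ray maps to radius 2f on the image plane:
u, v = ideal_parabolic_project(1.0, 0.0, 0.0, f=0.5)
```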


IEEE Visualization | 2001

Hybrid simplification: combining multi-resolution polygon and point rendering

Jonathan D. Cohen; Daniel G. Aliaga; Weiqiang Zhang

Multi-resolution hierarchies of polygons and more recently of points are familiar and useful tools for achieving interactive rendering rates. We present an algorithm for tightly integrating the two into a single hierarchical data structure. The trade-off between rendering portions of a model with points or with polygons is made automatically. Our approach to this problem is to apply a bottom-up simplification process involving not only polygon simplification operations, but point replacement and point simplification operations as well. Given one or more surface meshes, our algorithm produces a hybrid hierarchy comprising both polygon and point primitives. This hierarchy may be optimized according to the relative performance characteristics of these primitive types on the intended rendering platform. We also provide a range of aggressiveness for performing point replacement operations. The most conservative approach produces a hierarchy that is better than a purely polygonal hierarchy in some places, and roughly equal in others. A less conservative approach can trade reduced complexity at the far viewing ranges for some increased complexity at the near viewing ranges. We demonstrate our approach on a number of input models, achieving primitive counts that are 1.3 to 4.7 times smaller than those of triangle-only simplification.
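The polygon-versus-point trade-off can be caricatured with a simple cost model; the costs and function below are invented, whereas the paper optimizes against measured per-primitive performance of the target rendering platform:

```python
# Toy cost model for the polygon-vs-point decision (illustrative): a surface
# patch is rendered with whichever primitive type is cheaper, given how many
# primitives of each type it would need and their relative per-primitive cost.

def choose_primitive(n_triangles, n_points, tri_cost, point_cost):
    """Return ('triangles' or 'points', total_cost) for the cheaper option."""
    tri_total = n_triangles * tri_cost
    pt_total = n_points * point_cost
    if tri_total <= pt_total:
        return ("triangles", tri_total)
    return ("points", pt_total)

# A patch needing 200 triangles or 1000 points, with points 10x cheaper:
kind, cost = choose_primitive(200, 1000, tri_cost=1.0, point_cost=0.1)
```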


International Conference on Computer Graphics and Interactive Techniques | 1999

Automatic image placement to provide a guaranteed frame rate

Daniel G. Aliaga; Anselmo Lastra

We present a preprocessing algorithm and run-time system for rendering 3D geometric models at a guaranteed frame rate. Our approach trades off space for frame rate by using images to replace distant geometry. The preprocessing algorithm automatically chooses a subset of the model to display as an image so as to render no more than a specified number of geometric primitives. We also summarize an optimized layered-depth-image warper to display images surrounded by geometry at run time. Furthermore, we show the results of applying our method to accelerate the interactive walkthrough of several complex models.
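The budgeted selection of geometry can be sketched greedily; this is an illustrative simplification (invented names and a plain distance sort), not the paper's preprocessing algorithm:

```python
# Sketch of budgeted geometry selection: keep the nearest objects as real
# geometry until a primitive budget is exhausted, and mark the rest to be
# replaced by an image. The budget caps the per-frame primitive count, which
# is what bounds the frame time.

def select_geometry(objects, budget):
    """objects: list of (name, distance, primitive_count).
    Returns (render_as_geometry, render_as_image) name lists."""
    geometry, image = [], []
    used = 0
    for name, _, prims in sorted(objects, key=lambda o: o[1]):
        if used + prims <= budget:
            geometry.append(name)
            used += prims
        else:
            image.append(name)
    return geometry, image

geo, img = select_geometry(
    [("far_wing", 50.0, 8000), ("lobby", 5.0, 3000), ("atrium", 12.0, 4000)],
    budget=7500,
)
```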


International Conference on Computer Graphics and Interactive Techniques | 2008

Interactive example-based urban layout synthesis

Daniel G. Aliaga; Carlos A. Vanegas; Bedrich Benes

We present an interactive system for synthesizing urban layouts by example. Our method simultaneously performs both a structure-based synthesis and an image-based synthesis to generate a complete urban layout with a plausible street network and with aerial-view imagery. Our approach uses the structure and image data of real-world urban areas and a synthesis algorithm to provide several high-level operations to easily and interactively generate complex layouts by example. The user can create new urban layouts by a sequence of operations such as join, expand, and blend without being concerned about low-level structural details. Further, the ability to blend example urban layout fragments provides a powerful way to generate new synthetic content. We demonstrate our system by creating urban layouts using example fragments from several real-world cities, each ranging from hundreds to thousands of city blocks and parcels.


IEEE Visualization | 1997

Architectural walkthroughs using portal textures

Daniel G. Aliaga; Anselmo Lastra

This paper outlines a method to dynamically replace portals with textures in a cell-partitioned model. The rendering complexity is reduced to the geometry of the current cell, thus increasing interactive performance. A portal is a generalization of windows and doors; it connects two adjacent cells (or rooms). Each portal of the current cell that is some distance away from the viewpoint is rendered as a texture. The portal texture smoothly returns to geometry when the viewpoint gets close to the portal. In this way, all portal sequences not too close to the viewpoint have a depth complexity of one. The size of each texture and the distance at which the transition occurs are configurable for each portal.
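The distance-based switch between a portal texture and portal geometry can be sketched as follows, with invented names and a hard threshold (the actual system transitions smoothly and configures the distance per portal):

```python
# Illustrative portal-texture switch: a portal beyond the switch distance is
# drawn as a texture (depth complexity one); as the viewer approaches, it
# reverts to real geometry so the adjacent cell renders correctly.

def portal_mode(viewpoint_dist, switch_dist):
    """Return 'texture' when the portal is beyond the switch distance,
    'geometry' otherwise."""
    return "texture" if viewpoint_dist > switch_dist else "geometry"

far_mode = portal_mode(viewpoint_dist=12.0, switch_dist=8.0)
near_mode = portal_mode(viewpoint_dist=3.0, switch_dist=8.0)
```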

Collaboration

Top co-authors of Daniel G. Aliaga:
Anselmo Lastra

University of North Carolina at Chapel Hill

Manuel M. Oliveira

Universidade Federal do Rio Grande do Sul
