Herman Towles
University of North Carolina at Chapel Hill
Publications
Featured research published by Herman Towles.
International Journal of Computer Vision | 2008
Marc Pollefeys; David Nistér; Jan Michael Frahm; Amir Akbarzadeh; Philippos Mordohai; Brian Clipp; Chris Engels; David Gallup; Seon Joo Kim; Paul Merrell; C. Salmi; Sudipta N. Sinha; B. Talton; Liang Wang; Qingxiong Yang; Henrik Stewenius; Ruigang Yang; Greg Welch; Herman Towles
The paper presents a system for automatic, geo-registered, real-time 3D reconstruction from video of urban scenes. The system collects video streams, as well as GPS and inertial measurements, in order to place the reconstructed models in geo-registered coordinates. It is designed using current state-of-the-art real-time modules for all processing steps, and it employs commodity graphics hardware and standard CPUs to achieve real-time performance. We present the main considerations in designing the system and the steps of the processing pipeline. Our system extends existing algorithms to meet the robustness and variability necessary to operate out of the lab. To account for the large dynamic range of outdoor videos, the processing pipeline estimates global camera gain changes in the feature tracking stage and efficiently compensates for them in stereo estimation without impacting the real-time performance. The required accuracy for many applications is achieved with a two-step stereo reconstruction process that exploits the redundancy across frames. We show results on real video sequences comprising hundreds of thousands of frames.
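The gain-compensation step described in this abstract can be sketched roughly as follows. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: it assumes a single multiplicative gain relates consecutive frames and estimates it by least squares from intensities sampled at tracked feature locations; all function names are hypothetical.

```python
import numpy as np

def estimate_gain(prev_vals, curr_vals):
    """Least-squares estimate of a global gain g with curr ~= g * prev,
    using intensities sampled at tracked feature locations."""
    prev_vals = np.asarray(prev_vals, dtype=np.float64)
    curr_vals = np.asarray(curr_vals, dtype=np.float64)
    return float(np.dot(prev_vals, curr_vals) / np.dot(prev_vals, prev_vals))

def compensate(image, gain):
    """Rescale image intensities by the estimated gain (clipped to 8-bit
    range) so stereo matching costs are comparable across frames."""
    return np.clip(np.asarray(image, dtype=np.float64) * gain, 0.0, 255.0)
```

In this sketch the gain is folded into the images before cost computation; the paper instead compensates inside stereo estimation to preserve real-time performance.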
IEEE Visualization | 1999
Ramesh Raskar; Michael S. Brown; Ruigang Yang; Wei-Chao Chen; Greg Welch; Herman Towles; B. Scales; Henry Fuchs
Conventional projector-based display systems are typically designed around precise and regular configurations of projectors and display surfaces. While this results in rendering simplicity and speed, it also means painstaking construction and ongoing maintenance. In previously published work, we introduced a vision of projector-based displays constructed from a collection of casually-arranged projectors and display surfaces. In this paper, we present flexible yet practical methods for realizing this vision, enabling low-cost mega-pixel display systems with large physical dimensions, higher resolution, or both. The techniques afford new opportunities to build personal 3D visualization systems in offices, conference rooms, theaters, or even your living room. As a demonstration of the simplicity and effectiveness of the methods that we continue to perfect, we show in the included video that a 10-year-old child can construct and calibrate a two-camera, two-projector, head-tracked display system, all in about 15 minutes.
International Symposium on 3D Data Processing, Visualization and Transmission | 2006
Amir Akbarzadeh; Jan Michael Frahm; Philippos Mordohai; Brian Clipp; Chris Engels; David Gallup; Paul Merrell; M. Phelps; Sudipta N. Sinha; B. Talton; Liang Wang; Qingxiong Yang; Henrik Stewenius; Ruigang Yang; Greg Welch; Herman Towles; David Nistér; Marc Pollefeys
The paper introduces a data collection system and a processing pipeline for automatic geo-registered 3D reconstruction of urban scenes from video. The system collects multiple video streams, as well as GPS and INS measurements, in order to place the reconstructed models in geo-registered coordinates. Besides high quality in terms of both geometry and appearance, we aim at real-time performance. Even though our processing pipeline is currently far from real-time, we select techniques and design processing modules that can achieve fast performance on multiple CPUs and GPUs, aiming at real-time performance in the near future. We present the main considerations in designing the system and the steps of the processing pipeline. We show results on real video sequences captured by our system.
IEEE Visualization | 2001
Ruigang Yang; David Gotz; Justin Hensley; Herman Towles; Michael S. Brown
This paper presents PixelFlex - a spatially reconfigurable multi-projector display system. The PixelFlex system is composed of ceiling-mounted projectors, each with computer-controlled pan, tilt, zoom and focus; and a camera for closed-loop calibration. Working collectively, these controllable projectors function as a single logical display capable of being easily modified into a variety of spatial formats of differing pixel density, size and shape. New layouts are automatically calibrated within minutes to generate the accurate warping and blending functions needed to produce seamless imagery across planar display surfaces, thus giving the user the flexibility to quickly create, save and restore multiple screen configurations. Overall, PixelFlex provides a new level of automatic reconfigurability and usage, departing from the static, one-size-fits-all design of traditional large-format displays. As a front-projection system, PixelFlex can be installed in most environments with space constraints and requires little or no post-installation mechanical maintenance because of the closed-loop calibration.
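The camera-based calibration that PixelFlex performs for planar surfaces can be illustrated with a standard homography fit. This is a minimal sketch of the general technique, assuming four or more projector-to-display point correspondences recovered by the calibration camera; it is not the PixelFlex code, and the function names are hypothetical.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: fit the 3x3 homography H mapping
    src points to dst points (each a list of (x, y), at least 4)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, found via SVD.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, pt):
    """Apply H to a 2D point in homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

A per-projector warp of this kind, composed with measured blending functions in the overlap regions, is what lets the casually-aimed projectors act as one seamless logical display.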
IEEE Visualization | 2000
Aditi Majumder; Zhu He; Herman Towles; Greg Welch
Large area tiled displays are gaining popularity for use in collaborative immersive virtual environments and scientific visualization. While recent work has addressed the issues of geometric registration, rendering architectures, and human interfaces, there has been relatively little work on photometric calibration in general, and photometric non-uniformity in particular. For example, as a result of differences in the photometric characteristics of projectors, the color and intensity of a large area display varies from place to place. Further, the imagery typically appears brighter at the regions of overlap between adjacent projectors. We analyze and classify the causes of photometric non-uniformity in a tiled display. We then propose a methodology for determining corrections that achieve uniformity, correcting the photometric variations across a tiled projector display in real time using per-channel color look-up tables (LUTs).
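The real-time correction step, applying a per-channel LUT to each frame, can be sketched as below. This is a minimal illustration of the LUT mechanism only, assuming 8-bit RGB imagery; building the correction tables from measured projector responses (the substance of the paper) is not shown, and the function name is hypothetical.

```python
import numpy as np

def apply_luts(image, luts):
    """Apply per-channel 256-entry look-up tables to an 8-bit RGB image.
    luts is a (3, 256) uint8 array, one table per color channel."""
    image = np.asarray(image, dtype=np.uint8)
    out = np.empty_like(image)
    for c in range(3):
        # Index the channel's LUT by the channel's pixel values.
        out[..., c] = luts[c][image[..., c]]
    return out
```

Because the correction is a single table lookup per pixel per channel, it fits easily in a real-time rendering path, which is the point of the LUT formulation.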
IEEE Visualization | 2000
Wei-Chao Chen; Herman Towles; Lars S. Nyland; Greg Welch; Henry Fuchs
In 1998 we introduced the idea for a project we call the Office of the Future. Our long-term vision is to provide a better every-day working environment, with high-fidelity scene reconstruction for life-sized 3D tele-collaboration. In particular, we want a true sense of presence with our remote collaborator and their real surroundings. The challenges related to this vision are enormous and involve many technical tradeoffs. This is true in particular for scene reconstruction. Researchers have been striving to achieve real-time approaches, and while they have made respectable progress, the limitations of conventional technologies relegate them to relatively low resolution in a restricted volume. We present a significant step toward our ultimate goal, via a slightly different path. In lieu of low-fidelity dynamic scene modeling we present an exceedingly high fidelity reconstruction of a real but static office. By assembling the best of available hardware and software technologies in static scene acquisition, modeling algorithms, rendering, tracking and stereo projective display, we are able to demonstrate a portal to a real office, occupied today by a mannequin, and in the future by a real remote collaborator. We now have both a compelling sense of just how good it could be, and a framework into which we will later incorporate dynamic scene modeling, as we continue to head toward our ultimate goal of 3D collaborative telepresence.
IEEE Computer Graphics and Applications | 2000
Greg Welch; Henry Fuchs; Ramesh Raskar; Herman Towles; Michael S. Brown
Some day, high-resolution projected imagery will surround you in your office. The walls, your desk, and even the floor will serve as your computer desktop.
Broadband Communications, Networks and Systems | 2005
Greg Welch; Diane H. Sonnenwald; Kelly Mayer-Patel; Ruigang Yang; Andrei State; Herman Towles; Bruce A. Cairns; Henry Fuchs
Two-dimensional (2D) video-based telemedical consultation has been explored widely in the past 15-20 years. Two issues that arise in most relevant case studies are the difficulty of obtaining the desired 2D camera views, and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct a real-time, on-line 3D computer model of the real environment and events. We call this 3D medical consultation (3DMC). The idea is to give remote users what amounts to an infinite set of stereoscopic viewpoints, simultaneously addressing the visibility and depth perception problems associated with 2D video. Here we describe our current prototype system, some of the methods we use, and some early experimental results.
IEEE MultiMedia | 2005
Greg Welch; Andrei State; Adrian Ilie; Kok-Lim Low; Anselmo Lastra; Bruce A. Cairns; Herman Towles; Henry Fuchs; Ruigang Yang; Sascha Becker; Daniel Russo; Jesse Funaro; A. van Dam
Immersive electronic books (IEBooks) for surgical training will let surgeons explore previous surgical procedures in 3D. The authors describe the techniques and tools for creating a preliminary IEBook, embodying some of the basic concepts.
IEEE Virtual Reality Conference | 2006
Patrick Quirk; Tyler Johnson; Rick Skarbez; Herman Towles; Florian Gyarfas; Henry Fuchs
Using projectors to create perspectively correct imagery on arbitrary display surfaces requires geometric knowledge of the display surface shape, the projector calibration, and the user’s position in a common coordinate system. Prior solutions have most commonly modeled the display surface as a tessellated mesh derived from the 3D-point cloud acquired during system calibration. In this paper we describe a method for functional reconstruction of the display surface, which takes advantage of the knowledge that most interior display spaces (e.g. walls, floors, ceilings, building columns) are piecewise planar. Using a RANSAC algorithm to recursively fit planes to a 3D-point cloud sampling of the surface, followed by a conversion of the plane definitions into simple planar polygon descriptions, we are able to create a geometric model which is less complex than a dense tessellated mesh and offers a simple method for accurately modeling the corners of rooms. Planar models also eliminate subtle, but irritating, texture distortion often seen in tessellated mesh approximations to planar surfaces.
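The core plane-fitting step of the method above can be sketched with a single-plane RANSAC fit; applying it recursively to the remaining outliers would recover the piecewise-planar model the paper describes. This is a minimal sketch under assumptions (a fixed iteration count and inlier threshold; least-squares refinement via SVD), not the authors' implementation, and the function name is hypothetical.

```python
import numpy as np

def ransac_plane(points, n_iters=200, thresh=0.02, rng=None):
    """Fit one plane (n, d) with n . p + d = 0 to an Nx3 point cloud.
    Returns (unit normal, d, boolean inlier mask)."""
    rng = np.random.default_rng(rng)
    points = np.asarray(points, dtype=np.float64)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Hypothesize a plane from three random points.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        n /= norm
        d = -np.dot(n, p0)
        inliers = np.abs(points @ n + d) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine: least-squares plane through the inliers (SVD normal).
    inlier_pts = points[best_inliers]
    centroid = inlier_pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(inlier_pts - centroid)
    n = Vt[-1]
    return n, -np.dot(n, centroid), best_inliers
```

Replacing a dense tessellated mesh with a handful of such plane equations is what yields the simpler display-surface model and the crisper room corners noted in the abstract.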