
Publication


Featured research published by W. Bruce Culbertson.


Eurographics | 2001

A survey of methods for volumetric scene reconstruction from photographs

Gregory G. Slabaugh; W. Bruce Culbertson; Thomas Malzbender; Ronald W. Schafer

Scene reconstruction, the task of generating a 3D model of a scene given multiple 2D photographs taken of the scene, is an old and difficult problem in computer vision. Since its introduction, scene reconstruction has found application in many fields, including robotics, virtual reality, and entertainment. Volumetric models are a natural choice for scene reconstruction. Three broad classes of volumetric reconstruction techniques have been developed based on geometric intersections, color consistency, and pair-wise matching. Some of these techniques have spawned a number of variations and undergone considerable refinement. This paper is a survey of techniques for volumetric scene reconstruction.


International Conference on Computer Vision | 1999

Generalized Voxel Coloring

W. Bruce Culbertson; Thomas Malzbender; Gregory G. Slabaugh

Image-based reconstruction from randomly scattered views is a challenging problem. We present a new algorithm that extends Seitz and Dyer's Voxel Coloring algorithm. Unlike their algorithm, ours can use images from arbitrary camera locations. The key problem in this class of algorithms is identifying the images from which a voxel is visible. Unlike Kutulakos and Seitz's Space Carving technique, our algorithm solves this problem exactly, and the resulting reconstructions give better results in our application, which is synthesizing new views. One variation of our algorithm minimizes color-consistency comparisons; another uses less memory and can be accelerated with graphics hardware. We present efficiency measurements and, for comparison, images synthesized using our algorithm and Space Carving.
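The carve loop the abstract refers to can be sketched in a few lines. In this hypothetical Python sketch, `visible_colors` is a caller-supplied helper standing in for the paper's exact visibility computation, and the threshold on per-channel standard deviation is one common consistency test rather than the paper's specific measure:

```python
import numpy as np

def photo_consistent(colors, threshold=15.0):
    """A voxel is photo-consistent if the pixels that see it agree in
    color; the maximum per-channel standard deviation is one simple
    measure (the threshold value is illustrative)."""
    colors = np.asarray(colors, dtype=float)
    return colors.std(axis=0).max() < threshold

def generalized_voxel_coloring(voxels, visible_colors, threshold=15.0):
    """Skeleton of the carve loop.  `voxels` is a set of occupied voxel
    ids; `visible_colors(voxel, voxels)` is a caller-supplied function
    returning the colors of the pixels from which the voxel is visible
    given the *current* occupied set.  Computing that visibility exactly,
    rather than approximately, is the paper's key point; the interfaces
    here are hypothetical."""
    carved = True
    while carved:                         # repeat until no voxel is carved
        carved = False
        for v in list(voxels):
            colors = visible_colors(v, voxels)
            if len(colors) > 1 and not photo_consistent(colors, threshold):
                voxels.remove(v)          # carve the inconsistent voxel
                carved = True             # visibility changed; sweep again
    return voxels
```

The sweep repeats because carving a voxel can expose voxels behind it and change which pixels see the remaining voxels, so consistency must be re-checked until a fixed point is reached.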


International Journal of Computer Vision | 2004

Methods for Volumetric Reconstruction of Visual Scenes

Gregory G. Slabaugh; W. Bruce Culbertson; Thomas Malzbender; Mark R. Stevens; Ronald W. Schafer

In this paper, we present methods for 3D volumetric reconstruction of visual scenes photographed by multiple calibrated cameras placed at arbitrary viewpoints. Our goal is to generate a 3D model that can be rendered to synthesize new photo-realistic views of the scene. We improve upon existing voxel coloring/space carving approaches by introducing new ways to compute visibility and photo-consistency, as well as model infinitely large scenes. In particular, we describe a visibility approach that uses all possible color information from the photographs during reconstruction, photo-consistency measures that are more robust and/or require less manual intervention, and a volumetric warping method for application of these reconstruction methods to large-scale scenes.
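The volumetric warping idea, which lets a finite voxel grid represent an unbounded scene, can be illustrated with a simple radial warp; the specific functional form below is an assumption for illustration, not necessarily the warp used in the paper:

```python
import numpy as np

def warp_voxel_center(u, r0=1.0):
    """Map a voxel center u (a 3-vector in the unit ball) to world space.
    Points with |u| <= 0.5 map linearly onto a sphere of radius r0 and
    stay regular; the outer shell 0.5 < |u| < 1 stretches toward infinity
    as |u| -> 1, so a finite voxel grid can cover an unbounded scene.
    This particular radial warp is illustrative, not the paper's."""
    u = np.asarray(u, dtype=float)
    s = float(np.linalg.norm(u))
    if s == 0.0:
        return u
    if s <= 0.5:
        return u * (2.0 * r0)                    # regular interior voxels
    return (u / s) * (r0 / (2.0 * (1.0 - s)))    # warped exterior voxels
```

Reconstruction then operates on the regular grid in u-space; only the projection of a voxel into the cameras uses its warped world-space center.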


ACM Transactions on Multimedia Computing, Communications, and Applications | 2005

Understanding performance in Coliseum, an immersive videoconferencing system

Harlyn Baker; Nina Bhatti; Donald Tanguay; Irwin Sobel; Dan Gelb; Michael E. Goss; W. Bruce Culbertson; Thomas Malzbender

Coliseum is a multiuser immersive remote teleconferencing system designed to provide collaborating workers the experience of face-to-face meetings from their desktops. Five cameras are attached to each PC display and directed at the participant. From these video streams, view synthesis methods produce arbitrary-perspective renderings of the participant and transmit them to others at interactive rates, currently about 15 frames per second. Combining these renderings in a shared synthetic environment gives the appearance of all participants interacting in a common space. In this way, Coliseum enables users to share a virtual world, with acquired-image renderings of their appearance replacing the synthetic representations provided by more conventional avatar-populated virtual worlds. The system supports virtual mobility (participants may move around the shared space) and reciprocal gaze, and has been demonstrated in collaborative sessions of up to ten Coliseum workstations, including sessions spanning two continents.

Coliseum is a complex software system that pushes commodity computing resources to the limit. We set out to measure network, CPU, memory, and disk usage to uncover bottlenecks and to guide enhancement and control of system performance. Latency is a key component of Quality of Experience for video conferencing; we show how each part of the system (cameras, image processing, networking, and display) contributes to total latency. Performance measurement is as complex as the system to which it is applied. We describe several techniques for estimating performance, from direct lightweight instrumentation to realistic end-to-end measures that mimic actual user experience, and show how they can be used to improve performance for Coliseum and other networked applications. This article summarizes the Coliseum technology and reports on issues related to its performance: its measurement, enhancement, and control.
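The direct lightweight instrumentation mentioned above amounts, in spirit, to timestamping the per-frame pipeline stage by stage. A minimal sketch, with stage names and helper functions that are illustrative rather than Coliseum's actual modules:

```python
import time
from collections import defaultdict
from contextlib import contextmanager

class PipelineTimer:
    """Accumulate per-stage wall-clock cost of a per-frame pipeline so the
    contributions of capture, processing, networking, and display to total
    latency can be broken out."""
    def __init__(self):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    @contextmanager
    def stage(self, name):
        start = time.perf_counter()
        try:
            yield
        finally:
            self.totals[name] += time.perf_counter() - start
            self.counts[name] += 1

    def report(self):
        for name, total in self.totals.items():
            print(f"{name}: {1000.0 * total / self.counts[name]:.1f} ms/frame")

# Hypothetical use inside the per-frame loop:
#   with timer.stage("capture"):   frame = grab_cameras()
#   with timer.stage("synthesis"): view = synthesize_view(frame)
#   with timer.stage("network"):   send_to_peers(view)
```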


IEEE Transactions on Image Processing | 2013

Fusion of Median and Bilateral Filtering for Range Image Upsampling

Qingxiong Yang; Narendra Ahuja; Ruigang Yang; Kar-Han Tan; James Davis; W. Bruce Culbertson; John G. Apostolopoulos; Gang Wang

We present a new upsampling method that enhances the spatial resolution of depth images. Given a low-resolution depth image from an active depth sensor and a potentially high-resolution color image from a passive RGB camera, we formulate upsampling as an adaptive cost aggregation problem and solve it with the bilateral filter. The formulation synergistically combines the median and bilateral filters, so it better preserves depth edges and is more robust to noise. Numerical and visual evaluations on a total of 37 Middlebury data sets demonstrate the effectiveness of our method. A real-time high-resolution depth capturing system based on the proposed upsampling method has also been developed using a commercial active depth sensor.
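The fusion of median and bilateral filtering can be illustrated with a toy, deliberately slow sketch: the upsampled depth at each pixel is a weighted *median* (not mean) of nearby sensor samples, with joint-bilateral weights computed on the high-resolution color image. Function and parameter names are illustrative; the paper's real-time system is formulated as cost aggregation rather than an explicit per-pixel loop:

```python
import numpy as np

def weighted_median(values, weights):
    """Value at which the cumulative weight first reaches half the total."""
    order = np.argsort(values)
    cum = np.cumsum(np.asarray(weights, dtype=float)[order])
    return np.asarray(values)[order][np.searchsorted(cum, 0.5 * cum[-1])]

def upsample(depth_lr, color_hr, scale, sigma_s=4.0, sigma_r=10.0, radius=8):
    """Sketch only.  Assumes color_hr is an RGB image exactly `scale`
    times the size of depth_lr; the median-style selection rejects
    outliers and preserves depth edges, while the bilateral weights
    align depth edges with color edges."""
    H, W = color_hr.shape[:2]
    out = np.zeros((H, W), dtype=float)
    for y in range(H):
        for x in range(W):
            vals, wts = [], []
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < H and 0 <= nx < W):
                        continue
                    d = depth_lr[ny // scale, nx // scale]
                    if d <= 0:
                        continue          # hole in the active-sensor data
                    dc = np.linalg.norm(color_hr[y, x].astype(float)
                                        - color_hr[ny, nx].astype(float))
                    wts.append(np.exp(-(dx * dx + dy * dy) / (2 * sigma_s**2)
                                      - dc * dc / (2 * sigma_r**2)))
                    vals.append(d)
            out[y, x] = weighted_median(vals, wts) if vals else 0.0
    return out
```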


Proceedings of the IEEE | 2012

The Road to Immersive Communication

John G. Apostolopoulos; Philip A. Chou; W. Bruce Culbertson; Ton Kalker; Mitchell Trott; Susie Wee

Communication has seen enormous advances over the past 100 years, including radio, television, mobile phones, video conferencing, and Internet-based voice and video calling. Still, remote communication remains less natural and more fatiguing than face-to-face interaction. The vision of immersive communication is to enable natural experiences and interactions with remote people and environments in ways that suspend disbelief in being there. This paper briefly describes the current state of the art of immersive communication, provides a vision of the future and the associated benefits, and considers the technical challenges in achieving that vision. The attributes of immersive communication are described, together with the frontiers of video and audio for achieving them. We emphasize that the success of these systems must be judged by their impact on the people who use them. Recent high-quality video conferencing systems are beginning to deliver a natural experience when all participants are in custom-designed studios. Ongoing research aims to extend the experience to a broader range of environments. Augmented reality has the potential to make remote communication even better than being physically present. Future natural and effective immersive experiences will be created by drawing upon intertwined research areas including multimedia signal processing, computer vision, graphics, networking, sensors, displays and sound reproduction systems, haptics, and perceptual modeling and psychophysics.


Multimedia Signal Processing | 2010

Fusion of active and passive sensors for fast 3D capture

Qingxiong Yang; Kar-Han Tan; W. Bruce Culbertson; John G. Apostolopoulos

We envision a conference room of the future where depth sensing systems capture the 3D position and pose of users and enable them to interact with digital media and content shown on immersive displays. The key technical barrier is that current depth sensing systems are noisy, inaccurate, and unreliable. It is well understood that passive stereo fails in non-textured, featureless portions of a scene. Active sensors, on the other hand, are more accurate in those regions and tend to be noisy in highly textured regions. We propose a way to synergistically combine the two to create a state-of-the-art depth sensing system that runs in near real time. In contrast, the only previously known fusion method is slow and fails to take advantage of the complementary nature of the two types of sensors.
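The complementarity argument suggests a simple per-pixel blend: weight passive stereo by local texture strength and the active sensor by its absence. This is only a sketch of the intuition, with an illustrative texture measure, not the fusion algorithm proposed in the paper:

```python
import numpy as np

def texture_strength(gray):
    """Per-pixel texture measure in [0, 1]: normalized gradient magnitude
    of the grayscale image (an illustrative choice)."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-6)

def fuse_depth(d_active, d_passive, gray):
    """Trust passive stereo where the image is textured and the active
    sensor where it is not, reflecting the complementary failure modes
    the abstract describes.  A sketch of the complementarity argument,
    not the paper's actual fusion algorithm."""
    w = texture_strength(gray)                  # 1 = highly textured
    return w * d_passive + (1.0 - w) * d_active
```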


ACM Multimedia | 2003

Computation and performance issues in Coliseum: an immersive videoconferencing system

Harlyn Baker; Nina Bhatti; Donald Tanguay; Irwin Sobel; Dan Gelb; Michael E. Goss; John MacCormick; Kei Yuasa; W. Bruce Culbertson; Thomas Malzbender

Coliseum is a multiuser immersive remote teleconferencing system designed to provide collaborating workers the experience of face-to-face meetings from their desktops. Five cameras are attached to each PC display and directed at the participant. From these video streams, view synthesis methods produce arbitrary-perspective renderings of the participant and transmit them to others at interactive rates, currently about 15 frames per second. Combining these renderings in a shared synthetic environment gives the appearance of all participants interacting in a common space. In this way, Coliseum enables users to share a virtual world, with acquired-image renderings of their appearance replacing the synthetic representations provided by more conventional avatar-populated virtual worlds. The system supports virtual mobility (participants may move around the shared space) and reciprocal gaze, and has been demonstrated in collaborative sessions of up to ten Coliseum workstations, including sessions spanning two continents. This paper summarizes the technology and reports on issues related to its performance.


Multimedia Signal Processing | 2009

ConnectBoard: A remote collaboration system that supports gaze-aware interaction and sharing

Kar-Han Tan; Ian N. Robinson; Ramin Samadani; Bowon Lee; Dan Gelb; Alex Vorbau; W. Bruce Culbertson; John G. Apostolopoulos

We present ConnectBoard, a new system for remote collaboration where users experience natural interaction with one another, seemingly separated only by a vertical, transparent sheet of glass. It overcomes two key shortcomings of conventional video communication systems: the inability to seamlessly capture natural user interactions, like using hands to point and gesture at parts of shared documents, and the inability of users to look into the camera lens without taking their eyes off the display. We solve these problems by placing the camera behind the screen, where the remote user is virtually located. The camera sees through the display to capture images of the user. As a result, our setup captures natural, frontal views of users as they point and gesture at shared media displayed on the screen between them. Users also never have to take their eyes off their screens to look into the camera lens. Our novel optical solution based on wavelength multiplexing can be easily built with off-the-shelf components and does not require custom electronics for projector-camera synchronization.


Computer Vision and Pattern Recognition | 2006

Practical Methods for Geometric and Photometric Correction of Tiled Projector Displays

Michael Harville; W. Bruce Culbertson; Irwin Sobel; Dan Gelb; Andrew E. Fitzhugh; Donald Tanguay

We describe a novel, practical method to create large-scale, immersive displays by tiling multiple projectors on curved screens. Calibration is performed automatically with imagery from a single uncalibrated camera, without requiring knowledge of the 3D screen shape. Composition of 2D-mesh-based coordinate mappings, from screen-to-camera and from camera-to-projectors, allows image distortions imposed by the screen curvature and camera and projector lenses to be geometrically corrected together in a single non-parametric framework. For screens that are developable surfaces, we show that the screen-to-camera mapping can be determined without some of the complication of prior methods, resulting in a display on which imagery is undistorted, as if physically attached like wallpaper. We also develop a method of photometric calibration that unifies the geometric blending, brightness scaling, and black level offset maps of prior approaches. The functional form of the geometric blending is novel in itself. The resulting method is more tolerant of geometric correction imprecision, so that visual artifacts are significantly reduced at projector edges and overlap regions. Our efficient GPU-based implementation enables a single PC to render multiple high-resolution video streams simultaneously at frame rate to arbitrary screen locations, leaving the CPU largely free to do video decompression and other processing.
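The composition of mesh-based mappings can be sketched with dense lookup arrays standing in for the paper's interpolated 2D meshes; the shapes, nearest-neighbor lookup, and function name below are illustrative assumptions:

```python
import numpy as np

def compose_maps(screen_to_cam, cam_to_proj):
    """Compose two dense 2D coordinate maps into a single screen-to-
    projector map, echoing the abstract's composition of mesh-based
    mappings.  Each map is an (H, W, 2) array of (row, col) targets;
    dense arrays and nearest-neighbor lookup stand in for the paper's
    interpolated 2D meshes."""
    H, W = screen_to_cam.shape[:2]
    out = np.zeros_like(screen_to_cam)
    for y in range(H):
        for x in range(W):
            cy, cx = screen_to_cam[y, x]
            cy = int(np.clip(np.rint(cy), 0, cam_to_proj.shape[0] - 1))
            cx = int(np.clip(np.rint(cx), 0, cam_to_proj.shape[1] - 1))
            out[y, x] = cam_to_proj[cy, cx]
    return out  # inverted at render time to pre-warp each projector frame
```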
