Publication


Featured research published by Daniel E. Crispell.


international symposium on 3d data processing visualization and transmission | 2006

Spherical Catadioptric Arrays: Construction, Multi-View Geometry, and Calibration

Douglas Lanman; Daniel E. Crispell; Megan Wachs; Gabriel Taubin

This paper introduces a novel imaging system composed of an array of spherical mirrors and a single high-resolution digital camera. We describe the mechanical design and construction of a prototype, analyze the geometry of image formation, present a tailored calibration algorithm, and discuss the effect that design decisions had on the calibration routine. This system is presented as a unique platform for the development of efficient multi-view imaging algorithms which exploit the combined properties of camera arrays and non-central projection catadioptric systems. Initial target applications include data acquisition for image-based rendering and 3D scene reconstruction. The main advantages of the proposed system include: a relatively simple calibration procedure, a wide field of view, and a single imaging sensor which eliminates the need for color calibration and guarantees time synchronization.
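
A minimal sketch of the non-central image formation described above, assuming a simple ray-tracing model (not the authors' code): a camera ray is intersected with one spherical mirror and reflected, which is why each mirror contributes rays from its own virtual viewpoint. The sphere position, radius, and ray below are illustrative placeholders.

```python
import numpy as np

def reflect_off_sphere(ray_origin, ray_dir, center, radius):
    """Intersect a camera ray with a spherical mirror and return the reflected ray."""
    d = ray_dir / np.linalg.norm(ray_dir)
    oc = ray_origin - center
    b = 2.0 * np.dot(d, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                          # the ray misses this mirror
    t = (-b - np.sqrt(disc)) / 2.0           # nearest intersection with the sphere
    p = ray_origin + t * d                   # point on the mirror surface
    n = (p - center) / radius                # outward surface normal
    r = d - 2.0 * np.dot(d, n) * n           # mirror reflection of the ray direction
    return p, r

# Example: camera at the origin looking down +z toward a mirror sphere 5 units away.
point, direction = reflect_off_sphere(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                                      center=np.array([0.0, 0.0, 5.0]), radius=1.0)
```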


international conference on 3-d digital imaging and modeling | 2007

Surround Structured Lighting for Full Object Scanning

Douglas Lanman; Daniel E. Crispell; Gabriel Taubin

This paper presents a new system for acquiring complete 3D surface models using a single structured light projector, a pair of planar mirrors, and one or more synchronized cameras. We project structured light patterns that illuminate the object from all sides (not just the side of the projector) and are able to observe the object from several vantage points simultaneously. This system requires that projected planes of light be parallel, and so we construct an orthographic projector using a Fresnel lens and a commercial DLP projector. A single Gray code sequence is used to encode a set of vertically-spaced light planes within the scanning volume, and five views of the illuminated object are obtained from a single image of the planar mirrors located behind it. Using each real and virtual camera, we then recover a dense 3D point cloud spanning the entire object surface using traditional structured light algorithms. As we demonstrate, this configuration overcomes a major hurdle to achieving full 360 degree reconstructions using a single structured light sequence by eliminating the need for merging multiple scans or multiplexing several projectors.
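
A minimal sketch of the two core reconstruction steps mentioned above, under assumed conventions (not the paper's implementation): decoding a per-pixel Gray code into a light-plane index, then intersecting the camera ray with that plane. The plane spacing, plane orientation, and camera ray are hypothetical example values.

```python
import numpy as np

def gray_to_index(bits):
    """Convert a Gray-code bit sequence (MSB first) to an integer plane index."""
    index, prev = 0, 0
    for b in bits:
        prev ^= b                     # binary bit = previous binary bit XOR Gray bit
        index = (index << 1) | prev
    return index

def intersect_ray_with_plane(origin, direction, plane_point, plane_normal):
    """Return the 3D point where the camera ray meets the decoded light plane."""
    d = direction / np.linalg.norm(direction)
    t = np.dot(plane_point - origin, plane_normal) / np.dot(d, plane_normal)
    return origin + t * d

# Example: Gray code 0111 decodes to plane 5 of vertically spaced planes 2 mm apart.
index = gray_to_index([0, 1, 1, 1])
plane_point = np.array([0.0, 0.002 * index, 0.0])   # hypothetical plane position
plane_normal = np.array([0.0, 1.0, 0.0])
p = intersect_ray_with_plane(np.array([0.0, 0.1, -0.5]),
                             np.array([0.0, -0.1, 1.0]), plane_point, plane_normal)
```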


IEEE Transactions on Geoscience and Remote Sensing | 2012

A Variable-Resolution Probabilistic Three-Dimensional Model for Change Detection

Daniel E. Crispell; Joseph L. Mundy; Gabriel Taubin

Given a set of high-resolution images of a scene, it is often desirable to predict the scene's appearance from viewpoints not present in the original data for purposes of change detection. When significant 3-D relief is present, a model of the scene geometry is necessary for accurate prediction to determine surface visibility relationships. In the absence of an a priori high-resolution model (such as one provided by LIDAR), scene geometry can be estimated from the imagery itself. These estimates, however, cannot, in general, be exact due to uncertainties and ambiguities present in image data. For this reason, probabilistic scene models and reconstruction algorithms are ideal due to their inherent ability to predict scene appearance while taking into account such uncertainties and ambiguities. Unfortunately, existing data structures used for probabilistic reconstruction do not scale well to large and complex scenes, primarily due to their dependence on large 3-D voxel arrays. The work presented in this paper generalizes previous probabilistic 3-D models in such a way that multiple orders of magnitude savings in storage are possible, making high-resolution change detection of large-scale scenes from high-resolution aerial and satellite imagery possible. Specifically, the inherent dependence on a discrete array of uniformly sized voxels is removed through the derivation of a probabilistic model which represents uncertain geometry as a density field, allowing implementations to efficiently sample the volume in a nonuniform fashion.
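
A minimal sketch of the density-field idea, with assumed variable names and illustrative numbers: visibility along a viewing ray is accumulated over segments of varying length, so the volume can be sampled coarsely in empty space and finely near surfaces rather than on a uniform voxel grid.

```python
import numpy as np

def visibility_along_ray(densities, lengths):
    """Probability that the ray reaches each segment unoccluded, given per-segment
    occlusion densities (per unit length) and segment lengths."""
    occlusion = densities * lengths                  # expected occlusion per segment
    transmittance = np.exp(-np.cumsum(occlusion))    # visibility after each segment
    return np.concatenate(([1.0], transmittance[:-1]))

# Coarse segments far from surfaces, fine segments near them (nonuniform sampling).
densities = np.array([0.01, 0.02, 3.5, 8.0, 0.5])   # illustrative density values
lengths   = np.array([1.0,  1.0,  0.05, 0.05, 0.5]) # illustrative segment lengths
print(visibility_along_ray(densities, lengths))
```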


international symposium on 3d data processing visualization and transmission | 2006

Beyond Silhouettes: Surface Reconstruction Using Multi-Flash Photography

Daniel E. Crispell; Douglas Lanman; Peter G. Sibley; Yong Zhao; Gabriel Taubin

This paper introduces a novel method for surface reconstruction using the depth discontinuity information captured by a multi-flash camera while the object moves along a known trajectory. Experimental results based on turntable sequences are presented. By observing the visual motion of depth discontinuities, surface points are accurately reconstructed - including many located deep inside concavities. The method extends well-established differential and global shape-from-silhouette surface reconstruction techniques by incorporating the significant additional information encoded in the depth discontinuities. The reconstruction method uses an implicit form of the epipolar parameterization and directly estimates point locations and corresponding surface normals on the surface of the object using a local temporal neighborhood of the depth discontinuities. Outliers, which correspond to the ill-conditioned cases of the reconstruction equations, are easily detected and removed by back-projection. Gaps resulting from curvature-dependent sampling and shallow concavities are filled by fitting an implicit surface to the oriented point cloud's point locations and normal vectors.
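
As a rough stand-in for the implicit-surface gap-filling step (the paper's exact formulation is not reproduced here), the sketch below evaluates a signed distance to an oriented point cloud by projecting a query point onto the tangent plane of its nearest sample; the zero set of such a function is one simple way to obtain a surface from points and normals. The sample points are hypothetical.

```python
import numpy as np

def signed_distance(query, points, normals):
    """Signed distance from `query` to the tangent plane of its nearest oriented point."""
    i = np.argmin(np.linalg.norm(points - query, axis=1))   # nearest sample
    return float(np.dot(query - points[i], normals[i]))     # positive outside, negative inside

# Hypothetical oriented samples on a unit sphere, queried at a point just off the surface.
points = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
normals = points.copy()                                      # sphere normals point outward
print(signed_distance(np.array([1.2, 0.0, 0.0]), points, normals))   # ~0.2
```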


Computer Vision and Image Understanding | 2009

Surround structured lighting: 3-D scanning with orthographic illumination

Douglas Lanman; Daniel E. Crispell; Gabriel Taubin

This paper presents a new system for rapidly acquiring complete 3-D surface models using a single orthographic structured light projector, a pair of planar mirrors, and one or more synchronized cameras. Using the mirrors, we project structured light patterns that illuminate the object from all sides (not just the side of the projector) and are able to observe the object from several vantage points simultaneously. This system requires that the projected planes of light be parallel, so we construct an orthographic projector using a Fresnel lens and a commercial DLP projector. A single Gray code sequence is used to encode a set of vertically-spaced light planes within the scanning volume, and five views of the illuminated object are obtained from a single image of the planar mirrors located behind it. From each real and virtual camera we recover a dense 3-D point cloud spanning the entire object surface using traditional structured light algorithms. A key benefit of this design is to ensure that each point on the object surface can be assigned an unambiguous Gray code sequence, despite the possibility of being illuminated from multiple directions. In addition to presenting a prototype implementation, we also develop a complete set of mechanical alignment and calibration procedures for utilizing orthographic projectors in computer vision applications. As we demonstrate, the proposed system overcomes a major hurdle to achieving full 360° reconstructions using a single structured light sequence by eliminating the need for merging multiple scans or multiplexing several projectors.
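
A minimal sketch of how a planar mirror yields an additional calibrated viewpoint, assuming a pinhole model and a known mirror plane (the plane and camera position below are placeholders): the virtual camera is the real camera reflected about the mirror plane, which is how one image of the mirrors can provide several of the five views.

```python
import numpy as np

def reflection_matrix(n, d):
    """4x4 homogeneous reflection about the plane n.x + d = 0 (n unit length).
    Note: a reflection flips handedness, so the virtual camera is a mirrored copy."""
    n = n / np.linalg.norm(n)
    R = np.eye(4)
    R[:3, :3] -= 2.0 * np.outer(n, n)
    R[:3, 3] = -2.0 * d * n
    return R

# Real camera center at the origin; mirror plane z = 1 (n = [0,0,1], d = -1).
M = reflection_matrix(np.array([0.0, 0.0, 1.0]), -1.0)
virtual_center = (M @ np.array([0.0, 0.0, 0.0, 1.0]))[:3]    # -> [0, 0, 2]
```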


geosensor networks | 2008

Data-Centric Visual Sensor Networks for 3D Sensing

Mert Akdere; Ugur Çetintemel; Daniel E. Crispell; John Jannotti; Jie Mao; Gabriel Taubin

Visual Sensor Networks (VSNs) represent a qualitative leap in functionality over existing sensornets. With high data rates and precise calibration requirements, VSNs present challenges not faced by today's sensornets. The power and bandwidth required to transmit video data from hundreds or thousands of cameras to a central location for processing would be enormous. A network of smart cameras should process video data in real time, extracting features and three-dimensional geometry from the raw images of cooperating cameras. These results should be stored and processed in the network, near their origin. New content-routing techniques can allow cameras to find common features, which are critical for calibration, search, and tracking. We describe a novel query mechanism to mediate access to this distributed datastore, allowing high-level features to be described as compositions in space-time of simpler features.
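
Purely as an illustration of composing simple features into a space-time query, the sketch below uses hypothetical class and field names; it is not drawn from the paper's actual query language.

```python
from dataclasses import dataclass

@dataclass
class FeatureQuery:
    feature: str        # e.g. "person" or "red-car" (hypothetical feature labels)
    region: tuple       # bounding box (x0, y0, x1, y1) in world coordinates
    interval: tuple     # time window (t_start, t_end) in seconds

@dataclass
class Composition:
    parts: list             # sub-queries that must all be satisfied
    within_seconds: float   # maximum time separation between the parts

# "A person near the entrance followed by a car leaving within 30 seconds."
query = Composition(
    parts=[FeatureQuery("person", (0, 0, 5, 5), (0, 60)),
           FeatureQuery("car", (10, 0, 20, 10), (0, 90))],
    within_seconds=30.0,
)
```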


british machine vision conference | 2008

Parallax-Free Registration of Aerial Video.

Daniel E. Crispell; Joseph L. Mundy; Gabriel Taubin

Aerial video registration is traditionally performed using 2-d transforms in the image space. For scenes with large 3-d relief, this approach causes parallax motions which may be detrimental to image processing and vision algorithms further down the pipeline. A novel, automatic, and online video registration system is proposed which renders the scene from a fixed viewpoint, eliminating motion parallax from the registered video. The 3-d scene is represented with a probabilistic voxel model, and camera pose at each frame is estimated using an Extended Kalman Filter and a refinement procedure based on a popular visual servoing technique.
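
A minimal sketch of the pose-filtering idea, not the paper's filter: a Kalman-style predict/update step with a constant-velocity motion model, where the measurement is a 6-DOF pose estimate obtained from image registration. The state layout, noise levels, and identity measurement model are assumptions for illustration; the actual Extended Kalman Filter handles the nonlinear camera model, which this linear sketch omits.

```python
import numpy as np

def kalman_pose_step(x, P, z, dt, q=1e-3, r=1e-2):
    """One predict/update cycle. x = [pose(6), velocity(6)], z = measured pose(6)."""
    F = np.eye(12)
    F[:6, 6:] = dt * np.eye(6)          # constant-velocity prediction
    x = F @ x
    P = F @ P @ F.T + q * np.eye(12)

    H = np.hstack([np.eye(6), np.zeros((6, 6))])   # observe the pose directly
    S = H @ P @ H.T + r * np.eye(6)
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(12) - K @ H) @ P
    return x, P

x, P = np.zeros(12), np.eye(12)
x, P = kalman_pose_step(x, P, z=np.array([0.1, 0, 0, 0, 0, 0]), dt=1 / 30)
```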


Emerging Trends in Visual Computing | 2009

Shape from Depth Discontinuities

Gabriel Taubin; Daniel E. Crispell; Douglas Lanman; Peter G. Sibley; Yong Zhao

We propose a new primal-dual framework for representation, capture, processing, and display of piecewise smooth surfaces, where the dual space is the space of oriented 3D lines, or rays, as opposed to the traditional dual space of planes. An image capture process detects points on a depth discontinuity sweep from a camera moving with respect to an object, or from a static camera and a moving object. A depth discontinuity sweep is a surface in dual space composed of the time-dependent family of depth discontinuity curves spanned as the camera pose describes a curved path in 3D space. Only part of this surface, which includes silhouettes, is visible and measurable from the camera. Locally convex points deep inside concavities can be estimated from the visible non-silhouette depth discontinuity points. Locally concave points lying at the bottom of concavities, which do not correspond to visible depth discontinuities, cannot be estimated, resulting in holes in the reconstructed surface. A first variational approach to fill the holes, based on fitting an implicit function to a reconstructed oriented point cloud, produces watertight models. We describe a first complete end-to-end system for acquiring models of shape and appearance. We use a single multi-flash camera and a turntable for the data acquisition and represent the scanned objects as point clouds, with each point being described by a 3-D location, a surface normal, and a Phong appearance model.
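
To make the dual space of oriented 3D lines concrete, the sketch below represents a ray in Plücker coordinates, one standard parameterization; it is offered only as an illustration of working in a space of rays, not as the paper's exact choice of coordinates.

```python
import numpy as np

def pluecker(point, direction):
    """Oriented line through `point` with `direction`, as (d, m) with moment m = p x d."""
    d = direction / np.linalg.norm(direction)
    return d, np.cross(point, d)

def point_on_line(point, line, tol=1e-9):
    """A point lies on the line iff its cross product with d equals the moment m."""
    d, m = line
    return np.linalg.norm(np.cross(point, d) - m) < tol

ray = pluecker(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
print(point_on_line(np.array([1.0, 2.0, 0.0]), ray))   # True: same oriented line
```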


international conference on computer graphics and interactive techniques | 2006

Multi-flash 3D photography: capturing shape and appearance

Douglas Lanman; Peter G. Sibley; Daniel E. Crispell; Yong Zhao; Gabriel Taubin

We describe a new 3D scanning system which exploits the depth discontinuity information captured by the multi-flash camera proposed by [Raskar et al. 2004]. In contrast to existing differential and global shape-from-silhouette algorithms, our method can reconstruct the position and orientation of points located deep inside concavities. Points which do not produce an observable depth discontinuity, however, cannot be reconstructed. We apply Sibley’s method for fitting an implicit surface to fill the resulting sampling gaps [Crispell et al. 2006]. Extending this prior work, we model the appearance of each surface point by fitting a Phong reflectance model to the BRDF samples using the visibility information provided by the implicit surface.
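
A minimal sketch of fitting a Phong model to per-point intensity samples by linear least squares, under assumptions not stated in the abstract (a fixed shininess exponent, known light and view directions, and visible samples only); the values below are hypothetical.

```python
import numpy as np

def fit_phong(normal, lights, views, intensities, shininess=32.0):
    """Solve I ~= kd*(N.L) + ks*(R.V)^n for the diffuse and specular coefficients."""
    n = normal / np.linalg.norm(normal)
    diffuse = np.clip(lights @ n, 0.0, None)                     # N . L per sample
    refl = 2.0 * np.outer(lights @ n, n) - lights                # mirror directions R
    specular = np.clip(np.sum(refl * views, axis=1), 0.0, None) ** shininess
    A = np.stack([diffuse, specular], axis=1)
    (kd, ks), *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return kd, ks

# Hypothetical samples for a surface point with normal +z.
L = np.array([[0.0, 0.0, 1.0], [0.6, 0.0, 0.8]])
V = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
print(fit_phong(np.array([0.0, 0.0, 1.0]), L, V, np.array([1.0, 0.82])))
```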


international conference on computer graphics and interactive techniques | 2005

Calibrating a catadioptric light field array

Megan Wachs; Daniel E. Crispell; Gabriel Taubin

We present a new method for acquiring data for image-based rendering and 3D reconstruction using an array of spherical mirrors and a single high-resolution perspective camera. The main advantage of this setup is a wider field of view, but designing calibration and reconstruction algorithms is challenging because catadioptric systems with spherical mirrors have non-central viewpoints and do not behave as perspective cameras. A single image from our perspective camera produces sample rays from a very large number of virtual viewpoints. In this sketch we describe the construction of this system and a new procedure for calibrating it. Previous methods for acquiring a number of images from different viewpoints have included arrays of cameras [Levoy and Hanrahan 1995] and moving cameras [Gortler et al. 1995]. Another previous method was an array of lenses mounted on a flatbed scanner [Yang 2000]. The literature on catadioptric systems is extensive, but to our knowledge systems with large numbers of identical mirrors arranged in regular configurations have not been presented. The main advantages of our system are the wide field of view and the single-camera capture, which eliminates the time synchronization issues associated with multi-camera systems. On the other hand, frame-rate video processing is not possible with these high-resolution consumer-grade digital cameras.

Mechanical System Design: The system consists of a thick aluminum plate with stainless steel cylindrical pins pressed into holes. These pins hold 31 spherical mirrors. They were cut to length and inserted in the plate with high precision in our machine shop, but the inexpensive plastic mirrors were glued to the plate using a synthetic silicone rubber adhesive. As a result, the mirror parameters (location of sphere centers and radii) are not known with high precision. The plate is positioned in space to roughly fill the field of view of our Olympus C-8080 8-megapixel digital camera. A structure built out of standard aluminum extrusions keeps the whole assembly in place. A single image captures all 31 mirrors. The camera's SDK provided by the manufacturer was used to automate the capture and calibration processes.

Calibration: We divide the calibration process into three steps: 1) intrinsic camera calibration, 2) extrinsic calibration of the plate with respect to the camera, and 3) calibration of the individual mirrors with respect to the plate. For the intrinsic camera calibration step we use well-established techniques and free high-quality software. The second step is carried out from a single input image as follows. We use an ellipse detection algorithm to locate the four corner pins close to the four corners of the image. Since we know the precise locations of these four pins on the plate and their relative distances, we compute a homography from the four point correspondences, and from this homography a first estimate of the equation of the plate's plane in space. We can then predict the locations of the rest of the pins in the image and use these predictions to search for the remaining pins.
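
A minimal sketch of the second calibration step described above, with made-up plate and image coordinates: a plate-to-image homography is estimated from the four corner pins and used to predict where the remaining pins should appear so they can be searched for. OpenCV is used here for convenience; the original work does not specify a particular library.

```python
import numpy as np
import cv2

# Known plate coordinates (mm) of the four corner pins and their detected image
# locations (pixels) -- illustrative values only.
plate_corners = np.array([[0, 0], [300, 0], [300, 300], [0, 300]], dtype=np.float32)
image_corners = np.array([[412, 388], [2910, 402], [2898, 2870], [420, 2855]],
                         dtype=np.float32)

H, _ = cv2.findHomography(plate_corners, image_corners)

# Predict the image positions of other pins from their known plate coordinates.
other_pins = np.array([[[150, 0]], [[150, 300]], [[150, 150]]], dtype=np.float32)
predicted = cv2.perspectiveTransform(other_pins, H)
print(predicted.reshape(-1, 2))
```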
