
Publication


Featured research published by Charles J. Casey.


Optical Engineering | 2008

Improved composite-pattern structured-light profilometry by means of postprocessing

Chun Guan; Laurence G. Hassebrook; Daniel L. Lau; Veeraganesh Yalla; Charles J. Casey

Structured-light illumination (SLI) projects a series of structured or striped patterns from a projector onto an object and then uses a camera, placed at an angle from the projector, to record the target's 3-D shape. Because these structured patterns are traditionally multiplexed in time, SLI systems require the target object to remain still during the scanning process. The technique of composite-pattern design was therefore introduced as a means of combining multiple SLI patterns, using principles of frequency modulation, into a single pattern that can be continuously projected and from which the 3-D surface can be reconstructed from a single image, thereby enabling the recording of 3-D video. But the associated process of modulation and demodulation is limited by the spatial bandwidth of the projector-camera pair, which introduces distortion near surface or albedo discontinuities. Therefore, this paper introduces a postprocessing step to refine the reconstructed depth surface. Simulated experiments show a 78% reduction in depth error.
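The combining step the abstract describes (frequency-modulation multiplexing of several phase-shifted patterns into one composite, then demodulation and phase recovery) can be sketched as below. The pattern count, fringe frequency, and carrier frequencies are illustrative assumptions, not the paper's values:

```python
import numpy as np

H, W, N = 64, 256, 4
x = np.arange(W) / W                      # horizontal coordinate (carrier direction)
y = np.arange(H) / H                      # vertical coordinate (phase direction)
f0 = 8                                    # assumed fringe frequency of the SLI patterns

# N phase-shifted sinusoidal patterns, varying along the phase direction
patterns = [0.5 + 0.5 * np.cos(2 * np.pi * f0 * y + 2 * np.pi * n / N)
            for n in range(N)]

# multiplex each pattern onto its own spatial carrier along x
carriers = [16, 32, 48, 64]               # cycles per image width
composite = sum(p[:, None] * np.cos(2 * np.pi * c * x[None, :])
                for p, c in zip(patterns, carriers))

# demodulate: multiply by each carrier and low-pass filter along x
def lowpass(img, k=32):                   # a 32-tap moving average nulls all carriers here
    kern = np.ones(k) / k
    return np.apply_along_axis(lambda r: np.convolve(r, kern, mode='same'), 1, img)

recovered = [lowpass(2 * composite * np.cos(2 * np.pi * c * x[None, :]))
             for c in carriers]

# standard phase-shift recovery from the demultiplexed patterns
num = sum(r * np.sin(2 * np.pi * n / N) for n, r in enumerate(recovered))
den = sum(r * np.cos(2 * np.pi * n / N) for n, r in enumerate(recovered))
phase = np.arctan2(-num, den)             # wrapped phase; depth follows by triangulation
```

In this idealized flat-albedo sketch the demodulation is exact away from the image borders; the distortion the paper addresses arises when surface or albedo discontinuities spread energy across the carrier bands.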


Archive | 2008

Structured Light Illumination Methods for Continuous Motion Hand and Face-computer Interaction

Charles J. Casey; Laurence G. Hassebrook; Daniel L. Lau

Traditionally, human-computer interaction (HCI) has been facilitated by the use of physical input devices. However, as the use of computers becomes more widespread and applications become increasingly diverse, the need for new methods of control becomes more pressing. Advances in computational power and image capture technology have allowed the development of video-based interaction. Existing systems have proven themselves useful for situations in which physical manipulation of a computer input device is impossible or impractical, and can restore a level of computer accessibility to the disabled (Betke et al., 2002). The next logical step is to further develop the abilities of video-based interaction. In this chapter, we consider the introduction of third-dimensional data into the video-based control paradigm. The inclusion of depth information can allow enhanced feature detection ability and greatly increase the range of options for interactivity. Three-dimensional control information can be collected in various ways, such as stereo-vision and time-of-flight ranging. Our group specializes in Structured Light Illumination of objects in motion and believes that its advantages are simplicity, reduced cost, and accuracy, so we consider only the method of data acquisition via structured light illumination. Implementation of such a system requires only a single camera and illumination source in conjunction with a single processing computer, and can easily be constructed from readily available commodity parts. In the following sections, we explain the concept of 3D HCI using structured light, show examples of facial expression capture, and demonstrate an example of a “3D virtual computer mouse” using only a human hand.
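As a toy illustration of the "3D virtual mouse" idea, the sketch below maps the nearest point of a depth map to screen coordinates and signals a "click" when it crosses a depth threshold. The function name, threshold, and depth conventions are hypothetical, not taken from the chapter:

```python
import numpy as np

def virtual_mouse(depth, screen=(1080, 1920), click_mm=80.0):
    """Map the nearest point of a depth map (in mm) to screen coordinates.

    Hypothetical sketch: the fingertip is taken to be the closest valid
    pixel, and a click is signaled when it comes nearer than click_mm.
    """
    d = np.where(depth > 0, depth, np.inf)        # 0 = no return / background
    r, c = np.unravel_index(np.argmin(d), d.shape)
    h, w = depth.shape
    x = int(c / (w - 1) * (screen[1] - 1))        # scale sensor pixel to screen pixel
    y = int(r / (h - 1) * (screen[0] - 1))
    return (x, y), bool(d[r, c] < click_mm)

# toy depth map: a hand point at ~60 mm near the lower right of the sensor
depth = np.zeros((120, 160))
depth[90, 130] = 60.0
```

A real system would of course segment the hand and smooth the trajectory; this only shows how depth adds a click axis that 2-D video control lacks.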


Optical Engineering | 2014

Depth matched transfer function of the modified composite pattern structured light illumination method

Charles J. Casey; Laurence G. Hassebrook; Minghao Wang

The use of structured light illumination techniques for three-dimensional (3-D) data acquisition is, in many cases, limited to stationary objects due to the multiple pattern projections needed for depth analysis. High-speed N-pattern projection requires synchronization between the camera and the projector and incurs the added expense of these high-speed devices. The composite pattern (CP) method allows multiple structured light patterns to be combined via spatial frequency modulation, thereby enabling measurement and rendering of a 3-D surface model of an object using only a single pattern. Single-pattern capture requires no synchronization and is limited only by the camera speed, making it N times faster than N-pattern techniques. When used on partially translucent materials such as human skin, however, the CP weighting is corrupted, degrading the 3-D reconstruction. The method described herein, termed modified CP, extends the CP design with a stripe encoding pattern that makes it insensitive to the internal scattering of human skin. This stripe pattern, used in conjunction with a new spatial processing method, reduces sensitivity to contrast and to the spatial frequency response of human skin, and thus yields higher resolution performance. The resolution performance is experimentally measured using a measure our group has developed, referred to as the depth matched transfer function. Measurements and practical applications are demonstrated.
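The stripe-isolation idea can be illustrated with a generic sub-pixel peak detector along one camera column; this is a minimal sketch under a Gaussian stripe-profile assumption, not the paper's actual spatial processing method:

```python
import numpy as np

def stripe_centers(col, thresh=0.5):
    """Sub-pixel stripe-center localization along one camera column.

    Finds local maxima above thresh and refines each with a 3-point
    parabolic fit around the peak sample.
    """
    centers = []
    for i in range(1, len(col) - 1):
        if col[i] > thresh and col[i] >= col[i - 1] and col[i] > col[i + 1]:
            denom = col[i - 1] - 2 * col[i] + col[i + 1]
            delta = 0.5 * (col[i - 1] - col[i + 1]) / denom if denom else 0.0
            centers.append(i + delta)        # vertex of the fitted parabola
    return centers

# toy column: two Gaussian stripe profiles centered at 20.3 and 52.7 pixels
y = np.arange(80)
col = (np.exp(-0.5 * ((y - 20.3) / 1.5) ** 2)
       + np.exp(-0.5 * ((y - 52.7) / 1.5) ** 2))
```

The parabolic refinement recovers the centers to a few hundredths of a pixel on this noiseless profile, which is the kind of localization that makes a thin-stripe overlay useful for high-resolution reconstruction.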


Proceedings of SPIE | 2011

Automated modified composite pattern single image depth acquisition

Charles J. Casey; Laurence G. Hassebrook

The use of structured light illumination techniques for three-dimensional data acquisition is, in many cases, limited to stationary subjects due to the multiple pattern projections needed for depth analysis. Traditional Composite Pattern (CP) multiplexing utilizes sinusoidal modulation of individual projection patterns to allow numerous patterns to be combined into a single image. However, due to demodulation artifacts, it is often difficult to accurately recover the subject surface contour information. On the other hand, if one were to project an image consisting of many thin, identical stripes onto the surface, one could, by isolating each stripe center, recreate a very accurate representation of surface contour. But in this case, recovery of depth information via triangulation would be quite difficult. The method described herein, Modified Composite Pattern (MCP), is a combination of these two concepts. Combining a traditional Composite Pattern multiplexed projection image with a pattern of thin stripes allows for accurate surface representation combined with unambiguous identification of projection pattern elements. In this way, it is possible to recover surface depth characteristics using only a single structured light projection. The technique described utilizes a binary structured light projection sequence (consisting of four unique images) modulated according to Composite Pattern methodology. A stripe pattern overlay is then applied to the pattern. Upon projection and imaging of the subject surface, the stripe pattern is isolated, and the composite pattern information demodulated and recovered, allowing for 3D surface representation. Additionally, we introduce techniques that allow fully automated processing of the Modified Composite Pattern image.
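A minimal sketch of the stripe-identification idea: with four binary patterns, each stripe carries a 4-bit code, so a stripe observed by the camera can be matched unambiguously to its projector position and triangulated from a single image. The plain-binary, MSB-first encoding shown is an assumption for illustration; the paper's actual pattern design may differ:

```python
def stripe_id(bits):
    """Decode the four binary-pattern samples taken at one stripe center
    (MSB first) into a stripe identity."""
    code = 0
    for b in bits:
        code = (code << 1) | int(b)
    return code

# four patterns give 2**4 = 16 unique stripe identities; knowing the
# projector stripe index is what makes single-shot triangulation unambiguous
codes = [stripe_id(((k >> 3) & 1, (k >> 2) & 1, (k >> 1) & 1, k & 1))
         for k in range(16)]
```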


Proceedings of SPIE, the International Society for Optical Engineering | 2007

Super resolution structured light illumination

Laurence G. Hassebrook; Akshay G. Pethe; Charles J. Casey; Veera Ganesh Yalla; Daniel L. Lau

We present an eight-million-point structured light illumination scanner design. It has a single-patch projection resolution of 12,288 lines along the phase direction. The configuration consists of a custom Boulder Nonlinear Systems Spatial Light Modulator for the projection system and dual four-megapixel Basler CMOS video cameras, each with a resolution of 2352 by 1726 pixels. The camera fields of view are tiled with a minimal overlap region and a potential capture rate of 24 frames per second. This is a status report on a project still under development. We report on the concept of applying a 1D-square footprint projection chip and give preliminary results of single-camera scans. The structured light illumination technique we use is the multi-pattern, multi-frequency phase measuring profilometry technique already published by our group.
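The multi-frequency part of the technique can be illustrated with the standard two-frequency temporal unwrapping step: a unit-frequency pattern, whose single fringe spans the whole field, resolves the fringe order of a high-frequency wrapped phase. The fringe count and field size below are illustrative, not the scanner's values:

```python
import numpy as np

f_hi = 16                                   # fringes in the high-frequency pattern
x = np.linspace(-np.pi, np.pi, 512, endpoint=False)
phi_unit = x                                # unit-frequency phase: one fringe, already absolute
phi_hi = np.angle(np.exp(1j * f_hi * x))    # wrapped high-frequency phase, in (-pi, pi]

# fringe order from the coarse phase, then the absolute fine phase
k = np.round((f_hi * phi_unit - phi_hi) / (2 * np.pi))
phi_abs = phi_hi + 2 * np.pi * k            # equals f_hi * x, at the fine pattern's precision
```

The high-frequency pattern supplies the precision and the unit-frequency pattern the uniqueness; in practice phase noise must stay below half a fine fringe (scaled, below pi/f_hi on the coarse phase) for the rounding step to pick the correct order.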


Proceedings of SPIE | 2011

Distortion-insensitive correlation constellation detection

Charles J. Casey; Laurence G. Hassebrook; Eli Crane; Aaron Davidson

Some applications require detection of multiple features that remain consistent in shape locally but may change position with respect to one another globally. We refer to these feature sets as multi-feature constellations. We introduce a multi-level correlation filter design that uses composite feature detection filters: on one level they detect local features, and on the next level they detect constellations of these local feature responses. We demonstrate the constellation filter method with sign language recognition and fingerprint matching.


Applied Optics | 2011

Multifeature distortion-insensitive constellation detection

Charles J. Casey; Laurence G. Hassebrook; Eli Crane; Aaron Davidson

Many applications require detection of multiple features that locally remain consistent in shape and intensity characteristics, but may globally change position with respect to one another over time or under different circumstances. We refer to these feature sets, defined by their characteristic relative positioning, as multifeature constellations. We introduce a method of processing in which multiple levels of correlation, using specially designed composite feature detection filters, first detect local features and then detect constellations of these local features. We include experimental procedures and results indicating how multifeature constellation detection may be applied in areas such as sign language recognition and fingerprint matching.
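The two-level idea can be sketched on a toy scene: first-level correlations detect two local features, and a second-level correlation of their peak maps against a template of expected relative offsets locates the constellation. The feature shapes, positions, and offsets are invented for illustration and are not the paper's filters:

```python
import numpy as np

def xcorr2(img, tmpl):
    # circular cross-correlation via FFT: out[s] = sum_x img[x] * tmpl[x - s]
    F = np.fft.fft2
    return np.real(np.fft.ifft2(F(img) * np.conj(F(tmpl))))

# two 5-pixel local feature templates: a plus and a diagonal cross
plus = np.zeros((3, 3)); plus[1, :] = 1.0; plus[:, 1] = 1.0
diag = np.eye(3) + np.fliplr(np.eye(3)); diag[1, 1] = 1.0

# toy scene: plus at (9, 11), diagonal cross at (29, 39) (top-left corners)
scene = np.zeros((64, 64))
scene[9:12, 11:14] += plus
scene[29:32, 39:42] += diag

def peak_map(img, tmpl):
    # level 1: correlate with a local feature filter and keep only strong peaks
    t = np.zeros_like(img); t[:tmpl.shape[0], :tmpl.shape[1]] = tmpl
    r = xcorr2(img, t)
    return (r > 0.8 * r.max()).astype(float)

pA, pB = peak_map(scene, plus), peak_map(scene, diag)

# level 2: correlate each peak map with a delta at that feature's expected
# offset within the constellation and sum; the constellation anchor peaks
score = np.zeros_like(scene)
for resp, (dr, dc) in [(pA, (0, 0)), (pB, (20, 28))]:
    delta = np.zeros_like(scene); delta[dr, dc] = 1.0
    score += xcorr2(resp, delta)
anchor = np.unravel_index(np.argmax(score), score.shape)
```

Because only the relative offsets enter the second level, the constellation score degrades gracefully when the features drift apart, which is the distortion insensitivity the abstract refers to.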


Proceedings of SPIE | 2009

Correlation based swarm trackers for 3-dimensional manifold mesh formation

Charles J. Casey; Laurence G. Hassebrook; Priyanka Chaudhary

Our group has developed several methods for acquiring 3-dimensional objects in motion, including facial expressions. For this to be practical, we need to identify and track various features contained in facial expressions. To accomplish this, we introduce a set of feature-based trackers and propose strategies for combining them together to form meshes. We present our strategy in the context of swarm theory, where the elements of the swarm are the feature trackers and the communication structure of the swarm is essentially a spatial mesh. We demonstrate the concepts with examples of facial feature tracking.
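A single swarm element can be illustrated as a minimal correlation tracker that re-locates a template patch from the previous frame by exhaustive local search in the next frame. The patch size, search radius, and negative-SSD matching score are simplifying assumptions; the paper's trackers and their swarm/mesh coupling are richer:

```python
import numpy as np

def track(prev, cur, pos, size=7, search=5):
    """Re-locate the size-by-size patch centered at pos in the next frame
    by exhaustive search within +/- search pixels (one swarm element)."""
    h = size // 2
    r0, c0 = pos
    tmpl = prev[r0 - h:r0 + h + 1, c0 - h:c0 + h + 1]
    best, best_pos = -np.inf, pos
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = r0 + dr, c0 + dc
            win = cur[r - h:r + h + 1, c - h:c + h + 1]
            score = -np.sum((win - tmpl) ** 2)   # negative SSD as similarity
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# toy frames: a bright blob moves from center (20, 20) to center (22, 23)
rng = np.random.default_rng(0)
frame0 = rng.normal(0, 0.05, (48, 48)); frame0[18:23, 18:23] += 1.0
frame1 = rng.normal(0, 0.05, (48, 48)); frame1[20:25, 21:26] += 1.0
```

A mesh of such trackers, each constrained by its neighbors' positions, gives the swarm-with-spatial-communication structure the abstract describes.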


Archive | 2011

Rotate and Hold and Scan (RAHAS) Structured Light Illumination Pattern Encoding and Decoding

Laurence G. Hassebrook; Charles J. Casey; Eli Crane; Walter F. Lundby


Archive | 2008

Lock and hold structured light illumination

Laurence G. Hassebrook; Daniel L. Lau; Charles J. Casey

Collaboration


Dive into Charles J. Casey's collaborations.

Top Co-Authors

Eli Crane

University of Kentucky

Chun Guan

University of Kentucky
