
Publications


Featured research published by Lawrence B. Wolff.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1990

Polarization-based material classification from specular reflection

Lawrence B. Wolff

A computationally simple yet powerful method for distinguishing metal and dielectric material surfaces from the polarization characteristics of specularly reflected light is introduced. The method is completely passive, requiring only the sensing of transmitted radiance of reflected light through a polarizing filter positioned in multiple orientations in front of a camera sensor. Precise positioning of lighting is not required. An advantage of using a polarization-based method for material classification is its immunity to the color variations that commonly exist on uniform material samples. A simple polarization-reflectance model, called the Fresnel reflectance model, is developed. The fundamental assumptions are that the diffuse component of reflection is completely unpolarized and that the polarization state of the specular component of reflection is dictated by the Fresnel reflection coefficients. The material classification method follows directly from the Fresnel reflectance model, by estimating the polarization Fresnel ratio. No assumptions are required about the functional form of the diffuse and specular components of reflection. The method is demonstrated on some common objects consisting of metal and dielectric parts.
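The measurement described above can be sketched from three polarizer orientations: transmitted radiance varies sinusoidally with polarizer angle, and the extrema estimate the polarization Fresnel ratio. A minimal sketch, assuming ideal sinusoidal transmission; the function names and the classification threshold are illustrative, not taken from the paper:

```python
import math

def polarizer_sinusoid(i0, i45, i90):
    """Recover I_max, I_min and phase from intensities sensed through a
    linear polarizer at 0, 45 and 90 degrees. Transmitted radiance at
    polarizer angle t follows I(t) = c + a*cos(2*(t - phase))."""
    c = 0.5 * (i0 + i90)            # mean intensity
    x = i0 - i90                    # 2a*cos(2*phase)
    y = 2.0 * i45 - (i0 + i90)      # 2a*sin(2*phase)
    a = 0.5 * math.hypot(x, y)      # modulation amplitude
    phase = 0.5 * math.atan2(y, x)
    return c + a, c - a, phase      # I_max, I_min, phase

def classify_material(i0, i45, i90, threshold=2.0):
    """Threshold the estimated ratio I_max/I_min at a specularity.
    Specular reflection from dielectrics is strongly partially
    polarized (large ratio); from metals only weakly (ratio near 1).
    The threshold is an illustrative placeholder."""
    imax, imin, _ = polarizer_sinusoid(i0, i45, i90)
    ratio = imax / max(imin, 1e-9)
    return "dielectric" if ratio > threshold else "metal"
```

Because the diffuse component is assumed unpolarized, it contributes only to the constant term c, so the ratio isolates the polarization behavior of the specular component.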


Computer Vision and Pattern Recognition | 2001

Illumination invariant face recognition using thermal infrared imagery

Diego A. Socolinsky; Lawrence B. Wolff; Joshua D. Neuheisel; Christopher K. Eveland

A key problem for face recognition has been accurate identification under variable illumination conditions. Conventional video cameras sense reflected light so that image grayvalues are a product of both intrinsic skin reflectivity and external incident illumination, thus obfuscating the intrinsic reflectivity of skin. Thermal emission from skin, on the other hand, is an intrinsic measurement that can be isolated from external illumination. We examine the invariance of Long-Wave InfraRed (LWIR) imagery with respect to different illumination conditions from the viewpoint of performance comparisons of two well-known face recognition algorithms applied to LWIR and visible imagery. We develop rigorous data collection protocols that formalize face recognition analysis for computer vision in the thermal IR.


IEEE Transactions on Image Processing | 2002

Multispectral image visualization through first-order fusion

Diego A. Socolinsky; Lawrence B. Wolff

We present a new formalism for the treatment and understanding of multispectral images and multisensor imagery based on first-order contrast information. Although little attention has been paid to the utility of multispectral contrast, we develop a theory for multispectral contrast that enables us to produce an optimal grayscale visualization of the first-order contrast of an image with an arbitrary number of bands. We demonstrate how our technique can reveal significantly more interpretive information to an image analyst, who can use it in a number of image understanding algorithms. Existing grayscale visualization strategies are reviewed. A variety of experimental results are presented to support the performance of the new method.
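The first-order contrast the abstract refers to can be expressed through the per-pixel structure tensor (Di Zenzo's first fundamental form), whose largest eigenvalue gives the contrast for an arbitrary number of bands. A minimal per-pixel sketch, assuming forward differences; the fusion step itself (finding a grayscale image whose gradient field best matches this contrast) is omitted, and the function names are illustrative:

```python
import math

def band_gradient(band, x, y):
    """Forward-difference gradient of one band (a 2D list of floats)."""
    gx = band[y][x + 1] - band[y][x]
    gy = band[y + 1][x] - band[y][x]
    return gx, gy

def multispectral_contrast(bands, x, y):
    """Square root of the largest eigenvalue of the structure tensor
    S = sum_k grad_k grad_k^T accumulated over all bands: the
    first-order multispectral contrast at pixel (x, y)."""
    a = b = c = 0.0                     # S = [[a, b], [b, c]]
    for band in bands:
        gx, gy = band_gradient(band, x, y)
        a += gx * gx
        b += gx * gy
        c += gy * gy
    # largest eigenvalue of a symmetric 2x2 matrix
    lam = 0.5 * (a + c + math.hypot(a - c, 2.0 * b))
    return math.sqrt(lam)
```

For a single band this reduces to the ordinary gradient magnitude, which is why the formalism subsumes grayscale contrast as a special case.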


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1994

Diffuse-reflectance model for smooth dielectric surfaces

Lawrence B. Wolff

A reflectance model that accurately predicts diffuse reflection from smooth inhomogeneous dielectric surfaces as a function of both viewing angle and angle of incidence is proposed. Utilizing results of radiative-transfer theory for subsurface multiple scattering, this new model precisely accounts for how incident light and the distribution of subsurface scattered light are influenced by Fresnel attenuation and Snell refraction at a smooth air–dielectric surface boundary. Whereas similar assumptions about subsurface scattering and Fresnel attenuation have been made in previous research on diffuse-reflectance modeling, the proposed model combines these assumptions in a different way and yields a more accurate expression for diffuse reflection that is shown to account for a number of empirical observations not predicted by existing models. What is particularly new about this diffuse-reflectance model is the resulting significant dependence on the viewing angle with respect to the surface normal. This dependence on the viewing angle explains distinctive properties of the behavior of diffuse reflection from smooth dielectric objects, properties not accounted for by existing diffuse-reflection models. Among these properties are prominent diffuse-reflection maxima effects occurring on objects when incident point-source illumination is greater than 50° relative to viewing, including the range from 90° to 180°, where the light source is behind the object with respect to viewing. For this range of incident illumination there is significant deviation from Lambertian behavior over a large portion of most smooth dielectric object surfaces, which makes it important for the computer vision community to be aware of such effects during incorporation of reflectance models into implementation of algorithms such as shape-from-shading. A number of experimental results are presented that verify the proposed diffuse-reflectance model.
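The structure summarized above, Lambertian reflection attenuated by Fresnel transmission into the surface at the incidence angle and back out toward the viewer, can be sketched as follows. This is a hedged illustration of that structure, not a verbatim transcription of the paper's equations; the default refractive index n = 1.5 and the helper names are assumptions:

```python
import math

def fresnel_unpolarized(theta, n):
    """Unpolarized Fresnel reflectance at incidence angle theta
    (radians) for relative refractive index n."""
    sin_t = math.sin(theta) / n
    if sin_t >= 1.0:
        return 1.0                          # total internal reflection
    if theta == 0.0:
        return ((n - 1.0) / (n + 1.0)) ** 2  # normal incidence
    theta_t = math.asin(sin_t)
    rs = (math.sin(theta - theta_t) / math.sin(theta + theta_t)) ** 2
    rp = (math.tan(theta - theta_t) / math.tan(theta + theta_t)) ** 2
    return 0.5 * (rs + rp)

def wolff_diffuse(theta_i, theta_r, albedo=1.0, n=1.5):
    """Diffuse radiance factor: cos(theta_i) Lambertian term attenuated
    by Fresnel transmission into the dielectric at theta_i and back out
    of it toward the viewing angle theta_r (via the refracted angle)."""
    t_in = 1.0 - fresnel_unpolarized(theta_i, n)
    t_out = 1.0 - fresnel_unpolarized(math.asin(math.sin(theta_r) / n), 1.0 / n)
    return albedo * math.cos(theta_i) * t_in * t_out
```

The outgoing transmission factor is what introduces the dependence on viewing angle: as theta_r grows, t_out falls, producing the deviation from Lambertian behavior the abstract emphasizes.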


Computer Vision and Pattern Recognition | 1989

Using polarization to separate reflection components

Lawrence B. Wolff

A technique is presented which utilizes the polarization properties of reflected light to separate specular and diffuse components of reflection. This technique works for both dielectric and metal surfaces, regardless of the color of the illuminating light source or the color detail on the object surface. In addition to separating out diffuse and specular components of reflection, the technique can also identify whether certain image regions correspond to a dielectric or metal object surface. Extensive experimentation is presented for a variety of dielectric and metal surfaces, both polished and rough, using a point light source.
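Given the intensity extrema observed while rotating a linear polarizer, the separation can be sketched under a simplifying assumption that is stronger than the paper's (the paper handles partially polarized specularities via the Fresnel ratio): treat the specular component as fully linearly polarized and the diffuse component as unpolarized. The function name is illustrative:

```python
def separate_components(imax, imin):
    """Split sensed radiance into diffuse and specular parts from the
    extrema of intensity seen through a rotating linear polarizer.
    Assumes (simplification) a fully polarized specular component and
    an unpolarized diffuse component, which passes half its radiance
    through the polarizer at every orientation."""
    diffuse = 2.0 * imin        # all of I_min is half the diffuse light
    specular = imax - imin      # the polarized residual
    return diffuse, specular
```

A useful sanity check is that the two parts sum to the total radiance that would be sensed without the polarizer, namely I_max + I_min.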


International Journal of Computer Vision | 1998

Improved Diffuse Reflection Models for Computer Vision

Lawrence B. Wolff; Shree K. Nayar; Michael Oren

There are many computational vision techniques that fundamentally rely upon assumptions about the nature of diffuse reflection from object surfaces consisting of commonly occurring nonmetallic materials. Probably the most prevalent assumption made about diffuse reflection by computer vision researchers is that its reflected radiance distribution is described by the Lambertian model, whether the surface is rough or smooth. While computationally and mathematically a relatively simple model, in physical reality the Lambertian model is deficient in accurately describing the reflected radiance distribution for both rough and smooth nonmetallic surfaces. Recently, diffuse reflectance models have been proposed in computer vision separately for rough and for smooth nonconducting dielectric surfaces, each accurately predicting salient non-Lambertian phenomena that have an important bearing on computer vision methods relying upon assumptions about diffuse reflection. Together these reflectance models are complementary in their respective applicability to rough and smooth surfaces. A unified treatment is presented here detailing important deviations from Lambertian behavior for both rough and smooth surfaces. Some speculation is given as to how these separate diffuse reflectance models may be combined.


International Conference on Robotics and Automation | 1997

Liquid crystal polarization camera

Lawrence B. Wolff; Todd A. Mancini; Philippe O. Pouliquen; Andreas G. Andreou

We present a fully automated system which unites CCD camera technology with liquid crystal technology to create a polarization camera capable of sensing the partial linear polarization of reflected light from objects at pixel resolution. As polarization sensing not only measures intensity but also additional physical parameters of light, it can therefore provide a richer set of descriptive physical constraints for the understanding of images. Previously it has been shown that polarization cues can be used to perform dielectric/metal material identification, specular and diffuse reflection component analysis, as well as complex image segmentations that would be significantly more complicated or even infeasible using intensity and color alone. Such analysis has so far been done with a linear polarizer mechanically rotated in front of a CCD camera. The full automation of resolving polarization components using liquid crystals not only affords an elegant application, but significantly speeds up the sensing of polarization components and reduces the amount of optical distortion present in the wobbling of a mechanically rotating polarizer. In our system two twisted nematic liquid crystals are placed in front of a fixed linear polarizer placed in front of a CCD camera. The application of a series of electrical pulses to the liquid crystals in synchronization with the CCD camera video frame rate produces a controlled sequence of polarization component images that are stored and processed on Datacube boards. We present a scheme for mapping a partial linear polarization state measured at a pixel into hue, saturation and intensity producing a representation for a partial linear polarization image. Our polarization camera currently senses partial linear polarization and outputs such a color representation image at 5 Hz. 
The unique vision understanding capabilities of our polarization camera system are demonstrated with experimental results showing polarization-based dielectric/metal material classification, specular reflection and occluding contour segmentations in a fairly complex scene, and surface orientation constraints.
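The hue-saturation-intensity representation mentioned above maps the three quantities of a partial-linear-polarization state onto the three color channels. A minimal sketch using the standard library's colorsys; the exact mapping in the paper may differ, and the function name is illustrative:

```python
import colorsys
import math

def polarization_to_rgb(intensity, dop, angle_rad):
    """Encode a partial-linear-polarization state as a display color:
    polarization angle -> hue, degree of polarization -> saturation,
    radiance -> value. The angle is taken modulo pi because
    polarization orientation is axial (0 and 180 degrees coincide)."""
    hue = (angle_rad % math.pi) / math.pi       # in [0, 1)
    sat = min(max(dop, 0.0), 1.0)               # clamp to [0, 1]
    val = min(max(intensity, 0.0), 1.0)
    return colorsys.hsv_to_rgb(hue, sat, val)
```

Unpolarized pixels thus render as neutral gray, so colored regions in the output immediately flag polarization-carrying structure such as specularities.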


Image and Vision Computing | 1995

Polarization camera sensors

Lawrence B. Wolff; Andreas G. Andreou

Recently, polarization vision has been shown to simplify some important image understanding tasks that can be more difficult to perform with intensity vision alone. This, together with the more general capabilities of polarization vision for image understanding, motivates the building of camera sensors that automatically sense and process polarization information. Described in this paper are a variety of designs for polarization camera sensors that have been built to automatically sense partial linearly polarized light, and computationally process this sensed polarization information at pixel resolution to produce a visualization of reflected polarization from a scene, and/or a visualization of physical information in a scene directly related to sensed polarization. The three designs for polarization camera sensors presented utilize (i) serial acquisition of polarization components using liquid crystals, (ii) parallel acquisition of polarization components using a stereo pair of cameras and a polarizing beamsplitter, and (iii) a prototype photosensing chip with three scanlines, each scanline coated with a particular orientation of polarizing material. As the sensory input to polarization camera sensors subsumes that of standard intensity cameras, they can significantly expand the application range of computer vision. A number of images taken with polarization cameras are presented, showing potential applications to image understanding, object recognition, circuit board inspection and marine biology.


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1994

Polarization camera for computer vision with a beam splitter

Lawrence B. Wolff

A fully automated system that utilizes two CCD cameras and a polarizing beam splitter to create a polarization camera capable of sensing the polarization of reflected light from objects at pixel resolution is presented. The physical dimensions of the polarization of light beyond that of intensity carry extra information from a scene that can provide a richer set of descriptive physical constraints for the understanding of images. It has been shown that polarization cues can be used to perform dielectric and metal material identification and specular- and diffuse-reflection component analysis, as well as complex image segmentations that would be immensely more complicated or even infeasible with the use of intensity and color alone. A polarizing-plate beam splitter is placed in front of two CCD cameras so that light beams reflected from and transmitted through the beam splitter are each incident upon a separate camera. The polarization states of the reflected and the transmitted beams are linearly independent in terms of two orthogonal-polarization components, and these components are resolved in real time from the simple solution of two simultaneous linear equations. The polarizing-plate beam splitter allows for the simultaneous measurement of two orthogonal-polarization components over fairly wide fields of view suitable for vision and robotics. A polarization contrast image can be produced at 15 Hz. Two sets of orthogonal-polarization component pairs can be resolved by electronically switching a twisted nematic liquid crystal placed in front of the beam splitter, permitting the real-time measurement of partial-linear-polarization images at 7.5 Hz. A scheme for mapping states of partial linear polarization into hue, saturation, and intensity, which is a very suitable representation for a polarization image, is illustrated.
The unique vision-understanding capabilities of this polarization camera system are demonstrated with experimental results showing polarization-based dielectric and metal material classification, shape constraints from reflected polarization, and specular-reflection and occluding-contour segmentations in a fairly complex scene.


Image and Vision Computing | 2003

Tracking human faces in infrared video

Christopher K. Eveland; Diego A. Socolinsky; Lawrence B. Wolff

Detection and tracking of face regions in image sequences has applications to important problems such as face recognition, human–computer interaction, and video surveillance. Visible sensors have inherent limitations in solving this task, such as the need for sufficient and specific lighting conditions, as well as sensitivity to variations in skin color. Thermal infrared (IR) imaging sensors image emitted light, not reflected light, and therefore do not have these limitations, providing a 24-h, 365-day capability while also being more robust to variations in the appearance of individuals. In this paper, we present a system for tracking human heads that has three components. First, a method for modeling thermal emission from human skin that can be used for the purpose of segmenting and detecting faces and other exposed skin regions in IR imagery is presented. Second, the segmentation model is applied to the CONDENSATION algorithm for tracking the head regions over time. This includes a new observation density that is motivated by the segmentation results. Finally, we examine how to use the tracking results to refine the segmentation estimate.
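CONDENSATION is a particle filter: each iteration resamples hypotheses by weight, propagates them through stochastic dynamics, and reweights them with an observation density (here, one built from the skin-emission segmentation). A minimal one-dimensional sketch; the state representation, the `observe` callable, and the Gaussian dynamics are illustrative stand-ins for the paper's head model:

```python
import random

def condensation_step(particles, weights, observe, dynamics_noise=1.0):
    """One CONDENSATION iteration over scalar states.
    observe(state) is a placeholder for the observation density
    derived from the segmentation model."""
    # 1. factored sampling: resample proportionally to current weights
    resampled = random.choices(particles, weights=weights, k=len(particles))
    # 2. predict: apply stochastic dynamics to each hypothesis
    predicted = [s + random.gauss(0.0, dynamics_noise) for s in resampled]
    # 3. measure: reweight with the observation density and normalize
    new_w = [observe(s) for s in predicted]
    total = sum(new_w) or 1.0
    return predicted, [w / total for w in new_w]
```

Iterating this step concentrates the particle set around states the observation density favors, which is how the tracker follows a head region through the IR sequence.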

Collaboration


Dive into Lawrence B. Wolff's collaborations.

Top Co-Authors

Elli Angelopoulou (University of Erlangen-Nuremberg)

Elias A. Zerhouni (National Institutes of Health)

Wayne Mitzner (Johns Hopkins University)