Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Michael S. Langer is active.

Publication


Featured research published by Michael S. Langer.


Perception | 2000

Depth discrimination from shading under diffuse lighting.

Michael S. Langer; Heinrich H. Bülthoff

The human visual system has a remarkable ability to interpret smooth patterns of light on a surface in terms of 3-D surface geometry. Classical studies of shape-from-shading perception have assumed that surface irradiance varies with the angle between the local surface normal and a collimated light source. This model holds, for example, on a sunny day. One common situation in which this model fails to hold, however, is under diffuse lighting such as on a cloudy day. Here we report on the first psychophysical experiments that address shape-from-shading under a uniform diffuse-lighting condition. Our hypothesis was that shape perception can be explained with a perceptual model that “dark means deep”. We tested this hypothesis by comparing performance in a depth-discrimination task to performance in a brightness-discrimination task, using identical stimuli. We found a significant correlation between responses in the two tasks, supporting a dark-means-deep model. However, overall performance in the depth-discrimination task was superior to that predicted by a dark-means-deep model. This implies that humans use a more accurate model than dark-means-deep to perceive shape-from-shading under diffuse lighting.
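
The "dark means deep" hypothesis is simple enough to state in a few lines of code. Below is a minimal sketch, assuming a grayscale image with values normalized to [0, 1]; the linear mapping and the function name are illustrative choices, not taken from the paper:

```python
import numpy as np

def dark_means_deep(image: np.ndarray) -> np.ndarray:
    """Toy depth map under the dark-means-deep prior.

    image: grayscale, values in [0, 1]; under diffuse lighting,
    brighter points are assumed to be less deep.
    Returns relative depth (larger = deeper); scale is arbitrary.
    """
    luminance = np.clip(image, 0.0, 1.0)
    return 1.0 - luminance  # darker pixel -> greater depth

# Example: a dark valley between two bright ridges.
row = np.array([[0.9, 0.5, 0.1, 0.5, 0.9]])
print(dark_means_deep(row))  # [[0.1 0.5 0.9 0.5 0.1]]
```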


Perception | 2001

A Prior for Global Convexity in Local Shape-from-Shading

Michael S. Langer; Heinrich H. Bülthoff

To solve the ill-posed problem of shape-from-shading, the visual system often relies on prior assumptions such as illumination from above or viewpoint from above. Here we demonstrate that a third prior assumption is used—namely that the surface is globally convex. We use complex surface shapes that are realistically rendered with computer graphics, and we find that performance in a local-shape-discrimination task is significantly higher when the shapes are globally convex than when they are globally concave. The results are surprising because the qualitative global shapes of the surfaces are perceptually unambiguous. The results generalise findings such as the hollow-potato illusion (Hill and Bruce 1994 Perception 23 1335–1337) which consider global shape perception only.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1997

Toward accurate recovery of shape from shading under diffuse lighting

A.J. Stewart; Michael S. Langer

A new surface radiance model for diffuse lighting is presented which incorporates shadows, interreflections, and surface orientation. An algorithm is presented that uses this model to compute shape-from-shading under diffuse lighting. The algorithm is tested on both synthetic and real images, and is found to perform more accurately than the only previous algorithm for this problem.
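
Under uniform diffuse lighting, a useful first-order intuition (which the paper's radiance model refines with interreflection and orientation terms) is that a point's irradiance grows with the fraction of sky it can see. Below is a minimal sketch for a 1D height field; the aperture computation is an illustrative simplification, not the authors' model:

```python
import numpy as np

def sky_aperture(h: np.ndarray, dx: float = 1.0) -> np.ndarray:
    """Fraction of the sky 'semicircle' visible at each point of a
    1D height field h -- a crude stand-in for diffuse irradiance."""
    n = len(h)
    frac = np.empty(n)
    for i in range(n):
        # Steepest elevation angle of any occluder to each side.
        right = max((np.arctan2(h[j] - h[i], (j - i) * dx)
                     for j in range(i + 1, n)), default=0.0)
        left = max((np.arctan2(h[j] - h[i], (i - j) * dx)
                    for j in range(i)), default=0.0)
        right, left = max(right, 0.0), max(left, 0.0)
        frac[i] = (np.pi - right - left) / np.pi
    return frac

# A valley point sees less sky, hence renders darker.
h = np.array([3.0, 1.0, 0.0, 1.0, 3.0])
print(sky_aperture(h).round(2))
```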


International Journal of Computer Vision | 1999

When Shadows Become Interreflections

Michael S. Langer

Shadows and interreflections are present in all real scenes and provide a rich set of photometric cues for vision. In this paper, we show how shadows and interreflections are intrinsically related. Shadows tend to occur in those parts of a scene in which interreflections have the largest gain. We provide several basic results concerning this relationship in terms of the interreflection modes of a scene. We show that for a given scene, the interreflection mode having the largest gain is a physically realizable radiance function. We derive bounds on the gain of this mode and discuss how this mode is related to shadows. We analyze how well an n-bounce model of interreflections approximates an infinite-bounce model and how shadows affect this approximation. Finally, we introduce a novel method for inferring surface color in a uni-chromatic scene. The method is based on the relative contrast of the scene in different color channels.
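
The n-bounce versus infinite-bounce comparison can be made concrete with the standard radiosity equation L = E + K L, whose exact solution L = (I - K)^(-1) E equals the Neumann series of successive bounces K^n E whenever the largest gain (eigenvalue) of K is below one. A minimal numeric sketch; the 3-patch transport matrix is invented for illustration:

```python
import numpy as np

# Emission E and interreflection kernel K for a toy 3-patch scene.
# Row i of K holds the fraction of each patch's radiance reaching
# patch i, scaled by albedo; values are invented for illustration.
E = np.array([1.0, 0.0, 0.5])
K = np.array([[0.0, 0.3, 0.1],
              [0.3, 0.0, 0.2],
              [0.1, 0.2, 0.0]])

gain = np.max(np.linalg.eigvalsh(K))       # largest interreflection gain
exact = np.linalg.solve(np.eye(3) - K, E)  # infinite-bounce solution

L, term = E.copy(), E.copy()
for n in range(1, 6):                      # n-bounce approximations
    term = K @ term
    L = L + term
    print(f"{n}-bounce error: {np.abs(L - exact).max():.5f}")
```

The error of the n-bounce approximation shrinks geometrically at a rate set by the largest gain, which is why the paper's bounds on that gain control how well truncated models work.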


Journal of Vision | 2009

Learning illumination- and orientation-invariant representations of objects through temporal association

Guy Wallis; Benjamin T. Backus; Michael S. Langer; Gesche M. Huebner; Heinrich H. Bülthoff

As the orientation or illumination of an object changes, so does its appearance. This paper considers how observers are nonetheless able to recognize objects that have undergone such changes. In particular, the paper tests the hypothesis that observers rely on temporal correlations between different object views to decide whether they are views of the same object or not. In a series of experiments, subjects were shown a sequence of views representing a slowly transforming object. Testing revealed that subjects had formed object representations that were directly influenced by the temporal characteristics of the training views. In particular, introducing spurious correlations between views of different people's heads caused subjects to regard those views as being of a single person. This rapid and robust overriding of basic generalization processes supports the view that our recognition system tracks the correlated appearance of views of objects across time. Such view associations appear to allow the visual system to solve the view-invariance problem without recourse to complex illumination models for extracting 3D form, or the use of the image-plane transformations required to make appearance-based comparisons.
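
One classical mechanism consistent with this hypothesis is a "trace" learning rule in the spirit of Földiák (1991), in which weight updates are gated by a temporally smoothed copy of the unit's response, so that views occurring close together in time become bound to the same unit. The sketch below is such a trace rule, not the model used in the paper; the learning rate and trace constant are arbitrary:

```python
import numpy as np

def trace_rule(views, eta=0.05, delta=0.2, seed=0):
    """Hebbian learning gated by a temporal activity trace.

    views: (T, D) array, one view (feature vector) per time step.
    The update is driven by a running average of the unit's
    response, so temporally adjacent views are bound together.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=views.shape[1])
    y_bar = 0.0
    for x in views:
        y = w @ x                                # current response
        y_bar = (1 - delta) * y_bar + delta * y  # temporal trace
        w += eta * y_bar * (x - w)               # trace-modulated Hebb
    return w
```

With views ordered so that different views of one object follow each other in time, the weight vector should come to respond to all of them; shuffling the sequence weakens that binding, loosely mirroring the spurious-correlation manipulation in the paper.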


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2005

Spectrum analysis of motion parallax in a 3D cluttered scene and application to egomotion.

Richard Mann; Michael S. Langer

Previous methods for estimating observer motion in a rigid 3D scene assume that image velocities can be measured at isolated points. When the observer is moving through a cluttered 3D scene such as a forest, however, pointwise measurements of image velocity are more challenging to obtain because multiple depths, and hence multiple velocities, are present in most local image regions. We introduce a method for estimating egomotion that avoids pointwise image velocity estimation as a first step. In its place, the direction of motion parallax in local image regions is estimated, using a spectrum-based method, and these directions are then combined to directly estimate 3D observer motion. There are two advantages to this approach. First, the method can be applied to a wide range of 3D cluttered scenes, including those for which pointwise image velocities cannot be measured because only normal velocity information is available. Second, the egomotion estimates can be used as a posterior constraint on estimating pointwise image velocities, since known egomotion parameters constrain the candidate image velocities at each point to a one-dimensional rather than a two-dimensional space.
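
The combination step can be illustrated on its own. For a purely translating observer, the local motion-parallax direction at each image patch points along the line joining that patch to the focus of expansion (FOE), so the heading can be recovered from directions alone by least squares. The sketch below shows only this second stage; the spectrum-based direction estimation is omitted, and the patch positions and directions are synthetic:

```python
import numpy as np

def estimate_foe(points: np.ndarray, dirs: np.ndarray) -> np.ndarray:
    """Least-squares focus of expansion from local parallax directions.

    points: (N, 2) patch centers; dirs: (N, 2) parallax directions.
    Each direction constrains the FOE to the line through its patch:
    n_i . (foe - p_i) = 0, with n_i perpendicular to dirs[i].
    """
    n = np.stack([-dirs[:, 1], dirs[:, 0]], axis=1)  # perpendiculars
    b = np.einsum('ij,ij->i', n, points)
    foe, *_ = np.linalg.lstsq(n, b, rcond=None)
    return foe

# Synthetic test: directions radiate from a true FOE at (0.3, -0.2).
true_foe = np.array([0.3, -0.2])
pts = np.random.default_rng(1).uniform(-1, 1, size=(20, 2))
d = pts - true_foe
d /= np.linalg.norm(d, axis=1, keepdims=True)
print(estimate_foe(pts, d))   # approximately [ 0.3 -0.2]
```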


Eurographics Symposium on Rendering Techniques | 2004

A spectral-particle hybrid method for rendering falling snow

Michael S. Langer; Linqiao Zhang; Allison W. Klein; Aditya Bhatia; Javeen Pereira; Dipinder Rekhi

Falling snow has the visual property that it is simultaneously a set of discrete moving particles and a dynamic texture. Capturing the dynamic texture properties of falling snow with particle systems alone, however, can require so many particles that rendering rates suffer severely. Here we address this limitation by rendering the texture properties directly. We use a standard particle system to generate a relatively sparse set of falling snowflakes, and we then composite in a dynamic texture to fill in between the particles. The texture is generated using a novel image-based spectral synthesis method. The spectrum of the falling-snow texture is defined by a dispersion relation in the image plane, derived from linear perspective. The dispersion relation relates image speed, image size, and particle depth; in the frequency domain, it relates the wavelength and speed of moving 2D image sinusoids. The parameters of this spectral snow can be varied both across the image and over time, which provides the flexibility to match the direction and speed parameters of the spectral snow to those of the falling particles. Camera motion can also be matched. Our method produces visually pleasing results at interactive rendering rates. We demonstrate our approach by adding snow effects to static and dynamic scenes. An extension for creating rain effects is also presented.
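
The dispersion relation follows from linear perspective: a flake at depth Z has image size proportional to 1/Z and image speed proportional to 1/Z, so the speed of an image sinusoid scales with its wavelength. The sketch below animates a sum of random sinusoids under that constraint; the frame size, speed constant, and frequency sampling are illustrative choices, not the paper's parameters:

```python
import numpy as np

def spectral_snow(shape=(64, 64), n_frames=32, v0=2.0, seed=0):
    """Falling-snow texture from moving 2D sinusoids whose downward
    speed scales with wavelength (near flakes: larger and faster)."""
    rng = np.random.default_rng(seed)
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    frames = np.zeros((n_frames, h, w))
    for _ in range(40):                      # 40 random components
        k = rng.uniform(0.05, 0.5, size=2)   # spatial frequency (ky, kx)
        speed = v0 / np.linalg.norm(k)       # wavelength-proportional
        phase0 = rng.uniform(0, 2 * np.pi)
        for t in range(n_frames):
            # Shift the component downward by speed*t pixels per frame.
            arg = 2 * np.pi * (k[0] * (yy - speed * t) + k[1] * xx) + phase0
            frames[t] += np.cos(arg)
    return frames / 40.0
```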


Intelligent Robots and Systems | 1995

Space occupancy using multiple shadowimages

Michael S. Langer; Gregory Dudek; Steven W. Zucker

This paper addresses the problem of estimating 3D space occupancy from video imagery in the context of mobile robotics. A stationary robot observes a cluttered scene from a single viewpoint, and a second robot illuminates the scene from a sequence of directions, producing a sequence of grey-level images. Differences of successive images are used to compute a sequence of shadowimages. The problem is to compute free space and occupied space from these shadowimages. Solutions to this problem are known for the special case of terrain scenes. The authors generalize these solutions to non-terrain scenes by making two key observations. First, there is a subset constraint on the shadowimages of a non-terrain scene, which allows the visible surfaces of a non-terrain scene to be recovered by a terrain-based technique. Second, the remaining regions of the shadowimages provide a conservative estimate of the occupied space hidden by these visible surfaces.
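
The shadowimage itself is easy to state for a terrain: a point is shadowed when some other surface point blocks its ray to the light. Below is a minimal sketch for a 1D height field under a distant source, intended only to illustrate the shadowimage concept rather than the authors' occupancy algorithm:

```python
import numpy as np

def shadowimage(h, light_elev, dx=1.0):
    """Shadow mask for a 1D height field lit from the left by a
    distant source at elevation angle light_elev (radians)."""
    drop = np.tan(light_elev) * dx    # ray descent per grid step
    shadow_height = -np.inf           # height of the running shadow ray
    mask = np.zeros(len(h), dtype=bool)
    for i, hi in enumerate(h):
        shadow_height -= drop         # ray from last occluder descends
        mask[i] = hi < shadow_height  # below the ray -> in shadow
        shadow_height = max(shadow_height, hi)
    return mask

# Two hills; the valleys behind them are shadowed at low sun angles.
terrain = np.array([0, 2, 4, 2, 0, 0, 1, 3, 1, 0], float)
print(shadowimage(terrain, light_elev=np.deg2rad(20)).astype(int))
```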


Computer Vision and Pattern Recognition | 1997

What is a light source?

Michael S. Langer; Steven W. Zucker

Traditional light source modelling is concerned with specific types of light sources, the two most common of which are point sources and daylight. Little attempt has been made, however, to relate different types of sources to each other. For example, how may the lighting from an overcast sky be compared to that from a lamp? Having a theoretical framework for comparing different types of light sources is important for computer vision, in particular for understanding shading and shadow cues. A vision system needs to take account of the light source in order to interpret these cues. In this paper, we present a framework for comparing types of light sources which is based on a dimensional analysis of the set of light rays in free space. Specifically, we introduce a 4-D light source hypercube in which the different types of sources may be embedded and compared. We also present a novel definition of a light source which generalizes the standard definition of a source as an emitter.


International Journal of Computer Vision | 2003

Optical Snow

Michael S. Langer; Richard Mann

Classical methods for measuring image motion by computer have concentrated on the cases of optical flow in which the motion field is continuous, or layered motion in which the motion field is piecewise continuous. Here we introduce a third natural category which we call optical snow. Optical snow arises in many natural situations such as camera motion in a highly cluttered 3-D scene, or a passive observer watching a snowfall. Optical snow yields dense motion parallax with depth discontinuities occurring near all image points. As such, constraints on smoothness or even smoothness in layers do not apply. In the Fourier domain, optical snow yields a one-parameter family of planes which we call a bowtie. We present a method for measuring the parameters of the direction and range of speeds of the motion for the special case of parallel optical snow. We demonstrate the effectiveness of the method for both synthetic and real image sequences.
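
The bowtie admits a compact frequency-domain statement, building on the standard motion-plane property of translating images; the notation below is mine, chosen to be consistent with the abstract:

```latex
% A single image translation with velocity (u, v) concentrates
% spatiotemporal power on one plane:
u\,\omega_x + v\,\omega_y + \omega_t = 0 .
% Parallel optical snow, with velocities \alpha (u, v) for
% \alpha \in [\alpha_{\min}, \alpha_{\max}], sweeps a one-parameter family:
\alpha \,(u\,\omega_x + v\,\omega_y) + \omega_t = 0 ,
% whose members all contain the axis u\,\omega_x + v\,\omega_y = 0,
% \omega_t = 0; rotating about this axis as \alpha varies traces the bowtie.
```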

Collaboration


Dive into Michael S. Langer's collaborations.

Top Co-Authors

Sébastien Roy

Université de Montréal
