Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Yaniv Morgenstern is active.

Publication


Featured research published by Yaniv Morgenstern.


Journal of Vision | 2004

Contrast dependence of spatial summation revealed by classification image analysis

Yaniv Morgenstern; James H. Elder; Yuqian Hou

Detection of low-contrast luminance-defined stimuli can involve spatial summation over a large portion of the visual field. However, prior psychophysical results suggest that the summation region may shrink substantially in the presence of high-contrast masking gratings or noise (Legge & Foley, 1980; Kersten, 1984). This may be related to recent findings that the extent of spatial summation in V1 neurons depends upon contrast (Sceniak et al., 1999; Kapadia et al., 1999). Here we use a classification image technique to directly test whether the psychophysical receptive field for a simple stimulus (a vertical edge in noise) is dependent upon contrast.
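The classification image technique mentioned in this abstract can be sketched in its standard form (this is the generic noise-based estimator, not necessarily the paper's exact pipeline; the simulated observer and the step-edge template below are illustrative assumptions):

```python
import numpy as np

def classification_image(noise_fields, responses):
    """Standard classification-image estimator: the average noise field
    on one response class minus the average on the other class.
    noise_fields: (n_trials, n_pixels); responses: boolean array."""
    responses = np.asarray(responses, dtype=bool)
    return (noise_fields[responses].mean(axis=0)
            - noise_fields[~responses].mean(axis=0))

# Simulate an observer whose internal template is a vertical-edge profile
# and who responds by template matching against the noise on each trial:
rng = np.random.default_rng(1)
template = np.sign(np.arange(-10, 10) + 0.5)   # 20-"pixel" step edge
noise = rng.normal(0.0, 1.0, (20000, 20))
resp = noise @ template > 0                    # template-matching decision
ci = classification_image(noise, resp)
# The recovered classification image is a (noisy) scaled copy of the
# true template, revealing the observer's perceptual receptive field.
```

With enough trials the recovered image converges to a scaled copy of the observer's internal template, which is what lets the psychophysical receptive field be estimated from behavior alone.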


Proceedings of the National Academy of Sciences of the United States of America | 2011

The human visual system's assumption that light comes from above is weak

Yaniv Morgenstern; Richard F. Murray; Laurence R. Harris

Every biological or artificial visual system faces the problem that images are highly ambiguous, in the sense that every image depicts an infinite number of possible 3D arrangements of shapes, surface colors, and light sources. When estimating 3D shape from shading, the human visual system partly resolves this ambiguity by relying on the light-from-above prior, an assumption that light comes from overhead. However, light comes from overhead only on average, and most images contain visual information that contradicts the light-from-above prior, such as shadows indicating oblique lighting. How does the human visual system perceive 3D shape when there are contradictions between what it assumes and what it sees? Here we show that the visual system combines the light-from-above prior with visual lighting cues using an efficient statistical strategy that assigns a weight to the prior and to the cues and finds a maximum-likelihood lighting direction estimate that is a compromise between the two. The prior receives surprisingly little weight and can be overridden by lighting cues that are barely perceptible. Thus, the light-from-above prior plays a much more limited role in shape perception than previously thought, and instead human vision relies heavily on lighting cues to recover 3D shape. These findings also support the notion that the visual system efficiently integrates priors with cues to solve the difficult problem of recovering 3D shape from 2D images.
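The weighting strategy described in this abstract can be sketched with a standard reliability-weighted (inverse-variance) maximum-likelihood combination. For simplicity the sketch treats lighting direction as a linear quantity with Gaussian uncertainty (the real quantity is circular), and the variances and angles are illustrative assumptions, not values from the paper:

```python
def combine_prior_and_cue(prior_mean, prior_var, cue_mean, cue_var):
    """Inverse-variance-weighted ML combination of a prior with a cue.
    Larger variance means lower reliability and hence a smaller weight."""
    w_prior, w_cue = 1.0 / prior_var, 1.0 / cue_var
    estimate = (w_prior * prior_mean + w_cue * cue_mean) / (w_prior + w_cue)
    combined_var = 1.0 / (w_prior + w_cue)
    return estimate, combined_var

# A weak (high-variance) light-from-above prior at 0 degrees is pulled
# almost all the way toward a reliable lighting cue at 60 degrees:
estimate, _ = combine_prior_and_cue(0.0, 100.0, 60.0, 10.0)  # ~54.5 degrees
```

In this scheme a low-weight prior is easily overridden by even a modestly reliable cue, which is the qualitative pattern the abstract reports for the light-from-above prior.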


Journal of Vision | 2010

Cue combination on the circle and the sphere

Richard F. Murray; Yaniv Morgenstern

Bayesian cue combination models have been used to examine how human observers combine information from several cues to form estimates of linear quantities like depth. Here we develop an analogous theory for circular quantities like planar direction. The circular theory is broadly similar to the linear theory but differs in significant ways. First, in the circular theory the combined estimate is a nonlinear function of the individual cue estimates. Second, in the circular theory the mean of the combined estimate is affected not only by the means of individual cues and the weights assigned to individual cues but also by the variability of individual cues. Third, in the circular theory the combined estimate can be less certain than the individual estimates, if the individual estimates disagree with one another. Fourth, the circular theory does not have some of the closed-form expressions available in the linear theory, so data analysis requires numerical methods. We describe a vector sum model that gives a heuristic approximation to the circular theory's behavior. We also show how the theory can be extended to deal with spherical quantities like direction in three-dimensional space.
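The vector sum model mentioned in the abstract can be sketched numerically. Assuming each cue is summarized by a von Mises likelihood with mean direction mu_i and concentration kappa_i (a common parameterization; the paper's exact model may differ), the combined estimate is the direction of the resultant of the vectors kappa_i * (cos mu_i, sin mu_i), and the resultant's length plays the role of the combined concentration:

```python
import numpy as np

def combine_circular_cues(mus, kappas):
    """Combine von Mises cue estimates by vector summation.
    mus: cue mean directions in radians; kappas: cue concentrations.
    Returns (combined mean direction, combined concentration)."""
    x = np.sum(kappas * np.cos(mus))
    y = np.sum(kappas * np.sin(mus))
    return np.arctan2(y, x), np.hypot(x, y)

# Two agreeing cues reinforce each other (combined concentration ~8)...
mu_a, kappa_a = combine_circular_cues(np.array([0.0, 0.1]),
                                      np.array([4.0, 4.0]))
# ...while strongly disagreeing cues shrink the resultant, so the
# combined estimate is LESS certain than either cue alone:
mu_d, kappa_d = combine_circular_cues(np.array([0.0, 0.9 * np.pi]),
                                      np.array([4.0, 4.0]))
```

Note how disagreement shortens the resultant vector: unlike in the linear theory, combining two disagreeing circular cues can yield an estimate less certain than the individual estimates, exactly the third difference the abstract lists.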


The Journal of Neuroscience | 2012

Local Visual Energy Mechanisms Revealed by Detection of Global Patterns

Yaniv Morgenstern; James H. Elder

A central goal of visual neuroscience is to relate the selectivity of individual neurons to perceptual judgments, such as detection of a visual pattern at low contrast or in noise. Since neurons in early areas of visual cortex carry information only about a local patch of the image, detection of global patterns must entail spatial pooling over many such neurons. Physiological methods provide access to local detection mechanisms at the single-neuron level but do not reveal how neural responses are combined to determine the perceptual decision. Behavioral methods provide access to perceptual judgments of a global stimulus but typically do not reveal the selectivity of the individual neurons underlying detection. Here we show how the existence of a nonlinearity in spatial pooling does allow properties of these early mechanisms to be estimated from behavioral responses to global stimuli. As an example, we consider detection of large-field sinusoidal gratings in noise. Based on human behavioral data, we estimate the length and width tuning of the local detection mechanisms and show that it is roughly consistent with the tuning of individual neurons in primary visual cortex of primates. We also show that a local energy model of pooling based on these estimated receptive fields is much more predictive of human judgments than competing models, such as probability summation. In addition to revealing underlying properties of early detection and spatial integration mechanisms in human cortex, our findings open a window on new methods for relating system-level perceptual judgments to neuron-level processing.
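The local energy pooling idea can be sketched generically (a textbook one-dimensional energy model, not the paper's fitted mechanism; the Gabor filter parameters below are illustrative): each local mechanism squares and sums the responses of a quadrature pair of filters, and the global decision variable pools those energies across space:

```python
import numpy as np

def local_energy_response(image, even_filter, odd_filter):
    """Local energy: squared responses of a quadrature filter pair,
    summed. Image and filters are 1-D here for simplicity."""
    even = np.convolve(image, even_filter, mode="valid")
    odd = np.convolve(image, odd_filter, mode="valid")
    return even**2 + odd**2

def global_energy(image, even_filter, odd_filter):
    """Nonlinear spatial pooling: sum local energies over position."""
    return np.sum(local_energy_response(image, even_filter, odd_filter))

# Quadrature pair: cosine- and sine-phase Gabors at one spatial frequency.
x = np.linspace(-2, 2, 41)
envelope = np.exp(-x**2)
even_f, odd_f = envelope * np.cos(4 * x), envelope * np.sin(4 * x)
grating = np.cos(4 * np.linspace(-8, 8, 161))  # large-field test grating
energy = global_energy(grating, even_f, odd_f)
```

Because each local response is squared before pooling, the statistic grows quadratically with contrast and is approximately invariant to the grating's phase; it is this pooling nonlinearity that, per the abstract, lets properties of the local mechanisms be estimated from global detection behavior.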


Electronic Imaging | 2015

The role of natural lighting diffuseness in human visual perception

Yaniv Morgenstern; Wilson S. Geisler; Richard F. Murray

The pattern of the light that falls on the retina is a conflation of real-world sources such as illumination and reflectance. Human observers often contend with the inherent ambiguity of the underlying sources by making assumptions about what real-world sources are most likely. Here we examine whether the visual system's assumptions about illumination match the statistical regularities of the real world. We used a custom-built multidirectional photometer to capture lighting relevant to the shading of Lambertian surfaces in hundreds of real-world scenes. We quantify the diffuseness of these lighting measurements, and compare them to previously reported biases in human visual perception. We find that (1) natural lighting diffuseness falls over the same range as previous psychophysical estimates of the visual system's assumptions about diffuseness, and (2) natural lighting almost always provides lighting direction cues that are strong enough to override the human visual system's well-known assumption that light tends to come from above. A consequence of these findings is that what seem to be errors in visual perception are often actually byproducts of the visual system knowing about and using reliable properties of real-world lighting when contending with ambiguous retinal images.


Journal of Vision | 2015

Exploring the perceptual similarity structure of dynamic textures

Yaniv Morgenstern; Shinho Cho; Daniel Kersten

Most previous work on material perception has focused on materials' optical properties, such as matte and specular reflectance. However, material categories and attributes may also be recognized from their motion flows, such as the contrasting dynamic textures produced by water and honey. We used a multi-arrangement form of multidimensional scaling (MDS; Kriegeskorte and Mur, 2012) to investigate the perceptual dimensions with which human observers compare dynamic textures. Observers used the drag and drop operations of the computer's mouse to arrange 97 dynamic textures (viewed as animated icons on a computer screen) according to their similarity. The dynamic textures were looped three-second movies windowed with either a small (radius = 38 pixels) or a large (radius = 225 pixels) circular aperture to control the level of spatial context. In the low spatial context condition (small apertures), the subjective reports and the MDS of the similarity arrangements were consistent with categorizations based on color and motion. In the high spatial context condition (large apertures), nine out of eleven human observers clustered the textures into semantic categories, including natural and artificial, liquid viscosity, wind, and hair/pili movement. The MDS of the average similarity arrangements for these nine observers showed that semantic categories were clustered along a primary attribute continuum from highly penetrable (e.g., water, honey), to less penetrable (e.g., hairs, cloth, fire), to impenetrable (e.g., cement, wood, metal). These findings suggest that larger spatial context reduces ambiguity, resulting in similarity arrangements based on physically meaningful dimensions. Meeting abstract presented at VSS 2015.


Journal of Vision | 2014

Human vision is attuned to the diffuseness of natural light

Yaniv Morgenstern; Wilson S. Geisler; Richard F. Murray


Journal of Vision | 2010

Real-world illumination measurements with a multidirectional photometer

Yaniv Morgenstern; Richard F. Murray; Wilson S. Geisler


Journal of Vision | 2010

Contextual lighting cues can override the light-from-above prior

Yaniv Morgenstern; Richard F. Murray


Journal of Vision | 2011

A low-dimensional statistical model of natural lighting

Yaniv Morgenstern; Richard F. Murray; Wilson S. Geisler

Collaboration


Dive into Yaniv Morgenstern's collaboration.

Top Co-Authors

Wilson S. Geisler

University of Texas at Austin

Shinho Cho

University of Minnesota
