Publications


Featured research published by Michael E. Rudd.


Neuron | 2009

The Challenges Natural Images Pose for Visual Adaptation

Fred Rieke; Michael E. Rudd

Advances in our understanding of natural image statistics and of gain control within the retinal circuitry are leading to new insights into the classic problem of retinal light adaptation. Here we review what we know about how rapid adaptation occurs during active exploration of the visual scene. Adaptational mechanisms must balance the competing demands of adapting quickly, locally, and reliably, and this balance must be maintained as lighting conditions change. Multiple adaptational mechanisms in different locations within the retina act in concert to accomplish this task, with lighting conditions dictating which mechanisms dominate.


Vision Research | 2001

Darkness filling-in: a neural model of darkness induction

Michael E. Rudd; Karl Frederick Arrington

A model of darkness induction based on a neural filling-in mechanism is proposed. The model borrows principles from both Land's Retinex theory and the BCS/FCS filling-in model of Grossberg and colleagues. The main novel assumption of the induction model is that darkness filling-in signals, which originate at luminance borders, are partially blocked when they try to cross other borders. The percentage of the filling-in signal that is blocked is proportional to the log luminance ratio across the border that does the blocking. The model is used to give a quantitative account of the data from a brightness matching experiment in which a decremental test disk was surrounded by two concentric rings. The luminances of the rings were independently varied to modulate the brightness of the test. Observers adjusted the luminance of a comparison disk surrounded by a single ring of higher luminance to match the test disk in brightness.
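The abstract's core rule, a darkness filling-in signal that is partially blocked in proportion to the log luminance ratio of each border it crosses, can be sketched as follows (the blocking constant `k` and the clipping at 100% blockage are illustrative assumptions, not values from the paper):

```python
import math

def darkness_signal(origin_ratio, crossed_ratios, k=0.3):
    """Illustrative darkness filling-in signal.

    A signal originates at a border with log luminance ratio
    `origin_ratio` and is partially blocked at each border it crosses;
    the blocked fraction is proportional to the absolute log luminance
    ratio across the blocking border. `k` is a hypothetical free
    parameter, capped so no more than 100% of the signal is blocked.
    """
    signal = origin_ratio
    for r in crossed_ratios:
        blocked = min(1.0, k * abs(r))  # fraction blocked at this border
        signal *= (1.0 - blocked)
    return signal

# A signal from the outer ring's border is attenuated as it crosses the
# inner ring's border on its way to the test disk.
outer_border = math.log(100 / 50)   # log ratio at the originating border
inner_border = math.log(50 / 25)    # border the signal must cross
print(darkness_signal(outer_border, [inner_border]))
```

With this form, a high-contrast intervening border blocks more of the induction signal, which is the mechanism the abstract invokes to explain why the two rings do not contribute equally to the test disk's brightness.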


Vision Research | 2004

Quantitative properties of achromatic color induction: An edge integration analysis

Michael E. Rudd; Iris K. Zemach

Edge integration refers to a hypothetical process by which the visual system combines information about the local contrast, or luminance ratios, at luminance borders within an image to compute a scale of relative reflectances for the regions between the borders. The results of three achromatic color matching experiments, in which a test and matching ring were surrounded by one or more rings of varying luminance, were analyzed in terms of three alternative quantitative edge integration models: (1) a generalized Retinex algorithm, in which achromatic color is computed from a weighted sum of log luminance ratios, with weights free to vary as a function of distance from the test (Weighted Log Luminance Ratio model); (2) an elaboration of the first model, in which the weights given to distant edges are reduced by a percentage that depends on the log luminance ratios of borders lying between the distant edges and the target (Weighted Log Luminance Ratio model with Blockage); and (3) an alternative modification of the first model, in which Michelson contrasts are substituted for log luminance ratios in the achromatic color computation (Weighted Michelson Contrast model). The experimental results support the Weighted Log Luminance Ratio model over the other two edge integration models. The Weighted Log Luminance Ratio model is also shown to provide a better fit to the achromatic color matching data than does Wallach's Ratio Rule, which states that the two disks will match in achromatic color when their respective disk/ring luminance ratios are equal.
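The Weighted Log Luminance Ratio model described above, a weighted sum of signed log luminance ratios with distance-dependent weights, can be sketched in a few lines (the weight function here is a hypothetical placeholder; in the experiments the weights were free parameters fit to the matching data):

```python
import math

def weighted_log_ratio_lightness(edges, weight):
    """Weighted Log Luminance Ratio model sketch.

    `edges` is a sequence of (l_near, l_far, distance) tuples giving the
    luminances on the near and far sides of each border and the border's
    distance from the test region; `weight` maps distance to a gain.
    Lightness is the weighted sum of signed log luminance ratios.
    """
    return sum(weight(d) * math.log(l_near / l_far)
               for l_near, l_far, d in edges)

# Hypothetical weights falling off with distance from the test.
w = lambda d: 1.0 / (1.0 + d)

# Decremental test disk (30 cd/m^2) inside rings of 60 and 90 cd/m^2:
edges = [(30, 60, 0.0),   # disk/inner-ring border, adjacent to the test
         (60, 90, 1.0)]   # inner/outer-ring border, farther away
print(weighted_log_ratio_lightness(edges, w))
```

Setting `weight` to a constant and using a single border recovers Wallach's Ratio Rule as a special case, which is what makes the distance-dependent weighting the testable part of the model.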


Journal of Vision | 2005

The highest luminance anchoring rule in achromatic color perception: Some counterexamples and an alternative theory

Michael E. Rudd; Iris K. Zemach

It has been hypothesized that lightness is computed in a series of stages involving: (1) extraction of local contrast or luminance ratios at borders; (2) edge integration, to combine contrast or luminance ratios across space; and (3) anchoring, to relate the relative lightness scale computed in Stage 2 to the scale of real-world reflectances. The results of several past experiments have been interpreted as supporting the highest luminance anchoring rule, which states that the highest luminance in a scene always appears white. We have previously proposed a quantitative model of achromatic color computation based on a distance-dependent edge integration mechanism. In the case of two disks surrounded by lower luminance rings, these two theories, highest luminance anchoring and distance-dependent edge integration, make different predictions regarding the luminance of a matching disk required for an achromatic color match to a test disk of fixed luminance. The highest luminance rule predicts that the luminance of the ring surrounding the test should make no difference, whereas the edge integration model predicts that increasing the surround luminance should reduce the luminance required for a match. The two theories were tested against one another in two experiments. The results of both experiments support the edge integration model over the highest luminance rule.


Journal of Vision | 2010

How attention and contrast gain control interact to regulate lightness contrast and assimilation: a computational neural model.

Michael E. Rudd

Recent theories of lightness perception assume that lightness (perceived reflectance) is computed by a process that contrasts the target's luminance with that of one or more regions in its spatial surround. A challenge for any such theory is the phenomenon of lightness assimilation, which occurs when increasing the luminance of a surround region increases the target lightness: the opposite of contrast. Here contrast and assimilation are studied quantitatively in lightness matching experiments utilizing concentric disk-and-ring displays. Whether contrast or assimilation is seen depends on a number of factors including: the luminance relations of the target, surround, and background; surround size; and matching instructions. When assimilation occurs, it is always part of a larger pattern in which assimilation and contrast both occur over different ranges of surround luminance. These findings are quantitatively modeled by a theory that assumes lightness is computed from a weighted sum of responses of edge detector neurons in visual cortex. The magnitude of the neural response to an edge is regulated by a combination of contrast gain control acting between neighboring edge detectors and a top-down attentional gain control that selectively weights the response to stimulus edges according to their task relevance.


Human Vision and Electronic Imaging Conference | 2001

Lightness computation by a neural filling-in mechanism

Michael E. Rudd

A growing body of evidence suggests that the brain computes lightness in a two-stage process that involves (1) an early neural encoding of contrast at the locations of luminance borders in the visual image, and (2) a subsequent filling-in of the lightnesses of the regions lying between the borders. I will review evidence that supports this theory and present a computational model of lightness based on filling-in by a spatially-spreading cortical diffusion mechanism. The behavior of the model will be illustrated by showing how it quantitatively accounts for the lightness matching data of Rudd and Arrington. The model's performance will be compared with that of other theories of lightness, including Retinex theory, a modified version of Retinex theory that assumes edge integration with a falloff in spatial weighting of edge information with distance, lightness anchoring based on the highest luminance rule, and the BCS/FCS filling-in model developed by Grossberg and his colleagues.


Frontiers in Human Neuroscience | 2014

A cortical edge-integration model of object-based lightness computation that explains effects of spatial context and individual differences.

Michael E. Rudd

Previous work has demonstrated that perceived surface reflectance (lightness) can be modeled in simple contexts in a quantitatively exact way by assuming that the visual system first extracts information about local, directed steps in log luminance, then spatially integrates these steps along paths through the image to compute lightness (Rudd and Zemach, 2004, 2005, 2007). This method of computing lightness is called edge integration. Recent evidence (Rudd, 2013) suggests that human vision employs a default strategy to integrate luminance steps only along paths from a common background region to the targets whose lightness is computed. This implies a role for gestalt grouping in edge-based lightness computation. Rudd (2010) further showed that the perceptual weights applied to edges in lightness computation can be influenced by the observer's interpretation of luminance steps as resulting from either spatial variation in surface reflectance or illumination. This implies a role for top-down factors in any edge-based model of lightness (Rudd and Zemach, 2005). Here, I show how the separate influences of grouping and attention on lightness can be modeled in tandem by a cortical mechanism that first employs top-down signals to spatially select regions of interest for lightness computation. An object-based network computation, involving neurons that code for border-ownership, then automatically sets the neural gains applied to edge signals surviving the earlier spatial selection stage. Only the borders that survive both processing stages are spatially integrated to compute lightness. The model assumptions are consistent with those of the cortical lightness model presented earlier by Rudd (2010, 2013), and with neurophysiological data indicating extraction of local edge information in V1, network computations to establish figure-ground relations and border ownership in V2, and edge integration to encode lightness and darkness signals in V4.


Psychonomic Bulletin & Review | 2009

The revelation effect for autobiographical memory: a mixture-model analysis

Daniel M. Bernstein; Michael E. Rudd; Edgar Erdfelder; Ryan Godfrey; Elizabeth F. Loftus

Participants provided information about their childhood by rating their confidence about whether they had experienced various events (e.g., “broke a window playing ball”). On some trials, participants unscrambled a key word from the event phrase (e.g., wdinwo—window) or an unrelated word (e.g., gnutge—nugget) before seeing the event and giving their confidence ratings. The act of unscrambling led participants to increase their confidence that the event occurred in their childhood, but only when the confidence rating immediately followed the act of unscrambling. This increase in confidence mirrors the “revelation effect” observed in word recognition experiments. In the present article, we analyzed our data using a new signal detection mixture distribution model that does not require the researcher to know the veracity of memory judgments a priori. Our analysis reveals that unscrambling a key word or an unrelated word affects response bias and discriminability in autobiographical memory tests in ways that are very similar to those that have been previously found for word recognition tasks.


Advances in Cognitive Psychology | 2007

Metacontrast masking and the cortical representation of surface color: dynamical aspects of edge integration and contrast gain control

Michael E. Rudd

This paper reviews recent theoretical and experimental work supporting the idea that brightness is computed in a series of neural stages involving edge integration and contrast gain control. It is proposed here that metacontrast and paracontrast masking occur as byproducts of the dynamical properties of these neural mechanisms. The brightness computation model assumes, more specifically, that early visual neurons in the retina, and cortical areas V1 and V2, encode local edge signals whose magnitudes are proportional to the logarithms of the luminance ratios at luminance edges within the retinal image. These local edge signals give rise to secondary neural lightness and darkness spatial induction signals, which are summed at a later stage of cortical processing to produce a neural representation of surface color, or achromatic color, in the case of the chromatically neutral stimuli considered here. Prior to the spatial summation of these edge-based induction signals, the weights assigned to local edge contrast are adjusted by cortical gain mechanisms involving both lateral interactions between neural edge detectors and top-down attentional control. We have previously constructed and computer-simulated a neural model of achromatic color perception based on these principles and have shown that our model gives a good quantitative account of the results of several brightness matching experiments. Adding to this model the realistic dynamical assumptions that (1) the neurons that encode local contrast exhibit transient firing rate enhancement at the onset of an edge, and (2) the effects of contrast gain control take time to spread between edges, results in a dynamic model of brightness computation that predicts the existence of Broca-Sulzer transient brightness enhancement of the target, Type B metacontrast masking, and a form of paracontrast masking in which the target brightness is enhanced when the mask precedes the target in time.


Journal of Electronic Imaging | 2017

Lightness computation by the human visual system

Michael E. Rudd

A model of achromatic color computation by the human visual system is presented, which is shown to account in an exact quantitative way for a large body of appearance matching data collected with simple visual displays. The model equations are closely related to those of the original Retinex model of Land and McCann. However, the present model differs in important ways from Land and McCann's theory in that it invokes additional biological and perceptual mechanisms, including contrast gain control, different inherent neural gains for incremental and decremental luminance steps, and two types of top-down influence on the perceptual weights applied to local luminance steps in the display: edge classification and spatial integration windowing. Arguments are presented to support the claim that these various visual processes must be instantiated by a particular underlying neural architecture. By pointing to correspondences between the architecture of the model and findings from visual neurophysiology, this paper suggests that edge classification involves a top-down gating of neural edge responses in early visual cortex (cortical areas V1 and/or V2), while spatial integration windowing occurs in cortical area V4 or beyond.
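One ingredient named in the abstract, separate inherent neural gains for incremental and decremental luminance steps within an edge-integration sum, can be given a minimal sketch (the gain values below are placeholders, not the fitted values from the paper):

```python
import math

def edge_signal(l_from, l_to, gain_inc=1.0, gain_dec=1.5):
    """Directed log-luminance step with separate gains.

    The step sign is taken in the direction of integration; incremental
    steps (luminance goes up) and decremental steps (luminance goes
    down) receive different inherent gains. The values 1.0 and 1.5 are
    hypothetical placeholders.
    """
    step = math.log(l_to / l_from)
    return (gain_inc if step > 0 else gain_dec) * step

def lightness(path):
    """Integrate directed edge signals along a path of luminances."""
    return sum(edge_signal(a, b) for a, b in zip(path, path[1:]))

# Path from a background (20 cd/m^2) across a bright ring (80 cd/m^2)
# to a decremental test region (40 cd/m^2):
print(lightness([20, 80, 40]))
```

Because the two gains differ, the model's lightness prediction for a target depends not only on the net log luminance change along the path but also on how that change is split between incremental and decremental steps.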

Collaboration


Top co-authors of Michael E. Rudd.

Iris K. Zemach
University of Washington

Fred Rieke
University of Washington

Dorin Popa
University of Washington

Ryan Godfrey
University of California

Amanda Heredia
University of Washington

Karen L. Syrjala
Fred Hutchinson Cancer Research Center

Karl Frederick Arrington
Massachusetts Institute of Technology