Publication


Featured research published by Melchi Michel.


Journal of Vision | 2011

Intrinsic position uncertainty explains detection and localization performance in peripheral vision

Melchi Michel; Wilson S. Geisler

Efficient performance in visual detection tasks requires excluding signals from irrelevant spatial locations. Indeed, researchers have found that detection performance in many tasks involving multiple potential target locations can be explained by the uncertainty the added locations contribute to the task. A similar type of location uncertainty may arise within the visual system itself. Converging evidence from hyperacuity and crowding studies suggests that feature localization declines rapidly in peripheral vision. This decline should add inherent position uncertainty to detection tasks. The current study used a modified detection task to measure how intrinsic position uncertainty changes with eccentricity. Subjects judged whether a Gabor target appeared within a cued region of a noisy display. The eccentricity and size of the region varied across blocks. When subjects detected the target, they used a mouse to indicate its location, allowing measurement of localization as well as detection errors. An ideal observer degraded with internal response noise and position noise (uncertainty) accounted for both the detection and localization performance of the subjects. The results suggest that position uncertainty grows linearly with visual eccentricity and is independent of target contrast. Intrinsic position uncertainty appears to be a critical factor limiting search and detection performance.
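
Below is a minimal sketch of this class of model, assuming a max-rule signal-detection observer with unit-variance Gaussian internal noise. The linear growth rate of uncertainty with eccentricity and all parameter values here are hypothetical, chosen only to illustrate how added position uncertainty degrades detection:

    import numpy as np

    rng = np.random.default_rng(0)

    def detection_rate(n_locations, d_prime, n_trials=20000):
        """Hit and false-alarm rates for a max-rule observer monitoring
        n_locations noisy responses; on signal trials the target adds
        d_prime to one location's response."""
        noise = rng.normal(size=(n_trials, n_locations))
        signal = noise.copy()
        signal[:, 0] += d_prime          # target at one (unknown) location
        criterion = d_prime / 2.0        # fixed decision criterion
        hits = (signal.max(axis=1) > criterion).mean()
        false_alarms = (noise.max(axis=1) > criterion).mean()
        return hits, false_alarms

    # Position uncertainty: eccentricity increases the number of locations
    # the observer effectively has to monitor (hypothetical linear growth).
    for ecc in [0, 4, 8, 16]:
        m = 1 + 2 * ecc
        h, fa = detection_rate(m, d_prime=2.0)
        print(f"eccentricity {ecc:2d} deg -> {m:3d} effective locations: "
              f"hits={h:.2f}, false alarms={fa:.2f}")

As the number of effectively monitored locations grows, the maximum of the noise-only responses rises, so false alarms increase and detection sensitivity falls, which is the signature the position-uncertainty account predicts.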


Journal of Vision | 2007

Parameter learning but not structure learning: a Bayesian network model of constraints on early perceptual learning.

Melchi Michel; Robert A. Jacobs

Visual scientists have shown that people are capable of perceptual learning in a large variety of circumstances. Are there constraints on such learning? We propose a new constraint on early perceptual learning: people are capable of parameter learning (they can modify their knowledge of the prior probabilities of scene variables, or of the statistical relationships among scene and perceptual variables that are already considered to be potentially dependent), but they are not capable of structure learning (they cannot learn new relationships among variables that are not considered to be potentially dependent, even when placed in novel environments in which these variables are strongly related). These ideas are formalized using the notation of Bayesian networks. We report the results of five experiments that evaluate whether subjects can demonstrate cue acquisition, meaning that they can learn that a sensory signal is a cue to a perceptual judgment. In Experiment 1, subjects were placed in a novel environment that resembled natural environments in the sense that it contained systematic relationships among scene and perceptual variables that are normally dependent. In this case, cue acquisition requires parameter learning and, as predicted, subjects succeeded in learning a new cue. In Experiments 2-5, subjects were placed in novel environments that did not resemble natural environments: they contained systematic relationships among scene and perceptual variables that are not normally dependent. In these cases, cue acquisition requires structure learning. Consistent with our hypothesis, subjects failed to learn new cues in Experiments 2-5. Overall, the results suggest that the mechanisms of early perceptual learning are biased such that people can learn new contingencies between scene and sensory variables only when those variables are considered to be potentially dependent.
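
Below is a minimal sketch of the parameter-versus-structure distinction, assuming binary variables and Laplace (Beta(1,1)) pseudo-counts; the scene variable S, the modeled cue C1, and the unmodeled signal C2 are all hypothetical illustrations, not the paper's stimuli:

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical world: scene variable S and two sensory signals.
    # C1 is a cue the observer already treats as potentially dependent on S;
    # the observer's network contains no S -> C2 edge.
    def sample_world(n, p_c1_given_s=0.9, p_c2_given_s=0.9):
        s = rng.integers(0, 2, size=n)
        c1 = np.where(rng.random(n) < p_c1_given_s, s, 1 - s)
        c2 = np.where(rng.random(n) < p_c2_given_s, s, 1 - s)  # informative too!
        return s, c1, c2

    s, c1, c2 = sample_world(1000)

    # Parameter learning: update the CPT P(C1 | S) for an existing edge.
    counts = np.ones((2, 2))                    # Laplace pseudo-counts
    for si, ci in zip(s, c1):
        counts[si, ci] += 1
    cpt_c1 = counts / counts.sum(axis=1, keepdims=True)
    print("learned P(C1|S):\n", cpt_c1.round(2))

    # Structure learning would mean adding the S -> C2 edge itself. Under the
    # proposed constraint that step never happens, so C2 stays modeled as an
    # independent variable and its relationship to S goes unused, however
    # strong it is in the data.
    print("P(C2) under the fixed structure:", round(c2.mean(), 2))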


Nature Neuroscience | 2013

An illusion predicted by V1 population activity implicates cortical topography in shape perception

Melchi Michel; Yuzhi Chen; Wilson S. Geisler; Eyal Seidemann

Mammalian primary visual cortex (V1) is topographically organized such that the pattern of neural activation in V1 reflects the location and spatial extent of visual elements in the retinal image, but it is unclear whether this organization contributes to visual perception. We combined computational modeling, voltage-sensitive dye imaging (VSDI) in behaving monkeys, and behavioral measurements in humans to investigate whether the large-scale topography of V1 population responses influences shape judgments. Specifically, we used a computational model to design visual stimuli that had the same physical shape but were predicted to elicit varying amounts of V1 response spread. We confirmed these predictions with VSDI. Finally, we designed a behavioral task in which human observers judged the shapes of these stimuli and found that their judgments were systematically distorted by the spread of V1 activity. This illusion suggests that the topographic pattern of neural population responses in visual cortex contributes to visual perception.
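
As a toy stand-in for the paper's computational model (not the authors' actual model), the sketch below assumes the V1 population response is the retinal image convolved with a Gaussian point-image function, which is enough to show how two stimuli of equal physical extent can produce different response spreads:

    import numpy as np

    def v1_response(stimulus, sigma=3.0):
        """Toy V1 population response: the image blurred by a Gaussian
        point-image function (each image point activates a cortical patch)."""
        n = stimulus.shape[0]
        x = np.arange(n) - n // 2
        g = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * sigma ** 2))
        g /= g.sum()
        kernel_f = np.fft.fft2(np.fft.ifftshift(g))   # centered kernel -> FFT
        return np.real(np.fft.ifft2(np.fft.fft2(stimulus) * kernel_f))

    # Two stimuli with the same nominal extent but different edge profiles.
    n = 64
    y, x = np.mgrid[:n, :n] - n // 2
    disk = (x**2 + y**2 < 10**2).astype(float)        # sharp-edged disk
    soft = np.exp(-(x**2 + y**2) / (2 * 8.0**2))      # soft-edged blob

    for name, stim in [("sharp disk", disk), ("soft blob", soft)]:
        r = v1_response(stim)
        spread = (r > 0.5 * r.max()).sum()            # half-max response area
        print(f"{name}: response spread = {spread} cortical 'pixels'")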


Vision Research | 2015

Visual search under scotopic lighting conditions

Vivian C. Paulun; Alexander C. Schütz; Melchi Michel; Wilson S. Geisler; Karl R. Gegenfurtner

When we search for visual targets in a cluttered background, we systematically move our eyes around to bring different regions of the scene into foveal view. We explored how visual search behavior changes when the fovea is not functional, as is the case in scotopic vision. Scotopic contrast sensitivity is significantly lower overall, with a functional scotoma in the fovea. We found that in scotopic search, for a medium- and a low-spatial-frequency target, individuals made longer-lasting fixations whose locations were not broadly distributed across the entire search display but tended to peak in the upper center, especially for the medium-frequency target. The distributions of fixation locations are qualitatively similar to those of an ideal searcher that has human scotopic detectability across the visual field, and interestingly, these predicted distributions are different from those predicted by an ideal searcher with human photopic detectability. We conclude that although there are some qualitative differences between human and ideal search behavior, humans make principled adjustments in their search behavior as ambient light level decreases.
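
The following minimal sketch illustrates one way the scotopic ideal searcher can differ from the photopic one. The detectability maps are hypothetical (the photopic map peaks at the fovea; the scotopic map has a central scotoma), and a greedy one-step searcher picks the fixation maximizing posterior-weighted detectability:

    import numpy as np

    # Hypothetical detectability maps: target d' as a function of the
    # distance (deg) between a display location and the current fixation.
    def dprime_photopic(dist):
        return 3.0 * np.exp(-dist / 4.0)                 # peaks at the fovea

    def dprime_scotopic(dist):
        return 2.0 * np.exp(-dist / 6.0) * (dist > 2.0)  # central scotoma

    def best_fixation(locations, posterior, dmap):
        """One greedy ideal-searcher step: fixate the point maximizing
        posterior-weighted detectability over candidate target locations."""
        gains = [np.sum(posterior * dmap(np.abs(locations - f)))
                 for f in locations]
        return locations[int(np.argmax(gains))]

    locations = np.linspace(-10, 10, 81)                 # 1-D 'display' (deg)
    posterior = np.exp(-locations**2 / (2 * 2.0**2))     # target likely central
    posterior /= posterior.sum()

    print("photopic fixation:", best_fixation(locations, posterior, dprime_photopic))
    print("scotopic fixation:", best_fixation(locations, posterior, dprime_scotopic))

In this toy version, the photopic searcher fixates the likely target location directly, while the scotopic searcher fixates a few degrees away from it, the familiar "averted vision" strategy of night viewing.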


Neural Computation | 2006

The Costs of Ignoring High-Order Correlations in Populations of Model Neurons

Melchi Michel; Robert A. Jacobs

Investigators debate the extent to which neural populations use pairwise and higher-order statistical dependencies among neural responses to represent information about a visual stimulus. To study this issue, three statistical decoders were used to extract the information in the responses of model neurons about the binocular disparities present in simulated pairs of left-eye and right-eye images: (1) the full joint probability decoder considered all possible statistical relations among neural responses as potentially important; (2) the dependence tree decoder also considered all possible relations as potentially important, but it approximated high-order statistical correlations using a computationally tractable procedure; and (3) the independent response decoder assumed that neural responses are statistically independent, meaning that all correlations are zero and can be ignored. Simulation results indicate that high-order correlations among model neuron responses contain significant information about binocular disparities and that the amount of this high-order information increases rapidly as a function of neural population size. Furthermore, the results highlight the potential importance of the dependence tree decoder to neuroscientists as a powerful but still practical way of approximating high-order correlations among neural responses.
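
A minimal sketch of the cost of ignoring correlations, assuming a hypothetical three-neuron binary population in which the stimulus changes the correlation between two neurons while leaving marginal firing rates unchanged. The full joint and independent response decoders are fit from samples and compared (the dependence tree decoder is omitted for brevity):

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(2)

    def sample_responses(s, n):
        """Hypothetical 3-neuron binary population: the stimulus s sets the
        correlation between neurons 0 and 1, not their marginal rates."""
        r0 = (rng.random(n) < 0.5).astype(int)
        match = rng.random(n) < (0.9 if s == 1 else 0.1)
        r1 = np.where(match, r0, 1 - r0)
        r2 = (rng.random(n) < 0.5).astype(int)
        return np.stack([r0, r1, r2], axis=1)

    train = {s: sample_responses(s, 20000) for s in (0, 1)}
    patterns = list(product([0, 1], repeat=3))

    def joint_probs(data):
        p = {pat: 1e-9 for pat in patterns}       # tiny floor avoids zeros
        for row in map(tuple, data):
            p[row] += 1.0 / len(data)
        return p

    def indep_probs(data):
        m = data.mean(axis=0)                     # marginal P(r_i = 1)
        return {pat: np.prod([m[i] if b else 1 - m[i]
                              for i, b in enumerate(pat)]) for pat in patterns}

    models = {"full joint": {s: joint_probs(train[s]) for s in (0, 1)},
              "independent": {s: indep_probs(train[s]) for s in (0, 1)}}

    # Decode held-out responses by maximum likelihood.
    test = {s: sample_responses(s, 5000) for s in (0, 1)}
    for name, model in models.items():
        correct = sum((model[1][tuple(row)] > model[0][tuple(row)]) == (s == 1)
                      for s in (0, 1) for row in test[s])
        print(f"{name} decoder accuracy: {correct / 10000:.2f}")

Because the stimulus here is carried entirely by a correlation, the independent decoder performs near chance while the joint decoder performs well; this is exactly the kind of information that an independence assumption throws away.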


Psychological Review | 2018

The capacity of trans-saccadic memory in visual search.

Nicholas Kleene; Melchi Michel

Maintaining a continuous, stable perception of the visual world relies on the ability to integrate information from previous fixations with the current one. An essential component of this integration is trans-saccadic memory (TSM), memory for information across saccades. TSM capacity may play a limiting role in tasks requiring efficient trans-saccadic integration, such as multiple-fixation visual search tasks. We estimated TSM capacity and investigated its relationship to visual short-term memory (VSTM) using two visual search tasks, one in which participants maintained fixation while saccades were simulated and another in which participants made a sequence of actual saccades. We derived a memory-limited ideal observer model to estimate lower bounds on memory capacities from human search performance. Analysis of the single-fixation search task resulted in capacity estimates (4–8 bits) consistent with those reported for traditional VSTM tasks. However, analysis of the multiple-fixation search task resulted in capacity estimates (15–32 bits) significantly larger than those measured for VSTM. Our results suggest that TSM plays an important role in visual search tasks, that the effective capacity of TSM is greater than or equal to that of VSTM, and that the TSM capacity of human observers significantly limits performance in multiple-fixation visual search tasks.
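
A minimal sketch of a memory-limited ideal observer, assuming a two-alternative task with Gaussian evidence and assuming the capacity limit acts as uniform quantization of the stored log-likelihood ratio; the working range and all parameter values are hypothetical, and the paper's model and task are richer:

    import numpy as np

    rng = np.random.default_rng(3)

    def search_accuracy(capacity_bits, n_fixations=4, d_prime=1.0, n_trials=20000):
        """Toy trans-saccadic integration: each fixation yields a noisy
        sample; the running log-likelihood ratio (LLR) is re-encoded into
        memory with only `capacity_bits` bits of precision."""
        levels = 2 ** capacity_bits
        edges = np.linspace(-8, 8, levels + 1)     # assumed LLR working range
        centers = (edges[:-1] + edges[1:]) / 2
        target = rng.integers(0, 2, size=n_trials) # which of 2 alternatives
        llr = np.zeros(n_trials)
        for _ in range(n_fixations):
            sample = rng.normal(np.where(target == 1, d_prime, -d_prime) / 2, 1.0)
            llr += d_prime * sample                # ideal LLR increment
            # Memory bottleneck: quantize the stored decision variable.
            idx = np.clip(np.digitize(llr, edges) - 1, 0, levels - 1)
            llr = centers[idx]
        return ((llr > 0).astype(int) == target).mean()

    for bits in [1, 2, 4, 8]:
        print(f"{bits} bits of memory -> accuracy {search_accuracy(bits):.3f}")

Accuracy rises with capacity and saturates once the quantization is finer than the internal noise, which is how a bit-capacity estimate can be read off from search performance in this style of model.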


Journal of Vision | 2018

Textures as Global Signals of Abnormality in the Interpretation of Mammograms

Yelda Semizer; Melchi Michel; Karla K. Evans; Jeremy M. Wolfe

Evans et al. (2016) demonstrated that radiologists can discriminate between normal and abnormal breast tissue at a glance. To explain this ability, they suggested that radiologists might be using some “global signal” of abnormality. Our study sought to characterize these global signals as texture descriptions (i.e., a set of stationary spatial statistics) and to determine whether radiologists rely on such texture descriptions when discriminating between normal and abnormal breast tissue.
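
The sketch below illustrates what a "texture description" in the sense of stationary spatial statistics could look like, using a toy statistic set (pixel moments plus autocorrelations at a few spatial lags); the descriptors actually used in the study are richer:

    import numpy as np

    def texture_stats(img, lags=(1, 2, 4, 8)):
        """A toy texture description: a few stationary spatial statistics.
        Returns moments of the pixel histogram plus autocorrelations."""
        z = (img - img.mean()) / (img.std() + 1e-9)
        stats = [img.mean(), img.std(),
                 (z ** 3).mean(),                   # skewness
                 (z ** 4).mean()]                   # kurtosis
        for lag in lags:
            stats.append((z[:, :-lag] * z[:, lag:]).mean())  # horizontal corr
            stats.append((z[:-lag, :] * z[lag:, :]).mean())  # vertical corr
        return np.array(stats)

    rng = np.random.default_rng(4)
    rough = rng.normal(size=(128, 128))
    # Crude low-pass texture: local averaging raises spatial correlation.
    smooth = (rough + np.roll(rough, 1, 0) + np.roll(rough, 1, 1)) / 3
    print("smooth texture:", texture_stats(smooth).round(2))
    print("rough texture: ", texture_stats(rough).round(2))

Two textures with identical pixel histograms can still separate cleanly in such a statistic vector, which is what makes stationary statistics a candidate carrier for a global at-a-glance abnormality signal.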


Journal of Vision | 2017

Intrinsic position uncertainty impairs overt search performance

Yelda Semizer; Melchi Michel

Uncertainty regarding the position of the search target is a fundamental component of visual search. However, due to perceptual limitations of the human visual system, this uncertainty can arise from intrinsic, as well as extrinsic, sources. The current study sought to characterize the role of intrinsic position uncertainty (IPU) in overt visual search and to determine whether it significantly limits human search performance. After completing a preliminary detection experiment to characterize sensitivity as a function of visual field position, observers completed a search task that required localizing a Gabor target within a field of synthetic luminance noise. The search experiment included two clutter conditions designed to modulate the effect of IPU across search displays of varying set size. In the Cluttered condition, the display was tiled uniformly with feature clutter to maximize the effects of IPU. In the Uncluttered condition, the clutter at irrelevant locations was removed to attenuate the effects of IPU. Finally, we derived an IPU-constrained ideal searcher model, limited by the IPU measured in human observers. Ideal searchers were simulated based on the detection sensitivity and fixation sequences measured for individual human observers. The IPU-constrained ideal searcher predicted performance trends similar to those exhibited by the human observers. In the Uncluttered condition, performance decreased steeply as a function of increasing set size. However, in the Cluttered condition, the effect of IPU dominated and performance was approximately constant as a function of set size. Our findings suggest that IPU substantially limits overt search performance, especially in crowded displays.
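
A minimal sketch of the set-size logic, assuming a max-rule localization observer and treating IPU in clutter as a fixed number of additional, hypothetically target-like locations the observer must monitor:

    import numpy as np

    rng = np.random.default_rng(5)

    def percent_correct(m_effective, d_prime=2.5, n_trials=20000):
        """Localization accuracy for a max-rule observer monitoring
        m_effective noisy locations, one of which contains the target."""
        resp = rng.normal(size=(n_trials, m_effective))
        resp[:, 0] += d_prime
        return (resp.argmax(axis=1) == 0).mean()

    ipu_locations = 24   # hypothetical locations added by intrinsic uncertainty
    for set_size in [2, 4, 8, 16]:
        uncluttered = percent_correct(set_size)
        # In clutter, IPU makes many irrelevant locations target-like, so
        # effective uncertainty is dominated by the (fixed) clutter term.
        cluttered = percent_correct(set_size + ipu_locations)
        print(f"set size {set_size:2d}: uncluttered {uncluttered:.2f}, "
              f"cluttered {cluttered:.2f}")

With little clutter, effective uncertainty tracks set size and accuracy falls steeply; with dense clutter, the IPU term dominates and accuracy is nearly flat across set sizes, mirroring the pattern reported above.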


Journal of Vision | 2008

Learning optimal integration of arbitrary features in a perceptual discrimination task

Melchi Michel; Robert A. Jacobs


Journal of Vision | 2010

Visual learning with reliable and unreliable features

A. Emin Orhan; Melchi Michel; Robert A. Jacobs

Collaboration


Dive into Melchi Michel's collaborations.

Top Co-Authors

Wilson S. Geisler

University of Texas at Austin

Eyal Seidemann

University of Texas at Austin

Yuzhi Chen

University of Texas at Austin

Jeremy M. Wolfe

Brigham and Women's Hospital
