Michael L. Mack
University of Texas at Austin
Publications
Featured research published by Michael L. Mack.
Eye Movements: A Window on Mind and Brain | 2007
John M. Henderson; James R. Brockmole; Monica S. Castelhano; Michael L. Mack
This chapter presents a test of the hypothesis that fixation locations during scene viewing are primarily determined by visual salience. Eye movements were collected from participants who viewed photographs of real-world scenes during an active search task. Visual salience as determined by a popular computational model did not predict region-to-region saccades or saccade sequences any better than did a random model. Consistent with other reports in the literature, intensity, contrast, and edge density differed at fixated scene regions compared to regions that were not fixated, but these fixated regions also differed in rated semantic informativeness. Therefore, any observed correlations between fixation locations and image statistics cannot be unambiguously attributed to these image statistics. The chapter concludes that visual saliency does not account for eye movements during active search. The existing evidence is consistent with the hypothesis that cognitive factors play the dominant role in active gaze control.
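To make the model-versus-baseline logic concrete, here is a minimal sketch of the kind of comparison the chapter describes: scoring how often fixations land in the most salient image regions and contrasting that against a uniform random baseline. The function name, salience map, and fixation data below are all hypothetical illustrations, not the chapter's actual stimuli or analysis pipeline.

```python
# Minimal sketch (not the chapter's actual analysis): compare how well a
# salience map predicts fixation locations against a random baseline.
# `salience_map` and `fixations` are hypothetical toy inputs.
import numpy as np

rng = np.random.default_rng(0)

def hit_rate(salience_map, fixations, top_fraction=0.2):
    """Fraction of fixations landing in the most salient regions."""
    threshold = np.quantile(salience_map, 1.0 - top_fraction)
    return np.mean([salience_map[y, x] >= threshold for x, y in fixations])

# Toy data: a 64x48 salience map and 100 "observed" fixations.
salience_map = rng.random((48, 64))
fixations = [(rng.integers(64), rng.integers(48)) for _ in range(100)]

observed = hit_rate(salience_map, fixations)
# Random baseline: fixations drawn uniformly over the image.
random_fix = [(rng.integers(64), rng.integers(48)) for _ in range(1000)]
baseline = hit_rate(salience_map, random_fix)
print(f"observed hit rate: {observed:.2f}, random baseline: {baseline:.2f}")
```

If the observed hit rate does not exceed the random baseline, as the chapter reports for saccade prediction, salience alone cannot explain where the eyes go.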
Journal of Vision | 2009
Monica S. Castelhano; Michael L. Mack; John M. Henderson
Expanding on the seminal work of G. T. Buswell (1935) and A. L. Yarbus (1967), we investigated how task instruction influences specific parameters of eye movement control. In the present study, 20 participants viewed color photographs of natural scenes under two instruction sets: visual search and memorization. Results showed that task influenced a number of eye movement measures, including the number of fixations and gaze duration on specific objects. Additional analyses revealed that the areas fixated were qualitatively different between the two tasks. However, other measures such as average saccade amplitude and individual fixation durations remained constant across the viewing of the scene and across tasks. The present study demonstrates that viewing task biases the selection of scene regions and aggregate measures of fixation time on those regions but does not influence other measures, such as the duration of individual fixations.
Vision Research | 2011
Jennifer J. Richler; Michael L. Mack; Thomas J. Palmeri; Isabel Gauthier
Face inversion effects are used as evidence that faces are processed differently from objects. Nevertheless, there is debate about whether processing differences between upright and inverted faces are qualitative or quantitative. We present two experiments comparing holistic processing of upright and inverted faces within the composite task, which requires participants to match one half of a test face while ignoring irrelevant variation in the other half of the test face. Inversion reduced overall performance but led to the same qualitative pattern of results as observed for upright faces (Experiment 1). However, longer presentation times were required to observe holistic effects for inverted compared to upright faces (Experiment 2). These results suggest that both upright and inverted faces are processed holistically, but inversion reduces overall processing efficiency.
Vision Research | 2009
Jennifer J. Richler; Michael L. Mack; Isabel Gauthier; Thomas J. Palmeri
Holistic processing (HP) of faces can be inferred from failure to selectively attend to part of a face. We explored how encoding time affects HP of faces by manipulating exposure duration of the study or test face in a sequential matching composite task. HP was observed for exposures as rapid as 50 ms and was unaffected by whether exposure of the study or test face was limited. Holistic effects emerge as soon as performance is above chance, and are not larger at rapid exposure durations. Limiting exposure at study vs. test did have differential effects on response biases at the fastest exposure durations. These findings provide key constraints for understanding mechanisms of face recognition. These results are the first to demonstrate that HP of faces emerges for very briefly presented faces, and that limited perceptual encoding time affects response biases and overall level of performance but not whether faces are processed holistically.
Psychonomic Bulletin & Review | 2008
Michael L. Mack; Isabel Gauthier; Javid Sadr; Thomas J. Palmeri
A tight temporal coupling between object detection (is an object there?) and object categorization (what kind of object is it?) has recently been reported (Grill-Spector & Kanwisher, 2005), suggesting that image segmentation into different objects and categorization of those objects at the basic level may be the very same mechanism. In the present work, we decoupled the time course of detection and categorization through two task manipulations. First, inverted objects were categorized significantly less accurately than upright objects across a range of image presentation durations, but no significant effect on object detection was observed. Second, systematically degrading stimuli affected categorization significantly more than object detection. The time course of object detection and object categorization can thus be selectively manipulated; they are not intrinsically linked. Knowing that an object is there does not necessarily mean knowing what it is.
Current Biology | 2013
Michael L. Mack; Alison R. Preston; Bradley C. Love
Acts of cognition can be described at different levels of analysis: what behavior should characterize the act, what algorithms and representations underlie the behavior, and how the algorithms are physically realized in neural activity [1]. Theories that bridge levels of analysis offer more complete explanations by leveraging the constraints present at each level [2-4]. Despite the great potential for theoretical advances, few studies of cognition bridge levels of analysis. For example, formal cognitive models of category decisions accurately predict human decision making [5, 6], but whether the model algorithms and representations supporting category decisions are consistent with the underlying neural implementation remains unknown. This uncertainty is largely due to the hurdle of forging links between theory and brain [7-9]. Here, we tackle this critical problem by using brain responses to characterize the mental computations that support category decisions and to evaluate two dominant, and opposing, models of categorization. We found that brain states during category decisions were significantly more consistent with latent model representations from exemplar [5] than from prototype theory [10, 11]. Representations of individual experiences, not the abstraction of experiences, are critical for category decision making. Holding models accountable for behavior and neural implementation provides a means for advancing more complete descriptions of the algorithms of cognition.
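The exemplar-versus-prototype contrast evaluated here reduces to how category evidence is computed from stored experiences: summed similarity to every remembered exemplar versus similarity to the category average alone. The sketch below illustrates that contrast in a generic form with a Luce choice rule; the parameters, data, and function names are illustrative assumptions, not the fitted models from the paper.

```python
# Minimal sketch of the exemplar-vs-prototype contrast the paper evaluates.
# Parameter values and toy data are illustrative, not the paper's.
import numpy as np

def exemplar_evidence(stim, exemplars, c=1.0):
    """GCM-style evidence: summed similarity to every stored exemplar."""
    dists = np.linalg.norm(exemplars - stim, axis=1)
    return np.sum(np.exp(-c * dists))

def prototype_evidence(stim, exemplars, c=1.0):
    """Prototype evidence: similarity to the category average alone."""
    prototype = exemplars.mean(axis=0)
    return np.exp(-c * np.linalg.norm(prototype - stim))

rng = np.random.default_rng(1)
cat_a = rng.normal(0.0, 1.0, size=(10, 4))  # 10 category-A exemplars, 4 features
cat_b = rng.normal(2.0, 1.0, size=(10, 4))  # 10 category-B exemplars
stim = rng.normal(0.0, 1.0, size=4)         # a stimulus to be categorized

# Luce-choice probability of responding "A" under each account.
for name, f in [("exemplar", exemplar_evidence), ("prototype", prototype_evidence)]:
    a, b = f(stim, cat_a), f(stim, cat_b)
    print(f"{name}: P(A) = {a / (a + b):.2f}")
```

The two accounts often agree on the chosen category while differing in their internal representations, which is why the paper adjudicates between them with brain data rather than choice behavior alone.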
Frontiers in Psychology | 2011
Michael L. Mack; Thomas J. Palmeri
An object can be categorized at different levels of abstraction: as natural or man-made, animal or plant, bird or dog, or as a Northern Cardinal or Pyrrhuloxia. There has been growing interest in understanding how quickly categorizations at different levels are made and how the timing of those perceptual decisions changes with experience. We specifically contrast two perspectives on the timing of object categorization at different levels of abstraction. By one account, the relative timing reflects a sequence of stages of visual processing tied to particular levels of object categorization: fast categorizations are fast because they precede other categorizations within the visual processing hierarchy. By another account, the relative timing reflects when perceptual features become available over time and the quality of the perceptual evidence used to drive a perceptual decision process: fast simply means fast; it does not mean first. Understanding the short-term and long-term temporal dynamics of object categorization is key to developing computational models of visual object recognition. We briefly review a number of models of object categorization and outline how they explain the timing of visual object categorization at different levels of abstraction.
Journal of Vision | 2010
Michael L. Mack; Thomas J. Palmeri
How does object perception influence scene perception? A recent study of ultrarapid scene categorization (O. R. Joubert, G. A. Rousselet, D. Fize, & M. Fabre-Thorpe, 2007) reported facilitated scene categorization when scenes contained consistent objects compared to when scenes contained inconsistent objects. One proposal for this consistent-object advantage is that ultrarapid scene categorization is influenced directly by ultrarapid recognition of particular objects within the scene. We instead asked whether a simpler mechanism that relied only on scene categorization without any explicit object recognition could explain this consistent-object advantage. We combined a computational model of scene recognition based on global scene statistics (A. Oliva & A. Torralba, 2001) with a diffusion model of perceptual decision making (R. Ratcliff, 1978). This model is sufficient to account for the consistent-object advantage. The simulations suggest that this consistent-object advantage need not arise from ultrarapid object recognition influencing ultrarapid scene categorization, but from the inherent influence certain objects have on the global scene statistics diagnostic for scene categorization.
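The model architecture described above, global scene statistics feeding a diffusion-model decision, can be sketched in a few lines. In the sketch below, a drift-rate parameter stands in for the quality of the global-statistic evidence, with consistent objects assumed to raise it and inconsistent objects to lower it; all parameter values are illustrative assumptions, not the paper's fitted parameters or actual GIST computation.

```python
# Minimal sketch pairing a scene-statistic "evidence quality" value with a
# drift-diffusion decision, in the spirit of the model described above.
# Drift values and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def diffusion_trial(drift, bound=1.0, noise=1.0, dt=0.001, max_t=5.0):
    """Simulate one two-boundary diffusion trial; return (correct, RT)."""
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return (x >= bound), t

# Suppose consistent objects nudge the global statistics toward the scene
# category (higher drift) and inconsistent objects away from it (lower drift).
for label, drift in [("consistent", 1.2), ("inconsistent", 0.8)]:
    trials = [diffusion_trial(drift) for _ in range(500)]
    acc = np.mean([c for c, _ in trials])
    rt = np.mean([t for c, t in trials if c])
    print(f"{label}: accuracy = {acc:.2f}, mean correct RT = {rt * 1000:.0f} ms")
```

The point of the simulation is that a consistent-object advantage in both speed and accuracy falls out of a single evidence-quality difference, with no explicit object recognition stage required.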
Journal of Experimental Psychology: Human Perception and Performance | 2010
Michael L. Mack; Thomas J. Palmeri
We investigated whether there exists a behavioral dependency between object detection and categorization. Previous work (Grill-Spector & Kanwisher, 2005) suggests that object detection and basic-level categorization may be the very same perceptual mechanism: as objects are parsed from the background, they are categorized at the basic level. In the current study, we decoupled object detection from categorization by manipulating the between-category contrast of the categorization decision. With a superordinate-level contrast that included people as a target category (e.g., cars vs. people), which replicates Grill-Spector and Kanwisher, we found that success at object detection depended on success at basic-level categorization and vice versa. But with a basic-level contrast (e.g., cars vs. boats) or a superordinate-level contrast without people as a target category (e.g., dogs vs. boats), success at object detection did not depend on success at basic-level categorization. Successful object detection could occur without successful basic-level categorization. Object detection and basic-level categorization do not seem to occur within the same early stage of visual processing.
Psychonomic Bulletin & Review | 2011
Michael L. Mack; Jennifer J. Richler; Isabel Gauthier; Thomas J. Palmeri
The theoretical framework of General Recognition Theory (GRT; Ashby & Townsend, Psychological Review, 93, 154–179, 1986), coupled with the empirical analysis tools of Multidimensional Signal Detection Analysis (MSDA; Kadlec & Townsend, Multidimensional models of perception and recognition, pp. 181–228, 1992), has become one important method for assessing dimensional interactions in perceptual decision-making. In this article, we critically examine MSDA and characterize cases where it is unable to discriminate between two sources of dimensional interaction: violations of perceptual separability and violations of decisional separability. We performed simulations with known violations of perceptual or decisional separability, applied MSDA to the data generated by these simulations, and evaluated MSDA on its ability to accurately characterize the perceptual versus decisional source of these simulated dimensional interactions. Critically, violations of perceptual separability were often mischaracterized by MSDA as violations of decisional separability.
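The simulation logic is roughly as follows: generate identification responses from a GRT model in which perceptual separability is known to fail while decisional separability holds, then ask whether MSDA recovers the true source of the interaction. The sketch below shows a generic version of such a generating model for a 2x2 stimulus design; the means, decision bounds, and sample sizes are illustrative assumptions, not the paper's simulation settings.

```python
# Minimal sketch of a GRT generating model with a known violation of
# perceptual separability (which MSDA must then try to recover).
# All values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

# 2x2 design: two levels on each of two stimulus dimensions. Perceptual
# separability is violated: the dimension-1 mean shifts by +0.5 depending
# on the level of dimension 2.
means = {
    ("low", "low"):   (0.0, 0.0),
    ("low", "high"):  (0.5, 2.0),  # dim-1 mean depends on dim-2 level
    ("high", "low"):  (2.0, 0.0),
    ("high", "high"): (2.5, 2.0),
}

def identify(sample, bounds=(1.25, 1.0)):
    """Decisional separability holds: a fixed criterion on each dimension."""
    return ("high" if sample[0] > bounds[0] else "low",
            "high" if sample[1] > bounds[1] else "low")

# Identification-confusion data: for each presented stimulus, count responses.
for stim, mu in means.items():
    samples = rng.multivariate_normal(mu, np.eye(2), size=1000)
    responses = {}
    for s in samples:
        r = identify(s)
        responses[r] = responses.get(r, 0) + 1
    print(stim, responses)
```

Feeding confusion matrices like these to MSDA, whose ground truth is known by construction, is what allows the paper to show where the analysis misattributes a perceptual interaction to the decisional stage.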