Publication


Featured research published by Michael C. Hout.


International Journal of Psychophysiology | 2012

Memory strength and specificity revealed by pupillometry

Megan H. Papesh; Stephen D. Goldinger; Michael C. Hout

Voice-specificity effects in recognition memory were investigated using both behavioral data and pupillometry. Volunteers initially heard spoken words and nonwords in two voices; they later provided confidence-based old/new classifications to items presented in their original voices, changed (but familiar) voices, or entirely new voices. Recognition was more accurate for old-voice items, replicating prior research. Pupillometry was used to gauge cognitive demand during both encoding and testing: enlarged pupils revealed that participants devoted greater effort to encoding items that were subsequently recognized. Further, pupil responses were sensitive to the cue match between encoding and retrieval voices, as well as memory strength. Strong memories, and those with the closest encoding-retrieval voice matches, resulted in the highest peak pupil diameters. The results are discussed with respect to episodic memory models and Whittlesea's (1997) SCAPE framework for recognition memory.
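
Peak pupil diameter is the key dependent measure here. As a rough, hypothetical illustration of how one trial's peak response might be computed (the sampling rate, window lengths, and simulated trace are all assumptions, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 60  # assumed eye-tracker sampling rate (Hz)

# Simulate one trial: a flat 500 ms pre-stimulus baseline followed by a
# 3 s evoked dilation (values in mm).
baseline_seg = 3.0 + rng.normal(0, 0.01, fs // 2)
response_seg = (3.0 + 0.3 * np.sin(np.linspace(0, np.pi, 3 * fs))
                + rng.normal(0, 0.01, 3 * fs))
trace = np.concatenate([baseline_seg, response_seg])

# Baseline-correct against the pre-stimulus mean, then take the peak.
evoked = trace - baseline_seg.mean()
print(f"Peak pupil dilation: {evoked.max():.3f} mm")
```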


Attention, Perception, & Psychophysics | 2015

Target templates: the precision of mental representations affects attentional guidance and decision-making in visual search

Michael C. Hout; Stephen D. Goldinger

When people look for things in the environment, they use target templates (mental representations of the objects they are attempting to locate) to guide attention and to assess incoming visual input as potential targets. However, unlike laboratory participants, searchers in the real world rarely have perfect knowledge regarding the potential appearance of targets. In seven experiments, we examined how the precision of target templates affects the ability to conduct visual search. Specifically, we degraded template precision in two ways: 1) by contaminating searchers' templates with inaccurate features, and 2) by introducing extraneous, unhelpful features to the template. We recorded eye movements to allow inferences regarding the relative extents to which attentional guidance and decision-making are hindered by template imprecision. Our findings support a dual-function theory of the target template and highlight the importance of examining template precision in visual search.


Attention, Perception, & Psychophysics | 2010

Learning in repeated visual search

Michael C. Hout; Stephen D. Goldinger

Visual search (e.g., finding a specific object in an array of other objects) is performed most effectively when people are able to ignore distracting nontargets. In repeated search, however, incidental learning of object identities may facilitate performance. In three experiments, with over 1,100 participants, we examined the extent to which search could be facilitated by object memory and by memory for spatial layouts. Participants searched for new targets (real-world, nameable objects) embedded among repeated distractors. To make the task more challenging, some participants performed search for multiple targets, increasing demands on visual working memory (WM). Following search, memory for search distractors was assessed using a surprise two-alternative forced choice recognition memory test with semantically matched foils. Search performance was facilitated by distractor object learning and by spatial memory; it was most robust when object identity was consistently tied to spatial locations and weakest (or absent) when object identities were inconsistent across trials. Incidental memory for distractors was better among participants who searched under high WM load, relative to low WM load. These results were observed when visual search included exhaustive-search trials (Experiment 1) or when all trials were self-terminating (Experiment 2). In Experiment 3, stimulus exposure was equated across WM load groups by presenting objects in a single-object stream; recognition accuracy was similar to that in Experiments 1 and 2. Together, the results suggest that people incidentally generate memory for nontarget objects encountered during search and that such memory can facilitate search performance.


Journal of Experimental Psychology: General | 2013

The versatility of SpAM: a fast, efficient, spatial method of data collection for multidimensional scaling.

Michael C. Hout; Stephen D. Goldinger; Ryan Ferguson

Although traditional methods to collect similarity data (for multidimensional scaling [MDS]) are robust, they share a key shortcoming. Specifically, the possible pairwise comparisons in any set of objects grow rapidly as a function of set size. This leads to lengthy experimental protocols, or procedures that involve scaling stimulus subsets. We review existing methods of collecting similarity data, and critically examine the spatial arrangement method (SpAM) proposed by Goldstone (1994a), in which similarity ratings are obtained by presenting many stimuli at once. The participant moves stimuli around the computer screen, placing them at distances from one another that are proportional to subjective similarity. This provides a fast, efficient, and user-friendly method for obtaining MDS spaces. Participants gave similarity ratings to artificially constructed visual stimuli (comprising 2-3 perceptual dimensions) and nonvisual stimuli (animal names) with less-defined underlying dimensions. Ratings were obtained with 4 methods: pairwise comparisons, spatial arrangement, and 2 novel hybrid methods. We compared solutions from alternative methods to the pairwise method, finding that SpAM produces high-quality MDS solutions. Monte Carlo simulations on degraded data suggest that the method is also robust to reductions in sample sizes and granularity. Moreover, coordinates derived from SpAM solutions accurately predicted discrimination among objects in same-different classification. We address the benefits of using a spatial medium to collect similarity measures.
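
As a concrete illustration of the pipeline the abstract describes, the sketch below (assuming NumPy, SciPy, and scikit-learn; the positions and set size are made up) converts one participant's on-screen arrangement into a dissimilarity matrix and recovers an MDS solution from it:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Hypothetical screen positions (pixels) at which one participant placed
# eight stimuli; nearer placements signal greater subjective similarity.
positions = np.array([
    [120.0,  80.0], [140.0,  95.0], [400.0, 300.0], [420.0, 310.0],
    [700.0, 100.0], [690.0, 130.0], [300.0, 500.0], [320.0, 480.0],
])

# On-screen inter-item distances serve directly as dissimilarities
# (in practice, averaged over participants before scaling).
dissimilarities = squareform(pdist(positions))

# Recover a 2-D MDS solution from the precomputed dissimilarity matrix.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarities)
print("Coordinates:\n", coords)
print("Stress:", round(mds.stress_, 3))
```

Because every item is placed on one screen, a single trial yields all pairwise distances at once, which is where SpAM's speed advantage over exhaustive pairwise ratings comes from.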


Journal of Experimental Psychology: Human Perception and Performance | 2015

Failures of perception in the low-prevalence effect: Evidence from active and passive visual search

Michael C. Hout; Stephen C. Walenchok; Stephen D. Goldinger; Jeremy M. Wolfe

In visual search, rare targets are missed disproportionately often. This low-prevalence effect (LPE) is a robust problem with demonstrable societal consequences. What is the source of the LPE? Is it a perceptual bias against rare targets or a later process, such as premature search termination or motor response errors? In 4 experiments, we examined the LPE using standard visual search (with eye tracking) and 2 variants of rapid serial visual presentation (RSVP) in which observers made present/absent decisions after sequences ended. In all experiments, observers looked for 2 target categories (teddy bear and butterfly) simultaneously. To minimize simple motor errors, caused by repetitive absent responses, we held overall target prevalence at 50%, with 1 low-prevalence and 1 high-prevalence target type. Across conditions, observers either searched for targets among other real-world objects or searched for specific bears or butterflies among within-category distractors. We report 4 main results: (a) In standard search, high-prevalence targets were found more quickly and accurately than low-prevalence targets. (b) The LPE persisted in RSVP search, even though observers never terminated search on their own. (c) Eye-tracking analyses showed that high-prevalence targets elicited better attentional guidance and faster perceptual decisions. And (d) even when observers looked directly at low-prevalence targets, they often (12%-34% of trials) failed to detect them. These results strongly argue that low-prevalence misses represent failures of perception when early search termination or motor errors are controlled.
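
As a toy illustration of the prevalence manipulation (not the authors' materials; the exact split between target types is an assumption), a trial list can be built so that overall target prevalence stays at 50% while one category is rare:

```python
import random
from collections import Counter

random.seed(0)
n_trials = 400  # hypothetical session length

# Overall target prevalence is held at 50%: half the trials are
# target-absent, and target-present trials are split unevenly between
# the two categories (the 45%/5% split here is an assumption).
trials = (["absent"] * (n_trials // 2)
          + ["bear"] * int(n_trials * 0.45)
          + ["butterfly"] * int(n_trials * 0.05))
random.shuffle(trials)

print(Counter(trials))  # Counter({'absent': 200, 'bear': 180, 'butterfly': 20})
```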


Psychonomic Bulletin & Review | 2016

The poverty of embodied cognition

Stephen D. Goldinger; Megan H. Papesh; Anthony S. Barnhart; Whitney A. Hansen; Michael C. Hout

In recent years, there has been rapidly growing interest in embodied cognition, a multifaceted theoretical proposition that (1) cognitive processes are influenced by the body, (2) cognition exists in the service of action, (3) cognition is situated in the environment, and (4) cognition may occur without internal representations. Many proponents view embodied cognition as the next great paradigm shift for cognitive science. In this article, we critically examine the core ideas from embodied cognition, taking a “thought exercise” approach. We first note that the basic principles from embodiment theory are either unacceptably vague (e.g., the premise that perception is influenced by the body) or they offer nothing new (e.g., cognition evolved to optimize survival, emotions affect cognition, perception–action couplings are important). We next suggest that, for the vast majority of classic findings in cognitive science, embodied cognition offers no scientifically valuable insight. In most cases, the theory has no logical connections to the phenomena, other than some trivially true ideas. Beyond classic laboratory findings, embodiment theory is also unable to adequately address the basic experiences of cognitive life.


Psychonomic Bulletin & Review | 2014

Visual similarity is stronger than semantic similarity in guiding visual search for numbers

Hayward J. Godwin; Michael C. Hout; Tamaryn Menneer

Using a visual search task, we explored how behavior is influenced by both visual and semantic information. We recorded participants’ eye movements as they searched for a single target number in a search array of single-digit numbers (0–9). We examined the probability of fixating the various distractors as a function of two key dimensions: the visual similarity between the target and each distractor, and the semantic similarity (i.e., the numerical distance) between the target and each distractor. Visual similarity estimates were obtained using multidimensional scaling based on independent observers’ similarity ratings. A linear mixed-effects model demonstrated that both visual and semantic similarity influenced the probability that distractors would be fixated. However, the visual similarity effect was substantially larger than the semantic similarity effect. We close by discussing the potential value of using this novel methodological approach and the implications for both simple and complex visual search displays.
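
To make the analysis concrete, here is a minimal sketch (not the authors' code; it uses simulated data and assumes pandas and statsmodels) of a linear mixed-effects model with fixed effects for visual and semantic similarity and random intercepts for participants:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
rows = []
for subject in range(20):                  # hypothetical participants
    subj_offset = rng.normal(0, 0.03)      # per-subject random intercept
    for _ in range(45):                    # hypothetical target-distractor pairs
        visual = rng.uniform(0, 1)         # MDS-derived visual similarity
        semantic = rng.uniform(0, 1)       # scaled numerical proximity
        # Build in a larger visual than semantic effect, as in the paper.
        p_fix = (0.2 + 0.5 * visual + 0.1 * semantic
                 + subj_offset + rng.normal(0, 0.05))
        rows.append({"subject": subject, "visual": visual,
                     "semantic": semantic, "p_fix": p_fix})
df = pd.DataFrame(rows)

# Fixed effects for both similarity dimensions; random intercepts by subject.
model = smf.mixedlm("p_fix ~ visual + semantic", df, groups=df["subject"])
print(model.fit().summary())
```

With data simulated this way, the fitted coefficient for visual similarity comes out several times larger than the semantic one, mirroring the pattern the abstract reports.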


FEMS Microbiology Ecology | 2014

Host species and developmental stage, but not host social structure, affects bacterial community structure in socially polymorphic bees

Quinn S. McFrederick; William T. Wcislo; Michael C. Hout; Ulrich G. Mueller

Social transmission and host developmental stage are thought to profoundly affect the structure of bacterial communities associated with honey bees and bumble bees, but these ideas have not been explored in other bee species. The halictid bees Megalopta centralis and M. genalis exhibit intrapopulation social polymorphism, which we exploit to test whether bacterial communities differ by host social structure, developmental stage, or host species. We collected social and solitary Megalopta nests and sampled bees and nest contents from all stages of host development. To survey these bacterial communities, we used 16S rRNA gene 454 pyrosequencing. We found no effect of social structure, but found differences by host species and developmental stage. Wolbachia prevalence differed between the two host species. Bacterial communities associated with different developmental stages appeared to be driven by environmentally acquired bacteria. A Lactobacillus kunkeei clade bacterium that is consistently associated with other bee species was dominant in pollen provisions and larval samples, but less abundant in mature larvae and pupae. Foraging adults appeared to often reacquire L. kunkeei clade bacteria, likely while foraging at flowers. Environmental transmission appears to be more important than social transmission for Megalopta bees at the cusp between social and solitary behavior.


Frontiers in Psychology | 2015

Is the preference of natural versus man-made scenes driven by bottom-up processing of the visual features of nature?

Omid Kardan; Emre Demiralp; Michael C. Hout; MaryCarol R. Hunter; Hossein Karimi; Taylor Hanayik; Grigori Yourganov; John Jonides; Marc G. Berman

Previous research has shown that viewing images of nature scenes can have a beneficial effect on memory, attention, and mood. In this study, we aimed to determine whether the preference of natural versus man-made scenes is driven by bottom–up processing of the low-level visual features of nature. We used participants’ ratings of perceived naturalness as well as esthetic preference for 307 images with varied natural and urban content. We then quantified 10 low-level image features for each image (a combination of spatial and color properties). These features were used to predict esthetic preference in the images, as well as to decompose perceived naturalness to its predictable (modeled by the low-level visual features) and non-modeled aspects. Interactions of these separate aspects of naturalness with the time it took to make a preference judgment showed that naturalness based on low-level features related more to preference when the judgment was faster (bottom–up). On the other hand, perceived naturalness that was not modeled by low-level features was related more to preference when the judgment was slower. A quadratic discriminant classification analysis showed how relevant each aspect of naturalness (modeled and non-modeled) was to predicting preference ratings, as well as the image features on their own. Finally, we compared the effect of color-related and structure-related modeled naturalness, and the remaining unmodeled naturalness in predicting esthetic preference. In summary, bottom–up (color and spatial) properties of natural images captured by our features and the non-modeled naturalness are important to esthetic judgments of natural and man-made scenes, with each predicting unique variance.
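
A rough sketch of the general approach, assuming scikit-image and scikit-learn: extract a few low-level color and spatial features from images (synthetic ones here) and classify preference with quadratic discriminant analysis. The four features below are illustrative stand-ins for the ten used in the study:

```python
import numpy as np
from skimage import color, filters
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def make_image(natural):
    """Synthesize a toy 64x64 RGB image: green-tinted texture for
    'natural', a flat gray block scene for 'man-made'."""
    if natural:
        img = rng.uniform(0, 1, (64, 64, 3))
        img[..., 1] = np.clip(img[..., 1] * 1.5, 0, 1)  # push toward green
        return img
    img = np.full((64, 64, 3), 0.5)
    img[16:48, 16:48] = 0.8                             # uniform "building"
    return np.clip(img + rng.normal(0, 0.02, img.shape), 0, 1)

def low_level_features(img):
    hsv = color.rgb2hsv(img)
    edges = filters.sobel(color.rgb2gray(img))
    return [hsv[..., 0].mean(),   # mean hue
            hsv[..., 1].mean(),   # mean saturation
            hsv[..., 2].std(),    # brightness variability
            edges.mean()]         # edge density (a spatial property)

labels = rng.integers(0, 2, 60)   # 1 = natural scene, 0 = man-made
X = np.array([low_level_features(make_image(bool(l))) for l in labels])

qda = QuadraticDiscriminantAnalysis()
print("CV accuracy:", cross_val_score(qda, X, labels, cv=5).mean())
```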


PLOS ONE | 2014

MM-MDS: A Multidimensional Scaling Database with Similarity Ratings for 240 Object Categories from the Massive Memory Picture Database

Michael C. Hout; Stephen D. Goldinger; Kyle Brady

Cognitive theories in visual attention and perception, categorization, and memory often critically rely on concepts of similarity among objects, and empirically require measures of “sameness” among their stimuli. For instance, a researcher may require similarity estimates among multiple exemplars of a target category in visual search, or targets and lures in recognition memory. Quantifying similarity, however, is challenging when everyday items are the desired stimulus set, particularly when researchers require several different pictures from the same category. In this article, we document a new multidimensional scaling database with similarity ratings for 240 categories, each containing color photographs of 16–17 exemplar objects. We collected similarity ratings using the spatial arrangement method. Reports include: the multidimensional scaling solutions for each category, up to five dimensions, stress and fit measures, coordinate locations for each stimulus, and two new classifications. For each picture, we assessed the item’s prototypicality, indexed by its proximity to the other items in the space. We also classified pairs of images along a continuum of similarity, by assessing the overall arrangement of each MDS space. These similarity ratings will be useful to any researcher who wishes to control the similarity of experimental stimuli according to an objective quantification of “sameness.”
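
The prototypicality index described above lends itself to a short sketch: treat an item's proximity to its category neighbors in the MDS space as prototypicality. The coordinates below are made up; in practice they would come from the database's solutions. Assumes NumPy and SciPy:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
coords = rng.normal(size=(16, 2))   # 16 exemplars in a made-up 2-D solution

# Mean distance from each exemplar to the rest of its category.
dists = squareform(pdist(coords))
mean_dist = dists.sum(axis=1) / (len(coords) - 1)

# Items nearer the center of the space are the most prototypical.
ranking = np.argsort(mean_dist)
print("Exemplars from most to least prototypical:", ranking)
```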

Collaboration


Dive into Michael C. Hout's collaborations.

Top Co-Authors

Arryn Robbins, New Mexico State University
Megan H. Papesh, Louisiana State University
Collin Scarince, New Mexico State University
Jeremy M. Wolfe, Brigham and Women's Hospital
Grigori Yourganov, University of South Carolina
Hossein Karimi, University of South Carolina