Publications


Featured research published by John F. Magnotti.


Frontiers in Psychology | 2013

Causal inference of asynchronous audiovisual speech.

John F. Magnotti; Wei Ji Ma; Michael S. Beauchamp

During speech perception, humans integrate auditory information from the voice with visual information from the face. This multisensory integration increases perceptual precision, but only if the two cues come from the same talker; this requirement has been largely ignored by current models of speech perception. We describe a generative model of multisensory speech perception that includes this critical step of determining the likelihood that the voice and face information have a common cause. A key feature of the model is that it is based on a principled analysis of how an observer should solve this causal inference problem using the asynchrony between two cues and the reliability of the cues. This allows the model to make predictions about the behavior of subjects performing a synchrony judgment task, predictive power that does not exist in other approaches, such as post-hoc fitting of Gaussian curves to behavioral data. We tested the model predictions against the performance of 37 subjects performing a synchrony judgment task viewing audiovisual speech under a variety of manipulations, including varying asynchronies, intelligibility, and visual cue reliability. The causal inference model outperformed the Gaussian model across two experiments, providing a better fit to the behavioral data with fewer parameters. Because the causal inference model is derived from a principled understanding of the task, model parameters are directly interpretable in terms of stimulus and subject properties.
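As a rough illustration of the causal-inference step this abstract describes (not the authors' published implementation), the sketch below compares the likelihood of a measured audiovisual asynchrony under a common-cause hypothesis against an independent-cause hypothesis. The Gaussian noise assumption, the uniform window for unrelated cues, and all parameter values are illustrative assumptions.

from scipy.stats import norm

def p_common_cause(asynchrony_ms, sigma_ms=80.0, spread_ms=400.0, prior_common=0.5):
    """Toy causal-inference rule for a synchrony judgment (illustrative only).

    asynchrony_ms : measured audiovisual asynchrony
    sigma_ms      : assumed sensory noise on that measurement
    spread_ms     : assumed half-width of asynchronies for unrelated cues
    prior_common  : assumed prior probability of a common cause
    """
    # If voice and face share one cause, the measured asynchrony should be
    # near zero, blurred only by sensory noise.
    like_common = norm.pdf(asynchrony_ms, loc=0.0, scale=sigma_ms)
    # If the cues are unrelated, any asynchrony in a wide window is equally likely.
    like_independent = 1.0 / (2.0 * spread_ms)
    posterior = prior_common * like_common
    return posterior / (posterior + (1.0 - prior_common) * like_independent)

# Small asynchronies favor a common cause; large ones do not.
print(p_common_cause(30.0))   # high probability of a common cause
print(p_common_cause(350.0))  # low probability of a common cause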


Psychonomic Bulletin & Review | 2015

The noisy encoding of disparity model of the McGurk effect

John F. Magnotti; Michael S. Beauchamp

In the McGurk effect, incongruent auditory and visual syllables are perceived as a third, completely different syllable. This striking illusion has become a popular assay of multisensory integration for individuals and clinical populations. However, there is enormous variability in how often the illusion is evoked by different stimuli and how often the illusion is perceived by different individuals. Most studies of the McGurk effect have used only one stimulus, making it impossible to separate stimulus and individual differences. We created a probabilistic model to separately estimate stimulus and individual differences in behavioral data from 165 individuals viewing up to 14 different McGurk stimuli. The noisy encoding of disparity (NED) model characterizes stimuli by their audiovisual disparity and characterizes individuals by how noisily they encode the stimulus disparity and by their disparity threshold for perceiving the illusion. The model accurately described perception of the McGurk effect in our sample, suggesting that differences between individuals are stable across stimulus differences. The most important benefit of the NED model is that it provides a method to compare multisensory integration across individuals and groups without the confound of stimulus differences. An added benefit is the ability to predict frequency of the McGurk effect for stimuli never before seen by an individual.
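A minimal functional form consistent with the description above, assuming a probit link and the convention that the illusion is reported when the noisily encoded disparity exceeds the individual's threshold; the published parameterization may differ, and the parameter names are illustrative.

from scipy.stats import norm

def p_mcgurk(stimulus_disparity, noise_sd, threshold):
    """Sketch of a noisy-encoding-of-disparity style rule (assumed form).

    Each stimulus is summarized by an audiovisual disparity; each individual
    by how noisily that disparity is encoded (noise_sd) and by a disparity
    threshold for reporting the illusion.
    """
    return 1.0 - norm.cdf(threshold, loc=stimulus_disparity, scale=noise_sd)

# A given individual reports the illusion often for a high-disparity stimulus
# and rarely for a low-disparity one.
print(p_mcgurk(stimulus_disparity=0.8, noise_sd=0.2, threshold=0.5))
print(p_mcgurk(stimulus_disparity=0.2, noise_sd=0.2, threshold=0.5))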


Attention, Perception, & Psychophysics | 2015

A link between individual differences in multisensory speech perception and eye movements

Demet Gurler; Nathan Doyle; Edgar Walker; John F. Magnotti; Michael S. Beauchamp

The McGurk effect is an illusion in which visual speech information dramatically alters the perception of auditory speech. However, there is a high degree of individual variability in how frequently the illusion is perceived: some individuals almost always perceive the McGurk effect, while others rarely do. Another axis of individual variability is the pattern of eye movements individuals make while viewing a talking face: some individuals often fixate the mouth of the talker, while others rarely do. Since the talker's mouth carries the visual speech information necessary to induce the McGurk effect, we hypothesized that individuals who frequently perceive the McGurk effect should spend more time fixating the talker's mouth. We used infrared eye tracking to study eye movements as 40 participants viewed audiovisual speech. Frequent perceivers of the McGurk effect were more likely to fixate the mouth of the talker, and there was a significant correlation between McGurk frequency and mouth looking time. The noisy encoding of disparity model of McGurk perception showed that individuals who frequently fixated the mouth had lower sensory noise and higher disparity thresholds than those who rarely fixated the mouth. Differences in eye movements when viewing the talker’s face may be an important contributor to interindividual differences in multisensory speech perception.


Psychonomic Bulletin & Review | 2010

Testing pigeon memory in a change detection task

Anthony A. Wright; Jeffrey S. Katz; John F. Magnotti; L. Caitlin Elmore; Stephanie Babb; Sarah Alwin

Six pigeons were trained in a change detection task with four colors. They were shown two colored circles on a sample array, followed by a test array with the color of one circle changed. The pigeons learned to choose the changed color and transferred their performance to four unfamiliar colors, suggesting that they had learned a generalized concept of color change. They also transferred performance to test delays several times their 50-msec training delay without prior delay training. The accurate delay performance of several seconds suggests that their change detection was memory based, as opposed to a perceptual attentional capture process. These experiments are the first to show that an animal species (pigeons, in this case) can learn a change detection task identical to ones used to test human memory, thereby providing the possibility of directly comparing short-term memory processing across species.


Biology Letters | 2015

Superior abstract-concept learning by Clark's nutcrackers (Nucifraga columbiana)

John F. Magnotti; Jeffrey S. Katz; Anthony A. Wright; Debbie M. Kelly

The ability to learn abstract relational concepts is fundamental to higher level cognition. In contrast to item-specific concepts (e.g. pictures containing trees versus pictures containing cars), abstract relational concepts are not bound to particular stimulus features, but instead involve the relationship between stimuli and therefore may be extrapolated to novel stimuli. Previous research investigating the same/different abstract concept has suggested that primates might be specially adapted to extract relations among items and would require fewer exemplars of a rule to learn an abstract concept than non-primate species. We assessed abstract-concept learning in an avian species, Clark's nutcracker (Nucifraga columbiana), using a small number of exemplars (eight pairs of the same rule and 56 pairs of the different rule) identical to that previously used to compare rhesus monkeys, capuchin monkeys and pigeons. Nutcrackers as a group (N = 9) showed more novel stimulus transfer than any previous species tested with this small number of exemplars. Two nutcrackers showed full concept learning and four more showed transfer considerably above chance performance, indicating partial concept learning. These results show that the Clark's nutcracker, a corvid species well known for its amazing feats of spatial memory, learns the same/different abstract concept better than any non-human species (including non-human primates) yet tested on this same task.


PLOS Computational Biology | 2017

A Causal Inference Model Explains Perception of the McGurk Effect and Other Incongruent Audiovisual Speech

John F. Magnotti; Michael S. Beauchamp

Audiovisual speech integration combines information from auditory speech (talker’s voice) and visual speech (talker’s mouth movements) to improve perceptual accuracy. However, if the auditory and visual speech emanate from different talkers, integration decreases accuracy. Therefore, a key step in audiovisual speech perception is deciding whether auditory and visual speech have the same source, a process known as causal inference. A well-known illusion, the McGurk Effect, consists of incongruent audiovisual syllables, such as auditory “ba” + visual “ga” (AbaVga), that are integrated to produce a fused percept (“da”). This illusion raises two fundamental questions: first, given the incongruence between the auditory and visual syllables in the McGurk stimulus, why are they integrated; and second, why does the McGurk effect not occur for other, very similar syllables (e.g., AgaVba). We describe a simplified model of causal inference in multisensory speech perception (CIMS) that predicts the perception of arbitrary combinations of auditory and visual speech. We applied this model to behavioral data collected from 60 subjects perceiving both McGurk and non-McGurk incongruent speech stimuli. The CIMS model successfully predicted both the audiovisual integration observed for McGurk stimuli and the lack of integration observed for non-McGurk stimuli. An identical model without causal inference failed to accurately predict perception for either form of incongruent speech. The CIMS model uses causal inference to provide a computational framework for studying how the brain performs one of its most important tasks, integrating auditory and visual speech cues to allow us to communicate with others.
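The sketch below illustrates the general causal-inference-with-model-averaging idea the abstract describes, reduced to a single one-dimensional cue; it is not the published CIMS implementation, and the prior, noise terms, and fallback-to-auditory rule are simplifying assumptions.

import numpy as np
from scipy.stats import norm

def cims_sketch(x_aud, x_vis, sigma_aud=1.0, sigma_vis=1.0,
                prior_common=0.5, spread=10.0):
    """Toy causal-inference estimate for a single audiovisual cue (assumed form)."""
    # Reliability-weighted fusion, used if the cues share a cause.
    w_aud = sigma_vis**2 / (sigma_aud**2 + sigma_vis**2)
    fused = w_aud * x_aud + (1.0 - w_aud) * x_vis

    # Posterior probability of a common cause, based on how close the two
    # measurements are relative to their combined noise.
    like_common = norm.pdf(x_aud - x_vis, scale=np.sqrt(sigma_aud**2 + sigma_vis**2))
    like_independent = 1.0 / spread
    p_common = (prior_common * like_common /
                (prior_common * like_common + (1.0 - prior_common) * like_independent))

    # Model averaging: integrate when a common cause is likely, otherwise
    # fall back on the auditory estimate alone.
    return p_common * fused + (1.0 - p_common) * x_aud

print(cims_sketch(x_aud=0.0, x_vis=0.5))  # similar cues: percept pulled toward fusion
print(cims_sketch(x_aud=0.0, x_vis=6.0))  # discrepant cues: percept stays near auditory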


Journal of Cognitive Neuroscience | 2017

A Double Dissociation between Anterior and Posterior Superior Temporal Gyrus for Processing Audiovisual Speech Demonstrated by Electrocorticography

Muge Ozker; Inga M. Schepers; John F. Magnotti; Daniel Yoshor; Michael S. Beauchamp

Human speech can be comprehended using only auditory information from the talker's voice. However, comprehension is improved if the talker's face is visible, especially if the auditory information is degraded, as occurs in noisy environments or with hearing loss. We explored the neural substrates of audiovisual speech perception using electrocorticography, direct recording of neural activity using electrodes implanted on the cortical surface. We observed a double dissociation in the responses to audiovisual speech with a clear or noisy auditory component within the superior temporal gyrus (STG), a region long known to be important for speech perception. Anterior STG showed greater neural activity to audiovisual speech with a clear auditory component, whereas posterior STG showed similar or greater neural activity to audiovisual speech in which the auditory speech was replaced with speech-like noise. A distinct border between the two response patterns was observed, demarcated by a landmark corresponding to the posterior margin of Heschl's gyrus. To further investigate the computational roles of both regions, we considered Bayesian models of multisensory integration, which predict that combining the independent sources of information available from different modalities should reduce variability in the neural responses. We tested this prediction by measuring the variability of the neural responses to single audiovisual words. Posterior STG showed smaller variability than anterior STG during presentation of audiovisual speech with a noisy auditory component. Taken together, these results suggest that posterior STG but not anterior STG is important for multisensory integration of noisy auditory and visual speech.
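The variability-reduction prediction mentioned above follows from standard reliability-weighted (Bayesian) cue combination; the expression below is the generic textbook form, not the specific model fit to the electrocorticography data.

\[
\hat{s}_{AV} = \frac{\sigma_V^2}{\sigma_A^2 + \sigma_V^2}\,\hat{s}_A + \frac{\sigma_A^2}{\sigma_A^2 + \sigma_V^2}\,\hat{s}_V,
\qquad
\sigma_{AV}^2 = \frac{\sigma_A^2\,\sigma_V^2}{\sigma_A^2 + \sigma_V^2} \le \min(\sigma_A^2, \sigma_V^2),
\]

so the combined estimate is never more variable than the better single cue, which is the signature of integration tested in posterior STG.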


Psychonomic Bulletin & Review | 2017

Abstract-concept learning in Black-billed magpies (Pica hudsonia)

John F. Magnotti; Anthony A. Wright; Kevin Leonard; Jeffrey S. Katz; Debbie M. Kelly

Abstract relational concepts depend upon relationships between stimuli (e.g., same vs. different) and transcend features of the training stimuli. Recent evidence shows that learning abstract concepts is shared across a variety of species, including birds. Our recent work with a highly skilled food-storing bird, Clark’s nutcracker, revealed superior same/different abstract-concept learning compared to rhesus monkeys, capuchin monkeys, and pigeons. Here we test a corvid species that is more social but less reliant on food storing, the black-billed magpie (Pica hudsonia). We used the same procedures and training exemplars (eight pairs of the same rule and 56 pairs of the different rule) as were used to test the other species. Magpies (n = 10) showed a level of abstract-concept learning that was equivalent to nutcrackers and greater than the primates and pigeons tested with these same exemplars. These findings suggest that superior initial abstract-concept learning abilities may be shared across corvids generally, rather than confined to those strongly reliant on spatial memory.


Psychological Science | 2017

Corvids Outperform Pigeons and Primates in Learning a Basic Concept

Anthony A. Wright; John F. Magnotti; Jeffrey S. Katz; Kevin Leonard; Alizée Vernouillet; Debbie M. Kelly

Corvids (birds of the family Corvidae) display intelligent behavior previously ascribed only to primates, but such feats are not directly comparable across species. To make direct species comparisons, we used a same/different task in the laboratory to assess abstract-concept learning in black-billed magpies (Pica hudsonia). Concept learning was tested with novel pictures after training. Concept learning improved with training-set size, and test accuracy eventually matched training accuracy—full concept learning—with a 128-picture set; this magpie performance was equivalent to that of Clark’s nutcrackers (a species of corvid) and monkeys (rhesus, capuchin) and better than that of pigeons. Even with an initial 8-item picture set, both corvid species showed partial concept learning, outperforming both monkeys and pigeons. Similar corvid performance refutes the hypothesis that nutcrackers’ prolific cache-location memory accounts for their superior concept learning, because magpies rely less on caching. That corvids with “primitive” neural architectures evolved to equal primates in full concept learning and even to outperform them on the initial 8-item picture test is a testament to the shared (convergent) survival importance of abstract-concept learning.


Behavioural Processes | 2010

Toward a framework for the evaluation of feature binding in pigeons

Jeffrey S. Katz; Robert G. Cook; John F. Magnotti

Pigeons were trained in a new procedure to test for visual binding errors between the dimensions of color and shape. In Experiment 1, pigeons learned to discriminate a target compound from 15 non-target compounds (constructed from four colors and shapes) by choosing one of two hoppers in a two-hopper choice task. The similarity of the target to non-target stimuli influenced choice responding. In Experiment 2, pigeons learned to detect a target compound when presented with a non-target compound within the same trial under conditions of simultaneity and sequentiality. Non-target trials were arranged to allow for the testing of binding errors (i.e., false identifications of the target on certain non-target trials). Transient evidence for binding errors in two of the birds occurred at the start of two-item training, but decreased with training. The experiments represent an important step toward developing a framework for the evaluation of visual feature binding in nonhumans.

Collaboration


Dive into John F. Magnotti's collaborations.

Top Co-Authors

Anthony A. Wright
University of Texas Health Science Center at Houston

L. Caitlin Elmore
University of Texas Health Science Center at Houston

Antony D. Passaro
University of Texas Health Science Center at Houston