Aaron D. Mitchel
Pennsylvania State University
Publications
Featured research published by Aaron D. Mitchel.
Language Learning and Development | 2009
Daniel J. Weiss; Chip Gerfen; Aaron D. Mitchel
Studies using artificial language streams indicate that infants and adults can use statistics to correctly segment words. However, most studies have utilized only a single input language. Given the prevalence of bilingualism, how is multiple language input segmented? One particular problem may occur if learners combine input across languages: The statistics of particular units that overlap different languages may subsequently change and disrupt correct segmentation. Our study addresses this issue by employing artificial language streams to simulate the earliest stages of segmentation in adult L2-learners. In four experiments, participants tracked multiple sets of statistics for two artificial languages. Our results demonstrate that adult learners can track two sets of statistics simultaneously, suggesting that they form multiple representations when confronted with bilingual input. This work, along with planned infant experiments, informs a central issue in bilingualism research, namely, determining at what point listeners can form multiple representations when exposed to multiple languages.
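The "statistics" in studies of this kind are standardly forward transitional probabilities between adjacent syllables: transitions inside a word are high, while transitions that span a word boundary are low. A minimal Python sketch of that computation, using invented syllables rather than the study's actual stimuli:

```python
import random
from collections import Counter

def transitional_probabilities(stream):
    """Forward transitional probability P(B | A) for each adjacent pair."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Three hypothetical tri-syllabic "words" of an artificial language.
words = [("pa", "bi", "ku"), ("go", "la", "tu"), ("da", "ro", "pi")]
rng = random.Random(0)
stream = [syl for _ in range(200) for syl in rng.choice(words)]

tps = transitional_probabilities(stream)
print(tps[("pa", "bi")])   # within-word transition: 1.0
print(tps[("ku", "go")])   # boundary-spanning transition: roughly 1/3
```

Interleaving a second artificial language into the same stream changes these pair counts, which is exactly the disruption at issue: probabilities computed over the combined input may no longer cleanly separate words from part-words.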
Language and Cognitive Processes | 2010
Daniel J. Weiss; Chip Gerfen; Aaron D. Mitchel
The process of word segmentation is flexible, with many strategies potentially available to learners. This experiment explores how segmentation cues interact, and whether successful resolution of cue competition is related to general executive functioning. Participants listened to artificial speech streams that contained both statistical and pause-defined cues to word boundaries. When these cues ‘collide’ (indicating different locations for word boundaries), cue strength appears to dictate the predominant parsing strategy. When cues are relatively equal in strength, the ability to successfully deploy a segmentation strategy significantly correlates with stronger performance on the Simon task, a non-linguistic cognitive task typically thought to involve executive processes such as inhibitory control and selective attention. These results suggest that general information processing strategies may play a role in solving one of the early challenges for language learners.
Journal of Experimental Psychology: Learning, Memory, and Cognition | 2011
Aaron D. Mitchel; Daniel J. Weiss
It is currently unknown whether statistical learning is supported by modality-general or modality-specific mechanisms. One issue within this debate concerns the independence of learning in one modality from learning in other modalities. In the present study, the authors examined the extent to which statistical learning across modalities is independent by simultaneously presenting learners with auditory and visual streams. After establishing baseline rates of learning for each stream independently, they systematically varied the amount of audiovisual correspondence across 3 experiments. They found that learners were able to segment both streams successfully only when the boundaries of the audio and visual triplets were in alignment. This pattern of results suggests that learners are able to extract multiple statistical regularities across modalities provided that there is some degree of cross-modal coherence. They discuss the implications of their results in light of recent claims that multisensory statistical learning is guided by modality-independent mechanisms.
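One way to make the alignment manipulation concrete: generate an auditory triplet stream and a visual triplet stream, then either let their triplet boundaries coincide or offset one stream so every visual boundary falls inside an auditory triplet. A hypothetical Python sketch (the item labels are invented, not the experimental stimuli):

```python
import random

def triplet_stream(triplets, n, rng):
    """Concatenate n randomly chosen triplets into one flat stream."""
    return [item for _ in range(n) for item in rng.choice(triplets)]

rng = random.Random(0)
audio = triplet_stream([("pa", "bi", "ku"), ("go", "la", "tu")], 50, rng)
visual = triplet_stream([("circle", "star", "cross"), ("square", "ring", "dot")], 50, rng)

# Aligned condition: items pair up so triplet boundaries coincide.
aligned = list(zip(audio, visual))
# Misaligned condition: a one-item offset puts every visual triplet
# astride an auditory word boundary.
misaligned = list(zip(audio, visual[1:]))
```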
Language and Cognitive Processes | 2010
Aaron D. Mitchel; Daniel J. Weiss
Recent research has demonstrated that adults successfully segment two interleaved artificial speech streams with incongruent statistics (i.e., streams whose combined statistics are noisier than the encapsulated statistics) only when provided with an indexical cue of speaker voice. In a series of five experiments, our study explores whether learners can utilise visual information to encapsulate statistics for each speech stream. We initially presented learners with incongruent artificial speech streams produced by the same female voice along with an accompanying visual display. Learners successfully segmented both streams when the audio stream was presented with an indexical cue of talking faces (Experiment 1). This learning cannot be attributed to the presence of the talking face display alone, as a single face paired with a single input stream did not improve segmentation (Experiment 2). Additionally, participants failed to successfully segment two streams when they were paired with a synchronised single talking face display (Experiment 3). Likewise, learners failed to successfully segment both streams when the visual indexical cue lacked audio-visual synchrony, such as changes in background screen colour (Experiment 4) or a static face display (Experiment 5). We end by discussing the possible relevance of the speaker's face in speech segmentation and bilingual language acquisition.
Frontiers in Psychology | 2014
Aaron D. Mitchel; Morten H. Christiansen; Daniel J. Weiss
Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker’s face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally incongruent, which we predicted would produce the McGurk illusion, resulting in the perception of an audiovisual syllable (e.g., /ni/). In this way, we used the McGurk illusion to manipulate the underlying statistical structure of the speech streams, such that perception of these illusory syllables facilitated participants’ ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.
Language, Cognition and Neuroscience | 2014
Aaron D. Mitchel; Daniel J. Weiss
Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative about word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition.
Frontiers in Psychology | 2016
Laina G. Lusk; Aaron D. Mitchel
Speech is inextricably multisensory: both auditory and visual components provide critical information for all aspects of speech processing, including speech segmentation, whose visual contributions have been the target of a growing number of studies. In particular, a recent study (Mitchel and Weiss, 2014) established that adults can utilize facial cues (i.e., visual prosody) to identify word boundaries in fluent speech. The current study expanded upon these results, using an eye tracker to identify the most attended facial features of the audiovisual display used in Mitchel and Weiss (2014). Subjects spent the most time watching the eyes and mouth, and a significant trend emerged in gaze durations, with the longest durations on the mouth, followed by the eyes and then the nose. In addition, eye-gaze patterns changed across familiarization as subjects learned the word boundaries, showing decreased attention to the mouth in later blocks while attention to other facial features remained consistent. These findings highlight the importance of the visual component of speech processing and suggest that the mouth may play a critical role in visual speech segmentation.
Brain Imaging and Behavior | 2015
David W. Evans; Steven M. Lazar; K. B. Boomer; Aaron D. Mitchel; Andrew M. Michael; Gregory J. Moore
The social-cognitive deficits associated with several neurodevelopmental and neuropsychiatric disorders have been linked to structural and functional brain anomalies. Given the recent appreciation for quantitative approaches to behavior, in this study we examined brain-behavior links in social cognition in healthy young adults using quantitative measures. Twenty-two participants were administered quantitative measures of social cognition, including the social responsiveness scale (SRS), the empathizing questionnaire (EQ) and the systemizing questionnaire (SQ). Participants underwent a structural, 3-T magnetic resonance imaging (MRI) procedure that yielded both volumetric (voxel count) and asymmetry indices. Model fitting with backward elimination revealed that a combination of cortical, limbic and striatal regions accounted for significant variance in social behavior and cognitive styles that are typically associated with neurodevelopmental and neuropsychiatric disorders. Specifically, as caudate and amygdala volumes deviate from the typical R > L asymmetry, and cortical gray matter becomes more R > L asymmetrical, overall SRS and Emotion Recognition scores increase. Social Avoidance was explained by a combination of cortical gray matter, pallidum (rightward asymmetry) and caudate (deviation from rightward asymmetry). Rightward asymmetry of the pallidum was the sole predictor of Interpersonal Relationships and Repetitive Mannerisms. Increased D-scores on the EQ-SQ, an indication of greater systemizing relative to empathizing, were also explained by deviation from the typical R > L asymmetry of the caudate. These findings extend the brain-behavior links observed in neurodevelopmental disorders to the normal distribution of traits in a healthy sample.
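The abstract does not give the formula behind its asymmetry indices; a common convention in volumetric work (an assumption here, not necessarily the authors' exact measure) normalizes the left-right difference by the mean volume:

```python
def asymmetry_index(left, right):
    """A conventional volumetric asymmetry index: positive values indicate
    leftward (L > R) asymmetry, negative values rightward (R > L)."""
    return (left - right) / ((left + right) / 2.0)

# Hypothetical caudate volumes (voxel counts):
print(asymmetry_index(3900, 4100))  # -0.05, i.e., R > L asymmetry
```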
Psychological Science | 2008
R. Brooke Lea; David N. Rapp; Andrew Elfenbein; Aaron D. Mitchel; Russell Swinburne Romine
Journal of Experimental Psychology: Human Perception and Performance | 2009
Robrecht P. R. D. van der Wel; Jeffrey R. Eder; Aaron D. Mitchel; Matthew M. Walsh; David A. Rosenbaum