Publication


Featured research published by Joseph C. Toscano.


Developmental Science | 2009

Statistical learning of phonetic categories: insights from a computational approach.

Bob McMurray; Richard N. Aslin; Joseph C. Toscano

Recent evidence (Maye, Werker & Gerken, 2002) suggests that statistical learning may be an important mechanism for the acquisition of phonetic categories in the infant's native language. We examined the sufficiency of this hypothesis and its implications for development by implementing a statistical learning mechanism in a computational model based on a mixture of Gaussians (MOG) architecture. Statistical learning alone was found to be insufficient for phonetic category learning: an additional competition mechanism was required in order for the categories in the input to be successfully learnt. When competition was added to the MOG architecture, this class of models successfully accounted for developmental enhancement and loss of sensitivity to phonetic contrasts. Moreover, the MOG with competition model was used to explore a potentially important distributional property of early speech categories, sparseness, in which portions of the space between phonetic categories are unmapped. Sparseness was found in all successful models and quickly emerged during development even when the initial parameters favoured continuous representations with no gaps. The implications of these models for phonetic category learning in infants are discussed.
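As a rough illustration of the architecture described above, here is a minimal mixture-of-Gaussians learner with a winner-take-all competition step. The input distribution, update rules, and all parameter values are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic VOT-like input from two phonetic categories (values assumed,
# loosely /b/ around 0 ms and /p/ around 50 ms).
data = np.concatenate([rng.normal(0, 5, 500), rng.normal(50, 8, 500)])
rng.shuffle(data)

# Start with more Gaussians than true categories, so competition must
# prune the extras; all parameter choices here are illustrative.
K = 6
mu = rng.uniform(-20.0, 80.0, K)   # component means
var = np.full(K, 225.0)            # component variances
pi = np.full(K, 1.0 / K)           # mixing weights
eta = 0.05                         # learning rate

def likelihood(x):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

for x in data:
    win = (pi * likelihood(x)).argmax()
    # Winner-take-all competition: only the best-matching Gaussian
    # moves toward the input and gains mixing weight.
    mu[win] += eta * (x - mu[win])
    var[win] += eta * ((x - mu[win]) ** 2 - var[win])
    pi = (1 - eta) * pi
    pi[win] += eta

# Components whose weights survive competition cluster near the two
# category means; the rest collapse, leaving unmapped gaps (sparseness).
print("surviving means:", np.round(np.sort(mu[pi > 0.05]), 1))
print("weights:", np.round(np.sort(pi)[::-1], 2))
```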


Psychological Science | 2010

Continuous Perception and Graded Categorization: Electrophysiological Evidence for a Linear Relationship Between the Acoustic Signal and Perceptual Encoding of Speech

Joseph C. Toscano; Bob McMurray; Joel Dennhardt; Steven J. Luck

Speech sounds are highly variable, yet listeners readily extract information from them and transform continuous acoustic signals into meaningful categories during language comprehension. A central question is whether perceptual encoding captures acoustic detail in a one-to-one fashion or whether it is affected by phonological categories. We addressed this question in an event-related potential (ERP) experiment in which listeners categorized spoken words that varied along a continuous acoustic dimension (voice-onset time, or VOT) in an auditory oddball task. We found that VOT effects were present through a late stage of perceptual processing (N1 component, ~100 ms poststimulus) and were independent of categorization. In addition, effects of within-category differences in VOT were present at a postperceptual categorization stage (P3 component, ~450 ms poststimulus). Thus, at perceptual levels, acoustic information is encoded continuously, independently of phonological information. Further, at phonological levels, fine-grained acoustic differences are preserved along with category information.
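The linear-encoding claim can be made concrete with a small simulated analysis: regress N1 amplitude on VOT and compare the fit against a categorical step model. The numbers below are simulated for illustration and are not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data in the spirit of the design: N1 amplitude (microvolts)
# varies with VOT (ms) along a 0-40 ms continuum. All values are made up.
vot = np.repeat(np.arange(0, 45, 5), 40)
n1 = -4.0 + 0.05 * vot + rng.normal(0, 0.8, vot.size)

# Linear (continuous-encoding) model.
slope, intercept = np.polyfit(vot, n1, 1)
lin_sse = np.sum((n1 - (intercept + slope * vot)) ** 2)

# Categorical (step) model with an assumed boundary at 20 ms.
step_pred = np.where(vot < 20, n1[vot < 20].mean(), n1[vot >= 20].mean())
step_sse = np.sum((n1 - step_pred) ** 2)

print(f"linear fit: N1 = {intercept:.2f} + {slope:.3f} * VOT  (SSE {lin_sse:.1f})")
print(f"step fit SSE: {step_sse:.1f}")
```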


Attention, Perception, & Psychophysics | 2012

Cue-integration and context effects in speech: Evidence against speaking-rate normalization

Joseph C. Toscano; Bob McMurray

Listeners are able to accurately recognize speech despite variation in acoustic cues across contexts, such as different speaking rates. Previous work has suggested that listeners use rate information (indicated by vowel length; VL) to modify their use of context-dependent acoustic cues, like voice-onset time (VOT), a primary cue to voicing. We present several experiments and simulations that offer an alternative explanation: that listeners treat VL as a phonetic cue rather than as an indicator of speaking rate, and that they rely on general cue-integration principles to combine information from VOT and VL. We demonstrate that listeners use the two cues independently, that VL is used in both naturally produced and synthetic speech, and that the effects of stimulus naturalness can be explained by a cue-integration model. Together, these results suggest that listeners do not interpret VOT relative to rate information provided by VL and that the effects of speaking rate can be explained by more general cue-integration principles.
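A minimal sketch of the general cue-integration idea, assuming Gaussian cue distributions per category combined independently (naive Bayes style); all parameter values are invented for illustration and are not the model reported in the paper.

```python
import numpy as np
from scipy.stats import norm

# Illustrative category distributions (all values assumed): voiced /b/
# has short VOT and longer vowels; voiceless /p/ the reverse.
cats = {
    "voiced":    {"vot": (5.0, 8.0),   "vl": (180.0, 25.0)},
    "voiceless": {"vot": (45.0, 12.0), "vl": (150.0, 25.0)},
}

def p_voiced(vot_ms, vl_ms):
    """Combine VOT and VL as independent cues, rather than
    normalizing VOT by a VL-derived speaking rate."""
    like = {c: norm.pdf(vot_ms, *p["vot"]) * norm.pdf(vl_ms, *p["vl"])
            for c, p in cats.items()}
    return like["voiced"] / (like["voiced"] + like["voiceless"])

# Same VOT, different vowel lengths: VL shifts the voicing percept
# without any rate-normalization step.
for vl in (140, 180):
    print(f"VOT=25 ms, VL={vl} ms -> P(voiced) = {p_voiced(25, vl):.2f}")
```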


Language, Cognition and Neuroscience | 2015

The time-course of speaking rate compensation: Effects of sentential rate and vowel length on voicing judgments.

Joseph C. Toscano; Bob McMurray

Many sources of context information in speech (such as speaking rate) occur either before or after the phonetic cues they influence, yet there is little work examining the time-course of these effects. Here, we investigate how listeners compensate for preceding sentence rate and subsequent vowel length (VL; a secondary cue that has been used as a proxy for speaking rate) when categorising words varying in voice-onset time (VOT). Participants selected visual objects in a display while their eye-movements were recorded, allowing us to examine when each source of information had an effect on lexical processing. We found that the effect of VOT preceded that of VL, suggesting that each cue is used as it becomes available. In a second experiment, we found that, in contrast, the effect of preceding sentence rate occurred simultaneously with VOT, suggesting that listeners interpret VOT relative to preceding rate.


Psychonomic Bulletin & Review | 2013

Reconsidering the role of temporal order in spoken word recognition

Joseph C. Toscano; Nathaniel D. Anderson; Bob McMurray

Models of spoken word recognition assume that words are represented as sequences of phonemes. We evaluated this assumption by examining phonemic anadromes, words that share the same phonemes but differ in their order (e.g., sub and bus). Using the visual-world paradigm, we found that listeners show more fixations to anadromes (e.g., sub when bus is the target) than to unrelated words (well) and to words that share the same vowel but not the same set of phonemes (sun). This contrasts with the predictions of existing models and suggests that words are not defined as strict sequences of phonemes.
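For concreteness, a phonemic-anadrome check is just an order-insensitive comparison of phoneme sequences; the sketch below uses toy symbol lists for the stimulus types in the study.

```python
def is_anadrome(a, b):
    """True if two phoneme sequences share the same multiset of
    phonemes but in a different order (e.g., sub / bus)."""
    return a != b and sorted(a) == sorted(b)

# Phoneme sequences written as simple symbol lists (illustrative).
print(is_anadrome(["s", "ʌ", "b"], ["b", "ʌ", "s"]))  # True: sub / bus
print(is_anadrome(["s", "ʌ", "b"], ["s", "ʌ", "n"]))  # False: sub / sun
```

A strict-sequence model would treat sub and bus as no more related than sub and well; the fixation advantage for anadromes is what challenges that assumption.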


Journal of the Acoustical Society of America | 2016

High-pass filtering obscures stimulus encoding characteristics of the auditory brainstem response: Evidence from click and tone stimuli

Alexandra R. Tabachnick; Joseph C. Toscano

The auditory brainstem response (ABR) is an electrophysiological measure of early auditory processing. While previous work has examined ABRs to clicks, tones, speech, and music, it remains unclear how changes in acoustic properties (e.g., frequency) map onto specific changes in ABR components. This may be partly due to filtering during data processing. High-pass filtering can severely distort cortical and subcortical responses, potentially obfuscating how stimuli are encoded. To address this, we measured ABRs to a wide range of pure tones (250 to 8000 Hz) and examined how high-pass filtering affects tone- and click-evoked ABRs. In Experiment 1, various high-pass filter settings (0.1-300 Hz) were applied to click-evoked ABRs. In Experiment 2, ABRs to brief tones across a six-step frequency continuum were collected, and the same high-pass filter settings were applied. Results indicate that excessive high-pass filtering diminishes the amplitude of ABR components, consistent with previous findings. In additio...
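A small scipy sketch, assuming a toy waveform and filter settings, shows the reported pattern: as the high-pass cutoff rises through the 0.1-300 Hz range examined in the paper, a brief ABR-like component loses amplitude.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 20000                        # sampling rate (Hz); value assumed
t = np.arange(0, 0.010, 1 / fs)   # 10 ms epoch

# Toy ABR-like component: a brief positive peak near 6 ms (think wave V).
# Purely illustrative; not real data.
abr = np.exp(-0.5 * ((t - 0.006) / 0.0007) ** 2)

# Sweep high-pass cutoffs and watch the component's amplitude shrink.
for cutoff in (0.1, 30.0, 100.0, 300.0):
    sos = butter(2, cutoff, btype="highpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, abr)
    print(f"high-pass {cutoff:>5} Hz: peak amplitude {filtered.max():.3f}")
```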


Journal of the Acoustical Society of America | 2016

Phonetic convergence in an immersive game-based task

Tifani M. Biro; Joseph C. Toscano; Navin Viswanathan

Phonetic convergence occurs when talkers change the acoustic-phonetic characteristics of their speech to be more similar to a conversational partner. It is typically studied using laboratory tasks, but the extent to which talkers converge varies considerably across studies. One aspect of these tasks that differs from real-world settings is how engaging they are. Highly contrived tasks may fail to elicit natural speech production, which could influence whether or not talkers converge. We address this issue by comparing the extent to which interlocutors converge in a repetitive, unengaging task versus an immersive video game-based task. Both tasks were designed to elicit production of specific words. Thirty word-initial voicing minimal pairs were used as stimuli, and we measured the degree to which phonetic cues (e.g., voice onset time; VOT) changed over the course of the experiment. For the more engaging task, participants’ VOT values for voiceless tokens trended towards convergence (i.e., they gradually s...
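One simple way to quantify convergence on a cue like VOT is a difference-in-distance measure over the session; the sketch below uses invented values for two talkers and is not the paper's analysis.

```python
import numpy as np

# Toy VOT measurements (ms) for two interlocutors across an experiment,
# for voiceless tokens of the same words. Values are illustrative.
talker_a = np.array([75, 72, 70, 68, 66, 65])   # early -> late
talker_b = np.array([55, 57, 58, 60, 61, 62])

# Difference-in-distance convergence index: positive values mean the
# talkers' VOTs grew more similar over the session.
early = abs(talker_a[:3].mean() - talker_b[:3].mean())
late = abs(talker_a[-3:].mean() - talker_b[-3:].mean())
print(f"convergence (early - late distance): {early - late:.1f} ms")
```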


Journal of the Acoustical Society of America | 2016

Integration and maintenance of gradient acoustic information in spoken language processing

James B. Falandays; Joseph C. Toscano; Sarah Brown-Schmidt

Models of speech processing seek to explain how continuous acoustic input is mapped onto discrete symbols at various levels of representation, such as phonemes, words, and referents. While recent work has supported models that posit maintenance of fine-grained information, it is not clear how continuous, low-level information in the speech signal is integrated with discrete, higher-level linguistic information. To investigate this, we created acoustic continua between the pronouns “he” and “she” by manipulating the amplitude of frication in the initial phoneme. Using the visual world eye-tracking paradigm, listeners viewed scenes containing male and female referents and heard sentences containing a pronoun, which later disambiguated to a single referent. Measures of eye-gaze revealed immediate sensitivity to both graded acoustic information and discourse-level information. Moreover, when listeners made an initially incorrect interpretation of the referent, recovery time varied as a function of acoustic st...
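The continuum construction can be sketched as a simple amplitude manipulation on the fricative portion of the waveform; the stand-in signal, step count, and scaling range below are placeholders, not the actual stimuli.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in waveform for a "she" token: noise for the fricative plus a
# low-frequency tail for the vowel. Everything here is a placeholder.
fs = 44100
fric = rng.normal(0, 0.1, int(0.12 * fs))            # ~120 ms of frication
vowel = 0.3 * np.sin(2 * np.pi * 220 * np.arange(int(0.25 * fs)) / fs)
word = np.concatenate([fric, vowel])

# Build a "she"-to-"he" continuum by scaling frication amplitude only;
# the 7 steps and scaling range are assumptions.
continuum = []
for scale in np.linspace(1.0, 0.1, 7):
    step = word.copy()
    step[: fric.size] *= scale
    continuum.append(step)

print(f"{len(continuum)} continuum steps, frication RMS from "
      f"{np.sqrt(np.mean(continuum[0][:fric.size]**2)):.3f} to "
      f"{np.sqrt(np.mean(continuum[-1][:fric.size]**2)):.3f}")
```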


Journal of the Acoustical Society of America | 2016

Event-related potential responses reveal simultaneous processing across multiple levels of representation in spoken word recognition

Emma C. Folk; Joseph C. Toscano

A controversial issue in spoken language comprehension concerns whether different sources of information are encapsulated from each other. Do listeners finish processing lower-level information (e.g., encoding acoustic differences) before beginning higher-level processing (e.g., determining the meaning of a word or its grammatical status)? We addressed these questions by examining the time-course of processing using an event-related potential experiment with a component-independent design. Listeners heard voiced/voiceless minimal pairs differing in (1) lexical status, (2) syntactic class (noun/verb distinctions), and (3) semantic content (animate/inanimate distinctions). For each voiced stimulus in a given condition (e.g., lexical status pair TUB/tup), there was a corresponding pair with a voiceless ending (tob/TOP). Stimuli were cross-spliced, allowing us to control for phonological and acoustic differences and examine higher-level effects independently of them. Widespread lexical status effects are obse...
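Cross-splicing itself is a simple waveform operation: keep the shared onset from one recording and attach the final segment from another, so paired items differ only in the spliced-in coda. The sketch below uses placeholder arrays and an assumed splice point, not the actual stimuli.

```python
import numpy as np

fs = 44100

# Stand-ins for recordings of a voiced/voiceless minimal pair.
tub = np.zeros(int(0.50 * fs))
tup = np.zeros(int(0.50 * fs))
splice = int(0.35 * fs)   # assumed boundary before the final consonant

# Carry over the shared onset, swap in the other item's ending.
tub_with_tup_coda = np.concatenate([tub[:splice], tup[splice:]])
tup_with_tub_coda = np.concatenate([tup[:splice], tub[splice:]])
```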


Journal of the Acoustical Society of America | 2012

The consequences of lexical sensitivity to fine-grained detail: Solving the problems of integrating cues, and processing speech in time

Bob McMurray; Joseph C. Toscano

Work on language comprehension is classically divided into two fields. Speech perception asks how listeners cope with variability from factors like talker and coarticulation to compute some phoneme-like unit; word recognition assumes these units and asks how listeners cope with time and match the input to the lexicon. Evidence that within-category detail affects lexical activation (Andruski, et al., 1994; McMurray, et al., 2002) challenges this view: variability in the input is not “handled” by lower-level processes and instead survives until late in processing. However, the consequences of this have not been fleshed out. This talk begins to explore them using evidence from eye-tracking paradigms. First, I show how lexical activation/competition processes can help cope with perceptual problems, by integrating acoustic cues that are strung out over time. Next, I examine a fundamental issue in word recognition, temporal order (e.g., distinguishing cat and tack). I present evidence that listeners repre...

Collaboration


Dive into Joseph C. Toscano's collaborations.

Top Co-Authors

Steven J. Luck

University of California
