Publication


Featured research published by Juan M. Toro.


Cognition | 2005

Speech segmentation by statistical learning depends on attention.

Juan M. Toro; Scott Sinnett; Salvador Soto-Faraco

We addressed the hypothesis that word segmentation based on statistical regularities occurs without the need for attention. Participants were presented with a stream of artificial speech in which the only cue to extract the words was the presence of statistical regularities between syllables. Half of the participants were asked to passively listen to the speech stream, while the other half were asked to perform a concurrent task. In Experiment 1, the concurrent task was performed on a separate auditory stream (noises), in Experiment 2 it was performed on a visual stream (pictures), and in Experiment 3 it was performed on pitch changes in the speech stream itself. Invariably, passive listening to the speech stream led to successful word extraction (as measured by a recognition test presented after the exposure phase), whereas diverted attention led to a dramatic impairment in word segmentation performance. These findings demonstrate that when attentional resources are depleted, word segmentation based on statistical regularities is seriously compromised.


Attention, Perception, & Psychophysics | 2005

Statistical computations over a speech stream in a rodent

Juan M. Toro; Josep B. Trobalón

Statistical learning is one of the key mechanisms available to human infants and adults when they face the problems of segmenting a speech stream (Saffran, Aslin, & Newport, 1996) and extracting long-distance regularities (Gómez, 2002; Peña, Bonatti, Nespor, & Mehler, 2002). In the present study, we explore statistical learning abilities in rats in the context of speech segmentation experiments. In a series of five experiments, we address whether rats can compute the necessary statistics to be able to segment synthesized speech streams and detect regularities associated with grammatical structures. Our results demonstrate that rats can segment the streams using the frequency of co-occurrence (not transitional probabilities, as human infants do) among items, showing that some basic statistical learning mechanism generalizes over nonprimate species. Nevertheless, rats did not differentiate among test items when the stream was organized over more complex regularities that involved nonadjacent elements and abstract grammar-like rules.
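
The distinction drawn above between transitional probabilities and raw co-occurrence frequencies is easy to make concrete. The sketch below uses a toy syllable stream with made-up "words" (pabiku, golatu, tidora) that are purely illustrative and are not the stimuli used in the study; it only shows how the two statistics are computed and why they diverge at word boundaries.

```python
# Toy sketch (hypothetical syllables, not the study's stimuli): two statistics
# that can support segmentation of a continuous syllable stream.
# Transitional probability: TP(b | a) = count(a followed by b) / count(a followed by anything).
# Co-occurrence frequency:  simply count(a followed by b).
from collections import Counter

def pair_counts(syllables):
    """Count adjacent syllable pairs in the stream."""
    return Counter(zip(syllables, syllables[1:]))

def transitional_probability(syllables, a, b):
    """Probability that syllable a is immediately followed by syllable b."""
    pairs = pair_counts(syllables)
    total_a = sum(n for (first, _), n in pairs.items() if first == a)
    return pairs[(a, b)] / total_a if total_a else 0.0

# Hypothetical stream built from three made-up trisyllabic "words".
words = ["pa bi ku", "go la tu", "ti do ra"]
order = [0, 1, 2, 0, 2, 1, 0, 1, 2]                  # illustrative order only
stream = " ".join(words[i] for i in order).split()

print(transitional_probability(stream, "pa", "bi"))  # 1.00: within-word pair
print(transitional_probability(stream, "ku", "go"))  # ~0.67: word-boundary pair
print(pair_counts(stream)[("pa", "bi")])             # 3: raw co-occurrence count
```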


Psychological Science | 2008

Finding Words and Rules in a Speech Stream: Functional Differences Between Vowels and Consonants

Juan M. Toro; Marina Nespor; Jacques Mehler; Luca L. Bonatti

We have proposed that consonants give cues primarily about the lexicon, whereas vowels carry cues about syntax. In a study supporting this hypothesis, we showed that when segmenting words from an artificial continuous stream, participants compute statistical relations over consonants, but not over vowels. In the study reported here, we tested the symmetrical hypothesis that when participants listen to words in a speech stream, they tend to exploit relations among vowels to extract generalizations, but tend to disregard the same relations among consonants. In our streams, participants could segment words on the basis of transitional probabilities in one tier and could extract a structural regularity in the other tier. Participants used consonants to extract words, but vowels to extract a structural generalization. They were unable to extract the same generalization using consonants, even when word segmentation was facilitated and the generalization made simpler. Our results suggest that different signal-driven computations prime lexical and grammatical processing.


NeuroImage | 2009

Time course and functional neuroanatomy of speech segmentation in adults

Toni Cunillera; Estela Camara; Juan M. Toro; Josep Marco-Pallarés; Núria Sebastián-Gallés; Hector Ortiz; Jesús Pujol; Antoni Rodríguez-Fornells

The present investigation was devoted to unraveling the time-course and brain regions involved in speech segmentation, which is one of the first processes necessary for learning a new language in adults and infants. A specific brain electrical pattern resembling the N400 language component was identified as an indicator of speech segmentation of candidate words. This N400 trace was clearly elicited after a short exposure to the words of the new language and showed a decrease in amplitude with longer exposure. Two brain regions were observed to be active during this process: the posterior superior temporal gyrus and the superior part of the ventral premotor cortex. We interpret these findings as evidence for the existence of an auditory-motor interface that is responsible for isolating possible candidate words when learning a new language in adults.


Animal Cognition | 2003

The use of prosodic cues in language discrimination tasks by rats.

Juan M. Toro; Josep B. Trobalón; Núria Sebastián-Gallés

Recent research with cotton-top tamarin monkeys has revealed language discrimination abilities similar to those found in human infants, demonstrating that these perceptual abilities are not unique to humans but are also present in non-human primates. Specifically, tamarins could discriminate forward but not backward sentences of Dutch from Japanese, using both natural and synthesized utterances. The present study was designed as a conceptual replication of the work on tamarins. Results show that rats trained in a discrimination learning task readily discriminate forward, but not backward sentences of Dutch from Japanese; the results are particularly robust for synthetic utterances, a pattern that shows greater parallels with newborns than with tamarins. Our results extend the claims made in the research with tamarins that the capacity to discriminate languages from different rhythmic classes depends on general perceptual abilities that evolved at least as far back as the rodents.


Cognition | 2010

Structural generalizations over consonants and vowels in 11-month-old infants

Ferran Pons; Juan M. Toro

Recent research has suggested consonants and vowels serve different roles during language processing. While statistical computations are preferentially made over consonants but not over vowels, simple structural generalizations are easily made over vowels but not over consonants. Nevertheless, the origins of this asymmetry are unknown. Here we tested whether lifelong experience with language is necessary for vowels to become the preferred target for structural generalizations. We presented 11-month-old infants with a series of CVCVCV nonsense words in which all vowels were arranged according to an AAB rule (first and second vowels were the same, while the third vowel was different). During the test, we presented infants with new words whose vowels either did or did not follow the aforementioned rule. We found that infants readily generalized this rule when implemented over the vowels. However, when the same rule was implemented over the consonants, infants could not generalize it to new instances. These results parallel those found with adult participants and demonstrate that several years of experience learning a language are not necessary for functional asymmetries between consonants and vowels to appear.
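
A minimal sketch of the AAB check described above, applied to the vowel tier of a CVCVCV item; the example words below are hypothetical and are not the infant stimuli.

```python
# Hypothetical sketch: does the vowel tier of a CVCVCV nonsense word follow an
# AAB rule (first and second vowels identical, third vowel different)?
VOWELS = set("aeiou")

def vowel_tier(word):
    """Return the vowels of a word in order, e.g. 'dubudo' -> ['u', 'u', 'o']."""
    return [ch for ch in word if ch in VOWELS]

def follows_aab(word):
    """True if the word's three vowels instantiate the AAB pattern."""
    v = vowel_tier(word)
    return len(v) == 3 and v[0] == v[1] and v[1] != v[2]

print(follows_aab("dubudo"))  # True:  u-u-o fits AAB
print(follows_aab("dubado"))  # False: u-a-o does not
```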


Attention, Perception, & Psychophysics | 2013

Do humans and nonhuman animals share the grouping principles of the iambic–trochaic law?

Daniela M. de la Mora; Marina Nespor; Juan M. Toro

The iambic–trochaic law describes humans’ tendency to form trochaic groups over sequences varying in pitch or intensity (i.e., the loudest or highest sounds mark group beginnings), and iambic groups over sequences varying in duration (i.e., the longest sounds mark group endings). The extent to which these perceptual biases are shared by humans and nonhuman animals is yet unclear. In Experiment 1, we trained rats to discriminate pitch-alternating sequences of tones from sequences randomly varying in pitch. In Experiment 2, rats were trained to discriminate duration-alternating sequences of tones from sequences randomly varying in duration. We found that nonhuman animals group sequences based on pitch variations as trochees, but they do not group sequences varying in duration as iambs. Importantly, humans grouped the same stimuli following the principles of the iambic–trochaic law (Exp. 3). These results suggest the early emergence of the trochaic rhythmic grouping bias based on pitch, possibly relying on perceptual abilities shared by humans and other mammals, whereas the iambic rhythmic grouping bias based on duration might depend on language experience.


Attention, Perception, & Psychophysics | 2008

The quest for generalizations over consonants: Asymmetries between consonants and vowels are not the by-product of acoustic differences

Juan M. Toro; Mohinish Shukla; Marina Nespor; Ansgar D. Endress

Consonants and vowels may play different roles during language processing, consonants being preferentially involved in lexical processing, and vowels tending to mark syntactic constituency through prosodic cues. In support of this view, artificial language learning studies have demonstrated that consonants (C) support statistical computations, whereas vowels (V) allow certain structural generalizations. Nevertheless, these asymmetries could be mere by-products of lower-level acoustic differences between Cs and Vs, in particular the energy they carry, and thus their relative salience. Here we address this issue and show that vowels remain the preferred targets for generalizations, even when consonants are made highly salient or vowels barely audible. Participants listened to speech streams of nonsense CVCVCV words, in which consonants followed a simple ABA structure. Participants failed to generalize this structure over sonorant consonants (Experiment 1), even when vowel duration was reduced to one third of that of consonants (Experiment 2). When vowels were eliminated from the stream, participants showed only marginal evidence of generalization (Experiment 4). In contrast, participants readily generalized the structure over barely audible vowels (Experiment 3). These results show that the different roles of consonants and vowels cannot be readily reduced to acoustical and perceptual differences between these phonetic categories.


Cognition | 2013

Rule learning over consonants and vowels in a non-human animal

Daniela M. de la Mora; Juan M. Toro

Perception studies have shown similarities between humans and other animals in a wide array of language-related processes. However, the components of language that make it uniquely human have not been fully identified. Here we show that nonhuman animals extract rules over speech sequences that are difficult for humans. Specifically, animals easily learn rules over both consonants and vowels, while humans do so only over vowels. In Experiment 1, rats learned a rule implemented over vowels in CVCVCV nonsense words. In Experiment 2, rats learned the rule when it was implemented over the consonants. In both experiments, rats generalized such knowledge to novel words they had not heard before. Using the same stimuli, human adults learned the rules over the vowels but not over the consonants. These results suggest that differences between humans and animals in speech processing might lie in the constraints they face while extracting information from the signal.


Frontiers in Human Neuroscience | 2016

Look at the Beat, Feel the Meter: Top–Down Effects of Meter Induction on Auditory and Visual Modalities

Alexandre Celma-Miralles; Robert F. de Menezes; Juan M. Toro

Recent research has demonstrated top–down effects on meter induction in the auditory modality. However, little is known about these effects in the visual domain, especially without the involvement of motor acts such as tapping. In the present study, we aim to assess whether the projection of meter on auditory beats is also present in the visual domain. We asked 16 musicians to internally project binary (i.e., a strong-weak pattern) and ternary (i.e., a strong-weak-weak pattern) meter onto separate but analogous visual and auditory isochronous stimuli. Participants were presented with sequences of tones or blinking circular shapes (i.e., flashes) at 2.4 Hz while their electrophysiological responses were recorded. A frequency analysis of the elicited steady-state evoked potentials allowed us to compare the frequencies of the beat (2.4 Hz), its first harmonic (4.8 Hz), the binary subharmonic (1.2 Hz), and the ternary subharmonic (0.8 Hz) within and across modalities. Taking the amplitude spectra into account, we observed an enhancement of the amplitude at 0.8 Hz in the ternary condition for both modalities, suggesting meter induction across modalities. There was an interaction between modality and voltage at 2.4 and 4.8 Hz. Looking at the power spectra, we also observed significant differences from zero in the auditory, but not in the visual, binary condition at 1.2 Hz. These findings suggest that meter processing is modulated by top–down mechanisms that interact with our perception of rhythmic events and that such modulation can also be found in the visual domain. The reported cross-modal effects of meter may shed light on the origins of our timing mechanisms, partially developed in primates and allowing humans to synchronize across modalities accurately.
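
The frequency-tagging logic described above (comparing spectral amplitude at the beat frequency, its harmonic, and the binary and ternary subharmonics) can be sketched in a few lines. The sampling rate, recording length, and synthetic signal below are assumptions chosen for illustration; this is not the authors' EEG pipeline.

```python
# Illustrative frequency-tagging sketch on a synthetic signal (assumed
# parameters, not the study's recordings): read the amplitude spectrum at the
# beat (2.4 Hz), its first harmonic (4.8 Hz), the binary subharmonic (1.2 Hz),
# and the ternary subharmonic (0.8 Hz).
import numpy as np

fs = 250.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 60.0, 1.0 / fs)             # 60 s of simulated data
# Synthetic "EEG": a response at the beat plus a small ternary-meter component.
signal = np.sin(2 * np.pi * 2.4 * t) + 0.3 * np.sin(2 * np.pi * 0.8 * t)
signal = signal + 0.5 * np.random.randn(t.size)   # broadband noise

amplitude = np.abs(np.fft.rfft(signal)) / t.size  # amplitude spectrum
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

for f in (0.8, 1.2, 2.4, 4.8):
    idx = np.argmin(np.abs(freqs - f))       # nearest frequency bin
    print(f"{f:.1f} Hz -> amplitude {amplitude[idx]:.3f}")
```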

Collaboration


Dive into Juan M. Toro's collaborations.

Top Co-Authors

Marina Nespor

International School for Advanced Studies

Ferran Pons

University of Barcelona

Scott Sinnett

University of Hawaii at Manoa