Publication


Featured research published by Majid Zandipour.


Journal of the Acoustical Society of America | 1999

Articulatory tradeoffs reduce acoustic variability during American English /r/ production

Frank H. Guenther; Carol Y. Espy-Wilson; Suzanne Boyce; Melanie L. Matthies; Majid Zandipour; Joseph S. Perkell

The American English phoneme /r/ has long been associated with large amounts of articulatory variability during production. This paper investigates the hypothesis that the articulatory variations used by a speaker to produce /r/ in different contexts exhibit systematic tradeoffs, or articulatory trading relations, that act to maintain a relatively stable acoustic signal despite the large variations in vocal tract shape. Acoustic and articulatory recordings were collected from seven speakers producing /r/ in five phonetic contexts. For every speaker, the different articulator configurations used to produce /r/ in the different phonetic contexts showed systematic tradeoffs, as evidenced by significant correlations between the positions of transducers mounted on the tongue. Analysis of acoustic and articulatory variabilities revealed that these tradeoffs act to reduce acoustic variability, thus allowing relatively large contextual variations in vocal tract shape for /r/ without seriously degrading the primary acoustic cue. Furthermore, some subjects appeared to use completely different articulatory gestures to produce /r/ in different phonetic contexts. When viewed in light of current models of speech movement control, these results appear to favor models that utilize an acoustic or auditory target for each phoneme over models that utilize a vocal tract shape target for each phoneme.
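
The trading-relation analysis can be illustrated with a minimal sketch; the transducer names, positions, and the toy F3 mapping below are invented for illustration and are not taken from the study.

```python
# Illustrative sketch only (not the study's analysis code): a trading
# relation between two hypothetical tongue transducer positions, and its
# effect on acoustic variability.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical horizontal positions (mm) of two transducers for 40 /r/
# tokens across contexts; a shared factor with opposite signs builds in
# the tradeoff.
latent = rng.normal(0.0, 2.0, size=40)
front = 55.0 + latent + rng.normal(0.0, 0.5, size=40)
dorsum = 40.0 - latent + rng.normal(0.0, 0.5, size=40)

# Signature of a trading relation: a significant correlation between the
# transducer positions across tokens.
r = np.corrcoef(front, dorsum)[0, 1]
print(f"transducer position correlation: {r:.2f}")

# Toy acoustic cue (an F3-like value, Hz) that depends on the sum of the
# two positions; the tradeoff keeps the sum, and hence the cue, stable.
f3_traded = 1600.0 + 5.0 * (front + dorsum)
f3_uncoordinated = 1600.0 + 5.0 * (front + rng.permutation(dorsum))
print(f"F3 SD with tradeoff:    {f3_traded.std():.1f} Hz")
print(f"F3 SD without tradeoff: {f3_uncoordinated.std():.1f} Hz")
```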


Journal of the Acoustical Society of America | 2003

Influences of tongue biomechanics on speech movements during the production of velar stop consonants: A modeling study

Pascal Perrier; Yohan Payan; Majid Zandipour; Joseph S. Perkell

This study explores the following hypothesis: forward looping movements of the tongue that are observed in VCV sequences are due partly to the anatomical arrangement of the tongue muscles, how they are used to produce a velar closure, and how the tongue interacts with the palate during consonantal closure. The study uses an anatomically based two-dimensional biomechanical tongue model. Tissue elastic properties are accounted for in finite-element modeling, and movement is controlled by constant-rate control parameter shifts. Tongue raising and lowering movements are produced by the model mainly with the combined actions of the genioglossus, styloglossus, and hyoglossus. Simulations of V1CV2 movements were made, where C is a velar consonant and V is [a], [i], or [u]. Both vowels and consonants are specified in terms of targets, but for the consonant the target is virtual, and cannot be reached because it is beyond the surface of the palate. If V1 is the vowel [a] or [u], the resulting trajectory describes a movement that begins to loop forward before consonant closure and continues to slide along the palate during the closure. This pattern is very stable when moderate changes are made to the specification of the target consonant location and agrees with data published in the literature. If V1 is the vowel [i], looping patterns are also observed, but their orientation was quite sensitive to small changes in the location of the consonant target. These findings also agree with patterns of variability observed in measurements from human speakers, but they contradict data published by Houde [Ph.D. dissertation (1967)]. These observations support the idea that the biomechanical properties of the tongue could be the main factor responsible for the forward loops when V1 is a back vowel, regardless of whether V2 is a back vowel or a front vowel. In the [i] context it seems that additional factors have to be taken into consideration in order to explain the observations made on some speakers.
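
The "virtual target" idea can be illustrated with a purely kinematic sketch; it is not the biomechanical finite-element model itself. A tongue point is shifted at a constant rate toward a target placed beyond the palate, is stopped by the palate surface, and therefore slides along it during closure. The palate shape, coordinates, and step count are assumptions.

```python
# Kinematic toy sketch of a virtual consonant target located beyond the
# palate (illustration only; the study used a 2-D biomechanical model).
import numpy as np

def palate_height(x):
    # Hypothetical concave palate outline (arbitrary units).
    return 10.0 - 0.02 * (x - 50.0) ** 2

def trajectory(start, virtual_target, steps=50):
    """Shift a tongue point toward the target at a constant rate,
    clipping it at the palate so it slides along the contour."""
    pos = np.array(start, dtype=float)
    step = (np.asarray(virtual_target, dtype=float) - pos) / steps
    path = [pos.copy()]
    for _ in range(steps):
        pos = pos + step
        pos[1] = min(pos[1], palate_height(pos[0]))  # cannot pass the palate
        path.append(pos.copy())
    return np.array(path)

# V1 is a back vowel: start low and back, consonant target above the palate.
path = trajectory(start=(60.0, 2.0), virtual_target=(45.0, 14.0))
contact = path[:, 1] >= palate_height(path[:, 0]) - 1e-6
print("first palate contact at step", int(np.argmax(contact)))
print("slide along the palate during closure (units):",
      round(float(np.ptp(path[contact, 0])), 2))
```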


Ear and Hearing | 2001

Changes in speech intelligibility of postlingually deaf adults after cochlear implantation.

John Gould; Harlan Lane; Jennell Vick; Joseph S. Perkell; Melanie L. Matthies; Majid Zandipour

Objective: This study examines changes in the intelligibility of CVC words spoken by postlingually deafened adults after they have had 6 to 12 mo of experience with a cochlear implant. The hypothesis guiding the research is that the intelligibility of these speakers will improve after extended use of a cochlear implant. The paper also describes changes in CVC word intelligibility analyzed by phoneme class and by features. Design: The speech of eight postlingually deaf adults was recorded before activation of the speech processors of their cochlear implants and at 6 mo and 1 yr after activation. Seventeen listeners with no known impairment of hearing completed a word identification task while listening to each implant user’s speech in noise. The percent information transmitted by the speakers in their pre- and postactivation recordings was measured for 11 English consonants and eight vowels separately. Results: An overall improvement in word intelligibility was observed: seven of the eight speakers showed improvement in vowel intelligibility and six speakers showed improvement in consonant intelligibility. However, the intelligibility of specific consonant and vowel features varied greatly across speakers. Conclusions: Extended use of a cochlear implant by postlingually deafened adults tends to enhance their intelligibility.
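
The measure reported here, percent information transmitted, can be computed from a stimulus-by-response confusion matrix. The sketch below is illustrative rather than the study's code, and the 3x3 counts are hypothetical.

```python
# Illustrative sketch: percent information transmitted from a
# stimulus-by-response confusion matrix (hypothetical counts).
import numpy as np

def percent_information_transmitted(confusions):
    counts = np.asarray(confusions, dtype=float)
    p = counts / counts.sum()             # joint probabilities p(stimulus, response)
    ps = p.sum(axis=1, keepdims=True)     # stimulus marginals
    pr = p.sum(axis=0, keepdims=True)     # response marginals
    nz = p > 0
    mutual_info = (p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum()
    stimulus_entropy = -(ps[ps > 0] * np.log2(ps[ps > 0])).sum()
    return 100.0 * mutual_info / stimulus_entropy

# Hypothetical consonant confusions for one speaker:
# rows = spoken consonant, columns = consonant reported by the listeners.
pre  = [[30, 10, 10], [12, 28, 10], [8, 12, 30]]
post = [[42,  5,  3], [ 4, 40,  6], [2,  6, 42]]
print(f"pre-activation:  {percent_information_transmitted(pre):.1f}% transmitted")
print(f"post-activation: {percent_information_transmitted(post):.1f}% transmitted")
```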


Journal of the Acoustical Society of America | 2001

Gestural timing effects in the ‘‘perfect memory’’ sequence observed under three rates by electromagnetometry

Mark Tiede; Joseph S. Perkell; Majid Zandipour; Melanie L. Matthies

In a well‐known example due to Browman and Goldstein [Lab. Phon. I, 341–376 (1990)] the /ktm/ sequence in the phrase ‘‘perfect memory’’ is contrasted between careful (list) and fluent production conditions. X‐ray microbeam data were used to show that although in the fluent case coproduction of the /m/ can mask the acoustic releases of the /k/ and /t/, both stops are nonetheless articulated. The current work uses EMMA data to examine this sequence in greater detail: 8 subjects produced the phrase in a carrier context under normal, fast, and clear rate conditions. Results confirm that tongue dorsum and tip movements toward velar and apical closure occur regardless of rate and observable acoustic effect. In addition, while movement amplitudes decreased somewhat as rate increased, little variation in the durations associated with the consonant gestures was observed. Instead, changes in rate primarily affected V to V duration and the relative phasing of the velar and apical closing gestures: the tongue tip max...
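
One of the timing measures involved, the lag between the velar and apical closing gestures, can be sketched from articulator position traces. The traces, sampling rate, and onset criterion below are synthetic assumptions for illustration.

```python
# Illustrative sketch (synthetic data): timing of two closing gestures
# measured from articulator position traces, as in EMMA-style analyses.
import numpy as np

fs = 500.0                                  # sampling rate (Hz)
t = np.arange(0.0, 0.6, 1.0 / fs)

def gesture(onset, duration):
    """Synthetic closing movement: smooth rise starting at `onset`."""
    phase = np.clip((t - onset) / duration, 0.0, 1.0)
    return 0.5 - 0.5 * np.cos(np.pi * phase)   # 0 -> 1 raised-cosine ramp

tongue_dorsum = gesture(onset=0.10, duration=0.12)   # velar closure (/k/)
tongue_tip    = gesture(onset=0.16, duration=0.10)   # apical closure (/t/)

def onset_time(pos, threshold=0.1):
    """Gesture onset: first sample where velocity exceeds a fraction of its peak."""
    vel = np.gradient(pos, 1.0 / fs)
    return t[np.argmax(vel > threshold * vel.max())]

lag = onset_time(tongue_tip) - onset_time(tongue_dorsum)
velar_duration = 0.12
print(f"tip onset lags dorsum onset by {1000 * lag:.0f} ms "
      f"({100 * lag / velar_duration:.0f}% of the velar gesture duration)")
```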


Journal of the Acoustical Society of America | 1998

Motor equivalence in the production of /∫/

Joseph S. Perkell; Melanie L. Matthies; Majid Zandipour

To explore the idea that speech motor goals are acoustic targets, upper lip protrusion and tongue blade fronting were examined in the sibilant /∫/ for evidence of motor equivalence in eight speakers of American English. Positive correlations across multiple repetitions of /∫/ (motor equivalence) would occur if the upper lip compensated with more protrusion when the tongue blade was further forward and vice versa. This motor equivalence would serve to maintain an adequate front cavity size for good acoustic separation from /s/ and enhance the acoustic stability of /∫/. It was hypothesized that motor equivalence would be found among less prototypical tokens, as elicited in a ‘‘clear+fast’’ speaking condition. Acoustic spectral analyses indicated excellent acoustic separation between the two sibilants, even in the ‘‘clear+fast’’ condition. There were significant positive correlations of the tongue‐blade and upper lip movements for 28% of all possible cases, with considerable individual variation. When motor ...


Journal of the Acoustical Society of America | 1997

Individual differences in cyclical and speech movement

Joseph S. Perkell; Majid Zandipour; Melanie L. Matthies

A comprehensive account of speech production should explain interspeaker variation. Toward this end, a study was performed with eight speakers to investigate whether individual kinematic performance limits, as reflected in a speechlike cyclical task, could account for differences in speech kinematics. Kinematic data from cyclical CV movements at rates from 1–6 Hz of the lower lip, tongue blade, and tongue dorsum were compared with data from speech utterances in different conditions, including normal, fast, clear, and slow. There were differences in movement distance, peak velocity, and duration among the speaking conditions and among the speakers. Three speakers produced ‘‘clear’’ speech utterances with distances, peak velocities, and durations that were greater than normal. The data from two of the three may reflect some increased effort for clear speech. The amount of overlap of the speech data and cyclical data varied across speakers, ranging from little overlap to complete overlap. Thus, in general, t...
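
The kinematic measures compared across tasks (movement distance, peak velocity, duration) can be extracted from a position trace as in this sketch; the cyclical rate, amplitude, and sampling values are assumptions.

```python
# Illustrative sketch: distance, peak velocity, and duration of one
# opening movement from a synthetic cyclical lower-lip position trace.
import numpy as np

fs = 500.0                         # sampling rate (Hz)
rate_hz = 3.0                      # cyclical CV rate
amplitude_mm = 8.0                 # peak-to-peak movement
t = np.arange(0.0, 1.0, 1.0 / fs)
position = amplitude_mm / 2.0 * np.sin(2 * np.pi * rate_hz * t)  # mm

velocity = np.gradient(position, 1.0 / fs)          # mm/s
half_cycle = 1.0 / (2.0 * rate_hz)                   # one opening movement

print(f"movement distance: {position.max() - position.min():.1f} mm")
print(f"peak velocity:     {np.abs(velocity).max():.0f} mm/s")
print(f"movement duration: {1000 * half_cycle:.0f} ms")
# For a sinusoidal movement, peak velocity = pi * distance * rate, so faster
# cyclical rates trade shorter durations against higher peak velocities.
```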


Journal of the Acoustical Society of America | 2006

Different motor strategies for increasing speaking rate: Data and modeling

Majid Zandipour; Joseph S. Perkell; Mark Tiede; Frank H. Guenther; Kiyoshi Honda; Emi Z. Murano

EMG, kinematic and acoustic signals were recorded from two male subjects as they pronounced multiple repetitions of simple nonsense utterances. The resulting data indicate that the two subjects employed different motor strategies to increase speaking rate. When speaking faster, S1 significantly increased the size of the articulatory target region for his tongue movements, increased the speed of the tongue movements and the rate of EMG rise somewhat, while decreasing the movement duration significantly and movement distance slightly. In contrast, at the fast rate, S2 had the same size articulatory target region and rate of EMG rise as at the normal rate, but decreased the speed, distance, and duration of tongue movement slightly. Each subject had similar dispersions of acoustic targets in F1–F2 space at fast versus normal rates, but both shifted target centroids toward the center of the vowel space at the fast rate. Simulations with...


Journal of the Acoustical Society of America | 2006

Variation in vowel production

Joseph S. Perkell; Majid Zandipour; Satrajit S. Ghosh; Lucie Ménard; Harlan Lane; Mark Tiede; Frank H. Guenther

Acoustic and articulatory recordings were made of vowel productions by young adult speakers of American English—ten females and ten males—to investigate effects of speaker and speaking condition on measures of contrast and dispersion. The vowels in the words teat, tit, tet, tat, tot, and toot were embedded in two‐syllable ‘‘compound words’’ consisting of two CVC syllables, in which each of the two syllables comprised a real word, the consonants were /p/, /t/ or /k/, the two adjoining consonants were always the same, the first syllable was unstressed and the second stressed. Variations of phonetic context and stress were used to induce dispersion around each vowel centroid. The compound words were embedded in a carrier phrase and were spoken in normal, clear, and fast conditions. Initial analyses of F1 and F2 on 15 speakers have shown significant effects of speaker, speaking condition (and also vowel, stress, and context) on vowel contrast, and dispersion around means. Generally, dispersions increased and ...
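
The two measures reported, contrast between vowel categories and dispersion around each vowel's mean, can be sketched as follows; the F1/F2 values, token counts, and spreads are hypothetical.

```python
# Illustrative sketch: average vowel contrast (mean distance between vowel
# centroids) and dispersion (mean distance of tokens from their own centroid)
# in F1-F2 space. All token values are hypothetical.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Hypothetical F1, F2 means (Hz) for three vowels, 20 tokens each.
vowel_means = {"i": (300, 2300), "a": (750, 1200), "u": (320, 900)}
tokens = {v: rng.normal(mean, (30, 80), size=(20, 2))
          for v, mean in vowel_means.items()}

centroids = {v: x.mean(axis=0) for v, x in tokens.items()}

contrast = np.mean([np.linalg.norm(centroids[a] - centroids[b])
                    for a, b in combinations(centroids, 2)])
dispersion = np.mean([np.linalg.norm(x - centroids[v], axis=1).mean()
                      for v, x in tokens.items()])

print(f"average contrast:   {contrast:.0f} Hz")
print(f"average dispersion: {dispersion:.0f} Hz")
# Comparing these two measures across normal, clear, and fast conditions is
# the kind of analysis summarized above.
```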


Journal of the Acoustical Society of America | 2005

Effects of speaking condition and hearing status on vowel production in postlingually deaf adults with cochlear implants

Lucie Ménard; Margaret Denny; Harlan Lane; Melanie L. Matthies; Joseph S. Perkell; Ellen Stockmann; Jennell Vick; Majid Zandipour; Thomas J. Balkany; Marek Polack; Mark Tiede

This study investigates the effects of speaking condition and hearing status on vowel production by postlingually deafened adults. Thirteen cochlear implant users produced repetitions of nine American English vowels in three time samples (prior to implantation, one month, and one year after implantation); in three speaking conditions (clear, normal, and fast); and in two feedback conditions after implantation (implant processor turned on and off). Ten speakers with normal hearing were also recorded. Results show that vowel contrast, in the F1 versus F2 space, in mels, does not differ from the pre‐implant stage to the 1‐month stage. This finding indicates that shortly after implantation, speakers had not had enough experience with hearing from the implant to adequately retune their auditory feedback systems and use auditory feedback to improve feedforward commands. After 1 year of implant use, contrast distances had increased in both feedback conditions (processor on and off), indicating improvement in feedforward commands for phoneme production. Furthermore, after long‐term auditory deprivation, speakers were producing differences in contrast between clear and fast conditions in the range of those found for normal‐hearing speakers, leading to the inference that maintenance of these distinctions is not affected by hearing status. [Research supported by NIDCD.]
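
Contrast distances in the F1 versus F2 space were computed in mels. A minimal sketch of such a distance is shown below, assuming the common O'Shaughnessy Hz-to-mel formula; the study's exact formula variant and the formant values used here are assumptions.

```python
# Illustrative sketch: contrast distance between two vowels in F1-F2 space,
# computed in mels (formula variant assumed, values hypothetical).
import math

def hz_to_mel(f_hz):
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def contrast_mel(vowel_a, vowel_b):
    """Euclidean distance between two (F1, F2) points after mel conversion."""
    a = [hz_to_mel(f) for f in vowel_a]
    b = [hz_to_mel(f) for f in vowel_b]
    return math.dist(a, b)

# Hypothetical formant values (Hz) for /i/ and /a/ from one recording session.
print(f"/i/-/a/ contrast: {contrast_mel((300, 2300), (750, 1200)):.0f} mels")
```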


Journal of the Acoustical Society of America | 2004

Perturbation and compensation in speech acoustics using a jaw‐coupled robot

Mark Tiede; Frank H. Guenther; Joseph S. Perkell; Majid Zandipour; Guillaume Houle; David J. Ostry

Observations were made in three speakers of compensation in formant trajectories in response to jaw perturbations during utterances with the general form /siyCVd/, as in ‘‘see red.’’ Custom dental prostheses were used to help immobilize the head (upper jaw) and couple a computer‐controlled robotic device (lower jaw). A 3‐Newton perturbation force was applied to the jaw during one out of every five repetitions, selected at random, with half of the perturbations applied downward and half upward. Perturbations were triggered from jaw opening (for CV) exceeding a threshold relative to clench position. Audio (at 10 kHz) and jaw position (at 1 kHz) were recorded concurrently. Individual tokens were extracted using the perturbation threshold for alignment. Formants computed over these intervals show initial deviation from control trajectories and then compensation that begins 60–90 ms after perturbation. Since jaw position does not recover its unperturbed trajectory, compensation presumably is effected through m...
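
The alignment-and-comparison step described above can be sketched with synthetic formant tracks; the trajectory shapes, the 25% recovery criterion, and all numeric values below are assumptions for illustration, not the study's data.

```python
# Illustrative sketch: align a perturbed token with a control token at the
# perturbation trigger and estimate when the formant trajectory returns
# toward the control trajectory. All signals are synthetic.
import numpy as np

fs = 200.0                                   # formant frame rate (Hz)
t = np.arange(0.0, 0.40, 1.0 / fs)           # 400 ms after the trigger

control_f1 = 700.0 - 300.0 * t               # hypothetical control F1 track (Hz)
deviation = 60.0 * np.exp(-((t - 0.05) / 0.04) ** 2)   # transient jaw effect
perturbed_f1 = control_f1 + deviation

error = np.abs(perturbed_f1 - control_f1)
peak_idx = int(np.argmax(error))
# "Compensation onset" here: first frame after the peak where the deviation
# has fallen below 25% of its maximum (criterion chosen for illustration).
recovered = np.nonzero(error[peak_idx:] < 0.25 * error[peak_idx])[0]
onset_ms = 1000.0 * (t[peak_idx + recovered[0]] if recovered.size else np.nan)
print(f"peak deviation {error[peak_idx]:.0f} Hz at {1000 * t[peak_idx]:.0f} ms; "
      f"compensation onset near {onset_ms:.0f} ms after perturbation")
```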

Collaboration


Dive into Majid Zandipour's collaborations.

Top Co-Authors

Joseph S. Perkell (Massachusetts Institute of Technology)
Melanie L. Matthies (Massachusetts Institute of Technology)
Mark Tiede (Massachusetts Institute of Technology)
Jennell Vick (Massachusetts Institute of Technology)
Ellen Stockmann (Massachusetts Institute of Technology)
Harlan Lane (Northeastern University)
Margaret Denny (Massachusetts Institute of Technology)
Yohan Payan (Centre national de la recherche scientifique)