
Publication


Featured research published by Jean E. Andruski.


Journal of the Acoustical Society of America | 1999

Point vowels in Japanese mothers’ speech to infants and adults

Jean E. Andruski; Patricia K. Kuhl; Akiko Hayashi

American, Russian, and Swedish mothers produce acoustically more extreme point vowels (/i/, /u/, and /a/) when speaking to their infants than when speaking to another adult [Kuhl et al., Science 277, 684-686 (1997)]. This study examines the three point vowels in Japanese mothers' speech, and compares the acoustic structure of infant-directed (ID) and adult-directed (AD) tokens. Three target words containing /i/, /u/, and /a/ (bi:zu, batto, bu:tsu = beads, bat, boots) were recorded while mothers conversed with another native-speaking adult, and with their infants, aged either 5½ or 8½ months. F1, F2, and F0 were measured at vowel onset, center, and offset. Acoustic results will be compared for AD and ID speech, and expansion of the vowel space in Japanese mothers' speech will be examined. [Work supported by NIH HD35465-01S1.]


Journal of the Acoustical Society of America | 2005

Paired variability indices in assessing speech rhythm in Spanish/English bilingual language acquisition

Richard Work; Jean E. Andruski; Eugenia Casielles; Sahyang Kim; Geoff Nathan

Traditionally, English is classified as a stress-timed language while Spanish is classified as syllable-timed. Examining the contrasting development of rhythmic patterns in bilingual first language acquisition should provide information on how this differentiation takes place. As part of a longitudinal study, speech samples were taken of a Spanish/English bilingual child of Argentinean parents living in the Midwestern United States between the ages of 1;8 and 3;2. Spanish is spoken at home, and English input comes primarily from an English-language day care the child attends 5 days a week. The parents act as interlocutors for the Spanish recordings, while a native speaker interacts with the child for the English recordings. Following the work of Grabe, Post and Watson (1999) and Grabe and Low (2002), a normalized Pairwise Variability Index (PVI) is used which compares, in utterances of minimally four syllables, the durations of vocalic intervals in successive syllables. Comparisons are then made between the rhythmic pat...
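The normalized PVI named in this abstract can be sketched as follows. This is a minimal illustration assuming vocalic interval durations have already been segmented; the function name is ours, and the formula follows Grabe and Low's (2002) published definition:

```python
def npvi(durations):
    """Normalized Pairwise Variability Index (Grabe & Low, 2002).

    `durations` is a sequence of successive vocalic interval durations
    (any consistent unit). Each adjacent pair contributes the absolute
    difference divided by the pair's mean duration; the average over
    all pairs is scaled by 100.
    """
    if len(durations) < 2:
        raise ValueError("nPVI needs at least two intervals")
    pairs = zip(durations, durations[1:])
    return 100 * sum(abs(a - b) / ((a + b) / 2) for a, b in pairs) / (len(durations) - 1)
```

A perfectly even interval sequence yields 0, while strong long/short alternation of the kind associated with stress timing yields higher values, which is what makes the index useful for comparing the child's emerging English and Spanish rhythm.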


Journal of the Acoustical Society of America | 2007

Vowel identification and vowel space characteristics

Amy T. Neel; Jean E. Andruski

Acoustic characteristics of ten vowels produced by 45 men and 48 women from the Hillenbrand et al. (1995) study were correlated with identification accuracy. Global (mean f0, F1 and F2, duration, and amount of formant movement) and distinctive measures (vowel space area, mean distance among vowels, f0, F1 and F2 ranges, duration ratio between long and short vowels, and dynamic ratio between dynamic and static vowels) were used to predict identification scores. Global and distinctive measures accounted for less than one‐fourth of variance in identification scores: vowel space area alone accounted for 9% to 12% of variance. Differences in vowel identification across talkers were largely due to poor identification of two spectrally similar vowel pairs /ae/‐/eh/ and /uh/‐/ah/. Results of acoustic analysis and goodness ratings for well‐identified and poorly identified versions of these vowels will be presented. Preliminary analysis revealed that well‐identified vowels differed from poorly identified tokens not...
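Vowel space area, the strongest single predictor reported above, is conventionally computed as the area of the polygon whose vertices are each vowel's mean (F1, F2) values. A hedged sketch using the shoelace formula; the example formant values below are illustrative, not data from the study:

```python
def vowel_space_area(vertices):
    """Area of the polygon formed by mean (F1, F2) points, via the
    shoelace formula. `vertices` is a list of (F1, F2) tuples given in
    order around the polygon; the result is in Hz^2 for inputs in Hz."""
    n = len(vertices)
    total = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

# Illustrative corner-vowel means (Hz), not values from the study:
# /i/ = (300, 2300), /a/ = (750, 1300), /u/ = (350, 900)
area = vowel_space_area([(300, 2300), (750, 1300), (350, 900)])
```

Talkers whose corner vowels are more peripheral produce a larger polygon, which is the sense in which vowel space area can predict identification accuracy.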


Journal of the Acoustical Society of America | 2004

Tone clarity in mixed pitch/phonation type tones

Jean E. Andruski

Lexical tone identity is often determined by a complex of acoustic cues. In Green Mong, a Hmong‐Mien language of Southeast Asia, a small subset of tones is characterized by phonation type in addition to pitch height, pitch contour, and duration, which characterize the remaining tones of the language. In tones that incorporate multiple cues to tonal identity, what makes a tone clear, or easy to recognize? This study examines acoustic and perceptual data to address this question. Six native speakers of Green Mong were asked to produce 132 phonological CV words in sentence context, using a conversational speaking style. Seventeen native speakers of the language were then asked to categorize three tones which have similar falling contours, but are differentiated by phonation type (breathy, creaky, and modal). Tokens that were correctly identified by 100% of the listeners were compared with tokens that were relatively poorly identified. Data indicate that the breathy‐ and creaky‐voiced tones are less susceptible to identification errors than the modal‐voiced tone. However, the clearest tokens of the three tones are also differentiated by details of pitch contour shape, and by duration. Similarities and differences between acoustic cue values for the best and worst tokens will be discussed.


Journal of the Acoustical Society of America | 1999

Russian vowels in mothers’ speech to infants and adults

Jean E. Andruski; Patricia K. Kuhl; Ludmilla A. Chistovich; Elena V. Kozhevnikova; V. L. Ryskina; Elvira I. Stolyarova

Cross-linguistically, when mothers address their infants they produce acoustically more extreme point vowels (/i/, /u/, and /a/) than they produce when speaking to another adult [Kuhl et al., Science 277, 684-686 (1997)]. This study examines three nonpoint vowels in Russian (/e/, /o/, /ɨ/) and compares their acoustics in infant-directed (ID) and adult-directed (AD) speech with the acoustics of Russian point vowels in AD and ID speech. Six target words containing the nonpoint vowels in stressed syllables were recorded while ten mothers conversed with their infant and another adult. F1, F2, and F0 were measured at vowel onset, center, and offset. As with the point vowels, the acoustic structure of all three vowels differs significantly in ID and AD speech. F2 tends to move to an acoustically more extreme position in each case. In addition, nonpoint ID vowels show more vowel-inherent formant movement across the course of the vowel. Formant movement may provide an important cue to vowel identity, and its exa...


Journal of the Acoustical Society of America | 1996

The acoustic structure of /i/, /u/, and /a/ in mothers’ speech to infants and adults

Jean E. Andruski; Patricia K. Kuhl

Research has shown that exposure to a specific language alters infants’ perception of vowel sounds by 6 months of age. Language spoken to infants may exert an important influence on the development of language‐specific patterns of vowel perception. This study compares the acoustic structure of vowels produced by ten American mothers in conversation with their infant and another adult. In both conversations, mothers were instructed to use seven words which contained either /i/, /u/, or /a/ (‘‘bead, sheep, boot, shoe, pot, sock’’ and ‘‘top’’). Results indicate that mothers consistently increased the degree of acoustic separation between vowel categories in their speech to infants, in comparison with their speech to adults. It is speculated that the acoustic structure of vowels in motherese speech contributes to infants’ acquisition of native‐language vowel categories by increasing between‐category acoustic differences and by highlighting the features that distinguish these vowels. Currently an attempt is be...


Journal of the Acoustical Society of America | 2013

Acoustic measurement of word-initial stop consonants in English-French interlingual homophones

Paula L. Castonguay; Jean E. Andruski

The purpose of the present study is to examine word-initial stop consonants of Canadian English (CE) and Canadian French (CF) interlingual homophones in order to describe how they differ in their acoustic properties. Interlingual homophones (IH) are words across languages that are phonemically identical but phonetically and semantically different, for example, English two /tu/ and French tout /tu/. Even though they are deemed phonemically identical, at the acoustic level they may be quite different. In the current study, Canadian bilingual English and French speakers were asked to produce interlingual homophones embedded in carrier phrases and in isolation. Voice onset time, relative burst intensity, and burst spectral properties of the IH words were measured and compared within and across languages. The acoustic measurements obtained will be used (1) to make predictions about which acoustic features may provide cues to language identity, and (2) to create stop tokens for a Goodness Rating study. ...


Journal of the Acoustical Society of America | 2010

Acoustic measurement of Canadian English and Canadian French interlingual homophones.

Paula L. Castonguay; Jean E. Andruski

The purpose of this study was to examine how interlingual homophones of Canadian English (CE) and Canadian French (CF) differ along acoustic dimensions as a prelude to perceptual tests. Two male Canadian monolingual speakers of English and French produced sentences in which interlingual homophones were embedded (e.g., English two /tʰu/ and French tout /tu/, meaning all). Voice onset times (VOTs), formant frequencies (F1 and F2), and vowel-inherent spectral changes (VISCs) of each respective speaker were examined. First, it is expected that the CE monolingual speaker will have aspirated (long-lag) voiceless stops, while the CF monolingual speaker will have unaspirated (short-lag) voiceless stops regardless of the context. Second, it is expected that CF vowels will have lower F1 values for lax vowels and will be produced more peripherally (with respect to F2 values) than CE vowels (i.e., French front vowels will be more advanced, and French back vowels will be more posterior). Lastly, it is expected that CF vowels will have le...
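The long-lag versus short-lag comparison above rests on a simple measurement: VOT is the interval from the stop release (burst) to the onset of voicing. A minimal sketch, assuming the burst and voicing onset have already been located in the waveform; the 30 ms category boundary is an illustrative convention, not a threshold taken from this study:

```python
def vot_ms(release_time_s, voicing_onset_s):
    """Voice onset time in milliseconds: voicing onset minus stop
    release (burst). Positive values mean voicing lags the release."""
    return (voicing_onset_s - release_time_s) * 1000.0

def lag_category(vot, threshold_ms=30.0):
    """Crude long-lag vs short-lag split. The ~30 ms boundary here is
    an illustrative assumption, not a value from the study."""
    return "long-lag" if vot >= threshold_ms else "short-lag"
```

Under this split, an aspirated English /tʰ/ with voicing beginning 75 ms after the burst falls in the long-lag category, while an unaspirated French /t/ with voicing beginning about 12 ms after the burst falls in the short-lag category.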


Journal of the Acoustical Society of America | 2008

A comparison of coarticulation in conversational and clear speech

Jean E. Andruski

Coarticulation may either hinder speech perception by increasing variability and altering the distinctive features of speech sounds, or enhance speech perception by providing additional cues to nearby sounds and spreading these cues out over time. This study examines the clear and conversational speech of 16 English speakers (8 females and 8 males) to compare the amount of coarticulation in clear and conversational speech when distinctive features are changed, as opposed to when nondistinctive features are changed. Devoicing of voiced fricatives and voicing of /t/ are investigated as examples of distinctive feature changes. Vowel nasalization and lip rounding in /s/ and /z/ are examined as examples of nondistinctive feature changes. Percentage of voicing during frication noise is used as a measure of fricative devoicing; voice onset time and percentage of voicing during the closure are used as measures of voicing in /t/; amplitude of the nasal formant and nasal formant onset time are used as measures of v...


Journal of the Acoustical Society of America | 2007

The value of F0, F3, and F4 in identifying disguised voices

Jean E. Andruski; Nikki Brugnone; Aaron Meyers

This study examines the value of F0, F3, and F4 for identifying speakers from a group of ten male and female speakers when the speakers deliberately changed their F0 and/or their articulatory patterns. The speakers were recorded producing a short passage in their normal speaking voice, a lower than normal speaking voice, and using a vocal disguise of their choice. Although F1 and F2 are strongly affected by articulation, the higher formants show relatively little effect of articulation. This relative stability may make them more useful than F1 and F2 for examining speaker identity. While formants reflect vocal tract cavity dimensions, F0 reflects a different aspect of speaker identity, namely the mass and stiffness of the speaker’s vocal folds. Like F3 and F4, F0 shows relatively little effect from articulation. However, speakers can voluntarily shift F0 into a range that is different from what they normally use. Measurements of F0, F3, and F4 were taken on selected stressed vowels from each recording con...

Collaboration


Dive into Jean E. Andruski's collaborations.

Top Co-Authors

Amy T. Neel

University of New Mexico


Akiko Hayashi

Tokyo Gakugei University
