Publication


Featured research published by Corine Bickley.


SSW | 1997

A Framework for Synthesis of Segments Based on Pseudoarticulatory Parameters

Corine Bickley; Kenneth N. Stevens; David R. Williams

Procedures are described for rule-based synthesis of segmental aspects of an utterance using a formant synthesizer controlled by high-level parameters. The rules for synthesis of consonants are described in terms of: (1) the formation and release of consonantal constrictions; and (2) the actions of the secondary articulators of the glottis, the velopharyngeal opening, and active pharyngeal expansion or contraction. Examples of synthesis based in part on these rules are given. Advantages of using high-level parameters for rule-based synthesis are discussed.
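
To make the idea of high-level parameter control concrete, here is a minimal sketch, not taken from the paper, of how a consonantal constriction and one secondary articulator might be specified as piecewise-linear area trajectories and sampled over time; the parameter names, breakpoint times, and area values are all assumptions chosen for illustration.

    # Illustrative only: piecewise-linear trajectories for two hypothetical
    # high-level parameters (areas in mm^2, time in ms). Names and values
    # are assumptions, not taken from the paper.

    def trajectory(breakpoints, t):
        """Linearly interpolate a parameter value at time t from
        (time, value) breakpoints sorted by time."""
        for (t0, v0), (t1, v1) in zip(breakpoints, breakpoints[1:]):
            if t0 <= t <= t1:
                return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
        return breakpoints[-1][1]

    # Formation, hold, and release of a consonantal constriction (e.g., a stop),
    # with the velopharyngeal port held closed throughout (a non-nasal segment).
    constriction_area = [(0, 100), (50, 0), (120, 0), (160, 100)]
    velopharyngeal_area = [(0, 0), (160, 0)]

    for t in range(0, 161, 20):
        ac = trajectory(constriction_area, t)
        an = trajectory(velopharyngeal_area, t)
        print(f"t={t:3d} ms  constriction area={ac:6.1f}  velopharyngeal area={an:4.1f}")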


Journal of the Acoustical Society of America | 2008

An acoustic comparison of nonlinguistic sounds to sentences spoken in American English.

Corine Bickley; Yell Inverso

Acoustic characteristics of nonlinguistic (nonspeech) sounds (NLSs) were measured for duration and spectral variation, and compared to acoustic characteristics of spoken sentences (TIMIT database). The NLSs included samples produced by animal, human, mechanical, and natural sources. The acoustic comparison examined stoplike onsets, fricativelike intervals, vowel‐like intervals, and syllablelike variations in amplitude. The NLSs were identified by two groups of listeners: listeners with normal hearing and users of cochlear implants. Results of the listening tests have been reported previously by Inverso et al. (2007) and Inverso (2008). All of the NLSs were identified accurately by listeners with normal hearing, but not by the users of cochlear implants. The current analysis focuses on the ways in which the NLSs are similar to, and other ways in which they are different from, sentences spoken by a variety of talkers. It was found that speechlike variation in amplitude, both in terms of duration and event ons...
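
As a rough illustration of two of the measurements named above, the sketch below computes overall duration and a short-time amplitude envelope, then counts envelope peaks as a crude proxy for syllable-like amplitude variation; the 20-ms window, the peak criterion, and the function names are assumptions, not the procedure used in the study.

    # Rough illustration only; x is a mono waveform (NumPy array), sr its
    # sampling rate. Analysis choices here are assumptions for the example.
    import numpy as np

    def duration_seconds(x, sr):
        return len(x) / sr

    def amplitude_envelope(x, sr, win_ms=20):
        """Short-time RMS amplitude in non-overlapping windows."""
        win = max(1, int(sr * win_ms / 1000))
        n = len(x) // win
        frames = x[: n * win].reshape(n, win)
        return np.sqrt((frames ** 2).mean(axis=1))

    def envelope_peak_count(env, floor=0.1):
        """Count local maxima above a relative floor as a crude proxy for
        syllable-like amplitude variation."""
        env = env / (env.max() + 1e-12)
        peaks = (env[1:-1] > env[:-2]) & (env[1:-1] > env[2:]) & (env[1:-1] > floor)
        return int(peaks.sum())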


Journal of the Acoustical Society of America | 1996

Multilevel approach to rule‐based speech synthesis using quasiarticulatory parameters

David R. Williams; Kenneth N. Stevens; Eric Carlson; Corine Bickley

Rules are being developed for controlling a formant synthesizer (HLsyn) in which ten higher‐level parameters control changes in vocal‐tract natural frequencies, areas of four orifices, and other aspects of articulation. The input to the rules is a phonetic sequence labeled in terms of distinctive features. The rules operate in four stages. First, the time locations of a sequence of landmarks are established. The landmarks are of three types: locations of vowel nuclei and offglides, locations of amplitude minima for glides, and times of formation and release of consonantal constrictions. Second, formant trajectories between vowel landmarks are derived. Third, these vowel‐to‐vowel formant movements are modified by the effects of consonantal constrictions, working from left to right. Fourth, the effects of secondary consonant articulations are added, including introduction of a velopharyngeal opening, spreading or constricting glottal movements, and influences of segmentally related vocal fold stiffening and...
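
The four-stage organization lends itself to a simple pipeline structure. The skeleton below mirrors the stages named in the abstract; the function names, arguments, and elided bodies are assumptions, not the authors' implementation.

    # Structure-only sketch of the four stages described above.

    def place_landmarks(feature_labeled_segments):
        """Stage 1: assign times to vowel nuclei and offglides, glide amplitude
        minima, and the formation/release of consonantal constrictions."""
        ...

    def vowel_formant_trajectories(landmarks):
        """Stage 2: derive formant trajectories between successive vowel landmarks."""
        ...

    def apply_consonant_constrictions(trajectories, landmarks):
        """Stage 3: modify the vowel-to-vowel formant movements for each
        consonantal constriction, working left to right."""
        ...

    def add_secondary_articulations(trajectories, landmarks):
        """Stage 4: add velopharyngeal opening, glottal spreading or constriction,
        and segmentally related vocal-fold adjustments."""
        ...

    def hl_parameter_tracks(feature_labeled_segments):
        landmarks = place_landmarks(feature_labeled_segments)
        trajectories = vowel_formant_trajectories(landmarks)
        trajectories = apply_consonant_constrictions(trajectories, landmarks)
        return add_secondary_articulations(trajectories, landmarks)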


Journal of the Acoustical Society of America | 1994

Synthesis of consonant sequences using a Klatt synthesizer with higher‐level control

Corine Bickley; Kenneth N. Stevens; David R. Williams

The use of higher‐level (HL), quasi‐articulatory parameters to control a formant‐based speech synthesizer has the potential advantage of greatly simplifying the array of parameters needed to synthesize consonants. The principal parameters in this array are the time‐varying cross‐sectional areas of the constriction formed by the primary articulator, and of the orifice formed by the secondary articulator, i.e., the glottis or the velopharyngeal port. This paper describes rule‐driven procedures for synthesizing consonant sequences that require changes in the primary articulator or movements of the secondary articulator. Examples include: (1) clusters such as /pl/ or /tw/ where the HL parameters are automatically mapped into frication noise in the coarticulated liquid or glide; (2) nasal‐obstruent sequences in which velopharyngeal closure automatically results in an intraoral pressure increase; (3) sequences of two stop consonants in which the two articulatory closures and releases are manifested acoustically...


Archive | 2012

Analysis of Responses to Lipreading Prompts as a Window to Deaf Students’ Writing Strategies

Corine Bickley; Mary June Moseley; Anna Stansky

The responses of deaf students to lipreading prompts were analyzed for strengths and weaknesses in mastery of written English. We identified the four VL2 participants who wrote the most sentences and then analyzed those sentences with respect to the vocabulary and syntactic structures written. We observed four trends in the responses of these four students: (1) function words were usually used correctly; (2) syntax errors were similar to those reported in the literature in other writing tasks; (3) semantically appropriate responses were constructed even when the student did not lipread the stimuli correctly; and (4) words that matched the syllabic structure of the stimuli were written even though the response words did not match the stimulus words. We also reviewed the VL2 demographic questionnaire for characteristics common to the four writers we analyzed. We found that these students were all encouraged by parents to read and write English. We concluded that existing written material can provide useful information concerning mastery of written English.


Journal of the Acoustical Society of America | 2009

Challenges in evaluating the intelligibility of text‐to‐speech.

Ann K. Syrdal; Murray F. Spiegel; Deborah Rekart; Susan R. Hertz; Thomas D. Carrell; H. Timothy Bunnell; Corine Bickley

Text‐to‐speech (TTS) technology imposes different constraints on intelligibility evaluation than those sufficient for other speech communication systems. For example, the newly revised standard S.2‐2009 explicitly excludes TTS from the speech communication systems it covers. Since there is no current standard appropriate for evaluating TTS intelligibility, the ASA Standards Bioacoustics (S3) working group on Text‐to‐Speech Technology (WG91) was formed with the initial goal of developing such a standard. We describe several ways in which standard methods of testing speech intelligibility are unsuitable for TTS technology and outline our approach to overcoming these limitations. We present an overview of our proposed standard, which is currently nearing its final draft stages.


Journal of the Acoustical Society of America | 2006

Acoustical analysis of nonlinguistic sounds

Yell Inverso; Corine Bickley; Charles J. Limb

The perception of speech is undisputedly an important goal for cochlear implantation; however, the reception of nonlinguistic sounds (NLS) is also important. Nonlinguistic sounds matter for safety and for a person’s sense of connection to, and welfare in, the environment. NLS are different from speech, and the perception and acoustic characteristics of NLS have not been studied adequately to allow clinicians to fit cochlear implants (CIs) for optimal recognition of both speech and NLS. The specific categories of nonlinguistic sounds to be evaluated in this study are (1) human vocal/nonverbal, such as a baby crying or a person coughing; (2) mechanical/alerting, such as a telephone or an alarm clock buzzing; (3) nature/inanimate, such as weather sounds; (4) animal/insect, such as a dog barking; and (5) musical instruments, such as the strum of a guitar. In this ongoing study, 50 listeners (half adults with normal hearing and half adult postlingually deafened CI users) are being evalu...


Journal of the Acoustical Society of America | 2006

Perception of synthesized syllables and sentences by listeners with hearing loss

Corine Bickley; Amanda Dove; Andrea Liacouras; Jolene Mancini

The goal of this project is to create a set of synthesized speech materials that yields listening results equivalent to those that would be obtained using recorded naturally spoken materials, such as the CUNY Nonsense Syllable Test and the CASPERsent set of topic‐specific sentences. One research interest of the RERC‐HE is the following: Can synthesized speech be used as a substitute for recorded human speech in aural rehabilitation and in hearing aid research? The listeners in this study are adults, both with normal hearing and with hearing loss. We have tested two listeners with normal hearing and four with hearing loss using nonsense syllables synthesized with DECtalk. Their phoneme‐identification errors followed expected patterns when subjects’ responses were analyzed with respect to their audiometric profiles and the acoustic characteristics of the synthesized phonemes. The same group of listeners is now being tested using sentences from the CASPERsent lists synthesized with two co...


Journal of the Acoustical Society of America | 1999

Does "one hundred fifty‐five" mean 155 or 100,55 or 100,50,5?

Corine Bickley; Lennart Nord

The goal of this study is to determine what acoustic cues, if any, differentiate the spoken phrase that means 155 from one that means 100,55, or from others that mean 150,5 or 1,155 or 100,50,5. The motivation for this study comes from an issue that arose in designing a speech‐user interface, that is, a user interface controlled by a user’s speech via a speech recognizer. Users of a speech‐user interface for programs that require numerical input (engineering design systems, accounting packages, etc.) need to speak sequences of numbers such as 100,50,5 (for example, to specify a three‐dimensional coordinate location) as well as 100,55 (for a two‐dimensional coordinate location). In this study, speakers produced utterances of one‐, two‐, and three‐number sequences in two languages: Swedish and English. These productions were analyzed acoustically in terms of fundamental frequency, syllable duration, and amplitude. Listeners identified for each production whether the speaker intended to specify one, two, or ...
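
The segmentation ambiguity can be made concrete with a small, self-contained example that groups the same word string into the five interpretations listed above; the study itself concerns acoustic cues rather than parsing, so this is purely illustrative and the tiny vocabulary is an assumption.

    # Purely illustrative: the same word string grouped into the five
    # interpretations mentioned above (155; 100,55; 150,5; 1,155; 100,50,5).

    UNITS = {"one": 1, "five": 5}
    TENS = {"fifty": 50}

    def value(group):
        """Evaluate one contiguous group of number words (tiny vocabulary)."""
        total = 0
        for w in group:
            if w == "hundred":
                total = (total or 1) * 100
            else:
                total += UNITS.get(w, 0) + TENS.get(w, 0)
        return total

    words = ["one", "hundred", "fifty", "five"]
    groupings = [
        [words],                              # one number:    155
        [words[:2], words[2:]],               # two numbers:   100, 55
        [words[:3], words[3:]],               # two numbers:   150, 5
        [words[:1], words[1:]],               # two numbers:   1, 155
        [words[:2], words[2:3], words[3:]],   # three numbers: 100, 50, 5
    ]
    for grouping in groupings:
        print([value(g) for g in grouping])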


Journal of the Acoustical Society of America | 1995

Analysis and perception of voice similarities among family members

Rachel E. Kushner; Corine Bickley

The similarities of voice between members of the same family were found to be high when measured both perceptually and acoustically. The recordings of nine subjects (two mothers, two daughters, two sisters, two brothers, and an unrelated male) were made using sentences of different stress and reiterant syllable combinations. The recordings were paired into related and unrelated sets. Unfamiliar listeners were instructed to listen to the recordings and rate their similarity on a numerical scale. Results showed that paired voices of those who were related received a significant number of high scores as opposed to the voice pairs of individuals who were unrelated. It was also found that voice similarity was more easily detected when the listeners heard whole sentences as opposed to reiterant syllables and individual words. Other influential factors associated with high scores (similar‐sounding voices) were equal prosody and volume. Related pairs who had similar prosody scored higher than unrelated pairs with s...

Collaboration


Dive into Corine Bickley's collaborations.

Top Co-Authors

Kenneth N. Stevens
Massachusetts Institute of Technology

H. Timothy Bunnell
Alfred I. duPont Hospital for Children