Jason Lilley
University of Delaware
Publications
Featured research published by Jason Lilley.
Journal of the Acoustical Society of America | 2010
Laura Spinu; Jason Lilley
Several gender classification methods based on acoustic information were compared. The data came from 31 native speakers of Romanian (10 males, 21 females). A subset of fricatives and vowels (7348 tokens) was divided by hidden Markov model training into three acoustically uniform regions. For each region, (a) a set of cepstral coefficients, specifically c0–c4, and (b) two sets of spectral moments, specifically bark-transformed and linear moments 1–4 plus rms, were extracted. The acoustic data were then used in a linear discriminant analysis to classify the tokens by gender. The findings show that cepstral coefficients perform better than spectral moments in gender classification, and that bark-transformed moments are more successful than linear moments. The overall correct classification was 90% for the cepstral analysis, 78% for bark spectral moments, and 72% for linear moments. When the data from fricatives and vowels were examined separately, it was found that in all cases the vowel information yielded more acc...
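A minimal sketch of the classification step the abstract describes: linear discriminant analysis over per-token acoustic features. The feature extraction and the HMM region segmentation are assumed done upstream; the placeholder data, the 15-dimensional layout (c0–c4 for each of three regions), and the 10-fold cross-validation scheme are illustrative assumptions, not details stated in the abstract.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(7348, 15))    # placeholder: 5 cepstral coefficients x 3 regions per token
y = rng.integers(0, 2, size=7348)  # placeholder labels: 0 = male, 1 = female

# LDA classifies each token by gender from its acoustic feature vector
lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=10)  # cross-validated accuracy (assumed scheme)
print(f"mean correct classification: {scores.mean():.0%}")
```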
Journal of the Acoustical Society of America | 2016
Jason Lilley; Laura Spinu
We use a classification tool previously tested on Romanian fricatives to categorize the front (non-sibilant) fricatives of English by place of articulation. Labiodental and interdental fricatives are difficult to distinguish acoustically, posing problems even for human perception. Prior classification work with English front fricatives has not been very successful with this contrast, with correct classification rates ranging from 40 to 60%. The feature set we use for coding the acoustic properties of the fricatives and their following vowels comprises the first six cepstral coefficients (c0–c5). The acoustic features are measured at 10-ms intervals across each segment; the measures obtained are then binned into three contiguous intervals for both the fricative and the vowel, representing the onset, steady state, and offset of each segment. The boundaries between regions are set by using a hidden Markov model to determine three internally uniform regions with respect to their acoustic properties. The mean v...
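A sketch of the binning step: frame-level cepstra sampled every 10 ms are collapsed into three contiguous intervals (onset, steady state, offset) and averaged. The abstract sets the interval boundaries with a trained HMM; here equal thirds stand in for those boundaries, purely for illustration.

```python
import numpy as np

def bin_into_regions(frames: np.ndarray, n_regions: int = 3) -> np.ndarray:
    """frames: (n_frames, n_coeffs) array of per-frame cepstral coefficients.
    Returns an (n_regions, n_coeffs) array of region means. Equal-width
    bins approximate the HMM-determined boundaries used in the paper."""
    boundaries = np.linspace(0, len(frames), n_regions + 1).astype(int)
    return np.stack([frames[a:b].mean(axis=0)
                     for a, b in zip(boundaries[:-1], boundaries[1:])])

# e.g. a 120-ms fricative -> 12 frames of c0-c5 -> one 3 x 6 feature block
frames = np.random.default_rng(1).normal(size=(12, 6))
print(bin_into_regions(frames).shape)  # (3, 6)
```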
Journal of the Acoustical Society of America | 2010
Jason Lilley; Justin M. Aronoff; Sigfrid D. Soli; Timothy Bunnell; Ivan Pal
The hearing in noise test (HINT) is widely used in clinical settings to measure patients’ speech recognition thresholds (SRTs). The patient repeats 20 sentences presented at signal-to-noise ratios (SNRs) that adapt to the patient’s responses throughout the test, so each response must be scored by a human judge before the next sentence is presented. An utterance verification engine (UVE) would remove both the need for a human judge and the subjectivity of human judgments. A total of 2000 HINT responses from 25 listeners were used to develop a UVE based on hidden Markov models (HMMs), a lexicon of observed and likely words, and a grammar that generates utterances classifiable as correct or incorrect. An evaluation study is in progress with a planned enrollment of 25 normal-hearing listeners. The HMMs, lexicon, and grammar are updated after every five listeners. In results from the ten listeners tested to date, scoring accuracy averaged 88% per subject (range 81%–...
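A minimal sketch of the adaptive staircase that drives such a test: the SNR drops after a correct response and rises after an incorrect one, and the SRT is estimated from the presentation levels. The step size, starting level, and averaging rule here are illustrative assumptions, not the clinical HINT protocol.

```python
from statistics import mean

def run_adaptive_track(score_response, n_sentences=20, start_snr=0.0, step=2.0):
    """score_response(i, snr) -> True if sentence i was repeated correctly.
    In the automated setup described above, that judgment would come from
    the utterance verification engine rather than a human judge."""
    snr, track = start_snr, []
    for i in range(n_sentences):
        track.append(snr)
        snr += -step if score_response(i, snr) else step  # adapt SNR to the response
    return mean(track[4:])  # SRT estimate: mean SNR after the initial sentences

# e.g. a simulated listener who is correct whenever the SNR exceeds -3 dB
print(run_adaptive_track(lambda i, snr: snr > -3.0))
```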
Journal of the Acoustical Society of America | 2009
Laura Spinu; Jason Lilley; Irene Vogel; H. Timothy Bunnell
Hidden Markov models (HMMs) with two types of acoustic features were used to classify fricatives by place of articulation and palatalization status. The data were recordings of 31 native speakers of Romanian who produced a total of 3674 fricatives. Segments from four places (labial, alveolar, postalveolar, and dorsal) were examined, each of which appeared as plain and palatalized (with a palatal secondary articulation). Both the fricatives and the preceding vowels were divided by HMM training into three acoustically uniform regions, corresponding to the three states of the HMMs. Separate sets of monophone HMMs were trained using (a) the first four spectral moments plus rms amplitude and (b) the first five Bark-cepstral coefficients. Generally, the first and second regions/states of the fricatives were more important in classifying the segments by place, while the third state contributed more to the classification by palatalization status. The success of the classification depended on the specific co...
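A sketch of feature set (a): the first four spectral moments plus rms amplitude, computed by treating the magnitude spectrum of a frame as a distribution over frequency. The Bark-cepstral variant and the HMM training are out of scope here; the windowing and FFT details are illustrative assumptions.

```python
import numpy as np

def spectral_moments(signal: np.ndarray, sr: int) -> np.ndarray:
    """Return [center of gravity, std. deviation, skewness, kurtosis, rms]
    for one windowed frame of audio sampled at sr Hz."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    p = spectrum / spectrum.sum()              # normalize to a distribution
    m = (freqs * p).sum()                      # 1st moment: center of gravity
    var = ((freqs - m) ** 2 * p).sum()         # 2nd moment: variance
    skew = ((freqs - m) ** 3 * p).sum() / var ** 1.5
    kurt = ((freqs - m) ** 4 * p).sum() / var ** 2 - 3.0
    rms = np.sqrt(np.mean(signal ** 2))        # rms amplitude
    return np.array([m, np.sqrt(var), skew, kurt, rms])

frame = np.random.default_rng(2).normal(size=512)  # placeholder frame of noise
print(spectral_moments(frame, sr=44100))
```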
SSW | 2007
H. Timothy Bunnell; Jason Lilley
Conference of the International Speech Communication Association | 2008
H. Timothy Bunnell; Jason Lilley
Conference of the International Speech Communication Association | 2017
H. Timothy Bunnell; Jason Lilley; Kathleen McGrath
Conference of the International Speech Communication Association | 2014
Jason Lilley; Susan Nittrouer; H. Timothy Bunnell
Conference of the International Speech Communication Association | 2014
Jason Lilley; James J. Mahshie; H. Timothy Bunnell
Archive | 1998
H. Timothy Bunnell; Jane McNicholas; James B. Polikoff; Jason Lilley; George Oikonomou