
Publications


Featured research published by Stefan Frisch.


Archive | 1996

Similarity and frequency in phonology

Stefan Frisch

This thesis focuses upon parallels between phonology and phonological processing. I study phonological speech errors and a phonotactic dissimilarity constraint, demonstrating that they have analogous similarity and frequency effects. In addition, I show that abstract phonological constraints are influenced by the phonological encoding of lexical items. The results of this thesis are based on a metric of similarity computed using the representations of STRUCTURED SPECIFICATION (Broe 1993). This metric is quantitatively superior to traditional metrics of similarity which are based on feature counting. I also employ a probabilistic model of a gradient linguistic constraint which is based on categorical perception. In this model, the acceptability of a form is gradient, and acceptability is correlated with frequency. The most acceptable forms in a language are the most frequent ones. This constraint model provides a better fit to gradient phonotactic data than traditional categorical linguistic constraints. Together, the similarity metric and gradient constraint model demonstrate that statistical patterns in language can be relevant, principled, and formally modeled in linguistic theory. Using the gradient constraint model, I show that similarity effects in phonotactics are stronger word initially than later in the word. A parallel pattern is experimentally demonstrated for speech errors. I claim that the effect for speech errors follows from the fact that production of segmental material in a lexical item is inherently temporal. I argue that segmental information in lexical representations is sequentially accessed even for abstract phonological purposes, like phonotactics. The effects of word position on similarity in both speech production and phonotactics are accounted for in a connectionist model of lexical access, which does not differentiate the storage of a representation from its use.

Structured specification is incompatible with UNDERSPECIFICATION (Kiparsky 1982, Archangeli 1984). In underspecification, features are left blank in a linguistic representation to capture redundancy relationships and phonological markedness. I demonstrate that models of similarity in phonotactics and speech errors which use underspecification do not model the data as well as the similarity metric based on structured specification.
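A minimal sketch of a natural-class similarity metric in the spirit of the structured-specification approach described above: similarity is the number of natural classes two segments share, divided by the shared plus non-shared classes. The toy feature system and four-segment inventory below are invented for illustration; they are not the feature chart used in the thesis.

```python
from itertools import combinations

# Hypothetical toy feature specifications (illustrative, not the thesis's chart).
FEATURES = {
    "p": {"voice": "-", "place": "lab", "cont": "-"},
    "b": {"voice": "+", "place": "lab", "cont": "-"},
    "t": {"voice": "-", "place": "cor", "cont": "-"},
    "s": {"voice": "-", "place": "cor", "cont": "+"},
}

def natural_classes(inventory):
    """Every set of segments picked out by some conjunction of feature values."""
    feats = sorted({(f, v) for spec in inventory.values() for f, v in spec.items()})
    classes = set()
    for r in range(len(feats) + 1):
        for combo in combinations(feats, r):
            members = frozenset(seg for seg, spec in inventory.items()
                                if all(spec.get(f) == v for f, v in combo))
            if members:
                classes.add(members)
    return classes

def similarity(a, b, inventory=FEATURES):
    """Shared natural classes over shared plus non-shared natural classes."""
    classes = natural_classes(inventory)
    shared = sum(1 for c in classes if a in c and b in c)
    non_shared = sum(1 for c in classes if (a in c) != (b in c))
    return shared / (shared + non_shared)

print(round(similarity("p", "b"), 3))  # 0.429: differ only in voicing
print(round(similarity("p", "s"), 3))  # 0.25: differ in place and continuancy
```

Segments differing in a single feature share more natural classes, so the metric rates them as more similar than a feature-counting comparison of the same pairs would force.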


Ear and Hearing | 2009

Preattentive cortical-evoked responses to pure tones, harmonic tones, and speech: influence of music training.

Dee Adams Nikjeh; Jennifer J. Lister; Stefan Frisch

Objective: Cortical auditory evoked potentials, including mismatch negativity (MMN) and P3a to pure tones, harmonic complexes, and speech syllables, were examined across groups of trained musicians and nonmusicians. Because of the extensive formal and informal auditory training received by musicians throughout their lifespan, it was predicted that these electrophysiological indicators of preattentive pitch discrimination and involuntary attention change would distinguish musicians from nonmusicians and provide insight regarding the influence of auditory training and experience on central auditory function. Design: A total of 102 (67 trained musicians, 35 nonmusicians) right-handed young women with normal hearing participated in three auditory stimulus conditions: pure tones (25 musicians/15 nonmusicians), harmonic tones (42 musicians/20 nonmusicians), and speech syllables (26 musicians/15 nonmusicians). Pure tone and harmonic tone stimuli were presented in multideviant oddball paradigms designed to elicit MMN and P3a. Each paradigm included one standard and two infrequently occurring deviants. For the pure tone condition, the standard pure tone was 1000 Hz, and the two deviant tones differed in frequency from the standard by either 1.5% (1015 Hz) or 6% (1060 Hz). The harmonic tone complexes were digitally created and contained a fundamental frequency (F0) and three harmonics. The amplitude of each harmonic was divided by its harmonic number to create a natural amplitude contour in the frequency spectrum. The standard tone was G4 (F0 = 392 Hz), and the two deviant tones differed in fundamental frequency from the standard by 1.5% (F0 = 386 Hz) or 6% (F0 = 370 Hz). The fundamental frequencies of the harmonic tones occur within the average female vocal range. The third condition to elicit MMN and P3a was designed for the presentation of speech syllables (/ba/ and /da/) and was structured as a traditional oddball paradigm (one standard/one infrequent deviant). 
Each speech stimulus was presented as a standard and a deviant in separate blocks. P1-N1-P2 was elicited before each oddball task by presenting each auditory stimulus alone in single blocks. All cortical auditory evoked potentials were recorded in a passive listening condition. Results: Incidental findings revealed that musicians had longer P1 latencies for pure tones and smaller P1 amplitudes for harmonic tones than nonmusicians. There were no P1 group differences for speech stimuli. Musicians compared with nonmusicians had shorter MMN latencies for all deviances (harmonic tones, pure tones, and speech). Musicians had shorter P3a latencies to harmonic tones and speech but not to pure tones. MMN and P3a amplitude were modulated by deviant frequency but not by group membership. Conclusions: Formally trained musicians compared with nonmusicians showed more efficient neural detection of pure tones and harmonic tones; demonstrated superior auditory sensory-memory traces for acoustic features of pure tones, harmonic tones, and speech; and revealed enhanced sensitivity to acoustic changes of spectrally rich stimuli (i.e., harmonic tones and speech). Findings support a general influence of music training on central auditory function and illustrate experience-facilitated modulation of the auditory neural system.
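The multideviant paradigms above pair one frequent standard with infrequent deviants. A hedged sketch of such a sequence generator follows; the deviant probability and the no-consecutive-deviants constraint are assumptions for illustration, not the study's exact parameters.

```python
import random

def oddball_sequence(n_trials, standard, deviants, p_deviant=0.15, seed=1):
    """Build a stimulus list in which deviants never occur back to back."""
    rng = random.Random(seed)
    seq, prev_was_deviant = [], True   # force the run to open with a standard
    for _ in range(n_trials):
        if not prev_was_deviant and rng.random() < p_deviant:
            seq.append(rng.choice(deviants))   # infrequent deviant trial
            prev_was_deviant = True
        else:
            seq.append(standard)               # frequent standard trial
            prev_was_deviant = False
    return seq

# Pure-tone condition: 1000 Hz standard, 1.5% and 6% frequency deviants.
seq = oddball_sequence(400, "1000Hz", ["1015Hz", "1060Hz"])
print(seq.count("1000Hz") / len(seq))  # proportion of standard trials
```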


Archive | 1997

Synthesizing Allophonic Glottalization

Janet B. Pierrehumbert; Stefan Frisch

This chapter presents a method for synthesizing allophonic glottalization. The method is motivated by empirical studies of the phonological context for glottalization and of its acoustic consequences. A baseline study of production explored glottalization in two situations: (1) vowel-vowel hiatus across a word boundary, and (2) voiceless stops before sonorants. The study showed that allophonic glottalization depends on the segmental context, syllabic position, and phrasal prosody. Successful synthesis of contextually appropriate glottalization requires an architecture with a running window over a fully parsed phonological structure, or its effective equivalent. The signal coding used was based on the source model and cascade formant synthesis presented by [Kla87]. Synthesis of glottalization can be achieved by lowering the fundamental frequency (F0), keeping all other factors in formant synthesis constant. Thus, any synthesis procedure that has the ability to directly control F0 will be able to reproduce glottalization in a similar manner. For fully natural, theoretically correct synthesis, additional control parameters are needed to control the length of the glottal pulse and the spectral tilt.
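The core manipulation, lowering F0 in a glottalization window while holding other synthesis parameters fixed, can be sketched as below. This is an illustrative contour edit, not the chapter's Klatt-synthesizer implementation; the frame counts and the 60 Hz creaky-voice floor are assumptions.

```python
import numpy as np

def glottalize_f0(f0, start, end, floor=60.0):
    """Dip the F0 contour toward a creaky-voice floor over frames start..end."""
    out = f0.copy()
    n = end - start
    # Smooth dip: ramp down to the floor, then back up to the original contour.
    ramp = np.concatenate([np.linspace(1.0, 0.0, n // 2),
                           np.linspace(0.0, 1.0, n - n // 2)])
    out[start:end] = floor + ramp * (f0[start:end] - floor)
    return out

f0 = np.full(100, 200.0)           # flat 200 Hz contour, 100 frames
creaky = glottalize_f0(f0, 40, 60) # glottalization in frames 40..60
print(creaky.min())                # 60.0: the contour reaches the floor
```

Because only F0 is edited, any synthesizer with direct F0 control can apply the same manipulation, which is the point the chapter makes.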


Journal of the Acoustical Society of America | 2009

The relationship between pitch discrimination and vocal production: Comparison of vocal and instrumental musicians

Dee Adams Nikjeh; Jennifer J. Lister; Stefan Frisch

Auditory pitch discrimination and vocal pitch accuracy are fundamental abilities and essential skills of a professional singer; yet, the relationship between these abilities, particularly in trained vocal musicians, has not been the subject of much research. Difference limens for frequency (DLFs) and pitch production accuracy (PPA) were examined among 20 vocalists, 21 instrumentalists, and 21 nonmusicians. All were right-handed young adult females with normal hearing. Stimuli were harmonic tone complexes simulating piano tones and represented the mid-frequency of the untrained female vocal range, F0=261.63-392 Hz (C4-G4). DLFs were obtained by an adaptive psychophysical paradigm. Vocal pitch recordings were analyzed to determine PPA. Musicians demonstrated superior pitch discrimination and production accuracy compared to nonmusicians. These abilities did not distinguish instrumentalists and vocalists. DLF and PPA were significantly correlated with each other only for musicians with instrumental training; however, PPA was most consistent with minimal variance for vocalists. It would appear that a relationship between DLF and PPA develops with musical training, and these abilities can be differentially influenced by the type of specialty training.
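Difference limens for frequency are typically tracked adaptively. The sketch below implements a standard 2-down/1-up staircase (which converges near 70.7% correct); the step size, stopping rule, and simulated listener are assumptions for illustration, not the study's actual paradigm.

```python
def staircase(listener, start_delta=10.0, step=1.0, floor=0.1, reversals_needed=6):
    """Track the frequency difference (Hz) near 70.7% correct performance."""
    delta, streak, direction = start_delta, 0, 0
    reversals = []
    while len(reversals) < reversals_needed:
        if listener(delta):            # correct response
            streak += 1
            if streak == 2:            # two in a row -> make the task harder
                streak = 0
                if direction == +1:
                    reversals.append(delta)
                direction = -1
                delta = max(floor, delta - step)
        else:                          # wrong -> make the task easier
            streak = 0
            if direction == -1:
                reversals.append(delta)
            direction = +1
            delta += step
    return sum(reversals) / len(reversals)

# Noiseless simulated listener whose true threshold is 3 Hz.
dlf = staircase(lambda d: d >= 3.0)
print(dlf)  # 2.5: the track oscillates between 2 and 3 Hz
```

Averaging the reversal points gives the DLF estimate; with a noiseless listener the track settles into a fixed oscillation around the true threshold.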


Lingua | 1997

The change in negation in Middle English: A NEGP licensing account

Stefan Frisch

During the Middle English period (1150–1500 AD), the sentential negator of English changed from the preverbal ne to the postverbal not. During the transition, there was frequent use of a bipartite ne … not negator. This paper presents a detailed quantitative study of the change. I show that as not changes from sentence adverb, with a distribution parallel to never, to sentential negator, ne is lost. However, the rates of change in the use of not and ne are significantly different, indicating that these two forms are not in direct, grammatical competition. An account of the indirect connection between ne and not is given using a licensing condition for the projection of negation, NEGP. I also show that the overlapping use of the two systems of negation, ne … not, does not constitute an independent system. The change in negators shows that both functional and structural considerations are relevant in properly modeling syntactic change, and thus that diachronic change reflects aspects of competence and performance.


Ear and Hearing | 2000

Modeling Spoken Word Recognition Performance by Pediatric Cochlear Implant Users using Feature Identification

Stefan Frisch; David B. Pisoni

Objective Computational simulations were carried out to evaluate the appropriateness of several psycholinguistic theories of spoken word recognition for children who use cochlear implants. These models also investigate the interrelations of commonly used measures of closed-set and open-set tests of speech perception. Design A software simulation of phoneme recognition performance was developed that uses feature identification scores as input. Two simulations of lexical access were developed. In one, early phoneme decisions are used in a lexical search to find the best matching candidate. In the second, phoneme decisions are made only when lexical access occurs. Simulated phoneme and word identification performance was then applied to behavioral data from the Phonetically Balanced Kindergarten test and Lexical Neighborhood Test of open-set word recognition. Simulations of performance were evaluated for children with prelingual sensorineural hearing loss who use cochlear implants with the MPEAK or SPEAK coding strategies. Results Open-set word recognition performance can be successfully predicted using feature identification scores. In addition, we observed no qualitative differences in performance between children using MPEAK and SPEAK, suggesting that both groups of children process spoken words similarly despite differences in input. Word recognition ability was best predicted in the model in which phoneme decisions were delayed until lexical access. Conclusions Closed-set feature identification and open-set word recognition focus on different, but related, levels of language processing. Additional insight for clinical intervention may be achieved by collecting both types of data. The most successful model of performance is consistent with current psycholinguistic theories of spoken word recognition.
Thus it appears that the cognitive process of spoken word recognition is fundamentally the same for pediatric cochlear implant users and children and adults with normal hearing.
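The contrast between the two simulated lexical-access strategies can be sketched as below. The confusion probabilities and three-word lexicon are invented for illustration; they are not the feature-identification data from the study.

```python
# P(perceived phoneme | intended phoneme) for a toy inventory (assumed values).
CONFUSION = {
    "b": {"b": 0.4, "d": 0.5, "g": 0.1},
    "d": {"b": 0.2, "d": 0.7, "g": 0.1},
    "g": {"b": 0.1, "d": 0.3, "g": 0.6},
}
LEXICON = ["bd", "db", "gd"]

def early_decision(intended):
    """Commit to the most likely phoneme at each position, then look it up."""
    percept = "".join(max(CONFUSION[p], key=CONFUSION[p].get) for p in intended)
    return percept if percept in LEXICON else None

def delayed_decision(intended):
    """Carry phoneme probabilities forward; decide only at lexical access."""
    def score(word):
        p = 1.0
        for target, candidate in zip(intended, word):
            p *= CONFUSION[target].get(candidate, 0.0)
        return p
    return max(LEXICON, key=score)

print(early_decision("bd"))    # None: early commitment yields the non-word "dd"
print(delayed_decision("bd"))  # bd: delaying the decision recovers the word
```

Delaying phoneme decisions lets ambiguous segmental evidence be resolved by the lexicon itself, which is the behavior the better-fitting model exhibits.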


Journal of the Acoustical Society of America | 2006

Production and perception of place of articulation errors

Adrienne M. Stearns; Stefan Frisch

Using ultrasound to examine speech production is gaining popularity because of its portability and noninvasiveness. This study examines ultrasound recordings of speech errors. In experiment 1, ultrasound images of participants’ tongues were recorded while they read tongue twisters. Onset stop closures were measured using the angle of the tongue blade and elevation of the tongue dorsum. Measurements of tongue twisters were compared to baseline production measures to examine the ways in which erroneous productions differ from normal productions. It was found that an error could create normal productions of the other category (categorical errors) or abnormal productions that fell outside the normal categories (gradient errors). Consonant productions extracted from experiment 1 were presented auditory‐only to naive listeners in experiment 2 for identification of the onset consonant. Overwhelmingly, the participants heard normal productions and gradient error productions as the intended sound. Categorical erro...
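The categorical-versus-gradient distinction drawn above can be sketched as a comparison against baseline production ranges. The /t/ and /k/ angle distributions below are invented for illustration, not the study's ultrasound measurements.

```python
def classify_production(angle, baselines, intended, n_sd=2.0):
    """Compare a production's tongue measurement to baseline category ranges."""
    def within(category):
        mean, sd = baselines[category]
        return abs(angle - mean) <= n_sd * sd
    if within(intended):
        return "normal"
    if any(within(c) for c in baselines if c != intended):
        return "categorical error"   # a normal token of the other category
    return "gradient error"          # outside every normal category

baselines = {"t": (30.0, 3.0), "k": (60.0, 3.0)}  # mean blade angle (deg), SD
print(classify_production(31.0, baselines, "t"))  # normal
print(classify_production(59.0, baselines, "t"))  # categorical error
print(classify_production(45.0, baselines, "t"))  # gradient error
```

On this scheme, gradient errors are articulations that fall between the normal categories, which is why listeners in experiment 2 tended to hear them as the intended sound.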


Journal of the Acoustical Society of America | 2006

Ultrasound study of velar‐vowel coarticulation

Sylvie M. Wodzinski; Stefan Frisch

In velar fronting, the closure location for a velar consonant is moved forward along the palate due to vowel context. This study is a replication and extension of a previous study on velar fronting [Wodzinski and Frisch, J. Acoust. Soc. Am. 114, 2395 (2003)]. In this study, ten participants produced sentences containing monosyllabic words with /k/ onsets and nine different American English vowels. Ultrasound was used to make measurements of lingual‐palatal constriction location at the midpoint of stop closure. Participants were recorded using a head‐stabilizing apparatus and an acoustically transparent standoff was used between the ultrasound probe and the jaw. Velar closure location was quantified by the angle of elevation from the horizontal axis of the ultrasound probe to the center of the velar closure. The articulatory frontness of the following vowel was quantified using the frequency of F2 at the vowel midpoint. A strong negative correlation between velar closure angle and the following vowel F2 wa...
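The reported relationship can be illustrated with a simple correlation of the two measures; the numbers below are fabricated example values showing the direction of the effect, not the study's data.

```python
import numpy as np

closure_angle = np.array([72.0, 76.5, 81.0, 86.0, 90.5])       # degrees
vowel_f2 = np.array([2350.0, 2050.0, 1700.0, 1350.0, 1050.0])  # Hz at midpoint

r = np.corrcoef(closure_angle, vowel_f2)[0, 1]
print(r < 0)  # True: fronter closures pattern with higher following-vowel F2
```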


Journal of the Acoustical Society of America | 2008

Semiautomatic measures of velar stop closure location using the EdgeTrak software.

Sabrina J. McCormick; Stefan Frisch; Sylvie M. Wodzinski

Previous work has found a strong correlation between the frontness of closure location for velar stops (measured manually from ultrasound images) and the frontness of the following vowel (measured by F2). In this study, semiautomatic measures of tongue frontness during a velar closure were made. Tongue edge traces were made using the EDGETRAK software (Li et al., 2005, Clinical Linguistics and Phonetics, 545–554). Frontness was then quantified from these traces using three different measures: Bressman’s anteriority index (Bressman et al., 2005, Clinical Linguistics and Phonetics, 573–588), a modified version of the anteriority index created for this study, and a measure of the center of mass of the tongue created for this study. When compared to the original manual measures, the modified anteriority index correlated most highly with both the manual measurement of the consonant closure location and also with F2 of the following vowel. The modified anteriority index uses an angle-based weight in the anteri...
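A center-of-mass frontness measure over a tongue edge trace can be sketched as below. The coordinates and the height weighting are illustrative assumptions, not the EdgeTrak output format or the paper's exact measure.

```python
import numpy as np

def center_of_mass_frontness(xs, ys):
    """Average front-back position weighted by tongue height at each point."""
    w = np.asarray(ys, dtype=float)
    w = w - w.min()                  # height above the lowest trace point
    if w.sum() == 0:
        return float(np.mean(xs))
    return float(np.sum(np.asarray(xs) * w) / w.sum())

x = np.linspace(0, 10, 101)          # front-back axis; larger x = fronter
fronted = np.exp(-(x - 7.0) ** 2)    # raising bump near the front
backed = np.exp(-(x - 3.0) ** 2)     # raising bump near the back

print(center_of_mass_frontness(x, fronted) > center_of_mass_frontness(x, backed))  # True
```

Because every trace point contributes in proportion to its height, the measure is less sensitive to noise at a single point than a hand-picked closure location.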


Journal of the Acoustical Society of America | 2005

Reliability of measurements from ultrasound images

Sarah M. Hardin; Stefan Frisch

As ultrasound imaging gains popularity in phonetic and speech science research, examining the reliability of measures taken from ultrasound images becomes important. This study assesses the reliability of hand measures of ultrasound data collected by graduate student researchers at the University of South Florida ultrasound imaging lab. Speech production data from two different experiments, “Ultrasound analysis of velar fronting” (Wodzinski, 2004) and “Ultrasound study of errors in speech production” (Frisch, 2003), were analyzed by two different researchers to obtain inter‐rater reliability measures. In addition, one data set was measured twice by the same researcher, once when inexperienced with ultrasound analysis and 7 months later after considerable experience had been gained. The study compared researchers’ choice of image to analyze, the measures of the location of articulatory landmarks, and the measures used to quantify articulatory postures. Overall, hand measures of ultrasound images were ...
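One simple way to quantify inter-rater reliability of such hand measures is a Pearson correlation between the two raters' values; the sketch below uses fabricated numbers for illustration, and the study's actual reliability statistic is not specified here.

```python
import math

def pearson_r(a, b):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    sd_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    sd_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return cov / (sd_a * sd_b)

rater1 = [62.0, 58.5, 71.2, 66.0, 60.1]  # e.g., closure angle measures (deg)
rater2 = [61.4, 59.0, 70.5, 67.2, 60.8]
print(pearson_r(rater1, rater2) > 0.95)  # True: the raters agree closely
```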

Collaboration


Dive into Stefan Frisch's collaboration.

Top Co-Authors

David B. Pisoni, Indiana University Bloomington
Dee Adams Nikjeh, University of South Florida
Nathan D. Maxfield, University of South Florida
Richard Wright, University of Washington