Publications


Featured research published by Will Styler.


Journal of the American Medical Informatics Association | 2013

Towards comprehensive syntactic and semantic annotations of the clinical narrative

Daniel Albright; Arrick Lanfranchi; Anwen Fredriksen; Will Styler; Colin Warner; Jena D. Hwang; Jinho D. Choi; Dmitriy Dligach; Rodney D. Nielsen; James H. Martin; Wayne H. Ward; Martha Palmer; Guergana Savova

Objective: To create annotated clinical narratives with layers of syntactic and semantic labels to facilitate advances in clinical natural language processing (NLP), and to develop NLP algorithms and open source components.

Methods: Manual annotation of a clinical narrative corpus of 127,606 tokens following the Treebank schema for syntactic information, the PropBank schema for predicate-argument structures, and the Unified Medical Language System (UMLS) schema for semantic information. NLP components were developed.

Results: The final corpus consists of 13,091 sentences containing 1772 distinct predicate lemmas. Of the 766 newly created PropBank frames, 74 are verbs. There are 28,539 named entity (NE) annotations spread over 15 UMLS semantic groups, one UMLS semantic type, and the Person semantic category. The most frequent annotations belong to the UMLS semantic groups of Procedures (15.71%), Disorders (14.74%), Concepts and Ideas (15.10%), Anatomy (12.80%), Chemicals and Drugs (7.49%), and the UMLS semantic type of Sign or Symptom (12.46%). Inter-annotator agreement results: Treebank (0.926), PropBank (0.891–0.931), NE (0.697–0.750). A part-of-speech tagger, constituency parser, dependency parser, and semantic role labeler were built from the corpus and released open source. A significant limitation uncovered by this project is the need for the NLP community to develop a widely agreed-upon schema for the annotation of clinical concepts and their relations.

Conclusions: This project takes a foundational step towards bringing the field of clinical NLP up to par with NLP in the general domain. The corpus creation and NLP components provide a resource for research and application development that would previously have been impossible.


Workshop on EVENTS: Definition, Detection, Coreference, and Representation | 2014

Challenges of Adding Causation to Richer Event Descriptions

Rei Ikuta; Will Styler; Mariah Hamang; Tim O'Gorman; Martha Palmer

The goal of this study is to create guidelines for annotating cause-effect relations as part of the Richer Event Description (RED) schema. We present the challenges faced in using the definition of causation in terms of counterfactual dependence and propose new guidelines for cause-effect annotation using an alternative definition which treats causation as an intrinsic relation between events. To support the use of such an intrinsic definition, we examine the theoretical problems that the counterfactual definition faces, show how the intrinsic definition solves those problems, and explain how the intrinsic definition adheres to psychological reality, at least for our annotation purposes, better than the counterfactual definition. We then evaluate the new guidelines by presenting results obtained from pilot annotations of ten documents, showing that an inter-annotator agreement (F1-score) of 0.5753 was achieved. The results provide a benchmark for future studies concerning cause-effect annotation in the RED schema.
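
For reference, span-level inter-annotator F1 of the kind reported above can be computed by scoring one annotator's spans against the other's. A minimal sketch, assuming exact-match (start, end, label) annotations; the RED evaluation's actual matching criteria may differ:

```python
# Minimal sketch: inter-annotator F1 over span annotations.
# Assumes exact-match (start, end, label) tuples; real evaluations
# may use partial-span or relaxed label matching.

def iaa_f1(annotator_a, annotator_b):
    """F1 between two sets of (start, end, label) annotations."""
    a, b = set(annotator_a), set(annotator_b)
    if not a or not b:
        return 0.0
    matched = len(a & b)
    precision = matched / len(b)   # treating annotator_a as reference
    recall = matched / len(a)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(iaa_f1(
    [(0, 5, "CAUSES"), (10, 14, "PRECONDITION")],
    [(0, 5, "CAUSES"), (10, 14, "CAUSES")],
))  # 0.5
```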


Handbook of Linguistic Annotation | 2017

Annotating the Clinical Text – MiPACQ, ShARe, SHARPn and THYME Corpora

Guergana Savova; Sameer Pradhan; Martha Palmer; Will Styler; Wendy W. Chapman; Noémie Elhadad

In this chapter, we present several resources for linguistic and domain annotations over de-identified clinical narrative text. Clinical narrative is the free text within electronic medical records that is generated by physicians at the point of care to describe the patient-provider encounter, tissue, or image. These gold-standard annotated resources enable the development of state-of-the-art computational methods for processing health-related text with a view towards downstream applications.


Journal of the Acoustical Society of America | 2011

Use of waveform mixing to synthesize a continuum of vowel nasality in natural speech

Will Styler; Rebecca Scarborough; Georgia Zellou

Studies of the perception of vowel nasality often use synthesized stimuli to produce controlled gradience in nasality. To investigate the perception of nasality in natural speech, a method was developed wherein vowels differing naturally in nasality (e.g., from CVC and NVN words) are mixed to yield tokens with various degrees of nasality. First, monosyllables (e.g., CVC, NVC, CVN, NVN) matched for vowel quality and consonant place of articulation were recorded. The vowels from two tokens were excised, matched for amplitude, duration and pitch contour, and then overlaid sample-by-sample according to a specified ratio. The resulting vowel was spliced back into the desired consonantal context. Iterating this process over a series of ratios produced natural-sounding tokens along a continuum of vowel nasality. Acoustic measurements of the nasality of output tokens [using A1-P0 (Chen, 1997)] confirmed a relation between the ratio used and the nasality of the output. Stimuli created in this manner were used in a...
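
The overlay step lends itself to a compact implementation. A minimal numpy sketch of the sample-by-sample mixing, assuming the two vowels have already been excised and matched for amplitude, duration, and pitch (the arrays below are synthetic stand-ins, not the study's stimuli):

```python
import numpy as np

# Synthetic stand-ins for the excised, matched vowels; in practice
# these would be equal-length audio samples from CVC and NVN tokens.
rng = np.random.default_rng(0)
oral_vowel = rng.standard_normal(16000)
nasal_vowel = rng.standard_normal(16000)

def mix_vowels(oral, nasal, ratio):
    """Sample-by-sample overlay: ratio=0 gives the oral vowel,
    ratio=1 the nasal one, intermediate values a continuum step."""
    oral, nasal = np.asarray(oral, float), np.asarray(nasal, float)
    assert oral.shape == nasal.shape, "vowels must be duration-matched"
    return (1.0 - ratio) * oral + ratio * nasal

# Iterating over a series of ratios yields the nasality continuum.
continuum = [mix_vowels(oral_vowel, nasal_vowel, r)
             for r in np.linspace(0.0, 1.0, 7)]
print(len(continuum), continuum[3].shape)
```

Each mixed vowel would then be spliced back into the desired consonantal context, as the abstract describes.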


Journal of Phonetics | 2018

Plosive voicing in Afrikaans: Differential cue weighting and tonogenesis

Andries W. Coetzee; Patrice Speeter Beddor; Kerby Shedden; Will Styler; Daan Wissing

This study documents the relation between f0 and prevoicing in the production and perception of plosive voicing in Afrikaans. Acoustic data show that Afrikaans speakers differed in how likely they were to produce prevoicing to mark phonologically voiced plosives, but that all speakers produced large and systematic f0 differences after phonologically voiced and voiceless plosives to convey the contrast between the voicing categories. This pattern is mirrored in these same participants’ perception: although some listeners relied more than others on prevoicing as a perceptual cue, all listeners used f0 (especially in the absence of prevoicing) to perceptually differentiate historically voiced and voiceless plosives. This variation in the speech community is shown to be generationally structured such that older speakers were more likely than younger speakers to produce prevoicing, and to rely on prevoicing perceptually. These patterns are consistent with generationally determined differential cue weighting in the speech community and with an ongoing sound change in which the original consonantal voicing contrast is being replaced by a tonal contrast on the following vowel.
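
Differential cue weighting of this sort is commonly quantified by regressing listeners' category responses on the competing cues, with each cue's coefficient indexing its perceptual weight. A toy sketch using scikit-learn on synthetic data; this illustrates the general approach, not the study's actual analysis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
# Synthetic per-token cues: prevoicing duration (ms) and onset f0 (Hz).
prevoicing = rng.uniform(0, 80, n)
onset_f0 = rng.uniform(100, 160, n)
# Simulated "voiced" responses driven mostly by f0, as for a listener
# at the tonogenetic end of the change.
voiced = (0.01 * prevoicing - 0.08 * (onset_f0 - 130)
          + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(
    np.column_stack([prevoicing, onset_f0]), voiced)
# Coefficient magnitudes index each cue's weight (a real analysis
# would standardize the predictors before comparing them).
print(dict(zip(["prevoicing", "onset_f0"], model.coef_[0])))
```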


Journal of the Acoustical Society of America | 2016

Normalizing nasality? Across-speaker variation in acoustical nasality measures

Will Styler

Although vowel nasality has traditionally been investigated using articulatory methods, acoustical methods have been increasingly used in the linguistic literature. However, while these acoustical measures have been shown to be correlated with nasality, their variability across speakers is less well investigated. To this end, we examined cross-speaker variation in coarticulatory vowel nasality in a large collection of multi-speaker English data, comparing measurements in CVC and NVN words at two points per vowel. Two known correlates were analyzed: A1-P0, where A1 is the amplitude of the harmonic under F1, and P0 is the amplitude of a low-frequency nasal peak (Chen 1997), and the bandwidth of F1 (Hawkins and Stevens 1985). Speakers varied both in terms of baseline measurements (e.g., the mean measurements in oral versus nasal vowels), and in the oral-nasal range (the measured difference between oral and nasal). This suggests that although analyzing centered within-speaker differences is unproblematic, the...
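
The within-speaker centering mentioned at the end amounts to subtracting each speaker's own mean before comparing oral and nasal tokens. A small pandas sketch with a hypothetical measurement table (column names are illustrative):

```python
import pandas as pd

# Hypothetical measurement table: one row per vowel token.
df = pd.DataFrame({
    "speaker": ["s1", "s1", "s2", "s2"],
    "context": ["oral", "nasal", "oral", "nasal"],
    "a1_p0":   [12.0, 7.5, 4.0, 1.0],   # dB
})

# Center A1-P0 within speaker so cross-speaker baseline
# differences drop out of the oral-nasal comparison.
df["a1_p0_centered"] = (
    df["a1_p0"] - df.groupby("speaker")["a1_p0"].transform("mean")
)
print(df)
```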


Journal of the Acoustical Society of America | 2018

Automatic tongue contour extraction in ultrasound images with convolutional neural networks

Jian Zhu; Will Styler; Ian C. Calloway

Ultrasound imaging of the tongue can provide detailed articulatory information addressing a variety of phonetic questions. However, using this method often requires the time-consuming process of manually labeling tongue contours in noisy images. This study presents a method for the automatic identification and extraction of tongue contours using convolutional neural networks, a machine learning algorithm that has been shown to be highly successful in many biomedical image segmentation tasks. We have adopted the U-net architecture (Ronneberger, Fischer, & Brox 2015, U-Net: Convolutional Networks for Biomedical Image Segmentation, DOI:10.1007/978-3-319-24574-4_28), which learns from human-annotated splines using multiple, repeated convolution and max-pooling layers in the network for feature extraction, as well as deconvolution layers for generating spatially precise predictions of the tongue contours. Trained using a preliminary dataset of 8881 human-labeled tongue images from three speakers, our model generates discrete tongue splines comparable to those identified by human annotators (Dice Similarity Coefficient = 0.71). Although work is ongoing, this neural network based method shows considerable promise for the post-processing of ultrasound images in phonetic research. [Work supported by NSF grant BCS-1348150 to P.S. Beddor and A.W. Coetzee.]
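
The Dice Similarity Coefficient reported here is twice the pixel overlap between predicted and human-labeled masks, divided by their combined size. A minimal numpy sketch (illustrative masks, not ultrasound data):

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice Similarity Coefficient between two binary masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    overlap = np.logical_and(pred, target).sum()
    return 2.0 * overlap / (pred.sum() + target.sum() + eps)

a = np.zeros((64, 64), bool); a[20:30, 10:50] = True  # predicted contour mask
b = np.zeros((64, 64), bool); b[22:32, 10:50] = True  # human-labeled mask
print(round(dice(a, b), 2))  # 0.8
```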


Journal of the Acoustical Society of America | 2017

Pause postures in American English

Jelena Krivokapic; Will Styler; Ben Parrell; Jiseung Kim

Recent work examining articulation during pauses has found that articulatory patterns distinguish grammatical from ungrammatical pauses (Ramanarayanan et al. 2009). For Greek, Katsika et al. (2014) have identified pause postures (specific configurations of the vocal tract at prosodic boundaries) which are triggered by π-gestures with high activation levels and consequently occur at strong prosodic boundaries. This study investigates pause postures in American English, specifically, whether they occur, and how they are coordinated with other events at prosodic boundaries. In an electromagnetic articulometry (EMA) study, seven speakers produced 7-8 repetitions of 42 types of sentences varying in linguistic structure (stress, boundary, phrasing, and sentence type). Results from two speakers analyzed to date indicate that pause postures exist but that their presence might be speaker dependent. Analyses of gesture lags indicate a stable relationship between the boundary tone and pause posture, that the boundar...


Journal of the Acoustical Society of America | 2017

Using machine learning to identify articulatory gestures in time course data

Will Styler; Jelena Krivokapic; Ben Parrell; Jiseung Kim

One difficulty in working with articulatory data is objectively identifying phonological gestures, that is, distinguishing targeted gestural movement from general variability. Although human annotators are generally used, an automated approach to identifying meaningful patterns offers advantages in speed, consistency, and objective characterization of gestures (cf. Shaw and Kawahara 2017). This study examines Electromagnetic Articulography (EMA) data from seven American English speakers, aiming to identify and characterize pause postures (specific vocal tract configurations at prosodic boundaries; Katsika et al. 2014). Supervised machine learning using kernelized Support Vector Machine Classifiers (SVMs) took as training data 852 trajectories from three speakers analyzed to date, containing 104 pause postures identified by a human annotator, and classified the between-words lip aperture (LA) trajectory to identify tokens containing the pause posture, while also providing token-by-token gesture probability...
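
A minimal sketch of this classification setup, using scikit-learn's kernelized SVC with probability estimates on synthetic stand-ins for the LA trajectories; the feature handling here is an assumption, not the study's pipeline:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Synthetic stand-ins for between-words lip aperture (LA) trajectories,
# resampled to a fixed number of points per token.
n_tokens, n_points = 200, 50
X = rng.standard_normal((n_tokens, n_points))
y = rng.integers(0, 2, n_tokens)   # 1 = token contains a pause posture

# RBF-kernel SVM; probability=True yields token-by-token
# gesture probabilities via Platt scaling.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X, y)
print(clf.predict_proba(X[:3])[:, 1])   # P(pause posture) per token
```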


Journal of the Acoustical Society of America | 2014

Surveying the nasal peak: A1 and P0 in nasal and nasalized vowels

Will Styler; Rebecca Scarborough

Nasality can be measured in the acoustical signal using A1-P0, where A1 is the amplitude of the harmonic under F1, and P0 is the amplitude of a low-frequency nasal peak (~250 Hz) (Chen 1997). In principle, as nasality increases, P0 goes up and A1 is damped, yielding lower A1-P0. However, the details of the relationship between A1 and P0 in natural speech have not been well described. We examined 4778 vowels in elicited French and English words, measuring A1, P0, and the surrounding harmonic amplitudes, and comparing oral and nasal tokens (phonemic nasal vowels in French, and coarticulatorily nasalized vowels in English). Linear mixed-effects regressions confirmed that A1-P0 is predictive of nasality: 4.16 dB lower in English nasal contexts relative to oral and 5.73 dB lower in French (both p<0.001). In English, as expected, P0 increased 1.42 dB and A1 decreased 3.93 dB (p<0.001). In French, however, both A1 and P0 lowered with nasality (5.73 and 0.93 dB, respectively, p<0.001). Even so, in both languages, P0 became more prominent relative to adjacent harmonics in nasal vowels. These data reveal cross-linguistic differences in the acoustic realization of nasal vowels and suggest P0 prominence as a potential perceptual cue to be investigated.
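
A1-P0 can be estimated from a spectral slice by reading off harmonic amplitudes near F1 and near the ~250 Hz nasal peak. A rough numpy sketch, assuming f0 and F1 are already known and simplifying the peak-picking relative to published procedures:

```python
import numpy as np

def a1_minus_p0(signal, sr, f0, f1, nasal_peak=250.0):
    """Rough A1-P0 estimate (dB) from one spectral slice.

    A1: amplitude of the harmonic nearest F1.
    P0: amplitude of the harmonic nearest the ~250 Hz nasal peak.
    """
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)

    def harmonic_db(target):
        h = max(1, round(target / f0))          # nearest harmonic number
        bin_ = np.argmin(np.abs(freqs - h * f0))
        return 20 * np.log10(spec[bin_] + 1e-12)

    return harmonic_db(f1) - harmonic_db(nasal_peak)

# Toy example: a synthetic harmonic complex with 1/k amplitude rolloff.
sr, f0 = 16000, 125.0
t = np.arange(0, 0.04, 1 / sr)
sig = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in range(1, 20))
print(round(a1_minus_p0(sig, sr, f0=f0, f1=500.0), 1))
```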

Collaboration


Top co-authors of Will Styler:

Martha Palmer (University of Colorado Boulder)
Guergana Savova (Boston Children's Hospital)
Rebecca Scarborough (University of Colorado Boulder)
Georgia Zellou (University of California)
James H. Martin (University of Colorado Boulder)
Jiseung Kim (University of Michigan)
Steven Bethard (University of Alabama at Birmingham)
Wayne H. Ward (University of Colorado Boulder)