Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Kristy Hollingshead is active.

Publication


Featured research published by Kristy Hollingshead.


IEEE Transactions on Audio, Speech, and Language Processing | 2011

Spoken Language Derived Measures for Detecting Mild Cognitive Impairment

Brian Roark; Margaret Mitchell; John Paul Hosom; Kristy Hollingshead; Jeffrey Kaye

Spoken responses produced by subjects during neuropsychological exams can provide diagnostic markers beyond exam performance. In particular, characteristics of the spoken language itself can discriminate between subject groups. We present results on the utility of such markers in discriminating between healthy elderly subjects and subjects with mild cognitive impairment (MCI). Given the audio and transcript of a spoken narrative recall task, a range of markers are automatically derived. These markers include speech features such as pause frequency and duration, and many linguistic complexity measures. We examine measures calculated from manually annotated time alignments (of the transcript with the audio) and syntactic parse trees, as well as the same measures calculated from automatic (forced) time alignments and automatic parses. We show statistically significant differences between clinical subject groups for a number of measures. These differences are largely preserved with automation. We then present classification results, and demonstrate a statistically significant improvement in the area under the ROC curve (AUC) when using automatic spoken language derived features in addition to the neuropsychological test scores. Our results indicate that using multiple, complementary measures can aid in automatic detection of MCI.
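
As a rough illustration of the evaluation described above, the sketch below combines entirely synthetic neuropsychological test scores with additional spoken-language features in a simple logistic-regression classifier and compares cross-validated AUC. The feature names, classifier, and data are placeholders, not the paper's actual setup.

```python
# Sketch only: hypothetical features and synthetic data; the paper's actual
# features, classifier, and evaluation protocol may differ.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 80  # hypothetical number of subjects

# Neuropsychological test scores alone ...
neuro_scores = rng.normal(size=(n, 2))
# ... plus automatically derived spoken-language markers
# (e.g., pause frequency/duration, syntactic complexity measures).
speech_features = rng.normal(size=(n, 5))
labels = rng.integers(0, 2, size=n)  # 0 = healthy elderly, 1 = MCI

def cv_auc(X, y):
    """Cross-validated area under the ROC curve for a simple classifier."""
    clf = LogisticRegression(max_iter=1000)
    probs = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(y, probs)

auc_neuro_only = cv_auc(neuro_scores, labels)
auc_combined = cv_auc(np.hstack([neuro_scores, speech_features]), labels)
print(f"AUC, test scores only: {auc_neuro_only:.3f}")
print(f"AUC, test scores + speech/language features: {auc_combined:.3f}")
```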


Meeting of the Association for Computational Linguistics | 2007

Syntactic complexity measures for detecting Mild Cognitive Impairment

Brian Roark; Margaret Mitchell; Kristy Hollingshead

We consider the diagnostic utility of various syntactic complexity measures when extracted from spoken language samples of healthy and cognitively impaired subjects. We examine measures calculated from manually built parse trees, as well as the same measures calculated from automatic parses. We show statistically significant differences between clinical subject groups for a number of syntactic complexity measures, and these differences are preserved with automatic parsing. Different measures show different patterns for our data set, indicating that using multiple, complementary measures is important for such an application.
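
One example of the kind of measure involved is Yngve depth, a classic syntactic complexity score computable from a bracketed parse. The sketch below (using NLTK and a toy sentence) shows one way to compute it; it illustrates the general idea only and is not the paper's exact measure set.

```python
# Sketch only: one illustrative syntactic complexity measure (Yngve depth),
# computed from a bracketed parse with NLTK; the paper evaluates several
# such measures, from both manual and automatic parses.
from nltk import Tree

def yngve_depths(tree):
    """Yngve depth of each word: sum, along the root-to-leaf path, of the
    number of right siblings of each node (rightmost child scores 0)."""
    depths = []

    def walk(node, score):
        if isinstance(node, str):          # reached a word (leaf)
            depths.append(score)
            return
        n = len(node)
        for i, child in enumerate(node):
            # children numbered right-to-left: rightmost gets 0
            walk(child, score + (n - 1 - i))

    walk(tree, 0)
    return depths

parse = Tree.fromstring(
    "(S (NP (DT the) (NN patient)) (VP (VBD recalled) (NP (DT the) (NN story))))"
)
depths = yngve_depths(parse)
print("per-word Yngve depths:", depths)
print("mean Yngve depth:", sum(depths) / len(depths))
```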


North American Chapter of the Association for Computational Linguistics | 2009

Linear Complexity Context-Free Parsing Pipelines via Chart Constraints

Brian Roark; Kristy Hollingshead

In this paper, we extend methods from Roark and Hollingshead (2008) for reducing the worst-case complexity of a context-free parsing pipeline via hard constraints derived from finite-state tagging pre-processing. Methods from our previous paper achieved quadratic worst-case complexity. We prove here that alternate methods for choosing constraints can achieve either linear or O(N log² N) complexity. These worst-case bounds are achieved without reducing parsing accuracy; in some cases accuracy even improves. The new methods achieve observed performance comparable to the previously published quadratic complexity method. Finally, we demonstrate improved performance by combining complexity bounding methods with additional high precision constraints.
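
The sketch below illustrates the underlying chart-constraint idea in miniature: a finite-state tagger marks which words may begin or end a multi-word constituent, and chart cells violating those marks are closed before parsing. The tags here are hypothetical, and the constraint-selection strategies that yield the linear and O(N log² N) bounds are not reproduced.

```python
# Sketch only: closing chart cells based on hypothetical finite-state tagging
# decisions, as in chart-constrained parsing; the paper's actual
# constraint-selection methods are more involved.
def open_cells(can_start, can_end):
    """Enumerate (i, j) chart cells left open by the tagger.

    can_start[i] -- word i may begin a multi-word constituent
    can_end[j]   -- word j may end a multi-word constituent
    Single-word cells (i == j) are always open.
    """
    n = len(can_start)
    cells = []
    for i in range(n):
        for j in range(i, n):
            if i == j or (can_start[i] and can_end[j]):
                cells.append((i, j))
    return cells

# Hypothetical tagger output for a 6-word sentence:
can_start = [True, False, True, False, False, False]
can_end   = [False, False, False, True, False, True]
cells = open_cells(can_start, can_end)
print(f"{len(cells)} open cells out of {6 * 7 // 2} total")
```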


Empirical Methods in Natural Language Processing | 2005

Comparing and Combining Finite-State and Context-Free Parsers

Kristy Hollingshead; Seeger Fisher; Brian Roark

In this paper, we compare high-accuracy context-free parsers with high-accuracy finite-state (shallow) parsers on several shallow parsing tasks. We show that previously reported comparisons greatly under-estimated the performance of context-free parsers for these tasks. We also demonstrate that context-free parsers can train effectively on relatively little training data, and are more robust to domain shift for shallow parsing tasks than has been previously reported. Finally, we establish that combining the output of context-free and finite-state parsers substantially outperforms the previous best published results on several common tasks. While the efficiency benefit of finite-state models is inarguable, the results presented here show that the corresponding cost in accuracy is higher than previously thought.
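
To compare a context-free parser with a shallow parser on the same task, the full parse must first be projected onto shallow structures such as base-NP chunks. The sketch below shows one simple way to extract base-NP spans from an NLTK constituency tree; it is an illustration only, not the paper's evaluation pipeline.

```python
# Sketch only: extracting base-NP chunks from a full context-free parse so
# that its output can be compared with (or combined with) a finite-state
# shallow parser on the same task; hypothetical example sentence.
from nltk import Tree

def base_nps(tree):
    """Return (start, end) word spans of NPs that contain no nested NP."""
    spans = []

    def walk(node, offset):
        if isinstance(node, str):   # a word: advance the position counter
            return offset + 1
        start = offset
        for child in node:
            offset = walk(child, offset)
        has_np_inside = any(
            d.label() == "NP" for d in node.subtrees() if d is not node
        )
        if node.label() == "NP" and not has_np_inside:
            spans.append((start, offset))
        return offset

    walk(tree, 0)
    return spans

parse = Tree.fromstring(
    "(S (NP (DT the) (NN parser)) (VP (VBZ finds) (NP (JJ base) (NNS chunks))))"
)
print(base_nps(parse))   # [(0, 2), (3, 5)]
```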


Natural Language Engineering | 2008

DialogueView: Annotating dialogues in multiple views with abstraction

Fan Yang; Peter A. Heeman; Kristy Hollingshead; Susan E. Strayer

This paper describes DialogueView, a tool for annotating dialogues with utterance boundaries, speech repairs, speech act tags, and hierarchical discourse blocks. The tool provides three views of a dialogue: WordView, which shows the transcribed words time-aligned with the audio signal; UtteranceView, which shows the dialogue line-by-line as if it were a script for a movie; and BlockView, which shows an outline of the dialogue. The different views provide different abstractions of what is occurring in the dialogue. Abstraction helps users focus on what is important for different annotation tasks. For example, for annotating speech repairs, utterance boundaries, and overlapping and abandoned utterances, the tool provides the exact timing information. For coding speech act tags and hierarchical discourse structure, a broader context is created by hiding such low-level details, which can still be accessed if needed. We find that the different abstractions allow users to annotate dialogues more quickly without sacrificing accuracy. The tool can be configured to meet the requirements of a variety of annotation schemes.
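
A hypothetical data model for this kind of layered annotation might look like the sketch below, where WordView, UtteranceView, and BlockView are simply different projections of the same time-aligned words. DialogueView's actual formats and interface are of course richer; everything here is an assumption for illustration.

```python
# Sketch only: a hypothetical data model for layered dialogue annotation;
# DialogueView's actual file formats and views differ.
from dataclasses import dataclass, field

@dataclass
class Word:
    text: str
    start: float   # seconds into the audio
    end: float

@dataclass
class Utterance:
    speaker: str
    words: list           # list[Word], shown time-aligned in WordView
    speech_act: str = ""  # annotated in UtteranceView

@dataclass
class Block:
    label: str                     # hierarchical discourse block (BlockView)
    utterances: list = field(default_factory=list)

def utterance_view(block):
    """Script-like rendering: hides word timing, keeps speaker turns."""
    return [f"{u.speaker}: {' '.join(w.text for w in u.words)}"
            for u in block.utterances]

u = Utterance("A", [Word("take", 0.0, 0.3), Word("the", 0.3, 0.4),
                    Word("engine", 0.4, 0.9)], speech_act="instruct")
print(utterance_view(Block("route planning", [u])))
```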


Cross-Language Evaluation Forum | 2009

Simulating morphological analyzers with stochastic taggers for confidence estimation

Christian Monson; Kristy Hollingshead; Brian Roark

We propose a method for providing stochastic confidence estimates for rule-based and black-box natural language (NL) processing systems. Our method does not require labeled training data: we simply train stochastic models on the output of the original NL systems. Numeric confidence estimates enable both minimum Bayes risk-style optimization as well as principled system combination for these knowledge-based and black-box systems. In our specific experiments, we enrich ParaMor, a rule-based system for unsupervised morphology induction, with probabilistic segmentation confidences by training a statistical natural language tagger to simulate ParaMor's morphological segmentations. By adjusting the numeric threshold above which the simulator proposes morpheme boundaries, we improve F1 of morpheme identification on a Hungarian corpus by 5.9% absolute. With numeric confidences in hand, we also combine ParaMor's segmentation decisions with those of a second (black-box) unsupervised morphology induction system, Morfessor. Our joint ParaMor-Morfessor system enhances F1 performance by a further 3.4% absolute, ultimately moving F1 from 41.4% to 50.7%.
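
The sketch below illustrates the thresholding idea with made-up boundary probabilities: a confidence is assigned to each candidate morpheme boundary, boundaries above a threshold are proposed, and two systems' confidences can be merged into a joint segmenter. Both the numbers and the simple averaging rule are placeholders, not the paper's combination method.

```python
# Sketch only: thresholding per-position boundary probabilities (as a trained
# tagger might assign when simulating a rule-based segmenter) and combining
# two systems' confidences; numbers and the combination rule are hypothetical.
def segment(word, boundary_probs, threshold):
    """Insert a morpheme boundary after position i when its probability
    clears the threshold."""
    pieces, start = [], 0
    for i, p in enumerate(boundary_probs, start=1):
        if p >= threshold:
            pieces.append(word[start:i])
            start = i
    pieces.append(word[start:])
    return pieces

word = "unhelpful"
# Probability of a boundary after each character position 1..len(word)-1
paramor_like   = [0.1, 0.8, 0.2, 0.1, 0.1, 0.9, 0.3, 0.2]
morfessor_like = [0.2, 0.6, 0.1, 0.1, 0.2, 0.7, 0.2, 0.1]

print(segment(word, paramor_like, threshold=0.5))   # ['un', 'help', 'ful']

# A simple joint system: average the two confidence streams, then threshold.
joint = [(a + b) / 2 for a, b in zip(paramor_like, morfessor_like)]
print(segment(word, joint, threshold=0.5))
```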


North American Chapter of the Association for Computational Linguistics | 2016

Crazy Mad Nutters: The Language of Mental Health.

Jena D. Hwang; Kristy Hollingshead

Many people with mental illnesses face challenges posed by stigma perpetuated by fear and misconception in society at large. This societal stigma against mental health conditions is present in everyday language. In this study we take a set of 14 words with the potential to stigmatize mental health and sample Twitter as an approximation of contemporary discourse. Annotation reveals that these words are used with different senses, from expressive to stigmatizing to clinical. We use these word-sense annotations to extract a set of mental-health-aware Twitter users, and compare their language use to that of an age- and gender-matched comparison set of users, discovering a difference in the frequency of stigmatizing senses as well as a change in the target of pejorative senses. Such analysis may provide a first step towards a tool with the potential to help everyday people increase awareness of their own stigmatizing language, and to measure the effectiveness of anti-stigma campaigns in changing our discourse.
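
A minimal version of the group comparison might look like the sketch below, which contrasts the rate of a stigmatizing sense between two user groups using made-up counts and a chi-square test; the paper's annotation scheme and analysis are considerably richer.

```python
# Sketch only: comparing how often a term is used in a stigmatizing sense by
# two user groups, with made-up counts; the paper's annotation scheme and
# statistics are richer than this.
from scipy.stats import chi2_contingency

# rows: [stigmatizing sense, other senses (expressive/clinical/...)]
# cols: [mental-health-aware users, matched comparison users]
counts = [[30, 70],
          [170, 130]]
chi2, p, dof, expected = chi2_contingency(counts)

rate_mh = counts[0][0] / (counts[0][0] + counts[1][0])
rate_cmp = counts[0][1] / (counts[0][1] + counts[1][1])
print(f"stigmatizing-sense rate, MH-aware users:   {rate_mh:.2f}")
print(f"stigmatizing-sense rate, comparison users: {rate_cmp:.2f}")
print(f"chi-square p-value: {p:.4f}")
```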


International Conference on Computational Linguistics | 2008

Classifying Chart Cells for Quadratic Complexity Context-Free Inference

Brian Roark; Kristy Hollingshead


Meeting of the Association for Computational Linguistics | 2007

Pipeline Iteration

Kristy Hollingshead; Brian Roark


Conference of the International Speech Communication Association | 2004

Towards understanding mixed-initiative in task-oriented dialogues

Fan Yang; Peter A. Heeman; Kristy Hollingshead

Collaboration


Dive into Kristy Hollingshead's collaborations.

Top Co-Authors

Ting Qian
University of Rochester

Christian Monson
Carnegie Mellon University

Erica Cooper
Massachusetts Institute of Technology

Jena D. Hwang
University of Colorado Boulder