
Publication


Featured research published by Christopher A. Pennington.


Conference on Computers and Accessibility | 2000

An intelligent tutoring system for deaf learners of written English

Lisa N. Michaud; Kathleen F. McCoy; Christopher A. Pennington

This paper describes progress toward a prototype implementation of a tool which aims to improve literacy in deaf high school and college students who are native (or near native) signers of American Sign Language (ASL). We envision a system that will take a piece of text written by a deaf student, analyze that text for grammatical errors, and engage that student in a tutorial dialogue, enabling the student to generate appropriate corrections to the text. A strong focus of this work is to develop a system which adapts this process to the knowledge level and learning strengths of the user and which has the flexibility to engage in multi-modal, multi-lingual tutorial instruction utilizing both English and the native language of the user.


ACM Transactions on Accessible Computing | 2009

User Interaction with Word Prediction: The Effects of Prediction Quality

Keith Trnka; John McCaw; Debra Yarrington; Kathleen F. McCoy; Christopher A. Pennington

Word prediction systems can reduce the number of keystrokes required to form a message in a letter-based AAC system. It has been questioned, however, whether such savings translate into an enhanced communication rate due to the additional overhead (e.g., shifting of focus and repeated scanning of a prediction list) required in using such a system. Our hypothesis is that word prediction has high potential for enhancing AAC communication rate, but the amount is dependent in a complex way on the accuracy of the predictions. Due to significant user interface variations in AAC systems and the potential bias of prior word prediction experience on existing devices, this hypothesis is difficult to verify. We present a study of two different word prediction methods compared against letter-by-letter entry at simulated AAC communication rates. We find that word prediction systems can in fact speed communication rate (an advanced system gave a 58.6% improvement), and that a more accurate word prediction system can raise the communication rate higher than is explained by the additional accuracy of the system alone due to better utilization (93.6% utilization for advanced versus 78.2% for basic).
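The two metrics discussed in this abstract, keystroke savings and utilization, can be sketched as follows. This is an illustrative toy calculation with made-up numbers, not the study's data or its exact metric definitions:

```python
def keystroke_savings(chars_in_message: int, keystrokes_used: int) -> float:
    """Fraction of keystrokes saved versus letter-by-letter entry,
    where each character would otherwise cost one keystroke."""
    return 1.0 - keystrokes_used / chars_in_message

def utilization(achieved_savings: float, potential_savings: float) -> float:
    """Share of the prediction system's theoretically available
    savings that the user actually realized."""
    return achieved_savings / potential_savings

# Hypothetical session: a 120-character message entered in 66 keystrokes,
# where taking every offered prediction would have needed only 60.
achieved = keystroke_savings(120, 66)   # 0.45
potential = keystroke_savings(120, 60)  # 0.50
print(round(utilization(achieved, potential), 2))  # 0.9
```

The point of the utilization metric is visible here: a user can realize most, but rarely all, of what the predictor offers, so better predictions can pay off more than their raw accuracy gain suggests.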


Intelligent User Interfaces | 2006

Topic modeling in fringe word prediction for AAC

Keith Trnka; Debra Yarrington; Kathleen F. McCoy; Christopher A. Pennington

Word prediction can be used for enhancing the communication ability of persons with speech and language impairments. In this work, we explore two methods of adapting a language model to the topic of conversation, and apply these methods to the prediction of fringe words.
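One simple way to adapt a language model to the topic of conversation (a minimal sketch of the general idea, not the paper's actual methods) is to interpolate a general word distribution with one estimated from in-topic text, then rank prediction candidates by the mixed probability:

```python
def topic_adapted_probs(general, topic, lam=0.5):
    """Linearly interpolate a general unigram distribution with a
    topic-specific one; lam is the weight on the topic model."""
    vocab = set(general) | set(topic)
    return {w: lam * topic.get(w, 0.0) + (1 - lam) * general.get(w, 0.0)
            for w in vocab}

def predict(prefix, probs, n=3):
    """Top-n completions for a typed prefix, ranked by probability."""
    matches = [w for w in probs if w.startswith(prefix)]
    return sorted(matches, key=probs.get, reverse=True)[:n]

# Made-up distributions: a general model and a "medical" topic model.
general = {"the": 0.5, "dog": 0.2, "doctor": 0.3}
topic = {"doctor": 0.6, "dose": 0.4}
mixed = topic_adapted_probs(general, topic, lam=0.7)
print(predict("do", mixed))  # ['doctor', 'dose', 'dog']
```

With a high topic weight, in-topic fringe words like "dose" surface in the prediction list even though the general model barely supports them.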


Natural Language Engineering | 1998

Compansion: From research prototype to practical integration

Kathleen F. McCoy; Christopher A. Pennington

Augmentative and Alternative Communication (AAC) is the field of study concerned with providing devices and techniques to augment the communicative ability of a person whose disability makes it difficult to speak or otherwise communicate in an understandable fashion. For several years, we have been applying natural language processing techniques to the field of AAC to develop intelligent communication aids that attempt to provide linguistically correct output while increasing communication rate. Previous effort has resulted in a research prototype called Compansion that expands telegraphic input. In this paper we describe that research prototype and introduce the Intelligent Parser Generator (IPG). IPG is intended to be a practical embodiment of the research prototype aimed at a group of users who have cognitive impairments that affect their linguistic ability. We describe both the theoretical underpinnings of Compansion and the practical considerations in developing a usable system for this population of users.


Lecture Notes in Computer Science | 1998

An Augmentative Communication Interface Based on Conversational Schemata

Peter Vanderheyden; Christopher A. Pennington

Many people with severe speech and motor impairments make use of augmentative and alternative communication (AAC) systems. These systems can employ a variety of techniques to organize stored words, phrases, and sentences, and to make them available to the user. It is argued in this chapter that an AAC system should make better use of the regularities in an individual's conversational experiences and the expectations that the individual normally brings into a conversational context.


Intelligent User Interfaces | 1997

Some interface issues in developing intelligent communication aids for people with disabilities

Kathleen F. McCoy; Patrick W. Demasco; Christopher A. Pennington; Arlene Luberoff Badman

Augmentative and Alternative Communication (AAC) is the field of study concerned with providing devices and techniques to augment the communicative ability of a person whose disability makes it difficult to speak in an understandable fashion. For several years, we have been applying natural language processing techniques to the field of AAC in order to develop intelligent communication aids that attempt to provide linguistically “correct” output while speeding communication rate. In this paper we describe some of the interface issues that must be considered when developing such a device. We focus on a project aimed at a group of users who have cognitive impairments that affect their linguistic ability. A prototype system is under development which will hopefully not only prove to be an effective communication aid, but may provide some language intervention benefits for this population.


Conference on Computers and Accessibility | 1994

A communication tool for people with disabilities: lexical semantics for filling in the pieces

Kathleen F. McCoy; Patrick W. Demasco; Mark Alan Jones; Christopher A. Pennington; Peter Vanderheyden; Wendy M. Zickus

The goal of this project is to provide a communication tool for people with severe speech and motor impairments (SSMI). The tool will facilitate the formation of syntactically correct sentences in the fewest number of keystrokes. Consider the situation where an individual is using a word-based augmentative communication system—each word is (basically) one keystroke and morphological endings etc. require additional keystrokes. Our prototype system is intended to reduce the burden of the user by allowing him/her to select only the uninflected content words of the desired sentence. The system is responsible for adding proper function words (e.g., articles, prepositions) and necessary morphological endings. In order to accomplish this task, the system attempts to generate a semantic representation of an utterance under circumstances where syntactic (parse tree) information is not available because the input to the system is a compressed telegraphic message rather than a standard English sentence. The representation is used by the system to generate a full English sentence from the compressed input. The focus of the paper is on the knowledge and processing necessary to produce a semantic representation under these telegraphic constraints.
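The expansion step described above can be caricatured in a few lines. This is a deliberately tiny toy that assumes subject-verb-object input order; the actual system instead builds a semantic representation to infer the words' roles:

```python
# Toy expansion of telegraphic input: given uninflected content words
# assumed to be in subject-verb-object order, add a determiner and a
# present-tense verb inflection to form a full sentence.
def expand_telegraphic(content_words):
    subj, verb, obj = content_words
    return f"{subj.capitalize()} {verb}s the {obj}."

print(expand_telegraphic(["john", "want", "apple"]))  # John wants the apple.
```

The hard part, and the focus of the paper, is exactly what this toy assumes away: deciding which word fills which semantic role, and which function words and inflections are appropriate, without a parse tree.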


Conference on Computers and Accessibility | 2005

A system for creating personalized synthetic voices

Debra Yarrington; Christopher A. Pennington; John Gray; H. Timothy Bunnell

We will be demonstrating the ModelTalker Voice Creation System, which allows users to create a personalized synthetic voice with an unrestricted vocabulary. The system includes a tool for recording a speech inventory and a program that converts the recorded inventory into a synthetic voice for the ModelTalker TTS engine. The entire system can be downloaded for use on a home PC or in a clinical setting, and the resulting synthetic voices can be used with any SAPI-compliant system. We will demonstrate the recording process, and convert the recordings to a mini-database with a limited vocabulary for participants to hear.


Lecture Notes in Computer Science | 1998

Providing Intelligent Language Feedback for Augmentative Communication Users

Christopher A. Pennington; Kathleen F. McCoy

People with severe speech and motor impairments (SSMI) can often use augmentative communication devices to help them communicate. While these devices can provide speech synthesis or text output, the rate of communication is typically very slow. Consequently, augmentative communication users often develop telegraphic patterns of language usage. A natural language processing technique termed compansion (compression-expansion) has been developed that expands uninflected content words (i.e., compressed or telegraphic utterances) into syntactically and semantically well-formed sentences.


Meeting of the Association for Computational Linguistics | 2008

ModelTalker Voice Recorder: An Interface System for Recording a Corpus of Speech for Synthesis

Debra Yarrington; John Gray; Christopher A. Pennington; H. Timothy Bunnell; Allegra Cornaglia; Jason Lilley; Kyoko Nagao; James B. Polikoff

We will demonstrate the ModelTalker Voice Recorder (MT Voice Recorder), an interface system that lets individuals record and bank a speech database for the creation of a synthetic voice. The system guides users through an automatic calibration process that sets pitch, amplitude, and silence. The system then prompts users with both visual (text-based) and auditory prompts. Each recording is screened for pitch, amplitude, and pronunciation, and users are given immediate feedback on the acceptability of each recording. Users can then rerecord an unacceptable utterance. Recordings are automatically labeled and saved, and a speech database is created from these recordings. The system's intention is to make the process of recording a corpus of utterances relatively easy for those inexperienced in linguistic analysis. Ultimately, the recorded corpus and the resulting speech database are used for concatenative synthetic speech, thus allowing individuals at home or in clinics to create a synthetic voice in their own voice. The interface may prove useful for other purposes as well. The system facilitates the recording and labeling of large corpora of speech, making it useful for speech and linguistic research, and it provides immediate feedback on pronunciation, thus making it useful as a clinical learning tool.
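The accept/rerecord feedback loop described above can be sketched in miniature. This is my own toy illustration with invented thresholds and only an amplitude check; the real system also screens pitch and pronunciation:

```python
def screen_recording(samples, min_rms=0.05, clip_level=0.9):
    """Toy acceptability check for one recorded utterance: flag it if
    it is too quiet (low RMS energy) or likely clipped (peak too high).
    Thresholds here are made up for illustration."""
    rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
    if rms < min_rms:
        return "too quiet, please rerecord"
    if max(abs(s) for s in samples) >= clip_level:
        return "clipped, please rerecord"
    return "accepted"

# A reasonably loud, unclipped toy waveform passes the screen.
print(screen_recording([0.2, -0.3, 0.25, -0.2]))  # accepted
```

Giving this verdict immediately after each utterance, rather than after the whole session, is what lets a user with no linguistic training rerecord problems on the spot.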

Collaboration

Dive into Christopher A. Pennington's collaborations.

Top Co-Authors

Debra Yarrington (Alfred I. duPont Hospital for Children)
H. Timothy Bunnell (Alfred I. duPont Hospital for Children)
Keith Trnka (University of Delaware)
John McCaw (University of Delaware)
Allegra Cornaglia (Alfred I. duPont Hospital for Children)