Publications


Featured research published by Sherri L. Condon.


North American Chapter of the Association for Computational Linguistics | 2006

Cross-linguistic name matching in English and Arabic: a one-to-many mapping extension of the Levenshtein edit distance algorithm

Andrew Freeman; Sherri L. Condon; Christopher M. Ackerman

This paper presents a solution to the problem of matching personal names in English to the same names represented in Arabic script. Standard string comparison measures perform poorly on this task due to varying transliteration conventions in both languages and the fact that Arabic script does not usually represent short vowels. Significant improvement is achieved by augmenting the classic Levenshtein edit-distance algorithm with character equivalency classes.
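The core idea can be sketched as a small modification of the standard dynamic-programming recurrence: the substitution cost is reduced when two characters belong to the same equivalency class. The classes and the reduced cost below are illustrative placeholders, not the mappings defined in the paper.

```python
# Minimal sketch: Levenshtein edit distance with character equivalency classes.
# The classes and the 0.5 penalty are illustrative assumptions, not the paper's tables.

EQUIV_CLASSES = [
    {"k", "q", "c"},   # Latin spellings that may correspond to similar Arabic consonants
    {"i", "e", "y"},   # vowels/glides often interchangeable in transliteration
    {"o", "u", "w"},
]

def _sub_cost(a: str, b: str) -> float:
    """Substitution cost: 0 for identical characters, reduced for same-class pairs."""
    if a == b:
        return 0.0
    for cls in EQUIV_CLASSES:
        if a in cls and b in cls:
            return 0.5   # assumed reduced penalty
    return 1.0

def edit_distance(s: str, t: str) -> float:
    """Classic dynamic-programming Levenshtein with class-aware substitution."""
    m, n = len(s), len(t)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i
    for j in range(1, n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + 1,                                   # deletion
                d[i][j - 1] + 1,                                   # insertion
                d[i - 1][j - 1] + _sub_cost(s[i - 1], t[j - 1]),   # substitution
            )
    return d[m][n]

print(edit_distance("qasim", "kassem"))
```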


Behavior Research Methods, Instruments, & Computers | 1998

Message size constraints on discourse planning in synchronous computer-mediated communication

Claude G. Cech; Sherri L. Condon

Three groups of 20 dyads planned the MTV Music Video Awards Show over computers. The groups varied in whether they could send 4-line, 10-line, or 18-line messages, in part to examine whether increased planning efficiency in computer-mediated communication reflects communication strategies associated with constraints on message size. The results demonstrate that increased efficiency is not a function of such design features as the maximum message that may be sent. However, the subjects in the 4-line condition sent shorter messages to one another, were less likely to engage in a strategy of making multiple suggestions that required a single assent, and were more likely to differ in the relative proportion of their contributions to the discourse. The subjects in the 10-line condition had shorter maximum messages (and proportionately more disfluencies) than did those in the 18-line condition, despite the finding that the maximum messages of the latter would have also fit within 10 lines. Thus, the results also support a claim that size of the text window may result in different discourse management strategies and may influence an initial discourse-planning stage.


Hawaii International Conference on System Sciences | 2004

Temporal properties of turn-taking and turn-packaging in synchronous computer-mediated communication

Claude G. Cech; Sherri L. Condon

Turn structure and timing are examined in a variety of quasi-synchronous computer-mediated interfaces. The message window size, presence of scrolling, a single message window vs. message windows for each participant, and message persistence were systematically varied for pairs of interlocutors engaged in the same decision-making task. Participants produced more total words and more turns in conditions with larger windows and in those with scrolling, while separate windows produced even larger increases on these measures. Turn sizes were smaller in the latter conditions, and response times were faster. In the persistent separate-window conditions, messages from the partner intervened before participants completed responses in over half of the messages.


Machine Translation | 2012

Evaluation of 2-way Iraqi Arabic–English speech translation systems using automated metrics

Sherri L. Condon; Mark Arehart; Dan Parvaz; Gregory A. Sanders; Christy Doran; John S. Aberdeen

The Defense Advanced Research Projects Agency (DARPA) Spoken Language Communication and Translation System for Tactical Use (TRANSTAC) program (http://1.usa.gov/transtac) faced many challenges in applying automated measures of translation quality to Iraqi Arabic–English speech translation dialogues. Features of speech data in general and of Iraqi Arabic data in particular undermine basic assumptions of automated measures that depend on matching system outputs to reference translations. These features are described along with the challenges they present for evaluating machine translation quality using automated metrics. We show that scores for translation into Iraqi Arabic exhibit higher correlations with human judgments when they are computed from normalized system outputs and reference translations. Orthographic normalization, lexical normalization, and operations involving light stemming resulted in higher correlations with human judgments.
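A minimal sketch of the general workflow follows, assuming alef/ta-marbuta normalization, diacritic removal, and definite-article stripping as stand-ins for the paper's normalization and light-stemming operations, and using sacrebleu's BLEU as a representative automated metric; none of this is the TRANSTAC evaluation pipeline itself.

```python
# Sketch: normalize system outputs and reference translations before scoring.
# The normalization rules below are illustrative, not the study's exact operations.
import re
import sacrebleu

DIACRITICS = re.compile(r"[\u064B-\u0652]")   # Arabic short-vowel marks / tanween

def normalize_arabic(text: str) -> str:
    """Orthographic normalization plus a crude 'light stemming' step."""
    text = DIACRITICS.sub("", text)                          # drop short-vowel marks
    text = re.sub("[\u0622\u0623\u0625]", "\u0627", text)    # alef variants -> bare alef
    text = text.replace("\u0629", "\u0647")                  # ta marbuta -> ha
    text = text.replace("\u0640", "")                        # remove tatweel
    # illustrative light stemming: strip the definite article from each token
    tokens = [t[2:] if t.startswith("\u0627\u0644") and len(t) > 3 else t
              for t in text.split()]
    return " ".join(tokens)

def normalized_bleu(system_outputs, references):
    """BLEU over normalized hypotheses and (single) normalized references."""
    hyp = [normalize_arabic(h) for h in system_outputs]
    ref = [[normalize_arabic(r) for r in references]]
    return sacrebleu.corpus_bleu(hyp, ref).score
```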


Meeting of the Association for Computational Linguistics | 1999

Measuring Conformity to Discourse Routines in Decision-Making Interactions

Sherri L. Condon; Claude G. Cech; William R. Edwards

In an effort to develop measures of discourse level management strategies, this study examines a measure of the degree to which decision-making interactions consist of sequences of utterance functions that are linked in a decision-making routine. The measure is applied to 100 dyadic interactions elicited in both face-to-face and computer-mediated environments with systematic variation of task complexity and message-window size. Every utterance in the interactions is coded according to a system that identifies decision-making functions and other routine functions of utterances. Markov analyses of the coded utterances make it possible to measure the relative frequencies with which sequences of 2 and 3 utterances trace a path in a Markov model of the decision routine. These proportions suggest that interactions in all conditions adhere to the model, although we find greater conformity in the computer-mediated environments, which is probably due to increased processing and attentional demands for greater efficiency. The results suggest that measures based on Markov analyses of coded interactions can provide useful measures for comparing discourse level properties, for correlating discourse features with other textual features, and for analyses of discourse management strategies.
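A minimal sketch of such a conformity measure is given below, assuming an invented tag set and transition table in place of the paper's coding scheme and decision-routine model: the score is the fraction of 2- or 3-utterance sequences whose transitions all appear in the model.

```python
# Sketch: proportion of n-grams of coded utterances that trace a path through
# a Markov model of the decision routine.  Tags and transitions are placeholders.

ALLOWED = {                     # hypothetical decision-routine transitions
    ("suggest", "agree"),
    ("suggest", "reject"),
    ("reject", "suggest"),
    ("agree", "suggest"),
}

def conformity(tags, order=2):
    """Fraction of length-`order` utterance sequences whose transitions are all allowed."""
    ngrams = [tuple(tags[i:i + order]) for i in range(len(tags) - order + 1)]
    if not ngrams:
        return 0.0
    def on_path(ng):
        return all((a, b) in ALLOWED for a, b in zip(ng, ng[1:]))
    return sum(on_path(ng) for ng in ngrams) / len(ngrams)

coded = ["suggest", "agree", "suggest", "reject", "suggest", "agree"]
print(conformity(coded, order=2), conformity(coded, order=3))
```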


International Conference on Computational Linguistics | 2008

Learning to Match Names Across Languages

Inderjeet Mani; Alex Yeh; Sherri L. Condon

We report on research on matching names in different scripts across languages. We explore two trainable approaches based on comparing pronunciations. The first, a cross-lingual approach, uses an automatic name-matching program that exploits rules based on phonological comparisons of the two languages carried out by humans. The second, a monolingual approach, relies only on automatic comparison of the phonological representations of each pair. Alignments produced by each approach are fed to a machine learning algorithm. Results show that the monolingual approach yields machine-learned comparison of person names in English and Chinese with an F-measure above 97.0.
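The sketch below illustrates the monolingual flavor of this setup under loose assumptions: toy romanized stand-ins for phonological representations, simple similarity features in place of alignment features, and scikit-learn's logistic regression as the learner. None of these choices are taken from the paper.

```python
# Sketch: classify name pairs as match / non-match from similarity features
# computed over (assumed) phonemic strings.  Data and features are illustrative.
from difflib import SequenceMatcher
from sklearn.linear_model import LogisticRegression

def features(phonemic_a: str, phonemic_b: str):
    sim = SequenceMatcher(None, phonemic_a, phonemic_b).ratio()  # string similarity
    len_diff = abs(len(phonemic_a) - len(phonemic_b))
    same_initial = float(phonemic_a[:1] == phonemic_b[:1])
    return [sim, len_diff, same_initial]

# toy training pairs: (phonemic form A, phonemic form B, is_match)
pairs = [
    ("m a o", "m a w", 1),
    ("l i", "l i y", 1),
    ("m a o", "ch e n", 0),
    ("w a ng", "h u a ng", 0),
]
X = [features(a, b) for a, b, _ in pairs]
y = [label for *_, label in pairs]

clf = LogisticRegression().fit(X, y)
print(clf.predict([features("zh a ng", "zh a ng")]))
```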


International Conference on Systems Engineering | 2015

Understanding Asynchronous Distributed Collaboration in an Enterprise Systems Engineering Context

Gary L. Klein; Jill L. Drury; Sherri L. Condon

End users—even though they may be distributed geographically and across time zones—can play a substantial and beneficial role in the enterprise systems engineering (ESE) process. However, it is difficult to support asynchronous distributed collaboration under typical workplace conditions, and it is even more challenging to do so in the context of a complex ESE effort that includes multiple, interdependent, supportive ESE methods and techniques. This paper presents our model of such efforts and describes the demands each model component places on asynchronous distributed collaboration during system development. Further, this paper explains how we have used the model to develop an interview protocol to probe for the presence of model components in individual ESE efforts and their levels of success in managing these variables.


ACM Transactions on Asian Language Information Processing | 2011

Machine Translation Errors: English and Iraqi Arabic

Sherri L. Condon; Dan Parvaz; John S. Aberdeen; Christy Doran; Andrew Freeman; Marwan Awad

Errors in machine translations of English-Iraqi Arabic dialogues were analyzed using the methods developed for the Human Translation Error Rate measure (HTER). Human annotations were used to refine the Translation Error Rate (TER) annotations. The analyses were performed on approximately 100 translations into each language from four translation systems. Results include high frequencies of pronoun errors and errors involving the copula in translations to English. High frequencies of errors in subject/person inflection and closed-word classes characterized translations to Iraqi Arabic. There were similar frequencies of word order errors in both translation directions and low frequencies of polarity errors. The problems associated with many errors can be predicted from structural differences between the two languages. Also problematic is the need to insert lexemes not present in the source or vice versa. Some problems associated with deictic elements like pronouns will require knowledge of the discourse context to resolve.
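As a point of reference for how such error rates are computed, the sketch below implements a word-level edit distance between a system output and a reference, i.e. TER without its block-shift operation (equivalently, word error rate). This is a simplification for illustration, not the HTER tooling used in the study.

```python
# Sketch: word-level edit rate (TER without shifts) between hypothesis and reference.

def word_edit_rate(hypothesis: str, reference: str) -> float:
    hyp, ref = hypothesis.split(), reference.split()
    m, n = len(hyp), len(ref)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[m][n] / max(n, 1)                    # edits per reference word

print(word_edit_rate("he go to market yesterday", "he went to the market yesterday"))
```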


Performance Metrics for Intelligent Systems | 2009

Probability of successful transfer of low-level concepts via machine translation: a meta-evaluation

Gregory A. Sanders; Sherri L. Condon

In this paper, we present one of the important metrics used to measure the quality of machine translation in the DARPA TRANSTAC program. The metric is stated as either the probability or the odds of a machine translation system successfully transferring the meaning of content words (nouns, verbs, adjectives, adverbs, plus the most important quantifiers and prepositions). We present the rationale for the metric, explain its implementation, and examine its performance. To characterize the performance of the metric, we compare it to utterance-level (or sentence-level) human judgments of the semantic adequacy of the translations, obtained from a panel of bilingual judges who compare the source-language input to the target-language (translated) output. Language pairs examined in this paper include English-to-Arabic, Arabic-to-English, English-to-Dari, and Dari-to-English.
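The arithmetic behind the metric can be shown with a minimal sketch: from counts of content words judged to be transferred correctly, compute the probability of successful transfer and the corresponding odds. The counts below are invented for illustration.

```python
# Sketch: probability and odds of successful content-word transfer (toy counts).

def transfer_stats(correct: int, total: int):
    p = correct / total                          # probability of successful transfer
    odds = p / (1 - p) if p < 1 else float("inf")  # odds = p / (1 - p)
    return p, odds

p, odds = transfer_stats(correct=412, total=500)
print(f"probability = {p:.3f}, odds = {odds:.2f} : 1")
```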


Performance Metrics for Intelligent Systems | 2009

Automated metrics for speech translation

Sherri L. Condon; Mark Arehart; Christy Doran; Dan Parvaz; John S. Aberdeen; Karine Megerdoomian; Beatrice T. Oshika

In this paper, we describe automated measures used to evaluate machine translation quality in the Defense Advanced Research Projects Agency's Spoken Language Communication and Translation System for Tactical Use program, which is developing speech translation systems for dialogue between English and Iraqi Arabic speakers in military contexts. Limitations of the automated measures are illustrated along with variants of the measures that seek to overcome those limitations. Both the dialogue structure of the data and the Iraqi Arabic language challenge these measures, and the paper presents some solutions adopted by MITRE and NIST to improve confidence in the scores.

Collaboration


An overview of Sherri L. Condon's collaborations.

Top Co-Authors

Claude G. Cech

University of Louisiana at Lafayette

Gregory A. Sanders

National Institute of Standards and Technology

Craig I. Schlenoff

National Institute of Standards and Technology
