Dan Loehr
MITRE Corporation
Publications
Featured research published by Dan Loehr.
Language Resources and Evaluation | 2009
Thomas C. Schmidt; Susan Duncan; Oliver Ehmer; Jeffrey Hoyt; Michael Kipp; Dan Loehr; Magnus Magnusson; R. Travis Rose; Han Sloetjes
This paper presents the results of a joint effort by a group of multimodality researchers and tool developers to improve the interoperability between several tools used for the annotation and analysis of multimodality. Each of the tools has specific strengths, so using a variety of different tools on the same data can be desirable for project work. However, this usually requires tedious conversion between formats. We propose a common exchange format for multimodal annotation, based on the annotation graph (AG) formalism, which is supported by import and export routines in the respective tools. In the current version of this format, the common-denominator information can be reliably exchanged between the tools, and additional information can be stored in a standardized way.
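The exchange format itself is not reproduced in the abstract. As a rough illustration of the annotation graph (AG) idea it builds on, the sketch below models time-anchored nodes and labeled arcs grouped into tiers; all class and field names are invented for illustration and do not reflect the actual format specification.

```python
from dataclasses import dataclass, field

@dataclass
class Anchor:
    """A node on the timeline, anchored to a signal offset in seconds."""
    id: str
    offset: float

@dataclass
class Annotation:
    """A labeled arc between two anchors (e.g. a word or a gesture phase)."""
    id: str
    start: Anchor
    end: Anchor
    tier: str    # annotation layer, e.g. "speech" or "gesture"
    label: str

@dataclass
class AnnotationGraph:
    """Container for the common-denominator information a tool would
    import or export: the media reference, anchors, and annotations."""
    media_file: str
    anchors: dict = field(default_factory=dict)
    annotations: list = field(default_factory=list)

    def add_annotation(self, ann: Annotation) -> None:
        self.anchors[ann.start.id] = ann.start
        self.anchors[ann.end.id] = ann.end
        self.annotations.append(ann)

# Example: one speech token and one co-occurring gesture annotation
ag = AnnotationGraph(media_file="session01.mpg")
a1, a2 = Anchor("n1", 1.20), Anchor("n2", 1.85)
ag.add_annotation(Annotation("a1", a1, a2, tier="speech", label="hello"))
ag.add_annotation(Annotation("a2", a1, a2, tier="gesture", label="beat"))
```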
North American Chapter of the Association for Computational Linguistics | 2003
Carl Burke; Christy Doran; Abigail S. Gertner; Andy Gregorowicz; Lisa Harper; Joel Korb; Dan Loehr
We review existing types of dialogue managers (DMs), and propose that the Information State (IS) approach may allow both complexity of dialogue and ease of portability. We discuss implementational drawbacks of the only existing IS DM, and describe our work underway to develop a new DM resolving those drawbacks.
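The abstract does not spell out the paper's rule set; as a hedged illustration of what an Information State (IS) update loop generally looks like, the sketch below keeps a small state record and applies the first update rule whose precondition matches. The state fields and rule names are hypothetical, not taken from the authors' dialogue manager.

```python
from dataclasses import dataclass, field

@dataclass
class InformationState:
    """Minimal IS record: what was last said, what is open, what is agreed."""
    last_user_move: str = ""
    open_questions: list = field(default_factory=list)
    commitments: list = field(default_factory=list)

def rule_integrate_answer(state, move):
    """If the user answered an open question, record the commitment."""
    if move.startswith("answer:") and state.open_questions:
        state.commitments.append(move[len("answer:"):])
        state.open_questions.pop(0)
        return True
    return False

def rule_raise_question(state, move):
    """If the user asked something, push it onto the open-question stack."""
    if move.startswith("ask:"):
        state.open_questions.insert(0, move[len("ask:"):])
        return True
    return False

UPDATE_RULES = [rule_integrate_answer, rule_raise_question]

def update(state: InformationState, move: str) -> InformationState:
    """Apply the first update rule whose precondition holds for this move."""
    state.last_user_move = move
    for rule in UPDATE_RULES:
        if rule(state, move):
            break
    return state
```

The appeal of this style, as the abstract suggests, is that dialogue complexity lives in the declarative state and rules rather than in a hard-coded control flow, which also makes the manager easier to port to new domains.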
Conference of the Association for Machine Translation in the Americas | 1998
Florence Reeder; Dan Loehr
A not-translated word (NTW) is a token which a machine translation (MT) system is unable to translate, leaving it untranslated in the output. The number of not-translated words in a document is used as one measure in the evaluation of MT systems. Many MT developers agree that in order to reduce the number of NTWs in their systems, designers must increase the size or coverage of the lexicon to include these untranslated tokens, so that the system can handle them in future processing. While we accept this method for enhancing MT capabilities, in assessing the nature of NTWs in real-world documents we found surprising results. Our study looked at the NTW output from two commercially available MT systems (Systran and Globalink) and found that lexical coverage played a relatively small role in the words marked as not translated. In fact, 45% of the tokens in the list failed to translate for reasons other than being valid source-language words missing from the MT lexicon. For instance, e-mail addresses, words already in the target language, and acronyms were marked as not-translated words. This paper presents our analysis of NTWs and uses these results to argue that, in addition to lexicon enhancement, MT systems could benefit from more sophisticated pre- and post-processing of real-world documents in order to weed out such NTWs.
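The abstract's point that many NTWs are not lexicon gaps suggests a simple pre-processing pass. The sketch below is one hypothetical way to triage an NTW list into the categories the authors mention; the regexes, the word list, and the example tokens are invented for illustration and are not the paper's procedure.

```python
import re

# Triage not-translated words (NTWs) before blaming lexicon coverage.
EMAIL = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$")
ACRONYM = re.compile(r"^[A-Z]{2,6}$")
TARGET_LANGUAGE_WORDS = {"computer", "internet", "email"}  # placeholder list

def classify_ntw(token: str) -> str:
    """Assign a not-translated token to a coarse category."""
    if EMAIL.match(token):
        return "email-address"
    if ACRONYM.match(token):
        return "acronym"
    if token.lower() in TARGET_LANGUAGE_WORDS:
        return "already-target-language"
    return "candidate-lexicon-gap"  # only these argue for lexicon growth

# Example: tally categories over an NTW list emitted by an MT system
ntw_list = ["jsmith@example.com", "NATO", "internet", "Fahrvergnügen"]
counts = {}
for tok in ntw_list:
    cat = classify_ntw(tok)
    counts[cat] = counts.get(cat, 0) + 1
print(counts)
```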
International Journal of Speech Technology | 2003
Laurie E. Damianos; Dan Loehr; Carl Burke; Steve Hansen; Michael Viszmeg
We performed an exploratory study to examine the effects of speech-enabled input on a cognitive task involving analysis and annotation of objects in aerial reconnaissance videos. We added speech to an information fusion system to allow for hands-free annotation in order to examine the effect on efficiency, quality, task success, and user satisfaction. We hypothesized that speech recognition could be a cognitive-enabling technology by reducing the mental load of instrument manipulation and freeing up resources for the task at hand.
Despite the lack of confidence participants had in the accuracy and temporal precision of the speech-enabled input, each reported that speech made it easier and faster to annotate images. When speech input was available, participants chose speech over manual input to make all annotations. Several participants noted that the additional modality was very effective in reducing the necessity to navigate controls and in allowing them to focus more on the task. Quantitative results suggest that people could potentially identify images faster with speech. However, people did not annotate better with speech (precision was lower, and recall was significantly lower). We attribute the lower recall and precision scores to the lack of undo and editing capabilities and to insufficient experience by naïve users in an unfamiliar domain.
This formative study has provided feedback for further development of the system augmented with speech-enabled input, as our results show that the availability of speech may lead to improved performance of expert domain users on more complicated tasks.
Meeting of the Association for Computational Linguistics | 1998
Susann LuperFoy; Dan Loehr; David Duff; Keith J. Miller; Florence Reeder; Lisa Harper
Conference of the International Speech Communication Association | 2001
Tony Bigbee; Dan Loehr; Lisa Harper
Conference of the Association for Machine Translation in the Americas | 1998
Dan Loehr
Archive | 2002
Carl Burke; Lisa Harper; Dan Loehr
Archive | 2011
Jennifer Cole; Mark Hasegawa-Johnson; Dan Loehr; Linda Van Guilder; Henning Reetz; Stefan A. Frisch
Archive | 2013
Susan Duncan; Katharina J. Rohlfing; Dan Loehr