Publication


Featured research published by Lori S. Levin.


International Conference on Acoustics, Speech, and Signal Processing | 1997

Janus-III: speech-to-speech translation in multiple languages

Alon Lavie; Alex Waibel; Lori S. Levin; Michael Finke; Donna Gates; Marsal Gavaldà; Torsten Zeppenfeld; Puming Zhan

This paper describes JANUS-III, our most recent version of the JANUS speech-to-speech translation system. We present an overview of the system and focus on how system design facilitates speech translation between multiple languages, and allows for easy adaptation to new source and target languages. We also describe our methodology for evaluation of end-to-end system performance with a variety of source and target languages. For system development and evaluation, we have experimented with both push-to-talk and cross-talk recording conditions. To date, our system has achieved performance levels of over 80% acceptable translations on transcribed input, and over 70% acceptable translations on speech input recognized with 75-90% word accuracy. Our current research concentrates on enhancing the capabilities of the system to deal with input in broad and general domains.


Meeting of the Association for Computational Linguistics | 1995

Discourse Processing of Dialogues with Multiple Threads

Carolyn Penstein Rosé; Barbara Di Eugenio; Lori S. Levin; Carol Van Ess-Dykema

In this paper we will present our ongoing work on a plan-based discourse processor developed in the context of the Enthusiast Spanish to English translation system as part of the JANUS multi-lingual speech-to-speech translation system. We will demonstrate that theories of discourse which postulate a strict tree structure of discourse on either the intentional or attentional level are not totally adequate for handling spontaneous dialogues. We will present our extension to this approach along with its implementation in our plan-based discourse processor. We will demonstrate that the implementation of our approach outperforms an implementation based on the strict tree structure approach.
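
The structural point above, that a strict tree (or stack) model of attentional state cannot account for returns to earlier, still-open negotiation threads, can be illustrated with a small sketch. This is not the Enthusiast/JANUS discourse processor; the thread representation, topic labels, and attachment rule below are invented for illustration only.

```python
# Minimal sketch: a discourse state that keeps several threads open at once
# and allows an utterance to attach to an earlier thread, which a strict
# stack model would forbid. All names and rules here are hypothetical.
class DiscourseState:
    def __init__(self):
        self.threads = []          # all open threads, oldest first

    def open_thread(self, topic: str):
        self.threads.append({"topic": topic, "utterances": []})

    def attach(self, utterance: str, topic: str):
        """Attach to ANY open thread with a matching topic, not only the
        most recently opened one (all a strict stack would allow)."""
        for thread in reversed(self.threads):
            if thread["topic"] == topic:
                thread["utterances"].append(utterance)
                return
        self.open_thread(topic)
        self.attach(utterance, topic)

state = DiscourseState()
state.attach("Are you free Tuesday at two?", topic="meet-tuesday")
state.attach("Or we could meet Wednesday morning.", topic="meet-wednesday")
state.attach("Actually Tuesday at two works for me.", topic="meet-tuesday")
print([(t["topic"], len(t["utterances"])) for t in state.threads])
# [('meet-tuesday', 2), ('meet-wednesday', 1)]
```

In this toy run, the third utterance resumes the Tuesday thread even though the Wednesday thread was opened more recently, which a single-stack model of attentional state would not permit.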


Machine Translation | 2000

The Janus-III Translation System: Speech-to-Speech Translation in Multiple Domains

Lori S. Levin; Alon Lavie; Monika Woszczyna; Donna Gates; Marsal Gavaldà; Detlef Koll; Alex Waibel

The Janus-III system translates spoken languages in limited domains. The current research focus is on expanding beyond tasks involving a single limited semantic domain to significantly broader and richer domains. To achieve this goal, the MT components of our system have been engineered to build and manipulate multi-domain parse lattices that are based on modular grammars for multiple semantic domains. This approach yields solutions to several problems including multi-domain disambiguation, segmentation of spoken utterances into sentence units, modularity of system design, and re-use of earlier systems with incompatible output.
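
The idea of pooling analyses from modular, per-domain grammars into one multi-domain structure and then disambiguating among them can be sketched roughly as follows. This is not the Janus-III implementation; the domain names, patterns, and scores are hypothetical.

```python
# Minimal sketch: each semantic domain contributes its own modular analyzer,
# all candidate analyses are pooled, and a disambiguation step picks the
# best-scoring parse. Domains, patterns, and scores are invented.
from dataclasses import dataclass

@dataclass
class Analysis:
    domain: str      # which modular grammar produced this parse
    parse: str       # simplified stand-in for a parse-lattice edge
    score: float     # hypothetical grammar/LM score

def travel_grammar(utterance: str) -> list[Analysis]:
    if "flight" in utterance or "ticket" in utterance:
        return [Analysis("travel", "request-info(flight)", 0.8)]
    return []

def hotel_grammar(utterance: str) -> list[Analysis]:
    if "room" in utterance or "reservation" in utterance:
        return [Analysis("hotel", "request-action(reserve-room)", 0.7)]
    return []

def scheduling_grammar(utterance: str) -> list[Analysis]:
    if "meet" in utterance or "tuesday" in utterance:
        return [Analysis("scheduling", "suggest(meeting-time)", 0.6)]
    return []

DOMAIN_GRAMMARS = [travel_grammar, hotel_grammar, scheduling_grammar]

def analyze(utterance: str) -> Analysis | None:
    """Pool analyses from every domain grammar, then disambiguate by score."""
    candidates = []
    for grammar in DOMAIN_GRAMMARS:
        candidates.extend(grammar(utterance.lower()))
    return max(candidates, key=lambda a: a.score, default=None)

print(analyze("I would like a ticket on the next flight"))
```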


Linguistic Annotation Workshop | 2009

Committed Belief Annotation and Tagging

Mona T. Diab; Lori S. Levin; Teruko Mitamura; Owen Rambow; Vinodkumar Prabhakaran; Weiwei Guo

We present a preliminary pilot study of belief annotation and automatic tagging. Our objective is to explore semantic meaning beyond surface propositions. We aim to model people's cognitive states, namely their beliefs as expressed through linguistic means. We model the strength of their beliefs and their (the human's) degree of commitment to their utterance. We explore only the perspective of the author of a text. We classify predicates into one of three possibilities: committed belief, non-committed belief, or not applicable. We proceed to manually annotate data to that end, then we build a supervised framework to test the feasibility of automatically predicting these belief states. Even though the data set is relatively small, we show that automatic prediction of a belief class is a feasible task. Using syntactic features, we are able to obtain significant improvements over a simple baseline of 23% F-measure absolute points. The best-performing automatic tagging condition uses the POS tag, the word-type feature AlphaNumeric, and shallow syntactic chunk information (CHUNK). Our best overall performance is 53.97% F-measure.
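
The classification setup described above, three belief labels predicted from POS, word-type, and chunk features, can be sketched with a toy supervised pipeline. The training examples, feature values, and choice of learner here are invented; the paper's actual tagger and data differ.

```python
# Illustrative sketch only: a toy three-way classifier over the belief labels
# named in the abstract, using the kinds of features it mentions (POS tag,
# an alphanumeric word-type flag, chunk tag). Not the paper's system.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def features(token: str, pos: str, chunk: str) -> dict:
    return {
        "pos": pos,                       # part-of-speech tag
        "chunk": chunk,                   # shallow syntactic chunk label
        "alphanumeric": token.isalnum(),  # word-type feature
        "lower": token.lower(),
    }

# Tiny hypothetical training set: (token, POS, chunk) -> belief label.
train = [
    (("believe", "VBP", "B-VP"), "non-committed-belief"),
    (("may", "MD", "B-VP"), "non-committed-belief"),
    (("signed", "VBD", "B-VP"), "committed-belief"),
    (("announced", "VBD", "B-VP"), "committed-belief"),
    (("want", "VBP", "B-VP"), "not-applicable"),
    (("please", "UH", "O"), "not-applicable"),
]
X = [features(*x) for x, _ in train]
y = [label for _, label in train]

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X, y)

print(model.predict([features("confirmed", "VBD", "B-VP")]))
```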


Machine Translation | 2002

MT for Minority Languages Using Elicitation-Based Learning of Syntactic Transfer Rules

Katharina Probst; Lori S. Levin; Erik Peterson; Alon Lavie; Jaime G. Carbonell

The AVENUE project contains a run-time machine translation program that is surrounded by pre- and post-run-time modules. The post-run-time module selects among translation alternatives. The pre-run-time modules are concerned with elicitation of data and automatic learning of transfer rules in order to facilitate the development of machine translation between a language with extensive resources for natural language processing and a language with few resources for natural language processing. This paper describes the run-time transfer-based machine translation system as well as two of the pre-run-time modules: elicitation of data from the minority language and automated learning of transfer rules from the elicited data.
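
To make the notion of a syntactic transfer rule concrete, here is a minimal, hypothetical sketch of a rule and a run-time engine that applies it. The rule format, language pair, and toy dictionary are inventions for illustration, not the AVENUE formalism.

```python
# Minimal sketch of a learned transfer rule and its run-time application.
# The rule structure and example below are illustrative inventions.
from dataclasses import dataclass

@dataclass
class TransferRule:
    source_pattern: list[str]   # sequence of source-side categories
    target_order: list[int]     # permutation of the source constituents
    lexical_map: dict           # word-level translations (toy dictionary)

# Hypothetical rule: source "ADJ N" becomes target "N ADJ" (for a language
# pair where adjectives follow the noun).
rule = TransferRule(
    source_pattern=["ADJ", "N"],
    target_order=[1, 0],
    lexical_map={"red": "roja", "house": "casa"},
)

def apply_rule(rule: TransferRule, tagged: list[tuple[str, str]]) -> list[str]:
    """Apply a transfer rule to a (word, category) sequence if it matches."""
    cats = [cat for _, cat in tagged]
    if cats != rule.source_pattern:
        return [word for word, _ in tagged]      # no rule fires: pass through
    reordered = [tagged[i][0] for i in rule.target_order]
    return [rule.lexical_map.get(w, w) for w in reordered]

print(apply_rule(rule, [("red", "ADJ"), ("house", "N")]))  # ['casa', 'roja']
```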


Cross-Language Evaluation Forum | 2008

ParaMor: Finding Paradigms across Morphology

Christian Monson; Jaime G. Carbonell; Alon Lavie; Lori S. Levin

ParaMor automatically learns morphological paradigms from unlabelled text, and uses them to annotate word forms with morpheme boundaries. ParaMor competed in the English and German tracks of Morpho Challenge 2007 (Kurimo et al., 2008). In English, ParaMor's balanced precision and recall outperform at F1 an already sophisticated baseline induction algorithm, Morfessor (Creutz, 2006). In German, ParaMor suffers from low morpheme recall. But combining ParaMor's analyses with analyses from Morfessor results in a set of analyses that outperform either algorithm alone, and that place first in F1 among all algorithms submitted to Morpho Challenge 2007.
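
The paradigm intuition, that suffixes recurring across many of the same stems form a candidate paradigm which then licenses morpheme boundaries in unseen words, can be illustrated with a toy sketch. This is not ParaMor's actual induction algorithm; the vocabulary, the minimum-stem-length restriction, and the longest-suffix heuristic are simplifications.

```python
# Toy illustration of paradigm-based segmentation; not ParaMor itself.
from collections import defaultdict

vocab = ["walk", "walks", "walked", "walking",
         "jump", "jumps", "jumped", "jumping",
         "play", "plays", "played"]

# 1. Gather candidate stem/suffix splits (stems of at least 3 characters).
stems_by_suffix = defaultdict(set)
for word in vocab:
    for i in range(3, len(word)):
        stems_by_suffix[word[i:]].add(word[:i])

# 2. A suffix that attaches to two or more distinct stems joins the
#    candidate paradigm.
paradigm = {suf for suf, stems in stems_by_suffix.items() if len(stems) >= 2}

def segment(word: str) -> str:
    """Place a boundary before the longest suffix found in the paradigm."""
    for i in range(1, len(word)):
        if word[i:] in paradigm:
            return f"{word[:i]}+{word[i:]}"
    return word

for w in ["looking", "climbed", "reports"]:
    print(segment(w))   # look+ing, climb+ed, report+s
```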


Meeting of the Association for Computational Linguistics | 1991

Syntax-Driven and Ontology-Driven Lexical Semantics

Sergei Nirenburg; Lori S. Levin

We describe the scopes of two schools in lexical semantics, which we call syntax-driven lexical semantics and ontology-driven lexical semantics, respectively. Both approaches are used in various applications at The Center for Machine Translation. We believe that a comparative analysis of these positions and clarification of claims and coverage is essential for the field as a whole.


International Symposium on Temporal Representation and Reasoning | 2006

From Language to Time: A Temporal Expression Anchorer

Benjamin Han; Donna Gates; Lori S. Levin

Understanding temporal expressions in natural language is a key step towards incorporating temporal information in many applications. In this paper we describe a system, TEA, capable of anchoring such expressions in English: it features a constraint-based calendar model and a compact representational language to capture the intensional meaning of temporal expressions. We also report favorable results from experiments conducted on several email datasets.
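
As a rough illustration of what anchoring a relative temporal expression against a reference time involves, here is a toy resolver. TEA's constraint-based calendar model and representation language are far richer; the expressions and rules below are invented.

```python
# Toy sketch of anchoring relative temporal expressions; not the TEA system.
from datetime import date, timedelta

def anchor(expression: str, reference: date) -> date:
    """Resolve a handful of toy relative expressions to a calendar date."""
    expression = expression.lower().strip()
    if expression == "today":
        return reference
    if expression == "tomorrow":
        return reference + timedelta(days=1)
    if expression == "yesterday":
        return reference - timedelta(days=1)
    if expression.startswith("next "):
        weekday = ["monday", "tuesday", "wednesday", "thursday",
                   "friday", "saturday", "sunday"].index(expression[5:])
        days_ahead = (weekday - reference.weekday()) % 7 or 7
        return reference + timedelta(days=days_ahead)
    raise ValueError(f"unhandled expression: {expression!r}")

# e.g. an email sent on 2006-06-15 (a Thursday) mentioning "next Tuesday"
print(anchor("next Tuesday", date(2006, 6, 15)))   # 2006-06-20
```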


North American Chapter of the Association for Computational Linguistics | 2000

Evaluation of a practical interlingua for task-oriented dialogue

Lori S. Levin; Donna Gates; Alon Lavie; Fabio Pianesi; Dorcas Wallace; Taro Watanabe; Monika Woszczyna

IF (Interchange Format), the interlingua used by the C-STAR consortium, is a speech-act based interlingua for task-oriented dialogue. IF was designed as a practical interlingua that could strike a balance between expressivity and simplicity. If it is too simple, components of meaning will be lost and coverage of unseen data will be low. On the other hand, if it is too complex, it cannot be used with a high degree of consistency by collaborators on different continents. In this paper, we suggest methods for evaluating the coverage of IF and the consistency with which it was used in the C-STAR consortium.
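
For readers unfamiliar with speech-act-based interlinguas, the sketch below shows a hypothetical, simplified entry and a toy target-language generator. The field names, dialogue-act labels, and template are inventions loosely inspired by the description above; they do not reproduce the actual IF specification.

```python
# Hypothetical, simplified interlingua entry plus a toy English generator.
# None of these field names or labels come from the IF specification.
from dataclasses import dataclass, field

@dataclass
class InterlinguaEntry:
    speaker: str                      # e.g. "agent" or "client"
    speech_act: str                   # e.g. "request-information"
    concepts: list[str]               # domain concepts attached to the act
    arguments: dict = field(default_factory=dict)

def generate_english(entry: InterlinguaEntry) -> str:
    """Toy target-language generator: one template per speech act."""
    if entry.speech_act == "request-information" and "availability" in entry.concepts:
        return (f"Do you have a {entry.arguments.get('room-type', 'room')} "
                f"available on {entry.arguments.get('date', 'that date')}?")
    return "(no template for this entry)"

entry = InterlinguaEntry(
    speaker="client",
    speech_act="request-information",
    concepts=["availability", "room"],
    arguments={"room-type": "double room", "date": "March 3rd"},
)
print(generate_english(entry))
```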


Computational Linguistics | 2012

Modality and negation in SIMT: use of modality and negation in semantically-informed syntactic MT

Kathrin Baker; Michael Bloodgood; Bonnie J. Dorr; Chris Callison-Burch; Nathaniel Wesley Filardo; Christine D. Piatko; Lori S. Levin; Scott Miller

This article describes the resource- and system-building efforts of an 8-week Johns Hopkins University Human Language Technology Center of Excellence Summer Camp for Applied Language Exploration (SCALE-2009) on Semantically Informed Machine Translation (SIMT). We describe a new modality/negation (MN) annotation scheme, the creation of a (publicly available) MN lexicon, and two automated MN taggers that we built using the annotation scheme and lexicon. Our annotation scheme isolates three components of modality and negation: a trigger (a word that conveys modality or negation), a target (an action associated with modality or negation), and a holder (an experiencer of modality). We describe how our MN lexicon was semi-automatically produced and we demonstrate that a structure-based MN tagger results in precision around 86% (depending on genre) for tagging of a standard LDC data set. We apply our MN annotation scheme to statistical machine translation using a syntactic framework that supports the inclusion of semantic annotations. Syntactic tags enriched with semantic annotations are assigned to parse trees in the target-language training texts through a process of tree grafting. Although the focus of our work is modality and negation, the tree grafting procedure is general and supports other types of semantic information. We exploit this capability by including named entities, produced by a pre-existing tagger, in addition to the MN elements produced by the taggers described here. The resulting system significantly outperformed a linguistically naive baseline model (Hiero), and reached the highest scores yet reported on the NIST 2009 Urdu–English test set. This finding supports the hypothesis that both syntactic and semantic information can improve translation quality.
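
Two of the ideas above, representing a modality/negation annotation as a trigger/target/holder triple and "grafting" a semantic label onto a syntactic tag in a parse tree, can be sketched as follows. The tree encoding, label suffixes, and mini-lexicon are hypothetical, not the SIMT system's actual formats.

```python
# Illustrative sketch: an MN triple plus toy semantic grafting on parse-tree
# tags. Labels, suffixes, and the lexicon are invented for illustration.
from dataclasses import dataclass

@dataclass
class MNAnnotation:
    trigger: str   # word conveying modality or negation
    target: str    # action the modality/negation scopes over
    holder: str    # experiencer of the modality

# Toy modality/negation lexicon (trigger word -> semantic label suffix).
MN_LEXICON = {"must": "MOD", "may": "MOD", "not": "NEG"}

# A parse tree as (label, children); a leaf is (label, word).
tree = ("S",
        [("NP", "they"),
         ("VP",
          [("MD", "must"),
           ("VP", [("VB", "leave")])])])

def graft(node):
    """Copy the tree, grafting an MN suffix onto trigger leaves and marking
    their parent constituent as the grafted target."""
    label, rest = node
    if isinstance(rest, str):                       # leaf node
        suffix = MN_LEXICON.get(rest.lower())
        return (f"{label}-{suffix}" if suffix else label, rest)
    children = [graft(child) for child in rest]
    if any("-" in c[0] and c[0].split("-")[-1] in {"MOD", "NEG"} for c in children):
        label = f"{label}-MOD_TARGET"               # mark the grafted parent
    return (label, children)

print(MNAnnotation(trigger="must", target="leave", holder="they"))
print(graft(tree))
```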

Collaboration


Dive into Lori S. Levin's collaborations.

Top Co-Authors

Alon Lavie, Carnegie Mellon University
Donna Gates, Carnegie Mellon University
Alex Waibel, Karlsruhe Institute of Technology
Chris Dyer, Carnegie Mellon University
Christian Monson, Carnegie Mellon University
Monika Woszczyna, Carnegie Mellon University
Teruko Mitamura, Carnegie Mellon University