Publication


Featured research published by Nikos Chatzichrisafis.


Meeting of the Association for Computational Linguistics | 2005

A Voice Enabled Procedure Browser for the International Space Station

Manny Rayner; Beth Ann Hockey; Nikos Chatzichrisafis; Kim Farrell; Jean-Michel Renders

Clarissa, an experimental voice-enabled procedure browser that has recently been deployed on the International Space Station (ISS), is, to the best of our knowledge, the first spoken dialogue system in space. This paper gives background on the system and the ISS procedures, then discusses the methods developed to address three key problems: grammar-based speech recognition using the Regulus toolkit; SVM-based methods for open-microphone speech recognition; and robust, side-effect-free dialogue management for handling undos, corrections and confirmations.
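
The abstract names these techniques without implementation detail. As a rough illustration of the "side-effect-free" idea only, the following minimal Python sketch (all names hypothetical, not the Clarissa code) treats each user act as a pure function from the current dialogue state to a new one, so an undo is simply a return to the previous snapshot.

# Minimal sketch of side-effect-free dialogue management with undo support.
# All names are hypothetical; this is not the Clarissa implementation.
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class DialogueState:
    """Immutable snapshot of where the user is in a procedure."""
    procedure: str
    step: int


def apply_act(state: DialogueState, act: str) -> DialogueState:
    """Pure transition function: returns a new state, never mutates the old one."""
    if act == "next step":
        return replace(state, step=state.step + 1)
    if act == "previous step":
        return replace(state, step=max(1, state.step - 1))
    return state  # unrecognized acts leave the state unchanged


history = [DialogueState(procedure="water sampling", step=1)]
for act in ["next step", "next step", "undo"]:
    if act == "undo" and len(history) > 1:
        history.pop()               # undo = discard the latest snapshot
    else:
        history.append(apply_act(history[-1], act))

print(history[-1])                  # DialogueState(procedure='water sampling', step=2)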


Theory of Computing Systems / Mathematical Systems Theory | 2006

Evaluating Task Performance for a Unidirectional Controlled Language Medical Speech Translation System

Nikos Chatzichrisafis; Pierrette Bouillon; Manny Rayner; Marianne Santaholma; Marianne Starlander; Beth Ann Hockey

We present a task-level evaluation of the French-to-English version of MedSLT, a medium-vocabulary unidirectional controlled-language medical speech translation system designed for doctor-patient diagnosis interviews. Our main goal was to establish the task performance levels of novice users and compare them with those of expert users. Tests were carried out on eight medical students with no previous exposure to the system, each of whom used the system for a total of three sessions. By the end of the third session, all the students were able to use the system confidently, with an average task completion time of about 4 minutes.


Workshop on Grammar-Based Approaches to Spoken Language Processing | 2007

A Bidirectional Grammar-Based Medical Speech Translator

Pierrette Bouillon; Glenn Flores; Marianne Starlander; Nikos Chatzichrisafis; Marianne Santaholma; Nikos Tsourakis; Manny Rayner; Beth Ann Hockey

We describe a bidirectional version of the grammar-based MedSLT medical speech system. The system supports simple medical examination dialogues about throat pain between an English-speaking physician and a Spanish-speaking patient. The physician's side of the dialogue is assumed to consist mostly of WH-questions, and the patient's of elliptical answers. The paper focuses on the grammar-based speech processing architecture, the ellipsis resolution mechanism, and the online help system.
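
As a hypothetical sketch of what ellipsis resolution can look like in such a setting (not the MedSLT formalism), an elliptical patient answer can be interpreted by filling the slot queried by the physician's preceding WH-question:

# Hypothetical sketch of ellipsis resolution: an elliptical answer is expanded
# by filling the slot queried by the preceding WH-question. Not the MedSLT code.

def resolve_ellipsis(last_question: dict, answer: str) -> dict:
    """Turn a fragment like 'since yesterday' into a full semantic representation
    by reusing the question's predicate and filling the queried slot."""
    resolved = dict(last_question["semantics"])     # copy the question's predicate/args
    resolved[last_question["queried_slot"]] = answer
    return resolved


question = {
    "text": "When did the pain start?",
    "semantics": {"predicate": "pain-start", "patient": "you"},
    "queried_slot": "time",
}

print(resolve_ellipsis(question, "since yesterday"))
# {'predicate': 'pain-start', 'patient': 'you', 'time': 'since yesterday'}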


Empirical Methods in Natural Language Processing | 2005

Japanese Speech Understanding using Grammar Specialization

Manny Rayner; Nikos Chatzichrisafis; Pierrette Bouillon; Yukie Nakao; Hitoshi Isahara; Kyoko Kanzaki; Beth Ann Hockey; Marianne Santaholma; Marianne Starlander

The most common speech understanding architecture for spoken dialogue systems is a combination of speech recognition based on a class N-gram language model, and robust parsing. For many types of applications, however, grammar-based recognition can offer concrete advantages. Training a good class N-gram language model requires substantial quantities of corpus data, which is generally not available at the start of a new project. Head-to-head comparisons of class N-gram/robust and grammar-based systems also suggest that users who are familiar with system coverage get better results from grammar-based architectures (Knight et al., 2001). As a consequence, deployed spoken dialogue systems for real-world applications frequently use grammar-based methods. This is particularly the case for speech translation systems. Although leading research systems like Verbmobil and NE-SPOLE! (Wahlster, 2000; Lavie et al., 2001) usually employ complex architectures combining statistical and rule-based methods, successful practical examples like Phraselator and S-MINDS (Phraselator, 2005; Sehda, 2005) are typically phrasal translators with grammar-based recognizers.
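
For readers unfamiliar with the contrast drawn here, a class N-gram model pools counts over word classes, which is why it needs less training data than a word N-gram. The toy Python sketch below shows the standard class-bigram factorisation, P(w_i | w_{i-1}) approximated by P(w_i | class(w_i)) * P(class(w_i) | class(w_{i-1})), with invented probabilities; it is a textbook illustration, not data or code from this paper.

# Standard class-based bigram factorisation (textbook formulation, toy numbers):
# P(w_i | w_{i-1}) ~ P(w_i | class(w_i)) * P(class(w_i) | class(w_{i-1})).
# Classes pool counts, so far less training data is needed than for a word bigram.

word_class = {"monday": "DAY", "tuesday": "DAY", "meet": "VERB", "call": "VERB"}

# Illustrative probabilities only (not estimated from any corpus).
p_word_given_class = {("monday", "DAY"): 0.5, ("tuesday", "DAY"): 0.5,
                      ("meet", "VERB"): 0.6, ("call", "VERB"): 0.4}
p_class_bigram = {("VERB", "DAY"): 0.7, ("DAY", "VERB"): 0.1}


def class_bigram_prob(prev_word: str, word: str) -> float:
    """P(word | prev_word) under the class-bigram approximation."""
    prev_c, c = word_class[prev_word], word_class[word]
    return p_word_given_class[(word, c)] * p_class_bigram.get((prev_c, c), 0.0)


print(class_bigram_prob("meet", "monday"))   # 0.5 * 0.7 = 0.35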


Theory of Computing Systems / Mathematical Systems Theory | 2006

MedSLT: A Limited-Domain Unidirectional Grammar-Based Medical Speech Translator

Manny Rayner; Pierrette Bouillon; Nikos Chatzichrisafis; Marianne Santaholma; Marianne Starlander; Beth Ann Hockey; Yukie Nakao; Hitoshi Isahara; Kyoko Kanzaki

MedSLT is a unidirectional medical speech translation system intended for use in doctor-patient diagnosis dialogues, which provides coverage of several different language pairs and subdomains. Vocabulary ranges from about 350 to 1000 surface words, depending on the language and subdomain. We will demo both the system itself and the development environment, which uses a combination of rule-based and data-driven methods to construct efficient recognisers, generators and transfer rule sets from small corpora.
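
The abstract mentions transfer rule sets without showing their form. The Python sketch below illustrates the general idea of applying a source-to-target transfer rule to a simple semantic representation; the rule format and the French-English example are invented for illustration and are not the MedSLT formalism.

# Hypothetical sketch of applying a source-to-target transfer rule to a simple
# semantic representation. The rule format is invented and is not MedSLT's.

transfer_rules = [
    # (source pattern, target rewrite)
    ({"predicate": "avoir-mal", "location": "tete"},
     {"predicate": "have-pain", "location": "head"}),
]


def apply_transfer(source_sem: dict) -> dict:
    for pattern, rewrite in transfer_rules:
        if all(source_sem.get(k) == v for k, v in pattern.items()):
            result = dict(source_sem)
            result.update(rewrite)
            return result
    return source_sem  # no rule matched: pass the representation through


print(apply_transfer({"predicate": "avoir-mal", "location": "tete", "tense": "present"}))
# {'predicate': 'have-pain', 'location': 'head', 'tense': 'present'}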


IEEE Transactions on Audio, Speech, and Language Processing | 2007

Gaussian Mixture Clustering and Language Adaptation for the Development of a New Language Speech Recognition System

Nikos Chatzichrisafis; Vassilios Diakoloukas; Vassilios Digalakis; Costas Harizakis

The porting of a speech recognition system to a new language is usually a time-consuming and expensive process, since it requires collecting, transcribing, and processing a large number of language-specific training sentences. This work presents techniques for improved cross-language transfer of speech recognition systems to new target languages. Such techniques are particularly useful for target languages where minimal amounts of training data are available. We describe a novel method to produce a language-independent system by combining acoustic models from a number of source languages. This intermediate language-independent acoustic model is used to bootstrap a target-language system by applying language adaptation. For our experiments, we use acoustic models of seven source languages to develop a target Greek acoustic model. We show that our technique significantly outperforms a system trained from scratch when less than 8 h of read speech is available.
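
The paper's method is only summarised here. As a rough sketch of one way Gaussian mixture clustering across source languages could work (a simplification for illustration, not the authors' algorithm), the Gaussian components of all source-language models can be pooled and merged into a smaller, language-independent mixture by clustering their means:

# Rough sketch of pooling Gaussians from several source-language models and
# clustering them into a smaller, language-independent mixture. This is a
# simplification for illustration, not the method described in the paper.
import numpy as np


def cluster_gaussians(means: np.ndarray, weights: np.ndarray, n_clusters: int,
                      n_iters: int = 20, seed: int = 0):
    """K-means over component means; each cluster becomes one merged Gaussian
    whose mean is the weight-averaged mean of its members."""
    rng = np.random.default_rng(seed)
    centers = means[rng.choice(len(means), n_clusters, replace=False)]
    for _ in range(n_iters):
        # assign each pooled Gaussian to the nearest cluster center
        assign = np.argmin(((means[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_clusters):
            members = assign == k
            if members.any():
                w = weights[members] / weights[members].sum()
                centers[k] = (w[:, None] * means[members]).sum(axis=0)
    return centers


# Pool component means from three hypothetical source-language models
# (2-dimensional features to keep the example small).
pooled_means = np.vstack([np.random.default_rng(i).normal(i, 1.0, size=(16, 2))
                          for i in range(3)])
pooled_weights = np.full(len(pooled_means), 1.0 / len(pooled_means))

print(cluster_gaussians(pooled_means, pooled_weights, n_clusters=4).shape)  # (4, 2)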


Archive | 2010

Spoken Dialogue Application in Space: The Clarissa Procedure Browser

Manny Rayner; Beth Ann Hockey; Jean-Michel Renders; Nikos Chatzichrisafis; Kim Farrell

Anyone who has seen more than three science-fiction films will probably be able to suggest potential uses for spoken dialogue systems in space. Until recently, however, NASA and other space agencies have shown a surprising lack of interest in attempting to make this dream a reality and it is only in the last few years that any serious work has been carried out. The present chapter describes Clarissa, an experimental voice-enabled system developed at NASA Ames Research Center during a 3-year project starting in early 2002, which enables astronauts to navigate complex procedures using only spoken input and output. Clarissa was successfully tested on the International Space Station (ISS) on June 27, 2005, and is, to the best of our knowledge, the first spoken dialogue application in space.


Proceedings of the Tenth Conference of the European Association for Machine Translation | 2005

A Generic Multi-Lingual Open Source Platform for Limited-Domain Medical Speech Translation

Pierrette Bouillon; Manny Rayner; Nikos Chatzichrisafis; Beth Ann Hockey; Marianne Santaholma; Marianne Starlander; Yukie Nakao; Kyoko Kanzaki; Hitoshi Isahara


Conference of the International Speech Communication Association | 2005

A Methodology for Comparing Grammar-Based and Robust Approaches to Speech Understanding

Manny Rayner; Pierrette Bouillon; Nikos Chatzichrisafis; Beth Ann Hockey; Marianne Santaholma; Marianne Starlander; Hitoshi Isahara; Kyoko Kanzaki; Yukie Nakao


Traitement Automatique des Langues | 2007

Une grammaire partagée multitâche pour le traitement de la parole : application aux langues romanes [A shared multi-task grammar for speech processing: application to Romance languages]

Pierrette Bouillon; Manny Rayner; Bruna Novellas; Marianne Starlander; Marianne Santaholma; Yukie Nakao; Nikos Chatzichrisafis

Collaboration


Dive into Nikos Chatzichrisafis's collaborations.

Top Co-Authors

Yukie Nakao
National Institute of Information and Communications Technology

Hitoshi Isahara
National Institute of Information and Communications Technology

Kyoko Kanzaki
National Institute of Information and Communications Technology