
Publications


Featured research published by Jan Romportl.


Text, Speech and Dialogue | 2007

Recording and annotation of speech corpus for Czech unit selection speech synthesis

Jindřich Matoušek; Jan Romportl

The paper gives a brief summary of the preparation and recording of a phonetically and prosodically rich speech corpus for Czech unit-selection text-to-speech synthesis. Special attention is paid to the process of two-phase orthographic annotation of the recorded sentences, with particular regard to the coherence of the annotations.


Text, Speech and Dialogue | 2004

Advanced Prosody Modelling

Jan Romportl; Jindřich Matoušek; Daniel Tihelka

A formal prosody model is proposed together with its application in a text-to-speech system. The model is based on a generative grammar of abstract, functionally involved prosodic units. For each sentence, this grammar creates a structure of immediate prosodic constituents in the form of a tree. Each prosodic word of a sentence is assigned a description vector by a description function, and this vector is used by a realization function to create an appropriate intonation contour for that word. Parameters of the model are estimated automatically from real speech data taken from a prosody corpus, which is also described.
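
To make the two-function view concrete, here is a minimal Python sketch of a tree of prosodic constituents with a toy description function and realization function. All names, the feature choices, and the simple falling contour are illustrative assumptions, not the model from the paper.

```python
# Minimal sketch: a tree of prosodic constituents, a description function
# that assigns each prosodic word a feature vector, and a realization
# function that maps that vector to an intonation contour.  Everything
# here is an illustrative assumption, not the paper's actual model.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ProsodicUnit:
    """A node in the tree of immediate prosodic constituents."""
    label: str                                      # e.g. "sentence", "phrase", "prosodic_word"
    children: List["ProsodicUnit"] = field(default_factory=list)
    text: str = ""                                  # surface text for leaf prosodic words


def description(word: ProsodicUnit, position: int, total: int) -> List[float]:
    """Toy description function: relative position in the phrase and word length."""
    return [position / max(total - 1, 1), len(word.text) / 10.0]


def realization(vector: List[float], base_f0: float = 120.0) -> List[float]:
    """Toy realization function: a short falling F0 contour scaled by the vector."""
    rel_pos, _length = vector
    peak = base_f0 * (1.0 + 0.2 * (1.0 - rel_pos))  # earlier words get slightly higher peaks
    return [peak, peak * 0.95, peak * 0.9]          # three-point contour per prosodic word


sentence = ProsodicUnit("sentence", [
    ProsodicUnit("phrase", [
        ProsodicUnit("prosodic_word", text="recording"),
        ProsodicUnit("prosodic_word", text="speech"),
    ]),
])

words = sentence.children[0].children
for i, w in enumerate(words):
    print(w.text, realization(description(w, i, len(words))))
```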


Text, Speech and Dialogue | 2005

Formal prosodic structures and their application in NLP

Jan Romportl; Jindřich Matoušek

A formal prosody description framework is introduced together with its relation to language semantics and NLP. The framework incorporates deep prosodic structures based on a generative grammar of abstract, functionally involved prosodic units. For each sentence, this grammar creates a structure of immediate prosodic constituents in the form of a tree. A speech corpus manually annotated with such prosodic structures is presented and its quantitative characteristics are discussed.


Text, Speech and Dialogue | 2008

Prosodic Phrases and Semantic Accents in Speech Corpus for Czech TTS Synthesis

Jan Romportl

We describe a statistical method for the assignment of prosodic phrases and semantic accents in read speech data. The method is based on statistical evaluation of listening test data by a maximum-likelihood approach, with parameters estimated by an EM algorithm. We also present linguistically relevant quantitative results on the distribution of prosodic phrases and semantic accents in 250 Czech sentences.
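
As a rough illustration of this kind of maximum-likelihood evaluation with EM, the following Python sketch jointly estimates per-item boundary probabilities and per-listener reliabilities from binary listening-test judgements. The toy data, the symmetric-accuracy model, and all variable names are assumptions for illustration, not the paper's exact formulation.

```python
# EM sketch: several listeners give binary judgements (boundary / no
# boundary) per word position; EM jointly estimates the probability that
# a boundary is really present and each listener's reliability.
import numpy as np

# rows = items (word positions), cols = listeners, values = 0/1 judgements
votes = np.array([
    [1, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 0, 1],
])

n_items, n_listeners = votes.shape
truth = votes.mean(axis=1)               # initial P(boundary) per item: majority vote
reliability = np.full(n_listeners, 0.8)  # initial P(listener agrees with the truth)

for _ in range(50):
    # E-step: posterior probability that each item really has a boundary
    like_pos = np.prod(np.where(votes == 1, reliability, 1 - reliability), axis=1)
    like_neg = np.prod(np.where(votes == 0, reliability, 1 - reliability), axis=1)
    prior = truth.mean()
    truth = prior * like_pos / (prior * like_pos + (1 - prior) * like_neg)

    # M-step: reliability = expected agreement with the inferred truth
    agree = votes * truth[:, None] + (1 - votes) * (1 - truth[:, None])
    reliability = agree.mean(axis=0)

print("P(boundary) per item:", np.round(truth, 2))
print("listener reliability:", np.round(reliability, 2))
```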


Archive | 2013

Voice Conservation: Towards Creating a Speech-Aid System for Total Laryngectomees

Zdeněk Hanzlíček; Jan Romportl; Jindřich Matoušek

This paper describes initial experiments on voice conservation for patients with laryngeal cancer in an advanced stage. The final aim is to create a speech-aid device which is able to “speak” with their former voices. Our initial work focuses on the applicability of speech data from patients with an impaired vocal tract for the purposes of speech synthesis. Preliminary results indicate that an appropriately selected synthesis method can successfully learn a new voice, even from speech data of lower quality.


Language and Technology Conference | 2009

Czech Senior COMPANION: Wizard of Oz Data Collection and Expressive Speech Corpus Recording and Annotation

Martin Grůber; Milan Legát; Pavel Ircing; Jan Romportl; Josef Psutka

This paper presents part of the data collection efforts undertaken within the COMPANIONS project, whose aim is to develop a set of dialogue systems able to act as artificial “companions” for human users. One of these systems, developed for the Czech language, is designed to be a partner for elderly people, able to talk with them about photographs that capture mostly their family memories. The paper describes in detail the collection of natural dialogues using the Wizard of Oz scenario, as well as the re-use of the collected data for the creation of an expressive speech corpus planned for the development of a limited-domain Czech expressive TTS system.


International Conference on Signal Processing | 2008

Statistical evaluation of reliability of large scale listening tests

Daniel Tihelka; Jan Romportl

This paper deals with the evaluation of large-scale listening tests and with the detection of unaccountable or unreliable answers from individual listeners. An iterative maximum-likelihood estimation scheme is proposed, and its abilities are demonstrated and discussed on data collected from a large-scale listening test carried out to collect reference material capturing human perception of the similarity of suprasegmental speech units.
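
A hedged sketch of one possible iterative scheme in this spirit: repeatedly estimate a weighted consensus over the listeners' answers and down-weight listeners who deviate from it, flagging those whose weight drops below a threshold as potentially unreliable. The weighting rule, the threshold, and the toy data are illustrative assumptions only.

```python
# Iterative reliability check: alternate between a weighted consensus
# per item and per-listener weights based on deviation from it.
import numpy as np

# rows = test items, cols = listeners, values = similarity judgements on a 1-5 scale
answers = np.array([
    [5, 4, 5, 1],
    [2, 2, 1, 5],
    [4, 4, 5, 2],
    [1, 2, 1, 4],
], dtype=float)

weights = np.ones(answers.shape[1])

for _ in range(20):
    consensus = answers @ weights / weights.sum()           # weighted mean per item
    deviation = np.abs(answers - consensus[:, None]).mean(axis=0)
    weights = 1.0 / (1.0 + deviation)                       # larger deviation -> smaller weight

unreliable = np.where(weights < 0.5 * weights.max())[0]
print("listener weights:", np.round(weights, 2))
print("flagged as unreliable:", unreliable.tolist())
```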


Information Sciences, Signal Processing and Their Applications | 2003

Sentence boundary detection in Czech TTS system using neural networks

Jan Romportl; Daniel Tihelka; Jindřich Matoušek

This paper presents the results of applying a neural network to the problem of deciding whether a given punctuation mark in Czech text is or is not the end of a sentence. It also discusses possible methods for relevant parameter extraction and compares the neural-network-based method with a Bayes classifier and a heuristic classifier.
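
For illustration, the following sketch sets up the same kind of decision on toy features around a punctuation mark and compares a small neural network with a naive Bayes classifier using scikit-learn. The feature set and data are assumptions; the paper's actual parameter extraction is more involved.

```python
# Decide whether a period in Czech text ends a sentence, from simple
# features of the surrounding tokens; compare a small neural network
# with a naive Bayes classifier.  Features and data are toy assumptions.
from sklearn.naive_bayes import BernoulliNB
from sklearn.neural_network import MLPClassifier

# features per punctuation mark:
# [previous token is an abbreviation, previous token is a digit,
#  next token starts with an uppercase letter]
X = [
    [1, 0, 1],  # abbreviation followed by capitalised word -> not a sentence end
    [0, 0, 1],  # ordinary word, next word capitalised -> sentence end
    [0, 1, 0],  # digit with period (e.g. in a date) -> not a sentence end
    [0, 0, 1],
    [1, 0, 0],
    [0, 0, 1],
]
y = [0, 1, 0, 1, 0, 1]  # 1 = the punctuation mark ends the sentence

nn = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
nb = BernoulliNB().fit(X, y)

test = [[0, 0, 1], [1, 0, 1]]
print("neural network:", nn.predict(test).tolist())
print("naive Bayes:   ", nb.predict(test).tolist())
```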


Text, Speech and Dialogue | 2015

Speech Corpus Preparation for Voice Banking of Laryngectomised Patients

Markéta Jůzová; Jan Romportl; Daniel Tihelka

This paper focuses on voice banking and the creation of personalised speech synthesis for laryngectomised patients who lose their voice after this radical surgery. Specific aspects of voice banking are discussed, including a description of the adjustments made to the generic methods. The main attention is paid to building the speech corpus, since the quality of synthesised speech depends strongly on the variability of speech units and the number of their occurrences. Statistics and characteristics of the first experimental voices are also presented, and the possibility of using different speech synthesis methods depending on voice quality and corpus size is pointed out.


International Conference on Signal Processing | 2010

Audiovisual interface for Czech spoken dialogue system

Pavel Ircing; Jan Romportl; Zdeněk Loose

Our paper introduces implementation details of an application that serves as an audiovisual interface to an automatic dialogue system. It comprises a state-of-the-art large-vocabulary continuous speech recognition engine and a TTS system coupled with an embodied avatar that is able, to some extent, to convey a range of emotions to the user. The interface was originally designed for a dialogue system that allows elderly users to reminisce about their photographs. However, the modular architecture of the whole system and the flexibility of the messages used for communication between the modules facilitate a seamless transition of the application to other dialogue domains.
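
As a sketch of what a flexible inter-module message might look like in such an architecture (ASR to dialogue manager to TTS/avatar), here is a minimal Python example. The field names and JSON encoding are assumptions for illustration; the abstract does not specify the project's actual message format.

```python
# Hypothetical module-to-module message with a free-form payload,
# illustrating how a flexible message format keeps modules loosely
# coupled and domain-independent.  All field names are assumptions.
import json
from dataclasses import dataclass, asdict


@dataclass
class ModuleMessage:
    source: str    # module that produced the message, e.g. "asr"
    target: str    # module that should consume it, e.g. "dialogue_manager"
    type: str      # e.g. "recognition_result", "tts_request"
    payload: dict  # domain-specific content, kept free-form for flexibility


msg = ModuleMessage(
    source="asr",
    target="dialogue_manager",
    type="recognition_result",
    payload={"text": "tohle je fotka z dovolené", "confidence": 0.87},
)

print(json.dumps(asdict(msg), ensure_ascii=False, indent=2))
```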

Collaboration


Jan Romportl's top co-authors and their affiliations:

Daniel Tihelka, University of West Bohemia
Pavel Ircing, University of West Bohemia
Eva Zackova, University of West Bohemia
Jindřich Matoušek, University of West Bohemia
Josef Psutka, University of West Bohemia
Martin Grůber, University of West Bohemia
Michal Polák, University of West Bohemia
Milan Legát, University of West Bohemia
Miroslav Spousta, Charles University in Prague