Jan Silovsky
Technical University of Liberec
Publication
Featured research published by Jan Silovsky.
Multimedia Signal Processing | 2012
Jan Nouza; Karel Blavka; Jindrich Zdansky; Petr Cerva; Jan Silovsky; Marek Bohac; Josef Chaloupka; Michaela Kucharova; Ladislav Seps
This paper describes a complex system developed for processing, indexing and accessing data collected in large audio and audio-visual archives that form an important part of Czech cultural heritage. Currently, the system is being applied to the Czech Radio archive, namely to its oral history segment with more than 200,000 individual recordings covering almost ninety years of broadcasting in the Czech Republic and former Czechoslovakia. The ultimate goals are a) to transcribe a significant portion of the archive with the support of speech, speaker and language recognition technology, b) to index the transcriptions, and c) to make the audio and text files fully searchable. So far, the system has processed and indexed over 75,000 spoken documents. Most of them come from the last two decades, but the current demo collection also includes a series of presidential speeches dating back to 1934. Full coverage of the archive should be available by the end of 2014.
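The transcribe-index-search flow outlined above can be illustrated with a toy inverted index. This is a minimal sketch under assumed names; the system's actual ASR, speaker and language recognition components are not shown, so plain tokenized transcripts stand in for their output.

from collections import defaultdict

class ArchiveIndex:
    # Toy full-text index over transcribed recordings (illustrative only).

    def __init__(self):
        self.postings = defaultdict(set)  # word -> set of document ids
        self.documents = {}               # document id -> transcript text

    def add_document(self, doc_id, transcript):
        # Index one transcribed recording for full-text search.
        self.documents[doc_id] = transcript
        for word in transcript.lower().split():
            self.postings[word].add(doc_id)

    def search(self, query):
        # Return ids of documents containing every query word.
        words = query.lower().split()
        hits = self.postings[words[0]].copy() if words else set()
        for word in words[1:]:
            hits &= self.postings[word]
        return hits

index = ArchiveIndex()
index.add_document("1934-speech-01", "projev prezidenta republiky")
print(index.search("projev prezidenta"))  # {'1934-speech-01'}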
COST'09 Proceedings of the Second International Conference on Development of Multimodal Interfaces: Active Listening and Synchrony | 2009
Jan Nouza; Jindrich Zdansky; Petr Cerva; Jan Silovsky
Slavic languages pose a big challenge for researchers dealing with speech technology. They exhibit a large degree of inflection, namely declension of nouns, pronouns and adjectives, and conjugation of verbs. This has a large impact on the size of lexical inventories in these languages, and significantly complicates the design of text-to-speech and, in particular, speech-to-text systems. In this paper, we demonstrate some of the typical features of the Slavic languages and show how they can be handled in the development of practical speech processing systems. We present the solutions we applied in the design of voice dictation and broadcast speech transcription systems developed for Czech. Furthermore, we demonstrate how these systems can be converted to another similar Slavic language, in our case Slovak. All the presented systems operate in real time with very large vocabularies (350K words in Czech, 170K words in Slovak) and some of them have already been deployed in practice.
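As a concrete illustration of how inflection inflates the lexicon, the snippet below lists surface forms of one Czech noun; each form must be a separate entry in a full-form recognition vocabulary. The forms are a hand-picked example, not output of the described systems.

# Distinct case forms of the Czech noun "hrad" (castle).
FORMS_OF_HRAD = [
    "hrad", "hradu", "hrade", "hradě", "hradem",  # singular
    "hrady", "hradů", "hradům", "hradech",        # plural
]

# One English lemma ("castle") has two surface forms; this Czech lemma
# needs nine entries, which is why full-form Czech vocabularies grow to
# hundreds of thousands of words (350K in the system described above).
print(len(FORMS_OF_HRAD), "vocabulary entries for a single lemma")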
Speech Communication | 2011
Ramón López-Cózar; Jan Silovsky; Martin Kroul
This paper proposes a technique to enhance emotion detection in spoken dialogue systems by means of two modules that combine different information sources. The first one, called Fusion-0, combines emotion predictions generated by a set of classifiers that deal with different kinds of information about each sentence uttered by the user. To do this, the module employs several methods for information fusion that produce further predictions about the emotional state of the user. These predictions are the input to the second information fusion module, called Fusion-1, where they are combined to deduce the emotional state of the user. Fusion-0 represents a method employed in previous studies to enhance classification rates, whereas Fusion-1 represents the novelty of the technique, namely the combination of the emotion predictions generated by Fusion-0. One advantage of the technique is that it can be applied as a posterior processing stage to any other method that combines information from different sources at the decision level. This is because the technique works on the predictions (outputs) of such methods, without interfering with the procedure used to obtain these predictions. Another advantage is that the technique can be implemented as a modular architecture, which facilitates its setup within a spoken dialogue system as well as the deduction of the emotional state of the user in real time. Experiments have been carried out with classifiers dealing with prosodic, acoustic, lexical, and dialogue act information, and with three methods for combining information: multiplication of probabilities, average of probabilities, and unweighted vote. The results show that the technique enhances the classification rates of the standard fusion by 2.27% and 3.38% absolute in experiments carried out with two and three emotion categories, respectively.
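A minimal sketch of the two-stage fusion, assuming each classifier outputs a probability distribution over emotion categories; the function and variable names are illustrative, not the paper's implementation.

import numpy as np

def fuse(predictions, method):
    # Combine per-classifier probability vectors with one of the three
    # rules named above: product, average, or unweighted vote.
    stacked = np.vstack(predictions)
    if method == "product":
        fused = stacked.prod(axis=0)
    elif method == "average":
        fused = stacked.mean(axis=0)
    else:  # unweighted vote: one-hot of each classifier's best category
        votes = np.zeros_like(stacked)
        votes[np.arange(len(predictions)), stacked.argmax(axis=1)] = 1.0
        fused = votes.sum(axis=0)
    return fused / fused.sum()

# Fusion-0: combine the base classifiers (prosodic, acoustic, ...) with
# each rule, yielding one prediction per fusion method.
base = [np.array([0.7, 0.3]), np.array([0.6, 0.4]), np.array([0.2, 0.8])]
fusion0 = [fuse(base, m) for m in ("product", "average", "vote")]

# Fusion-1: combine the Fusion-0 predictions themselves.
print("predicted category:", fuse(fusion0, "average").argmax())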
International Workshop on Multimedia for Cultural Heritage | 2011
Jan Nouza; Karel Blavka; Marek Bohac; Petr Cerva; Jindrich Zdansky; Jan Silovsky; Jan Prazak
The Czech Radio archive of spoken documents is considered one of the gems of Czech cultural heritage. It contains the largest collection (more than 100,000 hours) of spoken documents recorded during the last 90 years. We are developing a complex platform that should automatically transcribe a significant portion of the archive, index it and eventually prepare it for full-text search. The four-year project, supported by the Czech Ministry of Culture, is challenging in that it must cope with huge volumes of data, with historical as well as contemporary language, with the rather low signal quality of old recordings, and with documents spoken not only in Czech but also in Slovak. The technology used includes speech, speaker and language recognition modules, speaker and channel adaptation components, tools for data indexing and retrieval, and a web interface that allows public access to the archive. Currently, a demo version of the platform is available for testing and for searching some 10,000 hours of already processed data.
Journal of Multimedia | 2012
Jan Nouza; Karel Blavka; Petr Cerva; Jindrich Zdansky; Jan Silovsky; Marek Bohac; Jan Prazak
In this paper we describe a complex software platform that is being developed for the automatic transcription and indexing of the Czech Radio archive of spoken documents. The archive contains more than 100,000 hours of audio recordings covering almost ninety years of public broadcasting in the Czech Republic and former Czechoslovakia. The platform is based on modern speech processing technology and includes modules for speech, speaker and language recognition, and tools for multimodal information retrieval. The aim of the project, supported by the Czech Ministry of Culture, is to make the archive accessible and searchable both for researchers and for the general public. After the first year of the project, the key modules have already been implemented and tested on a 27,400-hour subset of the archive. A web-based full-text search engine allows for a demonstration of the project's current state.
Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications | 2011
Jan Prazak; Jan Silovsky
This paper investigates the application of Probabilistic Linear Discriminant Analysis (PLDA) to speaker clustering within a speaker diarization framework. Factor analysis is employed to extract a low-dimensional representation of a sequence of acoustic feature vectors, the so-called i-vectors, which are then modeled using PLDA. Experiments were carried out using the COST278 broadcast news database. We achieved a 33.7% relative improvement in the Diarization Error Rate (DER) and a 43.8% relative improvement in the speaker error rate compared to a baseline system using clustering based on the Bayesian Information Criterion (BIC).
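The clustering step can be sketched as agglomerative clustering over pairwise i-vector scores. In the sketch below, cosine distance stands in for the PLDA log-likelihood-ratio scoring used in the paper, and the i-vectors are synthetic; only the overall shape of the procedure is meant to match.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
ivectors = np.vstack([
    rng.normal(+1.0, 0.3, size=(5, 50)),  # segments of speaker A
    rng.normal(-1.0, 0.3, size=(5, 50)),  # segments of speaker B
])

distances = pdist(ivectors, metric="cosine")          # pairwise scores
tree = linkage(distances, method="average")           # agglomerative merges
labels = fcluster(tree, t=0.5, criterion="distance")  # cut the dendrogram
print(labels)  # segments of the same speaker share a cluster id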
Speech Communication | 2013
Petr Cerva; Jan Silovsky; Jindrich Zdansky; Jan Nouza; Ladislav Seps
This paper deals with speaker-adaptive speech recognition for large spoken archives. The goal is to improve the recognition accuracy of an automatic speech recognition (ASR) system that is being deployed for the transcription of a large archive of Czech radio. This archive represents a significant part of Czech cultural heritage, as it contains recordings covering 90 years of broadcasting. A large portion of these documents (100,000 hours) is to be transcribed and made public for browsing. To improve the transcription results, an efficient speaker-adaptive scheme is proposed. The scheme is based on the integration of speaker diarization and adaptation methods and is designed to achieve a low Real-Time Factor (RTF) for the entire adaptation process, because the archive's size is enormous. It thus employs just two decoding passes, where the first one is carried out using a lexicon with a reduced number of items. Moreover, the transcripts from the first pass serve not only for adaptation, but also as the input to the speaker diarization module, which employs two-stage clustering. The output of diarization is then utilized for a cluster-based unsupervised Speaker Adaptation (SA) approach that also utilizes information about the gender of each individual speaker. Experimental results on various types of programs show that our adaptation scheme yields a significant Word Error Rate (WER) reduction, from 22.24% to 18.85%, over the Speaker Independent (SI) system while operating at a reasonable RTF.
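The two-pass scheme can be summarized in a short, runnable outline. Every function below is a stub standing in for a system module; none of the names belong to the actual (non-public) implementation.

from dataclasses import dataclass

@dataclass
class Cluster:
    id: int
    gender: str
    segments: list

def decode(segments, lexicon, model):
    return [f"[{model}|{lexicon}] transcript of {s}" for s in segments]

def diarize(segments, transcript):
    # Stand-in for the two-stage clustering driven by first-pass output.
    return [Cluster(0, "m", segments[:1]), Cluster(1, "f", segments[1:])]

def adapt(model, data, gender):
    return f"{model}+SA({gender})"

def transcribe(audio_segments):
    # Pass 1: fast decoding with the reduced lexicon and SI models.
    first_pass = decode(audio_segments, "reduced-lexicon", "SI")
    # First-pass transcripts drive diarization and per-cluster,
    # gender-dependent unsupervised adaptation; pass 2 then decodes
    # each cluster with the full lexicon and its adapted model.
    output = []
    for c in diarize(audio_segments, first_pass):
        model = adapt("SI", c.segments, c.gender)
        output += decode(c.segments, "full-lexicon", model)
    return output

print(transcribe(["seg1", "seg2", "seg3"]))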
COST'10 Proceedings of the 2010 International Conference on Analysis of Verbal and Nonverbal Communication and Enactment | 2010
Petr Cerva; Jan Nouza; Jan Silovsky
This paper deals with the cross-lingual adaptation of a Large Vocabulary Continuous Speech Recognition (LVCSR) system between two similar Slavic languages, from Czech to Slovak. The proposed adaptation scheme is performed in two consecutive phases and focuses on acoustic modeling and on phoneme and pronunciation mapping. It also exploits language similarities between the source and the target language, as well as speaker adaptation approaches. The presented experimental results show that the proposed cross-lingual adaptation approach yields a reduction of the Word Error Rate (WER) from 12.8% to 8.1% in the voice dictation task.
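A toy illustration of the pronunciation-mapping idea: Slovak pronunciations are seeded by rewriting Czech phone strings through a substitution table. The table fragment below is assumed for illustration and is not the mapping used in the paper.

# Assumed fragment of a Czech-to-Slovak phone substitution table.
CZ_TO_SK = {
    "rzh": "r",   # Czech ř has no Slovak counterpart; fall back to r
    "ee": "ie",   # illustrative vowel substitution
}

def map_pronunciation(czech_phones):
    # Rewrite a Czech phone sequence into a Slovak seed pronunciation.
    return [CZ_TO_SK.get(p, p) for p in czech_phones]

print(map_pronunciation(["rzh", "ee", "k", "a"]))  # ['r', 'ie', 'k', 'a']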
Multimedia Signal Processing | 2012
Jan Silovsky; Jindrich Zdansky; Jan Nouza; Petr Cerva; Jan Prazak
In this paper we study the effect of incorporating automatic transcriptions in the speaker diarization process. We aim to improve both the diarization accuracy, as evaluated by standard objective measures, and the quality of the diarization output from the user's perspective. Although the presented approach relies on the output of an automatic speech recognizer, it makes no use of lexical information. Instead, we use information about word boundaries and the classification of non-speech events occurring in the processed stream. The former is used as a constraining condition for speaker change-point candidates, and the latter makes it possible to disregard various vocal noise sounds that carry no speaker-specific information (given the representation of the signal by cepstral features) and thus harm the speaker representation. The experimental evaluation of the presented approach was carried out using the COST278 multilingual broadcast news database. We demonstrate that the approach yields improvements in terms of both speaker diarization and segmentation performance measures. Furthermore, we show that the number of change points detected within words (rather than at their boundaries) is significantly reduced.
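The two uses of the recognizer output can be sketched as follows; the time-stamped word list and the event tokens are an assumed data layout, not the system's actual format.

# (start_sec, end_sec, token) triples from the recognizer; tokens in
# angle brackets are non-speech events.
words = [
    (0.0, 0.4, "dobry"), (0.4, 0.9, "den"),
    (0.9, 1.2, "<laugh>"),
    (1.2, 1.7, "vitejte"), (1.7, 2.3, "u"), (2.3, 2.9, "zprav"),
]
NON_SPEECH = {"<laugh>", "<cough>", "<breath>"}

# (a) speaker change points may only be hypothesized at word boundaries
candidates = sorted({t for s, e, _ in words for t in (s, e)})

# (b) regions recognized as noise events are excluded from speaker modeling
speech_regions = [(s, e) for s, e, tok in words if tok not in NON_SPEECH]

print(candidates)
print(speech_regions)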
Multimedia Signal Processing | 2012
Petr Cerva; Jan Silovsky; Jindrich Zdansky; Ondrej Smola; Karel Blavka; Karel Palecek; Jan Nouza; Jiri Malek
This paper presents a complex system developed to improve the quality of distance learning by allowing people to browse the content of various (academic) lectures. The system consists of several main modules. The first, an automatic speech recognition (ASR) module, is designed to cope with the inflective Czech language and provides time-aligned transcriptions of input audio-visual recordings of lectures. These transcriptions are generated off-line in two recognition passes using speaker adaptation methods and language models mixed from various text sources, including transcriptions of broadcast programs, spontaneous telephone talks, web discussions, theses, etc. Lecture recordings and their transcriptions are then indexed and stored in a database. The next module, a client-server web lecture browser, allows users to browse or play the indexed content and search within it.
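The language-model mixing mentioned above is, at its core, linear interpolation of source-specific models. The sketch below interpolates unigram probabilities with assumed weights; a real system would typically interpolate full n-gram models with weights tuned on held-out lecture data.

# source name -> (interpolation weight, unigram probabilities); all
# numbers are made up for illustration.
SOURCES = {
    "broadcast": (0.5, {"prednaska": 0.002,  "student": 0.004}),
    "telephone": (0.2, {"prednaska": 0.0005, "student": 0.001}),
    "web":       (0.3, {"prednaska": 0.001,  "student": 0.003}),
}

def mixed_prob(word):
    # P(word) under the linearly interpolated model.
    return sum(w * p.get(word, 0.0) for w, p in SOURCES.values())

print(mixed_prob("student"))  # 0.5*0.004 + 0.2*0.001 + 0.3*0.003 = 0.0031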