
Publication


Featured research published by Z. Krňoul.


Signal Processing | 2006

Design, implementation and evaluation of the Czech realistic audio-visual speech synthesis

M. Železný; Z. Krňoul; P. Císař; Jindřich Matoušek

This paper presents the complete process of creating an audio-visual speech synthesis system. Such a system consists of two main parts: acoustic synthesis emulating human speech, and facial animation emulating human lip articulation. The acoustic subsystem is based on concatenation-based speech synthesis. The visual subsystem is designed as a realistic, fully three-dimensional, parametrically controllable facial animation model. To control the animation parametrically so that it emulates human articulation, a set of visual parameters has to be obtained for all basic speech units. To provide realistic animation, a database of lip movements of a real person needs to be recorded and expressed through a suitable parameterization. The set of control parameters for the visual animation is then derived from this database. A 3D head model based on the head of a real person also makes the animation more realistic; to obtain such a model, 3D scanning of a real person has to be employed. We present the design and implementation of the above-mentioned process. The aim is to obtain realistic audio-visual speech synthesis with the possibility of changing the 3D head model to match a particular person. The design, acquisition, and processing of an audio-visual speech corpus for this purpose are presented. Next, the processes of both acoustic and visual speech synthesis are described. The visual speech synthesis comprises the tasks of model training, animation control, and coarticulation modelling. A facial animation can also increase the intelligibility of telephone speech for people with hearing disabilities; in such a case, the textual information to control the animation is not available. A solution to the problem of mapping visual parameters from the speech signal, either directly or through recognized text, is presented. Furthermore, the 3D scanning algorithm is presented: it allows obtaining a realistic 3D model based on the head of a real person and thus personalizing the talking head. At the end of this paper, an evaluation of the intelligibility of the presented audio-visual speech synthesis and its possible applications are presented.
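The viseme-driven animation control described above can be illustrated with a minimal sketch: each phoneme maps to a set of lip parameters, and a simple blending rule approximates coarticulation between neighbouring targets. All parameter names, values, and the blending scheme here are illustrative assumptions, not the authors' actual model.

```python
# Hypothetical viseme targets per phoneme (lip parameters in [0, 1]).
VISEMES = {
    "a": {"jaw_open": 0.8, "lip_round": 0.1},
    "o": {"jaw_open": 0.5, "lip_round": 0.9},
    "m": {"jaw_open": 0.0, "lip_round": 0.3},
}

def blend_visemes(prev, target, nxt, alpha=0.25):
    """Blend a viseme target with its neighbours to model coarticulation."""
    return {k: alpha * prev.get(k, 0.0)
               + (1 - 2 * alpha) * target[k]
               + alpha * nxt.get(k, 0.0)
            for k in target}

def animate(phonemes):
    """Produce one blended parameter frame per phoneme in the sequence."""
    frames = []
    for i, p in enumerate(phonemes):
        prev = VISEMES[phonemes[i - 1]] if i > 0 else {}
        nxt = VISEMES[phonemes[i + 1]] if i + 1 < len(phonemes) else {}
        frames.append(blend_visemes(prev, VISEMES[p], nxt))
    return frames
```

A real system would interpolate many frames per phoneme and use a learned coarticulation model rather than a fixed blending weight.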


text, speech and dialogue | 2004

Realistic Face Animation for a Czech Talking Head

Z. Krňoul; M. Železný

This paper focuses on improving visual Czech speech synthesis. Our aim was the design of a highly natural and realistic talking head with a realistic 3D face model, improved coarticulation, and a realistic model of the inner articulatory organs (the teeth, the tongue, and the palate). Besides very good articulation, our aim also included the expression of facial gestures and emotions by the talking head. The intelligibility was verified by a listening test, and the results of this test were analysed.


Journal on Multimodal User Interfaces | 2011

Automatic fingersign-to-speech translation system

Marek Hrúz; Pavel Campr; Erinç Dikici; Ahmet Alp Kindiroglu; Z. Krňoul; Alexander L. Ronzhin; Hasim Sak; Daniel Schorno; Hulya Yalcin; Lale Akarun; Oya Aran; Alexey Karpov; Murat Saraclar; M. Železný

The aim of this paper is to help two people communicate, one hearing-impaired and one visually impaired, by converting speech to fingerspelling and fingerspelling to speech. Fingerspelling is a subset of sign language that uses finger signs to spell the letters of the spoken or written language. We aim to convert fingerspelled words to speech and vice versa. Different spoken and sign languages, such as English, Russian, Turkish, and Czech, are considered.
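The core fingerspelling idea, spelling a word letter by letter as finger signs, can be sketched as a simple bidirectional mapping. The sign labels below are hypothetical placeholders; the actual system recognizes and renders signs from video and audio, not strings.

```python
# Hypothetical letter <-> finger-sign mapping for a Latin alphabet.
FINGER_SIGNS = {c: f"SIGN_{c.upper()}" for c in "abcdefghijklmnopqrstuvwxyz"}
LETTERS = {v: k for k, v in FINGER_SIGNS.items()}

def word_to_signs(word):
    """Spell a word as a sequence of finger-sign labels."""
    return [FINGER_SIGNS[c] for c in word.lower()]

def signs_to_word(signs):
    """Join a recognized finger-sign sequence back into a word."""
    return "".join(LETTERS[s] for s in signs)
```

In the described system the forward direction would feed a signing avatar and the reverse direction a speech synthesizer.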


international conference on machine learning | 2007

Czech text-to-sign speech synthesizer

Z. Krňoul; Jakub Kanis; M. Železný; Luděk Müller

Recent research progress in developing the Czech Sign Speech synthesizer is presented. The current goal is to improve the system for automatic synthesis so that it produces accurate synthesis of Sign Speech. The synthesis system converts written text to an animation of an artificial human model (avatar). This includes the translation of text to sign phrases and their conversion to the animation of the avatar. The animation is composed of movements and deformations of segments of the hands, the head, and also the face. The system has been evaluated by two initial perceptual tests, which indicate that the designed synthesis system is capable of producing intelligible Sign Speech.
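The two-stage pipeline the abstract describes, text to sign phrases and sign phrases to avatar animation, can be sketched minimally as follows. The lexicon and the per-gloss animation segments are invented placeholders, not data from the actual synthesizer.

```python
# Hypothetical word -> sign-gloss lexicon and per-gloss animation segments.
LEXICON = {"good": "GOOD", "morning": "MORNING"}
ANIMATIONS = {"GOOD": ["hand_up"], "MORNING": ["arms_cross", "arms_open"]}

def translate(text):
    """Stage 1: translate written text to a sequence of sign glosses."""
    return [LEXICON[w] for w in text.lower().split() if w in LEXICON]

def synthesize(text):
    """Stage 2: concatenate the animation segments for each gloss."""
    frames = []
    for gloss in translate(text):
        frames.extend(ANIMATIONS[gloss])
    return frames
```

A production system would additionally handle inflection, sign order differences between Czech and Czech Sign Language, and smooth transitions between concatenated segments.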


text, speech and dialogue | 2007

Translation and conversion for Czech sign speech synthesis

Z. Krňoul; Miloš Železny

Recent research progress in developing the Czech Sign Speech synthesizer is presented. The current goal is to improve the system for automatic synthesis so that it produces accurate synthesis of Sign Speech. The synthesis system converts written text to an animation of an artificial human model. This includes the translation of text to sign phrases and their conversion to the animation of an avatar. The animation is composed of movements and deformations of segments of the hands, the head, and also the face. The system has been evaluated by two initial perceptual tests, which indicate that the designed synthesis system is capable of producing intelligible Sign Speech.


text, speech and dialogue | 2011

Towards automatic annotation of sign language dictionary corpora

Marek Hrúz; Z. Krňoul; Pavel Campr; Luděk Müller

This paper deals with a novel automatic categorization of signs used in sign language dictionaries. The categorization provides additional information about lexical signs recorded in the form of video files. We design a new method for the automatic parameterization of these video files and for the categorization of the signs from the extracted information. The method incorporates advanced image processing for the detection and tracking of the hands and head of the signing person in the input image sequences. For hand tracking, we developed an algorithm based on object detection and discriminative probability models. For head tracking, we use an active appearance model, which is very powerful for the detection and tracking of human faces. We specify feasible conditions of the model that enable the extracted parameters to be used for a basic categorization of the non-manual component. We present an experiment with automatic categorization determining symmetry, location and contact of the hands, mouth shape, eye closure, and other features. The primary result of the experiment is the categorization of more than 200 signs, together with a discussion of the problems encountered and possible extensions.
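Given tracked 2D hand trajectories, coarse categories such as symmetry and hand contact can be derived with simple geometric tests, in the spirit of the categorization step above. The coordinate convention, the mirror axis, and the thresholds here are illustrative assumptions, not the paper's actual criteria.

```python
# Hands are given as lists of (x, y) centre positions per frame,
# with coordinates normalized to [0, 1].

def is_symmetric(left, right, axis_x=0.5, tol=0.05):
    """True if the hands mirror each other about a vertical axis at axis_x."""
    return all(abs((axis_x - lx) - (rx - axis_x)) < tol and abs(ly - ry) < tol
               for (lx, ly), (rx, ry) in zip(left, right))

def has_contact(left, right, dist_tol=0.02):
    """True if the hand centres come closer than dist_tol in any frame."""
    return any(((lx - rx) ** 2 + (ly - ry) ** 2) ** 0.5 < dist_tol
               for (lx, ly), (rx, ry) in zip(left, right))
```

The actual method would also use hand shape, motion direction, and the non-manual parameters extracted by the active appearance model.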


text, speech and dialogue | 2011

Web-based system for automatic reading of technical documents for vision impaired students

Jindřich Matoušek; Zdenek Hanzlícek; Michal Campr; Z. Krňoul; Pavel Campr; Martin Grůber

A web-based system for the automatic reading of technical documents, aimed at vision-impaired primary-school students, is presented in this paper. An overview of the system is given, covering both its backend (used by teachers to create and manage documents) and its frontend (used by students for viewing and reading them). The text-to-speech synthesis utilised for the automatic reading and, especially, the automatic processing of mathematical and physical formulas are described as well.
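The idea of converting a formula into speakable text, which the abstract highlights, can be illustrated by a tiny rule-based converter from a tokenized expression to spoken words. This is a deliberately simplified sketch, not the authors' formula-processing component, and the operator vocabulary is an assumption.

```python
# Hypothetical spoken forms for common operators in a whitespace-tokenized
# expression; unknown tokens (identifiers, numbers) pass through unchanged.
SPOKEN = {
    "+": "plus",
    "-": "minus",
    "*": "times",
    "/": "divided by",
    "=": "equals",
    "^": "to the power of",
}

def formula_to_speech(expr):
    """Convert a space-separated formula string to spoken text."""
    return " ".join(SPOKEN.get(tok, tok) for tok in expr.split())
```

A real system must parse structured markup (e.g. fractions, subscripts, roots) and produce language-specific readings, which is far harder than this token substitution.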


conference on computers and accessibility | 2011

Web-based sign language synthesis and animation for on-line assistive technologies

Z. Krňoul

This article presents recent progress in the design of sign language synthesis and avatar animation adapted for the web environment. A new 3D rendering method is considered to enable the transfer of avatar animation to end users. Furthermore, the animation efficiency of facial expressions, as part of the non-manual component, is discussed. The designed web service ensures on-line accessibility and fluent animation of the 3D avatar model, requires no additional software, and offers a wide range of uses for the target users.


International Conference on Interactive Collaborative Robotics | 2018

Improvements in 3D Hand Pose Estimation Using Synthetic Data

Jakub Kanis; Dmitry Ryumin; Z. Krňoul

Neural networks currently outperform earlier approaches to hand pose estimation. However, achieving superior results requires a large amount of appropriate training data, and the acquisition of real hand pose data is a time- and resource-consuming process. One possible solution uses synthetic training data. We introduce a method to generate synthetic depth images of the hand that closely match real images. We extend the approach of previous works to the modeling of depth image data by using a 3D scan of the subject's hand and a hand pose prior given by the real data distribution. We found that combining the synthetic data with real training data can result in better performance.
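The data-combination step, mixing synthetic and real samples before training, can be sketched as below. The mixing ratio, the seed, and the dataset shapes are placeholders; the paper reports its own combination strategy and proportions.

```python
import random

def mix_datasets(real, synthetic, synth_fraction=0.5, seed=0):
    """Return a shuffled training set whose synthetic share is
    approximately synth_fraction, drawn reproducibly from `synthetic`."""
    n_synth = int(len(real) * synth_fraction / (1 - synth_fraction))
    rng = random.Random(seed)
    sample = rng.sample(synthetic, min(n_synth, len(synthetic)))
    mixed = list(real) + sample
    rng.shuffle(mixed)
    return mixed
```

The trained pose estimator would then consume `mixed` as its training set; in practice the synthetic renders must also be domain-matched to the real depth sensor, which is the harder part of the method.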


international conference on speech and computer | 2016

Toward Sign Language Motion Capture Dataset Building

Z. Krňoul; Pavel Jedlička; Jakub Kanis; M. Železný

The article deals with a recording procedure for building a motion dataset, mainly for sign language synthesis systems. Data gloves and two types of optical motion capture techniques are considered as sources of sign language data for the advanced training of more natural and acceptable body movements of signing avatars. A summary of state-of-the-art technologies provides an overview of the possibilities, as well as the limiting factors, of sign language recording. The combination of motion capture technologies overcomes the existing difficulties of the complex task of recording both the manual and the non-manual components of sign language. The result is a recording procedure for the simultaneous motion capture of a signing subject, supporting further research into the as-yet unexplored phenomenon of human sign language production.

Collaboration


Z. Krňoul's top co-authors:

M. Železný, University of West Bohemia
Jakub Kanis, University of West Bohemia
P. Císař, University of West Bohemia
Pavel Campr, University of West Bohemia
Luděk Müller, University of West Bohemia
Marek Hrúz, University of West Bohemia
Martin Grůber, University of West Bohemia
Alexey Karpov, Russian Academy of Sciences
Michal Campr, University of West Bohemia