Yulia Gizatdinova
University of Tampere
Publications
Featured research published by Yulia Gizatdinova.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2006
Yulia Gizatdinova; Veikko Surakka
A feature-based method for detecting landmarks from facial images was designed. The method was based on extracting oriented edges and constructing edge maps at two resolution levels. Edge regions with characteristic edge patterns formed landmark candidates. Eye detection remained invariant to facial expressions, whereas nose and mouth detection deteriorated under expressions of happiness and disgust.
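The two-level oriented-edge representation described above can be sketched roughly as follows (a minimal NumPy sketch; the gradient operator, orientation bin count, and magnitude threshold are illustrative assumptions, not the paper's exact choices):

```python
import numpy as np

def oriented_edge_map(img, n_orient=4, thresh=0.1):
    # Quantise local gradient directions into orientation bins; the exact
    # edge operator and bin count are assumptions for illustration.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi                      # orientations in [0, pi)
    bins = np.floor(ang / (np.pi / n_orient)).astype(int) % n_orient
    strong = mag > thresh * max(mag.max(), 1e-9)          # keep strong edges only
    return np.where(strong, bins + 1, 0)                  # 0 = no edge, 1..n = bin

def two_level_edge_maps(img):
    # Edge maps at two resolution levels: full size and a 2x-downsampled copy
    return oriented_edge_map(img), oriented_edge_map(img[::2, ::2])
```

Regions whose edge maps show a characteristic orientation pattern would then be kept as landmark candidates.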
Proceedings of the 2nd ACM workshop on Multimedia in forensics, security and intelligence | 2010
Guoying Zhao; Xiaohua Huang; Yulia Gizatdinova; Matti Pietikäinen
Visual information from captured video is important for speaker identification under noisy conditions involving background noise or cross talk among speakers. In this paper, we propose local spatiotemporal descriptors to represent and recognize speakers based solely on visual features. Spatiotemporal dynamic texture features of local binary patterns, extracted from localized mouth regions, describe the motion information in utterances and capture its spatial and temporal transition characteristics. Structural edge map features are extracted from the image frames to represent appearance characteristics. Combining dynamic texture and structural features takes both motion and appearance into account, providing a description of spatiotemporal development in speech. In our experiments on the BANCA and XM2VTS databases, the proposed method obtained promising recognition results compared with other features.
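The texture side of the descriptor builds on local binary patterns; a minimal single-frame sketch (the paper uses a spatiotemporal extension over localized mouth regions, and the 8-neighbour sampling and 256-bin histogram here are assumptions):

```python
import numpy as np

def lbp_8(img):
    # Basic 8-neighbour local binary pattern for one frame: each interior
    # pixel gets an 8-bit code from thresholding its neighbours against it.
    c = img[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    # 256-bin histogram of codes serves as the texture descriptor
    return np.bincount(codes.ravel(), minlength=256)
```

A spatiotemporal variant would additionally compute such codes over temporal slices of the mouth-region video volume.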
international conference on image analysis and processing | 2007
Yulia Gizatdinova; Veikko Surakka
The present aim was to develop a fully automatic feature-based method for expression-invariant detection of facial landmarks from still facial images. It continues our earlier work, in which we found that certain muscle contractions had a deteriorating effect on feature-based landmark detection, especially in the lower face. Taking this crucial facial behavior into account, we introduced improvements to the method that allow facial landmarks to be detected fully automatically from highly expressive images. In the method, information on local oriented edges is used to compose edge maps of the image at two levels of resolution. The landmark candidates resulting from this step are further verified by edge orientation matching. Knowledge of face geometry is used to find the proper spatial arrangement of the candidates. The results demonstrated a high overall performance of the method when tested on a wide range of facial displays.
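The verification step, in which a candidate's edge-orientation profile is matched against a landmark model, might look roughly like this (a hedged sketch; the distance measure and acceptance threshold are illustrative assumptions):

```python
import numpy as np

def orientation_histogram(edge_orients, n_orient=4):
    # Normalised histogram of quantised edge orientations in a candidate region
    h = np.bincount(edge_orients.ravel(), minlength=n_orient).astype(float)
    return h / max(h.sum(), 1.0)

def matches_landmark_model(candidate_hist, model_hist, max_l1_dist=0.3):
    # Accept a candidate when its orientation profile is close to the model's
    return float(np.abs(candidate_hist - model_hist).sum()) <= max_l1_dist
```

Candidates surviving this check would then be filtered by face-geometry constraints on their mutual spatial arrangement.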
workshop on applications of computer vision | 2012
Yulia Gizatdinova; Oleg Špakov; Veikko Surakka
We present a novel vision-based perceptual user interface for hands-free text entry that utilizes face detection and visual gesture detection to manipulate a scrollable virtual keyboard. Thorough experimentation was undertaken to quantitatively evaluate the performance of the interface in hands-free pointing, selection, and scrolling tasks. The experiments were conducted with nine participants under laboratory conditions. Several face and head gestures were examined for detection robustness and user convenience. The system performed reasonably well, with a high gesture detection rate and a small false alarm rate. The participants reported that the new interface was easy to understand and operate. Encouraged by these results, we discuss the advantages and constraints of the interface and suggest possibilities for design improvements.
advanced visual interfaces | 2012
Yulia Gizatdinova; Oleg Špakov; Veikko Surakka
Video-based human-computer interaction has received increasing interest over the years. However, earlier research has mainly focused on the technical characteristics of different methods rather than on user performance and experiences with computer vision technology. This study investigates the performance of novice users and their subjective experiences when typing text with several video-based pointing and selection techniques. In Experiment 1, eye tracking and head tracking were applied to the task of pointing at the keys of a virtual keyboard. The results showed that gaze pointing was significantly faster but also more error-prone than head pointing. Self-reported subjective ratings revealed that typing with gaze pointing was generally rated as better, faster, more pleasant, and more efficient than typing with head pointing. In Experiment 2, mouth-open and brows-up facial gestures were utilized for confirming the selection of a given character. The results showed that text entry speed was approximately the same for both selection techniques, while mouth interaction caused significantly fewer errors than brow interaction. Subjective ratings did not reveal any significant differences between the techniques. Possibilities for design improvements are discussed.
Entertainment Computing | 2014
Mirja Ilves; Yulia Gizatdinova; Veikko Surakka; Esko Vankka
This study aimed to develop and test a hands-free video game that utilizes information on the player's real-time face position and facial expressions as intrinsic elements of gameplay. Special focus was given to investigating the user's subjective experiences of computer vision input in game interaction. The player's goal was to steer a drunken character home as quickly as possible by moving their head. Additionally, the player could influence the behavior of game characters by frowning and smiling. The participants played the game with both computer vision and a conventional joystick, and rated the functionality of the control methods and their emotional and game experiences. The results showed that although the functionality of joystick steering was rated higher than that of the computer vision method, the use of head movements and facial expressions enhanced the game-playing experience in many ways. The participants rated playing with the computer vision technique as more entertaining, interesting, challenging, immersive, and arousing than playing with a joystick. The results suggest that the high level of experienced arousal in computer vision-based interaction may be a key factor in better game-playing experiences.
Proceedings of the 7th International Conference on Methods and Techniques in Behavioral Research | 2010
Yulia Gizatdinova; Veikko Surakka; Guoying Zhao; Erno Mäkinen; Roope Raisamo
Facial expressions are emotionally, socially and otherwise meaningful signals in the face. They play a critical role in human life, providing an important channel of nonverbal communication. Automating the entire process of expression analysis can potentially facilitate human-computer interaction, making it resemble the mechanisms of human-human communication. In this paper, we present ongoing research that aims at developing a novel spatiotemporal approach to expression classification in video. The novelty comes from a new facial representation based on local spatiotemporal feature descriptors. In particular, combined dynamic edge and texture information is used for reliable description of both the appearance and motion of the expression. Support vector machines are utilized to perform the final expression classification. The planned experiments will systematically evaluate the performance of the developed method on several databases of complex facial expressions.
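The fusion of the edge and texture channels into a single descriptor could be sketched as a simple concatenation of per-channel histograms (an illustrative assumption; the paper does not spell out the fusion scheme here, and the SVM stage is omitted):

```python
import numpy as np

def combined_descriptor(edge_hist, texture_hist):
    # L1-normalise each modality so neither dominates the joint vector,
    # then concatenate; the result would be fed to an SVM classifier.
    e = np.asarray(edge_hist, dtype=float)
    t = np.asarray(texture_hist, dtype=float)
    e = e / max(e.sum(), 1e-9)
    t = t / max(t.sum(), 1e-9)
    return np.concatenate([e, t])
```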
Pattern Recognition Letters | 2010
Yulia Gizatdinova; Veikko Surakka
Automatic localization of facial features is an essential step for many systems of face recognition, facial expression classification, and intelligent vision-based human-computer interfaces. In this paper, we present an automatic edge-based method for locating regions of prominent facial features in upright facial images. The proposed localization scheme was tested on several public databases of complex facial expressions. The method demonstrated high localization rates when accuracy was evaluated both by a conventional point error measure and by a new rectangular error measure that takes into account the location of the feature in the image and the true feature size.
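The contrast between the two accuracy measures can be illustrated as follows (a sketch under assumptions; the paper's exact formulation of the rectangular measure is not reproduced here):

```python
def point_error_ok(pred, true_center, eye_dist, frac=0.25):
    # Conventional point measure: the predicted point must lie within a
    # fixed fraction of the inter-eye distance from the ground-truth centre.
    dx, dy = pred[0] - true_center[0], pred[1] - true_center[1]
    return (dx * dx + dy * dy) ** 0.5 <= frac * eye_dist

def rect_error_ok(pred, true_box):
    # Rectangle-based measure: the tolerance follows the true feature's own
    # location and size, given here as a bounding box (x0, y0, x1, y1).
    x0, y0, x1, y1 = true_box
    return x0 <= pred[0] <= x1 and y0 <= pred[1] <= y1
```

Under the point measure, a small feature and a large one get the same absolute tolerance; the rectangular measure scales the tolerance with the feature itself.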
systems, man and cybernetics | 2014
Henry Joutsijoki; Markus Haponen; Ivan Baldin; Jyrki Rasku; Yulia Gizatdinova; Michelangelo Paci; Jari Hyttinen; Katriina Aalto-Setälä; Martti Juhola
This paper focuses on induced pluripotent stem cell (iPSC) colony image classification using machine learning methods and different feature sets obtained from intensity histograms. Intensity histograms are computed from the whole iPSC colony images and, as a baseline, from the iPSC colony area of the images only. Furthermore, we apply two simple feature selection methods to both datasets, yielding four datasets in total. Altogether, 30 different classification methods are tested in thorough experiments. The best accuracy (55%) is obtained for the feature set computed from the whole image using Directed Acyclic Graph Support Vector Machines (DAGSVM). DAGSVM is also the best choice when intensity histograms are computed only from the iPSC colony area, achieving an accuracy of 54%. The results are promising for further research, in which, for instance, more sophisticated feature selection and extraction methods and other multi-class extensions of SVM will be examined. However, intensity histograms alone are not adequate for iPSC colony image classification.
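The two feature-set variants (whole image vs. colony area only) and a simple selection step might look like this (a NumPy sketch; the bin count and the variance-based selector are illustrative assumptions, not necessarily the paper's choices):

```python
import numpy as np

def histogram_features(img, mask=None, bins=32):
    # Intensity histogram over the whole image, or restricted to the
    # colony area when a binary mask is supplied.
    vals = img[mask] if mask is not None else img.ravel()
    h, _ = np.histogram(vals, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1.0)

def select_top_variance(features, k=8):
    # One simple feature-selection scheme: keep the k highest-variance bins
    # across the dataset (rows = samples, columns = histogram bins).
    idx = np.argsort(features.var(axis=0))[::-1][:k]
    return np.sort(idx)
```

The selected bins from each variant would then be passed to the 30 classifiers under comparison, DAGSVM among them.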
international conference of the ieee engineering in medicine and biology society | 2014
Yulia Gizatdinova; Jyrki Rasku; Markus Haponen; Henry Joutsijoki; Ivan Baldin; Michelangelo Paci; Jari Hyttinen; Katriina Aalto-Setälä; Martti Juhola
Induced pluripotent stem cells (iPSC) can be derived from fully differentiated cells of adult individuals and used to obtain any other cell type of the human body. This implies numerous prospective applications of iPSCs in regenerative medicine and drug development. In order to obtain a valid cell culture, a quality control process must be applied to identify and discard abnormal iPSC colonies. Computer vision systems that analyze the visual characteristics of iPSC colony health can be especially useful in automating and improving this quality control process. In this paper, we present ongoing research aimed at developing local spatially-enhanced descriptors for the classification of iPSC colony images. For this, local oriented edges and local binary patterns are extracted from the detected colony regions to represent the structural and textural properties of the colonies, respectively. We performed preliminary tests of the proposed descriptors in classifying iPSC colonies according to the degree of colony abnormality. The tests showed promising results for both detection of iPSC colony borders and colony classification.