Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hana Vrzakova is active.

Publication


Featured research published by Hana Vrzakova.


Eye Tracking Research & Applications | 2012

What do you want to do next: a novel approach for intent prediction in gaze-based interaction

Roman Bednarik; Hana Vrzakova; Michal Hradis

Interaction intent prediction and the Midas touch have been longstanding challenges for eye-tracking researchers and users of gaze-based interaction. Inspired by machine-learning approaches in biometric person authentication, we developed and tested an offline framework for task-independent prediction of interaction intents. We describe the principles of the method, the features extracted, the normalization methods, and the evaluation metrics. We systematically evaluated the proposed approach on an example dataset of gaze-augmented problem-solving sessions, and we present results for three normalization methods, different feature sets, and the fusion of multiple feature types. Our results show that an accuracy of up to 76% can be achieved, with an Area Under the Curve (AUC) of around 80%. We discuss the possibility of applying these results to an online system capable of interaction intent prediction.
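
As a hedged illustration of such an offline pipeline, the sketch below z-scores a feature matrix, trains an SVM, and reports accuracy and AUC. The feature matrix, labels, and the choice of a default SVM are placeholders, not the paper's actual feature set, normalization variants, or fusion scheme.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, roc_auc_score

# Hypothetical per-window gaze features (e.g., fixation and saccade
# statistics) and binary intent labels (1 = window preceded an
# interaction command). Real data would come from the eye-tracker.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = rng.integers(0, 2, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# One possible normalization: per-feature z-scoring, fitted on the
# training split only to avoid leakage into the test split.
scaler = StandardScaler().fit(X_train)
clf = SVC(probability=True).fit(scaler.transform(X_train), y_train)

proba = clf.predict_proba(scaler.transform(X_test))[:, 1]
print("accuracy:", accuracy_score(y_test, proba > 0.5))
print("AUC:", roc_auc_score(y_test, proba))
```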


Archive | 2013

A Computational Approach for Prediction of Problem-Solving Behavior Using Support Vector Machines and Eye-Tracking Data

Roman Bednarik; Shahram Eivazi; Hana Vrzakova

Inference about high-level cognitive states during interaction is a fundamental task in building proactive intelligent systems that would allow effective offloading of mental operations to a computational architecture. We introduce an improved machine-learning pipeline that predicts user interactive behavior and performance from real-time eye tracking. The inference is carried out by a support vector machine (SVM) over a large set of features computed from eye-movement data, linked to concurrent high-level behavioral codes based on think-aloud protocols. Differences between cognitive states can be inferred from overt visual attention patterns with above-chance accuracy, although the overall accuracy is still low. The system can also classify and predict the performance of problem-solving users with up to 79% accuracy. We suggest this prediction model as a universal approach to understanding gaze in complex strategic behavior. The findings confirm that eye-movement data carry important information about problem-solving processes and that proactive systems can benefit from real-time monitoring of visual attention.
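
The coupling of gaze features to concurrent think-aloud codes can be pictured with a small alignment step like the sketch below; all timestamps, code names, and the two-element feature vectors are invented for illustration and do not reproduce the paper's coding scheme.

```python
# Sketch: label each fixation window with the concurrent think-aloud
# code, producing (features, label) rows for a downstream SVM.
fixation_windows = [  # (start_ms, end_ms, feature_vector) -- hypothetical
    (0, 800, [210.0, 3]),
    (800, 1500, [140.0, 5]),
    (1500, 2600, [390.0, 2]),
]
behavioral_codes = [  # (start_ms, end_ms, code) from think-aloud coding
    (0, 1200, "planning"),
    (1200, 2600, "execution"),
]

def code_at(t_ms):
    """Return the behavioral code whose interval covers time t_ms."""
    for start, end, code in behavioral_codes:
        if start <= t_ms < end:
            return code
    return None

dataset = []
for start, end, features in fixation_windows:
    label = code_at((start + end) / 2)   # label by window midpoint
    if label is not None:
        dataset.append((features, label))

print(dataset)
# [([210.0, 3], 'planning'), ([140.0, 5], 'planning'), ([390.0, 2], 'execution')]
```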


Human Factors in Computing Systems | 2013

That's not norma(n/l): a detailed analysis of midas touch in gaze-based problem-solving

Hana Vrzakova; Roman Bednarik

Interaction error prevention needs to start from a good understanding of the context of an error. One of the central issues in gaze-interaction research is the suppression of the so-called Midas touch: the interface's incorrect evaluation of user gaze as a purposeful interaction command. We conduct a detailed analysis of numerous instances of these events during interactive problem solving. By developing and applying an annotation scheme, we present a taxonomy of the errors and of the remedial strategies users employ. We describe the nuances, richness, and development of user behavior when dealing with the outcomes of the error, and uncover two major coping strategies. This knowledge will be used to design automatic error-prevention mechanisms for gaze-based interaction.


Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction | 2012

Hard lessons learned: mobile eye-tracking in cockpits

Hana Vrzakova; Roman Bednarik

Eye tracking is an attractive tool for testing design alternatives at all stages of interface evaluation. Access to an operator's visual attention behaviors provides information that supports design decisions. While mobile eye tracking increases ecological validity, it also brings numerous constraints. In this work, we discuss mobile eye-tracking issues in the complex environment of a business-jet flight simulator in an industrial research setting. The cockpit and its low illumination directly limited the eye-tracker setup and the quality of recordings and evaluations. We present lessons learned and best practices for setting up an eye-tracker under such challenging simulation conditions.


Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications | 2016

Speakers' head and gaze dynamics weakly correlate in group conversation

Hana Vrzakova; Roman Bednarik; Yukiko I. Nakano; Fumio Nihei

When modeling the natural conversational behavior of an agent, head direction is an intuitive proxy for visual attention. We examine this assumption and carefully investigate the relationship between head direction and gaze dynamics through the use of eye-movement tracking. In a group conversation setting, we analyze the relationship between these two nonverbal social signals, head direction and gaze dynamics, linked to influential and non-influential statements. We develop a clustering method to estimate the number of gaze targets and employ it to show that head and gaze dynamics are not correlated, and thus that head direction cannot be used as a direct proxy for a person's gaze in the context of conversations. We also describe in detail how influential statements affect head and gaze behaviors. The findings have implications for the methodology, modeling, and design of natural conversational agents, and provide supporting evidence for incorporating gaze tracking into future conversational technologies.
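
The paper develops its own clustering method for estimating the number of gaze targets; as a hedged stand-in, the sketch below picks a cluster count for 2-D gaze points by silhouette score over k-means, a standard technique rather than the authors' method. The target positions and noise level are invented.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Hypothetical 2-D gaze points scattered around three unknown targets.
rng = np.random.default_rng(1)
targets = np.array([[0.2, 0.3], [0.7, 0.6], [0.5, 0.9]])
points = np.vstack([
    t + rng.normal(scale=0.03, size=(60, 2)) for t in targets
])

# Choose k by silhouette score -- a generic substitute for the
# paper's own estimator of the number of gaze targets.
scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(points)
    scores[k] = silhouette_score(points, labels)

best_k = max(scores, key=scores.get)
print("estimated number of gaze targets:", best_k)  # expected: 3
```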


International Conference on User Modeling, Adaptation, and Personalization | 2015

Quiet Eye Affects Action Detection from Gaze More Than Context Length

Hana Vrzakova; Roman Bednarik

Every purposive interactive action begins with an intention to interact. In the domain of intelligent adaptive systems, behavioral signals linked to such actions are of great importance; even though humans are good at such predictions, interactive systems still fall behind. We explored mouse interaction and related eye-movement data from interactive problem-solving situations and isolated sequences with a high probability of interactive action. To establish whether one can predict an interactive action from gaze, we 1) analyzed gaze data using sliding fixation sequences of increasing length and 2) considered sequences several fixations prior to the action, either containing the last fixation before the action (i.e., the quiet eye fixation) or not. Each fixation sequence was characterized by 54 gaze features and evaluated with an SVM-RBF classifier. The results of this systematic evaluation revealed the importance of the quiet eye fixation and statistical differences between the quiet eye fixation and the other fixations prior to the action.
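
A minimal sketch of the windowing scheme described here: sliding fixation sequences of increasing length, ending either at the quiet eye fixation or one fixation earlier. The per-fixation values and the three-number feature function are placeholders for the paper's 54 gaze features.

```python
# fixations: per-fixation feature dicts preceding one mouse action;
# the last element is the quiet eye fixation. Values are hypothetical.
fixations = [
    {"dur_ms": 180}, {"dur_ms": 220}, {"dur_ms": 150},
    {"dur_ms": 310}, {"dur_ms": 420},   # <- quiet eye fixation
]

def sequence_features(seq):
    """Placeholder for the paper's 54 gaze features per sequence."""
    durs = [f["dur_ms"] for f in seq]
    return [sum(durs) / len(durs), max(durs), len(durs)]

def sliding_sequences(fixations, include_quiet_eye, max_len=4):
    """Yield fixation sequences of increasing length, ending either at
    the quiet eye fixation or one fixation before it."""
    end = len(fixations) if include_quiet_eye else len(fixations) - 1
    for length in range(1, min(max_len, end) + 1):
        yield fixations[end - length:end]

for seq in sliding_sequences(fixations, include_quiet_eye=True):
    print(sequence_features(seq))
```

Each such feature vector would then be scored by an RBF-kernel SVM (e.g., `sklearn.svm.SVC(kernel="rbf")`), as the abstract describes.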


Nordic Conference on Human-Computer Interaction | 2014

Influential statements and gaze for persuasion modeling

Hana Vrzakova; Roman Bednarik; Yukiko I. Nakano; Fumio Nihei

Influential statements during conversations change the flow of the discussion and open new directions in the conversation. The content of a statement alone does not make it influential; it is strengthened by behavioral patterns such as voice pitch, facial gestures, gaze, and body posture. In this work we focus on the relationship between influential statements and gaze, as a potential cue for the automatic detection of conversational skills and for replicating natural interaction behavior in companionship and persuasive technologies. Within a multimodal data corpus of group conversations, we present an approach to analyzing these rich social signals and explore potential correlations between influential statements and gaze. The statements in the conversations were semi-automatically annotated and scored according to their level of influence, which provided the boundaries for the gaze analysis. We present the first results of this approach.


Eye Tracking Research & Applications | 2014

Heatmap rendering from large-scale distributed datasets using cloud computing

Thanh-Chung Dao; Roman Bednarik; Hana Vrzakova

The heatmap is one of the most popular visualizations of gaze behavior; however, increasingly voluminous streams of eye-tracking data make rendering such visualizations computationally demanding. Because of the high requirements placed on a single processing machine, real-time visualizations from multiple users are infeasible if rendered locally. We designed a framework that collects data from multiple eye-trackers regardless of their physical location, analyzes these streams, and renders heatmaps in real time. We propose a cloud computing architecture (EyeCloud) consisting of master and slave nodes on a cloud cluster, plus a web interface, for fast computation and effective aggregation of large volumes of eye-tracking data. In experimental studies of feasibility and effectiveness, we built a cloud cluster on a well-known service, implemented the architecture, and compared the proposed system with traditional local processing. The results showed the efficiency of EyeCloud across recordings of varying durations. To our knowledge, this is the first solution to apply cloud computing to gaze visualization.
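
The distributed EyeCloud architecture is not detailed here, but the core rendering step, accumulating gaze samples into a Gaussian-blurred intensity map, can be sketched locally. Resolution, sigma, and the sample points below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def render_heatmap(gaze_points, width=640, height=480, sigma=25.0):
    """Accumulate (x, y) gaze samples into a grid and blur it.

    This is the per-frame rendering step only; in a distributed
    setup each node could compute such a partial map and a master
    node would sum the partial maps before display.
    """
    grid = np.zeros((height, width), dtype=np.float64)
    for x, y in gaze_points:
        if 0 <= x < width and 0 <= y < height:
            grid[int(y), int(x)] += 1.0
    heat = gaussian_filter(grid, sigma=sigma)
    peak = heat.max()
    return heat / peak if peak > 0 else heat   # normalize to [0, 1]

# Hypothetical gaze samples from two viewers attending two regions.
rng = np.random.default_rng(2)
samples = np.concatenate([
    rng.normal([200, 150], 15, size=(300, 2)),
    rng.normal([450, 300], 20, size=(300, 2)),
])
print(render_heatmap(samples).shape)  # (480, 640)
```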


Intelligent User Interfaces | 2013

Computational approaches to visual attention for interaction inference

Hana Vrzakova

Many aspects of interaction are hard to observe and measure directly. My research focuses on particular aspects of UX, such as cognitive workload, problem solving, and engagement, and establishes computational links between them and visual attention. Using machine learning and pattern recognition techniques, I aim to achieve automatic inferences for HCI and to employ them as enhancements in gaze-aware interfaces.


Human Factors in Computing Systems | 2018

AdaM: Adapting Multi-User Interfaces for Collaborative Environments in Real-Time

Seonwook Park; Christoph Gebhardt; Roman Rädle; Anna Maria Feit; Hana Vrzakova; Niraj Ramesh Dayama; Hui Shyong Yeo; Clemens Nylandsted Klokmose; Aaron J. Quigley; Antti Oulasvirta; Otmar Hilliges

Developing cross-device multi-user interfaces (UIs) is a challenging problem. There are numerous ways in which content and interactivity can be distributed. However, good solutions must consider multiple users, their roles, their preferences and access rights, as well as device capabilities. Manual and rule-based solutions are tedious to create and do not scale to larger problems nor do they adapt to dynamic changes, such as users leaving or joining an activity. In this paper, we cast the problem of UI distribution as an assignment problem and propose to solve it using combinatorial optimization. We present a mixed integer programming formulation which allows real-time applications in dynamically changing collaborative settings. It optimizes the allocation of UI elements based on device capabilities, user roles, preferences, and access rights. We present a proof-of-concept designer-in-the-loop tool, allowing for quick solution exploration. Finally, we compare our approach to traditional paper prototyping in a lab study.
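
As a hedged illustration of casting UI distribution as an assignment problem, the toy mixed integer program below allocates UI elements to devices under capacity and access-right constraints, using the PuLP modeling library. The elements, devices, utilities, and constraints are all invented; the paper's actual AdaM formulation additionally models user roles, preferences, and dynamically changing collaborative settings.

```python
# Toy MIP: assign each UI element to exactly one device, maximizing a
# made-up utility, subject to capacity and access rights. Requires PuLP.
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary

elements = ["map", "chat", "slider"]
devices = ["phone", "tablet", "wall_display"]

utility = {  # hypothetical fit of element e on device d
    ("map", "phone"): 1, ("map", "tablet"): 3, ("map", "wall_display"): 5,
    ("chat", "phone"): 4, ("chat", "tablet"): 2, ("chat", "wall_display"): 1,
    ("slider", "phone"): 2, ("slider", "tablet"): 4, ("slider", "wall_display"): 1,
}
capacity = {"phone": 1, "tablet": 2, "wall_display": 2}
forbidden = [("chat", "wall_display")]  # e.g., private chat not on shared wall

prob = LpProblem("ui_distribution", LpMaximize)
x = {(e, d): LpVariable(f"x_{e}_{d}", cat=LpBinary)
     for e in elements for d in devices}

prob += lpSum(utility[e, d] * x[e, d] for e in elements for d in devices)
for e in elements:                       # each element placed exactly once
    prob += lpSum(x[e, d] for d in devices) == 1
for d in devices:                        # respect device capacity
    prob += lpSum(x[e, d] for e in elements) <= capacity[d]
for e, d in forbidden:                   # access rights
    prob += x[e, d] == 0

prob.solve()
for (e, d), var in x.items():
    if var.value() == 1:
        print(e, "->", d)
```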

Collaboration


Dive into Hana Vrzakova's collaborations.

Top Co-Authors

Roman Bednarik (University of Eastern Finland)
Piotr Bartczak (University of Eastern Finland)
Jani Koskinen (University of Eastern Finland)