Shahram Eivazi
University of Eastern Finland
Publications
Featured research published by Shahram Eivazi.
Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction | 2012
Roman Bednarik; Shahram Eivazi; Michal Hradis
When using a multiparty video-mediated system, interacting participants assume a range of roles and exhibit behaviors according to how engaged in the communication they are. In this paper we focus on estimating conversational engagement from the gaze signal. In particular, we present an annotation scheme for conversational engagement, a statistical analysis of gaze behavior across varying levels of engagement, and a classification of vectors of computed eye-tracking measures. The results show that in 74% of cases the level of engagement can be correctly classified as either high or low. In addition, we describe the nuances of gaze during distinct levels of engagement.
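A minimal sketch of the classification step described above, assuming per-window feature vectors and scikit-learn; the feature names and data here are placeholders, not the study's actual measures:

```python
# Illustrative sketch (not the authors' code): classifying conversational
# engagement as high/low from per-window eye-tracking feature vectors.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical features per annotated window: mean fixation duration,
# fixation count, mean saccade amplitude, pupil diameter, gaze dispersion.
X = np.random.rand(200, 5)            # placeholder feature matrix
y = np.random.randint(0, 2, 200)      # 0 = low engagement, 1 = high

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")  # ~chance on random data; the paper reports 74%
```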
Nordic Conference on Human-Computer Interaction | 2014
Hoorieh Afkari; Shahram Eivazi; Roman Bednarik; Susanne Mäkelä
Micro-neurosurgery has been revolutionized by advances in the surgical microscope, such as high magnification, that have increased a surgeon's ability to get a clear view of the surgical field. High magnification necessitates frequent interaction with the microscope during an operation, and the current interaction technique for positioning and adjusting the microscope introduces risk factors that force a surgeon to remove their hands from the operating field. The purpose of this study is to investigate the potential for hands-free interaction in micro-neurosurgery. We present findings from a contextual study of how neurosurgeons interact with the microscope and the surgical team, and discuss the implications of the findings for designing hands-free, especially gaze-based, interaction techniques for micro-neurosurgery.
Archive | 2013
Roman Bednarik; Shahram Eivazi; Hana Vrzakova
Inference about high-level cognitive states during interaction is a fundamental task in building proactive intelligent systems that would allow effective offloading of mental operations to a computational architecture. We introduce an improved machine-learning pipeline able to predict user interactive behavior and performance using real-time eye tracking. The inference is carried out using a support vector machine (SVM) on a large set of features computed from eye movement data that are linked to concurrent high-level behavioral codes based on think-aloud protocols. The differences between cognitive states can be inferred from overt visual attention patterns with above-chance accuracy, although the overall accuracy is still low. The system can also classify and predict the performance of problem-solving users with up to 79% accuracy. We suggest this prediction model as a universal approach for understanding gaze in complex strategic behavior. The findings confirm that eye movement data carry important information about problem-solving processes and that proactive systems can benefit from real-time monitoring of visual attention.
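As an illustration of the kind of feature computation such a pipeline requires (an assumption for illustration, not the authors' implementation), a segment of fixation events can be reduced to a feature vector like this:

```python
# Sketch: turning a stream of fixation events into one feature vector
# per coded think-aloud segment, ready to feed into an SVM.
import numpy as np

def segment_features(fixations):
    """fixations: list of (x, y, duration_ms) within one coded segment."""
    xs = np.array([f[0] for f in fixations], dtype=float)
    ys = np.array([f[1] for f in fixations], dtype=float)
    durs = np.array([f[2] for f in fixations], dtype=float)
    return np.array([
        durs.mean(),                       # mean fixation duration
        durs.std(),                        # fixation duration variability
        len(fixations),                    # fixation count
        np.hypot(np.diff(xs), np.diff(ys)).mean() if len(xs) > 1 else 0.0,
                                           # mean saccade amplitude (px)
    ])

# Each segment's vector is paired with its behavioral code for training.
print(segment_features([(10, 20, 180), (40, 60, 220), (42, 61, 300)]))
```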
IEEE Sensors Journal | 2016
Shahram Eivazi; Roman Bednarik; Ville Leinonen; Mikael von und zu Fraunberg; Juha E. Jääskeläinen
Eye tracking has long been known as a tool for attention tracking; however, the understanding of gaze in critical domains such as surgery is still in its infancy. In image-guided surgery, studying the role that visual attention plays in eye-hand coordination, situation awareness, and instrumentation control is critical in order to understand the nature of expertise and explore the possibilities for gaze-based interaction. To date, eye-tracking technology has not been embedded into an operating room microscope, and thus limited knowledge is available about the role of attention in real-life image-guided surgery. To advance the state of the art, we adopted an optical solution for eye tracking and embedded a binocular eye tracker into a surgical microscope. We present the design principles and development evaluation cycles, and highlight the technical challenges encountered when embedding an eye tracker into a surgical microscope. The developed solution can be applied to other types of microscopes and ocular-based optical devices, for example in ophthalmology, otolaryngology, plastic and reconstructive surgery, and astronomy.
Eye Tracking Research & Applications | 2012
Michal Hradis; Shahram Eivazi; Roman Bednarik
This paper discusses estimation of the active speaker in multi-party video-mediated communication from the gaze data of one of the participants. In the explored settings, we predict the voice activity of participants in one room based on gaze recordings of a single participant in another room. The two rooms were connected by high-definition, low-delay audio and video links, and the participants engaged in different activities ranging from casual discussion to simple problem-solving games. We treat the task as a classification problem and evaluate several types of features and parameter settings within a support vector machine (SVM) classification framework. The results show that with the proposed approach the vocal activity of a speaker can be correctly predicted 89% of the time for which gaze data are available.
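A hedged sketch of how the observer's gaze stream might be sliced into fixed windows for per-window voice-activity classification; the window sizes and features are assumptions, not the published setup:

```python
# Sliding-window segmentation of a gaze stream so each window can be
# labeled with the remote speaker's voice activity and classified.
import numpy as np

def windows(gaze, t, width_s=2.0, step_s=0.5):
    """gaze: (N, 2) gaze points; t: (N,) timestamps in seconds.
    Yields (start_time, points_in_window) for overlapping windows."""
    start, end = t[0], t[-1]
    while start + width_s <= end:
        mask = (t >= start) & (t < start + width_s)
        yield start, gaze[mask]
        start += step_s

t = np.linspace(0, 10, 300)            # placeholder 10 s recording
gaze = np.random.rand(300, 2)
for start, pts in windows(gaze, t):
    # one illustrative feature per window; dwell on a speaker's video
    # tile, saccade counts, etc. would be computed here as well
    dispersion = pts.std(axis=0).sum() if len(pts) else 0.0
```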
Eye Tracking Research & Applications | 2014
Andrea Mazzei; Shahram Eivazi; Youri Marko; Frédéric Kaplan; Pierre Dillenbourg
Studying natural reading and its underlying attention processes requires devices that are able to provide precise measurements of gaze without rendering the reading activity unnatural. In this paper we propose an eye-tracking system that can be used to analyze reading behavior in low-constrained experimental settings. The system is designed for dual-camera head-mounted eye trackers and allows free head movements and note taking. It is composed of three modules. First, a 3D model-based gaze estimation method computes the reader's gaze trajectory. Second, a document image retrieval algorithm recognizes document pages and extracts annotations. Third, a systematic error correction procedure post-calibrates the system parameters and compensates for spatial drifts. The validation results show that the proposed method is capable of extracting reliable gaze data when reading in low-constrained experimental conditions.
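A minimal sketch, in the spirit of the third module, of a post-hoc drift correction: fitting an affine map from recorded gaze points to known anchor points by least squares. The exact published procedure may differ; this is illustrative only:

```python
# Least-squares affine post-calibration: map raw gaze points to anchor
# points (e.g., fixations that must lie on known text lines).
import numpy as np

def fit_affine(raw, anchors):
    """raw, anchors: (N, 2) corresponding points. Returns a 2x3 matrix A
    minimizing ||A @ [x, y, 1]^T - anchor||^2 over all pairs."""
    X = np.hstack([raw, np.ones((len(raw), 1))])      # (N, 3)
    A, *_ = np.linalg.lstsq(X, anchors, rcond=None)   # (3, 2)
    return A.T                                        # (2, 3)

def correct(raw, A):
    X = np.hstack([raw, np.ones((len(raw), 1))])
    return X @ A.T

raw = np.random.rand(50, 2) * 100
anchors = raw + np.array([3.0, -2.0])                 # simulated drift
A = fit_affine(raw, anchors)
print(np.abs(correct(raw, A) - anchors).max())        # ~0 after correction
```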
International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2016
Hoorieh Afkari; Roman Bednarik; Susanne Mäkelä; Shahram Eivazi
Interactions in micro-neurosurgical operating rooms form a complex orchestration of labor and information flows. At the center is the focus on patient safety and outcome quality in the shortest possible time, while a neurosurgeon is fully focused on the task using a surgical microscope. To guarantee a successful outcome, maintaining a high level of situation awareness (SA) is essential. Suspension of action due to instrument exchange, interaction with a device, or communication affects information flows and collaboration, and situation awareness underlies these interactions. To further understand the mechanisms of SA, we used observations and interviews to gain insight into interactions in micro-neurosurgical theaters. We describe behaviors and strategies exhibited to maintain the interaction flow, in particular between the scrub nurse and the surgeon. Results show how interactions based on action prediction and active observation within the well-organized environment are influenced, both positively and negatively, by the reliance of the work on the microscope. From this understanding, we discuss the opportunities for future technologies and interfaces to support situation awareness maintenance in operating rooms.
Intelligent User Interfaces | 2017
Shahram Eivazi; Enkelejda Kasneci
After many decades of research, the presence of intelligent user interfaces is unquestionable in any modern operating room (OR). For the first time, we aim to bring proactive intelligent systems into microsurgery OR. The first step towards an intelligent surgical microscope is to design an activity-aware microscope. In this paper, we present a novel system that we have built to record both eyes and instruments movements of surgeons while operating with a surgical microscope. We present a case study in micro-neurosurgery to show how the system monitors the surgeons activities. We achieved about 1 mm accuracy for gaze and instrument tracking. Now real-time ecologically valid data can be used to design, for example, a self-adjustable microscope.
Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications | 2018
Shahram Eivazi; Benedikt Hosp; Anna Eivazi; Wolfgang Rosenstiel; Enkelejda Kasneci
Undoubtedly, eye movements contain an immense amount of information, especially fast eye movements, namely the timing of fixation, saccade, and micro-saccade events. While modern cameras support recording at a few thousand frames per second, to date the majority of studies use eye trackers with frame rates of about 120 Hz for head-mounted and 250 Hz for remote trackers. In this study, we aim to enable pupil-tracking algorithms to perform in real time with high-speed cameras for remote eye-tracking applications. We propose an iterative pupil center detection algorithm formulated as an optimization problem. We evaluated our algorithm on more than 13,000 eye images, on which it outperforms earlier solutions with regard to both runtime and detection accuracy. Moreover, our system is capable of boosting its runtime in an unsupervised manner, removing the need for manual annotation of pupil images.
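A toy sketch of iterative pupil-center refinement as an optimization loop; this is an assumed illustration of the general idea, not the paper's algorithm. Starting from a rough estimate, the center is repeatedly moved to the intensity-weighted centroid of the darkest pixels in a local window until it converges:

```python
# Iterative dark-centroid refinement of a pupil-center estimate.
import numpy as np

def refine_center(img, cx, cy, radius=30, iters=20, tol=0.5):
    h, w = img.shape
    for _ in range(iters):
        x0, x1 = max(0, int(cx - radius)), min(w, int(cx + radius))
        y0, y1 = max(0, int(cy - radius)), min(h, int(cy + radius))
        patch = img[y0:y1, x0:x1].astype(float)
        weights = patch.max() - patch            # dark pixels weigh most
        s = weights.sum()
        if s == 0:                               # uniform patch: give up
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        nx, ny = (xs * weights).sum() / s, (ys * weights).sum() / s
        if np.hypot(nx - cx, ny - cy) < tol:     # converged
            return nx, ny
        cx, cy = nx, ny
    return cx, cy

img = np.full((100, 100), 255.0)
img[40:60, 50:70] = 10                           # synthetic dark "pupil"
print(refine_center(img, 30, 30))                # converges near (59.5, 49.5)
```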
Intelligent User Interfaces | 2017
Shahram Eivazi; Michael Slupina; Hoorieh Afkari; Ahmad Hafez; Enkelejda Kasneci
In the past decade, eye tracking has emerged as a promising answer to the increasing need to understand surgical expertise. The implicit desire is to design an intelligent user interface (IUI) to monitor and assess the competency of surgical trainees. In this paper, for the first time in microsurgery, we explore the potential for automatic surgical skill assessment through a combination of machine learning techniques, computational modeling, and eye tracking. We present primary findings from a random forest classification method, with which we achieved about a 70% recognition rate for distinguishing the expert and novice groups. This leads us to the conclusion that prediction of micro-surgeon performance is possible, can be automated, and that eye movement data carry important information about the skills of micro-surgeons.
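An illustrative random-forest classifier of the kind named in the abstract; the features, data, and hyperparameters here are placeholders, not the study's:

```python
# Random-forest expert/novice classification on per-trial eye-movement
# feature vectors (placeholder data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(60, 8)           # e.g., fixation/saccade statistics per trial
y = np.random.randint(0, 2, 60)     # 0 = novice, 1 = expert

rf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(rf, X, y, cv=5).mean())  # ~chance on random data; the study reports ~70%
```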