Publication


Featured research published by Enkelejda Kasneci.


Computer Analysis of Images and Patterns | 2015

ExCuSe: Robust Pupil Detection in Real-World Scenarios

Thomas C. Kübler; Katrin Sippel; Wolfgang Rosenstiel; Enkelejda Kasneci

The reliable estimation of the pupil position is one of the most important prerequisites in gaze-based HMI applications. Despite the rich landscape of image-based methods for pupil extraction, tracking the pupil in real-world images is highly challenging due to variations in the environment (e.g., changing illumination conditions, reflections), in eye physiology, or due to further sources of noise (e.g., contact lenses or mascara). We present a novel algorithm for robust pupil detection in real-world scenarios, which is based on edge filtering and oriented histograms calculated via the Angular Integral Projection Function. The evaluation on over 38,000 new, hand-labeled eye images from real-world tasks and 600 images from related work showed an outstanding robustness of our algorithm in comparison to the state of the art. Download link (algorithm and data): https://www.ti.uni-tuebingen.de/Pupil-detection.1827.0.html?&L=1.
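The projection step can be sketched in a few lines. This is not the ExCuSe implementation; the simplified helper below (a hypothetical `angular_integral_projection`, axis-rotation variant) only illustrates how integrating intensities along an oriented axis turns a dark pupil into a detectable dip in a 1-D profile.

```python
import numpy as np

def angular_integral_projection(img, theta_deg, n_bins=None):
    """Project image intensities onto an axis oriented at theta_deg.

    Simplified take on the Angular Integral Projection Function (AIPF):
    each pixel is assigned to the projection bin given by its rotated
    coordinate, and intensities falling into the same bin are summed.
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    theta = np.deg2rad(theta_deg)
    # signed coordinate of each pixel along the projection axis
    proj = xs * np.cos(theta) + ys * np.sin(theta)
    n_bins = n_bins or max(h, w)
    bins = np.linspace(proj.min(), proj.max() + 1e-9, n_bins + 1)
    idx = np.digitize(proj.ravel(), bins) - 1
    return np.bincount(idx, weights=img.ravel(), minlength=n_bins)

# A dark pupil on a bright background produces a dip in the projection;
# the dip's position hints at the pupil coordinate along that axis.
img = np.full((50, 50), 200.0)
img[20:30, 20:30] = 10.0             # synthetic "pupil"
profile = angular_integral_projection(img, 0)
print(int(np.argmin(profile)))       # bin with the strongest intensity dip
```

ExCuSe combines such oriented histograms with edge filtering and several fallback analysis steps; this sketch covers only the projection idea.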


arXiv: Computer Vision and Pattern Recognition | 2016

ElSe: ellipse selection for robust pupil detection in real-world environments

Thiago Santini; Thomas C. Kübler; Enkelejda Kasneci

Fast and robust pupil detection is an essential prerequisite for video-based eye-tracking in real-world settings. Several algorithms for image-based pupil detection have been proposed in the past; their applicability, however, is mostly limited to laboratory conditions. In real-world scenarios, automated pupil detection has to face various challenges, such as illumination changes, reflections (on glasses), make-up, non-centered eye recording, and physiological eye characteristics. We propose ElSe, a novel algorithm based on ellipse evaluation of a filtered edge image. We aim at a robust, inexpensive approach that can be integrated into embedded architectures, e.g., for driving. The proposed algorithm was evaluated against four state-of-the-art methods on over 93,000 hand-labeled images, of which 55,000 are new eye images contributed by this work. On average, the proposed method achieved a 14.53% improvement in detection rate relative to the best state-of-the-art performer. Algorithm and data sets are available for download: ftp://[email protected] (password: eyedata).
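A minimal sketch of the ellipse-evaluation idea, under the assumption that candidate ellipses are rated by how many edge pixels support their contour. The function name and the axis-aligned simplification are illustrative; ElSe additionally fits ellipses to connected edge segments and validates them against image intensities.

```python
import numpy as np

def ellipse_support(edge_points, cx, cy, a, b, tol=1.0):
    """Score a candidate ellipse by the fraction of edge points lying
    close to its contour (axis-aligned ellipse for simplicity)."""
    x, y = edge_points[:, 0] - cx, edge_points[:, 1] - cy
    # algebraic distance: 1 on the contour, <1 inside, >1 outside
    d = np.sqrt((x / a) ** 2 + (y / b) ** 2)
    return np.mean(np.abs(d - 1.0) * min(a, b) < tol)

# synthetic edge points on a circle of radius 10 around (30, 30), plus noise
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.c_[30 + 10 * np.cos(t), 30 + 10 * np.sin(t)]
noise = np.random.default_rng(0).uniform(0, 60, size=(50, 2))
pts = np.vstack([circle, noise])

good = ellipse_support(pts, 30, 30, 10, 10)   # true circle
bad = ellipse_support(pts, 10, 10, 5, 8)      # wrong candidate
print(good > bad)  # the true circle collects far more support
```

The best-scoring candidate would then be accepted as the pupil contour, subject to further plausibility checks in the full algorithm.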


International Conference on Intelligent Transportation Systems | 2015

Driver-Activity Recognition in the Context of Conditionally Autonomous Driving

Christian Braunagel; Enkelejda Kasneci; Wolfgang Stolzmann; Wolfgang Rosenstiel

This paper presents a novel approach to automated recognition of the driver's activity, which is a crucial factor for determining the take-over readiness in conditionally autonomous driving scenarios. To this end, an architecture based on head- and eye-tracking data is introduced in this study and several features are analyzed. The proposed approach is evaluated on data recorded during a driving simulator study with 73 subjects performing different secondary tasks while driving in an autonomous setting. The proposed architecture shows promising results towards in-vehicle driver-activity recognition. Furthermore, a significant improvement in classification performance is demonstrated due to the consideration of novel features derived especially for the autonomous driving context.
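As a toy illustration of feature-based activity recognition, the rule below maps two hypothetical features (fraction of gaze samples on the road, standard deviation of head yaw) to an activity label. The features, thresholds, and labels are assumptions for illustration only; the paper trains a classifier on its own feature set.

```python
def classify_activity(gaze_on_road_ratio, head_yaw_std_deg):
    """Toy driver-activity decision: a low road-gaze ratio or large head
    movement suggests engagement in a secondary task. Thresholds are
    illustrative, not from the study."""
    if gaze_on_road_ratio < 0.4 or head_yaw_std_deg > 20.0:
        return "secondary task"
    return "attentive driving"

print(classify_activity(0.85, 5.0))   # mostly watching the road
print(classify_activity(0.15, 30.0))  # e.g. reading on a phone
```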


PLOS ONE | 2014

Binocular glaucomatous visual field loss and its impact on visual exploration – a supermarket study.

Katrin Sippel; Enkelejda Kasneci; Kathrin Aehling; Martin Heister; Wolfgang Rosenstiel; Ulrich Schiefer; Elena Papageorgiou

Advanced glaucomatous visual field loss may critically interfere with quality of life. The purpose of this study was (i) to assess the impact of binocular glaucomatous visual field loss on a supermarket search task as an example of everyday living activities, (ii) to identify factors influencing the performance, and (iii) to investigate the related compensatory mechanisms. Ten patients with binocular glaucoma (GP) and ten healthy-sighted control subjects (GC) were asked to collect twenty different products chosen randomly in two supermarket racks as quickly as possible. The task performance was rated as “passed” or “failed” with regard to the time per correctly collected item. Based on the performance of control subjects, the threshold value for failing the task was defined as μ+3σ (in seconds per correctly collected item). Eye movements were recorded by means of a mobile eye tracker. Eight out of ten patients with glaucoma and all control subjects passed the task. Patients who failed the task needed significantly longer (111.47 s ±12.12 s) to complete it than patients who passed (64.45 s ±13.36 s, t-test, p<0.001). Furthermore, patients who passed the task showed a significantly higher number of glances towards the visual field defect (VFD) area than patients who failed (t-test, p<0.05). According to these results, glaucoma patients with defects in the binocular visual field show, on average, longer search times in a naturalistic supermarket task. However, a considerable number of patients, who compensated by frequently glancing towards the VFD area, showed successful task performance. Therefore, systematic exploration of the VFD area seems to be a “time-effective” compensatory mechanism during the present supermarket task.
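The pass/fail criterion is simple to reproduce. The helper below computes the μ+3σ cut-off from control-group times; the numbers are made up for illustration and are not the study's data.

```python
import statistics

def fail_threshold(control_times):
    """Pass/fail cut-off as defined in the study: mu + 3*sigma of the
    control group's time per correctly collected item (seconds)."""
    mu = statistics.mean(control_times)
    sigma = statistics.stdev(control_times)  # sample standard deviation
    return mu + 3 * sigma

# hypothetical control-group times (s per item) -- illustrative only
controls = [3.1, 2.8, 3.4, 3.0, 2.9, 3.3, 3.2, 2.7, 3.0, 3.1]
cutoff = fail_threshold(controls)
print(round(cutoff, 2))

patient_time = 5.6  # hypothetical patient: slower than the cut-off
print("failed" if patient_time > cutoff else "passed")
```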


arXiv: Computer Vision and Pattern Recognition | 2016

Bayesian identification of fixations, saccades, and smooth pursuits

Thiago Santini; Thomas C. Kübler; Enkelejda Kasneci

Smooth pursuit eye movements provide meaningful insights and information on subjects' behavior and health and may, in particular situations, disturb the performance of typical fixation/saccade classification algorithms. Thus, an automatic and efficient algorithm to identify these eye movements is paramount for eye-tracking research involving dynamic stimuli. In this paper, we propose the Bayesian Decision Theory Identification (I-BDT) algorithm, a novel algorithm for ternary classification of eye movements that is able to reliably separate fixations, saccades, and smooth pursuits in an online fashion, even for low-resolution eye trackers. The proposed algorithm is evaluated on four datasets with distinct mixtures of eye movements, including fixations, saccades, as well as straight and circular smooth pursuits; data was collected at a sample rate of 30 Hz from six subjects, totaling 24 evaluation datasets. The algorithm exhibits high and consistent performance across all datasets and movements relative to a manual annotation by a domain expert (recall: μ = 91.42%, σ = 9.52%; precision: μ = 95.60%, σ = 5.29%; specificity: μ = 95.41%, σ = 7.02%) and displays a significant improvement when compared to I-VDT, a state-of-the-art algorithm (recall: μ = 87.67%, σ = 14.73%; precision: μ = 89.57%, σ = 8.05%; specificity: μ = 92.10%, σ = 11.21%). Algorithm implementation and annotated datasets are openly available at www.ti.uni-tuebingen.de/perception.
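The decision-theoretic core can be sketched as a MAP choice over class-conditional likelihoods. The speed models and priors below are illustrative assumptions; I-BDT proper works on windowed speed and movement-ratio features and updates its priors online.

```python
import math

# Hypothetical per-class eye-speed models (deg/s): not the I-BDT
# parameters, just illustrative Gaussians for the three classes.
CLASSES = {
    "fixation": (1.0, 1.0),     # (mean, std)
    "pursuit": (15.0, 8.0),
    "saccade": (150.0, 60.0),
}
PRIORS = {"fixation": 0.6, "pursuit": 0.2, "saccade": 0.2}

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def classify(speed):
    """MAP decision: argmax_c P(c) * p(speed | c), the core of a
    Bayesian-decision-theory classifier in the spirit of I-BDT."""
    post = {c: PRIORS[c] * gaussian(speed, mu, s) for c, (mu, s) in CLASSES.items()}
    z = sum(post.values())
    return max(post, key=post.get), {c: p / z for c, p in post.items()}

for speed in (0.5, 12.0, 200.0):
    label, posterior = classify(speed)
    print(speed, label, round(posterior[label], 3))
```

Because the decision is per sample and needs only the current posterior, the same structure works in an online setting, which is exactly the regime the paper targets.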


Artificial Neural Networks | 2015

Online Recognition of Fixations, Saccades, and Smooth Pursuits for Automated Analysis of Traffic Hazard Perception

Enkelejda Kasneci; Gjergji Kasneci; Thomas C. Kübler; Wolfgang Rosenstiel

Complex and hazardous driving situations often arise from the delayed perception of traffic objects. To automatically detect whether such objects have been perceived by the driver, there is a need for techniques that can reliably recognize whether the driver's eyes have fixated on or are pursuing the hazardous object. A prerequisite for such techniques is the reliable recognition of fixations, saccades, and smooth pursuits from raw eye-tracking data. This chapter addresses the challenge of analyzing the driver's visual behavior in an adaptive and online fashion to automatically distinguish between fixation clusters, saccades, and smooth pursuits.
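For contrast, a fixed-threshold velocity baseline (in the spirit of I-VT-style algorithms) is easy to state; the adaptive, probabilistic approach discussed in the chapter is designed to replace exactly such hard-coded thresholds. The threshold values below are illustrative, not from the chapter.

```python
def classify_sample(speed_deg_s, fix_max=5.0, sac_min=60.0):
    """Two-threshold baseline over eye speed (deg/s): below fix_max is a
    fixation, above sac_min a saccade, anything in between is treated as
    smooth pursuit. Fixed thresholds like these generalize poorly across
    subjects and scenes, which motivates adaptive methods."""
    if speed_deg_s < fix_max:
        return "fixation"
    if speed_deg_s > sac_min:
        return "saccade"
    return "smooth pursuit"

speeds = [1.2, 0.8, 25.0, 30.0, 150.0, 2.0]
print([classify_sample(s) for s in speeds])
```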


Eye Tracking Research & Applications | 2014

The applicability of probabilistic methods to the online recognition of fixations and saccades in dynamic scenes

Enkelejda Kasneci; Gjergji Kasneci; Thomas C. Kübler; Wolfgang Rosenstiel

In many applications involving scanpath analysis, especially when dynamic scenes are viewed, consecutive fixations and saccades have to be identified and extracted from raw eye-tracking data in an online fashion. Since probabilistic methods can adapt not only to the individual viewing behavior, but also to changes in the scene, they are best suited for such tasks. In this paper, we analyze the applicability of two types of mainstream probabilistic models to the identification of fixations and saccades in dynamic scenes: (1) Hidden Markov Models and (2) Bayesian Online Mixture Models. We analyze and compare the classification performance of the models on eye-tracking data collected during real-world driving experiments.
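The mixture idea can be sketched offline with a two-component 1-D Gaussian mixture fitted by EM over point-to-point eye speeds. This is a batch simplification, not the paper's Bayesian Online Mixture Model or HMM; the synthetic speeds and the initialisation are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic point-to-point eye speeds (deg/s): slow fixation samples and
# fast saccade samples -- illustrative values, not experiment data
speeds = np.concatenate([rng.normal(2, 1, 300), rng.normal(120, 30, 60)])

# two-component 1-D Gaussian mixture fitted by EM; each component is
# meant to capture one movement type (fixation vs. saccade)
mu = np.array([speeds.min(), speeds.max()])   # crude initialisation
sigma = np.array([np.std(speeds)] * 2)
w = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibility of each component for each sample
    pdf = np.exp(-0.5 * ((speeds[:, None] - mu) / sigma) ** 2) \
          / (sigma * np.sqrt(2 * np.pi))
    r = w * pdf
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and standard deviations
    n = r.sum(axis=0)
    w = n / len(speeds)
    mu = (r * speeds[:, None]).sum(axis=0) / n
    sigma = np.sqrt((r * (speeds[:, None] - mu) ** 2).sum(axis=0) / n)

labels = r.argmax(axis=1)     # 0 = slow component, 1 = fast component
print(np.sort(mu).round(1))   # recovered fixation/saccade speed means
```

The Bayesian online variant in the paper updates such component parameters sample by sample instead of iterating over the whole recording, which is what makes it suitable for dynamic scenes.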


Eye Tracking Research & Applications | 2014

SubsMatch: scanpath similarity in dynamic scenes based on subsequence frequencies

Thomas C. Kübler; Enkelejda Kasneci; Wolfgang Rosenstiel

The analysis of visual scanpaths, i.e., series of fixations and saccades, in complex dynamic scenarios is highly challenging and usually performed manually. We propose SubsMatch, a scanpath comparison algorithm for dynamic, interactive scenarios based on the frequency of repeated gaze patterns. Instead of measuring the gaze duration towards a semantic target object (which would be hard to label in dynamic scenes), we examine the frequency of attention shifts and exploratory eye movements. SubsMatch was evaluated on highly dynamic data from a driving experiment to identify differences between scanpaths of subjects who failed a driving test and subjects who passed.
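The core of the approach, comparing histograms of short gaze subsequences, can be sketched as follows. The distance definition and the toy scanpaths are simplifications; SubsMatch's exact binning and normalization differ.

```python
from collections import Counter

def subsequence_histogram(scanpath, n=2):
    """Relative frequencies of all length-n subsequences of a scanpath
    string (each letter = one gaze bin)."""
    grams = [scanpath[i:i + n] for i in range(len(scanpath) - n + 1)]
    total = len(grams)
    return {g: c / total for g, c in Counter(grams).items()}

def subsmatch_distance(path_a, path_b, n=2):
    """SubsMatch-style dissimilarity: summed absolute difference of the
    two subsequence-frequency histograms."""
    ha, hb = subsequence_histogram(path_a, n), subsequence_histogram(path_b, n)
    keys = set(ha) | set(hb)
    return sum(abs(ha.get(k, 0.0) - hb.get(k, 0.0)) for k in keys)

# toy scanpaths: A<->B alternation (exploration) vs. dwelling on A
print(subsmatch_distance("ABABABAB", "ABABABAB"))      # identical -> 0.0
print(subsmatch_distance("ABABABAB", "AAAAABBB") > 0)  # different patterns
```

Because the comparison works on subsequence frequencies rather than semantic target objects, no object labeling of the dynamic scene is required, which is the point the abstract makes.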


Optometry and Vision Science | 2015

Driving with Glaucoma: Task Performance and Gaze Movements

Thomas C. Kübler; Enkelejda Kasneci; Wolfgang Rosenstiel; Martin Heister; Kathrin Aehling; Katja Nagel; Ulrich Schiefer; Elena Papageorgiou

Purpose The aim of this pilot study was to assess the driving performance and the visual search behavior, that is, eye and head movements, of patients with glaucoma in comparison to healthy-sighted subjects during a simulated driving test. Methods Driving performance and gaze behavior of six glaucoma patients and eight healthy-sighted age- and sex-matched control subjects were compared in an advanced driving simulator. All subjects underwent a 40-minute driving test including nine hazardous situations on city and rural roads. Fitness to drive was assessed by a masked driving instructor according to the requirements of the official German driving test. Several driving performance measures were investigated: lane position, time to line crossing, and speed. Additionally, eye and head movements were tracked and analyzed. Results Three out of six glaucoma patients passed the driving test, and their driving performance was indistinguishable from that of the control group. Patients who passed the test showed increased visual exploration in comparison to patients who failed; that is, they showed an increased number of head and gaze movements toward eccentric regions. Furthermore, patients who failed the test showed a rightward bias in average lane position, probably in an attempt to maximize the safety margin to oncoming traffic. Conclusions Our study suggests that a considerable subgroup of subjects with binocular glaucomatous visual field loss shows safe driving behavior in a virtual reality environment, because they adapt their viewing behavior by increasing their visual scanning. Hence, binocular visual field loss does not necessarily impair driving safety. We therefore recommend more individualized driving assessments that take the patient's ability to compensate into account.


Behavior Research Methods | 2017

SubsMatch 2.0: Scanpath comparison and classification based on subsequence frequencies

Thomas C. Kübler; Colleen Rothe; Ulrich Schiefer; Wolfgang Rosenstiel; Enkelejda Kasneci

Our eye movements are driven by a continuous trade-off between the need for detailed examination of objects of interest and the necessity to keep an overview of our surroundings. In consequence, behavioral patterns that are characteristic of our actions and their planning are typically manifested in the way we move our eyes to interact with our environment. Identifying such patterns from individual eye movement measurements is, however, highly challenging. In this work, we tackle the challenge of quantifying the influence of experimental factors on eye movement sequences. We introduce an algorithm for extracting sequence-sensitive features from eye movements and for the classification of eye movements based on the frequencies of small subsequences. Our approach is evaluated against the state-of-the-art on a novel and very rich collection of eye movement data derived from four experimental settings, ranging from static viewing tasks to highly dynamic outdoor scenarios. Our results show that the proposed method is able to classify eye movement sequences over a variety of experimental designs. The choice of parameters is discussed in detail, with special focus on highlighting different aspects of general scanpath shape. Algorithms and evaluation data are available at: http://www.ti.uni-tuebingen.de/scanpathcomparison.html.
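The classification idea can be sketched by turning each scanpath into subsequence-frequency features and classifying with a nearest-neighbour rule. SubsMatch 2.0 itself feeds such features to an SVM; the 1-NN substitute, the labels, and the toy scanpaths below are illustrative assumptions.

```python
from collections import Counter

def ngram_features(scanpath, n=3):
    """Relative frequencies of length-n subsequences (letters = gaze bins)."""
    grams = [scanpath[i:i + n] for i in range(len(scanpath) - n + 1)]
    total = len(grams)
    return {g: c / total for g, c in Counter(grams).items()}

def distance(fa, fb):
    """L1 distance between two sparse frequency histograms."""
    return sum(abs(fa.get(k, 0.0) - fb.get(k, 0.0)) for k in set(fa) | set(fb))

def classify(scanpath, labelled):
    """1-nearest-neighbour over subsequence-frequency features; the paper
    uses an SVM on these features instead."""
    feats = ngram_features(scanpath)
    return min(labelled, key=lambda item: distance(feats, ngram_features(item[0])))[1]

# hypothetical training scanpaths labelled "scanning" vs. "dwelling"
train = [("ABCABCABC", "scanning"), ("CBACBACBA", "scanning"),
         ("AAAABAAAA", "dwelling"), ("BBBABBBBB", "dwelling")]
print(classify("ABCABCABA", train))
print(classify("AAABAAAAA", train))
```

Because the features ignore absolute position and timing, the same pipeline transfers from static viewing tasks to dynamic recordings, which is what the evaluation across four experimental designs demonstrates.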
