Publications


Featured research published by Thomas C. Kübler.


Computer Analysis of Images and Patterns | 2015

ExCuSe: Robust Pupil Detection in Real-World Scenarios

Thomas C. Kübler; Katrin Sippel; Wolfgang Rosenstiel; Enkelejda Kasneci

The reliable estimation of the pupil position is one of the most important prerequisites in gaze-based HMI applications. Despite the rich landscape of image-based methods for pupil extraction, tracking the pupil in real-world images is highly challenging due to variations in the environment (e.g., changing illumination conditions, reflections), in the eye physiology, and due to further sources of noise (e.g., contact lenses or mascara). We present a novel algorithm for robust pupil detection in real-world scenarios, based on edge filtering and oriented histograms calculated via the Angular Integral Projection Function. An evaluation on over 38,000 new, hand-labeled eye images from real-world tasks and 600 images from related work showed the outstanding robustness of our algorithm in comparison to the state of the art. Download link (algorithm and data): https://www.ti.uni-tuebingen.de/Pupil-detection.1827.0.html?&L=1.


arXiv: Computer Vision and Pattern Recognition | 2016

ElSe: ellipse selection for robust pupil detection in real-world environments

Thiago Santini; Thomas C. Kübler; Enkelejda Kasneci

Fast and robust pupil detection is an essential prerequisite for video-based eye tracking in real-world settings. Several algorithms for image-based pupil detection have been proposed in the past; their applicability, however, is mostly limited to laboratory conditions. In real-world scenarios, automated pupil detection has to face various challenges, such as illumination changes, reflections (on glasses), make-up, non-centered eye recording, and physiological eye characteristics. We propose ElSe, a novel algorithm based on ellipse evaluation of a filtered edge image. We aim at a robust, inexpensive approach that can be integrated into embedded architectures, e.g., in driving scenarios. The proposed algorithm was evaluated against four state-of-the-art methods on over 93,000 hand-labeled images, of which 55,000 are new eye images contributed by this work. On average, the proposed method achieved a 14.53% improvement in the detection rate relative to the best state-of-the-art performer. Algorithm and data sets are available for download: ftp://[email protected] (password: eyedata).


International Conference on Artificial Neural Networks | 2013

Online Classification of Eye Tracking Data for Automated Analysis of Traffic Hazard Perception

Enkelejda Tafaj; Thomas C. Kübler; Gjergji Kasneci; Wolfgang Rosenstiel; Martin Bogdan

Complex and hazardous driving situations often arise with the delayed perception of traffic objects. To automatically detect whether such objects have been perceived by the driver, there is a need for techniques that can reliably recognize whether the driver's eyes have fixated on or are pursuing the hazardous object (i.e., detecting fixations, saccades, and smooth pursuits from raw eye-tracking data). This paper presents a system for analyzing the driver's visual behavior based on an adaptive online algorithm for detecting and distinguishing between fixation clusters, saccades, and smooth pursuits.


arXiv: Computer Vision and Pattern Recognition | 2016

Bayesian identification of fixations, saccades, and smooth pursuits

Thiago Santini; Thomas C. Kübler; Enkelejda Kasneci

Smooth pursuit eye movements provide meaningful insights and information on subjects' behavior and health and may, in particular situations, disturb the performance of typical fixation/saccade classification algorithms. Thus, an automatic and efficient algorithm to identify these eye movements is paramount for eye-tracking research involving dynamic stimuli. In this paper, we propose the Bayesian Decision Theory Identification (I-BDT) algorithm, a novel algorithm for ternary classification of eye movements that is able to reliably separate fixations, saccades, and smooth pursuits in an online fashion, even for low-resolution eye trackers. The proposed algorithm is evaluated on four datasets with distinct mixtures of eye movements, including fixations, saccades, as well as straight and circular smooth pursuits; data was collected with a sample rate of 30 Hz from six subjects, totaling 24 evaluation datasets. The algorithm exhibits high and consistent performance across all datasets and movements relative to a manual annotation by a domain expert (recall: μ = 91.42%, σ = 9.52%; precision: μ = 95.60%, σ = 5.29%; specificity: μ = 95.41%, σ = 7.02%) and displays a significant improvement when compared to I-VDT, a state-of-the-art algorithm (recall: μ = 87.67%, σ = 14.73%; precision: μ = 89.57%, σ = 8.05%; specificity: μ = 92.10%, σ = 11.21%). Algorithm implementation and annotated datasets are openly available at www.ti.uni-tuebingen.de/perception
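The core decision rule behind a Bayesian ternary classifier of this kind can be illustrated with a toy sketch: pick the movement class with the highest posterior given the current eye speed. The priors and Gaussian speed models below are invented for illustration; the actual I-BDT derives its priors and likelihoods from the signal itself rather than from fixed constants.

```python
import math

# Hypothetical per-class speed models (mean, std in deg/s) and priors.
MODELS = {
    "fixation":       (5.0, 5.0),
    "smooth_pursuit": (25.0, 10.0),
    "saccade":        (200.0, 80.0),
}
PRIORS = {"fixation": 0.6, "smooth_pursuit": 0.2, "saccade": 0.2}

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def classify(speed):
    """Bayes decision: argmax over p(class | speed) ∝ p(speed | class) p(class).
    Returns the winning class and the normalized posterior distribution."""
    posts = {c: gaussian(speed, *MODELS[c]) * PRIORS[c] for c in MODELS}
    z = sum(posts.values())
    return max(posts, key=posts.get), {c: p / z for c, p in posts.items()}

print(classify(30.0)[0])  # smooth_pursuit
```

Run per sample, this yields exactly the kind of online ternary decision the abstract describes, at the cost of one likelihood evaluation per class.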


Artificial Neural Networks | 2015

Online Recognition of Fixations, Saccades, and Smooth Pursuits for Automated Analysis of Traffic Hazard Perception

Enkelejda Kasneci; Gjergji Kasneci; Thomas C. Kübler; Wolfgang Rosenstiel

Complex and hazardous driving situations often arise with the delayed perception of traffic objects. To automatically detect whether such objects have been perceived by the driver, there is a need for techniques that can reliably recognize whether the driver’s eyes have fixated or are pursuing the hazardous object. A prerequisite for such techniques is the reliable recognition of fixations, saccades, and smooth pursuits from raw eye tracking data. This chapter addresses the challenge of analyzing the driver’s visual behavior in an adaptive and online fashion to automatically distinguish between fixation clusters, saccades, and smooth pursuits.


Eye Tracking Research & Applications | 2014

The applicability of probabilistic methods to the online recognition of fixations and saccades in dynamic scenes

Enkelejda Kasneci; Gjergji Kasneci; Thomas C. Kübler; Wolfgang Rosenstiel

In many applications involving scanpath analysis, especially when dynamic scenes are viewed, consecutive fixations and saccades have to be identified and extracted from raw eye-tracking data in an online fashion. Since probabilistic methods can adapt not only to the individual viewing behavior but also to changes in the scene, they are best suited for such tasks. In this paper we analyze the applicability of two types of mainstream probabilistic models to the identification of fixations and saccades in dynamic scenes: (1) Hidden Markov Models and (2) Bayesian Online Mixture Models. We analyze and compare the classification performance of the models on eye-tracking data collected during real-world driving experiments.
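A Hidden Markov Model of the first kind can be sketched as a two-state model (fixation vs. saccade) over eye-speed observations, decoded with the Viterbi algorithm. All parameters below are illustrative placeholders, not values fitted to driving data as in the paper.

```python
import math

# Hypothetical two-state HMM over eye-speed observations (deg/s):
# state 0 = fixation, state 1 = saccade.
STATES = ("fixation", "saccade")
LOG_INIT = [math.log(0.9), math.log(0.1)]
LOG_TRANS = [[math.log(0.95), math.log(0.05)],   # from fixation
             [math.log(0.50), math.log(0.50)]]   # from saccade
PARAMS = [(5.0, 5.0), (150.0, 60.0)]             # (mean, std) of speed per state

def log_gauss(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def viterbi(speeds):
    """Most likely fixation/saccade state sequence for a speed signal."""
    v = [LOG_INIT[s] + log_gauss(speeds[0], *PARAMS[s]) for s in range(2)]
    back = []
    for x in speeds[1:]:
        nv, ptr = [], []
        for s in range(2):
            best = max(range(2), key=lambda p: v[p] + LOG_TRANS[p][s])
            nv.append(v[best] + LOG_TRANS[best][s] + log_gauss(x, *PARAMS[s]))
            ptr.append(best)
        v = nv
        back.append(ptr)
    s = max(range(2), key=lambda i: v[i])
    path = [s]
    for ptr in reversed(back):
        s = ptr[s]
        path.append(s)
    return [STATES[i] for i in reversed(path)]

print(viterbi([3, 4, 200, 180, 5, 6]))
# ['fixation', 'fixation', 'saccade', 'saccade', 'fixation', 'fixation']
```

The transition probabilities encode the temporal smoothness that distinguishes an HMM from a per-sample threshold, which is what makes the probabilistic approach robust to noisy individual samples.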


Eye Tracking Research & Applications | 2014

SubsMatch: scanpath similarity in dynamic scenes based on subsequence frequencies

Thomas C. Kübler; Enkelejda Kasneci; Wolfgang Rosenstiel

The analysis of visual scanpaths, i.e., series of fixations and saccades, in complex dynamic scenarios is highly challenging and usually performed manually. We propose SubsMatch, a scanpath comparison algorithm for dynamic, interactive scenarios based on the frequency of repeated gaze patterns. Instead of measuring the gaze duration towards a semantic target object (which would be hard to label in dynamic scenes), we examine the frequency of attention shifts and exploratory eye movements. SubsMatch was evaluated on highly dynamic data from a driving experiment to identify differences between scanpaths of subjects who failed a driving test and subjects who passed.
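The underlying idea, comparing scanpaths by the relative frequencies of short subsequences, can be sketched as follows, assuming the gaze signal has already been discretized into region labels. The distance measure shown (an L1 difference of n-gram frequencies) is a simplification for illustration, not the exact published formulation.

```python
from collections import Counter

def subsequence_freq(scanpath, n=2):
    """Relative frequency of each length-n subsequence (n-gram) in a
    scanpath encoded as a string of region labels."""
    grams = [scanpath[i:i + n] for i in range(len(scanpath) - n + 1)]
    return {g: c / len(grams) for g, c in Counter(grams).items()}

def subsmatch_distance(a, b, n=2):
    """Dissimilarity as the sum of absolute frequency differences over
    all observed n-grams (0 = identical subsequence statistics)."""
    fa, fb = subsequence_freq(a, n), subsequence_freq(b, n)
    return sum(abs(fa.get(k, 0.0) - fb.get(k, 0.0)) for k in set(fa) | set(fb))

# Two scanpaths over regions A/B with very different transition patterns:
print(subsmatch_distance("ABABAB", "AAABBB"))
```

Because the comparison operates on transition statistics rather than on semantic target objects, it sidesteps the labeling problem in dynamic scenes that the abstract points out.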


Optometry and Vision Science | 2015

Driving with Glaucoma: Task Performance and Gaze Movements

Thomas C. Kübler; Enkelejda Kasneci; Wolfgang Rosenstiel; Martin Heister; Kathrin Aehling; Katja Nagel; Ulrich Schiefer; Elena Papageorgiou

Purpose: The aim of this pilot study was to assess the driving performance and the visual search behavior (i.e., eye and head movements) of patients with glaucoma in comparison to healthy-sighted subjects during a simulated driving test.

Methods: Driving performance and gaze behavior of six glaucoma patients and eight healthy-sighted, age- and sex-matched control subjects were compared in an advanced driving simulator. All subjects underwent a 40-minute driving test including nine hazardous situations on city and rural roads. Fitness to drive was assessed by a masked driving instructor according to the requirements of the official German driving test. Several driving performance measures were investigated: lane position, time to line crossing, and speed. Additionally, eye and head movements were tracked and analyzed.

Results: Three out of six glaucoma patients passed the driving test, and their driving performance was indistinguishable from that of the control group. Patients who passed the test showed increased visual exploration in comparison to patients who failed; that is, they showed an increased number of head and gaze movements toward eccentric regions. Furthermore, patients who failed the test showed a rightward bias in average lane position, probably in an attempt to maximize the safety margin to oncoming traffic.

Conclusions: Our study suggests that a considerable subgroup of subjects with binocular glaucomatous visual field loss shows safe driving behavior in a virtual-reality environment, because they adapt their viewing behavior by increasing their visual scanning. Hence, binocular visual field loss does not necessarily impair driving safety. We therefore recommend more individualized driving assessments that take into account the patient's ability to compensate.


Behavior Research Methods | 2017

SubsMatch 2.0: Scanpath comparison and classification based on subsequence frequencies

Thomas C. Kübler; Colleen Rothe; Ulrich Schiefer; Wolfgang Rosenstiel; Enkelejda Kasneci

Our eye movements are driven by a continuous trade-off between the need for a detailed examination of objects of interest and the necessity to keep an overview of our surroundings. Consequently, behavioral patterns that are characteristic of our actions and their planning typically manifest in the way we move our eyes to interact with our environment. Identifying such patterns from individual eye movement measurements is, however, highly challenging. In this work, we tackle the challenge of quantifying the influence of experimental factors on eye movement sequences. We introduce an algorithm for extracting sequence-sensitive features from eye movements and for classifying eye movements based on the frequencies of small subsequences. Our approach is evaluated against the state of the art on a novel and very rich collection of eye movement data derived from four experimental settings, from static viewing tasks to highly dynamic outdoor settings. Our results show that the proposed method is able to classify eye movement sequences across a variety of experimental designs. The choice of parameters is discussed in detail, with special focus on highlighting different aspects of general scanpath shape. Algorithms and evaluation data are available at: http://www.ti.uni-tuebingen.de/scanpathcomparison.html.


Computers in Human Behavior | 2017

Aggregating physiological and eye tracking signals to predict perception in the absence of ground truth

Enkelejda Kasneci; Thomas C. Kübler; Klaus Broelemann; Gjergji Kasneci

Today's driving assistance systems build on numerous sensors to provide assistance for specific tasks. In order not to patronize the driver, the intensity and timing of critical responses by such systems are determined based on parameters derived from vehicle dynamics and scene recognition. However, to date, information on object perception by the driver is not considered by such systems. With advances in eye-tracking technology, a powerful tool to assess the driver's visual perception has become available, which, in many studies, has been integrated with physiological signals, i.e., galvanic skin response and EEG, for reliable prediction of object perception. We address the problem of aggregating binary signals from physiological sensors and eye tracking to predict a driver's visual perception of scene hazards. In the absence of ground truth, it is crucial to use an aggregation scheme that estimates the reliability of each signal source and thus reliably aggregates signals to predict whether an object has been perceived. To this end, we apply state-of-the-art methods for response aggregation to data obtained from simulated driving sessions with 30 subjects. Our results show that a probabilistic aggregation scheme on top of an Expectation-Maximization-based estimation of source reliabilities can predict hazard perception at a recall and precision of 96% in real time.
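The reliability-estimation idea can be sketched with a simplified EM scheme in the spirit of Dawid and Skene: alternate between inferring a soft label for each item from the current source reliabilities, and re-estimating each source's reliability from its agreement with those soft labels. This is a hypothetical illustration, not the paper's exact aggregation method.

```python
def em_aggregate(votes, iters=20):
    """Aggregate binary votes without ground truth. Alternates between
    an E-step (soft label per item from current source reliabilities)
    and an M-step (reliability per source = expected agreement with the
    soft labels). votes[i][s] is source s's binary vote on item i."""
    n_items, n_src = len(votes), len(votes[0])
    rel = [0.7] * n_src          # initial reliability guess per source
    probs = [0.5] * n_items
    for _ in range(iters):
        for i, row in enumerate(votes):           # E-step
            p_pos = p_neg = 1.0
            for s, v in enumerate(row):
                p_pos *= rel[s] if v == 1 else 1.0 - rel[s]
                p_neg *= rel[s] if v == 0 else 1.0 - rel[s]
            probs[i] = p_pos / (p_pos + p_neg)
        for s in range(n_src):                    # M-step
            agree = sum(probs[i] if votes[i][s] == 1 else 1.0 - probs[i]
                        for i in range(n_items))
            rel[s] = agree / n_items
    return [int(p > 0.5) for p in probs], rel

# Sources 0 and 1 agree with each other; source 2 is noisy.
votes = [[1, 1, 0], [1, 1, 1], [0, 0, 1], [0, 0, 0], [1, 1, 0]]
labels, rel = em_aggregate(votes)
print(labels)  # [1, 1, 0, 0, 1]
```

The key property is the same one the abstract relies on: the noisy source ends up with a low estimated reliability and therefore contributes little to the aggregated perception decision, all without any labeled ground truth.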
