Publication


Featured research published by Dan Witzner Hansen.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2010

In the Eye of the Beholder: A Survey of Models for Eyes and Gaze

Dan Witzner Hansen; Qiang Ji

Despite active research and significant progress in the last 30 years, eye detection and tracking remains challenging due to the individuality of eyes, occlusion, and variability in scale, location, and light conditions. Data on eye location and details of eye movements have numerous applications and are essential in face detection, biometric identification, and particular human-computer interaction tasks. This paper reviews current progress and the state of the art in video-based eye detection and tracking in order to identify promising techniques as well as issues to be further addressed. We present a detailed review of recent eye models and techniques for eye detection and tracking. We also survey methods for gaze estimation and compare them based on their geometric properties and reported accuracies. This review shows that, despite their apparent simplicity, the development of a general eye detection technique involves addressing many challenges, requires further theoretical developments, and is consequently of interest to many other domains in computer vision and beyond.


Computer Vision and Image Understanding | 2005

Eye tracking in the wild

Dan Witzner Hansen; Arthur E. C. Pece

An active contour tracker is presented which can be used for gaze-based interaction with off-the-shelf components. The underlying contour model is based on image statistics and avoids explicit feature detection. The tracker combines particle filtering with the EM algorithm. The method exhibits robustness to light changes and camera defocusing; consequently, the model is well suited for use in systems built from off-the-shelf hardware, but may equally well be used in controlled environments, such as IR-based settings. The method is even capable of handling sudden changes between IR and non-IR light conditions without changing parameters. For the purpose of determining where the user is looking, calibration is usually needed. The number of calibration points used in different methods varies from a few to several thousand, depending on the prior knowledge of the setup and equipment. We examine basic properties of gaze determination when the geometry of the camera, screen, and user is unknown. In particular, we present a lower bound on the number of calibration points needed for gaze determination on planar objects, and we examine degenerate configurations. Based on this lower bound we apply a simple calibration procedure to facilitate gaze estimation.
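The tracker's combination of particle filtering with an observation model follows the standard predict/update/resample recipe. A minimal sketch of one such cycle, with the dynamics and likelihood left as placeholder callables (the paper's actual contour likelihood is not reproduced here):

```python
import numpy as np

def particle_filter_step(particles, weights, dynamics, likelihood, rng):
    """One predict/update/resample cycle of a generic particle filter.

    particles  : (N, D) array of state hypotheses (e.g. contour parameters)
    dynamics   : callable propagating particles through the motion model
    likelihood : callable scoring each particle against the current image
    """
    # Predict: propagate each hypothesis with process noise.
    particles = dynamics(particles, rng)

    # Update: reweight hypotheses by the observation likelihood.
    weights = weights * likelihood(particles)
    weights = weights / weights.sum()

    # Resample when the effective sample size collapses.
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff < len(particles) / 2:
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```

The EM step in the paper refines the contour estimate within each frame; the sketch above covers only the frame-to-frame filtering skeleton.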


Eye Tracking Research & Applications | 2010

Evaluation of a low-cost open-source gaze tracker

Javier San Agustin; Henrik H. T. Skovsgaard; Emilie Møllenbach; Maria Barret; Martin Tall; Dan Witzner Hansen; John Paulin Hansen

This paper presents a low-cost gaze tracking system that is based on a webcam mounted close to the user's eye. The performance of the gaze tracker was evaluated in an eye-typing task using two different typing applications. Participants could type between 3.56 and 6.78 words per minute, depending on the typing system used. A pilot study to assess the usability of the system was also carried out in the home of a user with severe motor impairments. The user successfully typed on a wall-projected interface using his eye movements.


Human Factors in Computing Systems | 2009

Low-cost gaze interaction: ready to deliver the promises

Javier San Agustin; Henrik H. T. Skovsgaard; John Paulin Hansen; Dan Witzner Hansen

Eye movements are the only means of communication for some severely disabled people. However, the high prices of commercial eye tracking systems limit the access to this technology. In this pilot study we compare the performance of a low-cost, webcam-based gaze tracker that we have developed with two commercial trackers in two different tasks: target acquisition and eye typing. From analyses on throughput, words per minute and error rates we conclude that a low-cost solution can be as efficient as expensive commercial systems.
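The comparison rests on standard pointing and typing metrics. A sketch of how such measures are commonly computed (the Shannon formulation of Fitts' throughput and the five-characters-per-word typing rate; this is the conventional definition, not necessarily the paper's exact analysis):

```python
import math

def fitts_throughput(distance, width, movement_time):
    """Throughput (bits/s) for one target acquisition: the index of
    difficulty (Shannon formulation) divided by the movement time."""
    index_of_difficulty = math.log2(distance / width + 1.0)
    return index_of_difficulty / movement_time

def words_per_minute(chars_typed, seconds):
    """Standard typing rate: one 'word' is defined as five characters."""
    return (chars_typed / 5.0) / (seconds / 60.0)
```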


Workshop on Applications of Computer Vision | 2002

Eye typing using Markov and active appearance models

Dan Witzner Hansen; John Paulin Hansen; Mads Nielsen; Anders Johansen; Mikkel B. Stegmann

We propose a non-intrusive eye tracking system intended for everyday gaze typing using web cameras. We argue that high precision in gaze tracking is not needed for on-screen typing due to natural language redundancy. This facilitates the use of low-cost video components for advanced multi-modal interactions based on video tracking systems. Robust methods are needed to track the eyes using web cameras due to the poor image quality. A real-time tracking scheme using a mean-shift color tracker and an Active Appearance Model of the eye is proposed. From this model it is possible to infer the state of the eye, such as the eye corners and the pupil location, under scale and rotational changes.
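The mean-shift step of such a color tracker iteratively moves a search window to the centroid of a likelihood (back-projection) map until it converges on the tracked region. A minimal sketch under the assumption that the color model has already been turned into a per-pixel weight map (the AAM refinement is not shown):

```python
import numpy as np

def mean_shift(weights_map, window, n_iter=20):
    """Shift a rectangular window toward the weighted centroid of a
    back-projection map (pixel weight = colour-model likelihood)
    until the shift vanishes or n_iter is reached."""
    x, y, w, h = window
    for _ in range(n_iter):
        patch = weights_map[y:y + h, x:x + w]
        total = patch.sum()
        if total == 0:
            break  # no evidence under the window
        ys, xs = np.mgrid[0:h, 0:w]
        cx = int(round((xs * patch).sum() / total))
        cy = int(round((ys * patch).sum() / total))
        dx, dy = cx - w // 2, cy - h // 2
        if dx == 0 and dy == 0:
            break  # converged
        x = int(np.clip(x + dx, 0, weights_map.shape[1] - w))
        y = int(np.clip(y + dy, 0, weights_map.shape[0] - h))
    return x, y, w, h
```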


Eye Tracking Research & Applications | 2008

Noise tolerant selection by gaze-controlled pan and zoom in 3D

Dan Witzner Hansen; Henrik H. T. Skovsgaard; John Paulin Hansen; Emilie Møllenbach

This paper presents StarGazer - a new 3D interface for gaze-based interaction and target selection using continuous pan and zoom. Through StarGazer we address the issues of interacting with graph-structured data and applications (i.e. gaze typing systems) using low-resolution eye trackers or small displays. We show that it is possible to make robust selections even with a large number of selectable items on the screen and noisy gaze trackers. A test with 48 subjects demonstrated that users who had never tried gaze interaction before could rapidly adapt to the navigation principles of StarGazer. We tested three different display sizes (down to PDA-sized displays) and found that large screens are faster to navigate than small displays and that the error rate is higher for the smallest display. Half of the subjects were exposed to severe noise deliberately added to the cursor positions. We found that this had a negative impact on efficiency. However, the users remained in control and the noise did not seem to affect the error rate. Additionally, three subjects tested the effects of temporally added noise simulating latency in the gaze tracker. Even with a significant latency (about 200 ms) the subjects were able to type at acceptable rates. In a second test, seven subjects were allowed to adjust the zooming speed themselves. They achieved typing rates of more than eight words per minute without using language modeling. We conclude that the StarGazer application is an intuitive 3D interface for gaze navigation, allowing more selectable objects to be displayed on the screen than the accuracy of the gaze trackers would otherwise permit.


Eye Tracking Research & Applications | 2010

Homography normalization for robust gaze estimation in uncalibrated setups

Dan Witzner Hansen; Javier San Agustin; Arantxa Villanueva

Homography normalization is presented as a novel gaze estimation method for uncalibrated setups. The method applies when head movements are present, but without any requirement for camera calibration or geometric calibration. The method is geometrically and empirically demonstrated to be robust to head pose changes, and despite being less constrained than cross-ratio methods, it consistently performs favorably by several degrees on both simulated data and data from physical setups. The physical setups include the use of off-the-shelf web cameras with infrared light (night vision) and standard cameras with and without infrared light. The benefits of homography normalization and uncalibrated setups in general are also demonstrated by obtaining gaze estimates (in the visible spectrum) using only the screen reflections on the cornea.
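The geometric core of such a normalization is a homography estimated from four reference points (e.g. corneal reflections of screen or light-source corners) to a canonical space, through which the pupil centre is mapped before a final mapping to the screen. A minimal direct-linear-transform sketch of that step (not the authors' implementation):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping four 2D src points to four
    2D dst points via the direct linear transform (SVD null space)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(rows))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the projective scale

def normalize_point(H, p):
    """Map an image point (e.g. the pupil centre) into normalized space."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

In a full pipeline a second homography would map the normalized pupil position onto screen coordinates using a few calibration targets.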


Proceedings of the 1st Conference on Novel Gaze-Controlled Applications | 2011

Mobile gaze-based screen interaction in 3D environments

Diako Mardanbegi; Dan Witzner Hansen

Head-mounted eye trackers can be used for mobile interaction as well as for gaze estimation purposes. This paper presents a method that enables the user to interact with any planar digital display in a 3D environment using a head-mounted eye tracker. An effective method for identifying the screens in the user's field of view is also presented, which can be applied in a general scenario in which multiple users interact with multiple screens. A particular application of this technique is implemented in a home environment with two large screens and a mobile phone. In this application a user was able to interact with these screens using a wireless head-mounted eye tracker.


Computer Vision and Pattern Recognition | 2008

Cluster tracking with Time-of-Flight cameras

Dan Witzner Hansen; Mads Syska Hansen; Martin Kirschmeyer; Rasmus Larsen; Davide Silvestre

We describe a method for tracking people using a time-of-flight camera and apply the method to persistent authentication in a smart environment. A background model is built by fusing information from intensity and depth images. While a geometric constraint is employed to improve pixel cluster coherence and reduce the influence of noise, the EM algorithm (expectation maximization) is used for tracking moving clusters of pixels significantly different from the background model. Each cluster is defined through a statistical model of points on the ground plane. We show the benefits of the time-of-flight principle for people tracking, but also its current limitations.
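A simple way to fuse intensity and depth in a background model is to keep per-pixel statistics for each channel and flag a pixel as foreground when it deviates in either one. A minimal sketch of that fusion idea (per-pixel Gaussian statistics assumed; the paper's geometric constraint and EM clustering are not shown):

```python
import numpy as np

def foreground_mask(intensity, depth,
                    bg_int_mean, bg_int_std,
                    bg_depth_mean, bg_depth_std, k=3.0):
    """Fuse intensity and depth cues: a pixel is foreground when it lies
    more than k standard deviations from the per-pixel background model
    in either channel."""
    fg_int = np.abs(intensity - bg_int_mean) > k * bg_int_std
    fg_depth = np.abs(depth - bg_depth_mean) > k * bg_depth_std
    return fg_int | fg_depth
```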


Computer Vision and Image Understanding | 2007

An improved likelihood model for eye tracking

Dan Witzner Hansen; Riad I. Hammoud

While existing eye detection and tracking algorithms can work reasonably well in a controlled environment, they tend to perform poorly under real-world imaging conditions, where the lighting produces shadows and the person's eyes can be occluded by e.g. glasses or makeup. As a result, pixel clusters associated with the eyes tend to be grouped together with background features. This problem occurs both for eye detection and eye tracking. Problems that especially plague eye tracking include head movement, eye blinking, and light changes, all of which can cause the eyes to suddenly disappear. The usual approach in such cases is to abandon the tracking routine and re-initialize eye detection. Of course, this may be a difficult process due to the missing-data problem. Accordingly, what is needed is an efficient method of reliably tracking a person's eyes between successively produced video image frames, even in situations where the person's head turns, the eyes momentarily close, and/or the lighting conditions are variable. The present paper is directed at an efficient and reliable method of tracking a human eye between successively produced infrared interlaced image frames where the lighting conditions are challenging. It proposes a log likelihood-ratio function of foreground and background models in a particle filter-based eye tracking framework. It fuses key information from the even and odd infrared fields (dark and bright pupil) and their corresponding subtractive image into one single observation model. Experimental validations show good performance of the proposed eye tracker in challenging conditions that include moderate head motion and significant local and global lighting changes. The paper also presents an eye detector that relies on physiological infrared eye responses and a modified version of a cascaded classifier.
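The fusion of the bright-pupil field, the dark-pupil field, and their subtractive image into one observation model can be sketched as a sum of per-cue log likelihood ratios, which would then weight each particle. A minimal illustration assuming Gaussian foreground/background models per cue (the parameters and the Gaussian choice are illustrative, not the paper's):

```python
import numpy as np

def gauss_logpdf(x, mu, sigma):
    """Log density of a univariate Gaussian."""
    return -0.5 * np.log(2 * np.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def particle_log_weight(bright, dark, diff, fg_params, bg_params):
    """Sum the log likelihood ratios of the bright-pupil cue, the
    dark-pupil cue, and their subtractive image into one observation
    score; positive values favour the foreground (eye) hypothesis."""
    score = 0.0
    for x, (mu_f, s_f), (mu_b, s_b) in zip(
            (bright, dark, diff), fg_params, bg_params):
        score += gauss_logpdf(x, mu_f, s_f) - gauss_logpdf(x, mu_b, s_b)
    return score
```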

Collaboration


Top co-authors of Dan Witzner Hansen:

John Paulin Hansen (Technical University of Denmark)
Diako Mardanbegi (IT University of Copenhagen)
Javier San Agustin (IT University of Copenhagen)
Rasmus Larsen (Technical University of Denmark)