Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where John Paulin Hansen is active.

Publication


Featured research published by John Paulin Hansen.


Eye Tracking Research & Applications | 2010

Evaluation of a low-cost open-source gaze tracker

Javier San Agustin; Henrik H. T. Skovsgaard; Emilie Møllenbach; Maria Barret; Martin Tall; Dan Witzner Hansen; John Paulin Hansen

This paper presents a low-cost gaze tracking system based on a webcam mounted close to the user's eye. The performance of the gaze tracker was evaluated in an eye-typing task using two different typing applications. Participants could type between 3.56 and 6.78 words per minute, depending on the typing system used. A pilot study to assess the usability of the system was also carried out in the home of a user with severe motor impairments. The user successfully typed on a wall-projected interface using his eye movements.
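Typing rates in eye-typing studies such as this one are conventionally reported in words per minute, with one "word" defined as five characters. A minimal sketch of that calculation; the sample figures below are illustrative only, not data from the paper:

```python
# Words per minute (WPM): one "word" is conventionally 5 characters.
def words_per_minute(chars_typed: int, seconds: float) -> float:
    """Return the typing rate in WPM for a transcription session."""
    return (chars_typed / 5.0) / (seconds / 60.0)

# Illustrative numbers only: 169 characters entered in 5 minutes gives 6.76 WPM,
# near the upper end of the 3.56-6.78 WPM range reported in the abstract.
print(round(words_per_minute(169, 300), 2))
```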


Eye Tracking Research & Applications | 2004

Gaze typing compared with input by head and hand

John Paulin Hansen; Kristian Tørning; Anders Sewerin Johansen; Kenji Itoh; Hirotaka Aoki

This paper investigates the usability of gaze-typing systems for disabled people in a broad perspective that takes into account the usage scenarios and the particular users these systems benefit. Design goals for a gaze-typing system are identified: productivity above 25 words per minute, robust tracking, high availability, and support of multimodal input. A detailed investigation of efficiency and user satisfaction with a Danish and a Japanese gaze-typing system compares them to head typing and mouse (hand) typing. We found gaze typing to be more error-prone than the other two modalities. Gaze typing was just as fast as head typing, and both were slower than mouse (hand) typing. Possibilities for design improvements are discussed.


Human Factors in Computing Systems | 2009

Low-cost gaze interaction: ready to deliver the promises

Javier San Agustin; Henrik H. T. Skovsgaard; John Paulin Hansen; Dan Witzner Hansen

Eye movements are the only means of communication for some severely disabled people. However, the high prices of commercial eye tracking systems limit access to this technology. In this pilot study we compare the performance of a low-cost, webcam-based gaze tracker that we have developed with two commercial trackers in two different tasks: target acquisition and eye typing. From analyses of throughput, words per minute, and error rates, we conclude that a low-cost solution can be as efficient as expensive commercial systems.
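The target-acquisition comparison mentions throughput, which in pointing studies is commonly the ISO 9241-9 measure: effective index of difficulty divided by movement time. A hedged sketch of that computation; the trial values are made up for illustration, and the paper may use a different variant:

```python
import math

def effective_throughput(distance: float, effective_width: float, movement_time: float) -> float:
    """ISO 9241-9 style throughput in bits/s: IDe / MT,
    where IDe = log2(D / We + 1) and MT is in seconds."""
    ide = math.log2(distance / effective_width + 1.0)  # effective index of difficulty (bits)
    return ide / movement_time

# Illustrative trial: 400 px movement distance, 40 px effective width, 1.2 s movement time.
print(round(effective_throughput(400, 40, 1.2), 2))  # ~2.88 bits/s
```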


Workshop on Applications of Computer Vision | 2002

Eye typing using Markov and active appearance models

Dan Witzner Hansen; John Paulin Hansen; Mads Nielsen; Anders Johansen; Mikkel B. Stegmann

We propose a non-intrusive eye tracking system intended for everyday gaze typing using web cameras. We argue that high precision in gaze tracking is not needed for on-screen typing due to natural language redundancy. This facilitates the use of low-cost video components for advanced multimodal interactions based on video tracking systems. Robust methods are needed to track the eyes using web cameras due to the poor image quality. A real-time tracking scheme using a mean-shift color tracker and an Active Appearance Model of the eye is proposed. From this model it is possible to infer the state of the eye, such as eye corners and pupil location, under scale and rotational changes.
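The tracking scheme combines a mean-shift colour tracker with an Active Appearance Model of the eye. The AAM part is beyond a short sketch, but the mean-shift colour-tracking half can be illustrated with OpenCV's standard histogram back-projection pipeline. The file name, initial window coordinates, and HSV mask thresholds below are placeholder assumptions, not values from the paper:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("webcam_recording.avi")  # placeholder video source
ok, frame = cap.read()

# Placeholder initial window around the eye region (x, y, w, h).
x, y, w, h = 200, 150, 80, 40
roi = frame[y:y + h, x:x + w]

# Build a hue histogram of the region to track, ignoring dark/unsaturated pixels.
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

# Stop mean-shift after 10 iterations or when the window moves less than 1 px.
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # Shift the window towards the mode of the colour-probability image.
    _, track_window = cv2.meanShift(back_proj, track_window, term_crit)
```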


Human Factors in Computing Systems | 1996

New technological windows into mind: there is more in eyes and brains for human-computer interaction

Boris M. Velichkovsky; John Paulin Hansen

This is an overview of recent progress towards a fully subject-centered paradigm in human-computer interaction. At this new phase in the evolution of computer technologies it will be possible to take into account not just the characteristics of average human beings, but to create systems sensitive to the actual states of attention and intentions of interacting persons. We discuss some of these methods, concentrating on eye tracking and brain imaging. The development is based on the use of eye movement data for control of output devices, for gaze-contingent image processing, and for disambiguating verbal as well as nonverbal information.


Eye Tracking Research & Applications | 2008

Noise tolerant selection by gaze-controlled pan and zoom in 3D

Dan Witzner Hansen; Henrik H. T. Skovsgaard; John Paulin Hansen; Emilie Møllenbach

This paper presents StarGazer, a new 3D interface for gaze-based interaction and target selection using continuous pan and zoom. Through StarGazer we address the issues of interacting with graph-structured data and applications (e.g. gaze typing systems) using low-resolution eye trackers or small displays. We show that it is possible to make robust selections even with a large number of selectable items on the screen and noisy gaze trackers. A test with 48 subjects demonstrated that users who had never tried gaze interaction before could rapidly adapt to the navigation principles of StarGazer. We tested three different display sizes (down to PDA-sized displays) and found that large screens are faster to navigate than small displays and that the error rate is higher for the smallest display. Half of the subjects were exposed to severe noise deliberately added to the cursor positions. We found that this had a negative impact on efficiency. However, the users remained in control and the noise did not seem to affect the error rate. Additionally, three subjects tested the effects of adding temporal noise to simulate latency in the gaze tracker. Even with a significant latency (about 200 ms) the subjects were able to type at acceptable rates. In a second test, seven subjects were allowed to adjust the zooming speed themselves. They achieved typing rates of more than eight words per minute without using language modeling. We conclude that StarGazer is an intuitive 3D interface for gaze navigation, allowing more selectable objects to be displayed on the screen than the accuracy of the gaze trackers would otherwise permit.
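StarGazer's selection principle is continuous panning and zooming towards the current gaze point, so noisy fixations are smoothed out by the motion rather than requiring pixel-accurate pointing. A minimal sketch of one way such an update loop could look; the gain constants and the velocity mapping are assumptions for illustration, not the published implementation:

```python
def pan_zoom_step(gaze_x, gaze_y, center_x, center_y, view, dt,
                  pan_gain=1.5, zoom_gain=0.4):
    """Advance the virtual camera one small step towards the gaze point.

    view: dict with 'x', 'y' (world position of the view centre) and 'scale'.
    The gaze offset from the screen centre drives pan velocity; zooming
    proceeds continuously so targets grow until they are easy to select.
    """
    dx = gaze_x - center_x
    dy = gaze_y - center_y
    view["x"] += pan_gain * dx * dt / view["scale"]
    view["y"] += pan_gain * dy * dt / view["scale"]
    view["scale"] *= 1.0 + zoom_gain * dt  # constant zoom-in rate (assumed)
    return view
```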


International Conference on Human-Computer Interaction | 1995

Eye-gaze control of multimedia systems

John Paulin Hansen; Allan W. Andersen; Peter Roed

This chapter describes eye-gaze control of multimedia systems. Several non-intrusive systems for recording eye movements, gaze locations, pupil size, and blink frequencies have been introduced in recent years. The applications of this technology fall into two main categories: (1) active device control and (2) passive recordings. Active device control means the voluntary use of gaze positioning to make selections. Traditionally, passive recordings of the user's ocular behavior have been made for different analyses, for example human-computer interaction or newspaper reading. The chapter also describes a multimedia system that applies both types of recordings separately and introduces a qualitatively new interaction principle, termed "interest and emotion sensitive" media (IES), that emerges when the two types are integrated. It is suggested that interest and emotion sensitive media hold great potential for information systems in general, for example information stands and interactive television.


Acta Psychologica | 1991

The use of eye mark recordings to support verbal retrospection in software testing

John Paulin Hansen

The work of eight computer novices on a number of closely defined problems with a PC text editor was recorded with an ordinary video camera as well as with mobile equipment for eye mark recording. Immediately afterwards, each recording was used to support the subjects in two separate retrospections on their task performance. The verbal protocols from these sessions were scored for the occurrence of manipulative, cognitive, and visual operational comments. A comparison showed the eye mark retrospections to contain 50% more visual operational comments overall and to be slightly more problem-focused than the video retrospections. The video retrospections did, however, show the highest number of manipulative comments, whereas the number of cognitive comments was almost identical for the two kinds of retrospection. Eye mark recordings may thus satisfy the need for elaborative information on visual strategies without a concomitant loss of comments on the cognitive processes. Examples from the protocols illustrate how retrospection based on eye mark recordings can yield information of practical value for iterative software development and knowledge elicitation. The validity of the comments on eye mark recordings is discussed on the basis of experience with two recordings which, unknown to the subject, had actually originated from another subject.


Eye Tracking Research & Applications | 2012

Gaze input for mobile devices by dwell and gestures

Morten Lund Dybdal; Javier San Agustin; John Paulin Hansen

This paper investigates whether it is feasible to interact with the small screen of a smartphone using eye movements alone. Two of the most common gaze-based selection strategies, dwell-time selections and gaze gestures, are compared in a target selection experiment. Finger strokes and accelerometer-based interaction, i.e. tilting, are also considered. In an experiment with 11 subjects we found gaze interaction to perform worse than touch interaction, but with error rates and completion times comparable to accelerometer (tilt) interaction. Gaze gestures had a lower error rate and were faster than dwell selections by gaze, especially for small targets, suggesting that this method may be the best option for hands-free gaze control of smartphones.
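Dwell-time selection, one of the two strategies compared here, simply triggers a target once gaze has rested on it long enough. A minimal sketch of that logic; the 500 ms threshold and the way targets are fed in are assumptions for illustration, not details from the paper:

```python
import time

DWELL_THRESHOLD = 0.5  # seconds; typical dwell times are a few hundred ms (assumption)

class DwellSelector:
    """Trigger a selection when gaze stays on the same target long enough."""

    def __init__(self, threshold=DWELL_THRESHOLD):
        self.threshold = threshold
        self.current_target = None
        self.enter_time = None

    def update(self, target, now=None):
        """Feed the target currently under the gaze point (or None).

        Returns the target once per completed dwell, otherwise None.
        """
        now = time.monotonic() if now is None else now
        if target != self.current_target:
            # Gaze moved to a new target (or off all targets): restart the timer.
            self.current_target = target
            self.enter_time = now
            return None
        if target is not None and now - self.enter_time >= self.threshold:
            self.enter_time = float("inf")  # prevent repeated triggers on the same dwell
            return target
        return None
```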


Human Factors | 1995

An Experimental Investigation of Configural, Digital, and Temporal Information on Process Displays

John Paulin Hansen

The detection of a temporal event was investigated in two experiments comparing perceptual performance, in terms of reaction time, hit rate, and number of false alarms, on four separated and integrated display formats with and without historical information. In addition, the importance of supplementary digital information was examined. Experiment 1 showed that instantaneous displays were perceived faster than historical displays on high-rate events but not on low-rate events. The hit rate and the number of false alarms were higher on integrated displays than on separated displays. Experiment 2 used data with random noise added. Graphical integration across time and across the eight variables both contributed significantly to the ability to see trends in the presence of noise. Adding digital information to graphical displays was found to decrease reaction time in general, but the hit rate may be reduced by adding numbers to integrated displays without a temporal dimension.

Collaboration


Dive into John Paulin Hansen's collaborations.

Top Co-Authors

Dan Witzner Hansen, IT University of Copenhagen
Javier San Agustin, IT University of Copenhagen
Kenji Itoh, Tokyo Institute of Technology
Hirotaka Aoki, Tokyo Institute of Technology
Alexandre Alapetite, Technical University of Denmark