Arantxa Villanueva
University of Navarra
Publications
Featured research published by Arantxa Villanueva.
Systems, Man and Cybernetics | 2008
Arantxa Villanueva; Rafael Cabeza
The design of robust and high-performance gaze-tracking systems is one of the most important objectives of the eye-tracking community. In general, a subject calibration procedure is needed to learn system parameters and be able to estimate the gaze direction accurately. In this paper, we attempt to determine if subject calibration can be eliminated. A geometric analysis of a gaze-tracking system is conducted to determine user calibration requirements. The eye model used considers the offset between optical and visual axes, the refraction of the cornea, and Donders' law. This paper demonstrates the minimal number of cameras, light sources, and user calibration points needed to solve for gaze estimation. The underlying geometric model is based on glint positions and the pupil ellipse in the image, and the minimal hardware needed for this model is one camera and multiple light-emitting diodes. This paper proves that subject calibration is compulsory for correct gaze estimation and proposes a model based on a single point for subject calibration. The experiments carried out show that, although two glints and one calibration point are sufficient to perform gaze estimation (error ~1 degree), using more light sources and calibration points can result in lower average errors.
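The single-point subject calibration described above can be pictured as estimating a fixed rotation between the reconstructed optical axis and the true visual axis from one known fixation. The sketch below is a simplified illustration of that idea, not the paper's exact formulation; all names, the toy geometry, and the constant-rotation assumption are hypothetical.

```python
# Hedged sketch (not the paper's exact model): single-point subject calibration
# treated as a constant rotation between the optical and visual axes.
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def rotation_between(a, b):
    """Rotation matrix taking unit vector a onto unit vector b (Rodrigues).
    Assumes a and b are not (nearly) opposite."""
    a, b = unit(a), unit(b)
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    if np.isclose(c, 1.0):
        return np.eye(3)
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))

def calibrate_offset(optical_axis, eye_center, known_target):
    """One calibration point: rotation mapping the reconstructed optical axis
    onto the true visual axis (eye center -> fixated target)."""
    visual_axis = unit(np.asarray(known_target) - np.asarray(eye_center))
    return rotation_between(optical_axis, visual_axis)

def estimate_gaze(optical_axis, eye_center, R_offset, screen_z=0.0):
    """Apply the calibrated offset and intersect the visual axis with z = screen_z."""
    d = R_offset @ unit(optical_axis)
    t = (screen_z - eye_center[2]) / d[2]
    return eye_center + t * d

# Toy usage with made-up geometry (millimetres, screen plane at z = 0).
eye = np.array([0.0, 0.0, 600.0])
optical = unit(np.array([0.02, -0.01, -1.0]))   # from a hypothetical eye model
R = calibrate_offset(optical, eye, known_target=np.array([30.0, 20.0, 0.0]))
print(estimate_gaze(optical, eye, R))
```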
International Conference on Computer Vision | 2011
Leonardo De-Maeztu; Stefano Mattoccia; Arantxa Villanueva; Rafael Cabeza
Recent local stereo matching algorithms based on an adaptive-weight strategy achieve accuracy similar to global approaches. One of the major problems of these algorithms is that they are computationally expensive, and this complexity increases proportionally to the window size. This paper proposes a novel cost aggregation step with complexity independent of the window size (i.e., O(1)) that outperforms state-of-the-art O(1) methods. Moreover, compared to other O(1) approaches, our method does not rely on integral histograms, enabling aggregation using colour images instead of grayscale ones. Finally, to improve the results of the proposed algorithm, a disparity refinement pipeline is also proposed. The overall algorithm produces results comparable to those of state-of-the-art stereo matching algorithms.
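To illustrate what "complexity independent of the window size" means in cost aggregation, the sketch below sums a matching-cost slice over arbitrary windows via an integral image, so the per-pixel work does not grow with the window radius. This is a generic O(1) box filter for illustration only, not the paper's colour-weighted aggregation.

```python
# Illustrative sketch: O(1)-per-pixel box aggregation of one cost slice using
# an integral image; the same amount of work is done for any window radius.
import numpy as np

def integral_image(cost):
    return np.cumsum(np.cumsum(cost, axis=0), axis=1)

def box_aggregate(cost, radius):
    """Sum of cost over a (2*radius+1)^2 window around each pixel."""
    h, w = cost.shape
    ii = np.pad(integral_image(cost), ((1, 0), (1, 0)))   # zero row/col on top-left
    y0 = np.clip(np.arange(h) - radius, 0, h)
    y1 = np.clip(np.arange(h) + radius + 1, 0, h)
    x0 = np.clip(np.arange(w) - radius, 0, w)
    x1 = np.clip(np.arange(w) + radius + 1, 0, w)
    Y0, X0 = np.meshgrid(y0, x0, indexing="ij")
    Y1, X1 = np.meshgrid(y1, x1, indexing="ij")
    return ii[Y1, X1] - ii[Y0, X1] - ii[Y1, X0] + ii[Y0, X0]

# Usage: aggregate one disparity slice of a cost volume.
cost_slice = np.random.rand(48, 64).astype(np.float32)
aggregated = box_aggregate(cost_slice, radius=7)
```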
Eye Tracking Research & Applications | 2010
Dan Witzner Hansen; Javier San Agustin; Arantxa Villanueva
Homography normalization is presented as a novel gaze estimation method for uncalibrated setups. The method applies when head movements are present but imposes no requirements for camera calibration or geometric calibration. The method is geometrically and empirically demonstrated to be robust to head pose changes and, despite being less constrained than cross-ratio methods, it consistently performs favorably by several degrees on both simulated data and data from physical setups. The physical setups include the use of off-the-shelf web cameras with infrared light (night vision) and standard cameras with and without infrared light. The benefits of homography normalization and uncalibrated setups in general are also demonstrated by obtaining gaze estimates (in the visible spectrum) using only the screen reflections on the cornea.
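The core idea can be sketched as two homographies: one maps the four corneal reflections of known sources to a canonical unit square each frame, and a second, learned from a few calibration fixations, maps the normalized pupil centre to screen coordinates. The code below is a hedged illustration of that pipeline under these assumptions; function names and the setup are not from the paper.

```python
# Hedged sketch of homography normalization for gaze estimation.
import numpy as np

def homography(src, dst):
    """Direct Linear Transform from four (or more) point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def apply_h(H, p):
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

UNIT_SQUARE = [(0, 0), (1, 0), (1, 1), (0, 1)]

def normalize_pupil(glints_img, pupil_img):
    """Map the pupil centre into the canonical frame defined by the four glints."""
    return apply_h(homography(glints_img, UNIT_SQUARE), pupil_img)

def calibrate_screen_map(normalized_pupils, screen_points):
    """Second homography: normalized pupil positions -> screen coordinates."""
    return homography(normalized_pupils, screen_points)

# Per-frame gaze estimate once H_screen has been calibrated:
#   gaze = apply_h(H_screen, normalize_pupil(glints_img, pupil_img))
```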
Eye Tracking Research & Applications | 2008
Juan J. Cerrolaza; Arantxa Villanueva; Rafael Cabeza
Of gaze tracking techniques, video-oculography (VOG) is one of the most attractive because of its versatility and simplicity. VOG systems based on general-purpose mapping methods use simple polynomial expressions to estimate a user's point of regard. Although the behaviour of such systems is generally acceptable, a detailed study of the calibration process is needed to facilitate progress in improving accuracy and tolerance to user head movement. To date, there has been no thorough comparative study of how mapping equations affect final system response. After developing a taxonomic classification of calibration functions, we examine over 400,000 models and evaluate the validity of several conventional assumptions. The rigorous experimental procedure employed enabled us to optimize the calibration process for a real VOG gaze tracking system and, thereby, halve the calibration time without detrimental effect on accuracy or tolerance to head movement.
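As a concrete example of the kind of mapping equation this study classifies and compares, the sketch below fits a common second-order polynomial from the pupil-glint vector to screen coordinates by least squares over the calibration points. It is only one of the many calibration functions examined in the paper, and the specific term set here is illustrative.

```python
# Hedged sketch of an interpolation-based VOG calibration: a second-order
# polynomial in the pupil-glint vector (vx, vy), fitted by least squares.
import numpy as np

def design_matrix(v):
    vx, vy = v[:, 0], v[:, 1]
    return np.column_stack([np.ones_like(vx), vx, vy, vx * vy, vx**2, vy**2])

def fit_mapping(pupil_glint_vectors, screen_targets):
    """Least-squares fit of screen x and y as polynomials of the feature."""
    A = design_matrix(np.asarray(pupil_glint_vectors, dtype=float))
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(screen_targets, dtype=float),
                                 rcond=None)
    return coeffs                      # shape (6, 2)

def estimate_gaze(coeffs, pupil_glint_vector):
    return design_matrix(np.atleast_2d(pupil_glint_vector)) @ coeffs

# Usage with a hypothetical 9-point calibration grid:
#   coeffs = fit_mapping(vectors_9x2, targets_9x2)
#   gaze_xy = estimate_gaze(coeffs, current_vector)
```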
Pattern Recognition Letters | 2011
Leonardo De-Maeztu; Arantxa Villanueva; Rafael Cabeza
Due to the similarities between neighbouring pixels as well as the intensity-value differences between corresponding pixels, classical matching measures based on intensity similarity produce slightly imprecise results. In this study, a gradient similarity matching measure was implemented in a state-of-the-art local stereo-matching method (an adaptive support-weight algorithm). The new matching measure improved the precision of the results over the classical measures. On the Middlebury stereo benchmark, when high accuracy was required in the disparity results, our algorithm consistently outperformed other adaptive support-weight algorithms using different similarity measures, and it was the best local area-based method among the permanent Middlebury table entries.
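A gradient-based matching cost of the general kind discussed here can be sketched as a truncated absolute difference of horizontal gradients, optionally blended with an intensity term. The code below is a hedged illustration in that spirit; the paper's exact measure and parameters should be taken from the publication itself.

```python
# Hedged sketch of a gradient-based per-pixel matching cost for stereo
# (grayscale images, integer disparity).
import numpy as np

def x_gradient(img):
    g = np.zeros(img.shape, dtype=float)
    g[:, 1:-1] = (img[:, 2:].astype(float) - img[:, :-2].astype(float)) / 2.0
    return g

def matching_cost(left, right, disparity, tau_grad=2.0, tau_int=7.0, alpha=0.9):
    """Cost of matching left(x, y) with right(x - disparity, y)."""
    gl, gr = x_gradient(left), x_gradient(right)
    shifted_r = np.roll(right.astype(float), disparity, axis=1)
    shifted_gr = np.roll(gr, disparity, axis=1)
    cost_grad = np.minimum(np.abs(gl - shifted_gr), tau_grad)   # truncated gradient term
    cost_int = np.minimum(np.abs(left.astype(float) - shifted_r), tau_int)
    return alpha * cost_grad + (1.0 - alpha) * cost_int

# A full pipeline would aggregate this cost with adaptive support weights
# and pick the winner-take-all disparity per pixel.
```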
EURASIP Journal on Image and Video Processing | 2007
Arantxa Villanueva; Rafael Cabeza
One of the most confusing aspects encountered when entering the field of gaze tracking technology is the wide variety, in terms of hardware equipment, of available systems that provide solutions to the same problem, that is, determining the point the subject is looking at. The calibration process generally makes it possible to adjust nonintrusive trackers, based on quite different hardware and image features, to the subject. The drawback of this simple procedure is that it allows the system to work properly but at the expense of a lack of control over the intrinsic behavior of the tracker. The objective of the present article is to overcome this obstacle by exploring more deeply the elements of a video-oculographic system, that is, eye, camera, lighting, and so forth, from a purely mathematical and geometrical point of view. The main contribution is to determine the minimum number of hardware elements and image features that are needed to determine the point the subject is looking at. A model has been constructed based on pupil contour and multiple lighting, and successfully tested with real subjects. In addition, theoretical aspects of video-oculographic systems have been thoroughly reviewed in order to build a theoretical basis for further studies.
Eye Tracking Research & Applications | 2012
Laura Sesma; Arantxa Villanueva; Rafael Cabeza
Low-cost eye tracking is currently a challenging research topic for the eye tracking community. Gaze tracking based on a web cam and without infrared light is a sought-after goal that would broaden the applications of eye tracking systems. Web-cam-based eye tracking introduces new challenges, such as a wider field of view and lower image quality. In addition, the absence of infrared light means that glints can no longer be used as a tracking feature. In this paper, a thorough study has been carried out to evaluate the pupil (iris) center-eye corner (PC-EC) vector as a feature for interpolation-based gaze estimation in low-cost eye tracking, as it is considered to be partially equivalent to the pupil center-corneal reflection (PC-CR) vector. The analysis is based on both simulated and real data. The experiments show that eye corner positions in the image move slightly when the user is looking at different points of the screen, even with a static head position. This lowers the achievable accuracy of the gaze estimation, significantly reducing the accuracy of the system under standard working conditions to 2-3 degrees.
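For readers unfamiliar with the PC-EC feature, the sketch below shows one plausible way to compute it: the vector from a detected eye corner to the pupil (iris) centre, normalized by the inter-corner distance so it is less sensitive to scale changes. The normalization choice and names are assumptions for illustration; landmark detection itself is assumed to come from an external tracker, and the feature would then feed the same kind of polynomial interpolation used with pupil-glint vectors.

```python
# Hedged sketch: PC-EC feature from detected eye landmarks (image pixels).
import numpy as np

def pc_ec_feature(pupil_center, inner_corner, outer_corner):
    pupil = np.asarray(pupil_center, dtype=float)
    inner = np.asarray(inner_corner, dtype=float)
    outer = np.asarray(outer_corner, dtype=float)
    scale = np.linalg.norm(outer - inner)      # normalization against scale changes
    return (pupil - inner) / scale

# Example with hypothetical image coordinates:
print(pc_ec_feature((312.4, 208.1), (285.0, 214.5), (341.7, 213.9)))
```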
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012
Leonardo De-Maeztu; Arantxa Villanueva; Rafael Cabeza
Adaptive-weight algorithms currently represent the state of the art in local stereo matching. However, due to their computational requirements, these types of solutions are not suitable for real-time implementation. Here, we present a novel aggregation method inspired by the anisotropic diffusion technique used in image filtering. The proposed aggregation algorithm produces results similar to adaptive-weight solutions while reducing the computational requirements. Moreover, near real-time performance is demonstrated with a GPU implementation of the algorithm.
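As a rough illustration of diffusion-style aggregation, the sketch below iteratively smooths a cost slice with per-edge weights derived from intensity differences in a guidance image, in the spirit of Perona-Malik anisotropic diffusion. It is a hedged stand-in, not the paper's algorithm, and its update rule, parameters, and wrap-around border handling are simplifications.

```python
# Hedged sketch of diffusion-style cost aggregation on one disparity slice.
import numpy as np

def diffuse_cost(cost, guide, iterations=30, kappa=10.0, lam=0.2):
    """Iteratively average cost with neighbours, weighted by guide-image similarity.
    Borders wrap around (np.roll) for brevity."""
    c = cost.astype(float).copy()
    g = guide.astype(float)
    for _ in range(iterations):
        total = np.zeros_like(c)
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            neighbor_c = np.roll(c, shift, axis=axis)
            neighbor_g = np.roll(g, shift, axis=axis)
            w = np.exp(-((g - neighbor_g) / kappa) ** 2)   # conduction weight
            total += w * (neighbor_c - c)
        c += lam * total
    return c

# Usage: aggregate each disparity slice with the left image as guidance,
# then take the winner-take-all disparity per pixel.
```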
IEEE Transactions on Biomedical Engineering | 2012
Laura Sesma-Sanchez; Arantxa Villanueva; Rafael Cabeza
Video-oculography (VOG) is one of the most commonly used techniques for gaze tracking because it enables nonintrusive eye detection and tracking. Improving eye tracking accuracy and tolerance to user head movements is a common task in the field of gaze tracking; thus, a thorough study of how binocular information can improve a gaze tracking system's accuracy and tolerance to user head movements has been carried out. The analysis is focused on interpolation-based methods and systems with one and two infrared lights. New mapping features are proposed based on the commonly used pupil-glint vector, using different distances as the normalization factor. For this study, an experimental procedure with six users based on a real VOG gaze tracking system was performed, and the results were contrasted with an eye simulator. Important conclusions have been obtained in terms of configuration, equation, and mapping features, such as the superior performance of the interglint distance as the normalization factor. Furthermore, the binocular gaze tracking system was found to have a similar or improved level of accuracy compared to that of the monocular gaze tracking system.
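The sketch below shows one plausible form of the kind of normalized mapping feature compared in the study: the pupil-glint vector divided by the distance between the two glints, computed per eye and concatenated for a binocular estimate. The exact features and equations are in the paper; names and the concatenation choice here are assumptions.

```python
# Hedged sketch of a pupil-glint feature normalized by the interglint distance.
import numpy as np

def normalized_pupil_glint(pupil, glint1, glint2):
    pupil, g1, g2 = (np.asarray(p, dtype=float) for p in (pupil, glint1, glint2))
    interglint = np.linalg.norm(g2 - g1)       # normalization factor
    midpoint = (g1 + g2) / 2.0
    return (pupil - midpoint) / interglint

def binocular_feature(left_eye, right_eye):
    """left_eye / right_eye: (pupil, glint1, glint2) tuples in image coordinates."""
    return np.concatenate([normalized_pupil_glint(*left_eye),
                           normalized_pupil_glint(*right_eye)])

# The resulting 4-vector can feed the same polynomial regression used for
# monocular interpolation-based gaze estimation.
```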
Image and Vision Computing | 2006
Arantxa Villanueva; Rafael Cabeza; Sonia Porta
Lately, eye tracking system development and applications have become increasingly interesting. Efforts in eye tracking research cover a broad spectrum of fields, with mathematical modeling being an important aspect and probably one of the least explored topics. In order to build a robust and efficient model, a deep mathematical review of the geometry and the intrinsic nature of eye tracking systems is needed. Video-oculography is one of the most popular eye tracking methods due to its non-intrusive nature. The images acquired from the user's eye are analyzed in order to identify selected features, and through a calibration process the parameters of a model are adjusted for each user. This paper undertakes the first step of a more extensive model and presents a simple expression for pupil orientation based on physical parameters of the framework. The proposed model requires alternative calibration strategies, depending on the number of parameters employed, to obtain efficient behavior. It exhibits lower errors than other generic mathematical expressions, which normally need more calibration points to construct a competent model. The paper starts by modeling a whole video-oculographic system and, once the model is derived, addresses different ways of simplifying it in order to obtain a simpler and more efficient form. Lastly, an experimental validation is provided.