Kenneth Alberto Funes Mora
Idiap Research Institute
Publications
Featured research published by Kenneth Alberto Funes Mora.
computer vision and pattern recognition | 2012
Kenneth Alberto Funes Mora; Jean-Marc Odobez
This paper addresses the problem of free gaze estimation under unrestricted head motion. More precisely, unlike previous approaches that mainly focus on estimating gaze towards a small planar screen, we propose a method to estimate the gaze direction in 3D space. In this context the paper makes the following contributions: (i) leveraging the Kinect device, we propose a multimodal method that relies on depth sensing to obtain robust and accurate head pose tracking even under large head poses, and on visual data to obtain the remaining eye-in-head gaze directional information from the eye image; (ii) a rectification scheme for the eye image that exploits the 3D mesh tracking, allowing head-pose-free estimation of the eye-in-head gaze direction; (iii) a simple way of collecting ground-truth data thanks to the Kinect device. Results on three users demonstrate the great potential of our approach.
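The core geometric step behind contributions (i) and (ii), composing the depth-based head pose with the eye-in-head gaze direction to obtain gaze in 3D space, can be sketched as below. This is a minimal illustration with assumed names and coordinate conventions, not the paper's implementation; it assumes the tracker provides the head rotation as a 3x3 matrix and the eye-in-head gaze as a unit vector in head coordinates.

```python
import numpy as np

def gaze_to_world(R_head, g_eye):
    """Rotate an eye-in-head gaze direction into world coordinates.

    R_head: 3x3 head rotation matrix from the depth-based head pose tracker.
    g_eye:  unit gaze vector expressed in the head coordinate system.
    """
    g_world = R_head @ g_eye
    return g_world / np.linalg.norm(g_world)

# Example: head turned 30 degrees around the vertical axis, eyes looking
# straight ahead in the head frame.
theta = np.radians(30.0)
R_head = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                   [ 0.0,           1.0, 0.0          ],
                   [-np.sin(theta), 0.0, np.cos(theta)]])
print(gaze_to_world(R_head, np.array([0.0, 0.0, 1.0])))
```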
computer vision and pattern recognition | 2014
Kenneth Alberto Funes Mora; Jean-Marc Odobez
We propose a head pose invariant gaze estimation model for distant RGB-D cameras. It relies on a geometric understanding of the 3D gaze action and of the generation of eye images. By introducing a semantic segmentation of the eye region within a generative process, the model (i) avoids the critical feature tracking of geometric approaches, which requires high-resolution images; (ii) decouples the person-dependent geometry from the ambient conditions, allowing adaptation to different conditions without retraining. Priors in the generative framework are adequate for training from few samples. In addition, the model is capable of gaze extrapolation, allowing for less restrictive training schemes. Comparisons with state-of-the-art methods validate these properties, which make our method highly valuable for addressing many diverse tasks in sociology, HRI and HCI.
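A rough sketch of the geometric decoupling the abstract describes: here the person-dependent quantity is taken to be an eyeball-center offset in the head frame, and gaze is the ray from that center through the 3D pupil position recovered from the segmented eye region. All names and the specific parameterisation are illustrative assumptions, not the paper's model.

```python
import numpy as np

def eyeball_center_world(R_head, t_head, offset_head):
    """Map a person-specific eyeball-center offset from head to world coordinates."""
    return R_head @ offset_head + t_head

def gaze_direction(eyeball_center, pupil_position):
    """Gaze as the unit ray from the eyeball center through the 3D pupil position."""
    v = pupil_position - eyeball_center
    return v / np.linalg.norm(v)

# The offset is the person-dependent geometry; the pupil position is what the
# segmented eye region provides per frame, so changing ambient conditions
# affect only the segmentation, not the geometric parameters.
```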
eye tracking research & application | 2014
Kenneth Alberto Funes Mora; Florent Monay; Jean-Marc Odobez
The lack of a common benchmark for the evaluation of the gaze estimation task from RGB and RGB-D data is a serious limitation for distinguishing the advantages and disadvantages of the many proposed algorithms found in the literature. This paper intends to overcome this limitation by introducing a novel database along with a common framework for the training and evaluation of gaze estimation approaches. In particular, we have designed this database to enable the evaluation of the robustness of algorithms with respect to the main challenges associated with this task: i) Head pose variations; ii) Person variation; iii) Changes in ambient and sensing conditions; and iv) Type of target: screen or 3D object.
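For benchmarks of this kind, the standard comparison metric is the angular error between estimated and ground-truth 3D gaze vectors, aggregated per evaluation condition. The sketch below is a generic illustration of that protocol, not the database's official evaluation code; all function names are assumptions.

```python
import numpy as np

def angular_error_deg(g_est, g_true):
    """Angle in degrees between estimated and ground-truth 3D gaze vectors."""
    cos = np.dot(g_est, g_true) / (np.linalg.norm(g_est) * np.linalg.norm(g_true))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def mean_error_per_condition(errors, conditions):
    """Aggregate errors per condition: head-pose bin, person, ambient/sensing
    setup, or target type, to probe each robustness axis separately."""
    out = {}
    for e, c in zip(errors, conditions):
        out.setdefault(c, []).append(e)
    return {c: float(np.mean(v)) for c, v in out.items()}
```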
international conference on multimodal interfaces | 2013
Kenneth Alberto Funes Mora; Laurent Son Nguyen; Daniel Gatica-Perez; Jean-Marc Odobez
In this paper we propose a system capable of accurately coding gazing events in natural dyadic interactions. Contrary to previous works, our approach exploits the actual continuous gaze direction of a participant by leveraging remote RGB-D sensors and a head-pose-independent gaze estimation method. Our contributions are: i) we propose a system setup built from low-cost sensors and a technique to easily calibrate these sensors in a room with minimal assumptions; ii) we propose a method which, given short manual annotations, can automatically detect gazing events in the rest of the sequence; iii) we demonstrate on substantially long, natural dyadic data that high accuracy can be obtained, showing the potential of our system. Our approach is non-invasive and does not require collaboration from the interactors. These characteristics are highly valuable in psychology and sociology research.
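As a much simplified, threshold-based stand-in for the learned event detector of contribution ii), one can label a frame as a gazing event when the gaze ray points within a small angle of the partner's head, and keep only sufficiently long runs. The names, angle threshold and minimum duration below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def gazing_at_partner(gaze_origins, gaze_dirs, partner_heads,
                      angle_thresh_deg=10.0, min_frames=5):
    """Label frames where the gaze ray points at the partner's head.

    gaze_origins, gaze_dirs, partner_heads: (T, 3) arrays in a common
    world frame (the calibrated multi-sensor room coordinates).
    """
    to_partner = partner_heads - gaze_origins
    to_partner = to_partner / np.linalg.norm(to_partner, axis=1, keepdims=True)
    dirs = gaze_dirs / np.linalg.norm(gaze_dirs, axis=1, keepdims=True)
    ang = np.degrees(np.arccos(np.clip(np.sum(dirs * to_partner, axis=1), -1, 1)))
    raw = ang < angle_thresh_deg
    # Keep only runs of at least min_frames to suppress spurious flickers.
    labels = np.zeros_like(raw)
    start = None
    for i, v in enumerate(np.append(raw, False)):
        if v and start is None:
            start = i
        elif not v and start is not None:
            if i - start >= min_frames:
                labels[start:i] = True
            start = None
    return labels
```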
international conference on multimodal interfaces | 2015
Catharine Oertel; Kenneth Alberto Funes Mora; Joakim Gustafson; Jean-Marc Odobez
Estimating a silent participant's degree of engagement and role within a group discussion can be challenging, as no speech-related cues are available at the given time. Having this information available, however, can provide important insights into the dynamics of the group as a whole. In this paper, we study the classification of listeners into several categories (attentive listener, side participant and bystander). We devised a thin-sliced perception test in which subjects were asked to assess listener roles and engagement levels in 15-second video clips taken from a corpus of group interviews. Results show that humans are usually able to assess silent participant roles. Using these annotations together with a set of multimodal low-level features, such as past speaking activity, backchannels (both visual and verbal) and gaze patterns, we identified the features that distinguish between the different listener categories. Moreover, the results show that many of the audio-visual effects observed on listeners in dyadic interactions also hold for multi-party interactions. A preliminary classifier achieves an accuracy of 64%.
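A preliminary classifier of the kind the abstract mentions could be sketched as follows, using scikit-learn (an assumption; the paper does not state its toolkit) on placeholder features in the spirit of those listed: past speaking activity, backchannel counts and gaze patterns. The random data stands in for real annotated slices, and the pipeline is not the authors' classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder features per 15-second slice: speaking time, verbal backchannel
# count, visual backchannel count, gaze-at-speaker ratio (all hypothetical).
X = rng.random((90, 4))
# Placeholder labels: 0 attentive listener, 1 side participant, 2 bystander.
y = rng.integers(0, 3, 90)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print(cross_val_score(clf, X, y, cv=5).mean())
```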
international conference on multimodal interfaces | 2016
Catharine Oertel; José Lopes; Yu Yu; Kenneth Alberto Funes Mora; Joakim Gustafson; Alan W. Black; Jean-Marc Odobez
Current dialogue systems typically lack variation in audio-visual feedback tokens: either they do not encompass feedback tokens at all, or they only support a limited set of stereotypical functions. However, this does not mirror the subtleties of spontaneous conversations. If we want to be able to build an artificial listener, as a first step towards building an empathetic artificial agent, we also need to be able to synthesize more subtle audio-visual feedback tokens. In this study, we devised an array of monomodal and multimodal binary-comparison perception tests and experiments to understand how different realisations of verbal and visual feedback tokens influence third-party perception of the degree of attentiveness. This allowed us to investigate i) which features (amplitude, frequency, duration, ...) of the visual feedback influence attentiveness perception; ii) whether visual or verbal backchannels are perceived as more attentive; iii) whether fusing unimodal tokens with low perceived attentiveness yields a higher degree of perceived attentiveness than unimodal tokens with high perceived attentiveness taken alone; and iv) the automatic ranking of audio-visual feedback tokens in terms of the conveyed degree of attentiveness.
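One simple way to turn such binary comparison judgements into the automatic ranking of item iv) is the per-token win rate; a Bradley-Terry model would be the more principled alternative. The sketch below is generic, with hypothetical token labels, and is not the paper's analysis.

```python
from collections import defaultdict

def rank_tokens(pairwise_judgments):
    """Rank feedback tokens by win rate over pairwise comparisons.

    pairwise_judgments: iterable of (winner, loser) token identifiers,
    one entry per 'which clip seems more attentive?' judgement.
    """
    wins = defaultdict(int)
    totals = defaultdict(int)
    for winner, loser in pairwise_judgments:
        wins[winner] += 1
        totals[winner] += 1
        totals[loser] += 1
    rates = {t: wins[t] / totals[t] for t in totals}
    return sorted(rates, key=rates.get, reverse=True)

# Example with hypothetical token labels:
print(rank_tokens([("nod_large", "nod_small"), ("mhm", "nod_small"),
                   ("nod_large", "mhm")]))
```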
international conference on multimodal interfaces | 2013
Kenneth Alberto Funes Mora
In this PhD thesis the problem of 3D head pose and gaze tracking with minimal user cooperation is addressed. By exploiting the characteristics of RGB-D sensors, contributions have been made to the problems that arise from this lack of cooperation: in particular, head pose and inter-person appearance variability, as well as low-resolution handling. The resulting system has enabled diverse multimodal applications; in particular, recent work combined multiple RGB-D sensors to detect gazing events in dyadic interactions. The research plan consists of: i) improving the robustness, accuracy and usability of the head pose and gaze tracking system; ii) using additional multimodal cues, such as speech and dynamic context, to train and adapt gaze models in an unsupervised manner; iii) extending the application of 3D gaze estimation to diverse multimodal applications, including visual focus of attention tasks involving multiple visual targets, e.g. people in a meeting-like setup.
international conference on image processing | 2013
Kenneth Alberto Funes Mora; Jean-Marc Odobez
Proceedings of the 2014 workshop on Understanding and Modeling Multiparty, Multimodal Interactions | 2014
Catharine Oertel; Kenneth Alberto Funes Mora; Samira Sheikhi; Jean-Marc Odobez; Joakim Gustafson
Archive | 2014
Kenneth Alberto Funes Mora; Florent Monay; Jean-Marc Odobez