Marc-Antoine Nüssli
École Polytechnique Fédérale de Lausanne
Publications
Featured research published by Marc-Antoine Nüssli.
Computers in Human Behavior | 2011
Mirweis Sangin; Gaëlle Molinari; Marc-Antoine Nüssli; Pierre Dillenbourg
We report an empirical study investigating the effects of a cognition-related awareness tool, which provides learners with cues about their peer's level of prior knowledge, on collaborative outcomes and processes. Sixty-four university students participated in a remote computer-mediated dyadic learning scenario. Co-learners were provided (or not) with a visual representation of their peer's level of prior knowledge through what we refer to as a knowledge awareness tool (KAT). The results show that providing co-learners with objective cues about the level of their peer's prior knowledge positively impacts learning outcomes. In addition, this effect seems to be mediated by the fact that co-learners provided with these objective cues become more accurate in estimating their partner's knowledge, an accuracy that in turn predicts higher outcomes. Analyses of the verbal interactions at the process level indicate that the KAT seems to sensitize co-learners to the fragile nature of their partner's as well as their own prior knowledge. The beneficial effect of the KAT seems to rely mainly on this induction of epistemic uncertainty, which implicitly triggers compensatory socio-cognitive strategies that appear to be beneficial to the learning process.
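The mediation argument above (KAT exposure improves the accuracy of partner-knowledge estimates, which in turn predicts learning gains) can be pictured with a short analysis sketch. The Python snippet below is a hypothetical illustration, not the study's analysis script; the accuracy measure and the regression-based mediation check are assumptions chosen for clarity.

```python
# Hypothetical sketch of the mediation idea described above:
# (1) does the KAT condition improve the accuracy of partner-knowledge estimates,
# and (2) does that accuracy in turn predict learning outcomes?
# Variable names and measures are illustrative, not the study's.
import numpy as np
import statsmodels.api as sm

def estimation_accuracy(estimated_scores, actual_pretest_scores):
    """Negative absolute error between a learner's estimate of the
    partner's prior knowledge and the partner's actual pretest score."""
    return -np.abs(np.asarray(estimated_scores) - np.asarray(actual_pretest_scores))

def simple_mediation(kat_condition, accuracy, learning_gain):
    """Two regressions in the spirit of a Baron-and-Kenny mediation check:
    condition -> accuracy, then (condition + accuracy) -> learning gain."""
    a_path = sm.OLS(accuracy, sm.add_constant(kat_condition)).fit()
    b_path = sm.OLS(learning_gain,
                    sm.add_constant(np.column_stack([kat_condition, accuracy]))).fit()
    return a_path.params, b_path.params
```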
international conference on multimedia and expo | 2009
Yan Liu; Pei-Yun Hsueh; Jennifer Lai; Mirweis Sangin; Marc-Antoine Nüssli; Pierre Dillenbourg
In this paper, we analyze complex gaze-tracking data from a collaborative task and apply machine learning models to automatically predict skill-level differences between participants. Specifically, we present findings that address the two primary challenges for this prediction task: (1) extracting meaningful features from the gaze information, and (2) casting the prediction task as a machine learning (ML) problem. The results show that our approach based on profile hidden Markov models is up to 96% accurate and can make the determination as early as one minute into the collaboration, with only 5% of gaze observations registered. We also provide a qualitative analysis of gaze patterns that reveal the relative expertise levels of the paired users in a collaborative learning user study.
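As a rough illustration of the HMM-based classification idea, the sketch below trains one hidden Markov model per skill-composition class on gaze-feature sequences and labels a new sequence by comparing log-likelihoods. It is a minimal sketch assuming the hmmlearn library and plain Gaussian HMMs rather than the profile HMMs used in the paper; the feature choices are illustrative.

```python
# Hypothetical sketch: classify pair skill composition from gaze sequences
# by training one HMM per class and comparing log-likelihoods.
# (The paper uses profile HMMs; a plain Gaussian HMM stands in here.)
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed dependency

def fit_class_hmm(sequences, n_states=3):
    """Fit one HMM to all gaze sequences of a class.
    Each sequence is an (n_fixations, n_features) array,
    e.g. fixation x/y coordinates and duration."""
    X = np.concatenate(sequences)
    lengths = [len(s) for s in sequences]
    hmm = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    hmm.fit(X, lengths)
    return hmm

def predict_composition(sequence, hmm_same, hmm_mixed):
    """Assign the class whose HMM gives the higher log-likelihood."""
    return "mixed" if hmm_mixed.score(sequence) > hmm_same.score(sequence) else "same"
```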
computer supported collaborative learning | 2009
Marc-Antoine Nüssli; Patrick Jermann; Mirweis Sangin; Pierre Dillenbourg
This study explores the possibility of using machine learning techniques to build predictive models of performance in collaborative induction tasks. More specifically, we explored how signal-level data, such as eye-gaze data and raw speech, may be used to build such models. The results show that such low-level features indeed have some potential to predict performance in these tasks. Implications for the design of future applications are briefly discussed.
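To make the signal-level approach concrete, here is a minimal sketch of how raw gaze and speech streams might be summarised into a fixed-length feature vector and fed to an off-the-shelf classifier. The feature set, the SVM choice, and the variable names are assumptions for illustration, not the models reported in the study.

```python
# Hypothetical sketch: predict task performance from signal-level features.
# Feature definitions are illustrative stand-ins, not those used in the study.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def extract_features(fixation_durations, saccade_lengths, speech_activity):
    """Summarise raw gaze and speech signals into a fixed-length vector."""
    return np.array([
        np.mean(fixation_durations),   # average fixation duration (ms)
        np.std(fixation_durations),    # variability of fixation duration
        np.mean(saccade_lengths),      # average saccade amplitude
        np.mean(speech_activity),      # proportion of time spent speaking
    ])

# X: one feature vector per dyad, y: high/low performance label
# scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
```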
international conference on multimodal interfaces | 2010
Weifeng Li; Marc-Antoine Nüssli; Patrick Jermann
The use of dual eye-tracking is investigated in a collaborative game setting. Social context influences individual gaze and action during a collaborative Tetris game: results show that experts as well as novices adapt their playing style when interacting in mixed-ability pairs. The long-term goal of our work is to design adaptive gaze-awareness tools that take the pair composition into account. We therefore investigate the automatic detection (or recognition) of pair composition using dual gaze-based as well as action-based multimodal features. We describe several methods for improving detection (or recognition) and experimentally demonstrate their effectiveness, especially in situations where the collected gaze data are noisy.
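The detection task described above can be pictured as building one feature vector per pair, combining a dual-gaze coupling measure with action-based asymmetries, and training a standard classifier on it. The sketch below is a hypothetical simplification; the coupling measure, the 100-pixel threshold, and the random-forest classifier are illustrative assumptions, not the multimodal features or methods of the paper.

```python
# Hypothetical sketch: detect pair composition from dual-gaze and action features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def gaze_coupling(gaze_a, gaze_b):
    """Fraction of samples where both players look at roughly the same
    screen region (a crude proxy for gaze coupling).
    gaze_a, gaze_b: (n_samples, 2) arrays of x/y coordinates."""
    return np.mean(np.linalg.norm(gaze_a - gaze_b, axis=1) < 100)  # pixels

def pair_features(gaze_a, gaze_b, actions_a, actions_b):
    return np.array([
        gaze_coupling(gaze_a, gaze_b),
        len(actions_a) - len(actions_b),   # action-rate asymmetry
        np.var(gaze_a[:, 1]),              # vertical gaze spread, player A
        np.var(gaze_b[:, 1]),              # vertical gaze spread, player B
    ])

# clf = RandomForestClassifier().fit(X_pairs, pair_labels)
# where pair_labels are e.g. "expert-expert", "mixed", "novice-novice"
```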
intelligent user interfaces | 2013
Marc-Antoine Nüssli; Patrick Jermann; Mirweis Sangin; Pierre Dillenbourg
Previous studies have shown that people tend to look at a visual referent just before saying the corresponding word, and similarly, listeners look at the referent right after hearing the name of the object. We first replicated these results in an ecologically valid situation in which collaborators are engaged in an unconstrained dialogue. Secondly, building upon these findings, we developed a model, called REGARD, which monitors speech and gaze during collaboration in order to automatically detect associations between words and objects in the shared workspace. The results are very promising, showing that the model is able to correctly detect most of the references made by the collaborators. Application perspectives are briefly discussed.
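The core idea, that a speaker fixates a referent shortly before naming it, can be sketched as a simple co-occurrence count between spoken words and fixated objects within a small temporal window. The snippet below is a hypothetical illustration of that eye-voice-span heuristic, not the REGARD model itself; the window size and count threshold are invented for the example.

```python
# Hypothetical sketch: associate words with objects that the speaker
# fixated shortly before uttering the word, keeping pairs that co-occur
# often enough. Window and threshold values are illustrative only.
from collections import Counter, defaultdict

def detect_references(words, fixations, window=2.0, min_count=3):
    """words: list of (timestamp, word) for the speaker's transcript;
    fixations: list of (start, end, object_id) for the speaker's gaze."""
    counts = defaultdict(Counter)
    for t_word, word in words:
        for start, end, obj in fixations:
            # fixation overlaps the window just before the word is spoken
            if end >= t_word - window and start <= t_word:
                counts[word][obj] += 1
    return {w: c.most_common(1)[0][0]
            for w, c in counts.items()
            if c.most_common(1)[0][1] >= min_count}
```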
eye tracking research & application | 2008
Mauro Cherubini; Marc-Antoine Nüssli; Pierre Dillenbourg
conference on computer supported cooperative work | 2012
Patrick Jermann; Marc-Antoine Nüssli
Journal of Eye Movement Research | 2010
Mauro Cherubini; Marc-Antoine Nüssli; Pierre Dillenbourg
computer supported collaborative learning | 2011
Patrick Jermann; Dejana Mullins; Marc-Antoine Nüssli; Pierre Dillenbourg
BCS '10 Proceedings of the 24th BCS Interaction Specialist Group Conference | 2010
Patrick Jermann; Marc-Antoine Nüssli; Weifeng Li