Publication


Featured research published by Mirweis Sangin.


Computers in Human Behavior | 2011

Facilitating peer knowledge modeling: Effects of a knowledge awareness tool on collaborative learning outcomes and processes

Mirweis Sangin; Gaëlle Molinari; Marc-Antoine Nüssli; Pierre Dillenbourg

We report an empirical study investigating the effects, on collaborative outcomes and processes, of a cognition-related awareness tool providing learners with cues about their peers' level of prior knowledge. Sixty-four university students participated in a remote computer-mediated dyadic learning scenario. Co-learners were provided (or not) with a visual representation of their peer's level of prior knowledge through what we refer to as a knowledge awareness tool (KAT). The results show that providing co-learners with objective cues about the level of their peers' prior knowledge positively impacts learning outcomes. In addition, this effect seems to be mediated by the fact that co-learners provided with these objective cues become more accurate in estimating their partner's knowledge, and this accuracy predicts higher outcomes. Analyses at the process level of the verbal interactions indicate that the KAT seems to sensitize co-learners to the fragile nature of their partner's as well as their own prior knowledge. The beneficial effect of the KAT seems to rely mainly on this induction of epistemic uncertainty, which implicitly triggers compensatory socio-cognitive strategies; strategies that appear to be beneficial to the learning process.


international conference on multimedia and expo | 2009

Who is the expert? Analyzing gaze data to predict expertise level in collaborative applications

Yan Liu; Pei-Yun Hsueh; Jennifer Lai; Mirweis Sangin; Marc-Antoine Nüssli; Pierre Dillenbourg

In this paper, we analyze complex gaze tracking data in a collaborative task and apply machine learning models to automatically predict skill-level differences between participants. Specifically, we present findings that address the two primary challenges for this prediction task: (1) extracting meaningful features from the gaze information, and (2) casting the prediction task as a machine learning (ML) problem. The results show that our approach based on profile hidden Markov models is up to 96% accurate and can make the determination as fast as one minute into the collaboration, with only 5% of gaze observations registered. We also provide a qualitative analysis of gaze patterns that reveal the relative expertise level of the paired users in a collaborative learning user study.
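The likelihood-based classification idea behind this paper can be illustrated with a minimal sketch. This is not the paper's implementation: it uses plain discrete HMMs (not profile HMMs), and the gaze symbols, model parameters, and the `classify` helper are all toy assumptions. One HMM per expertise class scores a gaze-symbol sequence with the forward algorithm, and the class with the higher likelihood wins.

```python
import math

def logsumexp(xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM
    (forward algorithm; all parameters are given in log space)."""
    n = len(start)
    alpha = [start[i] + emit[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [logsumexp([alpha[j] + trans[j][i] for j in range(n)]) + emit[i][o]
                 for i in range(n)]
    return logsumexp(alpha)

lg = math.log
# Hypothetical gaze symbols: 0 = fixation on the task area, 1 = fixation elsewhere.
# Toy parameters (NOT from the paper): "experts" dwell on the task area.
MODELS = {
    "expert": ([lg(0.5)] * 2,
               [[lg(0.9), lg(0.1)], [lg(0.1), lg(0.9)]],
               [[lg(0.9), lg(0.1)], [lg(0.8), lg(0.2)]]),
    "novice": ([lg(0.5)] * 2,
               [[lg(0.9), lg(0.1)], [lg(0.1), lg(0.9)]],
               [[lg(0.3), lg(0.7)], [lg(0.2), lg(0.8)]]),
}

def classify(obs):
    """Pick the class whose HMM assigns the gaze sequence the highest likelihood."""
    return max(MODELS, key=lambda name: forward_log_likelihood(obs, *MODELS[name]))
```

For example, `classify([0, 0, 0, 1, 0, 0])` returns `"expert"` under these toy parameters, since that sequence dwells mostly on the task area.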


computer supported collaborative learning | 2009

Collaboration and abstract representations: towards predictive models based on raw speech and eye-tracking data

Marc-Antoine Nüssli; Patrick Jermann; Mirweis Sangin; Pierre Dillenbourg

This study explores the possibility of using machine learning techniques to build predictive models of performance in collaborative induction tasks. More specifically, we explored how signal-level data, such as eye-gaze data and raw speech, may be used to build such models. The results show that such low-level features indeed have some potential to predict performance in these tasks. Implications for the design of future applications are briefly discussed.
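The kind of signal-level feature alluded to here can be illustrated with a small sketch. Gaze recurrence, the fraction of time samples in which both partners fixate the same object, is one plausible example; the function, its equal-sampling assumption, and the `None` convention are illustrative, not taken from the paper.

```python
def gaze_recurrence(gaze_a, gaze_b):
    """Fraction of time samples where both partners fixate the same object.

    gaze_a, gaze_b: equally sampled lists of fixated object ids
    (None means no fixation at that sample). A simple signal-level
    feature of the kind that could feed a performance model.
    """
    paired = list(zip(gaze_a, gaze_b))
    hits = sum(1 for a, b in paired if a is not None and a == b)
    return hits / len(paired)
```

For instance, `gaze_recurrence(["x", "y", None, "z"], ["x", "z", None, "z"])` yields `0.5`: the partners share fixations at the first and last samples only.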


Journal of Computer Assisted Learning | 2008

The Effects of Animations on Verbal Interaction in Computer Supported Collaborative Learning

Mirweis Sangin; Pierre Dillenbourg; Cyril Rebetez; Mireille Bétrancourt; Gaëlle Molinari

This paper focuses on the interaction patterns of learners studying in pairs who were provided with multimedia learning material. In a previous article, we reported that learning scores were higher for dyads in an ‘animations’ condition than for dyads in a ‘static pictures’ condition. Results also showed that offering a persistent display of one snapshot of each animated sequence hindered collaborative learning. In the present paper, further analyses of verbal interactions within learning dyads were performed in order to better understand both the beneficial effect of animations and the detrimental effect of persistent snapshots of critical steps on collaborative learning. Results did not show any differences in verbal categories between the two versions of the instructional material, that is, static versus animated pictures. Pairs who were provided with persistent snapshots of the multimedia sequences produced fewer utterances than participants without the snapshots. In addition, the persistent snapshots were detrimental both in terms of providing information about the learning content and in terms of producing utterances solely for the purpose of managing the interaction. In this study, evidence also showed that these two verbal categories were positively related to learning performance. Finally, mediation analyses revealed that the negative effect of persistent snapshots was mediated by the fact that peers in the snapshots condition produced fewer information-providing and interaction-management utterances. Results are interpreted using a psycholinguistic framework applied to the computer-supported collaborative learning (CSCL) literature, and general guidelines are derived for the use of dynamic material and persistency tools in the design of CSCL environments.


computer supported collaborative learning | 2007

Partner modeling is mutual

Mirweis Sangin; Nicolas Nova; Gaëlle Molinari; Pierre Dillenbourg

It has been hypothesized that collaborative learning is related to the cognitive effort made by co-learners to build a shared understanding. The process of constructing this shared understanding requires that each team member build some kind of representation of the behavior, beliefs, knowledge or intentions of other group members. In two empirical studies, we measured the accuracy of the mutual model, i.e. the difference between what A believes B knows, has done or intends to do and what B actually knows, has done or intends to do. In both studies, we found a significant correlation between the accuracy of A's model of B and the accuracy of B's model of A. This leads us to think that the process of modeling one's partners does not simply reflect individual attitudes or skills but emerges as a property of group interactions. We describe on-going studies that explore these preliminary results.


computer supported collaborative learning | 2016

The Symmetry of Partner Modelling.

Pierre Dillenbourg; Séverin Lemaignan; Mirweis Sangin; Nicolas Nova; Gaëlle Molinari

Collaborative learning has often been associated with the construction of a shared understanding of the situation at hand. The psycholinguistic mechanisms at work while establishing common ground are the object of scientific controversy. We postulate that collaborative tasks require some level of mutual modelling, i.e. that each partner needs some model of what the other partners know/want/intend at a given time. We use the term “some model” to stress the fact that this model is not necessarily detailed or complete, but that we acquire some representations of the persons we interact with. The question we address is: Does the quality of the partner model depend upon the modeler’s ability to represent his or her partner? Upon the modelee’s ability to make his or her state clear to the modeler? Or rather, upon the quality of their interactions? We address this question by comparing the respective accuracies of the models built by different team members. We report on 5 experiments on collaborative problem solving or collaborative learning that vary in terms of tasks (how important it is to build an accurate model) and settings (how difficult it is to build an accurate model). In 4 studies, the accuracy of the model that A built about B was correlated with the accuracy of the model that B built about A, which seems to imply that the quality of interactions matters more than individual abilities when building mutual models. However, these findings do not rule out the fact that individual abilities also contribute to the quality of the modelling process.


l'interaction homme-machine | 2005

Collaborer pour mieux apprendre d'une animation (Collaborating to learn better from an animation)

Cyril Rebetez; Mireille Bétrancourt; Mirweis Sangin; Pierre Dillenbourg

Studying multimedia animations for learning leads one to question their ability to enhance comprehension of complex materials in comparison with static graphics. An often-raised difficulty is their changing nature and the burden of processing larger amounts of information at the same time. We suggest using a permanent summary of visual information through snapshots accessible on the screen. We also studied the impact of static versus dynamic presentations, and of collaborative (peer) versus individual learning conditions. The results of this experimental study show benefits of dynamic presentations for memorization. Deep learning is also better in the animated condition, but only for pairs. Last, snapshots help individual learners but not pairs. We discuss and explain these results on the basis of design guidelines and a split-interaction hypothesis.


european conference on technology enhanced learning | 2008

When Co-learners Work on Complementary Texts: Effects on Outcome Convergence

Gaëlle Molinari; Mirweis Sangin; Pierre Dillenbourg

In this paper, we examined the effect of knowledge interdependence among co-learners on knowledge convergence outcomes. Prior to collaboration, the two partners read the same text in the independent condition, while each of them read one of two complementary texts in the interdependent condition. In the remote collaboration phase, partners were asked to build a collaborative concept map. While interacting, they were provided with visualizations (concept maps) of both their own and their partner's knowledge. No effect of interdependence was found with respect to either outcome knowledge equivalence or shared outcome knowledge. In the independence condition, convergence was mainly due to the fact that partners received the same text before collaboration. In the interdependence condition, shared knowledge did not occur as a result of social interaction. It seems that participants were able to individually link what they learnt from the text to the complementary information provided by their partner's map.


intelligent user interfaces | 2013

REGARD: Remote Gaze-Aware Reference Detector

Marc-Antoine Nüssli; Patrick Jermann; Mirweis Sangin; Pierre Dillenbourg

Previous studies have shown that people tend to look at a visual referent just before saying the corresponding word, and similarly, listeners look at the referent right after hearing the name of the object. We first replicated these results in an ecologically valid situation in which collaborators are engaged in an unconstrained dialogue. Secondly, building upon these findings, we developed a model, called REGARD, which monitors speech and gaze during collaboration in order to automatically detect associations between words and objects of the shared workspace. The results are very promising, showing that the model is able to correctly detect most of the references made by the collaborators. Application perspectives are briefly discussed.
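The word-object association step described here can be pictured as a simple windowing heuristic: a spoken word is linked to the objects fixated shortly before or after its onset. This is an illustrative toy, not the REGARD model; the window bounds, data shapes, and the majority-vote rule are assumptions.

```python
from collections import Counter

def detect_references(fixations, words, before=2.0, after=1.0):
    """Link each spoken word to the object fixated around the word's onset.

    fixations: list of (timestamp, object_id) gaze fixations
    words:     list of (timestamp, word) speech events
    A fixation votes for a word if it falls in [t_word - before, t_word + after],
    reflecting the look-before-speaking / look-after-hearing pattern.
    Returns {word: most frequently co-occurring object_id}.
    """
    votes = {}
    for t_w, word in words:
        for t_f, obj in fixations:
            if t_w - before <= t_f <= t_w + after:
                votes.setdefault(word, Counter())[obj] += 1
    return {w: c.most_common(1)[0][0] for w, c in votes.items()}
```

For example, with fixations `[(0.5, "red_block"), (1.2, "red_block"), (4.0, "blue_block")]` and words `[(1.8, "red"), (4.5, "blue")]`, the detector maps "red" to `red_block` and "blue" to `blue_block`.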


Instructional Science | 2010

Learning from animation enabled by collaboration

Cyril Rebetez; Mireille Bétrancourt; Mirweis Sangin; Pierre Dillenbourg

Collaboration

Top co-authors of Mirweis Sangin:

Pierre Dillenbourg (École Polytechnique Fédérale de Lausanne)
Gaëlle Molinari (École Polytechnique Fédérale de Lausanne)
Marc-Antoine Nüssli (École Polytechnique Fédérale de Lausanne)
Nicolas Nova (École Polytechnique Fédérale de Lausanne)
Patrick Jermann (École Polytechnique Fédérale de Lausanne)
Yan Liu (University of Southern California)