
Publication


Featured research published by Indira Thouvenin.


International Journal of Product Development | 2007

Knowledge integration for annotating in virtual environments

Stéphane Aubry; Indira Thouvenin; Dominique Lenne; Shigeki Okawa

Annotations in virtual environments are a means to improve communication in distant collaborative design. In the MATRICS project, we develop a platform that supports annotation-based collaboration and associates annotations with a knowledge model. We present the collaborative design situations we are working on and split them into three sub-situations: symmetrical collaboration, workflows and project re-exploitation. In these situations, collaboration problems often arise. We present a solution that enables designers to annotate their model directly in the 3D environment, along with its implementation in an application for annotating in VR environments. We then discuss tests of this application and the need to link the annotations to a knowledge model. Finally, we present the knowledge model, the ways to exploit it, and experiments that evaluate the benefits of knowledge integration in the annotation environment.


Engineering Applications of Artificial Intelligence | 2014

GULLIVER: A decision-making system based on user observation for an adaptive training in informed virtual environments

Loïc Fricoteaux; Indira Thouvenin; Daniel Mestre

Modern training through virtual environments is widely used in transport to provide a high level of precision and increasingly complex situations. These virtual environments provide training scenarios with automatic, repetitive feedback to trainees: experienced learners receive too many aids and novice learners too few. In this work, inspired by trial-and-error pedagogy, we have designed and evaluated a fluvial-navigation virtual training system that includes our GULLIVER module to determine the most appropriate level of feedback to display for guiding the learner. GULLIVER is based on a decision-making module that integrates uncertain data coming from the system's observation of the learner. An evidential network with conditional belief functions is used for decision making. Several sensors and a predictive model collect data in real time. Visualization metaphors and audio feedback are presented to the user in an immersive virtual reality platform. GULLIVER was evaluated on 60 novice participants, in an experiment based on repetition of a navigation case. Two major results follow: (i) learners gain experience and error awareness from virtual navigation with our system, and (ii) they demonstrate the capacity to navigate after training, and the GULLIVER system shows better performance.
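The abstract above mentions decision making over uncertain observations with belief functions. As a hedged illustration only (this is not the paper's evidential network, and the frame {guide, no_guide}, the "sensor" names and all mass values are invented), the sketch below fuses two mass functions with Dempster's rule of combination:

```python
def combine(m1, m2):
    """Dempster's rule: fuse two mass functions over the same frame.

    Keys are frozensets of hypotheses; mass on the full frame encodes
    ignorance. Conflicting mass (empty intersections) is discarded and
    the rest is renormalised.
    """
    fused = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {s: v / (1.0 - conflict) for s, v in fused.items()}

FRAME = frozenset({"guide", "no_guide"})
# Hypothetical masses from two observations of the learner: each assigns
# belief to "needs guidance", "no guidance needed", or ignorance (FRAME).
m_gaze = {frozenset({"guide"}): 0.6, FRAME: 0.4}
m_speed = {frozenset({"guide"}): 0.5, frozenset({"no_guide"}): 0.2, FRAME: 0.3}

fused = combine(m_gaze, m_speed)
print(round(fused[frozenset({"guide"})], 2))  # → 0.77
```

A full evidential network would additionally propagate conditional belief functions between variables; this sketch shows only the fusion step that supports the final decision.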


computer supported cooperative work in design | 2005

Knowledge integration in early design stages for collaboration on a virtual mock up

Indira Thouvenin; Dominique Lenne; Anne Guénand; Stéphane Aubry

With the intensification and expansion of collaboration between designers and engineers for better control of time and quality, the efficiency of communication between these two communities of practice has to be considered, taking into account both the designers' and the engineers' representation systems. While designers participate in concept research and deliver product usage scenarios through images, sketches and renderings, engineers participate in the physical definition and materialization of these concepts. There is a need for a common representation of the concepts and the product, i.e. an explicit link between the subjective data coming from the designer and the objective data materializing the product. However, semantic product characterization relies on specific tools and methods usually found in 2D environments, and 3D CAD systems do not answer the need for exchanges in a multidisciplinary design team. In this paper we introduce a virtual environment to integrate, capitalize and explore knowledge around the virtual mock-up, in order to facilitate subjective and objective product characterization for designer-engineer collaboration. This virtual environment (MATRICS) allows the user to post multimedia annotations on a 3D virtual mock-up, supporting the intentions transferred from the designer to the engineer when communicating. Another aspect of this environment is to provide a shared visualization of knowledge connected to the model; user interaction with this knowledge representation increases the level of collaboration while designing.


computer supported cooperative work in design | 2007

A knowledge model to read 3D annotations on a virtual mock-up for collaborative design

Stéphane Aubry; Indira Thouvenin; Dominique Lenne; Jerome Olive

This article describes the concept of knowledge integration in a 3D collaborative virtual environment. This concept characterizes collaborative design in terms of two models introduced in this article: an annotation model with a semantic metadata notion, and a knowledge model to capitalize and manage annotations, implemented in the MATRICS environment. We briefly describe this approach, then present the results of an empirical study in which subjects performed annotation tasks with the support of a 2D ontology visualization tool and through navigation in the 3D annotations. This study shows that the knowledge model improves the understanding of design projects.
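The annotation model with semantic metadata described above can be sketched as a data structure. This is purely illustrative: the field names, concept labels and query helper are invented, not the actual MATRICS schema.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """A 3D annotation carrying semantic metadata (hypothetical schema)."""
    author: str
    text: str
    anchor: tuple            # (x, y, z) position on the virtual mock-up
    concept: str             # knowledge-model concept the annotation is about
    media: list = field(default_factory=list)  # attached images, audio, ...

def by_concept(annotations, concept):
    """Read annotations through the knowledge model: retrieve all
    annotations attached to a given concept."""
    return [a for a in annotations if a.concept == concept]

notes = [
    Annotation("designer", "Edge too sharp here", (0.2, 1.1, 0.4), "ergonomics"),
    Annotation("engineer", "Wall thickness below spec", (0.5, 0.9, 0.1), "manufacturing"),
    Annotation("designer", "Grip should feel softer", (0.2, 1.0, 0.4), "ergonomics"),
]
print(len(by_concept(notes, "ergonomics")))  # → 2
```

Linking each annotation to a concept is what lets the 2D ontology view and the 3D scene filter the same set of annotations, which is the kind of navigation the study evaluates.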


Computers in Human Behavior | 2017

Contributions of mixed reality in a calligraphy learning task: Effects of supplementary visual feedback and expertise on cognitive load, user experience and gestural performance

Emilie Loup-Escande; Rémy Frenoy; Guillaume Poplimont; Indira Thouvenin; Olivier Gapenne; Olga Megalakaki

The learning of handwriting and calligraphy can be improved by supplementary sensory feedback. Emerging technologies such as mixed reality devices can implement appropriate visual, auditory, or proprioceptive feedback. Yet there have been few studies on supplementary visual feedback. Our study was designed to fill this gap, by comparing the effects of two visual feedback types implemented on a graphic tablet with touchscreen (PenWidthFB versus ColoredVelocityFB feedback), in two groups with different levels of expertise in calligraphy (novice versus expert). We collected measures of cognitive load, user experience and gestural performance. Results showed that 1) there was no significant difference in cognitive load between experts and novices, but PenWidthFB feedback created a higher cognitive load than ColoredVelocityFB feedback, 2) for the user experience, there were no obvious differences between experts and novices, and between the two feedback types, 3) concerning the gestural performance, the experts were faster than the novices, and the applied pressure with ColoredVelocityFB feedback was higher than with PenWidthFB feedback.


robot and human interactive communication | 2013

Human gesture segmentation based on change point model for efficient gesture interface

Emmanuel Bernier; Ryad Chellali; Indira Thouvenin

Interacting naturally with artificial agents and environments, including virtual avatars and robots, relies mainly on gestures. However, gesture interfaces require heavy setup procedures before any effective use: specific and personalized training sessions are needed, and performers have to indicate clear separations between gestures, namely specifying the pre- and post-strokes as well as the gesture used to perform the command. Time-series segmentation thus appears as a central problem. Indeed, clustering human motions into meaningful segments and isolating meaningful segments from a continuous movement flow present the same problem: how to find the pre- and post-strokes. In machine learning, this problem is usually solved with training sets of carefully labeled data. Good segmentation improves the quality of a gesture-recognition-based interface. In our contribution, we focus on developing a non-parametric stochastic segmentation algorithm. Once the segmentation has been validated, we show how a novice user can create, in a semi-supervised way, his or her own gesture library. In addition, we show that the obtained system is efficient in finding meaningful gestures (the ones learned earlier) within a continuous movement flow, thus removing the constraint of manually specifying the beginning and the end of each movement. The proposed technique is assessed through a real-life example, where a novice user creates an ad hoc interface to control a robot in a natural way.
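The abstract describes segmenting a continuous motion stream at change points. As a rough, generic illustration only (a two-sided CUSUM mean-shift detector, not the authors' non-parametric stochastic algorithm; the threshold and drift values are arbitrary), candidate boundaries between rest poses and strokes could be found like this:

```python
def cusum_changepoints(signal, threshold=4.0, drift=0.5):
    """Two-sided CUSUM detector for mean shifts in a 1-D signal.

    Returns the indices where the cumulative deviation from the running
    segment mean exceeds `threshold`; each index is a candidate segment
    boundary (e.g. a pre-/post-stroke in a motion stream).
    """
    changes = []
    mean, n = signal[0], 1          # running mean of the current segment
    pos = neg = 0.0                 # upward / downward cumulative sums
    for i in range(1, len(signal)):
        x = signal[i]
        pos = max(0.0, pos + (x - mean) - drift)
        neg = max(0.0, neg - (x - mean) - drift)
        if pos > threshold or neg > threshold:
            changes.append(i)       # boundary found: restart the statistics
            mean, n = x, 1
            pos = neg = 0.0
        else:
            n += 1
            mean += (x - mean) / n  # incremental mean update

    return changes

# A flat "rest pose" followed by a sustained shift (a "stroke"):
print(cusum_changepoints([0.0] * 50 + [5.0] * 50))  # → [50]
```

A real gesture stream is multi-dimensional and noisy, so the paper's stochastic formulation does considerably more; this sketch only conveys the idea that a boundary is where the signal's statistics stop matching the current segment.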


ieee intelligent vehicles symposium | 2015

Estimation of driver awareness of pedestrian based on Hidden Markov Model

Minh Tien Phan; Vincent Fremont; Indira Thouvenin; Mohamed Sallak; Veronique Cherfaoui

Understanding driver behavior is an important need for Advanced Driver Assistance Systems. In particular, pedestrian detection systems become extremely distracting and annoying when they give the driver unnecessary warning messages. In this paper, we propose to study driver behavior whenever a pedestrian appears in front of the vehicle. A method based on driving actions and the Hidden Markov Model (HMM) algorithm is developed to classify driver awareness and unawareness of a pedestrian. The method is successfully validated using data collected from experiments conducted on a driving simulator. Furthermore, two simple methods based on static parameters, the Time-To-Collision and the Required Deceleration Parameter, are also applied to our problem and compared to the proposed method. The results show a significant improvement of the HMM-based method over the simple ones.


international conference on intelligent transportation systems | 2014

Recognizing Driver Awareness of Pedestrian

Minh Tien Phan; Vincent Fremont; Indira Thouvenin; Mohamed Sallak; Véronique Cherfaoui

In this paper, we propose a novel approach to recognize the awareness or unawareness that a driver has of a pedestrian appearing on the road in front of the vehicle. Based on the theory of situation awareness and driving data collected from on-board sensors, a suitable Hidden Markov Model (HMM) is used to model “Driver Awareness of Pedestrian” and “Driver Unawareness of Pedestrian”. These behaviors are then recognized using a maximum-likelihood decision method. A real-time validation carried out on a driving simulator shows that the model and the output decisions are accurate and efficient.
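The maximum-likelihood decision between two HMMs mentioned above can be sketched as follows. All numbers (initial, transition and emission probabilities) and the binary observation coding are invented for illustration; they are not the models trained in the paper.

```python
import math

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM with params pi, A, B)."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    ll = 0.0
    for t in range(1, len(obs) + 1):
        scale = sum(alpha)
        ll += math.log(scale)           # accumulate the log scaling factors
        alpha = [a / scale for a in alpha]
        if t < len(obs):                # forward recursion to the next step
            alpha = [sum(alpha[sp] * A[sp][s] for sp in range(n)) * B[s][obs[t]]
                     for s in range(n)]
    return ll

# Two toy models over binary observations (0 = "attentive" driving action
# such as decelerating, 1 = "inattentive" action). Parameters are made up.
AWARE = dict(pi=[0.5, 0.5], A=[[0.9, 0.1], [0.1, 0.9]],
             B=[[0.8, 0.2], [0.6, 0.4]])      # mostly emits 0
UNAWARE = dict(pi=[0.5, 0.5], A=[[0.9, 0.1], [0.1, 0.9]],
               B=[[0.2, 0.8], [0.4, 0.6]])    # mostly emits 1

def classify(obs):
    """Maximum-likelihood decision: pick the model that explains obs best."""
    return ("aware" if log_likelihood(obs, **AWARE) >
            log_likelihood(obs, **UNAWARE) else "unaware")

print(classify([0, 0, 1, 0, 0, 0]))  # → aware
```

In the paper the observation sequences come from on-board sensors and the two models are trained on labeled simulator data; this sketch only shows the decision rule applied at recognition time.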


Proceedings of SPIE | 2013

Nomad devices for interactions in immersive virtual environments

Paul George; Andras Kemeny; Frédéric Merienne; Jean-Rémy Chardonnet; Indira Thouvenin; Javier Posselt; Emmanuel Icart

Renault is currently setting up a new CAVE™, a five-wall rear-projected virtual reality room with a combined 3D resolution of 100 Mpixels, distributed over sixteen 4K projectors and two 2K projectors, as well as an additional 3D HD collaborative powerwall. Renault's CAVE™ aims at answering the needs of the various vehicle conception steps [1]. Starting from vehicle design, through the subsequent engineering steps, ergonomic evaluation and perceived quality control, Renault has built up a list of use cases and carried out an early software evaluation in the four-sided CAVE™ of Institute Image, called MOVE. One goal of the project is to study interactions in a CAVE™, especially with nomad devices such as the iPhone or iPad, to manipulate virtual objects and to develop visualization possibilities. Inspired by current uses of nomad devices (multi-touch gestures, the iPhone UI look and feel, and AR applications), we have implemented an early feature set taking advantage of these popular input devices. In this paper, we present its performance through measurement data collected on our test platform, a four-sided homemade low-cost virtual reality room powered by ultra-short-range and standard HD home projectors.


International Journal of Humanities and Arts Computing | 2009

Exploring informed virtual sites through Michel Foucault's heterotopias

Francis Rousseaux; Indira Thouvenin

This paper starts with a mysterious contribution by Michel Foucault (1967) about heterotopias as special epistemological sites. Through a recent case study, an immersive virtual reality art project dealing with the reconstruction of an ancient abbey and managed by a French engineering school, we analyse the successive attempts to satisfy the system's users by extending Foucault's heterotopology, which proves useful and creative for the virtual reality research community.

Collaboration


Top co-authors of Indira Thouvenin:

Loïc Fricoteaux, University of Technology of Compiègne
Minh Tien Phan, University of Technology of Compiègne
Francis Rousseaux, Centre national de la recherche scientifique
Stéphane Aubry, Centre national de la recherche scientifique
Jerome Olive, University of Technology of Compiègne