
Publications

Featured research published by Alexandre Benoit.


Ubiquitous Computing | 2009

Multimodal focus attention and stress detection and feedback in an augmented driver simulator

Alexandre Benoit; Laurent Bonnaud; Alice Caplier; Phillipe Ngo; Lionel Lawson; Daniela Gorski Trevisan; Vjekoslav Levacic; Céline Mancas; Guillaume Chanel

This paper presents a driver simulator which takes into account information about the user's state of mind (level of attention, fatigue state, stress state). The user's state of mind analysis is based on video data and biological signals. Facial movements such as eye blinking, yawning and head rotations are detected in the video data and used to evaluate the fatigue and attention level of the driver. The user's electrocardiogram and galvanic skin response are recorded and analyzed to evaluate the driver's stress level. The driver-simulator software is modified so that the system can react appropriately to these critical situations of fatigue and stress: audio and visual messages are sent to the driver, wheel vibrations are generated and the driver is expected to react to the alert messages. A multi-threaded system is proposed to support the multiple messages sent by the different modalities. Strategies for data fusion and fission are also provided. Some of these components are integrated within the first prototype of OpenInterface, the multimodal platform.


Journal on Multimodal User Interfaces | 2007

Multimodal Signal Processing and Interaction for a Driving Simulator: Component-based Architecture

Alexandre Benoit; Laurent Bonnaud; Alice Caplier; Frédéric Jourde; Laurence Nigay; Marcos Serrano; Ioannis G. Damousis; Dimitrios Tzovaras; Jean-Yves Lionel Lawson

In this paper we focus on the software design of a multimodal driving simulator that is based on multimodal detection of the driver's focus of attention as well as detection and prediction of the driver's fatigue state. Capturing and interpreting the driver's focus of attention and fatigue state is based on video data (e.g., facial expression, head movement, eye tracking). While the input multimodal interface relies on passive modalities only (also called an attentive user interface), the output multimodal user interface includes several active output modalities for presenting alert messages, including graphics and text on a mini-screen and on the windshield, sounds, speech and vibration (vibrating wheel). Active input modalities are added in the meta-user interface to let the user dynamically select the output modalities. The driving simulator is used as a case study for studying its software architecture, based on multimodal signal processing and multimodal interaction components, considering two software platforms: OpenInterface and ICARE.


International Conference on Image Processing | 2005

Head nods analysis: interpretation of non verbal communication gestures

Alexandre Benoit; Alice Caplier

This paper proposes a real-time frequency method to detect 2D rigid pan or tilt rotations of a moving head. We aim at interpreting the head nods involved in non-verbal communication in the same way as a human being does: the direction of the rotation is estimated but not its precise amplitude. The idea of the method is to analyze the image spectrum in the log-polar domain, where global 2D head rotations are transformed into simple energy translations. In order to make the log-polar spectrum easy to interpret, a prefiltering stage inspired by the biological model of the human retina is applied: moving contours are enhanced while static contours are attenuated, high-frequency noise is eliminated and variations of illumination are cancelled. The estimated rotations are fed into a data fusion process able to detect and interpret head nods of approbation or negation in real time.
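
The core property the method relies on, that a rotation in the image plane becomes a translation along the angular axis of the log-polar plane, can be checked in a few lines of Python. This is only an illustrative sketch of the coordinate mapping, not the authors' spectral implementation:

```python
import math

def to_log_polar(x, y):
    """Map a Cartesian point (x, y) to log-polar coordinates (log r, theta)."""
    r = math.hypot(x, y)
    return math.log(r), math.atan2(y, x)

def rotate(x, y, angle):
    """Rotate a point about the origin by `angle` radians."""
    c, s = math.cos(angle), math.sin(angle)
    return c * x - s * y, s * x + c * y

x, y, angle = 3.0, 4.0, 0.5
log_r0, theta0 = to_log_polar(x, y)
log_r1, theta1 = to_log_polar(*rotate(x, y, angle))

# log r is unchanged and theta is shifted by exactly `angle`: in the
# log-polar spectrum, a global head rotation therefore shows up as a
# pure translation of energy along the angular axis.
```

Applied to the image spectrum rather than to single points, this is what lets the method read off the direction of a pan or tilt without estimating its amplitude.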


IEEE Transactions on Image Processing | 2012

PDE-Based Enhancement of Color Images in RGB Space

Salim Bettahar; Amine Boudghene Stambouli; Patrick Lambert; Alexandre Benoit

A novel method for color image enhancement is proposed as an extension of the scalar diffusion-shock-filter coupling model, in which noisy and blurred images are denoised and sharpened. The proposed model uses single vectors of the gradient magnitude and the second derivatives as a way to relate the different color components of the image, and can be viewed as a generalization of the Bettahar-Stambouli filter to multivalued images. The proposed algorithm is more efficient than that filter and some previous works at denoising and deblurring color images without creating false colors.
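
A minimal sketch of the channel-coupling idea, assuming a simple finite-difference joint gradient (the paper's actual PDE model is more elaborate): all three RGB channels contribute to one gradient magnitude, so the diffusion and shock terms driven by it act at the same locations in every channel, which is what avoids false colors.

```python
def joint_rgb_gradient(img):
    """Joint gradient magnitude of an RGB image given as rows of (r, g, b)
    tuples: channels are coupled through a single vector norm instead of
    three independent per-channel norms."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            s = 0.0
            for c in range(3):
                dx = img[y][x + 1][c] - img[y][x][c]   # horizontal difference
                dy = img[y + 1][x][c] - img[y][x][c]   # vertical difference
                s += dx * dx + dy * dy
            out[y][x] = s ** 0.5
    return out

# A red edge of strength 3 and a green edge of strength 4 combine
# into one joint magnitude of 5 at the top-left pixel.
g = joint_rgb_gradient([[(0, 0, 0), (3, 0, 0)],
                        [(0, 4, 0), (3, 4, 0)]])
```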


Advanced Video and Signal Based Surveillance | 2005

Hypovigilence analysis: open or closed eye or mouth? Blinking or yawning frequency?

Alexandre Benoit; Alice Caplier

This paper proposes a frequency method to estimate the open or closed state of the eyes and mouth and to detect associated motion events such as blinking and yawning. The context of this work is the detection of the hypovigilance state of a user such as a driver or a pilot. In A. Benoit and Caplier (2005) we proposed a method for motion detection and estimation based on the processing achieved by the human visual system. The motion analysis algorithm combines the filtering step occurring at the retina level and the analysis done at the visual cortex level. This method is used to estimate the motion of the eyes and mouth: blinking is related to fast vertical motion of the eyelid and yawning is related to a large vertical mouth opening. The detection of the open or closed state of a feature is based on the total energy of the image at the output of the retina filter: this energy is higher for open features. Since the absolute energy level associated with a specific state differs from one person to another and across illumination conditions, the energy level associated with each state (open or closed) is adaptive and is updated each time a motion event (blinking or yawning) is detected. No constraint on motion is required. The system works in real time and under all types of lighting conditions, since the retina filtering is able to cope with illumination variations. This makes it possible to estimate blinking and yawning frequencies, which are clues of hypovigilance.
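
The adaptive open/closed decision can be sketched as follows. The class below is a hypothetical illustration (the reference energies, update rate and event-driven update rule are assumptions, not the paper's exact procedure): a detected blink exposes both states in sequence, so both reference energy levels can be re-anchored each time one occurs.

```python
class OpenClosedClassifier:
    """Decide open vs. closed from the image energy at the retina-filter
    output; open features yield higher energy. Reference levels adapt to
    the person and the lighting via detected motion events."""

    def __init__(self, open_ref=10.0, closed_ref=2.0, rate=0.5):
        self.open_ref = open_ref      # illustrative initial values
        self.closed_ref = closed_ref
        self.rate = rate              # smoothing factor for updates

    def classify(self, energy):
        mid = (self.open_ref + self.closed_ref) / 2
        return "open" if energy > mid else "closed"

    def update_on_event(self, energy_before, energy_after):
        """A blink or yawn shows both states: the higher of the two
        energies re-anchors the open reference, the lower the closed one."""
        hi = max(energy_before, energy_after)
        lo = min(energy_before, energy_after)
        self.open_ref += self.rate * (hi - self.open_ref)
        self.closed_ref += self.rate * (lo - self.closed_ref)

clf = OpenClosedClassifier()
clf.update_on_event(8.0, 1.0)   # a blink under new lighting conditions
state = clf.classify(7.5)       # "open": above the re-anchored midpoint
```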


Content-Based Multimedia Indexing | 2013

Retina enhanced SIFT descriptors for video indexing

Sabin Tiberius Strat; Alexandre Benoit; Patrick Lambert

This paper investigates how the detection of diverse high-level semantic concepts (objects, actions, scene types, persons, etc.) in videos can be improved by applying a model of the human retina. A large part of current approaches for Content-Based Image/Video Retrieval (CBIR/CBVR) relies on the Bag-of-Words (BoW) model, which has been shown to perform well, especially for object recognition in static images. Nevertheless, the current state-of-the-art framework shows its limits when applied to videos because of the added temporal information. In this paper, we enhance a BoW model based on the classical SIFT local spatial descriptor by preprocessing videos with a model of the human retina. This retinal preprocessing allows the SIFT descriptor to become aware of temporal information. Our proposed descriptors extend the genericity of SIFT to spatio-temporal content, making them interesting for generic video indexing. They also benefit from the retinal spatio-temporal “robustness” to various disturbances such as noise, compression artifacts, luminance variations or shadows. The proposed approaches are evaluated on the TRECVID 2012 Semantic Indexing task dataset.
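
The BoW step that the retina preprocessing feeds into can be sketched in a few lines. This is a generic illustration with toy two-dimensional "descriptors" and a three-word codebook (real SIFT descriptors are 128-dimensional and the codebook is learned, e.g. by k-means):

```python
def bag_of_words(descriptors, codebook):
    """Quantize each local descriptor to its nearest visual word and
    return the normalized histogram that serves as the frame signature."""
    hist = [0] * len(codebook)
    for d in descriptors:
        best = min(range(len(codebook)),
                   key=lambda k: sum((a - b) ** 2
                                     for a, b in zip(d, codebook[k])))
        hist[best] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]             # toy visual words
descs = [(0.1, 0.0), (0.9, 0.1), (1.1, -0.1), (0.0, 0.95)]  # toy descriptors
signature = bag_of_words(descs, codebook)   # -> [0.25, 0.5, 0.25]
```

In the paper, the descriptors entering this quantization are SIFT features computed on retina-preprocessed frames, which is what injects temporal information into the otherwise purely spatial signature.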


Artificial Intelligence Applications and Innovations | 2006

Multimodal Focus Attention and Stress Detection and feedback in an Augmented Driver Simulator

Alexandre Benoit; Laurent Bonnaud; Alice Caplier; Phillipe Ngo; Lionel Lawson; Daniela Gorski Trevisan; Vjekoslav Levacic; Céline Mancas; Guillaume Chanel

This paper presents a driver simulator which takes into account information about the user's state of mind (level of attention, fatigue state, stress state). The user's state of mind analysis is based on video data and biological signals. Facial movements such as eye blinking, yawning and head rotations are detected in the video data and used to evaluate the fatigue and attention level of the driver. The user's electrocardiogram and galvanic skin response are recorded and analyzed to evaluate the driver's stress level. The driver-simulator software is modified so that the system can react appropriately to these critical situations of fatigue and stress: audio and visual messages are sent to the driver, wheel vibrations are generated and the driver is expected to react to the alert messages. A multi-threaded system is proposed to support the multiple messages sent by the different modalities. Strategies for data fusion and fission are also provided.


Advanced Concepts for Intelligent Vision Systems | 2008

Open or Closed Mouth State Detection: Static Supervised Classification Based on Log-Polar Signature

Christian Bouvier; Alexandre Benoit; Alice Caplier; Pierre-Yves Coulon

The open or closed state of the mouth is important information in many applications such as hypo-vigilance analysis, face-feature segmentation or emotion recognition. In this work we propose a supervised classification method for mouth state detection based on retina filtering and cortex analysis inspired by the human visual system. The first stage of the method is the learning of reference signatures (log-polar spectrums) from open and closed mouth images that were manually classified. The signatures are constructed by computing the amplitude log-polar spectrum of the retina-filtered images. Principal Component Analysis (PCA) is then performed using the log-polar spectrums as feature vectors to reduce the dimensionality while keeping 95% of the total variance. Finally, a binary SVM classifier is trained on the projections onto the principal components given by the PCA in order to classify the mouth state.
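
The "keep 95% of the total variance" rule amounts to choosing the smallest number of leading principal components whose cumulative explained-variance ratio reaches the target. A minimal sketch, with hypothetical per-component variances:

```python
def components_for_variance(variances, target=0.95):
    """Smallest number of leading principal components whose cumulative
    explained-variance ratio reaches `target` (0.95 in the paper)."""
    total = sum(variances)
    cum = 0.0
    for k, v in enumerate(variances, start=1):
        cum += v / total
        if cum >= target:
            return k
    return len(variances)

# Hypothetical eigenvalues of the log-polar-spectrum feature covariance.
variances = [5.0, 2.5, 1.0, 0.8, 0.4, 0.2, 0.1]
k = components_for_variance(variances)   # the first 5 components reach 95%
```

The projections onto those k components are then the feature vectors on which the binary SVM is trained.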


Content-Based Multimedia Indexing | 2014

Bags of Trajectory Words for video indexing

Sabin Tiberius Strat; Alexandre Benoit; Patrick Lambert

A semantic indexing system capable of detecting both spatial appearance and motion-related semantic concepts requires the use of both spatial and motion descriptors. However, extracting motion descriptors on very large video collections requires great computational resources, which has caused most approaches to limit themselves to a spatial description. This paper explores the use of motion descriptors to complement such spatial descriptions and improve the overall performance of a generic semantic indexing system. We propose a framework for extracting and describing trajectories of tracked points that keeps computational cost manageable, then we construct Bag of Words representations with these trajectories. After supervised classification, a late fusion step combines information from spatial descriptors with that from our proposed Bag of Trajectory Words descriptors to improve overall results. We evaluate our approach in the very difficult context of the TRECVid Semantic Indexing (SIN) dataset.
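
The late fusion step can be sketched as a weighted average of the per-concept scores produced by each descriptor's classifier (the equal weights here are illustrative; the paper does not commit to this exact rule):

```python
def late_fusion(score_lists, weights=None):
    """Combine the scores that several descriptor-specific classifiers
    assign to the same shots, producing one fused score per shot."""
    n = len(score_lists)
    weights = weights or [1.0 / n] * n
    return [sum(w * s for w, s in zip(weights, scores))
            for scores in zip(*score_lists)]

spatial = [0.9, 0.2, 0.6]      # scores from a spatial appearance descriptor
trajectory = [0.7, 0.4, 0.8]   # scores from Bag of Trajectory Words
fused = late_fusion([spatial, trajectory])   # approx. [0.8, 0.3, 0.7]
```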


Materials Science Forum | 2014

Metallurgical Study of Friction Stir Welded High Strength Steels for Shipbuilding

Marion Allart; Alexandre Benoit; Pascal Paillard; Guillaume Rückert; Myriam Chargy

Friction Stir Welding (FSW) is one of the most recent welding processes, invented in 1991 by The Welding Institute. Recent developments, mainly the use of polycrystalline cubic boron nitride (PCBN) tools, broaden the range of use of FSW to harder materials such as steels. Our study focused on the assembly of high-yield-strength steels for naval applications by FSW and its consequences on the metallurgical properties. The main objective was to analyze the metallurgical transformations occurring during welding. Welding tests were conducted on three steels: 80HLES, S690QL and DH36. For each welded sample, macrographs, micrographs and micro-hardness maps were produced to characterize the variation of microstructures through the weld.

Collaboration


Dive into Alexandre Benoit's collaboration.

Top Co-Authors

Alice Caplier, Centre national de la recherche scientifique
Pascal Paillard, Centre national de la recherche scientifique
Laurent Bonnaud, Grenoble Institute of Technology
Georges Quénot, Centre national de la recherche scientifique
Jean-François Castagne, Centre national de la recherche scientifique