Publication


Featured research published by Alice Caplier.


Systems, Man, and Cybernetics | 2012

On-Line Detection of Drowsiness Using Brain and Visual Information

Antoine Picot; Sylvie Charbonnier; Alice Caplier

A drowsiness detection system using both brain and visual activity is presented in this paper. The brain activity is monitored using a single electroencephalographic (EEG) channel. An EEG-based drowsiness detector using diagnostic techniques and fuzzy logic is proposed. Visual activity is monitored through blink detection and characterization. Blinking features are extracted from an electrooculographic (EOG) channel. These features are merged using fuzzy logic to create an EOG-based drowsiness detector. The features used by the EOG-based detector are deliberately restricted to those that can be automatically extracted, with the same accuracy, from a video analysis. Both detection systems are then merged using cascading decision rules according to a medical scale of drowsiness evaluation. Merging brain and visual information makes it possible to detect three levels of drowsiness: “awake,” “drowsy,” and “very drowsy.” One major advantage of the system is that it does not have to be tuned for each driver. The system was tested on driving data from 20 different drivers and reached 80.6% correct classifications over the three drowsiness levels. The results show that the EEG and EOG detectors are redundant: EEG-based detections are used to confirm EOG-based detections, which reduces the false alarm rate to 5% without decreasing the true positive rate, compared with a single EOG-based detector.
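The cascading confirmation idea can be sketched as follows; the rule set below (level names aside) is an illustrative assumption, not the paper's exact decision logic:

```python
def fuse_drowsiness(eog_level, eeg_confirms):
    """Cascade the two detectors: keep an EOG-based drowsiness call only
    when the EEG detector confirms it, which lowers the false alarm rate
    without touching confirmed (true positive) detections.

    eog_level    -- "awake", "drowsy" or "very drowsy" (EOG detector output)
    eeg_confirms -- True when the EEG detector also flags drowsiness
    """
    if eog_level == "awake":
        return "awake"
    # Demote an unconfirmed EOG detection: treat it as a false alarm.
    return eog_level if eeg_confirms else "awake"
```

Under these toy rules, the EEG channel acts purely as a veto on EOG detections, which is why the false alarm rate drops while confirmed detections survive.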


International Conference of the IEEE Engineering in Medicine and Biology Society | 2008

On-line automatic detection of driver drowsiness using a single electroencephalographic channel

Antoine Picot; Sylvie Charbonnier; Alice Caplier

In this paper, an on-line drowsiness detection algorithm using a single electroencephalographic (EEG) channel is presented. This algorithm is based on a means comparison test that detects changes in the alpha relative power (8-12 Hz band). The main advantage of the proposed method is that the detection threshold is completely independent of the driver and does not need to be tuned for each person. The algorithm, which works on-line, has been tested on a large dataset representing 60 hours of driving and gives good results, with nearly 85% correct detections and 20% false alarms.
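As a rough illustration of the quantities involved, relative alpha power can be computed from a windowed FFT, and a change detector can compare the current window against a recent baseline. The 1-30 Hz normalisation band and the z-score stand-in for the means comparison test are assumptions, not the paper's exact procedure:

```python
import numpy as np

def relative_alpha_power(window, fs):
    """Fraction of EEG spectral power in the alpha band (8-12 Hz),
    normalised by the power in an assumed 1-30 Hz band of interest."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(window)) ** 2
    alpha = psd[(freqs >= 8) & (freqs <= 12)].sum()
    total = psd[(freqs >= 1) & (freqs <= 30)].sum()
    return alpha / total

def alpha_rise_detected(baseline_powers, current_power, z=2.0):
    """Simple stand-in for the means comparison test: flag the current
    window when its relative alpha power sits more than z standard
    deviations above the mean of the recent baseline windows."""
    mu = np.mean(baseline_powers)
    sigma = np.std(baseline_powers) + 1e-12  # avoid division by zero
    return (current_power - mu) / sigma > z
```

Because the decision compares each driver to their own recent baseline, no per-driver threshold tuning is needed, which matches the advantage claimed in the abstract.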


Workshop on Applications of Computer Vision | 2009

Comparison between EOG and high frame rate camera for drowsiness detection

Antoine Picot; Alice Caplier; Sylvie Charbonnier

Drowsiness is responsible for a large number of car crashes. Blink analysis from the electrooculogram (EOG) signal provides reliable information on drowsiness, but the EOG recording conditions can be quite disturbing for the driver. Video-based approaches, on the other hand, are far more practical, but the standard acquisition rate does not give the same accuracy as EOG for blink analysis; a high frame rate camera therefore seems a good compromise. The purpose of this paper is to study to what extent a high-speed camera could replace the EOG for the extraction of blink features, in order to design a drowsiness detection system. An original method to detect and characterize blinks from the video is presented. This method uses two energy signals extracted from the video analysis: one related to the contours of the eyes and the other to the moving contours. A comparison between the different features extracted from the EOG and from the video is then performed. This study shows that duration, frequency, PERCLOS 80, and dynamic features extracted from the EOG and from the video signals are highly correlated. The influence of the frame rate on the accuracy of the extracted features is also studied.
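The two-signal blink logic can be sketched as follows; the shared threshold and the AND rule are illustrative assumptions, not the paper's actual detector:

```python
def blink_onsets(static_edge_energy, moving_edge_energy, threshold):
    """Toy blink detector over per-frame energy signals: during a blink,
    eyelid motion produces a burst of moving-edge energy while the static
    eye-contour energy drops (the open-eye contours disappear)."""
    onsets = []
    in_blink = False
    for frame, (static, moving) in enumerate(
            zip(static_edge_energy, moving_edge_energy)):
        blinking = moving > threshold and static < threshold
        if blinking and not in_blink:
            onsets.append(frame)  # first frame of each blink
        in_blink = blinking
    return onsets
```

Once onsets (and, symmetrically, offsets) are located, per-blink features such as duration or frequency follow directly, which is what the paper compares against their EOG counterparts.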


Machine Vision and Applications | 2012

Using retina modelling to characterize blinking: comparison between EOG and video analysis

Antoine Picot; Sylvie Charbonnier; Alice Caplier; Ngoc-Son Vu

A large number of car crashes are caused by drowsiness every year. The analysis of eye blinks provides reliable information about drowsiness. This paper studies the relation between electrooculogram (EOG) and video analysis for blink detection and characterization. An original method to detect and characterize blinks from a video analysis is presented here. The method uses different filters based on human retina modelling. An illumination-robust filter is first used to normalize illumination variations in the video input. Then, Outer and Inner Plexiform Layer filters are used to extract energy signals from the eye area. Eye detection mixes gradient and projection methods, which makes it possible to detect even closed eyes. The illumination-robust filter also makes it possible to detect the eyes in night conditions, without using external lighting. The video analysis extracts two signals from the eye area, measuring the quantity of static edges and of moving edges; blinks are then detected and characterized from these two signals. A comparison between the features extracted from the EOG and the same features extracted from the video analysis is then performed on a database of 14 different people. This study shows that some blink features extracted from the video are highly correlated with their EOG equivalents: the duration, the duration at 50%, the frequency, the percentage of eye closure at 80%, and the amplitude-velocity ratio. The influence of the frame rate on the accuracy of the extracted features is also studied and highlights the need for a high frame rate camera to detect and characterize blinks accurately from a video analysis.


Archive | 2009

Monitoring drowsiness on-line using a single encephalographic channel

Antoine Picot; Sylvie Charbonnier; Alice Caplier

In this paper, an on-line drowsiness detection algorithm using a single electroencephalographic (EEG) channel is presented. This algorithm is based on a means comparison test that detects changes in the alpha relative power (8-12 Hz band) and the beta relative power (12-20 Hz band). The detections on these two bands are then merged using fuzzy logic. The main advantage of the proposed method is that the detection threshold is completely independent of the driver and does not need to be tuned for each person. Artefact detection is also performed on the EEG signal to avoid false detections. The algorithm, which works on-line, has been tested on a large dataset representing 60 hours of driving and gives good results, with nearly 85% correct detections and 20% false alarms.


Multimedia Tools and Applications | 2014

Retina enhanced SURF descriptors for spatio-temporal concept detection

Sabin Tiberius Strat; Alexandre Benoit; Patrick Lambert; Alice Caplier

This paper proposes to investigate the potential benefit of the use of low-level human vision behaviors in the context of high-level semantic concept detection. A large part of the current approaches relies on the Bag-of-Words (BoW) model, which has proven itself to be a good choice especially for object recognition in images. Its extension from static images to video sequences exhibits some new problems to cope with, mainly the way to use the temporal information related to the concepts to detect (swimming, drinking...). In this study, we propose to apply a human retina model to preprocess video sequences before constructing the state-of-the-art BoW analysis. This preprocessing, designed in a way that enhances relevant information, increases the performance by introducing robustness to traditional image and video problems, such as luminance variation, shadows, compression artifacts and noise. Additionally, we propose a new segmentation method which enables a selection of low-level spatio-temporal potential areas of interest from the visual scene, without slowing the computation as much as a high-level saliency model would. These approaches are evaluated on the TrecVid 2010 and 2011 Semantic Indexing Task datasets, containing from 130 to 346 high-level semantic concepts. We also experiment with various parameter settings to check their effect on performance.
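The BoW step itself (independent of the retina preprocessing, which is applied to the frames beforehand) amounts to quantising local descriptors against a learned codebook. A minimal sketch, with a toy 2-D codebook standing in for one learned from SURF descriptors:

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and return
    the L1-normalised word-count histogram used as the clip signature."""
    # Squared Euclidean distance from every descriptor to every word.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

The histogram is what a downstream classifier consumes; the paper's contribution is in improving the descriptors fed into this step, not in the encoding itself.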


Annales Des Télécommunications | 2014

Video viewing: do auditory salient events capture visual attention?

Antoine Coutrot; Nathalie Guyader; Gelu Ionescu; Alice Caplier

We assess whether salient auditory events contained in soundtracks modify eye movements when exploring videos. In a previous study, we found that, on average, nonspatial sound contained in video soundtracks impacts on eye movements. This result indicates that sound could play a leading part in visual attention models to predict eye movements. In this research, we go further and test whether the effect of sound on eye movements is stronger just after salient auditory events. To automatically spot salient auditory events, we used two auditory saliency models: the discrete energy separation algorithm and the energy model. Both models provide a saliency time curve, based on the fusion of several elementary audio features. The most salient auditory events were extracted by thresholding these curves. We examined some eye movement parameters just after these events rather than on all the video frames. We showed that the effect of sound on eye movements (variability between eye positions, saccade amplitude, and fixation duration) was not stronger after salient auditory events than on average over entire videos. Thus, we suggest that sound could impact on visual exploration not only after salient events but in a more global way.
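Extracting the most salient events by thresholding a saliency time curve, as described above, can be sketched like this; the mean-plus-k-standard-deviations threshold is an illustrative choice, not necessarily the one used in the study:

```python
import numpy as np

def salient_event_onsets(saliency, k=1.0):
    """Return the indices where the saliency curve first crosses an
    adaptive threshold (one onset per supra-threshold run)."""
    s = np.asarray(saliency, dtype=float)
    threshold = s.mean() + k * s.std()
    above = s > threshold
    # A rising edge marks the start of a salient event.
    rising = above & ~np.r_[False, above[:-1]]
    return np.flatnonzero(rising)
```

Eye-movement parameters would then be examined in short windows following each onset, rather than over all frames.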


international conference on image processing | 2012

Adaptive appearance face tracking with alignment feedbacks

Weiyuan Ni; Alice Caplier

Adaptive appearance approaches are popular for tracking non-rigid objects such as faces. However, these approaches usually lack direct mechanisms for correcting spatial misalignments (e.g., translation, scaling, and rotation errors) in the tracking outputs. The unwanted errors are then accumulated in the target's appearance model, which inevitably has negative effects on tracking performance. Besides, many of these approaches rely on video-specific parameter settings. In this paper, we first adopt a self-adaptive dynamical model to predict target candidates, so our tracker is able to work with identical parameters in various situations. Moreover, we introduce a multi-view joint face alignment stage to decrease the impact of misalignment; the aligned faces are further used as feedback to update the appearance model. We test the proposed algorithm on outdoor surveillance videos and real-world YouTube videos. Experimental results demonstrate the effectiveness of our method in tracking faces under uncontrolled conditions.


Image and Vision Computing | 2012

Lucas-Kanade based entropy congealing for joint face alignment

Weiyuan Ni; Ngoc-Son Vu; Alice Caplier

Entropy Congealing is an unsupervised joint image alignment method in which the transformation parameters are obtained by minimizing a sum-of-entropy function. Our previous work presented a forward formulation of entropy Congealing that estimates all the transformation parameters at the same time. In this paper, we propose an inverse compositional Lucas-Kanade formulation of entropy Congealing. This yields constant parts of the Jacobian and Hessian that can be precomputed to decrease the computational complexity. Moreover, we combine Congealing with the POEM descriptor to capture more information about the face. Experimental results indicate that the proposed algorithm performs better than other alignment methods with regard to several evaluation criteria on different databases. Concerning complexity, the proposed algorithm is more efficient than the other approaches considered; compared to the forward formulation, the inverse method produces a speed improvement of 20%.


IFAC Proceedings Volumes | 2011

EOG-based drowsiness detection: Comparison between a fuzzy system and two supervised learning classifiers

Antoine Picot; Sylvie Charbonnier; Alice Caplier

Drowsiness is a serious problem which causes a large number of car crashes every year. This paper presents an original drowsiness detection method based on the fuzzy merging of several eye-blinking features extracted from an electrooculogram (EOG). These features are computed every second using a sliding window. The method is compared to two supervised learning classifiers: a prototype-based nearest-neighbour classifier and a multilayer perceptron. The comparison was carried out on a substantial database containing 60 hours of driving data from 20 different drivers. The proposed method reaches very good performance, with 82% true detections and 13% false alarms across 20 different drivers without tuning any parameters. The best results obtained by the supervised learning classifiers are only 72% true detections and 26% false alarms, which is far worse than the fuzzy method. It is shown that the fuzzy method outperforms the other methods because it can take into account the fact that drowsiness symptoms occur simultaneously and repetitively across the different features during the epoch to classify, which is important in the drowsiness decision-making process.
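A tiny fuzzy-merging sketch in the spirit of this approach: each blink feature is mapped to a [0, 1] "drowsy" membership with a ramp function, and the memberships are merged. The ramp breakpoints and the max (fuzzy OR) aggregation are illustrative assumptions, not the paper's tuned rules:

```python
def fuzzy_drowsiness(blink_duration_s, perclos80, blink_freq_hz):
    """Merge three blink features into a single [0, 1] drowsiness score."""
    def ramp(x, lo, hi):
        # Linear membership: 0 below lo, 1 above hi.
        return min(max((x - lo) / (hi - lo), 0.0), 1.0)

    mu_dur = ramp(blink_duration_s, 0.2, 0.5)  # long blinks -> drowsy
    mu_per = ramp(perclos80, 0.1, 0.3)         # high eye closure -> drowsy
    mu_frq = ramp(blink_freq_hz, 0.3, 0.8)     # frequent blinks -> drowsy
    return max(mu_dur, mu_per, mu_frq)         # fuzzy OR (max)
```

A strength of this style of fusion, echoed in the abstract, is that graded memberships let simultaneous, repeated symptoms reinforce each other without any per-driver threshold tuning.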

Collaboration

Top co-authors of Alice Caplier:

Sylvie Charbonnier (Centre national de la recherche scientifique)
Alexandre Benoit (Centre national de la recherche scientifique)
Nathalie Guyader (Centre national de la recherche scientifique)
Ngoc-Son Vu (University of Grenoble)
Weiyuan Ni (University of Grenoble)