Nacim Ihaddadene
University of Lille
Publications
Featured research published by Nacim Ihaddadene.
2010 International Conference on Machine and Web Intelligence | 2010
Taner Danisman; Ioan Marius Bilasco; Chabane Djeraba; Nacim Ihaddadene
This paper presents an automatic drowsy-driver monitoring and accident prevention system based on monitoring changes in eye blink duration. Our proposed method detects visual changes in eye locations using the proposed horizontal symmetry feature of the eyes. Our new method detects eye blinks via a standard webcam in real time at 110 fps for a 320×240 resolution. Experimental results on the ZJU [3] eye-blink database show that the proposed system detects eye blinks with 94% accuracy and a 1% false-positive rate.
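The blink-counting idea in this abstract (a blink is a short dip in eye openness, and its duration is the monitored quantity) can be sketched as follows. This is a minimal illustration on a synthetic per-frame eye-openness signal, not the paper's horizontal-symmetry detector; the function name, threshold, and signal values are assumptions for illustration only.

```python
import numpy as np

def detect_blinks(openness, threshold=0.5, min_frames=2):
    """Detect blinks in a per-frame eye-openness signal.

    A blink is a run of at least `min_frames` consecutive frames where
    openness drops below `threshold`. Returns (start_frame, duration)
    tuples, so blink duration can be monitored as in the abstract.
    """
    blinks = []
    run_start = None
    for i, v in enumerate(openness):
        if v < threshold:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= min_frames:
                blinks.append((run_start, i - run_start))
            run_start = None
    if run_start is not None and len(openness) - run_start >= min_frames:
        blinks.append((run_start, len(openness) - run_start))
    return blinks

# Synthetic signal: eyes open (1.0) with two blinks of 3 and 5 frames.
signal = np.ones(30)
signal[5:8] = 0.1    # blink 1: frames 5-7
signal[15:20] = 0.2  # blink 2: frames 15-19
print(detect_blinks(signal))  # -> [(5, 3), (15, 5)]
```

In a live setting the openness signal would come from a per-frame eye detector; the drowsiness decision would then be a threshold on the returned durations.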
acm multimedia | 2006
Chabane Djeraba; Stanislas Lew; Dan A. Simovici; Sylvain Mongy; Nacim Ihaddadene
Our demo focuses on eye tracking over web, image, and video data. We use state-of-the-art measurements, such as scan path, to determine how the user views web documents, images, and videos. Our approach is characterised by automatic eye/gaze tracking of web, image, and video documents with non-intrusive sensors, mainly infrared cameras. Our analysis of eye/gaze tracking concerns spatial regions of static documents (images and web pages) and spatial zones of dynamic documents (video, sequences of hyperlinked web pages). In the context of dynamic documents, eye/gaze tracking is processed image-by-image in video documents and page-by-page in web documents. The results are rougher on video and more accurate on images and hyperlinked web pages. Eye/gaze tracking on video is relatively new and unexplored in the literature.
2010 International Conference on Machine and Web Intelligence | 2010
Rémi Auguste; Ahmed El Ghini; Marius Bilasco; Nacim Ihaddadene; Chabane Djeraba
The analysis and interpretation of video contents is an important component of modern vision applications such as surveillance, motion synthesis and web-based user interfaces. A requirement shared by these very different applications is the ability to learn statistical models of appearance and motion from a collection of videos, and then use them for recognizing actions or persons in a new video. Measuring the similarity and dissimilarity between video sequences is crucial in any video sequence analysis and decision-making process. Furthermore, many data analysis processes effectively deal with moving objects and need to compute the similarity between trajectories. In this paper, we propose a similarity measure for multivariate time series using the Euclidean distance based on Vector Autoregressive (VAR) models. The proposed approach allows us to identify and recognize actions of persons in video sequences. The performance of our methodology is tested on a real dataset.
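The core measure described here (fit a VAR model to each multivariate series, then compare series via the Euclidean distance between fitted coefficients) can be sketched as follows. This is a minimal VAR(1) illustration with a least-squares fit on simulated data; the paper's exact model order, estimator, and dataset are not reproduced, and all names and parameters below are assumptions for illustration.

```python
import numpy as np

def var1_coefficients(x):
    """Least-squares fit of a VAR(1) model x_t = A x_{t-1} + e_t.

    x: array of shape (T, d). Returns the (d, d) coefficient matrix A.
    """
    past, future = x[:-1], x[1:]
    # Solve future ≈ past @ A.T in the least-squares sense.
    A_T, *_ = np.linalg.lstsq(past, future, rcond=None)
    return A_T.T

def var_distance(x, y):
    """Euclidean distance between the VAR(1) coefficient matrices."""
    return float(np.linalg.norm(var1_coefficients(x) - var1_coefficients(y)))

rng = np.random.default_rng(0)

def simulate(A, T=500):
    """Simulate a stable VAR(1) process with small Gaussian noise."""
    x = np.zeros((T, A.shape[0]))
    for t in range(1, T):
        x[t] = A @ x[t - 1] + rng.normal(scale=0.1, size=A.shape[0])
    return x

A = np.array([[0.9, 0.0], [0.1, 0.8]])
x1, x2 = simulate(A), simulate(A)                     # same dynamics
x3 = simulate(np.array([[0.2, 0.5], [0.5, 0.2]]))     # different dynamics
# Series generated by the same dynamics should be closer.
print(var_distance(x1, x2) < var_distance(x1, x3))  # -> True
```

For action recognition, each video sequence would contribute one feature trajectory, and classification could be nearest-neighbour under this distance.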
Journal of Multimedia | 2010
Md. Haidar Sharif; Nacim Ihaddadene; Chaabane Djeraba
We propose a methodology that first extracts features from video streams and detects eccentric events in a crowded environment. Afterwards, eccentric events are indexed for retrieval. The motivation of the framework is the discrimination of features that are independent of the application domains. Low-level as well as mid-level features are generic and independent of the type of abnormality. High-level features are dependent and used to detect eccentric events, whereas both mid-level and high-level features are run through the indexing scheme for retrieval. To demonstrate the interest of the methodology, we primarily present results obtained on collapsing events in real videos collected by a single camera installed at airport escalator exits to monitor those regions.
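The low-level-feature-to-event-detection pipeline described above can be sketched with a deliberately simple stand-in: per-frame motion energy as the generic low-level feature and a statistical threshold as the event detector. This is not the paper's feature set or detector; the names, the 3-sigma rule, and the synthetic frames are assumptions for illustration only.

```python
import numpy as np

def motion_energy(frames):
    """Generic low-level feature: mean absolute difference between
    consecutive grayscale frames (one value per frame transition)."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.mean(axis=(1, 2))

def flag_anomalies(energy, k=3.0):
    """Flag transitions whose motion energy exceeds mean + k * std,
    a crude stand-in for eccentric-event detection."""
    mu, sigma = energy.mean(), energy.std()
    return np.flatnonzero(energy > mu + k * sigma)

rng = np.random.default_rng(1)
frames = rng.integers(0, 10, size=(50, 16, 16))  # low, steady motion
frames[30] += 200                                # sudden burst at frame 30
energy = motion_energy(frames)
print(flag_anomalies(energy))  # transitions into and out of frame 30
```

The flagged indices would then feed the indexing scheme, so that detected events can later be retrieved by time and feature values.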
international conference on computer vision theory and applications | 2018
R. Belmonte; Chaabane Djeraba; Pierre Tirilly; Nacim Ihaddadene; Marius Bilasco
Face alignment is an essential task for many applications. Its objective is to locate feature points on the face in order to identify its geometric structure. Under unconstrained conditions, the different variations that may occur in the visual context, together with the instability of face detection, make it a difficult problem to solve. While many methods have been proposed, their performance under these constraints is still not satisfactory. In this article, we claim that face alignment should be studied using image sequences rather than still images, as has been done so far. We show the importance of taking temporal information into consideration under unconstrained conditions.
computer vision and pattern recognition | 2009
Yassine Benabbas; Nacim Ihaddadene; Chabane Djeraba
CORESA | 2017
Romain Belmonte; Nacim Ihaddadene; Pierre Tirilly; Ioan Marius Bilasco; Chaabane Djeraba
Archive | 2011
Yassine Benabbas; Nacim Ihaddadene; Jacques Boonaert; Anis Chaari; Chaabane Djeraba
Extraction et Gestion des Connaissances (EGC), Tunisia | 2010
Yassine Benabbas; Nacim Ihaddadene; Thierry Urruty; Chabane Djeraba
9th International Symposium on Programming and Systems (ISPS 2009) | 2009
Nacim Ihaddadene; Adel Lablack; Chaabane Djeraba