Publication


Featured research published by Adel Lablack.


International Conference on Pattern Recognition | 2008

Analysis of human behaviour in front of a target scene

Adel Lablack; Chabane Djeraba

In this paper we present an application of computer vision techniques to obtain specific information about the behaviour of people passing in front of a target scene. This is done by analyzing videos captured by cameras monitoring an area under surveillance. The target scene can be a large plasma screen, a projected image, an advertising poster or a shop window. Examples of the type of information that can be extracted are the number of people passing through the area (possibly making a stop), those who are interested in the target scene (i.e. looking in its direction), and the specific locations of interest inside the target scene. Person detection counts the number of people and is followed by person tracking, which determines who is stopping and who is moving. Head pose estimation indicates whether or not they are looking at the target scene. Finally, the projection of the visual field extracts the locations of interest in the target scene. All these tasks are improved by taking into account environmental information.
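The four-stage pipeline described in the abstract (detection, tracking, head pose, interest statistics) can be sketched structurally as follows. This is a minimal illustration with placeholder logic, not the paper's actual detectors or trackers; the stopping threshold and field-of-view angle are hypothetical parameters.

```python
from dataclasses import dataclass, field

@dataclass
class Person:
    """One tracked person: per-frame positions and an estimated head yaw."""
    positions: list = field(default_factory=list)  # (x, y) per frame
    head_yaw: float = 0.0                          # degrees, 0 = facing the scene

def is_stopping(p, eps=1.0):
    """Tracking step: a person 'stops' if displacement between the last two frames is small."""
    if len(p.positions) < 2:
        return False
    (x0, y0), (x1, y1) = p.positions[-2], p.positions[-1]
    return ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 < eps

def is_looking(p, fov_deg=60.0):
    """Head-pose step: 'interested' if the yaw falls inside the scene's field of view."""
    return abs(p.head_yaw) < fov_deg / 2

def analyse(people):
    """Counting and interest statistics over a set of tracked people."""
    return {
        "passing": len(people),
        "stopping": sum(is_stopping(p) for p in people),
        "interested": sum(is_looking(p) for p in people),
    }
```

A projection of each interested person's visual field onto the scene plane would then yield the locations of interest, the final step the abstract describes.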


International Conference on Pattern Recognition | 2010

Visual Gaze Estimation by Joint Head and Eye Information

Roberto Valenti; Adel Lablack; Nicu Sebe; Chaabane Djeraba; Theo Gevers

In this paper, we present an unconstrained visual gaze estimation system. The proposed method extracts the visual field of view of a person looking at a target scene in order to estimate the approximate location of interest (visual gaze). The novelty of the system is the joint use of head pose and eye location information to fine-tune the visual gaze estimated from the head pose alone, so that the system can be used in multiple scenarios. The improvements obtained by the proposed approach are validated using the Boston University head pose dataset, on which the standard deviation of the joint visual gaze estimation improved by 61.06% horizontally and 52.23% vertically with respect to the gaze estimation obtained from the head pose alone. A user study shows the potential of the proposed system.
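The fusion idea can be sketched in one formula: the head pose supplies a coarse gaze angle, and normalized eye-centre offsets refine it. This is an illustrative sketch, not the paper's calibration; the `gain` parameter is a hypothetical mapping from eye offset to degrees.

```python
def joint_gaze(head_yaw_deg, head_pitch_deg, eye_dx, eye_dy, gain=30.0):
    """Refine a head-pose-only gaze estimate (in degrees) with eye-centre
    offsets normalized to [-1, 1]; `gain` converts offsets to degrees."""
    return (head_yaw_deg + gain * eye_dx,
            head_pitch_deg + gain * eye_dy)
```

With `eye_dx = eye_dy = 0` (eyes centred), the estimate reduces to the head-pose-only gaze, which matches the fallback behaviour the abstract describes when eye information is unavailable.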


Archive | 2010

Multi-Modal User Interactions in Controlled Environments

Chaabane Djeraba; Adel Lablack; Yassine Benabbas

Multi-Modal User Interactions in Controlled Environments investigates the capture and analysis of users' multimodal behavior (mainly eye gaze, eye fixation, eye blink and body movements) within a real controlled environment (controlled supermarket, personal environment) in order to adapt the response of the computer/environment to the user. Such data is captured using non-intrusive sensors (for example, cameras in the stands of a supermarket) installed in the environment. This multi-modal, video-based behavioral data is analyzed to infer user intentions while assisting users in their day-to-day tasks by adapting the system's response to their requirements seamlessly. This book also focuses on the presentation of information to the user. Multi-Modal User Interactions in Controlled Environments is designed for professionals in industry, including professionals in the domains of security and interactive web television. This book is also suitable for graduate-level students in computer science and electrical engineering.


International Workshop on FFER (Face and Facial Expression Recognition from Real World Videos) - ICPR 2014 | 2014

Positive/Negative Emotion Detection from RGB-D upper Body Images

Lahoucine Ballihi; Adel Lablack; Boulbaba Ben Amor; Ioan Marius Bilasco; Mohamed Daoudi

The ability to identify users' mental states represents a valuable asset for improving human-computer interaction. Considering that spontaneous emotions are conveyed mostly through facial expressions and upper body movements, we propose to use these modalities together for the purpose of negative/positive emotion classification. A method that allows the recognition of mental states from videos is proposed. Based on a dataset composed of RGB-D movies, a set of indicators of positive and negative emotions is extracted from 2D (RGB) information. In addition, a geometric framework to model the depth flows and capture human body dynamics from depth data is proposed. Due to the temporal changes in pixel and depth intensity that characterize spontaneous-emotion datasets, the depth features are used to define the relation between changes in upper body movements and affect. We describe a space of depth and texture information to detect the mood of people using upper body postures and their evolution across time. The experimentation has been performed on the Cam3D dataset and has shown promising results.


International Conference on Pattern Recognition | 2014

A Local Approach for Negative Emotion Detection

Adel Lablack; Taner Danisman; Ioan Marius Bilasco; Chaabane Djeraba

Recognizing human facial expressions and emotions by computer is an interesting and challenging problem. In this paper, we propose a method for recognizing negative emotions through an appropriate representation of facial features from relevant face regions displayed in video streams and still images. A measure that is sensitive to facial movements is applied in predefined regions of interest to detect negative emotions. The experimentation has been performed on a standard dataset and live video streams and has shown promising results.


Archive | 2010

Head Pose Estimation Using a Texture Model Based on Gabor Wavelets

Adel Lablack; Jean Martinet; Chabane Djeraba

Head pose estimation from a monocular camera or a single image is a challenging topic. It is the process of inferring the orientation of a human head from digital imagery. Several processing steps are performed in order to transform a pixel-based representation of the head into a high-level concept of direction. The head pose is important in many domains such as human-computer interfaces, video conferencing or driver monitoring. Head pose estimation is often linked with visual gaze estimation (Lablack et al., 2009), which is the ability to characterize the direction and focus of attention of a person looking at a poster (Smith et al., 2008) or at another person during meeting scenarios (Voit & Stiefelhagen, 2008), for example. The head pose provides a coarse indication of the gaze that can be estimated in situations where the eyes of a person are not visible (such as low-resolution imagery, or in the presence of eye-occluding objects like sunglasses). When the eyes are visible, head pose becomes a requirement to accurately predict gaze direction (Valenti et al., 2009). The aim of our work is to analyze the behaviour of people passing in front of a target scene (Lablack & Djeraba, 2008) in order to extract each person's locations of interest. The success of this kind of system highly depends upon a correct estimation of the head pose. In this paper, we present a template-based approach which considers head pose estimation as an image classification problem. The Pointing database (Gourier et al., 2004) has been used to build and test our head pose model. The feature vectors of different persons taken at the same pose serve to learn a head pose classifier. The texture model is learned from feature vectors composed of properties extracted from the real, imaginary and magnitude responses of Gabor wavelets (due to the evolution of the head pose in orientation) and Singular Value Decomposition (SVD).
Head pose estimation is then applied on the testing dataset. Finally, the classification accuracy is compared to state-of-the-art results obtained on the Pointing database. The paper is organized as follows. First, we highlight in Section 2 relevant works in head pose estimation. We then describe the method used for head pose estimation and the associated database in Section 3. Sections 4 and 5 provide two representations of feature vectors extracted from SVD and the three different responses of Gabor wavelets, to which we apply two supervised learning methods (SVM and KNN) and the Frobenius distance.
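The feature-extraction scheme described here (Gabor responses plus SVD, fed to a nearest-neighbour classifier) can be sketched in NumPy. This is a minimal illustration, not the chapter's implementation: the kernel size, wavelength, orientations and number of singular values kept are assumptions, and a simple 1-NN classifier stands in for the SVM/KNN comparison.

```python
import numpy as np

def gabor_kernel(size, theta, lam=4.0, sigma=2.0):
    """Complex Gabor kernel: Gaussian envelope times a complex sinusoid
    oriented at angle `theta` with wavelength `lam`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2))
    return envelope * np.exp(2j * np.pi * xr / lam)

def filter_response(image, kernel):
    """Circular convolution of the image with the kernel via the FFT."""
    padded = np.zeros(image.shape, dtype=complex)
    padded[:kernel.shape[0], :kernel.shape[1]] = kernel
    return np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(padded))

def pose_features(image, thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4), k=5):
    """Feature vector: top-k singular values of the real, imaginary and
    magnitude responses, for each filter orientation (4 * 3 * k values)."""
    feats = []
    for theta in thetas:
        resp = filter_response(image, gabor_kernel(9, theta))
        for channel in (resp.real, resp.imag, np.abs(resp)):
            feats.extend(np.linalg.svd(channel, compute_uv=False)[:k])
    return np.array(feats)

def knn_predict(train_feats, train_labels, query):
    """1-NN pose classification by Euclidean distance in feature space."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    return train_labels[int(np.argmin(dists))]
```

In the template-based setting, `train_feats` would hold one feature vector per (person, pose) image from the Pointing database, with `train_labels` encoding the discrete pan/tilt class.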


International Journal of Multimedia Information Retrieval | 2017

OVIS: ontology video surveillance indexing and retrieval system

Mohammed Yassine Kazi Tani; Abdelghani Ghomari; Adel Lablack; Ioan Marius Bilasco

Nowadays, the diversity and large deployment of video recorders result in a large volume of video data, whose effective use requires a video indexing process. However, this process suffers from a major problem: the semantic gap between the extracted low-level features and the ground truth. The ontology paradigm provides a promising solution to overcome this problem. However, no naming syntax convention has been followed in the concept-creation step, which constitutes another problem. In this paper, we consider these two issues and develop a full video surveillance ontology following a formal naming syntax convention and semantics that addresses queries from both academic research and industrial applications. In addition, we propose an ontology-based video surveillance indexing and retrieval system (OVIS) using a set of Semantic Web Rule Language (SWRL) rules that bridges the semantic gap. Currently, existing indexing systems are essentially based on low-level features, and the ontology paradigm is used only to support this process by representing the surveillance domain. We developed the OVIS system based on SWRL rules, and the experiments show that our approach leads to promising results on top video evaluation benchmarks and points to new directions for future developments.
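The idea of bridging the semantic gap with rules can be illustrated with a toy sketch: low-level detector outputs are lifted to a high-level event by a co-occurrence rule. This is a Python analogy with hypothetical concept names; the actual OVIS system expresses such rules in SWRL over an OWL ontology.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One low-level detector output for a video frame."""
    label: str   # e.g. "person", "bag"
    frame: int

def apply_rules(detections):
    """Map co-occurring low-level detections to high-level events,
    mimicking what an SWRL rule infers over ontology individuals."""
    by_frame = {}
    for d in detections:
        by_frame.setdefault(d.frame, set()).add(d.label)
    events = []
    for frame in sorted(by_frame):
        # Rule (SWRL-style): Person(?p) ^ Bag(?b) ^ sameFrame(?p, ?b)
        #                    -> CarryingObject(?p)
        if {"person", "bag"} <= by_frame[frame]:
            events.append(("CarryingObject", frame))
    return events
```

Indexing the video by the inferred events, rather than by the raw detections, is what lets semantic queries ("find carrying-object scenes") be answered directly.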


IEEE PSIVT-CVIM'09 | 2009

Gaze Based Quality Assessment of Visual Media Understanding

Jean Martinet; Adel Lablack; Stanislas Lew; Chabane Djeraba


2017 Computing Conference | 2017

An audio indexing and retrieval approach using a video surveillance ontology

Mohammed Yassine Kazi Tani; Abdelghani Ghomari; Lamia Fatiha Dali Youcef; Adel Lablack; Ioan Marius Bilasco


EmpaTex - ACM TVX Workshop | 2014

Data analysis of TV Shows viewers

Ioan Marius Bilasco; Adel Lablack; Taner Danisman

Collaboration


Dive into Adel Lablack's collaborations.

Top Co-Authors

Ioan Marius Bilasco (Laboratoire d'Informatique Fondamentale de Lille)

Chaabane Djeraba (Laboratoire d'Informatique Fondamentale de Lille)

Yassine Benabbas (University of Science and Technology)