Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hazem Wannous is active.

Publication


Featured research published by Hazem Wannous.


International Conference on Image Analysis and Processing | 2013

Space-Time Pose Representation for 3D Human Action Recognition

Maxime Devanne; Hazem Wannous; Stefano Berretti; Pietro Pala; Mohamed Daoudi; Alberto Del Bimbo

3D human action recognition is an important current challenge at the heart of many research areas involving the modeling of spatio-temporal information. In this paper, we propose representing human actions using spatio-temporal motion trajectories. In the proposed approach, each trajectory consists of one motion channel corresponding to the evolution of the 3D position of all joint coordinates within frames of the action sequence. Action recognition is achieved through a shape trajectory representation learnt by a K-NN classifier, which benefits from Riemannian geometry in an open-curve shape space. Experiments on the MSR Action 3D and UTKinect human action datasets show that, in comparison to state-of-the-art methods, the proposed approach obtains promising results.
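
The trajectory-based pipeline can be sketched in a few lines, under simplifying assumptions: the square-root velocity (SRV) map sends each joint trajectory into the open-curve shape space, an L2 distance between SRV maps stands in for the elastic distance (the paper additionally searches over reparametrizations, which this sketch omits), and a 1-NN rule classifies actions. All function and variable names here are illustrative, not taken from the paper's code.

```python
import numpy as np

def srv(curve):
    """Square-root velocity (SRV) representation of an open curve.
    curve: (T, d) array of points sampled along a joint trajectory."""
    vel = np.gradient(curve, axis=0)                  # discrete velocity
    norms = np.linalg.norm(vel, axis=1, keepdims=True)
    return vel / np.sqrt(np.maximum(norms, 1e-8))     # q(t) = v(t) / sqrt(|v(t)|)

def curve_distance(c1, c2):
    """L2 distance between SRV maps: a simple stand-in for the elastic
    distance in open-curve shape space (no reparametrization search)."""
    return np.linalg.norm(srv(c1) - srv(c2)) / np.sqrt(len(c1))

def nn_classify(query, gallery, labels):
    """1-NN action classification by smallest curve distance."""
    dists = [curve_distance(query, g) for g in gallery]
    return labels[int(np.argmin(dists))]
```

Because the SRV map depends only on velocity, two trajectories that differ by a translation are already identified; the full framework also quotients out rotation and reparametrization.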


IEEE Transactions on Medical Imaging | 2011

Enhanced Assessment of the Wound-Healing Process by Accurate Multiview Tissue Classification

Hazem Wannous; Yves Lucas; Sylvie Treuillet

With the widespread use of digital cameras, freehand wound imaging has become common practice in clinical settings. There is however still a demand for a practical tool for accurate wound healing assessment, combining dimensional measurements and tissue classification in a single user-friendly system. We achieved the first part of this objective by computing a 3-D model for wound measurements using uncalibrated vision techniques. We focus here on tissue classification from color and texture region descriptors computed after unsupervised segmentation. Due to perspective distortions, uncontrolled lighting conditions and view points, wound assessments vary significantly between patient examinations. The main contribution of this paper is to overcome this drawback with a multiview strategy for tissue classification, relying on a 3-D model onto which tissue labels are mapped and classification results merged. The experimental classification tests demonstrate that enhanced repeatability and robustness are obtained and that metric assessment is achieved through real area and volume measurements and wound outline extraction. This innovative tool is intended for use not only in therapeutic follow-up in hospitals but also for telemedicine purposes and clinical research, where repeatability and accuracy of wound assessment are critical.
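
The multiview strategy can be illustrated with a minimal sketch: tissue labels predicted for the same region of the 3D model from several viewpoints are merged, here by simple majority vote. The function names, the region-to-label data layout, and the tie-breaking policy are illustrative assumptions, not the paper's actual fusion rule.

```python
from collections import Counter

def fuse_view_labels(per_view_labels):
    """Majority-vote fusion of the tissue labels predicted for one
    region of the 3D wound model from different viewpoints."""
    return Counter(per_view_labels).most_common(1)[0][0]

def fuse_model(region_to_views):
    """Fuse labels for every region of the model.
    region_to_views: {region_id: [label from view 1, label from view 2, ...]}"""
    return {r: fuse_view_labels(v) for r, v in region_to_views.items()}
```

Merging per-view predictions on the shared 3D model is what buys the repeatability the abstract reports: a single misclassified view is outvoted rather than propagated.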


International Conference of the IEEE Engineering in Medicine and Biology Society | 2007

Supervised Tissue Classification from Color Images for a Complete Wound Assessment Tool

Hazem Wannous; Sylvie Treuillet; Yves Lucas

This work is part of the ESCALE project, dedicated to the design of a complete 3D and color wound assessment tool using a simple free-handled digital camera. The first part was concerned with the computation of a 3D model for wound measurements using uncalibrated vision techniques. This paper presents the second part, which deals with color classification of wound tissues, a necessary step before combining shape and color analysis in a single tool for real tissue surface measurements. As direct pixel classification proved inefficient for wound tissue labeling, we have adopted an original approach based on unsupervised segmentation prior to classification, which improves the robustness of the labeling step by considering spatial continuity and homogeneity. A ground truth is first provided by merging the images collected and labeled by clinicians. Then, color and texture tissue descriptors are extracted from labeled regions of this learning database to design an SVM region classifier, achieving an 88% overlap score. Finally, we apply unsupervised color region segmentation to test images and classify the resulting regions. Compared to the ground truth, segmentation-driven classification and clinician labeling achieve similar performance: around 75% for granulation and 60% for slough.
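
A hedged sketch of the region-classification step: each segmented region is summarized by a color/texture descriptor and fed to an SVM. The descriptor (mean RGB plus per-channel standard deviation) and the synthetic "granulation"/"slough" training colors below are toy assumptions standing in for the paper's clinician-labeled learning database.

```python
import numpy as np
from sklearn.svm import SVC

def region_descriptor(pixels):
    """Toy color/texture descriptor for one segmented region:
    per-channel mean RGB plus per-channel standard deviation.
    pixels: (N, 3) array of RGB values in [0, 1]."""
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

# Hypothetical training set: regions sampled around typical granulation
# (red) and slough (yellow) colors, in place of real labeled regions.
rng = np.random.default_rng(0)
regions = [rng.normal([0.8, 0.2, 0.2], 0.05, (200, 3)) for _ in range(20)] \
        + [rng.normal([0.8, 0.8, 0.3], 0.05, (200, 3)) for _ in range(20)]
y = ["granulation"] * 20 + ["slough"] * 20
X = np.array([region_descriptor(r) for r in regions])

clf = SVC(kernel="rbf").fit(X, y)
```

Classifying whole regions rather than single pixels is the key design choice: the descriptor averages out pixel noise and encodes local homogeneity, which is why the segmentation-driven approach outperforms direct pixel labeling.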


ACM Multimedia | 2010

The IMMED project: wearable video monitoring of people with age dementia

Rémi Mégret; Vladislavs Dovgalecs; Hazem Wannous; Svebor Karaman; Jenny Benois-Pineau; Elie Khoury; Julien Pinquier; Philippe Joly; Régine André-Obrecht; Yann Gaëstel; Jean-François Dartigues

In this paper, we describe a new application for multimedia indexing, using a system that monitors the instrumental activities of daily living to assess the cognitive decline caused by dementia. The system is composed of a wearable camera device designed to capture audio and video data of a patient's instrumental activities, which is leveraged with multimedia indexing techniques to allow medical specialists to efficiently analyze observation shots several hours long.


Journal of Electronic Imaging | 2010

Robust tissue classification for reproducible wound assessment in telemedicine environments

Hazem Wannous; Sylvie Treuillet; Yves Lucas

In telemedicine environments, a standardized and reproducible assessment of wounds, using a simple free-handled digital camera, is an essential requirement. However, to ensure robust tissue classification, particular attention must be paid to the complete design of the color processing chain. We introduce the key steps, including color correction, merging of expert labeling, and segmentation-driven classification based on support vector machines. The tool thus developed ensures stability under lighting, viewpoint, and camera changes, to achieve accurate and robust classification of skin tissues. Clinical tests demonstrate that such an advanced tool, which forms part of a complete 3-D and color wound assessment system, significantly improves the monitoring of the healing process. It achieves an overlap score of 79.3% against 69.1% for a single expert, after mapping onto the medical reference developed from image labeling by a college of experts.


International Conference on Pattern Recognition | 2014

Grassmannian Representation of Motion Depth for 3D Human Gesture and Action Recognition

Rim Slama; Hazem Wannous; Mohamed Daoudi

Recently developed commodity depth sensors open up new possibilities for dealing with rich descriptors that capture geometrical features of the observed scene. Here, we propose an original approach for representing geometrical features extracted from depth motion space, capturing both the geometric appearance and the dynamics of the human body simultaneously. In this approach, sequence features are modeled temporally as subspaces lying on the Grassmann manifold. The classification task is carried out by computing probability density functions on the tangent space of each class, taking benefit from the geometric structure of the Grassmann manifold. The experimental evaluation is performed on three existing datasets containing various challenges: MSR-Action3D, UT-Kinect, and MSR-Gesture3D. Results reveal that our approach outperforms state-of-the-art methods, with accuracies of 98.21% on MSR-Gesture3D and 95.25% on UT-Kinect, and achieves a competitive 86.21% on MSR-Action3D.
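
The Grassmannian representation can be sketched as follows: the frame descriptors of one sequence are stacked into a matrix, whose leading left singular vectors span a k-dimensional subspace, i.e. a point on the Grassmann manifold Gr(k, d); subspaces are then compared via principal angles. This is a minimal sketch of the representation only; the paper's classifier additionally builds densities on tangent spaces, which is not shown.

```python
import numpy as np

def subspace(features, k):
    """Model a sequence of frame features as a k-dimensional subspace,
    a point on the Grassmann manifold Gr(k, d).
    features: (T, d) matrix, one d-dimensional descriptor per frame."""
    u, _, _ = np.linalg.svd(features.T, full_matrices=False)
    return u[:, :k]                       # orthonormal basis, shape (d, k)

def grassmann_distance(a, b):
    """Geodesic distance from the principal angles between two
    subspaces given by orthonormal basis matrices a and b."""
    s = np.linalg.svd(a.T @ b, compute_uv=False)
    theta = np.arccos(np.clip(s, -1.0, 1.0))
    return np.linalg.norm(theta)
```

Because a subspace is invariant to the choice of basis, two executions of the same gesture with different speeds or frame counts can still map to nearby points on the manifold.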


International Conference on Image Processing | 2008

A complete 3D wound assessment tool for accurate tissue classification and measurement

Hazem Wannous; Yves Lucas; Sylvie Treuillet; Benjamin Albouy

This paper presents a complete 3D and color wound assessment tool, designed within the ESCALE project using a simple free-handled digital camera. By combining a 3D model of the captured wound images, obtained using uncalibrated vision techniques, with unsupervised tissue segmentation, it gives access to enhanced tissue classification and measurement. As a result, the tissue classification is directly mapped onto the mesh surface of the wound to measure real tissue growth and changes. Clinical tests demonstrate that the monitoring of the healing process is very accurate compared to single-view analysis.


Journal of Electronic Imaging | 2012

Improving color correction across camera and illumination changes by contextual sample selection

Hazem Wannous; Yves Lucas; Sylvie Treuillet; Alamin Mansouri; Yvon Voisin

In many machine vision applications, it is important that the recorded colors of a real-world scene remain constant, even under changes of illuminant and camera. Contrary to the human vision system, a machine vision system exhibits inadequate adaptability to variations in lighting conditions. The automatic white balance control available in commercial cameras is not sufficient to provide reproducible color classification. We address this problem of color constancy on a large image database acquired with varying digital cameras and lighting conditions. A device-independent color representation may be obtained by applying a chromatic adaptation transform, from a calibrated color checker pattern included in the field of view. Instead of using the standard Macbeth color checker, we suggest selecting judicious colors to design a customized pattern from contextual information. A comparative study demonstrates that this approach ensures stronger constancy of the colors of interest before vision control, thus enabling a wide variety of applications.
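
One common way to realize such a checker-based correction, shown here as a hedged sketch rather than the paper's exact method, is to fit a 3x3 linear transform by least squares from the patch colors measured in the image to their known reference values, then apply it to every pixel.

```python
import numpy as np

def fit_color_transform(measured, reference):
    """Least-squares fit of a 3x3 linear chromatic adaptation transform
    mapping patch colors measured under the current camera/illuminant
    to their reference values. measured, reference: (N, 3) RGB arrays."""
    M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return M

def apply_transform(pixels, M):
    """Correct an (N, 3) array of RGB pixels: corrected = pixels @ M."""
    return pixels @ M
```

With N patches and only 9 unknowns the fit is overdetermined, which is exactly why the choice of patch colors matters: patches close to the colors of interest concentrate the fitting accuracy where the application needs it, the motivation for the customized pattern.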


Image and Vision Computing | 2014

3D human motion analysis framework for shape similarity and retrieval

Rim Slama; Hazem Wannous; Mohamed Daoudi

3D shape similarity from video is a challenging problem lying at the heart of many primary research areas in computer graphics and computer vision. In this paper, we address within a new framework the problem of 3D shape representation and shape similarity in human video sequences. Our shape representation is formulated using an extremal human curve (EHC) descriptor extracted from the body surface. It benefits from Riemannian geometry in the open-curve shape space and therefore allows computing statistics on that space. It also allows subject pose comparison regardless of geometrical transformations and elastic surface changes. Shape similarity is computed by an efficient method which takes advantage of the compact EHC representation in open-curve shape space and an elastic distance measure. Thanks to these assets, several applications of human action analysis are demonstrated: shape similarity computation, video sequence comparison, video segmentation, video clustering, summarization, and motion retrieval. Experiments on both synthetic and real 3D human video sequences show that our approach provides accurate static and temporal shape similarity for pose retrieval in video, compared with state-of-the-art approaches. Moreover, local 3D video retrieval is performed using motion segmentation and the dynamic time warping (DTW) algorithm in the feature vector space. The obtained results are promising and show the potential of this approach.
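
The retrieval step rests on DTW in the feature vector space; a minimal textbook implementation is sketched below. It assumes each sequence is an array of per-frame feature vectors with a Euclidean frame-to-frame cost, a simplification of the EHC features the paper actually uses.

```python
import numpy as np

def dtw(seq_a, seq_b):
    """Dynamic time warping distance between two feature sequences,
    each a (T, d) array of per-frame feature vectors."""
    na, nb = len(seq_a), len(seq_b)
    D = np.full((na + 1, nb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[na, nb]
```

DTW tolerates differences in execution speed between the query and the stored clip, which is what makes local retrieval of the same motion performed at different tempos possible.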


Pattern Recognition | 2017

Motion segment decomposition of RGB-D sequences for human behavior understanding

Maxime Devanne; Stefano Berretti; Pietro Pala; Hazem Wannous; Mohamed Daoudi; Alberto Del Bimbo

In this paper, we propose a framework for analyzing and understanding human behavior from depth videos. The proposed solution first employs shape analysis of the human pose across time to decompose the full motion into short temporal segments representing elementary motions. Then, each segment is characterized by human motion and depth appearance around hand joints, to describe the change in pose of the body and the interaction with objects. Finally, the sequence of temporal segments is modeled through a Dynamic Naive Bayes classifier, which captures the dynamics of the elementary motions characterizing human behavior. Experiments on four challenging datasets evaluate the potential of the proposed approach in different contexts, including gesture and activity recognition and online activity detection. Competitive results in comparison with state-of-the-art methods are reported.
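
The final modeling stage can be approximated by a much simpler stand-in: treat each elementary motion as a discrete symbol, learn a per-class transition model over symbol sequences, and classify a new sequence by likelihood. This is a deliberately reduced sketch (a first-order Markov chain, not the paper's Dynamic Naive Bayes classifier), with illustrative function names.

```python
import numpy as np

def fit_transitions(sequences, n_symbols):
    """Laplace-smoothed first-order transition matrix over
    elementary-motion symbols for one behavior class."""
    T = np.ones((n_symbols, n_symbols))      # add-one smoothing
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            T[a, b] += 1
    return T / T.sum(axis=1, keepdims=True)

def log_likelihood(seq, T):
    """Log-probability of a symbol sequence under a transition model."""
    return sum(np.log(T[a, b]) for a, b in zip(seq, seq[1:]))

def classify(seq, class_models):
    """Assign the class whose model best explains the sequence."""
    return max(class_models, key=lambda c: log_likelihood(seq, class_models[c]))
```

Modeling transitions between segments, rather than the raw frames, is what lets such a classifier run online: a partial sequence already has a likelihood under every class.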

Collaboration


Dive into Hazem Wannous's collaborations.

Top Co-Authors

Yves Lucas

University of Orléans

Pietro Pala

University of Florence
