
Publication


Featured research published by Arsalane Zarghili.


Journal of Real-Time Image Processing | 2018

Blocking artifact removal using partial overlapping based on exact Legendre moments computation

Zaineb Bahaoui; Khalid Zenkouar; Hakim El Fadili; Hassan Qjidaa; Arsalane Zarghili

In this paper, we present the design of a partial overlapping block method using exact Legendre moment computation (POBRELM) for gray-level image reconstruction. We address and solve the problem of artifacts caused by the independent reconstruction of blocks, which degrades the visual image quality. The new approach exploits only partial information from the neighbors of each block instead of using global overlapping. Processing a smaller neighborhood area during the exact computation of Legendre moments leads to a significant improvement in processing time. Simulation results show that the proposed method achieves not only a considerable enhancement in terms of reconstruction error, but also a drastic reduction in computation time. This makes our method attractive for real-time applications.
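As a rough illustration of the idea behind block-wise Legendre-moment reconstruction, the sketch below (an assumption-laden NumPy sketch, not the authors' POBRELM code) computes plain, approximate Legendre moments of a single square gray-level block and reconstructs it; the exact-computation scheme and the partial-overlapping step of the paper are not reproduced, and block size and moment order are illustrative.

```python
import numpy as np
from numpy.polynomial.legendre import legvander

def legendre_moments(block, order):
    """Approximate Legendre moments up to `order` for a square gray block."""
    n = block.shape[0]
    x = np.linspace(-1.0, 1.0, n)            # map the pixel grid onto [-1, 1]
    P = legvander(x, order)                   # P[i, p] = P_p(x_i)
    dx = 2.0 / n                              # pixel width on [-1, 1]
    # lambda_pq ~ c_p c_q * sum_ij P_p(x_i) P_q(x_j) f(i, j) dx^2
    raw = P.T @ block @ P * dx * dx
    p = np.arange(order + 1)
    norm = np.outer(2 * p + 1, 2 * p + 1) / 4.0
    return norm * raw

def reconstruct(moments, size):
    """Rebuild a size x size block from its Legendre moments."""
    order = moments.shape[0] - 1
    x = np.linspace(-1.0, 1.0, size)
    P = legvander(x, order)
    return P @ moments @ P.T

if __name__ == "__main__":
    xx, yy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
    block = np.sin(4 * xx) * np.cos(3 * yy)   # smooth synthetic test block
    lam = legendre_moments(block, order=20)
    approx = reconstruct(lam, size=64)
    print("reconstruction MSE:", float(np.mean((block - approx) ** 2)))
```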


2015 Intelligent Systems and Computer Vision (ISCV) | 2015

A gender classification approach based on 3D depth-radial curves and fuzzy similarity based classification

Soufiane Ezghari; Naouar Belghini; Azeddine Zahi; Arsalane Zarghili

In this paper, we propose a gender recognition solution that copes with occlusion and with very restricted samples in the learning base. The developed approach is based on the extraction of pertinent 3D depth-radial curves covering the nose region, combined with dimensionality reduction using the sparse random projection method; furthermore, we propose an extension of the similarity-based classification approach to handle the recognition task. Experimental results confirm the effectiveness of our approach and show that the proposed method remains effective in the presence of variations such as facial expressions and rotation.
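For readers unfamiliar with depth-radial curves, the following is a minimal sketch, assuming a simple 2D depth image and a known reference point (e.g. the nose tip); the curve selection, sparse random projection and fuzzy classifier of the paper are not shown, and all parameters are illustrative.

```python
import numpy as np

def radial_curves(depth, center, n_curves=20, n_samples=30, radius=40):
    """Sample depth values along n_curves rays emanating from `center`."""
    h, w = depth.shape
    cy, cx = center
    curves = np.zeros((n_curves, n_samples))
    angles = np.linspace(0.0, 2.0 * np.pi, n_curves, endpoint=False)
    steps = np.linspace(0.0, radius, n_samples)
    for i, a in enumerate(angles):
        ys = np.clip((cy + steps * np.sin(a)).astype(int), 0, h - 1)
        xs = np.clip((cx + steps * np.cos(a)).astype(int), 0, w - 1)
        curves[i] = depth[ys, xs]             # one radial depth profile per row
    return curves

if __name__ == "__main__":
    depth_map = np.random.default_rng(0).random((120, 120))   # stand-in depth image
    feats = radial_curves(depth_map, center=(60, 60)).ravel()
    print("feature vector length:", feats.size)
```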


2011 Colloquium in Information Science and Technology | 2011

Color facial authentication system based on neural network

Naouar Belghini; Arsalane Zarghili; Jamal Kharroubi; Aicha Majda

Face recognition can be defined as the ability of a system to classify or describe a human face. The motivation for such systems is to enable computers to do things the way humans do and to apply computers to problems that involve analysis and classification. Face recognition systems require less user cooperation than systems based on other biometrics (e.g. fingerprints and iris); face recognition is one of the most widely investigated biometric techniques for human identification and can be used in applications such as access control, passport control, surveillance, criminal justice and human-computer interaction. Face recognition is a specific case of object recognition, but the face is not a unique and rigid object: global features are sensitive to variations caused by emotional expressions, illumination, pose and occlusions. Neural networks have been widely used for applications related to face recognition, and the Backpropagation Neural Network (BPNN) is one of the most widely used methods in this domain. In this paper we present three neural-network-based solutions for color face recognition.

First, we introduce learning-based dimension reduction algorithms. In the literature, many methods are used to reduce the dimensionality of the subspace in which faces are represented. Recently, Random Projection (RP) has emerged as a powerful method for dimensionality reduction. It is a computationally simple and efficient method that preserves the structure of the data without introducing significant distortion. Our focus was to investigate the dimensionality reduction offered by RP and to build an artificial intelligence system for face recognition. According to the experimental results, we conclude that random projection is an effective method of dimensionality reduction. In our study, obtaining a higher face recognition rate depends, among other factors, on the choice of the random projection matrix and the dimension of the feature vector of the original data.

Second, we propose a hybrid method for face recognition using a semi-supervised BPNN. Traditionally, a BPNN needs supervised training to learn how to predict results from labelled data; the idea of our approach is to obtain the desired output of the network from an external classifier (SOM) and then apply the backpropagation algorithm to recognize facial data. Experiments show that the results are satisfying in comparison with the supervised BPNN. Furthermore, we can deduce that the unlabelled vectors in the training database generally do not harm the recognition task and, owing to its generalization ability, the neural network can even correct some misclassified vectors.

The third study concerns the use of the Bhattacharyya distance to compute the total error of the network. The error function generally used to train a neural network is the Mean Square Error (MSE), based on the Euclidean distance measure. In the experimental section we compare how the algorithm converges using the Mean Square Error and the Bhattacharyya distance, and the results indicate that face images can be recognized by the proposed system effectively and swiftly.
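A minimal sketch of the random projection step discussed above, assuming flattened face vectors and a Gaussian projection matrix; the BPNN, SOM labelling and Bhattacharyya-based error of the paper are not reproduced, and dimensions are illustrative.

```python
import numpy as np

def random_projection(X, target_dim, seed=0):
    """Project row vectors in X (n_samples x n_features) down to target_dim."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Entries drawn i.i.d. from N(0, 1/target_dim) approximately preserve
    # pairwise distances (Johnson-Lindenstrauss lemma).
    R = rng.normal(0.0, 1.0 / np.sqrt(target_dim), size=(n_features, target_dim))
    return X @ R

if __name__ == "__main__":
    faces = np.random.default_rng(1).random((100, 64 * 64))   # flattened stand-in images
    reduced = random_projection(faces, target_dim=200)
    print(faces.shape, "->", reduced.shape)
```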


International Conference on Wireless Technologies, Embedded and Intelligent Systems | 2017

Comparison between Euclidean and Manhattan distance measure for facial expressions classification

Latifa Greche; Maha Jazouli; Najia Es-Sbai; Aicha Majda; Arsalane Zarghili

In this paper, we compare classification results for six facial expressions (joy, surprise, sadness, anger, disgust, and fear) obtained with two different ways of computing distances between 121 landmark points on the face. Facial features were computed using the L1 norm (Manhattan distance) in the first case and the L2 norm (Euclidean distance) in the second case. Training and test data were collected using a Kinect sensor. The labelled dataset contains sequences of 121 landmark points extracted from the face of each subject while displaying the six facial expressions. Classification was performed using a multi-layer feed-forward neural network with one hidden layer. Good recognition rates were achieved in the early stages of training when Euclidean facial distances were used.
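The following sketch illustrates the two distance-based feature variants being compared, assuming 121 (x, y, z) landmarks per face; the Kinect capture and the feed-forward network are not shown, and the landmark values are placeholders.

```python
import numpy as np

def pairwise_distances(landmarks, ord):
    """All pairwise distances between landmark points, as a flat feature vector."""
    diffs = landmarks[:, None, :] - landmarks[None, :, :]    # (121, 121, 3)
    dists = np.linalg.norm(diffs, ord=ord, axis=-1)
    iu = np.triu_indices(len(landmarks), k=1)                 # keep each pair once
    return dists[iu]

if __name__ == "__main__":
    pts = np.random.default_rng(0).random((121, 3))           # stand-in for Kinect landmarks
    manhattan_features = pairwise_distances(pts, ord=1)       # L1 norm
    euclidean_features = pairwise_distances(pts, ord=2)       # L2 norm
    print(manhattan_features.shape, euclidean_features.shape)
```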


Signal, Image and Video Processing | 2017

Exact Zernike and pseudo-Zernike moments image reconstruction based on circular overlapping blocks and Chamfer distance

Zaineb Bahaoui; Hakim El Fadili; Khalid Zenkouar; Arsalane Zarghili

This study explores a novel approach to reconstructing multi-gray-level images based on a circular block reconstruction method using two exact and fast moment families: Zernike (CBR-EZM) and pseudo-Zernike (CBR-EPZM). An image is first divided into a set of sub-images, which are then reconstructed independently. We also introduce the Chamfer distance (CD) to benefit from a discrete distance instead of the Euclidean one. Combining our methods with CD leads to the CBR-EZM-CD and CBR-EPZM-CD methods. Image partitioning offers significant advantages, but an undesirable circular blocking effect can occur. To mitigate this effect, we add an overlapping feature to our new methods, leading to OCBR-EZM-CD and OCBR-EPZM-CD, by exploiting neighborhood information of the circular blocks. The main motivation of this novel approach is to explore new applications of Zernike and pseudo-Zernike moments. One such field is feature extraction for pattern recognition: Zernike and pseudo-Zernike moments are well known to capture only global features, but thanks to the circular block reconstruction, they can now also be used to extract local features.
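Since the paper leans on the Chamfer distance as a discrete replacement for the Euclidean one, here is a minimal sketch of the classical two-pass 3-4 chamfer distance transform; the circular block partitioning and the Zernike/pseudo-Zernike moment computation themselves are not reproduced, and the implementation is illustrative only.

```python
import numpy as np

def chamfer_34(mask):
    """Approximate distance (in 3-4 chamfer units) to the nearest True pixel."""
    h, w = mask.shape
    inf = 10 ** 6
    d = np.where(mask, 0, inf).astype(np.int64)
    # Forward pass: propagate from upper-left neighbours.
    for y in range(h):
        for x in range(w):
            if y > 0:
                d[y, x] = min(d[y, x], d[y - 1, x] + 3)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y - 1, x - 1] + 4)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y - 1, x + 1] + 4)
            if x > 0:
                d[y, x] = min(d[y, x], d[y, x - 1] + 3)
    # Backward pass: propagate from lower-right neighbours.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y, x] = min(d[y, x], d[y + 1, x] + 3)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y + 1, x + 1] + 4)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y + 1, x - 1] + 4)
            if x < w - 1:
                d[y, x] = min(d[y, x], d[y, x + 1] + 3)
    return d

if __name__ == "__main__":
    m = np.zeros((9, 9), dtype=bool)
    m[4, 4] = True                      # a single feature pixel in the centre
    print(chamfer_34(m))
```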


IEEE International Colloquium on Information Science and Technology | 2014

Global overlapping block based reconstruction using exact Legendre moments

Zaineb Bahaoui; Khalid Zenkouar; Arsalane Zarghili; Hakim El Fadili; Hassan Qjidaa

In this paper, we propose a novel method for reconstructing multi-gray-level images using the exact computation of Legendre moments. The purpose of this method is to ensure high accuracy and low computation time by dividing the image into a set of blocks instead of processing the whole image. To mitigate the blocking artifact introduced by the reconstruction process, we propose a new approach based on block Reconstruction using a Global Overlapping Block and Exact Legendre Moments computation (GOBRELM). A comparison of the proposed method with conventional ones shows very promising results in terms of image quality and computation time. The main motivation of the proposed approach is to provide a fast and efficient reconstruction algorithm that improves both the reconstructed image quality and the computation time. This makes our method attractive for real-time applications.
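A minimal sketch of the block partitioning with an overlap margin that underlies this family of methods, under assumed block and overlap sizes; the exact Legendre moment reconstruction of each block is not repeated here (see the earlier sketch).

```python
import numpy as np

def overlapping_blocks(image, block=32, overlap=4):
    """Yield (row, col, patch) for each block extended by `overlap` pixels."""
    h, w = image.shape
    for r in range(0, h, block):
        for c in range(0, w, block):
            r0, r1 = max(r - overlap, 0), min(r + block + overlap, h)
            c0, c1 = max(c - overlap, 0), min(c + block + overlap, w)
            yield r, c, image[r0:r1, c0:c1]

if __name__ == "__main__":
    img = np.zeros((128, 128))
    n = sum(1 for _ in overlapping_blocks(img))
    print("number of blocks:", n)      # 16 nominal 32x32 blocks, each with margins
```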


Computer Science On-line Conference | 2017

An Improved Speaker Identification System Using Automatic Split-Merge Incremental Learning (A-SMILE) of Gaussian Mixture Models

Ayoub Bouziane; Jamal Kharroubi; Arsalane Zarghili

In this paper, a new model-based clustering algorithm is introduced for optimal speaker modeling in speaker identification systems. The introduced algorithm can estimate the optimal number of mixture components using a cross-validation methodology, and it overcomes the initialization sensitivity and local-maxima problems of the classical EM algorithm using a split-and-merge incremental learning approach. Experiments on a speaker identification task demonstrate the efficiency and effectiveness of the proposed algorithm compared to the commonly used Expectation-Maximization (EM) algorithm.
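As a hedged illustration of one ingredient, the sketch below selects the number of GMM components by held-out log-likelihood using scikit-learn; the split-and-merge incremental learning (A-SMILE) itself is not reproduced, and the feature dimensions are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

def select_n_components(features, candidates=(2, 4, 8, 16, 32), seed=0):
    """Return the component count with the best validation log-likelihood."""
    train, valid = train_test_split(features, test_size=0.3, random_state=seed)
    best_k, best_ll = None, -np.inf
    for k in candidates:
        gmm = GaussianMixture(n_components=k, covariance_type="diag",
                              random_state=seed).fit(train)
        ll = gmm.score(valid)            # mean log-likelihood per held-out frame
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k

if __name__ == "__main__":
    mfcc_like = np.random.default_rng(0).normal(size=(2000, 13))  # stand-in features
    print("selected components:", select_n_components(mfcc_like))
```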


International Journal of Biometrics | 2017

Fuzzy similarity-based classification method for gender recognition using 3D facial images

Soufiane Ezghari; Naouar Belghini; Azeddine Zahi; Arsalane Zarghili

In this paper, we propose a new fuzzy similarity-based classification (FSBC) method for the task of gender recognition. The proposed method characterises each individual by extracting geometrical features from a 3D facial image using pertinent radial curves. Our approach represents the extracted features with fuzzy sets to handle imprecision in their values. The FSBC method recognises the gender of a new person by evaluating their similarity to the male and female samples pre-selected as a gender representatives set; the obtained similarities are then aggregated to compute a score of belonging to each gender, and the person is assigned the gender with the higher score. The proposed method has two main advantages: first, we use the OWA operator and a RIM quantifier to define the percentage of significant features used in the similarity assessment; second, the aggregation is performed with compensatory operators to ensure that the selected gender has high similarities. Experiments were conducted on the FRAV3D database, considering only one frontal pose in the gender representatives set. The gender recognition rate obtained by the proposed method was very promising compared to other classification methods.
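A minimal sketch of OWA aggregation with a RIM quantifier, the decision step described above; the radial-curve features, fuzzification and compensatory operators are not reproduced, and the similarity values and the quantifier Q(r) = r**alpha are illustrative assumptions.

```python
import numpy as np

def owa(similarities, alpha=2.0):
    """Ordered weighted average of a vector of similarities in [0, 1]."""
    s = np.sort(similarities)[::-1]                 # order from largest to smallest
    n = len(s)
    r = np.arange(n + 1) / n
    q = r ** alpha                                  # RIM quantifier Q(r) = r^alpha
    weights = np.diff(q)                            # w_i = Q(i/n) - Q((i-1)/n)
    return float(np.dot(weights, s))

def classify(sims_to_male, sims_to_female, alpha=2.0):
    """Pick the gender whose representatives are (OWA-)most similar."""
    scores = {"male": owa(sims_to_male, alpha), "female": owa(sims_to_female, alpha)}
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    print(classify(np.array([0.7, 0.6, 0.9]), np.array([0.5, 0.4, 0.8])))
```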


2017 Intelligent Systems and Computer Vision (ISCV) | 2017

A

Maha Jazouli; Aicha Majda; Arsalane Zarghili

Autism is a developmental disorder involving qualitative impairments in social interaction. One source of those impairments is difficulty with facial expressions of emotion: autistic people often have difficulty recognizing or understanding other people's emotions and feelings, or expressing their own. This work proposes a method to automatically recognize seven basic emotions among autistic children in real time: happiness, anger, sadness, surprise, fear, disgust, and neutral. The method uses the Microsoft Kinect sensor to track and identify points of interest on the 3D face model and is based on the $P point-cloud recognizer.


2016 International Conference on Engineering & MIS (ICEMIS) | 2016

$P recognizer for automatic facial emotion recognition using Kinect sensor

Safae Elhoufi; Maha Jazouli; Aicha Majda; Arsalane Zarghili; Rachid Aalouane

The proposed system uses the $P point-cloud recognizer to identify multi-stroke emotions as point-clouds. The experimental results show that our system achieves a recognition rate above 94.28%. Our study provides a novel clinical tool to help children with autism and to assist doctors in operating rooms.
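For context, here is a minimal sketch of a greedy point-cloud matching step in the spirit of the $P recognizer; the resampling and normalisation stages of the actual recognizer, and the Kinect face-point extraction, are not reproduced, and the templates below are placeholders.

```python
import numpy as np

def greedy_cloud_distance(points, template):
    """Greedily match each point to the closest still-unmatched template point."""
    unmatched = list(range(len(template)))
    total = 0.0
    for p in points:
        d = np.linalg.norm(template[unmatched] - p, axis=1)
        j = int(np.argmin(d))
        total += d[j]
        unmatched.pop(j)
    return total

def recognize(points, templates):
    """Return the label of the template cloud with the smallest distance."""
    return min(templates, key=lambda label: greedy_cloud_distance(points, templates[label]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    templates = {"joy": rng.random((32, 2)), "anger": rng.random((32, 2))}
    sample = templates["joy"] + rng.normal(0, 0.01, size=(32, 2))
    print("recognised as:", recognize(sample, templates))
```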

Collaboration


Dive into Arsalane Zarghili's collaborations.

Top Co-Authors

Jamal Kharroubi, Sidi Mohamed Ben Abdellah University
Naouar Belghini, Sidi Mohamed Ben Abdellah University
Aicha Majda, Sidi Mohamed Ben Abdellah University
Anissa Bouzalmat, Sidi Mohamed Ben Abdellah University
Mustapha Khalfi, Sidi Mohamed Ben Abdellah University
Ouafae Nahli, National Research Council
Vito Pirrelli, National Research Council