Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Farid Boussaid is active.

Publication


Featured research published by Farid Boussaid.


IEEE Transactions on Circuits and Systems | 2011

A CMOS Single-Chip Gas Recognition Circuit for Metal Oxide Gas Sensor Arrays

Kwan Ting Ng; Farid Boussaid; Amine Bermak

This paper presents a CMOS single-chip gas recognition circuit, which encodes sensor array outputs into a unique sequence of spikes with the firing delay mapping the strength of the stimulation across the array. The proposed gas recognition circuit examines the generated spike pattern of relative excitations across the population of sensors and looks for a match within a library of 2-D spatio-temporal spike signatures. Each signature is drift insensitive, concentration invariant and is also a unique characteristic of the target gas. This VLSI friendly approach relies on a simple spatio-temporal code matching instead of existing computationally expensive pattern matching statistical techniques. In addition, it relies on a novel sensor calibration technique that does not require control or prior knowledge of the gas concentration. The proposed gas recognition circuit was implemented in a 0.35 μm CMOS process and characterized using an in-house fabricated 4 × 4 tin oxide gas sensor array. Experimental results show a correct detection rate of 94.9% when the gas sensor array is exposed to propane, ethanol and carbon monoxide.
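The recognition step described above amounts to rank-order matching: sensors that are stimulated more strongly fire earlier, and the observed firing order is compared against a library of stored signatures. A minimal sketch of that idea, assuming hypothetical sensor values, signatures, and function names (not the paper's circuit-level implementation):

```python
import numpy as np

def to_spike_order(responses):
    """Encode sensor responses as a firing-order code:
    stronger stimulation -> earlier spike (smaller delay)."""
    return np.argsort(-np.asarray(responses))

def match_signature(responses, library):
    """Return the library gas whose stored firing order best matches
    the observed spike order (most positions in agreement)."""
    observed = to_spike_order(responses)
    best_gas, best_score = None, -1
    for gas, signature in library.items():
        score = np.sum(observed == signature)  # count of matching ranks
        if score > best_score:
            best_gas, best_score = gas, score
    return best_gas

# Hypothetical signatures for a 4-sensor array (the paper uses 4 x 4)
library = {
    "propane": np.array([2, 0, 3, 1]),
    "ethanol": np.array([0, 1, 2, 3]),
}
print(match_signature([0.9, 0.7, 0.2, 0.8], library))  # ethanol
```

Because only the relative order of firing is used, a uniform scaling of all sensor responses (e.g. from a concentration change) leaves the code unchanged, which is the intuition behind the concentration invariance claimed above.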


IEEE Signal Processing Letters | 2014

3-D Face Recognition Using Curvelet Local Features

S. Elaiwat; Mohammed Bennamoun; Farid Boussaid; Amar A. El-Sallam

In this letter, we present a robust single modality feature-based algorithm for 3-D face recognition. The proposed algorithm exploits Curvelet transform not only to detect salient points on the face but also to build multi-scale local surface descriptors that can capture highly distinctive rotation/displacement invariant local features around the detected keypoints. This approach is shown to provide robust and accurate recognition under varying illumination conditions and facial expressions. Using the well-known and challenging FRGC v2 dataset, we report a superior performance compared to other algorithms, with a 97.83% verification rate for probes with all facial expressions.


Computer Vision and Pattern Recognition | 2017

A New Representation of Skeleton Sequences for 3D Action Recognition

Qiuhong Ke; Mohammed Bennamoun; Senjian An; Ferdous Ahmed Sohel; Farid Boussaid

This paper presents a new method for 3D action recognition with skeleton sequences (i.e., 3D trajectories of human skeleton joints). The proposed method first transforms each skeleton sequence into three clips each consisting of several frames for spatial temporal feature learning using deep neural networks. Each clip is generated from one channel of the cylindrical coordinates of the skeleton sequence. Each frame of the generated clips represents the temporal information of the entire skeleton sequence, and incorporates one particular spatial relationship between the joints. The entire clips include multiple frames with different spatial relationships, which provide useful spatial structural information of the human skeleton. We propose to use deep convolutional neural networks to learn long-term temporal information of the skeleton sequence from the frames of the generated clips, and then use a Multi-Task Learning Network (MTLN) to jointly process all frames of the clips in parallel to incorporate spatial structural information for action recognition. Experimental results clearly show the effectiveness of the proposed new representation and feature learning method for 3D action recognition.
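The clip-generation step can be pictured as a cylindrical-coordinate transform of the joint trajectories, with one clip per coordinate channel. A simplified sketch under that assumption (array sizes and names are hypothetical; the paper additionally rearranges each channel into four frames using different reference joints):

```python
import numpy as np

def skeleton_to_clips(seq):
    """Transform a skeleton sequence (T frames x J joints x 3 coords)
    into three channel arrays taken from its cylindrical coordinates.
    Each T x J array plays the role of one clip's image content."""
    seq = np.asarray(seq, dtype=float)
    x, y, z = seq[..., 0], seq[..., 1], seq[..., 2]
    rho = np.hypot(x, y)      # radial distance in the x-y plane
    theta = np.arctan2(y, x)  # azimuth angle
    return rho, theta, z      # one array per cylindrical channel

T, J = 8, 20  # hypothetical sequence length and joint count
clips = skeleton_to_clips(np.random.rand(T, J, 3))
print([c.shape for c in clips])  # three (8, 20) arrays
```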


IEEE Transactions on Power Electronics | 2015

A Highly Efficient P-SSHI Rectifier for Piezoelectric Energy Harvesting

Shaohua Lu; Farid Boussaid

A highly efficient P-SSHI-based rectifier for piezoelectric energy harvesting is presented in this paper. The proposed rectifier utilizes the voltages at the two ends of the piezoelectric device (PD) to detect the polarity change of the current produced by the PD. The inversion process of the voltage across the PD is automatically controlled by diodes along the oscillating network. In contrast to prior works, the proposed rectifier exhibits several advantages in terms of efficiency, circuit simplicity, compatibility with commercially available PDs, and standalone operation. Experimental results show that the proposed rectifier can provide a 5.8× boost in harvested energy compared to the conventional full-wave bridge rectifier.


Pattern Recognition | 2015

A Curvelet-based approach for textured 3D face recognition

S. Elaiwat; Mohammed Bennamoun; Farid Boussaid; Amar A. El-Sallam

In this paper, we present a fully automated multimodal Curvelet-based approach for textured 3D face recognition. The proposed approach relies on a novel multimodal keypoint detector capable of repeatably identifying keypoints on textured 3D face surfaces. Unique local surface descriptors are then constructed around each detected keypoint by integrating Curvelet elements of different orientations, resulting in highly descriptive rotation invariant features. Unlike previously reported Curvelet-based face recognition algorithms, which extract global features from textured faces only, our algorithm extracts both texture and 3D local features. In addition, this is done across a number of frequency bands to achieve robust and accurate recognition under varying illumination conditions and facial expressions. The proposed algorithm was evaluated using three well-known and challenging datasets, namely FRGC v2, BU-3DFE and Bosphorus. Reported results show superior performance compared to prior art, with 99.2%, 95.1% and 91% verification rates at 0.001 FAR for the FRGC v2, BU-3DFE and Bosphorus datasets, respectively.

Highlights:
- Identifying distinctive keypoints on textured 3D face surfaces rich with features.
- These keypoints are identified in the Curvelet domain across mid-frequency bands.
- The repeatability of these keypoints is high in both neutral and non-neutral faces.
- Building local surface descriptors around the keypoints in the Curvelet domain.
- Reported results show superior performance on three datasets, namely FRGC, BU-3DFE and Bosphorus, compared to prior art.


Neurocomputing | 2016

Iterative deep learning for image set based face and object recognition

Syed Afaq Ali Shah; Mohammed Bennamoun; Farid Boussaid

We present a novel technique for image set based face/object recognition, where each gallery and query example contains a face/object image set captured from different viewpoints, backgrounds, facial expressions, resolutions and illumination levels. While several image set classification approaches have been proposed in recent years, most of them represent each image set as a single linear subspace, a mixture of linear subspaces or a Lie group on a Riemannian manifold. These techniques make prior assumptions about the specific category of geometric surface on which images of the set are believed to lie, which can result in a loss of discriminative information for classification. This paper alleviates these limitations by proposing an Iterative Deep Learning Model (IDLM) that automatically and hierarchically learns discriminative representations from raw face and object images. In the proposed approach, low-level translationally invariant features are learnt by a Pooled Convolutional Layer (PCL). The latter is followed by Artificial Neural Networks (ANNs) applied iteratively in a hierarchical fashion to learn a discriminative non-linear feature representation of the input image sets. The proposed technique was extensively evaluated for the task of image set based face and object recognition on the YouTube Celebrities, Honda/UCSD, CMU MoBo and ETH-80 (object) datasets. Experimental results and comparisons with state-of-the-art methods show that our technique achieves the best performance on all these datasets.
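A Pooled Convolutional Layer of the kind described above can be pictured as a 2-D convolution followed by non-overlapping max pooling, which is what gives the low-level features their tolerance to small translations. A minimal NumPy sketch under that assumption (the kernel, sizes, and names are illustrative, not the paper's architecture):

```python
import numpy as np

def pooled_conv_layer(img, kernel, pool=2):
    """Valid 2-D convolution followed by non-overlapping max pooling:
    a minimal stand-in for a pooled convolutional feature extractor."""
    kh, kw = kernel.shape
    H, W = img.shape
    # valid convolution (no padding)
    conv = np.array([[np.sum(img[i:i + kh, j:j + kw] * kernel)
                      for j in range(W - kw + 1)]
                     for i in range(H - kh + 1)])
    # non-overlapping max pooling over pool x pool windows
    ph, pw = conv.shape[0] // pool, conv.shape[1] // pool
    pooled = (conv[:ph * pool, :pw * pool]
              .reshape(ph, pool, pw, pool)
              .max(axis=(1, 3)))
    return pooled

out = pooled_conv_layer(np.random.rand(8, 8), np.ones((3, 3)) / 9.0)
print(out.shape)  # (3, 3): 8x8 input -> 6x6 conv map -> 3x3 pooled map
```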


Pattern Recognition | 2016

A spatio-temporal RBM-based model for facial expression recognition

S. Elaiwat; Mohammed Bennamoun; Farid Boussaid

The ability to recognize facial expressions will be an important characteristic of next-generation human-computer interfaces. Towards this goal, we propose a novel RBM-based model to effectively learn the relationships (or transformations) between image pairs associated with different facial expressions. The proposed model has the ability to disentangle these transformations (e.g. pose variations and facial expressions) by encoding them into two different hidden sets, namely facial-expression morphlets and non-facial-expression morphlets. The first hidden set is used to encode facial-expression morphlets through a factored four-way sub-model conditioned on label units. The second hidden set is used to encode non-facial-expression morphlets through a factored three-way sub-model. With such a strategy, the proposed model can learn transformations between image pairs while disentangling facial-expression transformations from non-facial-expression transformations. This is achieved using an algorithm dubbed Quadripartite Contrastive Divergence. Reported experiments demonstrate the superior performance of the proposed model compared to the state-of-the-art.

Highlights:
- Introducing a novel RBM-based model to capture transformations between image pairs.
- Disentangling FER transformations from other transformations using two hidden sets.
- Introducing a Quadripartite Contrastive Divergence algorithm to learn our model.


International Conference on Image Processing | 2013

3D-Div: A novel local surface descriptor for feature matching and pairwise range image registration

Syed Afaq Ali Shah; Mohammed Bennamoun; Farid Boussaid; Amar A. El-Sallam

This paper presents a novel local surface descriptor, called 3D-Div. The proposed descriptor is based on the concept of 3D vector fields divergence, extensively used in electromagnetic theory. To generate a 3D-Div descriptor of a 3D surface, a keypoint is first extracted on the 3D surface, then a local patch of a certain size is selected around that keypoint. A Local Reference Frame (LRF) is then constructed at the keypoint using all points forming the patch. A normalized 3D vector field is then computed at each point in the patch and referenced with LRF vectors. The 3D-Div descriptors are finally generated as the divergence of the reoriented 3D vector field. We tested our proposed descriptor on the low resolution Washington RGB-D (Kinect) object dataset. Performance was evaluated for the tasks of feature matching and pairwise range image registration. Experimental results showed that the proposed 3D-Div is 88% more computationally efficient and 47% more accurate than commonly used Spin Image (SI) descriptors.
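The divergence at the heart of 3D-Div can be illustrated with finite differences on a regular grid. A small sketch of that operation (the paper computes the divergence of a normalized, LRF-referenced field over a local surface patch, not a dense volume as assumed here):

```python
import numpy as np

def divergence(vx, vy, vz):
    """Divergence of a 3-D vector field sampled on a regular unit grid,
    computed with finite differences: dVx/dx + dVy/dy + dVz/dz."""
    return (np.gradient(vx, axis=0)
            + np.gradient(vy, axis=1)
            + np.gradient(vz, axis=2))

# Sanity check: the field v = (x, y, z) has divergence 3 everywhere
ax = np.arange(5, dtype=float)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
div = divergence(X, Y, Z)
print(np.allclose(div, 3.0))  # True
```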


IEEE Signal Processing Letters | 2017

SkeletonNet: Mining Deep Part Features for 3-D Action Recognition

Qiuhong Ke; Senjian An; Mohammed Bennamoun; Ferdous Ahmed Sohel; Farid Boussaid

This letter presents SkeletonNet, a deep learning framework for skeleton-based 3-D action recognition. Given a skeleton sequence, the spatial structure of the skeleton joints in each frame and the temporal information between multiple frames are two important factors for action recognition. We first extract body-part-based features from each frame of the skeleton sequence. Compared to the original coordinates of the skeleton joints, the proposed features are translation, rotation, and scale invariant. To learn robust temporal information, instead of treating the features of all frames as a time series, we transform the features into images and feed them to the proposed deep learning network, which contains two parts: one extracts general features from the input images, while the other generates a discriminative and compact representation for action recognition. The proposed method is tested on the SBU Kinect Interaction dataset, the CMU dataset, and the large-scale NTU RGB+D dataset, and achieves state-of-the-art performance.
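One simple family of features with the invariances claimed above is the set of pairwise joint distances normalized by their maximum: translation cancels in the differences, rotation preserves distances, and scaling cancels in the normalization. A sketch under that assumption (the paper's actual body-part features are richer; the names and sizes here are hypothetical):

```python
import numpy as np

def invariant_part_features(joints):
    """Translation-, rotation- and scale-invariant features from one
    frame of skeleton joints (J x 3): normalized pairwise distances."""
    joints = np.asarray(joints, dtype=float)
    diff = joints[:, None, :] - joints[None, :, :]  # all pairwise offsets
    dist = np.linalg.norm(diff, axis=-1)            # J x J distance matrix
    return dist / dist.max()                        # scale normalization

# Invariance check: rotate, scale and translate a hypothetical skeleton
rng = np.random.default_rng(0)
J = rng.random((15, 3))
R = np.linalg.qr(rng.random((3, 3)))[0]             # random orthogonal matrix
J2 = 2.5 * J @ R.T + np.array([1.0, -2.0, 0.5])     # transformed skeleton
print(np.allclose(invariant_part_features(J), invariant_part_features(J2)))
```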


Pattern Recognition | 2015

A novel 3D vorticity based approach for automatic registration of low resolution range images

Syed Afaq Ali Shah; Mohammed Bennamoun; Farid Boussaid

This paper tackles the problem of feature matching and range image registration. Our approach is based on a novel set of discriminating three-dimensional (3D) local features, named 3D-Vor (Vorticity). In contrast to conventional local feature representation techniques, which use the vector field (i.e. surface normals) only to construct their local reference frames, the proposed feature representation exploits the vorticity of the vector field computed at each point of the local surface to capture the distinctive characteristics of the underlying 3D surface. The 3D-Vor descriptors of two range images are then matched using a fully automatic feature matching algorithm which identifies correspondences between the two range images. Correspondences are verified in a local validation step of the proposed algorithm and used for the pairwise registration of the range images. Quantitative results on low-resolution Kinect 3D data (the Washington RGB-D dataset) show that our proposed automatic registration algorithm is accurate and computationally efficient. The performance of the proposed descriptor was also evaluated on the challenging low-resolution Washington RGB-D (Kinect) object dataset for the task of automatic range image registration. Reported experimental results show that the proposed local surface descriptor is robust to resolution and noise, and more accurate than state-of-the-art techniques. It achieves 90% registration accuracy compared to 50%, 69.2% and 52% for the spin image, 3D SURF and SISI/LD-SIFT descriptors, respectively.

Highlights:
- A novel local surface descriptor (3D-Vor) is proposed for surface representation.
- The proposed 3D-Vor exploits the vector field's vorticity.
- A novel pairwise registration algorithm is also proposed.
- 3D-Vor is tested on a low-resolution dataset for range image registration.
- 3D-Vor based registration achieves 90% accuracy on low-resolution data and outperforms state-of-the-art techniques.
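The vorticity that 3D-Vor builds on is the curl of the vector field. A finite-difference sketch on a regular grid (the paper evaluates it over an LRF-aligned local patch of surface normals, not a dense volume as assumed here):

```python
import numpy as np

def vorticity(vx, vy, vz):
    """Curl (vorticity) of a 3-D vector field on a regular unit grid,
    computed with finite differences; axis 0 = x, 1 = y, 2 = z."""
    dfx = np.gradient(vx)  # gradients of Vx along x, y, z
    dfy = np.gradient(vy)
    dfz = np.gradient(vz)
    cx = dfz[1] - dfy[2]   # dVz/dy - dVy/dz
    cy = dfx[2] - dfz[0]   # dVx/dz - dVz/dx
    cz = dfy[0] - dfx[1]   # dVy/dx - dVx/dy
    return cx, cy, cz

# Sanity check: the rigid-rotation field v = (-y, x, 0) has curl (0, 0, 2)
ax = np.arange(5, dtype=float)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
cx, cy, cz = vorticity(-Y, X, np.zeros_like(Z))
print(np.allclose(cx, 0), np.allclose(cy, 0), np.allclose(cz, 2))  # True True True
```

Where 3D-Div (above) summarizes how the normal field spreads out around a point, the vorticity captures how it swirls, which is why the two descriptors pick up complementary surface characteristics.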

Collaboration

Top co-authors of Farid Boussaid:

Mohammed Bennamoun (University of Western Australia)
Senjian An (University of Western Australia)
Syed Afaq Ali Shah (University of Western Australia)
Amar A. El-Sallam (University of Western Australia)
Qiuhong Ke (University of Western Australia)
S. Elaiwat (University of Western Australia)
Dominique Martinez (Centre national de la recherche scientifique)
Gary A. Kendrick (University of Western Australia)