Lahouari Ghouti
King Fahd University of Petroleum and Minerals
Publication
Featured research published by Lahouari Ghouti.
IEEE Transactions on Signal Processing | 2006
Lahouari Ghouti; Ahmed Bouridane; Mohammad K. Ibrahim; Said Boussakta
In this paper, a robust watermarking algorithm using balanced multiwavelet transform is proposed. The latter transform achieves simultaneous orthogonality and symmetry without requiring any input prefiltering. Therefore, considerable reduction in computational complexity is possible, making this transform a good candidate for real-time watermarking implementations such as audio broadcast monitoring and DVD video watermarking. The embedding scheme is image adaptive using a modified version of a well-established perceptual model. Therefore, the strength of the embedded watermark is controlled according to the local properties of the host image. This has been achieved by the proposed perceptual model, which is only dependent on the image activity and is not dependent on the multifilter sets used, unlike those developed for scalar wavelets. This adaptivity is a key factor for achieving the imperceptibility requirement often encountered in watermarking applications. In addition, the watermark embedding scheme is based on the principles of spread-spectrum communications to achieve higher watermark robustness. The optimal bounds for the embedding capacity are derived using a statistical model for balanced multiwavelet coefficients of the host image. The statistical model is based on a generalized Gaussian distribution. Limits of data hiding capacity clearly show that balanced multiwavelets provide higher watermarking rates. This increase could also be exploited as a side channel for embedding watermark synchronization recovery data. Finally, the analytical expressions are contrasted with experimental results where the robustness of the proposed watermarking system is evaluated against standard watermarking attacks.
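The spread-spectrum embedding and correlation detection described above can be sketched as follows. This is a minimal illustration on synthetic stand-in coefficients: the balanced multiwavelet transform and the paper's perceptual model are not reproduced, and the image-adaptive strength is approximated here by simple coefficient magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for balanced multiwavelet coefficients of a host image
# (random values here; a real system would run the transform).
coeffs = rng.normal(0.0, 10.0, size=4096)

# Bipolar pseudo-random watermark (the spread-spectrum carrier).
watermark = rng.choice([-1.0, 1.0], size=coeffs.size)

# Image-adaptive strength, crudely approximated by local coefficient
# magnitude ("activity"); the paper uses a full perceptual model.
alpha = 0.1
strength = alpha * np.abs(coeffs)

marked = coeffs + strength * watermark

def detect(c, w):
    # Correlation detector: large response only for the correct key.
    return float(np.dot(c, w) / c.size)

resp_true = detect(marked, watermark)
resp_false = detect(marked, rng.choice([-1.0, 1.0], size=coeffs.size))
```

The detector responds strongly to the embedded carrier and only at noise level to an uncorrelated key, which is what makes the scheme robust without access to the original image.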
adaptive hardware and systems | 2006
Khalil Zebbiche; Lahouari Ghouti; Fouad Khelifi; Ahmed Bouridane
A motivation for the use of watermarking techniques in biometric systems has been the need to provide increased security to the biometric data themselves. We introduce an application of a wavelet-based watermarking method to hide fingerprint minutiae data in fingerprint images. The application provides high security for both the hidden data (i.e., the fingerprint minutiae) that have to be transmitted and the host image (i.e., the fingerprint). The original unmarked fingerprint image is not required to extract the minutiae data. The method is essentially introduced to increase the security of fingerprint minutiae transmission and can also be used to protect the original fingerprint image.
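Blind extraction, where the unmarked image is not needed, is commonly achieved with quantization-index modulation (QIM). The sketch below illustrates that idea on plain sample values; it is not the paper's wavelet-domain method, and the step size DELTA is an arbitrary choice.

```python
import numpy as np

# QIM sketch: embed each bit by quantizing a host value onto one of
# two interleaved grids of step DELTA, then read it back blindly.
DELTA = 8.0

def embed_bit(value, bit):
    # Quantize onto the even (bit 0) or odd (bit 1) grid.
    q = np.round(value / DELTA - bit / 2.0)
    return (q + bit / 2.0) * DELTA

def extract_bit(value):
    # Blind extraction: decide which grid the value is nearer to.
    return int(abs(value / DELTA - np.round(value / DELTA)) > 0.25)

bits = [1, 0, 1, 1, 0]
host = [120.0, 87.0, 200.0, 33.0, 150.0]
marked = [embed_bit(v, b) for v, b in zip(host, bits)]
recovered = [extract_bit(v) for v in marked]
```

Because the decoder only needs the grid step, the original host values never have to be transmitted, matching the blind-extraction property stated in the abstract.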
Information & Software Technology | 2015
Issam H. Laradji; Mohammad Alshayeb; Lahouari Ghouti
Context: Several issues affect software defect data, including redundancy, correlation, feature irrelevance and missing samples. It is also hard to ensure a balanced distribution between data pertaining to defective and non-defective software; in most experimental cases, data related to the latter class dominate the dataset. Objective: The objectives of this paper are to demonstrate the positive effects of combining feature selection and ensemble learning on the performance of defect classification. Along with efficient feature selection, a new two-variant (with and without feature selection) ensemble learning algorithm is proposed to provide robustness to both data imbalance and feature redundancy. Method: We carefully combine selected ensemble learning models with efficient feature selection to address these issues and mitigate their effects on defect classification performance. Results: Forward selection showed that only a few features contribute to a high area under the receiver-operating curve (AUC). On the tested datasets, the greedy forward selection (GFS) method outperformed other feature selection techniques such as Pearson's correlation. This suggests that features are highly unstable. However, ensemble learners like random forests and the proposed algorithm, average probability ensemble (APE), are not as affected by poor features as weighted support vector machines (W-SVMs). Moreover, the APE model combined with greedy forward selection (enhanced APE) achieved AUC values of approximately 1.0 for the NASA datasets PC2, PC4, and MC1. Conclusion: This paper shows that the features of a software dataset must be carefully selected for accurate classification of defective components. Furthermore, tackling the software data issues mentioned above with the proposed combined learning model resulted in remarkable classification performance, paving the way for successful quality control.
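Greedy forward selection driven by AUC can be sketched as below. This is a toy illustration on synthetic imbalanced data (not the NASA sets), scored with a simple nearest-centroid projection rather than the paper's APE model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy imbalanced "defect" dataset: 2 informative features out of 6.
n = 400
y = (rng.random(n) < 0.3).astype(int)
X = rng.normal(size=(n, 6))
X[:, 0] += 2.0 * y          # informative feature
X[:, 1] -= 1.5 * y          # informative feature

def auc(scores, labels):
    # Rank-based AUC (Mann-Whitney statistic).
    order = np.argsort(scores)
    ranks = np.empty(n)
    ranks[order] = np.arange(1, n + 1)
    npos = labels.sum()
    return (ranks[labels == 1].sum() - npos * (npos + 1) / 2) / (
        npos * (n - npos))

def subset_score(cols):
    # Score a feature subset by a nearest-centroid projection.
    Z = X[:, cols]
    mu1, mu0 = Z[y == 1].mean(0), Z[y == 0].mean(0)
    return auc(Z @ (mu1 - mu0), y)

# Greedy forward selection: add whichever feature improves AUC most,
# stop when no candidate improves it.
selected, best = [], 0.0
for _ in range(X.shape[1]):
    cand = [(subset_score(selected + [j]), j)
            for j in range(X.shape[1]) if j not in selected]
    score, j = max(cand)
    if score <= best:
        break
    selected.append(j)
    best = score
```

On this data the two informative features are picked first and the noise features are rejected, mirroring the paper's finding that only a few features drive a high AUC.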
Procedia Computer Science | 2013
Lahouari Ghouti; Tarek R. Sheltami; Khaled Al-Utaibi
Recent advances in wireless technology and computing have paved the way to unprecedented rapid growth in demand for and availability of mobile networking and services, coupled with diverse system/network applications. Such advances triggered the emergence of future-generation wireless networks and services to address increasingly stringent quality-of-service (QoS) requirements at various levels. The expected growth in wireless network activity and in the number of wireless users will enable similar growth in bandwidth-crunching wireless applications to meet the QoS requirements. Mobility prediction of wireless users and units plays a major role in efficient planning and management of the bandwidth resources available in wireless networks. In return, this efficiency will allow better planning and improved overall QoS in terms of continuous service availability and efficient power management. In this paper, we propose extreme learning machines (ELMs), known for universal approximation, to model and predict the mobility of arbitrary nodes in a mobile ad hoc network (MANET). MANETs use mobility prediction in location-aided routing and mobility-aware topology control protocols. In these protocols, each mobile node is assumed to know its current mobility information (position, speed and movement direction angle). In this way, future node positions are predicted along with future distances between neighboring nodes. Unlike multilayer perceptrons (MLPs), ELMs better capture the existing interaction/correlation between the Cartesian coordinates of the arbitrary nodes, leading to more realistic and accurate mobility prediction based on several standard mobility models. Simulation results using standard mobility models illustrate how the proposed prediction method can lead to a significant improvement over conventional methods based on MLPs. Moreover, the proposed solution circumvents the prediction accuracy limitations of current algorithms when predicting future distances between neighboring nodes. The latter prediction is required by some applications such as mobility-aware topology control protocols.
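An extreme learning machine is a single hidden layer with fixed random weights whose output weights are solved in closed form. The minimal sketch below trains one to predict the next 2-D position on a toy circular trajectory, a hypothetical stand-in for the paper's mobility models; note that both Cartesian coordinates are predicted jointly.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 2-D trajectory standing in for node positions (not one of the
# paper's standard mobility models).
t = np.linspace(0, 2 * np.pi, 200)[:, None]
pos = np.hstack([np.cos(t), np.sin(t)])

X = pos[:-1]   # current (x, y) position
Y = pos[1:]    # next (x, y) position to predict

# ELM: random, untrained hidden layer ...
H_DIM = 40
W = rng.normal(size=(2, H_DIM))
b = rng.normal(size=H_DIM)

def hidden(Z):
    return np.tanh(Z @ W + b)

# ... and output weights solved in one shot by least squares.
beta = np.linalg.pinv(hidden(X)) @ Y

pred = hidden(X) @ beta
rmse = float(np.sqrt(np.mean((pred - Y) ** 2)))
```

Because training reduces to a single pseudo-inverse, an ELM avoids the iterative backpropagation an MLP would need, which is part of its appeal for online mobility prediction.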
international conference on image processing | 2013
Issam H. Laradji; Lahouari Ghouti; El-Hebri Khiari
This paper presents a new perceptual image hashing approach that exploits the image color information using hypercomplex (quaternionic) representations. Unlike grayscale-based techniques, the proposed approach preserves the color interaction between the image components, which makes a significant contribution to the generated perceptual image hash codes. Having a robust image hash function optimizes a wide range of applications including content-based retrieval, image authentication, and image watermarking. Initially, the input color image is processed in a “holistic” manner using the hypercomplex representation, where the red, green and blue (RGB) components are handled as a single entity. Then, non-overlapping 8 × 8 image blocks are processed using the quaternion Fourier transform (QFT). Binary image hash codes are generated by comparing the block mean frequency energy to the global mean frequency energy. For retrieval purposes, the Hamming distance (HD) is used as the comparison metric to retrieve perceptually similar images. The performance of the proposed perceptual hashing for color images is compared to that based on the conventional complex Fourier transform (CFT). Simulation results clearly indicate the superior retrieval performance of the proposed QFT-based perceptual hashing technique in terms of HD values of intra- and inter-class image samples. Moreover, the performance improvement of the QFT-based technique is achieved at a computational complexity similar to the CFT-based scheme.
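The block-energy hashing step can be sketched as follows. For simplicity this uses the ordinary 2-D FFT on a single-channel image, i.e. roughly the CFT baseline the paper compares against; the quaternion Fourier transform over joint RGB is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# A smooth synthetic test image (stand-in for a real photograph).
img = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))

def perceptual_hash(image, block=8):
    # One bit per non-overlapping block: block mean frequency energy
    # above or below the global mean energy.
    h, w = image.shape
    energies = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            F = np.fft.fft2(image[i:i + block, j:j + block])
            energies.append(np.mean(np.abs(F)))
    energies = np.array(energies)
    return (energies > energies.mean()).astype(np.uint8)

def hamming(h1, h2):
    return int(np.sum(h1 != h2))

h_orig = perceptual_hash(img)
h_noisy = perceptual_hash(np.clip(img + rng.normal(0, 0.01, img.shape), 0, 1))
h_other = perceptual_hash(rng.random((64, 64)))
```

A perceptually similar (slightly noisy) copy should land at a much smaller Hamming distance than an unrelated image, which is the property the retrieval experiments measure.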
Archive | 2011
Husni Al-Muhtaseb; Yousef Elarian; Lahouari Ghouti
Training and testing data for optical character recognition are cumbersome to obtain. If large amounts of data can be produced from small amounts, much time and effort can be saved. This paper presents an approach to synthesize Arabic handwriting. We segment word images into labeled characters and then use these in synthesizing arbitrary words. The synthesized text should look natural; hence, we define some criteria to decide on what is acceptable as natural-looking. For evaluation, text synthesized using the natural-looking constraint is compared to text synthesized without it.
international conference on acoustics, speech, and signal processing | 2014
Lahouari Ghouti
Perceptual hashing provides compact and efficient representations for image retrieval, authentication and tamper detection applications. However, most existing perceptual hashing algorithms are designed for gray-level images and, therefore, color correlation and interaction are simply ignored. In this paper, we propose a novel perceptual hashing scheme for color images using the quaternion singular value decomposition (Q-SVD). In this algorithm, color images are processed through randomized dimensionality reduction, which results in secure and robust hashing codes. The motivation behind our work is twofold: 1) a compact representation of color images where the red, green and blue (RGB) components are handled as a single entity using hypercomplex representations and 2) the ability of the Q-SVD decomposition to provide the best low-rank approximation of quaternion matrices in the sense of the Frobenius norm. Possible geometric attacks are properly modeled as independent and identically distributed hypercomplex noise on the singular vectors. Such modeling simplifies the hash code detector design. Finally, the hashing robustness against geometric attacks is evaluated over a large set of standard test images using receiver operating characteristic analysis. The proposed scheme outperforms SVD-based hashing algorithms in terms of lower miss and false alarm probabilities by orders of magnitude.
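The best-low-rank-approximation property the paper relies on is the Eckart–Young theorem. The sketch below demonstrates it with the plain real SVD (the quaternion Q-SVD is not reproduced): the truncated SVD beats any other rank-k approximation in Frobenius norm.

```python
import numpy as np

rng = np.random.default_rng(4)

A = rng.normal(size=(32, 32))
k = 4

# Best rank-k approximation via the truncated SVD.
U, s, Vt = np.linalg.svd(A)
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]

# Any competing rank-k matrix does worse; here the best fit within
# the column space of a random 32x4 projection.
P = rng.normal(size=(32, k))
B = P @ np.linalg.lstsq(P, A, rcond=None)[0]

err_svd = np.linalg.norm(A - A_k)     # equals sqrt(sum of s[k:]**2)
err_rand = np.linalg.norm(A - B)
```

Because the truncation error is exactly the energy of the discarded singular values, the leading singular vectors form a stable, compact signature — the basis for the hash codes described above.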
2009 Symposium on Bio-inspired Learning and Intelligent Systems for Security | 2009
Lahouari Ghouti; Fares S. Al-Qunaieer
In this paper, we propose a new algorithm for color iris recognition using quaternion phase correlation. The proposed scheme leads to new hypercomplex phase-based color iris recognition and matching method that is based on the joint modeling of the iris color components using the “holistic” representation provided by the quaternion representations. The efficiency of the proposed iris recognition method is characterized by greater accuracy and flexibility in capturing the color iris information. Using a database of 1877 color iris images, the experimental results indicate that the proposed method significantly improves iris recognition rates compared with traditional color approaches, while it enjoys a similar computational complexity.
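The core matching primitive, phase correlation, can be sketched in its grayscale form (the paper extends it to color via quaternion representations, which is not reproduced here): the peak of the inverse FFT of the normalized cross-power spectrum locates the shift between two images.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic image and a circularly shifted copy of it.
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, 9), axis=(0, 1))

F1 = np.fft.fft2(img)
F2 = np.fft.fft2(shifted)

# Normalized cross-power spectrum; its inverse FFT is (ideally) a
# delta function at the displacement between the two images.
cross = F2 * np.conj(F1)
corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real

dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
```

Because only the phase is kept, the peak is sharp and largely insensitive to illumination changes, which is what makes phase-based matching attractive for iris recognition.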
adaptive hardware and systems | 2006
Walid Riad Boukabou; Lahouari Ghouti; Ahmed Bouridane
Face recognition is a challenging field of research, not only because of the complexity of the subject but also because of its numerous practical applications. Much progress has been made towards recognising faces under controlled conditions, especially under normalised pose and lighting conditions and with neutral expression. However, the recognition of face images acquired in an outdoor environment with changes in illumination and/or pose remains a largely unsolved problem, because most face recognition methods assume that the pose of the face is known. In this paper, we propose the use of a Gabor filter bank to extract an augmented Gabor-face vector that addresses the pose estimation problem, from which statistical features such as means and variances are extracted. Classification is then performed using the nearest-neighbour algorithm with the Euclidean distance. Finally, experimental results are reported to show the robustness of the extracted feature vectors for the recognition problem.
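The pipeline of Gabor filter bank, mean/variance features, and nearest-neighbour classification can be sketched as follows on synthetic oriented patterns standing in for face images; the filter-bank parameters are illustrative guesses, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(6)

def gabor_kernel(size, theta, freq, sigma):
    # Real part of a Gabor filter at orientation theta.
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * freq * xr)

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    # Mean and variance of each filter response (FFT-based circular
    # convolution), as in the statistical features described above.
    feats = []
    F_img = np.fft.fft2(img)
    for th in thetas:
        k = gabor_kernel(img.shape[0], th, 0.15, 3.0)
        resp = np.fft.ifft2(F_img * np.fft.fft2(k)).real
        feats += [resp.mean(), resp.var()]
    return np.array(feats)

# Toy two-class data: vertical vs horizontal stripe patterns.
def sample(cls):
    base = np.sin(np.linspace(0, 8 * np.pi, 32))
    img = np.tile(base, (32, 1))
    return (img if cls == 0 else img.T) + 0.1 * rng.normal(size=(32, 32))

train = [(sample(c), c) for c in (0, 1) for _ in range(5)]
probe = sample(1)

# Nearest neighbour with the Euclidean distance in feature space.
fp = gabor_features(probe)
pred = min(train, key=lambda tc: np.linalg.norm(gabor_features(tc[0]) - fp))[1]
```

The variance of each oriented response captures how much energy the image has at that orientation, so the two stripe classes separate cleanly in the 8-dimensional feature space.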
Ultrasonics | 1997
Ahmed Yamani; Maamar Bettayeb; Lahouari Ghouti
In ultrasonic nondestructive testing (NDT) of materials, pulse-echo measurements are masked by the characteristics of the measuring instruments, the propagation paths taken by the ultrasonic pulses, and noise. This measured pulse-echo signal is modeled by the convolution of the defect impulse response and the measurement system response, added to noise. The deconvolution operation, therefore, seeks to undo the effect of the convolution and extract the defect impulse response which is essential for defect identification. In this contribution, we show that the defect ultrasonic model can be formulated in the higher-order spectra (HOS) domain in which the processing is more suitable to unravel the effect of the measurement system and the additive Gaussian noise. In addition, a new technique is developed to faithfully recover the impulse response signal from its HOS. Synthesized ultrasonic signals as well as real signals obtained from artificial defects are used to show that the proposed technique is superior to conventional second-order statistics-based deconvolution techniques commonly used in NDT.
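The convolutional measurement model, y = h * x + noise, and the conventional second-order baseline the paper improves on, Wiener deconvolution, can be sketched as below; the pulse shape, defect positions and SNR are all illustrative assumptions, and the HOS technique itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

# Sparse defect impulse response x and a toy transducer pulse h.
n = 256
x = np.zeros(n)
x[[60, 150]] = [1.0, -0.7]
t = np.arange(40)
h = np.exp(-t / 6.0) * np.sin(2 * np.pi * t / 8)

# Measured signal: convolution of defect response and pulse, plus noise.
y = np.convolve(x, h)[:n] + 0.01 * rng.normal(size=n)

# Wiener deconvolution in the frequency domain (second-order baseline).
H = np.fft.fft(h, n)
snr = 100.0
wiener = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
x_hat = np.fft.ifft(np.fft.fft(y) * wiener).real

def top_two_peaks(sig, guard=5):
    # Locate the two largest well-separated peaks of |sig|.
    a = np.abs(sig).copy()
    i1 = int(np.argmax(a))
    a[max(0, i1 - guard):i1 + guard] = 0
    i2 = int(np.argmax(a))
    return sorted((i1, i2))

p1, p2 = top_two_peaks(x_hat)
```

Even this second-order approach recovers the defect locations when the noise is mild; the paper's HOS formulation is aimed at the harder cases where Gaussian noise and an unknown system response defeat such filters.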
Collaboration
Abdulaziz Mohammad Alkhoraidly
King Fahd University of Petroleum and Minerals