Ahmed Guessoum
University of Science and Technology Houari Boumediene
Publications
Featured research published by Ahmed Guessoum.
World Conference on Information Systems and Technologies | 2015
Riadh Belkebir; Ahmed Guessoum
In recent years, research in text summarization has become very active for many languages. Unfortunately, much less attention has been paid to Arabic text summarization. This paper presents a Machine Learning-based approach to Arabic text summarization which uses AdaBoost. The technique is used to predict whether a new sentence should be included in the summary or not. To evaluate the approach, we have used a corpus of Arabic articles. The approach was compared against other Machine Learning approaches, and the results show that our AdaBoost-based approach outperforms the existing ones.
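As a rough illustration of the sentence-selection step described above, the following sketch trains scikit-learn's AdaBoostClassifier on a few hypothetical sentence features (position, length, overlap with title words) to decide whether a sentence should go into the summary; the features and toy data are assumptions, not the paper's feature set.

```python
# Minimal sketch (not the authors' implementation): AdaBoost decides whether
# a sentence belongs in the summary from a few hypothetical surface features.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def sentence_features(sentence, position, title_words):
    words = sentence.split()
    overlap = len(set(words) & title_words) / (len(words) or 1)
    return [position, len(words), overlap]   # position, length, title overlap

# Toy training data: one feature row per sentence, label 1 if the sentence
# was kept in a reference summary, 0 otherwise.
X_train = np.array([[0, 12, 0.30], [5, 7, 0.00], [1, 20, 0.15], [8, 5, 0.00]])
y_train = np.array([1, 0, 1, 0])

clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

title_words = {"economy", "growth"}
sentence = "The national economy recorded strong growth this year"
x = np.array([sentence_features(sentence, 0, title_words)])
print(clf.predict(x))   # [1] -> include in the summary, [0] -> discard
```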
Engineering Applications of Artificial Intelligence | 2016
Aicha Boutorh; Ahmed Guessoum
Recently, various techniques have been applied to classify Single Nucleotide Polymorphism (SNP) data, as SNPs have been shown to be implicated in various human diseases. One of the major problems related to SNP sets is the "large p, small n" problem, which refers to the high number of features and the small number of samples and makes the classification task complex. In this paper, a new hybrid intelligent technique based on Association Rule Mining (ARM) and Neural Networks (NN), which uses Evolutionary Algorithms (EA), is proposed to deal with the dimensionality problem. On the one hand, ARM optimized by Grammatical Evolution (GE) is used to select the most informative features and to reduce the dimensionality by extracting, in parallel, associations between SNPs in two separate datasets of case and control samples. On the other hand, and to complement the previous task, an NN is used for efficient classification. A Genetic Algorithm (GA) is used to set up the parameters of the two combined techniques. The proposed GA-NN-GEARM approach has been applied to four different SNP datasets obtained from the NCBI Gene Expression Omnibus (GEO) website. The resulting model achieves high classification accuracy, reaching 100% in some cases, and outperforms several feature selection techniques when combined with different classifiers.
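A minimal sketch of one part of this pipeline, the Genetic Algorithm used to set up the classifier's parameters: here a toy GA tunes the hidden-layer size and learning rate of a scikit-learn MLPClassifier on placeholder SNP-like data. It illustrates the selection/crossover/mutation loop only, not the GA-NN-GEARM implementation or its ARM/GE feature-selection stage.

```python
# Toy Genetic Algorithm that searches for a neural network's hyper-parameters
# (hidden-layer size and learning rate); fitness is plain cross-validation
# accuracy on placeholder SNP-like data. Illustration only, not GA-NN-GEARM.
import random
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(60, 20))   # toy genotype matrix (0/1/2 per SNP)
y = rng.integers(0, 2, size=60)         # toy case/control labels

def fitness(individual):
    hidden, lr = individual
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                        max_iter=300, random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

def random_individual():
    return (random.choice([5, 10, 20, 40]), random.choice([0.001, 0.01, 0.1]))

random.seed(0)
population = [random_individual() for _ in range(6)]
for generation in range(5):
    best = sorted(population, key=fitness, reverse=True)[:2]         # selection
    children = [(best[0][0], best[1][1]), (best[1][0], best[0][1])]  # crossover
    mutants = [random_individual() for _ in range(2)]                # mutation
    population = best + children + mutants

print("best parameters (hidden units, learning rate):", max(population, key=fitness))
```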
ACS International Conference on Computer Systems and Applications (AICCSA) | 2013
Riadh Belkebir; Ahmed Guessoum
Automatic categorization of documents has become an important task, especially with the rapid growth of the number of documents available online. It consists of assigning a category to a text based on the information it contains and aims to automate the association of a document with a category. Automatic categorization can help solve several problems, such as identifying the language of a document, filtering and detecting spam (junk mail), and routing and forwarding emails to their recipients. In this paper, we present the results of Arabic text categorization based on three different approaches: artificial neural networks, support vector machines (SVMs), and a hybrid BSO-CHI-SVM approach; a minimal sketch of the SVM setup is given below. We explain each approach and present the results of the implementation and evaluation using two types of representation: root-based stemming and light stemming. The evaluation in each case was done on the Open Source Arabic Corpora (OSAC) using different performance measures.
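For the SVM branch of the comparison, the sketch below shows the usual setup: TF-IDF features over (already stemmed) Arabic tokens fed to a linear SVM. The two-document toy corpus and labels are placeholders, not the OSAC data or the paper's exact configuration.

```python
# Minimal sketch of the SVM setup: TF-IDF over (already stemmed) Arabic tokens
# fed to a linear SVM. The two-document toy corpus is a placeholder, not OSAC.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

train_docs = ["اقتصاد سوق نمو تجارة",   # toy "economy" document (stemmed tokens)
              "فريق مباراة هدف لاعب"]   # toy "sport" document (stemmed tokens)
train_labels = ["economy", "sport"]

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(train_docs, train_labels)
print(model.predict(["نمو سوق اقتصاد"]))   # -> ['economy']
```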
Computer Science and its Applications | 2015
Mohamed Seghir Hadj Ameur; Youcef Moulahoum; Ahmed Guessoum
Arabic texts are generally written without diacritics; this is the case, for instance, in newspapers and contemporary books, which makes the automatic processing of Arabic texts more difficult. When diacritical signs are present, Arabic script provides more information about the meanings of words and their pronunciation. Vocalization of Arabic texts is a complex task which may involve morphological, syntactic and semantic text processing.
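A small illustration of the problem: stripping the Arabic diacritical marks (Unicode combining characters U+064B through U+0652) from a vocalized word yields the bare form that a vocalization system must then restore, and this bare form is usually ambiguous.

```python
# Remove the Arabic diacritical marks (U+064B .. U+0652) from a vocalized word;
# the result is the bare, usually ambiguous, form that vocalization must restore.
import re

DIACRITICS = re.compile(r"[\u064B-\u0652]")

def strip_diacritics(text):
    return DIACRITICS.sub("", text)

vocalized = "كَتَبَ"                     # "kataba" (he wrote), fully vocalized
print(strip_diacritics(vocalized))       # "كتب" -- also readable as kutiba, kutub, ...
```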
Applications of Natural Language to Data Bases | 2017
Asma Aouichat; Ahmed Guessoum
In this paper, we describe the development of TALAA-AFAQ, a corpus of Arabic factoid question answers developed to be used in the training modules of an Arabic Question Answering System (AQAS). The process of building the corpus consists of five steps, in which we extract syntactic and semantic features as well as other information. In addition, we extract a set of answer patterns for each question from the web. The corpus contains 2002 question-answer pairs, 618 of which have their answer patterns. The corpus is divided into four main classes and 34 finer categories. All answer patterns and features have been validated by experts in Arabic. To the best of our knowledge, this is the first corpus of Arabic factoid question answers specifically built to support the development of Arabic QA systems.
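As an illustration of what one corpus entry might look like, the sketch below defines a simple record for a factoid question-answer pair with its class, finer category and answer patterns; the field names and the example pair are illustrative, not the actual TALAA-AFAQ schema.

```python
# Illustrative record for one factoid question-answer pair; the field names
# are assumptions, not the actual TALAA-AFAQ schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class QAPair:
    question: str
    answer: str
    coarse_class: str                     # one of the 4 main classes
    fine_category: str                    # one of the 34 finer categories
    answer_patterns: List[str] = field(default_factory=list)  # patterns mined from the web

pair = QAPair(
    question="من هو مؤلف رواية دون كيشوت؟",   # "Who is the author of Don Quixote?"
    answer="ميغيل دي سرفانتس",                # "Miguel de Cervantes"
    coarse_class="HUMAN",
    fine_category="individual",
    answer_patterns=["<ANSWER> مؤلف رواية دون كيشوت"],
)
print(pair.coarse_class, len(pair.answer_patterns))
```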
Procedia Computer Science | 2017
Mohamed Seghir Hadj Ameur; Farid Meziane; Ahmed Guessoum
Transliteration is the process of converting words from a given source language alphabet to a target language alphabet in a way that best preserves the phonetic and orthographic aspects of the transliterated words. Even though considerable effort has been made towards improving this process for many languages such as English, French and Chinese, little research work has been done for the Arabic language. In this work, an attention-based encoder-decoder system is proposed for the task of machine transliteration between Arabic and English. Our experiments show the effectiveness of the proposed approach in comparison to previous work in this area.
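The sketch below shows the general shape of such an attention-based encoder-decoder at the character level (a GRU encoder and a GRU decoder with dot-product attention over the encoder states); the dimensions, vocabulary sizes and attention variant are illustrative choices, not the architecture reported in the paper.

```python
# Compact sketch of a character-level attention-based encoder-decoder for
# transliteration (Arabic characters in, Latin characters out). The dot-product
# attention and toy sizes are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, EMB, HID = 40, 30, 32, 64   # toy vocabulary and layer sizes

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(SRC_VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
    def forward(self, src):                        # src: (batch, src_len)
        out, h = self.rnn(self.emb(src))           # out: (batch, src_len, HID)
        return out, h

class AttnDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(TGT_VOCAB, EMB)
        self.rnn = nn.GRU(EMB + HID, HID, batch_first=True)
        self.out = nn.Linear(HID, TGT_VOCAB)
    def forward(self, tgt_step, h, enc_out):       # one decoding step
        # dot-product attention over the encoder states
        scores = torch.bmm(enc_out, h[-1].unsqueeze(2))         # (batch, src_len, 1)
        weights = torch.softmax(scores, dim=1)
        context = torch.bmm(weights.transpose(1, 2), enc_out)   # (batch, 1, HID)
        x = torch.cat([self.emb(tgt_step), context], dim=2)     # (batch, 1, EMB+HID)
        out, h = self.rnn(x, h)
        return self.out(out.squeeze(1)), h                      # logits over target chars

# One forward pass on a dummy batch of 2 "words" of 5 source characters each.
src = torch.randint(0, SRC_VOCAB, (2, 5))
enc_out, h = Encoder()(src)
logits, h = AttnDecoder()(torch.zeros(2, 1, dtype=torch.long), h, enc_out)
print(logits.shape)   # torch.Size([2, 30])
```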
ACS/IEEE International Conference on Computer Systems and Applications | 2015
Riadh Belkebir; Ahmed Guessoum
A lot of work on sentence compression has been done for languages other than Arabic, but little effort has been devoted to Arabic sentence compression. One reason for this is the absence of Arabic sentence compression corpora: to build and evaluate sentence compression systems, parallel corpora consisting of source sentences and their corresponding compressions are needed. In this paper, we present TALAA-ASC, the first Arabic sentence compression corpus. We describe the methodology followed to construct the corpus and give the statistics and analyses we have performed on it.
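To make the parallel structure concrete, the following sketch loads (source sentence, compression) pairs and computes an average compression ratio; the tab-separated file layout is an assumption for illustration, not the TALAA-ASC format.

```python
# Load (source sentence, compression) pairs from an assumed tab-separated file
# and compute an average compression ratio; the file layout is illustrative,
# not the TALAA-ASC format.
import csv

def load_compression_pairs(path):
    with open(path, encoding="utf-8", newline="") as f:
        return [(row[0], row[1]) for row in csv.reader(f, delimiter="\t")]

# In-memory example pair: (source sentence, compressed sentence).
pairs = [("ذهب الولد مسرعا إلى المدرسة في الصباح الباكر", "ذهب الولد إلى المدرسة")]
ratio = sum(len(c.split()) / len(s.split()) for s, c in pairs) / len(pairs)
print(f"average compression ratio: {ratio:.2f}")   # compressed length / source length
```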
Modeling Approaches and Algorithms for Advanced Computer Applications | 2013
Lamia Berkani; Lydia Nahla Driff; Ahmed Guessoum
The present paper introduces an original approach to the validation of learning objects (LOs) within an online Community of Practice (CoP). We propose a social validation based on two components: (1) the members' assessments, which we have formalized semantically, and (2) an expertise-based learning approach that applies a machine learning technique. As a first step, we have chosen neural networks because of their efficiency in complex problem solving. An experimental study of the developed prototype has been conducted, and preliminary tests and experiments show significant results.
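A minimal sketch of the expertise-based learning idea: a small neural network maps members' assessment features to a validate/reject decision. The three features used here (mean rating, mean rater expertise, number of ratings) are assumptions for illustration, not the paper's semantic formalization.

```python
# Toy neural network mapping members' assessment features to a validate/reject
# decision for a learning object; features and data are illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier

# [mean member rating, mean rater expertise, number of ratings]
X = np.array([[4.5, 0.9, 12], [2.0, 0.4, 3], [3.8, 0.7, 8], [1.5, 0.2, 2]])
y = np.array([1, 0, 1, 0])               # 1 = learning object validated

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict([[4.0, 0.8, 10]]))     # e.g. [1] -> validated
```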
Archive | 2018
Riadh Belkebir; Ahmed Guessoum
Text summarization is one of the most challenging tasks in natural language processing and, more generally, artificial intelligence. Various approaches have been proposed in the literature. Text summarization is classified into two categories: extractive and abstractive summarization. The vast majority of work in the literature has followed the extractive approach, probably due to the complexity of the abstractive one. To the best of our knowledge, the work presented here is the first on Arabic that handles both the extractive and abstractive aspects. While the literature lacks summarization frameworks that allow the integration of various operations within the same system, this work proposes a novel approach in which we design a general framework that integrates several such operations. It also provides a mechanism that assigns the suitable operation to each portion of the source text to be summarized, in an iterative process.
International Conference on Arabic Language Processing | 2017
Mohamed Seghir Hadj Ameur; Ahlem Chérifa Khadir; Ahmed Guessoum
This paper introduces an automatic method to extend existing WordNets via machine translation. Our proposal relies on the hierarchical skeleton of the English Princeton WordNet (PWN) as a backbone for extending the taxonomy of a target WordNet. The proposal is applied to the Arabic WordNet (AWN) to enrich it by adding new synsets, and by providing vocalizations and usage examples for each inserted lemma. Around 12000 new potential synsets can be added to AWN with a precision of at least 93%. As such, the coverage of AWN in terms of synsets can be increased from 11269 to around 24000, a very promising achievement on the path of enriching the Arabic WordNet.
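A rough sketch of the general idea, using NLTK's interface to the Princeton WordNet: walk a synset's hypernym skeleton and translate its English lemmas to propose candidate Arabic synsets. The translate_to_arabic function is a placeholder for a machine-translation call, and the paper's candidate selection and validation steps are not reproduced here.

```python
# Walk PWN synsets for a word with NLTK, keep their hypernym skeleton, and use
# a placeholder translation function to propose candidate Arabic synsets.
from nltk.corpus import wordnet as wn    # requires: nltk.download('wordnet')

def translate_to_arabic(lemma):
    # placeholder for an English->Arabic machine-translation call
    return {"horse": "حصان"}.get(lemma)

candidates = []
for synset in wn.synsets("horse", pos=wn.NOUN):
    hypernyms = [h.name() for h in synset.hypernyms()]          # PWN backbone
    arabic = [t for l in synset.lemma_names() if (t := translate_to_arabic(l)) is not None]
    if arabic:
        candidates.append((synset.name(), hypernyms, arabic))

print(candidates)
```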