Hatem Haddad
Université libre de Bruxelles
Publications
Featured research published by Hatem Haddad.
conference on intelligent text processing and computational linguistics | 2016
Calkin Suero Montero; Hatem Haddad; Maxim Mozgovoy; Chedi Bechikh Ali
Understanding the causes of spikes in the emotion flow of influential social media users is a key component when analyzing the diffusion and adoption of opinions and trends. Hence, in this work we focus on detecting the likely reasons or causes of spikes within influential Twitter users’ emotion flow. To achieve this, once an emotion spike is identified, we apply linguistic and statistical analyses to the tweets surrounding the spike in order to reveal its likely explanations or causes in the form of keyphrases. Experimental evaluation on emotion flow visualization, emotion spike identification and likely cause extraction for several influential Twitter users shows that our method is effective at pinpointing interesting insights behind the causes of emotion fluctuations. Implications of our work are highlighted by relating emotion flow spikes to real-world events and by the transversal application of our technique to other types of timestamped text.
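The abstract does not include code; as a minimal sketch of the two steps it describes, the Python snippet below flags spikes in an emotion-intensity time series with a rolling z-score and ranks frequent terms in the surrounding tweets as rough candidate "cause" keyphrases. All names and thresholds (detect_spikes, explain_spike, window, threshold) are illustrative assumptions, not the authors' method.

# Illustrative sketch only: flags spikes in an emotion-intensity series with a
# rolling z-score, then ranks frequent terms in tweets around each spike.
# Function names and thresholds are assumptions, not the paper's implementation.
import numpy as np
from collections import Counter

def detect_spikes(emotion_scores, window=7, threshold=2.0):
    """Return indices where the score deviates strongly from its local mean."""
    scores = np.asarray(emotion_scores, dtype=float)
    spikes = []
    for t in range(window, len(scores)):
        local = scores[t - window:t]
        mu, sigma = local.mean(), local.std() + 1e-9
        if (scores[t] - mu) / sigma > threshold:
            spikes.append(t)
    return spikes

def explain_spike(tweets_around_spike, top_k=5):
    """Rank unigrams in the surrounding tweets as rough 'likely cause' terms."""
    words = Counter()
    for tweet in tweets_around_spike:
        words.update(w.lower() for w in tweet.split() if len(w) > 3)
    return [w for w, _ in words.most_common(top_k)]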
conference on intelligent text processing and computational linguistics | 2015
Chedi Bechikh Ali; Rui Wang; Hatem Haddad
In this paper, we present a new two-level approach to extract keyphrases from textual documents. Our approach relies on a linguistic analysis to extract candidate keyphrases and a statistical analysis to rank and filter the final keyphrases. We evaluated our approach on three publicly available corpora with documents of varying lengths, domains and languages, including English and French, and obtained improvements in precision, recall and F-measure. Our results indicate that our approach is independent of document length, domain and language.
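As a hedged illustration of the two-level idea (the paper's exact linguistic patterns and ranking statistic are not reproduced here), the sketch below extracts contiguous runs of adjectives and nouns as candidate keyphrases and ranks them by simple frequency. Function names and the POS pattern are assumptions.

# Minimal two-level sketch (assumed names, not the paper's implementation):
# level 1 (linguistic) keeps contiguous adjective/noun runs as candidates,
# level 2 (statistical) ranks them by frequency.
# Requires nltk with the 'punkt' and 'averaged_perceptron_tagger' data installed.
import nltk
from collections import Counter

def extract_candidates(text):
    """Level 1: keep contiguous runs of adjectives/nouns as candidate keyphrases."""
    tokens = nltk.pos_tag(nltk.word_tokenize(text))
    candidates, current = [], []
    for word, tag in tokens:
        if tag.startswith('JJ') or tag.startswith('NN'):
            current.append(word.lower())
        else:
            if current:
                candidates.append(' '.join(current))
            current = []
    if current:
        candidates.append(' '.join(current))
    return candidates

def rank_candidates(candidates, top_k=10):
    """Level 2: rank candidates by frequency, longer phrases first on ties."""
    counts = Counter(candidates)
    return sorted(counts, key=lambda c: (counts[c], len(c)), reverse=True)[:top_k]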
international conference on computational linguistics | 2017
Mourad Gridach; Hatem Haddad
Previous Named Entity Recognition (NER) models for Modern Standard Arabic (MSA) rely heavily on handcrafted features and gazetteers, which is time-consuming. In this paper, we introduce a novel neural network architecture based on a bidirectional Gated Recurrent Unit (GRU) combined with Conditional Random Fields (CRF). Our network uses minimal features: pretrained word representations learned from unannotated corpora and character-level embeddings of words. This architecture allows us to eliminate the need for most handcrafted feature engineering. We evaluate our system on a publicly available dataset and achieve results comparable to the previous best-performing systems.
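A hedged PyTorch sketch of this kind of architecture is shown below: character-level embeddings encoded by a small bidirectional GRU, concatenated with word embeddings (which would be initialized from pretrained vectors), fed to a bidirectional GRU encoder and projected to per-tag emission scores. The CRF decoding layer is omitted for brevity, and all layer sizes are assumptions rather than the paper's settings.

# Illustrative BiGRU tagger sketch (assumed sizes; the CRF layer is omitted and
# the emission scores returned here would normally feed a CRF decoder).
import torch
import torch.nn as nn

class BiGRUTagger(nn.Module):
    def __init__(self, vocab_size, char_vocab_size, num_tags,
                 word_dim=100, char_dim=25, hidden_dim=100):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)   # pretrained vectors loaded here
        self.char_emb = nn.Embedding(char_vocab_size, char_dim)
        self.char_gru = nn.GRU(char_dim, char_dim, bidirectional=True, batch_first=True)
        self.word_gru = nn.GRU(word_dim + 2 * char_dim, hidden_dim,
                               bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden_dim, num_tags)       # per-tag emission scores

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq_len); char_ids: (batch, seq_len, max_word_len)
        b, s, c = char_ids.shape
        chars = self.char_emb(char_ids.view(b * s, c))
        _, h = self.char_gru(chars)                           # final states of both directions
        char_feats = h.transpose(0, 1).reshape(b, s, -1)
        feats = torch.cat([self.word_emb(word_ids), char_feats], dim=-1)
        out, _ = self.word_gru(feats)
        return self.proj(out)                                 # (batch, seq_len, num_tags)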
Communications in computer and information science | 2017
Mourad Gridach; Hatem Haddad; Hala Mulki
Sentiment analysis is the Natural Language Processing (NLP) task of classifying text into classes such as positive, negative or neutral. In this paper, we focus on sentiment analysis for the Arabic language. Most previous work on Arabic sentiment analysis (ASA) uses machine learning techniques combined with hand-engineered features. More recently, Deep Neural Networks (DNNs) have been widely used for this task, especially for English. In this work, we developed a system called CNN-ASAWR in which we investigate the use of Convolutional Neural Networks (CNNs) for ASA on two datasets, ASTD and SemEval 2017, and explore the importance of various unsupervised word representations learned from unannotated corpora. Experimental results show that we outperform the previous state-of-the-art systems on these datasets without using any hand-engineered features.
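The CNN-ASAWR configuration is not given in this abstract; the sketch below shows a generic CNN-over-embeddings sentence classifier of the kind described, with an embedding layer meant to be initialized from unsupervised word representations, parallel convolution widths, max-over-time pooling and a linear layer over sentiment classes. All hyperparameters and names are assumptions.

# Generic CNN sentence-sentiment sketch in PyTorch (assumed hyperparameters;
# not the authors' CNN-ASAWR configuration).
import torch
import torch.nn as nn

class CNNSentiment(nn.Module):
    def __init__(self, vocab_size, num_classes=3, emb_dim=300,
                 filter_sizes=(3, 4, 5), num_filters=100):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)   # init from unsupervised word vectors
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, num_filters, k) for k in filter_sizes)
        self.fc = nn.Linear(num_filters * len(filter_sizes), num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)        # (batch, emb_dim, seq_len)
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))       # logits over sentiment classes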
signal processing and communications applications conference | 2014
Tajuddeen R. Gwadabe; Muhammad Lawan Aliyu; Mujahid A. Alkassim; Mohammad Shukri Salman; Hatem Haddad
In this paper, a new sparse adaptive filtering algorithm is proposed. The algorithm introduces a log-sum penalty term into the cost function of the mixed-norm leaky least-mean-square (NLLMS) algorithm, whose cost function is expressed as a sum of exponentials with a leakage factor. Thanks to the log-sum penalty, the proposed algorithm performs well in sparse system identification settings, especially when the unknown system is highly sparse. Its performance is compared to those of the reweighted zero-attracting LMS (RZA-LMS) and p-norm variable step-size LMS (PNVSSLMS) algorithms in sparse system identification settings, where it shows superior performance.
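The exact NLLMS cost (a sum of exponentials with a leakage factor) is not reproduced here; the NumPy sketch below only illustrates how a log-sum sparsity penalty enters an LMS-style update, adding the familiar reweighted zero-attraction term to a leaky LMS recursion. Parameter names and values are assumptions, not the proposed algorithm.

# Hedged sketch: leaky LMS update with a log-sum (reweighted zero-attraction)
# penalty, illustrating the general mechanism rather than the proposed
# NLLMS-based algorithm itself. All parameter values are assumptions.
import numpy as np

def logsum_leaky_lms(x, d, num_taps, mu=0.01, gamma=1e-3, rho=5e-4, eps=10.0):
    """Identify a sparse FIR system from input x and desired output d."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        xn = x[n - num_taps + 1:n + 1][::-1]   # most recent sample first
        e = d[n] - w @ xn                      # a-priori estimation error
        zero_attract = rho * np.sign(w) / (1.0 + eps * np.abs(w))
        w = (1.0 - mu * gamma) * w + mu * e * xn - zero_attract
    return w

# Example: identify a highly sparse 32-tap system driven by white noise.
rng = np.random.default_rng(0)
h = np.zeros(32); h[3], h[17] = 0.9, -0.5
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat = logsum_leaky_lms(x, d, num_taps=32)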
signal processing and communications applications conference | 2018
Hala Mulki; Hatem Haddad; Chedi Bechikh Ali; İsmail Babaoğlu
north american chapter of the association for computational linguistics | 2018
Hala Mulki; Chedi Bechikh Ali; Hatem Haddad; İsmail Babaoğlu
CLEF (Working Notes) | 2018
Chedi Bechikh Ali; Hatem Haddad
meeting of the association for computational linguistics | 2017
Hala Mulki; Hatem Haddad; Mourad Gridach; İsmail Babaoğlu
empirical methods in natural language processing | 2017
Mourad Gridach; Hatem Haddad; Hala Mulki