Abdelmajid Ben Hamadou
University of Sfax
Publications
Featured research published by Abdelmajid Ben Hamadou.
Pattern Recognition Letters | 2010
Yousri Kessentini; Thierry Paquet; Abdelmajid Ben Hamadou
In this paper, we present a multi-stream approach for off-line handwritten word recognition. The proposed approach combines low-level feature streams, namely density-based features extracted from two sliding windows of different widths, and contour-based features extracted from the upper and lower contours. The multi-stream paradigm provides an interesting framework for the integration of multiple sources of information and is compared to the standard combination strategies, namely fusion of representations and fusion of decisions. We investigate the extension of the 2-stream approach to N streams (N = 2, ..., 4), analyze the improvement in recognition performance, and discuss the computational cost of this extension. Significant experiments have been carried out on two publicly available word databases: the IFN/ENIT benchmark database (Arabic script) and the IRONOFF database (Latin script). The multi-stream framework improves the recognition performance in both cases. Using the 2-stream approach, the best recognition performance is 79.8% in the case of the Arabic script, on a 2100-word lexicon consisting of 946 Tunisian town/village names. In the case of the Latin script, the proposed approach achieves a recognition rate of 89.8% using a lexicon of 196 words.
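As a rough illustration of the multi-stream idea, the sketch below combines per-stream scores with a weighted sum and picks the best lexicon entry; the stream names, weights and scores are illustrative placeholders, not the paper's HMM formulation.

```python
# Minimal sketch of stream-score combination, assuming each stream
# (e.g. density windows, upper/lower contours) already yields a per-word
# log-likelihood; weights and scores are illustrative, not the paper's.
def combine_streams(stream_scores, weights=None):
    """Weighted sum of per-stream log-likelihoods for one lexicon word."""
    if weights is None:
        weights = [1.0 / len(stream_scores)] * len(stream_scores)
    return sum(w * s for w, s in zip(weights, stream_scores))

def recognize(word_hypotheses, weights=None):
    """Pick the lexicon entry whose combined score is highest.

    word_hypotheses: dict mapping word -> list of per-stream log-likelihoods.
    """
    return max(word_hypotheses,
               key=lambda w: combine_streams(word_hypotheses[w], weights))

if __name__ == "__main__":
    # Two streams (e.g. density, contour) scored for three candidate words.
    hyps = {"sfax": [-12.3, -10.1], "sousse": [-15.0, -9.8], "tunis": [-11.7, -13.2]}
    print(recognize(hyps, weights=[0.6, 0.4]))
```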
Knowledge and Information Systems | 2014
Mohamed Ali Hadj Taieb; Mohamed Ben Aouicha; Abdelmajid Ben Hamadou
Computing semantic similarity/relatedness between concepts and words is an important issue in many research fields. Information-theoretic approaches exploit the notion of Information Content (IC), which provides, for a concept, a better understanding of its semantics. In this paper, we present a complete survey of IC metrics with a critical study. We then propose a new intrinsic IC computing method that uses taxonomical features extracted from an ontology for a particular concept. This approach quantifies the subgraph formed by the concept's subsumers, using the depth and the descendant count as taxonomical parameters. In a second part, we integrate this IC metric into a new parameterized multi-strategy approach for measuring word semantic relatedness. This measure exploits WordNet features such as the noun "is a" taxonomy, the nominalization relation (allowing the use of the verb "is a" taxonomy) and the shared words (overlaps) in glosses. Our work has been evaluated and compared with related works using a wide set of benchmarks designed for word semantic similarity/relatedness tasks. The obtained results show that our IC method and the new relatedness measure correlate better with human judgments than related works.
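The sketch below shows the general shape of an intrinsic IC computed from the subsumer subgraph using depth and descendant counts; the exact weighting is an assumption for illustration, not the formula proposed in the paper.

```python
# Rough sketch of an intrinsic IC computed from the subgraph of a concept's
# subsumers, using depth and descendant counts as taxonomical parameters;
# the weighting below is illustrative, not the paper's formula.
import math

def intrinsic_ic(subsumers, total_nodes, max_depth):
    """subsumers: list of (depth, descendant_count) for every subsumer
    of the target concept, including the concept itself."""
    score = 0.0
    for depth, descendants in subsumers:
        # Deep, specific subsumers (few descendants) carry more information.
        specificity = 1.0 - math.log(descendants + 1) / math.log(total_nodes)
        score += specificity * (depth / max_depth)
    return score / len(subsumers)

if __name__ == "__main__":
    # Toy taxonomy path: root (depth 0, 100 descendants) down to a leaf (depth 4, 0).
    print(intrinsic_ic([(0, 100), (2, 20), (3, 5), (4, 0)],
                       total_nodes=101, max_depth=6))
```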
International Conference on Computational Linguistics | 2003
Maher Jaoua; Abdelmajid Ben Hamadou
In this paper, we propose a summarization method that creates indicative summaries from scientific papers. Unlike conventional methods that extract important sentences, our method considers the extract as the minimal unit of extraction and proceeds in two steps: generation and classification. The first step combines text sentences to produce a population of extracts. The second step evaluates each extract using global criteria in order to select the best one; the criteria are thus defined over the whole extract rather than over individual sentences. We have developed a prototype summarization system for French, called ExtraGen, that implements a genetic algorithm simulating this generation and classification mechanism.
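The following sketch gives a minimal genetic algorithm in the spirit described: individuals are extracts (sets of sentence indices) and fitness is a global criterion over the whole extract; the coverage/redundancy fitness used here is an illustrative assumption, not ExtraGen's actual criteria.

```python
# Minimal GA sketch: an individual is a set of sentence indices (an extract),
# fitness is computed over the whole extract. Fitness function is illustrative.
import random

def fitness(extract, sentences, max_words=60):
    words = [w for i in extract for w in sentences[i].lower().split()]
    coverage = len(set(words))            # distinct words covered by the extract
    redundancy = len(words) - coverage    # repeated words across sentences
    overflow = max(0, len(words) - max_words)
    return coverage - redundancy - overflow

def evolve(sentences, pop_size=30, extract_len=3, generations=50, seed=0):
    rng = random.Random(seed)
    n = len(sentences)
    pop = [rng.sample(range(n), extract_len) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda e: fitness(e, sentences), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            pool = list(dict.fromkeys(a + b))              # crossover: merge parents
            child = rng.sample(pool, min(extract_len, len(pool)))
            if rng.random() < 0.2:                         # mutation: replace one index
                child[rng.randrange(len(child))] = rng.randrange(n)
            children.append(list(dict.fromkeys(child)))    # drop accidental duplicates
        pop = survivors + children
    return max(pop, key=lambda e: fitness(e, sentences))

if __name__ == "__main__":
    doc = ["The method extracts whole extracts rather than single sentences.",
           "A genetic algorithm generates a population of candidate extracts.",
           "Each candidate is scored with criteria defined over the whole extract.",
           "The best candidate is returned as the indicative summary.",
           "ExtraGen targets scientific papers written in French."]
    best = evolve(doc)
    print([doc[i] for i in sorted(best)])
```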
Software Engineering and Formal Methods | 2003
Nadia Bouassida; Hanêne Ben-Abdallah; Faiez Gargouri; Abdelmajid Ben Hamadou
Frameworks offer reuse through the generality they must encompass. This same property, however, often makes a framework design fairly complex, hard to understand and, hence, hard to reuse. This paper briefly presents the F-UML design language and then focuses on the definition of its formal semantics, which is given by a translation of the F-UML meta-model into Object-Z. The Object-Z semantics allows a designer to prove the syntactic well-formedness of an F-UML design. In addition, it allows the verification of several design properties through a theorem prover.
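As a loose, non-Object-Z illustration of what a syntactic well-formedness rule looks like once a design has a checkable structure, the sketch below verifies two toy constraints over a hypothetical design representation; it does not reproduce the F-UML meta-model or its Object-Z translation.

```python
# Toy illustration (not Object-Z) of checkable syntactic well-formedness rules:
# class names must be unique, and every association end must refer to a
# declared class. The design dictionary structure is hypothetical.
def well_formed(design):
    names = [c["name"] for c in design["classes"]]
    if len(names) != len(set(names)):
        return False, "duplicate class names"
    declared = set(names)
    for assoc in design["associations"]:
        if assoc["source"] not in declared or assoc["target"] not in declared:
            return False, f"association {assoc} refers to an undeclared class"
    return True, "ok"

if __name__ == "__main__":
    design = {"classes": [{"name": "Broker"}, {"name": "Catalog"}],
              "associations": [{"source": "Broker", "target": "Catalog"}]}
    print(well_formed(design))
```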
International Conference on Multimedia and Expo | 2006
Bassem Bouaziz; Walid Mahdi; Mohsen Ardabilain; Abdelmajid Ben Hamadou
In this paper we present a new texture feature extraction approach. Existing methods are generally time consuming and sensitive to image complexity in terms of texture regularity, directionality and coarseness. We therefore propose a method that provides both speed and accuracy in extracting and characterizing texture features. It is based on the Hough transform combined with an analysis of the neighbourhood of segment extremities and a new computation algorithm to extract segments and detect regularity. Experimental results show that this approach is robust and can be applied not only to texture analysis but also to detecting text within video images.
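The sketch below, assuming OpenCV is available, shows the general flavour of the approach: extract line segments with the probabilistic Hough transform and summarize their directionality; the regularity statistic is illustrative and does not reproduce the paper's extremity-neighbourhood analysis.

```python
# Sketch: extract line segments with the probabilistic Hough transform and
# summarize their orientation distribution. Thresholds and the input path
# are illustrative choices, not the paper's parameters.
import math
import cv2
import numpy as np

def segment_orientations(gray):
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=15, maxLineGap=3)
    if lines is None:
        return []
    return [math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180
            for x1, y1, x2, y2 in lines[:, 0]]

def directionality(angles, bins=18):
    """Fraction of segments falling in the dominant 10-degree orientation bin."""
    if not angles:
        return 0.0
    hist, _ = np.histogram(angles, bins=bins, range=(0, 180))
    return hist.max() / hist.sum()

if __name__ == "__main__":
    img = cv2.imread("texture.png", cv2.IMREAD_GRAYSCALE)  # illustrative input path
    if img is None:
        raise SystemExit("provide a grayscale texture image as texture.png")
    angles = segment_orientations(img)
    print(f"{len(angles)} segments, directionality {directionality(angles):.2f}")
```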
Data and Knowledge Engineering | 2012
Mohamed Ali Hadj Taieb; Mohamed Ben Aouicha; Mohamed Tmar; Abdelmajid Ben Hamadou
Computing semantic relatedness is a key component of information retrieval tasks and natural language processing applications. Wikipedia provides a knowledge base for computing word relatedness with broader coverage than WordNet. In this paper we use a new intrinsic information content (IC) metric with the Wikipedia category graph (WCG) to measure the semantic relatedness between words. We have developed an efficient algorithm to extract the categories assigned to a given word from the WCG. This extraction strategy is coupled with a new intrinsic IC metric based on the subgraph composed of the hypernyms of a given concept, together with a process to quantify the information content of this subgraph. When tested on common benchmarks of similarity ratings, the proposed approach shows a good correlation value compared to other computational models.
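A small sketch of the category-extraction step: starting from the categories attached to a word, walk up a toy in-memory category graph to collect the hypernym categories that would feed the IC computation; the graph data and depth limit are illustrative.

```python
# Sketch: breadth-first walk up a toy Wikipedia-style category graph to
# collect the hypernym categories of a word's starting categories.
from collections import deque

def hypernym_categories(start_categories, parent_of, max_depth=3):
    seen = set(start_categories)
    frontier = deque((c, 0) for c in start_categories)
    while frontier:
        cat, depth = frontier.popleft()
        if depth >= max_depth:
            continue
        for parent in parent_of.get(cat, []):
            if parent not in seen:
                seen.add(parent)
                frontier.append((parent, depth + 1))
    return seen

if __name__ == "__main__":
    wcg = {"Domestic cats": ["Cats", "Pets"],
           "Cats": ["Felines"], "Felines": ["Mammals"], "Pets": ["Animals"]}
    print(sorted(hypernym_categories(["Domestic cats"], wcg)))
```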
The Journal of Object Technology | 2004
Hanêne Ben-Abdallah; Nadia Bouassida; Faiez Gargouri; Abdelmajid Ben Hamadou
Object-oriented frameworks offer reuse at a high design level, promising several benefits for the development of complex systems. However, framework design remains a difficult task due to the generality and variability frameworks must encompass. In addition, traditional object-oriented design methods only deal with the design of specific applications and do not facilitate the design of frameworks. In this paper, we present a UML-based framework design method called FBDM. The method offers a design language, called F-UML, and a semi-automatic design process, both of which are supported by a CASE environment. The design language F-UML visually distinguishes between the fixed components and the adaptable components of a framework. The design process for F-UML is based on stepwise, bottom-up unification rules that apply a set of comparison criteria to various applications in the framework domain. The design method is illustrated and evaluated through the design of a framework for electronic commerce brokers.
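A highly simplified sketch of the bottom-up intuition: components found in every example application of the domain are treated as fixed framework parts, those found in only some as adaptable parts; the real F-UML unification rules compare much richer criteria, and the class names below are illustrative.

```python
# Toy sketch: classify components as fixed (present in all example applications)
# or adaptable (present in only some). Application contents are illustrative.
def classify_components(applications):
    all_classes = [set(app) for app in applications]
    fixed = set.intersection(*all_classes)
    adaptable = set.union(*all_classes) - fixed
    return fixed, adaptable

if __name__ == "__main__":
    apps = [{"Broker", "Catalog", "Payment", "Auction"},
            {"Broker", "Catalog", "Payment", "Negotiation"},
            {"Broker", "Catalog", "Payment"}]
    fixed, adaptable = classify_components(apps)
    print("fixed:", sorted(fixed), "adaptable:", sorted(adaptable))
```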
Applied Intelligence | 2016
Mohamed Ben Aouicha; Mohamed Ali Hadj Taieb; Abdelmajid Ben Hamadou
Computing the semantic similarity/relatedness between terms is an important research area for several disciplines, including artificial intelligence, cognitive science, linguistics, psychology, biomedicine and information retrieval. These measures exploit knowledge bases to express the semantics of concepts. Some approaches, such as the information-theoretic approaches, rely on knowledge structure, while others, such as the gloss-based approaches, use knowledge content. Firstly, based on structure, we propose a new intrinsic Information Content (IC) computing method based on the quantification of the subgraph formed by the ancestors of the target concept. Taxonomic measures, including the IC-based ones, rely on topological parameters that must be extracted from taxonomies considered as Directed Acyclic Graphs (DAGs). Accordingly, we propose a set of graph algorithms able to provide the basic parameters, such as depth, ancestors, descendants and the Lowest Common Subsumer (LCS). The IC-computing method is assessed using several knowledge structures: the noun and verb WordNet "is a" taxonomies, the Wikipedia Category Graph (WCG), and the MeSH taxonomy. We also propose an aggregation schema that exploits the WordNet "is a" taxonomy and the WCG in a complementary way, through the IC-based measures, to improve coverage. Secondly, taking content into consideration, we propose a gloss-based semantic similarity measure that relies on a noun-weighting mechanism using our IC-computing method, as well as on the WordNet, Wiktionary and Wikipedia resources. Further evaluation is performed on various items, including nouns, verbs, multiword expressions and biomedical datasets, using well-recognized benchmarks. The results indicate an improvement in similarity and relatedness assessment accuracy.
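The sketch below shows the kind of DAG routines the abstract mentions (ancestors, depth, lowest common subsumer) on a toy "is a" taxonomy stored as a child-to-parents mapping; the data and function names are illustrative.

```python
# Sketch of basic DAG routines over a toy "is a" taxonomy (child -> parents).
def ancestors(node, parents):
    out, stack = set(), [node]
    while stack:
        for p in parents.get(stack.pop(), []):
            if p not in out:
                out.add(p)
                stack.append(p)
    return out

def depth(node, parents):
    """Length of the longest path from the node up to a root."""
    ps = parents.get(node, [])
    return 0 if not ps else 1 + max(depth(p, parents) for p in ps)

def lcs(a, b, parents):
    """Deepest subsumer shared by both concepts."""
    shared = (ancestors(a, parents) | {a}) & (ancestors(b, parents) | {b})
    return max(shared, key=lambda n: depth(n, parents)) if shared else None

if __name__ == "__main__":
    parents = {"cat": ["feline"], "dog": ["canine"],
               "feline": ["carnivore"], "canine": ["carnivore"],
               "carnivore": ["animal"]}
    print(lcs("cat", "dog", parents))   # -> carnivore
```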
International Conference on Multimedia and Information Technology | 2010
Emna Fendri; Hanêne Ben-Abdallah; Abdelmajid Ben Hamadou
Faced with an ever-growing video collection, users are in search of technology to effectively browse videos in a short time without missing important content. In this paper, we present a new approach to extract semantic information from a soccer video and create personalized summaries. Our approach builds upon segmentation and indexing steps that rely on both low-level (graphical) and text-based processing of the soccer video. The video summarization operates in three steps: identification of the pertinent segments to appear in the summary, identification of the participation ratio of each pertinent segment, and identification of the frames to include in the summary. We present different summarization methods along with their experimental evaluations.
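A minimal sketch of the three summarization steps as described: select the pertinent segments, assign each a participation ratio proportional to its score, and convert that ratio into a frame budget; the scores, threshold and segment data are illustrative.

```python
# Sketch of the three-step summarization: pertinence filter, participation
# ratio, frame budget per segment. Values are illustrative placeholders.
def summarize(segments, summary_frames=300, threshold=0.5):
    pertinent = [s for s in segments if s["score"] >= threshold]    # step 1
    total = sum(s["score"] for s in pertinent)
    plan = []
    for s in pertinent:
        ratio = s["score"] / total                                   # step 2
        budget = min(s["length"], round(ratio * summary_frames))     # step 3
        plan.append((s["id"], budget))
    return plan

if __name__ == "__main__":
    segs = [{"id": "goal", "score": 0.9, "length": 450},
            {"id": "corner", "score": 0.6, "length": 300},
            {"id": "midfield", "score": 0.2, "length": 600}]
    print(summarize(segs))
```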
International Conference on Document Analysis and Recognition | 2009
Yousri Kessentini; Thierry Paquet; Abdelmajid Ben Hamadou
Generally, handwritten word recognition systems use script-specific methodologies. In this paper, we present a unified approach for the multi-lingual recognition of alphabetic scripts. The proposed system operates independently of the nature of the script using the multi-stream paradigm. The experiments have been carried out on a multi-script database composed of Arabic and Latin handwritten words from the IFN/ENIT and IRONOFF public databases, and show promising recognition performance with only 1.5% script confusion and an overall word recognition rate of 84.5% using a multi-script lexicon of 1142 words.
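As an illustration of how script identification can fall out of recognition over a merged multi-script lexicon, the sketch below tags each lexicon entry with its script and reports the script of the best-scoring hypothesis; the scores stand in for the multi-stream model's outputs.

```python
# Sketch: recognition over a merged Arabic/Latin lexicon, with the script
# reported as a by-product of the winning hypothesis. Scores are illustrative.
def recognize_multiscript(scores, lexicon_script):
    """scores: word -> combined log-likelihood; lexicon_script: word -> script tag."""
    best = max(scores, key=scores.get)
    return best, lexicon_script[best]

if __name__ == "__main__":
    scores = {"sfax": -11.2, "nantes": -13.7, "tozeur": -12.4}
    script = {"sfax": "arabic", "nantes": "latin", "tozeur": "arabic"}
    print(recognize_multiscript(scores, script))
```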
Collaboration
French Institute for Research in Computer Science and Automation