Mohamed Tmar
University of Sfax
Publications
Featured research published by Mohamed Tmar.
Data and Knowledge Engineering | 2012
Mohamed Ali Hadj Taieb; Mohamed Ben Aouicha; Mohamed Tmar; Abdelmajid Ben Hamadou
Computing semantic relatedness is a key component of information retrieval tasks and natural language processing applications. Wikipedia provides a knowledge base for computing word relatedness with broader coverage than WordNet. In this paper we use a new intrinsic information content (IC) metric with the Wikipedia category graph (WCG) to measure the semantic relatedness between words. We have developed an efficient algorithm to extract the categories assigned to a given word from the WCG. This extraction strategy is coupled with a new intrinsic information content metric based on the subgraph composed of the hypernyms of a given concept, and we have developed a process to quantify the information content of this subgraph. When tested on a common benchmark of similarity ratings, the proposed approach shows a good correlation value compared to other computational models.
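A minimal sketch of the general idea, on a toy category graph rather than the WCG: a concept is scored by the hypernym subgraph it inherits, with generic ancestors contributing less. The graph, names and weighting below are illustrative assumptions, not the authors' metric.

```python
# Illustrative sketch (not the paper's code): intrinsic IC from the
# hypernym subgraph of a concept in a toy category graph.
from collections import deque

# Toy category graph: each node maps to its direct hypernyms (parents).
CATEGORY_GRAPH = {
    "cat": ["feline"],
    "feline": ["mammal"],
    "dog": ["canine"],
    "canine": ["mammal"],
    "mammal": ["animal"],
    "animal": [],
}

def hypernym_subgraph(concept, graph):
    """Collect the concept together with all of its transitive hypernyms."""
    seen, queue = {concept}, deque([concept])
    while queue:
        node = queue.popleft()
        for parent in graph.get(node, []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

def intrinsic_ic(concept, graph):
    """Toy intrinsic IC: each ancestor contributes inversely to how many
    nodes it covers, so generic categories add little information."""
    coverage = {n: sum(n in hypernym_subgraph(m, graph) for m in graph)
                for n in graph}
    return sum(1.0 / coverage[n] for n in hypernym_subgraph(concept, graph))

print(intrinsic_ic("cat", CATEGORY_GRAPH))     # specific concept: higher IC
print(intrinsic_ic("animal", CATEGORY_GRAPH))  # generic root: lower IC
```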
ACM Symposium on Applied Computing | 2002
Mohand Boughanem; Mohamed Tmar
This paper proposes an adaptive filtering process. Adaptive filtering consists of receiving documents over time and comparing them to the user profile. Filtering improves over time by updating the user profile and the dissemination threshold, which are the principal elements of the filtering decision function. In this paper, a linear system under constraints is solved whenever a relevant document is retrieved, and the solution to this system is used to improve the user profile; this reinforces the relevance of each relevant retrieved document. The constraints are a form of Tf*Idf (term frequency * inverse document frequency). A gradient distribution approach, based on information extracted from relevant filtered documents, is used to update the dissemination threshold. Experiments are undertaken on a dataset provided by TREC (Text REtrieval Conference) in order to simulate and evaluate a filtering process.
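A minimal sketch of such an adaptive filtering loop, with made-up names and update rules standing in for the paper's constrained linear system and gradient distribution: the profile is reinforced on relevant documents and the threshold is nudged upward when non-relevant documents slip through.

```python
# Toy adaptive filtering loop (assumed update rules, not the TREC code).
import numpy as np

def filter_stream(docs, labels, profile, threshold, lr=0.1):
    """docs: tf-idf row vectors; labels: 1 if relevant, else 0."""
    for doc, relevant in zip(docs, labels):
        score = float(doc @ profile)
        if score < threshold:
            continue                     # document not disseminated
        if relevant:
            profile += lr * doc          # reinforce relevant terms
        else:
            # gradient-style move of the threshold toward rejecting
            # the non-relevant scores that slipped through
            threshold += lr * (score - threshold)
    return profile, threshold

rng = np.random.default_rng(0)
docs = rng.random((50, 8))
labels = rng.integers(0, 2, 50)
profile, threshold = filter_stream(docs, labels, np.ones(8) / 8, 0.4)
print(profile.round(2), round(threshold, 3))
```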
Computational Intelligence and Security | 2011
Mohamed Ali Hadj Taieb; Mohamed Ben Aouicha; Mohamed Tmar; Abdelmajid Ben Hamadou
Semantic similarity techniques compute the semantic similarity (common shared information) between two concepts according to a language or domain resource such as an ontology, a taxonomy or a corpus. They constitute important components in most Information Retrieval (IR) and knowledge-based systems. Taking semantics into account requires external semantic resources, coupled with the initial documentation, over which semantic similarity measures make it possible to compare concepts. This paper presents a new approach for measuring semantic relatedness between words and concepts. It combines a new information content (IC) metric using the WordNet thesaurus with the nominalization relation provided by the Java WordNet Library (JWNL). Specifically, the proposed method makes thorough use of the hypernym/hyponym relation (the noun and verb “is a” taxonomy) without external corpus statistics. We use the subgraph formed by the hypernyms of the concept under consideration, which inherits all the features of its hypernyms, and we quantify the contribution of each concept in this subgraph to its information content. When tested on a common data set of word pair similarity ratings, the proposed approach outperforms other computational models: it achieves the highest correlation value (0.70) with a benchmark based on human similarity judgments, notably the large dataset of 260 Finkelstein word pairs (Appendixes 1 and 2).
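To illustrate the corpus-free flavor of such a metric, here is a sketch using a Seco-style intrinsic IC over the WordNet noun taxonomy via NLTK (run nltk.download('wordnet') once beforehand). The exact weighting differs from the authors' metric, and JWNL is replaced by NLTK here.

```python
# Intrinsic (corpus-free) IC over WordNet, in the spirit of the paper.
import math
from nltk.corpus import wordnet as wn

MAX_NOUNS = 82115  # approximate number of noun synsets in WordNet 3.0

def intrinsic_ic(synset):
    """Seco-style intrinsic IC: concepts with many hyponyms carry little
    information; leaf concepts carry the most."""
    hypo = len(list(synset.closure(lambda s: s.hyponyms())))
    return 1.0 - math.log(hypo + 1) / math.log(MAX_NOUNS)

def relatedness(word1, word2):
    """Relatedness = IC of the most informative common hypernym."""
    best = 0.0
    for s1 in wn.synsets(word1, pos=wn.NOUN):
        for s2 in wn.synsets(word2, pos=wn.NOUN):
            for lcs in s1.lowest_common_hypernyms(s2):
                best = max(best, intrinsic_ic(lcs))
    return best

print(relatedness("car", "bicycle"))  # shares a specific hypernym: high
print(relatedness("car", "fork"))     # shares only generic ancestors: low
```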
ACM Symposium on Applied Computing | 2010
Mohamed Ben Aouicha; Mohamed Tmar; Mohand Boughanem
The goal of an XML retrieval system is to select, from a set of XML documents, all elements (nodes) that fit the user information need, usually expressed as a set of keywords with some structural conditions. Structural conditions are simply given as an ordered list of tag names that identifies the target element where relevant content should be searched for. Consequently, a potentially relevant node should not only contain text similar to the query, but its localization path should also fit the structural conditions. We describe in this paper a new approach for ranking XML content-and-structure queries based on a probabilistic combination of two independent scores assigned to each XML element: a content score and a structural score. The content score measures the content similarity between an element and a query; the structural score measures the path similarity between an element path and the structural conditions of the query. We show experimentally that both scores follow well-known distributions, and we propose a probabilistic combination of these distributions to assign a final score to each node. Experiments have been undertaken on a dataset provided by INEX to show the effectiveness of our approach; they focus on the VVCAS task, which is appropriate to our model.
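A sketch of the combination step, under stated assumptions: the path similarity below is a simple in-order tag match, and an exponential CDF stands in for whatever distributions the paper actually fits; only the multiplication under independence reflects the described approach.

```python
# Illustrative combination of independent content and structural scores.
import math

def path_similarity(element_path, query_path):
    """Fraction of query tags found, in order, along the element path."""
    i = 0
    for tag in element_path:
        if i < len(query_path) and tag == query_path[i]:
            i += 1
    return i / len(query_path)

def exp_cdf(x, lam):
    """CDF of an exponential distribution, a stand-in for the fitted one."""
    return 1.0 - math.exp(-lam * x)

def combined_score(content_score, structural_score, lam_c=2.0, lam_s=2.0):
    # Map each raw score through its fitted distribution, then multiply:
    # independence of the two scores is assumed.
    return exp_cdf(content_score, lam_c) * exp_cdf(structural_score, lam_s)

s = path_similarity(["article", "body", "section", "p"], ["article", "section"])
print(combined_score(content_score=0.7, structural_score=s))
```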
ACS/IEEE International Conference on Computer Systems and Applications | 2014
Hanen Karamti; Mohamed Tmar; Faiez Gargouri
Visual information retrieval has become a major research area due to the increasing rate at which images are generated in many applications. This paper addresses an important problem in content-based image retrieval: the vector representation of images and its proper use in image retrieval. We propose a new content-based image retrieval model that integrates neural networks into a vector space model, where each low-level query can be transformed into a score vector. Preliminary results show that our proposed model is effective in a comparative study on two datasets, Corel and Caltech-UCSD.
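A minimal numpy sketch of the transformation being described; the architecture, sizes and random weights are illustrative assumptions, not the paper's network, and a trained model would learn W1 and W2.

```python
# Mapping a low-level query descriptor to a score vector over a small
# reference collection (toy one-hidden-layer network).
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_IMAGES = 64, 10            # descriptor size, collection size
W1 = rng.normal(scale=0.1, size=(N_FEATURES, 32))
W2 = rng.normal(scale=0.1, size=(32, N_IMAGES))

def score_vector(query_features):
    """One hidden layer turns a feature vector into per-image scores."""
    hidden = np.tanh(query_features @ W1)
    return hidden @ W2

query = rng.random(N_FEATURES)
scores = score_vector(query)
print(np.argsort(scores)[::-1][:3])      # indices of top-3 ranked images
```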
Database and Expert Systems Applications | 2013
Nahla Haddar; Mohamed Tmar; Faiez Gargouri
In recent years, many data-driven workflow modeling approaches have been developed, but none of them ensures data integration, process verification and automatic data-driven execution in a comprehensive way. Based on these needs, we introduced, in previous work, a data-driven approach for workflow modeling and execution. In this paper, we extend our approach to ensure a correct definition and execution of our workflow model, and we implement this extension in our framework Opus.
Computational Science and Engineering | 2012
Nahla Haddar; Mohamed Tmar; Faiez Gargouri
In this paper, we present an approach for data-driven workflow modeling based on the Petri net model. The resulting workflow process can be analysed to verify its correctness before implementation. This workflow modeling approach has been implemented in a workflow management system that provides a set of graphical interfaces to model and execute business process tasks.
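For readers unfamiliar with the underlying formalism, here is a toy Petri net core of the kind such a workflow model builds on (illustrative only, not the system's implementation): a transition fires when every input place holds a token.

```python
# Minimal Petri net: places hold tokens, transitions consume and produce.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)          # place -> token count
        self.transitions = {}                 # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Two sequential workflow tasks: submit, then review.
net = PetriNet({"start": 1})
net.add_transition("submit", ["start"], ["submitted"])
net.add_transition("review", ["submitted"], ["done"])
net.fire("submit")
net.fire("review")
print(net.marking)   # {'start': 0, 'submitted': 0, 'done': 1}
```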
International Conference on Knowledge Based and Intelligent Information and Engineering Systems | 2009
Jihen Majdoubi; Mohamed Tmar; Faiez Gargouri
This paper proposes an automatic method that uses the MeSH (Medical Subject Headings) thesaurus to generate a semantic annotation of medical articles. First, our approach uses NLP (Natural Language Processing) techniques to extract the indexing terms. Second, it extracts the MeSH concepts from this set of indexing terms. These concepts are then weighted based on their frequencies, their locations in the article and their semantic relationships according to MeSH. Next, a refinement phase is triggered in order to promote the frequent ontology concepts and determine the ones to be integrated in the annotation. Finally, the structured annotation is built.
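A sketch of such a weighting step, with made-up section boosts and a made-up propagation factor standing in for the paper's actual weighting scheme: occurrence frequency is boosted by location, and part of each weight flows to semantically related concepts.

```python
# Illustrative MeSH concept weighting (values are assumptions).
SECTION_BOOST = {"title": 3.0, "abstract": 2.0, "body": 1.0}

def weight_concepts(occurrences, related):
    """occurrences: list of (concept, section) hits;
    related: MeSH relations as concept -> set of related concepts."""
    weights = {}
    for concept, section in occurrences:
        weights[concept] = weights.get(concept, 0.0) + SECTION_BOOST[section]
    # propagate a fraction of each weight to semantically related concepts
    final = dict(weights)
    for concept, w in weights.items():
        for rel in related.get(concept, ()):
            final[rel] = final.get(rel, 0.0) + 0.5 * w
    return final

hits = [("Asthma", "title"), ("Asthma", "body"), ("Lung Diseases", "abstract")]
rels = {"Asthma": {"Lung Diseases"}}
print(weight_concepts(hits, rels))
```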
Engineering of Computer-Based Systems | 2008
Mohamed Ben Aouicha; Mohand Boughanem; Mohamed Tmar; Mohamed Abid
This paper presents an information retrieval model for XML documents based on tree matching. Queries and documents are represented by extended trees, in which only one level separates each node from its indirect descendants. This makes it possible to compare the structural constraints of the user query with the document structure easily and flexibly. The document fragments (elements) returned in response to the query are thus the ones most similar to the query tree.
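A small sketch of the "extended tree" idea on toy structures (not the paper's code): every node gets a direct edge to each of its indirect descendants, so query and document paths can be compared one level deep.

```python
# Flattening a tree so each node points directly at ALL its descendants.
def extend(tree, node, extended=None):
    """tree: node -> list of children. Returns node -> set of descendants."""
    if extended is None:
        extended = {}
    descendants = set()
    for child in tree.get(node, []):
        descendants.add(child)
        descendants |= extend(tree, child, extended)[child]
    extended[node] = descendants
    return extended

doc = {"article": ["section"], "section": ["title", "p"], "title": [], "p": []}
print(extend(doc, "article"))
# 'article' now points directly at 'section', 'title' and 'p', so a query
# tree article -> p matches without traversing intermediate levels.
```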
Multimedia Tools and Applications | 2018
Hanen Karamti; Mohamed Tmar; Muriel Visani; Thierry Urruty; Faiez Gargouri
Image retrieval is an important problem for researchers in the computer vision and content-based image retrieval (CBIR) fields. Over the last decades, many image retrieval systems represented an image as a set of extracted low-level features such as color, texture and shape, and computed similarity metrics between features in order to find the images most similar to a query image. The disadvantage of this approach is that images that are visually and semantically different may be similar in the low-level feature space, so tools are needed to optimize the retrieval of information. Integrating vector space models is one way to improve the performance of image retrieval. In this paper, we present an efficient and effective retrieval framework which combines a vectorization technique with a pseudo-relevance model. The idea is to transform any similarity matching model (between images) into a vector space model providing a score. A study of several methodologies to obtain the vectorization is presented. Experiments have been undertaken on the Wang, Oxford5k and INRIA Holidays datasets to show the performance of our proposed framework.
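A sketch of that vectorization step under stated assumptions (cosine similarity as a stand-in matcher, random features, a simple averaging feedback rule): any image-to-image similarity function becomes a score vector over the collection, and pseudo-relevance feedback re-queries with the top-ranked images folded in.

```python
# Vectorizing a similarity matcher, plus pseudo-relevance feedback.
import numpy as np

rng = np.random.default_rng(0)
collection = rng.random((100, 32))             # 100 images, 32-d features

def match(a, b):
    """Any similarity model works here; cosine is a stand-in."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def vectorize(query):
    """Turn the matching model into a score vector over the collection."""
    return np.array([match(query, img) for img in collection])

def search(query, feedback_k=5):
    scores = vectorize(query)
    # pseudo-relevance feedback: average the top-k images into the query
    top = np.argsort(scores)[::-1][:feedback_k]
    expanded = (query + collection[top].mean(axis=0)) / 2
    return np.argsort(vectorize(expanded))[::-1]

print(search(rng.random(32))[:5])              # top-5 image indices
```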