Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Mohamed Nazih Omri is active.

Publication


Featured research published by Mohamed Nazih Omri.


Association for Information Science and Technology | 2016

Indexing biomedical documents with a possibilistic network

Wiem Chebil; Lina Fatima Soualmia; Mohamed Nazih Omri; Stéfan Jacques Darmoni

In this article, we propose a new approach for indexing biomedical documents based on a possibilistic network that carries out partial matching between documents and biomedical vocabulary. The main contribution of our approach is to deal with the imprecision and uncertainty of the indexing task using possibility theory. We enhance the estimation of the similarity between a document and a given concept using the two measures of possibility and necessity. Possibility estimates the extent to which a document is not similar to the concept, while necessity can provide confirmation that the document is similar to the concept. Our contribution also reduces a limitation of partial matching: although it allows extracting from the document variants of terms other than those in dictionaries, it also generates irrelevant information. Our objective is to filter the index using the knowledge provided by the Unified Medical Language System®. Experiments were carried out on different corpora, showing encouraging results (an improvement of +26.37% in mean average precision compared with the baseline).
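As a rough illustration of the two measures, here is a minimal sketch of a possibilistic document-concept match, assuming each concept is a set of terms and each document exposes normalized term weights in [0, 1]; the max/min aggregation and all names are illustrative assumptions, not the paper's formulas:

```python
# Illustrative sketch of possibilistic document-concept matching.
# Assumption: doc_weights maps a term to a normalized weight in [0, 1].

def possibility(concept_terms, doc_weights):
    """Pi(concept | doc): degree to which the concept is plausible given
    the document; a low value lets us reject the concept."""
    return max((doc_weights.get(t, 0.0) for t in concept_terms), default=0.0)

def necessity(concept_terms, doc_weights):
    """N(concept | doc): degree to which the concept is certain,
    defined as the dual of the possibility of the complement."""
    return 1.0 - max((1.0 - doc_weights.get(t, 0.0) for t in concept_terms),
                     default=1.0)

doc = {"aspirin": 0.9, "headache": 0.6, "dose": 0.3}
concept = {"aspirin", "analgesic"}
print(possibility(concept, doc), necessity(concept, doc))  # 0.9 0.0
```

Here a missing concept term drives necessity to zero while possibility stays high, which is exactly the asymmetry the two measures are meant to capture.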


Artificial Intelligence in Medicine in Europe | 2015

Biomedical Concepts Extraction Based on Possibilistic Network and Vector Space Model

Wiem Chebil; Lina Fatima Soualmia; Mohamed Nazih Omri; Stéfan Jacques Darmoni

This paper proposes a new approach for indexing biomedical documents based on the combination of a Possibilistic Network and a Vector Space Model. The latter carries out partial matching between documents and biomedical vocabularies. The main contribution of the proposed approach is to combine the cosine similarity with the two measures of possibility and necessity to enhance the estimation of the similarity between a document and a given concept. The possibility estimates the extent to which a document is not similar to the concept; the necessity allows the confirmation that the document is similar to the concept. Experiments were carried out on the OHSUMED corpora and showed encouraging results.
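A minimal sketch of how the cosine score and the two possibilistic degrees might be fused, assuming sparse term-weight dictionaries; the convex-combination rule and the alpha parameter are assumptions, not the paper's exact fusion formula:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def combined_score(doc_vec, concept_vec, pi, n, alpha=0.5):
    # Assumption: convex combination of the VSM score and the average of
    # the possibilistic degrees; the paper's fusion rule may differ.
    return alpha * cosine(doc_vec, concept_vec) + (1 - alpha) * 0.5 * (pi + n)
```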


International Journal of Information Retrieval Research (IJIRR) | 2012

Information Retrieval from Deep Web Based on Visual Query Interpretation

Radhouane Boughammoura; Mohamed Nazih Omri; Lobna Hlaoua

The Deep Web is growing rapidly: more than 90% of the relevant information on the web comes from the deep Web. Users are usually interested in products that satisfy their needs at the best price and quality of service; hence, a user's needs concern not just one service but many competing services at the same time. However, for commercial reasons, there is no way to compare the products of all web services. Each web service is a black box that accepts queries through its own query interface and returns results. As a consequence, users query different web services separately and spend a lot of time comparing products in order to find the best one, which is a burden for novice users. In this paper, the authors propose a new approach that integrates the query interfaces of many web services into one universal web service. The new interface describes the universal query visually and is used to query many web services at the same time. The authors evaluated their approach on standard datasets and demonstrated good performance.
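A toy sketch of the core idea, under the assumption that integrating query interfaces boils down to mapping one universal query onto each service's native form fields; all service and field names here are hypothetical:

```python
# Hypothetical sketch: one universal query fanned out to several
# deep-web services, each with its own form-field vocabulary.
UNIVERSAL_QUERY = {"product": "laptop", "max_price": 800}

# Per-service mapping from universal fields to native form fields.
FIELD_MAPS = {
    "shopA": {"product": "q", "max_price": "price_to"},
    "shopB": {"product": "keywords", "max_price": "budget"},
}

def translate(query, field_map):
    """Rewrite a universal query into one service's native form fields."""
    return {field_map[k]: v for k, v in query.items() if k in field_map}

for service, fmap in FIELD_MAPS.items():
    print(service, translate(UNIVERSAL_QUERY, fmap))
```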


Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises | 2017

Multi-tenancy Aware Configurable Service Discovery Approach in Cloud Computing

Jalel Eddine Hajlaoui; Mohamed Nazih Omri; Djamal Benslimane

The multi-tenancy-aware discovery of configurable Cloud services is one of the most important and difficult issues, because of the multiplicity and non-standardization of service descriptions in the Cloud. In this paper, relying on a feature-model-based specification of configurable WSDL services, we develop a multi-tenancy-aware approach for their discovery. Our approach empowers multiple tenants to discover their desired configured service variants, taking individual variations into account. To do so, we reduce the problem of configurable matching to a tree matching problem and adapt existing algorithms for this purpose. The experimental results show the feasibility of our approach.
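A toy sketch of reducing configurable matching to tree matching, assuming feature models are (label, children) tuples and using a simple greedy label-overlap score; this stands in for, but is not, the authors' adapted algorithms:

```python
# Toy sketch: score how well a requested feature tree matches an offered
# one. Trees are (label, [children]) tuples; the scoring rule (label match
# plus best greedy child pairing) is an illustrative assumption.

def tree_sim(req, off):
    label_score = 1.0 if req[0] == off[0] else 0.0
    child_score = 0.0
    for rc in req[1]:
        # Greedily take the best-matching offered child for each request.
        child_score += max((tree_sim(rc, oc) for oc in off[1]), default=0.0)
    return (label_score + child_score) / (1 + len(req[1]))

vm_request = ("VM", [("CPU", []), ("SSD", [])])
vm_offer   = ("VM", [("CPU", []), ("HDD", [])])
print(tree_sim(vm_request, vm_offer))  # 2/3: label and one child match
```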


International Journal of Approximate Reasoning | 2018

Towards an understanding of cloud services under uncertainty: A possibilistic approach

Asma Omri; Karim Benouaret; Djamal Benslimane; Mohamed Nazih Omri

With the development of Web technologies and the increasing use of the Internet, more and more web services are being deployed. This gave birth to what are called cloud services, which are widely used for building distributed cloud applications. With cloud-based service delivery, it can be hard for users to find the right service for their needs, and when a single cloud service cannot meet all user requirements, a cloud service composition is required. With the rise of Web services on the Internet, the quality of the response has become an important criterion for choosing the most relevant answer. The performance of web services may vary with the dynamic Internet environment, which makes the quality of service uncertain; hence the need to deal with uncertainty in the operation of cloud services. Over the past decade, several service composition approaches have been proposed to cope with this challenge. In this article, we provide a flexible and efficient cloud service composition framework that can respond to user queries and improve composition results by exploiting data uncertainty. We then introduce an effective strategy that allows the modeling, invocation, and composition of services while dealing with data uncertainty. The experimental evaluation of a real case study demonstrates the effectiveness of our proposed strategy.
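One way such uncertainty might propagate through a composition, assuming each candidate service carries a possibility degree and using the standard min rule of possibility theory for a sequential (conjunctive) composition; the aggregation choice is an assumption, not the paper's stated rule:

```python
# Sketch: each service in a composition carries a possibility degree in
# [0, 1]; a sequential composition is only as plausible as its least
# plausible step (min-combination, standard in possibility theory).

def compose_possibility(step_degrees):
    return min(step_degrees, default=1.0)

pipeline = {"storage": 0.9, "compute": 0.7, "network": 0.8}
print(compose_possibility(pipeline.values()))  # 0.7
```

Ranking alternative compositions by this combined degree would then surface the composition whose weakest link is strongest.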


International Conference on Web Services | 2017

QoS Based Framework for Configurable IaaS Cloud Services Discovery

Jalel Eddine Hajlaoui; Mohamed Nazih Omri; Djamal Benslimane; Mahmoud Barhamgi

This paper presents a Configurable Cloud Service Discovery and Selection System (C2SDS2) that aims to guide Cloud users in retrieving configurations of IaaS Cloud resources over the Internet. C2SDS2 takes both functional and non-functional user requirements into account in the retrieval and selection process. In this work, configurable services are designed as directed Cloud extended feature graphs inspired by graph structures and feature models. The discovery-based matching is performed in two steps. In the first step, structural matching is performed by adapting two heuristics: (1) the Hungarian algorithm and (2) VJ (Volgenant-Jonker), an improved Hungarian algorithm. In the second step, QoS matching and ranking are achieved using three different methods of directed weighted graph matching, namely eigendecomposition, the symmetric polynomial transform, and linear programming. We show the efficiency and effectiveness of our system through an experimental study conducted on a collection of configurable IaaS services; the results demonstrate the performance of the algorithm combinations.
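The structural-matching step can be approximated with an off-the-shelf optimal-assignment solver; here is a minimal sketch using SciPy's linear_sum_assignment (a solver in the Hungarian/Jonker-Volgenant family) over a made-up node-to-node cost matrix:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Sketch: cost of assigning each requested feature node (rows) to each
# offered node (columns); lower is better. Values are illustrative.
cost = np.array([
    [0.1, 0.8, 0.9],
    [0.7, 0.2, 0.6],
    [0.9, 0.5, 0.3],
])
rows, cols = linear_sum_assignment(cost)  # optimal one-to-one matching
print(list(zip(rows, cols)), cost[rows, cols].sum())  # total cost 0.6
```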


International Conference on Software and Data Technologies | 2017

Estimating the Survival Rate of Mutants

Imen Marsit; Mohamed Nazih Omri; Ali Mili

Mutation testing is often used to assess the quality of a test suite by analyzing its ability to distinguish between a base program and its mutants. The main threat to the validity and reliability of this assessment is that many mutants may be syntactically distinct from the base, yet functionally equivalent to it. The problem of identifying equivalent mutants and excluding them from consideration is the focus of much recent research. In this paper we argue that it is not necessary to identify individual equivalent mutants and count them; rather, it is sufficient to estimate their number. To do so, we consider the question: what makes a program prone to produce equivalent mutants? Our answer is: redundancy does. Consequently, we introduce a number of program metrics that capture various dimensions of redundancy in a program, and show empirically that they are statistically linked to the rate of equivalent mutants.
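The arithmetic behind the proposal is simple; a sketch with made-up numbers showing how an estimated count of equivalent mutants adjusts the mutation score without identifying individual equivalents:

```python
# Sketch of the arithmetic: adjust the mutation score by an *estimated*
# number of equivalent mutants instead of identifying them one by one.
# All counts below are hypothetical.

N = 200              # mutants generated from the base program
killed = 140         # mutants distinguished by the test suite T
survived = N - killed
est_equivalent = 25  # hypothetical estimate from redundancy metrics

survival_rate = survived / N                    # 0.30
naive_score = killed / N                        # 0.70
adjusted_score = killed / (N - est_equivalent)  # 0.80
print(survival_rate, naive_score, adjusted_score)
```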


International Conference on High Performance Computing and Simulation | 2017

Information Retrieval Based on Description Logic: Application to Biomedical Documents

Kabil Boukhari; Mohamed Nazih Omri

Document indexing is a sensitive phase in information retrieval. The terms present in a document are not sufficient to represent it completely, so exploiting implicit information through external resources is necessary for better indexing. For this purpose, a new indexing model for biomedical documents based on description logics is proposed to generate relevant indexes. The documents and the external resource are represented by descriptive expressions: a first, statistical phase assigns an importance degree to each term in the document, and a semantic phase extracts the most important concepts from the MeSH thesaurus (Medical Subject Headings). The concept extraction step uses description logics to combine the statistical and semantic approaches, followed by a cleaning step that selects the most important indexes for the document representation. For the experiments we used the OHSUMED collection, which showed the effectiveness of the proposed approach and the importance of using description logics in the indexing process.
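A minimal sketch of the two phases, assuming the statistical importance degree is tf-idf-like and the thesaurus is a toy concept-to-terms map; the paper's actual weighting scheme and description-logic machinery are not reproduced here:

```python
import math
from collections import Counter

def tf_idf(doc_tokens, corpus):
    """Importance degree per term: raw tf scaled by smoothed idf."""
    tf = Counter(doc_tokens)
    n = len(corpus)
    weights = {}
    for term, f in tf.items():
        df = sum(1 for d in corpus if term in d)
        weights[term] = f * math.log((1 + n) / (1 + df))
    return weights

# Hypothetical mini-thesaurus: MeSH-like concept -> associated terms.
MESH = {"Asthma": {"asthma", "wheezing"}, "Aspirin": {"aspirin"}}

def candidate_concepts(weights, thesaurus, threshold=0.5):
    """Keep concepts whose label terms carry enough statistical weight."""
    return [c for c, terms in thesaurus.items()
            if sum(weights.get(t, 0.0) for t in terms) >= threshold]
```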


International Conference on High Performance Computing and Simulation | 2017

Collaborative Information Retrieval Model Based on Fuzzy Clustering

Fatiha Naouar; Lobna Hlaoua; Mohamed Nazih Omri

The collaborative approach has proven its value in several application fields, particularly in information retrieval, where it satisfies a need for shared information. Despite this collaboration, searching for relevant information remains a tedious task as the mass of information keeps increasing, part of it consisting of sources while other parts are comments on these sources. Nowadays we witness an explosion of multimedia documents, and multimedia information retrieval techniques remain insufficient to satisfy users' needs even in a collaborative framework: multimedia documents, video documents in particular, are poor in textual information. We therefore consider annotations as a new source of information. Beyond their relevance, annotations generally express brief ideas in a few words and cannot be understood independently of their context; to use them, a classification is necessary, and it must be extended as new annotations emerge. A virtual centroid, computed as the center of gravity of the class, represents each annotation class; hence the interest of fuzzy classification, which identifies the elements that can belong to several clusters. This is why we propose a fuzzy clustering-based annotation model. In the experiments, we considered a relevance feedback system based on a confidence network, treating newly classified relevant annotations as a source of information. To validate the model, we carried out a set of experiments and obtained encouraging results.
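The membership and center-of-gravity computations at the heart of this kind of clustering can be sketched as one fuzzy c-means iteration, assuming annotations are already vectorized; the fuzzifier m = 2 is the conventional default, not a value from the paper:

```python
import numpy as np

def fcm_step(X, centroids, m=2.0):
    """One fuzzy c-means iteration: memberships, then weighted centroids.
    X: (n, d) annotation vectors; centroids: (c, d)."""
    # Distances from every point to every centroid, shape (n, c).
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-9
    # Membership of point i in cluster k: inverse-distance ratio rule.
    inv = d ** (-2.0 / (m - 1.0))
    u = inv / inv.sum(axis=1, keepdims=True)
    # New centroids: membership-weighted centers of gravity.
    w = u ** m
    return u, (w.T @ X) / w.sum(axis=0)[:, None]
```

Because memberships are graded rather than hard, an annotation near two topic clusters contributes to both centroids, which is exactly the property motivating the fuzzy choice here.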


International Conference on High Performance Computing and Simulation | 2017

Bayesian Network Based Information Retrieval Model

Kamel Garrouch; Mohamed Nazih Omri

Information Retrieval Models (IRM) that integrate term dependencies are based on the assumption that the retrieval performance of an Information Retrieval System (IRS) usually increases when the relationships among the terms contained in a given document collection are used. These models have to deal with two problems. The first is how to obtain a set of relevant dependence relationships efficiently from a document collection. The second is how best to use the obtained dependencies to retrieve relevant documents given a user query. In this work, a new information retrieval model based on Bayesian networks is proposed. Its aim is to achieve good retrieval performance by restricting the set of dependencies between terms to the most relevant ones. To achieve this objective, the model searches for dependence relationships within each document in the collection, then creates a final list of dependencies by merging the lists obtained locally from each document. Experiments carried out on four standard document collections have proven the efficiency of the proposed model.
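A minimal sketch of the merge step, assuming local dependencies are simply term co-occurrence pairs within a document and global relevance is how many documents exhibit a pair; the paper's actual dependence extraction and Bayesian-network construction are richer than this:

```python
from collections import Counter
from itertools import combinations

def local_dependencies(doc_terms):
    """Candidate dependencies within one document: all term pairs."""
    return set(combinations(sorted(set(doc_terms)), 2))

def merge_dependencies(collection, top_k=100):
    """Merge the local lists: keep the pairs seen in the most documents."""
    counts = Counter()
    for doc in collection:
        counts.update(local_dependencies(doc))
    return [pair for pair, _ in counts.most_common(top_k)]

docs = [["cloud", "service", "qos"], ["cloud", "service"], ["qos", "network"]]
print(merge_dependencies(docs, top_k=3))  # ('cloud', 'service') ranks first
```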

Collaboration


Dive into Mohamed Nazih Omri's collaboration.

Top Co-Authors

Salem Benferhat

Centre national de la recherche scientifique
