Publications


Featured research published by Mohamed Morsey.


Semantic Web | 2015

DBpedia – A large-scale, multilingual knowledge base extracted from Wikipedia

Jens Lehmann; Robert Isele; Max Jakob; Anja Jentzsch; Dimitris Kontokostas; Pablo N. Mendes; Sebastian Hellmann; Mohamed Morsey; Patrick van Kleef; Sören Auer

The DBpedia community project extracts structured, multilingual knowledge from Wikipedia and makes it freely available on the Web using Semantic Web and Linked Data technologies. The project extracts knowledge from 111 different language editions of Wikipedia. The largest DBpedia knowledge base which is extracted from the English edition of Wikipedia consists of over 400 million facts that describe 3.7 million things. The DBpedia knowledge bases that are extracted from the other 110 Wikipedia editions together consist of 1.46 billion facts and describe 10 million additional things. The DBpedia project maps Wikipedia infoboxes from 27 different language editions to a single shared ontology consisting of 320 classes and 1,650 properties. The mappings are created via a world-wide crowd-sourcing effort and enable knowledge from the different Wikipedia editions to be combined. The project publishes releases of all DBpedia knowledge bases for download and provides SPARQL query access to 14 out of the 111 language editions via a global network of local DBpedia chapters. In addition to the regular releases, the project maintains a live knowledge base which is updated whenever a page in Wikipedia changes. DBpedia sets 27 million RDF links pointing into over 30 external data sources and thus enables data from these sources to be used together with DBpedia data. Several hundred data sets on the Web publish RDF links pointing to DBpedia themselves and make DBpedia one of the central interlinking hubs in the Linked Open Data (LOD) cloud. In this system report, we give an overview of the DBpedia community project, including its architecture, technical implementation, maintenance, internationalisation, usage statistics and applications.
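As a rough illustration of the SPARQL query access the project provides, the following Python sketch uses the third-party SPARQLWrapper package against the public https://dbpedia.org/sparql endpoint. The example resource (Berlin) and the query itself are illustrative choices, not taken from the paper.

```python
# Minimal sketch: fetch the English abstract of one DBpedia resource.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?abstract WHERE {
        <http://dbpedia.org/resource/Berlin> dbo:abstract ?abstract .
        FILTER (lang(?abstract) = "en")
    }
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["abstract"]["value"])
```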


international semantic web conference | 2011

DBpedia SPARQL benchmark: performance assessment with real queries on real data

Mohamed Morsey; Jens Lehmann; Sören Auer; Axel-Cyrille Ngonga Ngomo

Triple stores are the backbone of increasingly many Data Web applications. It is thus evident that the performance of those stores is mission critical for individual projects as well as for data integration on the Data Web in general. Consequently, it is of central importance during the implementation of any of these applications to have a clear picture of the weaknesses and strengths of current triple store implementations. In this paper, we propose a generic SPARQL benchmark creation procedure, which we apply to the DBpedia knowledge base. Previous approaches often compared relational and triple stores and, thus, settled on measuring performance against a relational database which had been converted to RDF by using SQL-like queries. In contrast to those approaches, our benchmark is based on queries that were actually issued by humans and applications against existing RDF data not resembling a relational schema. Our generic procedure for benchmark creation is based on query-log mining, clustering and SPARQL feature analysis. We argue that a pure SPARQL benchmark is more useful to compare existing triple stores and provide results for the popular triple store implementations Virtuoso, Sesame, Jena-TDB, and BigOWLIM. The subsequent comparison of our results with other benchmark results indicates that the performance of triple stores is far less homogeneous than suggested by previous benchmarks.
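To make the measurement setup concrete, here is a hedged sketch of the kind of single-threaded timing harness such a benchmark run implies. The endpoint URL is a placeholder (Virtuoso's default local address), and the real DBPSB workload consists of query templates mined from the DBpedia query logs, not the list passed in here.

```python
# Sketch of a per-query timing harness for a SPARQL endpoint.
import time
import requests

ENDPOINT = "http://localhost:8890/sparql"  # placeholder, e.g. a local Virtuoso

def time_query(query: str) -> float:
    """Return the wall-clock seconds one SELECT query takes."""
    t0 = time.perf_counter()
    r = requests.get(
        ENDPOINT,
        params={"query": query, "format": "application/sparql-results+json"},
        timeout=180,
    )
    r.raise_for_status()
    return time.perf_counter() - t0

def queries_per_second(queries: list[str]) -> float:
    """Aggregate throughput over a fixed list of benchmark queries."""
    elapsed = sum(time_query(q) for q in queries)
    return len(queries) / elapsed if elapsed else 0.0
```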


international conference on semantic systems | 2013

User-driven quality evaluation of DBpedia

Amrapali Zaveri; Dimitris Kontokostas; Mohamed Ahmed Sherif; Lorenz Bühmann; Mohamed Morsey; Sören Auer; Jens Lehmann

Linked Open Data (LOD) comprises an unprecedented volume of structured datasets on the Web. However, these datasets are of varying quality ranging from extensively curated datasets to crowdsourced and even extracted data of relatively low quality. We present a methodology for assessing the quality of linked data resources, which comprises a manual and a semi-automatic process. The first phase includes the detection of common quality problems and their representation in a quality problem taxonomy. In the manual process, the second phase comprises the evaluation of a large number of individual resources, according to the quality problem taxonomy, via crowdsourcing. This process is accompanied by a tool wherein a user assesses an individual resource and evaluates each fact for correctness. The semi-automatic process involves the generation and verification of schema axioms. We report the results obtained by applying this methodology to DBpedia. We identified 17 data quality problem types and 58 users assessed a total of 521 resources. Overall, 11.93% of the evaluated DBpedia triples were identified to have some quality issues. Applying the semi-automatic component yielded a total of 222,982 triples that have a high probability to be incorrect. In particular, we found that problems such as object values being incorrectly extracted, irrelevant extraction of information and broken links were the most recurring quality problems. With this study, we not only aim to assess the quality of this sample of DBpedia resources but also adopt an agile methodology to improve the quality in future versions by regularly providing feedback to the DBpedia maintainers.
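As an illustration of the semi-automatic component, a schema-axiom check can be phrased as a SPARQL query that flags violating triples. The axiom below (dbo:birthDate values must carry the xsd:date datatype) is an invented example for illustration, not one of the paper's verified axioms; it could be issued with the same requests call as in the previous sketch.

```python
# Hypothetical axiom check: find dbo:birthDate values with a wrong datatype.
VIOLATION_QUERY = """
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT ?s ?value WHERE {
    ?s dbo:birthDate ?value .
    FILTER (datatype(?value) != xsd:date)
}
LIMIT 100
"""
```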


Program: Electronic Library and Information Systems | 2012

DBpedia and the live extraction of structured data from Wikipedia

Mohamed Morsey; Jens Lehmann; Sören Auer; Claus Stadler; Sebastian Hellmann

Purpose – DBpedia extracts structured information from Wikipedia, interlinks it with other knowledge bases and freely publishes the results on the web using Linked Data and SPARQL. However, the DBpedia release process is heavyweight and releases are sometimes based on data that is several months old. DBpedia-Live solves this problem by providing a live synchronization method based on the update stream of Wikipedia. This paper seeks to address these issues.

Design/methodology/approach – Wikipedia provides DBpedia with a continuous stream of updates, i.e. a stream of articles which were recently updated. DBpedia-Live processes that stream on the fly to obtain RDF data and stores the extracted data back to DBpedia. DBpedia-Live publishes the newly added/deleted triples in files, in order to enable synchronization between the DBpedia endpoint and other DBpedia mirrors.

Findings – During the realization of DBpedia-Live the authors learned that it is crucial to process Wikipedia updates in a priority queue. Recently-upd...
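The priority-queue insight lends itself to a short sketch. The priority tiers, function names and the commented-out extraction call below are assumptions for illustration; the paper only states that updates are processed by priority, with recently updated pages handled first.

```python
# Sketch of a priority queue over incoming Wikipedia page updates.
import heapq
import itertools

_counter = itertools.count()  # tie-breaker keeps FIFO order within a tier
_queue = []                   # entries: (tier, sequence_number, page_title)

# Hypothetical tiers: lower value = processed sooner.
LIVE_UPDATE, MAPPING_CHANGE, UNMODIFIED = 0, 1, 2

def enqueue(page_title, tier):
    """Register a page for (re-)extraction at the given priority tier."""
    heapq.heappush(_queue, (tier, next(_counter), page_title))

def process_next():
    """Pop and handle the highest-priority page, if any."""
    if not _queue:
        return None
    _tier, _, title = heapq.heappop(_queue)
    # re_extract(title)  # hypothetical call into the extraction framework
    return title
```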


international semantic web conference | 2012

DeFacto - deep fact validation

Jens Lehmann; Daniel Gerber; Mohamed Morsey; Axel-Cyrille Ngonga Ngomo

One of the main tasks when creating and maintaining knowledge bases is to validate facts and provide sources for them in order to ensure correctness and traceability of the provided knowledge. So far, this task is often addressed by human curators in a three-step process: issuing appropriate keyword queries for the statement to check using standard search engines, retrieving potentially relevant documents and screening those documents for relevant content. The drawbacks of this process are manifold. Most importantly, it is very time-consuming as the experts have to carry out several search processes and must often read several documents. In this article, we present DeFacto (Deep Fact Validation) --- an algorithm for validating facts by finding trustworthy sources for them on the Web. DeFacto aims to provide an effective way of validating facts by supplying the user with relevant excerpts of webpages as well as useful additional information including a score for the confidence DeFacto has in the correctness of the input fact.
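The three-step curation workflow DeFacto automates can be sketched in a few lines. The injected search and fetch callables, the query construction and the naive scoring below are hypothetical stand-ins, not DeFacto's actual retrieval or trustworthiness model.

```python
def validate_fact(subject: str, predicate: str, obj: str, search, fetch) -> float:
    """Naive sketch of the three-step fact-checking workflow."""
    # Step 1: build a keyword query for the statement. (DeFacto verbalises
    # the predicate with learned patterns; here we just concatenate labels.)
    query = f'"{subject}" {predicate} "{obj}"'
    # Step 2: retrieve candidate documents via an injected search function.
    urls = search(query)
    # Step 3: screen the documents for content supporting the fact.
    supporting = 0
    for url in urls:
        text = fetch(url)
        if subject in text and obj in text:
            supporting += 1
    return supporting / len(urls) if urls else 0.0  # crude confidence score
```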


scalable information systems | 2015

The Open-Multinet Upper Ontology Towards the Semantic-based Management of Federated Infrastructures

Alexander Willner; Chrysa A. Papagianni; Mary Giatili; Paola Grosso; Mohamed Morsey; Yahya Al-Hazmi; Ilya Baldin

The Internet remains an unfinished work. There are several approaches to enhancing it that have been experimentally validated within federated testbed environments. To best gain scientific knowledge from these studies, reproducibility and automation are needed in all areas of the experiment life cycle. Within the GENI and FIRE context, several architectures and protocols have been developed for this purpose. However, a major open research issue remains, namely the description and discovery of the heterogeneous resources involved. To remedy this, we propose a semantic information model that can be used to allow declarative interoperability, build dependency graphs, validate requests, infer knowledge and conduct complex queries. The requirements for such an information model have been extracted from current international Future Internet research projects and the practicality of the model is being evaluated through initial implementations. The main outcome of this work is the definition of the Open-Multinet Upper Ontology and related sub-ontologies, which can be used to describe and manage federated infrastructures and their resources.
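A speculative sketch of how such a semantic resource description might be built with rdflib follows. The omn: namespace URI, the class name omn:Resource and the node URI are assumptions made for illustration; consult the published Open-Multinet ontologies for the authoritative terms.

```python
# Assumed sketch: typing one testbed resource with an Open-Multinet class.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF

OMN = Namespace("http://open-multinet.info/ontology/omn#")  # assumed URI

g = Graph()
g.bind("omn", OMN)

node = URIRef("http://example.org/testbed/node1")  # hypothetical resource
g.add((node, RDF.type, OMN.Resource))              # assumed class name

print(g.serialize(format="turtle"))
```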


international semantic web conference | 2017

Iguana: A Generic Framework for Benchmarking the Read-Write Performance of Triple Stores

Felix Conrads; Jens Lehmann; Muhammad Saleem; Mohamed Morsey; Axel-Cyrille Ngonga Ngomo

The performance of triple stores is crucial for applications driven by RDF. Several benchmarks have been proposed that assess the performance of triple stores. However, no integrated benchmark-independent execution framework for these benchmarks has yet been provided. We propose a novel SPARQL benchmark execution framework called Iguana. Our framework complements benchmarks by providing an execution environment which can measure the performance of triple stores during data loading, data updates as well as under different loads and parallel requests. Moreover, it allows a uniform comparison of results on different benchmarks. We execute the FEASIBLE and DBPSB benchmarks using the Iguana framework and measure the performance of popular triple stores under updates and parallel user requests. We compare our results (See https://doi.org/10.6084/m9.figshare.c.3767501.v1) with state-of-the-art benchmarking results and show that our benchmark execution framework can unveil new insights pertaining to the performance of triple stores.
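A hedged sketch of the parallel-request measurement idea: several worker threads issue queries against an endpoint for a fixed duration and the aggregate rate is reported. This is a simplification of the concept, not the Iguana framework itself; the endpoint and query list are placeholders.

```python
# Sketch: aggregate throughput under simulated parallel users.
from concurrent.futures import ThreadPoolExecutor
import time
import requests

def worker(endpoint: str, queries: list[str], duration: float) -> int:
    """Issue queries round-robin until the deadline; return completed count."""
    done, i = 0, 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        requests.get(endpoint, params={"query": queries[i % len(queries)]},
                     timeout=60)
        done += 1
        i += 1
    return done

def parallel_qps(endpoint: str, queries: list[str],
                 users: int = 4, duration: float = 60.0) -> float:
    """Queries per second achieved by `users` concurrent workers."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(worker, endpoint, queries, duration)
                   for _ in range(users)]
        total = sum(f.result() for f in futures)
    return total / duration
```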


international conference on computer communications | 2016

DBcloud: Semantic Dataset for the cloud

Mohamed Morsey; Alexander Willner; Robyn Loughnane; Mary Giatili; Chrysa A. Papagianni; Ilya Baldin; Paola Grosso; Yahya Al-Hazmi

In cloud environments, the process of matching requests from users with the available computing resources is a challenging task. This is even more complex in federated environments, where multiple providers cooperate to offer enhanced services, suitable for distributed applications. In order to resolve these issues, a powerful modeling methodology can be adopted to facilitate expressing both the request and the available computing resources. This, in turn, leads to an effective matching between the request and the provisioned resources. For this purpose, the Open-Multinet ontologies were developed, which leverage the expressive power of Semantic Web technologies to describe infrastructure components and services. These ontologies have been adopted in a number of federated testbeds. In this article, DBcloud is presented, a system that provides access to Open-Multinet open data via endpoints. DBcloud can be used to simplify the process of discovery and provisioning of cloud resources and services.
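As an illustration of endpoint-based discovery, the query below lists resources typed with an (assumed) omn:Resource class. Both the endpoint URL and the namespace are placeholders, not the actual DBcloud deployment.

```python
# Hypothetical discovery query against a DBcloud-style SPARQL endpoint.
import requests

DISCOVERY_QUERY = """
PREFIX omn: <http://open-multinet.info/ontology/omn#>
SELECT ?resource WHERE { ?resource a omn:Resource . }
LIMIT 20
"""

response = requests.get(
    "http://example.org/dbcloud/sparql",  # placeholder endpoint
    params={"query": DISCOVERY_QUERY,
            "format": "application/sparql-results+json"},
    timeout=60,
)
for row in response.json()["results"]["bindings"]:
    print(row["resource"]["value"])
```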


international conference on cloud computing and services science | 2016

SemNaaS: Semantic Web for Network as a Service

Mohamed Morsey; Hao Zhu; Isart Canyameres; Samuel Norbury; Paola Grosso; Miroslav Zivkovic

Cloud Computing has several provisioning models, namely Infrastructure as a service (IaaS), Platform as a service (PaaS), and Software as a service (SaaS). However, cloud users (tenants) have limited or no control over the underlying network resources and services. Network as a Service (NaaS) is emerging as a novel model to bridge this gap. However, NaaS requires an approach capable of modeling the underlying network resources and capabilities in abstracted and vendor-independent form. In this paper we elaborate on SemNaaS, a Semantic Web based approach for supporting network management in NaaS systems. Our contribution is three-fold. First, we adopt and improve the Network Markup Language (NML) ontology for describing NaaS infrastructures. Second, based on that ontology, we develop a network modeling system that is integrated with the existing OpenNaaS framework. Third, we demonstrate the benefits that Semantic Web adds to the Network as a Service paradigm by applying SemNaaS operations to a specific NaaS use case.
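A minimal sketch of describing a two-node topology with NML terms via rdflib, in the spirit of SemNaaS's network modeling. The nml: namespace follows the OGF NML 2013 base schema; the node URIs are invented, and the exact terms should be checked against the NML specification.

```python
# Sketch: a two-node network topology expressed with NML vocabulary.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF

NML = Namespace("http://schemas.ogf.org/nml/2013/05/base#")

g = Graph()
g.bind("nml", NML)

topo = URIRef("http://example.org/net/topology1")  # hypothetical URI
g.add((topo, RDF.type, NML.Topology))
for name in ("nodeA", "nodeB"):
    node = URIRef(f"http://example.org/net/{name}")
    g.add((node, RDF.type, NML.Node))
    g.add((topo, NML.hasNode, node))

print(g.serialize(format="turtle"))
```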


european semantic web conference | 2015

SemNaaS: Add Semantic Dimension to the Network as a Service

Mohamed Morsey; Hao Zhu; Isart Canyameres; Paola Grosso

Cloud Computing has several provisioning models, e.g. Infrastructure as a Service (IaaS). However, cloud users (tenants) have limited or no control over the underlying network resources. Network as a Service (NaaS) is emerging as a novel model to fill this gap. However, NaaS requires an approach capable of modeling the underlying network resources and capabilities in an abstracted and vendor-independent form. In this paper we elaborate on SemNaaS, a Semantic Web based approach for developing and supporting operations of NaaS systems. SemNaaS can work with any NaaS provider. We integrated it with the existing OpenNaaS framework. We propose the Network Markup Language (NML) as the ontology for describing networking infrastructures. Based on that ontology, we develop a network modeling system and integrate it with OpenNaaS. Furthermore, we demonstrate the capabilities that Semantic Web can add to the NaaS paradigm by applying SemNaaS operations to a specific NaaS use case.

Collaboration


Dive into Mohamed Morsey's collaborations.

Top Co-Authors

Paola Grosso (University of Amsterdam)
Alexander Willner (Technical University of Berlin)
Chrysa A. Papagianni (National Technical University of Athens)
Mary Giatili (National Technical University of Athens)
Ilya Baldin (University of North Carolina at Chapel Hill)
Yahya Al-Hazmi (Technical University of Berlin)