Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Saeedeh Shekarpour is active.

Publication


Featured research published by Saeedeh Shekarpour.


Journal of Web Semantics | 2010

Modeling and evaluation of trust with an extension in semantic web

Saeedeh Shekarpour; S.D. Katebi

The aim of this paper is twofold. First, some well-known methods of trust modeling and trust evaluation that relate mainly to the semantic web are reviewed and analyzed. A categorization of trust calculation and an analytical view of possible models for rating trust through a chain of acquaintances are presented, and the well-known methods are compared and contrasted based on experimental results. Second, a new method for evaluating trust is proposed. This model has the advantages of simplicity of calculation and enhanced accuracy. The method comprises two algorithms, one for propagation and one for aggregation: the propagation algorithm utilizes statistical techniques, while the aggregation algorithm is based on a weighting mechanism. The technique, named the Max-weight method, is implemented, and its results are compared using a designed accuracy metric. The proposed method may be employed as a subsystem for trust management in the semantic web and for trust evaluation in human interactions in social networks as well as among machines (artificial agents). Experimental results illustrate the efficiency and effectiveness of the proposed method.
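The two-algorithm structure described above (statistical propagation along a chain, then weighted aggregation across paths) can be sketched as follows. The multiplicative propagation operator and the 1/length path weighting are illustrative assumptions, not the paper's exact formulas:

```python
def propagate(chain):
    # Propagate trust along a chain of acquaintances by multiplying
    # successive ratings -- a common statistical choice; the paper's
    # exact propagation operator may differ.
    trust = 1.0
    for rating in chain:
        trust *= rating
    return trust

def aggregate_max_weight(paths):
    # Aggregate several propagated values with a weighting mechanism:
    # here each path is weighted by 1/length (shorter chains count
    # more), and the value of the maximum-weight path is returned.
    best = max(paths, key=lambda chain: 1.0 / len(chain))
    return propagate(best)
```

For example, given a two-hop chain of ratings 0.9 and 0.8 and a direct rating of 0.95, the direct path carries the higher weight and its value is kept.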


Semantic Web - The Personal and Social Semantic Web archive | 2014

An architecture of a distributed semantic social network

Sebastian Tramp; Philipp Frischmuth; Timofey Ermilov; Saeedeh Shekarpour; Sören Auer

Online social networking has become one of the most popular services on the Web. However, current social networks are like walled gardens in which users do not have full control over their data, are bound to the specific usage terms of the social network operator, and suffer from a lock-in effect due to the lack of interoperability and standards compliance between social networks. In this paper we propose an architecture for an open, distributed social network, which is built solely on Semantic Web standards and emerging best practices. Our architecture combines vocabularies and protocols such as WebID, FOAF, Semantic Pingback and PubSubHubbub into a coherent distributed semantic social network, which is capable of providing all crucial functionalities known from centralized social networks. We present our reference implementation, which utilizes the OntoWiki application framework, and take it as the basis for an extensive evaluation. Our results show that a distributed social network is feasible and avoids the limitations of centralized solutions.


International World Wide Web Conference | 2013

Question answering on interlinked data

Saeedeh Shekarpour; Axel-Cyrille Ngonga Ngomo; Sören Auer

The Data Web contains a wealth of knowledge on a large number of domains. Question answering over interlinked data sources is challenging due to two inherent characteristics. First, different datasets employ heterogeneous schemas and each one may only contain a part of the answer for a certain question. Second, constructing a federated formal query across different datasets requires exploiting links between the different datasets on both the schema and instance levels. We present a question answering system, which transforms user-supplied queries (i.e. natural language sentences or keywords) into conjunctive SPARQL queries over a set of interlinked data sources. The contribution of this paper is two-fold: Firstly, we introduce a novel approach for determining the most suitable resources for a user-supplied query from different datasets (disambiguation). We employ a hidden Markov model, whose parameters were bootstrapped with different distribution functions. Secondly, we present a novel method for constructing federated formal queries using the disambiguated resources and leveraging the linking structure of the underlying datasets. This approach essentially relies on a combination of domain and range inference as well as a link traversal method for constructing a connected graph which ultimately renders a corresponding SPARQL query. The results of our evaluation with three life-science datasets and 25 benchmark queries demonstrate the effectiveness of our approach.
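The disambiguation step above treats resource IRIs as hidden states of an HMM and the user's keywords as observations; decoding then recovers the most likely resource sequence. A minimal sketch using standard Viterbi decoding follows — the IRIs and probability tables below are toy values, whereas the paper bootstraps the HMM parameters with different distribution functions:

```python
def viterbi(states, start_p, trans_p, emit_p, observations):
    # Standard Viterbi decoding: find the most likely sequence of
    # hidden states (candidate resource IRIs) for the observed keywords.
    V = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
    path = {s: [s] for s in states}
    for obs in observations[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max(
                (V[-2][p] * trans_p[p][s] * emit_p[s][obs], p) for p in states
            )
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]
```

With two candidate resources and the keywords "berlin capital", the decoder maps each keyword to the resource most likely to have emitted it while respecting the transition structure.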


Web Intelligence | 2011

Keyword-Driven SPARQL Query Generation Leveraging Background Knowledge

Saeedeh Shekarpour; Sören Auer; Axel-Cyrille Ngonga Ngomo; Daniel Gerber; Sebastian Hellmann; Claus Stadler

The search for information on the Web of Data is becoming increasingly difficult due to its dramatic growth. Especially novice users need to acquire both knowledge about the underlying ontology structure and proficiency in formulating formal queries (e.g. SPARQL queries) to retrieve information from Linked Data sources. So as to simplify and automate the querying and retrieval of information from such sources, we present in this paper a novel approach for constructing SPARQL queries based on user-supplied keywords. Our approach utilizes a set of predefined basic graph pattern templates for generating adequate interpretations of user queries. This is achieved by obtaining ranked lists of candidate resource identifiers for the supplied keywords and then injecting these identifiers into suitable positions in the graph pattern templates. The main advantages of our approach are that it is completely agnostic of the underlying knowledge base and ontology schema, that it scales to large knowledge bases and is simple to use. We evaluate all 17 possible valid graph pattern templates by measuring their precision and recall on 53 queries against DBpedia. Our results show that 8 of these basic graph pattern templates return results with a precision above 70%. Our approach is implemented as a Web search interface and performs sufficiently fast to return instant answers to the user even with large knowledge bases.
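The core mechanism — injecting the top-ranked resource identifier for each keyword into predefined basic graph pattern templates — can be sketched as follows. The slot names and the two templates are illustrative placeholders, not the paper's actual 17 templates:

```python
def build_queries(ranked_candidates, templates):
    # Take the top-ranked resource IRI for each keyword slot and
    # inject it into every basic graph pattern template.
    best = {slot: ranked[0] for slot, ranked in ranked_candidates.items()}
    return [t.format(**best) for t in templates]

# Two toy basic graph pattern templates with slots {p} and {o}
# (double braces render as literal SPARQL braces).
TEMPLATES = [
    "SELECT ?s WHERE {{ ?s <{p}> <{o}> . }}",
    "SELECT ?x WHERE {{ <{o}> <{p}> ?x . }}",
]
```

Given ranked candidates for the keywords "capital" and "Germany", each template yields one candidate SPARQL interpretation of the keyword query.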


IEEE International Conference on Semantic Computing | 2013

Keyword Query Expansion on Linked Data Using Linguistic and Semantic Features

Saeedeh Shekarpour; Konrad Höffner; Jens Lehmann; Sören Auer

Effective search in structured information based on textual user input is of high importance in thousands of applications. Query expansion methods augment the original query of a user with alternative query elements with similar meaning to increase the chance of retrieving appropriate resources. In this work, we introduce a number of new query expansion features based on semantic and linguistic inferencing over Linked Open Data. We evaluate the effectiveness of each feature individually as well as their combinations employing several machine learning approaches. The evaluation is carried out on a training dataset extracted from the QALD question answering benchmark. Furthermore, we propose an optimized linear combination of linguistic and lightweight semantic features in order to predict the usefulness of each expansion candidate. Our experimental study shows a considerable improvement in precision and recall over baseline approaches.
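The "optimized linear combination" of features described above amounts to scoring each expansion candidate by a weighted sum of its linguistic and semantic feature values and keeping those that clear a threshold. A minimal sketch, with feature names, weights, and threshold invented for illustration (the paper learns the combination from QALD training data):

```python
def score(features, weights):
    # Linear combination of linguistic and semantic feature values.
    return sum(weights[name] * value for name, value in features.items())

def expand(query_terms, candidates, weights, threshold=0.5):
    # Append expansion candidates whose predicted usefulness
    # clears the threshold to the original query terms.
    return list(query_terms) + [
        term for term, feats in candidates.items()
        if score(feats, weights) >= threshold
    ]
```

For instance, a synonym candidate with a strong linguistic signal is kept, while a weakly related superclass candidate is dropped.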


IEEE International Conference on Semantic Computing | 2013

Large-Scale RDF Dataset Slicing

Edgard Marx; Saeedeh Shekarpour; Sören Auer; Axel-Cyrille Ngonga Ngomo

In recent years, an increasing amount of structured data has been published on the Web as Linked Open Data (LOD). Despite recent advances, consuming and using Linked Open Data within an organization is still a substantial challenge. Many of the LOD datasets are quite large and, despite progress in RDF data management, loading and querying them within a triple store is extremely time-consuming and resource-demanding. To overcome this consumption obstacle, we propose a process inspired by the classical Extract-Transform-Load (ETL) paradigm. In this article, we focus particularly on the selection and extraction steps of this process. We devise a fragment of SPARQL dubbed SliceSPARQL, which enables the selection of well-defined slices of datasets fulfilling typical information needs. SliceSPARQL supports graph patterns for which each connected subgraph pattern involves a maximum of one variable or IRI in its join conditions. This restriction guarantees the efficient processing of the query against a sequential dataset dump stream. As a result, our evaluation shows that dataset slices can be generated an order of magnitude faster than by using the conventional approach of loading the whole dataset into a triple store and retrieving the slice by executing the query against the triple store's SPARQL endpoint.
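The key idea — answering a restricted pattern in a single sequential pass over the dump rather than loading the whole dataset into a triple store — can be illustrated with a toy filter over an N-Triples stream. This is a simplified stand-in for SliceSPARQL, handling only a fixed-predicate/fixed-object pattern rather than its full restricted graph patterns:

```python
def slice_stream(ntriples_lines, predicate=None, obj=None):
    # One sequential pass over an N-Triples dump stream, yielding
    # triples that match a simple fixed-predicate/object pattern.
    # No triple store or index is ever built.
    for line in ntriples_lines:
        parts = line.rstrip(" .\n").split(None, 2)
        if len(parts) != 3:
            continue  # skip blank lines and comments
        s, p, o = parts
        if (predicate is None or p == predicate) and (obj is None or o == obj):
            yield (s, p, o)
```

Because the generator consumes the dump line by line, memory use stays constant regardless of dataset size, which is what makes the streaming approach scale.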


Web Intelligence and Agent Systems: An International Journal | 2013

Generating SPARQL queries using templates

Saeedeh Shekarpour; Sören Auer; Axel-Cyrille Ngonga Ngomo; Daniel Gerber; Sebastian Hellmann; Claus Stadler

The search for information on the Web of Data is becoming increasingly difficult due to its considerable growth. Especially novice users need to acquire both knowledge about the underlying ontology structure and proficiency in formulating formal queries (e.g. SPARQL queries) to retrieve information from Linked Data sources. So as to simplify and automate the querying and retrieval of information from such sources, this paper presents an approach for constructing SPARQL queries based on user-supplied keywords. Our approach utilizes a set of predefined basic graph pattern templates for generating adequate interpretations of user queries. This is achieved by obtaining ranked lists of candidate resource identifiers for the supplied keywords and then injecting these identifiers into suitable positions in the graph pattern templates. The main advantages of our approach are that it is completely agnostic of the underlying knowledge base and ontology schema, that it scales to large knowledge bases and is simple to use. We evaluate all 17 possible valid graph pattern templates by measuring their precision and recall on 53 queries against DBpedia. Our results show that 8 of these basic graph pattern templates return results with a precision above 70%. Our approach is implemented as a Web search interface and performs sufficiently fast to provide answers within an acceptable time frame even when used on large knowledge bases.


International World Wide Web Conference | 2018

Why Reinvent the Wheel – Let’s Build Question Answering Systems Together

Kuldeep Singh; Arun Sethupat Radhakrishna; Andreas Both; Saeedeh Shekarpour; Ioanna Lytra; Ricardo Usbeck; Akhilesh Vyas; Akmal Khikmatullaev; Dharmen Punjani; Christoph Lange; Maria-Esther Vidal; Jens Lehmann; Sören Auer

Modern question answering (QA) systems need to flexibly integrate a number of components specialised to fulfil specific tasks in a QA pipeline. Key QA tasks include Named Entity Recognition and Disambiguation, Relation Extraction, and Query Building. Since a number of different software components exist that implement different strategies for each of these tasks, it is a major challenge to select and combine the most suitable components into a QA system, given the characteristics of a question. We study this optimisation problem and train classifiers, which take features of a question as input and have the goal of optimising the selection of QA components based on those features. We then devise a greedy algorithm to identify the pipelines that include the suitable components and can effectively answer the given question. We implement this model within Frankenstein, a QA framework able to select QA components and compose QA pipelines. We evaluate the effectiveness of the pipelines generated by Frankenstein using the QALD and LC-QuAD benchmarks. These results not only suggest that Frankenstein precisely solves the QA optimisation problem, but also show that it enables the automatic composition of optimised QA pipelines, which outperform the static baseline QA pipeline. Thanks to this flexible and fully automated pipeline generation process, new QA components can be easily included in Frankenstein, thus improving the performance of the generated pipelines.
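The greedy composition step can be sketched as follows: for each QA task in pipeline order, pick the component whose classifier-predicted score for the question's features is highest. The task and component names below are illustrative, and the classifier is stubbed out as a lookup function:

```python
def compose_pipeline(question_features, task_components, predict):
    # Greedy pipeline composition: for each QA task in order, select
    # the component with the highest predicted score for this question.
    # predict(task, component, features) stands in for the trained
    # per-component classifiers.
    pipeline = []
    for task, options in task_components.items():
        best = max(options, key=lambda c: predict(task, c, question_features))
        pipeline.append((task, best))
    return pipeline
```

Selecting each task's component independently keeps composition linear in the number of components, rather than exponential in the number of possible pipelines.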


International Semantic Web Conference | 2011

DC proposal: automatically transforming keyword queries to SPARQL on large-scale knowledge bases

Saeedeh Shekarpour

Most Web of Data applications focus mainly on using SPARQL for issuing queries, which makes the Web of Data difficult to access for non-experts. This challenge is intensified when the algorithms must be applied to large-scale and decentralized knowledge bases. In this thesis, we first focus on methods for automatically transforming keyword-based queries into SPARQL. Second, we work on improving those methods in order to apply them to (a large subset of) the Linked Data Web. In an early phase, a heuristic method was proposed for generating SPARQL queries out of an arbitrary number of keywords, and its preliminary evaluation showed promising results. We are therefore working on possible improvements for applying it to large-scale knowledge bases.


IEEE International Conference on Semantic Computing | 2017

Torpedo: Improving the State-of-the-Art RDF Dataset Slicing

Edgard Marx; Saeedeh Shekarpour; Tommaso Soru; Adrian M.P. Braşoveanu; Muhammad Saleem; Ciro Baron; Albert Weichselbraun; Jens Lehmann; Axel-Cyrille Ngonga Ngomo; Sören Auer

Over the last years, the amount of data published as Linked Data on the Web has grown enormously. In spite of the high availability of Linked Data, organizations still encounter an accessibility challenge while consuming it. This is mostly due to the large size of some of the datasets published as Linked Data. The core observation behind this work is that a subset of these datasets suffices to address the needs of most organizations. In this paper, we introduce Torpedo, an approach for efficiently selecting and extracting relevant subsets from RDF datasets. In particular, Torpedo adds optimization techniques to reduce the cost of seek operations, as well as support for multi-join graph patterns and SPARQL FILTERs that enable a more granular data selection. We compare the performance of our approach with existing solutions on nine different queries against four datasets. Our results show that our approach is highly scalable and is up to 26% faster than the current state-of-the-art RDF dataset slicing approach.

Collaboration


Dive into Saeedeh Shekarpour's collaborations.

Top Co-Authors
