
Publication


Featured research published by Archana Nottamkandath.


Web Science | 2014

Crowdsourcing knowledge-intensive tasks in cultural heritage

Jasper Oosterman; Archana Nottamkandath; Chris Dijkshoorn; Alessandro Bozzon; Geert-Jan Houben; Lora Aroyo

Large datasets such as Cultural Heritage collections require detailed annotations when digitised and made available online. Annotating different aspects of such collections requires a variety of knowledge and expertise which is not always possessed by the collection curators. Artwork annotation is an example of a knowledge-intensive image annotation task, i.e. a task that demands domain-specific knowledge from annotators in order to be completed successfully. This paper describes the results of a study aimed at investigating the applicability of crowdsourcing techniques to knowledge-intensive image annotation tasks. We observed a clear relationship between the annotation difficulty of an image, in terms of the number of items to identify and annotate, and the performance of the recruited workers.


Conference on Privacy, Security and Trust | 2013

Semi-automated assessment of annotation trustworthiness

Davide Ceolin; Archana Nottamkandath; Wan Fokkink

Cultural heritage institutions and multimedia archives often delegate the task of annotating their collections of artifacts to Web users. The use of crowdsourced annotations from the Web gives rise to trust issues. We propose an algorithm that, by making use of a combination of subjective logic, semantic relatedness measures and clustering, automates the process of evaluation for annotations represented by means of the Open Annotation ontology. The algorithm is evaluated over two different datasets coming from the cultural heritage domain.


International Conference on Trust Management | 2012

Automated Evaluation of Annotators for Museum Collections Using Subjective Logic

Davide Ceolin; Archana Nottamkandath; Wan Fokkink

Museums are rapidly digitizing their collections, and face a huge challenge to annotate every digitized artifact in store. Therefore they are opening up their archives for receiving annotations from experts world-wide. This paper presents an architecture for choosing the most eligible set of annotators for a given artifact, based on semantic relatedness measures between the subject matter of the artifact and topics of expertise of the annotators. We also employ mechanisms for evaluating the quality of provided annotations, and constantly manage and update the trust, reputation and expertise information of registered annotators.


Journal of Trust Management | 2014

Efficient semi-automated assessment of annotations trustworthiness

Davide Ceolin; Archana Nottamkandath; Wan Fokkink

Crowdsourcing provides a valuable means for accomplishing large amounts of work which may require a high level of expertise. We present an algorithm for computing the trustworthiness of user-contributed tags of artifacts, based on the reputation of the user, represented as a probability distribution, and on provenance of the tag. The algorithm only requires a small number of manually assessed tags, and computes two trust values for each tag, based on reputation and provenance. We moreover present a computationally cheaper adaptation of the algorithm, which clusters semantically similar tags in the training set, and builds an opinion on a new tag based on its semantic relatedness with respect to the medoids of the clusters. Also, we introduce an adaptation of the algorithm based on the use of provenance stereotypes as an alternative basis for the estimation. Two case studies from the cultural heritage domain show that the algorithms produce satisfactory results.
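The abstract above models user reputation as a probability distribution using subjective logic. As an illustrative sketch only (not the authors' code), the standard subjective-logic mapping turns counts of positive and negative evaluations into an opinion of belief, disbelief and uncertainty, plus an expected trust value:

```python
# Illustrative sketch of a subjective-logic opinion built from evidence
# counts: r positive and s negative observations, prior weight W
# (conventionally 2) and base rate a (conventionally 0.5).

def opinion(r: float, s: float, W: float = 2.0, a: float = 0.5):
    """Return (belief, disbelief, uncertainty, expected_value)."""
    total = r + s + W
    b = r / total            # belief: support from positive evidence
    d = s / total            # disbelief: support from negative evidence
    u = W / total            # uncertainty: shrinks as evidence accumulates
    return b, d, u, b + a * u  # expected probability the next tag is good

# A user with 8 accepted and 2 rejected tags:
b, d, u, e = opinion(8, 2)   # e == 0.75
```

With little evidence the uncertainty component dominates, which is why the algorithm can start from only a small number of manually assessed tags.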


International World Wide Web Conference | 2014

Crowd vs. experts: nichesourcing for knowledge intensive tasks in cultural heritage

Jasper Oosterman; Alessandro Bozzon; Geert-Jan Houben; Archana Nottamkandath; Chris Dijkshoorn; Lora Aroyo; Mieke H. R. Leyssen; Myriam C. Traub

The results of our exploratory study provide new insights to crowdsourcing knowledge intensive tasks. We designed and performed an annotation task on a print collection of the Rijksmuseum Amsterdam, involving experts and crowd workers in the domain-specific description of depicted flowers. We created a testbed to collect annotations from flower experts and crowd workers and analyzed these in regard to user agreement. The findings show promising results, demonstrating how, for given categories, nichesourcing can provide useful annotations by connecting crowdsourcing to domain expertise.


Journal of Data and Information Quality | 2016

Combining User Reputation and Provenance Analysis for Trust Assessment

Davide Ceolin; Paul T. Groth; Valentina Maccatrozzo; Wan Fokkink; Willem Robert van Hage; Archana Nottamkandath

Trust is a broad concept that in many systems is often reduced to user reputation alone. However, user reputation is just one way to determine trust. The estimation of trust can be tackled from other perspectives as well, including by looking at provenance. Here, we present a complete pipeline for estimating the trustworthiness of artifacts given their provenance and a set of sample evaluations. The pipeline is composed of a series of algorithms for (1) extracting relevant provenance features, (2) generating stereotypes of user behavior from provenance features, (3) estimating the reputation of both stereotypes and users, (4) using a combination of user and stereotype reputations to estimate the trustworthiness of artifacts, and (5) selecting sets of artifacts to trust. These algorithms rely on the W3C PROV recommendations for provenance and on evidential reasoning by means of subjective logic. We evaluate the pipeline over two tagging datasets: tags and evaluations from the Netherlands Institute for Sound and Vision’s Waisda? video tagging platform, as well as crowdsourced annotations from the Steve.Museum project. The approach achieves up to 85% precision when predicting tag trustworthiness. Perhaps more importantly, the pipeline provides satisfactory results using relatively little evidence through the use of provenance.
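Step (2) of the pipeline above derives stereotypes of user behavior from provenance features. A minimal sketch of that idea, assuming hypothetical feature tuples and evidence records not taken from the paper: users who share the same provenance features pool their evaluation evidence, so a new user can inherit the stereotype's reputation when little per-user evidence exists.

```python
# Illustrative sketch (not the paper's implementation): pooling evidence
# by provenance stereotype and computing an expected reputation per
# stereotype with the subjective-logic expectation (W=2, base rate 0.5).
from collections import defaultdict

def stereotype_reputations(records):
    """records: iterable of (user, feature_tuple, accepted) triples."""
    evidence = defaultdict(lambda: [0, 0])  # stereotype -> [positive, negative]
    for user, features, accepted in records:
        evidence[features][0 if accepted else 1] += 1
    # Expected reputation: (r + a*W) / (r + s + W) with a=0.5, W=2
    return {s: (r + 1.0) / (r + n + 2.0) for s, (r, n) in evidence.items()}

# Hypothetical provenance features (time of day, day type):
records = [
    ("u1", ("daytime", "weekday"), True),
    ("u2", ("daytime", "weekday"), True),
    ("u3", ("night", "weekend"), False),
]
reps = stereotype_reputations(records)
# reps[("daytime", "weekday")] == 0.75
```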


International Conference on Trust Management | 2015

Predicting Quality of Crowdsourced Annotations Using Graph Kernels

Archana Nottamkandath; Jasper Oosterman; Davide Ceolin; Gerben Klaas Dirk de Vries; Wan Fokkink

Annotations obtained by Cultural Heritage institutions from the crowd need to be automatically assessed for their quality. Machine learning using graph kernels is an effective technique to use structural information in datasets to make predictions. We employ the Weisfeiler-Lehman graph kernel for RDF to make predictions about the quality of crowdsourced annotations in the Steve.Museum dataset, which is modelled and enriched as RDF. Our results indicate that we could predict the quality of crowdsourced annotations with an accuracy of 75%. We also employ the kernel to understand which features from the RDF graph are relevant to make predictions about different categories of quality.
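The core of the Weisfeiler-Lehman kernel mentioned above is an iterative relabelling of graph nodes. As an illustrative sketch under simplifying assumptions (plain labelled graphs rather than the RDF variant used in the paper), one relabelling round replaces each node's label with a compressed identifier for the pair (own label, sorted neighbour labels); repeating this yields label-count feature vectors whose dot product defines the kernel:

```python
# Illustrative sketch (not the authors' code): one Weisfeiler-Lehman
# relabelling round on a labelled undirected graph.

def wl_relabel(labels, adjacency):
    """labels: node -> label; adjacency: node -> list of neighbours."""
    compressed = {}   # signature -> new integer label
    new_labels = {}
    for node, label in labels.items():
        signature = (label, tuple(sorted(labels[n] for n in adjacency[node])))
        new_labels[node] = compressed.setdefault(signature, len(compressed))
    return new_labels

# Tiny example: a path a-b-c with labels A, B, A.
labels = {"a": "A", "b": "B", "c": "A"}
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
new = wl_relabel(labels, adj)
# "a" and "c" share a signature, so they receive the same new label;
# "b" has a different neighbourhood and receives a distinct one.
```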


Uncertainty Reasoning for the Semantic Web | 2013

Analyzing User Demographics and User Behavior for Trust Assessment

Davide Ceolin; Paul T. Groth; Archana Nottamkandath; Wan Fokkink; Willem Robert van Hage

In many systems, the determination of trust is reduced to reputation estimation. However, reputation is just one way of determining trust. The estimation of trust can be tackled from a variety of other perspectives. In this chapter, we model trust based on user reputation, user demographics and provenance. We then explore the effects of combining trust computed through these different methods. Concretely, the first contribution of this chapter is a study of the correlations of demographics with trust. This study helps us to understand which categories of users are better candidates for annotation tasks in the cultural heritage domain. Secondly, we detail a procedure for computing reputation-based trust assessments. The user reputation is modeled in subjective logic based on the user's performance in the evaluated system (Waisda? in the case of the work presented here). The third contribution is a procedure for computing trust values based on provenance information, represented using the W3C PROV model. We show how merging the results of these procedures can be beneficial for the reliability of the estimated trust value. We evaluate the proposed procedures and their merger by estimating and verifying the trustworthiness of the tags created within the Waisda? video tagging game from the Netherlands Institute for Sound and Vision. Through a quantitative analysis of the results, we demonstrate that using provenance and demographic information is beneficial for the accuracy of trust assessments.


Uncertainty Reasoning for the Semantic Web | 2012

Trust evaluation through user reputation and provenance analysis

Davide Ceolin; Paul T. Groth; Willem Robert van Hage; Archana Nottamkandath; Wan Fokkink


Uncertainty Reasoning for the Semantic Web | 2012

Subjective logic extensions for the semantic web

Davide Ceolin; Archana Nottamkandath; Wan Fokkink

Collaboration


Dive into Archana Nottamkandath's collaborations.

Top Co-Authors

Wan Fokkink, VU University Amsterdam
Jasper Oosterman, Delft University of Technology
Alessandro Bozzon, Delft University of Technology
Geert-Jan Houben, Delft University of Technology
Lora Aroyo, VU University Amsterdam