Jasper Oosterman
Delft University of Technology
Publication
Featured research published by Jasper Oosterman.
Web Science | 2014
Jasper Oosterman; Archana Nottamkandath; Chris Dijkshoorn; Alessandro Bozzon; Geert-Jan Houben; Lora Aroyo
Large datasets such as Cultural Heritage collections require detailed annotations when digitised and made available online. Annotating different aspects of such collections requires a variety of knowledge and expertise that collection curators do not always possess. Artwork annotation is an example of a knowledge intensive image annotation task, i.e. a task that requires annotators to have domain-specific knowledge in order to complete it successfully. This paper describes the results of a study aimed at investigating the applicability of crowdsourcing techniques to knowledge intensive image annotation tasks. We observed a clear relationship between the annotation difficulty of an image, in terms of the number of items to identify and annotate, and the performance of the recruited workers.
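As a rough illustration of how such a difficulty/performance relationship might be checked, the sketch below computes a Spearman rank correlation between a per-image difficulty proxy (number of objects to annotate) and mean worker accuracy. The numbers are invented for the example; the paper's actual analysis may differ.

```python
# Hedged illustration: correlate image difficulty (object count) with worker
# performance using Spearman rank correlation. All values below are made up.
from scipy.stats import spearmanr

objects_per_image = [2, 3, 5, 8, 12, 15]                  # difficulty proxy
mean_worker_accuracy = [0.90, 0.85, 0.70, 0.60, 0.50, 0.45]  # illustrative scores

rho, p_value = spearmanr(objects_per_image, mean_worker_accuracy)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```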
Computer Networks | 2015
Jasper Oosterman; Jie Yang; Alessandro Bozzon; Lora Aroyo; Geert-Jan Houben
Cultural heritage institutions increasingly provide online access to their collections. To make collections of visual artworks suitable for human access and retrieval, the represented visual objects (e.g. plants or animals) need detailed and thorough annotations. Crowdsourcing has proven a viable tool to compensate for the pitfalls of automatic annotation techniques. However, unlike traditional photographic image annotation, the artwork annotation task requires workers to possess the knowledge and skills needed to identify and recognise the occurrences of visual classes. The extent to which crowdsourcing can be effectively applied to artwork annotation is still an open research question. Based on a real-life case study from the Rijksmuseum Amsterdam, this paper investigates the performance of a crowd of workers drawn from the CrowdFlower platform. Our contributions include a detailed analysis of crowd annotations based on two annotation configurations and a comparison of these crowd annotations with those from trusted annotators. In this study we apply a novel method for the automatic aggregation of local (i.e. bounding box) annotations, and we study how different knowledge extraction and aggregation configurations affect the identification and recognition aspects of artwork annotation. Our work sheds new light on the process of crowdsourcing artwork annotations and shows how techniques that are effective for photographic image annotation cannot be straightforwardly applied to artwork annotation, thus paving the way for new research in the area.
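The abstract does not detail the aggregation method itself, so the following is only a minimal sketch of one common approach to aggregating crowd bounding-box annotations: greedily grouping boxes by IoU overlap and averaging groups supported by enough distinct workers. The function names, thresholds, and data layout are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch (not the paper's method): group worker boxes whose IoU exceeds
# a threshold and keep groups that enough distinct workers support, averaging
# the coordinates of each surviving group into a consensus box.

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def aggregate_boxes(annotations, iou_threshold=0.5, min_support=3):
    """annotations: list of (worker_id, box). Returns averaged consensus boxes."""
    groups = []  # each group is a list of (worker_id, box)
    for worker, box in annotations:
        for group in groups:
            if any(iou(box, other) >= iou_threshold for _, other in group):
                group.append((worker, box))
                break
        else:
            groups.append([(worker, box)])
    consensus = []
    for group in groups:
        if len({w for w, _ in group}) >= min_support:
            coords = list(zip(*[b for _, b in group]))
            consensus.append(tuple(sum(c) / len(c) for c in coords))
    return consensus
```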
ACM Symposium on Applied Computing | 2015
Simon Kassing; Jasper Oosterman; Alessandro Bozzon; Geert-Jan Houben
Social bookmarking communities are now major content production platforms. There, millions of users interact every day on a great variety of knowledge domains, creating new content, linking to existing content, and engaging in constructive discussions. Relevant domain-specific content is often mixed with less useful contributions, and domain experts often have to find their way through lurkers and Web trolls. Such diversity in topics and quality is a distinctive property of this class of Web sites. It interferes with the ability to locate relevant content and users, and this hinders the use of social bookmarking communities for tasks such as structured knowledge creation or crowdsourcing. In this paper we investigate how relevant domain-specific content, in the form of submissions shared by (expert) users, can be effectively located on the social bookmarking platform reddit. We contribute a framework for the identification and characterisation of domain-specific content and knowledgeable users, and apply it to the reddit platform. Our work provides novel insights into the properties and dynamics of reddit, and represents an important step towards better use of social bookmarking communities as a source of knowledge and expertise.
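As a hedged illustration of locating domain-specific content and knowledgeable users on reddit (not the paper's actual framework), the sketch below uses the PRAW client to scan a subreddit and rank authors by how many of their submissions mention domain vocabulary. The credentials, subreddit, and keyword list are placeholders.

```python
# Illustrative sketch: collect submissions from a domain-oriented subreddit via
# PRAW and count domain-matching submissions per author as a naive expertise
# signal. Credentials, subreddit name, and DOMAIN_TERMS are placeholders.
from collections import Counter
import praw

reddit = praw.Reddit(client_id="YOUR_ID", client_secret="YOUR_SECRET",
                     user_agent="domain-content-scan")

DOMAIN_TERMS = {"cultivar", "taxonomy", "genus", "pollination"}  # hypothetical vocabulary

def domain_submissions(subreddit_name, limit=200):
    """Yield submissions whose title or body contains a domain term."""
    for submission in reddit.subreddit(subreddit_name).hot(limit=limit):
        text = f"{submission.title} {submission.selftext}".lower()
        if any(term in text for term in DOMAIN_TERMS):
            yield submission

def rank_knowledgeable_users(subreddit_name):
    """Rank authors by the number of their domain-matching submissions."""
    counts = Counter()
    for submission in domain_submissions(subreddit_name):
        if submission.author is not None:
            counts[submission.author.name] += 1
    return counts.most_common(10)

print(rank_knowledgeable_users("botany"))
```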
International World Wide Web Conference | 2014
Jasper Oosterman; Alessandro Bozzon; Geert-Jan Houben; Archana Nottamkandath; Chris Dijkshoorn; Lora Aroyo; Mieke H. R. Leyssen; Myriam C. Traub
The results of our exploratory study provide new insights into crowdsourcing knowledge intensive tasks. We designed and performed an annotation task on a print collection of the Rijksmuseum Amsterdam, involving experts and crowd workers in the domain-specific description of depicted flowers. We created a testbed to collect annotations from flower experts and crowd workers and analyzed these with regard to user agreement. The findings show promising results, demonstrating how, for given categories, nichesourcing can provide useful annotations by connecting crowdsourcing to domain expertise.
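One simple way to quantify the user agreement mentioned above is Cohen's kappa between an expert and a crowd worker over the same images; the sketch below uses scikit-learn with made-up labels and does not claim to reproduce the paper's agreement analysis.

```python
# Hedged sketch: agreement between an expert and a crowd worker on the same
# images, measured with Cohen's kappa. The labels are invented examples.
from sklearn.metrics import cohen_kappa_score

expert_labels = ["tulip", "rose", "rose",  "daisy", "tulip", "rose"]
crowd_labels  = ["tulip", "rose", "daisy", "daisy", "tulip", "tulip"]

kappa = cohen_kappa_score(expert_labels, crowd_labels)
print(f"Cohen's kappa between expert and crowd worker: {kappa:.2f}")
```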
International Conference on Trust Management | 2015
Archana Nottamkandath; Jasper Oosterman; Davide Ceolin; Gerben Klaas Dirk de Vries; Wan Fokkink
Annotations obtained by Cultural Heritage institutions from the crowd need to be automatically assessed for their quality. Machine learning using graph kernels is an effective technique for exploiting structural information in datasets to make predictions. We employ the Weisfeiler-Lehman graph kernel for RDF to make predictions about the quality of crowdsourced annotations in the Steve.museum dataset, which is modelled and enriched as RDF. Our results indicate that we can predict the quality of crowdsourced annotations with an accuracy of 75%. We also employ the kernel to understand which features from the RDF graph are relevant for making predictions about different categories of quality.
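The sketch below illustrates the general Weisfeiler-Lehman idea (iterative subtree relabelling turned into count features for a classifier) on toy labelled graphs. It is not the paper's RDF kernel implementation or its Steve.museum modelling; the graphs and quality labels are invented.

```python
# Hedged sketch of the WL-subtree technique: refine node labels by concatenating
# neighbour labels, count the resulting labels as features, and train an SVM.
# Graphs, edges, and quality labels below are toy examples only.
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import SVC

def wl_features(node_labels, edges, iterations=2):
    """Count WL-refined labels. node_labels: {node: label}, edges: {node: [neighbours]}."""
    labels = dict(node_labels)
    counts = Counter(labels.values())
    for _ in range(iterations):
        labels = {
            n: f"{labels[n]}|{'.'.join(sorted(labels[m] for m in edges.get(n, [])))}"
            for n in labels
        }
        counts.update(labels.values())
    return counts

# Two toy annotation graphs (e.g. annotation -> artwork, annotation -> annotator)
graphs = [
    ({"a": "Annotation", "w": "Artwork", "u": "Annotator"}, {"a": ["w", "u"]}),
    ({"a": "Annotation", "w": "Artwork"}, {"a": ["w"]}),
]
quality = [1, 0]  # 1 = useful annotation, 0 = not useful (made-up labels)

vec = DictVectorizer()
X = vec.fit_transform([wl_features(nl, e) for nl, e in graphs])
clf = SVC(kernel="linear").fit(X, quality)
print(clf.predict(X))
```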
International Conference on Web Engineering | 2016
Jasper Oosterman; Geert-Jan Houben
The successful execution of knowledge crowdsourcing (KC) tasks requires contributors to possess knowledge or mastery in a specific domain. The need for expert contributors limits the capacity of online crowdsourcing marketplaces to cope with KC tasks. While online social platforms emerge as a viable alternative source of expert contributors, how to successfully invite them remains an open research question. We contribute an experiment in expert contributor invitation in which we study the performance of two invitation strategies: one addressed to individual expert contributors, and one addressed to communities of knowledge. We target reddit, a popular social bookmarking platform, to seek expert contributors in the botany and ornithology domains and to invite them to contribute to an artwork annotation KC task. Results provide novel insights into the effectiveness of direct invitation strategies, but show how soliciting collaboration through communities yields, in the context of our experiment, more contributions.
International Conference on Web Engineering | 2016
Jasper Oosterman; Alessandro Bozzon; Geert-Jan Houben
This demo presents the Crowd Knowledge Curator (CroKnow), a novel web-based platform that streamlines the processes required to enrich existing knowledge bases (e.g. wikis) by tapping into the latent knowledge of expert contributors on online platforms. The platform integrates a number of tools aimed at supporting the identification of missing data in existing structured resources, the specification of strategies to identify and invite candidate experts from open communities, and the visualisation of the status of the knowledge creation process. CroKnow will be demonstrated through a case study focusing on the enrichment of the Rijksmuseum Amsterdam's digital collection.
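As an illustration of the "identify missing data in structured resources" step (not CroKnow's actual implementation), the sketch below queries a hypothetical SPARQL endpoint for collection items that still lack a particular annotation. The endpoint URL and the class/property IRIs are placeholders, not the Rijksmuseum's actual vocabulary.

```python
# Illustrative sketch only: find collection items missing a specific annotation
# by querying a SPARQL endpoint. Endpoint and IRIs are hypothetical placeholders.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://example.org/collection/sparql")  # placeholder endpoint
sparql.setQuery("""
    PREFIX ex: <http://example.org/schema#>
    SELECT ?artwork WHERE {
        ?artwork a ex:Print .
        FILTER NOT EXISTS { ?artwork ex:depictsFlower ?flower }
    }
    LIMIT 50
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
missing = [row["artwork"]["value"] for row in results["results"]["bindings"]]
print(f"{len(missing)} artworks still need a flower annotation")
```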
International Conference on User Modeling, Adaptation, and Personalization | 2012
Chris Dijkshoorn; Jasper Oosterman; Lora Aroyo; Geert-Jan Houben
S4SC'14 Proceedings of the Fifth International Conference on Semantics for Smarter Cities - Volume 1280 | 2014
Marco Balduini; Stefano Bocconi; Alessandro Bozzon; Emanuele Della Valle; Yi Huang; Jasper Oosterman; Themis Palpanas; Mikalai Tsytsarau
CEUR Workshop Proceedings | 2013
Chris Dijkshoorn; Mieke H. R. Leyssen; Archana Nottamkandath; Jasper Oosterman; Myriam C. Traub; Lora Aroyo; Alessandro Bozzon; Wan Fokkink; Geert-Jan Houben; H. Hovelmann; Lizzy Jongma; J.R. van Ossenbruggen; Guus Schreiber; Jan Wielemaker