
Publication


Featured research published by Bob J. Wielinga.


Web Science | 2011

Digital hermeneutics: Agora and the online understanding of cultural heritage

Chiel van den Akker; Susan Legêne; Marieke van Erp; Lora Aroyo; Roxane Segers; Lourens van der Meij; Jacco van Ossenbruggen; Guus Schreiber; Bob J. Wielinga; Johan Oomen; Geertje Jacobs

Cultural heritage institutions are currently rethinking access to their collections to allow the public to interpret and contribute to their collections. In this work, we present the Agora project, an interdisciplinary project in which Web technology and theory of interpretation meet. This we call digital hermeneutics. The Agora project facilitates the understanding of historical events and improves access to integrated online history collections. In this contribution, we focus on defining and modeling prototypical object-event and event-event relationships that support the interpretation of objects in cultural heritage collections. We present a use case in which we model historical events as well as relations between objects and events for a set of paintings from the Rijksmuseum Amsterdam collection. Our use case shows how Web technology and theory of interpretation meet in the present, and what technological hurdles still need to be overcome to fully support digital hermeneutics.
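The object-event and event-event relationships described above are, at heart, a graph of typed links between identifiers. A minimal sketch of the idea (all identifiers and relation names here are hypothetical illustrations, not the Agora model):

```python
# Toy triple set linking heritage objects to historical events.
# Identifiers and relation names are invented for illustration.
triples = {
    ("painting:NightWatch", "depicts", "event:CivicGuardMuster"),
    ("event:CivicGuardMuster", "partOf", "event:DutchGoldenAge"),
}

def related_events(obj, triples):
    """Events directly linked to a given heritage object."""
    return {o for s, p, o in triples if s == obj and o.startswith("event:")}
```

Traversing such links from an object to its events, and from events to broader events, is what lets a visitor move from a single painting to its historical context.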


International Conference on Knowledge Capture | 2011

Let's agree to disagree: on the evaluation of vocabulary alignment

Anna Tordai; Jacco van Ossenbruggen; Guus Schreiber; Bob J. Wielinga

Gold standard mappings created by experts are at the core of alignment evaluation. At the same time, the process of manual evaluation is rarely discussed. While the practice of having multiple raters evaluate results is accepted, their level of agreement is often not measured. In this paper we describe three experiments in manual evaluation and study the way different raters evaluate mappings. We used alignments generated using different techniques and between vocabularies of different type. In each experiment, five raters evaluated alignments and talked through their decisions using the think aloud method. In all three experiments we found that inter-rater agreement was low and analyzed our data to find the reasons for it. Our analysis shows which variables can be controlled to affect the level of agreement including the mapping relations, the evaluation guidelines and the background of the raters. On the other hand, differences in the perception of raters, and the complexity of the relations between often ill-defined natural language concepts remain inherent sources of disagreement. Our results indicate that the manual evaluation of ontology alignments is by no means an easy task and that the ontology alignment community should be careful in the construction and use of reference alignments.
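The paper's central measurement is inter-rater agreement among five raters. A standard statistic for agreement among more than two raters is Fleiss' kappa; a minimal sketch of how it is computed (illustrative, not the paper's evaluation code):

```python
from collections import Counter

def fleiss_kappa(ratings):
    """Fleiss' kappa for a list of items, each rated by the same number
    of raters. `ratings` is a list of per-item category counts,
    e.g. [{"correct": 4, "incorrect": 1}, ...]."""
    n_items = len(ratings)
    n_raters = sum(ratings[0].values())
    # Overall proportion of ratings falling in each category.
    totals = Counter()
    for item in ratings:
        totals.update(item)
    p_cat = {c: t / (n_items * n_raters) for c, t in totals.items()}
    # Per-item agreement: fraction of rater pairs that agree.
    p_items = [
        (sum(c * c for c in item.values()) - n_raters)
        / (n_raters * (n_raters - 1))
        for item in ratings
    ]
    p_bar = sum(p_items) / n_items             # observed agreement
    p_e = sum(p * p for p in p_cat.values())   # chance agreement
    return (p_bar - p_e) / (1 - p_e)
```

Kappa of 1 means perfect agreement; values near or below 0 mean agreement no better than chance, which is the kind of low agreement the paper reports and analyzes.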


International Semantic Web Conference | 2010

Aligning large SKOS-like vocabularies: two case studies

Anna Tordai; Jacco van Ossenbruggen; Guus Schreiber; Bob J. Wielinga

In this paper we build on our methodology for combining and selecting alignment techniques for vocabularies, with two alignment case studies of large vocabularies in two languages. Firstly, we analyze the vocabularies and, based on that analysis, choose our alignment techniques. Secondly, we test our hypothesis, based on earlier work, that first generating alignments using simple lexical alignment techniques and then disambiguating the alignments in a separate step performs best in terms of precision and recall. The experimental results show, for example, that this combination of techniques provides an estimated precision of 0.7 for a sample of the 12,725 concepts for which alignments were generated (of the total 27,992 concepts). Thirdly, we explain our results in light of the characteristics of the vocabularies and discuss their impact on the alignment techniques.
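"Simple lexical alignment" in this setting typically means matching normalized concept labels across the two vocabularies. A minimal sketch under that assumption (the normalization and data shapes are illustrative, not the paper's implementation):

```python
import unicodedata

def normalize(label):
    """Lower-case and strip diacritics so e.g. 'Legêne' matches 'legene'."""
    nfkd = unicodedata.normalize("NFKD", label.lower())
    return "".join(ch for ch in nfkd if not unicodedata.combining(ch))

def lexical_alignments(vocab_a, vocab_b):
    """Candidate mappings between two {concept_id: [labels]} vocabularies,
    based on exact matches of normalized labels."""
    index = {}
    for cid, labels in vocab_b.items():
        for label in labels:
            index.setdefault(normalize(label), set()).add(cid)
    pairs = set()
    for cid, labels in vocab_a.items():
        for label in labels:
            for other in index.get(normalize(label), ()):
                pairs.add((cid, other))
    return pairs
```

Such label matching over-generates for ambiguous terms, which is why the paper follows it with a separate disambiguation step before measuring precision and recall.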


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2013

Reflections on 25+ years of knowledge acquisition

Bob J. Wielinga

In this paper I give a short reflection on Knowledge Acquisition as a subfield of AI and Knowledge Engineering over the last 25 years or so. My major message is that knowledge modeling is an underrated but still important method to reduce the complexity problems that arise in constructing knowledge-based applications. Scale - as apparent in the Semantic Web - is another important parameter in recent developments in Knowledge Acquisition: it requires other techniques than those of the 1980s. Natural Language Processing is the most promising way forward, but also the most difficult source of the acquisition of formalized knowledge. I will argue that some of the lessons learned in building knowledge-based systems may carry over to reasoning in the Semantic Web and to knowledge acquisition from natural language web sources.


International Conference on Knowledge Capture | 2015

A methodology for constructing the calculation model of scientific spreadsheets

Martine G. de Vos; Jan Wielemaker; Guus Schreiber; Bob J. Wielinga; Jan L. Top

Spreadsheet models are frequently used by scientists to analyze research data. These models are typically described in a paper or a report, which serves as the single source of information on the underlying research project. As the calculation workflow in these models is not made explicit, readers are not able to fully understand how the research results are calculated, or to trace them back to the underlying spreadsheets. This paper proposes a methodology for semi-automatically deriving the calculation workflow underlying a set of spreadsheets. The starting point of our methodology is the cell dependency graph, representing all spreadsheet cells and connections. We automatically aggregate all cells in the graph that represent instances and duplicates of the same quantities, based on analysis of the formula syntax. Subsequently, we use a set of heuristics, incorporating knowledge on spreadsheet design, computational procedures and domain knowledge, to select those quantities that are relevant for understanding the calculation workflow. We explain and illustrate our methodology by applying it to three sets of spreadsheets from existing research projects in the domains of environmental and life science. Results from these case studies show that our constructed calculation models approximate the ground truth calculation workflows, both in terms of content and size, but are not a perfect match.
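The starting point described above, a cell dependency graph plus aggregation of cells computed by the same formula pattern, can be sketched as follows (deliberately simplified; real spreadsheets also have ranges, sheet references, and functions, and this is not the paper's implementation):

```python
import re

CELL_REF = re.compile(r"\b([A-Z]+[0-9]+)\b")

def cell_dependency_graph(cells):
    """Build a dependency graph from {cell_name: formula_or_value}.
    Each formula cell maps to the set of cells it references."""
    graph = {}
    for name, content in cells.items():
        if isinstance(content, str) and content.startswith("="):
            graph[name] = set(CELL_REF.findall(content))
        else:
            graph[name] = set()
    return graph

def formula_shape(formula):
    """Formulas that differ only in which cells they reference share a
    shape, e.g. '=A1*2' and '=A2*2' both become '=?*2'. Cells with the
    same shape are candidates for aggregation into one quantity."""
    return CELL_REF.sub("?", formula)
```

Grouping cells by shared formula shape is one simple way to detect that a whole column of cells are instances of the same computed quantity.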


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2017

Combining information on structure and content to automatically annotate natural science spreadsheets

Martine G. de Vos; Jan Wielemaker; Hajo Rijgersberg; Guus Schreiber; Bob J. Wielinga; Jan L. Top

In this paper we propose several approaches for automatic annotation of natural science spreadsheets using a combination of structural properties of the tables and external vocabularies. During the design process of their spreadsheets, domain scientists implicitly include their domain model in the content and structure of the spreadsheet tables. However, this domain model is essential to unambiguously interpret the spreadsheet data. The overall objective of this research is to make the underlying domain model explicit, to facilitate evaluation and reuse of these data. We present our annotation approaches by describing five structural properties of natural science spreadsheets that may pose challenges to annotation and, at the same time, provide additional information on the content. For example, the main property we describe is that, within a spreadsheet table, semantically related terms are grouped in rectangular blocks. For each of the five structural properties we suggest an annotation approach that combines heuristics on the property with knowledge from external vocabularies. We evaluate our approaches in a case study, with a set of existing natural science spreadsheets, by comparing the annotation results with a baseline based on purely lexical matching. Our case study results show that combining information on structural properties of spreadsheet tables with lexical matching to external vocabularies results in higher precision and recall of annotation of individual terms. We show that the semantic characterization of blocks of spreadsheet terms is an essential first step in the identification of relations between cells in a table. As such, the annotation approaches presented in this study provide the basic information that is needed to construct the domain model of scientific spreadsheets.
Highlights:
- We describe five structural properties of scientific spreadsheet tables.
- Within a spreadsheet table, semantically related terms are grouped in blocks.
- We annotate tables using information on their structure and external vocabularies.
- Including information on table structure improves annotation of spreadsheet terms.
- We identify relations within a table by semantically categorizing blocks of terms.
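The key structural property, blocks of semantically related terms, can be illustrated with a toy block detector for a single column (the category function stands in for lexical matching against an external vocabulary; all names are hypothetical):

```python
def column_blocks(column, category_of):
    """Group consecutive cells in one spreadsheet column into blocks of
    terms that fall in the same vocabulary category. Returns a list of
    (category, [(row, term), ...]) tuples."""
    blocks = []
    for row, term in enumerate(column):
        cat = category_of(term)
        if blocks and blocks[-1][0] == cat:
            blocks[-1][1].append((row, term))   # extend current block
        else:
            blocks.append((cat, [(row, term)]))  # start a new block
    return blocks
```

Once a block is categorized as, say, crops or units, that shared category is evidence for how the block relates to neighboring blocks in the table.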


Applied Ontology | 2014

Engineering ontologies for question answering

Marten Teitsma; Jacobijn Sandberg; Guus Schreiber; Bob J. Wielinga; Willem Robert van Hage

Using an ontology to automatically generate questions for ordinary people requires a structure and concepts compliant with human thought. Here we present methods to develop a pragmatic, an expert-based and a basic-level ontology, and a framework to evaluate these ontologies. Comparing these ontologies shows that expert-based ontologies are the easiest to construct but lack the required cognitive semantic characteristics. Basic-level ontologies have structure and concepts that are better in terms of cognitive semantics but are the most expensive to construct.


International Journal of Semantic Computing | 2012

Towards Conceptual Representation and Invocation of Scientific Computations

Hajo Rijgersberg; Jan L. Top; Bob J. Wielinga

Computers are central in processing scientific data. This data is typically expressed as numbers and strings. Appropriate annotation of bare data is required to allow people or machines to interpret it and to relate the data to real-world phenomena. In scientific practice however, annotations are often incomplete and ambiguous — let alone machine interpretable. This holds for reports and papers, but also for spreadsheets and databases. Moreover, in practice it is often unclear how the data has been created. This hampers interpretation, reproduction and reuse of results and thus leads to suboptimal science. In this paper we focus on annotation of scientific computations. For this purpose we propose the ontology OQR (Ontology of Quantitative Research). It includes a way to represent generic scientific methods and their implementation in software packages, invocation of these methods and handling of tabular datasets. This ontology promotes annotation by humans, but also allows automatic, semantic processing of numerical data. It allows scientists to understand the selected settings of computational methods and to automatically reproduce data generated by others. A prototype application demonstrates this can be done, illustrated by a case in food research. We evaluate this case with a number of researchers in the considered domain.
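The core idea, attaching quantity and unit annotations to bare numbers so the data can be interpreted and converted, can be sketched as follows (the class and field names are illustrative only; OQR itself is an ontology, not this Python structure):

```python
from dataclasses import dataclass

@dataclass
class AnnotatedColumn:
    """A column of bare numbers annotated with the quantity it measures
    and the unit it is expressed in, so the values can be interpreted."""
    quantity: str   # e.g. "mass"
    unit: str       # e.g. "gram"
    values: list

def convert(col, factor, target_unit):
    """Re-express the same quantity in another unit via a scale factor."""
    return AnnotatedColumn(col.quantity, target_unit,
                           [v * factor for v in col.values])
```

Without such an annotation, a column like [1000.0] is just a number; with it, a machine can tell grams from kilograms and reproduce downstream computations correctly.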


Database and Expert Systems Applications | 2015

A Situation Awareness Question Generator to Determine a Crisis Situation

Marten Teitsma; Jacobijn Sandberg; Bob J. Wielinga; Guus Schreiber

In this paper we present a system that generates questions from an ontology so that ordinary people can determine a crisis situation using their mobile phones: the Situation Awareness Question Generator. To generate questions from an ontology, we propose a formalization based on Situation Theory and several strategies to determine a situation as quickly as possible. A suitable ontology should comply with human categorization to enhance trustworthiness. We created three ontologies: a pragmatic-based ontology, an expert-based ontology and a basic-level ontology. Several experiments, published elsewhere, showed that the basic-level ontology is the most suitable.
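Generating a question from an ontology can be as simple as turning a concept's subclasses into answer options. A toy sketch of that idea (the ontology contents and function name are hypothetical, not the SAQG implementation):

```python
def generate_question(ontology, concept):
    """Turn the subclasses of a concept into a multiple-choice question.
    `ontology` is a toy {concept: [subclass, ...]} mapping."""
    options = ontology.get(concept, [])
    return {"text": f"Which best describes the {concept}?",
            "options": options}
```

A strategy for determining a situation quickly would then pick, at each step, the concept whose answer narrows down the remaining possibilities the most.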


Archive | 1989

A knowledge acquisition perspective on knowledge-level models

Bob J. Wielinga; H. Ackermans; Guus Schreiber; John Balder

Collaboration


Dive into Bob J. Wielinga's collaboration.

Top co-authors:

- Jan L. Top (VU University Amsterdam)
- Anna Tordai (VU University Amsterdam)
- Hajo Rijgersberg (Wageningen University and Research Centre)