Wouter Beek
VU University Amsterdam
Publications
Featured research published by Wouter Beek.
International Semantic Web Conference | 2014
Wouter Beek; Laurens Rietveld; Hamid R. Bazoobandi; Jan Wielemaker; Stefan Schlobach
It is widely accepted that proper data publishing is difficult. The majority of Linked Open Data (LOD) does not meet even a core set of data publishing guidelines. Moreover, datasets that are clean at creation can get stains over time. As a result, the LOD cloud now contains a high level of dirty data that is difficult for humans to clean and for machines to process. Existing solutions for cleaning data (standards, guidelines, tools) are targeted towards human data creators, who can (and do) choose not to use them. This paper presents the LOD Laundromat, which removes stains from data without any human intervention. This fully automated approach makes very large amounts of LOD more easily available for further processing right now. LOD Laundromat is not a new dataset, but rather a uniform point of entry to a collection of cleaned siblings of existing datasets. It provides researchers and application developers with a wealth of data that is guaranteed to conform to a specified set of best practices, thereby greatly improving the chance of data actually being (re)used.
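The kind of fully automated cleaning the abstract describes can be illustrated with a minimal, self-contained sketch (this is not the actual LOD Laundromat pipeline; the grammar below is a deliberately tiny, hypothetical subset of N-Triples): drop syntactically malformed statements, remove duplicates, and emit a canonically sorted serialization.

```python
import re

# Hypothetical, drastically simplified subset of the N-Triples grammar:
# a term is either an IRI or a (possibly typed or language-tagged) literal.
TERM = r'(?:<[^<>"{}|^`\\\s]*>|"[^"\\]*"(?:\^\^<[^<>\s]*>|@[A-Za-z-]+)?)'
TRIPLE = re.compile(rf'^\s*(<[^<>\s]*>)\s+(<[^<>\s]*>)\s+({TERM})\s*\.\s*$')

def launder(lines):
    """Keep well-formed triples, drop duplicates, sort canonically."""
    clean = set()
    for line in lines:
        m = TRIPLE.match(line)
        if m:  # malformed lines are silently dropped
            clean.add(" ".join(m.groups()) + " .")
    return sorted(clean)

dirty = [
    '<http://ex.org/a> <http://ex.org/p> "v" .',
    '<http://ex.org/a> <http://ex.org/p> "v" .',                # duplicate
    'this is not a triple at all',                              # syntax error
    '<http://ex.org/a> <http://ex.org/p> <http://ex.org/b> .',
]
print(launder(dirty))  # two clean, deduplicated, sorted triples
```

The point of the sketch is the contract, not the parser: whatever comes out conforms to a fixed set of syntactic guarantees, so downstream machine processing never has to handle the dirty cases.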
European Conference on Technology Enhanced Learning | 2010
Bert Bredeweg; J. Liem; Wouter Beek; Paulo Salles; F. Linnebank
Scaffolding is a well-known approach to bridge the gap between novice and expert capabilities in a discovery-oriented learning environment. This paper discusses a set of knowledge representations referred to as Learning Spaces (LSs) that can be used to support learners in acquiring conceptual knowledge of system behaviour. The LSs are logically self-contained, meaning that models created at a specific LS can be simulated. Working with the LSs provides scaffolding for learners in two ways. First, each LS provides a restricted set of representational primitives to express knowledge, which focuses the learner's knowledge construction process. Second, the logical consequences of an expression, derived through simulation, provide learners with a reflective instrument for evaluating the status of their understanding, to which they can react accordingly. The work presented here is part of the DynaLearn project, which builds an Interactive Learning Environment to study a constructive approach to having learners develop a qualitative understanding of how systems behave. The work presented here thus focuses on tools to support educational research. Consequently, user-oriented evaluation of these tools is not a part of this paper.
Semantic Web | 2016
Jan Wielemaker; Wouter Beek; Michiel Hildebrand; Jacco van Ossenbruggen
ClioPatria is a comprehensive semantic web development framework based on SWI-Prolog. SWI-Prolog provides an efficient C-based main-memory RDF store that is designed to cooperate naturally and efficiently with Prolog, realizing a flexible RDF-based environment for rule-based programming. ClioPatria extends this core with a SPARQL and LOD server, an extensible web frontend to manage the server, browse the data, and query the data using SPARQL and Prolog, and a Git-based plugin manager. The ability to query RDF using Prolog provides query composition and smooth integration with application logic. ClioPatria is primarily positioned as a prototyping platform for exploring novel ways of reasoning with RDF data. It has been used in several research projects in order to perform tasks such as data integration and enrichment and semantic search.
European Semantic Web Conference | 2015
Laurens Rietveld; Ruben Verborgh; Wouter Beek; Miel Vander Sande; Stefan Schlobach
Ad-hoc querying is crucial to access information from Linked Data, yet publishing queryable RDF datasets on the Web is not a trivial exercise. The most compelling argument to support this claim is that the Web contains hundreds of thousands of data documents, while only 260 queryable SPARQL endpoints are provided. Even worse, the SPARQL endpoints we do have are often unstable, may not comply with the standards, and may differ in supported features. In other words, hosting data online is easy, but publishing Linked Data via a queryable API such as SPARQL appears to be too difficult. As a consequence, in practice, there is no single uniform way to query the LOD Cloud today. In this paper, we therefore combine a large-scale Linked Data publication project (LOD Laundromat) with a low-cost server-side interface (Triple Pattern Fragments), in order to bridge the gap between the Web of downloadable data documents and the Web of live queryable data. The result is a repeatable, low-cost, open-source data publication process. To demonstrate its applicability, we made over 650,000 data documents available as data APIs, consisting of 30 billion triples.
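The low-cost interface idea can be sketched in a few lines (an in-memory toy, not the actual Triple Pattern Fragments client or server; all names and data are illustrative): the server supports exactly one operation, matching a single triple pattern, and the client composes those answers into joins.

```python
# A triple store reduced to the one operation a Triple Pattern Fragments
# server must support: matching one pattern, where None acts as a wildcard.
TRIPLES = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:bob",   "foaf:knows", "ex:carol"),
    ("ex:alice", "foaf:name",  '"Alice"'),
]

def fragment(s=None, p=None, o=None):
    """Server side: answer a single triple pattern (the only query form
    a low-cost TPF interface has to implement)."""
    return [t for t in TRIPLES
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

def friends_of_friends(person):
    """Client side: compose two pattern requests into a join, the way a
    TPF client evaluates a two-pattern SPARQL query."""
    result = []
    for _, _, friend in fragment(s=person, p="foaf:knows"):
        for _, _, fof in fragment(s=friend, p="foaf:knows"):
            result.append(fof)
    return result

print(friends_of_friends("ex:alice"))  # → ['ex:carol']
```

Because the server-side operation is this simple, it is cheap enough to host for hundreds of thousands of datasets; the query-planning effort moves to the client.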
International Semantic Web Conference | 2015
Laurens Rietveld; Wouter Beek; Stefan Schlobach
Contemporary Semantic Web research is in the business of optimizing algorithms for only a handful of datasets, such as DBpedia, BSBM, and DBLP. This means that current practice does not generally take the true variety of Linked Data into account. With hundreds of thousands of datasets out in the world today, the results of Semantic Web evaluations are less generalizable than they should be and, this paper argues, can be. This paper describes LOD Lab: a fundamentally different evaluation paradigm that makes algorithmic evaluation against hundreds of thousands of datasets the new norm. LOD Lab is implemented in terms of the existing LOD Laundromat architecture combined with the new open-source programming interface Frank, which allows Web-scale evaluations to be run from the command line. We illustrate the viability of the LOD Lab approach by rerunning experiments from three recent Semantic Web research publications and expect it will contribute to improving the quality and reproducibility of experimental work in the Semantic Web community. We show that simply rerunning existing experiments within this new evaluation paradigm brings up interesting research questions as to how algorithmic performance relates to (structural) properties of the data.
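The evaluation paradigm the abstract proposes, running one metric over an arbitrary number of datasets and keeping per-dataset scores, can be sketched as follows (illustrative only; `evaluate`, `out_degree`, and the toy corpus are hypothetical names, not part of Frank or LOD Lab):

```python
def evaluate(datasets, metric):
    """Run one metric over any number of datasets, keeping per-dataset
    scores so results can be related back to properties of the data."""
    return {name: metric(triples) for name, triples in datasets}

def out_degree(triples):
    """Example metric: average number of triples per distinct subject."""
    subjects = {s for s, _, _ in triples}
    return len(triples) / len(subjects) if subjects else 0.0

# Two toy datasets standing in for the hundreds of thousands that the
# LOD Laundromat / Frank pipeline would stream through the same loop.
corpus = [
    ("doc-1", [("a", "p", "b"), ("a", "p", "c")]),
    ("doc-2", [("x", "q", "y")]),
]
print(evaluate(corpus, out_degree))  # → {'doc-1': 2.0, 'doc-2': 1.0}
```

Keeping the score per dataset, rather than one aggregate number, is what enables the paper's final point: correlating algorithmic performance with structural properties of each dataset.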
Intelligent Tutoring Systems | 2010
Bert Bredeweg; J. Liem; F. Linnebank; René Bühling; Michael Wißner; Jorge Gracia del Río; Paulo Salles; Wouter Beek; Asunción Gómez Pérez
DynaLearn is an Interactive Learning Environment that facilitates a constructive approach to developing a conceptual understanding of how systems work. The software can be put into different interactive modes, facilitating alternative learning experiences, and as such provides a toolkit for educational research.
IEEE Internet Computing | 2016
Wouter Beek; Laurens Rietveld; Stefan Schlobach; Frank van Harmelen
LOD Laundromat proposes a centralized solution for today's Semantic Web problems. This approach adheres more closely to the original vision of a Web of Data, providing uniform access to a large and ever-increasing subcollection of the LOD Cloud.
International Semantic Web Conference | 2016
Wouter Beek; Stefan Schlobach; Frank van Harmelen
Identity relations are at the foundation of the Semantic Web and the Linked Data Cloud. In many instances the classical interpretation of identity is too strong for practical purposes. This is particularly the case when two entities are considered the same in some but not all contexts. Unfortunately, modeling the specific contexts in which an identity relation holds is cumbersome and, due to arbitrary reuse and the Open World Assumption, it is impossible to anticipate all contexts in which an entity will be used. We propose an alternative semantics for owl:sameAs that partitions the original relation into a hierarchy of subrelations. The subrelation to which an identity statement belongs depends on the dataset in which the statement occurs. Adding future assertions may change the subrelation to which an identity statement belongs, resulting in a context-dependent and non-monotonic semantics. We show that this more fine-grained semantics is better able to characterize the actual use of owl:sameAs as observed in Linked Open Datasets.
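One way to picture the proposed semantics (an illustrative sketch, not the authors' formalization; the data and grouping key are invented for the example) is to index each owl:sameAs statement by the datasets that assert it, so that statements asserted in different contexts end up in different subrelations:

```python
from collections import defaultdict

# Toy identity statements, each tagged with the dataset it occurs in.
SAME_AS = [
    ("d1", "ex:BarackObama",   "dbp:Barack_Obama"),
    ("d1", "ex:BarackObama",   "ex:Obama"),
    ("d2", "dbp:Barack_Obama", "ex:President44"),
]

def subrelations(statements):
    """Group sameAs statements into subrelations keyed by the set of
    datasets asserting them. Adding a future assertion can move a
    statement to a different subrelation, which is what makes the
    semantics context-dependent and non-monotonic."""
    by_pair = defaultdict(set)
    for dataset, a, b in statements:
        by_pair[frozenset((a, b))].add(dataset)
    partition = defaultdict(list)
    for pair, datasets in by_pair.items():
        partition[frozenset(datasets)].append(tuple(sorted(pair)))
    return {tuple(sorted(k)): sorted(v) for k, v in partition.items()}

print(subrelations(SAME_AS))  # one subrelation per asserting context
```

Re-running `subrelations` after appending, say, `("d2", "ex:BarackObama", "dbp:Barack_Obama")` moves that pair from the `('d1',)` subrelation to `('d1', 'd2')`, illustrating why earlier conclusions about a statement's subrelation need not survive new assertions.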
Web Science | 2014
Wouter Beek; Paul T. Groth; Stefan Schlobach; Rinke Hoekstra
General human intelligence is needed in order to process Linked Open Data (LOD). On the Semantic Web (SW), content is intended to be machine-processable as well. But the extent to which a machine is able to navigate, access, and process the SW has not been extensively researched. We present LOD Observer, a web observatory that studies the Web from a machine processor's point of view. We do this by first reformulating the five-star model of LOD publishing in quantifiable terms. Second, we built an infrastructure that allows the model's criteria to be quantified over existing datasets. Third, we analyze a significant snapshot of the LOD cloud using this infrastructure and discuss the main problems a machine processor encounters.
Artificial Intelligence in Education | 2011
Wouter Beek; Bert Bredeweg; Sander Latour
We implemented three kinds of context-dependent help for a qualitative modelling and simulation workbench called DynaLearn. We show that it is possible to generate and select assistance knowledge based on the current model, simulation results and workbench state.