Publication


Featured research published by Peter Bloem.


Extended Semantic Web Conference | 2018

Modeling Relational Data with Graph Convolutional Networks

Michael Sejr Schlichtkrull; Thomas N. Kipf; Peter Bloem; Rianne van den Berg; Ivan Titov; Max Welling

Knowledge graphs enable a wide variety of applications, including question answering and information retrieval. Despite the great effort invested in their creation and maintenance, even the largest (e.g., Yago, DBPedia or Wikidata) remain incomplete. We introduce Relational Graph Convolutional Networks (R-GCNs) and apply them to two standard knowledge base completion tasks: link prediction (recovery of missing facts, i.e. subject-predicate-object triples) and entity classification (recovery of missing entity attributes). R-GCNs are related to a recent class of neural networks operating on graphs, and are developed specifically to handle the highly multi-relational data characteristic of realistic knowledge bases. We demonstrate the effectiveness of R-GCNs as a stand-alone model for entity classification. We further show that factorization models for link prediction such as DistMult can be significantly improved through the use of an R-GCN encoder model to accumulate evidence over multiple inference steps in the graph, demonstrating a large improvement of 29.8% on FB15k-237 over a decoder-only baseline.
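For intuition about the encoder, here is a minimal single-layer sketch in PyTorch: each relation gets its own weight matrix, incoming messages are summed and normalised by in-degree, and a self-loop transform is added. This is an illustrative reading of the abstract, not the authors' released implementation; the class name, the simplified normalisation, and the toy data are assumptions made for the example.

import torch
import torch.nn as nn

class RGCNLayer(nn.Module):
    """One relational graph convolution layer (illustrative sketch)."""
    def __init__(self, in_dim, out_dim, num_relations):
        super().__init__()
        # One weight matrix per relation, plus a separate self-loop transform.
        self.rel_weights = nn.Parameter(0.01 * torch.randn(num_relations, in_dim, out_dim))
        self.self_loop = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h, triples):
        # h: (num_nodes, in_dim) entity features; triples: (subject, relation, object) index tuples.
        num_nodes, out_dim = h.size(0), self.rel_weights.size(-1)
        # Relation-specific messages flow from each subject to its object.
        messages = torch.stack([h[s] @ self.rel_weights[r] for s, r, o in triples])
        targets = torch.tensor([o for _, _, o in triples])
        agg = torch.zeros(num_nodes, out_dim).index_add(0, targets, messages)
        # Normalise by in-degree (the paper uses per-relation constants instead).
        degree = torch.zeros(num_nodes).index_add(0, targets, torch.ones(len(triples)))
        agg = agg / degree.clamp(min=1).unsqueeze(1)
        return torch.relu(agg + self.self_loop(h))

# Toy usage: 4 entities, 2 relation types, a handful of triples.
h = torch.randn(4, 8)
layer = RGCNLayer(8, 16, num_relations=2)
out = layer(h, [(0, 0, 1), (1, 1, 2), (3, 0, 2)])
print(out.shape)  # torch.Size([4, 16])

In the link-prediction setup described above, entity embeddings produced by a stack of such layers would then be scored by a DistMult-style decoder.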


Discovery Science | 2017

The knowledge graph as the default data model for learning on heterogeneous knowledge

Xander Wilcke; Peter Bloem; Victor de Boer

In modern machine learning, raw data is the preferred input for our models. Where a decade ago data scientists were still engineering features, manually picking out the details they thought salient, they now prefer the data in their raw form. As long as we can assume that all relevant and irrelevant information is present in the input data, we can design deep models that build up intermediate representations to sift out relevant features. However, these models are often domain specific and tailored to the task at hand, and therefore unsuited for learning on heterogeneous knowledge: information of different types and from different domains. If we can develop methods that operate on this form of knowledge, we can dispense with a great deal of ad-hoc feature engineering and train deep models end-to-end in many more domains. To accomplish this, we first need a data model capable of expressing heterogeneous knowledge naturally in various domains, in as usable a form as possible, and satisfying as many use cases as possible. In this position paper, we argue that the knowledge graph is a suitable candidate for this data model. This paper describes current research and discusses some of the promises and challenges of this approach.
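As a concrete illustration of the position (a minimal sketch using rdflib; the entities, properties, and values are invented, not any real vocabulary), the snippet below mixes typed entities, relations, numeric attributes, and free text from different domains in a single graph that a learner can consume directly as triples.

from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.alice, RDF.type, EX.Researcher))                    # typed entity
g.add((EX.alice, EX.worksAt, EX.vu_amsterdam))                # relation to another entity
g.add((EX.alice, EX.birthYear, Literal(1980)))                # numeric attribute
g.add((EX.paper42, EX.hasAuthor, EX.alice))                   # cross-domain link
g.add((EX.paper42, EX.title, Literal("Learning on graphs")))  # free text

# A learner sees one uniform representation: (subject, predicate, object) triples,
# with no per-domain feature engineering.
for s, p, o in g:
    print(s, p, o)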


International Provenance and Annotation Workshop | 2014

Generating Scientific Documentation for Computational Experiments Using Provenance

Adianto Wibisono; Peter Bloem; Gerben Klaas Dirk de Vries; Paul T. Groth; Adam Belloum; Marian Bubak

Electronic notebooks are a common mechanism for scientists to document and investigate their work. With the advent of tools such as IPython Notebooks and Knitr, these notebooks allow code and data to be mixed together and published online. However, these approaches assume that all work is done in the same notebook environment. In this work, we look at generating notebook documentation from multi-environment workflows by using provenance represented in the W3C PROV model. Specifically, using PROV generated from the Ducktape workflow system, we are able to generate IPython notebooks that include results tables, provenance visualizations as well as references to the software and datasets used. The notebooks are interactive and editable, so that the user can explore and analyze the results of the experiment without re-running the workflow. We identify specific extensions to PROV necessary for facilitating documentation generation. To evaluate, we recreate the documentation website for a paper which won the Open Science Award at the ECML/PKDD 2013 machine learning conference. We show that the documentation produced automatically by our system provides more detail and greater experimental insight than the original hand-crafted documentation. Our approach bridges the gap between user friendly notebook documentation and provenance generated by distributed heterogeneous components.
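To make the generation step concrete, here is a rough sketch of the idea using nbformat: walk over provenance records and emit markdown and code cells. The record structure and field names are invented for the example; the actual system consumes W3C PROV documents produced by the Ducktape workflow system rather than this simplified dictionary format.

import nbformat
from nbformat.v4 import new_notebook, new_markdown_cell, new_code_cell

# Hypothetical provenance records standing in for a real PROV document.
provenance = [
    {"activity": "train_model", "used": "train.csv", "generated": "model.bin"},
    {"activity": "evaluate", "used": "model.bin", "generated": "results.csv"},
]

cells = [new_markdown_cell("# Experiment documentation (generated from provenance)")]
for step in provenance:
    cells.append(new_markdown_cell(
        f"## {step['activity']}\nUsed `{step['used']}`, generated `{step['generated']}`."))
    # An editable code cell pointing the reader at the generated artifact.
    cells.append(new_code_cell(f"artifact = {step['generated']!r}  # load and inspect here"))

with open("experiment_documentation.ipynb", "w") as fp:
    nbformat.write(new_notebook(cells=cells), fp)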


International Semantic Web Conference | 2017

The MIDI Linked Data Cloud

Albert Meroño-Peñuela; Rinke Hoekstra; Aldo Gangemi; Peter Bloem; Reinier de Valk; Bas Stringer; Berit Janssen; Victor de Boer; Alo Allik; Stefan Schlobach; Kevin R. Page

The study of music is highly interdisciplinary, and thus requires the combination of datasets from multiple musical domains, such as catalog metadata (authors, song titles, dates), industrial records (labels, producers, sales), and music notation (scores). While today an abundance of music metadata exists on the Linked Open Data cloud, linked datasets containing interoperable symbolic descriptions of music itself, i.e. music notation with note and instrument level information, are scarce. In this paper, we describe the MIDI Linked Data Cloud dataset, which represents multiple collections of digital music in the MIDI standard format as Linked Data using the novel midi2rdf algorithm. At the time of writing, our proposed dataset comprises 10,215,557,355 triples of 308,443 interconnected MIDI files, and provides Web-compatible descriptions of their MIDI events. We provide a comprehensive description of the dataset, and reflect on its applications for research in the Semantic Web and Music Information Retrieval communities.
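The following sketch illustrates the flavour of the MIDI-to-RDF conversion: note-level events become resources with tick, note-number, and velocity properties. The namespace, property names, and hard-coded events are placeholders invented for the example, not the vocabulary actually used by midi2rdf or the published dataset.

from rdflib import Graph, Literal, Namespace, RDF

MIDI = Namespace("http://example.org/midi/")
g = Graph()

# Hypothetical note_on events: (track, tick, note, velocity).
events = [(0, 0, 60, 90), (0, 480, 64, 85)]
for i, (track, tick, note, velocity) in enumerate(events):
    ev = MIDI[f"piece1/track{track}/event{i}"]
    g.add((ev, RDF.type, MIDI.NoteOnEvent))
    g.add((ev, MIDI.tick, Literal(tick)))
    g.add((ev, MIDI.note, Literal(note)))
    g.add((ev, MIDI.velocity, Literal(velocity)))

print(g.serialize(format="turtle"))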


International Semantic Web Conference | 2016

Are Names Meaningful? Quantifying Social Meaning on the Semantic Web

Steven de Rooij; Wouter Beek; Peter Bloem; Frank van Harmelen; Stefan Schlobach

According to its model-theoretic semantics, Semantic Web IRIs are individual constants or predicate letters whose names are chosen arbitrarily and carry no formal meaning. At the same time it is a well-known aspect of Semantic Web pragmatics that IRIs are often constructed mnemonically, in order to be meaningful to a human interpreter. The latter has traditionally been termed ‘social meaning’, a concept that has been discussed but not yet quantitatively studied by the Semantic Web community. In this paper we use measures of mutual information content and methods from statistical model learning to quantify the meaning that is (at least) encoded in Semantic Web names. We implement the approach and evaluate it over hundreds of thousands of datasets in order to illustrate its efficacy. Our experiments confirm that many Semantic Web names are indeed meaningful and, more interestingly, we provide a quantitative lower bound on how much meaning is encoded in names on a per-dataset basis. To our knowledge, this is the first paper about the interaction between social and formal meaning, as well as the first paper that uses statistical model learning as a method to quantify meaning in the Semantic Web context. These insights are useful for the design of a new generation of Semantic Web tools that take such social meaning into account.
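As a toy illustration of the underlying idea (not the paper's statistical-model-learning method), the snippet below estimates the mutual information between a name-derived feature and a formal feature over a handful of invented entities; a positive value indicates that the names carry information about the formal statements.

from collections import Counter
from math import log2

# Invented (IRI local name, asserted rdf:type) pairs.
entities = [
    ("person_alice", "foaf:Person"), ("person_bob", "foaf:Person"),
    ("city_amsterdam", "dbo:City"), ("person_carol", "foaf:Person"),
    ("city_utrecht", "dbo:City"), ("doc_readme", "foaf:Document"),
]

xs = ["person" in name for name, _ in entities]  # name-derived feature
ys = [typ for _, typ in entities]                # formal feature

n = len(entities)
p_x, p_y, p_xy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
mi = sum((c / n) * log2((c / n) / ((p_x[x] / n) * (p_y[y] / n)))
         for (x, y), c in p_xy.items())
print(f"I(name; type) = {mi:.3f} bits")  # > 0: names are informative about types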


Algorithmic Learning Theory | 2014

A Safe Approximation for Kolmogorov Complexity

Peter Bloem; Francisco Mota; Steven de Rooij; Luís Antunes; Pieter W. Adriaans

Kolmogorov complexity (K) is an incomputable function. It can be approximated from above, but not to arbitrary precision, and it cannot be approximated from below. By restricting the source of the data to a specific model class, we can construct a computable function κ̄ to approximate K in a probabilistic sense: the probability that the error is greater than k decays exponentially with k. We apply the same method to the normalized information distance (NID) and discuss conditions that affect the safety of the approximation.
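Schematically, the guarantee described in the abstract has the following shape (the exact constants and the conditions on the model class are those stated in the paper and are not reproduced here):

\Pr\bigl[\, |\overline{\kappa}(x) - K(x)| > k \,\bigr] \;\le\; c \cdot 2^{-k}

so κ̄ remains computable while approximation errors larger than k bits become exponentially unlikely in k.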


Algorithmic Learning Theory | 2015

Two Problems for Sophistication

Peter Bloem; Steven de Rooij; Pieter W. Adriaans

Kolmogorov complexity measures the amount of information in data, but does not distinguish structure from noise. Kolmogorov's definition of the structure function was the first attempt to measure only the structural information in data, by measuring the complexity of the smallest model that allows for optimal compression of the data. Since then, many variations of this idea have been proposed, for which we use sophistication as an umbrella term. We describe two fundamental problems with existing proposals, showing many of them to be unsound. Consequently, we put forward the view that the problem is fundamental: it may be impossible to objectively quantify sophistication.
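For reference, the structure function mentioned above is usually given in its finite-set form (a standard textbook formulation, not quoted from this paper):

h_x(\alpha) \;=\; \min \bigl\{ \log_2 |S| \;:\; x \in S,\; K(S) \le \alpha \bigr\}

Sophistication-style proposals then report the model complexity K(S) of the smallest set S achieving near-optimal two-part compression, i.e. K(S) + \log_2 |S| \le K(x) + c; the two problems described in the paper concern definitions of this kind.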


Archive | 2016

Single sample statistics: Exercises in learning from just one example

Peter Bloem


KNOW@LOD | 2014

Simplifying RDF Data for Graph-Based Machine Learning

Peter Bloem; Adianto Wibisono; Gerben Klaas Dirk de Vries


LD4KD'14 Proceedings of the 1st International Conference on Linked Data for Knowledge Discovery - Volume 1232 | 2014

Machine learning on linked data, a position paper

Peter Bloem; Gerben Klaas Dirk de Vries

Collaboration


Dive into Peter Bloem's collaborations.

Top Co-Authors

Adam Belloum (University of Amsterdam)

Bas Stringer (VU University Amsterdam)