Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Jan Wielemaker is active.

Publication


Featured research published by Jan Wielemaker.


IEEE Intelligent Systems | 2001

Ontology-based photo annotation

A.T. Schreiber; B. Dubbeldam; Jan Wielemaker; Bob J. Wielinga

While technology enables the storage and distribution of photographic images on an unprecedented scale, finding what you want can be like finding the proverbial needle in a haystack. The authors describe their approach and the tool they developed to make annotating photos and searching for specific images more intelligent.


Theory and Practice of Logic Programming - Prolog Systems archive | 2012

SWI-Prolog

Jan Wielemaker; Tom Schrijvers; Markus Triska; Torbjörn Lager

SWI-Prolog is neither a commercial Prolog system nor a purely academic enterprise, but increasingly a community project. The core system has been shaped to its current form while being used as a tool for building research prototypes, primarily for knowledge-intensive and interactive systems. Community contributions have added several interfaces and the constraint (CLP) libraries. Commercial involvement has created the initial garbage collector, added several interfaces and two development tools: PlDoc (a literate programming documentation system) and PlUnit (a unit testing environment). In this article, we present SWI-Prolog as an integrating tool, supporting a wide range of ideas developed in the Prolog community and acting as glue between foreign resources. This article itself is the glue between technical articles on SWI-Prolog, providing context and experience in applying them over a longer period.


Journal of Web Semantics | 2008

Semantic annotation and search of cultural-heritage collections: The MultimediaN E-Culture demonstrator

Guus Schreiber; Alia K. Amin; Lora Aroyo; Mark van Assem; Victor de Boer; Lynda Hardman; Michiel Hildebrand; Borys Omelayenko; Jacco van Ossenbruggen; Anna Tordai; Jan Wielemaker; Bob Wielinga

In this article we describe a Semantic Web application for semantic annotation and search in large virtual collections of cultural-heritage objects, indexed with multiple vocabularies. During the annotation phase we harvest, enrich and align collection metadata and vocabularies. The semantic-search facilities support keyword-based queries of the graph (currently 20M triples), resulting in semantically grouped result clusters, all representing potential semantic matches of the original query. We show two sample search scenarios. The annotation and search software is open source and is already being used by third parties. All software is based on established Web standards, in particular HTML/XML, CSS, RDF/OWL, SPARQL and JavaScript.
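The demonstrator's search facilities are implemented in SWI-Prolog over an RDF graph; the following is a purely illustrative Python sketch (with made-up data and predicate names, not the project's schema) of the clustering idea described above: keyword hits are grouped by the predicate that links the matching resource to a result.

```python
# Hypothetical sketch: keyword search over a tiny triple set, grouping
# results by the linking predicate -- one simple form of the
# "semantically grouped result clusters" idea. All identifiers invented.
from collections import defaultdict

triples = [
    ("ulan:rembrandt", "rdfs:label", "Rembrandt van Rijn"),
    ("work:nightwatch", "dc:creator", "ulan:rembrandt"),
    ("work:nightwatch", "dc:subject", "aat:militia"),
    ("aat:militia", "rdfs:label", "militia"),
]

def keyword_search(keyword):
    # Find values containing the keyword, then walk one hop back to the
    # resources that reference the matching node; cluster by predicate.
    clusters = defaultdict(list)
    for s, p, o in triples:
        if keyword.lower() in o.lower():
            for s2, p2, o2 in triples:
                if o2 == s:
                    clusters[p2].append(s2)
    return dict(clusters)

print(keyword_search("militia"))
```

A real implementation would bound the path length and rank clusters; here one hop suffices to show the grouping.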


International Semantic Web Conference | 2004

A method for converting thesauri to RDF/OWL

Mark van Assem; Maarten Menken; Guus Schreiber; Jan Wielemaker; Bob Wielinga

This paper describes a method for converting existing thesauri and related resources from their native format to RDF(S) and OWL. The method identifies four steps in the conversion process. In each step, decisions have to be taken with respect to the syntax or semantics of the resulting representation. Each step is supported through a number of guidelines. The method is illustrated through conversions of two large thesauri: MeSH and WordNet.
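The paper's conversion steps are applied to the real MeSH and WordNet sources; as a toy sketch of the kind of structural mapping such a step performs, the snippet below (Python; the record layout, the `ex:` namespace and the SKOS-style property names are illustrative assumptions, not the paper's actual schema) turns a thesaurus-like record into triples.

```python
# Hedged sketch: map a made-up thesaurus record to RDF-style triples.
# Property names are SKOS-like for readability; the paper defines its
# own RDF(S)/OWL representation choices per conversion step.
def record_to_triples(record, base="ex:"):
    concept = base + record["id"]
    triples = [(concept, "rdf:type", "skos:Concept"),
               (concept, "skos:prefLabel", record["term"])]
    for alt in record.get("synonyms", []):
        triples.append((concept, "skos:altLabel", alt))
    for broader in record.get("broader", []):
        # Hierarchy links point to other concepts, not literals.
        triples.append((concept, "skos:broader", base + broader))
    return triples

rec = {"id": "D001921", "term": "Brain",
       "synonyms": ["Encephalon"], "broader": ["D002490"]}
for t in record_to_triples(rec):
    print(t)
```

The interesting decisions in the paper are exactly the ones this sketch hard-codes: which native fields become labels, which become resources, and which semantics the target properties carry.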


International Semantic Web Conference | 2006

MultimediaN e-culture demonstrator

Guus Schreiber; Alia K. Amin; Mark van Assem; Victor de Boer; Lynda Hardman; Michiel Hildebrand; Laura Hollink; Zhisheng Huang; Janneke van Kersen; Marco de Niet; Borys Omelayenko; Jacco van Ossenbruggen; Ronny Siebes; Jos Taekema; Jan Wielemaker; Bob Wielinga

The main objective of the MultimediaN E-Culture project is to demonstrate how novel semantic-web and presentation technologies can be deployed to provide better indexing and search support within large virtual collections of cultural-heritage resources. The architecture is fully based on open web standards, in particular XML, SVG, RDF/OWL and SPARQL. One basic hypothesis underlying this work is that the use of explicit background knowledge in the form of ontologies/vocabularies/thesauri is in particular useful in information retrieval in knowledge-rich domains.


International Semantic Web Conference | 2003

Prolog-based infrastructure for RDF: scalability and performance

Jan Wielemaker; Guus Schreiber; Bob J. Wielinga

The semantic web is a promising application area for the Prolog programming language because of its non-determinism and pattern matching. In this paper we outline an infrastructure for loading and saving RDF/XML, storing triples, elementary reasoning with triples, and visualization. A predecessor of the infrastructure described here has been used in various applications for ontology-based annotation of multimedia objects using semantic web languages. Our library aims at fast parsing, fast access and scalability for fairly large but not unbounded applications of up to 40 million triples. The RDF parser is distributed with SWI-Prolog under the LGPL Free Software licence. The other components will be added to the distribution as they become stable and documented.
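The store itself is implemented natively with Prolog bindings; a minimal Python sketch of the underlying idea (per-argument indexes, so an `rdf(S, P, O)`-style query stays fast whichever combination of arguments is bound; all data here is invented) looks like this:

```python
# Illustrative sketch, not the paper's implementation: an in-memory
# triple store with one index per triple position. A query with any
# mix of bound (given) and unbound (None) arguments intersects the
# index sets for the bound positions.
from collections import defaultdict

class TripleStore:
    def __init__(self):
        self.triples = []
        self.index = {0: defaultdict(set), 1: defaultdict(set), 2: defaultdict(set)}

    def add(self, s, p, o):
        i = len(self.triples)
        self.triples.append((s, p, o))
        for pos, val in enumerate((s, p, o)):
            self.index[pos][val].add(i)

    def query(self, s=None, p=None, o=None):
        # None behaves like a Prolog variable.
        sets = [self.index[pos][val]
                for pos, val in enumerate((s, p, o)) if val is not None]
        if not sets:
            return list(self.triples)
        hits = set.intersection(*sets)
        return [self.triples[i] for i in sorted(hits)]

store = TripleStore()
store.add("ex:amsterdam", "rdf:type", "ex:City")
store.add("ex:amsterdam", "ex:country", "ex:netherlands")
print(store.query(p="rdf:type"))
```

The real library additionally interns atoms, handles literals specially, and chooses the most selective index per call; the intersection trick above is only the core access pattern.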


International Semantic Web Conference | 2014

LOD Laundromat: A Uniform Way of Publishing Other People's Dirty Data

Wouter Beek; Laurens Rietveld; Hamid R. Bazoobandi; Jan Wielemaker; Stefan Schlobach

It is widely accepted that proper data publishing is difficult. The majority of Linked Open Data (LOD) does not meet even a core set of data publishing guidelines. Moreover, datasets that are clean at creation, can get stains over time. As a result, the LOD cloud now contains a high level of dirty data that is difficult for humans to clean and for machines to process. Existing solutions for cleaning data (standards, guidelines, tools) are targeted towards human data creators, who can (and do) choose not to use them. This paper presents the LOD Laundromat which removes stains from data without any human intervention. This fully automated approach is able to make very large amounts of LOD more easily available for further processing right now. LOD Laundromat is not a new dataset, but rather a uniform point of entry to a collection of cleaned siblings of existing datasets. It provides researchers and application developers a wealth of data that is guaranteed to conform to a specified set of best practices, thereby greatly improving the chance of data actually being (re)used.
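The real Laundromat crawls, parses and re-serializes entire LOD datasets with proper parsers; the snippet below is only a tiny hedged sketch (Python, with a deliberately naive line-based format rather than real N-Triples handling) of the cleaning idea: normalize whitespace, drop malformed statements, and remove duplicates without any human intervention.

```python
# Toy "laundry" pass over dirty N-Triples-like lines. Purely
# illustrative: the actual pipeline uses a real parser and operates
# at web scale.
import re

LINE = re.compile(r'^(\S+)\s+(\S+)\s+(.+?)\s*\.$')

def launder(lines):
    seen, clean = set(), []
    for line in lines:
        m = LINE.match(line.strip())
        if not m:
            continue                  # malformed: silently dropped
        triple = m.groups()
        if triple in seen:
            continue                  # duplicate after normalization
        seen.add(triple)
        clean.append("%s %s %s ." % triple)
    return clean

dirty = [
    "<a>  <b>   <c> .",
    "<a> <b> <c> .",      # duplicate once whitespace is canonicalized
    "this is not a triple",
]
print(launder(dirty))
```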


Knowledge Acquisition | 1992

Shelley: computer-aided knowledge engineering

Anjo Anjewierden; Jan Wielemaker; Catherine Toussaint

This paper provides an overview of an integrated workbench for knowledge engineering (Shelley). Shelley interactively supports the analysis and design phases of the KADS KBS development methodology. Shelley differs from many other tools supporting knowledge acquisition in two respects: (1) it is based on a methodology for knowledge acquisition; and (2) it is designed to achieve synergistic effects by having multiple tools simultaneously provide the user with different views on the knowledge being acquired. Shelley is in actual use; an evaluation of how users view Shelley is included.


Theory and Practice of Logic Programming | 2008

SWI-Prolog and the web

Jan Wielemaker; Zhisheng Huang; Lourens Van Der Meij

Prolog is an excellent tool for representing and manipulating data written in formal languages as well as natural language. Its safe semantics and automatic memory management make it a prime candidate for programming robust Web services. Although Prolog is commonly seen as a component in a Web application that is either embedded or communicates using a proprietary protocol, we propose an architecture where Prolog communicates with other components in a Web application using the standard HTTP protocol. By avoiding embedding in external Web servers, development and deployment become much easier. To support this architecture, in addition to the transfer protocol, we must also support parsing, representing and generating the key Web document types such as HTML, XML and RDF. This article motivates the design decisions in the libraries and extensions to Prolog for handling Web documents and protocols. The design has been guided by the requirement to handle large documents efficiently. The described libraries support a wide range of Web applications ranging from HTML and XML documents to Semantic Web RDF processing. The benefits of using Prolog for Web-related tasks are illustrated using three case studies.
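The libraries described here let SWI-Prolog itself accept HTTP requests and generate documents programmatically; the Python stand-in below is only an analogy for that architectural choice (the language runtime speaks HTTP directly instead of being embedded in an external server), not the paper's API.

```python
# Stdlib-only analogy, not the SWI-Prolog http libraries: the runtime
# itself handles HTTP and generates the document, rather than being
# embedded behind an external web server.
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_page(path):
    # Programmatic document generation, loosely analogous to
    # Prolog's grammar-based HTML generation.
    return "<html><body><h1>Hello from %s</h1></body></html>" % path

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = render_page(self.path).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve (blocks the calling thread):
# HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```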


International Conference on Logic Programming | 2003

Native preemptive threads in SWI-Prolog

Jan Wielemaker

Concurrency is an attractive property of a language for exploiting multi-CPU hardware or performing multiple tasks concurrently. In recent years we have seen Prolog systems experimenting with multiple threads that share only the database. Such systems are relatively easy to build and remain very close to standard Prolog while providing valuable extra functionality. This article describes the introduction of multiple threads in SWI-Prolog exploiting OS-native threading. We discuss the extra primitives available to the Prolog programmer as well as implementation issues. We explored speedup on multi-processor hardware and speed degradation when executing a single task.
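In this model, threads are native OS threads with private execution state that share the database and receive work through queues; an illustrative Python analogue (shared store plus a job queue; all names invented, and the locking here is the sketch's, not SWI-Prolog's) of that shape:

```python
# Purely illustrative analogue of the threading model described above:
# preemptive native threads sharing only a "database", fed via a queue.
import threading, queue

database = {}                      # the shared store
db_lock = threading.Lock()
jobs = queue.Queue()

def worker():
    while True:
        goal = jobs.get()
        if goal is None:           # sentinel: thread shuts down
            break
        key, value = goal
        with db_lock:              # protected update of the shared store
            database[key] = value
        jobs.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for i in range(100):
    jobs.put(("fact%d" % i, i))
jobs.join()                        # wait until all 100 updates are done
for t in threads:
    jobs.put(None)
for t in threads:
    t.join()
print(len(database))  # → 100
```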

Collaboration


Dive into Jan Wielemaker's collaborations.

Top Co-Authors

Bob Wielinga

University of Amsterdam


Markus Triska

Vienna University of Technology


Lora Aroyo

VU University Amsterdam


Wouter Beek

VU University Amsterdam
