Publication


Featured research published by Ben van Werkhoven.


Grids, Clouds and Virtualization | 2011

Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds

Frank J. Seinstra; Jason Maassen; Rob V. van Nieuwpoort; Niels Drost; Timo van Kessel; Ben van Werkhoven; Jacopo Urbani; Ceriel J. H. Jacobs; Thilo Kielmann; Henri E. Bal

In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the most widely available platforms to scientists are clusters, grids, and cloud systems. Such infrastructures are currently undergoing revolutionary change due to the integration of many-core technologies, providing orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because the urgent desire for scalability and issues including data distribution, software heterogeneity, and ad hoc hardware availability commonly force scientists into simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently): a true computing jungle.


Future Generation Computer Systems | 2014

Optimizing convolution operations on GPUs using adaptive tiling

Ben van Werkhoven; Jason Maassen; Henri E. Bal; Frank J. Seinstra

The research domain of Multimedia Content Analysis (MMCA) considers all aspects of the automated extraction of knowledge from multimedia data. High-performance computing techniques are necessary to satisfy the ever-increasing computational demands of MMCA applications. The introduction of Graphics Processing Units (GPUs) in modern cluster systems presents application developers with a challenge: while GPUs are well known to be capable of providing significant performance improvements, the programming complexity vastly increases. To this end, we have extended a user-transparent parallel programming model for MMCA, named Parallel-Horus, to allow the execution of compute-intensive operations on the GPUs present in the cluster. The most important class of operations in the MMCA domain are convolutions, which are typically responsible for a large fraction of the execution time. Existing optimization approaches for CUDA kernels in general, as well as those specific to convolution operations, are too limited in both performance and flexibility. In this paper, we present a new optimization approach, called adaptive tiling, to implement a highly efficient, yet flexible, library-based convolution operation for modern GPUs. To the best of our knowledge, our implementation is the most optimized and best performing implementation of 2D convolution in the spatial domain available to date.
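The abstract names adaptive tiling without spelling out the mechanism, so the following is a minimal Python/NumPy sketch of the general tiling idea only, not the paper's CUDA library: the output is computed in small tiles so that the input region a tile needs is loaded once and reused for every output pixel in that tile, with the tile dimensions left as tunable parameters. The function name and default tile sizes are illustrative.

    import numpy as np

    def convolve2d_tiled(image, kernel, tile_y=4, tile_x=4):
        # 2D convolution in the spatial domain, computed tile by tile.
        # In a GPU kernel, "tile" would correspond to the input region a
        # thread block stages in fast memory and reuses.
        kh, kw = kernel.shape
        oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
        out = np.zeros((oh, ow), dtype=image.dtype)
        for ty in range(0, oh, tile_y):
            for tx in range(0, ow, tile_x):
                # Load the input tile, including the kernel border, once.
                tile = image[ty:ty + tile_y + kh - 1, tx:tx + tile_x + kw - 1]
                for y in range(min(tile_y, oh - ty)):
                    for x in range(min(tile_x, ow - tx)):
                        out[ty + y, tx + x] = np.sum(tile[y:y + kh, x:x + kw] * kernel)
        return out

Choosing good values for tile_y and tile_x per filter size and device is the kind of decision an adaptive tiling scheme would make automatically.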


Digital Investigation | 2017

Clustering image noise patterns by embedding and visualization for common source camera detection

Sonja Georgievska; Rena Bakhshi; Anand Gavai; Alessio Sclocco; Ben van Werkhoven

We consider the problem of clustering a large set of images based on similarities of their noise patterns. Such clustering is necessary in forensic cases in which detection of a common source of images is required and the cameras are not physically available. We propose a novel method for clustering that combines low-dimensional embedding, visualization, and classical clustering of the dataset based on the similarity scores. We evaluate our method on the Dresden Image Database, showing that the methodology is highly effective.
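The abstract does not fix particular embedding or clustering algorithms, so the sketch below only illustrates the general pipeline it describes: a low-dimensional embedding computed from a precomputed pairwise similarity matrix, followed by classical clustering of the embedded points (and the 2D coordinates can also be plotted for visual inspection). The scikit-learn components and names are chosen for illustration and are not taken from the paper.

    import numpy as np
    from sklearn.manifold import SpectralEmbedding
    from sklearn.cluster import KMeans

    def cluster_from_similarities(similarity, n_clusters, n_components=2):
        # Embed items into a low-dimensional space from the precomputed
        # pairwise similarity (affinity) matrix, then cluster the
        # embedded coordinates with a classical algorithm.
        embedding = SpectralEmbedding(n_components=n_components,
                                      affinity="precomputed")
        coords = embedding.fit_transform(similarity)
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(coords)
        return coords, labels

    # Toy similarity matrix with two obvious groups of two items each.
    sim = np.array([[1.0, 0.9, 0.1, 0.1],
                    [0.9, 1.0, 0.1, 0.1],
                    [0.1, 0.1, 1.0, 0.8],
                    [0.1, 0.1, 0.8, 1.0]])
    coords, labels = cluster_from_similarities(sim, n_clusters=2)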


Concurrency and Computation: Practice and Experience | 2017

On the complexities of utilizing large-scale lightpath-connected distributed cyberinfrastructure

Jason Maassen; Ben van Werkhoven; Maarten van Meersbergen; Henri E. Bal; Michael Kliphuis; S.-E. Brunnabend; Henk A. Dijkstra; Gerben van Malenstein; Migiel de Vos; Sylvia Kuijpers; Sander Boele; Jules Wolfrat; Nick Hill; David Wallom; Christian Grimm; Dieter Kranzlmüller; Dinesh Ganpathi; Shantenu Jha; Yaakoub El Khamra; Frank O. Bryan; Benjamin Kirtman; Frank J. Seinstra

In autumn 2013, we, an international team of climate scientists, computer scientists, eScience researchers, and e-Infrastructure specialists, participated in the Enlighten Your Research Global competition, organized to showcase advanced lightpath technologies in support of state-of-the-art research questions. As one of the winning entries, our Enlighten Your Research Global team embarked on a very ambitious project to run an extremely high-resolution climate model on a collection of supercomputers distributed over two continents and connected using an advanced 10G lightpath networking infrastructure. Although good progress was made, we were not able to perform all desired experiments due to a varying combination of technical problems, configuration issues, policy limitations, and a lack of (budget for) human resources to solve these issues. In this paper, we describe our goals and the technical and non-technical barriers we encountered, and provide recommendations on how these barriers can be removed so future projects of this kind may succeed.


Advances in Geographic Information Systems | 2016

A spatial column-store to triangulate the Netherlands on the fly

Romulo Goncalves; Tom van Tilburg; Kostis Kyzirakos; Foteini Alvanaki; Panagiotis Koutsourakis; Ben van Werkhoven; Willem Robert van Hage

3D digital city models, important for urban planning, are currently constructed from massive point clouds obtained through airborne LiDAR (Light Detection and Ranging). They are semantically enriched with information obtained from auxiliary GIS data, such as cadastral data, which contains information about the boundaries of properties, road networks, rivers, lakes, etc. Technical advances in LiDAR data acquisition systems have made possible the rapid acquisition of high-resolution topographical information for an entire country. Such data sets are now reaching the trillion-point barrier. To cope with this data deluge and provide up-to-date 3D digital city models on demand, current geospatial management strategies should be rethought. This work presents a column-oriented Spatial Database Management System which provides in-situ data access, effective data skipping, efficient spatial operations, and interactive data visualization. Its efficiency and scalability are demonstrated using a dense LiDAR scan of the Netherlands consisting of 640 billion points and the latest cadastral information, and compared with PostGIS.


Nature Methods | 2018

Template-free 2D particle fusion in localization microscopy

Hamidreza Heydarian; Florian Schueder; Maximilian T. Strauss; Ben van Werkhoven; Mohamadreza Fazel; Keith A. Lidke; Ralf Jungmann; Sjoerd Stallinga; Bernd Rieger

Methods that fuse multiple localization microscopy images of a single structure can improve signal-to-noise ratio and resolution, but they generally suffer from template bias or sensitivity to registration errors. We present a template-free particle-fusion approach based on an all-to-all registration that provides robustness against individual misregistrations and underlabeling. We achieved 3.3-nm Fourier ring correlation (FRC) image resolution by fusing 383 DNA origami nanostructures with 80% labeling density, and 5.0-nm resolution for structures with 30% labeling density. An all-to-all registration approach allows for improved, high-resolution, template-free single-particle reconstruction from localization microscopy data under realistic experimental conditions such as low labeling density.


International Conference on e-Science | 2015

An Integrated Approach to Porting Large Scientific Applications to GPUs

Ben van Werkhoven; Pieter Hijma

There are many large scientific applications that have been actively developed for several decades. In that time, however, the hardware has evolved considerably, and it takes these applications a very long time to adjust to the new computing infrastructure. This is because porting them to new hardware, such as Graphics Processing Units (GPUs), currently requires a huge amount of manual labor, even though the computations are very well suited for GPUs. In this paper we propose an integrated approach to semi-automatically port large, long-lived scientific codes to GPUs, a method that considerably reduces the effort required from experienced GPU programmers. The approach is supported by a tool that is able to analyze, transform, and translate source code into different programming languages. We evaluate our approach by applying it to the Parallel Ocean Program, a representative, very large, and widely used scientific application.


Future Generation Computer Systems | 2019

Kernel Tuner: A search-optimizing GPU code auto-tuner

Ben van Werkhoven

A very common problem in GPU programming is that some combination of thread block dimensions and other code optimization parameters, like tiling or unrolling factors, results in dramatically better performance than other kernel configurations. To obtain highly efficient kernels, it is often required to search vast and discontinuous search spaces that consist of all possible combinations of values for all tunable parameters. This paper presents Kernel Tuner, an easy-to-use tool for testing and auto-tuning OpenCL, CUDA, and C kernels with support for many search optimization algorithms that accelerate the tuning process. This paper introduces the application of many new solvers and global optimization algorithms for auto-tuning GPU applications. We demonstrate that Kernel Tuner can be used in a wide range of application scenarios and drastically decreases the time spent tuning, e.g., tuning a GEMM kernel on an AMD Vega Frontier Edition 71.2x faster than brute-force search.
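As a concrete illustration of the workflow the abstract describes, here is a minimal sketch of auto-tuning a trivial CUDA vector-add kernel, following Kernel Tuner's documented tune_kernel interface; the kernel, problem size, and parameter values are illustrative and not taken from the paper.

    import numpy as np
    from kernel_tuner import tune_kernel

    # The tunable parameter block_size_x is inserted into the kernel code
    # (and used as the thread block size) by Kernel Tuner at compile time.
    kernel_string = """
    __global__ void vector_add(float *c, const float *a, const float *b, int n) {
        int i = blockIdx.x * block_size_x + threadIdx.x;
        if (i < n) {
            c[i] = a[i] + b[i];
        }
    }
    """

    size = 10_000_000
    a = np.random.randn(size).astype(np.float32)
    b = np.random.randn(size).astype(np.float32)
    c = np.zeros_like(a)
    n = np.int32(size)

    tune_params = {"block_size_x": [32, 64, 128, 256, 512, 1024]}

    # Benchmarks the configurations (exhaustively by default; a search
    # optimization strategy can be selected to avoid brute force) and
    # returns the measured results plus the best-performing configuration.
    results, env = tune_kernel("vector_add", kernel_string, size,
                               [c, a, b, n], tune_params)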


International Conference on e-Science | 2016

OMUSE: Oceanographic multipurpose software environment

F. Inti Pelupessy; Ben van Werkhoven; Arjen van Elteren; Jan Viebahn; Adam S. Candy; Simon Portegies Zwart; Henk A. Dijkstra

This talk will give a brief introduction to OMUSE, the Oceanographic Multipurpose Software Environment, which is currently being developed. OMUSE is a Python framework that provides high-level object-oriented interfaces to existing or newly developed numerical ocean simulation codes, simplifying their use and development. In this way, OMUSE facilitates the efficient design of numerical experiments that combine ocean models representing different physics or spanning different ranges of physical scales, for example coupling a global open-ocean simulation with a regional coastal ocean model. OMUSE enables its users to write high-level Python scripts that describe simulations. The functionality provided by OMUSE takes care of the low-level integration with the code and of deploying simulations on high-performance computing resources, allowing its users to focus on the physics of the simulation. We give an overview of the design of OMUSE and the modules and model components currently included. In particular, we will discuss the process of creating a new OMUSE interface to an existing code, and explain how OMUSE keeps track of the internal state of a running simulation. In addition, we will discuss the grid data types and grid remapping functionality that OMUSE provides. We also give an example of performing online data analysis on a running simulation, which is becoming increasingly important as models simulate a broader range of scales, generating large datasets that cannot be fully stored for offline analysis.
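Since the abstract describes driving simulations from high-level Python scripts, the fragment below sketches what such a script could look like. It is a hypothetical illustration in the AMUSE-style interface that OMUSE builds on; the import paths, parameter names, and grid attribute used here are assumptions rather than the verified OMUSE API.

    # Hypothetical sketch: names and import paths are assumed for
    # illustration and may differ from the actual OMUSE API.
    from omuse.units import units                     # assumed import path
    from omuse.community.pop.interface import POP     # assumed import path

    ocean = POP()                                     # start the ocean code as a worker

    t_end = 10.0 | units.day                          # AMUSE-style quantities (value | unit)
    while ocean.model_time < t_end:
        ocean.evolve_model(ocean.model_time + (1.0 | units.day))
        # Online analysis: read grid data from the running simulation
        # instead of storing every snapshot for offline processing.
        sst = ocean.elements.temperature              # assumed grid attribute name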


International Symposium on Computer Architecture | 2010

Towards user transparent parallel multimedia computing on GPU-Clusters

Ben van Werkhoven; Jason Maassen; Frank J. Seinstra

Collaboration


Dive into Ben van Werkhoven's collaborations.

Top Co-Authors

Adam S. Candy, Delft University of Technology
Henri E. Bal, VU University Amsterdam