Publication


Featured research published by Niels Drost.


Astronomy and Astrophysics | 2013

The Astrophysical Multipurpose Software Environment

F. I. Pelupessy; A. van Elteren; N. de Vries; Steve McMillan; Niels Drost; S. Portegies Zwart

We present the open source Astrophysical Multi-purpose Software Environment (AMUSE), a component library for performing astrophysical simulations involving different physical domains and scales. It couples existing codes within a Python framework based on a communication layer using MPI. The interfaces are standardized for each domain and their implementation based on MPI guarantees that the whole framework is well-suited for distributed computation. It includes facilities for unit handling and data storage. Currently it includes codes for gravitational dynamics, stellar evolution, hydrodynamics and radiative transfer. Within each domain the interfaces to the codes are as similar as possible. We describe the design and implementation of AMUSE, as well as the main components and community codes currently supported and we discuss the code interactions facilitated by the framework. Additionally, we demonstrate how AMUSE can be used to resolve complex astrophysical problems by presenting example applications.
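
As a rough illustration of the coupling pattern the abstract describes (standardized per-domain interfaces driven from a Python script), here is a minimal, self-contained Python sketch. The class and method names (GravityCode, StellarEvolutionCode, evolve_model, coupled_run) are hypothetical stand-ins, not the actual AMUSE API, and the two "codes" are toy Python classes rather than MPI-coupled community codes.

    # Illustrative only: hypothetical stand-ins for AMUSE-style domain interfaces.
    # The real framework wraps existing community codes (often Fortran/C, driven
    # over MPI); here both "codes" expose the same evolve_model() call in pure Python.

    class GravityCode:
        """Stand-in for a gravitational-dynamics code behind a standardized interface."""
        def __init__(self):
            self.model_time = 0.0  # Myr

        def evolve_model(self, t_end):
            self.model_time = t_end

    class StellarEvolutionCode:
        """Stand-in for a stellar-evolution code behind the same style of interface."""
        def __init__(self):
            self.model_time = 0.0  # Myr
            self.mass = 1.0        # solar masses

        def evolve_model(self, t_end):
            # Toy mass loss, just to have state that changes each step.
            self.mass *= 1.0 - 1e-4 * (t_end - self.model_time)
            self.model_time = t_end

    def coupled_run(t_end=10.0, dt=1.0):
        gravity, stars = GravityCode(), StellarEvolutionCode()
        t = 0.0
        while t < t_end:
            t += dt
            stars.evolve_model(t)    # advance one domain...
            gravity.evolve_model(t)  # ...then the other, exchanging state in between
            print(f"t = {t:4.1f} Myr, stellar mass = {stars.mass:.4f} MSun")

    if __name__ == "__main__":
        coupled_run()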


International Symposium on Multimedia | 2009

eyeDentify: Multimedia Cyber Foraging from a Smartphone

Roelof Kemp; Nicholas Palmer; Thilo Kielmann; Frank J. Seinstra; Niels Drost; Jason Maassen; Henri E. Bal

The recent introduction of smartphones has resulted in an explosion of innovative mobile applications. The computational requirements of many of these applications, however, cannot be met by the smartphone itself. The compute power of the smartphone can be enhanced by distributing the application over other compute resources. Existing solutions consist of a lightweight client running on the smartphone and a heavyweight compute server running on, for example, a cloud. This places the user in a dependent position, however, because the user only controls the client application. In this paper, we follow a different model, called cyber foraging, that gives users full control over all parts of the application. We have implemented the model using the Ibis middleware. We evaluate the model using an innovative application in the domain of multimedia computing, and show that cyber foraging increases the application's responsiveness and accuracy whilst decreasing its energy usage.
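
The offload decision at the heart of cyber foraging can be summarized in a few lines. The sketch below is illustrative only: the paper uses the Ibis middleware, whereas here the surrogate machine and all function names (recognize_object_locally, recognize_object_on_surrogate, cyber_forage) are hypothetical and simulated locally.

    # Illustrative sketch only: the offload decision and the "surrogate" machine
    # are simulated with hypothetical, self-contained code.
    import time

    def recognize_object_locally(image):
        time.sleep(0.05)                 # pretend the phone needs 50 ms
        return "local-result"

    def recognize_object_on_surrogate(image):
        time.sleep(0.01)                 # pretend a nearby machine needs 10 ms
        return "surrogate-result"

    def cyber_forage(image, surrogate_available, battery_low):
        # Core idea of cyber foraging: opportunistically hand work to resources
        # the user controls (a laptop, a home server), falling back to local
        # execution when no surrogate is reachable.
        if surrogate_available and battery_low:
            return recognize_object_on_surrogate(image)
        return recognize_object_locally(image)

    print(cyber_forage(image=b"...", surrogate_available=True, battery_low=True))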


Cluster Computing and the Grid | 2006

Simple locality-aware co-allocation in peer-to-peer supercomputing

Niels Drost; R.V. van Nieuwpoort; Henri E. Bal

With current grid middleware, it is difficult to deploy distributed supercomputing applications that run concurrently on multiple resources. Because current grid middleware systems have problems with co-allocation (scheduling across multiple grid sites) and fault tolerance, and are difficult to set up and maintain, we consider an alternative: peer-to-peer (P2P) supercomputing. P2P supercomputing middleware systems overcome many limitations of current grid systems. However, the lack of central components makes scheduling on P2P systems inherently difficult. As a possible scheduling solution for P2P supercomputing middleware we introduce flood scheduling, which is locality-aware, decentralized, flexible, and supports co-allocation. We also introduce Zorilla, a prototype P2P supercomputing middleware system. Evaluation of Zorilla on over 600 processors at six sites of the Grid'5000 system shows that flood scheduling, when used in a P2P network with suitable properties, is a good alternative to centralized algorithms.
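
A minimal sketch of the flood-scheduling idea, assuming a toy overlay of Node objects with neighbour lists and free processor slots; the names and structure are hypothetical, not Zorilla's actual implementation. A resource request spreads outwards hop by hop, so nearby nodes are claimed before distant ones (locality awareness), and a single flood can claim processors on several nodes at once (co-allocation).

    class Node:
        def __init__(self, name, free_slots=0):
            self.name = name
            self.free_slots = free_slots
            self.neighbours = []

    def flood_schedule(origin, slots_needed, max_radius=4):
        # Ask the origin first, then its neighbours, then their neighbours, and
        # so on, stopping as soon as enough processors have been claimed.
        claimed = []
        frontier, seen = [origin], {origin}
        for radius in range(max_radius + 1):
            for node in frontier:
                while node.free_slots > 0 and len(claimed) < slots_needed:
                    node.free_slots -= 1
                    claimed.append(node.name)
            if len(claimed) >= slots_needed:
                return claimed, radius
            next_frontier = []
            for node in frontier:
                for n in node.neighbours:
                    if n not in seen:
                        seen.add(n)
                        next_frontier.append(n)
            frontier = next_frontier
            if not frontier:
                break
        return claimed, max_radius  # partial allocation: not enough slots found

    # Tiny example overlay: a - b - c, where only c has many free processors.
    a, b, c = Node("a", 1), Node("b", 2), Node("c", 8)
    a.neighbours, b.neighbours, c.neighbours = [b], [a, c], [b]
    print(flood_schedule(a, slots_needed=6))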


IEEE Computer | 2010

Real-World Distributed Computing with Ibis

Henri E. Bal; Jason Maassen; Rob V. van Nieuwpoort; Niels Drost; Roelof Kemp; Timo van Kessel; Nick Palmer; Gosia Wrzesińska; Thilo Kielmann; Kees van Reeuwijk; Frank J. Seinstra; Ceriel J. H. Jacobs; Kees Verstoep

The use of parallel and distributed computing systems is essential to meet the ever-increasing computational demands of many scientific and industrial applications. Ibis allows easy programming and deployment of compute-intensive distributed applications, even for dynamic, faulty, and heterogeneous environments.


High Performance Distributed Computing | 2007

ARRG: real-world gossiping

Niels Drost; Elth Ogston; Rob V. van Nieuwpoort; Henri E. Bal

Gossiping is an effective way of disseminating information in large dynamic systems. Until now, most gossiping algorithms have been designed and evaluated using simulations. However, these algorithms often cannot cope with several real-world problems that tend to be overlooked in simulations, such as node failures, message loss, non-atomicity of information exchange, and firewalls. We explore the problems in designing and applying gossiping algorithms in real systems. In addition to identifying the most prominent real-world problems and their current solutions, we introduce Actualized Robust Random Gossiping (ARRG), an algorithm specifically designed to take all of these real-world problems into account simultaneously. To address network connectivity problems such as firewalls we introduce a novel technique, the Fallback Cache. This cache can be applied to existing gossiping algorithms to improve their resilience against connectivity problems. We introduce a new metric, Perceived Network Size, to measure a gossiping algorithm's effectiveness. In contrast to existing metrics, our new metric does not require global knowledge. Evaluation of ARRG and the Fallback Cache in a number of realistic scenarios shows that the proposed techniques significantly improve the performance of gossiping algorithms in networks with limited connectivity. Even in pathological situations, with 50% message loss and with 80% of the nodes behind a NAT, ARRG continues to work well. Existing algorithms fail in these circumstances.
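
A minimal sketch of the Fallback Cache idea, under the assumption that a peer keeps a small separate list of partners that recently completed an exchange; GossipPeer, gossip_once, and the attempt_exchange callback are hypothetical names, not ARRG's actual code.

    import random

    class GossipPeer:
        def __init__(self, regular_cache, fallback_size=10):
            self.regular_cache = list(regular_cache)  # peers learned through normal gossip
            self.fallback_cache = []                  # peers that recently answered us
            self.fallback_size = fallback_size

        def gossip_once(self, attempt_exchange):
            # Pick a normal partner first; if the exchange fails (firewall, NAT,
            # crashed node), retry with a peer that is known to be reachable.
            target = random.choice(self.regular_cache)
            if attempt_exchange(target):
                self._remember_reachable(target)
                return target
            if self.fallback_cache:
                target = random.choice(self.fallback_cache)
                if attempt_exchange(target):
                    return target
            return None   # no exchange happened this round

        def _remember_reachable(self, peer):
            if peer not in self.fallback_cache:
                self.fallback_cache.append(peer)
                self.fallback_cache = self.fallback_cache[-self.fallback_size:]

    peer = GossipPeer(regular_cache=["A", "B", "C"])
    print(peer.gossip_once(lambda p: p != "B"))   # "B" simulates a peer behind a firewall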


Concurrency and Computation: Practice and Experience | 2013

Scalable RDF data compression with MapReduce

Jacopo Urbani; Jason Maassen; Niels Drost; Frank J. Seinstra; Henri E. Bal

The Semantic Web contains many billions of statements, which are released using the resource description framework (RDF) data model. To better handle these large amounts of data, high-performance RDF applications must apply a compression technique. Unfortunately, because of the large input size, even this compression is challenging. In this paper, we propose a set of distributed MapReduce algorithms to efficiently compress and decompress a large amount of RDF data. Our approach uses a dictionary encoding technique that maintains the structure of the data. We highlight the problems of distributed data compression and describe the solutions that we propose. We have implemented a prototype using the Hadoop framework, and we evaluate its performance. We show that our approach is able to efficiently compress a large amount of data and that it scales linearly with both input size and number of nodes.
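
A single-machine imitation of the dictionary-encoding step, to make the idea concrete: every distinct RDF term is mapped to a numeric ID and each triple is rewritten as three IDs, preserving the structure of the data. The real system distributes this work as MapReduce jobs on Hadoop; the code below only mimics the map and reduce phases locally, and the example triples are invented.

    from itertools import count

    triples = [
        ("<http://example.org/alice>", "<http://xmlns.com/foaf/0.1/knows>", "<http://example.org/bob>"),
        ("<http://example.org/bob>",   "<http://xmlns.com/foaf/0.1/name>",  '"Bob"'),
    ]

    # "Map" phase: emit every term that occurs in any triple.
    mapped_terms = sorted({term for triple in triples for term in triple})

    # "Reduce" phase: each distinct term receives a unique numeric ID (the dictionary).
    ids = count(1)
    dictionary = {term: next(ids) for term in mapped_terms}

    # Encoding: triples become compact ID tuples; the structure of the data is kept.
    encoded = [tuple(dictionary[t] for t in triple) for triple in triples]
    print(dictionary)
    print(encoded)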


Grids, Clouds and Virtualization | 2011

Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds

Frank J. Seinstra; Jason Maassen; Rob V. van Nieuwpoort; Niels Drost; Timo van Kessel; Ben van Werkhoven; Jacopo Urbani; Ceriel J. H. Jacobs; Thilo Kielmann; Henri E. Bal

In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the platforms most widely available to scientists are clusters, grids, and cloud systems. Such infrastructures are currently undergoing revolutionary change due to the integration of many-core technologies, providing orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because the urgent desire for scalability and issues including data distribution, software heterogeneity, and ad hoc hardware availability commonly force scientists into simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently): a true computing jungle.


Concurrency and Computation: Practice and Experience | 2011

Zorilla: a peer-to-peer middleware for real-world distributed systems

Niels Drost; Rob V. van Nieuwpoort; Jason Maassen; Frank J. Seinstra; Henri E. Bal

The inherently complex nature of current distributed computing architectures hinders the widespread adoption of these systems for mainstream use. In general, users have access to a highly heterogeneous set of compute resources, which may include clusters, grids, desktop grids, clouds, and other compute platforms. This heterogeneity is especially problematic when running parallel and distributed applications. Software is needed that easily combines as many resources as possible into one coherent computing platform.


High Performance Distributed Computing | 2011

Towards jungle computing with Ibis/Constellation

Jason Maassen; Niels Drost; Henri E. Bal; Frank J. Seinstra

The scientific computing landscape is becoming more and more complex. Besides traditional supercomputers and clusters, scientists can also apply grid and cloud infrastructures. Moreover, the current integration of many-core technologies such as GPUs with such infrastructures adds to the complexity. To make matters worse, data distribution, hardware availability, software heterogeneity, and increasing data sizes commonly force scientists to use multiple computing platforms simultaneously: a true computing jungle. In this paper we introduce Ibis/Constellation, a software platform specifically designed for distributed, heterogeneous, and hierarchical computing environments. In Ibis/Constellation we assume that applications consist of several distinct (but somehow related) activities. These activities can be implemented independently using existing, well-understood tools (e.g., MPI, CUDA). Ibis/Constellation is then used to construct the overall application by coupling the distinct activities. Using application-defined labels in combination with context-aware work stealing, Ibis/Constellation provides a simple and efficient mechanism for automatically mapping the activities to the appropriate resources, taking data locality and heterogeneity into account. We show that an existing supernova detection application can be ported to Ibis/Constellation with little effort. By making small changes to the application-defined labels, this example application can run efficiently in three very different HPC environments: a distributed set of clusters, a large 48-core machine, and a GPU cluster.
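
A minimal sketch of label-based, context-aware work stealing, assuming executors that advertise a context string and activities that carry an application-defined label; Executor, Activity, and steal_from are hypothetical stand-ins, not the Constellation API.

    from collections import deque
    from dataclasses import dataclass, field

    @dataclass
    class Activity:
        name: str
        label: str          # application-defined label, e.g. "gpu" or "cpu"

    @dataclass
    class Executor:
        context: str                      # what this resource is suited for
        queue: deque = field(default_factory=deque)

        def steal_from(self, victim):
            # Context-aware stealing: only take work whose label matches our
            # context, so GPU kernels land on GPU nodes and the rest stays put.
            for activity in list(victim.queue):
                if activity.label == self.context:
                    victim.queue.remove(activity)
                    self.queue.append(activity)
                    return activity
            return None

    master = Executor(context="cpu", queue=deque([
        Activity("detect-candidates", "gpu"),
        Activity("read-images", "cpu"),
    ]))
    gpu_node = Executor(context="gpu")
    print(gpu_node.steal_from(master))    # steals only the "gpu"-labelled activity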


International Parallel and Distributed Processing Symposium | 2012

High-Performance Distributed Multi-Model / Multi-Kernel Simulations: A Case-Study in Jungle Computing

Niels Drost; Jason Maassen; Maarten van Meersbergen; Henri E. Bal; F. Inti Pelupessy; Simon Portegies Zwart; Michael Kliphuis; Henk A. Dijkstra; Frank J. Seinstra

High-performance scientific applications require more and more compute power. The concurrent use of multiple distributed compute resources is vital for making scientific progress. The resulting distributed system, a so-called Jungle Computing System, is both highly heterogeneous and hierarchical, potentially consisting of grids, clouds, stand-alone machines, clusters, desktop grids, mobile devices, and supercomputers, possibly with accelerators such as GPUs. One striking example of applications that can benefit greatly from Jungle Computing Systems is Multi-Model / Multi-Kernel simulation. In these simulations, multiple models, possibly implemented using different techniques and programming models, are coupled into a single simulation of a physical system. Examples include the domains of computational astrophysics and climate modeling. In this paper we investigate the use of Jungle Computing Systems for such Multi-Model / Multi-Kernel simulations. We make use of the software developed in the Ibis project, which addresses many of the problems faced when running applications on Jungle Computing Systems. We create a prototype Jungle-aware version of AMUSE, an astrophysical simulation framework. We show preliminary experiments with the resulting system, using clusters, grids, stand-alone machines, and GPUs.

Collaboration


Dive into Niels Drost's collaborations.

Top Co-Authors

Henri E. Bal

VU University Amsterdam


Nick van de Giesen

Delft University of Technology


Rolf Hut

Delft University of Technology
