G. Dick van Albada
University of Amsterdam
Publications
Featured research published by G. Dick van Albada.
IEEE International Conference on High Performance Computing, Data, and Analytics | 1999
G. Dick van Albada; J. Clinckmaillie; A. H. L. Emmen; Jörn Gehring; Oliver Heinz; Frank van der Linden; Benno J. Overeinder; Alexander Reinefeld; Peter M. A. Sloot
Workstations make up a very large fraction of the total available computing capacity in many organisations. In order to use this capacity optimally, dynamic allocation of computing resources is needed. The Esprit project Dynamite addresses this load balancing problem through the migration of tasks in a dynamically linked parallel program. An important goal of the project is to accomplish this in a manner that is transparent both to the application programmer and to the user. As a test bed, the Pam-Crash software from ESI is used.
International Conference on Computational Science | 2002
Zhiming Zhao; Robert G. Belleman; G. Dick van Albada; Peter M. A. Sloot
An Interactive Simulation System (ISS) allows a user to interactively explore simulation results and modify the parameters of the simulation at run-time. An ISS is commonly implemented as a distributed system. Integrating distributed modules into one system requires certain control components to be added to each module. When interaction scenarios are complicated, these control components often become large and complex, and offer limited reusability. To make the integration more flexible and the solution more reusable, we isolated these control components from the system's modules and implemented them as an agent framework. In this paper we describe the architecture of this agent framework and discuss how it flexibly integrates distributed modules and provides interaction support.
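As a rough illustration of the control separation described in this abstract, the sketch below wraps a plain computational module in a control agent that handles all interaction messages; the class and method names are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: a minimal separation of interaction control from a
# computational module, in the spirit of the agent framework described above.
# All class and method names are hypothetical, not taken from the paper.

class SimulationModule:
    """A plain computational module with no embedded control logic."""
    def step(self, params):
        # ... advance the simulation by one step using `params` ...
        return {"state": "updated", "params": params}

class ControlAgent:
    """Wraps a module and mediates all interaction with the rest of the system."""
    def __init__(self, module):
        self.module = module
        self.inbox = []

    def receive(self, message):
        self.inbox.append(message)

    def run_scenario(self):
        # Interpret interaction messages and drive the wrapped module,
        # keeping the scenario logic reusable across different modules.
        while self.inbox:
            msg = self.inbox.pop(0)
            if msg["type"] == "update_parameters":
                result = self.module.step(msg["payload"])
                print("module advanced:", result)

agent = ControlAgent(SimulationModule())
agent.receive({"type": "update_parameters", "payload": {"dt": 0.01}})
agent.run_scenario()
```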
Agent-Directed Simulation | 2005
Zhiming Zhao; G. Dick van Albada; Peter M. A. Sloot
Human-in-the-loop simulation systems, also called interactive simulation systems (ISSs), play an increasingly important role in problem-solving environments for complex problems. The High Level Architecture (HLA) provides a uniform interface for realizing the interoperability between distributed modules and has been widely applied in the construction of ISSs. However, using the current architecture, control of the simulation logic and activity flows is often fused with interconnection details, and the constituent components of an ISS have limited adaptability for other applications for which they would, in principle, be suited. An agent-based architecture, named the Interactive Simulation System Conductor (ISS-Conductor), is developed on top of the HLA. It provides a separate layer for describing, interpreting, and controlling activity flow between the HLA components. Using the ISS-Conductor architecture, a simulation or an interactive visualization system is encapsulated as a component, which contains an agent for invoking the simulation and visualization activities and an agent for controlling the runtime behavior.
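The sketch below gives a loose impression of what a separately described activity flow might look like: a scenario list is interpreted by a small "conductor" loop rather than being hard-wired inside the components. The scenario format and component names are invented for this example and are not the ISS-Conductor interface.

```python
# Minimal sketch of a declaratively described activity flow between two
# components, loosely inspired by the scenario layer described above.
# The scenario format and component names are hypothetical.

scenario = [
    ("simulate", "solver"),      # solver produces a new result
    ("visualize", "viewer"),     # viewer renders it
    ("interact", "viewer"),      # user may adjust parameters
    ("simulate", "solver"),      # flow returns to the solver
]

components = {
    "solver": lambda activity: print(f"solver performs '{activity}'"),
    "viewer": lambda activity: print(f"viewer performs '{activity}'"),
}

# The "conductor" interprets the scenario instead of hard-coding the control
# flow inside the components themselves.
for activity, name in scenario:
    components[name](activity)
```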
Grid Computing | 2000
Kamil Iskra; Z.W. Hendrikse; G. Dick van Albada; Benno J. Overeinder; Peter M. A. Sloot; Jörn Gehring
The combined computing capacity of the workstations present in many organisations is often under-utilised, as the performance of parallel programs on them is unpredictable. Load balancing through dynamic task re-allocation can help to obtain a more reliable performance. The Esprit project Dynamite provides such an automated load balancing system. It can migrate tasks that are part of a parallel program using a message passing library. Currently Dynamite supports PVM only, but it is being extended to support MPI as well. The Dynamite package is completely transparent, i.e. neither system (kernel) nor application source code needs to be modified. Dynamite supports migration of tasks using dynamically linked libraries, open files, and both direct and indirect PVM communication. Monitors and a scheduler are included. In this paper, we first briefly describe the Dynamite system; we then describe how migration decisions are made and report on some performance measurements.
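Dynamite's actual decision logic is not reproduced in the abstract; the generic heuristic below only illustrates the kind of load-imbalance test on which a migration decision can be based. The threshold and data structures are assumptions made for the example.

```python
# A generic illustration of a migration decision based on load imbalance.
# This is not the actual Dynamite scheduler; thresholds and data structures
# are assumptions made for the example.

def pick_migration(node_loads, threshold=0.25):
    """Return a (source, destination) pair, or None if the load is balanced.

    node_loads maps a node name to its current load (e.g. run-queue length).
    """
    src = max(node_loads, key=node_loads.get)
    dst = min(node_loads, key=node_loads.get)
    if node_loads[src] - node_loads[dst] > threshold * max(node_loads[src], 1):
        return src, dst
    return None

print(pick_migration({"ws1": 4, "ws2": 1, "ws3": 2}))  # ('ws1', 'ws2')
```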
International Conference on Computational Science | 2003
Alfredo Tirado-Ramos; Katarzyna Zajac; Zhiming Zhao; Peter M. A. Sloot; G. Dick van Albada; Marian Bubak
Interactive Problem Solving Environments (PSEs) offer an integrated approach for constructing and running complex systems, such as distributed simulation systems. New distributed infrastructures, like the Grid, support access to a large variety of core services and resources that can be used by interactive PSEs in a secure environment. We are experimenting with Grid access for interactive PSEs built on top of the High Level Architecture (HLA), a middleware for interactive simulations. In our current approach, once a PSE simulation is executed in the framework, mechanisms from both HLA and Grid middleware are used to broker resources and to provide job submission, performance monitoring, and security services for efficient and transparent execution. We are experimenting with the Web-based Open Grid Services Architecture (OGSA) for HLA RTI Federate registration and discovery, as well as for data transmission. We have found that Grid Services make it possible to modify HLA Federates dynamically, although the dynamic discovery of Federates and the use of Service Data (metadata) for service introspection are not trivial. We have also found that opening many data transmission channels from HLA Federates to a single destination limits the number of connections that can be made to other destinations.
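As a purely conceptual sketch of federate registration and discovery, the toy in-memory registry below mimics publishing a contact point with metadata and querying it; it uses no real Grid or HLA API, and all names and fields are illustrative.

```python
# Toy, in-memory stand-in for federate registration and discovery.
# It does not use any real Grid or HLA API; names are illustrative only.

registry = {}

def register_federate(name, endpoint, metadata=None):
    """Publish a federate's contact point and optional service data."""
    registry[name] = {"endpoint": endpoint, "metadata": metadata or {}}

def discover_federates(**required):
    """Return federates whose metadata matches all required key/value pairs."""
    return {
        name: entry
        for name, entry in registry.items()
        if all(entry["metadata"].get(k) == v for k, v in required.items())
    }

register_federate("flow_solver", "host-a:6000", {"role": "simulation"})
register_federate("render_node", "host-b:6001", {"role": "visualization"})
print(discover_federates(role="simulation"))
```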
IEEE International Conference on High Performance Computing, Data, and Analytics | 2000
J. Santoso; G. Dick van Albada; Bobby A. A. Nazief; Peter M. A. Sloot
In this paper we study hierarchical job scheduling strategies for clusters of workstations. Our approach uses two-level scheduling: global scheduling and local scheduling. The local scheduler refines the scheduling decisions made by the global scheduler, taking into account the most recent information. In this paper, we explore the First Come First Served (FCFS), the Shortest Job First (SJF), and the First Fit (FF) policies at the global level and the local level. In addition, we use separate queues at the global level for arriving jobs, where the jobs with the same number of tasks are placed in one queue. At both levels, the schedulers strive to maintain a good load balance. The unit of load balancing at the global level is the job consisting of one or more parallel tasks; at the local level it is the task.
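The three policies named above can be summarised by the following simplified sketch; the job and cluster representations are assumptions made for the example only.

```python
# Sketch of the three queue-ordering / placement policies named above, in a
# simplified form; job and cluster representations are assumptions.

def fcfs(queue):
    """First Come First Served: keep arrival order."""
    return list(queue)

def sjf(queue):
    """Shortest Job First: order by estimated run time."""
    return sorted(queue, key=lambda job: job["runtime"])

def first_fit(job, clusters):
    """First Fit: place the job on the first cluster with enough free nodes."""
    for cluster in clusters:
        if cluster["free_nodes"] >= job["tasks"]:
            return cluster["name"]
    return None  # no cluster fits; the job stays queued

jobs = [{"id": 1, "runtime": 40, "tasks": 4}, {"id": 2, "runtime": 5, "tasks": 2}]
clusters = [{"name": "c0", "free_nodes": 2}, {"name": "c1", "free_nodes": 8}]
print([j["id"] for j in sjf(jobs)])  # [2, 1]
print(first_fit(jobs[0], clusters))  # 'c1'
```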
European PVM/MPI Users' Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface | 2000
Kamil Iskra; Z.W. Hendrikse; G. Dick van Albada; Benno J. Overeinder; Peter M. A. Sloot
The total computing capacity of workstations can be harnessed more efficiently by using a dynamic task allocation system. The Esprit project Dynamite provides such an automated load balancing system, through the migration of tasks of a parallel program using PVM. The Dynamite package is completely transparent, i.e. neither system (kernel) nor application program modifications are needed. Dynamite supports migration of tasks using dynamically linked libraries, open files and both direct and indirect PVM communication. In this paper we briefly introduce the Dynamite system and subsequently report on a collection of performance measurements.
IEEE International Conference on High Performance Computing, Data, and Analytics | 1997
Jan F. de Ronde; G. Dick van Albada; Peter M. A. Sloot
In this paper we discuss the design and validation of a high-performance simulation that is of critical value to the feasibility study of the GRAIL project, whose aim is to build a gravitational radiation antenna. Two relatively simple simulation models of this antenna are shown to be too restrictive for our purposes, necessitating the development of a simulation program that uses an explicit finite element kernel. The computational complexity of this kernel requires the power offered by high-performance computing, so it is tailored for execution on parallel systems. Since it is developed from scratch, we can avoid the notorious parallel programming pitfalls that usually arise when migrating existing code. The simulation program is validated for its physical correctness as well as its performance gain. Performance results are presented for two distributed-memory parallel systems: a Parsytec PowerXplorer (32 PowerPCs) and a Parsytec CC (40 PowerPC+s).
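The explicit time stepping at the heart of such a kernel can be illustrated by the generic central-difference update below; this is only a schematic example for a lumped-mass system, not the GRAIL simulation kernel itself.

```python
# Generic explicit central-difference time integration for a lumped-mass
# system, shown only to illustrate the kind of update an explicit finite
# element code iterates; it is not the GRAIL simulation kernel.

import numpy as np

def explicit_step(u, u_prev, mass, internal_force, external_force, dt):
    """Advance nodal displacements by one explicit time step.

    u, u_prev      : current and previous displacement vectors
    mass           : lumped (diagonal) mass vector
    internal_force : function mapping displacements to internal forces
    """
    accel = (external_force - internal_force(u)) / mass
    return 2.0 * u - u_prev + dt * dt * accel

# Tiny example: two nodes coupled by a linear spring of stiffness k.
k = 100.0
f_int = lambda u: np.array([k * (u[0] - u[1]), k * (u[1] - u[0])])
u_prev = np.zeros(2)
u = np.array([1e-3, 0.0])
for _ in range(3):
    u, u_prev = explicit_step(u, u_prev, np.ones(2), f_int, np.zeros(2), 1e-3), u
print(u)
```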
International Conference on Computational Science | 2002
J. Santoso; G. Dick van Albada; T. Basaruddin; Peter M. A. Sloot
In this paper we present a simulation environment for the study of hierarchical job scheduling on distributed systems. The environment provides a multi-level mechanism to simulate various types of jobs. An execution model of jobs is implemented to simulate their behaviour and obtain an accurate performance prediction. For parallel jobs, two execution models have been implemented: one in which the tasks of the job frequently synchronise and effectively run in lock step, and a second in which the tasks only synchronise at the beginning and end. The simulator is based on an object-oriented approach and on process-oriented simulation. Our model supports an unlimited number of workstations, grouped into clusters with their own local resource manager (RM). Work is distributed over these clusters by a global RM. To validate the model, we use two approaches: analysing the main queueing systems, and experimenting with real jobs to obtain the actual performance as a reference.
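The two execution models can be illustrated by the simplified run-time estimates below, where each task is described by its per-step compute times; the numbers and structure are assumptions for the example only.

```python
# A simplified illustration of the two execution models mentioned above,
# estimating the run time of a parallel job from per-task, per-step times.
# The numbers and structure are assumptions for the example only.

def lock_step_runtime(step_times):
    """Tasks synchronise every step: each step costs as much as its slowest task."""
    n_steps = len(step_times[0])
    return sum(max(task[i] for task in step_times) for i in range(n_steps))

def sync_at_ends_runtime(step_times):
    """Tasks synchronise only at start and end: the slowest total task dominates."""
    return max(sum(task) for task in step_times)

# Two tasks, three steps each (seconds per step).
steps = [[1.0, 2.0, 1.0], [2.0, 1.0, 1.0]]
print(lock_step_runtime(steps))     # 2 + 2 + 1 = 5.0
print(sync_at_ends_runtime(steps))  # max(4, 4) = 4.0
```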
International Conference on Computational Science | 2002
P.F. Spinnato; G. Dick van Albada; Peter M. A. Sloot
We present a performance model that simulates different versions of the hierarchical treecode on different computer architectures, including hybrid architectures in which a distributed general-purpose parallel host is connected to special-purpose devices that accelerate specific compute-intensive tasks. We focus on the inverse square force computation task, and study the interaction of the treecode with hybrid architectures that include the GRAPE boards specialised in gravitational force computation. We validate the accuracy and versatility of our model by simulating existing configurations reported in the literature, and use it to forecast the performance of other architectures, in order to assess the optimal hardware-software configuration.
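In the gravitational case, the inverse square force computation mentioned above is the pairwise sum sketched below; this brute-force O(N^2) version only identifies the task that boards such as GRAPE accelerate, and does not reproduce the treecode or the performance model.

```python
# Direct pairwise inverse-square (gravitational) force evaluation, the
# compute-intensive task that special-purpose boards such as GRAPE accelerate.
# This brute-force O(N^2) version is shown only to identify that task; the
# treecode and the performance model themselves are not reproduced here.

import numpy as np

def direct_forces(pos, mass, G=1.0, eps=1e-3):
    """Return the N x 3 array of softened inverse-square forces."""
    n = len(mass)
    forces = np.zeros_like(pos)
    for i in range(n):
        dr = pos - pos[i]                      # vectors from particle i
        r2 = (dr ** 2).sum(axis=1) + eps ** 2  # softened squared distances
        r2[i] = np.inf                         # exclude self-interaction
        forces[i] = G * mass[i] * (mass[:, None] * dr / r2[:, None] ** 1.5).sum(axis=0)
    return forces

rng = np.random.default_rng(0)
pos = rng.standard_normal((4, 3))
mass = np.ones(4)
print(direct_forces(pos, mass))
```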