Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Stefan J. Zasada is active.

Publication


Featured research published by Stefan J. Zasada.


Journal of Internet Services and Applications | 2013

A data infrastructure reference model with applications: Towards realization of a ScienceTube vision with a data replication service

Morris Riedel; Peter Wittenburg; Johannes Reetz; Mark van de Sanden; Jedrzej Rybicki; Benedikt von St. Vieth; Giuseppe Fiameni; Giacomo Mariani; Alberto Michelini; Claudio Cacciari; Willem Elbers; Daan Broeder; Robert Verkerk; Elena Erastova; Michael Lautenschlaeger; Reinhard Budig; Hannes Thielmann; Peter V. Coveney; Stefan J. Zasada; Ali Nasrat Haidar; Otto Buechner; Cristina Manzano; Shiraz Memon; Shahbaz Memon; Heikki Helin; Jari Suhonen; Damien Lecarpentier; Kimmo Koski; Thomas Lippert

The wide variety of scientific user communities have worked with data for many years and thus already have a wide variety of data infrastructures in production today. The aim of this paper is therefore not to create one new general data architecture that would fail to be adopted by any individual user community. Instead, this contribution aims to design a reference model with abstract entities that is able to federate existing concrete infrastructures under one umbrella. A reference model is an abstract framework for understanding significant entities and the relationships between them, and thus helps in understanding existing data infrastructures when comparing them in terms of functionality, services, and boundary conditions. An architecture derived from such a reference model can then be used to create a federated architecture that builds on the existing infrastructures and aligns them with a major common vision. This common vision, named ‘ScienceTube’ in this contribution, defines the high-level goal that the reference model aims to support. The paper describes how a well-focused use case around data replication and its related activities in the EUDAT project provides a first step towards this vision. Concrete stakeholder requirements arising from scientific end users, such as those of the European Strategy Forum on Research Infrastructures (ESFRI) projects, underpin this contribution with clear evidence that the EUDAT activities are bottom-up, providing real solutions to the frequently invoked ‘high-level big data challenges’. The federated approach, which takes advantage of community and data centers (with large computational resources), further shows how data replication services enable data-intensive computing on terabytes or even petabytes of data emerging from ESFRI projects.
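
The replication workflow is described above only at the architectural level. As a purely illustrative sketch, not EUDAT's actual implementation, the following Python fragment shows how a community centre might push a dataset to several federated data centres and verify each copy by checksum; all names (replicate_dataset, TARGET_CENTRES, the local paths) are hypothetical.

```python
# Illustrative sketch of a checksum-verified replication step; all names are
# hypothetical stand-ins for the community/data-centre services the paper
# describes abstractly.
import hashlib
import shutil
from pathlib import Path

TARGET_CENTRES = [Path("/tmp/centre_a"), Path("/tmp/centre_b")]  # hypothetical endpoints

def sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def replicate_dataset(source: Path, centres=TARGET_CENTRES) -> list[Path]:
    """Copy a dataset to each target centre and verify the replica's checksum."""
    reference = sha256(source)
    replicas = []
    for centre in centres:
        centre.mkdir(parents=True, exist_ok=True)
        replica = centre / source.name
        shutil.copy2(source, replica)        # in a federation this would be a grid transfer
        if sha256(replica) != reference:     # integrity check after the transfer
            raise IOError(f"Replica at {replica} failed verification")
        replicas.append(replica)
    return replicas
```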


Computer Physics Communications | 2007

The Application Hosting Environment: Lightweight Middleware for Grid-Based Computational Science

Peter V. Coveney; Radhika S. Saksena; Stefan J. Zasada; Mark McKeown; Stephen Pickles

Grid computing is distributed computing performed transparently across multiple administrative domains. Grid middleware, which is meant to enable access to grid resources, is currently widely seen as being too heavyweight and, in consequence, unwieldy for general scientific use. Its heavyweight nature, especially on the client-side, has severely restricted the uptake of grid technology by computational scientists. In this paper, we describe the Application Hosting Environment (AHE) which we have developed to address some of these problems. The AHE is a lightweight, easily deployable environment designed to allow the scientist to quickly and easily run legacy applications on distributed grid resources. It provides a higher level abstraction of a grid than is offered by existing grid middleware schemes such as the Globus Toolkit. As a result, the computational scientist does not need to know the details of any particular underlying grid middleware and is isolated from any changes to it on the distributed resources. The functionality provided by the AHE is ‘application-centric’: applications are exposed as web services with a well-defined standards-compliant interface. This allows the computational scientist to start and manage application instances on a grid in a transparent manner, thus greatly simplifying the user experience. We describe how a range of computational science codes have been hosted within the AHE and how the design of the AHE allows us to implement complex workflows for deployment on grid infrastructure.
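
The AHE's own client API is not reproduced here; the sketch below only illustrates the ‘application-centric’ idea the abstract describes, in which the user names a hosted application and its inputs while the hosting service hides the underlying middleware. All class and method names are hypothetical, not the actual AHE interface.

```python
# Hypothetical sketch of an application-centric hosting client; the real AHE
# exposes applications as web services, but these names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ApplicationInstance:
    app_name: str                      # an application name exposed by the host
    inputs: dict = field(default_factory=dict)
    state: str = "prepared"

class HostingService:
    """Stand-in for a service that hides grid middleware behind an application name."""
    def __init__(self, registered_apps):
        self.registered_apps = set(registered_apps)

    def prepare(self, app_name: str, inputs: dict) -> ApplicationInstance:
        if app_name not in self.registered_apps:
            raise ValueError(f"{app_name} is not hosted by this service")
        return ApplicationInstance(app_name, inputs)

    def submit(self, instance: ApplicationInstance) -> ApplicationInstance:
        # A real implementation would translate this into a middleware-specific
        # job submission; the user never sees that detail.
        instance.state = "submitted"
        return instance

service = HostingService(registered_apps={"md_code", "lattice_boltzmann"})
job = service.submit(service.prepare("md_code", {"config": "equilibration.conf"}))
print(job.state)  # -> "submitted"
```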


Computer Physics Communications | 2009

Virtualizing access to scientific applications with the Application Hosting Environment

Stefan J. Zasada; Peter V. Coveney

The growing power and number of high performance computing resources made available through computational grids present major opportunities as well as a number of challenges to the user. At issue is how these resources can be accessed and how their power can be effectively exploited. In this paper we first present our views on the usability of contemporary high-performance computational resources. We introduce the concept of grid application virtualization as a solution to some of the problems with grid-based HPC usability. We then describe a middleware tool that we have developed to realize the virtualization of grid applications, the Application Hosting Environment (AHE), and describe the features of the new release, AHE 2.0, which provides access to a common platform of federated computational grid resources in standard and non-standard ways. Finally, we describe a case study showing how AHE supports clinical use of whole brain blood flow modelling in a routine and automated fashion.
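
Grid application virtualization, as used here, means that one stable application interface is mapped onto whatever middleware a given resource actually runs. The sketch below illustrates that dispatch pattern only; the backend functions, resource names and the dispatch table are assumptions for illustration, not AHE 2.0 internals.

```python
# Illustrative dispatch from one virtualized application interface to
# middleware-specific backends; backend and resource names are hypothetical.
from typing import Callable

def submit_via_globus(app: str, resource: str) -> str:
    return f"[globus] {app} submitted to {resource}"

def submit_via_web_service(app: str, resource: str) -> str:
    return f"[web service] {app} submitted to {resource}"

# One entry per federated resource, recording how that resource is reached.
BACKENDS: dict[str, Callable[[str, str], str]] = {
    "hpc-site-a": submit_via_globus,
    "hpc-site-b": submit_via_web_service,
}

def run_application(app: str, resource: str) -> str:
    """The user-facing call stays the same regardless of the middleware underneath."""
    try:
        backend = BACKENDS[resource]
    except KeyError:
        raise ValueError(f"No backend registered for {resource}") from None
    return backend(app, resource)

print(run_application("bloodflow_model", "hpc-site-b"))
```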


Journal of Chemical Information and Modeling | 2008

Automated molecular simulation based binding affinity calculator for ligand-bound HIV-1 proteases.

S. Kashif Sadiq; David W. Wright; Simon J. Watson; Stefan J. Zasada; Ileana Stoica; Peter V. Coveney

The successful application of high throughput molecular simulations to determine biochemical properties would be of great importance to the biomedical community if such simulations could be turned around in a clinically relevant timescale. An important example is the determination of antiretroviral inhibitor efficacy against varying strains of HIV through calculation of drug-protein binding affinities. We describe the Binding Affinity Calculator (BAC), a tool for the automated calculation of HIV-1 protease-ligand binding affinities. The tool employs fully atomistic molecular simulations alongside the well established molecular mechanics Poisson-Boltzmann solvent accessible surface area (MMPBSA) free energy methodology to enable the calculation of the binding free energy of several ligand-protease complexes, including all nine FDA approved inhibitors of HIV-1 protease and seven of the natural substrates cleaved by the protease. This enables the efficacy of these inhibitors to be ranked across several mutant strains of the protease relative to the wildtype. BAC is a tool that utilizes the power provided by a computational grid to automate all of the stages required to compute free energies of binding: model preparation, equilibration, simulation, postprocessing, and data-marshaling around the generally widely distributed compute resources utilized. Such automation enables the molecular dynamics methodology to be used in a high throughput manner not achievable by manual methods. This paper describes the architecture and workflow management of BAC and the function of each of its components. Given adequate compute resources, BAC can yield quantitative information regarding drug resistance at the molecular level within 96 h. Such a timescale is of direct clinical relevance and can assist in decision support for the assessment of patient-specific optimal drug treatment and the subsequent response to therapy for any given genotype.
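
The abstract enumerates the stages that BAC automates: model preparation, equilibration, simulation, post-processing and data marshaling. The sketch below strings those stages together as a simple sequential pipeline to make the automation idea concrete; the stage functions are placeholder stubs, not BAC's actual components.

```python
# Placeholder pipeline mirroring the stages listed in the abstract; each stage
# function is a stub standing in for BAC's real, grid-distributed steps.
from typing import Callable

def prepare_model(ctx: dict) -> dict:
    return {**ctx, "model": f"{ctx['protease']}+{ctx['ligand']}"}

def equilibrate(ctx: dict) -> dict:
    return {**ctx, "equilibrated": True}

def simulate(ctx: dict) -> dict:
    return {**ctx, "trajectory": "md_trajectory.dcd"}   # illustrative filename

def postprocess(ctx: dict) -> dict:
    # A real run would apply the MMPBSA free-energy analysis to the trajectory here.
    return {**ctx, "binding_free_energy_kcal_mol": None}

def marshal_results(ctx: dict) -> dict:
    return {**ctx, "archived": True}

STAGES: list[Callable[[dict], dict]] = [
    prepare_model, equilibrate, simulate, postprocess, marshal_results,
]

def run_binding_affinity_workflow(protease: str, ligand: str) -> dict:
    """Run each stage in order, passing a growing context dictionary along."""
    ctx = {"protease": protease, "ligand": ligand}
    for stage in STAGES:
        ctx = stage(ctx)
    return ctx

print(run_binding_affinity_workflow("HIV-1_protease_wildtype", "lopinavir"))
```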


Philosophical Transactions of the Royal Society A | 2008

Patient-specific simulation as a basis for clinical decision-making.

S. Kashif Sadiq; Marco D. Mazzeo; Stefan J. Zasada; Steven Manos; Ileana Stoica; Catherine V. Gale; Simon J. Watson; Paul Kellam; Stefan Brew; Peter V. Coveney

Patient-specific medical simulation holds the promise of determining tailored medical treatment based on the characteristics of an individual patient (for example, using a genotypic assay of a sequence of DNA). Decision-support systems based on patient-specific simulation can potentially revolutionize the way that clinicians plan courses of treatment for various conditions, ranging from viral infections to arterial abnormalities. Basing medical decisions on the results of simulations that use models derived from data specific to the patient in question means that the effectiveness of a range of potential treatments can be assessed before they are actually administered, preventing the patient from experiencing unnecessary or ineffective treatments. We illustrate the potential for patient-specific simulation by first discussing the scale of the evolving international grid infrastructure that is now available to underpin such applications. We then consider two case studies, one concerned with the treatment of patients with HIV/AIDS and the other addressing neuropathologies associated with the intracranial vasculature. Such patient-specific medical simulations require access to both appropriate patient data and the computational resources on which to perform potentially very large simulations. Computational infrastructure providers need to furnish access to a wide range of different types of resource, typically made available through heterogeneous computational grids, and to institute policies that facilitate the performance of patient-specific simulations on those resources. To support these kinds of simulations, where life and death decisions are being made, computational resource providers must give urgent priority to such jobs, for example by allowing them to pre-empt the queue on a machine and run straight away. We describe systems that enable such priority computing.
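
The abstract argues that clinically urgent simulations must be able to jump ahead of ordinary work on a shared resource. The toy scheduler below illustrates that policy in isolation; it is a reasoning aid only and does not correspond to any particular batch system's pre-emption mechanism.

```python
# Toy priority scheduler: clinically urgent jobs are dispatched before any
# routine job, illustrating the priority-computing policy argued for above.
import heapq
import itertools

URGENT, ROUTINE = 0, 1          # lower number = served first
_counter = itertools.count()    # tie-breaker keeps FIFO order within a class

class UrgentFirstQueue:
    def __init__(self):
        self._heap = []

    def submit(self, name: str, urgent: bool = False) -> None:
        priority = URGENT if urgent else ROUTINE
        heapq.heappush(self._heap, (priority, next(_counter), name))

    def dispatch(self) -> str:
        """Return the next job to run: all urgent jobs drain before routine ones."""
        _, _, name = heapq.heappop(self._heap)
        return name

queue = UrgentFirstQueue()
queue.submit("parameter_sweep_042")
queue.submit("patient_bloodflow_case", urgent=True)
print(queue.dispatch())  # -> "patient_bloodflow_case"
```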


Computing in Science and Engineering | 2014

Survey of Multiscale and Multiphysics Applications and Communities

Derek Groen; Stefan J. Zasada; Peter V. Coveney

Multiscale and multiphysics applications are now commonplace, and many researchers focus on combining existing models to construct new multiscale models. This concise review of multiscale applications and their source communities in the EU and US outlines differences and commonalities among approaches and identifies areas in which collaboration between disciplines could be particularly beneficial. Because different communities adopt very different approaches to constructing multiscale simulations, and simulations on a length scale of a few meters and a time scale of a few hours can be found in many multiscale research domains, communities might receive additional benefit from sharing methods that are geared towards these scales. The Web extra is the full literature list mentioned in the article.


Interface Focus | 2013

Flexible composition and execution of high performance, high fidelity multiscale biomedical simulations.

Derek Groen; Joris Borgdorff; Carles Bona-Casas; James Hetherington; Rupert W. Nash; Stefan J. Zasada; Ilya Saverchenko; Mariusz Mamonski; Krzysztof Kurowski; Miguel O. Bernabeu; Alfons G. Hoekstra; Peter V. Coveney

Multiscale simulations are essential in the biomedical domain to accurately model human physiology. We present a modular approach for designing, constructing and executing multiscale simulations on a wide range of resources, from laptops to petascale supercomputers, including combinations of these. Our work features two multiscale applications, in-stent restenosis and cerebrovascular bloodflow, which combine multiple existing single-scale applications to create a multiscale simulation. These applications can be efficiently coupled, deployed and executed on computers up to the largest (peta) scale, incurring a coupling overhead of 1–10% of the total execution time.
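
To make the coupling idea concrete, the sketch below shows the simplest possible pattern for combining two single-scale submodels with different time steps, with the fine model sub-cycling inside each coarse step. The submodels and their exchange variables are invented for illustration; the applications in the paper use far richer coupling descriptions.

```python
# Minimal time-scale-separated coupling loop: a coarse submodel advances in
# large steps and, inside each one, a fine submodel sub-cycles and feeds a
# summary value back. Both submodels are invented stand-ins.
def fine_step(state: float, forcing: float, dt: float) -> float:
    """Fine-scale relaxation toward the coarse forcing value."""
    return state + dt * (forcing - state)

def coarse_step(state: float, feedback: float, dt: float) -> float:
    """Coarse-scale decay nudged by the fine-scale feedback."""
    return state + dt * (-0.2 * state + 0.1 * feedback)

def run_coupled(steps: int = 10, dt_coarse: float = 1.0, substeps: int = 100) -> float:
    coarse, fine = 1.0, 0.0
    dt_fine = dt_coarse / substeps
    for _ in range(steps):
        # Sub-cycle the fine model within one coarse step (the coupling exchange).
        for _ in range(substeps):
            fine = fine_step(fine, forcing=coarse, dt=dt_fine)
        coarse = coarse_step(coarse, feedback=fine, dt=dt_coarse)
    return coarse

print(run_coupled())
```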


Interface Focus | 2011

Clinically driven design of multi-scale cancer models: the ContraCancrum project paradigm

Kostas Marias; Dionysia Dionysiou; Sakkalis; Norbert Graf; Rainer M. Bohle; Peter V. Coveney; Shunzhou Wan; Amos Folarin; P Büchler; M Reyes; Gordon J. Clapworthy; Enjie Liu; Jörg Sabczynski; T Bily; A Roniotis; M Tsiknakis; Eleni A. Kolokotroni; S Giatili; Christian Veith; E Messe; H Stenzhorn; Yoo-Jin Kim; Stefan J. Zasada; Ali Nasrat Haidar; Caroline May; S Bauer; T Wang; Yanjun Zhao; M Karasek; R Grewer

The challenge of modelling cancer presents a major opportunity to improve our ability to reduce mortality from malignant neoplasms, improve treatments and meet the demands associated with the individualization of care needs. This is the central motivation behind the ContraCancrum project. By developing integrated multi-scale cancer models, ContraCancrum is expected to contribute to the advancement of in silico oncology through the optimization of cancer treatment in the patient-individualized context by simulating the response to various therapeutic regimens. The aim of the present paper is to describe a novel paradigm for designing clinically driven multi-scale cancer modelling by bringing together basic science and information technology modules. In addition, the integration of the multi-scale tumour modelling components has led to novel concepts of personalized clinical decision support in the context of predictive oncology, as is also discussed in the paper. Since clinical adaptation is an inelastic prerequisite, a long-term clinical adaptation procedure of the models has been initiated for two tumour types, namely non-small cell lung cancer and glioblastoma multiforme; its current status is briefly summarized.


Distributed Simulation and Real-Time Applications | 2012

Distributed Infrastructure for Multiscale Computing

Stefan J. Zasada; Mariusz Mamonski; Derek Groen; Joris Borgdorff; Ilya Saverchenko; Tomasz Piontek; Krzysztof Kurowski; Peter V. Coveney

Today scientists and engineers are commonly faced with the challenge of modelling, predicting and controlling multiscale systems which cross scientific disciplines and where several processes acting at different scales coexist and interact. Such multidisciplinary multiscale models, when simulated in three dimensions, require large scale or even extreme scale computing capabilities. The MAPPER project is developing computational strategies, software and services to enable distributed multiscale simulations across disciplines, exploiting existing and evolving e-Infrastructure. The resulting multi-tiered software infrastructure, which we present in this paper, has as its aim the provision of a persistent, stable infrastructure that will support any computational scientist wishing to perform distributed, multiscale simulations.


Future Generation Computer Systems | 2010

Large scale computational science on federated international grids: The role of switched optical networks

Peter V. Coveney; Giovanni Giupponi; Shantenu Jha; Steven Manos; Jon MacLaren; Stephen Pickles; Radhika S. Saksena; Thomas Soddemann; James L. Suter; Mary-Ann Thyveetil; Stefan J. Zasada

The provision of high performance compute and data resources on a grid has often been the primary concern of grid resource providers, with the network links used to connect them only a secondary matter. Certain large scale distributed scientific simulations, especially ones which involve cross-site runs or interactive visualisation and steering capabilities, often require high quality of service, high bandwidth, low latency network interconnects between resources. In this paper, we describe three applications which require access to such network infrastructure, together with the middleware and policies needed to make them possible.

Collaboration


Dive into Stefan J. Zasada's collaborations.

Top Co-Authors

Derek Groen
University College London

Shunzhou Wan
University College London

David W. Wright
University College London

Ali E. Abdallah
London South Bank University