
Publication


Featured research published by Mariusz Mamonski.


Interface Focus | 2013

Flexible composition and execution of high performance, high fidelity multiscale biomedical simulations.

Derek Groen; Joris Borgdorff; Carles Bona-Casas; James Hetherington; Rupert W. Nash; Stefan J. Zasada; Ilya Saverchenko; Mariusz Mamonski; Krzysztof Kurowski; Miguel O. Bernabeu; Alfons G. Hoekstra; Peter V. Coveney

Multiscale simulations are essential in the biomedical domain to accurately model human physiology. We present a modular approach for designing, constructing and executing multiscale simulations on a wide range of resources, from laptops to petascale supercomputers, including combinations of these. Our work features two multiscale applications, in-stent restenosis and cerebrovascular bloodflow, which combine multiple existing single-scale applications to create a multiscale simulation. These applications can be efficiently coupled, deployed and executed on computers up to the largest (peta) scale, incurring a coupling overhead of 1–10% of the total execution time.


Journal of Computational Science | 2014

Distributed multiscale computing with MUSCLE 2, the Multiscale Coupling Library and Environment

Joris Borgdorff; Mariusz Mamonski; Bartosz Bosak; Krzysztof Kurowski; M. Ben Belgacem; Bastien Chopard; Derek Groen; Peter V. Coveney; Alfons G. Hoekstra

We present the Multiscale Coupling Library and Environment: MUSCLE 2. This multiscale component-based execution environment has a simple to use Java, C++, C, Python and Fortran API, compatible with MPI, OpenMP and threading codes. We demonstrate its local and distributed computing capabilities and compare its performance to MUSCLE 1, file copy, MPI, MPWide, and GridFTP. The local throughput of MPI is about two times higher, so very tightly coupled code should use MPI as a single submodel of MUSCLE 2; the distributed performance of GridFTP is lower, especially for small messages. We test the performance of a canal system model with MUSCLE 2, where it introduces an overhead as small as 5% compared to MPI.
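The conduit-style coupling between submodels described above can be illustrated with a toy sketch. This uses plain Python threads and queues; the `Conduit` class and model names are illustrative only, not the actual MUSCLE 2 API, which spans Java, C++, C, Python and Fortran:

```python
import queue
import threading

# Toy "conduit" echoing the send/receive coupling pattern between
# submodels (illustrative names, not the real MUSCLE 2 API).
class Conduit:
    def __init__(self):
        self._q = queue.Queue()

    def send(self, msg):
        self._q.put(msg)

    def receive(self):
        return self._q.get()

def macro_model(out: Conduit, inp: Conduit, steps=3):
    state = 1.0
    for _ in range(steps):
        out.send(state)        # pass coarse state to the micro model
        state = inp.receive()  # read back the refined state
    return state

def micro_model(inp: Conduit, out: Conduit, steps=3):
    for _ in range(steps):
        coarse = inp.receive()
        out.send(coarse * 0.5)  # stand-in for a micro-scale refinement

macro_to_micro, micro_to_macro = Conduit(), Conduit()
t = threading.Thread(target=micro_model, args=(macro_to_micro, micro_to_macro))
t.start()
result = macro_model(macro_to_micro, micro_to_macro)
t.join()
print(result)  # 0.125 after three round trips
```

In MUSCLE 2 the two submodels would typically be separate processes, possibly on different machines, with the library providing the conduits; the point here is only the decoupled send/receive structure that keeps the coupling overhead low.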


International Conference on Conceptual Structures | 2012

A distributed multiscale computation of a tightly coupled model using the Multiscale Modeling Language

Joris Borgdorff; Carles Bona-Casas; Mariusz Mamonski; Krzysztof Kurowski; Tomasz Piontek; Bartosz Bosak; Katarzyna Rycerz; Eryk Ciepiela; Tomasz Gubała; Daniel Harezlak; Marian Bubak; Eric Lorenz; Alfons G. Hoekstra

Nature is observed at all scales; with multiscale modeling, scientists bring together several scales for a holistic analysis of a phenomenon. The models on these different scales may require significant but also heterogeneous computational resources, creating the need for distributed multiscale computing. A particularly demanding type of multiscale models, tightly coupled, brings with it a number of theoretical and practical issues. In this contribution, a tightly coupled model of in-stent restenosis is first theoretically examined for its multiscale merits using the Multiscale Modeling Language (MML); this is aided by a toolchain consisting of MAPPER Memory (MaMe), the Multiscale Application Designer (MAD), and Gridspace Experiment Workbench. It is implemented and executed with the general Multiscale Coupling Library and Environment (MUSCLE). Finally, it is scheduled amongst heterogeneous infrastructures using the QCG-Broker. This marks the first occasion that a tightly coupled application uses distributed multiscale computing in such a general way.


International Conference on Conceptual Structures | 2013

Distributed Multiscale Computations Using the MAPPER Framework

Mohamed Ben Belgacem; Bastien Chopard; Joris Borgdorff; Mariusz Mamonski; Katarzyna Rycerz; Daniel Harezlak

We present a global overview of the methodology developed within the MAPPER European project to design, implement and run a multiscale simulation on a distributed supercomputing infrastructure. Our goal is to highlight the main steps required when developing an application within this framework. More specifically, we illustrate the proposed approach in the case of hydrology applications. A performance model describing the execution time of the application as a function of its spatial resolution and the hardware performance is proposed. It shows that Distributed Multiscale Computation is beneficial for large-scale problems.


International Conference on Conceptual Structures | 2013

Multiscale Computing with the Multiscale Modeling Library and Runtime Environment

Joris Borgdorff; Mariusz Mamonski; Bartosz Bosak; Derek Groen; Mohamed Ben Belgacem; Krzysztof Kurowski; Alfons G. Hoekstra

We introduce a software tool to simulate multiscale models: the Multiscale Coupling Library and Environment 2 (MUSCLE 2). MUSCLE 2 is a component-based modeling tool inspired by the multiscale modeling and simulation framework, with an easy-to-use API which supports Java, C++, C, and Fortran. We present MUSCLE 2's runtime features, such as its distributed computing capabilities, and its benefits to multiscale modelers. We also describe two multiscale models that use MUSCLE 2 to do distributed multiscale computing: an in-stent restenosis model and a canal system model. We conclude that MUSCLE 2 is a notable improvement over the previous version of MUSCLE, and that it allows users to more flexibly deploy simulations of multiscale models, while improving their performance.


Distributed Simulation and Real-Time Applications | 2012

Distributed Infrastructure for Multiscale Computing

Stefan J. Zasada; Mariusz Mamonski; Derek Groen; Joris Borgdorff; Ilya Saverchenko; Tomasz Piontek; Krzysztof Kurowski; Peter V. Coveney

Today scientists and engineers are commonly faced with the challenge of modelling, predicting and controlling multiscale systems which cross scientific disciplines and where several processes acting at different scales coexist and interact. Such multidisciplinary multiscale models, when simulated in three dimensions, require large scale or even extreme scale computing capabilities. The MAPPER project is developing computational strategies, software and services to enable distributed multiscale simulations across disciplines, exploiting existing and evolving e-Infrastructure. The resulting multi-tiered software infrastructure, which we present in this paper, has as its aim the provision of a persistent, stable infrastructure that will support any computational scientist wishing to perform distributed, multiscale simulations.


eScience on Distributed Computing Infrastructure - Volume 8500 | 2014

New QosCosGrid Middleware Capabilities and Its Integration with European e-Infrastructure

Bartosz Bosak; Krzysztof Kurowski; Tomasz Piontek; Mariusz Mamonski

QosCosGrid (QCG) is an integrated system offering leading job and resource management capabilities in order to deliver supercomputer-like performance and structure to end users. By combining many distributed computing resources, QCG offers highly efficient mapping, execution and monitoring capabilities for a variety of applications, such as parameter sweeps, workflows, multi-scale, MPI or hybrid MPI-OpenMP jobs. The QosCosGrid middleware also provides a set of unique features, such as advance reservation, co-allocation of distributed computing resources, support for interactive tasks and monitoring of the progress of running applications. The middleware is offered to end users through well-designed and easy-to-use client tools. At the time of writing, QosCosGrid is the most popular middleware within the PL-Grid Infrastructure. After its successful adoption by the Polish research communities, it has been integrated with the EGI infrastructure and, through a release in UMD and the EGI AppDB, it is also available at the European level. In this article, we focus on the extensions introduced to QosCosGrid during the PL-Grid and PLGrid Plus projects in order to support advanced user scenarios and to integrate the stack with the Polish and European e-Infrastructures.


eScience on Distributed Computing Infrastructure - Volume 8500 | 2014

Reservations for Compute Resources in Federated e-Infrastructure

Marcin Radecki; Tadeusz Szymocha; Tomasz Piontek; Bartosz Bosak; Mariusz Mamonski; Paweł Wolniewicz; Krzysztof Benedyczak; Rafał Kluszczyński

This paper presents the work done to prepare compute resource reservations in the PL-Grid Infrastructure. A compute resource reservation allows a user to allocate a fraction of the resources for exclusive access, so that jobs can run without waiting for the batch system to allocate resources. In the PL-Grid Infrastructure, reservations can be allocated up to the amount negotiated in a PL-Grid grant. One way of obtaining a reservation is allocation by a resource administrator; another is to use a predefined pool of resources accessible through various middleware. In both approaches, once obtained, reservation identifiers can be used by the middleware during job submission. Enabling reservations requires changes in the middleware, and we describe the modifications needed in each. Possible extensions of the existing reservation model in the PL-Grid Infrastructure can be envisaged: reservation usage normalization and reservation accounting. Reservations are created and utilized in the user's context, so there must be a way to pass the reservation details from the user-level tools to the batch system. Each of the PL-Grid supported middleware stacks, namely gLite, UNICORE and QosCosGrid, required adaptations to implement this goal.
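Passing a reservation identifier from user-level tools down to the batch system, as described above, can be sketched with a small wrapper. The SLURM-style `--reservation` flag is one concrete example of such a batch-system mechanism; the wrapper function itself is illustrative, not part of any of the middleware stacks named here:

```python
# Sketch: inject a user's reservation ID into a batch submission command.
# "sbatch --reservation=<name>" is a real SLURM mechanism; the wrapper
# and the reservation name below are illustrative only.
def build_batch_command(script, reservation=None):
    cmd = ["sbatch"]
    if reservation:
        # Jobs submitted with this flag run inside the reserved
        # resources instead of waiting in the general queue.
        cmd.append(f"--reservation={reservation}")
    cmd.append(script)
    return cmd

print(build_batch_command("job.sh", "plgrid-grant-42"))
# ['sbatch', '--reservation=plgrid-grant-42', 'job.sh']
print(build_batch_command("job.sh"))
# ['sbatch', 'job.sh']
```

In the middleware adaptations described in the paper, the equivalent translation step happens inside gLite, UNICORE or QosCosGrid rather than in a user-facing script.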


Computational Methods in Science and Technology | 2010

Parallel Large Scale Simulations in the PL-Grid Environment

Krzysztof Kurowski; Tomasz Piontek; Mariusz Mamonski; Bartosz Bosak


Archive | 2013

The virtual physiological human: integrative approaches to computational biomedicine

H. Talbot; S. Marchesseau; C. Duriez; M. Sermesant; S. Cotin; H. Delingette; L. Mountrakis; E. Lorenz; Alfons G. Hoekstra; Miguel O. Bernabeu; Rupert W. Nash; Derek Groen; Hywel B. Carver; James Hetherington; Timm Krüger; Peter V. Coveney; Joris Borgdorff; Carles Bona-Casas; Stefan J. Zasada; I. Saverchenko; Mariusz Mamonski; Krzysztof Kurowski; S. A. Niederer; W. E. Louch; O. M. Sejersted; N. P. Smith; E. Pervolaraki; R. A. Anderson; A. P. Benson; B. Hayes-Gill

Collaboration


Dive into Mariusz Mamonski's collaboration.

Top Co-Authors
Bartosz Bosak

Polish Academy of Sciences

Tomasz Piontek

Polish Academy of Sciences


Derek Groen

University College London


Daniel Harezlak

AGH University of Science and Technology


Katarzyna Rycerz

AGH University of Science and Technology
