Publications


Featured research published by Gabriele Pierantoni.


Future Generation Computer Systems | 2013

HELIO: Discovery and analysis of data in heliophysics

Robert D. Bentley; John Brooke; Andre Csillaghy; Donal Fellows; Anja Le Blanc; Mauro Messerotti; David Pérez-Suárez; Gabriele Pierantoni; Marco Soldati

Heliophysics is the study of highly energetic events that originate on the Sun and propagate through the solar system. Such events can cause critical and possibly fatal disruption of the electromagnetic systems on spacecraft and on ground-based structures such as electric power grids, so there is a clear need to understand the events in their totality as they propagate through space and time. This poses a fascinating eScience challenge, since the data is gathered by many observatories and communities that have hitherto not needed to work together. We describe how we are developing an eScience infrastructure to make the discovery and analysis of such complex events possible for the communities of heliophysics. The new systematic and data-centric science which will develop from this will be a child of both the space and information ages.


International Conference on e-Science | 2014

Scientific Workflow Management -- For Whom?

Sílvia Delgado Olabarriaga; Gabriele Pierantoni; Giuliano Taffoni; Eva Sciacca; Mohammad Mahdi Jaghoori; Vladimir Korkhov; Giuliano Castelli; Claudio Vuerli; Ugo Becciani; Eoin P. Carley; Bob Bentley

Workflow management has been widely adopted by scientific communities as a valuable tool to carry out complex experiments. It makes it possible to perform computations for data analysis and simulations while hiding the details of the complex underlying infrastructures. There are many workflow management systems that offer a large variety of generic services to coordinate the execution of workflows. Nowadays, there is a trend to extend the functionality of workflow management systems to cover all possible requirements that may arise from a user community. However, there are multiple scenarios for the usage of workflow systems, involving various actors that require different services to be supported by these systems. In this paper we reflect on the usage scenarios of scientific workflow management based on the practical experience of heavy users of workflow technology from communities in three scientific domains: Astrophysics, Heliophysics and Biomedicine. We discuss the requirements regarding services and information to be provided by the workflow management system for each usage profile, and illustrate how these requirements are fulfilled by the tools these communities currently adopt. This paper contributes to the understanding of the properties of future workflow management systems that are important to increase their adoption in a large variety of usage scenarios.


Proceedings of the 6th International Workshop on Science Gateways (IWSG '14) | 2014

Metaworkflows and Workflow Interoperability for Heliophysics

Gabriele Pierantoni; Eoin P. Carley

Heliophysics is a relatively new branch of physics that investigates the relationship between the Sun and the other bodies of the solar system. To investigate such relationships, heliophysicists can rely on various tools developed by the community. Some of these tools are on-line catalogues that list events (such as Coronal Mass Ejections, CMEs) and their characteristics as they were observed on the surface of the Sun or on the other bodies of the Solar System. Other tools offer on-line data analysis and access to images and data catalogues. During their research, heliophysicists often perform investigations that need to coordinate several of these services and to repeat these complex operations until the phenomena under investigation are fully analyzed. Heliophysicists combine the results of these services; this service orchestration is best suited to workflows. This approach has been investigated in the HELIO project, which developed an infrastructure for a Virtual Observatory for Heliophysics and implemented service orchestration using TAVERNA workflows. HELIO developed a set of workflows that proved to be useful but lacked flexibility and re-usability. The TAVERNA workflows also needed to be executed directly in the TAVERNA workbench, which forced all users to learn how to use the workbench. Within the SCI-BUS and ER-FLOW projects, we have started an effort to re-think and re-design the heliophysics workflows with the aim of fostering re-usability and ease of use. We base our approach on two key concepts: meta-workflows and workflow interoperability. We have divided the produced workflows into three layers. The first layer, Basic Workflows, is developed in both the TAVERNA and WS-PGRADE languages. These are building blocks that users compose to address their scientific challenges; they implement well-defined Use Cases that usually involve only one service. The second layer, Science Workflows, is usually developed in TAVERNA. These implement Science Cases (the definition of a scientific challenge) by composing different Basic Workflows. The third and last layer, Iterative Science Workflows, is developed in WS-PGRADE. It executes sub-workflows (either Basic or Science Workflows) as parameter-sweep jobs to investigate Science Cases on multiple large data sets. So far, this approach has proven fruitful for three Science Cases, of which one has been completed and two are still being tested.
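
As an illustration of the layering just described, here is a minimal Python sketch of how Basic Workflows compose into a Science Workflow, and how an Iterative Science Workflow sweeps it over parameters. All function names and values are hypothetical stand-ins: the actual workflows are written in TAVERNA and WS-PGRADE and invoke HELIO services rather than local functions.

```python
# Sketch of the three-layer workflow structure; names are illustrative only.

def query_event_catalogue(event_type, start, end):
    """Basic Workflow: one well-defined Use Case against a single service."""
    return [{"event": event_type, "time": start}]  # placeholder result

def run_propagation_model(event, speed_km_s):
    """Basic Workflow: wraps a single propagation-model service call."""
    return {"event": event, "speed_km_s": speed_km_s}

def science_workflow(start, end, speed_km_s):
    """Science Workflow: composes Basic Workflows to address a Science Case."""
    events = query_event_catalogue("CME", start, end)
    return [run_propagation_model(e, speed_km_s) for e in events]

def iterative_science_workflow(start, end, speeds):
    """Iterative Science Workflow: runs the Science Workflow as a
    parameter-sweep job over many parameter values."""
    return {v: science_workflow(start, end, v) for v in speeds}

results = iterative_science_workflow("2013-01-01", "2013-01-31",
                                     speeds=[400, 800, 1200])
```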


Computer Science | 2013

Extending the SHEBA Propagation Model to Reduce Parameter-Related Uncertainties

Gabriele Pierantoni; Brian A. Coghlan; Eamonn Kenny; Peter T. Gallagher; David Pérez-Suárez

Heliophysics is the branch of physics that investigates the interactions and correlation of different events across the Solar System. The mathematical models that describe and predict how physical events move across the solar system (i.e. Propagation Models) are of great relevance. These models depend on parameters that users must set, hence the ability to correctly set the values is key to reliable simulations. Traditionally, parameter values can be inferred from data either at the source (the Sun) or arrival point (the target), or can be extrapolated from common knowledge of the event under investigation. Another way of setting parameters for Propagation Models is proposed here: instead of guessing a priori parameters from scientific data or common knowledge, the model is executed as a parameter-sweep job and selects a posteriori the parameters that yield results most compatible with the event data. In either case (a priori and a posteriori), the correct use of Propagation Models requires information to either select the parameters, validate the results, or both. In order to do so, it is necessary to access sources of information. For this task, the HELIO project proves very effective, as it offers the most comprehensive integrated information system in this domain and provides access and coordination to services to mine and analyze data. HELIO also provides a Propagation Model called SHEBA, the extension of which is currently being developed within the SCI-BUS project (a coordinated effort to develop a framework capable of offering science gateways seamless access to major computing and data infrastructures).
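
A minimal sketch of the a posteriori strategy described in the abstract: sweep the model over candidate parameter values and keep the value whose prediction best matches the observed event data. The toy constant-speed model and the numbers below are illustrative assumptions, not the SHEBA code.

```python
# Parameter-sweep selection: choose the parameter that best fits the data.

AU_KM = 1.496e8  # one astronomical unit in km

def transit_time_hours(speed_km_s, distance_au=1.0):
    """Toy ballistic model: constant-speed transit from Sun to target."""
    return distance_au * AU_KM / speed_km_s / 3600.0

observed_transit_h = 52.0                  # hypothetical measured transit time
candidate_speeds = range(300, 2000, 25)    # parameter sweep, km/s

# A posteriori selection: keep the speed whose prediction is closest
# to the observed Sun-to-Earth transit time.
best = min(candidate_speeds,
           key=lambda v: abs(transit_time_hours(v) - observed_transit_h))
print(f"best-fit speed: {best} km/s, "
      f"predicted transit: {transit_time_hours(best):.1f} h")
```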


IEEE International Conference on eScience | 2011

HELIO: Discovery and Analysis of Data in Heliophysics

Robert D. Bentley; John Brooke; Andre Csillaghy; Donal Fellows; Anja Le Blanc; Mauro Messerotti; David Pérez-Suárez; Gabriele Pierantoni; Marco Soldati

Heliophysics is the study of highly energetic events that originate on the Sun and propagate through the solar system. Such events can cause critical and possibly fatal disruption of the electromagnetic systems on spacecraft and on ground-based structures such as electric power grids, so there is a clear need to understand the events in their totality as they propagate through space and time. This poses a fascinating eScience challenge, since the data is gathered by many observatories and communities that have hitherto not needed to work together. We describe how we are developing an eScience infrastructure to make the discovery and analysis of such complex events possible for the communities of heliophysics. The new systematic and data-centric science which will develop from this will be a child of both the space and information ages.


Archive | 2011

Social Grid Agents

Gabriele Pierantoni; Brian A. Coghlan; Eamonn Kenny

Social grid agents are a socially inspired solution designed to address the problem of resource allocation in grid computing. They offer a viable way to alleviate some of the problems associated with the interoperability and utilization of diverse computational resources, and to model the large variety of relationships among the different actors. The social grid agents provide an abstraction layer between resource providers and consumers. The social grid agent prototype was built in a metagrid environment; its architecture, based on agnosticism regarding both technological solutions and economic precepts, now proves useful in extending the environment of the agents from multiple grid middlewares (the metagrid) to multiple computational environments encompassing grids, clouds and volunteer-based computational systems. The presented architecture is based on two layers: (1) production grid agents that compose various grid services as in a supply chain, and (2) social grid agents that own and control the agents in the lower layer and engage in social and economic exchange. The design of social grid agents focuses on how to handle the three flows of information (production, ownership, policies) in a consistent, flexible, and scalable manner. A native functional language is used to describe the information that controls the behavior of the agents and the messages exchanged by them.
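
The two-layer split and the three information flows can be illustrated with a small sketch. The class and method names below are hypothetical, not taken from the actual prototype (which uses a native functional language rather than Python).

```python
# Illustrative two-layer agent model: production agents do the work,
# social agents own them and trade according to policy.

class ProductionGridAgent:
    """Lower layer: composes grid services as in a supply chain."""
    def __init__(self, service):
        self.service = service
    def produce(self, job):
        return f"{self.service} ran {job}"        # production flow

class SocialGridAgent:
    """Upper layer: owns production agents and exchanges with peers."""
    def __init__(self, name, policy):
        self.name = name
        self.policy = policy                      # policy flow
        self.owned = []                           # ownership flow
    def acquire(self, agent):
        self.owned.append(agent)
    def negotiate(self, peer, job):
        # Social/economic exchange: delegate only if policy permits.
        if self.policy(peer, job) and self.owned:
            return self.owned[0].produce(job)
        return None

alice = SocialGridAgent("alice", policy=lambda peer, job: True)
alice.acquire(ProductionGridAgent("batch-cluster"))
print(alice.negotiate(peer="bob", job="render-frames"))
```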


Journal of Grid Computing | 2010

The Back-End of a Two-Layer Model for a Federated National Datastore for Academic Research VOs that Integrates EGEE Data Management

Brian A. Coghlan; John J. Walsh; Stephen Childs; Geoff Quigley; David O’Callaghan; Gabriele Pierantoni; John Ryan; Neil Simon; Keith Rochford

This paper proposes an architecture for the back-end of a federated national datastore for use by academic research communities, developed by the e-INIS (Irish National e-InfraStructure) project, and describes in detail one member of the federation, the regional datastore at Trinity College Dublin. It builds upon existing infrastructure and services, including Grid-Ireland, the National Grid Initiative and EGEE, Europe’s leading Grid infrastructure. It assumes users are in distinct research communities and that their data access patterns can be described via two properties, denoted as mutability and frequency-of-access. The architecture is for a back-end—individual academic communities are best qualified to define their own front-end services and user interfaces. The proposal is designed to facilitate front-end development by placing minimal restrictions on how the front-end is implemented and on the internal community security policies. The proposal also seeks to ensure that the communities are insulated from the back-end and from each other in order to ensure quality of service and to decouple their front-end implementation from site-specific back-end implementations.
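
The two access-pattern properties suggest a straightforward mapping from (mutability, frequency-of-access) to a storage class. The sketch below is a hypothetical illustration of that idea; the tier names are assumptions, not part of the e-INIS design.

```python
# Hypothetical mapping of the two access-pattern properties to storage tiers.

def storage_class(mutable: bool, frequently_accessed: bool) -> str:
    if mutable and frequently_accessed:
        return "online read-write disk"
    if mutable:
        return "near-line read-write storage"
    if frequently_accessed:
        return "replicated read-only disk cache"
    return "archival tape"

# e.g. a community with static but heavily read reference data:
print(storage_class(mutable=False, frequently_accessed=True))
```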


Computer Science | 2014

A Workflow-oriented Approach to Propagation Models in Heliophysics

Gabriele Pierantoni; Eoin P. Carley; Jason P. Byrne; David Pérez-Suárez; Peter T. Gallagher

The Sun is responsible for the eruption of billions of tons of plasma and the generation of near light-speed particles that propagate throughout the solar system and beyond. If directed towards Earth, these events can be damaging to our technological infrastructure. Hence there is an effort to understand the cause of the eruptive events and how they propagate from Sun to Earth. However, the physics governing their propagation is not well understood, so there is a need to develop a theoretical description of their propagation, known as a Propagation Model, in order to predict when they may impact Earth. It is often difficult to define a single propagation model that correctly describes the physics of solar eruptive events, and even more difficult to implement models capable of catering for all these complexities and to validate them using real observational data. In this paper, we envisage that workflows offer both a theoretical and practical framework for a novel approach to propagation models. We define a mathematical framework that aims at encompassing the different modalities with which workflows can be used, and provide a set of generic building blocks written in the TAVERNA workflow language that users can use to build their own propagation models. Finally, we test both the theoretical model and the composite building blocks of the workflow with a real Science Use Case that was discussed during the 4th CDAW (Coordinated Data Analysis Workshop) event held by the HELIO project. We show that generic workflow building blocks can be used to construct a propagation model that successfully describes the transit of solar eruptive events toward Earth and predicts a correct Earth-impact time.
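
To make the notion of a generic propagation-model building block concrete, here is a toy constant-speed block in Python; the paper's actual blocks are TAVERNA workflows, and the launch time and speed below are illustrative values, not data from the CDAW Science Use Case.

```python
# Toy propagation-model building block: predict Earth-impact time for a
# CME under a constant-speed assumption.

from datetime import datetime, timedelta

AU_KM = 1.496e8  # Sun-Earth distance in km (1 AU)

def predict_earth_impact(launch: datetime, speed_km_s: float) -> datetime:
    """Return the predicted arrival time at Earth for a ballistic CME."""
    transit_s = AU_KM / speed_km_s
    return launch + timedelta(seconds=transit_s)

launch = datetime(2011, 6, 7, 6, 30)   # illustrative CME launch time
print(predict_earth_impact(launch, speed_km_s=1250.0))
```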


Computer Science | 2012

The Use of Standards in HELIO

Gabriele Pierantoni; Brian A. Coghlan; Eamonn Kenny

HELIO [8] is a project funded under the FP7 programme for the discovery and analysis of data for heliophysics. During its development, standards and common frameworks were adopted in three main areas of the project: query services, processing services, and the security infrastructure. After a first, proprietary implementation of the security service, it was decided to move it to a standard security framework to simplify the enforcement of security across the different sites. As the HELIO front end is built with Spring and the TAVERNA server (the HELIO workflow engine) has a security framework compatible with Spring, the CIS was moved to Spring Security [2]. HELIO has two different processing services: one is a generic processing service called the HELIO Processing Services (HPS); the other, called the Context Service (CTX), runs specific IDL procedures. The CTX implements the UWS [4] interface from the IVOA [5], a standard interface for job submission used in the heliophysics and astrophysics communities. In its final release, the HPS will expose a UWS-compliant interface. Finally, some of the HELIO services perform queries; to simplify the implementation and usage of these services, a single query interface (the HELIO Query Interface) has been designed for all of them. The use of these solutions for security, execution, and query allows for an easier implementation of the original HELIO architecture and a simpler deployment of the services.
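
The UWS interaction pattern that CTX implements can be sketched as follows: create a job with a POST to the jobs collection, start it by posting PHASE=RUN, and poll its phase until it reaches a terminal state. This follows the IVOA UWS specification's REST pattern; the base URL and job parameter below are hypothetical, not the actual HELIO deployment.

```python
# Sketch of a UWS-style job submission against a hypothetical endpoint.

import time
import requests

BASE = "https://example.org/ctx/jobs"   # hypothetical CTX job collection

# Create a job; UWS answers with a redirect to the new job resource.
resp = requests.post(BASE, data={"param": "value"}, allow_redirects=False)
job_url = resp.headers["Location"]

# Start the job, then poll its phase until it reaches a terminal state.
requests.post(f"{job_url}/phase", data={"PHASE": "RUN"})
while True:
    phase = requests.get(f"{job_url}/phase").text.strip()
    if phase in ("COMPLETED", "ERROR", "ABORTED"):
        break
    time.sleep(5)

print(phase, requests.get(f"{job_url}/results").text)
```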


Parallel Computing | 2006

A transparent grid filesystem

Brian A. Coghlan; Geoff Quigley; Soha Maad; Gabriele Pierantoni; John Ryan; Eamonn Kenny; David O'Callaghan

Existing data management solutions fail to adequately support data management needs at the inter-grid (interoperability) level. We describe a possible solution, a transparent grid filesystem, and consider in detail a challenging use case.

Collaboration


Dive into Gabriele Pierantoni's collaboration.

Top Co-Authors

David Pérez-Suárez (Finnish Meteorological Institute)
John Brooke (University of Manchester)
Tamas Kiss (University of Westminster)
Anja Le Blanc (University of Manchester)
Donal Fellows (University of Manchester)