Publication


Featured research published by Iraklis Klampanos.


Computer Science Review | 2012

Survey: Searching in peer-to-peer networks

Iraklis Klampanos; Joemon M. Jose

As peer-to-peer networks are proving capable of handling huge volumes of data, the need for effective search tools is lasting and imperative. In recent years, a number of research studies have been published that attempt to address the problem of search in large, decentralized networks. In this article, we mainly focus on content- and concept-based retrieval. After providing a useful discussion on terminology, we introduce a representative sample of such studies and categorize them according to basic functional and non-functional characteristics. Following our analysis and discussion, we conclude that future work should focus on information filtering, re-ranking and merging of results, relevance feedback and content replication, as well as on related user-centric aspects of the problem.
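
The result-merging challenge the survey points to can be made concrete with a small sketch. The code below is purely illustrative and does not come from the paper: it min-max normalises scores per peer before merging, since raw retrieval scores from heterogeneous peers are not directly comparable.

```python
# Hypothetical illustration of merging ranked results from several peers
# in a decentralized search network; not taken from the surveyed systems.
def normalise(results):
    """Min-max normalise one peer's (doc_id, score) list to [0, 1]."""
    scores = [score for _, score in results]
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0  # avoid division by zero for uniform scores
    return [(doc, (score - lo) / span) for doc, score in results]

def merge(peer_results, k=10):
    """Merge per-peer rankings, keeping each document's best evidence."""
    pooled = {}
    for results in peer_results:
        for doc, score in normalise(results):
            pooled[doc] = max(pooled.get(doc, 0.0), score)
    return sorted(pooled.items(), key=lambda kv: kv[1], reverse=True)[:k]

peer_a = [("d1", 12.0), ("d2", 7.5), ("d3", 3.1)]   # scores on one scale
peer_b = [("d2", 0.91), ("d4", 0.40)]               # scores on another
print(merge([peer_a, peer_b], k=3))
```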


IEEE International Conference on High Performance Computing, Data, and Analytics | 2012

The Design of a Community Science Cloud: The Open Science Data Cloud Perspective

Allison P. Heath; Ray Powell; Rafael Suarez; Walt Wells; Kevin P. White; Malcolm P. Atkinson; Iraklis Klampanos; Heidi L. Alvarez; Christine Harvey; Joe Mambretti

In this paper we describe the design and implementation of the Open Science Data Cloud (OSDC). The goal of the OSDC is to provide petabyte-scale data cloud infrastructure and related services for scientists working with large quantities of data. Currently, the OSDC consists of more than 2000 cores and 2 PB of storage distributed across four data centers connected by 10G networks. We discuss some of the lessons learned during the past three years of operation and describe the software stacks used in the OSDC. We also describe some of the research projects in biology, the earth sciences, and social sciences enabled by the OSDC.


Proceedings of the 2014 International Workshop on Data Intensive Scalable Computing Systems | 2014

dispel4py: a Python framework for data-intensive scientific computing

Rosa Filgueira; Iraklis Klampanos; Amrey Krause; Mario David; Alexander Moreno; Malcolm P. Atkinson

This paper presents dispel4py, a new Python framework for describing abstract stream-based workflows for distributed data-intensive applications. The main aim of dispel4py is to enable scientists to focus on their computation instead of being distracted by details of the computing infrastructure they use. Therefore, special care has been taken to provide dispel4py with the ability to map abstract workflows to different enactment platforms dynamically, at run time. In this work we present four dispel4py mappings: Apache Storm, MPI, multi-threading and sequential. The results show that dispel4py successfully enacts workflows on these different platforms while also providing scalable performance.
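
To make the central idea concrete, here is a toy sketch of describing a workflow once and enacting it on interchangeable back-ends. The class and method names are hypothetical; this is not dispel4py's actual API.

```python
# Toy sketch of platform-independent workflow enactment; the names here
# (ProcessingElement, Pipeline, enact_*) are hypothetical, not dispel4py's API.
from multiprocessing import Pool

class ProcessingElement:
    """One fine-grained operation applied to each item of a data stream."""
    def process(self, item):
        raise NotImplementedError

class Tokenise(ProcessingElement):
    def process(self, item):
        return item.lower().split()

class Count(ProcessingElement):
    def process(self, tokens):
        return len(tokens)

class Pipeline:
    """An abstract workflow: a chain of PEs, independent of the platform
    that will eventually enact it."""
    def __init__(self, *pes):
        self.pes = pes

    def run_one(self, item):
        for pe in self.pes:
            item = pe.process(item)
        return item

    def enact_sequential(self, stream):
        return [self.run_one(item) for item in stream]

    def enact_multiprocessing(self, stream, workers=4):
        with Pool(workers) as pool:
            return pool.map(self.run_one, stream)

if __name__ == "__main__":
    pipeline = Pipeline(Tokenise(), Count())
    data = ["a b c", "hello world"]
    # The workflow definition is unchanged; only the enactment differs.
    assert pipeline.enact_sequential(data) == pipeline.enact_multiprocessing(data)
```

The same separation is what lets the real framework swap in Apache Storm or MPI at run time without touching the workflow description.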


International Conference on e-Science | 2015

VERCE Delivers a Productive E-science Environment for Seismology Research

Malcolm P. Atkinson; Michele Carpenè; Emanuele Casarotti; Steffen Claus; Rosa Filgueira; Anton Frank; Michelle Galea; Tom Garth; André Gemünd; Heiner Igel; Iraklis Klampanos; Amrey Krause; Lion Krischer; Siew Hoon Leong; Federica Magnoni; Jonas Matser; Alberto Michelini; Andreas Rietbrock; Horst Schwichtenberg; Alessandro Spinuso; Jean-Pierre Vilotte

The VERCE project has pioneered an e-Infrastructure to support researchers using established simulation codes on high-performance computers in conjunction with multiple sources of observational data. This is accessed and organised via the VERCE science gateway that makes it convenient for seismologists to use these resources from any location via the Internet. Their data handling is made flexible and scalable by two Python libraries, ObsPy and dispel4py, and by data services delivered by ORFEUS and EUDAT. Provenance-driven tools enable rapid exploration of results and of the relationships between data, which accelerates understanding and method improvement. These powerful facilities are integrated and draw on many other e-Infrastructures. This paper presents the motivation for building such systems, reviews how solid-Earth scientists can make significant research progress using them, and explains the architecture and mechanisms that make their construction and operation achievable. We conclude with a summary of the achievements to date and identify the crucial steps needed to extend the capabilities for seismologists, for solid-Earth scientists and for similar disciplines.
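
ObsPy, one of the two libraries named above, is a widely used open-source seismology package; the fragment below illustrates the kind of waveform retrieval and preprocessing step a VERCE workflow automates at scale. The station codes and time window are placeholders.

```python
# Fetch and preprocess one waveform with ObsPy, as a VERCE workflow might
# do for thousands of traces. Station codes and times are placeholders.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("ORFEUS")  # one of the data services named in the paper
t0 = UTCDateTime("2015-01-01T00:00:00")
stream = client.get_waveforms(network="NL", station="HGN", location="*",
                              channel="BHZ", starttime=t0, endtime=t0 + 3600)

stream.detrend("linear")                              # remove linear trend
stream.filter("bandpass", freqmin=0.01, freqmax=0.1)  # keep long periods
print(stream)
```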


Workflows in Support of Large-Scale Science | 2013

The demand for consistent web-based workflow editors

Sandra Gesing; Malcolm P. Atkinson; Iraklis Klampanos; Michelle Galea; Michael R. Berthold; R. Barbera; Diego Scardaci; Gabor Terstyanszky; Tamas Kiss; Péter Kacsuk

This paper identifies the high value to researchers in many disciplines of having web-based graphical editors for scientific workflows and draws attention to two technological transitions: good quality editors can now run in a browser and workflow enactment systems are emerging that manage multiple workflow languages and support multi-lingual workflows. We contend that this provides a unique opportunity to introduce multi-lingual graphical workflow editors which in turn would yield substantial benefits: workflow users would find it easier to share and combine methods encoded in multiple workflow languages, the common framework would stimulate conceptual convergence and increased workflow component sharing, and the many workflow communities could share a substantial part of the effort of delivering good quality graphical workflow editors in browsers. The paper examines whether such a common framework is feasible and presents an initial design for a web-based editor, tested with a preliminary prototype. It is not a fait accompli but rather an urgent rallying cry to explore collaboratively a generic web-based framework before investing in many divergent individual implementations.
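
A common framework of the kind argued for here would need a language-neutral workflow representation that every editor and enactment system can read and write. The sketch below is purely hypothetical (it is not the paper's prototype design) and shows one minimal form such an intermediate representation could take.

```python
# Hypothetical language-neutral workflow description of the sort a
# multi-lingual web editor could manipulate; not the paper's design.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    language: str    # the workflow language the component comes from
    component: str   # the component's identifier within that language

@dataclass
class Edge:
    source: str      # "node.port"
    target: str      # "node.port"

@dataclass
class Workflow:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)

    def validate(self):
        """Check that every edge endpoint names a known node."""
        names = {node.name for node in self.nodes}
        for edge in self.edges:
            assert edge.source.split(".")[0] in names, edge.source
            assert edge.target.split(".")[0] in names, edge.target

wf = Workflow(
    nodes=[Node("read", "dispel4py", "StreamReader"),   # component names
           Node("stats", "KNIME", "Statistics")],       # are illustrative
    edges=[Edge("read.output", "stats.input")],
)
wf.validate()  # a graphical editor would work only at this level
```

A browser-based editor manipulating only this level would leave per-language details inside the components, which is what would allow methods encoded in multiple workflow languages to be combined.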


International Conference on e-Science | 2015

dispel4py: An Agile Framework for Data-Intensive eScience

Rosa Filgueira; Amrey Krause; Malcolm P. Atkinson; Iraklis Klampanos; Alessandro Spinuso; Susana Sanchez-Exposito

We present dispel4py, a versatile data-intensive kit delivered as a standard Python library. It empowers scientists to experiment and test ideas using their familiar rapid-prototyping environment. It delivers mappings to diverse computing infrastructures, including cloud technologies, HPC architectures and specialised data-intensive machines, to move seamlessly into production with large-scale data loads. The mappings are fully automated, so that the encoded data analyses and data handling are completely unchanged. The underpinning model is lightweight composition of fine-grained operations on data, coupled together by data streams that use the lowest-cost technology available. These fine-grained workflows are locally interpreted during development and mapped to multiple nodes and systems such as MPI and Storm for production. We explain why such an approach is becoming increasingly essential so that data-driven research can innovate rapidly and exploit the growing wealth of data while adapting to current technical trends. We show how provenance management is provided to improve understanding and reproducibility, and how a registry supports consistency and sharing. Three application domains are reported and measurements on multiple infrastructures show the optimisations achieved. Finally, we present the next steps to achieve scalability and performance.
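
As an aside on the provenance management mentioned here, the general technique can be pictured with a short sketch. This is an illustration of the idea only, not dispel4py's actual provenance implementation: each operation is wrapped so every derived item carries a record of how it was produced.

```python
# Hypothetical sketch of stream-level provenance capture; an illustration
# of the general technique, not dispel4py's provenance implementation.
import time

def with_provenance(operation, name):
    """Wrap an operation so that each output records its derivation."""
    def wrapped(item):
        result = operation(item["data"])
        record = {"operation": name, "timestamp": time.time()}
        return {"data": result, "provenance": item["provenance"] + [record]}
    return wrapped

source = {"data": [3, 1, 2], "provenance": []}
step1 = with_provenance(sorted, "sort")
step2 = with_provenance(lambda xs: xs[-1], "last-of-sorted")

out = step2(step1(source))
print(out["data"])        # 3
print(out["provenance"])  # two derivation records, in order
```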


International Workshop on Data-Intensive Distributed Computing | 2014

FAST: flexible automated synchronization transfer

Rosa Filgueira; Iraklis Klampanos; Yusuke Tanimura; Malcolm P. Atkinson

This paper presents a new data synchronizing transfer tool called FAST (Flexible Automated Synchronization Transfer), which provides facilities for reliably transferring multi-channel data and metadata periodically from rock physics experiments to a repository and a database located on a remote machine. FAST is compatible with all operating systems and supports different types of transfer operations by selecting among transfer protocols. FAST is very easy to set up and requires little oversight during operation. It copes automatically with interruptions to local operations and communications.
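
The abstract names two behaviours, protocol selection and automatic recovery from interruptions, without giving implementation details; the sketch below is a hypothetical illustration of both. The names and commands are placeholders, not FAST's real interface.

```python
# Hypothetical sketch of a FAST-like periodic transfer loop with protocol
# selection and retry on interruption; not FAST's actual interface.
import subprocess
import time

PROTOCOLS = {
    # each entry builds the command line for one transfer protocol
    "rsync": lambda src, dst: ["rsync", "-az", "--partial", src, dst],
    "scp":   lambda src, dst: ["scp", "-r", src, dst],
}

def transfer(src, dst, protocol="rsync", retries=5, backoff=30):
    """Attempt one synchronisation, retrying after interruptions."""
    cmd = PROTOCOLS[protocol](src, dst)
    for attempt in range(1, retries + 1):
        if subprocess.run(cmd).returncode == 0:
            return True
        time.sleep(backoff * attempt)  # back off further after each failure
    return False

def run_periodically(src, dst, period=600):
    """Ship new experiment data to the remote repository every `period` seconds."""
    while True:
        transfer(src, dst)
        time.sleep(period)
```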


International Supercomputing Conference | 2013

Towards Addressing CPU-Intensive Seismological Applications in Europe

Michele Carpenè; Iraklis Klampanos; Siew Hoon Leong; Emanuele Casarotti; Peter Danecek; Graziella Ferini; André Gemünd; Amrey Krause; Lion Krischer; Federica Magnoni; Marek Simon; Alessandro Spinuso; Luca Trani; Malcolm P. Atkinson; Giovanni Erbacci; Anton Frank; Heiner Igel; Andreas Rietbrock; Horst Schwichtenberg; Jean-Pierre Vilotte

Advanced application environments for seismic analysis help geoscientists to execute complex simulations to predict the behaviour of a geophysical system and potential surface observations. At the same time, data collected from seismic stations must be processed by comparing recorded signals with predictions. The EU-funded project VERCE (http://verce.eu/) aims to enable specific seismological use cases and, on the basis of requirements elicited from the seismology community, to provide a service-oriented infrastructure to deal with such challenges. In this paper we present VERCE’s architecture, in particular relating to forward and inverse modelling of Earth models, and show how the largely file-based HPC model can be combined with data streaming operations to enhance the scalability of experiments. We posit that the integration of services and HPC resources in an open, collaborative environment is an essential medium for the advancement of sciences of critical importance, such as seismology.
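
The combination of file-based HPC output with streaming operations can be pictured with a small hypothetical pattern (not VERCE's actual mechanism): the simulation touches a marker file when each output is complete, and a watcher streams finished files onward as they appear.

```python
# Hypothetical bridge between a file-based HPC job and a streaming
# consumer; not VERCE's actual mechanism. The job is assumed to touch
# "<output>.done" once each output file is fully written.
import pathlib
import time

def stream_outputs(outdir, consume, poll=5):
    """Pass each completed simulation output to `consume` as it appears."""
    seen = set()
    outdir = pathlib.Path(outdir)
    while True:
        for marker in sorted(outdir.glob("*.done")):
            if marker not in seen:
                seen.add(marker)
                consume(marker.with_suffix(""))  # the data file itself
        time.sleep(poll)
```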


Collaboration


Explore Iraklis Klampanos's collaboration network. Top co-authors:

Amrey Krause, University of Edinburgh
Alessandro Spinuso, Royal Netherlands Meteorological Institute
Emanuele Casarotti, California Institute of Technology
Jean-Pierre Vilotte, Institut de Physique du Globe de Paris
Alexander Moreno, Georgia Institute of Technology