Gabor Terstyanszky
University of Westminster
Publications
Featured research published by Gabor Terstyanszky.
Journal of Grid Computing | 2005
Thierry Delaitre; Tamas Kiss; Ariel Goyeneche; Gabor Terstyanszky; Stephen Winter; Péter Kacsuk
There are many legacy code applications that cannot be run in a Grid environment without significant modification. To avoid re-engineering of legacy code, we developed the Grid Execution Management for Legacy Code Architecture (GEMLCA), which enables the deployment of legacy code applications as Grid services. GEMLCA implements a general architecture for deploying legacy applications as Grid services without the need for code re-engineering, or even access to the source files. With GEMLCA, only a user-level understanding is required to run a legacy application from a standard Grid service client. The legacy code runs in its native environment, using the GEMLCA resource layer to communicate with the Grid client, thus hiding the legacy nature of the application and presenting it as a Grid service. As a Grid service layer, GEMLCA supports submitting jobs and retrieving their status and results. The paper introduces the GEMLCA concept, its life cycle, design and implementation. It also presents, as an example, a legacy simulation code that has been successfully transformed into a Grid service using GEMLCA.
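The core idea, wrapping an unmodified binary behind a job-style submit/status/result interface, can be sketched as follows. This is a hypothetical, simplified interface for illustration only, not GEMLCA's actual API; submission is synchronous here for brevity, whereas a real Grid service layer would run jobs asynchronously.

```python
import subprocess
import uuid

class LegacyCodeService:
    """Illustrative wrapper exposing a legacy executable through a
    job-style interface, in the spirit of GEMLCA's resource layer."""

    def __init__(self, executable):
        self.executable = executable   # the legacy binary stays untouched
        self.jobs = {}                 # job id -> completed process

    def submit(self, *args):
        """Run the legacy code in its native environment; the wrapper
        only mediates between the client and the process."""
        job_id = str(uuid.uuid4())
        self.jobs[job_id] = subprocess.run(
            [self.executable, *args], capture_output=True, text=True)
        return job_id

    def status(self, job_id):
        return "DONE" if self.jobs[job_id].returncode == 0 else "FAILED"

    def result(self, job_id):
        return self.jobs[job_id].stdout

# `echo` stands in for a legacy simulation code
service = LegacyCodeService("echo")
job = service.submit("hello")
print(service.status(job))           # DONE
print(service.result(job).strip())   # hello
```

The client never sees the executable directly, only job identifiers and results, which is what lets the legacy nature of the application stay hidden.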
Future Generation Computer Systems | 2011
Gabor Kecskemeti; Gabor Terstyanszky; Péter Kacsuk; Zsolt Németh
Fulfilling a service request in highly dynamic service environments may require deploying a service. Therefore, the effectiveness of service deployment systems affects initial service response times. On Infrastructure as a Service (IaaS) cloud systems, deployable services are encapsulated in virtual appliances, and services are deployed by instantiating virtual machines with their virtual appliances. The virtual machine instantiation process depends heavily on the size and availability of the virtual appliance, which is maintained by service developers. This article proposes an automated virtual appliance creation service that helps service developers create efficiently deployable virtual appliances; in earlier systems this task was carried out manually by the developer. We present an algorithm that decomposes these appliances in order to replicate the common virtual appliance parts across IaaS systems. These parts are used to reduce the deployment time of the service by rebuilding its virtual appliance on the deployment target site. With a prototype implementation of the proposed algorithms, we demonstrate the decomposition and appliance rebuilding algorithms on a complex web service.
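The decompose-then-rebuild idea can be illustrated with content-addressed chunks: common parts of an appliance are stored once in a shared repository, and the target site reassembles the image from a small recipe. This is a minimal sketch under assumed details (fixed-size byte chunks, an in-memory repository); the paper's actual algorithm works on virtual appliance structure, not raw bytes.

```python
import hashlib

CHUNK = 4  # tiny chunk size, for illustration only

def decompose(appliance: bytes, repo: dict) -> list:
    """Split an appliance image into chunks and publish each chunk to a
    shared repository keyed by content hash; return the rebuild recipe."""
    recipe = []
    for i in range(0, len(appliance), CHUNK):
        chunk = appliance[i:i + CHUNK]
        key = hashlib.sha256(chunk).hexdigest()
        repo[key] = chunk              # identical parts are stored once
        recipe.append(key)
    return recipe

def rebuild(recipe: list, repo: dict) -> bytes:
    """Reassemble the appliance on the deployment site from the
    replicated parts named in the recipe."""
    return b"".join(repo[key] for key in recipe)

repo = {}
image = b"base-os|base-os|service-code"   # repeated "base" parts
recipe = decompose(image, repo)
assert rebuild(recipe, repo) == image
# Fewer repository entries than recipe steps: common parts deduplicated
print(len(recipe), len(repo))
```

Because the common parts are already replicated near the target site, only the service-specific chunks need to travel at deployment time, which is where the deployment-time saving comes from.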
ieee international conference on cloud computing technology and science | 2014
Stephen Winter; Christopher J. Reynolds; Tamas Kiss; Gabor Terstyanszky; Pamela Greenwell; Sharron McEldowney; Sándor Ács; Péter Kacsuk
Cloud technology has the potential to widen access to high-performance computational resources for e-science research, but barriers to engagement with the technology remain high for many scientists. Workflows help overcome these barriers by hiding details of the underlying computational infrastructure; they are portable between various platforms, including clouds, and are increasingly accepted within e-science research communities. Issues arising from the range of workflow systems available and the complexity of workflow development have been addressed by focusing on workflow interoperability and providing customised support for different science communities. However, deploying such environments can be challenging, even where user requirements are comparatively modest. RESWO (Reconfigurable Environment Service for Workflow Orchestration) is a virtual platform-as-a-service cloud model that allows leaner, customised environments to be assembled and deployed within a cloud. Suitable distributed computation resources are not always easily affordable and can present a further barrier to engagement by scientists. Desktop grids that use the spare CPU cycles available within an organisation are an attractively inexpensive type of infrastructure for many, and have been effectively virtualised as a cloud-based resource. However, hosts in this environment are volatile, leading to the tail problem, where some tasks become randomly delayed, affecting overall performance. To solve this problem, new algorithms have been developed to implement a cloudbursting scheduler in which durable cloud-based CPU resources execute replicas of jobs that have become delayed. This paper describes experiences in the development of a RESWO instance in which a desktop grid is buttressed with CPU resources in the cloud to support the aspirations of bioscience researchers.
A core component of the architecture, the cloudbursting scheduler, implements an algorithm to perform late job detection, cloud resource management and job monitoring. The experimental results obtained demonstrate significant performance improvements, with benefits illustrated by use cases in bioscience research.
Future Generation Computer Systems | 2014
Gabor Terstyanszky; Tamas Kukla; Tamas Kiss; Péter Kacsuk; Ákos Balaskó; Zoltan Farkas
E-scientists want to run their scientific experiments on Distributed Computing Infrastructures (DCIs) to access large pools of resources and services. Running experiments on these infrastructures requires specific expertise that e-scientists may not have. Workflows can hide resources and services behind a virtualization layer, providing a user interface that e-scientists can use. Many workflow systems are used by research communities, but they are not interoperable. Learning a workflow system and creating workflows in it may require significant effort from e-scientists, so it is not reasonable to expect research communities to learn new workflow systems whenever they want to run workflows developed in other systems. The solution is to create workflow interoperability solutions that allow workflow sharing. The FP7 Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs (SHIWA) project developed two interoperability solutions to support workflow sharing: Coarse-Grained Interoperability (CGI) and Fine-Grained Interoperability (FGI). The project created the SHIWA Simulation Platform (SSP) to implement the Coarse-Grained Interoperability approach as a production-level service for research communities. The paper describes the CGI approach and how it enables sharing and combining existing workflows into complex applications and running them on Distributed Computing Infrastructures. The paper also outlines the architecture, components and usage scenarios of the simulation platform.
Proceedings. 30th Euromicro Conference, 2004. | 2004
Thierry Delaitre; Ariel Goyeneche; Péter Kacsuk; Tamas Kiss; Gabor Terstyanszky; Stephen Winter
The Grid Execution Management for Legacy Code Architecture (GEMLCA) describes a solution for exposing and executing legacy applications through an OGSI Grid service. This architecture was introduced in a previous paper by the same authors, where the general concept was demonstrated by creating an OGSI/GT3 version of the MadCity traffic simulator. The class structure of the architecture is described, presenting each component and the relationships between them. The current implementation of the architecture is then evaluated through test results obtained by running the MadCity traffic simulator as a C/PVM legacy application.
parallel, distributed and network-based processing | 2008
Gabor Kecskemeti; Péter Kacsuk; Gabor Terstyanszky; Tamas Kiss; Thierry Delaitre
Manual deployment of an application usually requires expertise in both the underlying system and the application itself. Automatic service deployment can improve deployment significantly by using on-demand deployment and self-healing services. To support these features, this paper describes an extension to the Globus Workspace Service. This extension includes creating virtual appliances for Grid services, deploying services from a repository, and influencing service schedules by altering execution planning services, candidate set generators or information systems.
Parallel Processing Letters | 2008
Zoltán Balaton; Zoltan Farkas; Gábor Gombás; Péter Kacsuk; Róbert Lovas; Attila Csaba Marosi; Gabor Terstyanszky; Tamas Kiss; Oleg Lodygensky; Gilles Fedak; Ad Emmen; Ian Kelley; Ian Taylor; Miguel Cardenas-Montes; Filipe Araujo
Service grids and desktop grids are both promoted by their supportive communities as great solutions for solving the available compute power problem and helping to balance loads across network systems. Little work, however, has been undertaken to blend these two technologies together. In this paper we introduce a new EU project that is building technological bridges to facilitate service and desktop grid interoperability. We provide a taxonomy and background on service grids, such as EGEE, and on desktop grid or volunteer computing platforms, such as BOINC and XtremWeb. We then describe our approach to identifying translation technologies between service and desktop grids. The individual themes discuss the actual bridging technologies employed and the distributed data issues surrounding deployment.
grid computing | 2010
Tamas Kiss; Pamela Greenwell; Hans Heindl; Gabor Terstyanszky; Noam Weingarten
Carbohydrate recognition is a phenomenon critical to a number of biological functions in humans. Understanding the dynamic behaviour of oligosaccharides should help in the discovery of the mechanisms that lead to specific and selective recognition of carbohydrates by proteins. Computer programs that can provide insight into such biological recognition processes have significant potential to contribute to biomedical research, provided the results of the simulations prove consistent with the outcome of conventional wet laboratory experiments. In order to validate these simulation tools and support their wider uptake by the bioscience research community, high-level, easy-to-use integrated environments are required to run massively parallel simulation workflows. This paper describes how the ProSim Science Gateway, based on the WS-PGRADE Grid portal, has been created to execute and visualise the results of complex parameter sweep workflows for modelling carbohydrate recognition.
ieee international conference on cloud computing technology and science | 2011
Christopher J. Reynolds; Stephen Winter; Gabor Terstyanszky; Tamas Kiss; Pamela Greenwell; Sándor Ács; Péter Kacsuk
Scientific workflows are common in biomedical research, particularly for molecular docking simulations such as those used in drug discovery. Such workflows typically involve data distribution between computationally demanding stages, which are usually mapped onto large-scale compute resources. Volunteer or Desktop Grid (DG) computing can provide such infrastructure but has limitations resulting from the heterogeneous nature of the compute nodes. These constraints mean that reducing the makespan of a given workflow stage submitted to a DG becomes problematic. Late jobs can significantly affect the makespan, often completing long after the bulk of the computation has finished. In this paper we present a system capable of significantly reducing the makespan of a scientific workflow. Our system comprises a DG which is dynamically augmented with an Infrastructure as a Service (IaaS) Cloud. Using this solution, the Cloud resources are used to process replicated late jobs. At the core of our system is a component termed the scheduler, which implements an algorithm to perform late job detection, Cloud resource management (instantiation and reuse), and job monitoring. We offer a formal definition of this algorithm, and we also provide an evaluation of our prototype using a production scientific workflow.
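One plausible shape of the late-job detection step can be sketched as follows. The policy here, flagging a still-running job once its elapsed time exceeds a multiple of the median completed runtime, is an illustrative assumption, not the paper's formal algorithm; flagged jobs would then be replicated on durable cloud instances, with whichever copy finishes first winning.

```python
import statistics

def detect_late_jobs(elapsed, finished, factor=2.0):
    """Flag jobs as late for cloud replication (illustrative policy).

    elapsed  -- job id -> seconds elapsed so far (or total runtime)
    finished -- set of job ids that have completed
    factor   -- how many times the median completed runtime a job may
                run before it is considered late
    """
    done = [elapsed[j] for j in finished]
    # Only act once the bulk of the batch has completed, so the median
    # of finished runtimes is a meaningful baseline.
    if len(done) < 0.5 * len(elapsed):
        return []
    cutoff = factor * statistics.median(done)
    return [j for j in elapsed
            if j not in finished and elapsed[j] > cutoff]

elapsed = {"j1": 10, "j2": 12, "j3": 11, "j4": 30}  # seconds so far
finished = {"j1", "j2", "j3"}                        # j4 still running
print(detect_late_jobs(elapsed, finished))  # ['j4'] -> replicate in cloud
```

Waiting for the bulk of jobs to complete before flagging stragglers is what distinguishes this from a plain timeout: the cutoff adapts to the actual workload rather than being fixed in advance.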
workflows in support of large-scale science | 2008
Tamas Kukla; Tamas Kiss; Gabor Terstyanszky; Péter Kacsuk
Several widely used grid workflow management systems emerged in the last decade. These systems were developed by different scientific communities for various purposes. Enhancing these systems with the capability of invoking and nesting the workflows of other systems within their native workflows enables these communities to carry out cross-organizational experiments and share non-native workflows. The novel solution described in this paper integrates different workflow engines and makes them accessible to workflow systems in order to achieve this goal. The solution is based on an application repository and submitter, which exposes different workflow engines and executes them using the computational resources of the grid. In contrast with other approaches, our solution is scalable in terms of both the number of workflows and the amount of data, easily extendable in the sense that integrating a new workflow engine does not require code re-engineering, and general, since it can be adopted by numerous workflow systems.
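The repository-and-submitter idea amounts to treating a foreign workflow engine itself as a job: the repository maps a workflow system to the command that launches its engine, and the submitter runs that command wrapped around the foreign workflow. The sketch below uses a plain shell interpreter as a stand-in engine; the repository contents and function names are illustrative assumptions, not the paper's actual catalogue.

```python
import pathlib
import subprocess
import tempfile

# Hypothetical engine repository: workflow system name -> the command
# that runs its engine as an ordinary job on a grid resource.
ENGINE_REPOSITORY = {
    "shell": ["sh"],   # stand-in for a real engine such as Taverna/Triana
}

def run_foreign_workflow(system: str, workflow_file: str) -> str:
    """Execute a non-native workflow by fetching its engine's launch
    command from the repository and submitting engine + workflow as
    one job; the host workflow system only sees the job's output."""
    cmd = ENGINE_REPOSITORY[system] + [workflow_file]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# A trivial "foreign workflow" written for the stand-in engine
wf = pathlib.Path(tempfile.mkdtemp()) / "inner.sh"
wf.write_text("echo inner-workflow-done\n")
print(run_foreign_workflow("shell", str(wf)).strip())  # inner-workflow-done
```

Because adding a system only means adding a repository entry, no host-side code changes are needed, which matches the extendability claim above.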