Balazs Konya
Lund University
Publications
Featured research published by Balazs Konya.
International Conference on Computational Science | 2003
Oxana Smirnova; Paula Eerola; T. Ekelof; M. Ellert; John Renner Hansen; Aleksandr Konstantinov; Balazs Konya; Jakob Langgaard Nielsen; F. Ould-Saada; Anders Wäänänen
The NorduGrid project operates a production Grid infrastructure in Scandinavia and Finland using its own innovative middleware solutions. The resources range from small test clusters at academic institutions to large farms at several supercomputer centers, and are used for various scientific applications. This paper reviews the architecture and describes the Grid services implemented via the NorduGrid middleware.
Nuclear Instruments & Methods in Physics Research Section A-accelerators Spectrometers Detectors and Associated Equipment | 2003
M. Ellert; Aleksandr Konstantinov; Balazs Konya; Oxana Smirnova; Anders Wäänänen
The NorduGrid is the pioneering Grid project in Scandinavia. Its purpose is to create a Grid computing infrastructure in the Nordic countries. The cornerstone of the infrastructure adopted at NorduGrid is the Globus Toolkit, developed at Argonne National Laboratory and the University of Southern California. The toolkit, however, lacks several important high-level services. Needing a working production system, NorduGrid developed its own solutions for the most essential parts. An early prototype implementation of the proposed architecture is being tested and further developed. Aiming at a simple yet functional system capable of handling the common computational problems encountered in the Nordic scientific communities, we chose straightforward solutions and implemented the necessary parts first.
Latin American Web Congress | 2003
Paula Eerola; Balazs Konya; Oxana Smirnova; T. Ekelof; M. Ellert; John Renner Hansen; Jakob Langgaard Nielsen; Anders Wäänänen; Aleksandr Konstantinov; Juha Herrala; Miika Tuisku; Trond Myklebust; F. Ould-Saada; Brian Vinter
NorduGrid offers reliable Grid services to academic users over a growing set of computing and storage resources spanning the Nordic countries Denmark, Finland, Norway and Sweden. A small group of scientists has already been using NorduGrid as their daily computing utility. In the near future we expect rapid growth in both the number of active users and the available resources, thanks to the recently launched Nordic Grid projects. We report on the present status and short-term plans of the Nordic Grid infrastructure and describe the available and foreseen resources, the Grid services and our forming user base.
IEEE Internet Computing | 2003
Paula Eerola; Balazs Konya; Oxana Smirnova; T. Ekelof; M. Ellert; John Renner Hansen; Jakob Langgaard Nielsen; Anders Wäänänen; Aleksandr Konstantinov; F. Ould-Saada
Innovative middleware solutions are key to the NorduGrid testbed, which spans academic institutes and supercomputing centers throughout Scandinavia and Finland and provides continuous grid services to its users.
Parallel Computing | 2002
Anders Wäänänen; M. Ellert; Aleksandr Konstantinov; Balazs Konya; Oxana Smirnova
This document gives an overview of a Grid testbed architecture proposal for the NorduGrid project [1]. The aim of the project is to establish an inter-Nordic testbed facility for wide-area computing and data handling. The architecture is intended to define a Grid system suitable for solving data-intensive problems at the Large Hadron Collider at CERN [2]. We present the various architecture components needed for such a system, and then describe the system's dynamics by showing the task flow.
International Conference on e-Science | 2012
Cristina Aiftimiei; A Aimar; Andrea Ceccanti; Marco Cecchi; Alberto Di Meglio; F. Estrella; Patrick Fuhrmam; Emidio Giorgio; Balazs Konya; Laurence Field; J. K. Nilsen; Morris Riedel; John White
The last two decades have seen an exceptional increase in the available networking, computing and storage resources. Scientific research communities have exploited these enhanced capabilities by developing large-scale collaborations supported by distributed infrastructures. To enable the usage of such infrastructures, several middleware solutions have been created. However, because these solutions were developed separately, they have often resulted in incompatible middleware and infrastructures. The European Middleware Initiative (EMI) is a collaboration, started in 2010, among the major European middleware providers (ARC, dCache, gLite, UNICORE). It aims to consolidate and evolve the existing middleware stacks, facilitating their interoperability and their deployment on large distributed infrastructures, while establishing a sustainable model for the future maintenance and evolution of the middleware components. This paper presents the strategy followed to achieve these goals: after an analysis of the situation before EMI, an overview of the development strategy is given, followed by the most notable technical results, grouped according to the four development areas (Compute, Data, Infrastructure, Security). The rigorous process ensuring the quality of the provided software is then illustrated, followed by a description of the release process and of the relations with the user communities. The last section provides an outlook to the future, focusing on the ongoing actions aimed at the sustainability of these activities.
IEEE International Conference on eScience | 2008
Laurence Field; Sergio Andreozzi; Balazs Konya
A fundamental building block of any Grid infrastructure is the Grid information service and its information model. The information model describes the entities within the infrastructure and the relationships between them, along with their semantics. Its realization as a concrete data model defines the syntax by which these concepts can be exchanged. This data model enables consumers of information to find the information they require efficiently, and ensures agreement on meaning with the information producer. A common information model is therefore critical for the seamless interoperation of Grid infrastructures. A number of example interoperation activities are presented which highlight this point and the requirement for a common schema in general. An attempt to achieve interoperability between multiple Grid infrastructures, demonstrated at Supercomputing 2006, helped motivate work on a common schema within the Open Grid Forum. The result of this effort, GLUE 2.0, which in itself defines the current view of Grid computing, is presented.
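The modeling idea behind such a schema, typed entities with explicit relationships that both producer and consumer agree on, can be sketched as follows. The entity names echo GLUE 2.0 terminology, but the attributes and values here are simplified illustrations, not the normative GLUE 2.0 rendering.

```python
from dataclasses import dataclass, field
from typing import List

# Entities with typed attributes; relationships are explicit containment.
@dataclass
class ComputingEndpoint:
    url: str
    interface: str          # name of a job-submission interface

@dataclass
class ComputingShare:
    name: str
    max_wall_time: int      # seconds
    running_jobs: int

@dataclass
class ComputingService:
    id: str
    endpoints: List[ComputingEndpoint] = field(default_factory=list)
    shares: List[ComputingShare] = field(default_factory=list)

# Because producer and consumer share the schema, a consumer (e.g. a
# resource broker) can match job requirements against published data.
def find_shares(service: ComputingService, wall_time: int):
    return [s for s in service.shares if s.max_wall_time >= wall_time]

svc = ComputingService(
    id="urn:example:cluster1",
    endpoints=[ComputingEndpoint("https://ce.example.org/submit", "example.jobsubmit")],
    shares=[ComputingShare("short", 3600, 12), ComputingShare("long", 86400, 4)],
)
print([s.name for s in find_shares(svc, 7200)])  # shares allowing jobs of 2h or more
```

Without the shared model, each infrastructure would publish the same facts under different names and units, and the matching step above would need a translator per infrastructure.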
Parallel Computing | 2006
Paula Eerola; T. Ekelof; M. Ellert; Michael Grønager; John Renner Hansen; S. Haug; Josva Kleist; Aleksandr Konstantinov; Balazs Konya; F. Ould-Saada; Oxana Smirnova; Ferenc Szalai; Anders Wäänänen
The Advanced Resource Connector (ARC), also known as the NorduGrid middleware, is an open source software solution enabling production-quality computational and data Grids, with special emphasis on scalability, stability, reliability and performance. Since its first release in May 2002, the middleware has been deployed and used in production environments. This paper presents the future development directions and plans of the ARC middleware by outlining its software development roadmap.
International Conference on Computing in High Energy and Nuclear Physics (CHEP) | 2012
Balazs Konya; Cristina Aiftimiei; M. Cecchi; Laurence Field; P. Fuhrmann; J. K. Nilsen; John White
Scientific research communities have benefited recently from the increasing availability of computing and data infrastructures with unprecedented capabilities for large scale distributed initiatives. These infrastructures are largely defined and enabled by the middleware they deploy. One of the major issues in the current usage of research infrastructures is the need to use similar but often incompatible middleware solutions. The European Middleware Initiative (EMI) is a collaboration of the major European middleware providers ARC, dCache, gLite and UNICORE. EMI aims to: deliver a consolidated set of middleware components for deployment in EGI, PRACE and other Distributed Computing Infrastructures; extend the interoperability between grids and other computing infrastructures; strengthen the reliability of the services; establish a sustainable model to maintain and evolve the middleware; fulfil the requirements of the user communities. This paper presents the consolidation and development objectives of the EMI software stack covering the last two years. The EMI development roadmap is introduced along the four technical areas of compute, data, security and infrastructure. The compute area plan focuses on consolidation of standards and agreements through a unified interface for job submission and management, a common format for accounting, the wide adoption of GLUE schema version 2.0 and the provision of a common framework for the execution of parallel jobs. The security area is working towards a unified security model and lowering the barriers to Grid usage by allowing users to gain access with their own credentials. The data area is focusing on implementing standards to ensure interoperability with other grids and industry components and to reuse already existing clients in operating systems and open source distributions. One of the highlights of the infrastructure area is the consolidation of the information system services via the creation of a common information backbone.
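The consolidation goal in the compute area, one job description accepted by several middleware stacks, can be illustrated in miniature. The field names, the adapter functions and both output renderings below are invented for illustration; they are not the actual EMI job-submission interface or the real ARC/UNICORE formats.

```python
# Hypothetical common job description shared by all backends.
job = {
    "executable": "/usr/bin/simulate",
    "arguments": ["--events", "1000"],
    "cpu_count": 8,
    "wall_time_limit": 3600,  # seconds
}

# Per-middleware adapters render the one description into each
# backend's native form (both renderings are made up for this sketch).
def to_backend_a(job):
    return f"&(executable={job['executable']})(count={job['cpu_count']})"

def to_backend_b(job):
    return {"Executable": job["executable"], "Resources": {"CPUs": job["cpu_count"]}}

# One description, multiple backends: users write the job once, and
# interoperability is confined to the thin adapter layer.
print(to_backend_a(job))
print(to_backend_b(job)["Resources"]["CPUs"])
```

A standardized submission interface pushes this adapter layer out of every user tool and into the middleware itself, which is the consolidation the roadmap describes.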
Journal of Physics: Conference Series 119, 062009 (Proceedings of CHEP07) | 2008
Sergio Andreozzi; S. Burke; Laurence Field; Balazs Konya
A key advantage of Grid systems is the ability to share heterogeneous resources and services across traditional administrative and organizational domains. This ability enables virtual pools of resources to be created and assigned to groups of users. Resource awareness, the capability of users or user agents to have knowledge about the existence and state of resources, is required in order to utilize a resource. This awareness requires a description of the services and resources, typically defined via a community-agreed information model. One of the most popular information models, used by a number of Grid infrastructures, is the GLUE Schema, which provides a common language for describing Grid resources. Other approaches exist; however, they follow different modeling strategies. The presence of different flavors of information models for Grid resources is a barrier to inter-Grid interoperability. To solve this problem, the GLUE Working Group was started in the context of the Open Grid Forum. The purpose of the group is to oversee a major redesign of the GLUE Schema, taking into account the successful modeling choices and flaws that have emerged from practical experience, as well as modeling choices from other initiatives. In this paper, we present the status of the new model for describing computing resources as the first output from the working group, with the aim of dissemination and of soliciting feedback from the community.