Publication


Featured research published by Laurence Field.


Journal of Physics: Conference Series | 2008

Experiences with the GLUE information schema in the LCG/EGEE production grid

S. Burke; S. Andreozzi; Laurence Field

A common information schema for the description of Grid resources and services is an essential requirement for interoperating Grid infrastructures, and its implementation interacts with every Grid component. In this context, the GLUE information schema was originally defined in 2002 as a joint project between the European DataGrid and DataTAG projects and the US iVDGL. The schema has major components to describe Computing and Storage Elements, and also generic Service and Site information. It has been used extensively in the LCG/EGEE Grid, for job submission, data management, service discovery and monitoring. In this paper we present the experience gained over the last five years, highlighting both successes and problems. In particular, we consider the importance of having a clear definition of schema attributes; the construction of standard information providers and difficulties encountered in mapping an abstract schema to diverse real systems; the configuration of publication in a way which suits system managers and the varying characteristics of Grid sites; the validation of published information; the ways in which information can be used (and misused) by Grid services and users; and issues related to managing schema upgrades in a large distributed system.


International Conference on e-Science | 2012

Towards next generations of software for distributed infrastructures: The European Middleware Initiative

Cristina Aiftimiei; A. Aimar; Andrea Ceccanti; Marco Cecchi; Alberto Di Meglio; F. Estrella; Patrick Fuhrmann; Emidio Giorgio; Balazs Konya; Laurence Field; J. K. Nilsen; Morris Riedel; John White

The last two decades have seen an exceptional increase in the available networking, computing and storage resources. Scientific research communities have exploited these enhanced capabilities by developing large-scale collaborations, supported by distributed infrastructures. To enable the usage of such infrastructures, several middleware solutions have been created. However, having been developed separately, these solutions have often resulted in incompatible middleware and infrastructures. The European Middleware Initiative (EMI) is a collaboration, started in 2010, among the major European middleware providers (ARC, dCache, gLite, UNICORE), aiming to consolidate and evolve the existing middleware stacks, facilitating their interoperability and their deployment on large distributed infrastructures, while establishing a sustainable model for the future maintenance and evolution of the middleware components. This paper presents the strategy followed to achieve these goals: after an analysis of the situation before EMI, an overview of the development strategy is given, followed by the most notable technical results, grouped according to the four development areas (Compute, Data, Infrastructure, Security). The rigorous process ensuring the quality of the provided software is then illustrated, followed by a description of the release process and of the relations with the user communities. The last section provides an outlook on the future, focusing on the ongoing actions aimed at the sustainability of activities.


Journal of Grid Computing | 2009

Grid Deployment Experiences: Grid Interoperation

Laurence Field; Erwin Laure; Markus Schulz

Over recent years a number of Grid projects have emerged which have built Grid infrastructures that are now the computing backbones for various user communities. A significant number of these communities are limited to one Grid Infrastructure due to the different middleware and operations procedures used. Grid Interoperation is trying to bridge these differences and enable Virtual Organizations to access resources independent of the Grid project affiliation. Building upon the experiences the authors have gained while working on interoperation between EGEE and various other Grid infrastructures as well as through co-chairing the Grid Interoperation Now (GIN) efforts of the Open Grid Forum (OGF), this paper gives an overview of Grid Interoperation and describes various methods that can be used to connect Grid Infrastructures. The case is made for standardization in key areas and why the Grid community should move more aggressively towards standards.


IEEE International Conference on eScience | 2008

Grid Information System Interoperability: The Need For A Common Information Model

Laurence Field; Sergio Andreozzi; Balazs Konya

A fundamental building block of any grid infrastructure is the grid information service and the information model. The information model describes the entities within the infrastructure and the relationships between them, along with their semantics. Its realization as a concrete data model defines the syntax by which these concepts can be exchanged. This data model enables consumers of information to efficiently find the information they require and ensures that there is agreement on its meaning with the information producer. A common information model is therefore critical for the seamless interoperation of grid infrastructures. A number of example interoperation activities are presented which highlight this point and the requirement for a common schema in general. An attempt to achieve interoperability between multiple grid infrastructures, which was demonstrated at Supercomputing 2006, helped motivate work on a common schema within the Open Grid Forum. The result of this effort, GLUE 2.0, which in itself defines the current view of grid computing, is presented.


Journal of Physics: Conference Series | 2008

Scalability and performance analysis of the EGEE information system

F. Ehm; Laurence Field; M. W. Schulz

Grid information systems are mission-critical components in today's production grid infrastructures. They provide detailed information about grid services which is needed for job submission, data management and general monitoring of the grid. As the number of services within these infrastructures continues to grow, it must be understood whether the current information system used in EGEE has the capacity to handle the extra load. This paper describes the current usage of the EGEE information system, obtained by monitoring the existing system. A test framework is described which simulates the existing usage patterns and can be used to measure the performance of information systems. The framework is then used to conduct tests on the existing EGEE information system components to evaluate various performance enhancements. Finally, the framework is used to simulate the performance of the information system if the existing grid were to double in size.


IEEE International Conference on eScience | 2008

Interoperability between ARC and gLite - Understanding the Grid-Job Life Cycle

Michael Grønager; D. Johansson; Josva Kleist; C. Søttrup; A. Waananen; Laurence Field; Di Qing; Kalle Happonen; T. Lindén

ARC and gLite are two of the leading production-ready Grid middleware solutions, used by thousands of researchers every day. Even though the middlewares leverage the same technologies, there are substantial architectural and implementation divergences. Today, users face difficulties when trying to cross the boundaries of the two systems: the gLite clients have so far not been capable of accessing ARC resources, and vice versa. This paper is a follow-up to an earlier proposal on how to enable interoperability between these two middlewares. Further, the paper presents a thorough walkthrough of the protocols and steps involved in the submission of a job in grids built from the two different middlewares.


Journal of Physics: Conference Series | 2008

Grid Interoperability: The Interoperations Cookbook

Laurence Field; Markus Schulz

Over recent years a number of grid projects have emerged which have built grid infrastructures that are now the computing backbones for various user communities. A significant number of these communities are limited to one grid infrastructure due to the different middleware and procedures used in each grid. Grid interoperation is trying to bridge these differences and enable virtual organizations to access resources independent of the grid project affiliation. This paper gives an overview of grid interoperation and describes the current methods used to bridge the differences between grids. Actual use cases encountered during the last three years are discussed and the most important interfaces required for interoperability are highlighted. A summary of the standardisation efforts in these areas is given and we argue for moving more aggressively towards standards.


Grid Computing | 2014

The EMI Registry: Discovering Services in a Federated World

Laurence Field; Shiraz Memon; Iván Márton; Gábor Szigeti

The Distributed Computing Infrastructure (DCI) has become an indispensable tool for scientific research. Such infrastructures are composed of many independent services that are managed by autonomous service providers. The discovery of services is therefore a primary function, which is a precursor for enabling efficient workflows that utilise multiple cooperating services. As DCIs, such as the European Grid Initiative (EGI), are based on a federated model of cooperating yet autonomous service providers, a federated approach to service discovery is required that seamlessly fits into the operational and management procedures of the infrastructure. Many existing approaches rely on a centralised service registry, which is not suited to a federated deployment and operational model. A federated service registry is therefore required that is capable of scaling to handle the number of services and discovery requests found in a production DCI. In this paper we present the EMI Registry (EMIR), a decentralised architecture that supports both hierarchical and peering topologies, enabling autonomous domains to collaborate in a federated infrastructure. An EMIR pilot service is used in order to evaluate a prototype of this architecture under real-world conditions with a geographically-dispersed deployment. The results of this initial deployment are provided along with a few performance measurements.


International Conference on Computing in High Energy and Nuclear Physics (CHEP) | 2012

Consolidation and development roadmap of the EMI middleware

Balazs Konya; Cristina Aiftimiei; M. Cecchi; Laurence Field; P. Fuhrmann; J. K. Nilsen; John White

Scientific research communities have benefited recently from the increasing availability of computing and data infrastructures with unprecedented capabilities for large scale distributed initiatives. These infrastructures are largely defined and enabled by the middleware they deploy. One of the major issues in the current usage of research infrastructures is the need to use similar but often incompatible middleware solutions. The European Middleware Initiative (EMI) is a collaboration of the major European middleware providers ARC, dCache, gLite and UNICORE. EMI aims to: deliver a consolidated set of middleware components for deployment in EGI, PRACE and other Distributed Computing Infrastructures; extend the interoperability between grids and other computing infrastructures; strengthen the reliability of the services; establish a sustainable model to maintain and evolve the middleware; fulfil the requirements of the user communities. This paper presents the consolidation and development objectives of the EMI software stack covering the last two years. The EMI development roadmap is introduced along the four technical areas of compute, data, security and infrastructure. The compute area plan focuses on consolidation of standards and agreements through a unified interface for job submission and management, a common format for accounting, the wide adoption of GLUE schema version 2.0 and the provision of a common framework for the execution of parallel jobs. The security area is working towards a unified security model and lowering the barriers to Grid usage by allowing users to gain access with their own credentials. The data area is focusing on implementing standards to ensure interoperability with other grids and industry components and to reuse already existing clients in operating systems and open source distributions. One of the highlights of the infrastructure area is the consolidation of the information system services via the creation of a common information backbone.


Journal of Physics: Conference Series | 2010

The impact and adoption of GLUE 2.0 in the LCG/EGEE production Grid

S. Burke; Sergio Andreozzi; Flavia Donno; Felix Ehm; Laurence Field; Maarten Litmaath; Paul Millar

The GLUE information schema has been in use in the LCG/EGEE production Grid since the first version was defined in 2002. In 2007 a major redesign of GLUE, version 2.0, was started in the context of the Open Grid Forum following the creation of the GLUE Working Group. This process has taken input from a number of Grid projects, but as a major user of the version 1 schema LCG/EGEE has had a strong interest that the new schema should support its needs. In this paper we discuss the structure of the new schema in the light of the LCG/EGEE requirements and explain how they are met, and where improvements have been achieved compared with the version 1 schema. In particular we consider some difficulties encountered in recent extensions of the use of the version 1 schema to aid resource accounting in LCG, to enable the use of the SRM version 2 storage protocol by the LHC experiments, and to publish information about a wider range of services to improve service discovery. We describe how these can be better met by the new schema, and we also discuss the way in which the transition to the new schema is being managed.

Collaboration

Top co-authors of Laurence Field:

S. Burke (Rutherford Appleton Laboratory)

Erwin Laure (Royal Institute of Technology)

Michael Grønager (Helsinki Institute of Physics)

Morris Riedel (Forschungszentrum Jülich)