Network

Latest external collaborations at the country level.

Hotspot

Research topics where G. Grosdidier is active.

Publication


Featured research published by G. Grosdidier.


Journal of Physics: Conference Series | 2010

Data management in EGEE

Ákos Frohner; Jean-Philippe Baud; Rosa Maria Garcia Rioja; G. Grosdidier; Rémi Mollon; David Smith; Paolo Tedesco

Data management is one of the cornerstones of the distributed production computing environment that the EGEE project aims to provide for an e-Science infrastructure. We have designed and implemented a set of services and client components addressing the diverse requirements of all user communities. The LHC experiments, as the main users, will generate and distribute approximately 15 PB of data per year worldwide using this infrastructure. Another key user community, the biomedical projects, has strict security requirements with less emphasis on data volume. We maintain three service groups for grid data management: the Disk Pool Manager (DPM) Storage Element (with more than 100 instances deployed worldwide), the LCG File Catalogue (LFC), and the File Transfer Service (FTS), which sustains an aggregate transfer rate of 1.5 GB/s. They are complemented by individual client components as well as tools that help coordinate more complex use cases involving multiple services (GFAL-client, lcg_util, eds-cli). In this paper we show how these services, which keep clean, standard interfaces among one another, can work together to cover the data flow, and how they can be used as individual components to meet diverse requirements. We also describe areas that we are considering for further improvement, in both performance and functionality.
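The interplay of catalogue, storage, and transfer roles described above can be pictured with a small model. The sketch below is purely illustrative: the class and method names (ReplicaCatalogue, TransferQueue, and the placeholder URLs) are hypothetical stand-ins for the LFC, DPM, and FTS roles, not the project's actual APIs.

# Illustrative model of the EGEE data-management flow (hypothetical API,
# loosely mirroring the LFC / DPM / FTS roles described in the abstract).

class ReplicaCatalogue:
    """Maps a logical file name (LFN) to its physical replicas (SURLs),
    in the spirit of the LCG File Catalogue (LFC)."""
    def __init__(self):
        self._replicas = {}

    def register(self, lfn, surl):
        self._replicas.setdefault(lfn, []).append(surl)

    def lookup(self, lfn):
        return self._replicas.get(lfn, [])


class TransferQueue:
    """Queues third-party copies between storage elements, in the spirit
    of the File Transfer Service (FTS)."""
    def __init__(self):
        self._jobs = []

    def submit(self, source_surl, dest_surl):
        job_id = len(self._jobs)
        self._jobs.append((source_surl, dest_surl))
        return job_id


# A file produced at one site is registered, then replicated to another.
catalogue = ReplicaCatalogue()
catalogue.register("lfn:/grid/lhcb/run1234/raw.dat",
                   "srm://dpm.site-a.example/lhcb/raw.dat")

source = catalogue.lookup("lfn:/grid/lhcb/run1234/raw.dat")[0]
queue = TransferQueue()
job = queue.submit(source, "srm://dpm.site-b.example/lhcb/raw.dat")
print("transfer job", job, "queued from", source)

The point of the model is the separation of concerns the abstract describes: the catalogue only resolves names, the storage elements only hold replicas, and the transfer queue only moves data between them.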


IEEE Conference on Mass Storage Systems and Technologies | 2007

Storage Resource Managers: Recent International Experience on Requirements and Multiple Co-Operating Implementations

Lana Abadie; Paolo Badino; J.-P. Baud; Ezio Corso; M. Crawford; S. De Witt; Flavia Donno; A. Forti; Ákos Frohner; Patrick Fuhrmann; G. Grosdidier; Junmin Gu; Jens Jensen; B. Koblitz; Sophie Lemaitre; Maarten Litmaath; D. Litvinsev; G. Lo Presti; L. Magnoni; T. Mkrtchan; Alexander Moibenko; Rémi Mollon; Vijaya Natarajan; Gene Oleynik; Timur Perelmutov; D. Petravick; Arie Shoshani; Alex Sim; David Smith; M. Sponza

Storage management is one of the most important enabling technologies for large-scale scientific investigations. Having to deal with multiple heterogeneous storage and file systems is one of the major bottlenecks in managing, replicating, and accessing files in distributed environments. Storage resource managers (SRMs), named after their Web services control protocol, provide the technology needed to manage distributed data volumes that are growing rapidly as a result of faster and larger computational facilities. SRMs are grid storage services providing interfaces to storage resources, as well as advanced functionality such as dynamic space allocation and file management on shared storage systems. They call on transport services to bring files into their space transparently and provide effective sharing of files. SRMs are based on a common specification that emerged over time and evolved into an international collaboration. This approach of an open specification that various institutions can adapt to their own storage systems has proven to be a remarkable success - the challenge has been to provide a consistent, homogeneous interface to the grid while allowing sites to have diverse infrastructures. In particular, supporting optional features while preserving interoperability is one of the main challenges we describe in this paper. We also describe the use of SRM in a large international high energy physics collaboration, WLCG, to prepare for the large volume of data expected when the Large Hadron Collider (LHC) goes online at CERN. This intense collaboration has led to refinements and additional functionality in the SRM specification, and to the development of multiple interoperating implementations of SRM for various complex multi-component storage systems.
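SRM's character as a control protocol shows up in its asynchronous request pattern: a client asks the manager to stage a file, polls for completion, and only then receives a transfer URL (TURL) for the actual data movement. The sketch below models that pattern. The operation names srmPrepareToGet and srmStatusOfGetRequest come from the SRM v2.2 specification, but the MockSRM class and everything else here is a hypothetical illustration, not one of the implementations discussed in the paper.

import time

# Hypothetical in-memory stand-in for an SRM endpoint, illustrating the
# asynchronous prepare/poll pattern of the SRM v2.2 control protocol.
class MockSRM:
    def __init__(self):
        self._requests = {}

    def srmPrepareToGet(self, surl):
        # Staging from tape is slow, so the call returns a token at once.
        token = "req-%d" % len(self._requests)
        self._requests[token] = {"surl": surl, "polls_left": 2}
        return token

    def srmStatusOfGetRequest(self, token):
        req = self._requests[token]
        if req["polls_left"] > 0:
            req["polls_left"] -= 1
            return ("SRM_REQUEST_INPROGRESS", None)
        # Once the file is staged, hand back a transfer URL (TURL).
        turl = req["surl"].replace("srm://", "gsiftp://")
        return ("SRM_SUCCESS", turl)


srm = MockSRM()
token = srm.srmPrepareToGet("srm://se.example/wlcg/data/event.root")
status, turl = srm.srmStatusOfGetRequest(token)
while status == "SRM_REQUEST_INPROGRESS":
    time.sleep(0.1)  # a real client would back off between polls
    status, turl = srm.srmStatusOfGetRequest(token)
print("staged, transfer via:", turl)

The prepare/poll split is what lets one interface sit in front of very different back ends: a disk-only system can answer immediately, while a tape-backed system can take minutes, and the client code is the same in both cases.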


IEEE Conference on Mass Storage Systems and Technologies | 2007

Grid-Enabled Standards-based Data Management

Lana Abadie; Paolo Badino; Jean-Philippe Baud; James Casey; Ákos Frohner; G. Grosdidier; Sophie Lemaitre; Gavin McCance; Rémi Mollon; Krzysztof Nienartowicz; David Smith; Paolo Tedesco

The world's largest scientific machine - the Large Hadron Collider (LHC), situated outside Geneva, Switzerland - will generate some 15 PB of data per year of operation, at rates up to 1.5 GB/s to tape (in the case of the heavy-ion experiment ALICE). The processing of this data will be performed using a worldwide grid, the Worldwide LHC Computing Grid (WLCG), built on top of the Enabling Grids for E-sciencE (EGEE) and Open Science Grid (OSG) infrastructures. The LHC Computing Grid, which has offered a service for over two years now, is based upon a tier model comprising some 150 sites in tens of countries. In this paper, we describe the data management middleware stack - one of the key services provided by data grids. We give an overview of the different services implemented: a disk-based storage system that can support encryption, tools to manage the storage system and access files, the LCG File Catalogue, and the File Transfer Service. We also review the relationships between these services.
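The File Transfer Service in this stack manages each copy as a job moving through a small state machine. The sketch below models a deliberately simplified lifecycle (Submitted, Active, Done, Failed); these state names are an assumed simplification for illustration, not the service's exact schema, which also covers queueing, retries, and channel management.

from enum import Enum

# Assumed, simplified lifecycle of a managed transfer job; the real FTS
# state machine is richer (queueing, retries, per-channel scheduling).
class JobState(Enum):
    SUBMITTED = "Submitted"
    ACTIVE = "Active"
    DONE = "Done"
    FAILED = "Failed"

class TransferJob:
    def __init__(self, source, destination):
        self.source = source
        self.destination = destination
        self.state = JobState.SUBMITTED

    def start(self):
        assert self.state is JobState.SUBMITTED
        self.state = JobState.ACTIVE

    def finish(self, ok=True):
        assert self.state is JobState.ACTIVE
        self.state = JobState.DONE if ok else JobState.FAILED

# Placeholder URLs, for illustration only.
job = TransferJob("srm://tier0.example/f.root", "srm://tier1.example/f.root")
job.start()
job.finish(ok=True)
print(job.state)   # JobState.DONE

Modelling transfers as explicit jobs, rather than blocking copies, is what allows the service to schedule, throttle, and retry the sustained multi-site flows the tier model requires.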


arXiv: Programming Languages | 2012

QIRAL: A High Level Language for Lattice QCD Code Generation

Denis Barthou; G. Grosdidier; Michael Kruse; O. Pène; Claude Tadonki

Quantum chromodynamics (QCD) is the theory of subnuclear physics, aiming at modeling the strong nuclear force, which is responsible for the interactions of nuclear particles. Lattice QCD (LQCD) is the corresponding discrete formulation, widely used for simulations. The computational demand of LQCD is tremendous. It has played a role in the history of supercomputers, and has also helped define their future. Designing efficient LQCD codes that scale well on large (probably hybrid) supercomputers requires expressing many levels of parallelism and then exploring different algorithmic solutions. While algorithmic exploration is the key to efficient parallel codes, the process is hampered by the necessary coding effort. We present in this paper a domain-specific language, QIRAL, for a high-level expression of parallel algorithms in LQCD. Parallelism is expressed through the mathematical structure of the sparse matrices defining the problem. We show that from these expressions, and from algorithmic and preconditioning formulations, a parallel code can be automatically generated. This separates algorithms and mathematical formulations for LQCD (which belong to the field of physics) from the effective orchestration of parallelism, which is mainly related to compilation and optimization for parallel architectures.
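The sparse matrix at the heart of such formulations is, in the most common case, the Wilson-Dirac operator, whose nearest-neighbour stencil structure is what exposes the parallelism. As a standard illustration (not taken from the paper itself), the operator and its even-odd (Schur complement) preconditioning can be written as:

\[
D_{x,y} \;=\; \delta_{x,y} \;-\; \kappa \sum_{\mu=1}^{4}
\left[ (1-\gamma_\mu)\, U_\mu(x)\, \delta_{x+\hat\mu,\,y}
     + (1+\gamma_\mu)\, U_\mu^\dagger(x-\hat\mu)\, \delta_{x-\hat\mu,\,y} \right]
\]

Splitting lattice sites by parity, the hopping term only connects even sites to odd ones, so solving \( D\psi = \eta \) reduces to a better-conditioned system on the even sites:

\[
\hat{D}\,\psi_e \;=\; \left( 1 - \kappa^2\, D_{eo} D_{oe} \right) \psi_e
\;=\; \eta_e + \kappa\, D_{eo}\, \eta_o .
\]

Because each application of \( D \) touches only nearest neighbours, all sites of equal parity can be updated independently; this is exactly the kind of structure QIRAL lets the physicist state once and the code generator exploit across target architectures.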


Journal of Physics: Conference Series | 2010

Towards the petaflop for Lattice QCD simulations: the PetaQCD project

Jean-Christian Anglès d'Auriac; Denis Barthou; Damir Becirevic; René Bilhaut; François Bodin; Philippe Boucaud; Olivier Brand-Foissac; Jaume Carbonell; Christine Eisenbeis; Pascal Gallard; G. Grosdidier; P. Guichon; Pierre-François Honoré; Guy Le Meur; O. Pène; Louis Rilling; P. Roudeau; André Seznec; A. Stocchi; François Touze


Archive | 2013

Automated Code Generation for Lattice QCD Simulation

Denis Barthou; G. Grosdidier; Konstantin Petrov; Michael Kruse; Christine Eisenbeis; O. Pène; Olivier Brand-Foissac; Claude Tadonki; Romain Dolbeau


Nuclear Instruments & Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment | 2005

Performance of the BaBar-DIRC

Jochen Schwiening; Roy Aleksan; N. Arnaud; N. Aston; N. van Bakel; Dominique Bernard; P. Bourgeois; Francoise Brochard; D. N. Brown; Julien Chauveau; M. E. Convery; S. Emery; A. Gaidot; X. Giroux; P. Grenier; G. Grosdidier; T. Hadig; Gautier Hamel de Monchenault; B. L. Hartfiel; Andreas Hoecker; M. J. J. John; Richard W. Kadel; J. Libby; A. M. Lutz; Julie Malcles; Giampiero Mancinelli; Brian T. Meadows; K. Mishra; Dieter Muller; J. Ocariz

Collaboration


Dive into G. Grosdidier's collaborations.
