M. Ernst
Brookhaven National Laboratory
Publications
Featured research published by M. Ernst.
grid computing | 2011
Mine Altunay; P. Avery; K. Blackburn; Brian Bockelman; M. Ernst; Dan Fraser; Robert Quick; Robert Gardner; Sebastien Goasguen; Tanya Levshina; Miron Livny; John McGee; Doug Olson; R. Pordes; Maxim Potekhin; Abhishek Singh Rana; Alain Roy; Chander Sehgal; I. Sfiligoi; Frank Wuerthwein
This article describes the Open Science Grid, a large distributed computational infrastructure in the United States which supports many different high-throughput scientific applications, and partners (federates) with other infrastructures nationally and internationally to form multi-domain integrated distributed systems for science. The Open Science Grid consortium not only provides services and software to an increasingly diverse set of scientific communities, but also fosters a collaborative team of practitioners and researchers who use, support and advance the state of the art in large-scale distributed computing. The scale of the infrastructure can be expressed by the daily throughput of around seven hundred thousand jobs, just under a million hours of computing, a million file transfers, and half a petabyte of data movement. In this paper we introduce and reflect on some of the OSG capabilities, usage and activities.
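Taking the rounded daily figures quoted above at face value, a quick back-of-envelope calculation (illustrative only, using the abstract's own numbers) gives the implied averages:

```python
# Back-of-envelope figures implied by the daily throughput quoted above
# (rounded values from the abstract; purely illustrative).
jobs_per_day = 700_000
cpu_hours_per_day = 1_000_000
file_transfers_per_day = 1_000_000
data_moved_tb_per_day = 500          # half a petabyte

print(f"average job length : {cpu_hours_per_day / jobs_per_day:.1f} CPU hours")
print(f"average transfer   : {data_moved_tb_per_day * 1e6 / file_transfers_per_day:.0f} MB")
print(f"sustained data rate: {data_moved_tb_per_day * 1e12 / 86400 / 1e9:.1f} GB/s")
```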
Archive | 2004
Jon Bakken; I. Fisk; Patrick Fuhrmann; Tigran Mkrtchyan; Timur Perelmutov; D. Petravick; M. Ernst
The LHC needs to achieve reliable, high-performance access to vastly distributed storage resources across the network. USCMS has worked with Fermilab-CD and DESY-IT on a storage service that was deployed at several sites. It provides Grid access to heterogeneous mass storage systems and synchronization between them. It increases resiliency by insulating clients from storage and network failures, and facilitates file sharing and network traffic shaping. This new storage service is implemented as a Grid Storage Element (SE). It consists of dCache, jointly developed by DESY and Fermilab, as the core storage system and an implementation of the Storage Resource Manager (SRM), which together allow both local and Grid-based access to the mass storage facilities. It provides advanced methods for accessing and distributing collaboration data. USCMS is using this system both as a Disk Resource Manager at the Tier-1 center and at multiple Tier-2 sites, and as a Hierarchical Resource Manager with Enstore as tape back-end at the Fermilab CMS Tier-1 center. It is used to provide shared managed disk pools at sites for streaming data between the CERN Tier-0, the Fermilab Tier-1 and U.S. Tier-2 centers. Applications can reserve space for a time period, ensuring space availability when the application runs. Worker nodes without WAN connectivity can trigger file replication from a central repository to the local SE and then access data using POSIX-like file system semantics via the LAN. Moving the SE functionality off the worker nodes reduces load and improves reliability of the compute farm elements significantly.
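The following is a minimal Python sketch of the workflow the abstract describes (reserve space, replicate a file to the local SE, read it via POSIX-like semantics on the LAN). The srm_* helpers, URLs, and paths are hypothetical placeholders, not the actual dCache/SRM client API.

```python
# Illustrative sketch only: reserve space via SRM, replicate a file to the
# local SE, then read it through POSIX-like semantics on the LAN.
# The srm_* helpers and all URLs/paths are hypothetical placeholders.
from dataclasses import dataclass
import time


@dataclass
class SpaceToken:
    token: str
    bytes_reserved: int
    expires_at: float


def srm_reserve_space(se_url: str, size_bytes: int, lifetime_s: int) -> SpaceToken:
    """Hypothetical: ask the SRM to guarantee space for a time window."""
    return SpaceToken(token="T-1234", bytes_reserved=size_bytes,
                      expires_at=time.time() + lifetime_s)


def srm_copy(src_url: str, dst_url: str, space: SpaceToken) -> None:
    """Hypothetical: third-party transfer from a central repository to the local SE."""
    print(f"copying {src_url} -> {dst_url} (space token {space.token})")


def run_job():
    se = "srm://se.example.org:8443/pnfs/example.org/data"
    space = srm_reserve_space(se, size_bytes=2 * 1024**3, lifetime_s=6 * 3600)
    srm_copy("srm://tier1.example.org:8443/store/run123/file.root",
             f"{se}/run123/file.root", space)
    # A worker node without WAN connectivity then reads the replica via the
    # LAN-mounted, POSIX-like path (placeholder path shown).
    with open("/pnfs/example.org/data/run123/file.root", "rb") as f:
        return f.read(1024)
```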
Physics Letters B | 1997
Ingo Bojak; M. Ernst
It has been observed recently that a consistent LO BFKL gluon evolution leads to a steep growth of F2(x, Q2) for x → 0 almost independently of Q2. We show that current data from the DESY HERA collider are precise enough to finally rule out a pure BFKL behaviour in the accessible small-x region. Several attempts have been made by other groups to treat the BFKL-type small-x resummations instead as additions to the conventional anomalous dimensions of the successful renormalization group “Altarelli-Parisi” equations. We demonstrate that all presently available F2 data, in particular at lower values of Q2, cannot be described using the presently known NLO (two-loop consistent) small-x resummations. Finally we comment on the common reason for the failure of these BFKL-inspired methods, which result, in general, in too steep x dependencies as x → 0.
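For orientation, the steep growth referred to here is, schematically, the leading-order BFKL (hard pomeron) power behaviour; the numerical value below assumes a fixed coupling of roughly α_s ≈ 0.2 and is meant only as a reminder of the expected size of the slope.

```latex
% Schematic LO BFKL small-x behaviour (fixed coupling assumed)
F_2(x,Q^2) \;\sim\; x^{-\lambda}, \qquad
\lambda \;=\; \frac{4 N_c \ln 2}{\pi}\,\alpha_s
        \;=\; \frac{12 \ln 2}{\pi}\,\alpha_s \;\approx\; 0.5
\quad \text{for } \alpha_s \simeq 0.2 .
```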
Nuclear Physics | 1997
Ingo Bojak; M. Ernst
We discuss several methods of calculating the DIS structure functions F2(x,Q2) based on BFKL-type small-x resummations. Taking into account new HERA data ranging down to small x and low Q2, the pure leading-order BFKL-based approach is excluded. Other methods based on high energy factorization are closer to conventional renormalization group equations. Despite several difficulties and ambiguities in combining the renormalization group equations with small-x resummed terms, we find that a fit to the current data is hardly feasible, since the data in the low-Q2 region are not as steep as the BFKL formalism predicts. Thus we conclude that deviations from the (successful) renormalization group approach towards summing up logarithms in 1/x are disfavoured by experiment.
Physical Review D | 1996
Ingo Bojak; M. Ernst
The BFKL equation and the kT-factorization theorem are used to obtain predictions for F2 in the small Bjorken-x region over a wide range of Q2. The dependence on the parameters, especially on those concerning the infrared region, is discussed. After a background fit to recent experimental data obtained at DESY HERA and at Fermilab (E665 experiment), we find that the predicted, almost Q2-independent BFKL slope λ ≳ 0.5 appears to be too steep at lower Q2 values. Thus there seems to be a chance that future HERA data can distinguish between pure BFKL and conventional field theoretic renormalization group approaches.
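Schematically, the kT-factorization formula used in such calculations convolutes an unintegrated (BFKL-evolved) gluon density with a photon-gluon impact factor; the notation below is generic and not necessarily the authors' exact convention or normalization.

```latex
% Schematic k_T-factorization form of F_2 (generic normalization)
F_2(x,Q^2) \;=\; \int \frac{\mathrm{d}k_T^{2}}{k_T^{2}}
\int_x^1 \frac{\mathrm{d}z}{z}\;
\hat{F}_2^{\,\gamma^* g}\!\left(z, k_T^{2}, Q^{2}\right)\,
f\!\left(\tfrac{x}{z}, k_T^{2}\right),
```

where f is the unintegrated gluon density and F̂2^(γ*g) is the photon-gluon fusion impact factor (the quark box).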
Journal of Physics: Conference Series | 2010
S. Panitkin; M. Ernst; R Petkus; O Rind; Torre Wenaus
Solid state drives (SSDs) are a promising storage technology for high energy physics parallel analysis farms. Their combination of low random access time and relatively high read speed is very well suited for situations where multiple jobs concurrently access data located on the same drive. They also have lower energy consumption and higher vibration tolerance than hard disk drives (HDDs), which makes them an attractive choice in many applications ranging from personal laptops to large analysis farms. The Parallel ROOT Facility (PROOF) is a distributed analysis system which allows one to exploit the inherent event-level parallelism of high energy physics data. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the I/O performance of the local disk subsystem becomes a critical factor, especially when computing nodes use multi-core CPUs. We will discuss our experience with SSDs in a PROOF environment. We will compare the performance of HDDs with SSDs in I/O-intensive analysis scenarios. In particular, we will discuss how PROOF system performance scales with the number of simultaneously running analysis jobs.
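Below is a minimal sketch of the kind of concurrent random-read microbenchmark one might use to compare SSD and HDD behaviour as the number of simultaneous readers grows. It is not the authors' benchmark; the test file path and read sizes are placeholders.

```python
# Minimal sketch of a concurrent random-read microbenchmark: measure
# aggregate read throughput as the number of simultaneous readers grows.
# The test file path and sizes are placeholders, not the authors' setup.
import os
import random
import time
from multiprocessing import Pool

TEST_FILE = "/data/testfile.bin"   # pre-created large file on the drive under test
READ_SIZE = 64 * 1024              # 64 kB random reads
READS_PER_WORKER = 2000


def worker(_):
    """Perform random reads and return the achieved throughput in MB/s."""
    size = os.path.getsize(TEST_FILE)
    fd = os.open(TEST_FILE, os.O_RDONLY)
    start = time.time()
    for _ in range(READS_PER_WORKER):
        offset = random.randrange(0, size - READ_SIZE)
        os.pread(fd, READ_SIZE, offset)
    elapsed = time.time() - start
    os.close(fd)
    return READS_PER_WORKER * READ_SIZE / elapsed / 1e6


if __name__ == "__main__":
    for njobs in (1, 2, 4, 8, 16):
        with Pool(njobs) as pool:
            rates = pool.map(worker, range(njobs))
        print(f"{njobs:3d} concurrent readers: {sum(rates):7.1f} MB/s aggregate")
```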
Journal of Physics: Conference Series | 2012
L. A. T. Bauerdick; M. Ernst; Dan Fraser; Miron Livny; R. Pordes; Chander Sehgal; F. Würthwein
As it enters adolescence, the Open Science Grid (OSG) is bringing a maturing fabric of Distributed High Throughput Computing (DHTC) services that supports an expanding HEP community to an increasingly diverse spectrum of domain scientists. Working closely with researchers on campuses throughout the US and in collaboration with national cyberinfrastructure initiatives, we transform their computing environment through new concepts, advanced tools and deep experience. We discuss examples of these, including: the pilot-job overlay concepts and technologies now in use throughout OSG and delivering 1.4 million CPU hours per day; the role of campus infrastructures, built out from concepts of sharing across multiple local faculty clusters (already put to good use by many of the HEP Tier-2 sites in the US); the work towards the use of clouds and access to high-throughput parallel (multi-core and GPU) compute resources; and the progress we are making towards meeting the data management and access needs of non-HEP communities with general tools derived from the experience of the parochial tools in HEP (integration of Globus Online, prototyping with iRODS, investigations into wide-area Lustre). We will also review our activities and experiences as an HTC Service Provider to the recently awarded NSF XD XSEDE project, the evolution of the US NSF TeraGrid project, and how we are extending the reach of HTC through this activity to the increasingly broad national cyberinfrastructure. We believe that a coordinated view of the HPC and HTC resources in the US will further expand their impact on scientific discovery.
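The pilot-job overlay idea mentioned above can be sketched very simply: a generic pilot lands in a batch slot and repeatedly pulls real user payloads from a central task queue until its allocated wall time runs out. The queue interface and payloads in this Python sketch are hypothetical, not the actual OSG pilot infrastructure.

```python
# Minimal sketch of a pilot-job overlay: a generic pilot occupies a batch
# slot and pulls user payloads from a central queue until its lease expires.
# The queue interface and payloads are hypothetical stand-ins.
import subprocess
import time
from queue import Queue, Empty

PILOT_WALLTIME_S = 4 * 3600  # lease granted by the local batch system


def fetch_payload(task_queue: Queue):
    """Hypothetical stand-in for asking the central factory for work."""
    try:
        return task_queue.get_nowait()
    except Empty:
        return None


def run_pilot(task_queue: Queue):
    deadline = time.time() + PILOT_WALLTIME_S
    while time.time() < deadline:
        payload = fetch_payload(task_queue)
        if payload is None:
            break                             # no more work: release the slot early
        subprocess.run(payload, check=False)  # run one user job inside this slot


if __name__ == "__main__":
    q = Queue()
    q.put(["echo", "analysis job 1"])
    q.put(["echo", "analysis job 2"])
    run_pilot(q)
```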
ieee nuclear science symposium | 2006
Abhishek Singh Rana; F. Würthwein; Timur Perelmutov; Robert Kennedy; Jon Bakken; Ted Hesselroth; I. Fisk; Patrick Fuhrmann; M. Ernst; Markus Lorch; Dane Skow
We introduce gPLAZMA (grid-aware PLuggable AuthoriZation MAnagement) for dCache/SRM in this publication. Our work is motivated by a need for fine-grained security (Role Based Access Control, or RBAC) in storage systems on global data grids, and utilizes the VOMS-extended X.509 certificate specification for defining extra attributes (FQANs), based on RFC 3281. Our implementation, gPLAZMA in dCache, introduces storage authorization callouts for SRM and GridFTP. It allows different authorization mechanisms to be used simultaneously, fine-tuned with switches and priorities among mechanisms. Of the four mechanisms currently supported, one is an integration with RBAC services in the Open Science Grid (OSG) USCMS/USATLAS Privilege Project; the others are built in as a lightweight suite of services (the gPLAZMA lite authorization services suite), including the legacy dcache.kpwd file as well as the popular grid-mapfile, augmented with a gPLAZMALite-specific RBAC mechanism. Based on our current work, we also outline a list of future tasks. This work was undertaken as a collaboration between the PPDG Common project, the OSG Privilege project, and the dCache/SRM groups at DESY, FNAL and UCSD.
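As a toy illustration of the kind of FQAN-based role mapping such an RBAC layer performs for storage callouts, the sketch below maps VOMS FQANs to roles and roles to permitted operations. The FQANs, roles, and permissions are illustrative only, not gPLAZMA's actual policy format.

```python
# Toy sketch of FQAN-based role mapping for a storage authorization callout.
# FQANs, roles, and permissions are illustrative, not gPLAZMA's real policy.

# Map VOMS FQANs (group/role attributes from the extended proxy) to roles.
FQAN_TO_ROLE = {
    "/cms/Role=production": "production",
    "/cms/Role=NULL":       "user",
    "/atlas/Role=admin":    "storage-admin",
}

# Map roles to the storage operations they may perform.
ROLE_PERMISSIONS = {
    "user":          {"read"},
    "production":    {"read", "write"},
    "storage-admin": {"read", "write", "delete", "stage"},
}


def authorize(fqans: list[str], operation: str) -> bool:
    """Grant the operation if any presented FQAN maps to a role that allows it."""
    for fqan in fqans:
        role = FQAN_TO_ROLE.get(fqan)
        if role and operation in ROLE_PERMISSIONS.get(role, set()):
            return True
    return False


print(authorize(["/cms/Role=production"], "write"))  # True
print(authorize(["/cms/Role=NULL"], "delete"))       # False
```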
Journal of Physics: Conference Series | 2010
S. Panitkin; D. Benjamin; G Carillo Montoya; K. Cranmer; M. Ernst; Wen Guan; H. Ito; T. Maeno; S. Majewski; B. Mellado; O Rind; A. Shibata; F. Tarrade; Torre Wenaus; N. Xu; S. Ye
The Parallel ROOT Facility (PROOF) is a distributed analysis system which allows one to exploit the inherent event-level parallelism of high energy physics data. PROOF can be configured to work with centralized storage systems, but it is especially effective together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. It works efficiently on different types of hardware and scales well from a multi-core laptop to large computing farms. From that point of view it is well suited for both large central analysis facilities and Tier-3-type analysis farms. PROOF can be used in interactive or batch-like regimes. The interactive regime allows the user to work with typically distributed data from the ROOT command prompt and get real-time feedback on analysis progress and intermediate results. We will discuss our experience with PROOF in the context of ATLAS Collaboration distributed analysis. In particular we will discuss PROOF performance in various analysis scenarios and in multi-user, multi-session environments. We will also describe PROOF integration with the ATLAS distributed data management system and prospects for running PROOF on geographically distributed analysis farms.
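To illustrate the interactive regime, a minimal PyROOT session of roughly the following shape opens a PROOF(-Lite) session and processes a chain with a selector. It assumes a ROOT installation with PROOF support; the tree name, file pattern, and selector are placeholders, and a real farm would be addressed by its master URL rather than "lite://".

```python
# Minimal PyROOT sketch of an interactive PROOF(-Lite) session.
# Assumes ROOT with PROOF support; names and file patterns are placeholders.
import ROOT

# Open a local PROOF-Lite session (one worker per core on this machine).
proof = ROOT.TProof.Open("lite://")

# Build a chain of input files and route its processing through PROOF.
chain = ROOT.TChain("physics")
chain.Add("user.analysis.ntuple.*.root")
chain.SetProof()

# Run a TSelector over the chain; progress is reported interactively.
chain.Process("MySelector.C+")
```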
Archive | 2015
Miron Livny; James Shank; M. Ernst; K. Blackburn; Sebastien Goasguen; Michael Tuts; Lawrence Gibbons; R. Pordes; Piotr Sliz; Ewa Deelman; William Barnett; Doug Olson; John McGee; Robert Cowles; Frank Wuerthwein; Robert Gardner; P. Avery; Shaowen Wang; David Swanson
Under this SciDAC-2 grant the project's goal was to stimulate new discoveries by providing scientists with effective and dependable access to an unprecedented national distributed computational facility: the Open Science Grid (OSG). We proposed to achieve this through the work of the Open Science Grid Consortium: a unique hands-on multi-disciplinary collaboration of scientists, software developers and providers of computing resources. Together the stakeholders in this consortium sustain and use a shared distributed computing environment that transforms simulation and experimental science in the US. The OSG consortium is an open collaboration that actively engages new research communities. We operate an open facility that brings together a broad spectrum of compute, storage, and networking resources and interfaces to other cyberinfrastructures, including the US XSEDE (previously TeraGrid) and the European Enabling Grids for E-sciencE (EGEE), as well as campus and regional grids. We leverage middleware provided by computer science groups, facility IT support organizations, and computing programs of application communities for the benefit of consortium members and the US national CI.