Tanya Levshina
Fermilab
Publications
Featured research published by Tanya Levshina.
grid computing | 2011
Mine Altunay; P. Avery; K. Blackburn; Brian Bockelman; M. Ernst; Dan Fraser; Robert Quick; Robert Gardner; Sebastien Goasguen; Tanya Levshina; Miron Livny; John McGee; Doug Olson; R. Pordes; Maxim Potekhin; Abhishek Singh Rana; Alain Roy; Chander Sehgal; I. Sfiligoi; Frank Wuerthwein
This article describes the Open Science Grid, a large distributed computational infrastructure in the United States which supports many different high-throughput scientific applications, and partners (federates) with other infrastructures nationally and internationally to form multi-domain integrated distributed systems for science. The Open Science Grid consortium not only provides services and software to an increasingly diverse set of scientific communities, but also fosters a collaborative team of practitioners and researchers who use, support, and advance the state of the art in large-scale distributed computing. The scale of the infrastructure can be expressed by the daily throughput of around seven hundred thousand jobs, just under a million hours of computing, a million file transfers, and half a petabyte of data movement. In this paper we introduce and reflect on some of the OSG's capabilities, usage, and activities.
ieee nuclear science symposium | 2009
Garhan Attebury; Andrew Baranovski; K. Bloom; Brian Bockelman; D. Kcira; J. Letts; Tanya Levshina; Carl Lundestedt; Terrence Martin; Will Maier; Haifeng Pi; Abhishek Singh Rana; I. Sfiligoi; Alexander Sim; M. Thomas; Frank Wuerthwein
Data distribution, storage, and access are essential to CPU-intensive and data-intensive high-performance Grid computing. A newly emerged file system, the Hadoop distributed file system (HDFS), is deployed and tested within the Open Science Grid (OSG) middleware stack. Efforts have been made to integrate HDFS with other Grid tools to build a complete service framework for the Storage Element (SE). Scalability tests show that sustained, high inter-DataNode data transfer rates can be achieved when the cluster is fully loaded with data-processing jobs. WAN transfer into HDFS, supported by BeStMan and tuned GridFTP servers, demonstrates the scalability and robustness of the system. The Hadoop client can be deployed on interactive machines to support remote data access. The ability to automatically replicate precious data is especially important for computing sites, as demonstrated at the Large Hadron Collider (LHC) computing centers. The simplicity of operating an HDFS-based SE significantly reduces the cost of ownership of petabyte-scale data storage relative to alternative solutions.
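The automatic replication mentioned above can be illustrated with a short sketch. The following Python fragment, which assumes a node with the standard `hdfs dfs` command-line tools installed, raises and then reports the replication factor of a dataset; the path and replication factor are hypothetical examples, not values from the paper.

```python
#!/usr/bin/env python3
"""Illustrative sketch: raise the HDFS replication factor for a "precious"
dataset and confirm it, using the standard `hdfs dfs` command-line tools.
The path and replication factor are made-up examples."""
import subprocess

PRECIOUS_PATH = "/store/user/precious-dataset"   # hypothetical HDFS path
TARGET_REPLICAS = 3                              # illustrative replication factor

def set_replication(path: str, replicas: int) -> None:
    # `hdfs dfs -setrep -w N <path>` sets the factor and waits for re-replication.
    subprocess.run(
        ["hdfs", "dfs", "-setrep", "-w", str(replicas), path],
        check=True,
    )

def current_replication(path: str) -> str:
    # `hdfs dfs -stat %r <path>` prints the current replication factor.
    out = subprocess.run(
        ["hdfs", "dfs", "-stat", "%r", path],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    set_replication(PRECIOUS_PATH, TARGET_REPLICAS)
    print(f"{PRECIOUS_PATH} now has replication factor {current_replication(PRECIOUS_PATH)}")
```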
arXiv: High Energy Physics - Experiment | 2013
A. Avetisyan; Saptaparna Bhattacharya; M. Narain; S. Padhi; Jim Hirschauer; Tanya Levshina; Patricia McBride; Chander Sehgal; Marko Slyz; Mats Rynge; Sudhir Malik; J. Stupak
Snowmass is a US long-term planning study for the high-energy physics community organized by the American Physical Society's Division of Particles and Fields. For its simulation studies, opportunistic resources are harnessed using the Open Science Grid infrastructure. The late-binding grid technology GlideinWMS was used for distributed scheduling of the simulation jobs across many sites, mainly in the US. The pilot infrastructure also uses the Parrot mechanism to dynamically access CVMFS in order to provide a homogeneous environment across the nodes. This report presents the resource usage and the storage model used for simulating the large-statistics Standard Model backgrounds needed for Snowmass Energy Frontier studies.
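As a rough illustration of the pilot-side mechanism described above, the sketch below wraps a payload command in `parrot_run` (from cctools) so that `/cvmfs` paths resolve on worker nodes that lack a local CVMFS mount. The repository path and payload command are hypothetical, and the actual GlideinWMS wrapper logic differs; this is a minimal sketch of the idea only.

```python
#!/usr/bin/env python3
"""Illustrative sketch of a pilot-side wrapper: run a payload command under
parrot_run (from cctools) so that /cvmfs paths resolve on worker nodes that
do not have CVMFS mounted locally. The repository and payload command are
hypothetical examples."""
import subprocess
import sys

def run_under_parrot(payload_argv):
    # parrot_run intercepts the payload's file-system calls; /cvmfs/<repo>/...
    # paths are then served over HTTP from the CVMFS infrastructure.
    cmd = ["parrot_run"] + payload_argv
    return subprocess.call(cmd)

if __name__ == "__main__":
    # Hypothetical payload: source a setup script from a CVMFS software repository.
    payload = sys.argv[1:] or [
        "/bin/sh", "-c",
        "source /cvmfs/oasis.opensciencegrid.org/some-experiment/setup.sh && run_simulation",
    ]
    sys.exit(run_under_parrot(payload))
```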
grid computing | 2009
G. Garzoglio; Ian D. Alderman; Mine Altunay; Rachana Ananthakrishnan; Joe Bester; Keith Chadwick; Vincenzo Ciaschini; Yuri Demchenko; Andrea Ferraro; Alberto Forti; D.L. Groep; Ted Hesselroth; John Hover; Oscar Koeroo; Chad La Joie; Tanya Levshina; Zach Miller; Jay Packard; Håkon Sagehaug; Valery Sergeev; I. Sfiligoi; N Sharma; Frank Siebenlist; Valerio Venturi; John Weigand
In order to ensure interoperability between the middleware and authorization infrastructures used in the Open Science Grid (OSG) and the Enabling Grids for E-sciencE (EGEE) projects, an Authorization Interoperability activity was initiated in 2006. The interoperability goal was met in two phases: first, agreeing on a common authorization query interface and protocol, with an associated profile that ensures standardized use of attributes and obligations; and second, implementing, testing, and deploying middleware on OSG and EGEE that supports the interoperability protocol and profile. The activity has involved people from OSG, EGEE, the Globus Toolkit project, and the Condor project. This paper presents a summary of the agreed-upon protocol and profile and of the software components involved.
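To make the shape of such an authorization call-out concrete, here is a schematic Python sketch of the exchange: a Policy Enforcement Point sends the user's identity and VO attributes, and a Policy Decision Point returns a permit/deny decision with obligations such as a local account mapping. The class names, toy policy, and values are illustrative only; the real protocol exchanges structured XACML messages.

```python
"""Schematic sketch of an authorization call-out in the style described above.
Class and field names are illustrative; the actual protocol uses XACML."""
from dataclasses import dataclass, field

@dataclass
class AuthzRequest:
    subject_dn: str       # user's certificate DN
    vo_attributes: list   # e.g. VOMS FQANs such as "/cms/Role=production"
    resource: str         # gateway being contacted, e.g. "gridftp-server"
    action: str           # requested action, e.g. "access"

@dataclass
class AuthzDecision:
    permit: bool
    obligations: dict = field(default_factory=dict)  # e.g. local account mapping

def decide(request: AuthzRequest) -> AuthzDecision:
    # Toy policy standing in for the site decision point: permit members of a
    # known VO and oblige the gateway to map them to a shared local account.
    if any(fqan.startswith("/cms") for fqan in request.vo_attributes):
        return AuthzDecision(permit=True, obligations={"username": "cmsuser01"})
    return AuthzDecision(permit=False)

if __name__ == "__main__":
    req = AuthzRequest("/DC=org/DC=example/CN=Jane Doe",
                       ["/cms/Role=production"], "gridftp-server", "access")
    print(decide(req))
```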
Archive | 2009
G. Garzoglio; Tanya Levshina; Parag Mhashilkar; Steve Timm
The Open Science Grid offers access to hundreds of computing and storage resources via standard Grid interfaces. Before the deployment of an automated resource selection system, users had to submit jobs directly to these resources: they would manually select a resource and specify all relevant attributes in the job description prior to submitting the job. The need for human intervention in resource selection and attribute specification prevents automated job-management components from accessing OSG resources and is inconvenient for users. The Resource Selection Service (ReSS) project addresses these shortcomings. The system integrates Condor technology, for the core matchmaking service, with the gLite CEMon component, for gathering and publishing resource information in the Glue Schema format. These components communicate over secure protocols via web-services interfaces. The system is currently used in production on OSG by the DZero experiment, the Engagement Virtual Organization, and the Dark Energy Survey. It is also the resource selection service for the Fermilab Campus Grid, FermiGrid. ReSS is considered a lightweight solution to push-based workload management. This paper describes the architecture, performance, and typical usage of the system.
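The matchmaking idea at the core of ReSS can be sketched in a few lines. In the fragment below, plain dictionaries stand in for Condor ClassAds built from Glue Schema attributes published by CEMon; GlueCEPolicyMaxWallClockTime and GlueHostMainMemoryRAMSize are genuine Glue Schema attribute names, but the values and the simple matching rule are illustrative, not ReSS's actual logic.

```python
"""Minimal sketch of the matchmaking idea behind ReSS: resource descriptions
(stand-ins for Condor ClassAds built from Glue Schema attributes) are matched
against a job's requirements. Values and the matching rule are illustrative."""

# Resource "ClassAds" as plain dicts; wall-clock time in minutes, memory in MB.
resources = [
    {"Name": "site_A_ce", "GlueCEPolicyMaxWallClockTime": 2880,
     "GlueHostMainMemoryRAMSize": 2048},
    {"Name": "site_B_ce", "GlueCEPolicyMaxWallClockTime": 720,
     "GlueHostMainMemoryRAMSize": 4096},
]

# A job's requirements, expressed as minimum values for the same attributes.
job_requirements = {"GlueCEPolicyMaxWallClockTime": 1440,
                    "GlueHostMainMemoryRAMSize": 1024}

def matches(resource: dict, requirements: dict) -> bool:
    """A resource matches if it meets or exceeds every requested attribute."""
    return all(resource.get(attr, 0) >= minimum
               for attr, minimum in requirements.items())

candidates = [r["Name"] for r in resources if matches(r, job_requirements)]
print("Matching resources:", candidates)   # -> ['site_A_ce']
```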
Journal of Physics: Conference Series | 2015
B Jayatilaka; Tanya Levshina; Mats Rynge; Chander Sehgal; M Slyz
The Open Science Grid (OSG) ties together individual experiments' computing power, connecting their resources to create a large, robust computing grid. This computing infrastructure started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero. In the years since, the OSG has broadened its focus to address the needs of other US researchers and has increased delivery of Distributed High-Throughput Computing (DHTC) to users from a wide variety of disciplines via the OSG Open Facility. Presently, the Open Facility delivers about 100 million computing wall hours per year to researchers who are not already associated with the owners of the computing sites. This is accomplished primarily by harvesting and organizing the temporarily unused capacity (i.e., opportunistic cycles) from the sites in the OSG. Using these methods, OSG resource providers and scientists share computing hours with researchers in many other fields to enable their science, striving to ensure that this computing power is used with maximal efficiency. Furthermore, we believe that expanded access to DHTC is an essential tool for scientific innovation, and work continues on expanding this service.
Journal of Physics: Conference Series | 2010
G. Garzoglio; Ian D. Alderman; Mine Altunay; Rachana Ananthakrishnan; Joe Bester; Keith Chadwick; Vincenzo Ciaschini; Yuri Demchenko; Andrea Ferraro; Alberto Forti; D.L. Groep; Ted Hesselroth; John Hover; Oscar Koeroo; C La Joie; Tanya Levshina; Zachary Miller; Jay Packard; Håkon Sagehaug; I. Sfiligoi; N Sharma; S Timm; Frank Siebenlist; Valerio Venturi; J Weigand
The Open Science Grid (OSG) and the Enabling Grids for E-sciencE (EGEE) have a common security model, based on Public Key Infrastructure. Grid resources grant access to users based on their membership in a Virtual Organization (VO), rather than on personal identity. Users push VO membership information to resources in the form of identity attributes, thus declaring that the resources will be consumed on behalf of a specific group inside the organizational structure of the VO. Resources contact an access-policy repository, centralized at each site, to grant the appropriate privileges for that VO group. Before the work described in this paper, despite the commonality of the model, OSG and EGEE used different protocols for the communication between resources and the policy repositories. Hence, middleware developed for one Grid could not naturally be deployed on the other, since the authorization module of the middleware would have to be enhanced to support the other Grid's communication protocol. In addition, maintenance and support for different authorization call-out protocols represents a duplication of effort for our relatively small community. To address these issues, OSG and EGEE initiated a joint project on authorization interoperability. The project defined a common communication protocol and attribute identity profile for authorization call-outs and provided implementation and integration with major Grid middleware. The activity had resonance with middleware development communities, such as the Globus Toolkit and Condor projects, which decided to join the collaboration and contribute requirements and software. In this paper, we discuss the main elements of the profile, its implementation, and its deployment in EGEE and OSG. We focus in particular on the operations of the authorization infrastructures of both Grids.
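A minimal sketch of the attribute-push model described above, assuming a hypothetical site mapping table: the user's VOMS FQAN is parsed into a group and role, which the site's central policy repository translates into local privileges (here, a local account).

```python
"""Sketch of the model described above: the user pushes VO membership as a
VOMS FQAN, and the site consults its policy repository to translate that
group/role into local privileges. The mapping table is a hypothetical example."""

def parse_fqan(fqan: str):
    """Split an FQAN into its VO group path and optional Role."""
    parts = fqan.split("/Role=")
    group = parts[0]
    role = parts[1] if len(parts) > 1 else None
    return group, role

# Hypothetical site policy: (group, role) -> local account to run as.
site_mapping = {
    ("/atlas/usatlas", "production"): "usatlas_prod",
    ("/atlas/usatlas", None): "usatlas_user",
}

def map_to_local_account(fqan: str) -> str:
    group, role = parse_fqan(fqan)
    try:
        return site_mapping[(group, role)]
    except KeyError:
        raise PermissionError(f"No site policy for {fqan}")

print(map_to_local_account("/atlas/usatlas/Role=production"))  # -> usatlas_prod
```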
Journal of Physics: Conference Series | 2011
G. Garzoglio; Joe Bester; Keith Chadwick; D Dykstra; D.L. Groep; J Gu; Ted Hesselroth; Oscar Koeroo; Tanya Levshina; S Martin; Mischa Sallé; N Sharma; A Sim; S Timm; A Verstegen
The Authorization Interoperability activity was initiated in 2006 to foster interoperability between the middleware and authorization infrastructures deployed in the Open Science Grid (OSG) and the Enabling Grids for E-sciencE (EGEE) projects. This activity delivered a common authorization protocol and a set of libraries that implement that protocol. In addition, a set of the most common Grid gateways, or Policy Enforcement Points (the Globus Toolkit v4 Gatekeeper, GridFTP, dCache, etc.), and site authorization services, or Policy Decision Points (LCAS/LCMAPS, SCAS, GUMS, etc.), have been integrated with these libraries. At this time, various software providers, including the Globus Toolkit v5, BeStMan, and the Site AuthoriZation service (SAZ), are integrating the authorization interoperability protocol with their products. In addition, as more and more software supports the same protocol, the community is converging on LCMAPS as a common module for identity-attribute parsing and authorization call-outs. This paper presents this effort, discusses the status of adoption of the common protocol, and projects the community's work on authorization into the near future.
Journal of Physics: Conference Series | 2011
A Amin; Brian Bockelman; J. Letts; Tanya Levshina; T Martin; Haifeng Pi; I. Sfiligoi; M. Thomas; Frank Wuerthwein
The Hadoop distributed file system (HDFS) has become increasingly popular in recent years as a key building block of integrated grid storage solutions in the field of scientific computing. Wide Area Network (WAN) data transfer is one of the important data operations that large high-energy physics experiments rely on to manage, share, and process petabyte-scale datasets in a highly distributed grid computing environment. In this paper, we present our experience with high-throughput WAN data transfer to an HDFS-based Storage Element. Two protocols, GridFTP and Fast Data Transfer (FDT), are used to characterize the network performance of WAN data transfer.
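As an illustration of how such a WAN transfer might be driven and measured, the sketch below invokes the GridFTP client globus-url-copy with parallel TCP streams and reports the achieved throughput. The endpoints, stream count, and file size are hypothetical, and FDT would be invoked analogously through its own client; this is not the paper's actual test harness.

```python
#!/usr/bin/env python3
"""Illustrative sketch: drive a WAN transfer to an HDFS-backed Storage Element
through a GridFTP client (globus-url-copy) with parallel streams and report the
achieved throughput. Endpoints, stream count, and file size are hypothetical."""
import subprocess
import time

SRC = "gsiftp://source.example.org/data/testfile"       # hypothetical source URL
DST = "gsiftp://se.example.org/mnt/hadoop/testfile"     # hypothetical HDFS-backed SE
FILE_SIZE_BYTES = 10 * 1024**3                          # assume a 10 GB test file
PARALLEL_STREAMS = 8                                    # -p: parallel TCP streams

start = time.time()
subprocess.run(
    ["globus-url-copy", "-p", str(PARALLEL_STREAMS), "-fast", SRC, DST],
    check=True,
)
elapsed = time.time() - start
print(f"Throughput: {FILE_SIZE_BYTES * 8 / elapsed / 1e9:.2f} Gb/s over {elapsed:.0f} s")
```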
Journal of Physics: Conference Series | 2010
G. Garzoglio; Nanbor Wang; I. Sfiligoi; Tanya Levshina; Balamurali Ananthan
Grids enable uniform access to resources by implementing standard interfaces to resource gateways. In the Open Science Grid (OSG), privileges are granted on the basis of a user's membership in a Virtual Organization (VO). However, individual Grid sites are solely responsible for determining and controlling access privileges to their resources. While this guarantees that sites retain full control over access rights, it often leads to heterogeneous VO privileges throughout the Grid and hardly fits the Grid paradigm of uniform access to resources. To address these challenges, we developed the Scalable Virtual Organization Privileges Management Environment (SVOPME), which provides tools for VOs to define, publish, and verify desired privileges. Moreover, SVOPME provides tools for Grid sites to analyze site access policies for various resources, verify compliance with preferred VO policies, and generate directives for site administrators on how local access policies can be amended to achieve such compliance without taking control of local configurations away from site administrators. This paper describes how SVOPME implements privilege-management tools for the OSG and our experience in deploying and running the tools in a test bed. Finally, we outline our plan to continue improving SVOPME and to have it included in standard Grid software distributions.
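The compliance check at the heart of a tool like SVOPME can be sketched as a comparison between a VO's published desired privileges and the site's actual policy, emitting a directive for each gap. The policy representation and directive wording below are invented for illustration and are not SVOPME's actual formats.

```python
"""Sketch of a VO-vs-site policy compliance check in the spirit of SVOPME.
The policy keys, values, and directive wording are hypothetical examples."""

# Hypothetical desired privileges published by a VO.
vo_desired = {
    "run_jobs": True,
    "outbound_network": True,
    "local_scratch_gb": 20,
}

# Hypothetical policy currently in force at a site.
site_policy = {
    "run_jobs": True,
    "outbound_network": False,
    "local_scratch_gb": 10,
}

def compliance_directives(desired: dict, actual: dict):
    """Yield a directive for every desired privilege the site does not meet."""
    for key, wanted in desired.items():
        granted = actual.get(key)
        if isinstance(wanted, bool) and wanted and not granted:
            yield f"Enable '{key}' for this VO."
        elif isinstance(wanted, (int, float)) and (granted or 0) < wanted:
            yield f"Raise '{key}' from {granted} to at least {wanted}."

for directive in compliance_directives(vo_desired, site_policy):
    print(directive)
```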