Publication


Featured research published by Alexander Moibenko.


IEEE Conference on Mass Storage Systems and Technologies | 2007

Storage Resource Managers: Recent International Experience on Requirements and Multiple Co-Operating Implementations

Lana Abadie; Paolo Badino; J.-P. Baud; Ezio Corso; M. Crawford; S. De Witt; Flavia Donno; A. Forti; Ákos Frohner; Patrick Fuhrmann; G. Grosdidier; Junmin Gu; Jens Jensen; B. Koblitz; Sophie Lemaitre; Maarten Litmaath; D. Litvintsev; G. Lo Presti; L. Magnoni; T. Mkrtchyan; Alexander Moibenko; Rémi Mollon; Vijaya Natarajan; Gene Oleynik; Timur Perelmutov; D. Petravick; Arie Shoshani; Alex Sim; David Smith; M. Sponza

Storage management is one of the most important enabling technologies for large-scale scientific investigations. Having to deal with multiple heterogeneous storage and file systems is one of the major bottlenecks in managing, replicating, and accessing files in distributed environments. Storage resource managers (SRMs), named after their Web services control protocol, provide the technology needed to manage the rapidly growing distributed data volumes, as a result of faster and larger computational facilities. SRMs are grid storage services providing interfaces to storage resources, as well as advanced functionality such as dynamic space allocation and file management on shared storage systems. They call on transport services to bring files into their space transparently and provide effective sharing of files. SRMs are based on a common specification that emerged over time and evolved into an international collaboration. This approach of an open specification that can be used by various institutions to adapt to their own storage systems has proven to be a remarkable success - the challenge has been to provide a consistent homogeneous interface to the grid, while allowing sites to have diverse infrastructures. In particular, supporting optional features while preserving interoperability is one of the main challenges we describe in this paper. We also describe using SRM in a large international high energy physics collaboration, called WLCG, to prepare to handle the large volume of data expected when the Large Hadron Collider (LHC) goes online at CERN. This intense collaboration led to refinements and additional functionality in the SRM specification, and the development of multiple interoperating implementations of SRM for various complex multi-component storage systems.
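
The SRM concepts the abstract mentions (dynamic space allocation, transparent staging of files into managed space, and asynchronous request handling) can be illustrated with a minimal sketch. The class and method names below are hypothetical simplifications of those ideas, not the actual SRM v2.2 interface or any real client library.

```python
# Hypothetical, simplified model of SRM-style interactions: space
# reservation, asynchronous prepare-to-get, and status polling.
# This illustrates the concepts only; it is not the SRM v2.2 protocol.

import itertools


class MiniSRM:
    """Toy storage resource manager holding a shared storage space."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.reserved = 0
        self._tokens = itertools.count(1)
        self._requests = {}  # request token -> request state

    def reserve_space(self, size_bytes):
        """Dynamic space allocation: reserve part of the shared space."""
        if self.reserved + size_bytes > self.capacity:
            raise RuntimeError("insufficient space")
        self.reserved += size_bytes
        return size_bytes

    def prepare_to_get(self, surl):
        """Ask the manager to stage a file; returns a request token."""
        token = next(self._tokens)
        self._requests[token] = {"surl": surl, "state": "QUEUED"}
        return token

    def status(self, token):
        """Poll the request; a real SRM would stage from tape or a remote site."""
        req = self._requests[token]
        if req["state"] == "QUEUED":          # pretend staging just finished
            req["state"] = "READY"
            req["turl"] = req["surl"].replace("srm://", "gsiftp://")
        return req


srm = MiniSRM(capacity_bytes=10 * 1024**4)    # 10 TiB of managed space
srm.reserve_space(2 * 1024**4)
token = srm.prepare_to_get("srm://example.site/store/run123/file.root")
print(srm.status(token))                      # state READY plus a transfer URL
```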


Broadband Communications, Networks and Systems | 2006

Lambda Station: On-Demand Flow Based Routing for Data Intensive Grid Applications Over Multitopology Networks

A. Bobyshev; M. Crawford; P. DeMar; V. Grigaliunas; M. Grigoriev; Alexander Moibenko; D. Petravick; Ron Rechenmacher; Harvey B Newman; J. Bunn; F. van Lingen; Dan Nae; Sylvain Ravot; Conrad Steenberg; Xun Su; M. Thomas; Yang Xia

Lambda Station is an ongoing project of Fermi National Accelerator Laboratory and the California Institute of Technology. The goal of this project is to design, develop and deploy network services for path selection, admission control and flow-based forwarding of traffic among data-intensive Grid applications such as those used in High Energy Physics and other communities. Lambda Station deals with the last-mile problem in local area networks, connecting production clusters through a rich array of wide area networks. Selective forwarding of traffic is controlled dynamically at the demand of applications. This paper introduces the motivation for this project, its design principles, and its current status. Integration of the Lambda Station client API with essential Grid middleware such as the dCache/SRM Storage Resource Manager is also described. Finally, the results of applying Lambda Station services to development and production clusters at Fermilab and Caltech over advanced networks such as DOE's UltraScience Net and NSF's UltraLight are covered.
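
As a rough illustration of the path selection and admission control described above, the sketch below models a site-boundary service that admits a flow onto an alternate high-bandwidth path when capacity allows and otherwise falls back to the default route. It is a hypothetical toy, not the real Lambda Station client API; the path names and numbers are made up.

```python
# Hypothetical sketch of flow-based path selection with admission control,
# in the spirit of the Lambda Station description above. The interface and
# figures are illustrative only.


class PathSelector:
    def __init__(self):
        # path name -> [capacity in Gb/s, currently admitted load in Gb/s]
        self.paths = {"default": [10.0, 0.0]}

    def add_alternate_path(self, name, capacity_gbps):
        self.paths[name] = [capacity_gbps, 0.0]

    def request_flow(self, demand_gbps, preferred="default"):
        """Admit the flow on the preferred path if it fits, else fall back."""
        for name in (preferred, "default"):
            cap, load = self.paths[name]
            if load + demand_gbps <= cap:
                self.paths[name][1] += demand_gbps
                return name                   # selective forwarding decision
        raise RuntimeError("flow rejected: no path has spare capacity")


selector = PathSelector()
selector.add_alternate_path("ultrascience-net", capacity_gbps=10.0)

# A data-intensive transfer asks for 6 Gb/s on the research path.
print(selector.request_flow(6.0, preferred="ultrascience-net"))   # ultrascience-net
# A second 6 Gb/s flow no longer fits there and falls back to the default path.
print(selector.request_flow(6.0, preferred="ultrascience-net"))   # default
```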


IEEE Conference on Mass Storage Systems and Technologies | 2003

The Fermilab data storage infrastructure

Jon Bakken; Eileen Berman; Chih-Hao Huang; Alexander Moibenko; D. Petravick; Michael Zalokar

Fermilab, in collaboration with the DESY laboratory in Hamburg, Germany, has created a petabyte-scale data storage infrastructure to meet the requirements of experiments to store and access large data sets. The Fermilab data storage infrastructure consists of the following major storage and data transfer components: the Enstore mass storage system, the dCache distributed data cache, and FTP and GridFTP, primarily for external data transfers. This infrastructure provides a data throughput sufficient for transferring data from experimental data acquisition systems. It also allows access to data in the grid framework.
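
The tiered read path implied by these components can be sketched as follows: dCache serves cached files from disk, a cache miss triggers a stage from Enstore tape, and external clients reach the data through FTP/GridFTP doors. The sketch below is an illustrative model only, with made-up paths, volume labels, and pool names; it is not Fermilab code.

```python
# Illustrative model of the tiered read path described above: dCache serves
# cached files from disk, cache misses are staged from the Enstore tape
# system, and external clients reach the data through FTP/GridFTP doors.

TAPE_LIBRARY = {"/pnfs/expt/run1/file.raw": "VOL1234"}   # Enstore: file -> tape volume
DISK_CACHE = {}                                          # dCache: file -> pool node


def read_path(path, external_client=False):
    if path not in DISK_CACHE:
        volume = TAPE_LIBRARY[path]                      # cache miss: stage from tape
        DISK_CACHE[path] = f"pool-01 (staged from {volume})"
    location = DISK_CACHE[path]
    door = "GridFTP door" if external_client else "local door"
    return f"{path} served from {location} via {door}"


print(read_path("/pnfs/expt/run1/file.raw"))                        # first read stages from tape
print(read_path("/pnfs/expt/run1/file.raw", external_client=True))  # later reads hit the cache
```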


IEEE Conference on Mass Storage Systems and Technologies | 2005

Fermilab's multi-petabyte scalable mass storage system

Gene Oleynik; Bonnie Alcorn; Wayne Baisley; Jon Bakken; David Berg; Eileen Berman; Chih-Hao Huang; Terry Jones; Robert Kennedy; A. Kulyavtsev; Alexander Moibenko; Timur Perelmutov; D. Petravick; Vladimir Podstavkov; George Szmuksta; Michael Zalokar

Fermilab provides a multi-petabyte scale mass storage system for high energy physics (HEP) experiments and other scientific endeavors. We describe the scalability aspects of the hardware and software architecture that were designed into the mass storage system to permit us to scale to multiple petabytes of storage capacity, manage tens of terabytes per day in data transfers, support hundreds of users, and maintain data integrity. We discuss in detail how we scale the system over time to meet the ever-increasing needs of the scientific community, and relate our experiences with many of the technical and economic issues related to scaling the system. Since the 2003 MSST conference, the experiments at Fermilab have generated more than 1.9 PB of additional data. We present results on how this system has scaled and performed for the Fermilab CDF and D0 Run II experiments as well as other HEP experiments and scientific endeavors.
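
To put "tens of terabytes per day" in perspective, a back-of-the-envelope calculation shows the sustained bandwidth and the rough number of tape drives such a load implies. The drive rate and utilization below are assumptions for illustration, not figures from the paper.

```python
# Back-of-the-envelope check (assumed numbers, not from the paper) of what
# "tens of terabytes per day" means for sustained bandwidth and tape drives.

daily_volume_tb = 20                     # assume 20 TB/day of transfers
drive_rate_mb_s = 30                     # assume ~30 MB/s per tape drive of that era
utilization = 0.5                        # assume drives stream data half the time

sustained_mb_s = daily_volume_tb * 1e6 / 86400
drives_needed = sustained_mb_s / (drive_rate_mb_s * utilization)

print(f"sustained rate: {sustained_mb_s:.0f} MB/s")        # ~231 MB/s
print(f"tape drives needed: {drives_needed:.1f}")          # ~15 drives
```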


Presented at 19th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2012) | 2012

Scalability and Performance Improvements in the Fermilab Mass Storage System

Matt Crawford; Catalin Dumitrescu; Dmitry Litvintsev; Alexander Moibenko; Gene Oleynik

By 2009 the Fermilab Mass Storage System had encountered two major challenges: the required amount of data stored and accessed in both tiers of the system (dCache and Enstore) had significantly increased, and the number of clients accessing the Mass Storage System had grown from tens to hundreds of nodes and from hundreds to thousands of parallel requests. To address these challenges, Enstore and the SRM part of dCache were modified to scale for performance, access rates, and capacity. This work increased the number of simultaneously processed requests in a single Enstore Library instance from about 1000 to 30000. The rates of incoming requests to Enstore increased from tens to hundreds per second. Fermilab is invested in LTO4 tape technology, and we have investigated both LTO5 and Oracle T10000C to cope with the increasing needs in capacity. We decided to adopt T10000C, mainly due to its large capacity, which allows us to scale up the existing robotic storage space by a factor of 6. This paper describes the modifications and investigations that allowed us to meet these scalability and performance challenges and provides some perspective on the Fermilab Mass Storage System.
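
The "factor of 6" follows from the native cartridge capacities of the two technologies (800 GB for LTO-4 versus 5 TB for T10000C), assuming the same number of robotic library slots holds either cartridge:

```python
# Quick check of the "factor of 6" claim from native cartridge capacities.
lto4_gb = 800      # LTO-4 native capacity
t10kc_gb = 5000    # Oracle T10000C native capacity

print(t10kc_gb / lto4_gb)   # 6.25, i.e. roughly a factor of 6 per library slot
```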


Journal of Physics: Conference Series | 2012

Enstore with Chimera namespace provider

Dmitry Litvintsev; Alexander Moibenko; Gene Oleynik; Michael Zalokar

Enstore is a mass storage system developed by Fermilab that provides distributed access and management of data stored on tapes. It uses a namespace service, PNFS, developed by DESY to provide a filesystem-like view of the stored data. PNFS is a legacy product and is being replaced by a new implementation, called Chimera, which is also developed by DESY. Chimera offers multiple advantages over PNFS in terms of performance and functionality. The Enstore client component, encp, has been modified to work with Chimera, as well as with any other namespace provider. We performed a high-load end-to-end acceptance test of Enstore with the Chimera namespace. This paper describes the modifications to Enstore, the test procedure, and the results of the acceptance testing.
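
The abstract notes that encp was generalized to work with "any other namespace provider". A minimal sketch of such a pluggable abstraction is shown below; the class and method names are hypothetical illustrations, not the actual encp, PNFS, or Chimera interfaces.

```python
# Hypothetical sketch of a pluggable namespace-provider abstraction, in the
# spirit of decoupling the Enstore client from PNFS vs. Chimera. Names and
# return values are illustrative only.

from abc import ABC, abstractmethod


class NamespaceProvider(ABC):
    """Filesystem-like view of files stored on tape."""

    @abstractmethod
    def get_file_info(self, path: str) -> dict:
        """Return the metadata a transfer needs (e.g. tape volume)."""


class PnfsProvider(NamespaceProvider):
    def get_file_info(self, path):
        # A real implementation would go through PNFS's special metadata files.
        return {"path": path, "backend": "pnfs", "volume": "VOL1234"}


class ChimeraProvider(NamespaceProvider):
    def get_file_info(self, path):
        # A real implementation would query Chimera's database-backed namespace.
        return {"path": path, "backend": "chimera", "volume": "VOL1234"}


def copy_from_tape(path: str, namespace: NamespaceProvider):
    """Client-side transfer logic written against the abstraction only."""
    info = namespace.get_file_info(path)
    return f"reading {info['path']} from volume {info['volume']} ({info['backend']})"


print(copy_from_tape("/pnfs/expt/file.raw", ChimeraProvider()))
```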


Submitted to J. Phys. Conf. Ser.; presented at the 18th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2010), Taipei, Taiwan, 18-22 Oct 2010 | 2011

Horizontally scaling dCache SRM with the Terracotta platform

Timur Perelmutov; Matt Crawford; Alexander Moibenko; Gene Oleynik

The dCache disk caching file system has been chosen by a majority of LHC experiments' Tier 1 centers for their data storage needs. It is also deployed at many Tier 2 centers. The Storage Resource Manager (SRM) is a standardized grid storage interface and a single point of remote entry into dCache, and hence is a critical component. SRM must scale to increasing transaction rates and remain resilient against changing usage patterns. The initial implementation of the SRM service in dCache suffered from an inability to support clustered deployment, and its performance was limited by the hardware of a single node. Using the Terracotta platform, we added the ability to horizontally scale the dCache SRM service to run on multiple nodes in a cluster configuration, coupled with network load balancing. This gives site administrators the ability to increase the performance and reliability of the SRM service to face the ever-increasing requirements of LHC data handling. In this paper we describe the previous limitations of the SRM server architecture and how the Terracotta platform allowed us to readily convert a single-node service into a highly scalable clustered application.
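
The clustering approach pairs shared state across nodes (via Terracotta) with a network load balancer in front of identical SRM backends. The sketch below illustrates only the generic load-balancing side of that picture with a round-robin dispatcher; it is not Terracotta or dCache code, and the node names are invented.

```python
# Generic illustration of fronting several identical SRM backend nodes with a
# round-robin load balancer, as in a horizontally scaled deployment.

import itertools


class RoundRobinBalancer:
    def __init__(self, nodes):
        self._nodes = itertools.cycle(nodes)

    def dispatch(self, request):
        node = next(self._nodes)
        return f"{request} -> handled by {node}"


balancer = RoundRobinBalancer(["srm-node-1", "srm-node-2", "srm-node-3"])
for i in range(4):
    print(balancer.dispatch(f"srmPrepareToGet #{i}"))
# Adding a node to the pool raises aggregate throughput instead of being
# limited by the hardware of a single machine.
```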


Archive | 2004

Monitoring a petabyte scale storage system

Jon Bakken; Eileen Berman; Chih-Hao Huang; Alexander Moibenko; D. Petravick; Michael Zalokar


Cluster Computing and the Grid | 2018

Intelligently-Automated Facilities Expansion with the HEPCloud Decision Engine

Parag Mhashilkar; Mine Altunay; William Dagenhart; S. Fuess; Burt Holzman; Jim Kowalkowski; Dmitry Litvintsev; Qiming Lu; Alexander Moibenko; Marc Paterno; Panagiotis Spentzouris; Steven Timm; Anthony Tiradani


Archive | 2017

Fermilab HEPCloud Facility Decision Engine Design

Anthony Tiradani; Mine Altunay; David Dagenhart; Jim Kowalkowski; Dmitry Litvintsev; Qiming Lu; Parag Mhashilkar; Alexander Moibenko; Marc Paterno; Steven Timm

Collaboration


Dive into Alexander Moibenko's collaboration.

Top Co-Authors

Conrad Steenberg

California Institute of Technology